IA TV

The automatic semantics of images

Recognizing faces, objects, patterns, music, architecture, or even camera movements: thanks to progress in artificial intelligence, every shot or sequence in a video can now be characterized. In the IA TV joint laboratory created last October between France Télévisions and Télécom SudParis, researchers are currently developing an algorithm capable of analyzing the range of fiction programs offered by the national broadcaster.

 

As the number of online video-on-demand platforms has increased, recommendation algorithms have been developed to go with them, and are now capable of identifying (amongst other things) viewers’ preferences in terms of genre, actors or themes, boosting the chances of picking the right program. Artificial intelligence now goes one step further by identifying the plot’s location, the type of shots and actions, or the sequence of scenes.

The teams of France Télévisions and Télécom SudParis have been working towards this goal since October 2019, when the IA TV joint laboratory was created. Their work focuses on automating the analysis of the video contents of fiction programs. “Today, our recommendation settings are very basic. If a viewer liked a type of content, program, film or documentary, we do not know much about the reasons why they liked it, nor about the characteristics of the actual content. There are so many different dimensions which might have appealed to them – the period, cast or plot,” points out Matthieu Parmentier, Head of the Data & AI Department at France Télévisions.

AI applied to fiction contents

The aim of the partnership is to explore these dimensions. Using deep learning, a neural network technique, researchers are applying algorithms to a massive quantity of videos. The successive layers of neurons extract and analyze increasingly complex features of visual scenes: the first layers work directly on the image’s pixels, while the last ones attach semantic labels to the scene.
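As a rough illustration of this frame-labeling idea (a minimal sketch, not the laboratory's actual pipeline), a pretrained convolutional network can be applied to sampled video frames; the video path below is a placeholder, and the generic ImageNet labels merely stand in for the lab's fiction-specific categories.

```python
# Minimal sketch: label sampled video frames with a pretrained CNN.
# Assumptions: PyTorch/torchvision installed; "episode.mp4" is a placeholder path;
# ImageNet categories stand in for the project's own scene labels.
import cv2
import torch
from torchvision import models, transforms

weights = models.ResNet18_Weights.IMAGENET1K_V1
model = models.resnet18(weights=weights)
model.eval()
labels = weights.meta["categories"]

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

cap = cv2.VideoCapture("episode.mp4")
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % 25 == 0:  # classify roughly one frame per second
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        batch = preprocess(rgb).unsqueeze(0)
        with torch.no_grad():
            probs = torch.softmax(model(batch), dim=1)[0]
        top = torch.topk(probs, 3)
        print(frame_idx, [(labels[i.item()], round(p.item(), 3))
                          for p, i in zip(top.values, top.indices)])
    frame_idx += 1
cap.release()
```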

“Thanks to this technology, we are now able to sort contents into categories, which means that we can classify each sequence, each scene in order to identify, for example, whether it was shot outside or inside, recognize the characters/actors involved, identify objects or locations of interest and the relationships between them, or even extract emotional or aesthetic features. Our goal is to make the machine capable of progressing automatically towards interpreting scenes in a way that is semantically close to that of humans”, says Titus Zaharia, a researcher at Télécom SudParis and specialist in AI applied to multimedia content.

Researchers have already obtained convincing results. Is this scene set in a car? In a park? Inside a bus? The tool can suggest the most relevant categories by order of probability. The algorithm can also determine the types of shots in the sequences analyzed: wide shots, long shots or close-ups. “This did not exist until now on the market,” says Matthieu Parmentier enthusiastically. “And as well as detecting changes from one scene to another, the algorithm can also identify changes of shot within the same scene.”
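A very simple baseline for this kind of shot-change detection, given purely as a hedged sketch and not as the project's algorithm, is to compare color histograms of consecutive frames and flag a cut whenever their similarity drops below a threshold; both the threshold and the file name below are arbitrary assumptions.

```python
# Naive shot-boundary detector: flag frames whose HSV histogram differs
# sharply from the previous frame. Threshold and video path are illustrative.
import cv2

def frame_hist(frame):
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [50, 60], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

cap = cv2.VideoCapture("episode.mp4")
ok, prev = cap.read()
prev_hist = frame_hist(prev) if ok else None
idx, cuts = 1, []
while ok:
    ok, frame = cap.read()
    if not ok:
        break
    hist = frame_hist(frame)
    similarity = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL)
    if similarity < 0.5:  # low correlation -> probable cut
        cuts.append(idx)
    prev_hist, idx = hist, idx + 1
cap.release()
print("candidate shot changes at frames:", cuts)
```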

According to France Télévisions, there are many possible applications. The first is the automatic extraction of key frames, meaning the image that best represents the content of each sequence of a fiction program, selected according to aesthetic criteria. Another is identifying the “ideal” moments in a program at which to insert ad breaks. “Currently, we are working on fixed shots, but one of our next aims is to be able to characterize moving shots such as zooms, tracking shots or pans. This could be very interesting for us, as it could help to edit or reuse contents”, adds Matthieu Parmentier.

Multimodal AI solutions

In order to adapt to viewers’ new digital habits, the teams at France Télévisions and Télécom SudParis have been working together for over five years. They have contributed to the creation of artificial intelligence solutions and tools applied to digital images, but also to other forms of content, such as text and sound. In 2014, the two entities launched a collaborative project, Média4Dplayer, a prototype of a media player designed for all four types of screens (TV, PC, tablet and smartphone). It was designed to be accessible to all, especially elderly people and people with disabilities. A few months later, they began looking into the automatic generation of subtitles. There are several advantages to this: equal access to content and the possibility of viewing a video without sound.

“In the case of television news, for example, subtitles are generated live by professionals typing, but as we have all seen, this can sometimes lead to errors or to delays between what is heard and what appears on screen,” explains Titus Zaharia. The solution developed by the two teams allows automatic synchronization for the Replay content offered by France TV. The teams were able to file a joint patent after two and a half years of development.

“In time, we are hoping to be able to offer perfectly synchronized subtitles just a few seconds after the broadcast of any type of live television program,” continues Matthieu Parmentier.
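For illustration only, and without claiming to reproduce the teams' patented method, re-synchronization can be pictured as aligning the typed subtitle text with word-level timestamps from an automatic speech recognizer and applying the estimated delay; the cue and word structures below are invented for the example.

```python
# Illustrative sketch of re-synchronizing live subtitles against speech-recognition
# timestamps. The cue/word data are invented; the real patented method is not shown here.
import difflib
import statistics

# (text, start time in seconds) for each subtitle cue, as typed live
subtitle_cues = [("good evening and welcome", 12.0), ("to the eight o'clock news", 15.5)]
# (word, time) pairs produced by an automatic speech recognizer on the same audio
asr_words = [("good", 9.1), ("evening", 9.5), ("and", 9.9), ("welcome", 10.2),
             ("to", 12.4), ("the", 12.6), ("eight", 12.8), ("o'clock", 13.1), ("news", 13.4)]

sub_words = [(w, t) for text, t in subtitle_cues for w in text.split()]
matcher = difflib.SequenceMatcher(a=[w for w, _ in sub_words],
                                  b=[w for w, _ in asr_words])
offsets = []
for block in matcher.get_matching_blocks():
    for k in range(block.size):
        offsets.append(asr_words[block.b + k][1] - sub_words[block.a + k][1])

shift = statistics.median(offsets) if offsets else 0.0
resynced = [(text, round(t + shift, 2)) for text, t in subtitle_cues]
print(f"estimated delay: {shift:+.1f}s", resynced)
```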

France Télévisions still has many questions for scientific research, and artificial intelligence in particular, to address. “What we are interested in is developing tools which can be used and put on the market rapidly, but also tools that will be sufficiently general in their methodology to find other fields of application in the future,” concludes Titus Zaharia.

 

 


DAGOBAH: Tables, AI will understand

Human activities produce massive amounts of raw data presented in the form of tables. To help machines understand these tables quickly, EURECOM and Orange are developing DAGOBAH, a semantic annotation platform. The aim is a generic solution that can optimize AI applications such as personal assistants and make it easier for any company to manage complex data sets.

 

On a day-to-day basis, online keyword searches often suffice to make up for our thousands of memory lapses, clear up any doubts we may have or satisfy our curiosity. The results even anticipate our needs by offering more information than we asked for: a singer’s biography, a few song titles, upcoming concert dates etc. But have you ever wondered how the search engine always provides an answer to your questions? In order to display the most relevant results, computer programs must understand the meaning and nuances of data (often in the form of tables) so that they can answer users’ queries. This is one of the key goals of the DAGOBAH platform, created through a partnership between EURECOM and Orange research teams in 2019.

DAGOBAH’s aim is to automatically understand the tabular data produced by humans. Since there is a lack of explicit context for this type of data – compared to a text – understanding it depends on the reader’s knowledge. “Humans know how to detect the orientation of a table, the presence of headings or merged rows, relationships between columns etc. Our goal is to teach computers how to make such natural interpretations,” says Raphaël Troncy, a data science researcher at EURECOM.

The art of leveraging encyclopedic knowledge

After identifying a table’s form, DAGOBAH tries to understand its content. Take two columns, for example. The first lists names of directors and the second, film titles. How does DAGOBAH go about interpreting this data set without knowing its nature or content? It performs a semantic annotation, which means that it effectively applies a label to each item in the table. To do so, it must determine the nature of a column’s content (directors’ names etc.) and the relationship between the two columns. In this case: director – directed – film. But an item may mean different things. For example, “Lincoln” refers to a last name, a British or American city, the title of a Steven Spielberg film etc. In short, the platform must resolve any ambiguity about the content of a cell based on the overall context.

To achieve its goal, DAGOBAH searches existing encyclopedic knowledge bases (Wikidata, DBpedia). In these bases, knowledge is often formalized and associated with attributes: “Wes Anderson” is associated with “director.” To process a new table, DAGOBAH compares each item to these bases and proposes possible candidates for attributes: “film title”, “city” etc. At this stage, they are merely candidates. Then, for each column, the candidates are grouped together and put to a majority vote. The nature of the column is thus deduced with a greater or lesser degree of probability.
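Stripped of all the engineering, the candidate-lookup-and-vote idea can be sketched as follows; the miniature knowledge base and the lookup_types helper are hypothetical stand-ins for the real Wikidata/DBpedia queries.

```python
# Toy illustration of column-type annotation by majority vote.
# The mini knowledge base below is a hypothetical stand-in for Wikidata/DBpedia.
from collections import Counter

KNOWLEDGE_BASE = {
    "Lincoln":          ["film", "city", "surname"],
    "Wes Anderson":     ["film director"],
    "Jane Campion":     ["film director"],
    "The Piano":        ["film"],
    "Moonrise Kingdom": ["film"],
}

def lookup_types(cell):
    """Return candidate types for a cell value (empty list if unknown)."""
    return KNOWLEDGE_BASE.get(cell, [])

def annotate_column(cells):
    """Pick the most frequent candidate type across all cells of a column."""
    votes = Counter(t for cell in cells for t in lookup_types(cell))
    return votes.most_common(1)[0][0] if votes else None

print(annotate_column(["Wes Anderson", "Jane Campion"]))              # -> 'film director'
print(annotate_column(["Moonrise Kingdom", "The Piano", "Lincoln"]))  # -> 'film'
```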

However, there are limitations to this method when it comes to complex tables. Beyond applications for the general public, industrial data may contain statistics related to business-specific knowledge or highly specialized scientific data that is difficult to identify.

Neural networks to the rescue  

To reduce the risk of ambiguity, DAGOBAH uses neural networks and a word embedding technique. The principle: represent a cell’s content in the form of a vector in multidimensional space.  Within this space, vectors of two words that are semantically close to one another are grouped together geometrically in the same place. Visually speaking, the directors are grouped together, as are the film titles. Applying this principle to DAGOBAH is based on the assumption that items in the same column must be similar enough to form a coherent whole. “To remove ambiguity between candidates, categories of candidates are grouped together in vector space. The problem is then to select the most relevant group in the context of the given table,” explains Thomas Labbé, a data scientist at Orange. This method becomes more effective than a simple search with a majority vote when there is little information available about the context of a table.
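Here is a minimal sketch of that coherence principle, assuming pre-computed entity embeddings (the toy vectors below are invented): the candidate whose vector lies closest to the centroid of the column's unambiguous neighbors is preferred.

```python
# Sketch of embedding-based disambiguation: among several candidate entities for
# one cell, prefer the one most coherent with the rest of the column.
# Embedding vectors are invented 3-D toys; real systems use learned embeddings.
import numpy as np

embeddings = {
    "Lincoln (film)": np.array([0.90, 0.10, 0.00]),
    "Lincoln (city)": np.array([0.00, 0.90, 0.20]),
    "Jaws (film)":    np.array([0.80, 0.20, 0.10]),
    "E.T. (film)":    np.array([0.85, 0.15, 0.05]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Unambiguous neighbours in the same column define the context centroid.
context = np.mean([embeddings["Jaws (film)"], embeddings["E.T. (film)"]], axis=0)

candidates = ["Lincoln (film)", "Lincoln (city)"]
best = max(candidates, key=lambda c: cosine(embeddings[c], context))
print(best)  # -> 'Lincoln (film)'
```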

However, one of the drawbacks of using deep learning is the lack of visibility into what happens inside the neural network. “We change the hyperparameters, turning them like oven dials to obtain better results. The process is highly empirical and takes a long time since we repeat the experiment over and over again,” explains Raphaël Troncy. The approach is also costly in computing time, and the teams are working on scaling up the process. As such, Orange’s dedicated big data infrastructures are a major asset. Ultimately, the researchers seek to implement an end-to-end approach that is generic enough to meet the needs of highly diverse applications.

Towards industrial applications

The semantic interpretation of tables is a goal but not an end. “Working with EURECOM allows us to have almost real-time knowledge about the latest academic advances as well as an informed opinion on the technical approaches we plan to use,” says Yoan Chabot, a researcher in artificial intelligence at Orange. DAGOBAH’s use of encyclopedic data makes it possible to optimize question/response engines in the kind of natural language used by voice assistants. But the holy grail will be to provide an automatic processing solution for business-specific knowledge in an industrial environment. “Our solution will be able to address the private sector market, not just the public sector, for internal use by companies who produce massive amounts of tabular data,” adds Yoan Chabot.

This will be a major challenge, since industry does not have knowledge graphs to which DAGOBAH may refer. The next step will therefore be to succeed in semantically annotating data sets using knowledge bases in their embryonic stages. To achieve their goals, for the second year in a row the academic and industry partners have committed to take part in an international semantic annotation challenge, a very popular topic in the scientific community. For four months, they will have the opportunity to test their approach in real-life conditions and will compare their results with the rest of the international community in November.

To learn more: DAGOBAH: Make Tabular Data Speak Great Again

Anaïs Culot for I’MTech


Mathematical tools to meet the challenges of 5G

The arrival of 5G marks a turning point in the evolution of mobile telecommunications standards. In order to cope with the constant increase in data traffic and the requirements and constraints of future uses, teams at Télécom SudParis and Davidson Consulting have joined forces in the AIDY-F2N joint laboratory. Their objective is to provide mathematical and algorithmic solutions to optimize the 5G network architecture.

 

Before the arrival of 5G, which is expected to be rolled out in Europe in 2020, many scientific barriers remain to be overcome. “5G will concern business networks and certain industrial sectors that have specific needs and constraints in terms of real time, security and mobility. In order for these extremely diverse uses to coexist, 5G must be capable of adapting,” explains Badii Jouaber, a telecommunications researcher at Télécom SudParis. To meet this challenge, he is piloting a new joint laboratory between Télécom SudParis and Davidson Consulting, launched in early 2020. The main objective of this collaboration is to use artificial intelligence and mathematical modeling techniques to meet the requirements of new 5G applications.

Read on I’MTech: What is 5G?

Configuring custom networks

In order to support levels of service adapted to both business and consumer uses, 5G uses the concept of network slicing. The network is thus split into several virtual “slices” operated from a common shared infrastructure. Each of these slices can be configured to deliver an appropriate level of performance in terms of reliability, latency, bandwidth capacity or coverage. 5G networks will thus have to be adaptable, dynamic and programmable from end to end by means of virtual structures.

“Using slicing for 5G means we can meet these needs simultaneously and in parallel. Each slice of the network will thus correspond to a use, without encroaching on the others. However, this coexistence is very difficult to manage. We are therefore seeking to improve the dynamic configuration of these new networks in order to manage resources optimally. To do so, we are developing mathematical and algorithmic analysis tools. Our models, based on machine learning techniques, among other things, will help us to manage and reconfigure these networks on a permanent basis,” says Badii Jouaber. These network slices can therefore be set up, removed, expanded or scaled down according to demand.
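As a purely illustrative sketch of what configuring a slice can mean in practice, each slice may be described by its own performance targets on top of the shared infrastructure; the field names and figures below are invented examples, not 3GPP parameters or the laboratory's models.

```python
# Illustrative slice templates: each slice gets its own performance targets on
# top of a shared physical infrastructure. Figures and names are invented examples.
from dataclasses import dataclass

@dataclass
class SliceProfile:
    name: str
    max_latency_ms: float      # end-to-end latency target
    min_bandwidth_mbps: float  # guaranteed throughput
    reliability: float         # target packet-delivery ratio

SLICES = [
    SliceProfile("massive-IoT",        max_latency_ms=500.0, min_bandwidth_mbps=0.1,   reliability=0.99),
    SliceProfile("enhanced-broadband", max_latency_ms=50.0,  min_bandwidth_mbps=100.0, reliability=0.999),
    SliceProfile("critical-control",   max_latency_ms=5.0,   min_bandwidth_mbps=10.0,  reliability=0.99999),
]

def admit(slice_profile, spare_capacity_mbps):
    """Toy admission check: only instantiate a slice if enough capacity remains."""
    return spare_capacity_mbps >= slice_profile.min_bandwidth_mbps

for s in SLICES:
    print(s.name, "admitted" if admit(s, spare_capacity_mbps=120.0) else "rejected")
```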

A priority for Davidson Consulting

Anticipating issues with 5G is one of the priorities of Davidson Consulting. The company is present in major cities in France and abroad, with 3,000 employees. It was co-founded in 2005 by Bertrand Bailly, a former Télécom SudParis student, and is a major player in telecoms and information systems. “For 15 years we have been carrying out expert assessment for operators and manufacturers. The arrival of 5G brings up new issues. For us, it is essential to contribute to these issues by putting our expertise to good use. It’s also an opportunity to support our clients and help them overcome these challenges”, says David Olivier, Director of Research and Development at Davidson. For him, it is thus necessary to take certain industrial constraints into account from the very first stages of research, so that their work can be operational quickly.

“Another one of our goals is energy efficiency. With the increase in the number of connected objects, we believe it is essential to develop these new models of flexible, ultra-dynamic and configurable mobile networks so as to minimize their impact by optimizing energy consumption,” David Olivier continues.

Bringing technology out of the labs for the networks of the future

The creation of the AIDY-F2N joint laboratory is the culmination of several years of collaboration between Télécom SudParis and Davidson Consulting, beginning in 2016 with the support of a thesis supervised by Badii Jouaber. “By initiating a new joint research activity, we aim to strengthen our common research interests around the networks of the future, and the synergies between academic research and industry. Our two worlds have much in common!” says David Olivier enthusiastically.

Under this partnership, the teams at Davidson Consulting and Télécom SudParis will coordinate and pool their skills and research efforts. The company has also provided experts in AI and Telecommunications modeling to co-supervise, with Badii Jouaber, the scientific team of the joint laboratory that will be set up in the coming months. This work will contribute to enhancing the functionality of 5G within a few years.


iXblue: Extreme Fiber Optics

Since 2006, iXblue, a French company based in Lannion, and the Hubert Curien laboratory [1] in Saint-Étienne have partnered to develop cutting-edge fiber optics. This long partnership has established iXblue as a global reference in the use of fiber optics in harsh environments. The scientific and technological advances have enabled the company to offer solutions for the nuclear, space and health sectors. But there’s something different about these optical fibers: they’re not used for telecommunications.

 

Last June, iXblue and the Hubert Curien laboratory officially opened LabH6, a joint research laboratory dedicated to fiber optics. This latest development stems from a partnership that dates back to 2006 and to the bursting of the internet bubble. iXblue was in fact born from the ashes of a start-up specializing in fiber optics for telecommunications. After the disappointment experienced in the digital technology sector in the early 2000s, “we decided to make a complete U-turn, leaving telecommunications behind, while remaining in fiber optics,” explains Thierry Robin, present since the beginning and currently the company’s CTO.

A daring move, at a time when fiber optics in domestic networks was in its infancy. But it was a move that paid off. In 13 years, the young company has become a key player in fiber optics for harsh environments. It owes its success to the innovations developed with the Hubert Curien laboratory. The company’s products are now used in high-temperature conditions, under nuclear irradiation and in the vacuum of space.

Measuring nuclear irradiation

One of the major achievements of this partnership has been the development of optical fibers that can measure the radiation dose in an environment. The light passing through an optical fiber is naturally diminished over the length of the fiber. This attenuation, called optical loss, increases when the fiber is under nuclear radiation. “We understand the law governing the relationship between optical loss and the radiation dose received by the fiber,” explains Sylvain Girard, a researcher at the Hubert Curien laboratory. “We can therefore have an optical fiber play the role of hundreds of dosimeters by measuring the radiation value.”

There are two advantages to this application of the fiber. First of all, the resulting data can be used to establish a continuous mapping of the radiation over the length of the fiber, whereas dosimeters provide a value from their specific location. Secondly, the optical fiber provides a real-time measurement, since the optical loss is measured live. Dosimeters, on the other hand, are usually left for days or months in their locations before the value of the accumulated radiation can be measured.
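To make the principle concrete, here is a minimal sketch that converts a distributed loss profile measured along the fiber into a dose profile, assuming a simple pre-calibrated power-law response; the calibration coefficients are invented, and the actual law established by iXblue and the Hubert Curien laboratory is not reproduced here.

```python
# Sketch: turn radiation-induced attenuation (RIA) measured along a fiber into a
# dose profile, assuming an invented power-law calibration RIA = a * dose**b.
import numpy as np

a, b = 0.02, 1.0                           # invented calibration coefficients
positions_m = np.arange(0, 100, 10)        # position along the fiber, in meters
ria_db_per_m = np.array([0.0, 0.1, 0.4, 1.2, 2.0,   # excess loss vs. pre-irradiation
                         1.1, 0.3, 0.05, 0.0, 0.0])

dose_gy = np.where(ria_db_per_m > 0, (ria_db_per_m / a) ** (1.0 / b), 0.0)
for z, d in zip(positions_m, dose_gy):
    print(f"{z:3d} m : {d:7.1f} Gy")
```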

The fibers used in this type of application are unique. They must be highly sensitive to radiation in order to accurately measure the variations. Research conducted for this purpose resulted in fibers doped with phosphorus or aluminum. This type of optical fiber is currently installed in the CERN Large Hadron Collider (LHC) in Geneva during the 2-year shutdown that will continue until 2020. “This will enable CERN to assess the vulnerability of the electronic equipment to radiation and hence avoid unplanned shutdowns caused by outages,” Sylvain Girard explains.

These optical fibers are also being assessed at the TRIUMF particle accelerator center in Canada for proton therapy. This high-precision medical technique treats ocular melanomas using radiation. The radiation dose deposited on the melanoma must be very precise. “The fiber should make it possible to measure the radiation dose in real-time and stop it once the required value is reached,” the researcher explains. “Without the fiber, doctors can only determine the total dose the patient received at the end of the treatment. They must therefore accumulate three low-dose radiation sessions one after the other to come as close as possible to the total target dose.”

Surviving space

While the fibers used in dosimetry must be sensitive to radiation for measurement purposes, others must be highly resistant. This is the case for fibers used in space. Satellites are exposed to space radiation, yet the gyroscopes they use to position themselves rely on optical fiber amplifiers. iXblue and the Hubert Curien laboratory therefore partnered to develop hydrogen- or cerium-doped optical fibers. Two patents have been filed for these fiber amplifiers, and their level of resistance has made them the reference in optical fibers for the space sector.

The same issue of resistance to radiation exists in the nuclear industry, where it is important to measure the temperature and mechanical stress in the core of nuclear reactors. “These environments are exposed to doses of a million Grays. For comparison purposes, a lethal dose for humans is 5 Grays,” Sylvain Girard explains. The optical fiber sensors must therefore be extremely resistant. Once again, the joint research conducted by iXblue and the Hubert Curien laboratory led to two patents for new fibers that meet the needs of manufacturers like Orano (formerly AREVA). These fibers will also be deployed in the fusion reactor project, ITER.

All this research will continue at the new LabH6, which will facilitate the industrial application of the research conducted by iXblue and the Hubert Curien laboratory. The stakes are high, as the uses for optical fibers beyond telecommunications continue to increase. While space and nuclear environments may seem to be niche sectors, the optical fibers developed for these applications could also be used in other contexts. “We are currently working on fibers that are resistant to high temperatures for use in autonomous cars,” says Thierry Robin. “These products are indirectly derived from developments made for radiation-resistant fibers,” he adds. After leaving the telecommunications sector and large-volume production 13 years ago, iXblue could soon return to its origins.

[box type=”shadow” align=”” class=”” width=””]A word from the company: Why partner with an academic institute like the Hubert Curien laboratory?

We knew very early on that we wanted an open approach and exchanges with scientists. Our partnership with the Hubert Curien laboratory allowed us to progress within a virtuous relationship. In an area where competitors maintain a culture of secrecy, we inform the researchers we work with of the exact composition of the fibers. We even produce special fibers for them that are only used for the scientific purposes of testing specific compositions. We want to enable our academic partners to conduct their research by giving them all the elements they need to make advances in the field. This spirit is what has allowed us to create unique products for the space and nuclear sectors.[/box]

[1] The Hubert Curien Laboratory is a joint research unit of CNRS/Université Jean Monnet/Institut d’Optique Graduate School, where Télécom Saint-Étienne conducts much of its research.


SERTIT: satellite imagery for the environment and crisis management

I’MTech is dedicating a series of articles to success stories from research partnerships supported by the Carnot Télécom & Société Numérique Institute (TSN), to which Télécom Physique Strasbourg and IMT belong.

[divider style=”normal” top=”20″ bottom=”20″]

The regional image processing and remote sensing service (SERTIT) has specialized in producing geographic information for over 30 years. It is linked with the ICube[1] laboratory, a key partner for Télécom Physique Strasbourg, and is part of the Carnot Télécom & Société Numérique Institute’s technology platform offer. Its role is to transform raw satellite images into a useful source of information to provide insights into regional land planning, environmental and biodiversity management, or rescue and relief operations in response to natural disasters. Mathilde Caspard, a remote sensing engineer at SERTIT, explains the platform’s various activities.

 

The SERTIT platform makes it possible to produce geographical information: what does this involve?

Mathilde Caspard: We mainly use satellite images, which we analyze and use to obtain information to help different actors make decisions. Our service makes it possible, for example, to map the forest cover or bodies of water. We can therefore inform land planning choices by providing information about the environment. We also have applications linked to crisis management following natural disasters such as floods, fires, hurricanes etc.

What role does the platform play in crisis management?

MC: We take part in rapid mapping operations. These actions make it possible to quickly task satellites and produce post-event maps in a short time frame. The satellite images are used to extract geographical information about the events, which is then provided to the agencies that manage relief operations. We contribute to such efforts in particular through the European COPERNICUS Emergency Management Service (EMS) program. If there were to be major flooding in France, for example, the authorized French user, the General Directorate for Civil Security and Crisis Management (DGSCGC), would ask the European Union to activate the emergency rapid mapping service. If the request is accepted, the European program calls on our services. We must then provide information about the extent of the flooding, road and bridge conditions, submerged buildings etc. in less than ten hours. The SERTIT rapid mapping service, which is ISO 9001-certified, is available 365 days a year, 24 hours a day, for this type of mission.

In concrete terms, how do you ensure that SERTIT can respond to the request so quickly?

MC: As soon as we receive satellite data, we begin the image processing steps. We’ve been in existence since 1986, so we’ve developed numerous tools to speed up production. For example, we have algorithms that allow us to quickly extract bodies of water in images. In the event of forest fires, other algorithms help us identify burnt areas and untouched areas. Then, we cross-check this information with other sources of data, such as maps made before the disaster. This helps us identify destroyed buildings or unusable roads. Once all this information has been extracted, we deliver it in the form of a map and files that decision-makers can use directly in their systems to organize relief efforts.
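One classic ingredient of such water-extraction algorithms, shown here only as a generic illustration and not as SERTIT's production chain, is thresholding a water index such as the NDWI computed from the green and near-infrared bands.

```python
# Illustrative flood-mask extraction with the Normalized Difference Water Index
# (NDWI = (green - NIR) / (green + NIR)); arrays and threshold are placeholders.
import numpy as np

def water_mask(green, nir, threshold=0.2):
    """Return a boolean mask of likely water pixels from green/NIR reflectance."""
    ndwi = (green - nir) / np.clip(green + nir, 1e-6, None)
    return ndwi > threshold

# Toy 3x3 "image": high green + low NIR (water) vs. the opposite (vegetation/soil)
green = np.array([[0.30, 0.30, 0.05], [0.30, 0.28, 0.05], [0.06, 0.05, 0.04]])
nir   = np.array([[0.05, 0.06, 0.40], [0.05, 0.07, 0.45], [0.42, 0.40, 0.38]])

flooded = water_mask(green, nir)
print(flooded)
print(f"{flooded.mean() * 100:.0f}% of pixels flagged as water")
```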


This example of a map produced by SERTIT illustrates the type of geographical information it can provide. The map shows the region surrounding Chimanimani in Zimbabwe, on 21 March 2019, following a tropical cyclone. SERTIT identifies blocked or unusable roads, damaged bridges, affected industrial zones, flooded areas etc.

Do you only intervene in disasters that affect France?  

MC: The European COPERNICUS EMS program is a consortium made up of several production sites spread out over France, Italy, Germany and Spain. Depending on the number and magnitude of the events, the services of several production sites can be called on at the same time. Our services may be called upon for disasters in France and in Europe just as they may be for events elsewhere in the world. The European Commission may provide assistance to countries outside the European Union which are affected by a natural disaster. In such cases, it calls on its rapid mapping service, since it must be able to determine how much assistance is required. Recently, for example, we’ve worked on a cyclone in Mozambique, another in Australia, flooding in Iran, and fires in Kenya.

When SERTIT is not working on crisis management, what do the platform’s activities involve?

MC: We have a wide range of environmental applications. For example, we are frequently asked to carry out forest cover mapping. We quantify clearings and deforestation at a given moment and compare them to previous data to track changes over time. In Alsace we’re in frequent contact with foresters, since they integrate this data into their decision-support tools to guide their felling and forest maintenance. In the same way, we measure urban areas to help local authorities with land planning. These are SERTIT’s long-standing activities. We also receive occasional requests, for example for specific biodiversity monitoring.

How do satellite images help monitor biodiversity?

MC: A good example is our work to help protect the European hamster. It’s an endangered species in our region because its habitat is threatened. An official program has been put in place to help reintroduce the hamster. Associations have worked to identify burrows and mark them with GPS coordinates. For our part, we have created survival indicators based on the geographic information associated with these GPS coordinates. For example, the hamster feeds exclusively on wheat and alfalfa and does not travel more than 300 meters from its burrow. We therefore assessed the areas in which hamsters emerging from hibernation were most likely to survive, based on the burrows’ surroundings. In addition to this activity, we’ve also worked on fine-scale vegetation mapping for the Eurométropole de Strasbourg. These maps were used to create ecological corridors allowing for the movement of species in urban areas.
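The survival-indicator idea can be sketched as a simple geometric computation: buffer each burrow by the hamster's 300-meter range and measure how much of that area is covered by wheat or alfalfa. The geometries below are invented, and the real indicators combine more criteria.

```python
# Toy habitat indicator: share of a 300 m radius around each burrow covered by
# favourable crops (wheat/alfalfa). Coordinates are invented, in meters.
from shapely.geometry import Point, box
from shapely.ops import unary_union

favourable_fields = unary_union([
    box(0, 0, 400, 300),      # wheat parcel
    box(450, 100, 700, 500),  # alfalfa parcel
])

burrows = {"burrow_A": Point(200, 150), "burrow_B": Point(900, 900)}

for name, burrow in burrows.items():
    home_range = burrow.buffer(300)  # 300 m foraging radius
    share = home_range.intersection(favourable_fields).area / home_range.area
    print(f"{name}: {share:.0%} of home range in favourable crops")
```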

Where does the satellite data that you use for SERTIT’s various applications come from?

MC: The European COPERNICUS program has a fleet of Earth observation satellites with various characteristics — not just for rapid mapping of disasters. It is fairly unique in the world, especially since the images are free. However, they aren’t always very high-resolution images. So, at the same time, we also use commercial images provided by companies such as Airbus or DigitalGlobe, whose images are much higher-resolution. It all depends on the desired objective: rapid image capture, wide field, accuracy etc. And in certain rapid mapping cases, in addition to all this, we also have at our disposal images acquired through the “International Space and Major Disasters Charter”, which brings together 16 space agencies. It allows for international collaboration to provide free satellite images to best contribute to relief efforts.

[1] ICube is a joint research unit between University of Strasbourg/CNRS/ENGEES/INSA Strasbourg.

[box type=”shadow” align=”” class=”” width=””]

A guarantee of excellence
in partnership-based research since 2006

 

Having first received the Carnot label in 2006, the Télécom & Société numérique Carnot institute is the first national “Information and Communication Science and Technology” Carnot institute. Home to over 2,000 researchers, it is focused on the technical, economic and social implications of the digital transition. In 2016, the Carnot label was renewed for the second consecutive time, demonstrating the quality of the innovations produced through the collaborations between researchers and companies.

The institute encompasses Télécom ParisTech, IMT Atlantique, Télécom SudParis, Institut Mines-Télécom Business School, Eurecom, Télécom Physique Strasbourg and Télécom Saint-Étienne, École Polytechnique (Lix and CMAP laboratories), Strate École de Design and Femto Engineering. Learn more [/box]


Military vehicles are getting a new look for improved camouflage

I’MTech is dedicating a series of articles to success stories from research partnerships supported by the Télécom & Société Numérique Carnot Institute (TSN), to which IMT Atlantique belongs.

[divider style=”normal” top=”20″ bottom=”20″]

How can military vehicles be made more discreet on the ground? This is the question addressed by the Caméléon project of the Directorate General of Armaments (DGA), involving Nexter group and IMT Atlantique in the framework of the Télécom & Société numérique Carnot Institute. Taking inspiration from the famous lizard, researchers are developing a high-tech skin able to replicate surrounding colors and patterns.

 

Every year on July 14, the parades on the Champs Élysées show off French military vehicles in forest colors. They are covered in a black, green and brown pattern for camouflage in the wooded landscapes of Europe. Less frequently seen on the television are specific camouflages for other parts of the world. Leclerc tanks, for example, may be painted in ochre colors for desert areas, or grey for urban operations. However, despite this range of camouflage patterns available, military vehicles are not always very discreet.

“There may be significant variations in terrain within a single geographical area, making the effectiveness of camouflage variable,” explains Éric Petitpas, Head of new protection technologies specializing in land defense systems at Nexter Group. Adjusting the colors to the day’s mission is not an option: each change of paint requires the vehicle to be immobilized for several days. “It slows down reaction time when you want to dispatch vehicles for an external operation,” underlines Éric Petitpas. To overcome this lack of flexibility, Nexter has partnered with several specialized companies and laboratories, including IMT Atlantique, to help develop a dynamic camouflage. The objective is to be able to equip vehicles with technology that can adapt to their surroundings in real time.

This project, named Caméléon, was initiated by the Directorate General of Armaments (DGA) and “is a real scientific challenge”, explains Laurent Dupont, a researcher in optics at IMT Atlantique (a member of the Télécom & Société numérique Carnot Institute). For scientists, the challenge lies first and foremost in fully understanding the problem. Stealth is based on the enemy’s perception. It therefore depends on technical aspects (contrast, colors, brightness, spectral band, pattern etc.). “We have to combine several disciplines, from computer science to colorimetry, to understand what will make a dynamic camouflage effective or not,” the researcher continues.

Stealth tiles

The approach adopted by the scientists is based on the use of tiles attached to the vehicles. A camera is used to record the surroundings, and an image analysis algorithm identifies the colors and patterns representative of the environment. A suitable pattern and color palette are then displayed on the tiles covering the vehicle to replicate the colors and patterns of the surrounding environment. If the vehicle is located in an urban environment, for example, “the tiles will display grey, beige, pink, blue etc. with vertical patterns to simulate buildings in the distance” explains Éric Petitpas.
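A heavily simplified version of the "identify the representative colors" step might look like the k-means palette extraction below; it is only a sketch of the general idea, with a placeholder image, and not the Caméléon image-analysis chain.

```python
# Sketch: extract a dominant-color palette from a camera frame with k-means.
# The frame is random noise here; a real system would grab it from the vehicle camera.
import numpy as np
from sklearn.cluster import KMeans

frame = np.random.randint(0, 256, size=(120, 160, 3), dtype=np.uint8)  # placeholder image
pixels = frame.reshape(-1, 3).astype(float)

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(pixels)
palette = kmeans.cluster_centers_.astype(int)              # 5 representative RGB colors
weights = np.bincount(kmeans.labels_) / len(kmeans.labels_)

for color, weight in sorted(zip(palette.tolist(), weights), key=lambda t: -t[1]):
    print(f"RGB {color} covers {weight:.0%} of the scene")
```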

To change the color of the tiles, the researchers use selective spectral reflectivity technology. Contrary to what could be expected, it is not a question of projecting an image onto the tile as though it were a TV screen. “The color changes are based on a reflection of external light, selecting certain wavelengths to display as though choosing from the colors of the rainbow,” explains Éric Petitpas. “We can selectively choose which colors the tiles will reflect and which colors will be absorbed,” says Laurent Dupont. The combination of colors reflected at a given point on the tile generates the color perceived by the onlooker.

A prototype of the new “Caméléon” camouflage was presented at the 2018 Defense Innovation Forum

This technology was demonstrated at the 2018 Defense Innovation Forum dedicated to new defense technology. A small robot, 50 centimeters long and covered in a skin of Caméléon tiles, was presented. The consortium now wants to move on to a true-to-scale prototype. In addition to needing further development, the technology must also adapt to all types of vehicles. “For the moment we are developing the technology on a small-scale vehicle, then we will move on to a 3 m² prototype, before progressing to a full-size vehicle,” says Éric Petitpas. The camouflage technology could thus be quickly adapted to other entities – such as infantrymen, for example.

New questions are emerging as the technology prototypes prove their worth, opening up new opportunities to further the partnership between Nexter and IMT Atlantique that was set up in 2012. Caméléon is the second upstream study program of the DGA in which IMT Atlantique has taken part. On the technical side, researchers must now ensure the scaling up of tiles capable of equipping life-size vehicles. A pilot production line for these tiles, led by Nexter and E3S, a Brest-based SME, has been launched to meet the program’s objectives. The economic aspect should not be forgotten either: tile covering will inevitably be more expensive than painting. However, the ability to adapt the camouflage to all types of environment is a major operational advantage that doesn’t require immobilizing the vehicle to repaint it. There are plenty of new challenges to be met before we see stealth vehicles in the field… or rather not see them!

 

[divider style=”normal” top=”20″ bottom=”20″]

A guarantee of excellence in partnership-based research since 2006

The Télécom & Société Numérique Carnot Institute (TSN) has been partnering with companies since 2006 to research developments in digital innovations. With over 1,700 researchers and 50 technology platforms, it offers cutting-edge research aimed at meeting the complex technological challenges posed by the digital, energy and industrial transitions currently underway in the French manufacturing industry. It focuses on the following topics: industry of the future, connected objects and networks, sustainable cities, transport, health and safety.

The institute encompasses Télécom ParisTech, IMT Atlantique, Télécom SudParis, Institut Mines-Télécom Business School, Eurecom, Télécom Physique Strasbourg and Télécom Saint-Étienne, École Polytechnique (Lix and CMAP laboratories), Strate École de Design and Femto Engineering.

[divider style=”normal” top=”20″ bottom=”20″]


Qualcomm, EURECOM and IMT joining forces to prepare the 5G of the future

5G is moving into the second phase of its development, which will bring a whole host of new technological challenges and innovations. Research and industry stakeholders are working hard to address the challenges posed by the next generation of mobile communication. In this context, Qualcomm, EURECOM and IMT recently signed a partnership agreement also including France Brevets. What is the goal of the partnership? To better prepare 5G standards and release technologies from laboratories as quickly as possible. Raymond Knopp, a researcher in communication systems at EURECOM, presents the content and challenges of this collaboration.

 

What do you gain from the partnership with Qualcomm and France Brevets?

Raymond Knopp: As researchers, we work together on 5G technologies. In particular, we are interested in those which are closely examined by 3GPP, the international standards organization for telecommunication technologies. In order to apply our research outside our laboratories, many of our projects are carried out in collaboration with industrial partners. This gives us more relevance in dealing with the real-world problems facing technology. Qualcomm is one of these industrial partners and is one of the most important companies in the generation of intellectual property in 4G and 5G systems. In my view, it is also one of the most innovative in the field. The partnership with Qualcomm gives us a more direct impact on technology development. With additional support from France Brevets, we can play a more significant role in defining the standards for 5G. We have a lot to learn from the intellectual property generation, and these partners provide us with this knowledge.

What technologies are involved in the partnership?

RK: 5G is currently moving into its second phase. The first phase was aimed at introducing new network architecture aspects and new frequencies, increasing the available frequency bands by a factor of about 5 or 6. This phase is now operational, so the innovations it still requires are incremental. The technologies we are working on now are mainly for the second phase, which is oriented more towards private networks, applications involving machines and vehicles, new network control systems, etc. Priority will be given to network slicing and software-defined networking (SDN) technologies, for example. This is also the phase in which low-latency and very highly robust communication will be developed. This is the type of technology we are working on under this partnership.

Are you already thinking of the implementation of the technologies developed in this second phase?

RK: For now, our work on implementation is very much aimed at the first-phase technologies. We are involved in the H2020 projects 5Genesis and 5G-Eve, conducting tests on 5G both for mobile terminals and on the network side. These trials involve our OpenAirInterface platform. For now, the implementation of second-phase technologies is not a priority. Nevertheless, intellectual property and any standards generated in the partnership with Qualcomm could potentially undergo implementation tests on our platform. However, it will be some time before we reach that stage.

What does a partnership with an industrial group like this represent for an academic researcher like yourself?

RK: It is an opportunity to close the loop between research, prototyping, standards and industrialization, and to see our work applied directly to the 5G technologies we will be using tomorrow. In the academic world in general, we tend to be uni-directional. We write publications, and some of them contain contributions that could be included in standards, but we do not take that step, and they are left accessible to everyone. Of course, companies go on to use them without our involvement, which is a pity. By setting up partnerships like this one with Qualcomm, we learn to appreciate the value of our technologies and to develop them together. I hope it will encourage more researchers to do the same. The field of academic research in France needs to be aware of the importance of closely following the standards and industrialization process!

 


TeraLab: data specialists serving companies

TeraLab is a Big Data and artificial intelligence platform that grants companies access to a whole ecosystem of specialists in these fields. The aim is to remove the scientific and technological barriers facing organizations that want to make use of their data. Hosted by IMT, TeraLab is one of the technology platforms proposed by the Carnot Télécom & Société Numérique. Anne-Sophie Taillandier, Director of TeraLab, presents the platform.

 

What is the role of the TeraLab platform?

Anne-Sophie Taillandier: We offer companies access to researchers, students and innovative enterprises to remove technological barriers in the use of their data. We provide technical resources, infrastructure, tools and skills in a controlled, secure and neutral workspace. Companies can prototype products or services in realistic environments with a view to technology transfer as fast as possible.

In what ways do you work with companies?

AST: First of all, we help them formalize the use case. Companies often come to us with a vague outline of the use case, so we help them with that and can provide specialist contributions if necessary. This is a crucial stage because our aim is also for companies to be able to assess the return on investment at the end of the research or innovation work. It helps them estimate the investment required to launch production, so the need must be clearly defined. We then help them understand what they have the right to do with the data. There again we can call upon expert legal advice if necessary. Lastly, we support them in the specification of the technical architecture.

How do you stand out from other Big Data and artificial intelligence service platforms?

AST: Firstly, by the ecosystem we benefit from. TeraLab is associated with IMT, so we have a number of specialist researchers in these fields as well as students we can mobilize to resolve technological challenges posed by companies. Secondly, TeraLab is a pre-competitive platform. We can also define a framework that brings together legal and technical aspects to meet companies’ needs in an individual way. We can strike a fairly fine balance between safety and flexibility to reassure the organizations who come to us and at the same time give researchers enough space to find solutions to the problems posed.

What level of technical security can you provide?

AST: We can reach an extremely high level of technical security, where the user of the data supplied, such as the researcher, can see it but never extract it. Generally speaking, a validation process involving the data supplier and the TeraLab team must be followed in order to extract a piece of data from the workspace. During a project, data security is guaranteed by a combination of technical and legal factors. Moreover, we work in a neutral and controlled space, which also provides a form of independence that reassures companies.

What does neutrality mean for you?

AST: The technical components we propose are open source. We have nothing against products under license, but if a company wants to use a specific tool, it must provide the license itself. Our technical team has excellent knowledge of the different libraries and APIs as well as the components required to set up a workspace. They adapt the tools to the company’s needs. We do not host the service beyond the end of the experimentation phase. Instead, we enter a new phase of technology transfer to allow the products or services to be integrated at the client’s end. We therefore have nothing to “sell” except our expertise. This also guarantees our neutrality.

What use cases do you work on?

AST: Since we started TeraLab, more than 60 projects have come through the platform, and there are currently 20 on the go. They can last between 3 months and 3 years. We have had projects in logistics, insurance, public services, energy, mobility, agriculture etc. At the moment, we are focusing on three sectors. The first is cybersecurity: we are interested in seeing what data access barriers there are, how to make a workspace compliant, and how to guarantee respect of personal data. We also work a lot in the health sector and industry. Geographically speaking, we are increasingly working at a European level in the framework of H2020 projects. The platform also benefits from growing recognition among European institutions with, in particular, the “Silver i-space” label awarded by the BDVA.

Physically, what does TeraLab look like?

AST: TeraLab comprises machines at Douai, a technical team in Rennes and a business team in Paris. The platform is accessible remotely, so there is no need to be physically close to it, making it different to other service platforms. We have recently also been able to secure client machines directly on site if the client has specific restrictions with regard to the movement of data.

 

User immersion, between 360° video and virtual reality

I’MTech is dedicating a series of success stories to research partnerships supported by the Télécom & Société Numérique (TSN) Carnot Institute, which the IMT schools are a part of.

[divider style=”normal” top=”20″ bottom=”20″]

To better understand how users interact in immersive environments, designers and researchers are comparing the advantages of 360° video and full-immersion virtual reality. This is the aim of the TroisCentSoixante inter-Carnot project uniting the Télécom & Société Numérique and the M.I.N.E.S Carnot Institutes. Strate Research, the research department at Strate School of Design, which is a member of the Carnot TSN, is studying this comparison in particular in the case of museum mediation.

 

When it comes to designing immersive environments, designers have a large selection of tools available to them. Mixed reality, in which the user is plunged into a more or less interactive environment, covers everything from augmented video to fully synthetic 3D images. To determine which is the best option, researchers from members of the TSN Carnot Institute (Strate School of Design) and the M.I.N.E.S Carnot Institute (Mines ParisTech and IMT Mines Alès) have joined forces. They have compared, for different use cases, the differences in user engagement between 360° video and full 3D modeling, i.e. virtual reality.

“At the TSN Carnot Institute we have been working on the case of a museum prototype alongside engineers from Softbank Robotics, who are interested in the project,” explains Ioana Ocnarescu, a researcher at Strate. A room containing exhibits such as a Minitel, tools linked to the development of the internet, photos of famous robotics researchers and robots has been set up at Softbank Robotics to provide mediation on science and technology. Once this set is in place, a 3D copy is made and a visit route is laid out between the different exhibits. This base scenario is used to film a 360° video guided by a mediator and to create a virtual guide in the form of a robot called Pepper, which travels around the 3D scene with the viewer. In both cases, the user is immersed in the environment using a mixed reality headset.

Freedom or realism: a choice to be made

Besides the graphics, which are naturally different between video and 3D modelling, the two technologies have one fundamental difference: freedom of action in the scenario. “In 360° video the viewer is passive,” explains Ioana Ocnarescu. “They follow the guide and can zoom in on objects, but cannot move around freely as they wish.” Their movement is limited to turning their head and deciding to spend longer on certain objects than on others. To allow this, the video is cut at several points, creating a decision tree that leads to specific sequences depending on the user’s choices.
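The branching structure behind such a 360° video can be pictured as a small graph of pre-cut sequences; the sequence names and choices below are invented for illustration.

```python
# Toy decision tree for a branched 360° video visit: each node is a pre-cut
# sequence, each edge a choice offered to the viewer. Names are invented.
scenario = {
    "intro":           {"look at Minitel": "minitel_closeup", "follow the guide": "robots_corner"},
    "minitel_closeup": {"continue": "robots_corner"},
    "robots_corner":   {"zoom on Pepper": "pepper_closeup", "finish the visit": "outro"},
    "pepper_closeup":  {"finish the visit": "outro"},
    "outro":           {},
}

def play(node, choices):
    """Follow a list of viewer choices through the branched video."""
    path = [node]
    for choice in choices:
        node = scenario[node].get(choice, node)  # ignore unavailable choices
        path.append(node)
    return path

print(play("intro", ["look at Minitel", "continue", "zoom on Pepper", "finish the visit"]))
```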

Like the 3D mediation, the 360°-video trial mediation is guided by a robot called Pepper.


 

3D modeling, on the other hand, grants a large amount of freedom to the viewer. They can move around freely in the scene, choose whether to follow the guide or not, walk around the exhibits and look at them from any angle, which is where 360° video is limited by the position of the camera. “User feedback shows that certain content is better suited to one device or the other,” the Strate researcher reports. For a painting or a photo, for example, there is little use in being able to travel around the object, and the viewer prefers to be in front of the exhibit, in its surroundings, with as much realism as possible. “360° video is therefore better adapted for museums with corridors and paintings on the walls,” she points out. On the other hand, 3D modeling is particularly adapted to looking at and examining 3D artefacts such as statues.

These experiments are extremely useful to researchers in design, in particular because they involve real users. “Knowing what people do with the devices available is at the heart of our reflection,” emphasizes Ioana Ocnarescu. Strate has been studying user-machine interaction for over 5 years to develop more effective interfaces. In this project, the people in immersion can give their feedback directly to the Strate team. “It is the most valuable thing in our work. When everything is controlled in a laboratory environment, the information we collect is less meaningful.”

The tests must continue to incorporate a maximum amount of feedback from as many different types of audience as possible. Once finished, the results will be compared with those of other use cases explored by the M.I.N.E.S Carnot Institute. “Mines ParisTech and IMT Mines Alès are comparing the same two devices but in the case of self-driving cars and exploration of the Chauvet cave,” explains the researcher.

 

[divider style=”normal” top=”20″ bottom=”20″]

Carnot TSN, a guarantee of excellence in partnership-based research since 2006

The Télécom & Société numérique (TSN) Carnot Institute has partnered with companies in their research to develop digital innovations since 2006. On the strength of over 1,700 researchers and 50 technology platforms, it offers cutting-edge research to resolve the complex technological challenges produced by the digital, energy, environmental and industrial transformations within the French production fabric. It addresses the following themes: Industry of the Future, networks and smart objects, sustainable cities, mobility, health and security.

The TSN Carnot Institute is composed of Télécom ParisTech, IMT Atlantique, Télécom SudParis, Institut Mines-Télécom Business School, Eurecom, Télécom Physique Strasbourg, Télécom Saint-Étienne, École Polytechnique (Lix and CMAP laboratories), Strate School of Design and Femto Engineering.

[divider style=”normal” top=”20″ bottom=”20″]

IRON-MEN: augmented reality for operators in the Industry of the Future

I’MTech is dedicating a series of success stories to research partnerships supported by the Télécom & Société Numérique (TSN) Carnot Institute, which the IMT schools are a part of.

[divider style=”normal” top=”20″ bottom=”20″]

The Industry of the Future cannot happen without humans at the heart of production systems. To help operators adapt to the fast development of industrial processes and client demands, elm.leblanc, IMT and Adecam Industries have joined forces in the framework of the IRON-MEN project. The aim is to develop an augmented reality solution for human operators in industry.

 

Many production sites use manual processes. Humans are capable of a level of intelligence and flexibility that is still unattainable by industrial robots, an ability that remains essential for the French industrial fabric to satisfy increasingly specific, demanding and unpredictable customer and user demands.

Despite alarmist warnings about replacement by technology, humans must remain central to industrial processes for the time being. To enhance the ability of human operators, IMT, elm.leblanc and Adecam Industries have joined forces in the framework of the IRON-MEN project. The consortium will develop an augmented reality solution for production operators over a period of 3 years.

The augmented reality technology will be designed to help companies develop flexibility, efficiency and quality in production, as well as strengthen communication among teams and collaborative work. The solution developed by the IRON-MEN project will support users by guiding and assisting them in their daily tasks to allow them to increase their versatility and ability to adapt.

The success of such an intrusive piece of technology as an augmented reality headset depends on the user’s physical and psychological ability to accept it. This is a challenge that lies at the very heart of the IRON-MEN project, and will guide the development of the technology.

The aim of the solution is to propose an industrial and job-specific response that meets specific needs to efficiently assist users as they carry out manual tasks. It is based on an original approach that combines digital transformation tools and respect for the individual in production plants. It must be quickly adaptable to problems in different sectors that show similar requirements.

IMT will contribute its research capacity to support elm.leblanc in introducing this augmented reality technology within its industrial organization. Immersion, which specializes in augmented reality experiences, will develop the interactive software interface to be used by the operators. The solution’s level of adaptability in an industrial environment will be tested at the elm.leblanc production sites at Drancy and Saint-Thégonnec as well as through the partnership with Adecam Industries. IRON-MEN is supported by the French General Directorate for Enterprises in the framework of the “Grands défis du numérique” projects.