
Petros Elia was awarded an ERC grant for his research on wireless networks

Obtaining a grant from the European Research Council is one of the ultimate goals for a scientist based in Europe. That’s exactly what Prof. Petros Elia, a researcher at Eurecom, achieved last November. This makes a total of two ERC grants at Eurecom in two years.

 

 


“No doubt, getting the ERC grant is a very important milestone in one’s career,” says Petros Elia, Professor in Mobile Communications. The prestigious nature of this honor also reflects very well on Eurecom as a whole, and on the Communications Systems Department in particular, which now boasts two ERC grants (Prof. David Gesbert received one in 2015). Petros admits that the ERC has the potential to bring about some changes and challenges: “It could change my day-to-day work, since it could allow me to pursue more ambitious research goals and riskier endeavors.” Petros will also be involved in the recently created Eurecom committee specialized in ERC grants, which helps scientists benefit from the experience of colleagues who have already received such grants: “Of course, I’m very available to help Eurecom professors write their ERC proposals,” adds Petros Elia. The five-year grant he received represents €2 million, the maximum for “Consolidator Grants”, the type of ERC grant Petros Elia applied for. It will help him develop DUALITY, his ground-breaking project that aims to revolutionize the way wireless networks work.

 

DUALITY Project

For some, the explosion in data volumes may mean the end of wireless networks, but for Petros Elia, increased data storage capabilities could actually lead to major gains in data transmission rates in wireless networks. This is the scale of the challenge DUALITY takes on. For some months, Petros had been feeling that “something exciting was coming up” in this area. He actually proved – in a specific setting (a broadcast channel) – that there is a synergy between cache memory and feedback information theory. He now needs to prove it for all sorts of configurations. “Even without the ERC grant, I would be doing this type of research. I strongly believe that this approach can lead to something very new and powerful. It’s a very exciting area of research,” he adds. For him, DUALITY deals with concepts that will have an impact far beyond 5G. It is typically a high-risk/high-gain topic, which he demonstrated convincingly in his ERC proposal. The big challenge of DUALITY is to lay the theoretical foundations for transforming memory into better data rates in wireless communications. In other words, that means creating a new theory. Nothing less. A theory that would reveal how, with a little feedback and a splash of smart caching of a small fraction of popular content, it would be possible to adequately handle the anticipated extreme increase in users and demand.

It is actually urgent to find solutions since “we are running out of available resources for transmitting signals over wireless networks – whose traffic is expected to increase tenfold in just 5 years,” Petros explains. And there is an unexplored synergy between two seemingly very different structural worlds: feedback information theory and distributed storage. Seemingly disconnected, both theories have severe limitations when used separately, especially in the presence of an increasing number of users. “But the mathematical convergence of the two could lead to something powerful. Each method could build on the imperfections of the other.” Hence the name of the project, DUALITY, which says it all.

Petros Elia’s idea is to use distributed memory at some nodes of the network so that it absorbs structure from the data and converts it into structure for the network. This would restructure the network, yielding faster networks with less unwanted interference. How would it work? The first step is to predict probable interfering data streams so they can be cleverly placed in advance in some collective cache memory of subgroups of users. Then, when the time comes, the system should be able to cleverly transmit the right signals/information to the right users. For example, the system would predict that shows like “The Big Bang Theory” or “The Man in the High Castle” could be downloaded on Wednesday. So, the day before, it would store parts of these in specific places across the wireless network, so that it – with a single common transmission! – can deliver different series to different people; the key is to create interference on purpose! It is easy to understand that multiple users and multiple data make the whole process a lot more complicated and interesting. The basics of the new theory imagined by Petros can be summarized in three words: maths, maths and maths – feedback information theory, coding theory and algebraic combinatorics… These are the main tools that will help Eurecom scientists find how and where to place memory across the network, so that these restructuring phenomena can happen.
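To make the “single common transmission” idea concrete, here is a minimal sketch of the classic two-user coded-caching trick this line of work builds on (an illustrative toy example, not the project’s actual algorithms; the file names and contents are invented):

```python
# Toy illustration of coded caching: one broadcast serves two different requests.
# (Illustrative sketch only; file names and contents are made up.)

def xor_bytes(x: bytes, y: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(x, y))

# Two popular shows, each split into two equal halves.
A = b"BigBangTheory-part1!" + b"BigBangTheory-part2!"
B = b"HighCastle----part1!" + b"HighCastle----part2!"
half = len(A) // 2
A1, A2 = A[:half], A[half:]
B1, B2 = B[:half], B[half:]

# Placement phase (e.g. overnight): each user caches a different half of every file.
cache_user1 = {"A1": A1, "B1": B1}
cache_user2 = {"A2": A2, "B2": B2}

# Delivery phase: user 1 requests show A, user 2 requests show B.
# A single common broadcast deliberately mixes ("interferes") the two missing halves.
broadcast = xor_bytes(A2, B1)

# Each user cancels the unwanted part with what is already in its cache.
recovered_A2 = xor_bytes(broadcast, cache_user1["B1"])   # user 1 recovers A2
recovered_B1 = xor_bytes(broadcast, cache_user2["A2"])   # user 2 recovers B1

assert cache_user1["A1"] + recovered_A2 == A
assert recovered_B1 + cache_user2["B2"] == B
print("One common transmission satisfied two different requests.")
```

The same caching gain grows with the number of users, which is exactly where the combinatorial questions of the project arise.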

 

Scientific and technological impacts

Getting to this result will obviously take a few years, and some major milestones have to be reached along the way. First, the fundamental limits of memory-aided wireless communications will be explored so that the relationship between feedback information and storage capacity can be better understood. This will lead to the design of specific algorithms focusing, for example, on “feedback-boosted coded caching” or memory-aided interference alignment. A third important step will be the use of memory to simplify the structure of wireless networks; it turns out that a bit of feedback-aided caching can fundamentally alter the structure of the network into something much simpler. All these results should then lead to a unified theory stating how memory can be converted into throughput in very large networks such as ultra-high-frequency networks, optical networks or the cloud. Finally, “all this will be validated with a series of wireless testbed experiments”, explains Petros, “either on the OpenAirInterface simulator – the EURECOM wireless technology platform – or with the help of companies like a German start-up that has the right receivers and base stations to perform the experiments”. The results of these experiments should show whether a novel class of algorithms, placed directly on mobile devices, can efficiently manage memory and feedback in order to boost network performance.

According to Petros, it is possible that 5 to 6 years from now, memory-aided algorithms will guarantee ultra-fast massive downloads, no matter how much the network load increases. In fact, the more storage we have, the better wireless networks will work! Increasing storage capacities could even mean endless resources for wireless networks, gradually ushering in a new period in wireless history: the “Eldorado of Memory”, as Petros Elia likes to call it. Future wireless standards will then have to take this entirely new paradigm into account.

 


 


Gasification, the future of organic waste recovery

At a time when the challenge of waste recovery is becoming increasingly evident, gasification is emerging as a promising solution. The process allows organic waste to be decomposed into synthetic gas, which can be burned for energy purposes, or reprocessed to obtain gases of interest, such as methane and hydrogen. Javier Escudero has been studying this virtuous alternative to incineration for over eight years at Mines Albi. At the RAPSODEE laboratory (UMR CNRS 5302), he is developing a pilot process for recovering problematic waste, such as non-recyclable plastic materials and certain types of agricultural residue.

 

This century-old technique is now more relevant than ever. Gasification, which generates combustible gas from carbonaceous solids such as coal and wood, was popularized in the 19th century to power producer-gas vehicles. It sparked renewed interest during World War II, when it was used to produce synthetic fuels from coal during the oil shortage.

 

Waste, tomorrow’s resource

In this era of energy transition, researchers are reviving this technique to recover a much more promising carbon source: organic waste! Javier Escudero is one such researcher. His credo? “Waste is tomorrow’s resource.” At Mines Albi, he is working to optimize this recovery method, which is more virtuous than outright incineration. His target materials include forest residues, household waste and non-recyclable plastic materials. “Gasification is used particularly for dry and solid waste. It is complementary to the biological methanation process, which is used more for wet waste,” he explains.

Several steps are involved in transforming waste into gas. The waste, which is preconditioned and dried beforehand, first undergoes pyrolysis in a low-oxygen atmosphere at temperatures above 300°C. “Under these conditions, the energy supplied breaks the molecular bonds. The carbonaceous materials separate into gas and solid residue. The following step is the true gasification stage: at 750°C or higher, the water vapor or carbon dioxide present completes the decomposition of these elements into a mixture of small molecules called synthesis gas, essentially composed of carbon monoxide and hydrogen,” Javier Escudero explains.
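For reference, the char-conversion step he describes corresponds to two textbook reactions (standard gasification chemistry, not specific to the RAPSODEE pilot):

```latex
% Main char gasification reactions at the >750°C stage
% (textbook chemistry, not specific to the Mines Albi pilot):
\begin{align}
  \mathrm{C} + \mathrm{H_2O} &\longrightarrow \mathrm{CO} + \mathrm{H_2}
      && \text{(steam gasification)} \\
  \mathrm{C} + \mathrm{CO_2} &\longrightarrow 2\,\mathrm{CO}
      && \text{(Boudouard reaction)}
\end{align}
```

Together, these reactions convert the solid carbon left after pyrolysis into the carbon monoxide and hydrogen that make up the synthesis gas.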

This synthesis gas, the basic “building block” of petrochemistry, has proven to be very useful: it can be burned, providing a greater yield than the combustion of the original solid. It can also power a cogeneration engine to produce heat and electricity. Finally, it can be reprocessed to produce gases of interest: methane, hydrogen, acetylene, etc. “We can therefore replace one source of energy or fossil material with its renewable equivalent,” Javier Escudero explains. It is thanks to this great versatility that gasification provides a virtuous alternative to incineration. However, some optimizations must still be made to improve its economic viability.

 

Thermal recovery for industrial benefit

Javier Escudero has been working towards this goal since his arrival at Mines Albi in 2008. His aim is to identify the best means of enhancing the yield of the process, some of whose mechanisms remain relatively unknown. In 2013, one of his team’s publications (1), explaining the respective influences of carbon dioxide and water vapor on the efficiency of gasification, was well received by the scientific community.

 

[box type=”shadow” align=”” class=”” width=””]

VALTHERA, waste recovery here, there, and everywhere

The VALTHERA platform (which in French stands for VALorisation THErmique des Résidus de transformation des Agro-ressources, the Thermal Recovery of Processing Residues from Agro-Resources) is located at the Mines Albi site and is backed by the Agri Sud-Ouest and Derbi competitiveness clusters. It is a technological platform specialized in the development of highly energy-efficient thermal processes for the recovery of biomass waste and by-products. Its technological offer includes drying, pyrolysis, torrefaction, combustion and gasification. Different means of recovery are being studied for this widely available waste, in order to generate energy or value-added materials. Another specific feature of the VALTHERA platform is that it is developing a solar power source intended to drive all of these thermal processes and improve their ecological footprint. It also offers high-performance equipment for treating various types of emissions and pollutants. The platform also acts as a catalyst for companies, and specifically for SMEs seeking to carry out research and development programs, demonstrate the feasibility of a project, or generalize a process.[/box]

 

Now, the time has come to apply this research. The researcher and his team are therefore working to develop the VALTHERA platform (in French: VALorisation THErmique des Résidus de transformation des Agro-ressources, the Thermal Recovery of Processing Residues from Agro-Resources). This platform is aimed at developing various processes for thermal waste recovery in partnership with industrial stakeholders (see box). In particular, Javier Escudero and his colleagues at the RAPSODEE laboratory (Recherche d’Albi en génie des Procédés des Solides Divisés, de l’Énergie et de l’Environnement, the Albi Research Centre for Process Engineering in Particulate Solids, Energy and the Environment) are working on a 100 kW pilot gasification process. This process is scheduled to be operational by the end of 2016 and will be a forerunner of final processes reaching up to 3 MW, “a power range that is suitable for processing organic waste generated on a small scale, which could suit the needs of an SME.” The team is particularly focused on “fixed-bed” technology. With this system, the entire process takes place within a single reactor. The waste is “piled in” from the top, and then gradually goes through the steps of pyrolysis and gasification, driven downwards by gravity, until the synthesis gas is recovered at the bottom of the reactor.

The researchers are working in partnership with the French gasifier manufacturer CogeBio to expand the possibilities of this technology. “The only commercial solutions that exist are for wood chips. We are going to assess the use of other types of waste, such as vine shoots,” explains Javier Escudero. Eventually, the project will expand to include other sources, such as non-recyclable plastics, again in connection with solutions sought by industrial stakeholders. “Today, the processing cost for certain types of waste is negative, because the demand to get rid of this waste is greater than the processing capacities,” the researcher explains. In terms of recovery, the synthesis gas will first be burned for energy purposes. Depending on the different partnerships, more ambitious recovery processes could be implemented. A top process of interest is the production of hydrogen, a high-value-added energy carrier. All of these valuable initiatives are aimed at transforming our waste into renewable energy!

 


Curiosity: the single driving force

Nothing predestined Javier Escudero to develop gasification in France… unless it was his scientific curiosity. After falling in love with research during an internship at a Swiss polymer manufacturer, the Spanish student began his thesis on polymerization, jointly supervised with a Spanish manufacturer. After completing his post-graduate research on the same theme at the Laboratory of Chemical Engineering (LGC) in Toulouse (UMR 5503), in 2008 he applied for a research position at Mines Albi in the area of waste gasification, a subject far removed from his beginnings in chemistry. However, his curiosity and industrial experience combined to bring him success. Eight years later, he is now an Assistant Professor at the RAPSODEE laboratory (UMR CNRS 5302)… and extremely passionate about sustainable development. In addition to his daily work on gasification, he co-organizes the international WasteEng conference (Conference on Engineering for Waste and Biomass Valorisation), which brings together stakeholders from across the waste chain, from the identification of sources to their recovery.

 

(1) Guizani, C. et al., “The gasification reactivity of high-heating-rate chars in single and mixed atmospheres of H2O and CO2”, Fuel 108 (2013) 812–823.

 

Roisin Owens received an ERC Consolidator Grant to carry on her work in the field of bioelectronics.

Roisin Owens scores a hat-trick with the award of a third ERC grant

In December 2016, Roisin Owens received a Consolidator Grant from the European Research Council (ERC). Following her 2011 Starting Grant and her 2014 Proof of Concept Grant, this is the third time the ERC has rewarded the quality of the projects she leads at Mines Saint-Étienne, in France. Beyond being a funding source, this is also prestigious peer recognition, since only around 300 Consolidator Grants are awarded to researchers each year[1]. We asked Roisin Owens a few questions to better understand what a new ERC grant means for her and her work.

 

How do you feel now that you have been awarded a Consolidator Grant by the ERC?

Roisin Owens: I feel more confident. When I was awarded the Starting Grant in 2011, I thought I had been lucky, as if I had just been in the right place at the right time. But now I don’t think it is luck anymore. I think there is true value in my work. Of the 13 researchers who evaluated my proposal for the Consolidator Grant call, 12 rated it as “outstanding” or “very good”. I knew the idea was good, but I also knew the grant was very competitive: there are some world-class scientists in the running for it!

 

What does the Consolidator Grant give you that the Starting Grant did not?

RO: The Consolidator Grant brings greater scientific recognition. The Starting Grant rewards future potential and supports a young and promising researcher. So if you have a good idea, a good thesis and some scientific publications, you can be eligible. For the Consolidator, you need to have already published at least ten articles as a postdoctoral researcher or project leader — principal investigator. This means that this grant is dedicated to researchers who already have some scientific credibility, and for whom the ERC will consolidate a mid-career position.

 

How has your research taken shape over the course of the ERC grants you have received?

RO: The Starting Grant allowed me to start my work in bioelectronics. Since I am a biologist, I wanted to develop a set of technologies based on conducting polymers to measure biological activity in a non-invasive way. This is what I did in the Ionosense project. With the Consolidator, I want to go deeper. Now that the technologies are functional, I will try to answer questions that have never even been asked yet, because researchers did not have the tools to do so.

Read more about the scientific work of Roisin Owens: When biology meets electronics

 

Which tools do your technologies give to researchers?

RO: When scientists work on cancer or on the effects of microorganisms on our biological system, they have to use animal experiments. This takes time and is expensive, not to mention the ethical concerns. Furthermore, the mouse is not necessarily a good model of the human organism. My idea is to perform in vitro modelling of biological systems that accurately reflect human physiology. To do this, I mimic the human body using complex 3D microfluidic systems that recreate fluid circulation in organs. Then I include electronics to monitor a variety of effects on this system. For me, it is a way of adapting technology to the reality of the biology. Currently, the opposite usually happens in laboratories: researchers force biology to adapt to the equipment!

 

Do you think you could be at this point in your research if you had not been awarded your ERC grants?

RO: Definitely not. First of all, the Starting Grant opened doors for me. It gave me some credibility and the possibility to build partnerships. For example, when I got the grant, I was able to recruit a postdoctoral fellow from Stanford, a top university in the US. I am not sure I could have recruited that person without the Starting Grant. ERC grants are the only ones in Europe to give you such independence. They provide 1.5-2 million euros for five years! This means you don’t spend so much of your time looking for money for research, and you can really focus on your work. The alternative would be to go through a national funding process, like those of the French national research agency [ANR], but this is not at the same scale: we are talking about 400 000 euros per project.

 

Between your Starting Grant and your Consolidator Grant, you received a Proof of Concept (POC) Grant. What was it for?

RO: It is a small grant compared to the others: 150 000 euros over a single year. It is dedicated to researchers who have already had another ERC grant. As its name suggests, it provides some extra money to generate a proof of concept. If one of the technologies you have developed during your first grant shows some commercial potential, you can then explore this with a view to a more concrete application. In our first project — Ionosense — one part of the project looked promising in terms of commercialisation. With the POC, we were able to make a prototype. Now we have patented a technology for in vitro toxicology tests, and we are currently in negotiations with a company to produce the prototype. For me, it is very important to find applications for my research that could be useful for society, since my work is funded through taxes paid by European citizens.

 

Since we are talking about it: what are grants specifically used for?

RO: Essentially, to build a team. We are carrying out multidisciplinary work, so we need a wide range of expertise. I have broad expertise in multiple fields of biology, and I am starting to acquire a good knowledge of electronics, but I can’t cover everything. To help me, I have to recruit young, talented people who are passionate about the key topics of my project: microfluidics, analytical chemistry, 3D modelling of cellular biology, etc. Since I recruit them just after they have finished their theses, they are up to date with the latest technologies. It is also important for me to hire young researchers and to train them, to help stem the brain drain towards other countries.

 

Every scientist who gets an ERC Grant is doing valuable work. But with three ERC grants awarded to you, there is something more than quality. What is your secret?

RO: First, I am a native English speaker. I was born in Ireland, and was bilingual in Gaelic and English at an early age. This is a big advantage when you write project proposals. I also like to take time to let my ideas mature and blossom. The ERC projects I submitted were not just written in a few weeks before the deadline. They were thought through over several months. I also have to thank the Cancéropôle PACA, which provided financial support for me to consult with an advisor on project building. And I have to admit I truly have a secret weapon — two, actually: my sisters. One is an editor for a Nature journal, and the other works on communications in museums. Every time I write a proposal, I send it to them so they can help me polish it!

 

[1] In 2016, 314 researchers were awarded a Consolidator Grant out of 2,274 projects evaluated by the ERC (success rate: 13.8%). In 2015, 302 researchers were awarded the same grant out of 2,023 projects (14.0%). Source: ERC statistics.


4 ERC Consolidator Grants for IMT

The European Research Council has announced the results of its 2016 Call for Consolidator Grants. Out of the 314 researchers to receive grants throughout Europe (across all disciplines), four come from IMT schools.

 

10% of French grants

These four grants represent 10% of all grants obtained in France, where 43 project leaders from French institutions received awards (placing France in 3rd position, behind the United Kingdom with 58 projects and Germany with 48).

For Christian Roux, the Executive Vice President for Research and Innovation at IMT, “this is a real recognition of the academic excellence of our researchers on a European level. Our targeted research model, which performs well in our joint research with the two very active Carnot institutes, will also benefit from the ERC’s support for more fundamental work leading to major scientific breakthroughs.”

Consolidator Grants reward experienced researchers with a sum of €2 million to fund projects over a five-year period, providing them with substantial support.

 

[one_half][box]After Claude Berrou in 2012, Francesco Andriulli is the second IMT Atlantique researcher to be honored by Europe as part of the ERC program. He will receive a grant of €2 million over five years, enabling him to develop his work in the field of computational electromagnetism.
[/box][/one_half]

[one_half_last][box]Yanlei Diao, a world-class scientist recruited jointly by École Polytechnique, the Inria Saclay – Île-de-France Centre and Télécom ParisTech, has been honored for scientific excellence for her project as well as her innovative vision in terms of “acceleration and optimization of analytical computing for big data”. [/box][/one_half_last]

[one_half][box]

Petros Elia is a professor of Telecommunications at Eurecom and has been awarded this ERC Consolidator Grant for his DUALITY project (Theoretical Foundations of Memory Micro-Insertions in Wireless Communications).

[/box][/one_half]

[one_half_last][box]

This marks the third time that Roisin Owens, a Mines Saint-Étienne researcher specialized in bioelectronics, has been rewarded by the ERC for the quality of her projects. She received a Starting Grant in 2011 followed by a Proof of Concept Grant in 2014.
[/box][/one_half_last]


Accurate quantification of uncertainty: an AXA Chair at Eurecom

AXA Chairs reward only a few scientists every year. With his chair on New Computational Approaches to Risk Modeling, Maurizio Filippone, a researcher at Eurecom, joins a community of prestigious researchers such as Jean Tirole, the French professor who won the Nobel Prize in Economics.

 

Maurizio, you’ve just been awarded an AXA Chair. Could you explain what it is about and why your project was selected?

AXA Chairs are funded by the AXA Research Fund, which supports fundamental research to advance our understanding of risk. Started in 2008, the fund finances about 50 new projects annually, of which four to eight are chairs. Chairs are individual fellowships, and the one I received is going to support my research activities for the next seven years. My project is entitled “New Computational Approaches to Risk Modeling”. The AXA Chair selection process is not based on the project alone. For this type of grant, several criteria are important: timeliness, vision, credibility of both the proposal and the candidate (track record, collaborations, etc.), the institution, and the fit with the institution’s strategy. For example, the fact that the research area of this topic is in line with Eurecom’s long-term strategy in data science played a major role in the selection of my project. This grant definitely represents a major achievement in my career.

 

What is your project about exactly?

My project deals with one simple question: how do you go from data to decisions? Today, we can access so much data generated by so many sensors, but we are facing difficulties in using these data in a sensible way. Machine learning is the main technique that helps make sense of data, and I will use and develop novel techniques in this domain throughout this project. Quantification of risk and decision-making require accurate quantification of uncertainty, which is a major challenge in many areas of science involving complex phenomena, like finance, environmental and medical sciences. In order to accurately quantify the level of uncertainty, we employ the flexible and accurate tools offered by probabilistic nonparametric statistical models. But today’s diversity and abundance of data make it difficult to use these models. The goal of my project is to propose new ways to better manage the interface between computational and statistical models – which in turn will help obtain accurate confidence in predictions based on observed data.

 

How will you be able to do that? With what kind of advanced computing techniques?

The idea behind the project is that it is possible to carry out exact quantification of uncertainty relying exclusively on approximate, and therefore cheaper, computations. Using nonparametric models is difficult and generally computationally intractable due to the complexity of the systems and the amount of data. Although computers are more and more powerful, exact computations remain serial, too long, too expensive and sometimes almost impossible to carry out. The way approximate computations will be designed in this project should reduce computing time by orders of magnitude! The exploitation of parallel and distributed computing on large-scale computing facilities – an area of strong expertise at Eurecom – will be key to achieving this. We will thus be able to develop new computational models that make accurate quantification of uncertainty possible.
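To give a flavour of what “approximate, therefore cheaper, computations” can look like in practice, here is a minimal sketch (our own illustration, not the chair’s actual methods; all parameter values are arbitrary) in which a Gaussian process – a typical probabilistic nonparametric model – is approximated with random Fourier features, so that inference scales with the number of features rather than cubically with the number of data points:

```python
# Illustrative sketch: approximate Gaussian-process regression with random
# Fourier features (not the AXA Chair's actual algorithms; toy data and
# arbitrary hyperparameters).
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D data: noisy observations of a smooth function.
n = 500
X = rng.uniform(-3, 3, size=(n, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.standard_normal(n)

lengthscale, noise_var, D = 0.5, 0.01, 200   # D random features instead of an n x n kernel

# Random Fourier features approximating k(x, x') = exp(-||x - x'||^2 / (2 * lengthscale^2)).
W = rng.standard_normal((1, D)) / lengthscale
b = rng.uniform(0, 2 * np.pi, size=D)
def features(x):
    return np.sqrt(2.0 / D) * np.cos(x @ W + b)

# Bayesian linear regression in feature space: cost O(n D^2) instead of O(n^3).
Phi = features(X)
A = Phi.T @ Phi / noise_var + np.eye(D)
mean_w = np.linalg.solve(A, Phi.T @ y) / noise_var

# Predictive mean and variance on a test grid: the uncertainty comes for free.
Xtest = np.linspace(-3, 3, 5).reshape(-1, 1)
Phi_t = features(Xtest)
pred_mean = Phi_t @ mean_w
pred_var = noise_var + np.einsum("ij,ij->i", Phi_t @ np.linalg.inv(A), Phi_t)
print(np.c_[Xtest, pred_mean, np.sqrt(pred_var)])
```

The predictive standard deviation in the last column is the kind of calibrated uncertainty the project aims to preserve while cutting computational cost.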

 

What are the practical applications?

Part of the focus of the project will be on life and environmental applications that require quantification of risk. We will therefore mostly use life sciences data (e.g., neuroimaging and genomics) and environmental data for our models. I am confident that this project will help tackle the explosion of large-scale and diverse data in life and environmental sciences. This is already a huge challenge today, and it will be even more difficult to deal with in the future. In the mid-term, we will develop practical and scalable algorithms that learn from data and accurately quantify the uncertainty of their predictions. In the long term, we will be able to improve on current approaches for risk estimation: they will be timely and more accurate. These approaches can have major implications in the development of medical treatment strategies or environmental policies, for example. Is some seismic activity going to trigger a tsunami for which it is worth warning the population or not? Is a person showing signs of a systemic disease, like Parkinson’s, actually going to develop the disease or not? I hope the results of our project will make it easier to answer these questions.

 

Do you have any partnerships in this project?

Of course! I will initiate some new collaborations and continue collaborating with several prestigious institutions worldwide to make this project a success: Columbia University in NYC, Oxford, Cambridge, UCL and Glasgow in the UK, the Donders Institute of Neuroscience in the Netherlands, New South Wales in Australia, as well as INRIA in France. The funding from the AXA Research Fund will help create a research team at Eurecom: the team will comprise myself, two PhD students and one postdoc. I would like the team to bring together a blend of expertise, since novelty requires an interdisciplinary approach: computing, statistics, mathematics, physics, plus some expertise in life and environmental sciences.

 

What are the main challenges you will be facing in this project?

Attracting talent is one of the main challenges! I’ve been lucky so far, but it is generally difficult. This project is extremely ambitious; it is a high-risk, high-gain project, so there are some difficult technical challenges to face – all of them related to the cutting-edge tools, techniques and strategies we will be using and developing. We will find ourselves in the usual situation when working on something new and visionary – namely, being stuck in blind alleys or being forced to dismiss promising ideas that do not work, to give some examples. But that is why it has been funded for seven years! Despite these difficulties, I am confident this project will be a success and that we will make a huge impact.

 


The French National Library is combining sociology and big data to learn about its Gallica users

As a repository of French culture, the Bibliothèque Nationale de France (BnF, the French National Library) has always sought to know and understand its users. This is no easy task, especially when it comes to studying the individuals who use Gallica, its digital library. To learn more about them, without limiting itself to interviewing sample individuals, the BnF has joined forces with Télécom ParisTech, taking advantage of its multidisciplinary expertise. To meet this challenge, the scientists are working with IMT’s TeraLab platform to collect and process big data.

[divider style=”normal” top=”20″ bottom=”20″]

 

[dropcap]O[/dropcap]ften seen as a driving force for technological innovation, could big data also represent an epistemological revolution? The use of big data in experimental sciences is nothing new; it has already proven its worth. But the humanities have not been left behind. In April 2016, the Bibliothèque Nationale de France (BnF) leveraged its longstanding partnership with Télécom ParisTech (see box below) to carry out research on the users of Gallica — its free, online library of digital documents. The methodology used is based in part on the analysis of large quantities of data collected when users visit the website.

Every time a user visits the website, the BnF server records a log of all the actions carried out by the individual on Gallica. This information includes pages opened on the website, time spent on the site, links clicked on the page, documents downloaded, etc. These logs, which are anonymized in compliance with the regulations established by the CNIL (French Data Protection Authority), therefore provide a complete map of the user’s journey, from arriving at Gallica to leaving the website.

With 14 million visits per year, this information represents a large volume of data to process, especially since it must be correlated with the records of the 4 million documents available for consultation on the site — type of document, creation date, author, etc. — which also provide valuable information for understanding users and their interest in documents. Carrying out sociological fieldwork alone, by interviewing larger or smaller samples of users, is not enough to capture the great diversity and complexity of today’s online user journeys.

Researchers at Télécom ParisTech therefore took a multidisciplinary approach. Sociologist Valérie Beaudouin teamed up with François Roueff to establish a dialogue between the sociological analysis of uses through field research, on one hand, and data mining and modeling on the other. “Adding this big data component allows us to use the information contained in the logs and records to determine the typical behavioral profiles of Gallica users,” explains Valérie Beaudouin. The data is collected and processed on IMT’s TeraLab platform. The platform provides researchers with a turnkey working environment that can be tailored to their needs and offers more advanced features than commercially available data processing tools.

Also read on I’MTech TeraLab and La Poste have teamed up to fight package fraud

What are the different profiles of Gallica users?

François Roueff and his team were tasked with using the information available to develop unsupervised learning algorithms in order to identify categories of behavior within the large volume of data. After six months of work, the first results appeared. The initial finding was that only 10 to 15% of Gallica sessions involve consulting several digital documents. The remaining 85 to 90% are occasional visits for a specific document.

“We observed some very interesting things about the 10 to 15% of Gallica users involved,” says François Roueff. “If we analyze the Gallica sessions in terms of the variety of types of documents consulted (monographs, press, photographs, etc.), eight out of ten categories only use a single type,” he says. This reflects a tropism on the part of users toward a certain form of media. When it comes to consulting documents, in general there is little variation in the ways in which Gallica users obtain information. Some search for information about a given topic solely by consulting photographs, while others consult only press articles.
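To illustrate the kind of analysis involved (a simplified sketch, not the team’s actual pipeline; the session data and categories below are invented), sessions can be described by the share of each document type they contain and grouped with an off-the-shelf clustering algorithm:

```python
# Simplified sketch of grouping Gallica-like sessions by the mix of document
# types they consult (illustrative only; the real study uses the BnF logs and
# its own unsupervised models). Requires scikit-learn.
import numpy as np
from sklearn.cluster import KMeans

doc_types = ["monograph", "press", "photograph"]

# Hypothetical sessions: counts of consulted documents per type.
sessions = np.array([
    [12, 0, 0],   # only monographs
    [0, 8, 1],    # mostly press
    [0, 0, 15],   # only photographs
    [1, 0, 9],
    [10, 1, 0],
    [0, 7, 0],
])

# Describe each session by the proportion of each document type it contains.
profiles = sessions / sessions.sum(axis=1, keepdims=True)

# Unsupervised grouping into behavioral categories.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(profiles)
for label, profile in zip(kmeans.labels_, profiles):
    dominant = doc_types[int(np.argmax(profile))]
    print(f"cluster {label}: dominant type = {dominant}, profile = {np.round(profile, 2)}")
```

Here the cluster labels simply recover the dominant document type of each toy session.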

According to Valérie Beaudouin, the central focus of this research lies in understanding such behavior. “Using these results, we develop hypotheses, which must then be confirmed by comparing them with other survey methodologies,” she explains. Data analysis is therefore supplemented by an online questionnaire to be filled out by Gallica users, field surveys among users, and even by equipping certain users with video cameras to monitor their activity in front of their screens.

[tie_full_img]Photo of a communication poster for the Bibliothèque Nationale de France (BnF) with the slogan “Êtes-vous déjà entré à l’intérieur d’une encyclopédie ?” (“Have you ever been inside an encyclopedia?”), October 2016. For the institution, making culture accessible to the public is a crucial mission, and that requires access to digital resources suited to its users.[/tie_full_img]

Photo from a poster for the Bibliothèque Nationale de France (BnF), October 2016. For the institution, making culture available to the public is a crucial mission, and that means digital resources must be made available in a way that reflects users’ needs.

 

“Field studies have allowed us to understand, for example, that certain groups of Gallica users prefer downloading documents so they can read them offline, while others would rather consult them online to benefit from the high-quality zoom feature,” she says. The Télécom ParisTech team also noticed that, in order to find a document on the digital library website, some users preferred to use Google and include the word “Gallica” in their search, instead of using the website’s internal search engine.

Confirming the hypotheses also means working closely with teams at BnF, who provide knowledge about the institution and the technical tools available to users.  Philippe Chevallier, project manager for the Strategy and Research delegation of the cultural institution, attests to the value of dialogue with the researchers: “Through our discussions with Valérie Beaudouin, we learned how to take advantage of the information collected by community managers about individuals who are active on social media, as well as user feedback received by email.”

Analyzing user communities: a crucial challenge for institutions

The project has provided the BnF with insight into how existing resources can be used to analyze users. This is another source of satisfaction for Philippe Chevallier, who is committed to the success of the project. “This project is proof that knowledge about user communities can be a research challenge,” he says with excitement. “It’s too important an issue for an institution like ours, so we need to dedicate time to studying it and leverage real scientific expertise,” he adds.

And when it comes to Gallica, the mission is even more crucial. It is impossible to see Gallica users, whereas the predominant profile of users of BnF’s physical locations can be observed. “A wide range of tools are now available for companies and institutions to easily collect information about online uses or opinions: e-reputation tools, web analytics tools etc. Some of these tools are useful, but they offer limited possibilities for controlling their methods and, consequently, their results. Our responsibility is to provide the library with meaningful, valuable information about its users and to do so, we need to collaborate with the research community,” says Philippe Chevallier.

In order to obtain the precise information it is seeking, the project will continue until 2017. The findings will offer insights into how the cultural institution can improve its services. “We have a public service mission to make knowledge available to as many people as possible,” says Philippe Chevallier. In light of the researchers’ observations, the key question that will arise is how to optimize Gallica. Who should take priority? The minority of users who spend the most time on the website, or the overwhelming majority of users who only use it sporadically? Users from the academic community — researchers, professors, students — or the “general public”?

The BnF will have to take a stance on these questions. In the meantime, the multidisciplinary team at Télécom ParisTech will continue its work to describe Gallica users. In particular, it will seek to fine-tune the categorization of sessions by enhancing them with a semantic analysis of the records of the 4 million digital documents. This will make it possible to determine, within the large volume of data collected, which topics the sessions are related to. The task poses modeling problems which require particular attention, since the content of the records is intrinsically inhomogeneous: it varies greatly depending on the type of document and digitization conditions.

 

[divider style=”normal” top=”20″ bottom=”20″]

Online users: a focus for the BnF for 15 years

The first study carried out by the BnF to describe its online user community dates back to 2002, five years after the launch of its digital library, in the form of a research project that already combined approaches (online questionnaires, log analysis, etc.). In the years that followed, digital users became an increasingly important focus for the institution. In 2011, a survey of 3,800 Gallica users was carried out by a consulting firm. Realizing that studying users would require more in-depth research, the BnF turned to Télécom ParisTech in 2013 with the objective of assessing the different possible approaches for a sociological analysis of digital uses. At the same time, the BnF launched its first big data research project, measuring Gallica’s position on the French internet for World War I research. In 2016, the sociology of online uses and the big data experiment were brought together, resulting in this project to understand the uses and users of Gallica.[divider style=”normal” top=”20″ bottom=”20″]

 


The autonomous car: safety hinging on a 25cm margin

Does an autonomous or semi-autonomous car really know where it is located on a map? How accurately can it position itself on the road? For the scientists who are part of the European H2020 “HIGHTS” project, intelligent transportation systems must know their position down to one quarter of a meter. Jérôme Härri, a researcher in communication systems at Eurecom — a partner school for this project — explains how the current positioning technology must be readjusted to achieve this level of precision. He also explains why this involves a different approach than the one used by manufacturers such as Tesla or Google.

 

You are seeking solutions for tracking vehicles’ location within a margin of 25 centimeters. Why this margin?

Jérôme Härri: It is the car’s average margin for drifting to the right or left without leaving its traffic lane. This distance is found both in the scientific literature and in requests from industrial partners seeking to develop intelligent transportation. You could say it’s the value at which driving autonomously becomes possible while ensuring the required safety for vehicles and individuals: greater precision is even better; with less precision, things get complicated.

 

Are we currently far from this spatial resolution? With what level of precision do the GPS units in most of our vehicles locate us on the road?

JH: A basic GPS can locate us with an accuracy of 2 to 10 meters, and the new Galileo system promises an accuracy of 4m. But this is only possible when there is sufficient access to satellites and in an open, or rural area. In the urban context, tall buildings make satellites less accessible and reaching a level of accuracy under 5 meters is rare. The margin of error is then reduced by projection, so that the user only rarely experiences such a large error in the positioning. But this does not work for an autonomous car. Improvements to GPS systems do exist, such as differential GPS, which can position us with an accuracy of one meter, or even less. Real time kinematic technology (RTK), used for cartography in the mountains, is even more efficient. Yet it is expensive, and also has its limits in the city. RTK technology is becoming increasingly popular for use in the dynamics of digital cities, but we have not yet reached that point.

 

And yet Google and Tesla are already building their autonomous or semi-autonomous cars. How are these cars being positioned?

JH: The current autonomous cars use a positioning system on maps that is very precise, down to the traffic lane, and that combines GPS and 4G. However, this system is slow. It is therefore used for navigation, so that the car knows what it must do to reach its destination, but not for detecting danger. For that, the cars use radar, lidars — in other words, lasers — or cameras. But this system has its limits: the sensors can only see around 50 meters away. However, on the highway, cars travel at 30 or even 40 meters per second. This gives the autonomous car roughly one second to stop, slow down or adapt in the event of a problem… which is not enough. And the system is not infallible. For example, the Tesla accident that occurred last May was caused by the camera that is supposed to detect danger confusing the light color of a truck with that of the sky.

 

What approaches are you taking in the HIGHTS project for improving the geolocation and reliability?

JH: We want to know within a 25-centimeter margin where a vehicle is located on the road, not just in relation to another car. To do this, we use cooperation between vehicles to triangulate and reduce the effect of a weak GPS signal. We consider that every vehicle nearby can be an anchor for the triangulation. For example, an autonomous car can have a weak GPS signal but three surrounding cars with a better signal. We can improve the car’s absolute positioning by triangulating its position in relation to three nearby vehicles. In order to do this, we need communication technologies for exchanging GPS positions — Bluetooth, ZigBee, Wi-Fi, etc. — and technologies such as cameras and radar to improve the positioning in relation to surrounding vehicles.
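As a rough illustration of this cooperative idea (our own sketch, not the HIGHTS implementation; all positions, ranges and noise levels are invented), a coarse GPS fix can be refined by least-squares trilateration against three neighboring vehicles whose positions and ranges are known:

```python
# Illustrative sketch of cooperative positioning (not the HIGHTS implementation):
# refine a coarse GPS fix by trilateration against three neighboring vehicles
# whose positions and measured ranges are assumed known. All numbers are made up.
import numpy as np
from scipy.optimize import least_squares

# Positions reported by three nearby vehicles (meters, local frame).
anchors = np.array([[0.0, 0.0], [30.0, 5.0], [12.0, 40.0]])

true_pos = np.array([15.0, 18.0])            # unknown ground truth (for the demo)
ranges = np.linalg.norm(anchors - true_pos, axis=1)
ranges += np.random.default_rng(1).normal(0, 0.2, size=3)   # ranging noise (~20 cm)

coarse_gps = np.array([18.0, 14.0])          # own GPS fix, several meters off

def residuals(p):
    # Difference between measured ranges and distances implied by candidate position p.
    return np.linalg.norm(anchors - p, axis=1) - ranges

estimate = least_squares(residuals, x0=coarse_gps).x
print("coarse GPS error:", np.linalg.norm(coarse_gps - true_pos), "m")
print("cooperative fix error:", np.linalg.norm(estimate - true_pos), "m")
```

In this toy setup the refined fix ends up within a few tens of centimeters of the true position, while the raw GPS fix is several meters off.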

 

And what if the car is isolated, without any other cars nearby?

JH: In the event that there are not enough cars nearby, we also pursue an implicit approach. Using roadside sensors at strategic locations, it is possible to precisely locate the car on the map. For example, if I know the distance between my vehicle and a billboard or traffic light, and the angles between these locations and the road, I can combine this with the GPS position of the billboard and traffic light, which don’t move, making them very strong positioning anchors. We therefore combine the relative approach with the absolute position of the objects on the road. Yet this situation does not occur very frequently. In most cases, what enables us to improve the accuracy is the cooperation with other vehicles.

 

So, does the HIGHTS project emphasize the combination of different existing technologies rather than seeking to find new ones?

JH: Yes, with the aim of validating their effectiveness. Yet at the same time we are working on developing LTE telecommunication networks for the transmission of information from vehicle to vehicle — which we refer to as LTE-V2X. In so doing we are seeking to increase the reliability of the communications. Wi-Fi is not necessarily the most robust form of technology. On a computer, when the Wi-Fi isn’t working, we can still watch a movie. But for cars, the alternative V2X technology ensures the communications if the Wi-Fi connection fails, whether it is by accident or due to a cyber-attack. Furthermore, these networks provide the possibility of using pedestrians’ smartphones to help avoid collisions. With the LTE networks, HIGHTS is testing the reliability of the device-to-device LTE approach for inter-vehicle communication. Our work is situated upstream of the standardization work. The experience of this project enables us to work beyond the current standards and develop them along with organizations such as ETSI-3GPP, ETSI-ITS and IETF.

 

Does your cooperative approach stand a chance of succeeding against the individualistic approach used by Tesla and Google, who seek to remain sovereign regarding their vehicles and solutions? 

JH: The two approaches are not incompatible. It’s a cultural issue. Americans (Google, Tesla) think “autonomous car” in the strictest sense, without any outside help. Europeans, on the other hand, think “autonomous car” in the broader sense, without the driver’s assistance, and are therefore more likely to use a cooperative approach in order to reduce costs and improve the interoperability of future autonomous cars. We have been working on the collaborative aspect for several years now, which has included research on integrating cars into the internet of things, carried out with the CEA and BMW — both partners of the HIGHTS project. We therefore have some very practical and promising lines of research on our side. And the U.S. Department of Transportation has issued a directive requiring vehicles to have a cooperative unit beginning in 2019. Google and Tesla can continue to ignore this technology, but since these units will be present in vehicles and freely available to them, there’s a good chance they will use them.

 

[box type=”shadow” align=”” class=”” width=””]

HIGHTS: moving towards a demonstration platform

Launched in 2015, the 3-year HIGHTS project answers the call made by the H2020 research program on the theme of smart, green, and integrated transportation. It brings together 14 academic and industrial partners[1] from five different countries, and includes companies that work closely with major automakers like BMW. Its final objective is to establish a demonstration platform for vehicle positioning solutions, from the physical infrastructure to the software.

[1] Germany: Jacobs University Bremen, Deutsche Zentrum für Luft- und Raumfahrt (DLR), Robert Bosch, Zigpos, Objective Software, Ibeo Automotive Systems, Innotec21.
France: Eurecom, CEA, BeSpoon.
Sweden: Chalmers University of Technology.
Luxembourg: FBConsulting.
The Netherlands: PSConsultancy, TASS International Mobility Center.

[/box]


Ocean remote sensing: solving the puzzle of missing data

The satellite measurements taken every day depend heavily on atmospheric conditions, the main cause of missing data. In a scientific publication, Ronan Fablet, a researcher at Télécom Bretagne, proposes a new method for reconstructing sea surface temperature from incomplete observations. The reconstructed data provide fine-scale maps with a homogeneous level of detail, essential for understanding many physical and biological phenomena.

 

What do a fish’s migration through the ocean, a cyclone, and the Gulf Stream have in common? They can all be studied using satellite observations. It is a subject Ronan Fablet knows well. As a researcher at Télécom Bretagne, he is particularly interested in processing satellite data to characterize the dynamics of the ocean. This work involves several themes, including the reconstruction of incomplete observations. Missing data impair satellite observations and limit the representation of the ocean, its activity and its interactions — essential ingredients in various areas, from the study of marine biology to the ocean-atmosphere exchanges that directly influence the climate. In an article published in June 2016 in the IEEE J-STARS[1], Ronan Fablet proposed a new statistical interpolation approach to compensate for the lack of observations. Let’s take a closer look at the data assimilation challenges in oceanography.

 

Temperature, salinity…: the oceans’ critical parameters

In oceanography, geophysical fields refer to fundamental parameters such as sea surface temperature (SST), salinity (the quantity of salt dissolved in the water), water color, which provides information on primary production (chlorophyll concentrations), and altimetric mapping (ocean surface topography).

Ronan Fablet’s article focuses on the SST for several reasons. First of all, the SST is the parameter that is measured the most in oceanography. It benefits from high-resolution measurements: a relatively short distance of about one kilometer separates two observed points, unlike salinity measurements, which are much coarser (around 100 km between two measurement points). Surface temperature is also an input parameter that is often used to design numerical models for studying ocean-atmosphere interactions. Many heat transfers take place between the two. One obvious example is cyclones, which are fed by pumping heat from the oceans’ warmer regions. Furthermore, the temperature is also essential in determining the major ocean structures. It allows surface currents to be mapped at small scales.

But how can a satellite measure the sea surface temperature? Like any material, the ocean reacts differently depending on the wavelength. “To study the SST, we can, for example, use an infrared sensor that first measures the energy. A law can then be used to convert this into the temperature,” explains Ronan Fablet.

 

Overcoming the problem of missing data in remote sensing

Unlike geostationary satellites, which orbit at the same speed as the Earth’s rotation, non-geostationary satellites generally complete one orbit in a little over an hour and a half. This enables them to fly over several terrestrial points in one day. They therefore build images by accumulating data. Yet some points in the ocean cannot be seen. The main cause of missing data is satellite sensors’ sensitivity to atmospheric conditions. In the case of infrared measurements, clouds block the observations. “In a predefined area, it is sometimes necessary to accumulate two weeks’ worth of observations in order to have enough information to begin reconstructing the given field,” explains Ronan Fablet. In addition, the heterogeneous nature of the cloud cover must be taken into account. “The rate of missing data in certain areas can be as high as 90%,” he explains.

The lack of data is a true challenge. The modelers must find a compromise between the generic nature of the interpolation model and the complexity of its calculations. The problem is that the equations that characterize the movement of fluids, such as water, are not easy to process. This is why these models are often simplified.

 

A new interpolation approach

According to Ronan Fablet, the techniques currently in use do not take full advantage of the available information. The approach he proposes reaches beyond these limits: “we currently have access to 20 to 30 years of SST data. The idea is that among these samples we can find an implicit representation of the ocean variations that can guide an interpolation. Based on this knowledge, we should be able to reconstruct the incomplete observations that currently exist.”

The general idea of Ronan Fablet’s method is based on the principle of learning. If a situation observed today corresponds to a previous situation, it is then possible to use the past observations to reconstruct the current data. It is an approach based on analogy.
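A toy sketch of this analog idea might look as follows (our own illustration, not the published method, which relies on a more elaborate statistical model; all fields below are synthetic):

```python
# Toy sketch of analog-based gap filling (illustrative only; the published method
# is more elaborate). Past complete fields are searched for situations resembling
# today's partial observation, and their values are averaged where data are missing.
import numpy as np

rng = np.random.default_rng(0)

# Catalog of past, fully observed SST-like fields (flattened patches of 64 points).
catalog = np.array([np.sin(0.3 * np.arange(64) + phase) for phase in rng.uniform(0, 6.28, 200)])

# Today's field: same kind of signal, but about 60% of the points are hidden by "clouds".
truth = np.sin(0.3 * np.arange(64) + 1.0)
observed_mask = rng.random(64) > 0.6
partial = np.where(observed_mask, truth, np.nan)

# Find the k analogs closest to today's situation on the observed points only.
k = 5
dists = np.sqrt(np.nanmean((catalog[:, observed_mask] - partial[observed_mask]) ** 2, axis=1))
analogs = catalog[np.argsort(dists)[:k]]

# Fill the gaps with the mean of the analogs; keep the real observations where available.
reconstruction = np.where(observed_mask, partial, analogs.mean(axis=0))
print("RMSE on missing points:",
      np.sqrt(np.mean((reconstruction[~observed_mask] - truth[~observed_mask]) ** 2)))
```

Here the catalog plays the role of the 20 to 30 years of archived SST data, and the mask plays the role of the cloud cover.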

 

Implementing the model

In his article, Ronan Fablet therefore used an analogy-based model. He characterized the SST based on a law that provides the best representation of its spatial variations. The law that was chosen provides the closest reflection of reality.

In his study, Ronan Fablet used low-resolution SST observations (100 km between two observations). With low-resolution data, optimal interpolation is usually favored. The goal is to reduce reconstruction errors (differences between the simulated field and the observed field) at the expense of small-scale details. The image obtained through this process has a smooth appearance. However, the researcher chose to maintain a high level of detail in the interpolation. The only uncertainty that remains is where a given detail is located on the map. This is why he opted for a stochastic interpolation. This method can be used to simulate several realizations that place the detail in different locations. Ultimately, this approach enabled him to create SST fields with the same level of detail throughout, at the cost of a local reconstruction error that is no better than that of the optimal method.

“The proportion of ocean energy at scales below 100 km is very significant in the overall balance. At these scales, a lot of interaction takes place between physics and biology; for example, schools of fish and plankton structures form below the 100 km scale. Maintaining a small-scale level of detail also serves to measure the impact of physics on ecological processes,” explains Ronan Fablet.

 

The blue circle indicates the area of missing data. The maps show the variations in SST at low resolution based on a model (left), at high resolution based on observations (center), and at high resolution based on the model presented in the article (right).

 

New methods ahead using deep learning

Another modeling approach has recently begun to emerge, based on deep learning techniques. A model designed this way learns from images of the ocean. According to Ronan Fablet, this method is significant: “it incorporates the idea of analogy, in other words, it uses past data to find situations that are similar to the current context. The advantage lies in the ability to create a model with many parameters that are calibrated on large learning data sets. It would be particularly helpful in reconstructing the missing high-resolution data from geophysical fields observed through remote sensing.”
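As an illustration of this learning-from-data idea, here is a minimal PyTorch sketch of a small convolutional encoder-decoder trained to fill gaps in masked fields. The architecture, the input layout and the masked loss are illustrative assumptions, not the models discussed by Ronan Fablet.

```python
import torch
import torch.nn as nn

class GapFiller(nn.Module):
    """Tiny convolutional encoder-decoder: input = [masked field, mask],
    output = a complete field. Purely illustrative."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, masked_field, mask):
        return self.net(torch.cat([masked_field, mask], dim=1))

def training_step(model, optimizer, full_field, mask):
    """One gradient step; the loss is computed on observed pixels only."""
    masked = full_field * mask
    pred = model(masked, mask)
    loss = ((pred - full_field) * mask).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```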

 



Télécom ParisTech, Michèle Wigger, Starting Grant, ERC, 5G, communicating objects

Michèle Wigger: improving communications through coordination

Last September, Michèle Wigger was awarded a Starting Grant from the European Research Council (ERC). Each year, this distinction supports projects led by the best young researchers in Europe. It will enable Michèle Wigger to further develop the work she is conducting at Télécom ParisTech on information and communications theory. She is particularly interested in optimizing information exchanges through cooperation between communicating objects.

 

The European Commission’s objective regarding 5G is clear: the next generation mobile network must be available in at least one major city in each Member state by 2020. However, the rapid expansion of 5G raises questions on network capacity levels. With this fifth-generation system, it is just a matter of time before our smartphones can handle virtual and augmented reality, videos in 4K quality and high definition video games. It is therefore already necessary to start thinking about the quality of service, particularly during peaks in data traffic, which should not hinder loading times for users.

Optimizing the communication of a variety of information is a crucial matter, especially for researchers, who are on the front lines of this challenge. At Télécom ParisTech, Michèle Wigger explores the theoretical aspects of information transmission. One of her research topics focuses on using storage space distributed throughout a network, for example, in base stations or in the home gateways (“boxes”) supplied by internet access providers. “The idea is to place the data in these locations when traffic is low, during the night for example, so that it is more readily available to users the next evening during peaks in network use,” summarizes Michèle Wigger.
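A toy Python sketch of this off-peak prefetching idea: popular items are pushed into a local cache during a quiet window so that evening requests can be served locally. The hours, capacity and function names are hypothetical and chosen only to illustrate the principle.

```python
from datetime import datetime

OFF_PEAK_HOURS = range(2, 6)   # assumed low-traffic window (2am-6am)
CACHE_CAPACITY = 8             # number of items the local cache can hold

def refresh_cache(cache: list, predicted_popular: list, now: datetime) -> list:
    """Fill the cache with tomorrow's predicted-popular items, off-peak only."""
    if now.hour not in OFF_PEAK_HOURS:
        return cache           # do nothing while the network is busy
    return predicted_popular[:CACHE_CAPACITY]

def serve(item: str, cache: list) -> str:
    """During the evening peak, a cache hit costs no backhaul traffic."""
    return "local cache hit" if item in cache else "fetched over the network"
```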

Statistical models have shown that it is possible to follow how a video spreads geographically, and therefore to anticipate, a few hours in advance, where it will be viewed. Michèle Wigger’s work would therefore enable smoother use of networks to prevent saturation. Yet she is not only focused on the theoretical aspects of this flow-management method: her research also addresses the physical layer of networks, in other words, the construction of the modulated signals transmitted by antennas so as to reduce bandwidth usage.

She adds that these cache-assisted communications can go a step further, by using data stored not on our own boxes but on our neighbors’ boxes. “If I want to send a message to two people who are next to each other, it’s much more practical to distribute the information between them both, rather than repeat the same thing to each person,” she explains. To develop this aspect further, Michèle Wigger is exploring power modulations that allow different data to be sent, in a single signal, to two recipients (neighbors, for example) who can then cooperate to exchange the data. “Less bandwidth is therefore required to send the required data to both recipients,” she explains.
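The physical-layer schemes she studies rely on power modulation and are more subtle than this, but a classic textbook analogy of "one transmission serving two cooperating recipients" is the XOR trick from index coding: if each neighbor already holds (for instance via a local exchange) the piece intended for the other, a single coded packet is enough for both. The sketch below is only that analogy, in Python.

```python
def broadcast_xor(msg_a: bytes, msg_b: bytes) -> bytes:
    """One broadcast packet serving two neighbours (index-coding illustration).

    If neighbour A already holds msg_b and neighbour B holds msg_a, a single
    XOR packet lets each of them recover the message intended for them.
    """
    assert len(msg_a) == len(msg_b)
    return bytes(x ^ y for x, y in zip(msg_a, msg_b))

coded = broadcast_xor(b"hello", b"world")
# Each neighbour decodes by XOR-ing the broadcast with what it already has:
assert bytes(c ^ k for c, k in zip(coded, b"world")) == b"hello"   # A gets msg_a
assert bytes(c ^ k for c, k in zip(coded, b"hello")) == b"world"   # B gets msg_b
```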

 

Improving coordination between connected objects for smart cities

Beyond optimizing communications using cache memories, Michèle Wigger’s research is more generally related to exchanging information between communicating agents. One of the other projects she is developing involves coordination between connected objects. Still focusing on the theoretical aspect, she uses the example of intelligent transportation to illustrate the work she is currently carrying out on the maximum level of coordination that can be established between two communicating entities. “Connected cars want to avoid accidents. In order to accomplish this, what they really want to do is to work together,” she explains.

Read on our blog The autonomous car: safety hinging on a 25cm margin

However, in order to work together, these cars must exchange information using the available networks, which may depend on the technology used by manufacturers or on the environment where they are located. In short, the coordination to be established will not always be implemented in the same manner, since the available network will not always be of the same quality. “I am therefore trying to find the limits of the coordination that is possible based on whether I am working with a weak or even non-existent network, or with a very powerful network,” explains Michèle Wigger.

A somewhat similar issue arises with sensors connected to the Internet of Things that are intended to assist in decision-making. A typical example is buildings exposed to risks such as avalanches, earthquakes or tsunamis. Instruments measuring temperature, vibrations, noise and a variety of other parameters collect data that is sent to decision-making centers, which decide whether to issue a warning. The information transmitted is often redundant, since the sensors are close together and their measurements are correlated.

In this case, it is important to differentiate the useful information from the repeated information, which does not add much value but still requires resources to be processed. “The goal is to coordinate the sensors so that they transmit the minimum amount of information with the smallest possible probability of error,” explains Michèle Wigger. The end goal is to facilitate the decision-making process.
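A toy Python illustration of removing this redundancy: one sensor sends its full reading and the others send only a coarsely quantized difference from it, which needs far fewer bits when the readings are strongly correlated. This is only a caricature of the idea, not an actual distributed source coding scheme of the kind studied in information theory.

```python
import numpy as np

def encode_correlated(readings, reference_index=0, step=0.1):
    """One full reading plus small integer deltas for the other sensors."""
    readings = np.asarray(readings, dtype=float)
    ref = readings[reference_index]
    deltas = np.round((readings - ref) / step).astype(int)   # few bits each
    return ref, deltas

def decode_correlated(ref, deltas, step=0.1):
    """Reconstruct all readings to within the quantization step."""
    return ref + deltas * step

ref, deltas = encode_correlated([21.30, 21.42, 21.28, 21.35])
print(decode_correlated(ref, deltas))   # readings recovered to ~0.1 precision
```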

 

Four focus areas, four PhD students

Her research was awarded a Starting Grant from the European Research Council (ERC) in September both for its promising nature and for its level of quality. A grant of €1.5 million over a five-year period will enable Michèle Wigger to continue to develop a total of four areas of research, all related in one way or another to improving how information is shared with the aim of optimizing communications.

Through the funding from the ERC, she plans to double the size of her team at the information processing and communication laboratory (UMR CNRS and Télécom ParisTech), which will expand to include four new PhD students and two post-doctoral students. She will therefore be able to assign each of these research areas to a PhD student. In addition to expanding her team, Michèle Wigger is planning to develop partnerships. For the first subject addressed here — that of communications assisted by cache memory — she plans to work with INSA Lyon’s Cortexlab platform. This would enable her to test the codes she has created. Testing her theory through experimental results will enable her to further develop her work.

Godefroy Beauvallet, Innovation, Economics

Research and economic impacts: “intelligent together”

What connections currently exist between the world of academic research and the economic sphere? Does the boundary between applied research and fundamental research still have any meaning at a time when the very concept of collaboration is being reinterpreted? Godefroy Beauvallet, Director of Innovation at IMT and Vice Chairman of the National Digital Technology Council provides some possible answers to these questions. During the Digital Technology Meetings of the French National Research Agency (ANR) on November 17, he awarded the Economic Impact Prize to the Trimaran project, which unites Orange, Institut Paul-Langevin, Atos, as well as Télécom Bretagne in a consortium that has succeeded in creating a connection between two worlds that are often wrongly perceived as opposites.

 

 

When we talk about the economic impact of research, what exactly does this mean?

Godefroy Beauvallet: When we talk about economic impact, we’re referring to research that causes a “disruption,” work that transforms a sector by drastically improving a service or product, or the productivity of their development. This type of research affects markets that potentially impact not just a handful, but millions of users, and therefore also directly impact our daily lives.

 

Has it now become necessary to incorporate this idea of economic impact into research?

GB: The role of research institutions is to explore realities and describe them. The economic impacts of their work can be an effective way of demonstrating they have correctly understood these realities. The impacts do not represent the compass, but rather a yardstick—one among others—for measuring whether our contribution to the understanding of the world has changed it or not. At IMT, this is one of our essential missions, since we are under the supervision of the Ministry of the Economy. Yet it does not replace fundamental research, because it is through a fundamental understanding of a field that we can succeed in impacting it economically. The Trimaran project, which was recognized alongside another project during the ANR Digital Technology Meetings, is a good example of this, as it brought together fundamental research on time reversal and issues of energy efficiency in telecommunication networks through the design of very sophisticated antennas.

 

So, for you, applied research and fundamental research do not represent two different worlds?

GB: If we only want a little economic impact, we will be drawn away from fundamental research, but obtaining major economic impacts requires a return to fundamental research, since high technological content involves a profound understanding of the phenomena that are at work. If the objective is to cause a “disruption”, then researchers must fully master the fundamentals, and even discover new ones. It is therefore necessary to pursue the dialectic in an environment where a constant tension exists between exploiting research to reap its medium-term benefits, and truly engaging in fundamental research.

“If the objective is to cause a disruption, then researchers must fully master the fundamentals”

And yet, when it comes to making connections with the economic sphere, some suspicion remains at times among the academic world.

GB: Everyone is talking about innovation these days. Which is wonderful; it shows that the world is now convinced that research is useful! We need to welcome this desire for interaction with a positive outlook, even when it causes disturbances, and without compromising the identity of researchers, who must not be expected to turn into engineers. This requires new forms of collaboration to be created that are suitable for both spheres. But failure to participate in this process would mean researchers having to accept an outside model being imposed on them. Yet researchers are in the best position to know how things should be done, which is precisely why they must become actively involved in these collaborations. So, yes, hesitations still exist. But only in areas where we have not succeeded in being sufficiently intelligent together.

 

Does succeeding in making a major economic impact, finding the disruption, necessarily involve a dialogue between the world of research and the corporate world?

GB: Yes, but what we refer to as “collaboration” or “dialogue” can take different forms. Like the crowdsourcing of innovation, it can provide multiple perspectives and more openness in facing the problems at hand. It also reflects the start-up revolution the world has been experiencing, in which companies are created specifically to explore technology-market pairs. Large companies are also rethinking their leadership role by sustaining an ecosystem that redefines the boundary between what is inside and outside the company. Both spheres are seeking new ways of doing things that rely not on becoming more alike, but on embracing their differences. They have access to tools that allow faster integration, with the idea that there are shortcuts for working together more efficiently. In our field this translates into an overall transformation of the concept of collaboration, one that characterizes this day and age, particularly with the rise of digital technology.

 

From a practical perspective, these new ways of cooperating result in the creation of new working spaces, such as industrial research chairs, joint laboratories, or simply through projects carried out in partnership with companies. What do these working spaces contribute?

GB: Often, they provide the multi-company context. This is an essential element, since the technology that results from this collaboration is only effective, and only has an economic impact, if it is used by several companies and permeates an entire market. Companies, for their part, operate under short-term constraints, with annual or even quarterly targets. From this point of view, it is important for them to work with actors who move at a slower, longer-term tempo, to ensure a resilient long-term strategy. These spaces also help build trust among the participants: practices and interactions are tightly framed, legally and culturally, which protects the researchers’ independence. This is the contribution of academic institutions, like Institut Mines-Télécom, and public research funding agencies, like the ANR, which provide the spaces and means for inventing collaborations that are fruitful and respectful of each party’s identity.