
The sharing economy: simplifying exchanges? Not necessarily…

The groundbreaking nature of the sharing economy raises hope as well as concern among established economic players. It is intriguing and exciting. Yet what lies behind this often poorly defined concept? Godefroy Dang Nguyen, a researcher in economics at Télécom Bretagne, gives us an insight into this phenomenon. He co-authored a MOOC on the subject that will be available online next September.

 

To a certain extent, the sharing, or collaborative, economy is a little like quantum physics: everyone has heard of it, but few really know what it is. First and foremost, it has a dual nature, promoting sharing and solidarity, on the one hand, and facilitating profit-making and financial investment, on the other. This ambiguity is not surprising, since the concept behind the sharing economy is far from straightforward. When we asked Godefroy Dang Nguyen, an economist at Télécom Bretagne, to define it, his reaction said it all: a long pause and an amused sigh, followed by… “Not an easy question.” What makes this concept of the collaborative economy so complex is that it takes on many different forms, and cannot be presented as a uniform set of practices.

 

Wikipedia and open innovation: two methods of collaborative production

First of all, let’s look at collaborative production. “In this case, the big question is ‘who does it benefit’?” says Godefroy Dang Nguyen. This question reveals two different situations. The first concerns production carried out by many operators on behalf of one stakeholder, generally private. “Each individual contributes, at their own level, to building something for a company, for example. This is what happens in what we commonly refer to as open innovation,” explains the researcher. The second situation relates to collaborative production that benefits the community: individuals create for themselves, first and foremost. The classic example is Wikipedia.

Although the second production method seems to be more compatible with the sharing concept, it does have some disadvantages, such as the “free rider” phenomenon. “This describes individuals who use the goods produced by the community, but do not personally participate in the production,” the economist explains. To take the Wikipedia example, most users are free riders — readers, but not writers. While this phenomenon has only a small impact on the online encyclopedia’s sustainability, the same is not true for most other community services, whose production depends on maintaining a balance with consumption.
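The free-rider dynamic can be illustrated with a minimal public-goods sketch (a hypothetical model, not taken from the article): a shared resource survives only as long as contributions keep pace with consumption.

```python
# Minimal public-goods sketch of the free-rider problem (illustrative only).
# Every member consumes one unit of the shared good per round, but only the
# contributing fraction replenishes it.

def rounds_until_depleted(members, contributor_share, stock,
                          produced_per_contributor=3.0, max_rounds=10_000):
    """Return the round in which the shared stock runs out,
    or None if it survives the whole simulation horizon."""
    contributors = members * contributor_share
    for rounds in range(1, max_rounds + 1):
        stock += contributors * produced_per_contributor  # production
        stock -= members                                  # consumption by all
        if stock <= 0:
            return rounds
    return None

# 20% contributors: a net loss of 40 units per round, so the commons collapses.
print(rounds_until_depleted(members=100, contributor_share=0.2, stock=500))  # → 13
# 40% contributors: production exceeds consumption, so the commons survives.
print(rounds_until_depleted(members=100, contributor_share=0.4, stock=500))  # → None
```

Under these assumed numbers the break-even contributor share is one third; Wikipedia is the unusual case where even a tiny contributor share is sustainable, because reading does not deplete the resource.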

 

Collaborative consumption: with or without an intermediary?

The free rider can indeed jeopardize a self-organized structure without an intermediary. In this peer-to-peer model, the participants do not set any profit targets. Therefore, the consumption of goods is not sustainable unless everyone agrees to step into the producer’s shoes from time to time, and contribute to the community, thus ensuring its survival. A rigorous set of shared organizational values and principles must therefore be implemented to enable the project to last. Technology could also help to reinforce sharing communities, with the use of blockchains, for example.

Yet these consumption methods are still not as well known as the systems requiring an intermediary, such as Uber, Airbnb and Blablacar. These intermediaries organize the exchanges, and in this model, the collaborative peer-to-peer situation seen in the first example now becomes commercial. “When we observe what’s happening on the ground, we see that what is being developed is primarily a commercial peer-to-peer situation,” explains Godefroy Dang Nguyen. Does this mean that the collaborative peer-to-peer model cannot be organized? “No,” the economist replies, “but it is very complicated to organize exchanges around any model other than the market system. In general, this ends up leading to the re-creation of an economic system. Some people really believe in this, like Michel Bauwens, who champions this alternative organization of production and trade through the collaborative method.”

 

La Ruche qui dit Oui!

La Ruche qui dit Oui! is an intermediary that offers farmers and producers a digital platform for local distribution networks. Credits: La Ruche qui dit Oui!

 

A new role: the active consumer

What makes the organizational challenge even greater is that the sharing economy is based on a variable that is very hard to understand: the human being. The individual, referred to in this context as the active consumer, plays a dual role. Blablacar is a very good example of this. The service users become both participants, by offering the use of their cars to other individuals, and consumers, who can also benefit from offers proposed by other individuals — if their car breaks down, for example, or if they don’t want to use it.

Yet it is hard to understand the active consumer’s practices. “The big question is, what is motivating the consumer?” asks Godefroy Dang Nguyen. “There is an aspect involving personal quests for savings or for profits to be made, as well as an altruistic aspect, and sometimes a desire for recognition from peers.” And all of these aspects depend on the personality of each individual, as each person takes ownership of the services in different ways.

There’s no magic formula… But some contexts are more favorable than others.

 

Among the masses… The ideal model?

Considering all the differentiating factors in the practices of the sharing economy, is there one model that is more viable than another? Not really, according to Godefroy Dang Nguyen. The researcher believes “there’s no magic formula: there are always risk factors, luck and talent. But some contexts are more favorable than others.”

The success experienced by Uber, Airbnb and Blablacar is not by chance alone. “These stakeholders also have real technical expertise, particularly in the networking algorithms,” the economist adds. Despite the community aspect, these companies are operating in a very hostile environment. Not only is there tough competition in a given sector, with the need to stand out, but they must also make their mark in an environment where new mobile applications and platforms could potentially be launched for any activity (boat exchanges, group pet-walking, etc.). To succeed, the service must meet a real need, and find a target audience ready to commit to it.

The sharing economy? Nothing new!

Despite these success factors, more unusual methods also exist, with just as much success — proving there is no ideal model. The leboncoin.fr platform is an excellent example. “It’s a fairly unusual exception: the site does not offer any guarantees, nor is it particularly user-friendly, and yet it is widely used,” observes Godefroy Dang Nguyen. The researcher attributes this to the fact that “leboncoin.fr is more a classified ad site than a true service platform,” which reminds us that digital practices are generally an extension of practices that existed before the Internet.

After all, the sharing economy is a fairly old practice with the idea of “either mutually exchanging services, or mutually sharing tools,” he summarizes. In short, a sharing solution is at the heart of the social life of a local community. “The reason we hear about it a lot today, is that the Internet has multiplied the opportunities offered to individuals,” he adds. This change in scale has led to new intermediaries, who are in turn much bigger. And behind them, a multitude of players are lining up to compete with them.

Read the blog post Digital commons: individual interests to serve a community

 

[box type=”shadow” align=”” class=”” width=””]

Discover the MOOC on “Understanding the sharing economy”

The “Understanding the sharing economy” MOOC was developed by Télécom Bretagne, Télécom École de management and Télécom Saint-Étienne, with La MAIF. It addresses the topics of active consumers, platforms, social changes, and the risks of the collaborative economy.

In addition to the teaching staff, consisting of Godefroy Dang Nguyen, Christine Balagué and Jean Pouly, several experts participated in this MOOC: Anne-Sophie Novel, Philippe Lemoine, Valérie Peugeot, Antonin Léonard and Frédéric Mazzella.

 

[/box]


PREVER: residue turned into energy

Mines Nantes is home to PREVER – France’s only technological platform that offers a large range of competencies and tools for converting industrial and household waste into energy. It provides customized support for companies wanting to produce combustibles or fuel from miscellaneous residues.

 

As a species, Homo sapiens have a very strange way of disposing of their waste. They dig an enormous hole, put their waste and debris in it, cover it up, then start the process all over again in a nearby location. And there you have it! “Out of sight and out of mind”. Over the past few decades, some individuals from this species have been warning their fellow creatures about the dangers of continuing this practice. They describe its consequences for the sustainability of their habitat and environment, and therefore their own survival. These consequences could almost make them deny their very identity as humans. Fortunately, solutions are beginning to emerge!

At Mines Nantes, an area of almost 520 m² is dedicated to a research and analysis platform for energy recovery from residues (PREVER). What is the goal of the scientists working there? To work together with companies to explore new solutions for recycling household and industrial waste and transforming it into energy sources. “This includes any companies that want to use their residues to produce their own energy, and SMEs and start-ups that want to develop a recycling program,” explains Mohand Tazerout, who is in charge of PREVER.

The platform team meets with companies and identifies their needs, whether this involves solving specific problems for very small businesses, or developing large-scale programs for major companies. The researchers then prototype a solution, taking the entire recovery chain into account: from the raw residue to the final product. This residue may be in the form of a synthesis gas, a solid fuel or another fuel. This platform also offers the benefit of working alongside companies from the prototyping to full-scale production stages.

 

A technological solution that’s unique in France

Another advantage is that PREVER has all the machinery required to study the performance of recovery processes. “When we’re working with a liquid fuel, for example, we have engine test units that enable us to study its energy characteristics or its impacts on the environment,” describes Mohand Tazerout. The platform is very well equipped with cutting-edge equipment, including electricity-generating boilers, gas microturbines, hydrothermal liquefaction machinery, etc. “I believe we are the only platform in France to feature all the material for the recovery chain in the same place,” the researcher continues.

In addition to the technical aspect, PREVER also boasts a physico-chemical analysis laboratory, which extends its range of services: atomic spectroscopy, thermogravimetric analysis, octane number measurement, etc. It can also rely on staff who are highly qualified for their tasks. As Mohand Tazerout mentions, “The department associated with PREVER includes around ten researchers who work with the companies on the programs, plus technicians and PhD students who are specially recruited for specific projects.”

The reason the platform offers this wide range of energy recovery methods is because it can work with all types of residue. The researcher explains, “We work with all types of waste, from the food industry to household waste, organic, inorganic, liquid and solid waste…” This also includes unusual and atypical waste, such as maritime wood contaminated by oil spills. PREVER seeks to demonstrate that all residues can be converted into energy, and that hiding them in a hole is no longer a solution in our modern day and age.

Read the blog post Vabhyogaz uses our waste to produce hydrogen

 


Agro-based composite materials – a Mines area of expertise

The materials of the future need to be more than just high-performance. They must also be environmentally friendly: more recyclable and based on renewable resources. This is why scientists and industrialists are taking a closer look at composite materials developed from plant fibers. They represent a major research focus for Institut Mines-Télécom (IMT) schools — especially at Mines Alès and Mines Douai — which participated in the symposium on March 30th on “Materials: realities and new frontiers”.

 

 

Patricia Krawczak is convinced: for certain structures, composite materials developed from plant fibers can now compete with those developed with fiberglass. This researcher from Mines Douai, who led the session on “The opportunities and limits of eco-materials” at the IMT Materials symposium at the end of March, has been working on this topic for around ten years. She has witnessed the emergence and development of these agro-based composites: “In the beginning, it was just plastics reinforced with short lengths of natural fibers (flax, hemp, or cellulose and wood). This produced materials with poor mechanical performance,” she explains. It is only over the past few years that the industrial field has been restructured to develop solutions capable of manufacturing high-performance parts with a rigidity, strength and durability comparable to fiberglass, which currently predominates. “Agro-based composites are no longer used only as cladding components; they can also be used to develop semi-structural parts,” explains Patricia Krawczak. This is the case in transport sectors (automobile, aeronautics, rail, ship building, etc.) and in the building sector.

Among the plants tested for these new uses, flax and hemp stand out from the others, especially in France. They have the advantage of being abundant resources in our country. As Patricia Krawczak reminds us, “the emergence of agro-based materials is rooted in a sustainable development approach.” The use of plant fibers can potentially result in composites that are more easily recycled. The use of local crops, meanwhile, enables supply and transport with a low carbon footprint, consistent with this principle of limiting the impact on the environment. In addition to being abundant, these plants also offer benefits of a scientific and technical nature. A great deal of work is being carried out to improve the understanding and mastery of the resulting composite materials.

 


France is the world’s leading producer of textile flax. Credits: BERTFR.

 

Mastering the performance of agro-based composites

At Mines Alès, Anne Bergeret’s team is seeking to identify the parameters that influence the in-service properties of plant fiber-based composites, such as mechanical performance, thermal stability, and durability. The manner in which the plant fibers are dispersed within the polymer matrix has proven to be a key parameter. This depends on the fiber’s intrinsic characteristics, such as its chemical composition, surface chemistry, structure, and the application conditions. Another decisive parameter is the quality of the interface between the plant fiber and the polymer matrix. Research carried out at Mines Alès has shown that certain commonly used plant fiber treatments, such as caustic soda processing, have an impact on the fiber’s structure, and therefore on its properties and state of dispersion within the matrix. In light of this paradox, Anne Bergeret reminds us of “the need for a full understanding of the fiber’s processing conditions and the application of agro-based composites”.

“If a part needs to conform to a particular functional specification or a specific request, lightening the material at the expense of its strength and rigidity is not a solution”

This is a view shared by Patricia Krawczak, whose research work at Mines Douai has also grappled with the trade-off between weight reduction and performance. “If a part needs to conform to a particular functional specification or a specific request, lightening the material at the expense of its strength and rigidity is not a solution. Therefore, we always think in terms of specific properties, in other words, performance/density ratios,” she explains. This partially explains why plant fibers can compete with fiberglass for certain uses, but will probably not rival carbon fiber, which has an indisputably superior mechanical property/density ratio.
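The specific-properties reasoning can be made concrete with a quick comparison. The stiffness and density figures below are rough, widely cited textbook orders of magnitude chosen for illustration; they are not values given by the researchers:

```python
# Specific stiffness = Young's modulus / density.
# Figures are approximate textbook values (illustrative only, not from the article).
fibers = {
    #              E (GPa), density (g/cm^3)
    "flax":         (55.0, 1.50),
    "E-glass":      (72.0, 2.55),
    "carbon (HS)": (230.0, 1.80),
}

for name, (modulus, density) in fibers.items():
    print(f"{name:12s} E/rho = {modulus / density:6.1f} GPa*cm3/g")
```

On this ratio, flax (about 37) edges out E-glass (about 28) despite its much lower absolute modulus, while carbon (about 128) stays far ahead, which matches the researchers' conclusion.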

 

Integrating the specific characteristics of natural fibers into digital simulation tools

Despite their performance characteristics, plant fibers such as flax are in no position to replace more conventional fibers in all product specifications. To think otherwise would be to ignore the intrinsic characteristics of these fibers. Plants, by their nature, have variable characteristics, due to their growing conditions (seasonality, climate, place of production, etc.). They are also porous, which makes their fibers sensitive to humidity. They also have a low level of tolerance for high temperatures; it would therefore be difficult to associate them with polymer matrices whose forming processes require heating to very high temperatures. But agro-based composites are already highly suitable for use in more favorable conditions, in which their sensitivity to humidity does not present a problem, or by associating them with polymer matrices that can be transformed at lower temperatures. Furthermore, research is underway to remedy these problematic aspects. The goal is to make natural fiber reinforcements water-repellent, or to adapt plastic manufacturing processes to allow for the production of composite parts under gentler conditions.

“We will not be able to compensate for the porous nature of the fibers, and it will be difficult to overcome their variable characteristics due to their natural origin,” Patricia Krawczak points out. “However, we can make sure these specific features are taken into account in the models and simulations, and in this way ensure that designers, processors and users of industrial parts fully understand the behavior of agro-based composites.” The development of the virtual engineering chain for these materials is booming. The same is true for the market growth of these agro-based composites. “Industrialists from different fields of application are open to using them, but will only do so if the simulation tools they use in their design offices are compatible,” explains the researcher. Her team is working on these aspects as well.

 

Turning the weaknesses of the plant fibers into strengths

While scientists at Mines Douai are focusing on modeling in order to better understand the limits of plant fibers, Anne Bergeret and her team at Mines Alès are seeking to turn these limitations into advantages. Plant fibers have low thermal stability (decomposition begins at around 250°C), making them relatively flammable. However, they have the particular feature of forming a stable carbonaceous residue after decomposition. The combustion of flax-based composites therefore produces this residue on the surface, which has the property of protecting the underlying material and slowing down its deterioration. Anne Bergeret’s team has therefore sought to promote this formation of charred residue by grafting phosphorus-based flame retardants onto the surface of the fibers. The results have shown an improvement in the reaction to fire, and should therefore improve the resilience of the composite, whose structure will be less severely damaged.

Whether the aim is to understand the limitations of agro-based composites or to take advantage of them, either way, researchers are dedicated to using biomass and turning it into new useful materials for the industrial sectors seeking innovations — in aeronautics, as well as in electronics and medical equipment. Around this research, entire sectors are being reorganized through projects like Fiabilin (PIA-PSPC) and Sinfoni (PIA-PSPC), in which Mines Douai has participated, and Enafilia (ADEME) and Hermes (H2020), with the involvement of Mines Alès, bringing flax and hemp growers together with multidisciplinary scientists, plastics manufacturers, and end users. The emergence of these materials is not merely a hope, it is a necessity because, as Anne Bergeret puts it: “In all our studies, we see the need for an integrated, interdisciplinary approach, combining the skills of materials specialists like us with those of farmers, biologists and chemists, for a comprehensive understanding and the optimal use of biomass.”

Read more on the blog What is a composite material?

Find out more about the research in agro-based composites at Mines Douai
Find out more about the use of biomass for materials

[box type=”shadow” align=”” class=”” width=””]

Reminder: What is a composite material?

Just like wattle and daub, made of straw and mud, a composite material is created from a fiber reinforcement and a binder called the matrix. Glass and carbon fibers are currently the most commonly used fibers, but they are facing competition from plant fibers, which facilitate recycling and decrease the material’s environmental impact.

The composite’s matrix is generally a thermoplastic or thermosetting polymer. When it is thermoplastic, the polymer is heated to be liquefied and combined with fibers, then cooled to form the final solid material. A thermosetting polymer is originally liquid and easily incorporates the fibrous reinforcements, but it must then be heated in order to harden and form the final composite material. Polymer matrices are also called organic matrices, but they are not the only matrices that exist. There are also ceramic and metal matrices.

Through the careful combination of fibers and the matrix, composite materials offer unusual mechanical properties, which few other materials can offer. The specifications, which dictate the final target properties based on the future use of the part, are what generally define the choice of fibers (type, content, orientation, cut/continuous/woven/braided, etc.) and of the matrix.[/box]


LTE4PMR: Securing 4G Radio Telecommunications

In January 2016, Airbus Defence and Space announced the launch of LTE4PMR, a demonstration project for secure mobile radio telecommunications based on 4G networks. The project is aimed at providing law enforcement officials and emergency services with communication technology adapted to their needs. On April 19, 2016, the French government awarded a €23 million grant, as part of its Investments for the Future program, to this initiative and to another one dedicated to the same purpose. Alongside the Airbus Group subsidiary, two industrial partners — Nokia and Sequans — and two Institut Mines-Télécom (IMT) schools — Télécom SudParis and Télécom ParisTech — will participate in this project.

 

 

Although law enforcement officials already benefit from designated secure frequencies for communication, these frequencies currently depend on Tetrapol technology. Yet there is a problem: this technology has a limited bandwidth. Security and rescue services would like to benefit from high-speed radio telecommunication services. Adding video communications between the teams intervening on-site and the command center would allow the services to provide better guidance, improving the coordination between units. In the same way, the transmission of data from thermographic or infrared cameras would provide agents and coordinators with vision tools in “blind” conditions.

Airbus Defence and Space is partnering with Nokia, Sequans and two Institut Mines-Télécom (IMT) schools – Télécom SudParis and Télécom ParisTech – to upgrade the communication technology of French law enforcement and national security services. This project is part of a wider public and private sector investment effort that amounts to €55 million, €23 million of which is financed under the “Strategic R&D Projects for Competitiveness” component of the Investments for the Future program, operated by Bpifrance. All the partners allied with Airbus Defence and Space will focus their work on LTE networks, which already underpin the commercial 4G coverage across the country. The goal will be to offer a range of private mobile radio-telecommunication (PMR) services by overcoming existing scientific limitations. The two IMT schools will take on this challenge in the project, entitled LTE4PMR.

 

Replacing current technologies and ensuring network resilience

The goal of Télécom SudParis’ scientific team, led by Badii Jouaber, is to provide technological upgrades that can be applied to services such as group communications and push-to-talk communications on LTE networks. “These services will eventually replace older technologies such as Tetrapol or Tetra,” explains the coordinator. The researchers at the school in Evry are facing various obstacles, including the immediate and dynamic nature of push-to-talk communications, which contrasts with classic 4G communications, where each user must dial a number and a connection is established after a certain delay. In addition, the Télécom SudParis team will need to “find a way to manage users in network cells or different radio conditions and optimize the management of radio resources,” explains Badii Jouaber.

As for the Télécom ParisTech team, led by Philippe Martins with assistance from Philippe Godlewski and Anaïs Vergne, it will focus its efforts on two different objectives. The first involves defining new techniques for broadcasting video information. Researchers will propose new methods for transmitting information that are stronger and more reliable. “We will also define algorithms that will simultaneously take into account the coverage, quality of the video and the management of group calls,” explains Philippe Martins. The second objective will be to deal with the resilience challenges facing infrastructures in extreme situations in which network coverage is damaged or becomes inoperable. The scientists will therefore need to determine the best methods for restoring network coverage. To accomplish this goal, they will base their work on algorithmic tools that make it possible to alternate between the structures of ad hoc emergency services and public infrastructures.

 

The quick restoration of network coverage during natural disasters (like the flood pictured above) is essential in allowing emergency teams to carry out their operations. That is one of the missions of the LTE4PMR project.


 

Experimentation and standardization

In addition to finalizing the LTE4PMR demonstrator, Télécom ParisTech and Télécom SudParis will concurrently develop their own platforms for testing solutions, which will in turn be transferred to the project’s industrial partners. This will also provide an opportunity, on a smaller scale, to test types of technology that are more advanced than those being implemented on the large demonstrator. The two schools will face this challenge with a time limit of only 27 months — the duration of the project — to test and transfer their solutions.

Standardization will also be a key component of this project. It will be coordinated with the work conducted by the 3GPP standardization consortium for 4G telecommunication networks. The current changes to the 4G standard and the future 5G standard, which is being developed for commercial use, will not conflict with the achievements of the LTE4PMR project. On the contrary, these developments will be complementary, opening possibilities in the area of connected objects.


Biomass: green gold at our fingertips

In the race towards renewable resources, biomass could well have an important role to play. Biomass includes organic matter derived from living organisms, particularly plants, which have more to offer than you might think. Patrick Navard is a researcher in materials at Mines ParisTech. Following his presentation at the “Materials: realities and new frontiers” symposium, organized on March 30-31, 2016 by Institut Mines-Télécom, we asked him to shed some light on the issues and limitations relating to the use of biomass.

 

 

Will green gold eventually replace black gold? Will oil’s overwhelming domination be challenged by the biological resources all around us? The Brent Crude oil price per barrel, which has been falling for over a year, seems to leave little hope for any competition. Yet biomass — all organic matter of animal and plant origin — could very well put up a fight. The reasons for its appeal are indeed more complex than issues of economic competition in the energy and materials markets, sectors predominantly supplied by hydrocarbons and their derivatives. Whether oil prices are up or down, it nevertheless remains a “limited resource, from which we will need to break free eventually,” says Patrick Navard, a researcher in bio-sourced materials at Mines ParisTech. This desire to find alternative solutions is also motivated by citizens’ growing appetite for eco-labeled products. “We all want to leave a cleaner planet for our children,” adds this expert in bio-inspired materials, putting environmental issues back at the heart of the matter. This dimension is already integrated by brands into their product marketing, in which they boast about compliance with countless ISO standards and ever lower CO2 emissions.

While these driving forces for an ecological transition require massive efforts over a relatively long period of time — a new vision of society is rarely accepted immediately and unanimously — they are supported by more practical aspects. Biomass is not simply a replacement solution: it is also a catalyst for innovation. “Certain products obtained from renewable resources are much better than those manufactured using fossil resources,” Patrick Navard argues. “A good example of this is cars: composite materials that use natural fibers are 15 to 20% lighter than those made with glass fiber.” This boost in performance from the bio-sourced material, in this case, is directly accompanied by a decrease in shipping and handling costs and, eventually, a reduced carbon footprint — a lighter car that emits less CO2 per kilometer.

 

Biomass – an opportunity for materials

The materials industry is one of the sectors that could, like energy, be greatly impacted by this transition. Polymers, which are components of plastic materials, are in the vanguard of this change. But there is still a long way to go. As Patrick Navard likes to say: “The use of biomass to develop plastic materials is developing rapidly, but it started out at a very low level.” Bio-sourced polymers currently represent only 0.1% of the polymers used around the world. And yet their properties are hardly different from those of their fossil-based counterparts. “Whether we are talking about using oil or biomass to produce polymers, in the end it boils down to the same thing,” the researcher explains. Because oil, after all, is nothing other than biomass that was buried in the ground and digested by the earth through chemical and biological processes over millions of years.

 


The global production of plastic materials amounts to 300 million metric tonnes. The use of biomass as a polymer source could therefore increase considerably to meet the growing demand (250 million metric tonnes in 2009, 204 million in 2002).

 

In Patrick Navard’s opinion, making any kind of polymer using renewable resources is perfectly conceivable: “The plant fibers must simply be broken down to form chemical building blocks, thus reproducing what nature does over a much longer period of time.” Such products already exist: polyethylene, for example, can be synthesized using cane sugar. Likewise, cornstarch can initiate the synthesis of polylactic acid, a biodegradable polymer used for food packaging. Another highly prized natural molecule is already used in industry: cellulose. It represents over 50% of the plant biomass and regularly supplies pulp mills.

 

Biomass refineries

Yet a problem remains in all these syntheses initiated by natural molecules: once the desired substance has been extracted, what is done with the remainder of the used plant matter? Cellulose is a major component of plants, but it is not the only molecule. The others, like lignin and hemicellulose, are mostly burned in current industrial practices. And yet lignin plays an important role in the rigidity of plants and could be recovered. “Lignin is the most abundant source of aromatic compounds on earth, and yet it is currently almost completely untapped, while industry uses many synthetic aromatic substances, which are particularly polluting,” regrets Patrick Navard. The lignin present in wood could therefore be a precursor to products that are in high demand. However, the researcher cautions that “many research projects around the world are trying to exploit lignin and, so far, with little success, because it is a complex molecule.”

This leaves us with the idea of recovering compounds currently considered as waste. This principle is at the heart of the biorefinery concept, which is modeled on oil refining: the various molecules composing crude oil are separated, the heavy fractions are used for bitumen, and the lighter fractions become solvents for the chemical industry. Nothing is wasted from this precious black gold. So why discard the biomass by-products? One example illustrates the advantages of this kind of recovery: when Miscanthus, a herbaceous plant, is harvested, an initial sifting procedure removes the sediments. Rather than discarding these sediments, “they can be used in the development of composite materials,” Patrick Navard explains.

 

In addition to the sediments obtained from screening, the Miscanthus plant is used to reinforce composite materials. The material’s mechanical properties vary according to the chosen species.

 

Environmental impact and land management

The prospects presented by biorefineries must also be tempered with caution. Though their development may be synonymous with more environmentally sustainable production, since it is based on renewable resources, this does not necessarily mean the environmental impact will be reduced. “If highly polluting chemistry must be used to recover the plant’s resources, the environmental impact is not reduced at all,” Patrick Navard cautions. The researcher illustrates this point with a comparative impact study carried out at two cellulose fiber factories owned by the same company, one in Austria and the other in Indonesia. Despite following the same manufacturing process, the Austrian factory was less polluting. The reason was the distance to the forest supplying the trees: in Indonesia, the forest was located several hundred kilometers away, significantly increasing the carbon footprint of transporting the wood, whereas the Austrian factory used a more local supply. “This problem does not exist for the oil industry, since it does not cost a lot to push crude oil through a pipe. But wood can’t be transported in the same way,” Patrick Navard explains.

The issue of geographical location does not just apply to biorefinery sites. With the use of biomass comes the problem of land being used for purposes other than agri-food production. This issue already arose several years ago with the cultivation of crops for biofuel, which led to highly questionable results. Indeed, numerous problems exist – from humanitarian disasters caused by the increased price of corn, to the simple impossibility, given the yields, of allocating enough land. Producing first-generation biofuels is thus flawed for many reasons, and raises serious ethical concerns. “The problem is different for materials,” explains Patrick Navard. “We do not need to produce as many crops in this case. While fuel constitutes one of the biggest oil products, materials only use the equivalent of a few percentage points of crude oil.” Less land is therefore required to shift the demand for materials to biomass than would be required to meet the demand for fuel. In addition, dual-purpose crops can be envisioned, combining food production with the supply of resources for the development of materials.

 

Convincing farmers and industrialists

Yet there is still an obstacle to the development of biorefineries and the use of biomass: their appeal to farmers and industrialists. “Farmers will not go into this business unless they are certain they will be able to sell their crops, and industrialists will not develop these products unless they are sure they can buy at a reasonable price,” explains Patrick Navard. The structuring of these sectors is therefore the key issue. In Germany, agreements have been made between farmers and manufacturers, enabling the launch of these initiatives. But not all the initiatives are successful. In South America, the industry based on the Curauá plant did not develop to the expected extent due to a lack of stability: the presence of only one distributor on the market cannot guarantee the security of production. In France, Patrick Navard assures us that initiatives are emerging, but adds that things are slow and difficult at times: “There’s a lot of red tape to get through, at the regional, departmental and municipal levels.” Yet time seems to be running out. On the one hand, oil resources are diminishing, and prices will increase due to the ever-growing demand, which will speed up the ecological and environmental transition. But on the other hand, CO2 emissions are skyrocketing, leaving our societies with even less time to limit the irreversible impacts of our activities.

Read more on our blog

Vabhyogaz

Vabhyogaz uses our waste to produce hydrogen

Hydrogen is a resource that is prized for its applications in the chemical industry and for its role in the fuel cells used in electric vehicles. Vabhyogaz – a project initiated by Didier Grouset, a researcher at Mines Albi – proposes to convert biogas from our waste into hydrogen. The project, which began nearly ten years ago, is now entering its third and final phase of development in 2016.

 

More than two months after the 21st Conference of the Parties to the United Nations Framework Convention on Climate Change (COP21), are any concrete solutions for limiting CO2 emissions really emerging? Near Albi, in southwestern France, the Vabhyogaz project is, in any case, quietly entering its third phase of development. This project, initiated in 2007, has the goal of converting biogas from waste into hydrogen, a resource particularly sought after for powering vehicle fuel cells, which emit only water vapor.

Initially, Vabhyogaz stemmed from cooperation between N-GHY, an SME specializing in the hydrogen industry, created by Didier Grouset as a spin-off from Mines Albi, and TRIFYL, the Tarn regional federation for the recovery of household waste. The first stage of the project brought regional partners together around TRIFYL and N-GHY, including Mines Albi and Phyrénées, an association initiated in 2007 by these three entities with the aim of building a community focused on hydrogen issues in the Languedoc-Roussillon-Midi-Pyrénées region. The goal, as Didier Grouset, a researcher at Mines Albi, explains: “to take an interest in the entire hydrogen value chain, and bring local stakeholders together to establish the originality of the Vabhyogaz concept.” The project now includes many different partners: SMEs, public authorities, and subsidiaries of multinational groups*.

 


The Vabhyogaz project was identified as a “Climate solution” at COP21 and was presented at the Grand Palais in Paris.

 

The process of converting waste into hydrogen starts with a natural phenomenon: methanization. In an oxygen-deficient environment, micro-organisms break down our waste and give off a gas, a mixture of carbon dioxide (CO2) and methane (CH4). This mix is what we call “biogas”. To speed up this process, which can last around thirty years in a landfill, TRIFYL uses a bioreactor, reducing the biogas production time to fifteen years. The above-ground anaerobic fermentation of our domestic, agricultural and food-processing waste in a biogas plant can also produce biogas in only a few days.

The biogas is then purified by removing the minor pollutant compounds, such as hydrogen sulfide. “Usually, CO2 is removed as well, but the unique aspect of the Vabhyogaz process is that it eliminates this costly step,” explains Didier Grouset. He adds: “Instead, we take the methane-carbon dioxide mixture directly and place it under 15 bar of pressure at 900°C, introducing water vapor, all over a nickel-based catalyst.” The methane and water react to form carbon monoxide (CO) and the coveted hydrogen (H2). This reaction is complemented by the reaction of the CO with the remaining water vapor at 200°C, again forming hydrogen along with CO2. The hydrogen produced therefore comes from the waste and the water that have reacted.
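For readers who want the chemistry spelled out, the two steps described above are the classic steam-reforming and water-gas-shift reactions; the equations below are standard textbook chemistry, reconstructed from the description rather than taken from the project's documentation.

```latex
\begin{align*}
&\text{Steam reforming (}\sim 900~^{\circ}\text{C, Ni catalyst):} &
  \mathrm{CH_4} + \mathrm{H_2O} &\longrightarrow \mathrm{CO} + 3\,\mathrm{H_2} \\
&\text{Water-gas shift (}\sim 200~^{\circ}\text{C):} &
  \mathrm{CO} + \mathrm{H_2O} &\longrightarrow \mathrm{CO_2} + \mathrm{H_2} \\
&\text{Overall:} &
  \mathrm{CH_4} + 2\,\mathrm{H_2O} &\longrightarrow \mathrm{CO_2} + 4\,\mathrm{H_2}
\end{align*}
```

Each methane molecule thus yields four hydrogen molecules, half of whose atoms come from water, consistent with the remark that the hydrogen produced comes from both the waste and the water.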

 

Extra-pure hydrogen

Once the hydrogen synthesis process is completed, it is purified to a purity of 99.995%. “This is essential for complying with the supply standard for hydrogen fuel cells, and guaranteeing their long-term service life,” warns Didier Grouset. Mastering this procedure brought an end to the second phase of the Vabhyogaz project, completed in 2014. “Today, our partners have a prototype capable of producing 10 kg of hydrogen per day,” the researcher notes. This is enough to power a few hydrogen-powered electric Kangoo vehicles that can travel over 200 km per day. Didier Grouset recognizes that this “is still small compared to our future needs,” and that a production unit “becomes advantageous, in other words, economically profitable, starting at 100 kg of hydrogen per day.”

This explains the upgrade scheduled to take place during the third phase of the Vabhyogaz project, which will begin in early 2016 as part of the Investments for the Future program. This phase will be aimed at commercializing units with production capacities ranging between 100 and 800 kg of hydrogen per day. These units could then serve as distributed production units, each able to supply several operators at the local level. This phase will also address how to transport the hydrogen: tank containers made of composite materials, which are lighter and better suited to transporting hydrogen, are being developed as part of the project.

 

What does the future hold for the hydrogen energy sector?

Identified as a “climate solution” at COP21, where it was presented at the Grand Palais in Paris, the Vabhyogaz project has a promising future. But its viability depends just as much on the quality of the project as on changes in the use of hydrogen as an energy source. Didier Grouset seems optimistic about this, first mentioning a national specificity that has led to the development of less expensive hydrogen-powered vehicles: “One of France’s distinctive practices is the use of hydrogen fuel cells as battery range extenders, and not as the main power supply for the electric motor.” For French manufacturers, a fuel cell used to recharge the battery only needs to produce the average power required by the electric vehicle, whose motor is primarily powered by the battery. This differs from the practices of foreign competitors, which have favored a fuel cell capable of supplying the motor’s maximum power. The researcher puts this into context: “In the first situation, the fuel cell must have an output of 5 kW, as opposed to 100 kW in the second.”

But hydrogen energy is up against a very difficult opponent: oil. “With the Vabhyogaz solution, the price becomes competitive,” says Didier Grouset. In numbers, this translates into a target price of €8 per kilogram of hydrogen including tax, keeping in mind that it takes approximately 1 kg of hydrogen to travel 100 km. But this target is set against a liter of diesel fuel at €1.20 including tax, and the current context of falling oil prices makes this an uneven fight. Still, Vabhyogaz is not doomed to failure. “The motivation shown by users remains a key to success,” he declares confidently. The project can benefit from its history in this area, as the researcher reminds us: “Around Albi, the hydrogen sector has been a topic of conversation since 2007, especially thanks to the Phyrénées association.” Vabhyogaz will therefore seek to target company vehicle fleets during its third phase.
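To make the comparison concrete, here is a back-of-the-envelope calculation using the figures quoted above (€8 per kg of hydrogen, roughly 1 kg per 100 km, diesel at €1.20 per liter). The diesel consumption of 6 L/100 km is an assumption added for illustration, not a figure from the article.

```python
# Back-of-the-envelope fuel cost per 100 km, using the article's figures.
H2_PRICE_PER_KG = 8.00      # € per kg (incl. tax), Vabhyogaz target price
H2_KG_PER_100KM = 1.0       # approx. 1 kg of hydrogen per 100 km

DIESEL_PRICE_PER_L = 1.20   # € per liter (incl. tax), article's comparison point
DIESEL_L_PER_100KM = 6.0    # assumed average consumption (illustrative only)

h2_cost_per_100km = H2_PRICE_PER_KG * H2_KG_PER_100KM            # €8.00
diesel_cost_per_100km = DIESEL_PRICE_PER_L * DIESEL_L_PER_100KM  # €7.20

print(f"Hydrogen: €{h2_cost_per_100km:.2f} per 100 km")
print(f"Diesel:   €{diesel_cost_per_100km:.2f} per 100 km")
```

Under these assumptions, hydrogen at the target price lands in the same range as diesel, which is what "competitive" means here; cheaper oil or lower diesel consumption tips the balance back toward diesel.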

 

Hydrogen, a resource for industry

The automotive sector is not the only industry targeted by this project. Although fuel cells are the most remarkable application of hydrogen, another application also exists, and is just as important: the chemical industry. The manufacture of semiconductors – key components in microelectronic equipment – requires large quantities of hydrogen. The same applies to the production of high-quality glass and the heat treatment of metals. However, “there are very few production sites for commercial hydrogen, resulting in long transport distances,” explains Didier Grouset. And yet, many industrial needs exist, ranging between 100 kg and 500 kg of hydrogen per day. The production units developed by Vabhyogaz could therefore directly supply the stakeholders in question.

In order to reach this stage, phase 3 of the Vabhyogaz project will also include life-cycle analyses and energy optimization studies for all steps in the procedure. This will specifically involve confirming that consumption has decreased for the hydrogen distribution units and for its transport, with the aim of reducing the environmental impact of the production chain for the conversion of biogas into hydrogen. Vabhyogaz is truly an environmentally motivated project, in terms of the purpose of hydrogen use, and also throughout the entire hydrogen value chain.

Read more on our blog: PREVER, residue turned into energy

[box type=”shadow” align=”” class=”” width=””]

Vabhyogaz, a collaborative success story

The Vabhyogaz 2 project has been coordinated by ALBHYON, a subsidiary of HERA France (HERA group, which originated in Spain). ALBHYON has continued the work of N-GHY in partnership with TRIFYL. This project was partially funded by ADEME as part of the TITEC program.

Following the proof of concept for phase 2, HERA decided to develop a range of products for producing and distributing hydrogen from renewable sources. This is the goal of Vabhyogaz 3, the project’s third phase, which includes the following partners:

• HERA-France and its subsidiary ALBHYON, project coordinator
• HP SYSTEMS, an SME from La Rochelle
• WH2, an SME from Lyon
• TRIFYL, the Tarn regional federation for the recovery of household waste
• EMTA, a subsidiary of SARP Industrie, (VEOLIA group)
• Mines Albi

This project will last 4 years, and was submitted in the Storage and Energy Conversion call for projects of the Investments for the Future program in December 2015. The total budget for the project is €9.76 million and €4.47 million of funding has been requested (largely refundable). The application has been submitted for examination by ADEME. [/box]


The brain: the mechanics of convolutions

Why do our brains have so many folds? The answer to this question is far from simple. In fact, the answer only came at the beginning of 2016, from a team of researchers including members from Harvard University and Télécom Bretagne. Unlike some of the theories previously proposed, this answer has nothing to do with genetics. The convolutions in our brains are caused by mechanical constraints. This work was published in the Nature Physics journal, and co-authored by François Rousseau, a researcher at Télécom Bretagne.

 

With its multiple convolutions, the strange structure of our brains fascinates some people and frightens others, but leaves few people indifferent. The fact that this convoluted shape is not common to all species only increases this fascination. While humans and several primates have a sinuous brain, this is not the case for rodents, which have a smooth cortex. The scientific community has been debating the origins of these convolutions for years. Some researchers believe they are the result of complex biochemical processes. In 1975, researchers from Harvard University proposed another theory: the brain’s folding into convolutions is the result of mechanical constraints that emerge during its growth. Forty years later, an international collaboration has just confirmed this second hypothesis. This work, involving researchers from the universities of Harvard, Jyväskylä (Finland), Aix-Marseille and Télécom Bretagne, was published in the Nature Physics journal in February 2016.

 

Work with an international scope

To prove the role of physical constraints in the development of folds, the scientists first developed mechanical models by growing shapes similar to that of the brain. The team from Harvard first published the observations from these experiments in 2014. “But the physical models developed up until now were simulations carried out on a type of sphere, and the physicists merely observed whether or not folds appeared,” explains François Rousseau, researcher at Télécom Bretagne and co-author of the publication. It was precisely to refine the model, and make it closer to reality, that the French scientists joined the team. Between 2008 and 2013, François Rousseau participated in a European Research Council (ERC) project on fetal brain MRI, where his skills in signal processing were used to extract data from the images. “It is difficult to obtain a good image of the fetal brain, since the fetus may move during the examination, for example,” the researcher notes. After algorithms have been developed and applied to correct for movement in the MRI images, the fetal brain can be identified and isolated from the surrounding fluid. From that point, the data can be used as the basis for 3D modeling.

 


Based on cross-sectional images of a fetal brain, François Rousseau sought to create 3D models.

 

It was during this process that François Rousseau met Nadine Girard and Julien Lefèvre, the recent winner of a young researcher grant from the French National Research Agency (ANR) for the study of the developing brain. When they discovered the 2014 publication of the work by Tuomas Tallinen, Jun Young Chung and Lakshminarayanan Mahadevan, they decided to join them, convinced that their work could improve the physical model used by the researchers from Harvard and Jyväskylä. The extraction of shapes from fetal MRIs at different stages of prenatal development enabled the scientists to better understand brain development during gestation. This is how the twenty-second week of pregnancy came to be identified as a pivotal period: it is the moment at which the brain enters a rapid growth phase. From this moment and until adulthood, its volume increases twentyfold. However, this increase does not take place in a uniform manner: over the same period, the cerebral cortex – the brain’s outer layer – increases to thirty times the volume it occupied during the twenty-second week of pregnancy.

 

Heterogeneous cerebral growth

And it was precisely this asynchronicity that was identified as the potential source of the mechanical stresses leading to the formation of convolutions. But the researchers still needed to prove it. Using MRI imaging of the fetal brain at twenty-two weeks, the team 3D-printed a replica, which was then used to form a silicone mold. With this, the physicists created imitation brains out of a gel material, covered with another gel that swells by absorbing a solvent such as hexane. After being exposed to the solvent for sixteen minutes, the brain model developed convolutions and folds that greatly resembled those of real brains. In addition, the stages in the development of these shapes were similar to those observed via MRI imaging. According to the researchers, the gyrification — the process of forming the folds — “is initiated by the formation of linear grooves, which grow longer and branch out, establishing most of the patterns before birth.”

 


The researchers observed a development in the mechanical model (in pink above) similar to real brain development (in white, shown using synthetic imaging).

 

Although the results already supported the hypothesis presented in 1975, the researchers still wanted to improve their simulations, which have certain limitations. First of all, the observations reveal a notable asymmetry between the two hemispheres of the imitation brain. While perfect symmetry does not exist in a real human cortex either, the scientists noted that the two halves of the model “differ more than in real life”. Why is this? François Rousseau believes that this could be caused by “slight errors in the digital segmentation during the image processing stage, which may be amplified during the transition to the mechanical model and distort the simulation”. Secondly, to keep the experiment feasible, the researchers treated the growth of the cortex as uniform, although they knew this was not the case. The model also omits the skull’s role in the development of the brain’s surface. Finally, the mechanical model is not yet able to attain a thirtyfold increase in its volume by absorbing the solvent, as the human brain does during its growth.

The researchers will seek to correct all of these details in the next stages of their work. And their exploration of this subject does not stop there. Beyond improvements to the model, the scientists want to take the simulations a step further, seeking to make the folding process run in reverse. This could improve the detection and understanding of lissencephaly disorders – diseases caused by a genetic abnormality that results in a smooth cortex. “Using MRI imaging taken at a specific point in time, we would like to return to that point through simulation, to better understand how the changes occur in the folds’ structure,” explains François Rousseau.


What is 5G?

5G is the future network that will allow us to communicate wirelessly. How will it work? When will it be available for users? With the Mobile World Congress in full swing in Barcelona, we are launching our new “What is…?” series with Frédéric Guilloud, Research Professor at IMT Atlantique, who answers our questions about 5G.

 

What is 5G?

Frédéric Guilloud: 5G is the fifth generation of mobile telephone networks. It will replace 4G (also referred to as LTE, for Long Term Evolution). Designing and deploying a new generation of mobile communication systems takes a lot of time. This explains why, at a time when 4G has only recently become available to the general public, it is already time to think about 5G.

What will it be used for?

FG: Up until now, developing successive generations of mobile telephone networks has always been aimed at increasing network speed. Today, this paradigm is beginning to change: 5G is aimed at accommodating a variety of uses (very dense user environments, machine-to-machine communications, etc.). The specifications for this network will therefore cover a very broad spectrum, especially in terms of network speed, transmission reliability, and latency.

How will 5G work?

FG: Asking how 5G will work today would be like someone in the 1980s asking how GSM would work. Keep in mind that the standardization work for GSM began in 1982, and the first commercial networks were launched in 1992. Even though developing the 5th generation of mobile communications will not take as long as it did for the 2nd, we are still only in the early stages.

From a technical standpoint, there are many questions to consider. How can we make the different access layers (Wi-Fi, Bluetooth, etc.) compatible? Will 5G be able to handle heterogeneous networks, which do not have the same bandwidths? Will we be able to communicate using this network without disturbing these other networks? How can we increase reliability and reduce transmission times?

Several relevant solutions have already been discussed, particularly in the context of the METIS European project (see box). The use of new frequency bands at higher frequencies, such as the 60-80 GHz bands, is certainly an option. Another solution would be to use the space remaining in the spectrum, around the bands already in use (Wi-Fi, Bluetooth, etc.), without interfering with them, by using filters and designing new waveforms.

How will the 5G network be deployed?

FG: The initial development phase for 5G was completed with the end of the projects in the 7th Framework R&D Technological Program (FP7), and particularly through the METIS project in April 2015. The second phase is being facilitated by the H2020 projects, which are aimed at completing the pre-standardization work by 2017-2018. The standardization phase is then expected to last 2-3 years, and 2020 could very well mark the beginning of the 5G industrialization phase.

 

Find out more about Institut Mines-Télécom and France Brevets’ commitment to 5G

[box type=”shadow” align=”” class=”” width=””]

The METIS European project

The METIS project (Mobile and wireless communications Enablers for the Twenty-twenty Information Society) was one of the flagship projects of the 7th Framework R&D Technological Program (FP7) aimed at supporting the launch of 5G. It was completed in April 2015 and brought together approximately 30, primarily European, industrial and academic partners, including IMT Atlantique. METIS laid the foundations for designing a comprehensive system to respond to the needs of the 5G network by coordinating the wide variety of uses and the different technical solutions that will need to be implemented.

The continuation of the project will be part of the Horizon 2020 framework program. The METIS-II project, coordinated by the 5G-PPP (the public-private partnership that brings together telecommunications operators), is focused on the overall system for 5G. It will integrate contributions from other H2020 projects, such as COHERENT and FANTASTIC-5G, which were launched in July 2015; each of these projects is focused on specific aspects of 5G. The COHERENT project, in which Eurecom is participating (including Navid Nikain), is focused on developing a programmable cellular network. The FANTASTIC-5G project, with the participation of IMT Atlantique under the leadership of Catherine Douillard, is aimed at studying, over a two-year period, the issues related to the physical layer (signal processing, coding, implementation, waveforms, network access protocol, etc.) for frequencies under 6 GHz.

Find out more about the METIS / METIS-II project[/box]


Octave: trustworthy and robust voice biometric authentication

Surely, voice biometric authentication would be an easier alternative to the large number of passwords that we use daily. One of the barriers to its adoption is robustness to spoofing and to challenging acoustic scenarios. In order to improve the reliability of voice biometric authentication systems, Nicholas Evans and his team at Eurecom have been involved since June 2015, for a two-year duration, in an H2020 European project called Octave.

 

What is the purpose of the Objective Control of Talker Verification (Octave) project?

Nicholas Evans: The general idea behind this project is to get rid of the use of passwords. They are expensive in terms of maintenance: most people have many different passwords and often forget them. While relieving end-users of the inconvenience of dealing with textual passwords, Octave will also reduce the economic and practical burden that password loss and recovery place on service providers. Octave will deliver a scalable, trusted biometric authentication service — or TBAS. The project is about providing a reliable service that works in diverse, practical scenarios, including data-sensitive and mission-critical applications.

 

Eurecom is leading the third work package of this H2020 European project. What is the role of the school?

NE: Our main mission is to ensure the reliability of the underlying automatic speaker verification technology. To do so, our work package has two objectives. The first is ensuring that the TBAS functions properly in a variety of environments: the Octave platform should work properly whether it is deployed in a limited-bandwidth, channel-variable telephony context or in a noisy physical access context. Eurecom’s focus is on the second objective, which is counter-spoofing.

 

How does your research team ensure the security of the system against spoofing?

NE: If I want to steal your identity, one strategy might be to learn a model of your voice and then build a system to transform mine into yours. Anything like that would typically introduce a processing artefact. I could also try to synthesize your voice, but again this would produce processing artefacts. So, one of the highest-level approaches to identifying a spoofing attempt is to build an artefact detector. To do that, we apply pattern recognition and machine learning algorithms to learn the processing artefacts from huge databases of spoofed speech.
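As a toy illustration of this idea, the sketch below trains a two-class nearest-centroid classifier to separate genuine from spoofed utterances. Everything here is hypothetical: real detectors work on spectral features of the speech signal and use far richer models, whereas this sketch reduces each utterance to a single made-up "artefact score".

```python
# Toy sketch of an artefact detector: a two-class classifier trained on
# features extracted from genuine vs. (artificially) spoofed speech.
# Real systems use spectral features and richer models; here each utterance
# is reduced to one hypothetical "artefact score" for illustration.

def train_centroids(genuine, spoofed):
    """Learn one centroid (mean feature value) per class from labeled data."""
    return {
        "genuine": sum(genuine) / len(genuine),
        "spoofed": sum(spoofed) / len(spoofed),
    }

def classify(feature, centroids):
    """Assign a new utterance's feature to the nearest class centroid."""
    return min(centroids, key=lambda label: abs(feature - centroids[label]))

# Hypothetical training data: spoofed speech scores higher on the artefact
# feature because voice conversion / synthesis leaves processing traces.
genuine_train = [0.1, 0.2, 0.15, 0.05]
spoofed_train = [0.8, 0.9, 0.75, 0.85]

centroids = train_centroids(genuine_train, spoofed_train)
print(classify(0.12, centroids))  # near the genuine centroid: "genuine"
print(classify(0.88, centroids))  # near the spoofed centroid: "spoofed"
```

The principle is the same at scale: label a large corpus of genuine and artificially generated spoofed speech, extract features that expose processing artefacts, and learn a decision boundary between the two classes.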

 


 

So researchers have a large database of spoofed speech at their disposal?

NE: This is a very tricky issue. Ideally, we would use real data, that is to say real examples of spoofed speech. These don’t exist, however. Even if they did, they would most likely not contain many samples. Therefore, we have to generate these spoofed speech datasets ourselves. We try to imagine how an attacker would try to spoof a system and then we fabricate a large number of spoofed samples in the same way. Fortunately, we can do this much better than a spoofer might, for we can imagine many possibilities and many advanced spoofing algorithms.

However, this methodology results in an unfortunate bias: when we use artificially generated datasets of spoofed speech, then we are in a really good position to know how spoofers faked the voice, because… well, we were the spoofers. To design reliable spoofing detectors we must then try to use the databases blindly, that is to say we must try not to use our knowledge of the spoofing attacks – in the real world, we will never know how the spoofing attacks were generated.

Luckily a very large, standard database of spoofed speech is now available and this database was used recently for a competitive evaluation. Since participants were not told anything about some of the spoofing attacks used to generate this database, the results are the best indication so far of how reliably we might be able to detect spoofing in the wild. Eurecom co-organised this evaluation, ASVspoof 2015, with another Octave partner, the University of Eastern Finland, among others.

 

Who are the other partners working alongside Eurecom on the Octave project?

NE: Among our partners, we count Validsoft in the United Kingdom, a voice biometrics product vendor; Eurecom is working with Validsoft to validate Octave technologies in a commercial-grade voice biometrics platform. This is not the only category of industrial partner we work with: APLcomp is another of Octave’s product vendors, Advalia develops custom solutions, and ATOS is Octave’s large-scale ICT integrator. Business users are represented by the airport operator SEA, while Findomestic, owned by BNP Paribas Personal Finance, represents the banking sector. These two partners, SEA and Findomestic, will help us with evaluation by allowing us to deploy the TBAS in their respective environments. Airport and banking ecosystems are very different, which lets us ensure that Octave works in real, diverse conditions.

 

Learn more about the Octave project

 

[box type=”shadow” align=”aligncenter” class=”” width=”95%”]


The Octave project:

The Objective Control of Talker Verification (Octave) project is a European project funded through the Horizon 2020 call on “Digital security: cybersecurity, privacy and trust”. It started in June 2015 and will last two years. The research programme is divided into eight work packages, the third of which, “Robustness in speaker verification”, is led by Eurecom. The school, part of the Institut Mines-Télécom, was approached to work on Octave because of its experience in spoofing detection for voice biometric systems. Prior to Octave, Eurecom was involved in the FP7 project Tabula Rasa.

List of Octave members:

(Map of Octave partners)

[/box]


Alexandre Gramfort translates our brain waves with algorithms

Alexandre Gramfort, a young researcher at Télécom ParisTech, has just received an ERC Starting Grant. This prestigious European award recognizes his research in signal processing and machine learning. For the last eight years, Alexandre Gramfort has worked on mathematical tools to better extract, analyze and visualize brain signals, mainly from electroencephalograms and magnetoencephalograms.

 

To study the brain non-invasively with good temporal resolution, electroencephalography (EEG) and magnetoencephalography (MEG) are the standard techniques. They measure, respectively, the electrical activity of our neurones and the magnetic fields that this activity creates. For the patient, an EEG examination involves nothing more than wearing a cap fitted with multiple electrodes. The practitioner then visualizes signals and 3D models of the patient’s brain, in which coloured areas indicate neuronal activity. Described this way, everything seems simple…

But in fact, a crucial aspect of the imaging technique has been left out: signal processing. To convert raw measurements into a dynamic visualization of the brain, mathematical and algorithmic tools are required. This step is at the heart of the research carried out by Alexandre Gramfort at the LTCI, a joint research unit of Télécom ParisTech and CNRS.

The young researcher has been working on this subject for eight years: first during his PhD thesis at Inria on cerebral activity detection, completed in 2009; then during his post-docs (CEA Neurospin, Harvard); and today in the “Audio, acoustic and waves” team at the LTCI. Alexandre Gramfort’s work in functional neuroimaging is best illustrated by the development of an open-source software package, MNE. Now used in many places around the world, it allows its users to process EEG and MEG signals, from raw data to the visualization of active brain regions, and covers many aspects of the analysis of such signals, including machine learning.
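The name MNE comes from the minimum-norm estimate, a classical regularized solution to the inverse problem of locating sources inside the brain from sensor readings. The sketch below is a deliberately tiny, hypothetical version of that computation: a random matrix stands in for the physical forward operator (which real software derives from a head model), and the estimate is the regularized minimum-norm formula x̂ = Gᵀ(GGᵀ + λI)⁻¹ y:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sources = 30, 100

# Forward operator G: how each brain source projects onto the sensors.
# (Random here; in practice it is computed from the head geometry.)
G = rng.normal(size=(n_sensors, n_sources))

# Ground truth: a single active source among many silent ones.
x_true = np.zeros(n_sources)
x_true[42] = 1.0

# Sensor measurements, with a little additive noise.
y = G @ x_true + 0.01 * rng.normal(size=n_sensors)

# Regularized minimum-norm estimate: x_hat = G.T (G G.T + lam I)^-1 y
lam = 1e-2
x_hat = G.T @ np.linalg.solve(G @ G.T + lam * np.eye(n_sensors), y)

print("estimated most active source:", int(np.abs(x_hat).argmax()))
```

The problem is underdetermined (100 unknowns, 30 measurements), so the estimate is a smeared version of the truth; nevertheless the largest estimated amplitude points back to the active source, which is what the coloured areas on the brain visualizations represent.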

 


MEG and EEG measurements and their localization inside the brain (red spot).

Laureate of an ERC starting grant

The quality of Alexandre Gramfort’s research has recently been recognized by the European Research Council (ERC) with a Starting Grant. Worth 1.5 million euros over five years, these grants not only reward the work already achieved by young researchers, they also encourage them to build their own teams. The Télécom ParisTech laureate has accordingly announced that he will recruit six PhD students or post-docs and one engineer.

“A huge part of the work lies in mathematical developments, algorithms and software,” Alexandre Gramfort explains. “In this research field, one has to deal with a large amount of data, and it is almost impossible to do it alone.” The increased workforce will thus allow the researcher to build up a team to address these data analysis challenges. Alexandre Gramfort is looking for a range of profiles, in order to cover the diversity of expertise required, from data mining to software development.

Thanks to these new resources, research topics will be explored further. One challenge is to process data that are currently unusable because of spurious signals, known as noise. “Noise can come from the sensors, but also from the patient’s brain,” Alexandre Gramfort tells us. “When you measure the neuronal signal created by someone’s thought or action, it is not only one part of the brain that activates: everything else keeps working.”
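One standard way of fighting this ever-present background activity is to repeat the stimulus and average many trials: the evoked response is the same each time, while the noise is not, so averaging N trials reduces the noise level by roughly √N. A toy illustration with simulated data (not the team’s actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_times = 100, 200
t = np.linspace(0.0, 1.0, n_times)

# Evoked response of interest: a brief peak, identical across trials...
signal = np.exp(-((t - 0.3) ** 2) / 0.002)

# ...buried in noise from the sensors and from ongoing brain activity.
trials = signal + rng.normal(0.0, 2.0, size=(n_trials, n_times))

single_trial_error = (trials[0] - signal).std()
averaged_error = (trials.mean(axis=0) - signal).std()

print(f"noise level, single trial:      {single_trial_error:.2f}")
print(f"noise level, 100-trial average: {averaged_error:.2f}")
```

With 100 trials the residual noise drops by about a factor of ten, which is why a single measurement session involves many repetitions of the same task.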

When asked about the potential impact of his research, Alexandre Gramfort answered that “this type of research is very important for everyone working on acquiring and processing data”. Behind the algorithms lies the objective, for neuroscientists, of better understanding brain mechanisms, with pathologies such as epilepsy and autism in their sights. But Alexandre Gramfort prefers to temper expectations: “functional brain imaging is essentially oriented towards diagnosis, not treatment. It is mostly about identifying biomarkers that could help to detect pathologies as early as possible”.

Source localization of continuous Magnetoencephalography (MEG) data