Auragen

The Auragen project: turning the hopes of genomic medicine into reality

There is only one way to unlock the mysteries of certain genetic diseases: analyzing each patient gene by gene. Genome analysis offers great promise for understanding rare diseases and providing personalized treatment for each patient. The French government hopes to make this new form of medicine available through its healthcare system by 2025. To achieve this aim, institutional healthcare stakeholders have joined forces to develop gene sequencing platforms. One such platform, named Auragen, will be established in the Auvergne Rhône-Alpes region. Mines Saint-Étienne is one of the partners in the project. Vincent Augusto, an industrial engineering researcher at the Saint-Étienne school, explains Auragen's objectives and how he is involved in creating the platform.

 

What is the purpose of genomic medicine?

Vincent Augusto: Certain diseases are caused by modifications of the genetic code, which are still not understood. This is true for many forms of cancer or other rare pathologies. In order to treat patients with these diseases, we must understand genetic alterations and be able to determine how these genes are different from those of a healthy individual. The goal of genomics is therefore to sequence a patient’s genome, the entire set of genes, in order to understand his disease and provide personalized diagnosis and treatment.

 

Is genomic medicine a recent idea?

VA: Gene sequencing has existed for 40 years, but it used to be very costly and could take up to several months to determine the entire genome of a living being. Thanks to advances in technology, a human being's genome can now be sequenced in just a few hours. The main limitation to developing genomic medicine is an economic one: some startups offer sequencing for several thousand euros. But in order to make this service available to patients through the healthcare system, the processes must be industrialized to bring down the cost. And this is precisely what the Auragen project aims to do.

 

What is the Auragen project?

VA: It is part of the France Genomic Medicine 2025 Plan launched in 2016 with the aim of developing genomics in the French healthcare system. The Auragen project strives to create one of the two sequencing platforms in France, in the Auvergne Rhône-Alpes region (the other platform, SeqOIA, is located in the Île-de-France region). To do so, it has brought together the University Hospital Centers of Lyon, Grenoble, Saint-Étienne and Clermont-Ferrand, two cancer centers and research centers including Mines Saint-Étienne. The goal is to create a platform that provides the most efficient way to sequence and centralize samples and send the results to doctors, as quickly and inexpensively as possible.

 

How are you contributing to the project?

VA: At Mines Saint-Étienne, we are involved in the organizational assessment of the platform. Our role is to model the platform's components and the players who will be involved, in order to optimize the analysis of sequences and the speed with which samples are transmitted. To do so, we use mathematical healthcare models to find the best possible way to organize the process, from a patient's consultation with an oncologist to the result. This assessment is not only economic in nature. We also aim to quantitatively assess the platform's benefits for patients. The assessment tools will be designed to be reproduced and used in other gene sequencing platform initiatives.

 

What research are you drawing on to assess the organization of the Auragen platform?

VA: We are drawing on the e-SIS project we took part in, in which we evaluated the impact of information and communication technologies in oncology. This project was part of a research program to study the performance of the Ministry of Health’s healthcare system. We proposed methods for modeling different processes, computerized and non-computerized, in order to compare the effectiveness of both systems. This allowed us to quantitatively evaluate the benefits of computer systems in oncologists’ offices.

 

What challenges do you face in assessing a sequencing platform?

VA: The first challenge is to try to size and model a form of care which doesn’t exist yet. We’ll need to have discussions with oncologists and genomics researchers to determine at what point in the treatment pathway sequencing technologies should be integrated. Then comes the question of the assessment itself. We have a general idea about the cost of sequencing devices and operations, but these methods will also lead to new treatment approaches whose costs will have to be calculated. And finally, we’ll need to think about how to optimize everything surrounding the sequencing itself. The different computational biology activities for analyzing data and the transmission channels for the samples must not be slowed down.

 

What is the timeline for the Auragen project?

VA: Our team will be involved in the first three years of the project in order to carry out our assessment. The project will last a total of 60 months. At the end of this period, we should have a working platform which is open to everyone and whose value has been determined and quantified. But before that, the first deadline is in 2019, at which time we must already be able to maintain a pace of 18,000 samples sequenced per year.

 

 


5G: the new generation of mobile is already a reality

[dropcap]W[/dropcap]hile 4G is still being rolled out, 5G is already in the starting blocks. For consumers whose smartphones still show “3G” alongside a reception indicator which rarely makes it over two bars out of four, this can be quite perplexing. Is it realistic to assure them that 5G is already a reality, when they can barely see proof of 4G? It is. And not only because manufacturers argue that the roll-out of 5G will be faster and more homogeneous. Without casting doubt over this promise, it is important to consider the economic factors at play behind the rhetoric and positioning.

5G is indeed a reality. This is essentially because mobile technologies are far more advanced and more efficient than they were in the early days of 4G. Researchers have been working on concrete solutions for providing a very high-speed mobile service, and these are now ready. Millimeter wave technology has proven itself in laboratories, and the first prototypes of networks using this technology are beginning to appear.

The saturated 4G frequency bands can make way for the growth in mobile terminals and communications. This is one of the outcomes of the European project Metis, which, among other things, led to the discovery of more efficient waveforms. Error-correcting codes are also ready to manage faster speeds. Researchers are already working on pushing speeds even further, perhaps to deal with what comes after 5G.

Another reason for taking 5G seriously is that it implies much more than just a faster service for end users. The goal is not only to increase the speed of data transfer between users, it is also to meet new uses, such as communication between machines. Without a network to handle communications between connected objects, there can be no Smart City. No smart, communicating cars, either. Researchers have been working for years to create new network architectures to satisfy potential new actors.

In the end, the question is not whether 5G is a reality or not, it is more about understanding the changes it will bring when it is released commercially in 2020, as the European Commission wishes. How will the current actors adapt to the changes in the telecommunications market? How will new actors find their place? Will 5G be an evolution, or rather, a revolution?

[divider style=”normal” top=”20″ bottom=”20″]

To find out more…

To read more on the subject of 5G, here are some additional articles from I’MTech archives:

[divider style=”normal” top=”20″ bottom=”20″]

 


Millimeter waves for mobile broadband

5G will inevitably involve opening new frequency bands, allowing operators to increase their data rates. Millimeter waves are among the favorites, as they have many advantages: large bandwidth, adequate range, and small antennas. Whether or not these bands are opened will depend on whether the technology lives up to expectations. The European project H2020 TWEETHER aims to demonstrate precisely this point.

 

In telecommunications, the larger the bandwidth, the greater the maximum volume of data it can carry. This rule stems from the work of Claude Shannon, the father of information theory. Far from anecdotal, this physical law partly explains the relentless competition between operators (see the insert at the end of the article). They fight for the largest bands in order to provide greater communication speed, and therefore a higher quality of service to their users. However, the frequencies they use are heavily regulated, as they share the spectrum with other services: naval and satellite communication, bands reserved for law enforcement, medical units, etc. With so many different users, part of the frequency spectrum is now saturated.
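To make the rule concrete, the Shannon–Hartley theorem gives the maximum capacity C of a channel of bandwidth B and signal-to-noise ratio S/N (the figures below are purely illustrative):

C = B × log₂(1 + S/N)

With B = 20 MHz and a signal-to-noise ratio of about 31 (roughly 15 dB), C = 20×10⁶ × log₂(32) = 100 Mbit/s. Doubling the bandwidth to 40 MHz doubles that ceiling, which is exactly why operators fight for the widest bands.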

We must therefore shift to higher frequencies, into ranges not used before, to allow operators to use more bands and increase their data rates. This is of prime importance for the development of 5G. Among the potential bands to be used by mobile operators, experts are looking into millimeter waves. “The wavelength is directly linked to the frequency” explains Xavier Begaud, telecommunications researcher at Télécom ParisTech. “The higher the frequency, the shorter the wavelength. Millimeter waves are located at high frequencies, between 30 and 300 GHz.”
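A quick calculation shows where the name comes from: the wavelength λ is the speed of light divided by the frequency, λ = c / f. At 30 GHz, λ = (3×10⁸ m/s) / (3×10¹⁰ Hz) = 10 mm; at 300 GHz it drops to 1 mm. The 30–300 GHz range therefore corresponds to wavelengths of one to ten millimeters, hence the name “millimeter waves”.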

Besides being available over a large bandwidth, millimeter waves offer several other advantages. With a range of between several hundred meters and a few kilometers, they correspond to the size of the microcells planned for improving the country's network coverage. With the rising number of smartphones and mobile devices, the current cells, which span several tens of kilometers, are saturated. By reducing the size of the cells, the capacity of each antenna would be shared among fewer users, providing better service for everyone. In addition, cellular communication is less effective when the user is far from the antenna; smaller cells mean users are closer to the antennas.

Another advantage is that the size of an antenna scales with the length of the waves it transmits and receives. For millimeter waves, base stations and other transmission hubs would be a few centimeters in size at most. Aesthetics aside, the discreetness of millimeter-wave equipment would be welcome at a time when people are growing wary of electromagnetic waves and of the large antennas used by operators. In addition, smaller base stations would require less installation work, making deployment quicker and less costly.

An often-cited downside of these waves is that they are attenuated by the atmosphere. Oxygen in the air absorbs strongly around 60 GHz, and other molecules absorb the waves above and below this frequency. This is of course an unavoidable limitation, but for Xavier Begaud, this characteristic can also be seen as an advantage: the natural attenuation means the waves remain confined to small areas. “By limiting their propagation, we can minimize interference with other 60 GHz systems” the researcher highlights.

 

TWEETHER: creating infrastructure for millimeter waves

Since 2015, Xavier Begaud has been involved in the European project TWEETHER, funded by the H2020 program and led by Prof. Claudio Paoloni from Lancaster University. The partners include both public institutions (Goethe University of Frankfurt, Universitat Politècnica de València, Télécom ParisTech) and private actors (Thales Electron Devices, OMMIC, HFSE GmbH, Bowen, Fibernova Systems SL) working towards a demonstration of the infrastructure in 2018. The objective of the TWEETHER project is to set a milestone in millimeter wave technology with the realization of the first W-band (92-95 GHz) wireless system for distributing high-speed internet everywhere. The aim is to build the millimeter wave point-to-multipoint segment that links the fiber backbone to the sub-6 GHz distribution currently provided by LTE and WiFi, and soon by 5G. The result would be a full three-segment hybrid network (fiber, TWEETHER system, sub-6 GHz distribution), the most cost-effective architecture for reaching mobile or fixed end users. The TWEETHER system is expected to provide economical broadband connectivity, with a capacity of up to 10 Gbit/s per km² and the distribution of hundreds of Mbit/s to tens of terminals. This would overcome the capacity and coverage challenges of current backhaul and access solutions.
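To relate those last two figures (a rough, purely illustrative calculation): if the 10 Gbit/s available over a square kilometer is shared among some forty terminals, each one receives 10,000 Mbit/s ÷ 40 ≈ 250 Mbit/s, i.e. a few hundred Mbit/s per terminal, consistent with the project's target.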

Horn antenna for millimeter waves

This system has been made possible by recent technological progress. As many parts of the system must be millimetric in size, they have to be designed with great precision. One of the essential elements in the TWEETHER system is the traveling-wave tube used in the hub to amplify the power of the waves. The tube itself is not a new invention, and has been used in other frequency ranges for several decades. For millimeter waves, however, it had to be miniaturized, something that was not previously possible, while still delivering close to 40 W of power for TWEETHER. Creating the antennas (several horns and a lens) for a system of this scale is also challenging at these high frequencies. This part was supervised by Xavier Begaud. The antennas were measured at Télécom ParisTech in an anechoic chamber, allowing researchers to characterize the radiation patterns up to 110 GHz. The project overcame scientific and technological barriers, opening up possibilities for large-scale millimeter-wave systems.

The TWEETHER project is a good example of the potential of millimeter waves for providing broadband Internet to a large number of users. Beyond mobile communication, they could also provide an attractive alternative to Fiber to the Home (FTTH), which requires extensive civil engineering work and has high maintenance costs. Transporting data to buildings over wireless broadband links rather than fiber could therefore interest operators.

This article is part of our dossier 5G: the new generation of mobile is already a reality

[box type=”info” align=”” class=”” width=””]

A fight over bands between operators, under the watch of Arcep

In France, the allocation of frequency bands is managed by the telecommunications regulatory authority, Arcep. When the agency decides to open a new range of frequencies to operators, it holds an auction. In 2011, for example, the bands at 800 MHz and 2.6 GHz were sold to four French operators for a total of €3.5 billion, to the benefit of the State, as Arcep is a governmental authority. In practice, “opening the 800 MHz band” means selling operators duplex lots of 10 MHz (10 MHz for uploading and the same for downloading) around this frequency. SFR, for example, paid over €1 billion for the band between 842 and 852 MHz for uploading, and between 801 and 811 MHz for downloading. The same applied to the 2.6 GHz band, sold in lots of 15 or 20 MHz.

[/box]


5G… and beyond: Three standardization challenges for the future of communication

[dropcap]G[/dropcap]enerations of mobile technologies come around at a rate of about one every ten years. Ten years is also roughly the amount of time it takes to create them. No sooner has a generation been offered to end consumers than researchers are working on the next one. It is therefore hardly surprising that we are already seeing technologies which could appear in the context of a 5G+, or even a potential 6G. That is, as long as they manage to convince the standardization bodies, which decide on the technologies to be selected, before the final choices are made by 2019.

Researchers at IMT Atlantique are working on new, cutting-edge technologies for transmission and signal coding. Three of these technologies will be presented here. They offer a glimpse of both the technical challenges in improving telecommunications, and the challenges of standardization presented ahead of the commercial roll-out of 5G.

 

Turbo codes: flexible up to a point

Turbo codes were invented at IMT Atlantique (formerly Télécom Bretagne) by Claude Berrou in 1991. They are an international standard in error-correcting codes. In particular, they are used in 4G. Their advantage lies in their flexibility. “With one turbo code, we can code any size of message” highlights Catherine Douillard, a digital communications researcher at IMT Atlantique. Like all error-correcting codes, they correct errors all the more reliably as the transmission quality improves. However, there is a threshold, known as the error floor, beyond which they can no longer improve their correction rate despite an improved signal.
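To illustrate the general principle of an error-correcting code (and only the principle: this toy repetition code is far simpler than a turbo code, whose structure is not reproduced here), here is a minimal Python sketch in which every bit is transmitted three times and decoded by majority vote:

import numpy as np

rng = np.random.default_rng(1)

def encode(bits):
    # toy (3,1) repetition code: every bit is transmitted three times
    return np.repeat(bits, 3)

def decode(received):
    # majority vote over each group of three received bits
    return (received.reshape(-1, 3).sum(axis=1) >= 2).astype(int)

bits = rng.integers(0, 2, 10_000)
coded = encode(bits)
flips = rng.random(coded.size) < 0.05        # noisy channel: 5% of bits are flipped
received = coded ^ flips
decoded = decode(received)

print("bit-flip rate on the channel:", 0.05)
print("residual error rate after decoding:", np.mean(decoded != bits))

Real codes such as turbo codes, LDPC codes or polar codes achieve far better protection for far less redundancy, which is precisely what is at stake in the standardization battle described below.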

“We have recently found a way to solve this problem, which has led to a patent with Orange” explains Catherine Douillard. Turbo codes could well remain the go-to error-correcting codes in telecommunications. However, the standardization phases for 5G have already begun. They are divided into three phases, corresponding to the three types of purpose the new generation is expected to fulfill: increased data speeds, ultra-reliable communication, and machine-to-machine communication. For the first purpose, other error-correcting codes have been selected: LDPC codes, based on the work of Robert Gallager at MIT in 1960. The control channel will be protected by polar codes. For the other purposes, the standardization committees are due to meet in 2018. Turbo codes, polar codes and LDPC codes will once again be in competition.

Beyond the technological battle over 5G, the three families of codes are also being examined closely in longer-term scenarios. The European project H2020 Epic brings together manufacturers and researchers, including IMT Atlantique, to look at one issue: increasing the throughput of error-correcting codes. Turbo codes, LDPC codes and polar codes are all being examined, reworked and updated at the same time. The goal is to make them compatible with decoding signals at speeds of around a terabit per second. To do this, they are implemented directly in the hardware of mobile terminals and antennas (see the insert at the end of the article).

 

FBMC: a new form of wave to replace OFDM?

If 5G is to incorporate communication between connected objects, it will have to make space on the frequency bands to allow machines to speak to each other. “We will have to make holes at very specific frequencies in the existing spectrum to insert communication by the Internet of Things” says Catherine Douillard. But the currently standardized waveform, called OFDM, does not allow this: its level of interference is too high. In other words, the space freed in the frequency band would not be “clean”, and would suffer from interference from adjacent frequencies. Another waveform is therefore being studied: FBMC. “With this, we can take out a frequency here and there to insert a communication system without disruption” the researcher sums up.

FBMC also provides a higher quality of service when mobile terminals move quickly in a cell. “The faster a mobile terminal moves, the higher the Doppler effect is” explains Catherine Douillard, “and OFDM is not very resistant to this effect”. And yet, 5G is supposed to provide good communication at speeds of up to 400 kilometers per hour, like in a TGV train, on the classical 4G frequency bands. The advantage of FBMC is even more significant on millimeter frequencies, as the Doppler effect is even greater at higher frequencies.
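To give an order of magnitude (the carrier frequencies below are merely illustrative): the Doppler shift is f_D = f_c × v / c. At 400 km/h, i.e. about 111 m/s, a 3.5 GHz carrier is shifted by roughly 3.5×10⁹ × 111 / (3×10⁸) ≈ 1.3 kHz, while a 60 GHz millimeter-wave carrier is shifted by about 22 kHz, which is why the choice of waveform matters even more at those frequencies.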

OFDM is already used for 4G, and for the time being the 5G standardization bodies are keeping it as the default waveform. But we probably haven't heard the last of FBMC. It is more complex to set up, but researchers are working on simplifying its implementation. Again, the next phases of standardization could prove decisive.

 

NOMA: desaturating longstanding frequency bands

The frequency bands currently used for communications are becoming ever more saturated. Millimeter frequencies could certainly alleviate the problem, but this is not the only possibility being explored by researchers. “We are working on increasing the capacity of systems to transmit more data on the same bandwidth” explains Catherine Douillard. NOMA (non-orthogonal multiple access) places several users on the same frequency band. Interference is avoided by allocating each user a different power level, a technique known as power-domain multiplexing.

“The technique works well when we associate two users with different channel qualities on one frequency” the researcher explains. In concrete terms, a user close to an antenna can use NOMA to share the same frequency with a user further away. However, two users at the same distance from the antenna, and therefore with more or less the same quality of reception, could not share it. This technique could therefore help resolve the cell saturation problem which 5G aims to address.
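As a minimal sketch of the principle (an illustrative baseband simulation in Python, not a model of any 3GPP specification), two users can be superposed on the same band with different power levels; the user close to the antenna first decodes and removes the stronger signal intended for the far user before decoding its own, a step known as successive interference cancellation:

import numpy as np

rng = np.random.default_rng(0)

# Two users share the same band; the user far from the antenna gets most of the power.
n = 10_000
bits_far = rng.integers(0, 2, n)      # user far from the antenna (weak channel)
bits_near = rng.integers(0, 2, n)     # user close to the antenna (strong channel)
x_far = 2 * bits_far - 1              # BPSK symbols
x_near = 2 * bits_near - 1

p_far, p_near = 0.8, 0.2              # power split chosen purely for illustration
tx = np.sqrt(p_far) * x_far + np.sqrt(p_near) * x_near

# Signal received by the near user (good channel, hence low noise)
rx_near = tx + 0.05 * rng.standard_normal(n)

# Successive interference cancellation at the near user:
far_hat = np.sign(rx_near)                      # 1) decode the dominant far-user signal
residual = rx_near - np.sqrt(p_far) * far_hat   # 2) subtract it from the received signal
near_hat = np.sign(residual)                    # 3) decode the near user's own data

print("near user's bit error rate after cancellation:", np.mean(near_hat != x_near))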

This article is part of our dossier 5G: the new generation of mobile is already a reality

[box type=”info” align=”” class=”” width=””]

Algorithms implemented directly in hardware

Algorithms are not necessarily lines of programming code. They can also be implemented directly in integrated circuits, using transistors which act as logic gates. “The advantage is that these algorithms take up a lot less space this way than when they have to be executed on processors”, specifies Michel Jezequel, head of the electronics department at IMT Atlantique. “They are also faster and consume less energy”, he continues. To push turbo codes to speeds of around a terabit per second, for example, there is no choice but to use a hardware implementation: the equivalent software would not be able to process the data fast enough.

[/box]


SDN and virtualization: more intelligence in 5G networks

New uses for 5G imply a new way of managing telecommunications networks. To open the way for new actors, the network will have to be divided into virtual slices, and a dynamic way will have to be found of allocating them to new services. This organization is made possible by SDN, a technique which redesigns network architecture to make it as flexible as possible. Researchers at EURECOM are using SDN to make networks more intelligent.

 

4G was built to serve a single purpose: mobile broadband Internet. Not only will 5G need to pursue this effort, it will also have to satisfy the needs of the Internet of Things and provide considerably more reliable means of communication for the transmission of sensitive data. Combining these three uses in one type of network is far from simple, especially as each of them will give rise to many services, and therefore many new operators who will have to coexist. Depending on demand, certain services will have to be favored over others, which requires managing resources dynamically.

A new way of organizing the network will need to be found. One solution is network slicing. The infrastructure remains unchanged, under the control of the current operators, but is shared virtually with the new operators. “Each service shares the network with others, but has its own independent slice which is specific to them” explains Adlen Ksentini, mobile networks researcher at Eurecom. The slice left to the virtual operator is a slice from end to end. This means that it leaves space for a new entrant both in sharing the radio bandwidth and on the management platform for this radio resource.

For researchers, the main challenge is the lifespan of the slices. The slicing system needs to be able to create and close them on demand. A data collection system for connected objects will not run all day long, for example. “If an electricity distributor records data from smart meters between midnight and 4am, a slice needs to be created for that precise timeframe only, so that the resources can be allocated to other services the rest of the time” Adlen Ksentini illustrates.
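A minimal sketch of that idea in Python (an illustrative data structure, not any operator's actual provisioning interface) simply attaches a time window to each slice, so that the slice only holds resources while its window is open:

from datetime import time

class Slice:
    """A network slice that only holds resources inside its time window."""
    def __init__(self, name: str, start: time, end: time):
        self.name, self.start, self.end = name, start, end

    def is_active(self, now: time) -> bool:
        return self.start <= now < self.end

# The smart-meter example from the interview: a slice open from midnight to 4am
metering = Slice("smart-meter-collection", time(0, 0), time(4, 0))
print(metering.is_active(time(2, 30)))   # True: the slice keeps its resources
print(metering.is_active(time(9, 0)))    # False: resources are freed for other services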

Greater network intelligence

Establishing these slices is made possible by a new type of network architecture, whose behavior is programmed by software. Until now, the paths followed by data in the network were dictated by routers: physical boxes, spread throughout the network, decided how packets of information were forwarded. With the new architecture, known as SDN, or software-defined networking, a central entity controls the equipment and makes the routing decisions.

The greatest advantage of SDN is the flexibility it brings to network management. “In a traditional architecture, without SDN, the routing rules are set in advance and are difficult to change” explains Christian Bonnet, another networks and telecommunications researcher at Eurecom. “SDN allows us to make the network intelligent, to change the rules if we need to” he continues. This greater freedom is what makes it possible to slice the network up, by creating rules which isolate the data pathways used by each service.
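The sketch below gives a rough idea, in Python, of what such centrally managed rules might look like (the names and fields are purely illustrative and do not correspond to any real SDN controller's API): the controller decides the match/action rules and writes them into the switches' tables, so that each service's traffic follows its own isolated path.

from dataclasses import dataclass

@dataclass
class FlowRule:
    match_vlan: int      # tag identifying a network slice or service
    out_port: int        # where the switch should forward matching packets
    priority: int = 10

class Controller:
    """Central entity: it makes the routing decisions, the switches only apply them."""
    def __init__(self):
        self.switch_tables = {}

    def install(self, switch_id: str, rule: FlowRule) -> None:
        self.switch_tables.setdefault(switch_id, []).append(rule)

# Isolating two services (say, mobile broadband and IoT) on the same switch
ctrl = Controller()
ctrl.install("edge-switch-1", FlowRule(match_vlan=100, out_port=1))  # broadband traffic
ctrl.install("edge-switch-1", FlowRule(match_vlan=200, out_port=2))  # IoT traffic
print(ctrl.switch_tables["edge-switch-1"])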

Eurecom researchers are exploring the possibilities offered by these new architectures on the technological platform OpenAirInterface (OAI). “We are experimenting both with how to transform the 4G network to introduce intelligence, and how to shift towards a 5G architecture” Christian Bonnet explains. This open source work helps us to understand how SDN impacts the state of radio resources, its potential for creating new services, and the associated constraints or opportunities for improvement in managing mobility (see insert at the end of the article).

“Technically speaking, there are several possibilities for installing an SDN and slicing up the network. As each operator has slightly different requirements, there are many different angles to explore”, the researcher explains. Each operator could have their own way of virtualizing the network to allow for new services. 3GPP, the standardization body for mobile communication technologies, could however play a consolidating role in the near future, should more operators decide to go in a common direction.

This article is part of our dossier 5G: the new generation of mobile is already a reality

[box type=”info” align=”” class=”” width=””]

SDN for better management of mobility

SDN architecture may be used to ensure that data keeps flowing to a mobile terminal. This technique makes it possible to better handle changes of interface, such as switching from a 4G network to a WiFi network, without interrupting the flow of information. Compared with traditional mobility techniques, SDN is faster and reduces the number of operations needed to redirect the flow. For the end user, this results in a higher quality of service.

[/box]

 


Will 5G turn the telecommunications market upside-down?

The European Commission is anticipating the arrival of the fifth generation of mobile phone technology (5G) in 2020. It is expected to significantly increase data speeds and offer additional uses. However, the extent of the repercussions on the telecommunications market and on services is still difficult to evaluate, even for the experts. Some believe that 5G will be no more than a technological step up from 4G, in the same way that 4G progressed from 3G. In which case, it should not create radical change in the positions of the current economic stakeholders. Others believe that 5G has the potential to cause a complete reshuffle, stimulating the creation of new industries which will disrupt the organization among longstanding operators. Marc Bourreau sheds light on these two possibilities. He is an economist at Télécom ParisTech, and in March he co-authored a report for the Centre on Regulation in Europe (Cerre) entitled “Towards the successful deployment of 5G in Europe: What are the necessary policy and regulatory conditions?”.

 

Can we predict what 5G will really be like?

Marc Bourreau: 5G is a shifting future. It is a broad term which encompasses the current technical developments in mobile technologies. The first of these will not reach commercial maturity until 2020, and will continue to develop afterwards, similar to the way in which 4G is still developing today. At present, 5G could go in a number of directions. But we can already imagine the likely scenarios from the positioning of economic actors and regulators.

Is seeing 5G as a simple progression from 4G one of those scenarios?  

MB: 5G partly involves improving 4G, using new frequency bands, increasing antenna density, and improving the efficiency of wireless technology to allow greater data speeds. One way of seeing 5G is indeed as an improved 4G. However, this is probably the smallest progression that can be envisaged. Under this hypothesis, the structure of the market would be fairly similar, with mobile operators keeping an economic model based on sales of 5G subscriptions.

Doesn’t this scenario worry the economic stakeholders?

MB: Not really. In this case, the regulations would not change a great deal, which means the longstanding stakeholders would not need to adapt significantly. There may be questions over investment for the operators, for example in antennas, whose density is set to rise. They would have to find a way of financing the new infrastructure. There would perhaps also be questions surrounding installation: a high density of 5G antennas would mean that deployment would primarily take place in urban areas, where installing antennas poses fewer problems.

Which scenario could change the way the current mobile telecommunications market is structured?

MB: In contrast to the scenario of a simple progression, there is that of a total revolution. In this case, 5G would provide network solutions for particular industries. Economically speaking, we use the term industry “verticals”. Connected cars are a vertical, as are health and connected objects. These sectors could develop new services with 5G. It would be a true revolution, as these verticals require access to the network and infrastructure. If a carmaker creates an autonomous vehicle, it must be able to receive and send data on a dedicated network. This means that antennas and bandwidth will need to be shared with mobile phone operators.

To what extent would this “revolution” scenario affect the market?

MB: Newcomers will not have their own infrastructure. They will therefore be virtual operators, as opposed to classical operators, and will probably have to rent access. This means the longstanding operators will have to change their economic model to incorporate this new role: in this scenario, the current operators would become network actors rather than service actors. Sharing the network like this could require regulation to help the different actors negotiate with each other. As each virtual operator will have different needs, the quality of service will not be identical for each vertical, so the question of preserving or adapting net neutrality for 5G will inevitably arise.

Isn’t the scenario of a revolution, along with new services, more advantageous?

MB: It certainly promises to use the full potential of technology to achieve many things. But it also involves risk. It could disrupt operators’ economic models, and who knows if they will be able to adapt? Will the longstanding operators be capable of investing in infrastructure which will then be open to all? An optimistic view would be to say that by opening the networks, the many services created will generate value which will, in part, come back to the operators, allowing them to finance the development of the network. But we should not forget the slightly more pessimistic view that the value might come back to the newcomers only. If this were to happen, the longstanding operators would no longer be able to invest, infrastructure would not be rolled out on a large scale, and the scenario of a revolution would not be possible.

Of the two scenarios, a “progression” or a “revolution”, is one more likely than the other?

MB: In reality, we have to see the two scenarios as an evolution over time, rather than a choice. Once 5G is launched in 2020, there will be room for development. The technology will keep progressing under the umbrella term “5G”, which brings together the underlying building blocks. After all, each mobile generation brings about changes which consumers do not necessarily notice. When the technology is launched commercially, it will probably be more of a progression from 4G. The question is whether it will then develop into the more ambitious scenario of a revolution.

What will influence the deepening role of 5G?

MB: The choice of scenario now depends on standardization choices. Dictating the state of the technology can either facilitate or limit transformations in the market. Standardization is carried out within large economic areas. There are hopes of partnerships, for example between Europe and Korea, to unify standards and produce a homogeneous 5G. But we must not forget that the different economic areas can also have their own interest in sticking with a progression or opting for a revolution.

How do the interests of each economic area come into play?

MB: This technology is interesting both from an industrial point of view and a social one. Choices may be made on each of these aspects, depending on the policy preferred by an economic area. From an industrial point of view, a conservative approach will favor standardization choices that protect the current actors. Conversely, other choices may be made that allow new actors to emerge, which would be more of a “revolution” scenario. From a social point of view, we need to look at what the consumer wants, whether the new services created risk disrupting those currently on offer, and so on.

What roles do the various stakeholders play in the decision-making process? 

MB: The choice may be decentralized to the stakeholders. Operators are in discussion and negotiation with the vertical stakeholders. I think it is worth letting this process play out and allowing it to generate experimentation. The situation is similar to the early days of the mobile web, when no one knew what the right application or the right economic model was. For 5G, no one knows what the relationship between mobile operators and carmakers, for example, might be. They must be left to find their own common ground. Behind this, the role of public policy is to support experimentation and respond to market failures, but only if these do occur. The European Commission is there to coordinate the stakeholders and support them in their transformation and experimentation. The H2020 program is a typical example of this: research projects bringing together scientists and industrial actors to come up with solutions.

This article is part of our dossier 5G: the new generation of mobile is already a reality


Speakshake, shaking up distance language learning

The start-up Speakshake has created a communication platform for improving your foreign language speaking skills. As well as communication, it offers users a variety of learning tools to accompany their discussions. The platform has been recognized by the French Administration as a vehicle for integration of both French nationals abroad and foreigners in France.

 

Venezuela, Brazil, China, Chile… Fanny Vallantin spent several years working as an engineer in different countries, discovering new cultures and languages wherever she went. In order to maintain her skills and keep practicing all these languages, she created a program enabling her to communicate with her colleagues in different countries. From what began as a personal tool, she created a platform her friends could also use, and then a start-up. The result was the company Speakshake, incubated at ParisTech Entrepreneurs since April 2017.

The start-up offers a way of connecting two users who each want to mutually improve their skills in the other’s native language, via a web service. The two participants begin a 30-minute video discussion, split into two 15-minute halves, each carried out in one of the two languages. Because of the length of the conversation, access to the service requires users to have a basic level of practical skills. “You need to have the level of a tourist who can get by abroad, who can order a coffee in a bar, in the language you want to work on” explains Fanny Vallantin.

The service aims to give its users the tools to integrate into a country, so the conversations are directed towards cultural subjects. During the discussion, Speakshake offers various documents on the country's traditions, history or current events. The resources are prepared in collaboration with students at the Sorbonne Nouvelle university, the platform's official partner institution. The subjects spark discussion but are not imposed on the participants, who remain free to speak about whatever they want and may browse through the available resources as they please.

The 15-minute conversation in the user’s language is based on their partner’s culture. A French person speaking with a German person will speak in French about German culture, and in German about their own culture. This structure means that users are continually learning about the foreign country, even when speaking their own language. The start-up currently has seven languages on offer: French, English, Spanish, German, Portuguese, Mandarin Chinese and Italian. The list is set to grow next September, to include Japanese, Arabic, Hebrew, Russian and Korean, with support from Ile-de-France tourism funds.

In addition to cultural resources, the start-up’s platform offers a host of digital tools for learning. For instance, the conversation interface includes an online chat feature for spelling out words. It also includes an online dictionary and translator. All words written in these interfaces can be added to the dashboard, which the user may look at once the conversation has finished. An oral and written report system allows users to give advice to their conversation partners, and to receive tips for their own improvement.

By focusing on oral learning through conversation, Speakshake takes a new approach in the language education sector, which is often centered on writing. And by providing educational tools, it offers an enriched communication service. This point of difference is what helped the start-up win the Quai d'Orsay hackathon in January, as a service helping young foreigners become better integrated in France, and French expatriates become integrated in their host country. The young company has also been recognized by the Institut Français and the Ministry for Europe and Foreign Affairs as an ideal tool for improving speaking skills in a foreign language.

 


23 terms to help understand artificial intelligence

Neural networks, predictive analytics, chatbots, data analysis, machine learning, etc. The 8th Fondation Mines-Télécom booklet provides a glossary of 23 terms to clarify the vocabulary of artificial intelligence (AI).

 

[box type=”download” align=”” class=”” width=””]Click here to download the Fondation Mines-Télécom booklet on Artificial Intelligence, Cahier de veille IA (in French)[/box]

AI winters – Moments in the history of AI when doubts overshadowed previous enthusiasm.

API – (Application Programming Interface) A standardized set of methods by which a software program provides services to other software programs.

Artifact – An object made by a human.

Big Data – Massive data.

Further reading: What is big data?

Bots – Algorithmic robots.

Chatbots – Conversational bots.

Cognitivism – Paradigm of cognitive science focusing on symbols and rules.

Commodities – Basic everyday products.

Cognitive agent – Software that acts in an autonomous, intelligent manner.

Cognitive sciences – Set of scientific disciplines grouping together neurosciences, artificial intelligence, psychology, philosophy, linguistics, anthropology, etc. An extremely vast cross-disciplinary field interested in human, animal and artificial thinking.

Connectionism – Paradigm of cognitive science based on neural networks.

Data analysis and data mining – Extraction of knowledge from data.

Decoder – An element in the signal processing chain responsible for recovering a signal after it has passed through a noisy channel.

Deep learning – Learning technique based on deep neural networks, meaning networks composed of many stacked layers.

Expert systems – Systems that make decisions based on rules and facts.

Formal neural networks – Mathematical and computational representations of biological neurons and their connections (see the toy example after this glossary).

FPGA – (Field-Programmable Gate Array) An integrated circuit which can be programmed after manufacturing.

GPU – (Graphics Processing Unit) A processor specialized in signal processing which is well suited to neural network calculations.

Predictive analytics – Techniques derived from statistics, data mining and game theory to devise hypotheses.

Machine learning – Techniques and algorithms which give computers the ability to learn.

Further reading: What is machine learning?

Semantic networks – Graphs modeling the representation of knowledge.

Value sensitive design – Approach to designing technology that accounts for human values.

Weak/strong AI – Weak AI may specialize in playing chess but is hopeless at cooking. Strong AI excels in all areas where humans are skilled.
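As a toy illustration of the “formal neural networks” entry (a sketch with arbitrary weights, not taken from the booklet), a single formal neuron computes a weighted sum of its inputs and passes it through an activation function; with suitable weights it reproduces a logical AND:

import numpy as np

def formal_neuron(inputs, weights, bias):
    # weighted sum of the inputs followed by a step activation function
    return 1 if np.dot(inputs, weights) + bias > 0 else 0

# With these weights the neuron fires only when both inputs are 1 (logical AND)
weights, bias = np.array([1.0, 1.0]), -1.5
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", formal_neuron(np.array([a, b]), weights, bias))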

 


 


Climate change as seen from space

René Garello, IMT Atlantique – Institut Mines-Télécom

[divider style=”normal” top=”20″ bottom=”20″]

[dropcap]T[/dropcap]he French National Centre for Space Studies (CNES) has recently presented two projects for monitoring greenhouse gas emissions (CO2 and methane) using satellite sensors. The satellites, which are to be launched after 2020, will supplement measurements carried out in situ.

On a global scale, this is not the first such program to measure climate change from space: the European satellites of the Sentinel series have already been measuring a number of parameters since Sentinel-1A was launched on April 3, 2014 under the aegis of the European Space Agency. These satellites are part of the European Copernicus program, which contributes to the Global Earth Observation System of Systems (GEOSS), carried out on a global scale.

Since Sentinel-1A, the satellite's successors 1B, 2A, 2B and 3A have been launched successfully. They are each equipped with sensors with various functions. The first two satellites carry a radar imaging system for so-called “all-weather” data acquisition, the radar wavelengths being unaffected by cloud cover, whether at night or during the day. Infrared optical observation systems allow the next two satellites to monitor the temperature of ocean surfaces. Sentinel-3A also carries four sensors measuring radiometry, temperature, altimetry and the topography of surfaces (both ocean and land).

The launch of these satellites builds on the numerous space missions already in place on a European and global scale. The data they record and transmit give researchers access to many parameters, showing us the planet's “pulse”. Some of these data concern the ocean (waves, wind, currents, temperatures, etc.), showing the evolution of large water masses. The ocean acts as the engine of the climate, and even small variations are directly linked to changes in the atmosphere, whose consequences can sometimes be dramatic (hurricanes). Data collected by sensors over continental surfaces concern variations in humidity and soil cover, whose consequences can also be significant (drought, deforestation, loss of biodiversity, etc.).

[Incredible image from the eye of #hurricane #Jose taken on Saturday by the satellite #Sentinel2 Pic @anttilip]

Masses of data to process

Processing of data collected by satellites is carried out on several levels, ranging from research labs to more operational uses, not forgetting formatting activity done by the European Space Agency.

The scientific community is focusing increasingly on “essential variables” (physical, biological, chemical, etc.) as defined by groups working on climate change (in particular GCOS in the 1990s). The aim is to define a measurement, or group of measurements (the variable), that contributes in a critical way to characterizing the climate.

A considerable number of these variables are, of course, precise enough to be turned into indicators, allowing us to confirm whether or not the UN's sustainable development goals have been achieved.


The Boreal AJS 3 drone is used to take measurements at a very low altitude above the sea

 

The identification of these “essential variables” may be achieved after data processing, by combining satellite data with data obtained from a multitude of other sensors, whether located on land, under the sea or in the air. Technical progress (such as images with high spatial or temporal resolution) allows us to use increasingly precise measurements.

The Sentinel program operates in multiple fields of application, including environmental protection, urban management, spatial planning at regional and local levels, agriculture, forestry, fishing, healthcare, transport, sustainable development, civil protection and even tourism. Among all these concerns, climate change is at the center of the program's attention.

The effort made by Europe has been considerable, representing an investment of over €4 billion between 2014 and 2020. However, the project also has very significant economic potential, particularly in terms of innovation and job creation: economic gains in the region of €30 billion are expected between now and 2030.

How can we navigate these oceans of data?

Researchers, as well as key players in the socio-economic world, are constantly seeking more precise and comprehensive observations. However, with spatial observation coverage growing over the years, the mass of data obtained is becoming quite overwhelming.

While a smartphone holds a few gigabytes of memory, spatial observation generates petabytes of data to be stored; soon we may even be talking in exabytes, that is, billions of gigabytes. We therefore need to develop methods for navigating these oceans of data, while keeping in mind that the useful information represents only a fraction of what is collected. Even with masses of data available, the number of essential variables is actually relatively small.

Identifying phenomena on the Earth’s surface

The most recent developments aim to pinpoint the best possible methods for identifying phenomena, using signals and images representing a particular area of the Earth. These phenomena include waves and currents on the ocean surface; the characterization of forests, wetlands, coastal or flood-prone areas; and urban expansion on land. All this information can help us to predict extreme phenomena (hurricanes), manage post-disaster situations (earthquakes, tsunamis) or monitor biodiversity.

The next stage consists of making processing more automatic, by developing algorithms that allow computers to find the relevant variables in as many databases as possible. Higher-level information and intrinsic parameters, such as physical models, human behavior and social networks, should then be added to this.

This multidisciplinary approach constitutes an original trend that should allow us to qualify the notion of “climate change” more concretely, going beyond just measurements to be able to respond to the main people concerned – that is, all of us!

[divider style=”normal” top=”20″ bottom=”20″]

René Garello, Professor in Signal and Image Processing, “Image and Information Processing” department, IMT Atlantique – Institut Mines-Télécom

The original version of this article was published on The Conversation.


Julien Bras: nature is his playground

Cellulose is one of the most abundant molecules in nature. At the nanoscale, its properties allow it to be used for promising applications in several fields. Julien Bras, a chemist at Grenoble INP, is working to further develop the use of this biomaterial. On November 21st he received the IMT-Académie des Sciences Young Scientist Prize at the official awards ceremony held in the Cupola of the Institut de France.

 

Why develop the use of biomass?

Julien Bras: When I was around 20, I realized that oil was a resource that would not last forever, and that we would need to find new solutions. At that time, society was beginning to become aware of the problems of pollution in cities, especially due to plastics, as well as the dangers of global warming. So I thought we should propose something that would allow us to use the considerable renewable resources that nature has to offer. I therefore attended an engineering school specializing in chemistry and the development of agro-resources, and then did a thesis with Ahlstrom on biomaterials.

What type of biomaterials do you work with?

JB: I work with just about all renewable materials, but especially with cellulose, which is a superstar in the world of natural materials. Nature produces hundreds of billions of tons of this polymer each year. For thousands of years, it has been used to make clothing, paper, etc. It is very well known and offers numerous possibilities. Although I work with all biomaterials, I am specialized in cellulose, and specifically its nanoscale properties.

What makes cellulose so interesting at the nanoscale?

JB: There are two major uses for cellulose at this scale. We can make cellulose nanocrystals, which have very interesting mechanical properties: they are much stronger than glass fibers and can be used, for example, to reinforce plastics. And we can also design nanofibers, which are longer and more flexible than the crystals and tangle together easily. This makes it possible to create very light, transparent systems covering a large surface area. In one gram of nanofibers, the available exchange surface can reach up to two hundred square meters.

In which industry sectors do we find these forms of nanocellulose? 

JB: For now, few sectors really use them on a large scale, but their use is growing quickly. We find nanocellulose in a few niche applications, such as composites, cosmetics, paper and packaging. Within my team, we are leading projects with a wide variety of sectors, to make car fenders, moisturizer, paint, and even bandages for the medical sector. This shows how interested manufacturers are in these biomaterials.

Speaking of applications, you helped create a start-up that uses cellulose

JB: Between 2009 and 2012, we participated in the European project Sunpap. The goal was to scale up the production of cellulose nanoparticles. The thesis conducted as part of this project led us to file two patents, for cellulose powders and functionalized nanocellulose. We then embarked on the adventure of creating a start-up called Inofib. As one of the first companies in this field, the start-up contributed significantly to the industrial development of these biomaterials. Today, the company focuses on developing specific functionalizations and applications for cellulose nanofibers. It is not seeking to compete with the other major players in this field, who have since begun working on nanocellulose with European support; rather, it seeks to differentiate itself through its expertise and the new functions it offers.

Can nanocellulose be used to design smart materials?  

JB: When I began my research, I was working separately on smart materials and nanocellulose. In particular, I worked with a manufacturer to develop conductive and transparent inks for high-quality materials, which led to the creation of another start-up: Poly-Ink. As things continued to progress, I decided to combine the two areas I was working on. Since 2013, I have been working on designing nanocellulose-based inks, which make it possible to create flexible, transparent and conductive layers to replace, for example, layers that are on the screens of mobile devices.

In the coming years, what areas of nanocellulose will you be focusing on?

JB: I would like to continue in this area of expertise by maturing these solutions to the point where they can be produced at scale. One of my current goals is to design them using green engineering processes, which limit the use of toxic solvents and are compatible with an environmental approach. I would then like to expand their functions so that they can be used in more fields and with improved performance. I really want to demonstrate the value of developing nanocellulose, and I need to keep an open mind so I can find new applications.

 

[divider style=”normal” top=”20″ bottom=”20″]

Biography of Julien Bras

Julien Bras, 39, has been an associate research professor at Grenoble INP-Pagora since 2006, as well as deputy director of the LGP2 (Paper Process Engineering Lab). He previously worked as an engineer in the paper industry in France, Italy and Finland. For over 15 years, Julien Bras has focused his research on developing a new generation of high-performance cellulosic biomaterials and promoting the use of these agro-resources.

The industrial dimension of his research is not limited to these collaborations: it also extends to nine registered patents and, in particular, to the two spin-offs Julien Bras helped found. One specializes in producing conductive and transparent inks for the electronics industry (Poly-Ink), and the other in producing nanocellulose for the paper, composite and chemical industries (Inofib).

[divider style=”normal” top=”20″ bottom=”20″]