
Millimeter waves for mobile broadband

5G will inevitably involve opening new frequency bands allowing operators to increase their data flows. Millimeter waves are among the favorites, as they have many advantages: large bandwidth, adequate range, and small antennas. Whether or not they are opened will depend on whether they live up to expectations. The European project H2020 TWEETHER is trying to demonstrate precisely this point.

 

In telecommunications, the larger the bandwidth, the greater the maximum volume of data it can carry. This rule stems from the work of Claude Shannon, the father of information theory. Far from anecdotal, this physical law partly explains the relentless competition between operators (see the insert at the end of the article). They fight for the largest bands in order to offer higher data rates, and therefore a better quality of service to their users. However, the frequencies they use are heavily regulated, as they share the spectrum with other services: naval and satellite communications, bands reserved for law enforcement, medical services, etc. With so many different users, part of the frequency spectrum is now saturated.
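Shannon’s result can be made concrete with the Shannon-Hartley formula, which gives the capacity C = B · log2(1 + SNR) of a channel of bandwidth B. The short sketch below, with purely illustrative bandwidth and signal-to-noise figures that are not taken from the article, shows how the capacity ceiling grows with the band an operator holds.

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley limit: maximum error-free data rate (bit/s) of a channel."""
    return bandwidth_hz * math.log2(1 + snr_linear)

snr = 10 ** (20 / 10)  # a 20 dB signal-to-noise ratio, chosen for illustration only

# At a fixed SNR, doubling the bandwidth doubles the capacity ceiling.
for bw_mhz in (10, 20, 100):
    capacity = shannon_capacity(bw_mhz * 1e6, snr)
    print(f"{bw_mhz:>4} MHz -> {capacity / 1e6:.0f} Mbit/s")
```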

We must therefore shift to higher frequencies, into previously unused ranges, to allow operators to use more bands and increase their data rates. This is of paramount importance for the development of 5G. Among the potential bands to be used by mobile operators, experts are looking into millimeter waves. “Wavelength is directly linked to frequency,” explains Xavier Begaud, a telecommunications researcher at Télécom ParisTech. “The higher the frequency, the shorter the wavelength. Millimeter waves are located at high frequencies, between 30 and 300 GHz.”
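Xavier Begaud’s definition can be checked directly from the relation λ = c / f. A minimal sketch, using simply the band edges he quotes:

```python
C = 299_792_458  # speed of light in vacuum, m/s

for f_ghz in (30, 60, 300):
    wavelength_mm = C / (f_ghz * 1e9) * 1e3
    print(f"{f_ghz:>3} GHz -> wavelength of about {wavelength_mm:.1f} mm")
# 30 GHz gives roughly 10 mm and 300 GHz roughly 1 mm, hence the name "millimeter waves".
```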

Besides being available over a large bandwidth, they offer several other advantages. With a range of between several hundred meters and a few kilometers, they correspond to the size of the microcells planned for improving the country’s network coverage. With the rising number of smartphones and mobile devices, the current cells, which span several tens of kilometers, are saturated. By reducing the size of the cells, each antenna’s data rate would be shared among fewer users, providing better service for everyone. In addition, cellular communication is less effective when the user is far from the antenna, and smaller cells mean closer proximity to the antennas.

Another advantage is that the size of an antenna scales with the wavelength it transmits and receives. For millimeter waves, base stations and other transmission hubs would be a few centimeters in size at most. Beyond aesthetics, the discreetness of millimeter-wave equipment would be welcome at a time when people are growing wary of electromagnetic waves and of the large antennas used by operators. Smaller base stations would also require less installation work, making deployment quicker and less costly.

An often-cited downside of these waves is that they are attenuated by the atmosphere. Oxygen in the air absorbs strongly around 60 GHz, and other molecules absorb the waves above and below this frequency. This is of course an unavoidable limitation, but for Xavier Begaud, this characteristic can also be seen as an advantage: the natural attenuation confines the waves to small areas. “By limiting their propagation, we can minimize interference with other 60 GHz systems,” the researcher highlights.
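To give an order of magnitude of what “small cells” means at these frequencies, the sketch below evaluates the free-space path loss (Friis formula) over one kilometer at a few carrier frequencies; the oxygen-absorption figure in the comment is a commonly quoted rough value, not one taken from the article.

```python
import math

C = 299_792_458  # speed of light, m/s

def free_space_path_loss_db(distance_m, freq_hz):
    """Friis free-space path loss, in dB."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

distance = 1_000  # one kilometer, roughly the cell sizes discussed above
for f_ghz in (2.6, 26, 60):
    loss = free_space_path_loss_db(distance, f_ghz * 1e9)
    print(f"{f_ghz:>5} GHz over {distance} m: free-space loss of about {loss:.0f} dB")

# Around 60 GHz, oxygen absorption adds on the order of 15 dB per kilometer on top
# of this (an approximate, commonly cited figure), which is why 60 GHz links stay
# naturally confined to small cells.
```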

 

TWEETHER: creating infrastructure for millimeter waves

Since 2015, Xavier Begaud has been involved in the European project TWEETHER, funded by the H2020 program and led by Prof. Claudio Paoloni from Lancaster University. The partners include both public (Goethe University Frankfurt, Universitat Politècnica de València, Télécom ParisTech) and private actors (Thales Electron Devices, OMMIC, HFSE GmbH, Bowen, Fibernova Systems SL) working towards a demonstration of the infrastructure in 2018. The objective of the TWEETHER project is to set a milestone in millimeter-wave technology with the realization of the first W-band (92-95 GHz) wireless system for the distribution of high-speed Internet everywhere. The aim is to build the millimeter-wave point-to-multipoint segment that links the fiber backbone to sub-6 GHz distribution, the final leg currently provided by LTE and WiFi, and soon by 5G. The result would be a full three-segment hybrid network (fiber, TWEETHER system, sub-6 GHz distribution), the most cost-effective architecture for reaching mobile or fixed end users. The TWEETHER system is indeed expected to provide economical broadband connectivity, with a capacity of up to 10 Gbit/s per km² and the distribution of hundreds of Mbit/s to tens of terminals. This will allow the capacity and coverage challenges of current backhaul and access solutions to be overcome.

Horn antenna for millimeter waves

This system has been made possible by recent technological progress. As many parts of the system must be millimetric in size, they have to be designed with great precision. One of the essential elements in the TWEETHER system is the traveling-wave tube used in the hub, which amplifies the power of the waves. The tube itself is not a new invention, and has been used in other frequency ranges for several decades. For millimeter waves, however, it had to be miniaturized, something that was previously impossible, while still delivering close to 40 W of power for TWEETHER. Designing the antennas, several horns and a lens, for a system of this scale is also challenging at such high frequencies. This part was supervised by Xavier Begaud. The antennas were measured at Télécom ParisTech in an anechoic chamber, allowing researchers to characterize the radiation patterns up to 110 GHz. The project overcame scientific and technological barriers, opening up the possibilities for large-scale millimeter-wave systems.

The TWEETHER project is a prime example of the potential of millimeter waves for providing broadband Internet to a large number of users. Beyond mobile communication, they could also provide an attractive alternative to Fiber to the Home (FTTH), which requires extensive civil engineering work and entails high maintenance costs. Transporting data to buildings over wireless broadband links rather than fiber could therefore interest operators.

This article is part of our dossier 5G: the new generation of mobile is already a reality

[box type=”info” align=”” class=”” width=””]

A fight over bands between operators, under the watch of Arcep

In France, the allocation of frequency bands is managed by the telecommunications regulatory authority, Arcep. When the agency decides to open a new range of frequencies to operators, it holds an auction. In 2011, for example, the bands at 800 MHz and 2.6 GHz were sold to four French operators for a total sum of €3.5 billion, to the benefit of the State, as Arcep is a governmental authority. In reality, “opening the band at 800 MHz” means selling operators duplex lots of 10 MHz (10 MHz for the uplink and the same for the downlink) around this frequency. SFR, for example, paid over €1 billion for the band between 842 and 852 MHz for the uplink, and between 801 and 811 MHz for the downlink. The same applied to the band at 2.6 GHz, sold in lots of 15 or 20 MHz.

[/box]


5G… and beyond: Three standardization challenges for the future of communication

[dropcap]G[/dropcap]enerations of mobile technologies come around at a rate of about one every ten years. Ten years is also roughly the amount of time it takes to create them. No sooner has a generation been offered to end consumers than researchers are working on the next one. It is therefore hardly surprising that we are already seeing technologies which could appear in the context of a 5G+, or even a potential 6G. That is, as long as they manage to convince the standardization bodies, which decide on the technologies to be selected, before the final choices are made by 2019.

Researchers at IMT Atlantique are working on new, cutting-edge technologies for transmission and signal coding. Three of these technologies will be presented here. They offer a glimpse of both the technical challenges in improving telecommunications, and the challenges of standardization presented ahead of the commercial roll-out of 5G.

 

Turbo codes: flexible up to a point

Turbo codes were invented at IMT Atlantique (formerly Télécom Bretagne) by Claude Berrou in 1991. They are an international standard among error-correcting codes and are used in 4G in particular. Their advantage lies in their flexibility. “With one turbo code, we can code any size of message,” highlights Catherine Douillard, a digital communications researcher at IMT Atlantique. Like all error-correcting codes, they correct errors all the better as transmission quality improves. However, there is a threshold beyond which they can no longer improve their correction rate, despite an improved signal.

“We have recently found a way to solve this problem, which has led to a patent with Orange,” explains Catherine Douillard. Turbo codes could well remain an unavoidable family of error-correcting codes in telecommunications. However, the standardization phases for 5G have already begun. They are divided into three phases, relating to the three types of purpose the new generation is expected to fulfill: increased data rates, ultra-reliable communication, and machine-to-machine communication. For the first purpose, other error-correcting codes have been selected: LDPC codes, based on the work of Robert Gallager at MIT in the early 1960s. The control channel will be protected by polar codes. For the other purposes, the standardization committees are due to meet in 2018. Turbo codes, polar codes and LDPC codes will once again be in competition.

Beyond the technological battle for 5G, the three families of codes are also being examined closely for the longer term. The European H2020 project EPIC brings together manufacturers and researchers, including IMT Atlantique, to work on one issue: increasing the throughput of error-correcting codes. Turbo codes, LDPC codes and polar codes are all being examined, reworked and updated in parallel. The goal is to make them compatible with the decoding of signals at rates of around a terabit per second. To achieve this, they are implemented directly in hardware in mobile terminals and antennas (see the insert at the end of the article).
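Turbo codes, LDPC codes and polar codes are far too involved for a few lines of code, but the principle every error-correcting code relies on, adding redundancy so that the receiver can detect and correct transmission errors, can be illustrated with a classic Hamming(7,4) code. The sketch below is a generic textbook construction, not one of the codes being standardized for 5G.

```python
import numpy as np

# Systematic Hamming(7,4): 4 data bits + 3 parity bits, corrects any single bit flip.
P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])      # generator matrix (4 x 7)
H = np.hstack([P.T, np.eye(3, dtype=int)])    # parity-check matrix (3 x 7)

def encode(data4):
    return (np.array(data4) @ G) % 2

def decode(received7):
    syndrome = (H @ received7) % 2
    if syndrome.any():                        # non-zero syndrome: locate and flip the error
        for i in range(7):
            if np.array_equal(H[:, i], syndrome):
                received7 = received7.copy()
                received7[i] ^= 1
                break
    return received7[:4]                      # systematic code: the data bits come first

codeword = encode([1, 0, 1, 1])
noisy = codeword.copy()
noisy[5] ^= 1                                 # the channel flips one bit
print(decode(noisy))                          # -> [1 0 1 1], the error has been corrected
```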

 

FBMC: a new waveform to replace OFDM?

If 5G is to incorporate communication between connected objects, it will have to make space on the frequency bands to allow machines to speak to each other. “We will have to make holes at very specific frequencies in the existing spectrum to insert Internet of Things communications,” says Catherine Douillard. But the currently standardized waveform, OFDM, does not allow this: its level of interference with adjacent frequencies is too high. In other words, the space freed in the frequency band is not “clean”, and would suffer from interference from adjacent frequencies. Another waveform is therefore being studied: FBMC. “With this, we can take out a frequency here and there to insert a communication system without disruption,” the researcher sums up.
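For readers curious about what the OFDM waveform actually is, the minimal baseband sketch below shows its core mechanism: an inverse FFT places one data symbol on each orthogonal subcarrier, and a cyclic prefix protects against echoes. The sizes are illustrative, not a real 4G or 5G numerology, and FBMC’s per-subcarrier filtering is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subcarriers, cp_len = 64, 16                 # illustrative sizes only

# One QPSK symbol per subcarrier.
bits = rng.integers(0, 2, size=(n_subcarriers, 2))
symbols = (1 - 2 * bits[:, 0]) + 1j * (1 - 2 * bits[:, 1])

# OFDM modulation: IFFT to the time domain, then prepend a cyclic prefix.
time_signal = np.fft.ifft(symbols) * np.sqrt(n_subcarriers)
ofdm_symbol = np.concatenate([time_signal[-cp_len:], time_signal])

# Receiver on an ideal channel: drop the prefix, FFT back to the frequency domain.
received = np.fft.fft(ofdm_symbol[cp_len:]) / np.sqrt(n_subcarriers)
print(np.allclose(received, symbols))          # True: the symbols are recovered exactly
```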

FBMC also provides a higher quality of service when mobile terminals move quickly in a cell. “The faster a mobile terminal moves, the higher the Doppler effect is” explains Catherine Douillard, “and OFDM is not very resistant to this effect”. And yet, 5G is supposed to provide good communication at speeds of up to 400 kilometers per hour, like in a TGV train, on the classical 4G frequency bands. The advantage of FBMC is even more significant on millimeter frequencies, as the Doppler effect is even greater at higher frequencies.
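The order of magnitude of that Doppler effect is easy to check with f_d = (v / c) · f_c, the maximum Doppler shift for a terminal moving at speed v. The 2.6 GHz value stands in for the classic 4G bands; the millimeter-wave frequencies are illustrative.

```python
C = 299_792_458  # speed of light, m/s

def max_doppler_shift_hz(speed_kmh, carrier_hz):
    """Maximum Doppler shift f_d = (v / c) * f_c for a terminal moving at speed v."""
    return speed_kmh / 3.6 / C * carrier_hz

for f_ghz in (2.6, 28, 60):
    shift = max_doppler_shift_hz(400, f_ghz * 1e9)
    print(f"{f_ghz:>5} GHz at 400 km/h -> Doppler shift of about {shift:,.0f} Hz")
# The shift grows linearly with the carrier frequency, which is why waveform
# robustness matters even more at millimeter frequencies.
```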

OFDM is already used for 4G, and for the time being, the 5G standardization bodies are keeping it as the default waveform. But we probably haven’t heard the last of FBMC. It is more complex to set up, but researchers are working to simplify its implementation. Once again, the next phases of standardization could be decisive.

 

NOMA: desaturating longstanding frequency bands

The frequency bands currently used for communications are becoming ever more saturated. Millimeter frequencies could certainly alleviate the problem, but this is not the only possibility being explored by researchers. “We are working on increasing the capacity of systems to transmit more data on the same bandwidth,” explains Catherine Douillard. NOMA (non-orthogonal multiple access) places several users on the same frequency band. Interference is managed by allocating each user a different power level, a technique known as power-domain multiplexing.

“The technique works well when we pair two users with different channel qualities on the same frequency,” the researcher explains. In concrete terms, a user located close to an antenna can use NOMA to share the same frequency with a user further away. However, two users who are the same distance from the antenna, and therefore have more or less the same quality of reception, could not share it. This technique could therefore help relieve the cell saturation problem that 5G aims to address.
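A minimal sketch of this power-domain idea, under simplifying assumptions (BPSK symbols, a noiseless channel, an arbitrary 80/20 power split): the base station superposes the two users’ signals, the far user decodes its strongly powered signal directly, and the near user first removes it through successive interference cancellation (SIC) before decoding its own.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
far_bits = rng.integers(0, 2, n)                  # cell-edge user: gets most of the power
near_bits = rng.integers(0, 2, n)                 # user close to the antenna

far_sym, near_sym = 1 - 2 * far_bits, 1 - 2 * near_bits      # BPSK mapping: 0 -> +1, 1 -> -1
p_far, p_near = 0.8, 0.2                          # illustrative power split
tx = np.sqrt(p_far) * far_sym + np.sqrt(p_near) * near_sym   # superposed on the same band

# Far user: decodes its own symbols directly, treating the weaker signal as noise.
far_hat = (tx < 0).astype(int)

# Near user: successive interference cancellation (SIC) - decode the far user's
# signal, rebuild and subtract it, then decode its own symbols from the residual.
far_est = 1 - 2 * (tx < 0).astype(int)
residual = tx - np.sqrt(p_far) * far_est
near_hat = (residual < 0).astype(int)

print((far_hat == far_bits).all(), (near_hat == near_bits).all())   # True True without noise
```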

This article is part of our dossier 5G: the new generation of mobile is already a reality

[box type=”info” align=”” class=”” width=””]

Algorithms implemented directly in hardware

Algorithms are not necessarily lines of programming code. They can also be implemented directly in integrated circuits, using transistors which act as logic gates. “The advantage is that these algorithms take up a lot less space this way, as opposed to when they have to be executed on processors,” specifies Michel Jezequel, head of the electronics department at IMT Atlantique. “The algorithms are also faster and consume less energy,” he continues. To push turbo codes to rates of around a terabit per second, for example, there is no choice but to implement them in hardware: the equivalent software would not be able to process the data fast enough.

[/box]


SDN and virtualization: more intelligence in 5G networks

New uses for 5G imply a new way of managing telecommunications networks. To open the way for new actors, the network will have to be divided into virtualized slices that can be allocated dynamically to new services. This organization is made possible by SDN, a technique which redesigns the network architecture to make it as flexible as possible. Researchers at EURECOM are using SDN to make networks more intelligent.

 

4G was designed to serve a single purpose: broadband Internet access. Not only will 5G need to pursue this effort, it will also have to satisfy the needs of the Internet of Things and provide considerably more reliable communication for the transmission of sensitive data. Combining these three uses in one type of network is far from simple, especially as each of them will give rise to many services, and therefore many new operators who will have to coexist. Depending on demand, certain services will have to be favored over others, which requires managing resources dynamically.

A new way of organizing the network therefore needs to be found. One solution is network slicing. The infrastructure remains unchanged, under the control of the current operators, but is shared virtually with the new operators. “Each service shares the network with the others, but has its own independent slice which is specific to it,” explains Adlen Ksentini, a mobile networks researcher at Eurecom. The slice allocated to the virtual operator runs from end to end: it gives the new entrant both a share of the radio bandwidth and a presence on the platform that manages this radio resource.

For researchers, the main challenge is the lifespan of the slices. The slicing system needs to be able to create and close them on demand. A data collection system for connected objects will not run all day long, for example. “If an electricity distributor reads data from smart meters between midnight and 4 a.m., a slice needs to be created for that precise time window, so that resources can be allocated to other services the rest of the time,” Adlen Ksentini illustrates.
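As an illustration of that on-demand lifecycle, here is a deliberately naive sketch of a slice catalogue in which each slice is only active inside its time window; the slice names, capacity shares and times are hypothetical, loosely inspired by the smart-meter example above, and bear no relation to any real slicing API.

```python
from dataclasses import dataclass
from datetime import time

@dataclass
class NetworkSlice:
    """Toy description of a slice: a name, a share of radio resources and a time window."""
    name: str
    capacity_share: float
    start: time
    end: time

    def is_active(self, now: time) -> bool:
        return self.start <= now < self.end

slices = [
    NetworkSlice("broadband", 0.80, time(0, 0), time(23, 59)),
    NetworkSlice("smart-meters", 0.20, time(0, 0), time(4, 0)),     # midnight to 4 a.m. only
]

for now in (time(1, 30), time(10, 0)):
    active = [s.name for s in slices if s.is_active(now)]
    print(f"{now}: active slices = {active}")   # the smart-meter slice disappears after 4 a.m.
```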

Greater network intelligence

Establishing these slices is made possible by a new type of network architecture, whose behavior is programmed by software. Until now, the paths followed by data in the network were dictated by routers: physical boxes, spread throughout the network, which decided how packets of information were forwarded. With the new architecture, known as SDN, or software-defined networking, a central entity controls the equipment and makes the routing decisions.

The greatest advantage of SDN is its flexibility in network management. “In a traditional architecture, without SDN, the routing rules are set in advance and are difficult to change,” explains Christian Bonnet, another networks and telecommunications researcher at Eurecom. “SDN allows us to make the network intelligent, to change the rules if we need to,” he continues. This greater freedom is what makes it possible to slice the network up, creating rules which isolate data pathways for the specific use of each service.
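As a toy illustration of this idea (not tied to OpenFlow or to any real controller API), the sketch below separates a central controller, which alone decides the rules, from “switches” that merely apply match-action entries; reprogramming a path, or isolating a slice, amounts to installing different rules.

```python
class Switch:
    """A dumb forwarding element: it only applies the rules it has been given."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}                    # match (slice, destination) -> next hop

    def install_rule(self, match, action):
        self.flow_table[match] = action

    def forward(self, slice_id, destination):
        return self.flow_table.get((slice_id, destination), "drop")

class Controller:
    """The central entity: it computes paths and pushes rules to the switches."""
    def __init__(self, switches):
        self.switches = {s.name: s for s in switches}

    def program_path(self, slice_id, destination, hops):
        for here, nxt in zip(hops, hops[1:] + ["deliver"]):
            self.switches[here].install_rule((slice_id, destination), nxt)

s1, s2 = Switch("s1"), Switch("s2")
controller = Controller([s1, s2])
controller.program_path(slice_id="iot", destination="10.0.0.7", hops=["s1", "s2"])

print(s1.forward("iot", "10.0.0.7"), s2.forward("iot", "10.0.0.7"))  # s2 deliver
print(s1.forward("video", "10.0.0.7"))                               # drop: no rule for this slice
```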

Eurecom researchers are exploring the possibilities offered by these new architectures on the technological platform OpenAirInterface (OAI). “We are experimenting both with how to transform the 4G network to introduce intelligence, and how to shift towards a 5G architecture” Christian Bonnet explains. This open source work helps us to understand how SDN impacts the state of radio resources, its potential for creating new services, and the associated constraints or opportunities for improvement in managing mobility (see insert at the end of the article).

“Technically speaking, there are several possibilities for deploying SDN and slicing up the network. As each operator has slightly different requirements, there are many different angles to explore”, the researcher explains. Each operator could have their own way of virtualizing the network to allow for new services. 3GPP, the standardization body for mobile communication technologies, could however play a consolidating role in the near future, should more operators decide to move in a common direction.

This article is part of our dossier 5G: the new generation of mobile is already a reality

[box type=”info” align=”” class=”” width=””]

SDN for better management of mobility

SDN architecture may be used to ensure continuous data delivery to a mobile terminal. This technique makes it possible to better handle changes of interface, for example when switching from a 4G network to a WiFi network, without interrupting the flow of information. Compared with traditional mobility techniques, SDN is faster and reduces the number of flow redirection operations required. For the end user, this results in a higher quality of service.

[/box]

 


Speakshake, shaking up distance language learning

The start-up Speakshake has created a communication platform for improving your foreign language speaking skills. As well as communication, it offers users a variety of learning tools to accompany their discussions. The platform has been recognized by the French Administration as a vehicle for integration of both French nationals abroad and foreigners in France.

 

Venezuela, Brazil, China, Chile… Fanny Vallantin spent several years working as an engineer in different countries, discovering new cultures and languages wherever she went. In order to maintain her skills and keep practicing all these languages, she created a program enabling her to communicate with her colleagues in different countries. What began as a personal tool became a platform her friends could also use, and then a start-up: the company Speakshake, incubated at ParisTech Entrepreneurs since April 2017.

The start-up offers a way of connecting two users who each want to mutually improve their skills in the other’s native language, via a web service. The two participants begin a 30-minute video discussion, split into two 15-minute halves, each carried out in one of the two languages. Because of the length of the conversation, access to the service requires users to have a basic level of practical skills. “You need to have the level of a tourist who can get by abroad, who can order a coffee in a bar, in the language you want to work on” explains Fanny Vallantin.

The service aims to give its users the tools to integrate into a country, and so the conversations are directed towards cultural subjects. During the discussion, Speakshake offers various documents on the country’s traditions, history or current events. The resources are prepared in collaboration with students at the Sorbonne Nouvelle university, the platform’s official partner institution. The subjects spark discussion, but are not imposed on the participants, who remain free to speak about whatever they want and may browse through the available resources as they please.

The 15-minute conversation in the user’s language is based on their partner’s culture. A French person speaking with a German person will speak in French about German culture, and in German about their own culture. This structure means that users are continually learning about the foreign country, even when speaking their own language. The start-up currently has seven languages on offer: French, English, Spanish, German, Portuguese, Mandarin Chinese and Italian. The list is set to grow next September, to include Japanese, Arabic, Hebrew, Russian and Korean, with support from Ile-de-France tourism funds.

In addition to cultural resources, the start-up’s platform offers a host of digital tools for learning. For instance, the conversation interface includes an online chat feature for spelling out words. It also includes an online dictionary and translator. All words written in these interfaces can be added to the dashboard, which the user may look at once the conversation has finished. An oral and written report system allows users to give advice to their conversation partners, and to receive tips for their own improvement.

By focusing on oral learning through conversation, Speakshake takes a new approach in the language education sector, which is often centered on writing. And by providing educational tools, it offers an enriched communication service. This point of difference is what helped the start-up win the Quai d’Orsay hackathon in January, as a service helping young foreigners to integrate better in France, and French expats to integrate in their host country. The young company has also been recognized by the Institut Français and the Ministry for Europe and Foreign Affairs as the perfect tool for improving speaking skills in a foreign language.

 


23 terms to help understand artificial intelligence

Neural networks, predictive analytics, chatbots, data analysis, machine learning, etc. The 8th Fondation Mines-Télécom booklet provides a glossary of 23 terms to clarify the vocabulary of artificial intelligence (AI).

 

[box type=”download” align=”” class=”” width=””]Click here to download the booklet on Artificial Intelligence (Cahier de veille IA, Fondation Télécom, in French)[/box]

AI winters – Periods in the history of AI when doubts overshadowed the previous enthusiasm.

API – (Application Programming Interface) A standardized set of methods by which a software program provides services to other software programs.

Artifact – An object made by a human.

Big Data – Massive data.

Further reading: What is big data?

Bots – Algorithmic robots.

Chatbots – Conversational bots.

Cognitivism – The paradigm of cognitive science focused on symbols and rules.

Commodities – Basic everyday products.

Cognitive agent – Software that acts in an autonomous, intelligent manner.

Cognitive sciences – The set of scientific disciplines grouping together neurosciences, artificial intelligence, psychology, philosophy, linguistics, anthropology, etc.; an extremely vast cross-disciplinary field interested in human, animal and artificial thinking.

Connectionism – The paradigm of cognitive science based on neural networks.

Data analysis and data mining – The extraction of knowledge from data.

Decoder – An element in the signal processing chain responsible for recovering a signal after it has passed through a noisy channel.

Deep learning – A learning technique based on deep neural networks, meaning networks composed of many stacked layers.

Expert systems – Systems that make decisions based on rules and facts.

Formal neural networks – Mathematical and computational representations of biological neurons and their connections.

FPGA – (Field-Programmable Gate Array) An integrated circuit that can be programmed after manufacturing.

GPU – (Graphics Processing Unit) A processor specialized in parallel image processing, well suited to neural network calculations.

Predictive analytics – Techniques derived from statistics, data mining and game theory used to devise hypotheses.

Machine learning – Techniques and algorithms that give computers the ability to learn.

Further reading: What is machine learning?

Semantic networks – Graphs modeling the representation of knowledge.

Value sensitive design – An approach to designing technology that accounts for human values.

Weak/strong AI – Weak AI may specialize in playing chess but is hopeless at cooking; strong AI would excel in all areas where humans are skilled.

 


 


Climate change as seen from space

René Garello, IMT Atlantique – Institut Mines-Télécom

[divider style=”normal” top=”20″ bottom=”20″]

[dropcap]T[/dropcap]he French National Centre for Space Research (CNES) has recently presented two projects for monitoring greenhouse gas emissions (CO2 and methane) using satellite sensors. The satellites, which are to be launched after 2020, will supplement measurements carried out in situ.

On a global scale, this is not the first such program to measure climate change from space: the European satellites from the Sentinel series have already been measuring a number of parameters since Sentinel-1A was launched on April 3, 2014 under the aegis of the European Space Agency. These satellites are part of Copernicus, the European Earth observation program, which contributes to the Global Earth Observation System of Systems (GEOSS).

Since Sentinel-1A, the satellite’s successors 1B, 2A, 2B and 3A have been launched successfully. They are each equipped with sensors with various functions. The first two carry a radar imaging system for so-called “all-weather” data acquisition, since radar wavelengths are unaffected by cloud cover and work by night as well as by day. Infrared and optical observation systems allow the next two satellites to monitor the temperature of ocean surfaces. Sentinel-3A also carries four sensors measuring radiometry, temperature, altimetry and the topography of surfaces (both ocean and land).

The launch of these satellites builds on the numerous space missions that are already in place on a European and global scale. The data they record and transmit grant researchers access to many parameters, showing us the planet’s “pulse”. These data partially concern the ocean (waves, wind, currents, temperatures, etc.) showing the evolution of large water masses. The ocean acts as an engine to the climate and even small variations are directly linked to changes in the atmosphere, the consequences of which can sometimes be dramatic (hurricanes). Data collected by sensors for continental surfaces concern variations in humidity and soil cover, whose consequences can also be significant (drought, deforestation, biodiversity, etc.).

[Incredible image from the eye of #hurricane #Jose taken on Saturday by the satellite #Sentinel2 Pic @anttilip]

Masses of data to process

Processing of data collected by satellites is carried out on several levels, ranging from research labs to more operational uses, not forgetting formatting activity done by the European Space Agency.

The scientific community is focusing increasingly on “essential variables” (physical, biological, chemical, etc.) as defined by groups working on climate change (in particular GCOS in the 1990s). They are attempting to define a measure or group of measures (the variable) that will contribute to the characterization of the climate in a critical way.

There are, of course, a considerable number of variables that are sufficiently precise to be made into indicators, allowing us to confirm whether or not the UN’s Sustainable Development Goals have been achieved.


The Boreal AJS 3 drone is used to take measurements at a very low altitude above the sea

 

The identification of these “essential variables” may be achieved after data processing, by combining this with data obtained by a multitude of other sensors, whether these are located on the Earth, under the sea or in the air. Technical progress (such as images with high spatial or temporal resolution) allows us to use increasingly precise measures.

The Sentinel program operates in multiple fields of application, including: environmental protection, urban management, spatial planning on a regional and local level, agriculture, forestry, fishing, healthcare, transport, sustainable development, civil protection and even tourism. Amongst all these concerns, climate change features at the center of the program’s attention.

The effort made by Europe has been considerable, representing an investment of over €4 billion between 2014 and 2020. However, the project also has very significant economic potential, particularly in terms of innovation and job creation: economic gains in the region of €30 million are expected between now and 2030.

How can we navigate these oceans of data?

Researchers, as well as key players in the socio-economic world, are constantly seeking more precise and comprehensive observations. However, with spatial observation coverage growing over the years, the mass of data obtained is becoming quite overwhelming.

Considering that a smartphone holds a memory of several gigabytes, spatial observation generates petabytes of data to be stored; and soon we may even be talking in exabytes, that is, billions of gigabytes. We therefore need to develop methods for navigating these oceans of data, whilst still keeping in mind that the information in question only represents a fraction of what is out there. Even with masses of data available, the number of essential variables is actually relatively small.

Identifying phenomena on the Earth’s surface

The most recent developments aim to pinpoint the best possible methods for identifying phenomena, using signals and images representing a particular area of the Earth. These phenomena include waves and currents at the ocean surface and, on land, the characterization of forests, wetlands, coastal or flood-prone areas, urban expansion, etc. All this information can help us to predict extreme phenomena (hurricanes), manage post-disaster situations (earthquakes, tsunamis) or monitor biodiversity.

The next stage consists in making processing more automatic, by developing algorithms that allow computers to find the relevant variables in as many databases as possible. Higher-level information and intrinsic parameters, such as physical models, human behavior and social networks, should then be incorporated.

This multidisciplinary approach constitutes an original trend that should allow us to qualify the notion of “climate change” more concretely, going beyond just measurements to be able to respond to the main people concerned – that is, all of us!

[divider style=”normal” top=”20″ bottom=”20″]

René Garello, Professor in Signal and Image Processing, “Image and Information Processing” department, IMT Atlantique – Institut Mines-Télécom

The original version of this article was published on The Conversation.


Pierre Rouchon: research in control

Pierre Rouchon, a researcher at Mines ParisTech, is interested in the control of systems. Whether it be electromechanical systems, industrial facilities or quantum particles, he works to observe their behavior and optimize their performance. In the course of his work, he had the opportunity to work with the research group led by Serge Haroche, winner of the 2012 Nobel Prize in Physics. On November 21st, Pierre Rouchon was awarded the Grand Prix IMT-Académie des Sciences at an official ceremony held in the Cupola of the Institut de France.

 

From the beginning, you have focused your research on control theory. What is it?

Pierre Rouchon: My specialty is automatic control: how to optimize the control of dynamic systems. The easiest way to explain this is through an example. I worked on a problem that is well known in mobile robotics: how to parallel park a car hauling several trailers. If you have ever driven a car with a trailer in reverse, you know that you intuitively take the trajectory of the back of the trailer as the reference point. This is what we call a “flat output”; together, the car and trailer form a “flat system” for which simple algorithms exist for planning and tracking the trajectories. For this type of example, my research showed the value of controlling the trajectory of the last trailer, and developing efficient feedback algorithms based on that trajectory. This requires modelling the system and its movements, or, as we used to say, putting them into equations.

What does this type of research achieve?

PR: It reduces the calculations that need to be made. A crane is another example of a flat system. By taking the trajectory of the load carried by the crane as the flat output, rather than the crane’s arm or hoisting winch, far fewer calculations are required. This leads to the development of more efficient software that assists operators in steering the crane, which speeds up their handling operations.

This seems very different from your current work in physics!

PR: What I’m interested in is the concept of feedback. When you measure and observe a classical system, you do so without disturbing it. You can therefore make a correction in real time using a feedback loop: this is the practical value of feedback, which makes systems easier to run and resistant to the disturbances they face. But in quantum systems, you disturb the system just by measuring it, and you therefore have an initial feedback due to the measurement. Moreover, the controller itself can be another quantum system. In quantum systems, the concept of feedback is therefore much more complex. I began studying this with one of my former students, Mazyar Mirrahimi, in the early 2000s. In fact, in 2017 he received the Inria-Académie des sciences Young Researcher Prize, and we still work together.

What feedback issue did you both work on in the beginning?

PR: When we started, we were taking Serge Haroche’s classes at the Collège de France. In 2008, we started working with his team on the experiment he was conducting. He was trying to manipulate and control photons trapped between two mirrors. He had developed very subtle “non-destructive” measurements for counting the photons without destroying them. He earned a Nobel Prize in 2012 for this work. Along with Nina Amini, who was working on her thesis under our joint supervision, Mazyar and I first worked on the feedback loop that in 2011 made it possible to stabilize the number of photons around a setpoint, a small whole number of a few units.

Are you still interested in quantum feedback today?

PR: Yes, we are seeking to develop systematic mathematical methods for designing feedback loops with a hybrid structure: the first part of the controller is conventional, whereas the second part is a quantum system. To design these methods, we rely on superconducting quantum circuits. These are electronic circuits that behave quantum-mechanically at very low temperature, and they are currently the object of much study. They are controlled and measured by radio-frequency waves in the gigahertz range, which propagate along coaxial cables. We are currently working with experimenters to develop a quantum logic bit (logical qubit), which is one of the basic components of the famous quantum computer that everyone is working towards!

Is it important for you to have practical and experimental applications for your research?

PR:  Yes. It is important for me to have direct access to the problem I’m studying, to the objective reality shared by the largest possible audience. Working on concrete issues, with a real experiment or a real industrial process enables me to go beyond simulations and better understand the underlying mathematical methods. But it is a daunting task: in general, nothing goes according to plan. When I was working on my thesis, I worked with an oil refinery on controlling the quality of distillation columns. I arrived at the site with a floppy disk containing a Fortran code for a control algorithm tested through laboratory simulations. The engineers and operators on-site said, “Ok, let’s try it, at worst we’ll pour into the cavern”. The cavern was used to store the non-compliant portion of the product, to be reprocessed later. At first, the tests didn’t work, and it was awful and devastating for a theoretician. But when the feedback algorithm finally started working, what a joy and relief!

 

[divider style=”normal” top=”20″ bottom=”20″]

Biography of Pierre Rouchon


Pierre Rouchon, 57, is a professor at Mines ParisTech, and the director of the school’s Mathematics and Systems Department. He is a recognized specialist in control theory. He has made major scientific contributions to the three main themes of this discipline: flat systems in connection with trajectory planning, quantum systems, and invariant asymptotic observers.

His work has had, and continues to have, a significant impact on a fundamental level. It has been presented in 168 publications, cited 12,000 times, and been the subject of 9 patents. His work has been further reinforced by industrial collaborations, through which concrete and original solutions have been created. Examples include the control of electric motors for Schneider Electric, the development of cryogenic air distillation for Air Liquide, and the regulation of diesel engines to reduce fine-particle emissions with IFP and PSA.

[divider style=”normal” top=”20″ bottom=”20″]

 


Sébastien Bigo: setting high-speed records

Driven by his desire to take the performance of fiber optics to the next level, Sébastien Bigo has revolutionized the world of telecommunications. His work at Nokia Bell Labs has now set nearly 30 world records for the capacity and distance of optical communications, including the first transmission at a rate of 10 terabits per second. The coherent optical networks he helped develop are now used every day for transmitting digital data. On November 21st, he received the Grand Prix IMT-Académie des sciences for the entirety of his work at the official awards ceremony held at the Cupola of the Institut de France.

 

How did you start working on optical communications?

Sébastien Bigo: Somewhat by mistake. When I was in preparatory classes, I was interested in electronics. On the day of my entrance exams, I forgot to hand in an extra page where I had written a few calculations. When I received my results, I was one point away from making the cutoff for admission to the electronics school I wanted to attend — and would have been able to attend if I had handed in that paper. However, my exam results allowed me to attend the graduate school SupOptique, which recruits students using the same entrance exam, based on a slightly different scale. It’s funny actually: if I had handed in that paper, I would be working on electronics!

But were you at least interested in optics?

SB: I had a fairly negative image of optical telecommunications. At the time, the work of optics engineers consisted in simply finding the right lens for injecting light into a fiber. I didn’t think that was very exciting… When I contacted Alcatel in search of a thesis topic, I asked them if they had anything more advanced. I was interested in optical signal processing: what light can do to itself. They just happened to have a topic on this subject.

And from there, how did you begin your work in telecommunications?

SB: Through my work in optical signal processing, I came to work on pulses that are propagated without changing their shape: solitons. Thanks to these waves, I was able to make the first completely optical regeneration of a signal, which allows an optical signal to be sent further without converting it into an electrical signal. This enabled me to create the first demonstration of a completely optical transatlantic communication. Later, solitons were replaced by WDM technologies — multicolored pulses produced by a different laser beam for each color — which produce much better rates. This is when I truly got started in the telecommunications profession, and I started setting a series of 29 world records for transmission rates.

What do these records mean for you?  

SB: The competition to find the best rates is a global one. It’s always gratifying when we succeed before the others. This is what makes the game so interesting for me: knowing that I’m competing against people who are always trying to make things better by reinventing the rules every time. And winning the game has even greater merit since I don’t win every time. Pursuing records then leads to innovations. In the early 2000s, we developed the TeraLight fiber, which was a huge industrial success. This enabled us to continue to set remarkable records later.

Are some records more important than others?

SB: The first one was, when I succeeded in making the first transmission over a transatlantic distance at a rate of 20 gigabits per second, using optical periodic regeneration. Then there are records that are symbolic. Like when I successfully reached a rate of 10 terabits per second. No one had done this before, despite the fact that we had given the secret away shortly before, when we reached 5 terabits per second. And that time we finished our measurements at 7am on the first day of the conference where we would announce the record. I almost missed my flight because of it. The competition is so intense that we submit the results at the very last minute.

Is this quest for increasingly higher rates what led you to develop coherent optical networks?   

SB: I began working on coherent optical networks in 2006, when we realized that we had reached the limit of what we knew how to do. The records had allowed us to independently fine-tune elements that no one had put together before. By adapting our previous findings to modulation, receivers, signal processing, propagation and polarization, we created an optical system that is truly a cut above the rest, and it has become the new industry standard. This led to a new record being set, with the product of the data rate and the propagation distance exceeding 100 petabits per second × kilometers [1 petabit = 1,000 terabits]. To achieve this, we transmitted 15.5 terabits per second over a distance of 7,200 kilometers. This is above all a perfect example of what a system is: a combination of elements that together are worth much more than the sum of each one separately.
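A quick check of that figure of merit, using only the numbers quoted above:

```python
rate_tbps = 15.5          # terabits per second, as quoted above
distance_km = 7_200       # kilometers

product_pbps_km = rate_tbps * distance_km / 1_000   # 1 petabit = 1,000 terabits
print(f"{product_pbps_km:.1f} Pbit/s x km")         # about 111.6, above the 100 mark
```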

What is your current outlook for the future?

SB: Today I am working on optical networks, which in a way are systems of systems. For a European network, I am focusing on what path to take in order for data transport to be as inexpensive and efficient as possible. I am convinced that this is the area in which major changes will occur in coming years. It is becoming difficult to increase the fibers’ capacity as we approach the Shannon limit. Therefore, to continue transmitting information, we need to think about how we can optimize the filling of communication channels. The goal is to transform the networks to introduce intelligence and make life easier for operators.

 

[divider style=”normal” top=”20″ bottom=”20″]

Biography of Sébastien Bigo


Sébastien Bigo, 47, director of the IP and Optical Networks research group at Nokia Bell Labs, belongs to the great French school of optics applied to telecommunications. Through his numerous innovations, he has been and continues to be a global pioneer in high-speed fiber optic transmission.

The topics he has studied have been presented in 300 publications in journals and at conferences. He has also filed 42 patents, representing an impressive number of contributions to the different aspects of the scientific field on which he has had such a profound impact. These results have been cited over 8,000 times and have enabled 29 experimental demonstrations, each constituting a world record in terms of capacity or transmission distance.

Some of the resultant innovations have generated significant economic activity. Particular examples include the TeraLight fiber, which Sébastien Bigo helped develop and which was rolled out over several million kilometers, and coherent networks, which are now used by billions of people every week. These are certainly two of France’s most resounding successes in communication technology.

[divider style=”normal” top=”20″ bottom=”20″]

 

 

 


Invenis: machine learning for non-expert data users

Invenis went into incubation at Station F at the beginning of July, and has since been developing at full throttle. This start-up has managed to make a name for itself in the highly competitive sector of decision support solutions using data analysis. Its strength? Providing easy-to-use software aimed at non-expert users, which processes data using efficient machine learning algorithms.

 

In 2015, Pascal Chevrot and Benjamin Quétier, both working at the Ministry of Defense at the time, made an observation that made them want to launch a business. They considered that the majority of businesses were using outdated digital decision support tools that were increasingly ill-suited to their needs. “On the one hand, traditional software was struggling to adapt to big data processing and artificial intelligence”, Pascal Chevrot explains. “On the other hand, the expert tools that existed were inaccessible to anyone without significant technical knowledge.” Faced with this situation, the two colleagues founded Invenis in November 2015 and joined the ParisTech Entrepreneurs incubator. On July 3, 2017, less than two years later, they joined Station F, one of the biggest start-up campuses in the world, located in the 13th arrondissement of Paris.

The start-up’s proposition is certainly appealing: it aims to rectify the shortcomings of available decision support tools with SaaS (Software as a Service) software. Its goal is to make the value contained in data available to people who work with data every day to obtain information, but who are by no means experts. Invenis therefore targets professionals who know how to extract data and use it to obtain information, but who find themselves limited by the capabilities of their current tools when they want to go further. With its solution, Invenis allows these professionals to carry out data processing using machine learning algorithms, simply.

Pascal Chevrot illustrates how simple it is to use with an example. He takes two data sets and uploads them to Invenis: one lists the number of sports facilities per activity and per department, the other the population of each city in France. The user then chooses the kind of processing they wish to perform from a library of modules. For example, they could first decide to group the different kinds of sports facilities (football stadiums, boules pitches, swimming pools, etc.) by region. In parallel, the software aggregates the number of inhabitants per city in order to provide a population figure at the regional scale. Once each of these steps has been completed, the user can then run an automated segmentation, or “clustering”, to classify regions into different groups according to the density of sports facilities in each region. In a few clicks, Invenis thus allows users to see which regions have a high number of sports facilities and which have few in relation to their population size, and which should therefore be invested in. Each operation on the data is performed simply by dragging the corresponding processing module into the interface, and the modules are chained together to create a full data processing workflow.
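Invenis’ own modules are proprietary and are not described in detail in the article, but the workflow Pascal Chevrot walks through can be sketched with open-source tools. The toy example below uses made-up regions and figures, hypothetical column names, and pandas with scikit-learn purely for brevity (the article notes that Invenis actually builds on Hadoop and Spark); it only mirrors the steps described: aggregate facilities and population per region, then cluster the regions.

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical input tables mirroring the example above.
facilities = pd.DataFrame({
    "region": ["Bretagne", "Bretagne", "Occitanie", "Occitanie", "Occitanie"],
    "type":   ["stadium", "pool", "stadium", "pool", "pool"],
})
population = pd.DataFrame({
    "city":        ["Rennes", "Brest", "Toulouse", "Montpellier"],
    "region":      ["Bretagne", "Bretagne", "Occitanie", "Occitanie"],
    "inhabitants": [220_000, 140_000, 480_000, 290_000],
})

# Step 1: aggregate facilities and inhabitants per region.
per_region = (
    facilities.groupby("region").size().rename("facilities").to_frame()
    .join(population.groupby("region")["inhabitants"].sum())
)
per_region["facilities_per_100k"] = per_region["facilities"] / per_region["inhabitants"] * 100_000

# Step 2: automated segmentation ("clustering") of the regions by facility density.
features = StandardScaler().fit_transform(per_region[["facilities_per_100k"]])
per_region["cluster"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(per_region)
```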

The user-friendly nature of the Invenis software lies in how simple these processing modules are to use. Every action has been designed to be easy for the user to understand. The algorithms come from the open-source Hadoop and Spark ecosystems, which are standard references in the sector. “We then add our own algorithms on top of these existing ones, making them easier to manage”, highlights Pascal Chevrot.

For example, the clustering algorithm they use ordinarily requires a certain number of parameters to be defined. Invenis’ processing module automatically calculates these parameters using its proprietary algorithms, while still allowing expert users to modify them if necessary.
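The article does not say how Invenis computes those parameters, so the sketch below simply shows one common heuristic for the clustering case: trying several values for the number of clusters and keeping the one with the best silhouette score, here on synthetic data.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Synthetic data with three natural groups; in practice this would be the user's features.
X, _ = make_blobs(n_samples=200, centers=3, cluster_std=1.0, random_state=42)

def auto_kmeans(X, k_max=8):
    """Pick the number of clusters with the best silhouette score (one common heuristic)."""
    best_k, best_score = 2, -1.0
    for k in range(2, k_max + 1):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        score = silhouette_score(X, labels)
        if score > best_score:
            best_k, best_score = k, score
    return best_k

print(auto_kmeans(X))   # typically 3 for this synthetic data set
```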

In addition to its simplicity, the Invenis program has other advantages, notably fine-grained management of data access rights. “Few tools do this”, affirms Pascal Chevrot, before explaining why the feature matters: “For some businesses, such as telecommunications operators, it’s important because they are accountable to the CNIL (the French data protection authority) for the confidentiality of their data, and soon this will be the case across Europe with the arrival of the GDPR. Not to mention that more established businesses have put data governance in place around these questions.”

 

Revealing the value of data

Another advantage of Invenis is that it offers several ways of working with clients. The start-up offers free trial periods to data users who are dissatisfied with their current tools, along with the opportunity to talk to the technical management team, who can demonstrate the tool’s capabilities and even build a proof of concept. The start-up also has a support and advice service for businesses that have identified a problem they would like to solve using their data. “We offer clients guaranteed results, assisting them in resolving their problem with the intention of ultimately making them independent”, explains the co-founder.

It was within this second format that Invenis carried out its most emblematic proof of concept, with CityTaps, another start-up from ParisTech Entrepreneurs, which offers prepaid water meters. The Invenis software allowed CityTaps to look at three questions. Firstly, how do users consume water according to the day of the week, the size of the household, the season, etc.? Secondly, what is the optimal moment to warn a user that they need to top up their meter, and are they quick to do so after receiving an alert SMS? And finally, how can temperature changes in the meters due to the weather best be predicted? Invenis provided answers to these questions by applying its processing solutions to CityTaps’ data.

The case of CityTaps shows the extent to which data management tools are crucial for companies. Machine learning and intelligent data processing are essential for generating value. However, these technologies can be difficult to access without sufficient technical knowledge. Enabling businesses to access this value by reducing the cost of entry in terms of skills is, as Pascal Chevrot concludes, Invenis’ number one aim.


Ethics, an overlooked aspect of algorithms?

We now encounter algorithms at every moment of the day. But this exposure can be dangerous. It has been proven to influence our political opinions, moods and choices. Far from being neutral, algorithms carry their developers’ value judgments, which are imposed on us without our noticing most of the time. It is now necessary to raise questions about the ethical aspects of algorithms and find solutions for the biases they impose on their users.

 

[dropcap]W[/dropcap]hat exactly does Facebook do? Or Twitter? More generally, what do social media sites do? The overly-simplified but accurate answer is: they select the information which will be displayed on your wall in order to make you spend as much time as possible on the site. Behind this time-consuming “news feed” hides a selection of content, advertising or otherwise, optimized for each user through a great reliance on algorithms. Social networks use these algorithms to determine what will interest you the most. Without questioning the usefulness of these sites — this is most likely how you were directed to this article — the way in which they function does raise some serious ethical questions. To start with, are all users aware of algorithms’ influence on their perception of current events and on their opinions? And to take a step further, what impacts do algorithms have on our lives and decisions?

For Christine Balagué, a researcher at Télécom École de Management and member of CERNA (see text box at the end of the article), “personal data capturing is a well-known topic, but there is less awareness about the processing of this data by algorithms.” Although users are now more careful about what they share on social media, they have not necessarily considered how the service they use actually works. And this lack of awareness is not limited to Facebook or Twitter. Algorithms now permeate our lives and are used in all of the mobile applications and web services we use. All day long, from morning to night, we are confronted with choices, suggestions and information processed by algorithms: Netflix, Citymapper, Waze, Google, Uber, TripAdvisor, AirBnb, etc.

Are your trips determined by Citymapper? Or by Waze? Our mobility is increasingly dependent on algorithms. Illustration: Diane Rottner for IMTech

 

“They control our lives,” says Christine Balagué. “A growing number of articles published by researchers in various fields have underscored the power algorithms have over individuals.” In 2015, Robert Epstein, a researcher at the American Institute for Behavioral Research, demonstrated how a search engine could influence election results. His study, carried out with over 4,000 individuals, demonstrated that candidates’ rankings in search results influenced at least 20% of undecided voters. In another striking example, a study carried out by Facebook in 2012 on 700,000 of its users showed that people who had previously been exposed to negative posts posted predominantly negative content. Meanwhile, those who had previously been exposed to positive posts posted essentially positive content. This shows that algorithms can manipulate individuals’ emotions without their realizing it or being informed of it. What role do our personal preferences play in a system of algorithms of which we are not even aware?

 

The opaque side of algorithms

One of the main ethical problems with algorithms stems from this lack of transparency. Two users who carry out the same query on a search engine such as Google will not get the same results. The explanation provided by the service is that responses are personalized to best meet each individual’s needs. But the mechanisms for selecting results are opaque. Among the parameters taken into account to determine which sites will be displayed on the page, over a hundred have to do with the user performing the query. In the name of trade secrecy, the exact nature of these personal parameters and how they are taken into account by Google’s algorithms are not disclosed. It is therefore difficult to know how the company categorizes us, determines our areas of interest and predicts our behavior. And once this categorization has been carried out, is it even possible to escape it? How can we maintain control over the perception that the algorithm has created of us?

This lack of transparency prevents us from understanding possible biases which could result from data processing. Nevertheless, these biases do exist, and protecting ourselves from them is a major issue for society. A study by Grazia Cecere, an economist at Télécom École de Management, provides an example of how individuals are not treated equally by algorithms. Her work has highlighted discrimination between men and women in a major social network’s algorithms for associating interests. “In creating an ad for STEM fields (science, technology, engineering, mathematics), we noticed that the software demonstrated a preference for distributing it to men, even though women show more interest in this subject,” explains Grazia Cecere. Far from the myth of malicious artificial intelligence, this sort of bias is rooted in human actions. We must not forget that behind each line of code, there is a developer.

Algorithms are used first and foremost to propose services, which are most often commercial in nature. They are thus part of a company’s strategy and reflect that strategy in order to respond to its economic demands. “Data scientists working on a project seek to optimize their algorithms without necessarily thinking about the ethical issues involved in the choices made by these programs,” points out Christine Balagué. In addition, humans have perceptions of the society to which they belong and integrate these perceptions, either consciously or unconsciously, into the software they develop. Indeed, the value judgments present in algorithms quite often reflect the value judgments of their creators. In the example of Grazia Cecere’s work, this provides a simple explanation for the bias discovered: “An algorithm learns what it is asked to learn and replicates stereotypes if they are not removed.”

algorithms

What biases are hiding in the digital tools we use every day? What value judgments passed down from algorithm developers do we encounter on a daily basis? Illustration: Diane Rottner for IMTech.

 

A perfect example of this phenomenon involves medical imaging. An algorithm used to classify a cell as sick or healthy must be configured to strike a balance between the number of false positives and false negatives. Developers must therefore decide to what extent it is tolerable for healthy individuals to receive positive tests in order to prevent sick individuals from receiving negative tests. For doctors, it is preferable to have false positives rather than false negatives, while scientists who develop algorithms prefer false negatives to false positives, as scientific knowledge is cumulative. Depending on their own values, developers will favor one of these viewpoints.
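In practice, that value judgment is often encoded in a single decision threshold on the classifier’s score. The sketch below, on synthetic scores rather than real medical data, shows how moving the threshold trades false negatives for false positives; neither choice is neutral.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic classifier scores: healthy cells tend to score low, sick cells high, with overlap.
healthy_scores = rng.normal(0.35, 0.15, 1_000)
sick_scores = rng.normal(0.65, 0.15, 1_000)

for threshold in (0.4, 0.5, 0.6):
    false_positive_rate = (healthy_scores >= threshold).mean()   # healthy cells flagged as sick
    false_negative_rate = (sick_scores < threshold).mean()       # sick cells missed
    print(f"threshold {threshold:.1f}: "
          f"FP rate {false_positive_rate:.1%}, FN rate {false_negative_rate:.1%}")

# A low threshold reflects the doctors' preference (few missed cases, more false alarms);
# a high threshold reflects the opposite choice. Someone has to pick one.
```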

 

Transparency? Of course, but that’s not all!

One proposal for combating these biases is to make algorithms more transparent. Since October 2016, the Law for a Digital Republic, proposed by Axelle Lemaire, the former Secretary of State for Digital Affairs, requires transparency for all public algorithms. This law was responsible for making the source code of the higher education admission platform (APB) available to the public. Companies are also increasing their transparency efforts. Since May 17, 2017, Twitter has allowed its users to see the areas of interest the site associates with them. But despite these good intentions, the level of transparency is far from sufficient to guarantee ethical behavior. First of all, code understandability is often overlooked: algorithms are sometimes delivered in formats which make them difficult to read and understand, even for professionals. Furthermore, transparency can be artificial. In the case of Twitter, “no information is provided about how user interests are attributed,” observes Christine Balagué.

[Screenshot: Interests from Twitter. “These are some of the interests matched to you based on your profile and activity. You can adjust them if something doesn’t look right.”]

Which of this user’s posts led to his being classified under “Action and Adventure,” a very broad category? How are “Scientific news” and “Business and finance” weighed in order to display content in the user’s Twitter feed?

 

To take a step further, the degree to which algorithms are transparent must be assessed. This is the aim of the TransAlgo project, another initiative launched by Axelle Lemaire and run by Inria. “It’s a platform for measuring transparency by looking at what data is used, what data is produced and how open the code is,” explains Christine Balagué, a member of TransAlgo’s scientific council. The platform is the first of its kind in Europe, making France a leading nation in transparency issues. Similarly, DataIA, a convergence institute for data science established on Plateau de Saclay for a period of ten years, is a one-of-a-kind interdisciplinary project involving research on algorithms in artificial intelligence, their transparency and ethical issues.

The project brings together multidisciplinary scientific teams in order to study the mechanisms used to develop algorithms. The humanities can contribute significantly to the analysis of the values and decisions hiding behind the development of codes. “It is now increasingly necessary to deconstruct the methods used to create algorithms, carry out reverse engineering, measure the potential biases and discriminations and make them more transparent,” explains Christine Balagué. “On a broader level, ethnographic research must be carried out on the developers by delving deeper into their intentions and studying the socio-technological aspects of developing algorithms.” As our lives increasingly revolve around digital services, it is crucial to identify the risks they pose for users.

Further reading: Artificial Intelligence: the complex question of ethics

[box type=”info” align=”” class=”” width=””]

A public commission dedicated to digital ethics

Since 2009, the Allistene association (Alliance of digital sciences and technologies) has brought together France’s leading players in digital technology research and innovation. In 2012, this alliance decided to create a commission to study ethics in digital sciences and technologies: CERNA. On the basis of multidisciplinary studies combining expertise and contributions from all digital players, both nationally and worldwide, CERNA raises questions about the ethical aspects of digital technology. In studying such wide-ranging topics as the environment, healthcare, robotics and nanotechnologies, it strives to increase technology developers’ awareness and understanding of ethical issues.[/box]