Quantum computer, Romain Alléaume

What is a Quantum Computer?

The use of quantum logic for computing promises a radical change in the way we process information. The calculating power of a quantum computer could surpass that of today’s biggest supercomputers within ten years. Romain Alléaume, a researcher in quantum information at Télécom ParisTech, helps us to understand how they work.

 

Is the nature of the information in a quantum computer different?

Romain Alléaume: In classical computer science, information is encoded in bits: 1 or 0. It is not quite the same in quantum computing, where information is encoded in what we refer to as “quantum bits”, or qubits. And there is a big difference between the two. A standard bit exists in one of two states, either 0 or 1. A qubit can exist in any superposition of these two states, and can therefore take many more than two values.

There is a stark difference between using several bits or several qubits. While n standard bits can only take one value among 2^n possibilities, n qubits can take on any combination of these 2^n states. For example, 5 bits take a value among 32 possibilities: 00000, 00001… right up to 11111. 5 qubits can take on any linear superposition of those 32 states: a continuum of possibilities rather than a single value. This phenomenal expansion in the size of the space of accessible states is what explains the quantum computer’s greater computing capacity.
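As a rough illustration (a minimal Python sketch, not from the interview; the function name is ours), the description of an n-qubit state requires 2^n complex amplitudes, one per basis state:

```python
import math, random

def random_qubit_state(n):
    """Build a random pure state of n qubits: a normalized vector of 2**n complex amplitudes."""
    amps = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(2 ** n)]
    norm = math.sqrt(sum(abs(a) ** 2 for a in amps))
    return [a / norm for a in amps]

# 5 classical bits pick ONE of 32 values; 5 qubits carry 32 amplitudes at once.
state = random_qubit_state(5)
print(len(state))                                 # 32 basis states
print(round(sum(abs(a) ** 2 for a in state), 6))  # measurement probabilities sum to 1
```

Doubling the register from 5 to 10 qubits already takes the state vector from 32 to 1,024 amplitudes, which is why simulating even modest quantum computers classically becomes intractable.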

 

Concretely, what does a qubit look like?

RA: Concretely, we can encode a qubit on any two-state quantum system. The most favourable experimental systems are the ones we know how to manipulate precisely. This is for instance the case with the energy levels of electrons in an atom. In quantum mechanics, the energy of an electron “trapped” around an atomic nucleus may take different values, and these energy levels take on specific “quantized” values, hence the name quantum mechanics. We can call the first two energy levels of the atom 0 and 1: 0 corresponding to the lowest level of energy and 1 to a higher level, known as the “excited state”. We can then encode a quantum bit by putting the atom in the 0 state or in the 1 state, but also in any superposition (linear combination) of the 0 state and the 1 state.

To create good qubits, we have to find systems such that the quantum information remains stable over time. In practice, creating very good qubits is an experimental feat: atoms tend to interact with their surroundings and lose their information. We call this phenomenon decoherence. To avoid decoherence, we have to carefully protect the qubits, for example by putting them in very low temperature conditions.

 

What type of problems does the quantum computer solve efficiently?

RA: It exponentially increases the speed with which we can solve “promise problems”, that is, problems with a defined structure, where we know the shape of the solutions we are looking for. For searching an unsorted directory, on the other hand, the quantum computer has only been proven to speed up the process by a square-root factor compared with a regular computer. There is a gain, but not a spectacular one.

It is important to understand that the quantum computer is not magic and cannot accelerate any computational problem. In particular, one should not expect quantum computers to replace classical computers. Their main use will probably be simulating quantum systems that cannot be simulated with standard computers: chemical reactions, superconductivity, etc. While quantum simulators are likely to be the first concrete application of quantum computing, we already know of quantum algorithms that can be applied to solve complex optimization problems, or to accelerate computations in machine learning. We can expect to see quantum processors used as co-processors, accelerating specific computational tasks.

 

What can be the impact of the advent of large quantum computers?

RA: The construction of large quantum computers would also enable us to break most of the cryptography used today on the Internet. The advent of large quantum computers is unlikely to occur in the next 10 years. Yet, since encrypted data are stored for years, even tens of years, we need to start thinking now about new cryptographic techniques that will be resistant to the quantum computer.

Read the blog post: Confidential communications and quantum physics

 

When will the quantum computer compete with classical supercomputers?

RA: Even more than classical computing, quantum computing requires error-correcting codes to improve the quality of the information coded on qubits and to be scaled up. We can currently build a quantum computer with just over a dozen qubits, and we are beginning to develop small quantum computers that work with error-correcting codes. We estimate that a quantum computer must have around 50 qubits in order to outperform a supercomputer and solve problems that are currently beyond reach. In terms of time, we are not far away: probably five years for this important step, often referred to as “quantum supremacy”.

ISS, Space Telecommunication

What is space telecommunication? A look at the ISS case

Laurent Franck is a space telecommunications researcher at IMT Atlantique. These communication systems are what enable us to exchange information with far-away objects (satellites, probes…). These systems also enable us to communicate with the International Space Station (ISS). This is a special and unusual case compared to the better-known example of satellite television. The researcher explains how these exchanges between Earth and outer space take place.

 

Since he left for the ISS in November 2016, Thomas Pesquet has continued to delight the world with his photos of our Earth as seen from the sky. It’s a beautiful way to demystify life in space and make this profession—one that fascinates both young and old—more accessible. We were therefore able to see that the members of Expedition 51 aboard the ISS are far from lost in space. On the contrary, Thomas Pesquet was able to cheer on the France national rugby union team on a television screen and communicate live with children from different French schools (most recently on February 23, in the Gard department). And you too can follow this ISS adventure live whenever you want. But how is this possible? To shed some light on this issue, we met with Laurent Franck, a researcher in space telecommunications at IMT Atlantique.

 

What is the ISS and what is its purpose?

Laurent Franck: The ISS is a crewed international space station. It accommodates international teams from the United States, Russia, Japan, Europe and Canada. It is a scientific base that enables scientific and technological experiments to be carried out in the space environment. The ISS is situated approximately 400 kilometers above the Earth’s surface. But it is not stationary in the sky: for an object in orbit at this altitude, the laws of physics impose a speed faster than the Earth’s rotation. It therefore follows a circular orbit around our planet at 28,000 kilometers per hour, enabling it to circle the Earth in 93 minutes.
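The figures quoted above can be cross-checked with a quick back-of-the-envelope computation (the altitude and speed come from the interview; Earth's mean radius is the only added constant, and a circular orbit is assumed):

```python
import math

EARTH_RADIUS_KM = 6371
altitude_km = 400      # ISS altitude quoted above
speed_kmh = 28000      # orbital speed quoted above

# Length of one circular orbit, then the time to cover it at that speed
circumference_km = 2 * math.pi * (EARTH_RADIUS_KM + altitude_km)
period_min = circumference_km / speed_kmh * 60
print(round(period_min))   # about 91 minutes, consistent with the ~93 quoted
```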

 

How can we communicate with the ISS?

LF: Not by wire, that’s for sure! We can communicate directly, meaning between a specific point on Earth and the space station. To do this, the station must be visible above us. We can get around this constraint by going through an intermediary: one or several satellites situated at a higher altitude can then be used as relays. The radio wave goes from the Earth to the relay satellite, and then to the space station, or vice versa. It is all quite an exercise in geometry. There are approximately ten American relay satellites in orbit. They are called TDRS (Tracking and Data Relay Satellite). Europe has a similar system called EDRS (European Data Relay System).

 

Why are these satellites located at a higher altitude than that of the space station?

LF: Let’s take a simple analogy. I take a flashlight and shine it on the ground. I can see a ring of light on the ground. If I raise the flashlight higher off the ground, this circle gets bigger. This spot of light represents the communication coverage between the ground and the object in the air. The ISS is close to the Earth’s surface, and therefore it only covers a small part of the Earth, and this coverage is moving. Conversely, if I take a geostationary satellite at an altitude of 36,000 kilometers, the coverage is greater and corresponds to a fixed point on the Earth. Not only are few satellites required in order to cover the Earth’s surface, but the ISS can also sustainably communicate, via the geostationary satellite, with a ground station that is also located within this area of coverage. Thanks to this system, only three or four ground stations are required to permanently communicate with the ISS.

 

Is live communication with the ISS truly live?

LF: There is a slight time lag, for two reasons. First, there is the time the signal takes to physically travel from point A to point B. This time is set by the speed of light: it takes about 125 milliseconds to reach a geostationary satellite (television or satellite relays). We then must add the distance between the satellite and the ISS. This results in a travel time that is incompressible, since it is physical, of a little over a quarter of a second, or half a second to travel there and back. This first time lag is easily observable when we watch the news on television: the studio asks a question and the reporter on the ground seems to wait before answering, due to the time needed to receive the question via satellite and send the reply!
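The arithmetic behind those delays can be sketched as follows (the 36,000 km and 400 km altitudes come from the article; the straight-line path is a simplification, since the real ground-station/satellite/ISS geometry adds some distance):

```python
SPEED_OF_LIGHT_KM_S = 299_792
geo_altitude_km = 36_000   # geostationary relay satellite
iss_altitude_km = 400      # ISS orbit

up_s = geo_altitude_km / SPEED_OF_LIGHT_KM_S                         # ground -> relay
down_s = (geo_altitude_km - iss_altitude_km) / SPEED_OF_LIGHT_KM_S   # relay -> ISS
one_way_s = up_s + down_s

print(round(up_s * 1000))   # ~120 ms, the "125 ms" order of magnitude above
print(round(one_way_s, 2))  # ~0.24 s one way, so roughly half a second round trip
```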

Secondly, there is a processing time, since the information travels through telecommunications equipment. This equipment cannot process the information at the speed of light. Sometimes the information is stored temporarily to accommodate the processor speed. It’s like when I have to wait in line at a counter. There’s the time the employee at the counter takes to do their job, plus the wait time due to all the people in line in front of me. This time can quickly add up.

We can exchange any kind of information with the ISS. Voice and image, of course, as well as telemetry data. This is the information a spacecraft sends to the earth to communicate its state of health. Included in this information is the station’s position, the data from the experiments carried out on board, etc.

 

What are the main difficulties space telecommunications systems experience?

LF: The major difficulty is linked to the fact that we must communicate with objects that are very far away and have limited electrical transmission power. We record these constraints in a link budget. Several phenomena are involved. The first is that the farther away we communicate, the more energy is lost: with distance, the energy is dispersed like a spray. The second phenomenon in this budget is that the quality of communication depends on the amount of energy received at the destination. We ask: out of one million bits transmitted, how many are wrong when they arrive at the destination? Finally, the last point is the data rate that is possible for the communication. This also depends on the amount of energy invested in the communication. We often adjust the data rate to obtain a certain level of quality. It all depends on the amount of energy available for transmission. This is limited aboard the ISS, since it is powered via solar panels and sometimes travels in the Earth’s shadow. The relay satellites have the same constraints.
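The "energy dispersed like a spray" effect is usually quantified in a link budget with the standard free-space path loss formula. The formula below is textbook radio engineering rather than something from the interview, and the 2 GHz carrier frequency is an illustrative assumption:

```python
import math

def free_space_path_loss_db(distance_km, freq_ghz):
    """Standard free-space path loss: attenuation grows with distance and frequency."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

# Illustrative: a 2 GHz link to a geostationary relay 36,000 km away
print(round(free_space_path_loss_db(36_000, 2.0), 1))   # roughly 190 dB of attenuation
```

An attenuation near 190 dB means the received signal is weaker than the transmitted one by a factor of about 10^19, which is why every decibel in the budget, and every watt aboard the ISS, matters.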

 

Is there a risk of interference when the information is travelling through space?

LF: Yes and no, because radio frequency telecommunications are highly regulated. The right to transmit is linked to a maximum frequency and power. It is also regulated in space: we cannot “spill over” into another nearby antenna. For space communications, there are tables that define the maximum amount of energy that we can send outside of the main direction of communication. Below this maximum level, the energy that is sent to a nearby antenna is of course interference, but it will not prevent it from functioning properly.

 

What are the other applications of communications satellites?

LF: They are used for Internet access, telephony, video telephony, the Internet of things… But what is interesting is what they are not used for: GPS navigation and weather observations, for example. In fact, space missions are traditionally divided into four components: the telecommunications we are discussing here, navigation/positioning, observation, and deep-space exploration like the Voyager probes. Finally, what is fascinating is that with a field as specialized as that of space, there is an almost infinite amount of even more specialized derivations.

 

machine learning

What is machine learning?

Machine learning is an area of artificial intelligence, at the interface between mathematics and computer science. It is aimed at teaching machines to complete certain tasks, often predictive, based on large amounts of data. Technologies for text, image and voice recognition, for example, are used to develop search engines and recommender systems for online retail sites. More broadly speaking, machine learning refers to a corpus of theories and statistical learning methods, which encompass deep learning. Stephan Clémençon, a researcher at Télécom ParisTech and Big Data specialist, explains the realities hidden behind these terms.

 

What is machine learning or automatic learning?

Stephan Clémençon: Machine learning involves teaching machines to make effective decisions within a predefined framework, using algorithms fueled by examples (learning data). The learning program enables the machine to develop a decision-making system that generalizes what it has “learned” from these examples. The theoretical basis for this approach states that if my algorithm searches a catalogue of decision-making rules that is “not overly complex” and finds rules that worked well on the sample data, those rules will continue to work well on future data. This is what we mean by the capacity to generalize rules that have been learned statistically.

 

Is machine learning supported by Big Data?

SC: Absolutely. The statistical principle of machine learning relies on the representativeness of the examples used for learning. The more examples are available, and hence learning data, the better the chances of achieving optimal rules. With the arrival of Big Data, we have reached the statistician’s “frequentist heaven”. However, these massive data also pose problems for computation and execution times. To access such massive information, it must be distributed over a network of machines. We now need to understand how to reach a compromise between the quantity of examples presented to the machine and the computation time. Certain infrastructures are quickly overwhelmed by the sheer volume of data (text, signals, images and videos) made available by modern technology.

 

What exactly does a machine learning problem look like?

SC: Actually, there are several types of problems. Some are called “supervised” problems, because the variable that must be predicted is observed through a statistical sample. One major example of supervised learning from the early days of machine learning was teaching a machine to recognize handwriting. To accomplish this, a database of many “pixelated” images is provided, while explaining to the machine that this one is an “e”, that one an “a”, etc. The computer is thus trained to recognize the letter written on a tablet. Observing the handwritten form of a character several times improves the machine’s capacity to recognize it in the future.
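A minimal toy version of this supervised setup is a nearest-neighbour classifier over labeled "images". The tiny 3x3 pixel grids and labels below are invented for illustration, not real handwriting data:

```python
# Training examples: tiny 3x3 "pixel" grids (flattened) labeled with the letter drawn
TRAIN = [
    ((1,1,1, 1,0,0, 1,1,1), "c"),
    ((1,1,1, 1,0,1, 1,1,1), "o"),
    ((1,0,0, 1,0,0, 1,1,1), "l"),
]

def predict(pixels):
    """1-nearest-neighbour: label a new image like its most similar training example."""
    dist = lambda ex: sum((a - b) ** 2 for a, b in zip(ex[0], pixels))
    return min(TRAIN, key=dist)[1]

noisy_c = (1,1,1, 1,0,0, 1,1,0)   # a "c" with one corrupted pixel
print(predict(noisy_c))           # -> c
```

Seeing many variants of each character in TRAIN is exactly what improves the machine's ability to recognize new, slightly different ones.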

Other problems are unsupervised, which means that no labels are available for the observations. This is the case, for example, in S-Monitoring, which is used in predictive maintenance. The machine must learn what is abnormal in order to be able to issue an alert. In a way, the rarity of an event replaces the label. This problem is much more difficult because the result cannot be immediately verified: a later assessment is required, and false alarms can be very costly.

Other problems require a dilemma to be resolved between exploring the possibilities and making use of past data. This is referred to as reinforcement learning. This is the case for personalized recommendations. In retargeting, for example, banner ads are programmed to propose links related to your areas of interest, so you will click on them. However, if you are never proposed any links related to classical literature, on the pretext that you do not yet have any search history in this subject, it will be impossible to effectively determine if this type of content would interest you. In other words, the algorithm will also need to explore the possibilities and no longer use data alone.
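A classic way to handle this exploration/exploitation dilemma is an epsilon-greedy bandit algorithm. The sketch below is illustrative (the "banner" click rates are invented, and real recommender systems are far more elaborate): most of the time it shows the banner with the best estimated click rate, but a fraction epsilon of the time it explores a random one.

```python
import random

def epsilon_greedy(true_rates, steps=20_000, epsilon=0.1, seed=0):
    """Balance exploring new banners against exploiting the best-known one."""
    rng = random.Random(seed)
    counts = [0] * len(true_rates)    # times each banner was shown
    values = [0.0] * len(true_rates)  # estimated click rate per banner
    for _ in range(steps):
        if rng.random() < epsilon:                 # explore: pick a banner at random
            arm = rng.randrange(len(true_rates))
        else:                                      # exploit: best estimate so far
            arm = max(range(len(true_rates)), key=lambda a: values[a])
        reward = 1 if rng.random() < true_rates[arm] else 0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
    return counts

counts = epsilon_greedy([0.02, 0.05, 0.12])   # banner 2 secretly has the best click rate
print(counts.index(max(counts)))              # the best banner ends up shown most often
```

Because of the exploration steps, even a banner with no click history (the classical-literature links of the example) keeps getting occasional chances to prove itself.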

 

To resolve these problems, machine learning relies on different types of models, such as artificial neural networks; what does this involve?

SC: Neural networks are a technique based on a general principle that is relatively old, dating back to the late 1950s. This technique is inspired by the operating model of biological neurons. It starts with a piece of information – the equivalent of a stimulation in biology – that reaches the neuron. Whether the stimulation is above or below the activation threshold will determine whether the transmitted information triggers a decision/action. The problem is that a single layer of neurons may produce a representation that is too simple to be used to interpret the original input information.

By superimposing layers of neurons, potentially with a varying number of neurons in each layer, new explanatory variables are created, combinations resulting from the output of the previous layer. The calculations continue layer by layer until a complex function has been obtained representing the final model. While these networks can be very predictive for certain problems, it is very difficult to interpret the rules using the neural networks model; it is a black box.
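The layer-by-layer computation can be sketched in a few lines. The example below is a hand-built two-layer network for the XOR function, a task a single neuron famously cannot represent; the weights are picked by hand for illustration, not learned:

```python
import math

def layer(inputs, weights, biases):
    """One layer of neurons: weighted sum of inputs, then a sigmoid activation."""
    return [
        1 / (1 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
        for ws, b in zip(weights, biases)
    ]

# Hand-picked weights: the hidden layer builds two new explanatory variables,
# roughly "at least one input is on" and "not both inputs are on".
HIDDEN_W, HIDDEN_B = [[6, 6], [-6, -6]], [-3, 9]
OUT_W, OUT_B = [[6, 6]], [-9]

def xor_net(x1, x2):
    hidden = layer([x1, x2], HIDDEN_W, HIDDEN_B)   # new explanatory variables
    return layer(hidden, OUT_W, OUT_B)[0]          # final model

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, round(xor_net(a, b)))   # -> 0, 1, 1, 0: the XOR truth table
```

Reading meaning into HIDDEN_W is still easy here; with millions of learned weights across many layers it no longer is, which is the "black box" problem mentioned above.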

 

We hear a lot about deep learning lately, but what is it exactly?

SC: Deep learning is a deep network of neurons, meaning it is composed of many superimposed layers. Today, this method can be implemented by using modern technology that enables massive calculations to be performed, which in turn allow very complex networks to adapt appropriately to the data. This technique, in which many engineers in the fields of science and technology are very experienced, is currently enjoying undeniable success in the area of computer vision. Deep learning is well suited to the field of biometrics and voice recognition, for example, but it shows mixed performances in handling problems in which the available input information does not fully determine the output variable, as is the case in the fields of biology and finance.

 

If deep learning is the present form of machine learning, what is its future?

SC: In my opinion, research in machine learning will focus specifically on situations in which the decision-making system interacts with the environment that produces the data, as is the case in reinforcement learning. This means that we will learn along a path, rather than from a collection of time-invariant examples thought to definitively represent the entire variability of a given phenomenon. More and more studies are being carried out on dynamic phenomena with complex interactions, such as the dissemination of information on social networks. These aspects are often ignored by current machine learning techniques, and today are left to modeling approaches based on human expertise.

 

 

Turbo codes, Claude Berrou, Quèsaco, IMT Atlantique

What are turbo codes?

Turbo codes form the basis of mobile communications in 3G and 4G networks. Invented in 1991 by Claude Berrou, and published in 1993 with Alain Glavieux and Punya Thitimajshima, they have now become a reference point in the field of information and communication technologies. As Télécom Bretagne, birthplace of these “error-correcting codes”, prepares to host the 9th international symposium on turbo codes, let’s take a closer look at how these codes work and the important role they play in our daily lives.

 

What do error-correcting codes do?

In order for communication to take place, three things are needed: a sender, a receiver, and a channel. The most common example is that of a person who speaks, sending a signal to someone who is listening, by means of the air conveying the vibrations and forming the sound wave. Yet problems can quickly arise in this communication if other people are talking nearby – making noise.

To compensate for this difficulty, the speaker may decide to yell the message. But the speaker could also avoid shouting, by adding a number after each letter in the message, corresponding to the letter’s place in the alphabet. The listener receiving the information will then have redundant information for each part of the message — in this case, double the information. If noise alters the way a letter is transmitted, the number can help to identify it.
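The letter-plus-number scheme described above can be written out directly (a toy sketch; the `encode`/`decode` names are ours):

```python
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def encode(message):
    """After each letter, send its position in the alphabet as redundant information."""
    return [(ch, ALPHABET.index(ch) + 1) for ch in message]

def decode(received):
    """If noise garbled a letter, recover it from the attached number."""
    out = []
    for ch, num in received:
        out.append(ch if ch in ALPHABET else ALPHABET[num - 1])
    return "".join(out)

sent = encode("hello")
sent[1] = ("#", sent[1][1])   # noise corrupts the second letter in transit...
print(decode(sent))           # -> hello: the number restores it
```

The price of this protection is exactly what the article notes: the message doubles in length.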

And what role do turbo codes play in this?

In the digital communications sector, there are several error-correcting codes, with varying levels of complexity. Typically, repeating the same message several times in binary code is a relatively safe bet, yet it is extremely costly in terms of bandwidth and energy consumption.
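The "repeat the message several times" strategy is the repetition code, decodable by majority vote. A minimal sketch:

```python
def repeat_encode(bits, n=3):
    """Send every bit n times -- safe, but it multiplies the bandwidth used by n."""
    return [b for bit in bits for b in [bit] * n]

def majority_decode(received, n=3):
    """Vote among the n copies of each bit."""
    return [1 if sum(received[i:i + n]) > n // 2 else 0
            for i in range(0, len(received), n)]

coded = repeat_encode([1, 0, 1])
coded[4] ^= 1                  # the channel flips one transmitted bit
print(majority_decode(coded))  # -> [1, 0, 1]: the flipped bit is outvoted
```

Tripling every bit to correct a single error is exactly the costly trade-off turbo codes improve on.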

Turbo codes are a much more developed way of integrating information redundancy. They are based on the transmission of the initial message in three copies. The first copy is the raw, non-encoded information. The second is modified by encoding each bit of information using an algorithm shared by the coder and decoder. Finally, another version of the message is also encoded, but after modification (specifically, a permutation). In this third case, it is no longer the original message that is encoded and then sent, but rather a transformed version. These three versions are then decoded and compared in order to find the original message.
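The three-stream structure just described can be mirrored in a toy sketch. To be clear, this is only structural illustration: real turbo codes use recursive convolutional encoders and iterative probabilistic decoding, whereas the "encoder" below is just a running XOR, and the decoder is omitted.

```python
def parity_stream(bits):
    """Toy stand-in for the shared encoding algorithm: running XOR of the bits so far."""
    acc, out = 0, []
    for b in bits:
        acc ^= b
        out.append(acc)
    return out

def turbo_style_encode(bits, perm):
    """Three versions of the message, as described above: the raw bits,
    an encoded copy, and an encoded copy of the permuted message."""
    permuted = [bits[i] for i in perm]
    return bits, parity_stream(bits), parity_stream(permuted)

msg = [1, 0, 1, 1]
perm = [2, 0, 3, 1]   # the permutation (interleaver) shared by coder and decoder
raw, parity1, parity2 = turbo_style_encode(msg, perm)
print(raw, parity1, parity2)
```

The permutation is what makes the two encoded streams carry complementary views of the same message, so errors that confuse one decoder rarely confuse both.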

Where are turbo codes used?

In addition to being used to encode all our data in 3G and 4G networks, turbo codes are also used in many different fields. NASA uses them for communication with the space probes it has built since 2003. The space community, which has to contend with many constraints on communication processes, is particularly fond of these codes, and ESA also uses them for many of its probes. More generally, turbo codes represent a safe and efficient encoding technique in most communication technologies.

Claude Berrou, inventor of turbo codes

How have turbo codes become so successful?

In 1948, American engineer and mathematician Claude Shannon proposed a theorem stating that codes always exist that are capable of minimizing channel-related transmission errors, up to a certain level of disturbance. In other words, Shannon asserted that, despite the noise in a channel, the transmitter will always be able to transmit an item of information to the receiver, almost error-free, when using efficient codes.

The turbo codes developed by Claude Berrou in 1991 meet these requirements, and are close to the theoretical limit for information transmitted with an error rate close to zero. Therefore, they represent highly efficient error-correcting codes. His experimental results, which validated Shannon’s theory, earned Claude Berrou the Marconi Prize in 2005 – the highest scientific distinction in the field of communication sciences. His research earned him a permanent membership position in the French Academy of Sciences.

 

[box type=”info” align=”” class=”” width=””]

Did you know?

The international alphabet (or the NATO phonetic alphabet) is an error-correcting code. Every letter is in fact encoded as a word beginning with that letter. Thus ‘N’ and ‘M’ become ‘November’ and ‘Mike’. This technique prevents much confusion, particularly in radio communications, which often involve noise.[/box]

 

Pokémon Go, Télécom SudParis, Marius Preda, Augmented reality

What is Augmented Reality?

Since its launch, the Pokémon Go game has broken all download records. More than just a fun gaming phenomenon, it is above all an indicator of the inevitable arrival of augmented reality technology in our daily lives. Marius Preda, a researcher at Télécom SudParis and an expert on the subject, explains exactly what the term “augmented reality” means.

 

Does just adding a virtual object to a real-time video make something augmented reality?

Marius Preda: If the virtual object is spatially and temporally synchronized with reality, yes. Based on the academic definition, augmented reality is the result of a mixed perception between the real and virtual worlds. The user observes both a real source, and a second source provided by a computer. In the case of Pokémon Go, there is a definite synchronism between the environment filmed with the camera — which changes according to the phone’s position — and the virtual Pokémon that appear and stay in their location.

 

How is this synchronization guaranteed?

MP: The Pokémon Go game works via geolocation: it uses GPS coordinates to make a virtual object appear at a location in the real environment. But during the Pokémon capture phase, the virtual object does not interact with the real image.

Very precise visual augmented realities exist, which attain synchronization in another way. They are based on the recognition of patterns that have been pre-recorded in a database. It is then possible to replace real objects with virtual objects, or to make 3D objects interact with forms in the real environment. These methods are expensive, however, since they require more in-depth learning phases and image processing.

 

Is it accurate to say that several augmented realities exist? 

MP: We can say that there are several ways to ensure the synchronization between the real and virtual worlds. Yet in a broader sense, mixed reality is a continuum between two extremes: pure reality on the one hand, and synthetically produced images on the other. Between these two extremes we find augmented reality, as well as other nuances. If we imagine a completely virtual video game, only with the real player’s face replacing that of the avatar, this is augmented virtuality. Therefore, augmented reality is a point on this continuum, in which synthetically generated objects appear in the real world.

 

Apart from video games, what other sectors are interested in augmented reality applications?

MP: There is a huge demand among professionals. Operators of industrial facilities can typically benefit from augmented reality for repairs. If they do not know how to install a part, they can receive help from virtual demonstrations carried out directly on the machine in front of them.

There is also high demand from architects. They already use 3D models to show their projects to decision-makers who decide whether or not to approve the construction of a building. Yet now they would like to show a future building at its future location using augmented reality, with the right colors, and lighting on the façades, etc.

Of course, such applications have enormous market potential. By monetizing a location in an augmented reality application like Pokémon Go, Google could very well offer game areas located directly in stores.

[box type=”shadow” align=”” class=”” width=””]

A MOOC for developing augmented reality applications

Interested in learning about and creating augmented reality experiences, such as augmenting a book or a map, or designing a geolocation quiz? Institut Mines-Télécom is offering a new MOOC to make this possible. It will enable learners to experiment and create several augmented reality applications.

This introductory MOOC, entitled Getting started with augmented reality, is intended for web production professionals, as well as anyone interested in designing innovative experiences using interactions between the virtual and real worlds: web journalists, mobile application developers, students from technical schools, and art and design schools… as well as teachers. Without having any prior experience in computer programming, the learner will easily be able to use the augmented reality prototyping tools.[/box]

Read more on our blog

Blockchain, Patrick Waelbroeck, Télécom ParisTech

What is a Blockchain?

Blockchains are hard to describe. They can be presented as online databases. But what makes them so special? These digital ledgers are impossible to falsify, since they are secured through a collaborative process. Each individual’s participation is motivated by compensation in the form of electronic currency. But what are blockchains used for? Is the money they generate compatible with existing economic models? Patrick Waelbroeck, economist at Télécom ParisTech, demystifies blockchains in this new article in our “What is…?” series.

 

What does “blockchain” really mean?

Patrick Waelbroeck: A blockchain is a type of technology. It is a secure digital ledger. When a user wishes to add a new element to this record, all the other blockchain users are asked to validate this addition in an indelible manner. In order to do this, they are given an algorithmic problem. When one of the users solves the problem, they simultaneously validate the addition, and it is marked with a tamper-proof digital time-stamp. Therefore, a new entry cannot be falsified or backdated, since other users can only authenticate additions in real time. The new elements are grouped together into blocks, which are then placed after older blocks, thus forming a chain of blocks — or a blockchain.
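The chaining and tamper-evidence described above can be sketched with a hash-linked ledger. This is a deliberately stripped-down illustration: it omits the proof-of-work competition, the time-stamping, and the distributed validation the interview describes, keeping only the "each block points to the previous one" structure.

```python
import hashlib

def block_hash(prev_hash, data):
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

def append_block(chain, data):
    """Each new block stores the hash of the previous one, chaining them together."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"data": data, "prev": prev, "hash": block_hash(prev, data)})

def is_valid(chain):
    """Any tampering with an old block breaks every hash that comes after it."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block["hash"] != block_hash(prev, block["data"]):
            return False
        prev = block["hash"]
    return True

ledger = []
for entry in ["alice pays bob 5", "bob pays carol 2"]:
    append_block(ledger, entry)
print(is_valid(ledger))                    # -> True
ledger[0]["data"] = "alice pays bob 500"   # attempt to falsify history
print(is_valid(ledger))                    # -> False: the chain exposes the edit
```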

In order for this security method to work for the ledger, there must be an incentive principle motivating users to solve the algorithm. When a request is made for an addition, all users then compete and the first to find the solution receives electronic money. This money could be in Bitcoin, Ether, or another type of cryptocurrency.

 

Are blockchains accessible to everybody?

PW: No, especially since specific hardware is required, which is relatively expensive and must be updated frequently. Therefore, not everyone can earn money by solving the algorithms. However, once this money is created, it can circulate and be used by anyone. It is possible to exchange cryptocurrency for conventional currency through specialized exchanges.

What is essential is the notion of anonymity and trust. All the changes to the ledger are recorded in a permanent and secure manner and remain visible. In addition, the management is decentralized: there is not just one person responsible for certification – it is a self-organizing system.

 

What can the ledger created by the blockchain be used for?

PW: Banks are very interested in blockchains due to the possibility of putting many different items in them, such as assets, which would only cost a few cents — as opposed to the current cost of a few euros. This type of ledger could also be used to reference intellectual property rights or land register reference data. Some universities are considering using a blockchain to list the diplomas that have been awarded. This would irrefutably prove a person’s diploma and the date. Another major potential application is smart contracts: automated contracts that will be able to validate tasks and the related compensation. In this example, the advantage would be that the relationship between employees and employers would no longer be based on mutual trust, which can be fragile. The blockchain acts as a trusted intermediary, which is decentralized and indisputable.

 

What still stands in the way of developing blockchains?

PW: There are big challenges involved in scaling up. With current technology, it would be difficult to process all the data generated by a large-scale blockchain. There are also significant legal limitations. For smart contracts, for example, it is difficult to define the legal purpose involved. Nothing is clearly established in terms of security either. For example, what would happen if a State requested special access to a blockchain? And if the key to a public record is held by only one participant, this could create security problems. Work still needs to be done to strike these delicate balances.

Read more on our blog

Formula 1, composite material

What is a composite material?

Composite materials continue to entice researchers and are increasingly being used in transport structures and buildings. Their qualities are stunning, and they are considered to be indispensable in addressing the environmental challenges at hand: reducing greenhouse gas emissions, creating stronger and more durable building structures, etc. How are these materials designed? What makes them so promising? Sylvain Drapier, a researcher in this field at Mines Saint-Étienne, answers our questions in this new addition to the “What is…?” series, dedicated to composite materials.

 

Does the principle behind a composite mean that it consists of two different materials?

Sylvain Drapier: Let’s say at least two materials. For a better understanding, it’s easier to think in terms of volume fractions, in other words, the proportion of the composite’s volume that each component takes up. In general, a composite contains between 40 and 60% reinforcements, often in the form of fibers. The rest is a binder, called the matrix, which holds these fibers together. Increasingly, the binder percentage is being reduced by a few percentage points in order to add what we call fillers, such as minerals, which optimize the composite material’s final properties.
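Volume fractions make it easy to estimate bulk properties such as density, by weighting each component's property by the share of volume it occupies. A quick sketch (the split and the densities below are typical handbook orders of magnitude for a glass/epoxy composite, not values from the interview):

```python
def composite_density(fractions: dict, densities: dict) -> float:
    """Weighted average of component densities; volume fractions must sum to 1."""
    assert abs(sum(fractions.values()) - 1.0) < 1e-9, "fractions must sum to 1"
    return sum(f * densities[name] for name, f in fractions.items())

# 55% glass fiber, 42% epoxy matrix, 3% mineral filler (illustrative split).
rho = composite_density(
    {"fiber": 0.55, "matrix": 0.42, "filler": 0.03},
    {"fiber": 2.54, "matrix": 1.2, "filler": 2.7},  # g/cm^3
)
print(f"composite density is about {rho:.2f} g/cm^3")
```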

 

composite material, Sylvain Drapier, Mines Saint-Étienne

50% of the structure of the Airbus A350 is made of composite materials. The transportation industry is particularly interested in these materials.

Are these fibers exactly like those in our clothing?

SD: Agro-sourced composites, using natural fibers like flax and hemp, are starting to be developed. In that sense, they are a little like the fibers in our clothing. But these materials are still rare. In composites produced for widespread distribution, the fibers are short — at times extremely short — glass fibers. To give an idea of their size, they have a diameter of 10 micrometers and are 1 to 2 millimeters long. They can be larger in products that only have to withstand limited strain, such as sailboards and electrical boxes, where they are a few centimeters long. High-performance materials, however, require continuous fibers measuring up to several hundred meters, which are wound on reels. This is the case for aramid fibers, the best known being Kevlar, for the glass fibers used to make wind turbine blades, and for the carbon fibers used in structures that must withstand heavy use, such as bicycles, high-end cars and airplanes.

 

Can these fibers be bound together by soaking them in glue to form a composite?

SD: It all starts with fiber networks, in 2D or 3D, produced by specialized companies. These are textiles in the true sense, woven, or knitted in the case of parts with rotational symmetry. After this, there are several production methods. Some processes have a plastic resin, in liquid form, soak into this network as a binder. When heated, the resin hardens: we refer to this as a thermosetting polymer. Other polymer resins are used in a solid state and melt when heated. They fill the spaces between the fibers and solidify when they return to room temperature. These matrices are called thermoplastic, from the same polymer family as the recyclable plastic products we use every day. Metal and ceramic matrices exist too, but they are rarer.

 

How is the choice of matrix determined?

SD: It all depends on the use. Ceramic matrices are used for composites inserted into hot structures; thermoplastic resins melt above 200-350°C, and thermosetting matrices weaken above 200°C. Some uses require very unusual matrix choices. This is the case for Formula 1 brakes and the Ariane rocket nozzles, designed with 3D carbon: not only are the fibers carbon, but the binder is carbon too. Compared with a part made entirely of solid carbon, this composite is much more resistant to crumbling and can be used at temperatures well in excess of 1,000°C.

 

Vinci motor, European Space Agency

The Vinci motor is made for European Space Agency rockets. Its nozzle (the black cone in the picture), which provides the propulsion, is made of a carbon-carbon composite. Credits: DLR German Aerospace Center.

What are the benefits of composites?

SD: These materials are very light while offering physical properties at least equivalent to those of metallic materials. This benefit is what has won over the transportation industry, since a lighter vehicle consumes less energy. Another benefit of composites is that they do not rust. Another feature: functions can be integrated into these materials. For example, we can make a composite more flexible in certain areas by orienting the fibers differently, which allows sub-assemblies of parts to be replaced by a single part. However, composite resins are often sensitive to water. This is why the aeronautics industry simulates ageing cycles under specific humidity and temperature conditions.
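The "light but stiff" benefit can be made concrete with the rule of mixtures, a standard first-order estimate of stiffness along the fiber direction. A sketch using typical textbook values for carbon/epoxy and aluminum (illustrative numbers, not from the interview):

```python
def rule_of_mixtures(v_fiber: float, e_fiber: float, e_matrix: float) -> float:
    """Voigt (upper-bound) estimate of the modulus along the fiber direction."""
    return v_fiber * e_fiber + (1.0 - v_fiber) * e_matrix

# Typical values: carbon fiber E ~ 230 GPa, epoxy E ~ 3 GPa, 60% fiber by volume.
e_composite = rule_of_mixtures(0.6, 230.0, 3.0)  # about 139 GPa along the fibers

# Stiffness per unit mass is where composites shine over metals:
specific_composite = e_composite / 1.6  # carbon/epoxy density ~ 1.6 g/cm^3
specific_aluminum = 70.0 / 2.7          # aluminum: E ~ 70 GPa, density ~ 2.7 g/cm^3
print(specific_composite > specific_aluminum)
```

Along the fibers, the composite's stiffness-to-weight ratio comes out several times higher than aluminum's, which is exactly the trade-off that interests aircraft and car makers.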

 

What approach is envisaged for recycling composites?

SD: Thermoplastic matrices can be melted. The polymers are then separated from the fibers and each component is processed separately. However, thermosetting matrices lack this advantage, and the composites they form must be recycled in other ways. It is for this reason that researchers seeking materials with a reduced carbon footprint are looking to agro-based composites, using more and more plant fibers. There are even composites that are 100% agro-based, combining bio-sourced polymers with these organic reinforcements. Composite recycling does not yet attract the attention it deserves, but research teams are now investing in this avenue of development.

 


Quèsaco, What is?, 5G, Frédéric Guilloud

What is 5G?

5G is the future network that will allow us to communicate wirelessly. How will it work? When will it be available for users? With the Mobile World Congress in full swing in Barcelona, we are launching our new “What is…?” series with Frédéric Guilloud, Research Professor at IMT Atlantique, who answers our questions about 5G.

 

What is 5G?

Frédéric Guilloud: 5G is the fifth generation of mobile telephone networks. It will replace 4G (also referred to as LTE, for Long Term Evolution). Designing and deploying a new generation of mobile communication systems takes a lot of time. This explains why, at a time when 4G has only recently become available to the general public, it is already time to think about 5G.

What will it be used for?

FG: Up until now, each successive generation of mobile telephone networks has mainly aimed at increasing network speed. Today, this paradigm is beginning to change: 5G aims to accommodate a variety of uses (very dense user environments, machine-to-machine communications, etc.). The specifications for this network will therefore cover a very broad spectrum, especially in terms of data rates, transmission reliability, and latency.

How will 5G work?

FG: Asking how 5G will work today would be like someone in the 1980s asking how GSM would work. Keep in mind that the standardization work for GSM began in 1982, and the first commercial networks were launched in 1992. Even though developing the 5th generation of mobile communications will not take as long as it did for the 2nd, we are still only in the early stages.

From a technical standpoint, there are many questions to consider. How can we make the different access technologies (Wi-Fi, Bluetooth, etc.) compatible? Will 5G be able to handle heterogeneous networks, which do not use the same frequency bands? Will we be able to communicate using this network without disturbing these other networks? How can we increase reliability and reduce transmission times?

Several relevant solutions have already been discussed, particularly in the context of the METIS European project (see box). The use of new, higher frequency bands, such as the 60-80 GHz bands, is certainly an option. Another solution would be to use the space remaining in the spectrum around the bands already in use (Wi-Fi, Bluetooth, etc.), without interfering with them, by using filters and designing new waveforms.

How will the 5G network be deployed?

FG: The initial development phase for 5G was completed with the end of the projects in the 7th Framework R&D Technological Program (FP7), and particularly through the METIS project in April 2015. The second phase is being facilitated by the H2020 projects, which are aimed at completing the pre-standardization work by 2017-2018. The standardization phase is then expected to last 2-3 years, and 2020 could very well mark the beginning of the 5G industrialization phase.

 

Find out more about Institut Mines-Télécom and France Brevets’ commitment to 5G

[box type=”shadow” align=”” class=”” width=””]

The METIS European project

The METIS project (Mobile and wireless communications Enablers for the Twenty-twenty Information Society) was one of the flagship projects of the 7th Framework R&D Technological Program (FP7) aimed at supporting the launch of 5G. It was completed in April 2015 and brought together approximately 30, primarily European, industrial and academic partners, including IMT Atlantique. METIS laid the foundations for designing a comprehensive system to respond to the needs of the 5G network by coordinating the wide variety of uses and the different technical solutions that will need to be implemented.

The continuation of the project is part of the Horizon 2020 framework program. The METIS-II project, coordinated by the 5G-PPP (the public-private partnership that brings together telecommunications operators), is focused on the overall system for 5G. It will integrate contributions from other H2020 projects launched in July 2015, such as COHERENT and FANTASTIC-5G, each focused on specific aspects of 5G. The COHERENT project, in which Eurecom is participating (including Navid Nikaein), is developing a programmable cellular network. The FANTASTIC-5G project, with the participation of IMT Atlantique under the leadership of Catherine Douillard, is studying, over a two-year period, issues related to the physical layer (signal processing, coding, implementation, waveforms, network access protocols, etc.) for frequencies under 6 GHz.

Find out more about the METIS / METIS-II project[/box]