
Aerosol therapy: An ex vivo model of lungs

A researcher in Health Engineering at Mines Saint-Étienne, Jérémie Pourchez, and his colleagues at the Saint-Étienne University Hospital have developed an ex vivo model of lungs to help improve medical aerosol therapy devices. An advantage of this technology is that scientists can study inhalation therapy whilst limiting the need for animal testing.

 

This article is part of our dossier “When engineering helps improve healthcare”.

Imagine a laboratory where, sat on top of the workbench, is a model of a human head made using 3D printing. This anatomically correct replica is connected to a pipe which mimics the trachea, and then to a vacuum chamber. Inside this chamber is a pair of lungs. Periodically, a pump simulates the natural movement of the rib cage to induce breathing, and the lungs inflate and deflate. This physical model of the respiratory system might sound like science fiction, but it is actually located in the Health Engineering Center at Mines Saint-Étienne.

Developed by a team led by Jérémie Pourchez, a specialist in inhaled particles and aerosol therapy, as part of the ANR AMADEUS project, this ex vivo model simulates breathing and the illnesses associated with it, such as asthma or fibrosis. For the past two years, the team have been working on developing and validating the model, and have already created both an adult and a child-sized version of the device. By offering an alternative, less expensive solution to animal testing, the device has allowed the team to study drug delivery using aerosol therapy. The ex vivo pulmonary model is also far more ethical, as it only uses organs which would otherwise be thrown away by slaughterhouses.

Does this ex vivo model work the same as real lungs?

One of the main objectives for the researchers is to prove that these ex vivo models can accurately predict what happens inside the human body. To do this, they must demonstrate three things.

First, the researchers need to study the respiratory physiology of the models. By using the same indicators that are used on real patients, such as those used for measuring the respiratory parameters of an asthmatic, the team demonstrate that the model’s parameters are the same as those of a human being. After this, the team must analyze the model’s ventilation; for example, by making sure that there are no obstacles in the bronchi. To do this, the ex vivo lungs inhale a radioactive isotope of krypton, which is then used as a tracer to visualize the airflow throughout the model. Finally, the team study aerosol deposition in the respiratory tract, which involves observing where inhaled particles settle when using a nebulizer or spray. Again, this is done by using radioactive materials and nuclear medical imaging.

These results are then compared to results you would expect to see in humans, as defined by scientific literature. If the parameters match, then the model is validated. However, the pig’s lungs used in the model behave like the lungs of a healthy individual. This poses a problem for the team, as the aim of their research is to develop a model that can mimic illnesses so they can test the effectiveness of aerosol therapy treatments.

From a healthy model to an ill one

There are various pathologies that can affect someone’s breathing, and air pollution tends to increase the likelihood of contracting one of them. For example, fibrosis damages the elastic fibers that help our lungs to expand when we breathe in. This makes the lung more rigid and breathing more difficult. In order to mimic this in the ex vivo lungs, the organs are heat treated with steam to stiffen the surface of the tissue. This then changes their elasticity, and recreates the mechanical behavior of human lungs with fibrosis.

Other illnesses such as cystic fibrosis occur, amongst other things, due to the lungs secreting substances that make it difficult for air to travel through the bronchi. To recreate this, the researchers insert artificial secretions made from thickening agents, which allows the model lung to mimic these breathing difficulties.

Future versions of the model

Imitating these illnesses is an important first step. But in order to study how aerosol therapy treatments work, the researchers also need to observe how the drugs diffuse into the bloodstream. This diffusion can be either an advantage or a drawback, since the lungs are also an entry point through which a drug can spread into the rest of the body. To study it, the research team installed one last tool: a pump to simulate a heartbeat. “The pump allows fluid to circulate around the model in the same way that blood circulates in lungs,” explains Jérémie Pourchez. “During a test, we can then measure the amount of inhaled drug that will diffuse into the bloodstream. We are currently validating this improved model.”

One challenge the team is now tackling is the development of new systemic inhalation treatments. These are designed to treat illnesses in other organs, but are inhaled and use the lungs as an entry point into the body. “A few years ago, an insulin spray was put on the market,” says Jérémie Pourchez. Insulin, which is used to treat diabetes, needs to be injected regularly. “This would be a relief for patients suffering from the disease, as it would replace these injections with inhalations. But the drug also requires an extremely precise dose of the active ingredient, and obtaining this dose of insulin using an aerosol remains a scientific and technical challenge.”

As well as being easier to use, an advantage of inhaling a treatment is how quickly the active ingredient enters into the bloodstream. “That’s why people who are trying to quit smoking find that electronic cigarettes work better than patches at satisfying nicotine cravings”, says the researcher. The dose of nicotine inhaled is deposited in the lungs and is diffused directly into the blood. “It also led me to study electronic cigarette-type devices and evaluate whether they can be used to deliver different drugs by aerosol,” explains Jérémie Pourchez.

By modifying certain technical aspects, these devices could become aerosol therapy tools, and be used in the future to treat certain lung diseases. This is one of the ongoing projects of the Saint-Étienne team. However, there is still substantial research that needs to be done into the device’s potential toxicity and its effectiveness depending on the inhaled drugs being tested. Rather than just being used as a tool for quitting smoking, the electronic cigarette could one day become the newest technology in medical aerosol treatment.

Finally, this model will also provide an answer to another important issue surrounding lung transplants. When an organ is donated, it is up to the biomedicine agency to decide whether this donation can be given to the transplant teams. But this urgent decision may be based on factors that are sometimes insufficient to assess the real quality of the organ. “For example, to assess the quality of a donor’s lung,” says Jérémie Pourchez, “we refer to important data such as smoking, age, or the donor’s known illnesses. Therefore, our experimental device, which makes lungs breathe ex vivo, can be used as a tool to more accurately assess the quality of the lungs when they are received by the transplant team. Then, based on the tests performed on the organ that is going to be transplanted, we can determine whether it is safe to perform the operation.”

Photograph of a calcium phosphate-based bone implant designed by David Marchat’s team.

Bone implants to stimulate bone regeneration

Mines Saint-Étienne’s Centre for Biomedical and Healthcare Engineering (CIS) seeks to improve healthcare through innovations in engineering. David Marchat, a materials researcher at CIS, is working on developing calcium phosphate-based biomaterials. Due to their ability to interact with living organisms, these bone implants can help regenerate bones.

 

This article is part of our dossier “When engineering helps improve healthcare”.

New-generation bone implants are one of David Marchat’s areas of research. The Mines Saint-Étienne chemist is developing a calcium phosphate-based biomaterial which can interact with living organisms and stimulate bone regeneration. Although calcium phosphate-based bone substitutes have been used in health systems for decades, their action has remained limited.

“Our research focuses on the need for a bio-instructive implant, meaning one that is able to tell cells how to rebuild the bone and facilitate its vascularization.” In practice, this includes two major aspects: working on the chemical composition of calcium phosphate, and on the architecture of the implant. This is important, since existing implants are only able to regenerate small bone defects (less than 1 cm3).

When the bone is damaged too badly, it is not able to rebuild itself. A structural bone graft is then required, either from a donor (allograft), or from the patient (autograft). If the graft comes from a donor — primarily from bone banks — there is a risk that the remaining proteins cause an inflammatory response, infection or rejection. This is not the case if the graft is taken directly from another part of the patient’s body, but there is a limited quantity of harvestable bone structure. Furthermore, this involves two successive operations for the patient and the partial or complete loss of a bone.

Synthetic materials such as calcium phosphate-based biomaterials avoid these constraints. Moreover, since calcium phosphates form the mineral part of the bone, they are generally well-tolerated (i.e., non-toxic). The bone implants developed by the team meet a number of needs. The shape is personalized so as to correspond to the nature of the patient’s bone defect. This close contact between the bone margin and the implant facilitates the migration of fluids, tissues and cells into the implant, while also facilitating their regeneration. The overall architecture, which ranges from the macro-scale (greater than 100 micrometers) to the nano-scale (less than 1 micrometer), is designed to “guide” this regeneration.

In addition, the composition of the calcium phosphate powder is optimized to provide chemical elements that contribute to bone formation. “We had to invent new tools and processes, in order to synthesize calcium-phosphate based powders, analyze them and then make customized bone implants,” explains David Marchat.

A new architecture

When bone tissue is weakened, a process is initiated to regenerate it, in which two types of cells act together. The first type of cell is responsible for breaking down the damaged tissue in order to recover elements that can be used by the second type to rebuild the tissue. This second group of cells weaves a fabric of collagen fibers – the extracellular matrix – which they then mineralize by precipitating calcium phosphate crystals. In order for the implant to effectively contribute to this bone remodeling process, the cells, blood vessels, and more generally, the new tissues, must be able to colonize it and eventually replace it.

The bone implants are modeled on several architectural levels, with pores of various sizes to promote bone regeneration and vascular penetration. At the macro-scale level, the smallest pores (less than 150 micrometers) confine the cells responsible for bone regeneration and stimulate their activity. Meanwhile, the largest pores (greater than 500 micrometers) allow for greater colonization by bone cells and blood vessels. “The combination of macropores of various sizes, which makes it possible to increase permeability and confinement,” explains David Marchat, “is essential to new bone regeneration strategies.”

To obtain the desired architecture, the researchers have developed a method based on pouring a calcium phosphate suspension — a liquid mixture containing calcium phosphate powder, water and chemical stabilizers — into a wax mold, made using 3D printing. This “impregnated” mold is then dried and heat-treated at different temperatures in order to eliminate the wax mold and consolidate the implant.

“Another question that inevitably arises,” adds the chemist, “is, for a given application, how long the structure has to remain in the body so that the bone has time to regenerate itself.” If the implant deteriorates too slowly, it will block bone formation. But if, on the other hand, it deteriorates too quickly, it will not be able to serve as a scaffold for bone formation. “It’s hard to estimate the right balance between the two.”

An ambitious engineering project for the testing phase

There are currently two ways to carry out biological assessments of these bone implants, which are used at different stages of testing. Standard in vitro cultures have the advantage of allowing for direct observation through a microscope but do not reproduce real conditions inside the body. In vivo experimentation with implantation inside an animal recreates these conditions more closely and provides what is referred to as a “physiological” environment, although animal and human physiology are different, and it is difficult to access the information. These testing stages are crucial, but researchers would like to move away from animal testing.

This is precisely the aim of an ambitious project: developing a 3D bioreactor with human cells to mimic human physiology. Such a structure would provide conditions equivalent to the human body and allow for direct observations, thereby making animal testing unnecessary for the majority of validation phases for medical devices or medications. This project calls for expertise in fluid mechanics and a greater understanding of the human body and its workings. Other research topics in healthcare engineering have a similar aim. One such example is organs-on-chips, microfluidic chips that act as artificial cell cultures to simulate the inner workings and physiological activity of human organs.


An “electronic nose” analyzes people’s breath to help sniff out diseases

In partnership with IMT Atlantique, a team of researchers at IMT Lille Douai has developed a device which can measure the level of ammonia in someone’s breath. The aim is to use this artificial nose to provide personalized follow-up care for patients affected by chronic kidney disease. Eventually, the device could even allow doctors to detect the disease in undiagnosed people.

 

This article is part of our dossier “When engineering helps improve healthcare”.

In the human body, the kidneys’ main role is to remove toxins which are carried in the blood. However, when a person suffers from chronic kidney disease, this filtration function no longer works to the same standard. In France, the disease affects around 5.7 million people, with severity ranging from mild impairment to the terminal stage. Around 76,000 people in France have reached this terminal stage, which means that their kidneys cannot filter their blood at all.

For these patients, the only options are to either wait for a transplant or face extensive treatment which hugely affects their daily life. It is therefore essential for doctors to be able to detect this silent and progressive disease early enough to slow down the effects. At the moment, doctors use blood or urine tests to identify the disease. But, to make it easier to diagnose, scientists are exploring another route: breath analysis. Studying the substances in the air we breathe out can provide valuable information about a person’s health.

Ammonia: a key element

In order to make this possible, two teams of researchers from IMT Lille Douai and IMT Atlantique have been working in partnership with the nephrology department at the Lille University Hospital. For the past three years, they have been developing a compact and turnkey device for doctors. The device is an ‘electronic nose’, a system made up of several sensors that can measure the specific concentration of a substance in someone’s breath.

In the case of chronic kidney disease, the substance being measured is ammonia. Ammonia is mainly produced by intestinal bacteria and is supposed to be filtered from the body by the kidneys. Previous studies have established a concentration threshold for levels of ammonia in a person’s breath, which doctors can then use to determine the likelihood that a patient has chronic kidney disease.

Ammonia offers some resistance

But how can you measure this compound using a portable device? The scientists used a series of sensors which react in the presence of ammonia, as Caroline Duc, team member and researcher at IMT Lille Douai, explains. “The sensors are made from two electrodes with a sensitive surface placed on top of them. The resistance of this surface varies depending on the amount of ammonia present”. In use, the device’s electrical resistance increases when ammonia is present and returns to its initial value when the ammonia disappears. This therefore allows scientists to measure the level of the molecule in a patient’s breath.
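As a concrete illustration of how such a resistive response can be turned into a number, the sketch below computes the relative resistance change with respect to a clean-air baseline. It is a minimal example rather than the team’s actual processing chain, and the values and names are hypothetical.

```python
def relative_resistance_change(r_baseline, r_exposed):
    """Relative change (delta R / R0) of a chemiresistive sensor.

    r_baseline: resistance measured in clean air (ohms)
    r_exposed: resistance measured during exposure to breath (ohms)
    Returns a dimensionless response; per the article, a larger increase
    suggests more ammonia in the sample.
    """
    return (r_exposed - r_baseline) / r_baseline

# Hypothetical readings from one sensor, in ohms
print(relative_resistance_change(12_000.0, 13_800.0))  # 0.15, i.e. a 15 % increase
```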

Additionally, each sensor in the artificial nose has a unique composition. As Caroline Duc points out, “it is very complicated to find a material whose resistance varies only in the presence of a single type of gas”. This is why the scientists decided to increase the accuracy of the analysis by combining several different sensors which have different responses to ammonia.

During the tests, the electronic nose was periodically exposed to a person’s breath. This resulted in an increase in the resistance when ammonia was present, and a decrease when the sensors were no longer exposed to the exhaled air.  Several factors were then measured and analyzed using statistical processing algorithms.

These algorithms rely on tools such as machine learning. The particularity in this case was that these tools were applied to small amounts of data, using supervised learning to categorize the different types of breath. In other words, the algorithms were trained on a dataset in which each breath sample had already been labeled as coming from a healthy, ill or ‘uncertain’ individual. New profiles were then passed through the algorithm, so that they could be classified into these three categories.
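The sketch below illustrates this kind of small-data, supervised classification. It is not the team’s actual code or data: the sensor responses, labels and choice of classifier are all hypothetical, and simply show how labeled multi-sensor profiles can be used to assign a new breath sample to one of the three categories.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical training set: each row is one breath sample, each column the
# response (e.g. relative resistance change) of one sensor in the array.
X_train = np.array([
    [0.02, 0.01, 0.03, 0.02],   # labeled "healthy"
    [0.15, 0.12, 0.18, 0.14],   # labeled "ill"
    [0.08, 0.06, 0.09, 0.07],   # labeled "uncertain"
    [0.03, 0.02, 0.02, 0.03],
    [0.17, 0.14, 0.16, 0.15],
    [0.07, 0.08, 0.06, 0.08],
])
y_train = ["healthy", "ill", "uncertain", "healthy", "ill", "uncertain"]

# A simple nearest-neighbour classifier is one reasonable choice when
# only a small amount of labeled data is available.
model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=3))
model.fit(X_train, y_train)

# A new, unlabeled breath profile is assigned to one of the three categories.
new_profile = np.array([[0.14, 0.11, 0.15, 0.13]])
print(model.predict(new_profile))  # e.g. ['ill']
```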

Breath: a complex mixture

To date, the first prototype developed is around 15 cm in length and contains between 10 and 13 sensors. The device was tested in a laboratory using artificial breath, which allowed the teams to verify that it could distinguish a healthy individual from someone who was ill, using the criteria defined in up-to-date scientific literature. Then, experiments were carried out in clinics with patients suffering from chronic kidney disease. The idea was to measure the concentration of ammonia in their breath, before and after dialysis. The results demonstrated that there was a reduction in ammonia after the treatment.

However, they also highlighted the limitations of using a single marker. Before dialysis, some patients had levels of ammonia that were similar to those of a healthy person. Measuring the amount of ammonia in a person’s breath therefore did not give a reliable diagnosis for chronic kidney disease. This has led scientists to launch a new study to identify other biomarkers characteristic of the disease.

More generally, this reflects the difficulty of conducting a reliable breath analysis, since exhaled air also contains compounds from the ambient air we breathe in. “Initially, I think clinical breath analysis will be developed for personalized patient follow-up care, rather than for diagnosis”, says Caroline Duc. For example, by measuring how a patient’s ammonia levels change in response to medication, doctors will be able to monitor the effectiveness of a treatment and then adapt it based on the results.

What is the future of follow-up care for other illnesses?

Researchers at IMT Lille Douai will continue to work on improving the electronic nose. At the moment, patients’ breath is collected and sealed in an airtight bag and is then analyzed in a laboratory. Consequently, the team’s aim is to develop a functional and completely autonomous prototype which would give doctors real-time results. However, this raises several new issues, such as the study of fluid flows, controlling the speed at which the air is exhaled, etc. As well as this, to improve their data analysis, Caroline Duc and her colleagues have started a partnership with researchers who specialize in data handling at Télécom SudParis.

Moreover, the team is involved in a European-wide project which aims to identify biomarkers for lung cancer, and then create a multi-sensor system that is specifically designed to detect these substances. IMT Lille Douai’s expertise will be especially useful for this second objective.

This electronic nose, capable of sniffing out illnesses such as chronic kidney disease, is therefore still in the early stages of development, and needs a lot of work before it can really be used by doctors and their patients. But doctors are waiting with bated breath; in several years’ time it could be a breathtaking medical innovation!

Article written (in French) for I’MTech by Bastien Contreras


Mendeleev: The history of a table

2019 marks the 150th anniversary of the periodic table of elements. To celebrate this anniversary, the Mines ParisTech Library and Mineralogy Museum have teamed up to create the exhibition Before Mendeleev: Genesis of a Table, on view until 31 January 2020. The exhibition traces the contributions of the scientists who preceded Mendeleev and led him to present the periodic table of elements, which has since served as a reference for all scientists and students.

 

To celebrate the 150th anniversary of the periodic table of elements, Mines ParisTech is presenting the exhibition Before Mendeleev: Genesis of a Table until 31 January 2020. Visitors have the opportunity to discover the scientists who contributed to formulating this classification and to developing knowledge over the years. Amélie Dessens is a curator at the library and head of Mines ParisTech’s heritage collections and Sarah Hijmans is a PhD student at the Université de Paris’ SPHère laboratory. They created the exhibition in collaboration with Didier Nectoux, curator at the Mineralogy Museum, to showcase and share the rich cultural collections of the school’s library and museum. “It’s this type of exhibition, along with school partnerships,” says Didier Nectoux, “that allows us to keep this heritage alive outside of the school.” He adds, “this rich heritage must be preserved and shared. And these collections are still essential today. The transformations of the 21st century are driving us to study new possibilities to find alternatives, and we need documentation, archives, in order to know which avenues have already been studied and abandoned, and the reasons why.”

The exhibition, which is presented in chronological order, starts on the doorstep of the Library with the beginnings of the study of elements: alchemy. “The alchemists were not just interested in turning lead into gold,” explains Amélie Dessens. “Beyond the esoteric sense with which alchemy is often associated today, it was also – and more importantly – the beginning of chemistry and of identifying the elements that are presented here.” In display cases, eight minerals accompany the works: the first seven elements identified, and bismuth, the earliest written record of which dates from 1558, by the German scholar Georg Agricola. However, it was already well known in European mining centers before that date. This also demonstrates the importance of accompanying discoveries with publication, which is crucial to situating knowledge in time.

A long road to developing the table

From Bergman to Lavoisier and Döbereiner to Newlands, a series of display cases presents the various steps of the advances, decisions and research that shaped the study and classification of the elements. First, there was Lavoisier, who brought about a true chemical revolution by introducing a scientific method to prove his theories, proposing the first classification of the “33 simple substances,” and working with Berthollet to develop a chemical nomenclature, which made it possible for everyone to use the same names for the elements. The second major turning point came in the 1860s, when scientists realized that elements with similar chemical properties could be grouped according to their atomic weights. They thus started to classify them based on these criteria and proposed potential classification formats, which are presented in the exhibition through diagrams, notes and publications.

For example, there was Alexandre-Émile Béguyer de Chancourtois, geologist, mineralogist and professor at the École des Mines de Paris, who made a significant contribution in 1862. He was the first to demonstrate the principle of periodicity through a spiral-shaped classification: the telluric screw. “Mendeleev was not the first to demonstrate periodicity, or to indicate where the missing elements should be placed in the table,” explains exhibition curator Amélie Dessens, “but unlike the others, he dared to predict the properties of the missing elements.” Dmitri Mendeleev published his table in 1869. When gallium was discovered in 1875, confirming his predictions, the news spread throughout the scientific community. It was at this point that Mendeleev’s classification would make its mark in history and earn its place in our textbooks.


Do mobile apps for kids respect privacy rights?

The number of mobile applications for children is rapidly increasing. An entire market segment is taking shape to reach this target audience. Just as with adults, the issue of personal data applies to these younger audiences. Grazia Cecere, a researcher in the economics of privacy at Institut Mines-Télécom Business School, has studied the risk of infringing on children’s privacy rights. In this interview, she shares the findings from her research.

 

Why specifically study mobile applications for children?

Grazia Cecere: A report from the NGO Common Sense reveals that 98% of children under the age of 8 in the United States use a mobile device. They spend an average of 48 minutes per day on the device. That is huge, and digital stakeholders have understood this. They have developed a market specifically for kids. As a continuation of my research on the economics of privacy, I asked myself how the concept of personal data protection applied to this market. Several years ago, along with international researchers, I launched a project dedicated to these issues. The project was also made possible by the funding of Vincent Lefrere’s thesis within the framework of the Futur & Ruptures program.

Do platforms consider children’s personal data differently than that of adults?

GC: First of all, children have a special status within the GDPR in Europe (General Data Protection Regulation). In the United States, specific legislation exists: COPPA (Children’s Online Privacy Protection Act). The FTC (Federal Trade Commission) handles all privacy issues related to users of digital services and pays close attention to children’s rights. As far as the platforms are concerned, Google Play and App Store both have Family and Children categories for children’s applications. Both Google and Apple have expressed their intention to separate these applications from those designed for adults or teens and ensure better privacy protection for the apps in these categories. In order for an app to be included in one of these categories, the developer must certify that it adheres to certain rules.

Is this really the case? Do apps in children’s categories respect privacy rights more than other applications?

GC: We conducted research to answer that question. We collected data from Google Play on over 10,000 mobile applications for children, both within and outside the category. Some apps choose not to certify and instead use keywords to target children. We checked whether each app collects telephone numbers, location or usage data, and whether it accesses other information on the telephone. We then compared the different apps. Our results showed that, on average, the applications in the children’s category collect less personal data and respect users’ privacy more than those targeting the same audience outside the category. We can therefore conclude that, on average, the platforms’ categories specifically dedicated to children reduce the collection of data. On the other hand, our study also showed that a substantial portion of the apps in these categories collect sensitive data.
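A minimal sketch of the kind of comparison described above, with entirely hypothetical data and column names (the study’s real dataset covered more than 10,000 apps and many more attributes): each app is flagged for the types of data it collects, and average collection rates are then compared between apps inside and outside the children’s category.

```python
import pandas as pd

# Hypothetical sample: one row per app, boolean flags for the data it collects.
apps = pd.DataFrame({
    "in_children_category": [True, True, True, False, False, False],
    "collects_phone_number": [False, False, True, True, True, False],
    "collects_location":     [False, True, False, True, True, True],
    "collects_usage_data":   [True, False, True, True, True, True],
})

# Share of apps collecting each type of data, inside vs. outside the category.
summary = apps.groupby("in_children_category").mean()
print(summary)
```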

Do all developers play by the rules when it comes to protecting children’s personal data?

GC: App markets ask developers to provide their location. Based on this geographical data, we searched to see whether an application’s country of origin influenced its degree of respect for users’ privacy. We demonstrated that if the developer is located in a country with strong personal data regulations—such as the EU, the United States and Canada—it generally respects user privacy more than a developer based in a country with weak regulation. In addition, developers who choose not to provide their location are generally those who collect more sensitive data.

Are these results surprising?

GC: In a sense, yes, because we expected the app market to play a role in respecting personal data. These results raise the question of the extra-territorial scope of the GDPR, for example. In theory, whether an application is developed in France or in India, if it is marketed in Europe, it must respect the GDPR. However, our results show that for developers based in countries with weak regulation, the weight of the legislation in the destination market is not enough to change local practices. I must emphasize that offering an app to all countries is extremely easy—it is even encouraged by the platforms, which makes it even more important to pay special attention to this issue.

What does this mean for children’s privacy rights?

GC: The developers are the owners of the data. Once personal data is collected by the app, it is sent to the developer’s servers, generally in the country where they are located. The fact that foreign developers pay less attention to protecting users’ privacy means that the processing of this data is probably also less respectful of this principle.

 


Robots on their best behavior in the factory of the future

A shorter version of this article was published in the monthly magazine Acteurs du franco-allemand, as part of an editorial partnership.


Robots must learn to communicate better if they want to earn their spot in the factory of the future. This will be a necessary step in ensuring the autonomy and flexibility of production systems. This issue is the focus of the German-French Academy for the Industry of the Future’s SCHEIF project. More specifically, researchers must choose appropriate forms of communication technology and determine how to best organize the transmission of information in a complex environment.

 

“The industrial system is monolithic for robots. They are static and specialized for a single task, and it is impossible for us to change their specialization based on the environment.” This observation was the starting point for the SCHEIF[1] project. SCHEIF, conducted in the framework of the German-French Academy for the Industry of the Future, seeks to allow robots to adapt more easily to function changes. To achieve this, the project brings together researchers from EURECOM, the Technical University of Munich (TUM) and IMT Atlantique. The researchers’ goal is “to create a ‘plug and play’ robot that can be deployed anywhere, easily understand its environment, and quickly interact with humans and other robots,” explains Jérôme Härri, a communications researcher with EURECOM participating in this project.

The robots’ communication capacities are particularly critical in achieving this goal. In order to adapt, they must be able to effectively obtain information.  The machines must also be able to communicate their actions to other agents—both humans and robots—in order to integrate into their environment without disruption. Without these aspects, there can be no coordination and therefore no flexibility.

This is precisely one of the major challenges of the SCHEIF project, since the industrial environment imposes numerous constraints on machine communications. They must be fast in the event of an emergency, and flexible enough to prioritize information based on its importance for production chain safety and effectiveness. They must also be reliable, given the sensitivity of the information transmitted. The machines must also be able to communicate over the distances of large factories, not just a few meters. They must combine speed, transmission range, adaptability and security.

Solving the technology puzzle

“The solution cannot be found in a single technology,” Jérôme Härri emphasizes. Sensor network technologies, for example, like Sigfox and LoRa, which are dedicated to connected objects, offer high reliability and a long range, but cannot communicate directly with each other. “There must be a supervisor in charge of the interface, but if it breaks down, it becomes problematic, and this affects the robustness criterion for the communications,” the researcher adds. “Furthermore, this data generally returns to the operator of the network base stations, and the industrialist must subscribe to a service in order to obtain it.”

On the other hand, 4G provides the reliability and range, but not necessarily the speed and adaptability needed for the industry of the future. As for 5G, it provides the required speed and offers the possibility of proprietary systems. This would free industrialists from the need to go through an operator. However, its reliability in an industrial context is still under specification.

Faced with this puzzle, two main approaches emerge. The first is based on increasing the interoperability and speed of sensor technology. The second is based on expanding 5G to meet industrial needs, particularly by providing it with features similar to those of sensor technologies.  The researchers chose this second option. “We are improving 5G protocols by examining how to allocate the network’s resources in order to increase reliability and flexibility,” says Jérôme Härri.

To achieve this, the teams of French and German researchers can draw on extensive experience in vehicular communication, which uses 4G and 5G networks to solve transport and mobility issues. The cellular technology used for vehicles has the advantage of featuring a cooperative scheduling specification. This information system feature decides who should communicate a message and at what time. A cooperative scheduler is essential for fleets of vehicles on a highway, just like fleets of robots used in a factory. It ensures that all robots follow the same rules of priority. For example, thanks to the scheduler, information that is urgent for one robot is also urgent for the others, and all the machines can react to free the network from traffic and prioritize this information. “One of our current tasks is to develop a cooperative scheduler for 5G adapted to robots in an industrial context,” explains Jérôme Härri.
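The toy sketch below illustrates the idea of a cooperative scheduler: every robot applies the same shared priority table, so an urgent message is transmitted before routine traffic regardless of which machine produced it. This is a simplified illustration, not the SCHEIF implementation; the message types and priority levels are hypothetical.

```python
import heapq
from dataclasses import dataclass, field
from itertools import count

# Lower number = higher priority; the same table is shared by every robot,
# which is what makes the scheduling "cooperative".
PRIORITY = {"emergency_stop": 0, "safety_alert": 1, "status_update": 2, "telemetry": 3}

@dataclass(order=True)
class Message:
    priority: int
    seq: int                        # tie-breaker keeps FIFO order within a priority level
    sender: str = field(compare=False)
    kind: str = field(compare=False)

class CooperativeScheduler:
    """Decides which pending message is transmitted next, using shared rules."""
    def __init__(self):
        self._queue = []
        self._seq = count()

    def submit(self, sender, kind):
        heapq.heappush(self._queue, Message(PRIORITY[kind], next(self._seq), sender, kind))

    def next_transmission(self):
        return heapq.heappop(self._queue) if self._queue else None

sched = CooperativeScheduler()
sched.submit("robot_A", "telemetry")
sched.submit("robot_B", "emergency_stop")
sched.submit("robot_C", "status_update")
print(sched.next_transmission().kind)  # emergency_stop is transmitted first
```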

Deep learning for added flexibility

Although the machines can rely on a scheduler to know when to communicate, they still must know which rules to follow. The goal of the scheduler is to bring order to the network, to prevent network saturation, for example, and collisions between data packets. However, it cannot determine whether or not to authorize a communication solely by taking communication channel load into account. This approach would mean blindly communicating information: a message would be sent when space is available, without any knowledge of what the other robots will do. Yet in critical networks, the goal is to plan for the medium term in order to guarantee reliability and reaction times. When robots move, the environment changes. It must therefore be possible to predict whether all the robots will start suddenly communicating in a few seconds, or if there will be very few messages.

Deep learning is the tool of choice for teaching networks and machines how to anticipate these types of circumstances. “We let them learn how several moving objects communicate by using mobility datasets. They will then be able to recognize similar situations in their actual use and will know the consequences that can arise in terms of channel quality, or number of messages sent,” the researcher explains. “It is sometimes difficult to ensure learning datasets will match the actual situations the network will face in the future. We must therefore add additional learning on the fly during use. Each decision taken is analyzed. System decisions therefore improve over time.”
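As a schematic illustration of this learning step (not the EURECOM/TUM model, and using synthetic data), the sketch below fits a small neural network that predicts the channel load expected in the next time slot from the last few observed slots. With such a forecast, an agent could defer low-priority traffic before the channel saturates.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic trace standing in for a mobility dataset: periodic bursts of
# messages (robots converging, then dispersing) plus noise.
t = np.arange(2000)
load = 5 + 4 * np.sin(2 * np.pi * t / 50) + rng.normal(0, 0.5, t.size)

# Supervised pairs: the last 10 observed slots -> the load in the next slot.
window = 10
X = np.stack([load[i:i + window] for i in range(len(load) - window)])
y = load[window:]

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X[:-200], y[:-200])          # learn from past behavior ...
predicted = model.predict(X[-200:])    # ... then forecast the most recent slots
print(f"mean absolute error: {np.mean(np.abs(predicted - y[-200:])):.2f} messages")
```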

The initial results on this use of deep learning to optimize the network have been published by the teams from EURECOM and the Technical University of Munich. The researchers have succeeded in organizing communication between autonomous mobile agents in order to prevent collisions between the transmitted data packets. “More importantly, we were able to accomplish this without each robot being notified of whether the others would communicate,” Jérôme Härri adds. “We succeeded in allowing one agent to anticipate when the others will communicate based solely on behavior that preceded communication in the past.”

The researchers intend to pursue their efforts by increasing the complexity of their experiments to make them more like actual situations that occur in industrial contexts. The more agents, the more the behavior becomes erratic and difficult to predict. The challenge is therefore to enable cooperative learning. This would be a further step towards fully autonomous industrial environments.

[1] SCHEIF is an acronym for Smart Cyber-physical Environments for Industry of the Future.

 


Optics as a key to understanding rogue waves

Rogue waves are powerful waves that erupt suddenly. They are rare, but destructive. Above all, they are unpredictable. Surprisingly, researchers have been able to better understand these fascinating waves by studying similar phenomena in fiber optic lasers.

 

Before scientists began measuring and observing them, rogue waves had long been perceived as legends. They can reach a height of 30 meters, forming a wall of water facing ships. French explorer Dumont d’Urville faced one such wave in the southern hemisphere. More recently, in 1995, the commander of the transatlantic liner Queen Elizabeth 2 described a wave as a “solid wall of water”, adding that he felt he was sailing the boat “straight into the cliffs of Dover”. These waves are also a major cause of containers lost at sea.

A genius idea

But the rare and unpredictable nature of these mysterious waves makes them difficult to study and nearly impossible to predict. Tests have been conducted in specially designed pools, but the resulting waves are much smaller and do not sufficiently reflect reality. Theoretical models, on the other hand, are not accurate enough.

However, in 2007, Daniel Solli and his team from the University of California had the genius idea of comparing the propagation of ocean waves with that of light pulses in optical fibers. Ocean waves and light pulses are in fact both wave phenomena and are subject to the same laws of physics. And it is much easier to study light pulses, since all the parameters can be easily controlled: wavelength, intensity, the type of fiber used, etc. Furthermore, we can study thousands of pulses per second, making it possible to observe rare events.
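The article does not give the mathematical form of this analogy, but the comparison is commonly expressed through the nonlinear Schrödinger equation, which, in its standard dimensionless form, governs both the envelope of deep-water wave groups and the envelope of light pulses in an optical fiber:

\[
i\,\frac{\partial A}{\partial z} + \frac{1}{2}\,\frac{\partial^{2} A}{\partial t^{2}} + |A|^{2}A = 0
\]

Here A is the slowly varying envelope of either the water surface or the optical field, z the propagation distance and t the time measured in a frame moving at the group velocity. Localized solutions of this equation that grow from small perturbations and then vanish are among the mechanisms studied as prototypes of rogue waves.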

Real time

Now, a group of researchers including Arnaud Mussot from IMT Lille Douai has published an article on this subject in the scientific journal Nature Physics, describing the research on the analogies between oceanography and optics to better understand rogue waves.

“Many experiments have been conducted in optics,” Arnaud Mussot explains. “For these experiments, we sent laser pulses into optical fibers and we analyzed the speed of these pulses at the output of the fiber. These observations were made in real time, over extremely short periods of time – a few tens of femtoseconds, which is much less than a billionth of a millisecond.”

These experiments showed that there are several ways to create rogue waves. One of the most effective methods is to make several waves crash into each other. But only wave collisions from certain angles, directions and amplitudes generate rogue wave phenomena. However, these experiments do not provide all the answers, since some of the more powerful rogue waves predicted by theorists have not yet been observed.

Predicting rogue waves

In the long term, a better understanding of rogue waves should make it possible to better predict them and prevent certain accidents. “Certain companies are currently developing radar that can map the state of the sea, which can be taken on a boat,” Arnaud Mussot explains. “This data is sent into a computing program that predicts what will happen in the sea in the next few minutes. The ship can then modify its course to avoid a rogue wave or mitigate its effects. The more we improve our knowledge and calculations, the more we will succeed in predicting these waves in advance.”

This research also benefits other fields, such as optics. It offers a better understanding of the start-up of high-power lasers and certain tasks the lasers perform, for which the characteristics vary as the power of the laser increases.

Article written in French by Cécile Michaut for I’MTech.

 


Fine against Facebook: How the American FTC transformed itself into “super CNIL”

Article written in partnership with The Conversation
Winston Maxwell, Télécom Paris – Institut Mines-Télécom


The US consumer protection regulator has issued a record $5 billion fine to Facebook for personal data violations. This fine is by far the largest ever issued for a personal data violation. Despite some members of the US Congress saying that this is not enough, the sanction has allowed the FTC to become the most powerful personal data protection regulator in the world. And yet, the USA does not have a general data protection law.

To transform itself into a “super CNIL”, the American FTC relied on a 1914 text on consumer protection that forbids any unfair or deceptive practices in business. France has a similar law in its Consumer Code. In the USA, there is no federal equivalent of the CNIL, the French data protection authority. The FTC has therefore taken on this role.

It was not easy for the FTC to transform a general text on consumer protection into a law on personal data protection. The organization faced two obstacles. Firstly, it had to create a legal doctrine that was clear enough for businesses to understand what constitutes an unfair or deceptive practice in terms of personal data. Then, it had to find a way to impose financial sanctions, since the 1914 FTC Act did not include any.

Proceedings against Facebook

A Facebook office. Earthworm/Flickr, CC BY-NC-SA

To clearly define a deceptive personal data practice, the FTC created a legal doctrine that punishes any business that “fails to keep their promises” in terms of personal data. The FTC had started a first lawsuit against Facebook in 2011 accusing them of deceptive practices due to the discrepancy between what Facebook told consumers and how the company acted. To spot a deceptive practice, the FTC will examine each sentence of a company’s privacy policy to identify a promise, even an implied promise, that is not being kept.

An unfair practice is more difficult to prove, which explains why the FTC prefers the term ‘deceptive’ rather than ‘unfair’. The FTC considers an unfair practice to be any practice that would be both surprising and not easily avoidable for the average consumer.

The FTC Act does not allow the FTC to directly impose a financial penalty. To do this, it has to ask the US Department of Justice to file a lawsuit. To work around this issue, the FTC encourages settlement agreements. The FTC Act allows the regulator to directly impose sanctions in the event of a breach of these agreements. The most important thing for the FTC is to get an agreement signed at the time of a company’s first violation. This means that in the case of a second violation, the FTC is in a position of strength. The Facebook case follows this pattern. Facebook signed a settlement agreement with the FTC in 2012. The FTC has now found that Facebook violated this agreement by sharing personal data with Cambridge Analytica. The violation of the 2012 agreement allows the FTC to hit back strongly and negotiate a new agreement that will last 20 years, this time with a $5 billion fine.

Settlement agreements

If settlement agreements allow the FTC to increase its powers, why do companies sign them? By signing, companies put themselves in a weaker position and give the FTC a position of strength in the case of a second violation. However, most companies prefer to negotiate an agreement with the FTC instead of going to trial. As well as the large cost of a lawsuit and the negative effect it has on a company’s image, if a company loses a lawsuit to the US government, the door is then opened for other parties to sue it, in particular through consumer class action lawsuits. Companies fear the snowball effect. In addition, a settlement agreement with the FTC does not set a precedent, since the company does not admit guilt in the agreement. This means that the company can claim its innocence in other lawsuits.

As well as increasing the FTC’s sanctioning powers, the settlement agreements allow them to establish detailed requirements for personal data protection. A settlement agreement with the FTC can become a mini-GDPR and binds the company for 20 years, which is the usual duration for these agreements.

The new agreement states that Facebook must obtain the explicit consent of users before using their facial recognition data for any purpose, or before sharing their mobile phone number with a third party. The 2012 agreement already required Facebook to carry out impact assessments, and this obligation was reinforced in the 2019 agreement. The new agreement requires Facebook to put in place a committee of independent directors who will oversee the implementation of the agreement within the company. In addition, Facebook’s corporate bylaws will have to be changed to ensure that Mark Zuckerberg is not the sole person who can dismiss those in charge of managing these obligations. The new agreement requires Mark Zuckerberg to sign a personal declaration stating that the company will comply with the commitments made in the agreement. A false declaration would put Mr Zuckerberg at risk of criminal penalties, including imprisonment. Most importantly, the agreement obliges Facebook to document all its risk-reduction measures and to carry out an audit every two years using an independent auditor.

The 2012 agreement already included an audit every two years. Following the Cambridge Analytica investigation, the EPIC association obtained a copy of an audit carried out for the 2015-2017 period. The audit did not identify any abnormalities relating to data sharing with Cambridge Analytica and other Facebook business partners. This led the FTC to question the effectiveness of the audits and to strengthen the audit requirements in the new 2019 agreement.

Although the 2012 settlement agreement did not prevent Facebook from crossing the line in the Cambridge Analytica scandal, it did allow the FTC to strongly intervene and sanction this second violation. As well as the $5 billion fine, the new 2019 agreement contains several accountability measures. These measures ensure that the commitments agreed by Facebook are applied at every level of the company and that any violation will be detected quickly. Facebook’s management will not be able to say that they were not made aware and Facebook will have to adhere to these governance commitments for the next 20 years.

In the USA, it is common for companies to negotiate agreements with the government. This process is sometimes criticized as a form of forced negotiation. The $8.9 billion fine against BNP Paribas was a “negotiated” agreement, although whether the negotiation between the French bank and the US government was balanced is questionable. In Europe, there are no settlement agreements for personal data violations, but they are common in competition law.


Winston Maxwell, Télécom Paris – Institut Mines-Télécom

The original version of this article (in French) was published in The Conversation. Read the original article


An autonomous contact lens to improve human vision

Two teams from IMT Atlantique and Mines Saint-Étienne have developed an autonomous contact lens powered by an integrated flexible micro-battery. This invention is a world first that opens up new prospects for healthcare, while also paving the way for other human-machine interfaces.

 

Human augmentation, a field of research that aims to enhance human abilities through technological progress, has always fascinated authors of science fiction. It is a recurring theme in TV series, from the British show Black Mirror to the American The Six Million Dollar Man. But beyond dystopia and action adventure, it also interests science. The recent findings from teams at IMT Atlantique and Mines Saint-Étienne are one of the latest examples of this.

Jean-Louis de Bougrenet is the head of the Optics Department at IMT Atlantique. Thierry Djenizian is the head of the Flexible Electronics Department at the Centre Microélectronique de Provence at Mines Saint-Étienne. Together, the two scientists have achieved a world first: they have developed the cyborg lens, an autonomous contact lens with a built-in flexible micro-battery.

The origins of the cyborg lens

In the medical division of his department, Jean-Louis de Bougrenet was working on devices to improve people’s vision. In doing so, the researcher and his team used oculometers, instruments that analyze how the eyes behave, measure an individual’s fatigue or stress, and track the direction of their gaze. These devices are used in technology such as VR headsets, which can interpret the direction of someone’s gaze, or a blink, as a command.

However, to be truly efficient, the oculometer must be able to do two things. Firstly, it has to be extremely precise. Secondly, to make sure the VR headset does not contain any additional components that weigh the user down, the oculometer has to be as light as possible. These factors made it clear to the scientists that the oculometer had to be placed directly in the user’s eye. “The contact lens very quickly emerged as a way to make human augmentation possible, since it allowed humans to carry a smart device directly on them,” explains the optical researcher. This system is made possible by advances in nanotechnology.

Thierry Djenizian has spent the last four years working on integrating electric components onto flexible and stretchable surfaces. His research led to the patenting of a flexible micro-battery. However, this device wasn’t originally made to be used on a contact lens.

Read on I’MTech: Towards a new generation of lithium batteries?

After becoming interested in Thierry Djenizian’s work in flexible electronics, Jean-Louis de Bougrenet got in touch with his colleague at Mines Saint-Étienne. During his visit to the Centre Microélectronique de Provence in Gardanne (Bouches-du-Rhône), he observed the advances made in flexible micro-batteries. This led to the idea of integrating this small device directly into the cyborg contact lenses developed at IMT Atlantique, an idea innovative enough to earn the two scientists a joint patent for their work.

A flexible micro-battery directly placed in a contact lens

The device is a world first, as an energy storage source is directly incorporated into the small ocular device. “Whenever functions are performed locally by an autonomous system, the system must have energy autonomy,” explains Jean-Louis de Bougrenet. Until now, ‘smart’ contact lenses have been powered by an external energy source such as a magnetic induction system, using a coil placed in the device. The problem with this approach is that, if the energy source is cut, the device no longer works, something that is no longer the case thanks to this new innovation. In the device developed by the two scientists, the lens is instead powered by a micro-battery that can also be paired with an external source to recharge it, or to provide extra power when needed.

Thierry Djenizian’s aim was to adapt the results he had already obtained in his previous studies to the design and performance constraints of an ocular device. This meant building on his earlier work, mainly through innovations in design.

“Normally, flexible batteries are made up of electrodes connected to each other by ductile coils which then carry current. Our device uses the entire surface area occupied by the coils by carrying microelectrodes directly on these wires,” explains the researcher at Mines Saint-Étienne. In practice, during the manufacture of flexible batteries, electrodes made from several composite materials are placed on an aluminum foil and shaped into vertical ‘micropillars’ using laser ablation with regular spacing. For the lenses, the same technique is used to manufacture the coils that support the microelectrodes, giving the battery great flexibility.

 

3D illustration of the flexible coils which carry the microelectrodes.

 

As well as this, the scientists aim to use innovative materials to improve the performance of the device. One example is the electrolyte, which separates the battery’s two electrodes. The polymers currently used will eventually be replaced by other materials with self-healing properties. This will offset the strain placed on the battery when the device is being charged.

Now, the scientists have succeeded in making a battery with an area of 0.75 cm², integrated into a scleral contact lens. This lens rests on the ‘white’ of the eye (the sclera) and is both bigger and more stable than a standard contact lens. To make the device, the area of the lens which sits in front of the iris is removed and replaced with microelectronic components, including the energy storage device. This method has the distinct advantage of not obstructing the wearer’s field of vision, as light can still enter the eye through the pupil free of any obstacle. The micro-battery has already proved its worth, as it can power an LED for several hours. “An LED is a textbook example since it is generally the most energy-intensive microelectronic device,” says Thierry Djenizian.
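As a rough order-of-magnitude check, with purely illustrative values that are not figures given by the researchers, the running time of such a battery is simply its capacity divided by the load current:

\[
t \approx \frac{C}{I} = \frac{1\ \text{mAh}}{0.2\ \text{mA}} = 5\ \text{hours}
\]

So a micro-battery holding on the order of a milliamp-hour and driving a low-current LED is consistent with the several hours of operation reported.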

Already, there are many ways to improve this technology

While the current device is already ground-breaking, the two researchers continue to try to perfect it. The priority for the Flexible Electronics Department at Mines Saint-Étienne is optimizing the system, as well as improving its reliability. “From a technological perspective, we still have a lot of work to do,” states the head of department. “We have the concept, but improving the device is similar to taking a CRT television and turning it into a modern flat screen TV.”

The next step has already been decided. The scientists want to develop miniature antennae in order to charge the battery and make the lens a communicative device, which will allow it to transmit information from the oculometer. Another option that is currently being studied uses an infrared laser to follow the user’s eyes. This laser would be activated by blinking and would show the direction that the user is looking.

Assistance for people with impaired vision and professional uses

According to Jean-Louis de Bougrenet, the innovation will allow them to “take localized engineering to the next level.” The project has a wide range of potential uses, including helping visually impaired people. The scientists have paired up with the Institut de la Vision with the aim of developing a device which can improve the sensory abilities of visually impaired people. As well as this, the cyborg lens could be used in VR headsets as a way of making visual commands. Discussions have already started with key players in this industry.

In the future, the lenses could have several other applications. In the automotive industry, for example, they could be used to monitor a driver’s attention or level of fatigue. However, “at the moment we are only discussing how the lenses could be used professionally, or for people with disabilities. If we make the lenses available to the general public, for example to be used when driving, then we raise issues that go far beyond the technical aspects of the device. This is because it involves people’s consent to wear the device, which is not a trivial matter,” states Jean-Louis de Bougrenet.

Even if the cyborg lens can help humans, there is still some way to go before the device can be seen in an entirely positive light.

This article was written (in French) by Bastien Contreras for I’MTech

 


20 terms for understanding quantum technology

Quantum mechanics is central to much of the technology we use every day. But what exactly is it? The 11th Fondation Mines-Télécom booklet explores the origins of quantum technology, revealing its practical applications by offering a better understanding of the issues. To clarify the concepts addressed, the booklet includes a glossary, from which this list is taken.

 

Black-body radiation – Thermal radiation of an ideal object absorbing all the electromagnetic energy it receives.

Bra-ket notation (from the word bracket) – Formalism that facilitates the writing of equations in quantum mechanics.

Coherent detectors – Equipment used to detect photons based on the amplitude and phase of the electromagnetic signal rather than on interactions with other particles.

Decoherence – Phenomenon by which each possibility of a quantum superposition state interacts with its environment at a degree of complexity that makes the different possibilities mutually incoherent and unobservable.

Entanglement – Phenomenon in which two quantum systems present quantum states that are dependent on one another, regardless of the distance separating them.

Locality (principle of) – The idea that two distant objects cannot directly influence each other.

Momentum – Product of the mass and the velocity vector of an object at a given instant.

NISQ (Noisy Intermediate-Scale Quantum) – Current class of quantum computers

Observable (noun) – Concept in the quantum world comparable to a physical value (position, momentum, etc.) in the classical world.

Quanta – The smallest indivisible unit (of energy, momentum, etc.)

Quantum Hall effect – The classical Hall effect refers to the voltage created across a material carrying an electric current when it is immersed in a magnetic field. Under certain conditions, this voltage increases in discrete increments: this is the quantum Hall effect.

Quantum state – A concept that differs from that of a classical physical system, in which measured physical values like position and speed are sufficient to define the system. A quantum state provides a probability distribution for each observable of the quantum system to which it refers.

Quantum system – Refers to an object studied in a context in which its quantum properties are interesting, such as a photon, mass of particles, etc.

Qubit – Refers to a quantum system in which a given observable (the spin, for example) is a superposition of two independent quantum states (a standard way of writing such a state is shown after this glossary).

Spin – Like the electric charge, one of the properties of particles.

Superposition principle – Principle that the same quantum state can have several values for a given observable.

The Schrödinger wave function – A fundamental concept of quantum mechanics, a mathematical function representing the quantum state of a quantum system.

Uncertainty Principle – Mathematical inequality that expresses a fundamental limit to the level of precision with which two physical properties of the same particle can be simultaneously known.

Wave function collapse – Fundamental concept of quantum mechanics that states that after a measurement, a quantum system’s state is reduced to what was measured.

Wave-particle duality (or wave-corpuscle duality) – The principle that a physical object sometimes has wave properties and sometimes corpuscular properties.
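To make the “Qubit” and “Superposition principle” entries concrete, the state of a qubit is commonly written in the standard notation below (this expression is added here for illustration and is not taken from the booklet):

\[
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^{2} + |\beta|^{2} = 1
\]

where |0⟩ and |1⟩ are the two independent quantum states, and |α|² and |β|² give the probabilities of obtaining each outcome when the corresponding observable is measured.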
