
Synchronizing future transportation: from trucks to drones

With the development of delivery services, the proliferation of means of transportation, saturated cities and the pooling of goods, optimizing logistics networks is becoming so complex that humans can no longer find solutions without the help of intelligent software. Olivier Péton, a specialist in operational research applied to transportation optimization at IMT Atlantique, is seeking to answer this question: how can deliveries be made to thousands of customers under good conditions? He presented his research at the IMT symposium held in October on production systems of the future.

This article is part of our series on “The future of production systems, between customization and sustainable development.”

 

Have you ever thought about the journey taken by the book, pair of jeans or alarm clock you buy with just one click? Moved, transferred, stored and redistributed, these objects make their way from one strategic location to the next, across the entire country to your city. Several trucks, vans and bikes are involved in the delivery. You receive your order thanks to the careful organization of a logistics network that is becoming increasingly complex.

At IMT Atlantique, Fabien Lehuédé and Olivier Péton are carrying out operational research on how to optimize transportation solutions and logistics networks. “A logistics network must take into account the location of the factories and warehouses, decide which production site will serve a given customer, etc. Our job is to establish a network and develop it over time using recent optimization methods,” explains Olivier Péton.

This expertise is in high demand. Changes in legislation limiting access for certain vehicles to city centers during given timeframes have required companies to rethink their distribution methods. At the same time, alongside these new urban requirements, the development of new technologies and new distribution methods offers opportunities for re-optimizing transportation.

What are the challenges facing the industry of the future?

“Most of the work from the past 10 years pertains to logistics systems and the synchronization of vehicles,” remarks Olivier Péton. “In other words, several vehicles must manage to arrive at practically the same time at the same place.” This is the case, for example, in projects involving the pooling of transportation means, in which goods are grouped together at a logistics platform before being sent to the final customer. “This is also the case for multimodal transportation, in which high-capacity vehicles transfer their contents to several smaller-capacity vehicles for the last mile,” the researcher explains. These concepts of mutualization and multimodal transport are at the heart of the industry of the future.

In the path from the supplier to the customer, the network sometimes transitions from the national level to that of a city. On the one hand, national transport relies on a network of logistic hubs that handle large volumes of goods. On the other hand, urban networks, particularly for e-commerce, focus on last-mile delivery. “The two networks involve different constraints. For a national network, the delivery forecast can be limited to one week. The trucks often only visit three or four places per day. In the city, we can visit many more customers in one day, and replenish supplies at a warehouse. We must take into account delays, congestion, and the possibility of adjusting the itinerary along the way,” Olivier Péton explains.

Good tools make good networks

A network’s complexity depends on the number of combinations that can be made with the elements it contains. The higher the number of sites, customer orders and stops, the more difficult the network becomes to optimize. There may be billions of possible solutions, and it is impossible to list them all to find the best one. This is where the researchers’ algorithms come into play. They rely on heuristic methods, which come as close as possible to an optimal solution within a reasonable calculation time of a few seconds or minutes. To accomplish this, it is vital to have reliable data: transport costs, delivery time schedules, etc.

There are also specific constraints related to each company. “In some cases, transport companies require truck itineraries in straight lines, with as few detours as possible to make deliveries to intermediate customers,” explains Olivier Péton. Other constraints include the maximum number of customers on one route, fair working times for drivers, etc. These types of constraints are modeled as equations. “To resolve these optimization problems, we start with an initial transport plan and we try to improve it iteratively. Each time we change the transport plan, we make sure it still meets all the constraints”. The ultimate result is based on the quality of service: ensuring that the customer is served within the time slot and in only one delivery.
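To make this iterative-improvement idea concrete, here is a minimal, hypothetical sketch in Python (not the researchers' actual software): it starts from an initial single-vehicle route, applies small random changes, and keeps a change only if the plan remains feasible and its cost decreases. The distance matrix, cost function and feasibility check are illustrative placeholders.

```python
import random

def route_cost(route, dist):
    """Total travel distance of a single-vehicle route starting and ending at the depot (index 0)."""
    stops = [0] + route + [0]
    return sum(dist[a][b] for a, b in zip(stops, stops[1:]))

def is_feasible(route, max_stops=8):
    """Stand-in for real business constraints (time windows, driver hours, vehicle capacity...)."""
    return len(route) <= max_stops

def improve(route, dist, iterations=5000):
    """Iteratively apply 2-opt style moves, keeping only feasible, improving plans."""
    best = route[:]
    for _ in range(iterations):
        i, j = sorted(random.sample(range(len(best)), 2))
        candidate = best[:i] + best[i:j + 1][::-1] + best[j + 1:]  # reverse one segment
        if is_feasible(candidate) and route_cost(candidate, dist) < route_cost(best, dist):
            best = candidate
    return best

# Toy example: a depot and 5 customers with random symmetric distances
random.seed(0)
n = 6
dist = [[random.randint(1, 20) for _ in range(n)] for _ in range(n)]
dist = [[0 if i == j else min(dist[i][j], dist[j][i]) for j in range(n)] for i in range(n)]
initial = [1, 2, 3, 4, 5]
improved = improve(initial, dist)
print(route_cost(initial, dist), "->", route_cost(improved, dist))
```

Real solvers use much richer neighborhoods and constraint models (time windows, driver hours, synchronization between vehicles), but the accept-only-feasible-improvements loop is the same basic mechanism.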

Growing demand

Today, this research is primarily used upstream of delivery in national networks. It helps design transport plans, determine how many trucks must be chartered and create the drivers’ schedules. Olivier Péton adds, “It also helps develop simulations that show the savings a company can hope to make by changing its logistics practices. To accomplish this, we work with 4S Network, a company that supports its customers throughout their transport mutualization projects.” This work can also be of interest to decision-makers managing a fleet whose transport demand varies greatly from one day to the next: in that case, the software solution can develop a new transport plan in a few minutes.

Read more on I’MTech: What is the Physical Internet?

What is the major challenge facing researchers? The tool’s robustness, in other words, its ability to react to unforeseeable incidents: congestion, technical problems and so on. It must allow for small variations without having to re-optimize the entire solution. This is especially important as new issues arise. Which exchange zone should be used in a city to transfer goods: a parking lot or a vacant area? For what tonnage is it best to invest in electric trucks? There are many points to settle before real-time optimization can be achieved.

Another challenge involves developing technologically viable solutions, with a sustainable business model, that are acceptable from a societal and environmental perspective. As part of the Franco-German ANR project OPUSS, Fabien Lehuédé and Olivier Péton are working to optimize complex distribution systems that combine urban trucks with fleets of smaller, autonomous vehicles for last-mile deliveries. That is, until drones come on the scene…

 

Article written by Anaïs Gall, for I’MTech.


Breaking products down for customization

Customers’ desire to take ownership of products is driving companies to develop more customized products. Élise Vareilles, a researcher in industrial engineering at IMT Mines Albi, works to develop interactive decision support tools. Her goal: help companies mass-produce customized goods while controlling the risks related to production. This research was presented at the IMT symposium on “Production Systems of the Future”.

This article is part of our series on “The future of production systems, between customization and sustainable development.”

 

Mr. Martin wants a red 5-door car with a sunroof. Mrs. Martin wants a hybrid with a rearview camera and leather seats. The salesperson wants to sell them a model combining all these features, but the “hybrid” and “sunroof” options are incompatible. More and more companies are beginning to offer customized services (loans, credit, etc.) and goods (cars, furniture, etc.). Yet they face a challenge: how can they mass-produce a product that meets each customer’s specific request? To do so, companies must customize their production. But how is this possible?

To configure their products, companies must, in a sense, cut them down into pieces. For example, for a car, they must separate the engine from the wheels and the bodywork. Identifying all these elements plays a major role in customizing a product.

“The goal is to develop computer tools that allow us to model each of these elements, like Lego bricks we put together. I have different color bricks, called ‘variants’, and several shapes that represent options. We model their compatibility. Companies’ goal is to ensure the customer’s request can be met using all the options they offer,” explains Élise Vareilles, a researcher at IMT Mines Albi.

What catalog of products or services can I offer my customers using the options I have and their compatibility? To address this growing concern, Élise Vareilles’ team turned to artificial intelligence.
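As a purely illustrative sketch (not the IMT Mines Albi configurator), the “bricks and compatibility” idea can be expressed as a set of options plus explicit incompatibility rules against which a customer request is checked; the option names and rules below are invented from the car example above.

```python
from itertools import combinations

# Hypothetical incompatibility rules between option "bricks"
INCOMPATIBLE = {
    frozenset({"hybrid", "sunroof"}),
    frozenset({"large_wheels", "small_engine"}),
}

def check_configuration(selected_options):
    """Return the conflicting option pairs; an empty list means the request is feasible."""
    return [tuple(sorted(pair))
            for pair in map(frozenset, combinations(selected_options, 2))
            if pair in INCOMPATIBLE]

request = {"hybrid", "sunroof", "rearview_camera", "leather_seats"}
print(check_configuration(request))  # e.g. [('hybrid', 'sunroof')]
```

Returning the conflicting pair, rather than simply rejecting the request, is what makes the interactive guidance described below possible.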

Identifying and controlling risks

The research team works on developing interactive configurators that enable a dialogue with the user. These configurators allow the user to enter various customization criteria for a product or service and view the result. For example, the options for your next car. To accomplish this, the artificial intelligence is fueled by the company’s knowledge. “We make computerized records of explicit knowledge (behavior laws, weight of components, etc.) and implicit knowledge related to the trade (manufacturing processes, best practices, etc.). All this information allows us to create modules, or building blocks, that make up the software’s knowledge base,” Élise Vareilles explains.

Yet not all a company’s knowledge is needed to manufacture each product. Therefore, the tool activates the relevant knowledge base according to the context specified by the user. Targeting this pertinent information allows the system to accompany the user by making suitable suggestions. Élise Vareilles adds, “with some configurators, I enter all my needs and they indicate, without explanation, that none of the products match my request. I do not know which options are causing this. Our tool guides the user by specifying the incompatibility of certain criteria. For example, it can tell the user that the size of the engine affects the size of the wheels and vice versa.”

Challenges 4.0 for the factory of the future

Researchers have developed a generic tool that can be applied to a variety of contexts. It has especially helped architects in configuring new insulation systems for the facades of 110 social housing units in the Landes area of France. “The best way to arrange the panels was to install them symmetrically, but the architects told us it didn’t look nice! An attractive appearance is not a parameter we can program with an equation. We had to find a compromise from among the software’s proposals by assessing all the assembly options and the constraints that only a human can evaluate,” the researcher recalls. The tool’s interactive aspect helped remedy this problem. It proposed assembly configurations for the insulation panels that the architects could adjust. It could also intervene to add to the architects’ proposals based on constraints related to the facades (windows, shutters, etc.) and the geometric nature of the panels.

In the context of the industry of the future, this type of tool could offer a competitive advantage by taking into account 80% of customers’ needs. It also helps control design costs. Breaking the knowledge and possible associations down into bricks means that the tool can help design increasingly adaptable products, which can be modified according to customers’ whims. It also increases the control of production risks by preventing the salesperson from selling a product that is too difficult or even impossible to manufacture. In addition, data mining techniques access the company’s memory to offer recommendations. However, if the knowledge is not constantly updated, the model faces the risk of becoming obsolete. The company’s experts must therefore determine the best time to update their tool.

Humans take on new roles

The two major risks involved in the manufacturing processes have therefore been reduced thanks to this tool from IMT Mines Albi. First, it reduces the risk of designing an object that does not match the customer’s request. By integrating knowledge from the company’s experts (risks, marketing, etc.) into the software, the company guarantees the feasibility of long projects. For example, the tool reduces risks linked to staff turnover, which could result in a loss of skills due to an engineer leaving the company.

However, humans are not being replaced; instead, they are taking on new roles. “With this tool, 40% of an employee’s activities will be redirected to more complex tasks in which the added value of humans is undeniable. Keep in mind that our tool offers decision support and must rely on the previous work of experts,” Élise Vareilles adds. Yet implementing this type of solution is a long process, lasting approximately two years, which conflicts with the short-term investment mentality prevalent in industrial culture. It is now up to stakeholders to recognize these long-term benefits before their competitors do.

 

Article by Anaïs Culot, for I’MTech.


Restricting algorithms to limit their powers of discrimination

From music suggestions to help with medical diagnoses, population surveillance, university selection and professional recruitment, algorithms are everywhere and are transforming our everyday lives. Sometimes, they lead us astray. At fault are the statistical, economic and cognitive biases inherent to the very nature of today’s algorithms, which are fed massive amounts of data that may be incomplete or incorrect. However, there are solutions for reducing and correcting these biases. Stéphan Clémençon and David Bounie, Télécom ParisTech researchers in machine learning and economics respectively, recently published a report on current approaches and those still being explored.

 

Ethics and equity in algorithms are increasingly important issues for the scientific community. Algorithms are supplied with the data we give them including texts, images, videos and sounds, and they learn from these data through reinforcement. Their decisions are therefore based on subjective criteria: ours, and those of the data supplied. Some biases can thus be learned and accentuated by automated learning. This results in the algorithm deviating from what should be a neutral result, leading to potential discrimination based on origin, gender, age, financial situation, etc. In their report “Algorithms: bias, discrimination and fairness”, a cross-disciplinary team[1] of researchers at Télécom ParisTech and the University of Paris Nanterre investigated these biases. They asked the following basic questions: Why are algorithms likely to be distorted? Can these biases be avoided? If yes, how can we minimize them?

The authors of the report are categorical: algorithms are not neutral. On the one hand, because they are designed by humans. On the other hand, because “these biases partly occur because the learning data lacks representativity,” explains David Bounie, researcher in economics at Télécom ParisTech and co-author of the report. For example, the recruitment algorithm of the giant Amazon was heavily criticized in 2015 for having discriminated against female applicants. At fault was an imbalance in the pre-existing historical data: the people recruited over the previous ten years were primarily men. The algorithm had therefore been trained on a gender-biased learning corpus. As the saying goes, “garbage in, garbage out”. In other words, if the input data is of poor quality, the output will be poor too.

Also read Algorithmic bias, discrimination and fairness

Stéphan Clémençon is a researcher in machine learning at Télécom ParisTech and co-author of the report. For him, “this is one of the growing accusations made of artificial intelligence: the absence of control over the data acquisition process.” For the researchers, one way of introducing equity into algorithms is to contradict them. An analogy can be drawn with surveys: “In surveys, we ensure that the data are representative by using a controlled sample based on the known distribution of the general population,” says Stéphan Clémençon.

Using statistics to make up for missing data

From employability to criminality or solvency, learning algorithms have a growing impact on decisions and human lives. These biases could be overcome by calculating the probability that an individual with certain characteristics is included in the sample. “We essentially need to understand why some groups of people are under-represented in the database” the researchers explain. Coming back to the example of Amazon, the algorithm favored applications from men because the recruitments made over the last ten years were primarily men. This bias could have been avoided by realizing that the likelihood of finding a woman in the data sample used was significantly lower than the distribution of women in the population.
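A minimal sketch of this correction, assuming the population distribution is known: reweight each example by the ratio of its group's population share to its sample share. The figures below are invented for illustration and are not taken from the Amazon case.

```python
# Invented shares: women are 50% of the population but only 20% of the sample
population_share = {"men": 0.5, "women": 0.5}
sample_share = {"men": 0.8, "women": 0.2}

# Inverse-probability-style weights: under-represented groups count for more
weights = {g: population_share[g] / sample_share[g] for g in population_share}
print(weights)  # {'men': 0.625, 'women': 2.5}

# Most learning libraries accept such weights, e.g. (hypothetical call)
# model.fit(X, y, sample_weight=[weights[group_of(row)] for row in rows])
```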

“While this probability is not known, we need to be able to explain why an individual is in the database or not, according to additional characteristics” adds Stéphan Clémençon. For example, when assessing banking risk, algorithms use data on the people eligible for a loan at a particular bank to determine the borrower’s risk category. These algorithms do not look at applications by people who were refused a loan, who have not needed to borrow money or who obtained a loan in another bank. In particular, young people under 35 years old are systematically assessed as carrying a higher level of risk than their elders. Identifying these associated criteria would make it possible to correct the biases.

Controlling data also means looking at what researchers call “time drift”. By analyzing data over very short periods of time, an algorithm may not account for certain characteristics of the phenomenon being studied. It may also miss long-term trends. By limiting the duration of the study, it will not pick up on seasonal effects or breaks. However, some data must be analyzed on the fly as they are collected. In this case, when the time scale cannot be extended, it is essential to integrate equations describing potential developments in the phenomena analyzed, to compensate for the lack of data.

The difficult issue of equity in algorithms

Beyond statistical approaches, researchers are also looking at developing algorithmic equity: algorithms that meet equity criteria with respect to attributes protected under law, such as ethnicity, gender or sexual orientation. As with the statistical solutions, this means integrating constraints into the learning program. For example, it is possible to impose that the probability of a particular algorithmic result be equal for all individuals belonging to a particular group. It is also possible to impose independence between the result and a type of data, such as gender, income level or geographical location.
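The first constraint mentioned above corresponds to what the fairness literature calls demographic parity. The hedged sketch below only measures the gap between groups on invented decisions; dedicated libraries (Fairlearn, for example) can enforce such a constraint during training.

```python
def positive_rate(decisions):
    """Share of positive outcomes (e.g. loans granted) in a group."""
    return sum(decisions) / len(decisions)

# Invented decisions for two groups defined by a protected attribute
decisions_group_a = [1, 0, 1, 1, 0, 1]
decisions_group_b = [0, 0, 1, 0, 0, 0]

gap = abs(positive_rate(decisions_group_a) - positive_rate(decisions_group_b))
print(f"Demographic parity gap: {gap:.2f}")  # 0 would mean equal rates of positive results
```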

But which equity rules should be adopted? For the controversial Parcoursup algorithm for higher education applications, several incompatibilities were raised. “Take the example of individual equity and group equity. If we consider only the criterion of individual equity, each student should have an equal chance at success. But this is incompatible with the criterion of group equity, which stipulates that admission rates should be equal for certain protected attributes, such as gender” says David Bounie. In other words, we cannot give an equal chance to all individuals regardless of their gender and, at the same time, apply criteria of gender equity. This example illustrates a concept familiar to researchers: the rules of equity contradict each other and are not universal. They depend on ethical and political values that are specific to individuals and societies.

There are complex, considerable challenges facing social acceptance of algorithms and AI. But it is essential to be able to look back through the algorithm’s decision chain in order to explain its results. “While this is perhaps not so important for film or music recommendations, it is an entirely different story for biometrics or medicine. Medical experts must be able to understand the results of an algorithm and refute them where necessary” says David Bounie. This has raised hopes of transparency in recent years, but is no more than wishful thinking. “The idea is to make algorithms public or restrict them in order to audit them for any potential difficulties” the researchers explain. However, these recommendations are likely to come up against trade secret and personal data ownership laws. Algorithms, like their data sets, remain fairly inaccessible. However, the need for transparency is fundamentally linked with that of responsibility. Algorithms amplify the biases that already exist in our societies. New approaches are required in order to track, identify and moderate them.

[1] The report (in French) Algorithms: bias, discrimination and equity was written by Patrice Bertail (University of Paris Nanterre), David Bounie, Stephan Clémençon and Patrick Waelbroeck (Télécom ParisTech), with the support of Fondation Abeona.

Article written for I’MTech by Anne-Sophie Boutaud

To learn more about this topic:

Ethics, an overlooked aspect of algorithms?

Ethical algorithms in health: a technological and societal challenge

Digital twins in the health sector: mirage or reality?

Digital twins, which are already well established in industry, are becoming increasingly present in the health sector. There is a wide range of potential applications for both diagnosis and treatment, but the technology is mostly still in the research phase.

 

The health sector is currently undergoing a digital transition with a view to developing “4P” medicine: personalized, predictive, preventive and participative. Digital simulation is a key tool in these changes. It consists in creating a model of an object, process or physical phenomenon on which different hypotheses can be tested. Today, patients are treated when they fall sick, based on clinical studies of tens or, at best, thousands of other people, with no real consideration of the individual’s personal characteristics. Will each person one day have their own digital twin, allowing the development of acute or chronic diseases to be predicted based on their genetic profile and environmental factors, and their response to different treatments to be anticipated?

“There are a lot of hopes and dreams built on this idea,” admits Stéphane Avril, a researcher at Mines Saint-Étienne. “The profession of doctor or surgeon is currently practiced as an art, on the strength of experience acquired over time. The idea is that a program could combine the experience of a thousand doctors, but in reality there is a huge amount of work still to do to put into equations and integrate non-mathematical knowledge and skills from very different fields such as immunology and the cardio-vascular system.” We are still a long way from simulating an entire human being. “But in certain fields, digital twins provide excellent predictions that are even better than those of a practitioner,” adds Stéphane Avril.

From imaging to the operating room

Biomechanics, for example, is a field of research that lends itself very well to digital simulation. Stéphane Avril’s research addresses aortic aneurysms. 3D digital twins of the affected area are developed based on medical images. The approach has led to the creation of a start-up called Predisurge, whose software allows the creation of individual endoprostheses. “The FDA [American Food and Drug Administration] is encouraging the use of digital simulation for validating the market entry of prostheses, both orthopedic and vascular,” explains the researcher from St Etienne.

Digital simulation also helps surgeons prepare for operations because the software provides predictions on the insertion and effects of these endoprostheses once in place, as well as simulating any pre-surgical complications that could arise. “This technique is currently still in the testing and validation phase, but it could have a very promising impact on reducing surgery time and complications,” stresses Stéphane Avril. The team at Mines Saint-Étienne is currently working on improving our understanding of the properties of the aortic wall using four-dimensional MRI and mechanical tests on aneurysm tissue removed during the insertion of prostheses. The idea is to validate a digital twin designed using 4D MRI images, which could predict the future rupture or stability of an aneurysm and indicate the need for surgery or not.

Read more on I’MTech: A digital twin of the aorta to prevent Aneurysm rupture

Catalin Fetita, a researcher at Télécom SudParis, also uses digital simulation in the field of imaging, in this case applied to the airways alongside analysis of the pulmonary parenchyma. The aim of this work is to obtain biomarkers from medical images for a more precise definition of pathological phenomena in respiratory diseases such as asthma, chronic obstructive pulmonary disease (COPD) and idiopathic interstitial pneumonia (IIP). The model allows assessment of an organ’s functioning based on its morphology. “Digital simulation is used to identify the type of dysfunction and its exact location, quantify it, predict changes in the disease and optimize the treatment process,” explains Catalin Fetita.

Ethical and technical barriers

The problem of data security and anonymity is currently at the heart of ethical and legal debates. For the moment, researchers are having great difficulty accessing databases to “feed” their programs. “To obtain medical images, we have to establish a relationship of trust with a hospital radiologist, get them interested in our work and involve them in the project. We need images to be precisely labeled for the model to be relevant.” Especially since the analysis of medical images can vary from one radiologist to another. “Ideally, we would like to have access to a database of images that have been analyzed by a panel of experts with a consensus on their interpretation,” Catalin Fetita affirms.

The researcher also points to the lack of technical staff. Models are generally developed in the framework of a thesis, and rarely lead to a finished product. “We need a team of research or development engineers to preserve the skills acquired, ensure technology transfer and carry out monitoring and updates.” Imaging techniques are evolving and algorithms can encounter difficulties in processing new images that sometimes have different characteristics.

For Stéphane Avril, a new specialization combining engineering and health skills is needed. “These tools will transform doctors’ and surgeons’ professions, but it’s still a bit like science fiction to practitioners at the moment. The transformation will take place tentatively, with restraint, because full medical training takes more than 10 years.” The researcher thinks that it will be another ten years or so before the tools needed to integrate the systemic aspect of physiopathology are operational: “Like for self-driving vehicles, the technology exists, but there are still quite a few stages to go before it actually arrives in hospitals.”

 

Article written by Sarah Balfagon for I’MTech.

 

The TeraLab data machines.

TeraLab: data specialists serving companies

TeraLab is a Big Data and artificial intelligence platform that grants companies access to a whole ecosystem of specialists in these fields. The aim is to remove the scientific and technological barriers facing organizations that want to make use of their data. Hosted by IMT, TeraLab is one of the technology platforms proposed by the Carnot Télécom & Société Numérique. Anne-Sophie Taillandier, Director of TeraLab, presents the platform.

 

What is the role of the TeraLab platform?

Anne-Sophie Taillandier: We offer companies access to researchers, students and innovative enterprises to remove technological barriers in the use of their data. We provide technical resources, infrastructure, tools and skills in a controlled, secure and neutral workspace. Companies can prototype products or services in realistic environments with a view to technology transfer as fast as possible.

In what ways do you work with companies?

AST: First of all, we help them formalize the use case. Companies often come to us with a vague outline of the use case, so we help them with that and can provide specialist contributions if necessary. This is a crucial stage because our aim is also for companies to be able to assess the return on investment at the end of the research or innovation work. It helps them estimate the investment required to launch production, so the need must be clearly defined. We then help them understand what they have the right to do with the data. There again we can call upon expert legal advice if necessary. Lastly, we support them in the specification of the technical architecture.

How do you stand out from other Big Data and artificial intelligence service platforms?

AST: Firstly, by the ecosystem we benefit from. TeraLab is associated with IMT, so we have a number of specialist researchers in these fields, as well as students we can mobilize to resolve the technological challenges posed by companies. Secondly, TeraLab is a pre-competitive platform. We can also define a framework that brings together legal and technical aspects to meet companies’ needs individually. We can strike a fairly fine balance between security and flexibility, to reassure the organizations that come to us while giving researchers enough room to find solutions to the problems posed.

What level of technical security can you provide?

AST: We can reach an extremely high level of technical security, where the user of the data supplied, such as the researcher, can see it but never extract it. Generally speaking, a validation process involving the data supplier and the TeraLab team must be followed in order to extract a piece of data from the workspace. During a project, data security is guaranteed by a combination of technical and legal factors. Moreover, we work in a neutral and controlled space, which also provides a form of independence that reassures companies.

What does neutrality mean for you?

AST: The technical components we propose are open source. We have nothing against products under license, but if a company wants to use a specific tool, it must provide the license itself. Our technical team has excellent knowledge of the different libraries and APIs as well as the components required to set up a workspace. They adapt the tools to the company’s needs. We do not host the service beyond the end of the experimentation phase. Instead, we enter a new phase of technology transfer to allow the products or services to be integrated at the client’s end. We therefore have nothing to “sell” except our expertise. This also guarantees our neutrality.

What use cases do you work on?

AST: Since we started TeraLab, more than 60 projects have come through the platform, and there are currently 20 underway. They can last between 3 months and 3 years. We have had projects in logistics, insurance, public services, energy, mobility, agriculture, etc. At the moment, we are focusing on three sectors. The first is cybersecurity: we are interested in seeing what data access barriers there are, how to make a workspace compliant, and how to guarantee respect for personal data. We also work a lot in the health sector and in industry. Geographically speaking, we are increasingly working at a European level in the framework of H2020 projects. The platform also benefits from growing recognition among European institutions, with, in particular, the “Silver i-space” label awarded by the BDVA.

Physically, what does TeraLab look like?

AST: TeraLab comprises machines in Douai, a technical team in Rennes and a business team in Paris. The platform is accessible remotely, so there is no need to be physically close to it, which sets it apart from other service platforms. We have also recently become able to secure client machines directly on site when the client has specific restrictions on the movement of data.

 

User immersion, between 360° video and virtual reality

I’MTech is dedicating a series of success stories to research partnerships supported by the Télécom & Société Numérique (TSN) Carnot Institute, which the IMT schools are a part of.


To better understand how users interact in immersive environments, designers and researchers are comparing the advantages of 360° video and full-immersion virtual reality. This is the aim of the TroisCentSoixante inter-Carnot project uniting the Télécom & Société Numérique and M.I.N.E.S Carnot Institutes. Strate Research, the research department of Strate School of Design, which is a member of the Carnot TSN, is studying this comparison in the particular case of museographic mediation.

 

When it comes to designing immersive environments, designers have a large selection of tools available to them. Mixed reality, in which the user is plunged into a more or less interactive environment, covers everything from augmented video to fully synthetic 3D images. To determine which is the best option, researchers from members of the TSN Carnot Institute (Strate School of Design) and the M.I.N.E.S Carnot Institute (Mines ParisTech and IMT Mines Alès) have joined forces. They have compared, for different use cases, the differences in user engagement between 360° video and full 3D modeling, i.e. virtual reality.

“At the TSN Carnot Institute, we have been working on the case of a museum prototype alongside engineers from Softbank Robotics, who are interested in the project,” explains Ioana Ocnarescu, a researcher at Strate. A room containing exhibits such as a Minitel, tools linked to the development of the internet, photos of famous robotics researchers and robots has been set up at Softbank Robotics for science and technology mediation. Once the objects are in place, a 3D copy of the room is made and a visit route is laid out between the different exhibits. This base scenario is used to film a 360° video guided by a mediator and to create a virtual guide in the form of a robot called Pepper, which travels around the 3D scene with the viewer. In both cases, the user is immersed in the environment using a mixed reality headset.

Freedom or realism: a choice to be made

Besides the graphics, which are naturally different between video and 3D modelling, the two technologies have one fundamental difference: freedom of action in the scenario. “In 360° video the viewer is passive,” explains Ioana Ocnarescu. “They follow the guide and can zoom in on objects, but cannot move around freely as they wish.” Their movement is limited to turning their head and deciding to spend longer on certain objects than others. To allow this, the video is cut in several places allowing a decision tree to be made that leads to specific sequences depending on the user’s choices.
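The branching logic can be pictured as a small graph of pre-cut sequences. The sketch below is purely illustrative: the segment and choice names are invented and are not taken from the Softbank Robotics prototype.

```python
# Each 360° segment ends at a decision point; the user's choice selects the next pre-cut sequence.
SCENARIO = {
    "intro": {"zoom_minitel": "minitel_closeup", "follow_guide": "robot_corner"},
    "minitel_closeup": {"continue": "robot_corner"},
    "robot_corner": {"continue": "end_of_visit"},
}

def next_segment(current_segment, user_choice):
    """Return the next video segment to play, or None if the choice is not available."""
    return SCENARIO.get(current_segment, {}).get(user_choice)

print(next_segment("intro", "zoom_minitel"))  # -> "minitel_closeup"
```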

Like the 3D mediation, the 360°-video trial mediation is guided by a robot called Pepper.


 

3D modeling, on the other hand, grants a large amount of freedom to the viewer. They can move around the scene freely, choose whether or not to follow the guide, walk around the exhibits and look at them from any angle, whereas 360° video is limited by the position of the camera. “User feedback shows that certain content is better suited to one device or the other,” the Strate researcher reports. For a painting or a photo, for example, there is little use in being able to travel around the object, and the viewer prefers to be in front of the exhibit in its surroundings, with as much realism as possible. “360° video is therefore better adapted for museums with corridors and paintings on the walls,” she points out. By contrast, 3D modeling is particularly suited to looking at and examining 3D artefacts such as statues.

These experiments are extremely useful to researchers in design, in particular because they involve real users. “Knowing what people do with the devices available is at the heart of our reflection,” emphasizes Ioana Ocnarescu. Strate has been studying user-machine interaction for over 5 years to develop more effective interfaces. In this project, the people in immersion can give their feedback directly to the Strate team. “It is the most valuable thing in our work. When everything is controlled in a laboratory environment, the information we collect is less meaningful.”

The tests must continue to incorporate a maximum amount of feedback from as many different types of audience as possible. Once finished, the results will be compared with those of other use cases explored by the M.I.N.E.S Carnot Institute. “Mines ParisTech and IMT Mines Alès are comparing the same two devices but in the case of self-driving cars and exploration of the Chauvet cave,” explains the researcher.

 


Carnot TSN, a guarantee of excellence in partnership-based research since 2006

The Télécom & Société Numérique (TSN) Carnot Institute has partnered with companies in their research to develop digital innovations since 2006. With over 1,700 researchers and 50 technology platforms, it offers cutting-edge research to resolve the complex technological challenges raised by the digital, energy, environmental and industrial transformations of the French production fabric. It addresses the following themes: industry of the future, networks and smart objects, sustainable cities, mobility, health and security.

The TSN Carnot Institute is composed of Télécom ParisTech, IMT Atlantique, Télécom SudParis, Institut Mines-Télécom Business School, Eurecom, Télécom Physique Strasbourg, Télécom Saint-Étienne, École Polytechnique (Lix and CMAP laboratories), Strate School of Design and Femto Engineering.


Since the enthusiasm for AI in healthcare brought on by IBM’s Watson, many questions on bias and discrimination in algorithms have emerged. Photo: Wikimedia.

Ethical algorithms in health: a technological and societal challenge

The possibilities offered by algorithms and artificial intelligence in the healthcare field raise many questions. What risks do they pose? How can we ensure that they have a positive impact on the patient as an individual? What safeguards can be put in place to ensure that the values of our healthcare system are respected?

 

A few years ago, Watson, IBM’s supercomputer, turned to the health sector and particularly oncology. It has paved the way for hundreds of digital solutions, ranging from algorithms for analyzing radiology images to more complex programs designed to help physicians in their treatment decisions. Specialists agree that these tools will spark a revolution in medicine, but there are also some legitimate concerns. The CNIL, in its report on the ethical issues surrounding algorithms and artificial intelligence, stated that they “may cause bias, discrimination, and even forms of exclusion”.

In the field of bioethics, four basic principles were announced in 1978: justice, autonomy, beneficence and non-maleficence. These principles guide research on the ethical questions raised by new applications for digital technology. Christine Balagué, holder of the Smart Objects and Social Networks chair at the Institut Mines-Télécom Business School, highlights a pitfall, however: “the issue of ethics is tied to a culture’s values. China and France for example have not made the same choices in terms of individual freedom and privacy”. Regulations on algorithms and artificial intelligence may therefore not be universal.

However, we are currently living in a global system where there is no secure barrier to the dissemination of IT programs. The report made by the CCNE and the CERNA on digital technology and health suggests that the legislation imposed in France should not be so stringent as to restrict French research. This would come with the risk of pushing businesses in the healthcare sector towards digital solutions developed by other countries, with even less controlled safety and ethics criteria.

Bias, value judgments and discrimination

While some see algorithms as flawless, objective tools, Christine Balagué, who is also a member of CERNA and the DATAIA Institute, highlights their weaknesses: “the relevance of the results of an algorithm depends on the information it receives in its learning process, the way it works, and the settings used”. Bias may be introduced at any of these stages.

Firstly, in the learning data: there may be an issue of representation, like for pharmacological studies, which are usually carried out on 20-40-year-old Caucasian men. The results establish the effectiveness and tolerance of the medicine for this population, but are not necessarily applicable to women, the elderly, etc. There may also be an issue of data quality: their precision and reliability are not necessarily consistent depending on the source.

Data processing, the “code”, also contains elements which are not neutral and may reproduce value judgments or discriminations made by their designers. “The developers do not necessarily have bad intentions, but they receive no training in these matters, and do not think of the implications of some of the choices they make in writing programs” explains Grazia Cecere, economics researcher at the Institut Mines-Télécom Business School.

Read on I’MTech: Ethics, an overlooked aspect of algorithms?

In the field of medical imaging, for example, determining an area may be a subject of contention: a medical expert will tend to classify uncertain images as “positive” so as to avoid missing a potential anomaly that could be cancer, which increases the number of false positives. A researcher, on the contrary, will tend to maximize the relevance of their tool, at the cost of more false negatives. They do not have the same objectives, and the way the data are processed will reflect this value judgment.
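A small numerical illustration of this trade-off, using invented scores and labels: the same classifier output, thresholded differently, trades false negatives (missed anomalies) against false positives (needless alerts).

```python
scores = [0.10, 0.30, 0.45, 0.55, 0.70, 0.90]  # model's "anomaly" scores (invented)
labels = [0, 0, 1, 0, 1, 1]                    # ground truth: 1 = real anomaly (invented)

def confusion(threshold):
    """Count false positives and false negatives at a given decision threshold."""
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    return fp, fn

for t in (0.4, 0.6):
    fp, fn = confusion(t)
    print(f"threshold {t}: {fp} false positive(s), {fn} false negative(s)")
# The clinician's low threshold misses nothing but raises more alerts;
# the higher threshold does the opposite.
```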

Security, loyalty and opacity

The security of medical databases is a hotly-debated subject, with the risk that algorithms may re-identify data made anonymous and may be used for malevolent or discriminatory purposes (by employers, insurance companies, etc.). But the security of health data also relies on individual awareness. “People do not necessarily realize that they are revealing critical information in their publications on social networks, or in their Google searches on an illness, a weight problem, etc.”, says Grazia Cecere.

Applications labeled for “health” purposes are often intrusive and gather data which could be sold on to potentially malevolent third parties. But the data collected will also be used for categorization by Google or Facebook algorithms. Indeed, the main purpose of these companies is not to provide objective, representative information, but rather to make profit. In order to maintain their audience, they need to show that audience what it wants to see.

The issue here is in the fairness of algorithms, as called for in France in 2016 in the law for a digital republic. “A number of studies have shown that there is discrimination in the type of results or content presented by algorithms, which effectively restricts issues to a particular social circle or a way of thinking. Anti-vaccination supporters, for example, will see a lot more publications in their favor” explains Grazia Cecere. These mechanisms are problematic, as they get in the way of public health and prevention messages, and the most at-risk populations are the most likely to miss out.

The opaque nature of deep learning algorithms is also an issue for debate and regulation. “Researchers have created a model for the spread of a virus such as Ebola in Africa. It appears to be effective. But does this mean that we can deactivate WHO surveillance networks, made up of local health professionals and epidemiologists who come at a significant cost, when no-one is able to explain the predictions of the model?” asks Christine Balagué.

Researchers from both hard sciences and social sciences and humanities are looking at how to make these technologies responsible. The goal is to be able to directly incorporate a program which will check that the algorithm is not corrupt and will respect the principles of bioethics. A sort of “responsible by design” technology, inspired by Asimov’s three laws of robotics.

 

Article initially written in French by Sarah Balfagon, for I’MTech.

An example of the micro-structures produced using a single-beam laser nano printer by the company Multi-Photon Optics, a member of the consortium.

Nano 3D Printers for Industry

The 3-year H2020 project PHENOmenon, launched in January 2018, is developing nano 3D printers capable of producing micro and nano-structures (particularly those with an optical function), while adhering to limited production times. Kevin Heggarty is a researcher at IMT Atlantique, one of the project partners along with three other European research institutes and eight industrial partners, including major groups and SMEs. He offers a closer look at this project and the scientific challenges involved.

 

What is the goal of the H2020 PHENOmenon project?

Kevin Heggarty: The goal of this project is to develop nano 3D printers for producing large, high-resolution objects. The term “large” is relative, since here we are referring to objects that only measure a few square millimeters or centimeters with nanometric resolution—one nanometer measures one millionth of a millimeter. We want to be able to produce these objects within time frames compatible with industry requirements.

What are the scientific obstacles you must overcome?

KH: Currently there are nano 3D printers that work with a single laser beam. The manufacturing times are very long. The idea with PHENOmenon is first to project hundreds of laser beams at the same time. We are currently able to simultaneously project over one thousand. The long-term goal is to project millions of laser beams to significantly improve production speeds.

What inspired the idea for this project?

KH: Parallel photoplotting is an area of expertise that has been developed in IMT Atlantique laboratories for over 15 years. This involves using light beams to trace patterns on photosensitive materials, like photographic film. Up until now, this was done using flat surfaces. The chemistry laboratory of ENS Lyon has developed highly sensitive material used to produce 3D objects. It was in our collaboration with this laboratory that we decided to test an idea—that of combining parallel photoplotting with the technology from ENS Lyon to create a new manufacturing process.

After demonstrating that it was possible to obtain hundreds of cubic microns by simultaneously projecting a large number of laser beams on highly sensitive material, we reached out to AIMEN, an innovation and technology center specialized in advanced manufacturing materials and technologies located in Vigo, Spain. Their cutting-edge equipment for laser machining is well-suited to the rapid manufacturing of large objects. With its solid experience in applying for and leading European projects, AIMEN became the coordinator of PHENOmenon. The other partners are industrial stakeholders, the end users of the technology being developed in the context of this project.

What expectations do the industrial partners have?

KH: Here are a few examples: The Fábrica Nacional de Moneda y Timbre, a public Spanish company, is interested in manufacturing security holograms on bank notes. Thalès would like to cover the photovoltaic panels it markets with micro and nano-structured surfaces produced using nano-printers. The PSA Group wants to equip the passenger compartment of its vehicles with holographic buttons. Design LED will introduce these micro-structured 3D components in its lighting device, a plastic film used to control light…

What are the next steps in this project?

KH: The project partners meet twice a year. IMT Atlantique will host one of these meetings on its Brest campus in the summer of 2020. In terms of new developments in research, the chemistry laboratory of ENS Lyon is preparing a new type of resin. At IMT Atlantique, we are continuing our work. We are currently able to simultaneously project a large number of identical laser beams. The goal is to succeed in projecting different types of laser beams and then to produce prototype nano-structures for the industrial partners.

 

 

Energysquare: charging your telephone has never been so simple!

Start-up company Energysquare has created a wireless charging device for tablets and cellphones. Using a simple mechanism combining a sticker and a metal plate, devices can be charged by conduction. Energysquare, which is incubated at Télécom ParisTech, will soon see its technology tested in hotels. The company now also aims to export and adapt its product to other smart objects.

 

Are you fed up with the jumbles of wires in your house or on your desk? You probably never deliberately knotted your phone charger, and yet, when you want to use it, the wire is all tangled up! This article brings you good news: your fight against electric cables has come to an end! Start-up company Energysquare, incubated at Télécom ParisTech since 2015, has revolutionized electrical charging for mobile devices such as smartphones and tablets by doing away with conventional chargers. Your devices can now all be charged together on a single pad plugged into the mains.

“We based our work on the fact that the devices we use spend a lot of time on the surfaces around us, such as a desk or bedside table. Our idea was to be able to charge them over a whole surface and no longer with a cable at a single point,” explains Timothée Le Quesne, one of the designers of the Energysquare concept. We took a closer look at this vital accessory for future smart houses.

Easy-to-use conductive charging

The first question that comes to mind is: how does it work? The technology is composed of two parts. The first is the pad, a 30×30-centimetre metal plate made up of independent conductive squares. It is plugged into the mains and can be placed on any surface. The second part is a sticker comprising a flexible conductor with two electrodes and a connector adapted to the charging socket of your device, whether Android or iOS. The sticker is stuck directly on the back of your telephone. No surprises so far… but it is when the two parts come into contact that the magic happens.

When the electrodes come in contact with the charging surface, the system detects the device and sends it a signal to check that it is a battery. An electrical potential of 5 volts is produced between the two squares connected to the electrodes, allowing conductive charging. “The geometric format of the pad has been designed so that the two squares are automatically in contact with the electrodes. That way, there is no need to check anything and the device charges automatically without emitting any electromagnetic waves. Conversely, when no devices are detected, the pad automatically goes on standby,” explains Timothée Le Quesne.

But what happens if another device is placed on the pad? “The surface is naturally inert. The cleverness of the system lies in the fact that it can detect the object and identify whether or not it is a battery to be charged. Even if you spill water on it, it won’t have any effect. It is water resistant and protected against infiltration,” explains the young entrepreneur. No electric current is transmitted to a cup of coffee placed absentmindedly on the surface either. Although the system uses conductive charging, it does not emit heat when running: heat is dispersed across the surface as in a radiator, even when several devices are charging at once, each at the same speed as when plugged into the mains. Charging a device becomes so practical you could easily forget your phone lying on the surface. But this is not a problem, because the system goes back into standby once the device is fully charged.
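As a purely illustrative sketch (this is not Energysquare's actual firmware or protocol), the behaviour described above can be summarized as a small decision rule: the pad stays inert until an object is detected and the handshake confirms a chargeable battery, and it returns to standby once charging is complete.

```python
def pad_state(object_detected, handshake_confirms_battery, battery_full):
    """Toy decision rule for the charging surface (hypothetical, for illustration only)."""
    if not object_detected:
        return "standby"              # nothing on the pad
    if not handshake_confirms_battery:
        return "standby"              # a cup of coffee or spilt water: no current applied
    if battery_full:
        return "standby"              # charging finished, back to sleep
    return "apply_5V"                 # conductive charging between the two contacted squares

print(pad_state(True, True, False))   # -> "apply_5V"
print(pad_state(True, False, False))  # -> "standby"
```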

Hotels soon to be using this technology

“We most need electricity when we’re away. We often have low battery in airports, cafes etc… This is a B2B market that we aim to invest in,” explains Timothée Le Quesne. For the moment, Energysquare is addressing the hospitality sector with tests to be carried out in France over the coming weeks. The principle is simple: the pad is installed on a bedside table and the stickers are provided at reception.

But the start-up aims to go even further. Why place the pad on a surface when it could be directly integrated into the furniture it sits on? “Our only limitation is preserving the metal surface of the pad to allow it to charge. We can still add a bit of style though by giving it a brushed effect, for example. Working with furniture manufacturers offers us good prospects. We can already imagine surfaces in meeting rooms covered with our device! We can also give the pad any form we like, with larger or smaller sections according to the device it is designed for,” Timothée Le Quesne continues.

With such a universal system, we can reasonably ask what the start-up’s aims are for the international market. “In January we will be participating in CES, an electronics show in the USA, where we will have a stand to display and demonstrate our technology.” This welcome overseas publicity is hardly a surprise since the start-up saw positive interest in its technology during a fundraiser on Kickstarter in June 2016, with 1/3 of purchasers in Asia and 1/3 in America. “As soon as we have finished validating our tests in hotels in France, we will turn to the foreign market,” affirms Timothée Le Quesne. But don’t worry, Energysquare hasn’t forgotten private individuals, and will launch the online sale of its technology in 2017.

Smart objects: a promising market to conquer

“One of our aims is to become a standard charging device for all smart objects,” admits Timothée Le Quesne. This is a promising future market, since 20 billion smart objects are forecast to be manufactured between now and 2020… All the more technology for us to spend time plugging in to charge! The start-up has already carried out tests with positive results on smart speakers and e-cigarettes, but the shape of certain objects, such as smart headphones, prevents the Energysquare system adapting to them. “For some devices, the electrodes will have to be integrated directly by the manufacturer.”

Nevertheless, there is one item that we use every day which would definitely benefit from this sort of charging system: laptops! The main difficulty, unlike other objects, is the power that needs to be generated by the system. “We need to change certain components to obtain more power through the pad and adapt it to laptops. It is something that is scheduled for 2017,” affirms Timothée Le Quesne. This is the first obstacle to overcome, especially since, when we asked the young entrepreneur what the future for Energysquare looked like 5 to 10 years from now, he replied: “we would like to be able to not only charge devices, but also power household appliances directly. We want to get rid of electric cables and replace them with surfaces that will power your kettle and charge your phone.”

Electroencephalogram: a brain imaging technique that is efficient but limited in terms of spatial resolution.

Technology that decrypts the way our brain works

Different techniques are used to study the functioning of our brain, including electroencephalography, magnetoencephalography, functional MRI and spectroscopy. The signals are processed and interpreted to analyze the cognitive processes in question. EEG and MRI are the two most commonly used techniques in cognitive science. Their performance offers hope but also raises concerns. What is the current state of affairs of brain function analysis and what are its limits?

 

Nesma Houmani is a specialist in electroencephalography (EEG) signal analysis and processing at Télécom SudParis. Neuron activity in the brain generates electrical changes that can be detected on the scalp. These are recorded using a cap fitted with strategically placed electrodes. The advantages of EEG are that it is inexpensive, easily accessible and noninvasive for the subjects being studied. However, it generates a complex signal composed of oscillations associated with baseline brain activity while the subject is awake and at rest, transient signals linked to activations generated by the test, and variable background noise caused notably by the subject’s involuntary movements.

The level of noise depends, among other things, on the type of electrodes used, whether dry or gel-based. While gel electrodes reduce the detection of signals not produced by brain activity, they take longer to place, may cause allergic reactions and require the patient to wash their hair thoroughly after the examination, making it harder to carry out these tests outside hospitals. Dry electrodes are being introduced in hospitals, but the signals they record contain a high level of noise.

The researcher at Télécom SudParis uses machine learning and artificial intelligence algorithms to extract EEG markers. “I use information theory combined with statistical learning methods to process EEG time series of a few milliseconds.” Information theory holds that signals with higher entropy carry more information: the less probable an event is, the more information it conveys, and the more likely it is to be relevant. Nesma Houmani’s work makes it possible to remove spurious signals from the trace and to interpret the recorded EEG data more accurately.
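
To make the entropy idea concrete, here is a minimal sketch rather than Nesma Houmani’s actual method: it estimates the Shannon entropy of short windows of a synthetic EEG-like trace using a simple histogram. The window length, bin count and toy signal are all assumptions made for illustration.

```python
import numpy as np

def shannon_entropy(window, n_bins=16):
    """Histogram-based Shannon entropy (in bits) of a 1-D signal window."""
    counts, _ = np.histogram(window, bins=n_bins)
    p = counts / counts.sum()
    p = p[p > 0]                       # ignore empty bins
    return -np.sum(p * np.log2(p))

# Toy EEG-like trace: 2 s sampled at 256 Hz, a 10 Hz oscillation plus noise.
fs = 256
t = np.arange(0, 2, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(0).normal(size=t.size)

# Entropy over successive 250 ms windows: windows dominated by the structured
# oscillation tend to score lower than windows dominated by broadband noise.
win = int(0.25 * fs)
entropies = [shannon_entropy(eeg[i:i + win]) for i in range(0, eeg.size - win + 1, win)]
print(np.round(entropies, 2))
```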

A study published in 2015 showed that this technique improved the characterization of the EEG signal for detecting Alzheimer’s disease. Statistical modeling makes it possible to take into account the interaction between the different areas of the brain over time. As part of her research on visual attention, Nesma Houmani uses EEG combined with an eye-tracking device to determine how a subject engages in and withdraws from a task: “The participants must observe images on a screen and carry out different actions according to the image shown. A camera is used to identify the point of gaze, allowing us to reconstitute eye movements,” she explains. Other teams use EEG to discriminate emotional states or to understand decision-making mechanisms.

EEG provides useful data because it has a temporal resolution of a few milliseconds. It is often used in applications for brain-machine interfaces, allowing a person’s brain activity to be observed in real time with just a few seconds’ delay. “However, EEG is limited in terms of spatial resolution,” explains Nesma Houmani. This is because the electrodes are, in a sense, placed on the scalp in two dimensions, whereas the folds in the cortex are three-dimensional and activity may come from areas that are further below the surface. In addition, each electrode measures the sum of synchronous activity for a group of neurons.

The most popular tool of the moment: fMRI

Conversely, functional MRI (fMRI) has excellent spatial resolution but poor temporal resolution. It has been used a lot in recent scientific studies but is costly and access is limited by the number of devices available. Moreover, the level of noise it produces when in operation and the subject’s position lying down in a tube can be stressful for participants. Brain activity is reconstituted in real time by detecting a magnetic signal linked to the amount of blood transferred by micro-vessels at a given moment, which is visualized over 3D anatomical planes. Although activations can be accurately situated, hemodynamic variations occur a few seconds after the stimulus, which explains why the temporal resolution is lower than that of EEG.

fMRI produces section images of the brain with good spatial resolution but poor temporal resolution.

 

Nicolas Farrugia has carried out several studies with fMRI and music. He is currently working on applications for machine learning and artificial intelligence in neuroscience at IMT Atlantique. “Two main paradigms are being studied in neuroscience: coding and decoding. The first aims to predict brain activity triggered by a stimulus, while the second aims to identify the stimulus from the activity,” the researcher explains. A study published in 2017 showed the possibilities of fMRI associated with artificial intelligence in decoding. Researchers asked subjects to watch videos in an MRI scanner for several hours. A model was then developed using machine learning, which was able to reconstruct a low-definition image of what the participant saw based on the signals recorded in their visual cortex. fMRI is a particularly interesting technique for studying cognitive mechanisms, and many researchers consider it the key to understanding the human brain, but it nevertheless has its limits.
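
As a rough illustration of the decoding paradigm, and not of the published study’s model, the sketch below fits a ridge regression that maps synthetic voxel responses back to the stimulus features that generated them; the data, dimensions and choice of regressor are assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for a decoding experiment: 200 trials, 500 voxels,
# and a 10-dimensional description of each stimulus (e.g. image features).
n_trials, n_voxels, n_features = 200, 500, 10
stimulus = rng.normal(size=(n_trials, n_features))
coupling = rng.normal(size=(n_features, n_voxels))
voxels = stimulus @ coupling + 0.5 * rng.normal(size=(n_trials, n_voxels))

# Decoding: predict the stimulus features from the recorded voxel responses,
# evaluating on trials the model has never seen.
X_train, X_test, y_train, y_test = train_test_split(voxels, stimulus, random_state=0)
decoder = Ridge(alpha=10.0).fit(X_train, y_train)
print("decoding R^2 on held-out trials:", round(decoder.score(X_test, y_test), 2))
```

Coding would go the other way, predicting voxel responses from the stimulus features with the same kind of model.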

Reproducibility problems

Research protocols have changed recently. Nicolas Farrugia explains: “The majority of publications in cognitive neuroscience use simple statistical models based on functional MRI contrasts, subtracting the activations recorded in the brain for two experimental conditions A and B, such as reading versus rest.” But several problems have led researchers to modify this approach. “Neuroscience is facing a major reproducibility challenge,” admits Nicolas Farrugia. Various limitations have been identified in publications, such as small sample sizes, high levels of noise, and a separate analysis for each part of the brain that ignores interactions between areas and the relative intensity of activation in each one.
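
The contrast approach described above boils down to a voxel-wise comparison between two conditions. The sketch below runs a paired t-test on synthetic “reading” and “rest” activations; the trial counts, effect size and naive threshold are assumptions, and a real analysis would correct for multiple comparisons.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic activations for one subject: 20 trials x 1000 voxels per condition.
n_trials, n_voxels = 20, 1000
rest = rng.normal(size=(n_trials, n_voxels))
reading = rng.normal(size=(n_trials, n_voxels))
reading[:, :50] += 0.8           # pretend 50 voxels respond more during reading

# Classical contrast: compare the two conditions voxel by voxel.
t_values, p_values = stats.ttest_rel(reading, rest, axis=0)
active = p_values < 0.001        # naive threshold over 1000 separate tests
print("voxels flagged as active:", int(active.sum()))
```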

These reproducibility problems are leading researchers to change methods, moving from an inference approach, in which all available data is used to obtain a model that cannot be generalized, to a prediction approach, in which the model learns from part of the data and is then tested on the rest. This approach, which is the basis of machine learning, allows the model’s relevance to be checked against reality. “Thanks to artificial intelligence, we are seeing the development of computational methods that were not possible with standard statistics. In time, this will allow researchers to predict what type of image or what piece of music a person is thinking of based on their brain activity.”
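
A minimal sketch of this prediction paradigm, on synthetic data and with an arbitrary classifier: the model is fitted on part of the trials and scored only on trials it has never seen, here through 5-fold cross-validation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# Synthetic "brain activity": 100 trials over 300 voxels, with two stimulus
# classes that differ slightly in a handful of voxels.
X = rng.normal(size=(100, 300))
y = rng.integers(0, 2, size=100)
X[y == 1, :10] += 0.7

# Prediction rather than inference: each fold is scored on held-out trials.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print("held-out accuracy per fold:", np.round(scores, 2))
```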

Unfortunately, there are also reproducibility problems in signal processing with machine learning. Deep learning, which is based on artificial neural networks, is currently the most popular technique because it is highly effective in many applications, but it requires adjusting hundreds of thousands of parameters using optimization methods. Researchers tend to tune those parameters as they evaluate the model, repeatedly on the same data, which distorts how well the results generalize. The use of machine learning also raises another problem for signal detection and analysis: the interpretability of the results. Knowledge of deep learning mechanisms is currently very limited and is a field of research in its own right, so our understanding of how human neurons function could in fact come from our understanding of how deep artificial neurons function. A strange sort of mise en abyme!
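
One common safeguard against this kind of leakage, sketched below under the same synthetic-data assumptions, is to lock away a test set, tune hyperparameters by cross-validation on the training portion only, and touch the test set exactly once.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(3)

# Arbitrary synthetic data: 200 trials, 50 features, two classes.
X = rng.normal(size=(200, 50))
y = rng.integers(0, 2, size=200)
X[y == 1, :5] += 0.6

# The test set is set aside before any tuning takes place.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Hyperparameters are chosen by cross-validation on the training data only...
search = GridSearchCV(SVC(), {"C": [0.1, 1.0, 10.0]}, cv=5).fit(X_train, y_train)

# ...and the held-out test set is used exactly once, for the final estimate.
print("selected C:", search.best_params_["C"])
print("test accuracy:", round(search.score(X_test, y_test), 2))
```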

 

Article written by Sarah Balfagon, for I’MTech.

 

More on this topic: