Earth

Will the earth stop rotating after August 1st?

By Natacha Gondran, researcher at Mines Saint-Étienne, and Aurélien Boutaud.
The original version of this article (in French) was published in The Conversation.

[divider style=”normal” top=”20″ bottom=”20″]

[dropcap]I[/dropcap]t has become an annual summer tradition, much like France’s Music Festival or the Tour de France. Every August, right when French people are focused on enjoying their vacation, an alarming story begins to spread through the news: it’s Earth Overshoot Day!

From this fateful date through to the end of the year, humanity will be living at nature’s expense as it runs up an ecological debt. Just imagine vacationers on the beach or at their campsite discovering, through the magic of mobile networks, the news of this imminent breakdown.

Will the Earth cease to rotate after August 1st? The answer is… no. There is no need to panic (well, not quite yet). Once again this year, the Earth will continue to rotate even after Earth Overshoot Day has come and gone. In the meantime, let’s take a closer look at how this date is calculated and what it is worth from a scientific standpoint.

Is the ecological footprint scientifically serious?

Earth Overshoot Day is calculated based on the results of the “ecological footprint”, an indicator invented in the early 1990s by two researchers from the University of British Columbia in Vancouver. Mathis Wackernagel and William Rees sought to develop a synoptic tool that would measure the toll of human activity on the biosphere. Their idea was to estimate the land and sea surface area required to meet humanity’s needs.

More specifically, the ecological footprint measures two things: on the one hand, the biologically productive surface area required to produce certain renewable resources (food, textile fibers and other biomass); on the other, the surface area that should be available to sequester certain pollutants in the biosphere.

In the early 2000s the concept proved extremely successful, with a vast number of research articles published on the subject, which contributed to making the calculation of the ecological footprint more robust and detailed.

Today, based on hundreds of statistical data entries, the NGO Global Footprint Network estimates humanity’s ecological footprint at approximately 2.7 hectares per capita. However, this global average conceals huge disparities: while an American’s ecological footprint exceeds 8 hectares, that of an Afghan is less than 1 hectare.

Overconsumption of resources

It goes without saying: the Earth’s biologically productive surfaces are finite. This is what makes the comparison between humanity’s ecological footprint and the planet’s biocapacity so relevant. This biocapacity represents approximately 12 billion hectares (of forests, cultivated fields, pasture land and fishing areas), or an average of 1.7 hectares per capita in 2012.

The comparison between ecological footprint and biocapacity therefore results in this undeniable fact: each year, humanity consumes more services from the biosphere than it can regenerate. In fact, it would take one and a half planets to sustainably provide for humanity’s needs. In other words, by early August, humanity has already consumed the equivalent of what the biosphere can regenerate in a full year.

These calculations are what led to the famous Earth Overshoot Day.
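To make the arithmetic concrete, here is a minimal sketch of the calculation using the rounded per-capita figures quoted above (2.7 global hectares of demand against 1.7 hectares of biocapacity). The Global Footprint Network works from much more detailed national accounts, so the date it announces differs slightly from what these rounded numbers give.

```python
# A minimal sketch of the arithmetic behind Earth Overshoot Day, using the
# rounded per-capita figures quoted in this article. The Global Footprint
# Network uses far more detailed accounts, so its exact date differs slightly.

import datetime

biocapacity_per_capita = 1.7   # global hectares available per person
footprint_per_capita = 2.7     # global hectares demanded per person

# Fraction of the year during which demand stays within what can be regenerated.
fraction_of_year = biocapacity_per_capita / footprint_per_capita

overshoot_day_number = round(365 * fraction_of_year)
overshoot_date = datetime.date(2018, 1, 1) + datetime.timedelta(days=overshoot_day_number - 1)

planets_needed = footprint_per_capita / biocapacity_per_capita

print(f"Planets needed to sustain current demand: {planets_needed:.2f}")
print(f"Overshoot falls on day {overshoot_day_number}, i.e. {overshoot_date:%d %B}")
# With these rounded figures the date lands in mid-August; the official
# calculation, based on finer data, placed it on August 1st in 2018.
```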

Legitimate Criticism

Of course, the ecological footprint is not immune to criticism. One criticism is that it focuses its analysis on the living portion of natural capital only and fails to include numerous issues, such as the pressure on mineral resources and chemical and nuclear pollution.

The accounting system for the ecological footprint is also very anthropocentric: biocapacity is estimated based on the principle that natural surfaces are at humanity’s complete disposal, ignoring the threats that human exploitation of ecosystems can pose for biodiversity.

Yet most criticism is aimed at the way the ecological footprint of fossil fuels is calculated. In fact, those who designed the ecological footprint based the concept on the observation that fossil fuels were a sort of “canned” photosynthetic energy, since they result from the transformation of organic matter that decomposed millions of years ago. The combustion of this matter therefore amounts to transferring carbon of organic origin into the atmosphere. In theory, this carbon could be sequestered in the biosphere… if only the biological carbon sinks were sufficient.

Therefore, what the ecological footprint measures is in fact a “phantom surface” of the biosphere that would be required to sequester the carbon that is accumulating in the atmosphere and causing the climate change we are experiencing. This methodological device makes it possible to convert tons of CO₂ into “sequestration surfaces”, which can then be added to the “production surfaces”.

While this is a clever principle, it poses two problems: first, almost the entire deficit observed by the ecological footprint is linked to the combustion of fossil fuels; and second, the choice of the coefficient between tons of CO₂ and sequestration surfaces is questionable, since several different hypotheses can produce significantly different results.
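To illustrate the second problem, the toy calculation below shows how strongly the “phantom surface” depends on the assumed sequestration rate. The figures are hypothetical round numbers chosen only for illustration, not the coefficients actually used by the Global Footprint Network.

```python
# Purely illustrative: how the CO2-to-surface coefficient changes the carbon
# component of the footprint. The sequestration rates below are hypothetical
# round numbers, not the Global Footprint Network's values.

emissions_tonnes_co2 = 10.0   # annual emissions to convert (hypothetical)

for tonnes_sequestered_per_hectare in (2.0, 3.0, 4.0):   # assumed uptake rates
    phantom_surface = emissions_tonnes_co2 / tonnes_sequestered_per_hectare
    print(f"At {tonnes_sequestered_per_hectare} tCO2/ha/yr -> "
          f"{phantom_surface:.1f} ha of 'sequestration surface'")
# Halving the assumed uptake rate doubles the surface, hence the sensitivity
# of the overall deficit to this methodological choice.
```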

Is the ecological deficit underestimated?

Most of this criticism was anticipated by the designers of the ecological footprint.

Based on the principle that “everything simple is false, everything complex is unusable” (Paul Valéry), they opted for methodological choices that would produce aggregated results that could be understood by the average citizen. However, it should be noted that, for the most part, these choices were made to ensure that the ecological deficit was not overestimated. Therefore, a more rigorous or more exhaustive calculation would increase the observed deficit, and thus move the “celebration” of Earth Overshoot Day even earlier.

Finally, it is worth noting that this observation of an ecological overshoot is now widely confirmed by another scientific community which has, for the past ten years, worked in more detail on the “planetary boundaries” concept.

This work revealed nine areas of concern which represent ecological thresholds beyond which the conditions of life on Earth could no longer be guaranteed, since we would be leaving the stable state that has characterized the planet’s ecosystem for 10,000 years.

For three of these issues, it appears the limits have already been exceeded: the species extinction rate and the biogeochemical cycles of nitrogen and phosphorus. We are also dangerously close to the thresholds for climate change and land use. In addition, we cannot rule out the possibility of new causes for concern arising in the future.

Earth Overshoot Day is therefore worthy of our attention, since it reminds us of this inescapable reality: we are exceeding several of our planet’s ecological limits. Humanity must take this reality more seriously. Otherwise, the Earth might someday continue to rotate… but without us.

 

[divider style=”normal” top=”20″ bottom=”20″]

Aurélien Boutaud and Natacha Gondran co-authored « L’empreinte écologique » (éditions La Découverte, 2018).

aneurysm

A digital twin of the aorta to prevent aneurysm rupture

15,000 Europeans die each year from rupture of an aneurysm in the aorta. Stéphane Avril and his team at Mines Saint-Étienne are working to better prevent this. To do so, they develop a digital twin of the artery of a patient with an aneurysm. This 3D model makes it possible to simulate the evolution of an aneurysm over time, and better predict the effect of a surgically-implanted prosthesis. Stéphane Avril talks to us about this biomechanics research project and reviews the causes for this pathology along with the current state of knowledge on aneurysms.

 

Your research focuses on the pathologies of the aorta and aneurysm rupture in particular. Could you explain how this occurs?   

Stéphane Avril

Stéphane Avril: The aorta is the largest artery in our body. It leaves the heart and distributes blood to the arms and brain, goes back down to supply blood to the intestines and then divides in two to supply blood to the legs. The wall of the aorta is a little bit like our skin. It is composed of practically the same proteins and the tissues are very similar. It therefore becomes looser as we age. This phenomenon may be accelerated by other factors such as tobacco or alcohol. It is an irreversible process that results in an enlarged diameter of the artery. When there is significant dilation, it is called an aneurysm. This is the most common pathology of the aorta. The aneurysm can rupture, which is often lethal for the individual. In Europe, some 15,000 people die each year from a ruptured aneurysm.

Can the appearance of an aneurysm be predicted?

SA: No, it’s very difficult to predict where and when an aneurysm will appear. Certain factors are morphological. For example, some aneurysms result from the malformation of an aortic valve: 1% of the population has only two of the three leaflets that make up this part of the heart. As a result, the blood is pumped irregularly, which leads to a microinjury on the wall of the aorta, making it more prone to damage. One out of two individuals with this malformation develops an aneurysm, usually between the ages of 40 and 60. There are also genetic factors that lead to aneurysms earlier in life, between the ages of 20 and 40. Then there are the effects of ageing, which make populations over 60 more likely to develop this pathology. It is complicated to determine which factors predominate over the others, especially since an individual declared healthy at 30 or 40 may then start smoking, which will affect how the aorta evolves.

If aneurysms cannot be predicted, can they be treated?

SA: In biology, extensive basic research has been conducted on the aortic system. This has allowed us to understand a lot about what causes aneurysms and how they evolve. Although specialists cannot predict an aneurysm’s appearance, they can say why the pathology appeared in a certain location instead of another, for example. For patients who already have an aneurysm, this also means that we know how to identify the risks related to the evolution of the pathology. However, no medication exists yet. Current solutions rely rather on surgery to implant a prosthesis or an endoprosthesis — a stent covered with fabric — to limit pressure on the damaged wall of the artery. Our work, carried out within the Sainbiose joint research unit [run by INSERM, Mines Saint-Étienne and Université Jean Monnet], focuses on gathering everything that is known so far about the aorta and aneurysms in order to propose digital models.

What is the purpose of these digital models?

SA: The model should be seen as a 3D digital twin of the patient’s aorta. We can perform calculations on it. For example, we study how the artery evolves naturally, whether or not there is a high risk of aneurysm rupture, and if so, where exactly in the aorta. The model can also be used to analyze the effect of a prosthesis on the aneurysm. We can determine whether or not surgery will really be effective and help the surgeon choose the best type of prosthesis. This use of the model to assist with surgery led to the creation of a startup, Predisurge, in May 2017. Practitioners are already using it to predict the effect of an operation and calculate the risks.

Read on IMTech: Biomechanics serving healthcare

How do you go about building this twin of the aorta?  

SA: The first data we use comes from imaging. Patients undergo CAT scans and MRIs. The MRIs give us information about blood flow because we can have 10 to 20 photos of the same area over the duration of a cardiac cycle. This provides us with information about how the aorta compresses and expands with each heartbeat. Based on this dynamic, our algorithms can trace the geometry of the aorta. By combining this data with pressure measurements, we can deduce the parameters that control the mechanical behavior of the wall, especially elasticity. We then relate this to the wall’s composition in elastin, collagen and smooth muscle cells. This gives us a very precise idea of all the parts of the patient’s aorta and its behavior.
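As a very rough illustration of why this geometric and mechanical information matters (and not a description of the team’s finite-element models), the textbook Laplace relation below links blood pressure, lumen radius and wall thickness to the circumferential stress in the wall. The numerical values are generic orders of magnitude, not patient data.

```python
# A zeroth-order illustration, not the team's finite-element model: Laplace's
# law for a thin-walled cylinder links blood pressure, lumen radius and wall
# thickness to circumferential (hoop) wall stress. Values are generic textbook
# orders of magnitude, not measurements from any patient.

def hoop_stress_kpa(pressure_mmhg: float, radius_mm: float, thickness_mm: float) -> float:
    """Circumferential stress sigma = P * r / t for a thin-walled tube."""
    pressure_kpa = pressure_mmhg * 0.1333  # 1 mmHg ~= 0.1333 kPa
    return pressure_kpa * radius_mm / thickness_mm

healthy = hoop_stress_kpa(pressure_mmhg=120, radius_mm=15, thickness_mm=2.0)
dilated = hoop_stress_kpa(pressure_mmhg=120, radius_mm=27, thickness_mm=1.5)

print(f"Healthy aorta : ~{healthy:.0f} kPa")
print(f"Dilated aorta : ~{dilated:.0f} kPa")
# A larger radius and a thinner wall both raise the stress, which is why
# dilation makes rupture more likely and why patient-specific geometry matters.
```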

Are the digital twins intended for all patients?

SA: That’s one of the biggest challenges. We would like to have a digital twin for each patient as this would allow us to provide personalized medicine on a large scale. This is not yet the case today. For now, we are working with groups of volunteer patients who are monitored every year as part of a clinical study run by the Saint-Étienne University hospital. Our digital models are combined with analyses by doctors, allowing us to validate these models and talk to professionals about what they would like to be able to find using the digital twin of the aorta. We know that as of today, not all patients can benefit from this tool. Analyzing the data collected, building the 3D model, setting the right biological properties for each patient… all this is too time-consuming for wide-scale implementation. At the same time, what we are trying to do is identify the groups of patients who would most benefit from this twin. Is it patients who have aneurysms caused by genetic factors? For which age groups can we have the greatest impact? We also want to move towards automation to make the tool available to more patients.

How can the digital twin tool be used on a large scale?  

SA: The idea would be to include many more patients in our validation phase to collect more data. With a large volume of data, it is easier to move towards artificial intelligence to automate processing. To do so, we have to monitor large cohorts of patients in our studies. This means we would have to shift to a platform incorporating doctors, surgeons and researchers, along with imaging device manufacturers, since this is where the data comes from. This would help create a dialogue between all the various stakeholders and show professionals how modeling the aorta can have a real impact. We already have partnerships with other IMT network schools: Télécom SudParis and Télécom Physique Strasbourg. We are working together to improve the state of the art in image processing techniques. We are now trying to include imaging professionals. In order to scale up the tool, we must also expand the scope of the project. We are striving to do just that.


MRI

Mathematical tools for analyzing the development of brain pathologies in children

Magnetic resonance imaging (MRI) enables medical doctors to obtain precise images of a patient’s brain structure and anatomy, and of the pathologies that may affect it. However, to analyze and interpret these complex images, radiologists need specific mathematical tools. While some tools exist for interpreting images of the adult brain, these tools are not directly applicable to analyzing brain images of young children and newborn or premature infants. The Franco-Brazilian project STAP, which includes Télécom ParisTech among its partners, seeks to address this need by developing mathematical modeling and MRI analysis algorithms for the youngest patients.

 

An adult’s brain and that of a developing newborn infant are quite different. An infant’s white matter has not yet fully myelinated and some anatomical structures are much smaller. Due to these specific features, the images obtained of a newborn infant and an adult via magnetic resonance imaging (MRI) are not the same. “There are also difficulties related to how the images are acquired, since the acquisition process must be fast. We cannot make a young child remain still for a long period of time,” adds Isabelle Bloch, a researcher in mathematical modeling and spatial reasoning at Télécom ParisTech. “The resolution is therefore reduced because the slices are thicker.”

These complex images require the use of tools to analyze and interpret them and to assist medical doctors in their diagnoses and decisions. “There are many applications for processing MRI images of adult brains. However, in pediatrics there is a real lack that must be addressed,” the researcher observes. “This is why, in the context of the STAP project, we are working to design tools for processing and interpreting images of young children, newborns and premature infants.”

The STAP project, funded by the ANR and FAPESP, was launched in January and will run for four years. The partners involved include the University of São Paulo in Brazil, the pediatric radiology departments at São Paulo Hospital and Bicêtre Hospital, as well as the University of Paris Dauphine and Télécom ParisTech. “Three applied mathematics and IT teams are working on this project, along with two teams of radiologists. Three teams in France, two in Brazil… The project is both international and multidisciplinary,” says Isabelle Bloch.

 

Rare and heterogeneous data

To work towards developing a mathematical image analysis tool, the researchers collected MRIs of young children and newborns from partner hospitals. “We did not acquire data specifically for this project,” Isabelle Bloch explains. “We use images that are already available to the doctors, for which the parents have given their informed consent for the images to be used for research purposes.” The images are all anonymized, regardless of whether they display normal or pathological anatomy. “We are very cautious: if a patient has a pathology that is so rare that his or her identity could be recognized, we do not use the image.”

Certain pathologies and developmental abnormalities are of particular interest to the researchers: hyperintense areas, which are areas of white matter that appear lighter than normal on the MRI images; developmental abnormalities in the corpus callosum, the anatomical structure which joins the two cerebral hemispheres; and cancerous tumors.

“We are faced with some difficulties because few MRI images exist of premature and newborn babies,” Isabelle Bloch explains. “Finally, the images vary greatly depending on the age of the patient and the pathology. We therefore have a limited dataset and many variations that continue to change over time.”

 

Modeling medical knowledge

Although the available images are limited and heterogeneous, the researchers can make up for this lack of data through the medical expertise of radiologists, who are in charge of annotating the MRIs that are used. The researchers will therefore have access to valuable information on brain anatomy and pathologies as well as the patient’s history. “We will work to create models in the form of medical knowledge graphs, including graphs of the structures’ spatial layout. We will have assistance from the pediatric radiologists participating in the project,” says Isabelle Bloch. “These graphs will guide the interpretation of the images and help to describe the pathology and the surrounding structures: Where is the pathology? What healthy structures could it affect? How is it developing?”

For this model, each anatomical structure will be represented by a node. These nodes are connected by edges that bear attributes such as spatial relationships or contrasts of intensity that exist in the MRI.  This graph will take into account the patient’s pathologies by adapting and modifying the links between the different anatomical structures. “For example, if the knowledge shows that a given structure is located to the right of another, we would try to obtain a mathematical model that tells us what ‘to the right of’ means,” the researcher explains. “This model will then be integrated into an algorithm for interpreting images, recognizing structures and characterizing a disease’s development.”
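As a purely illustrative sketch (not the STAP project’s actual formalism), the snippet below shows the general idea: anatomical structures as graph nodes, with an edge carrying a fuzzy degree for a relation such as “to the right of”, here crudely derived from the angle between two hypothetical centroids.

```python
# A toy sketch of the idea (not the STAP project's actual model): anatomical
# structures as graph nodes, with edges carrying a fuzzy degree for a spatial
# relation such as "to the right of", computed from the angle between centroids.

import math

# Hypothetical 2D centroids of segmented structures (in image coordinates).
centroids = {
    "corpus_callosum": (50.0, 40.0),
    "lesion":          (72.0, 43.0),
}

def degree_right_of(a: str, b: str) -> float:
    """Fuzzy degree to which structure a is to the right of structure b.

    1.0 when a lies exactly along the +x direction from b, decreasing to 0.0
    as the direction rotates away (a crude stand-in for fuzzy spatial relations).
    """
    ax, ay = centroids[a]
    bx, by = centroids[b]
    angle = math.atan2(ay - by, ax - bx)   # 0 means exactly "to the right"
    return max(0.0, math.cos(angle))

# Edges of the knowledge graph: (subject, relation, object, degree)
graph = [("lesion", "right_of", "corpus_callosum",
          degree_right_of("lesion", "corpus_callosum"))]

for subj, rel, obj, deg in graph:
    print(f"{subj} {rel} {obj}: degree {deg:.2f}")
```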

After analyzing a patient’s images, the graph will become an individual model that corresponds to a specific patient. “We do not yet have enough data to establish a standard model, which would take variability into account,” the researcher adds. “It would be a good idea to apply this method to groups of patients, but that would be a much longer-term project.”

 

An algorithm to describe images in the medical doctor’s language

In addition to describing the brain structures spatially and visually, the graph will take into account how the pathology develops over time. “Some patients are monitored regularly. The goal would be to compare MRI images spanning several months of monitoring and precisely describe the developments of brain pathologies in quantitative terms, as well as their possible impact on the normal structures,” Isabelle Bloch explains.

Finally, the researchers would like to develop an algorithm that would provide a linguistic description of the images’ content using the pediatric radiologist’s specific vocabulary. This tool would therefore connect the quantified digital information extracted from images with words and sentences. “This is the reverse of the method used for the mathematical modeling of medical knowledge,” Isabelle Bloch explains. “The algorithm would therefore describe the situation in a quantitative and qualitative manner, hence facilitating the interpretation by the medical expert.”

“In terms of the structural modeling, we know where we are headed, although we still have work to do on extracting the characteristics from the MRI,” says Isabelle Bloch regarding the project’s technical aspects. “But combining spatial analysis with temporal analysis poses a new problem… As does translating the algorithm into the doctor’s language, which requires transitioning from quantitative measurements to a linguistic description.” Far from trivial, this technical advance could eventually allow radiologists to use new image analysis tools better suited to their needs.

Find out more about Isabelle Bloch’s research

gender diversity, digital professions

Why women have become invisible in IT professions

Female students have deserted computer science schools and women seem mostly absent from companies in this sector. The culprit: the common preconception that female computer engineers are naturally less competent than their male counterparts. The MOOC entitled Gender Diversity in IT Professions*, launched on 8 March 2018, looks at how sexist stereotypes are constructed, often insidiously. Why are women now a minority, rendered invisible in the digital sector, despite the many female pioneers and entrepreneurs who paved the way for the development of software and video games? Chantal Morley, a researcher within the Gender@Telecom group at Institut Mines-Telecom Business School, takes a look back at the creation of this MOOC, highlighting the research underpinning the course.

 

In 2016, only 33% of digital jobs were occupied by women (OPIIEC). Taking into account only the “core” professions in the sector (engineer, technician or project manager), the percentage falls to 20%. Why is there such a gender gap? No, it’s not because women are less talented in technical professions, nor because they prefer other areas. The choices made by young women in their education and by women in their careers are not always the result of a free and informed decision. The influence of stereotypes plays a significant role. These popular beliefs reinforce the idea that the IT field is inherently masculine, a place where women do not belong, and this influences our choices and behaviors even when we do not realize it.

The research group Gender@Telecom, which brings together several female researchers from IMT schools, is looking at the issue of women’s role in the field of information and communication technologies, and specifically the software sector. Through their studies and analysis, the group’s researchers have observed and described how these stereotypes are expressed.  “We interviewed professionals in the sector, and asked students specific questions about their choices and opinions,” explains Chantal Morley, researcher at Institut Mines-Telecom Business School. By analyzing the speech from those interviews, the researchers identified many preconceived notions. “Women do not like computer science, it does not interest them, for example,” the researcher continues. “These representations are unproven and do not match reality!” These little phrases that communicate stereotypes are heard from both men and women. “One might think that this type of differentiation in representations would not exist among male and female students, but that is not the case,” says Chantal Morley. “During a study conducted in Switzerland, we found that guidance counselors are also very much influenced by these stereotypes.” Among professionals, these views are even cited as arguments justifying certain choices.

 

Little phrases, big impacts

The Gender Diversity in IT Professions MOOC* developed by the Gender@Telecom group is aimed at deconstructing these stereotypes. “We used these studies to try to show learners how little things in everyday life, which we do not even notice, contribute to instilling these differentiated views,” Chantal Morley explains. These little phrases or representations can also be found in our speech as well as in advertisements, posters… When viewed individually, these small occurrences are insignificant, yet it is their repetition and systematic nature that pose a problem. Together they work to establish and reinforce sexist stereotypes. “They form a common knowledge, a popular belief that everyone is aware of, that we all accept, saying ‘that’s just the way it is!’”

To study this phenomenon, the researchers from the group analyzed speech from semi-structured interviews conducted with stakeholders in the digital industry. The researchers’ questions focused on the relationship with technology and an entrepreneurship competition that had recently been held at Institut Mines-Telecom Business School. “Again, in this study, some types of arguments were frequently repeated and helped reinforce these stereotypes,” Chantal Morley observes. “For example, when someone mentions a woman who is especially talented, the person will often add, ‘yes, but with her it’s different, that doesn’t count.’ There is always an excuse for not questioning the general rule that says women lack the abilities required in digital professions.”

 

Unjustified stereotypes

Yet despite their pervasiveness, there is nothing to justify these remarks. The history of computer science professions proves this fact. However, the contribution of women has long been hidden behind the scenes. “When we studied the history of computer science, we were primarily looking at the area of hardware, equipment. Women were systematically rejected by universities and schools in this field, where they were not allowed to earn a diploma,” says Chantal Morley. “Also, some companies refused to keep their employees if they had a child or got married. This made careers very difficult.” In recent years, research on the history of the software industry, in which there were more opportunities, has revealed that many women contributed to major aspects of its development.

“Ada Lovelace is sort of the Marie Curie of computer science… People think she is the only one! Yet she is one contributor among others,” the researcher explains. For example, the computer scientist Grace Hopper invented the first compiler and the COBOL language in the 1950s. “She had the idea of inventing a translator that would translate a relatively understandable and accessible language into machine language. Her contribution to programming was crucial,” Chantal Morley continues. “We can also mention Roberta Williams, a computer scientist who greatly contributed to the beginnings of video games, or Stephanie Shirley, a pioneer computer scientist and entrepreneur…”

In the past these women were able to fight for their place in software professions. What has happened to make women seem absent from these arenas? According to Chantal Morley, the exclusion of women occurred with the arrival of microcomputing, which at the time had been designed for a primarily male target, that of executives. “The representations conveyed at that time progressively led everyone to associate working with computers with men.” But although women are a minority in this sector, they are not completely absent. “Many have participated in the creation of very large companies, they have founded startups, and there are some very famous women hackers,” Chantal Morley observes. “But they are not at all in the spotlight and do not get much press coverage. As if they were an anomaly, something unusual…”

Finally, women’s role in the digital industry varies greatly depending on the country and culture. In India and Malaysia, for example, computer science is a “women’s” profession. It is all a matter of perspective, not a question of innate abilities.

 

[box type=”shadow” align=”” class=”” width=””]*A MOOC combating sexist stereotypes

How are these stereotypes constructed and maintained? How can they be deconstructed? How can we promote the integration of women in digital professions? The Gender Diversity in IT Professions MOOC (in French), launched on 8 March 2018, uncovers the little-known contribution of women to the development of the software industry and the mechanisms that keep them hidden and discourage them from entering this sector. The MOOC is aimed at raising awareness among companies, schools and research organizations on these issues to provide them with keys for developing a more inclusive culture for women. [/box]


 

AI, an issue of economy and civilization?

This is the first issue of the new quarterly series of the Annales des Mines devoted to the digital transition. The progress made using algorithms, the new computational capacities of devices (ranging from graphics cards to the cloud), and the availability of huge quantities of data combine to explain the advances under way in Artificial Intelligence. But AI is not just a matter of algorithms operating in the background on digital platforms. It increasingly enters into a chain of interactions with the physical world. The examples reported herein come from finance, insurance, employment, commerce and manufacturing. This issue turns to the stakeholders, in particular French ones, who are implementing AI or who have devoted thought to this implementation and to AI’s place in our society.

Introduction by Jacques Serris, Engineer from the Corps des Mines, Conseil Général de l’Économie (CGE)

About Digital issues, the new series of Annales des Mines

Digital Issues is a quarterly series (March, June, September and December), freely downloadable from the Annales des Mines website, with a print version in French. The series focuses on the issues of the digital transition for an informed, though not necessarily expert, readership. As the Annales des Mines do in all their series, it combines viewpoints from technology, economics and society.

Download all the articles of the issue

Aizimov BEYABLE startup artificial intelligence

Artificial Intelligence hiding behind your computer screen!

Far from the dazzle of intelligent humanoid robots and highly effective chatbots, artificial intelligence is now used in many ordinary products and services. In the software and websites consumers use on a daily basis, AI is being used to improve the use of digital technology. This new dynamic is perfectly illustrated by two startups incubated at Télécom ParisTech: BEYABLE and AiZimov.

 

Who are the invisible workers managing the aisles of digital shops? At the supermarket, shoppers regularly see employees stocking the shelves, but the shelves of online sales sites are devoid of human contact. “Whether a website has 500 or 10,000 items for sale, there are always fewer employees managing the products than at a real store,” explains Julien Dugaret, founder of the startup BEYABLE. The young company is well aware that these digital showcases still require maintenance. Currently accelerated at Télécom ParisTech and formerly incubated there, it offers a solution for detecting anomalies on online shopping sites.

BEYABLE’s artificial intelligence algorithms use a clustering technique. By analyzing data from internet users’ visits to websites and the data associated with each product, they group the items together into coherent “clusters”. The articles that cannot be included in any of the clusters are then identified as anomalies and corrected so they can be reintegrated into the right place.
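BEYABLE’s own algorithms are not public, but the general principle can be sketched with a standard clustering routine: items are grouped by their features, and anything that falls outside every cluster is flagged as an anomaly. The features, thresholds and library choice (scikit-learn’s DBSCAN) below are illustrative assumptions.

```python
# The general principle only (BEYABLE's own algorithms are not public):
# cluster catalogue items by their features, then flag anything that falls
# outside every cluster as an anomaly to be reviewed by a human.

import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical item features, e.g. (encoded category, price in euros).
items = np.array([
    [0.0, 20.0], [0.1, 22.0], [0.2, 19.0],   # coherent group: sandals
    [1.0, 90.0], [1.1, 92.0], [0.9, 88.0],   # coherent group: boots
    [0.1, 55.0],                             # item whose features match neither group
])

# Items that cannot be attached to any cluster receive the label -1.
labels = DBSCAN(eps=3.0, min_samples=2).fit_predict(items)

for features, label in zip(items, labels):
    status = "anomaly" if label == -1 else f"cluster {label}"
    print(f"{features} -> {status}")
# The item labelled -1 belongs to no cluster and would be routed to an
# employee for correction (wrong category, missing image, bad description...).
```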

“Some products do not have the right images, descriptions or references. For example, a pair of heels might be included in the ‘boots’ category of an online shop,” explains the entrepreneur. The software then identifies the heels so that an employee can correct the description. While this type of error may seem anecdotal or funny, for the companies that use BEYABLE’s services, the quality of the customer experience is at stake.

Some websites offer thousands of articles with product references that are constantly changing. It is important to make sure visitors using the website do not feel lost from one day to the next. “If a real merchant sold t-shirts one day and coffee tables the next, you can imagine all the logistics that would be required overnight. For an online shop, the logistics involved in changing the collection or promoting certain articles is much simpler, but not effortless. The reduced number of online store ‘department managers’ makes the logistics all the more difficult,” explains Julien Dugaret. Artificial intelligence tools play an essential role in these logistics, helping digital marketing teams save a lot of time and ensuring visitor satisfaction.

BEYABLE is increasingly working with websites run by major brands. These websites invest hundreds of thousands of euros to earn consumers’ loyalty. “These websites have now become very important assets for companies,” the founder of the startup observes. They therefore need to understand what the customers are looking for and how they interact with the pages. BEYABLE does more than perform basic analyses, like the so-called “analytics” tools—the best-known being Google Analytics—it also offers these companies “a look at what they cannot see,” says Julien Dugaret.

The company’s algorithms learn from the visits by categorizing them and identifying several types of internet users: those who look at the maps for nearby shops, those who want to discover items before they buy them, those who are interested in the brand’s activities… “Companies do not always have data experts who can analyze all the information about their visitors, so we offer AI tools suited to this purpose,” Julien Dugaret explains.

Artificial intelligence for professional emails?

For those who use digital services, the hidden AI processes are not only used to improve their online shopping experience. Jérôme Devosse worked as a salesperson for several years and used to study social networks, company websites and news sites to glean information about the people he wanted to contact. “This is business as usual for salespeople: adapting the sales hook and initial contact based on the person’s interests and the company’s needs,” he explains.

After growing weary of doing this task the slow way, he decided to create a tool to automate the research he carried out before appointments. And that was how AiZimov was born, another startup incubated at Télécom ParisTech. “It’s an assistant,” explains Jérôme Devosse. “All I have to do is tell it ‘I want to contact that person’ and it will write an email based on the public data available online.” Interviews with the person, their company’s financial reports, their place of residence, their participation at trade shows, all of this information is useful for the software. “For example, the assistant will automatically write a message saying, ‘I saw you will be at Vivatech next week, come meet us!’” AiZimov’s founder explains.

The tool works in three stages. First, there is the data acquisition stage, which searches through large volumes of publicly available data. Next, the data must be understood. Is the sentence containing the targeted person’s name from an interview or a financial report? What are the associated key words and what logical connections can be made? Finally, the text is generated automatically and can be checked based on different criteria. The user can then choose to send an email that is more formal or more emotional—using things the contact is passionate about—or a very friendly email.
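That three-stage structure can be sketched as follows; everything in it (data sources, ranking rule, message template) is a hypothetical stand-in, since AiZimov’s actual models and templates are not public.

```python
# A structural sketch of the three stages described above (acquisition,
# understanding, generation). Everything here is hypothetical: AiZimov's
# actual data sources, models and templates are not public.

from dataclasses import dataclass

@dataclass
class Finding:
    source: str      # e.g. "interview", "financial report", "trade show listing"
    text: str        # the sentence mentioning the target person
    keywords: list   # extracted key words / logical connections

def acquire(person: str) -> list:
    """Stage 1: gather public mentions of the person (stubbed here)."""
    return [Finding("trade show listing", f"{person} will attend Vivatech next week",
                    ["Vivatech", "next week"])]

def understand(findings: list) -> dict:
    """Stage 2: decide which finding is most useful as a hook."""
    # Here: naively prefer event announcements over other sources.
    ranked = sorted(findings, key=lambda f: f.source != "trade show listing")
    return {"hook": ranked[0]}

def generate(person: str, analysis: dict, tone: str = "friendly") -> str:
    """Stage 3: fill a template; a real system could vary formality/emotion."""
    hook = analysis["hook"]
    greeting = "Hi" if tone == "friendly" else "Dear"
    return f"{greeting} {person}, I saw that {hook.text.lower()} - come meet us there!"

draft = generate("Alex", understand(acquire("Alex")), tone="friendly")
print(draft)   # the salesperson still reviews and edits before sending
```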

Orange and Renault are already testing the startup’s software. “For salespeople from large companies, the time they save by not writing emails to new contacts is used to maintain the connections they have with existing customers to continue the relationship,” explains Jérôme Devosse. Today, the tool does not send an email without the salesperson’s approval. The salesperson can still choose to modify a few details. The entrepreneur is not seeking an entirely automatic process. His areas for future development are focused on using the software for other activities.

“I would like to go beyond emails: once the information is acquired, it could be used to write a detailed or general script for making contact via telephone,” he explains. AiZimov’s technology could also be used for professions other than sales. In press relations it could be used to contact the most relevant journalists by sending them private messages on social networks, for example. And why not make this software available to human resource departments for contacting individuals for recruitment purposes? Artificial intelligence could therefore continue to be used in many different online interactions.

fundamental physics

From springs to lasers: energy’s mysterious cycle

In 1953, scientists theorized the energy behavior of a chain of springs and revealed a paradox in fundamental physics. Over 60 years later, a group of researchers from IMT Lille Douai, CNRS and the universities of Lille and Ferrara (Italy) has succeeded in observing this paradox. Their results have greatly enhanced our understanding of nonlinear physical systems, which play a part in detecting exoplanets, navigating driverless cars and forming large waves in the ocean. Arnaud Mussot, a physicist and member of the partnership, explains the process and the implications of the research, published in Nature Photonics on April 2, 2018.

 

The starting point for your work was the Fermi-Pasta-Ulam-Tsingou problem. What is that?

Arnaud Mussot: The name refers to the four researchers who wanted to study a complex problem in the 1950s. They were interested in observing the behavior of masses connected by springs. They used 64 of them in their experiment. With a chain like this, each spring’s behavior depends on that of the others, but in a non-proportional manner – what we call “nonlinear” in physics. A theoretical study of the nonlinear behavior of such a large system of springs required them to use a computer. They thought that the theoretical results the computer produced would show that when one spring is agitated, all the springs begin to vibrate until the energy spreads evenly to the 64 springs.

Is that what happened?

AM: No. To their surprise, the energy spread throughout the system and then returned to the initial spring. It was then redispersed into the springs and then again returned to the initial point of agitation, and so on. These results from the computer completely contradicted their prediction of energy being evenly and progressively distributed, known as an equipartition of energy.  Since then, these results have been called the “Fermi-Pasta-Ulam-Tsingou paradox” or “Fermi-Pasta-Ulam-Tsingou recurrence”, referring to the recurring behavior of the system of springs. However, since the 1950s, other theoretical research has been carried out. This research has shown that by allowing a system of springs to vibrate for a very long time, equipartition is achieved.
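The original computation is easy to reproduce approximately today. The sketch below simulates a 32-mass α-FPUT chain (a smaller chain than the 64 springs mentioned above, to keep the run short) with all the initial energy in the lowest vibration mode, and prints how much of the energy that mode holds over time. The parameters are generic textbook values, not those of the fibre-optic experiment discussed later in the interview.

```python
# A minimal numerical sketch of the alpha-FPUT chain (the 1950s computation,
# not the fibre-optic experiment): 32 masses, fixed ends, quadratic
# nonlinearity alpha = 0.25, all the initial energy in the lowest mode.
# (The loop takes a few seconds to run.)

import numpy as np

N, alpha, dt, t_max = 32, 0.25, 0.1, 12000.0

j = np.arange(1, N + 1)
x = np.sin(np.pi * j / (N + 1))          # initial shape = lowest mode
v = np.zeros(N)

def acceleration(x):
    xp = np.concatenate(([0.0], x, [0.0]))           # fixed ends
    lin = xp[2:] - 2.0 * xp[1:-1] + xp[:-2]
    nonlin = (xp[2:] - xp[1:-1]) ** 2 - (xp[1:-1] - xp[:-2]) ** 2
    return lin + alpha * nonlin

def mode_energy_share(x, v, k=1):
    """Fraction of the total linear-mode energy held by mode k."""
    modes = np.sin(np.pi * np.outer(np.arange(1, N + 1), j) / (N + 1))
    Q = modes @ x * np.sqrt(2.0 / (N + 1))
    P = modes @ v * np.sqrt(2.0 / (N + 1))
    omega = 2.0 * np.sin(np.pi * np.arange(1, N + 1) / (2 * (N + 1)))
    E = 0.5 * (P ** 2 + (omega * Q) ** 2)
    return E[k - 1] / E.sum()

a = acceleration(x)
for step in range(1, int(t_max / dt) + 1):
    x += v * dt + 0.5 * a * dt ** 2       # velocity Verlet integration
    a_new = acceleration(x)
    v += 0.5 * (a + a_new) * dt
    a = a_new
    if step % int(1000 / dt) == 0:
        print(f"t = {step * dt:7.0f}  energy share in mode 1: {mode_energy_share(x, v):.2f}")
# Instead of spreading evenly over the 32 modes, the energy drains away and
# then flows back towards mode 1 - the Fermi-Pasta-Ulam-Tsingou recurrence.
```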

Why is the work you refer to primarily theoretical and not experimental?

AM: In reality, the vibrations in the springs are absorbed by many external factors. It could be friction with the air. In fact, experiments carried out to observe this paradox have not only concerned springs, but all nonlinear oscillation systems, such as laser beams in fiber optics. In this case, the vibrations are mainly absorbed by impurities in the glass that composes the fiber. In all these systems, energy losses due to external factors prevent the observation of anything beyond the first recurrence. The system returns to its initial state, but then it is difficult to make it return to this state a second time. Yet it is in this step that new and rich physics emerge.

Is this where your work published in Nature Photonics on April 2nd comes into play?

AM: We thought it was a shame to be limited to a single recurrence, because many interesting things happen when we can observe at least two. We therefore had to find ways to limit the absorption of the vibrations in order to reach the second recurrence. To accomplish this, we added a second laser which amplified the first one to compensate for the losses. This type of amplification is already used in fiber optics to carry data over long distances. We distorted its initial purpose to resolve part of our problem. The other part was to succeed in observing the recurrence.

Whether it be a spring or an optical fiber, Fermi-Pasta-Ulam-Tsingou recurrence is common to all nonlinear systems

Was the observation difficult to achieve?

AM: Compensating for energy losses was a crucial step, but it was pointless if we were not able to clearly observe what was happening in the fiber. To achieve this, we used the same impurities in the glass which absorbed the light signal. These impurities reflect a small part of the laser which circulates in the fiber. The returning light provides us with information on the development of the laser beam’s power as it spreads. This reflected part is then measured with another laser which is synchronous with the first, to assess the difference in phase between the two. This gives us additional information that allows us to clearly reveal the second recurrence for the first time in the world.

What did these observation techniques reveal about the second recurrence?

AM: We were able to conduct the first experimental demonstration of what we call a break in symmetry. There is a recurrence for given values of the initial energy sent into the system. But we also knew that, theoretically, if we slightly changed these initial values that disturb the system, there would be a shift in the distribution of energy during the second recurrence. The system would not repeat the same values. In our experiment, we managed to reverse the maximum and minimum energy levels in the second recurrence compared to the first.

What perspectives does this observation create?

AM: From a fundamental perspective, the experimental observation of the theory predicting the break in symmetry is very interesting because it provides a confirmation. But in addition to this, the techniques we implemented to limit the absorption and observe what was occurring are very promising. We want to perfect them in order to go beyond second recurrence. If we succeed in reaching the equipartition point predicted by Fermi, Pasta, Ulam and Tsingou, we will then be able to observe a continuum of light. In very simple terms, this is the moment when we no longer see the lasers’ pulsations.

Does this fundamental work have applications?

AM: In terms of applications, our work allowed us to better understand how nonlinear systems develop. Yet these systems are often all around us. In nature, for example, they are the basic ingredients for forming rogue waves, exceptionally high waves that can be observed in the ocean. With a better understanding of Fermi-Pasta-Ulam-Tsingou recurrence and the energy variations in nonlinear systems, we could better understand the mechanisms involved in shaping rogue waves and could better detect them. Nonlinear systems are also present in many optical tools. Modern LIDARs, which use frequency combs or “laser rulers”, calculate distances by sending a laser beam and then very precisely timing how long it takes for it to return—like a radar, except with light. However, the lasers have nonlinear behavior: here again our work can help optimize the operation of new-generation LIDARs, which could be used for navigating autonomous cars. Finally, calculations on nonlinear physical systems are also involved in detecting exoplanets, thanks to their extreme precision.

 

blockchains

Can we trust blockchains?

Maryline Laurent, Télécom SudParis – Institut Mines-Télécom, Université Paris-Saclay

[divider style=”normal” top=”20″ bottom=”20″]

[dropcap]B[/dropcap]lockchains were initially presented as a very innovative technology with great promise in terms of trust. But is this really the case? Recent events, such as the hacking of the Parity wallet ($30 million US) or the Tether firm ($31 million US) have raised doubts.

This article provides an overview of the main elements outlined in Chapter 11 of the book, Signes de confiance : l’impact des labels sur la gestion des données personnelles (Signs of trust: the impact of seals on personal data management) produced by the Personal Data Values and Policies Chair of which Télécom SudParis is the co-founder. The book may be downloaded from the chair’s website. This article focuses exclusively on public blockchains.

Understanding the technology

A blockchain can traditionally be compared to a “big”, accessible and auditable account ledger deployed on the internet. It relies on a large number of IT resources spread out around the world, called “nodes”, which help make the blockchain work. In the case of a public blockchain, everyone can contribute, as long as they have a powerful enough computer to execute the associated code.

Executing the code implies acceptance of the blockchain’s governance rules. These contributors are responsible for collecting transactions made by blockchain customers, aggregating transactions in a structure called a “block” (of transactions) and validating the blocks before they are linked to the blockchain. The resulting blockchain can be up to several hundred gigabytes and is duplicated a great number of times on the internet, which ensures wide availability of the blockchain.
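The chaining itself can be illustrated with a toy example: each block records the hash of the previous one, so altering an old transaction invalidates every later link. This sketch deliberately leaves out the network, consensus and proof-of-work layers that real public blockchains rely on.

```python
# A toy illustration of the chaining itself (no network, no consensus, no
# proof of work): each block stores the hash of the previous one, so altering
# an old transaction breaks every later link and is immediately detectable.

import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, transactions: list) -> None:
    previous = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "prev_hash": previous, "transactions": transactions})

def verify(chain: list) -> bool:
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))

chain: list = []
append_block(chain, [{"from": "alice", "to": "bob", "amount": 3}])
append_block(chain, [{"from": "bob", "to": "carol", "amount": 1}])

print("chain valid:", verify(chain))            # True
chain[0]["transactions"][0]["amount"] = 300     # tamper with an old block...
print("after tampering:", verify(chain))        # False: the links no longer match
```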

Elements of trust

The blockchain is based on the following major conceptual principles, naturally positioning it as an ideal technology of trust:

  • Decentralized architecture and neutrality of governance based on the principle of consensus: it relies on a great number of independent contributors, making it decentralized by definition. This means that unlike a centralized architecture where decisions can be made unilaterally, a consensus must be reached, or a party must manage to control over 50% of the blockchain’s computing power (computer resources) to have an effect on the system. Therefore, any change in the governance rules must previously have been approved by consensus between the contributors, who must then update the software code executed.
  • Transparency of algorithms makes for better auditability: all transactions, all blocks, and all governance rules are freely accessible and can be read by everyone. This means that anyone can audit the system to ensure the correct operation of the blockchain and legitimacy of the transactions. The advantage is that experts in the community of users may closely examine the code and report anything that seems suspicious. Trust is therefore based on whistleblowers.
  • Secure underlying technology: Cryptographic techniques and terms of use guarantee that the blockchain cannot be altered, that the transactions recorded are authentic, even if they have been made under a pseudonym and that blockchain security is able to keep up with technological advances thanks to an adaptive security level.

Questions remain

Now we will take a look at blockchains in practice and discuss certain events that have raised doubts about this technology:

  • A 51% attack: Several organizations that contribute significantly to running a blockchain can join forces in order to possess at least 51% of the blockchain’s computing power between them. For example, China was known to concentrate a large share of the bitcoin blockchain’s computing power — around two thirds of it in 2017. This raises questions about the distributed character of the blockchain and the neutrality of governance, since it results in completely uneven decision-making power. Indeed, majority organizations can censor transactions, which impacts the blockchain’s history, or worse still, they can have considerable power to get governance rules that they have decided upon approved.
  • Hard fork: When new governance rules that are incompatible with previous ones are brought forward in the blockchain, this leads to a “hard fork,” meaning a permanent change in the blockchain, which requires a broad consensus amongst the blockchain contributors for the new rules to be accepted. If a consensus is not reached, the blockchain forks, resulting in the simultaneous existence of two blockchains, one that operates according to the previous rules and the other, according to the new rules. This forking of the chain undermines the credibility of the two resulting blockchains, leading to the devaluation of the associated cryptocurrency. It is worth noting that a hard fork brought about as part of a 51% attack will be more likely to succeed in getting the new rules adopted, since a consensus will be reached more easily.
  • Money laundering: Blockchains are transparent by their very nature but the traceability of transactions can be made very complicated, which facilitates money laundering. It is possible to open a large number of accounts, use the accounts just once, and carry out transactions under the cover of a pseudonym. This raises questions about all of a blockchain’s contributors, since their moral values are essential to running the blockchain, and harms the technology’s image.
  • Programming errors: Errors can be made in smart contracts, the programs that are automatically executed within a blockchain, and can have a dramatic impact on industrial players. Due to one such error an attacker was able to steal $50 million US from the DAO organization in 2016. Organizations who fall victim to such bugs could seek to invalidate these harmful transactions – the  DAO succeeded in provoking a hard fork for this purpose — calling into question the very principle of the inalterability of the blockchain. Indeed, if blocks that have previously been recorded as valid in a blockchain are then made invalid, this raises questions about the blockchain’s reliability.

To conclude, the blockchain is a very promising technology that offers many characteristics to guarantee trust, but the problem lies in the disconnect between the promises of the technology and the ways in which it is used. This leads to a great deal of confusion and misunderstandings about the technology, which we have tried to clear up in this article.

Maryline Laurent, Professor and Head of the R3S Team at the CNRS SAMOVAR Laboratory, Télécom SudParis – Institut Mines-Télécom, Université Paris-Saclay

The original version of this article (in French) was published on The Conversation France.


 

campus mondial de la mer

Campus Mondial de la Mer: promoting Brittany’s marine science and technology research internationally

If the ocean were a country, it would be the world’s 7th-largest economic power, according to a report by the WWF, and the wealth it produces could double by 2030. The Brittany region, at the forefront of marine science and technology research, can make an important contribution to this global development. This is what the Campus Mondial de la Mer (CMM), a Brittany-based academic community, intends to prove. The aim of the Campus is to promote regional research at the international level and support the development of a sustainable marine economy. René Garello, a researcher at IMT Atlantique, a partner of the CMM, answers our questions about this new consortium’s activities and areas of focus.

 

What is the Campus Mondial de la Mer (CMM) and what are its objectives?

René Garello: The Campus Mondial de la Mer is a community of research institutes and other academic institutions, including IMT Atlantique, created through the initiative of the Brest-Iroise Technopôle (Technology Center). Its goal is to highlight the excellence of research carried out in the region focusing on marine sciences and technology. The CMM monitors technological development, promotes research activities and strives to bring international attention to this research. It also helps organize events and symposiums and disseminates information related to these initiatives. The campus’s activities are primarily intended for academics, but they also attract industrial players.

The CMM hosts events and supports individuals seeking to develop new projects as part of its goal to boost the region’s economic activity and create a sustainable maritime economy, which represents tremendous potential at the global level. An OECD report on the sea economy in 2030 shows that by developing all the ocean-based industries, the ocean economy’s output could be doubled, from $1.5 trillion US currently to $3 trillion US in 2030! The Campus de la Mer strives to support this development by promoting Brittany-based research internationally.

What are the Campus Mondial de la Mer‘s areas of focus?

RG: The campus is dedicated to the world of research in the fields of marine science and technology. As far as the technological aspects, underwater exploration using underwater drones, or autonomous underwater vehicles, is an important focus area. These are highly autonomous vehicles, it’s as if they had their own little brains!

Another important focus area involves observing the ocean and the environment using satellite technology. Research in this area mainly involves the application of data from these observations, from both a geophysical and oceanographic perspective and in order to monitor ocean-based activities and the pollution they create.

Finally, a third research area is concerned more with physics, biology and chemistry. This area is primarily led by the University of Western Brittany, which has a large research department related to oceanography, and Institut Universitaire Européen de la Mer.

What sort of activities and projects does the Campus de la Mer promote?

RG: One of the CMM’s aims is to promote the ESA-BIC Nord-France project (European Space Agency – Business Incubator Center), a network of incubators for the regions of Brittany, Hauts-de-France, Ile-de-France and Grand-Est, which provides opportunities for financial and technological support for startups. This project is also connected to the Seine Espace Booster and Morespace, which have close ties with the startup ecosystem of the IMT Altantique incubator.

Another project supported by the Campus Mondial de la Mer involves creating a collaborative space between IMT Atlantique and Institut Universitaire Européen de la Mer, based on shared research themes for academic and industrial partners and our network of startups and SMEs.

The CMM also supports two projects led by UBO. The first is the ISblue, the University Research School (EUR) for Marine Science and Technology, developed through the 3rd Investments in the Future program. The Ifremer and a portion of the laboratories associated with the engineering schools IMT Atlantique, ENSTA Bretagne, ENIB and École Navale (Naval Academy) are involved in this project. The second project consists of housing the UNU-OCEAN institute on the site of the Brest-Iroise Technology Center, with a five-year goal to be able to accommodate 25-30 individuals working at the center of an interdisciplinary research and training ecosystem dedicated to marine science and technology.

Finally, the research themes highlighted by the CMM are in keeping with the aims of GIS BreTel, a Brittany Scientific Interest Group on Remote Sensing that I run. Our work aligns perfectly with the Campus’s approach. When we organize a conference or a symposium, whether at the Brest-Iroise Technology Center or the CMM, everyone participates! This also helps give visibility to research carried out at GIS Bretel and to promote our activities.


Sampe, composite

Laure Bouquerel wins the SAMPE France competition for her thesis on composite materials for aeronautics

Simulating deformations during the molding stage in a new composite material for the aeronautics industry: this is the subject of Laure Bouquerel’s research at Mines Saint-Étienne as part of her CIFRE PhD thesis with INSA Lyon. The young researcher, winner of the SAMPE France competition, will present her work at the SAMPE France technical days in Bordeaux on 29 and 30 November 2018 and will compete for the World Selection in Southampton during the European meetings in September.

 

An aircraft must be lightweight… But durable! The aircraft’s primary parts, such as the wings and the fuselage, form its structure and bear the greatest stress. These pieces, which were initially manufactured using aluminum, have progressively been replaced by composite materials containing carbon fibers and polymer resin for enhanced mechanical performance and resistance to corrosion, while also reducing the mass. The mass issue is at the heart of the aeronautical transport industry: savings on mass lead to a higher payload proportion for aircraft, while also decreasing fuel consumption.

Traditionally, composite materials for primary parts are molded using indirect processes. This involves using a set of carbon fibers that are pre-impregnated with resin. The part is manufactured by an automated process that superimposes the layers, which are then cured in an autoclave, a pressurized oven. This is currently the most widely used process in the aeronautics industry. It is also the most expensive, due to the processes involved, the material used and its storage.

“Hexcel offers a direct process using a new-generation material it has developed: HiTape®. It is a dry, unidirectional reinforcement composed of carbon fibers sandwiched between two thermoplastic webs. It is intended to be deposited using an automated process, then molded before the resin is injected,” Laure Bouquerel explains. The researcher is conducting a thesis at Mines Saint-Étienne on this material that Hexcel is working to develop. The goal is to simulate the molding process involving the stacking of carbon fiber reinforcements in order to better understand and anticipate the deformations and defects that could occur. This work is what earned the young materials specialist an award at the SAMPE France* competition.

Anticipating defects to limit costs

“The carbon fibers in the HiTape® material are all aligned in the same direction. The rigidity is at its maximum level in the direction of the fibers. Several layers are deposited in different directions to manufacture a part. This offers very good rigidity in the desired directions, which were identified during the design phase for the structure,” Laure Bouquerel explains. Yet due to the HiTape® material’s specific structure and the presence of the thermoplastic web, specific deformations occur during the molding phase. The tension in the reinforcement is predominant and wrinkling can occur when the material is bent. Finally, friction can occur between the various reinforcement layers.

“The appearance of wrinkling is a classic. As they become wrinkled, the fibers are no longer straight, and the load placed on the material will not be transferred as well,” the researcher observes. “These wrinkles also cause the development of areas that are less dense in fiber, where the resin will accumulate after the molding stage, creating zones of weakness in the material.” As these deformations appear, the final part’s overall structure is weakened.

The aim of Laure Bouquerel’s thesis work is to digitally simulate the molding process for the HiTape® material in order to identify and predict the appearance of deformations and then improve the molding process through reverse engineering. Why the use of digital simulation? This method eliminates all the trial and error involving real materials in the laboratory, thus reducing the time and cost involved in developing the product.

A great opportunity for the young researcher

A graduate of Centrale Nantes engineering school, the young researcher became specialized in this field while working toward her Master’s in advanced materials from Cranfield University in England. After earning these two degrees, she further defined her vocation during her work placement year. Laure Bouquerel began her career with Plastic Omnium, an automobile parts supplier in Lyon, and with Airbus in Germany, which explains her specialization in composite materials for the aeronautics industry.

As a winner of the SAMPE France competition, the PhD student will present her work at the SAMPE France technical days in Bordeaux on 29 and 30 November and will participate in the SAMPE Europe competition in Southampton from 11 to 13 September. This will provide a unique opportunity to give visibility to her work. “It will be an opportunity to meet with other industry stakeholders and other PhD students working on similar topics. Talking with peers can inspire new ideas for advancing our own research!”

[box type=”info” align=”” class=”” width=””]

*An international competition dedicated to materials engineering

SAMPE (Society for the Advancement of Material Process Engineering) rewards the best theses on the study of structural materials through an international competition. The French edition, SAMPE France, which Laure Bouquerel won, was held at Mines Saint-Étienne on March 22 and 23. The global competition will be held in Southampton from September 11 to 13 during the SAMPE Europe days. The aim of these international meetings is to bring together manufacturers and researchers from the field of advanced materials to develop professional networks and present the latest technical innovations.[/box]