
Mathematical tools for analyzing the development of brain pathologies in children

Magnetic resonance imaging (MRI) enables medical doctors to obtain precise images of a patient’s structure and anatomy, and of the pathologies that may affect the patient’s brain. However, to analyze and interpret these complex images, radiologists need specific mathematical tools. While some tools exist for interpreting images of the adult brain, these tools are not directly applicable in analyzing brain images of young children and newborn or premature infants. The Franco-Brazilian project STAP, which includes Télécom ParisTech among its partners, seeks to address this need by developing mathematical modeling and MRI analysis algorithms for the youngest patients.

 

An adult’s brain and that of a developing newborn infant are quite different. An infant’s white matter is not yet fully myelinated and some anatomical structures are much smaller. Due to these specific features, the images obtained of a newborn infant and an adult via magnetic resonance imaging (MRI) are not the same. “There are also difficulties related to how the images are acquired, since the acquisition process must be fast. We cannot make a young child remain still for a long period of time,” adds Isabelle Bloch, a researcher in mathematical modeling and spatial reasoning at Télécom ParisTech. “The resolution is therefore reduced because the slices are thicker.”

These complex images require the use of tools to analyze and interpret them and to assist medical doctors in their diagnoses and decisions. “There are many applications for processing MRI images of adult brains. However, in pediatrics there is a real lack that must be addressed,” the researcher observes. “This is why, in the context of the STAP project, we are working to design tools for processing and interpreting images of young children, newborns and premature infants.”

The STAP project, funded by the ANR and FAPESP, was launched in January and will run for four years. The partners involved include the University of São Paulo in Brazil, the pediatric radiology departments at São Paulo Hospital and Bicêtre Hospital, as well as the University of Paris Dauphine and Télécom ParisTech. “Three applied mathematics and IT teams are working on this project, along with two teams of radiologists. Three teams in France, two in Brazil… The project is both international and multidisciplinary,” says Isabelle Bloch.

 

Rare and heterogeneous data

To work towards developing a mathematical image analysis tool, the researchers collected MRIs of young children and newborns from partner hospitals. “We did not acquire data specifically for this project,” Isabelle Bloch explains. “We use images that are already available to the doctors, for which the parents have given their informed consent for the images to be used for research purposes.” The images are all anonymized, regardless of whether they display normal or pathological anatomy. “We are very cautious: if a patient has a pathology that is so rare that his or her identity could be recognized, we do not use the image.”

Certain pathologies and developmental abnormalities are of particular interest to researchers: hyperintense areas, regions of white matter that appear lighter than normal on MRI images; developmental abnormalities of the corpus callosum, the anatomical structure joining the two cerebral hemispheres; and cancerous tumors.

“We are faced with some difficulties because few MRI images exist of premature and newborn babies,” Isabelle Bloch explains. “Finally, the images vary greatly depending on the age of the patient and the pathology. We therefore have a limited dataset and many variations that continue to change over time.”

 

Modeling medical knowledge

Although the available images are limited and heterogeneous, the researchers can make up for this lack of data through the medical expertise of radiologists, who are in charge of annotating the MRI images that are used. The researchers will therefore have access to valuable information on brain anatomy and pathologies as well as the patient’s history. “We will work to create models in the form of medical knowledge graphs, including graphs of the structures’ spatial layout. We will have assistance from the pediatric radiologists participating in the project,” says Isabelle Bloch. “These graphs will guide the interpretation of the images and help to describe the pathology and the surrounding structures: Where is the pathology? What healthy structures could it affect? How is it developing?”

For this model, each anatomical structure will be represented by a node. These nodes are connected by edges that bear attributes such as spatial relationships or contrasts of intensity that exist in the MRI.  This graph will take into account the patient’s pathologies by adapting and modifying the links between the different anatomical structures. “For example, if the knowledge shows that a given structure is located to the right of another, we would try to obtain a mathematical model that tells us what ‘to the right of’ means,” the researcher explains. “This model will then be integrated into an algorithm for interpreting images, recognizing structures and characterizing a disease’s development.”
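By way of illustration only — the structure names, relations and fuzzy degrees below are invented, not the project’s actual model — such a graph can be sketched in a few lines, with anatomical structures as nodes and spatial relations as attributed edges:

```python
import networkx as nx

# Toy knowledge graph: nodes are anatomical structures, edges carry
# spatial-relation attributes. A fuzzy "degree" expresses how well
# the relation is satisfied (1.0 = fully).
brain = nx.DiGraph()
brain.add_edge("lesion", "corpus_callosum", relation="right_of", degree=0.8)
brain.add_edge("corpus_callosum", "lateral_ventricle", relation="above", degree=0.9)
brain.add_edge("lesion", "white_matter", relation="inside", degree=0.7)

# An interpretation algorithm can then query the graph, e.g. list
# every structure the lesion is spatially related to:
for _, structure, attrs in brain.edges("lesion", data=True):
    print(f"lesion is {attrs['relation']} {structure} (degree {attrs['degree']})")
```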

After analyzing a patient’s images, the graph will become an individual model that corresponds to a specific patient. “We do not yet have enough data to establish a standard model, which would take variability into account,” the researcher adds. “It would be a good idea to apply this method to groups of patients, but that would be a much longer-term project.”

 

An algorithm to describe images in the medical doctor’s language

In addition to describing the brain structures spatially and visually, the graph will take into account how the pathology develops over time. “Some patients are monitored regularly. The goal would be to compare MRI images spanning several months of monitoring and precisely describe the developments of brain pathologies in quantitative terms, as well as their possible impact on the normal structures,” Isabelle Bloch explains.

Finally, the researchers would like to develop an algorithm that would provide a linguistic description of the images’ content using the pediatric radiologist’s specific vocabulary. This tool would therefore connect the quantified digital information extracted from images with words and sentences. “This is the reverse of the method used for the mathematical modeling of medical knowledge,” Isabelle Bloch explains. “The algorithm would therefore describe the situation in a quantitative and qualitative manner, hence facilitating the interpretation by the medical expert.”
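The quantitative-to-linguistic step can be pictured with a tiny sketch; the thresholds and wording below are invented, purely to illustrate the mapping from numbers to a radiologist’s vocabulary:

```python
# Hypothetical thresholds: map a measured volume change between two
# MRI examinations to a word a radiologist might use.
def describe_growth(percent_change):
    if percent_change < 2:
        return "stable"
    if percent_change < 10:
        return "slightly enlarged"
    if percent_change < 25:
        return "moderately enlarged"
    return "markedly enlarged"

print(f"The lesion appears {describe_growth(14.2)} "
      "compared with the previous examination.")
```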

“In terms of the structural modeling, we know where we are headed, although we still have work to do on extracting the characteristics from the MRI,” says Isabelle Bloch regarding the project’s technical aspects. “But combining spatial analysis with temporal analysis poses a new problem… As does translating the algorithm into the doctor’s language, which requires transitioning from quantitative measurements to a linguistic description.” Far from trivial, this technical advance could eventually allow radiologists to use new image analysis tools better suited to their needs.

Find out more about Isabelle Bloch’s research


Why women have become invisible in IT professions

Female students have deserted computer science schools and women seem mostly absent from companies in this sector. The culprit: the common preconception that female computer engineers are naturally less competent than their male counterparts. The MOOC entitled Gender Diversity in IT Professions*, launched on 8 March 2018, looks at how sexist stereotypes are constructed, often insidiously. Why are women now a minority, rendered invisible in the digital sector, despite the many female pioneers and entrepreneurs who have paved the way for the development of software and video games? Chantal Morley, a researcher within the Gender@Telecom group at Institut Mines-Telecom Business School, takes a look back at the creation of this MOOC, highlighting the research underpinning the course.

 

In 2016, only 33% of digital jobs were occupied by women (OPIIEC). Taking into account only the “core” professions in the sector (engineer, technician or project manager), the percentage falls to 20%. Why is there such a gender gap? No, it’s not because women are less talented in technical professions, nor because they prefer other areas. The choices made by young women in their training and by women in their careers are not always the result of a free and informed decision. The influence of stereotypes plays a significant role. These popular beliefs reinforce the idea that the IT field is inherently masculine, a place where women do not belong, and this influences our choices and behaviors even when we do not realize it.

The research group Gender@Telecom, which brings together several female researchers from IMT schools, is looking at the issue of women’s role in the field of information and communication technologies, and specifically the software sector. Through their studies and analysis, the group’s researchers have observed and described how these stereotypes are expressed. “We interviewed professionals in the sector, and asked students specific questions about their choices and opinions,” explains Chantal Morley, researcher at Institut Mines-Telecom Business School. By analyzing what was said in these interviews, the researchers identified many preconceived notions: for example, “women do not like computer science, it does not interest them.” “These representations are unproven and do not match reality!” the researcher continues. These little phrases that communicate stereotypes are heard from both men and women. “One might think that this type of differentiation in representations would not exist among male and female students, but that is not the case,” says Chantal Morley. “During a study conducted in Switzerland, we found that guidance counselors are also very much influenced by these stereotypes.” Among professionals, these views are even cited as arguments justifying certain choices.

 

Little phrases, big impacts

The Gender Diversity in IT Professions MOOC* developed by the Gender@Telecom group is aimed at deconstructing these stereotypes. “We used these studies to try to show learners how little things in everyday life, which we do not even notice, contribute to instilling these differentiated views,” Chantal Morley explains. These little phrases or representations can also be found in our speech as well as in advertisements, posters… When viewed individually, these small occurrences are insignificant, yet it is their repetition and systematic nature that pose a problem. Together they work to establish and reinforce sexist stereotypes. “They form a common knowledge, a popular belief that everyone is aware of, that we all accept, saying ‘that’s just the way it is!’”

To study this phenomenon, the researchers from the group analyzed speech from semi-structured interviews conducted with stakeholders in the digital industry. The researchers’ questions focused on the relationship with technology and an entrepreneurship competition that had recently been held at Institut Mines-Telecom Business School. “Again, in this study, some types of arguments were frequently repeated and helped reinforce these stereotypes,” Chantal Morley observes. “For example, when someone mentions a woman who is especially talented, the person will often add, ‘yes, but with her it’s different, that doesn’t count.’ There is always an excuse for not questioning the general rule that says women lack the abilities required in digital professions.”

 

Unjustified stereotypes

Yet despite their pervasiveness, there is nothing to justify these remarks. The history of computer science professions proves this fact. However, the contribution of women has long been hidden behind the scenes. “When we studied the history of computer science, we were primarily looking at the area of hardware, equipment. Women were systematically rejected by universities and schools in this field, where they were not allowed to earn a diploma,” says Chantal Morley. “Also, some companies refused to keep their employees if they had a child or got married. This made careers very difficult.” In recent years, research on the history of the software industry, in which there were more opportunities, has revealed that many women contributed to major aspects of its development.

“Ada Lovelace is sort of the Marie Curie of computer science… People think she is the only one! Yet she is one contributor among others,” the researcher explains. For example, the computer scientist Grace Hopper invented the first compiler in the 1950s and played a key role in the creation of the COBOL language. “She had the idea of inventing a translator that would translate a relatively understandable and accessible language into machine language. Her contribution to programming was crucial,” Chantal Morley continues. “We can also mention Roberta Williams, a computer scientist who greatly contributed to the beginnings of video games, or Stephanie Shirley, a pioneer computer scientist and entrepreneur…”

In the past these women were able to fight for their place in software professions. What has happened to make women seem absent from these arenas? According to Chantal Morley, the exclusion of women occurred with the arrival of microcomputing, which at the time had been designed for a primarily male target, that of executives. “The representations conveyed at that time progressively led everyone to associate working with computers with men.” But although women are a minority in this sector, they are not completely absent. “Many have participated in the creation of very large companies, they have founded startups, and there are some very famous women hackers,” Chantal Morley observes. “But they are not at all in the spotlight and do not get much press coverage. As if they were an anomaly, something unusual…”

Finally, women’s role in the digital industry varies greatly depending on the country and culture. In India and Malaysia, for example, computer science is a “women’s” profession. It is all a matter of perspective, not a question of innate abilities.

 

[box type=”shadow” align=”” class=”” width=””]*A MOOC combating sexist stereotypes

How are these stereotypes constructed and maintained? How can they be deconstructed? How can we promote the integration of women in digital professions? The Gender Diversity in IT Professions MOOC (in French), launched on 8 March 2018, uncovers the little-known contribution of women to the development of the software industry and the mechanisms that keep it hidden and discourage women from entering the sector. The MOOC is aimed at raising awareness among companies, schools and research organizations on these issues, to provide them with keys for developing a more inclusive culture for women. [/box]

Also read on I’MTech

 

AI, an issue of economy and civilization?

This is the first issue of the new quarterly of the series Annales des Mines devoted to the digital transition. The progress made using algorithms, the new computational capacities of devices (ranging from graphic cards to the cloud), and the availability of huge quantities of data combine to explain the advances under way in Artificial Intelligence. But AI is not just a matter of algorithms operating in the background on digital platforms. It increasingly enters into a chain of interactions with the physical world. The examples reported herein come from finance, insurance, employment, commerce and manufacturing. This issue turns to the stakeholders, French ones in particular, who are implementing AI or who have devoted thought to this implementation and to AI’s place in our society.

Introduction by Jacques Serris, Engineer from the Corps des Mines, Conseil Général de l’Économie (CGE)

About Digital Issues, the new series of Annales des Mines

Digital Issues is a quarterly series (March, June, September and December) freely downloadable from the Annales des Mines website, with a print version in French. The series focuses on the issues of the digital transition for an enlightened, though not necessarily expert, readership. It combines viewpoints from technology, the economy and society, as the Annales des Mines do in all their series.

Download all the articles of the issue


Artificial Intelligence hiding behind your computer screen!

Far from the dazzle of intelligent humanoid robots and highly effective chatbots, artificial intelligence is now used in many ordinary products and services. In the software and websites consumers use on a daily basis, AI is being used to improve the use of digital technology. This new dynamic is perfectly illustrated by two startups incubated at Télécom ParisTech: BEYABLE and AiZimov.

 

Who are the invisible workers managing the aisles of digital shops? At the supermarket, shoppers regularly see employees stocking the shelves, but the shelves of online sales sites are devoid of human contact. “Whether a website has 500 or 10,000 items for sale, there are always fewer employees managing the products than at a real store,” explains Julien Dugaret, founder of the startup BEYABLE. The young company is well aware that these digital showcases still require maintenance. Currently accelerated at Télécom ParisTech and formerly incubated there, it offers a solution for detecting anomalies on online shopping sites.

BEYABLE’s artificial intelligence algorithms use a clustering technique. By analyzing data from internet users’ visits to websites and the data associated with each product, they group the items together into coherent “clusters”. The articles that cannot be included in any of the clusters are then identified as anomalies and corrected so they can be reintegrated into the right place.
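As an illustration of the general idea — not BEYABLE’s actual algorithm, whose internals are not public — here is a minimal clustering-based anomaly detector: items whose features fit no cluster are flagged for human review.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Toy catalogue features, e.g. (price in euros, heel height in cm)
# for items listed under a "boots" category.
items = np.array([
    [120, 3.0], [135, 2.5], [110, 3.5],   # typical boots
    [125, 2.8], [115, 3.2], [130, 2.6],
    [95, 10.0],                           # a pair of heels, misfiled
])

# DBSCAN groups dense regions into clusters; points that fit no
# cluster are labeled -1 and can be flagged for correction.
labels = DBSCAN(eps=8.0, min_samples=3).fit_predict(items)
print("items to review:", np.where(labels == -1)[0])  # -> [6]
```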

“Some products do not have the right images, descriptions or references. For example, a pair of heels might be included in the ‘boots’ category of an online shop,” explains the entrepreneur. The software then identifies the heels so that an employee can correct the description. While this type of error may seem anecdotal or funny, for the companies that use BEYABLE’s services, the quality of the customer experience is at stake.

Some websites offer thousands of articles with product references that are constantly changing. It is important to make sure visitors using the website do not feel lost from one day to the next. “If a real merchant sold t-shirts one day and coffee tables the next, you can imagine all the logistics that would be required overnight. For an online shop, the logistics involved in changing the collection or promoting certain articles are much simpler, but not effortless. The reduced number of online store ‘department managers’ makes the logistics all the more difficult,” explains Julien Dugaret. Artificial intelligence tools play an essential role in these logistics, helping digital marketing teams save a lot of time and ensuring visitor satisfaction.

BEYABLE is increasingly working with websites run by major brands. These websites invest hundreds of thousands of euros to earn consumers’ loyalty. “These websites have now become very important assets for companies,” the founder of the startup observes. They therefore need to understand what the customers are looking for and how they interact with the pages. BEYABLE does more than perform basic analyses like the so-called “analytics” tools—the best-known being Google Analytics—it also offers these companies “a look at what they cannot see,” says Julien Dugaret.

The company’s algorithms learn from the visits by categorizing them and identifying several types of internet users: those who look at the maps for nearby shops, those who want to discover items before they buy them, those who are interested in the brand’s activities… “Companies do not always have data experts who can analyze all the information about their visitors, so we offer AI tools suited to this purpose,” Julien Dugaret explains.

Artificial intelligence for professional emails?

For those who use digital services, the hidden AI processes are not only used to improve their online shopping experience. Jérôme Devosse worked as a salesperson for several years and used to study social networks, company websites and news sites to glean information about the people he wanted to contact. “This is business as usual for salespeople: adapting the sales hook and initial contact based on the person’s interests and the company’s needs,” he explains.

After growing weary of doing this task the slow way, he decided to create a tool to automate the research he carried out before appointments. And that was how AiZimov was born, another startup incubated at Télécom ParisTech. “It’s an assistant,” explains Jérôme Devosse. “All I have to do is tell it ‘I want to contact that person’ and it will write an email based on the public data available online.” Interviews with the person, their company’s financial reports, their place of residence, their participation at trade shows: all of this information is useful for the software. “For example, the assistant will automatically write a message saying, ‘I saw you will be at Vivatech next week, come meet us!’” AiZimov’s founder explains.

The tool works in three stages. First, there is the data acquisition stage which uses technology to search through large volumes of data. Next, the data must be understood. Is the sentence containing the targeted person’s name from an interview or a financial report? What are the associated key words and what logical connections can be made? Finally, the text is generated automatically and can be checked based on different criteria. The user can then choose to send an email that is more formal or more emotional—using things the contact is passionate about—or a very friendly email.
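The three stages can be pictured with a toy sketch. The patterns, templates and function names below are invented stand-ins — AiZimov’s internals are not public — but they show how acquisition, understanding and generation chain together:

```python
import re

# Stage 1 - acquisition: in reality, interviews, reports and news
# gathered online; here a hard-coded snippet stands in for that data.
collected = "Jane Doe said she will attend Vivatech next week."

# Stage 2 - understanding: naive keyword spotting stands in for the
# real language analysis; it turns raw text into structured facts.
def extract_facts(text):
    facts = {}
    match = re.search(r"attend (\w+) next week", text)
    if match:
        facts["event"] = match.group(1)
    return facts

# Stage 3 - generation: fill a tone-specific template from the facts.
TEMPLATES = {
    "friendly": "Hi! I saw you will be at {event} next week - come meet us!",
    "formal": "Dear Madam, I understand you will attend {event}; "
              "might we arrange a meeting?",
}

facts = extract_facts(collected)
print(TEMPLATES["friendly"].format(**facts))
```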

Orange and Renault are already testing the startup’s software. “For salespeople from large companies, the time they save by not writing emails to new contacts is used to maintain the connections they have with existing customers to continue the relationship,” explains Jérôme Devosse. Today, the tool does not send an email without the salesperson’s approval. The personnel can still choose to modify a few details. The entrepreneur is not seeking an entirely automatic process. His areas for future development are focused on using the software for other activities.

“I would like to go beyond emails: once the information is acquired, it could be used to write a detailed or general script for making contact via telephone,” he explains. AiZimov’s technology could also be used for professions other than sales. In press relations it could be used to contact the most relevant journalists by sending them private messages on social networks, for example. And why not make this software available to human resource departments for contacting individuals for recruitment purposes? Artificial intelligence could therefore continue to be used in many different online interactions.


From springs to lasers: energy’s mysterious cycle

In 1953, scientists theorized the energy behavior of a chain of springs and revealed a paradox in fundamental physics. Over 60 years later, a group of researchers from IMT Lille Douai, CNRS and the universities of Lille and Ferrara (Italy) has succeeded in observing this paradox. Their results have greatly enhanced our understanding of physical nonlinear systems, which are the basic ingredients required for detecting exoplanets, navigating driverless cars and the formation of big waves in the ocean. Arnaud Mussot, a physicist and member of the partnership, explains the process and the implications of the research, published in Nature Photonics on April 2, 2018.

 

The starting point for your work was the Fermi-Pasta-Ulam-Tsingou problem. What is that?

Arnaud Mussot: The name refers to the four researchers who wanted to study a complex problem in the 1950s. They were interested in observing the behavior of masses connected by springs. They used 64 of them in their experiment. With a chain like this, each spring’s behavior depends on that of the others, but in a non-proportional manner – what we call “nonlinear” in physics. A theoretical study of the nonlinear behavior of such a large system of springs required them to use a computer. They thought that the theoretical results the computer produced would show that when one spring is agitated, all the springs begin to vibrate until the energy spreads evenly to the 64 springs.

Is that what happened?

AM: No. To their surprise, the energy spread throughout the system and then returned to the initial spring. It was then redispersed into the springs and then again returned to the initial point of agitation, and so on. These results from the computer completely contradicted their prediction of energy being evenly and progressively distributed, known as an equipartition of energy.  Since then, these results have been called the “Fermi-Pasta-Ulam-Tsingou paradox” or “Fermi-Pasta-Ulam-Tsingou recurrence”, referring to the recurring behavior of the system of springs. However, since the 1950s, other theoretical research has been carried out. This research has shown that by allowing a system of springs to vibrate for a very long time, equipartition is achieved.
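The recurrence is easy to reproduce numerically. The sketch below uses illustrative parameters (not those of the original runs): it integrates the FPU-α chain with a velocity-Verlet scheme, starts with all the energy in the lowest vibration mode, and prints the energy of the first four modes; instead of spreading evenly, the energy periodically flows back toward mode 1.

```python
import numpy as np

def accelerations(u, alpha):
    # fixed ends: pad the displacement array with zeros
    up = np.concatenate(([0.0], u, [0.0]))
    linear = up[2:] - 2 * up[1:-1] + up[:-2]
    nonlinear = (up[2:] - up[1:-1]) ** 2 - (up[1:-1] - up[:-2]) ** 2
    return linear + alpha * nonlinear

N, alpha, dt = 32, 0.25, 0.1          # chain size and coupling (illustrative)
j = np.arange(1, N + 1)
u = np.sin(np.pi * j / (N + 1))       # excite only the lowest mode
v = np.zeros(N)

k = np.arange(1, 5)                   # watch the first four modes
omega = 2 * np.sin(np.pi * k / (2 * (N + 1)))
S = np.sqrt(2 / (N + 1)) * np.sin(np.pi * np.outer(k, j) / (N + 1))

for step in range(200_001):
    if step % 20_000 == 0:            # report every 2000 time units
        a, adot = S @ u, S @ v
        E = 0.5 * (adot ** 2 + (omega * a) ** 2)
        print(f"t={step * dt:8.0f}  mode energies: {np.round(E, 4)}")
    v += 0.5 * dt * accelerations(u, alpha)   # velocity-Verlet step
    u += dt * v
    v += 0.5 * dt * accelerations(u, alpha)
```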

Why is the work you refer to primarily theoretical and not experimental?

AM: In reality, the vibrations in the springs are absorbed by many external factors. It could be friction with the air. In fact, experiments carried out to observe this paradox have not only concerned springs, but all nonlinear oscillation systems, such as laser beams in fiber optics. In this case, the vibrations are mainly absorbed by impurities in the glass that composes the fiber. In all these systems, energy losses due to external factors prevent the observation of anything beyond the first recurrence. The system returns to its initial state, but then it is difficult to make it return to this state a second time. Yet it is in this step that new and rich physics emerge.

Is this where your work published in Nature Photonics on April 2nd comes into play?

AM: We thought it was a shame to be limited to a single recurrence, because many interesting things happen when we can observe at least two. We therefore had to find ways to limit the absorption of the vibrations in order to reach the second recurrence. To accomplish this, we added a second laser which amplified the first one to compensate for the losses. This type of amplification is already used in fiber optics to carry data over long distances. We distorted its initial purpose to resolve part of our problem. The other part was to succeed in observing the recurrence.

Whether it be a spring or an optical fiber, Fermi-Pasta-Ulam-Tsingou recurrence is common to all nonlinear systems.

Was the observation difficult to achieve?

AM: Compensating for energy losses was a crucial step, but it was pointless if we were not able to clearly observe what was happening in the fiber. To achieve this, we used the same impurities in the glass that absorb the light signal. These impurities reflect a small part of the laser which circulates in the fiber. The returning light provides us with information on the development of the laser beam’s power as it spreads. This reflected part is then measured with another laser which is synchronous with the first, to assess the difference in phase between the two. This gives us additional information that allowed us to clearly reveal the second recurrence — a world first.

What did these observation techniques reveal about the second recurrence?

AM: We were able to conduct the first experimental demonstration of what we call a break in symmetry. There is a recurrence for given initial values of the energy sent into the system. But we also knew that, theoretically, if we slightly changed the initial values that disturb the system, there would be a shift in the distribution of energy during the second recurrence: the system would not repeat the same values. In our experiment, we managed to reverse the maximum and minimum energy levels in the second recurrence compared to the first.

What perspectives does this observation create?

AM: From a fundamental perspective, the experimental observation of the theory predicting the break in symmetry is very interesting because it provides a confirmation. But in addition to this, the techniques we implemented to limit the absorption and observe what was occurring are very promising. We want to perfect them in order to go beyond second recurrence. If we succeed in reaching the equipartition point predicted by Fermi, Pasta, Ulam and Tsingou, we will then be able to observe a continuum of light. In very simple terms, this is the moment when we no longer see the lasers’ pulsations.

Does this fundamental work have applications?

AM: In terms of applications, our work allowed us to better understand how nonlinear systems develop. Yet these systems are often all around us. In nature for example, they are the basic ingredients for forming rogue waves, exceptionally high waves that can be observed in the ocean. With a better understanding of Fermi-Pasta-Ulam-Tsingou recurrence and the energy variations in nonlinear systems, we could better understand the mechanisms involved in shaping rogue waves and could better detect them. Nonlinear systems are also present in many optical tools. Modern LIDARs, which use frequency combs or “laser rulers,” calculate distances by sending a laser beam and very precisely timing how long it takes to return—like a radar, except with light. However, the lasers have nonlinear behavior: here again our work can help optimize the operation of new-generation LIDARs, which could be used for navigating autonomous cars. Finally, calculations on physical nonlinear systems are also involved in detecting exoplanets, thanks to their extreme precision.
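The time-of-flight principle behind a LIDAR reduces to one line — the frequency-comb timing itself is far subtler, so this is only the textbook formula:

```python
C = 299_792_458.0  # speed of light in m/s

def lidar_distance(round_trip_time_s):
    # the pulse travels to the target and back, so halve the path
    return C * round_trip_time_s / 2

print(lidar_distance(66.7e-9))  # a 66.7 ns round trip is about 10 m
```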

 

blockchains

Can we trust blockchains?

Maryline Laurent, Télécom SudParis – Institut Mines-Télécom, Université Paris-Saclay

[divider style=”normal” top=”20″ bottom=”20″]

[dropcap]B[/dropcap]lockchains were initially presented as a very innovative technology with great promise in terms of trust. But is this really the case? Recent events, such as the hacking of the Parity wallet ($30 million US) or the Tether firm ($31 million US) have raised doubts.

This article provides an overview of the main elements outlined in Chapter 11 of the book, Signes de confiance : l’impact des labels sur la gestion des données personnelles (Signs of trust: the impact of seals on personal data management) produced by the Personal Data Values and Policies Chair of which Télécom SudParis is the co-founder. The book may be downloaded from the chair’s website. This article focuses exclusively on public blockchains.

Understanding the technology

A blockchain can be thought of as a “big” account ledger, accessible and auditable, deployed on the internet. It relies on a large number of IT resources spread out around the world, called “nodes,” which help make the blockchain work. In the case of a public blockchain, everyone can contribute, as long as they have a powerful enough computer to execute the associated code.

Executing the code implies acceptance of the blockchain’s governance rules. These contributors are responsible for collecting transactions made by blockchain customers, aggregating transactions in a structure called a “block” (of transactions) and validating the blocks before they are linked to the blockchain. The resulting blockchain can grow to several hundred gigabytes and is duplicated a great number of times on the internet, which ensures its wide availability.
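The core data structure is simple enough to sketch in a few lines. The toy below (illustrative only — real blockchains add consensus, networking and much more) shows why the chain is auditable: each block stores the hash of its predecessor, so tampering anywhere breaks every later link.

```python
import hashlib, json, time

def block_hash(block):
    # hash the block's contents deterministically
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(transactions, prev_hash):
    return {"timestamp": time.time(),
            "transactions": transactions,
            "prev_hash": prev_hash}

# build a tiny chain: each block points at the hash of the previous one
chain = [make_block(["genesis"], "0" * 64)]
for txs in (["alice->bob: 5"], ["bob->carol: 2"]):
    chain.append(make_block(txs, block_hash(chain[-1])))

def verify(chain):
    # auditing = walking the chain and recomputing every link
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

print(verify(chain))                             # True
chain[1]["transactions"] = ["alice->bob: 500"]   # tamper with history
print(verify(chain))                             # False
```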

Elements of trust

The blockchain is based on the following major conceptual principles, naturally positioning it as an ideal technology of trust:

  • Decentralized architecture and neutrality of governance based on the principle of consensus: it relies on a great number of independent contributors, making it decentralized by definition. This means that unlike a centralized architecture where decisions can be made unilaterally, a consensus must be reached, or a party must manage to control over 50% of the blockchain’s computing power (computer resources) to have an effect on the system. Therefore, any change in the governance rules must previously have been approved by consensus between the contributors, who must then update the software code executed.
  • Transparency of algorithms makes for better auditability: all transactions, all blocks, and all governance rules are freely accessible and can be read by everyone. This means that anyone can audit the system to ensure the correct operation of the blockchain and legitimacy of the transactions. The advantage is that experts in the community of users may closely examine the code and report anything that seems suspicious. Trust is therefore based on whistleblowers.
  • Secure underlying technology: Cryptographic techniques and terms of use guarantee that the blockchain cannot be altered, that the transactions recorded are authentic even if they have been made under a pseudonym, and that blockchain security is able to keep up with technological advances thanks to an adaptive security level.

Questions remain

Now we will take a look at blockchains in practice and discuss certain events that have raised doubts about this technology:

  • A 51% attack: Several organizations that contribute significantly to running a blockchain can join forces in order to possess at least 51% of the blockchain’s computing power between them. For example, China is known to concentrate a large part of the bitcoin blockchain’s computing power — two thirds of it in 2017. This raises questions about the distributed character of the blockchain and the neutrality of governance, since it results in completely uneven decision-making power. Indeed, majority organizations can censor transactions, which impacts the blockchain’s history, or worse still, they can have considerable power to get governance rules that they have decided upon approved.
  • Hard fork: When new governance rules that are incompatible with previous ones are brought forward in the blockchain, this leads to a “hard fork,” meaning a permanent change in the blockchain, which requires a broad consensus amongst the blockchain contributors for the new blockchain rules to be accepted. If a consensus is not reached, the blockchain forks, resulting in the simultaneous existence of two blockchains, one that operates according to the previous rules and the other, according to the new rules. This forking of the chain undermines the credibility of the two resulting blockchains, leading to the devaluation of the associated cryptocurrency. It is worth noting that a hard fork brought about as part of a 51% attack will be more likely to succeed in getting the new rules adopted since a consensus will be reached more easily.
  • Money laundering: Blockchains are transparent by their very nature but the traceability of transactions can be made very complicated, which facilitates money laundering. It is possible to open a large number of accounts, use the accounts just once, and carry out transactions under the cover of a pseudonym. This raises questions about all of a blockchain’s contributors, since their moral values are essential to running the blockchain, and harms the technology’s image.
  • Programming errors: Errors can be made in smart contracts, the programs that are automatically executed within a blockchain, and can have a dramatic impact on industrial players. Due to one such error, an attacker was able to steal $50 million US from the DAO organization in 2016. Organizations that fall victim to such bugs could seek to invalidate these harmful transactions — the DAO succeeded in provoking a hard fork for this purpose — calling into question the very principle of the inalterability of the blockchain. Indeed, if blocks that have previously been recorded as valid in a blockchain are then made invalid, this raises questions about the blockchain’s reliability.

To conclude, the blockchain is a very promising technology that offers many characteristics to guarantee trust, but the problem lies in the disconnect between the promises of the technology and the ways in which it is used. This leads to a great deal of confusion and misunderstandings about the technology, which we have tried to clear up in this article.

Maryline Laurent, Professor and Head of the R3S Team at the CNRS SAMOVAR Laboratory, Télécom SudParis – Institut Mines-Télécom, Université Paris-Saclay

The original version of this article was published in French on The Conversation France.

Also read on I’MTech

 


Campus Mondial de la Mer: promoting Brittany’s marine science and technology research internationally

If the ocean were a country, it would be the world’s 7th-largest economic power, according to a report by the WWF, and the wealth it produces could double by 2030. The Brittany region, at the forefront of marine science and technology research, can make an important contribution to this global development. This is what the Campus Mondial de la Mer (CMM), a Brittany-based academic community, intends to prove. The aim of the Campus is to promote regional research at the international level and support the development of a sustainable marine economy. René Garello, a researcher at IMT Atlantique, a partner of the CMM, answers our questions about this new consortium’s activities and areas of focus.

 

What is the Campus Mondial de la Mer (CMM) and what are its objectives?

René Garello: The Campus Mondial de la Mer is a community of research institutes and other academic institutions, including IMT Atlantique, created through the initiative of the Brest-Iroise Technopôle (Technology Center). Its goal is to highlight the excellence of research carried out in the region focusing on marine sciences and technology. The CMM monitors technological development, promotes research activities and strives to bring international attention to this research. It also helps organize events and symposiums and disseminates information related to these initiatives. The campus’s activities are primarily intended for academics, but they also attract industrial players.

The CMM hosts events and supports individuals seeking to develop new projects as part of its goal to boost the region’s economic activity and create a sustainable maritime economy, which represents tremendous potential at the global level. An OECD report on the ocean economy in 2030 shows that by developing all the ocean-based industries, the ocean economy’s output could be doubled, from $1.5 trillion US currently to $3 trillion US in 2030! The Campus de la Mer strives to support this development by promoting Brittany-based research internationally.

What are the Campus Mondial de la Mer‘s areas of focus?

RG: The campus is dedicated to the world of research in the fields of marine science and technology. As for the technological aspects, underwater exploration using underwater drones, or autonomous underwater vehicles, is an important focus area. These are highly autonomous vehicles: it’s as if they had their own little brains!

Another important focus area involves observing the ocean and the environment using satellite technology. Research in this area mainly involves the application of data from these observations, from both a geophysical and oceanographic perspective and in order to monitor ocean-based activities and the pollution they create.

Finally, a third research area is concerned more with physics, biology and chemistry. This area is primarily led by the University of Western Brittany, which has a large research department related to oceanography, and Institut Universitaire Européen de la Mer.

What sort of activities and projects does the Campus de la Mer promote?

RG: One of the CMM’s aims is to promote the ESA-BIC Nord-France project (European Space Agency – Business Incubator Center), a network of incubators for the regions of Brittany, Hauts-de-France, Ile-de-France and Grand-Est, which provides opportunities for financial and technological support for startups. This project is also connected to the Seine Espace Booster and Morespace, which have close ties with the startup ecosystem of the IMT Atlantique incubator.

Another project supported by the Campus Mondial de la Mer involves creating a collaborative space between IMT Atlantique and Institut Universitaire Européen de la Mer, based on shared research themes for academic and industrial partners and our network of startups and SMEs.

The CMM also supports two projects led by UBO. The first is ISblue, the University Research School (EUR) for Marine Science and Technology, developed through the 3rd Investments in the Future program. Ifremer and a portion of the laboratories associated with the engineering schools IMT Atlantique, ENSTA Bretagne, ENIB and École Navale (Naval Academy) are involved in this project. The second project consists of housing the UNU-OCEAN institute on the site of the Brest-Iroise Technology Center, with a five-year goal of being able to accommodate 25-30 individuals working at the center of an interdisciplinary research and training ecosystem dedicated to marine science and technology.

Finally, the research themes highlighted by the CMM are in keeping with the aims of GIS BreTel, a Brittany Scientific Interest Group on Remote Sensing that I run. Our work aligns perfectly with the Campus’s approach. When we organize a conference or a symposium, whether at the Brest-Iroise Technology Center or the CMM, everyone participates! This also helps give visibility to research carried out at GIS Bretel and to promote our activities.

Also read on I’MTech



Laure Bouquerel wins the SAMPE France competition for her thesis on composite materials for aeronautics

Simulating deformations during the molding stage of a new composite material for the aeronautics industry: this is the subject of Laure Bouquerel’s research at Mines Saint-Étienne as part of her CIFRE PhD thesis with INSA Lyon. The young researcher, winner of the SAMPE France competition, will present her work at the SAMPE France technical days in Bordeaux on 29 and 30 November 2018, and will compete in the SAMPE Europe selection in Southampton in September.

 

An aircraft must be lightweight… But durable! The aircraft’s primary parts, such as the wings and the fuselage, form its structure and bear the greatest stress. These parts, initially manufactured from aluminum, have progressively been replaced by composite materials containing carbon fibers and polymer resin, for enhanced mechanical performance and resistance to corrosion along with reduced mass. The question of mass is at the heart of the air transport industry: mass savings lead to a higher payload proportion for aircraft, while also decreasing fuel consumption.

Traditionally, composite materials for primary parts are molded using indirect processes. This involves using a set of carbon fibers that are pre-impregnated with resin. The part is manufactured by an automated process that superimposes the layers, which are then cured in an autoclave, a pressurized oven. This is currently the most widely used process in the aeronautics industry. It is also the most expensive, due to the processes involved, the material used and its storage.

“Hexcel offers a direct process using a new-generation material it has developed: HiTape®. It is a dry, unidirectional reinforcement composed of carbon fibers sandwiched between two thermoplastic webs. It is intended to be deposited using an automated process, then molded before the resin is injected,” Laure Bouquerel explains. The researcher is conducting a thesis at Mines Saint-Étienne on this material that Hexcel is working to develop. The goal is to simulate the molding process involving the stacking of carbon fiber reinforcements in order to better understand and anticipate the deformations and defects that could occur. This work is what earned the young materials specialist an award at the SAMPE France* competition.

Anticipating defects to limit costs

“The carbon fibers in the HiTape® material are all aligned in the same direction. The rigidity is at its maximum level in the direction of the fibers. Several layers are deposited in different directions to manufacture a part. This offers very good rigidity in the desired directions, which were identified during the design phase for the structure,” Laure Bouquerel explains. Yet due to the HiTape® material’s specific structure and the presence of the thermoplastic web, specific deformations occur during the molding phase. The tension in the reinforcement is predominant and wrinkling can occur when the material is bent. Finally, friction can occur between the various reinforcement layers.

“The appearance of wrinkling is a classic problem. As they become wrinkled, the fibers are no longer straight, and the load placed on the material will not be transferred as well,” the researcher observes. “These wrinkles also cause the development of areas that are less dense in fiber, where the resin will accumulate after the molding stage, creating zones of weakness in the material.” As these deformations appear, the final part’s overall structure is weakened.

The aim of Laure Bouquerel’s thesis work is to digitally simulate the molding process for the HiTape® material in order to identify and predict the appearance of deformations and then improve the molding process through reverse engineering. Why the use of digital simulation? This method eliminates all the trial and error involving real materials in the laboratory, thus reducing the time and cost involved in developing the product.

A great opportunity for the young researcher

A graduate of Centrale Nantes engineering school, the young researcher became specialized in this field while working toward her Master’s in advanced materials from Cranfield University in England. After earning these two degrees, she further defined her vocation during her work placement year. Laure Bouquerel began her career with Plastic Omnium, an automobile parts supplier in Lyon, and with Airbus in Germany, which explains her specialization in composite materials for the aeronautics industry.

As a winner of the SAMPE France competition, the PhD student will present her work at the SAMPE France technical days in Bordeaux on 29 and 30 November and will participate in the SAMPE Europe competition in Southampton from 11 to 13 September. This will provide a unique opportunity to give visibility to her work. “It will be an opportunity to meet with other industry stakeholders and other PhD students working on similar topics. Talking with peers can inspire new ideas for advancing our own research!”

[box type=”info” align=”” class=”” width=””]

*An international competition dedicated to materials engineering

SAMPE (Society for the Advancement of Material Process Engineering) rewards the best theses on the study of structural materials through an international competition. The French edition, SAMPE France, which Laure Bouquerel won, was held at Mines Saint-Étienne on March 22 and 23. The global competition will be held in Southampton from September 11 to 13 during the SAMPE Europe days. The aim of these international meetings is to bring together manufacturers and researchers from the field of advanced materials to develop professional networks and present the latest technical innovations.[/box]

 

 


HyBlockArch: hybridizing the blockchain for the industry of the future

Within the framework of the German-French Academy for the Industry of the Future, a partnership between IMT and Technische Universität München (TUM), the HyBlockArch project examines the future of the blockchain. This project aims to adapt this technology to an industrial scale to create a powerful tool for companies. To accomplish this goal, the teams led by Gérard Memmi (Télécom ParisTech) and Georg Carle (TUM) are working on new blockchain architectures. Gérard Memmi shares his insight.

 

Why are you looking into new blockchain architectures?

Gérard Mémmi: Current blockchain architectures are limited in terms of performance in the broadest sense: turnaround time, memory, energy… In many cases, this hinders the blockchain from being developed in Industry 4.0. Companies would like to see faster validation times or to be able to put even more information into a blockchain block. A bank that wants to track an account history over several decades will be concerned about the number of blocks in the blockchain and the possible increase in block latency times. Yet today we cannot foresee the behavior of blockchain architectures for many years to come. There is also the energy issue: the need to reduce consumption caused by the proof of work required to enter data into a blockchain, while still ensuring a comparable level of security.  We must keep in mind that the bitcoin’s proof of work consumes the same amount of electrical energy as a country like Venezuela.

What type of architecture are you trying to develop with the HyBlockArch project?

GM: We are working on hybrid architectures. These multi-layer architectures make it possible to reach an industrial scale. We start with a blockchain protocol in which each node of the ledger communicates with a mini data storage network at a higher layer. This is not necessarily a blockchain protocol and it can operate slightly differently while still maintaining similar properties. The structure is transparent for the users; they do not notice a difference. The miners who perform the proof of work required to validate data only see the blockchain aspect. This is an advantage for them, allowing them to work faster without taking the upper layer of the architecture into account.
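One way to picture such a two-layer hybrid — a loose illustration, not the project’s actual architecture — is to batch many fast upper-layer records and periodically anchor only a digest of the batch into the slower base blockchain:

```python
import hashlib, json

def digest(records):
    # one hash summarizing a whole batch of records
    payload = json.dumps(records, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

base_chain = []   # slow layer: one entry per anchored batch
batch = []        # fast layer: cheap, low-latency records

def record(tx):
    batch.append(tx)          # instant for the user, no proof of work

def anchor():
    global batch
    base_chain.append(digest(batch))  # only the digest pays the base-layer cost
    batch = []

for i in range(1000):
    record({"handover": i})
anchor()
print(len(base_chain), "anchor(s) for 1000 records")
```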

What would the practical benefits be for a company?

GM: For a company this would mean smart contracts could be created more quickly and the computer operations that rely on this architecture would have shorter latency times, resulting in a broader scope of application. The private blockchain is very useful in the field of logistics. For example, each time a product changes hands, as from the vendor to the carrier, the operation is recorded in the blockchain. A hybrid architecture records this information more quickly and at a lower cost for companies.

This project is being carried out in the framework of the German-French Academy for the Industry of the Future. What is the benefit of this partnership with Technische Universität München (TUM)?

GM: Our German colleagues are developing a platform that measures the performance of the different architectures. We can therefore determine the optimal architecture in terms of energy savings, fast turnaround and security for typical uses in the industry of the future. We contribute a more theoretical aspect: we analyze the smart contracts to develop more advantageous protocols, and we work with proof of work mechanisms for recording information in the blockchain.

What does this transnational organization represent in the academic field?

GM: This creates a European dynamic in the work on this issue. In March we launched a blockchain alliance between French institutes: BART. By working together with TUM on this topic, we are developing a Franco-German synergy in an area that only a few years ago featured as a minor issue at research conferences, the topic of a single session. The blockchain now has scientific events all to itself. This new discipline is booming and through the HyBlockArch project we are participating in this growth at the European level.

 


C2Net: supply chain logistics on cloud nine

A cloud solution to improve supply chain logistics? This is the principle behind the European H2020 project C2Net. Launched on January 1, 2015, the project was completed on December 31, 2017. It successfully demonstrated how a cloud platform can enable the various players in a supply chain to better anticipate and manage future problems. To do so, C2Net drew on research on interoperability and on the automation of alerts using data taken directly from companies in the supply chain. Jacques Lamothe and Frédérick Benaben, researchers in industrial engineering specializing respectively in logistics and information systems, give us an overview of the work they carried out at IMT Mines Albi on the C2Net project.

 

What was the aim of the C2Net project?

Jacques Lamothe: The original idea was to provide cloud tools for SMEs to help them with advanced supply chain planning. The goal was to identify future inventory management problems companies may have well in advance. As such, we had to work on three parts: recovering data from SMEs, generating alerts for issues to be resolved, and monitoring planning activity to see if everything went as intended. It wasn’t easy because we had to respond to interoperability issues — meaning data exchange between the different companies’ information systems. And we also had to understand the business rules of the supply chain players in order to evaluate the relevant alerts.

Could you give us an example of the type of problem a company may face?

Frédérick Benaben: One thing that can happen is that a supplier is only able to manufacture 20,000 units of an item while the SME is expecting 25,000. This makes for a strained supply chain and solutions must be found, such as compensating for this change by asking suppliers in other countries if they can produce more. It’s a bit like an ecosystem: when there’s a problem in one part, all the players in the supply chain are affected.
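A toy alert rule in the spirit of this example — field names and thresholds are invented, not C2Net’s actual rules — shows the kind of automated check such a platform can run on collected supply data:

```python
def supply_alerts(plan):
    # flag every line where the promised quantity falls short of the need
    return [f"ALERT: supplier {p['supplier']} short by {p['needed'] - p['promised']} units"
            for p in plan if p["promised"] < p["needed"]]

plan = [{"supplier": "A", "needed": 25_000, "promised": 20_000},
        {"supplier": "B", "needed": 10_000, "promised": 12_000}]
print(supply_alerts(plan))  # -> ['ALERT: supplier A short by 5000 units']
```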

Jacques Lamothe: What we actually realized is that, a lot of the time, certain companies have very effective tools to assess the demand on one side, while other companies have very effective tools to measure production on the other side. But it is difficult for them to establish a dialogue between these two parts. In the chain, the manufacturer does not necessarily notice when there is lower demand and vice versa. This is one of the things the C2Net demonstrator helped correct in the use case we developed with the companies.

And what were the companies’ expectations for this project?  

Jacques Lamothe: For the C2Net project, each academic partner brought an industrial partner it had already worked with. And each of these SMEs had a different set of problems. In France, our partner for the project was Pierre Fabre. They were very interested in data collection and creating an alert system. On the Spanish side, this was less of a concern than optimizing planning. Every company has its own issues, and the use cases the industrial partners brought us meant we had to find solutions for everyone: from generating data on their supply chains to creating tools to allow them to manage alerts or planning.

To what extent has your research work had an impact on the companies’ structures and the way they are organized?

Frédérick Benaben: What was smart about the project is that we did not propose the C2Net demonstrator as a cloud platform that would replace companies’ existing systems. Everything we did is situated a level above the organizations so that they will not be impacted, and integrates the existing systems, especially the information systems already in place. So the companies did not have to be changed. This also explains why we had to work so hard on interoperability.

What did the work on interoperability involve?

Frédérick Benaben: There were two important interoperability issues. The first was being able to plug into existing systems in order to collect information and understand what was collected. A company may have different subcontractors, all of whom use different data formats. How can a company understand and use the data from both subcontractor A, which is provided in one language and that of subcontractor B, which is provided in another? We therefore had to propose data reconciliation plans.

The second issue involves interpretation. Once the data has been collected and everyone is speaking the same language, or at least can understand one another, how can common references be established? For example, having everyone speak in liters for quantities of liquids instead of vials or bottles. Or, when a subcontractor announces that an item may potentially be out of stock, what does this really mean? How far in advance does the subcontractor notify its customers? Does everyone have the same definition? All these aspects had to be harmonized.
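The liters-versus-vials example can be made concrete with a small reconciliation sketch — the record layouts and field names below are invented, not C2Net’s actual schemas:

```python
from dataclasses import dataclass

# Two subcontractors exporting the same information in different
# formats and units (hypothetical feeds).
supplier_a = {"ref": "VIAL-250", "qty_vials": 400, "vial_ml": 250}
supplier_b = {"sku": "BOT-1000", "bottles": 90, "bottle_litres": 1.0}

@dataclass
class StockRecord:        # the common reference model
    item: str
    volume_litres: float

def from_supplier_a(rec):
    return StockRecord(rec["ref"], rec["qty_vials"] * rec["vial_ml"] / 1000)

def from_supplier_b(rec):
    return StockRecord(rec["sku"], rec["bottles"] * rec["bottle_litres"])

# once both feeds speak in liters, alerts can compare them directly
for record in (from_supplier_a(supplier_a), from_supplier_b(supplier_b)):
    print(record)
```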

How will these results be used?

Jacques Lamothe: The demonstrator has been installed at the University of Valencia in Spain and should be reused for research projects. As for us, the results have opened up new research possibilities. We want to go beyond a tool that can simply detect future problems or allow companies to be notified. One of our ideas is to work on solutions that make it possible to make more or less automated decisions to adjust the supply chain.

Frédérick Benaben: A spin-off has also been developed in Portugal. It uses a portion of the data integration mechanisms to propose services for SMEs. And we are still working with Pierre Fabre too, since their feedback has been very positive. The demonstrator helped them see that it is possible to do more than what they are currently able to do. In fact, we have developed and submitted a partnership research project with them.

Also read on I’MTech: