
GDPR: managing consent with the blockchain?

Blockchain and GDPR: two of the most-discussed keywords in the digital sector in recent months and years. At Télécom SudParis, Maryline Laurent has decided to bring the two together. Her research focuses on using the blockchain to manage consent to personal data processing.

 

The GDPR has come into force at last! Six years have gone by since the European Commission first proposed reviewing the personal data protection rules. The European regulation, adopted in April 2016, was closely studied by companies for over two years in order to ensure compliance by the 25 May 2018 deadline. Of the 99 articles that make up the GDPR, the seventh is especially important for customers and users of digital services. It specifies that any request for consent “must be presented in a manner which is clearly distinguishable from the other matters, in an intelligible and easily accessible form, using clear and plain language.” Moreover, any company (known as a data controller) responsible for processing customers’ personal data “shall be able to demonstrate that consent was given by the data subject to the processing of their personal data.”

Although these principles seem straightforward, they introduce significant constraints for companies. Fulfilling both of these principles (transparency and accountability) is not an easy task. Maryline Laurent, a researcher at Télécom SudParis with network security expertise, is tackling this problem. As part of her work for IMT’s Personal Data Values and Policies Chair — of which she is the co-founder — she has worked on a solution based on the blockchain in a B2C environment1. The approach relies on smart contracts recorded in public blockchains such as Ethereum.

Maryline Laurent describes the beginning of the consent process that she and her team have designed between a customer and a service provider: “The customer contacts the company through an authenticated channel and receives a request from the service provider containing the elements of consent that shall be proportionate to the provided service.” Based on this request, customers can prepare a smart contract to specify the information for which they agree to authorize data processing. “They then create this contract in the blockchain, which notifies the service provider of the arrival of a new consent,” continues the researcher. The company verifies that this corresponds to its expectations and signs the contract. In this way, the fact that the two parties have approved the contract is permanently recorded in a block of the chain. Once the customer has made sure that everything has been properly carried out, he may provide his data. All subsequent processing of this data will also be recorded in the blockchain by the service provider.
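The article does not give the actual contract code. As a purely illustrative sketch of the workflow described above, the Python model below shows the main steps (customer creates the contract, the provider signs it, processing operations are logged, consent can be revoked); all class and field names are assumptions, and a real deployment would be a smart contract on a platform such as Ethereum rather than a Python object.

```python
# Purely illustrative model of the consent workflow described above.
# All class and field names are assumptions, not the researchers' implementation.
import time
import uuid
from dataclasses import dataclass, field

@dataclass
class ConsentContract:
    customer_id: str                 # pseudonymous customer identifier
    provider_id: str                 # service provider (data controller)
    allowed_purposes: set            # processing purposes the customer agrees to
    provider_signed: bool = False
    revoked: bool = False
    processing_log: list = field(default_factory=list)
    contract_id: str = field(default_factory=lambda: uuid.uuid4().hex)

    def sign_by_provider(self) -> None:
        """The provider checks the contract matches its request, then signs it."""
        self.provider_signed = True

    def record_processing(self, purpose: str) -> None:
        """Each processing operation is appended to the (append-only) log."""
        if self.revoked or purpose not in self.allowed_purposes:
            raise PermissionError(f"processing '{purpose}' is not covered by this consent")
        self.processing_log.append((time.time(), purpose))

    def revoke(self) -> None:
        self.revoked = True

# Example: the customer authorizes processing for a newsletter only.
contract = ConsentContract("customer-42", "provider-A", {"newsletter"})
contract.sign_by_provider()
contract.record_processing("newsletter")    # allowed, recorded
contract.revoke()
# contract.record_processing("newsletter")  # would now raise PermissionError
```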

 

A smart contract approved by the Data Controller and User to record consent in the blockchain

 

Such a solution allows users to understand what they have consented to. Since they write the contract themselves, they have direct control over which uses of their data they accept. The process also ensures multiple levels of security. “We have added a cryptographic dimension specific to the work we are carrying out,” explains Maryline Laurent. When the smart contract is generated, it is accompanied by cryptographic material that makes it appear to the public as user-independent. This makes it impossible to link the customer of the service to the contract recorded in the blockchain, which protects their interests.

Furthermore, personal data is never entered directly in the blockchain. To prevent the risk of identity theft, “a hash function is applied to personal data,” says the researcher. This function computes a fingerprint of the data from which the original cannot be recovered. Only this hashed data is then recorded in the register, allowing customers to monitor the processing of their data without fear of an external attack.
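As an illustration of this hashing step, the sketch below salts and hashes a piece of personal data before it would be written to the ledger. The exact scheme used by the researchers is not described in the article; SHA-256 with a random, off-chain salt is only an assumption.

```python
# Minimal sketch: only a salted hash of the personal data would ever reach the
# ledger, never the data itself. The actual scheme is an assumption.
import hashlib
import os

def fingerprint(personal_data: str, salt: bytes) -> str:
    """Return a hex digest from which the original data cannot be recovered."""
    return hashlib.sha256(salt + personal_data.encode("utf-8")).hexdigest()

salt = os.urandom(16)                       # kept off-chain by the user
print(fingerprint("jane.doe@example.com", salt))
```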

 

Facilitating audits

This solution is not only advantageous for customers. Companies also benefit from consent management based on the blockchain. Thanks to the transparency of public registers and the unalterable, time-stamped records that define the blockchain, service providers can meet their auditing obligations. Article 24 of the GDPR requires the data controller to “implement appropriate technical and organizational measures to ensure and be able to demonstrate that the processing of personal data is performed in compliance with this Regulation.” In short, companies must be able to provide proof of compliance with their customers’ consent requirements.

“There are two types of audits,” explains Maryline Laurent. “A private audit is carried out by a third-party organization that decides to verify a service provider’s compliance with the GDPR.” In this case, the company can provide the organization with all the consent documents recorded in the blockchain, along with the associated operations. A public audit, on the other hand, is carried out to ensure that there is sufficient transparency for anyone to verify from the outside that everything appears to be in compliance. “For security reasons, of course, the public only has a partial view, but that is enough to detect major irregularities,” says the Télécom SudParis researcher. For example, any user may ensure that once he/she has revoked consent, no further processing is performed on the data concerned.
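A public audit of this kind could, very schematically, boil down to a simple check over the events visible in the ledger, as in the sketch below. The event format is an assumption made purely for illustration.

```python
# Illustrative public-audit check: once consent is revoked, no later processing
# event should appear in the ledger. Event tuples are an assumed format.
def audit_revocation(events):
    """events: list of (timestamp, kind) tuples read from the public ledger."""
    revoked_at = min((t for t, kind in events if kind == "revocation"), default=None)
    if revoked_at is None:
        return True                         # consent never revoked, nothing to check
    return all(t <= revoked_at for t, kind in events if kind == "processing")

ledger = [(1, "consent"), (5, "processing"), (9, "revocation"), (12, "processing")]
print(audit_revocation(ledger))             # False: a processing event follows revocation
```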

In the solution studied by the researchers, customers are relatively familiar with the use of the blockchain. They are not necessarily experts, but must nevertheless use software that allows them to interface with the public register. The team is already working on blockchain solutions in which customers would be less involved. “Our new work2 was presented in San Francisco at the 2018 IEEE 11th Conference on Cloud Computing, held from 2 to 7 July 2018. It makes the customer peripheral to the process and instead involves two service providers in a B2B relationship,” explains Maryline Laurent. This system is better suited to a B2B relationship in which a data controller outsources data to a data processor, and enables consent to be transferred to the data processor. “Customers would no longer have any interaction with the blockchain, and would go through an intermediary that would take care of recording all the consent elements.”

Between applications for customers and those for companies, this work paves the way for using the blockchain for personal data protection. Although the GDPR has come into force, it will take several months for companies to become 100% compliant. Using the blockchain could therefore be a potential solution to consider. At Télécom SudParis, this work has contributed to “thinking about how the blockchain can be used in a new context, for the regulation,” and is backed up by the solution prototypes. Maryline Laurent’s goal is to continue this line of thinking by identifying how software can be used to automate the way GDPR is taken into account by companies.

 

1 N. Kaâniche, M. Laurent, “A blockchain-based data usage auditing architecture with enhanced privacy and availability”, The 16th IEEE International Symposium on Network Computing and Applications, NCA 2017, ISBN: 978-1-5386-1465-5/17, Cambridge, MA, USA, 30 Oct. 2017–1 Nov. 2017.

2 N. Kaâniche, M. Laurent, “BDUA: Blockchain-based Data Usage Auditing”, IEEE 11th Conference on Cloud Computing, San Francisco, CA, USA, 2–7 July 2018.


A digital twin of the aorta to prevent aneurysm rupture

15,000 Europeans die each year from the rupture of an aneurysm in the aorta. Stéphane Avril and his team at Mines Saint-Étienne are working to better prevent this. To do so, they are developing a digital twin of the artery of a patient with an aneurysm. This 3D model makes it possible to simulate the evolution of an aneurysm over time and to better predict the effect of a surgically-implanted prosthesis. Stéphane Avril talks to us about this biomechanics research project and reviews the causes of this pathology along with the current state of knowledge on aneurysms.

 

Your research focuses on the pathologies of the aorta and aneurysm rupture in particular. Could you explain how this occurs?   

Stéphane Avril

Stéphane Avril: The aorta is the largest artery in our body. It leaves the heart and distributes blood to the arms and brain, goes back down to supply blood to the intestines and then divides in two to supply blood to the legs. The wall of the aorta is a little bit like our skin. It is composed of practically the same proteins and the tissues are very similar. It therefore becomes looser as we age. This phenomenon may be accelerated by other factors such as tobacco or alcohol. It is an irreversible process that results in an enlarged diameter of the artery. When there is significant dilation, it is called an aneurysm. This is the most common pathology of the aorta. The aneurysm can rupture, which is often lethal for the individual. In Europe, some 15,000 people die each year from a ruptured aneurysm.

Can the appearance of an aneurysm be predicted?

SA: No, it’s very difficult to predict where and when an aneurysm will appear. Certain factors are morphological. For example, some aneurysms result from the malformation of an aortic valve: 1% of the population has only two of the three leaflets that make up this part of the heart. As a result, the blood is pumped irregularly, which leads to a microinjury on the wall of the aorta, making it more prone to damage. One out of two individuals with this malformation develops an aneurysm, usually between the ages of 40 and 60. There are also genetic factors that lead to aneurysms earlier in life, between the ages of 20 and 40. Then there are the effects of ageing, which make populations over 60 more likely to develop this pathology. It is complicated to determine which factors predominate over the others, especially since an individual declared healthy at 30 or 40 may then start smoking, which will affect the evolution of the aorta.

If aneurysms cannot be predicted, can they be treated?

SA: In biology, extensive basic research has been conducted on the aortic system. This has allowed us to understand a lot about what causes aneurysms and how they evolve. Although specialists cannot predict an aneurysm’s appearance, they can say why the pathology appeared in a certain location instead of another, for example. For patients who already have an aneurysm, this also means that we know how to identify the risks related to the evolution of the pathology. However, no medication exists yet. Instead, current solutions rely on surgery to implant a prosthesis or an endoprosthesis — a stent covered with fabric — to limit pressure on the damaged wall of the artery. Our work, carried out with the Sainbiose joint research unit [run by INSERM, Mines Saint-Étienne and Université Jean Monnet], has focused on gathering everything that is known so far about the aorta and aneurysms in order to propose digital models.

What is the purpose of these digital models?

SA: The model should be seen as a 3D digital twin of the patient’s aorta. We can perform calculations on it. For example, we study how the artery evolves naturally, whether or not there is a high risk of aneurysm rupture, and if so, where exactly in the aorta. The model can also be used to analyze the effect of a prosthesis on the aneurysm. We can determine whether or not surgery will really be effective and help the surgeon choose the best type of prosthesis. This use of the model to assist with surgery led to the creation of a startup, Predisurge, in May 2017. Practitioners are already using it to predict the effect of an operation and calculate the risks.

Read on IMTech: Biomechanics serving healthcare

How do you go about building this twin of the aorta?  

SA: The first data we use comes from imaging. Patients undergo CAT scans and MRIs. The MRIs give us information about blood flow because we can have 10 to 20 images of the same area over the duration of a cardiac cycle. This provides us with information about how the aorta compresses and expands with each heartbeat. Based on this dynamic, our algorithms can trace the geometry of the aorta. By combining this data with pressure measurements, we can deduce the parameters that control the mechanical behavior of the wall, especially its elasticity. We then relate this to the wall’s composition in elastin, collagen and smooth muscle cells. This gives us a very precise idea of every part of the patient’s aorta and its behavior.
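As one concrete example of the kind of wall parameter that can be derived from such imaging and pressure measurements, the sketch below computes aortic distensibility, a standard clinical elasticity index based on the change in lumen area between diastole and systole. It is not necessarily the model used by the Mines Saint-Étienne team; the numerical values are invented for illustration.

```python
# Standard clinical index of wall elasticity: distensibility = relative change in
# lumen area per unit of pulse pressure. Example values are invented.
import math

def cross_section_area(diameter_mm: float) -> float:
    """Lumen area (mm^2) assuming a circular cross-section."""
    return math.pi * (diameter_mm / 2) ** 2

def distensibility(d_dia_mm, d_sys_mm, p_dia_mmhg, p_sys_mmhg):
    """Relative change in lumen area per mmHg of pulse pressure."""
    a_dia = cross_section_area(d_dia_mm)
    a_sys = cross_section_area(d_sys_mm)
    return (a_sys - a_dia) / (a_dia * (p_sys_mmhg - p_dia_mmhg))

# Example: 30 mm diastolic diameter, 32 mm systolic, 80/120 mmHg pressures.
print(f"{distensibility(30, 32, 80, 120):.4f} per mmHg")
```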

Are the digital twins intended for all patients?

SA: That’s one of the biggest challenges. We would like to have a digital twin for each patient as this would allow us to provide personalized medicine on a large scale. This is not yet the case today. For now, we are working with groups of volunteer patients who are monitored every year as part of a clinical study run by the Saint-Étienne University hospital. Our digital models are combined with analyses by doctors, allowing us to validate these models and talk to professionals about what they would like to be able to find using the digital twin of the aorta. We know that as of today, not all patients can benefit from this tool. Analyzing the data collected, building the 3D model, setting the right biological properties for each patient… all this is too time-consuming for wide-scale implementation. At the same time, what we are trying to do is identify the groups of patients who would most benefit from this twin. Is it patients who have aneurysms caused by genetic factors? For which age groups can we have the greatest impact? We also want to move towards automation to make the tool available to more patients.

How can the digital twin tool be used on a large scale?  

SA: The idea would be to include many more patients in our validation phase to collect more data. With a large volume of data, it is easier to move towards artificial intelligence to automate processing. To do so, we have to monitor large cohorts of patients in our studies. This means we would have to shift to a platform incorporating doctors, surgeons and researchers, along with imaging device manufacturers, since this is where the data comes from. This would help create a dialogue between all the various stakeholders and show professionals how modeling the aorta can have a real impact. We already have partnerships with other IMT network schools: Télécom SudParis and Télécom Physique Strasbourg. We are working together to improve the state of the art in image processing techniques. We are now trying to include imaging professionals. In order to scale up the tool, we must also expand the scope of the project. We are striving to do just that.



Mathematical tools for analyzing the development of brain pathologies in children

Magnetic resonance imaging (MRI) enables medical doctors to obtain precise images of a patient’s structure and anatomy, and of the pathologies that may affect the patient’s brain. However, to analyze and interpret these complex images, radiologists need specific mathematical tools. While some tools exist for interpreting images of the adult brain, these tools are not directly applicable in analyzing brain images of young children and newborn or premature infants. The Franco-Brazilian project STAP, which includes Télécom ParisTech among its partners, seeks to address this need by developing mathematical modeling and MRI analysis algorithms for the youngest patients.

 

An adult’s brain and that of a developing newborn infant are quite different. An infant’s white matter has not yet fully myelinated and some anatomical structures are much smaller. Due to these specific features, the images obtained of a newborn infant and an adult via magnetic resonance imaging (MRI) are not the same. “There are also difficulties related to how the images are acquired, since the acquisition process must be fast. We cannot make a young child remain still for a long period of time,” adds Isabelle Bloch, a researcher in mathematical modeling and spatial reasoning at Télécom ParisTech. “The resolution is therefore reduced because the slices are thicker.”

These complex images require the use of tools to analyze and interpret them and to assist medical doctors in their diagnoses and decisions. “There are many applications for processing MRI images of adult brains. However, in pediatrics there is a real lack that must be addressed,” the researcher observes. “This is why, in the context of the STAP project, we are working to design tools for processing and interpreting images of young children, newborns and premature infants.”

The STAP project, funded by the ANR and FAPESP, was launched in January and will run for four years. The partners involved include the University of São Paulo in Brazil, the pediatric radiology departments at São Paulo Hospital and Bicêtre Hospital, as well as the University of Paris Dauphine and Télécom ParisTech. “Three applied mathematics and IT teams are working on this project, along with two teams of radiologists. Three teams in France, two in Brazil… The project is both international and multidisciplinary,” says Isabelle Bloch.

 

Rare and heterogeneous data

To work towards developing a mathematical image analysis tool, the researchers collected MRIs of young children and newborns from partner hospitals. “We did not acquire data specifically for this project,” Isabelle Bloch explains. “We use images that are already available to the doctors, for which the parents have given their informed consent for the images to be used for research purposes.” The images are all anonymized, regardless of whether they display normal or pathological anatomy. “We are very cautious: If a patient has a pathology that is so rare that his or her identity could be recognized, we do not use the image.”

Certain pathologies and developmental abnormalities are of particular interest to the researchers: hyperintense areas, which are areas of white matter that appear lighter than normal on the MRI images; developmental abnormalities in the corpus callosum, the anatomical structure which joins the two cerebral hemispheres; and cancerous tumors.

“We are faced with some difficulties because few MRI images exist of premature and newborn babies,” Isabelle Bloch explains. “Finally, the images vary greatly depending on the age of the patient and the pathology. We therefore have a limited dataset and many variations that continue to change over time.”

 

Modeling medical knowledge

Although the available images are limited and heterogeneous, the researchers can make up for this lack of data through the medical expertise of the radiologists, who are in charge of annotating the MRI images that are used. The researchers will therefore have access to valuable information on brain anatomy and pathologies as well as the patient’s history. “We will work to create models in the form of medical knowledge graphs, including graphs of the structures’ spatial layout. We will have assistance from the pediatric radiologists participating in the project,” says Isabelle Bloch. “These graphs will guide the interpretation of the images and help to describe the pathology and the surrounding structures: Where is the pathology? What healthy structures could it affect? How is it developing?”

For this model, each anatomical structure will be represented by a node. These nodes are connected by edges that bear attributes such as the spatial relationships or contrasts of intensity that exist in the MRI. This graph will take into account the patient’s pathologies by adapting and modifying the links between the different anatomical structures. “For example, if the knowledge shows that a given structure is located to the right of another, we would try to obtain a mathematical model that tells us what ‘to the right of’ means,” the researcher explains. “This model will then be integrated into an algorithm for interpreting images, recognizing structures and characterizing a disease’s development.”
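As a toy illustration of such a graph, the sketch below stores anatomical structures as nodes and attaches a crude degree for the relation “to the right of” to an edge. The real project relies on far richer mathematical models of spatial relations; the structure names and coordinates here are invented.

```python
# Toy knowledge graph: nodes are brain structures, edges carry spatial-relation
# attributes. The "right of" degree below is a crude stand-in for the richer
# (e.g. fuzzy) models used in the actual research; centroids are invented.
import numpy as np

centroids = {                                # assumed centroids in image coordinates
    "right_ventricle": np.array([60.0, 40.0, 35.0]),
    "left_ventricle":  np.array([40.0, 40.0, 35.0]),
    "corpus_callosum": np.array([50.0, 45.0, 40.0]),
}

def right_of_degree(a: str, b: str) -> float:
    """Degree in [0, 1] to which structure a is 'to the right of' structure b."""
    direction = centroids[a] - centroids[b]
    norm = np.linalg.norm(direction)
    if norm == 0:
        return 0.0
    # Cosine between the displacement and the +x ("right") axis, clipped to [0, 1].
    return float(max(0.0, direction[0] / norm))

graph = {
    ("right_ventricle", "left_ventricle"): {
        "relation": "right_of",
        "degree": right_of_degree("right_ventricle", "left_ventricle"),
    },
}
print(graph)
```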

After analyzing a patient’s images, the graph will become an individual model that corresponds to a specific patient. “We do not yet have enough data to establish a standard model, which would take variability into account,” the researcher adds. “It would be a good idea to apply this method to groups of patients, but that would be a much longer-term project.”

 

An algorithm to describe images in the medical doctor’s language

In addition to describing the brain structures spatially and visually, the graph will take into account how the pathology develops over time. “Some patients are monitored regularly. The goal would be to compare MRI images spanning several months of monitoring and precisely describe the developments of brain pathologies in quantitative terms, as well as their possible impact on the normal structures,” Isabelle Bloch explains.

Finally, the researchers would like to develop an algorithm that would provide a linguistic description of the images’ content using the pediatric radiologist’s specific vocabulary. This tool would therefore connect the quantified digital information extracted from the images with words and sentences. “This is the reverse of the method used for the mathematical modeling of medical knowledge,” Isabelle Bloch explains. “The algorithm would therefore describe the situation in a quantitative and qualitative manner, hence facilitating the interpretation by the medical expert.”

“In terms of the structural modeling, we know where we are headed, although we still have work to do on extracting the characteristics from the MRI,” says Isabelle Bloch regarding the project’s technical aspects. “But combining spatial analysis with temporal analysis poses a new problem… As does translating the algorithm into the doctor’s language, which requires transitioning from quantitative measurements to a linguistic description.” Far from trivial, this technical advance could eventually allow radiologists to use new image analysis tools better suited to their needs.



Why women have become invisible in IT professions

Female students have deserted computer science schools and women seem mostly absent from companies in this sector. The culprit: the common preconception that female computer engineers are naturally less competent than their male counterparts. The MOOC entitled Gender Diversity in IT Professions*, launched on 8 March 2018, looks at how sexist stereotypes are constructed, often insidiously. Why are women now a minority, rendered invisible in the digital sector, despite the many female pioneers and entrepreneurs who have paved the way for the development of software and video games? Chantal Morley, a researcher within the Gender@Telecom group at Institut Mines-Telecom Business School, takes a look back at the creation of this MOOC, highlighting the research underpinning the course.

 

In 2016, only 33% of digital jobs were occupied by women (OPIIEC). Taking into account only the “core” professions in the sector (engineer, technician or project manager), the percentage falls to 20%. Why is there such a gender gap? No, it’s not because women are less talented in technical professions, nor because they prefer other areas. The choices made by young women in their education, and by women in their careers, are not always the result of a free and informed decision. The influence of stereotypes plays a significant role. These popular beliefs reinforce the idea that the IT field is inherently masculine, a place where women do not belong, and this influences our choices and behaviors even when we do not realize it.

The research group Gender@Telecom, which brings together several female researchers from IMT schools, is looking at the issue of women’s role in the field of information and communication technologies, and specifically the software sector. Through their studies and analyses, the group’s researchers have observed and described how these stereotypes are expressed. “We interviewed professionals in the sector, and asked students specific questions about their choices and opinions,” explains Chantal Morley, researcher at Institut Mines-Telecom Business School. By analyzing the discourse from these interviews, the researchers identified many preconceived notions. “Women do not like computer science, it does not interest them, for example,” the researcher continues. “These representations are unproven and do not match reality!” These little phrases that communicate stereotypes are heard from both men and women. “One might think that this type of differentiation in representations would not exist among male and female students, but that is not the case,” says Chantal Morley. “During a study conducted in Switzerland, we found that guidance counselors are also very much influenced by these stereotypes.” Among professionals, these views are even cited as arguments justifying certain choices.

 

Little phrases, big impacts

The Gender Diversity in IT Professions MOOC* developed by the Gender@Telecom group is aimed at deconstructing these stereotypes. “We used these studies to try to show learners how little things in everyday life, which we do not even notice, contribute to instilling these differentiated views,” Chantal Morley explains. These little phrases or representations can also be found in our speech as well as in advertisements, posters… When viewed individually, these small occurrences are insignificant, yet it is their repetition and systematic nature that pose a problem. Together they work to establish and reinforce sexist stereotypes. “They form a common knowledge, a popular belief that everyone is aware of, that we all accept, saying ‘that’s just the way it is!’”

To study this phenomenon, the researchers from the group analyzed the discourse from semi-structured interviews conducted with stakeholders in the digital industry. The researchers’ questions focused on the relationship with technology and an entrepreneurship competition that had recently been held at Institut Mines-Telecom Business School. “Again, in this study, some types of arguments were frequently repeated and helped reinforce these stereotypes,” Chantal Morley observes. “For example, when someone mentions a woman who is especially talented, the person will often add, ‘yes, but with her it’s different, that doesn’t count.’ There is always an excuse for not questioning the general rule that says women lack the abilities required in digital professions.”

 

Unjustified stereotypes

Yet despite their pervasiveness, there is nothing to justify these remarks. The history of computer science professions proves this fact. However, the contribution of women has long been hidden behind the scenes. “When we studied the history of computer science, we were primarily looking at the area of hardware, equipment. Women were systematically rejected by universities and schools in this field, where they were not allowed to earn a diploma,” says Chantal Morley. “Also, some companies refused to keep their employees if they had a child or got married. This made careers very difficult.” In recent years, research on the history of the software industry, in which there were more opportunities, has revealed that many women contributed to major aspects of its development.

“Ada Lovelace is sort of the Marie Curie of computer science… People think she is the only one! Yet she is one contributor among others,” the researcher explains. For example, the computer scientist Grace Hopper invented the first compiler in the 1950s and played a decisive role in the creation of the COBOL language. “She had the idea of inventing a translator that would translate a relatively understandable and accessible language into machine language. Her contribution to programming was crucial,” Chantal Morley continues. “We can also mention Roberta Williams, a computer scientist who greatly contributed to the beginnings of video games, or Stephanie Shirley, a pioneering computer scientist and entrepreneur…”

In the past, these women were able to fight for their place in software professions. What has happened to make women seem absent from these arenas? According to Chantal Morley, the exclusion of women occurred with the arrival of microcomputing, which at the time was designed for a primarily male target: executives. “The representations conveyed at that time progressively led everyone to associate working with computers with men.” But although women are a minority in this sector, they are not completely absent. “Many have participated in the creation of very large companies, they have founded startups, and there are some very famous women hackers,” Chantal Morley observes. “But they are not at all in the spotlight and do not get much press coverage. As if they were an anomaly, something unusual…”

Finally, women’s role in the digital industry varies greatly depending on the country and culture. In India and Malaysia, for example, computer science is a “women’s” profession. It is all a matter of perspective, not a question of innate abilities.

 

[box type=”shadow” align=”” class=”” width=””]*A MOOC combating sexist stereotypes

How are these stereotypes constructed and maintained? How can they be deconstructed? How can we promote the integration of women in digital professions? The Gender Diversity in IT Professions MOOC (in French), launched on 8 March 2018, uncovers the little-known contribution of women to the development of the software industry and the mechanisms that keep them hidden and discourage them from entering this sector. The MOOC is aimed at raising awareness of these issues among companies, schools and research organizations, to provide them with keys for developing a more inclusive culture for women. [/box]


 


Artificial Intelligence hiding behind your computer screen!

Far from the dazzle of intelligent humanoid robots and highly effective chatbots, artificial intelligence is now used in many ordinary products and services. In the software and websites consumers use on a daily basis, AI is being used to improve the use of digital technology. This new dynamic is perfectly illustrated by two startups incubated at Télécom ParisTech: BEYABLE and AiZimov.

 

Who are the invisible workers managing the aisles of digital shops? At the supermarket, shoppers regularly see employees stocking the shelves, but the shelves of online sales sites are devoid of human contact. “Whether a website has 500 or 10,000 items for sale, there are always fewer employees managing the products than at a real store,” explains Julien Dugaret, founder of the startup BEYABLE. The young company is well aware that these digital showcases still require maintenance. Currently accelerated at Télécom ParisTech and formerly incubated there, it offers a solution for detecting anomalies on online shopping sites.

BEYABLE’s artificial intelligence algorithms use a clustering technique. By analyzing data from internet users’ visits to websites and the data associated with each product, they group the items together into coherent “clusters”. The articles that cannot be included in any of the clusters are then identified as anomalies and corrected so they can be reintegrated into the right place.
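BEYABLE’s actual features and algorithms are not public. The toy sketch below only illustrates the clustering idea: group items by simple numeric features, then flag items whose declared category disagrees with the cluster their features fall into.

```python
# Toy illustration of catalogue anomaly detection by clustering (not BEYABLE's
# implementation). Features, categories and parameters are invented.
import numpy as np
from sklearn.cluster import KMeans

# Toy feature vectors, e.g. (price in euros, heel height in cm).
features = np.array([
    [39.0, 8.0], [45.0, 9.0], [42.0, 7.5],   # listed as "heels"
    [89.0, 3.0], [95.0, 2.5], [92.0, 3.5],   # listed as "boots"
    [41.0, 8.5],                             # listed as "boots" but looks like a heel
])
declared = ["heels", "heels", "heels", "boots", "boots", "boots", "boots"]

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

# Majority cluster for each declared category; items outside it look misfiled.
majority = {cat: np.bincount(clusters[[i for i, d in enumerate(declared) if d == cat]]).argmax()
            for cat in set(declared)}
anomalies = [i for i, cat in enumerate(declared) if clusters[i] != majority[cat]]
print("suspicious items:", anomalies)        # index 6: the misfiled pair of heels
```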

“Some products do not have the right images, descriptions or references. For example, a pair of heels might be included in the ‘boots’ category of an online shop,” explains the entrepreneur. The software then identifies the heels so that an employee can correct the description. While this type of error may seem anecdotal or funny, for the companies that use BEYABLE’s services, the quality of the customer experience is at stake.

Some websites offer thousands of articles with product references that are constantly changing. It is important to make sure visitors using the website do not feel lost from one day to the next. “If a real merchant sold t-shirts one day and coffee tables the next, you can imagine all the logistics that would be required overnight. For an online shop, the logistics involved in changing the collection or promoting certain articles is much simpler, but not effortless. The reduced number of online store ‘department managers’ makes the logistics all the more difficult,” explains Julien Dugaret. Artificial intelligence tools play an essential role in these logistics, helping digital marketing teams save a lot of time and ensuring visitor satisfaction.

BEYABLE is increasingly working with websites run by major brands. These websites invest hundreds of thousands of euros to earn consumers’ loyalty. “These websites have now become very important assets for companies,” the founder of the startup observes. They therefore need to understand what the customers are looking for and how they interact with the pages. BEYABLE does more than perform basic analyses, like the so-called “analytics” tools—the best-known being Google Analytics—it also offers these companies “a look at what they cannot see,” says Julien Dugaret.

The company’s algorithms learn from the visits by categorizing them and identifying several types of internet users: those who look at the maps for nearby shops, those who want to discover items before they buy them, those who are interested in the brand’s activities… “Companies do not always have data experts who can analyze all the information about their visitors, so we offer AI tools suited to this purpose,” Julien Dugaret explains.

Artificial intelligence for professional emails?

For those who use digital services, the hidden AI processes are not only used to improve their online shopping experience. Jérôme Devosse worked as a salesperson for several years and used to study social networks, company websites and news sites to glean information about the people he wanted to contact. “This is business as usual for salespeople: adapting the sales hook and initial contact based on the person’s interests and the company’s needs,” he explains.

After growing weary of doing this task the slow way, he decided to create a tool to automate the research he carried out before appointments. And that was how AiZimov was born, another startup incubated at Télécom ParisTech. “It’s an assistant,” explains Jérôme Devosse. “All I have to do is tell it ‘I want to contact that person‘ and it will write an email based on the public data available online.” Interviews with the person, their company’s financial reports, their place of residence, their participation at trade shows, all of this information is useful for the software. “For example, the assistant will automatically write a message saying, ‘I saw you will be at Vivatech next week, come meet us!”, AiZimov’s founder explains.

The tool works in three stages. First, there is the data acquisition stage, which automatically searches through large volumes of publicly available data. Next, the data must be understood. Is the sentence containing the targeted person’s name from an interview or a financial report? What are the associated key words and what logical connections can be made? Finally, the text is generated automatically and can be adjusted based on different criteria. The user can then choose to send an email that is more formal or more emotional—using things the contact is passionate about—or a very friendly email.
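Very schematically, the generation stage could look like the template-based sketch below. The tones, fields and wording are assumptions for illustration, not AiZimov’s implementation.

```python
# Minimal sketch of the text-generation stage using templates (assumed, not
# AiZimov's actual pipeline). The "fact" would come from the acquisition and
# understanding stages, e.g. a sentence extracted from public data.
TEMPLATES = {
    "formal":   "Dear {name},\nI read that {fact}. I would welcome the chance to meet.",
    "friendly": "Hi {name}!\nI saw that {fact} — come and meet us at our booth!",
}

def draft_email(name: str, fact: str, tone: str = "formal") -> str:
    return TEMPLATES[tone].format(name=name, fact=fact)

print(draft_email("Alex", "you will be at Vivatech next week", tone="friendly"))
```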

Orange and Renault are already testing the startup’s software. “For salespeople from large companies, the time they save by not writing emails to new contacts is used to maintain the connections they have with existing customers and continue the relationship,” explains Jérôme Devosse. Today, the tool does not send an email without the salesperson’s approval, and the salesperson can still choose to modify a few details. The entrepreneur is not seeking an entirely automatic process. His areas for future development are focused on using the software for other activities.

“I would like to go beyond emails: once the information is acquired, it could be used to write a detailed or general script for making contact via telephone,” he explains. AiZimov’s technology could also be used for professions other than sales. In press relations, it could be used to contact the most relevant journalists by sending them private messages on social networks, for example. And why not make this software available to human resources departments for contacting individuals for recruitment purposes? Artificial intelligence could therefore continue to be used in many different online interactions.


Can we trust blockchains?

Maryline Laurent, Télécom SudParis – Institut Mines-Télécom, Université Paris-Saclay

[divider style=”normal” top=”20″ bottom=”20″]

Blockchains were initially presented as a very innovative technology with great promise in terms of trust. But is this really the case? Recent events, such as the hacking of the Parity wallet ($30 million US) or of the Tether firm ($31 million US), have raised doubts.

This article provides an overview of the main elements outlined in Chapter 11 of the book, Signes de confiance : l’impact des labels sur la gestion des données personnelles (Signs of trust: the impact of seals on personal data management) produced by the Personal Data Values and Policies Chair of which Télécom SudParis is the co-founder. The book may be downloaded from the chair’s website. This article focuses exclusively on public blockchains.

Understanding the technology

A blockchain can traditionally be equated to a “big,” accessible and auditable account ledger deployed on the internet. It relies on a large number of IT resources spread out around the world, called “nodes,” which help make the blockchain work. In the case of a public blockchain, everyone can contribute, as long as they have a powerful enough computer to execute the associated code.

Executing the code implies acceptance of the blockchain’s governance rules. These contributors are responsible for collecting transactions made by blockchain users, aggregating transactions into a structure called a “block” (of transactions) and validating the blocks before they are linked to the blockchain. The resulting blockchain can be up to several hundred gigabytes and is duplicated a great number of times on the internet, which ensures its wide availability.
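As a minimal illustration of this chaining of blocks, the sketch below links each block to the hash of the previous one, so that altering a past block becomes detectable. Real blockchains add consensus rules, proof of work and signatures on top of this; the transaction format here is invented.

```python
# Minimal hash-chained ledger: each block stores the hash of the previous block,
# so tampering with history breaks every subsequent link. Illustration only.
import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = [{"index": 0, "transactions": [], "prev_hash": "0" * 64}]   # genesis block

def add_block(transactions):
    prev = chain[-1]
    chain.append({"index": prev["index"] + 1,
                  "transactions": transactions,
                  "prev_hash": block_hash(prev)})

add_block(["Alice pays Bob 5"])
add_block(["Bob pays Carol 2"])

# Tampering with block 1 makes block 2's stored prev_hash no longer match.
chain[1]["transactions"] = ["Alice pays Bob 500"]
print(chain[2]["prev_hash"] == block_hash(chain[1]))   # False: tampering is detectable
```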

Elements of trust

The blockchain is based on the following major conceptual principles, naturally positioning it as an ideal technology of trust:

  • Decentralized architecture and neutrality of governance based on the principle of consensus: it relies on a great number of independent contributors, making it decentralized by definition. This means that unlike a centralized architecture where decisions can be made unilaterally, a consensus must be reached, or a party must manage to control over 50% of the blockchain’s computing power (computer resources) to have an effect on the system. Therefore, any change in the governance rules must previously have been approved by consensus between the contributors, who must then update the software code executed.
  • Transparency of algorithms makes for better auditability: all transactions, all blocks, and all governance rules are freely accessible and can be read by everyone. This means that anyone can audit the system to ensure the correct operation of the blockchain and legitimacy of the transactions. The advantage is that experts in the community of users may closely examine the code and report anything that seems suspicious. Trust is therefore based on whistleblowers.
  • Secure underlying technology: cryptographic techniques and terms of use guarantee that the blockchain cannot be altered, that the recorded transactions are authentic (even if they have been made under a pseudonym), and that blockchain security is able to keep up with technological advances thanks to an adaptive security level.

Questions remain

Now we will take a look at blockchains in practice and discuss certain events that have raised doubts about this technology:

  • A 51% attack: several organizations that contribute significantly to running a blockchain can join forces in order to possess at least 51% of the blockchain’s computing power between them. For example, China is known to concentrate a large share of the bitcoin blockchain’s computing power — two thirds of it in 2017. This raises questions about the distributed character of the blockchain and the neutrality of governance, since it results in completely uneven decision-making power. Indeed, majority organizations can censor transactions, which impacts the blockchain’s history, or worse still, they can have considerable power to get governance rules that they have decided upon approved.
  • Hard fork: when new governance rules that are incompatible with previous ones are brought forward in the blockchain, this leads to a “hard fork,” meaning a permanent change in the blockchain, which requires a broad consensus amongst the blockchain contributors for the new rules to be accepted. If a consensus is not reached, the blockchain forks, resulting in the simultaneous existence of two blockchains, one operating according to the previous rules and the other according to the new rules. This forking of the chain undermines the credibility of the two resulting blockchains, leading to the devaluation of the associated cryptocurrency. It is worth noting that a hard fork brought about as part of a 51% attack will be more likely to succeed in getting the new rules adopted, since a consensus will be reached more easily.
  • Money laundering: blockchains are transparent by their very nature, but the traceability of transactions can be made very complicated, which facilitates money laundering. It is possible to open a large number of accounts, use the accounts just once, and carry out transactions under the cover of a pseudonym. This raises questions about all of a blockchain’s contributors, since their moral values are essential to running the blockchain, and harms the technology’s image.
  • Programming errors: errors can be made in smart contracts, the programs that are automatically executed within a blockchain, and can have a dramatic impact on industrial players. Due to one such error, an attacker was able to steal $50 million US from the DAO organization in 2016. Organizations that fall victim to such bugs could seek to invalidate these harmful transactions — the DAO succeeded in provoking a hard fork for this purpose — calling into question the very principle of the inalterability of the blockchain. Indeed, if blocks that have previously been recorded as valid in a blockchain are then made invalid, this raises questions about the blockchain’s reliability.

To conclude, the blockchain is a very promising technology that offers many characteristics to guarantee trust, but the problem lies in the disconnect between the promises of the technology and the ways in which it is used. This leads to a great deal of confusion and misunderstandings about the technology, which we have tried to clear up in this article.

Maryline Laurent, Professor and Head of the R3S Team at the CNRS SAMOVAR Laboratory, Télécom SudParis – Institut Mines-Télécom, Université Paris-Saclay

The original version of this article was published in French on The Conversation France.


 


Campus Mondial de la Mer: promoting Brittany’s marine science and technology research internationally

If the ocean were a country, it would be the world’s 7th-largest economic power, according to a report by the WWF, and the wealth it produces could double by 2030. The Brittany region, at the forefront of marine science and technology research, can make an important contribution to this global development. This is what the Campus Mondial de la Mer (CMM), a Brittany-based academic community, intends to prove. The aim of the Campus is to promote regional research at the international level and support the development of a sustainable marine economy. René Garello, a researcher at IMT Atlantique, a partner of the CMM, answers our questions about this new consortium’s activities and areas of focus.

 

What is the Campus Mondial de la Mer (CMM) and what are its objectives?

René Garello: The Campus Mondial de la Mer is a community of research institutes and other academic institutions, including IMT Atlantique, created through the initiative of the Brest-Iroise Technopôle (Technology Center). Its goal is to highlight the excellence of research carried out in the region focusing on marine sciences and technology. The CMM monitors technological development, promotes research activities and strives to bring international attention to this research. It also helps organize events and symposiums and disseminates information related to these initiatives. The campus’s activities are primarily intended for academics, but they also attract industrial players.

The CMM hosts events and supports individuals seeking to develop new projects as part of its goal to boost the region’s economic activity and create a sustainable maritime economy, which represents tremendous potential at the global level. An OECD report on the ocean economy in 2030 shows that by developing all the ocean-based industries, the ocean economy’s output could be doubled, from $1.5 trillion US currently to $3 trillion US in 2030! The Campus de la Mer strives to support this development by promoting Brittany-based research internationally.

What are the Campus Mondial de la Mer‘s areas of focus?

RG: The campus is dedicated to the world of research in the fields of marine science and technology. As far as the technological aspects are concerned, underwater exploration using underwater drones, or autonomous underwater vehicles, is an important focus area. These are highly autonomous vehicles — it’s as if they had their own little brains!

Another important focus area involves observing the ocean and the environment using satellite technology. Research in this area mainly involves the application of data from these observations, from both a geophysical and oceanographic perspective and in order to monitor ocean-based activities and the pollution they create.

Finally, a third research area is concerned more with physics, biology and chemistry. This area is primarily led by the University of Western Brittany, which has a large research department related to oceanography, and Institut Universitaire Européen de la Mer.

What sort of activities and projects does the Campus de la Mer promote?

RG: One of the CMM’s aims is to promote the ESA-BIC Nord-France project (European Space Agency – Business Incubator Center), a network of incubators for the regions of Brittany, Hauts-de-France, Ile-de-France and Grand-Est, which provides opportunities for financial and technological support for startups. This project is also connected to the Seine Espace Booster and Morespace, which have close ties with the startup ecosystem of the IMT Atlantique incubator.

Another project supported by the Campus Mondial de la Mer involves creating a collaborative space between IMT Atlantique and Institut Universitaire Européen de la Mer, based on shared research themes for academic and industrial partners and our network of startups and SMEs.

The CMM also supports two projects led by UBO. The first is ISblue, the University Research School (EUR) for Marine Science and Technology, developed through the 3rd Investments in the Future program. Ifremer and a portion of the laboratories associated with the engineering schools IMT Atlantique, ENSTA Bretagne, ENIB and École Navale (Naval Academy) are involved in this project. The second project consists of housing the UNU-OCEAN institute on the site of the Brest-Iroise Technology Center, with a five-year goal of being able to accommodate 25-30 individuals working at the heart of an interdisciplinary research and training ecosystem dedicated to marine science and technology.

Finally, the research themes highlighted by the CMM are in keeping with the aims of GIS BreTel, a Brittany Scientific Interest Group on Remote Sensing that I run. Our work aligns perfectly with the Campus’s approach. When we organize a conference or a symposium, whether at the Brest-Iroise Technology Center or the CMM, everyone participates! This also helps give visibility to research carried out at GIS BreTel and to promote our activities.



HyBlockArch: hybridizing the blockchain for the industry of the future

Within the framework of the German-French Academy for the Industry of the Future, a partnership between IMT and Technische Universität München (TUM), the HyBlockArch project examines the future of the blockchain. This project aims to adapt this technology to an industrial scale to create a powerful tool for companies. To accomplish this goal, the teams led by Gérard Memmi (Télécom ParisTech) and Georg Carle (TUM) are working on new blockchain architectures. Gérard Memmi shares his insight.

 

Why are you looking into new blockchain architectures?

Gérard Memmi: Current blockchain architectures are limited in terms of performance in the broadest sense: turnaround time, memory, energy… In many cases, this hinders the blockchain from being adopted in Industry 4.0. Companies would like to see faster validation times or to be able to put even more information into a blockchain block. A bank that wants to track an account history over several decades will be concerned about the number of blocks in the blockchain and the possible increase in block latency times. Yet today we cannot foresee the behavior of blockchain architectures many years into the future. There is also the energy issue: the need to reduce the consumption caused by the proof of work required to enter data into a blockchain, while still ensuring a comparable level of security. We must keep in mind that bitcoin’s proof of work consumes the same amount of electrical energy as a country like Venezuela.

What type of architecture are you trying to develop with the HyBlockArch project?

GM: We are working on hybrid architectures. These multi-layer architectures make it possible to reach an industrial scale. We start with a blockchain protocol in which each node of the ledger communicates with a mini data storage network on a higher layer. This higher layer does not necessarily run a blockchain protocol and can operate slightly differently while still maintaining similar properties. The structure is transparent for users; they do not notice a difference. The miners who perform the proof of work required to validate data only see the blockchain aspect. This is an advantage for them, allowing them to work faster without taking the upper layer of the architecture into account.

What would the practical benefits be for a company?

GM: For a company this would mean smart contracts could be created more quickly and the computer operations that rely on this architecture would have shorter latency times, resulting in a broader scope of application. The private blockchain is very useful in the field of logistics. For example, each time a product changes hands, as from the vendor to the carrier, the operation is recorded in the blockchain. A hybrid architecture records this information more quickly and at a lower cost for companies.

This project is being carried out in the framework of the German-French Academy for the Industry of the Future. What is the benefit of this partnership with Technische Universität München (TUM)?

GM: Our German colleagues are developing a platform that measures the performance of the different architectures. We can therefore determine the most optimal architecture in terms of energy savings, fast turnaround and security for typical uses in the industry of the future. We contribute a more theoretical aspect: we analyze the smart contracts to develop more advantageous protocols, and we work with proof of work mechanisms for recording information in the blockchain.

What does this transnational organization represent in the academic field?

GM: This creates a European dynamic in the work on this issue. In March we launched a blockchain alliance between French institutes: BART. By working together with TUM on this topic, we are developing a Franco-German synergy in an area that only a few years ago was only featured as a minor issue at a research conference, as the topic of only one session. The blockchain now has scientific events all to itself. This new discipline is booming and through the HyBlockArch project we are participating in this growth at the European level.

 


C2Net: supply chain logistics on cloud nine

A cloud solution to improve supply chain logistics? This is the principle behind the European H2020 project C2Net. Launched on January 1, 2015, the project was completed on December 31, 2017. The project successfully demonstrated how a cloud platform can enable the various players in a supply chain to better anticipate and manage future problems. To do so, C2Net drew on research on interoperability and on the automation of alerts using data taken directly from companies in the supply chain. Jacques Lamothe and Frédérick Benaben, researchers in industrial engineering specializing in logistics and information systems respectively, give us an overview of the work they carried out at IMT Mines Albi on the C2Net project.

 

What was the aim of the C2Net project?

Jacques Lamothe: The original idea was to provide cloud tools for SMEs to help them with advanced supply chain planning. The goal was to identify future inventory management problems companies may have well in advance. As such, we had to work on three parts: recovering data from SMEs, generating alerts for issues to be resolved, and monitoring planning activity to see if everything went as intended. It wasn’t easy because we had to respond to interoperability issues — meaning data exchange between the different companies’ information systems. And we also had to understand the business rules of the supply chain players in order to evaluate the relevant alerts.

Could you give us an example of the type of problem a company may face?

Frédérick Benaben: One thing that can happen is that a supplier is only able to manufacture 20,000 units of an item while the SME is expecting 25,000. This makes for a strained supply chain and solutions must be found, such as compensating for this change by asking suppliers in other countries if they can produce more. It’s a bit like an ecosystem: when there’s a problem in one part, all the players in the supply chain are affected.

Jacques Lamothe: What we actually realized is that, a lot of the time, certain companies have very effective tools to assess the demand on one side, while other companies have very effective tools to measure production on the other side. But it is difficult for them to establish a dialogue between these two parts. In the chain, the manufacturer does not necessarily notice when there is lower demand and vice versa. This is one of the things the C2Net demonstrator helped correct in the use case we developed with the companies.

And what were the companies’ expectations for this project?  

Jacques Lamothe: For the C2Net project, each academic partner brought an industrial partner it had already worked with. And each of these SMEs had a different set of problems. In France, our partner for the project was Pierre Fabre. They were very interested in data collection and creating an alert system. On the Spanish side, this was less of a concern than optimizing planning. Every company has its own issues, and the use cases the industrial partners brought us meant we had to find solutions for everyone: from generating data on their supply chains to creating tools to allow them to manage alerts or planning.

To what extent has your research work had an impact on the companies’ structures and the way they are organized?

Frédérick Benaben: What was smart about the project is that we did not propose the C2Net demonstrator as a cloud platform that would replace companies’ existing systems. Everything we did is situated a level above the organizations so that they will not be impacted, and integrates the existing systems, especially the information systems already in place. So the companies did not have to be changed. This also explains why we had to work so hard on interoperability.

What did the work on interoperability involve?

Frédérick Benaben: There were two important interoperability issues. The first was being able to plug into existing systems in order to collect information and understand what was collected. A company may have different subcontractors, all of whom use different data formats. How can a company understand and use the data from both subcontractor A, which is provided in one language and that of subcontractor B, which is provided in another? We therefore had to propose data reconciliation plans.

The second issue involves interpretation. Once the data has been collected and everyone is speaking the same language, or at least can understand one another, how can common references be established? For example, having everyone speak in liters for quantities of liquids instead of vials or bottles. Or, when a subcontractor announces that an item may potentially be out of stock, what does this really mean? How far in advance does the subcontractor notify its customers? Does everyone have the same definition? All these aspects had to be harmonized.
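As a toy illustration of this harmonization step, the sketch below converts quantities reported in partner-specific units into a common reference unit (liters, as in the example above). The conversion factors and field names are assumptions, not the C2Net data model.

```python
# Toy data-reconciliation step: convert each partner's record to a shared
# reference unit. Factors and field names are assumptions for illustration.
TO_LITERS = {"liter": 1.0, "vial": 0.01, "bottle": 0.75}

def harmonize(record: dict) -> dict:
    """Convert a partner's quantity to the shared reference unit (liters)."""
    qty_liters = record["quantity"] * TO_LITERS[record["unit"]]
    return {"item": record["item"], "quantity_liters": qty_liters}

supplier_a = {"item": "syrup", "quantity": 2000, "unit": "vial"}
supplier_b = {"item": "syrup", "quantity": 40, "unit": "bottle"}
print(harmonize(supplier_a), harmonize(supplier_b))
```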

How will these results be used?

Jacques Lamothe: The demonstrator has been installed at the University of Valencia in Spain and should be reused for research projects. As for us, the results have opened up new research possibilities. We want to go beyond a tool that can simply detect future problems or allow companies to be notified. One of our ideas is to work on solutions that make it possible to make more or less automated decisions to adjust the supply chain.

Frédérick Benaben: A spin-off has also been developed in Portugal. It uses a portion of the data integration mechanisms to propose services for SMEs. And we are still working with Pierre Fabre too, since their feedback has been very positive. The demonstrator helped them see that it is possible to do more than what they are currently able to do. In fact, we have developed and submitted a partnership research project with them.


 


What is artificial intelligence?

Artificial intelligence (AI) is a hot topic. In late March, the French government organized a series of events dedicated to this theme, the most notable of which was the publication of the report “For a Meaningful Artificial Intelligence,” written by Cédric Villani, a mathematician and member of the French parliament. The buzz around AI coincides with companies’ and scientists’ renewed interest in the topic. Over the last few years AI has become fashionable again, as it was in the 1950s and 1960s. But what does this term actually refer to? What can we realistically expect from it? Anne-Sophie Taillandier, director of IMT’s TeraLab platform dedicated to big data and AI, is working on innovations and technology transfer in this field. She was recently listed as one of the top 20 individuals driving AI in France by L’Usine Nouvelle. She sat down with us to present the basics of artificial intelligence.

 

How did AI get to where it is today?

Anne-Sophie Taillandier: AI has been at the heart of innovation discussions for two or three years now. What has helped create this dynamic is the closer relationship between two scientific fields, information sciences and big data, both of which focus on the question, “How can information be extracted from data, whether big or small?” The results have been astonishing. Six years ago, only a tiny fraction of images could be recognized automatically. When deep learning was developed, the recognition rate skyrocketed. But if we have been able to use these algorithms on large volumes of images, it is because of hardware that has made it possible to perform the computations in a reasonable amount of time.

What technology is AI based on?

AST: Artificial intelligence is the principle of extracting and processing information. This requires tools and methods. Machine learning is a method that brings together largely statistical techniques such as neural networks. Deep learning is another technique that relies on deeper neural networks. These two methods have some things in common; what makes them different is the tools chosen. In any event, both technologies are based on the principle of learning. The system learns from an initial database and is then used on other data. The results are assessed so that the system can keep learning. But AI itself is not defined by these technologies. In the future, there may be other types of technology which will also be considered artificial intelligence. And even today, researchers in robotics sometimes use different algorithms.
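The learning principle described here can be illustrated with a minimal sketch: fit a model on an initial dataset, then assess it on data it has never seen so that it can keep being evaluated (and retrained) as new data arrives. scikit-learn, the digits dataset and the small network size are just convenient stand-ins, not the TeraLab setup.

```python
# Minimal sketch of the learning principle: train on an initial database, then
# evaluate on unseen data. Dataset, model and parameters are illustrative only.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)                      # learn from the initial database
print("accuracy on unseen data:", model.score(X_test, y_test))
```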

Can you give some specific examples of the benefits of artificial intelligence?

AST: The medical sector is a good illustration. In medical imaging, for example, we can teach an algorithm to detect cancerous tumors. It can then help doctors look for parts of an image that require their attention. We can also adjust a patient’s treatment depending on a lot of different data: is he alone or does he have a support network? Is he active or inactive? What is his living environment like? All these aspects contribute to personalized medicine, which has only become possible because we know how to process all this data and automatically extract information. For now, artificial intelligence is mainly used as a decision-making aid. Ultimately, it’s a bit like what doctors do when they ask patients questions, but in this case we help them gather information from a wide range of data. With AI, the goal is first and foremost to reproduce something that we know very well.

How can we distinguish between solutions that involve AI and others?

AST: I would say that it’s not really important. What matters is if using a solution provides real benefits. This question often comes up with chatbots, for example. Knowing whether AI is behind them or not — whether it’s just a decision tree based on a previous scenario or if it’s a human — is not helpful. As a consumer, what’s important to me is that the chatbot in front of me can answer my questions. They’re always popping up on sites now, which is frustrating since a lot of the time they are not particularly useful! So it is how a solution is used that really matters, more than the technology behind it.

Does the fact that AI is “trendy” adversely affect important innovations in the sector?

AST: With TeraLab we are working on very advanced topics with researchers and companies seeking cutting-edge solutions. If people exaggerate in their communication materials or use the term “artificial intelligence” in their keywords, it doesn’t affect us. I’d rather that the public become familiar with the term and think about the technology already present in their smartphones than fantasize about something inaccessible.