
Rethinking ethics in social networks research

Antonio A. Casilli, Télécom ParisTech – Institut Mines-Télécom, University of Paris-Saclay, and Paola Tubaro, Centre national de la recherche scientifique (CNRS)

[dropcap]R[/dropcap]esearch into social media is booming, fueled by increasingly powerful computational and visualization tools. However, it also raises ethical and deontological issues that tend to escape the existing regulatory framework. The economic implications of large-scale data platforms, the active participation of network members, the specter of mass surveillance, the effects on health, the role of artificial intelligence: a wealth of questions all needing answers. A workshop held on December 5-6, 2017 at Paris-Saclay, organized in collaboration with three international research groups, hopes to make progress in this area.

 

Social Networks, what are we talking about?

The expression “social network” has become commonplace, but those who use it to refer to social media such as Facebook or Instagram are often unaware of its origin and true meaning. Studies of social networks began long before the dawn of the digital age. Since the 1930s, sociologists have been conducting studies that attempt to explain the structure of the relationships that connect individuals and groups: their “networks”. These could be, for example, advice relationships between employees of a business, or friendships between pupils in a school. Such networks can be represented as points (the pupils) connected by lines (the relationships).

A graphic representation of a social network (friendships between pupils at a school), created by J.L. Moreno in 1934. Circles = girls, triangles = boys, arrows = friendships. J.L. Moreno, 1934, CC BY
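To make the “points and lines” description concrete, here is a minimal sketch of how such a sociogram can be built and queried in Python with the networkx library (the pupils and friendships below are invented for illustration):

import networkx as nx  # graph library widely used for social network analysis

# A toy sociogram: pupils are nodes, friendships are edges (illustrative data only).
friendships = [("Anna", "Beth"), ("Beth", "Carl"), ("Carl", "Anna"), ("Dora", "Beth")]
g = nx.Graph()
g.add_edges_from(friendships)

print(nx.degree_centrality(g))            # who has the most friendships, relatively speaking
print(nx.density(g))                      # how tightly knit the group is
print(list(nx.connected_components(g)))   # one connected class or separate cliques

The structural questions asked here (centrality, density, components) are the same ones sociologists have asked since Moreno’s sociograms; only the scale and the tooling have changed.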

 

Well before any studies into the social aspects of Facebook and Twitter, this research shed significant light on topics such as the role of spouses in a marriage, the importance of “weak ties” in job hunting, the “informal” organization of a business, the diffusion of innovation, the formation of political and social elites, and mutual assistance and social support in the face of ageing or illness. The designers of digital platforms such as Facebook now adopt some of the analytical principles on which this research was based, founded on mathematical graph theory (although they often pay less attention to the associated social issues).

Researchers in this field understood very quickly that the classic principles of research ethics (especially the informed consent of participants in a study and the anonymization of any data relating to them) were not easy to guarantee. In social network research, the focus is never on a single individual, but rather on the links between the participant and other people. If those other people are not themselves involved in the study, it is hard to see how their consent can be obtained. The results may also be hard to anonymize, as network visualizations can be revealing even when no personal identification is attached.

 

Digital ethics: a minefield

Academics have been pondering these ethical problems for quite some time: in 2005, the journal Social Networks dedicated an issue to these questions. The dilemmas faced by researchers are exacerbated today by the increased availability of relational data collected and used by digital giants such as Facebook and Google. New problems arise as soon as the lines between “public” and “private” spheres become blurred. To what extent do we need consent to access the messages that a person sends to their contacts, their “retweets” or their “likes” on friends’ walls?

Information sources are often the property of commercial companies, and the algorithms these companies use tend to offer a biased perspective on the observations. For example, can a contact made by a user through their own initiative be interpreted in the same way as a contact made on the advice of an automated recommendation system? In short, data doesn’t speak for itself, and we must question the conditions of its use and the ways it is created before thinking about processing it. These dimensions are heavily influenced by economic and technical choices as well as by the software architecture imposed by platforms.

But is negotiation between researchers (especially in the public sector) and platforms (which sometimes belong to major multinational companies) really possible? Does access to proprietary data risk being restricted or unequally distributed, potentially putting public research at a disadvantage, especially when it does not align with the objectives and priorities of investors?

Other problems emerge when we consider that researchers may even resort to paid crowdsourcing for data production, using platforms such as Amazon Mechanical Turk to ask the crowd to respond to a questionnaire, or even to upload their online contact lists. However, these services revive old questions about working conditions and the appropriation of the products of labor. The ensuing uncertainty hinders research which could potentially have positive impacts on knowledge and society in a general sense.

The potential for misappropriation of research results for political or economic ends is multiplied by the availability of online communication and publication tools, which are now used by many researchers. Although the interest of the military and police in social network analysis is already well known (Osama Bin Laden was located and neutralized following the application of social network analysis principles), these appropriations are becoming even more common today and are less easy for researchers to control. An undeniable risk lies in the use of these principles to restrict civic and democratic movements.

A simulation of the structure of an Al-Qaeda network, “Social Network Analysis for Startups” (fig. 1.7), 2011. Reproduced here with permission from the authors. Kouznetsov A., Tsvetovat M., CC BY

 

Empowering researchers

To break this stalemate, the solution is not to multiply restrictions, which would only aggravate the constraints already inhibiting research. On the contrary, we must create an environment of trust, so that researchers can explore the scope and importance of social networks online and offline, which are essential to understanding major economic and social phenomena, whilst still respecting people’s rights.

The active role of researchers must be highlighted. Rather than remaining subject to predefined rules, they need to participate in the co-creation of an adequate ethical and deontological framework, drawing on their experience and reflections. This bottom-up approach integrates the contributions not just of academics but also of the public, civil society associations and representatives from public and private research bodies. These ideas and reflections could then be brought forward to those responsible for establishing regulations (such as ethics committees).

 

An international workshop in Paris

Poster for the RECSNA17 Conference

Such was the focus of the workshop Recent ethical challenges in social-network analysis. The event was organized in collaboration with international teams (the Social Network Analysis Group of the British Sociological Association, BSA-SNAG; Réseau thématique n. 26 “Social Networks” of the French Sociological Association; and the European Network for Digital Labor Studies, ENDLS), with support from the Maison des Sciences de l’Homme de Paris-Saclay and the Institut d’études avancées de Paris. The conference will be held on December 5-6. For more information and to sign up, please consult the event website. Antonio A. Casilli is Associate Professor at Télécom ParisTech (Institut Mines-Télécom, University of Paris-Saclay) and research fellow at the Centre Edgar Morin (EHESS). Paola Tubaro is Head of Research at LRI, the computer science research laboratory of the CNRS, and teaches at ENS.

 

The original version of this article was published on The Conversation France.

 


A new laser machining technique for industry

FEMTO-Engineering, part of the Télécom & Société numérique Carnot institute, offers manufacturers a new cutting and drilling technique for transparent materials. By using a femtosecond laser, its experts can reach unrivalled levels of precision when machining ultra-hard materials. Jean-Pierre Goedgebuer, director of FC’Innov (FEMTO-Engineering), explains how the technique works.

What is high aspect ratio precision machining and what is it used for?

Jean-Pierre Goedgebuer: Precision machining is used to cut, drill and engrave materials. It allows various designs to be inscribed onto materials such as glass, steel or stainless steel, and it is a very widespread method in industry. Precision machining refers to positioning and shaping at an extremely small scale, in the range of 2 microns (1 micron = 10⁻⁶ meters). The term “aspect ratio” refers to drilling: it is the ratio of a hole’s depth to its diameter. An aspect ratio of 100 therefore corresponds to a hole whose diameter is 100 times smaller than its depth.
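As a reminder, and using illustrative dimensions only: aspect ratio = depth / diameter. A hole 1 µm in diameter drilled 100 µm deep therefore has an aspect ratio of 100, while a 0.5 µm hole passing through 1 mm of material reaches an aspect ratio of 2,000, the order of magnitude mentioned later in this interview.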

Cutting or drilling requires destroying the material locally, in a controlled way. To achieve this, we supply energy with a laser, which generates heat when it comes into contact with the material.

 

What is femtosecond machining?

JPG: The term femtosecond [1] refers to the duration of the laser pulses, which last a few tens or hundreds of femtoseconds. The length of the pulse determines the length of the interaction between the light and the material. The shorter it is, the fewer thermal exchanges there are with the material and therefore, in principle, the less the surrounding material is damaged.

In laser machining, we use short pulses (femtoseconds, 10⁻¹⁵ of a second) or longer pulses (nanoseconds, 10⁻⁹ of a second). The choice depends on the application. For machining with no thermal effect, that is, where the material is not affected by the heat produced by the pulse, we tend to use femtosecond pulses, which offer a good compromise between removal of the material and the rise in temperature. These techniques are associated with light-propagation models which allow us to simulate how a material’s properties affect the propagation of the light passing through it.

 

The femtosecond machining technique generally uses Gaussian beams. The defining characteristic of your process is that it uses Bessel beams. What is the difference?

JPG: Gaussian laser beams are beams in which the energy is distributed according to a Gaussian profile. At high energy levels, they produce non-linear effects when propagating through materials. In particular, they produce self-focusing effects, which make their diameter non-constant and distort their propagation. These effects can be detrimental to the machining quality of certain special kinds of glass.

In contrast, the Bessel laser beams we use in our machining technique allow us to avoid these non-linear effects. They have the ability to maintain a constant diameter over a well-defined length. They act as very fine “laser needles”, measuring just a few hundred nanometers in diameter (a nanometer is on the order of a few atomic diameters). Inside these “laser needles” is a very high concentration of energy. This generates an extremely localized plasma within the material, which ablates it. Furthermore, we can control the length of these “laser needles” very precisely. We use them for very deep cutting or drilling (with an aspect ratio of up to 2,000), producing a precise, clean result with no thermal effects.
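For readers who want the textbook picture behind this contrast (standard beam-optics formulas, not specific to FEMTO-Engineering’s process): a Gaussian beam’s radius grows as it propagates away from its waist, whereas an ideal zeroth-order Bessel beam keeps the same transverse profile over its focal zone:

w(z) = w_0 \sqrt{1 + (z / z_R)^2}   (Gaussian beam radius, which diverges away from the waist)
I(r, z) \propto J_0^2(k_r r)   (Bessel beam, transverse intensity independent of z over the Bessel zone)

Here w_0 is the waist radius, z_R the Rayleigh length, J_0 the zeroth-order Bessel function and k_r the radial wavenumber; the z-independence of the second profile is what the “laser needle” image refers to.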

To implement this new technology, we start from a conventional femtosecond laser. What led to several patents being filed by the Institut FEMTO-ST was working out how to transform its Gaussian beams into Bessel beams.

 

What is the point of this new technology?

JPG: There are two main advantages. Because these “laser needles” hold a high density of energy, it is possible to drill very hard materials which would pose a problem for traditional laser machining techniques. And thanks to the technique’s athermic nature, the material in question keeps its physicochemical properties intact; it does not change.

This machining method is used for transparent materials. Industrial demand is high, as many products require the machining of harder transparent materials. This is the case for example with smartphones, whose screens need to be made from special kinds of very durable, scratch-resistant glass. This is a big market and a major focus for many laser manufacturers, particularly in Europe, the US and of course Asia. There are several other applications as well, notably in the biomedical field.

 

What’s next for this technique?

JPG: Our mission at FEMTO Engineering is to build on and transfer the research coming out of the Institut FEMTO-ST. In this context, we have partnerships with manufacturers with whom we are exploring how this new technology could respond to their needs for very specific materials where traditional femtosecond machining doesn’t give satisfactory results. We are currently working on cutting new materials for smartphones, as well as polymers for medical use.

The fundamental research carried out by the Institut FEMTO-ST continues to focus in particular on better understanding light-matter interaction mechanisms and plasma formation. This research was recently recognized by the ERC (European Research Council), which funds exploratory projects that encourage scientific discovery. The aim is to fully master the physics of Bessel beam propagation, which has received little scientific study until now.

[1] A femtosecond corresponds to one millionth of a billionth of a second. It is roughly the duration of a single oscillation of a visible light wave. A femtosecond is to a second roughly what a second is to 30 million years.

On the same topic:


Understanding methane hydrate formation to revolutionize pipelines

As hydrocarbons are drawn from ever deeper beneath the sea floor, oil companies face potential obstruction problems in their pipelines due to the formation of solid compounds: methane hydrates. Ana Cameirao, an engineer and PhD specializing in industrial crystallization at Mines Saint-Étienne, is working to understand and model this phenomenon. She has contributed to the creation of an industrial chair in collaboration with international laboratories and operators such as Total, with the aim of developing modeling software for the flow within pipelines. The mission: a more economical and ecological use of underwater pipelines.

 

“Always further, always deeper.” This is the logic behind the deployment of offshore platforms. Faced with intense global demand, and thanks to technological progress, hydrocarbon reserves which had previously been considered inaccessible are now exploitable. However, the industry has met an obstacle: methane hydrates. These solid compounds are methane molecules trapped in a sort of cage formed by water molecules. They form in environments of around 4°C and 80 bars of pressure, such as deep-sea pipelines, where they can end up accumulating and obstructing the flow. This issue may prove hard to fix, seeing as depths reach close to 3,000 meters!

In order to get around this problem, oil companies generally inject methanol into the pipelines in order to lower the formation temperature of the hydrates. However, injecting this alcohol carries an additional cost as well as an environmental impact. Additionally, systematic thermal insulation of pipelines is not sufficient to prevent the formation of hydrates. “The latest solution consists in injecting additives which are supposed to slow the formation and accumulation of hydrates”, explains Ana Cameirao, a researcher at the SPIN (Sciences des Processus Industriels et Naturels) research center at Mines Saint-Étienne, and a specialist in crystallization, the science behind the formation and growth of solid aggregates within liquid phases, for instance.

 

Towards the reasonable exploitation of pipelines

For nearly 10 years, the researcher has been studying the formation of hydrates in all the conditions likely to occur in offshore pipelines. “We are looking to model the phenomenon, in other words, to estimate the quantity of hydrates formed, to see whether this solid phase can be transported by the flow, and to determine whether additives need to be injected and, if so, in what quantity”, she summarizes. The goal is to encourage a measured exploitation of pipelines and avoid the massive injection of methanol as a preventative measure. In order to establish these models, Ana Cameirao relies on a valuable experimental tool: the Archimedes platform.

This 50-meter loop located at the SPIN center allows her to reproduce the flow of the mixture of oil, water and gas which circulates in pipelines. A wealth of equipment, including cameras and laser probes which operate under very high pressure, allows her to study the formation of the solid compounds: their size, nature, aggregation speed, etc. She has been closely examining all the possible scenarios: “We vary the temperature and pressure, but also the nature of the mix, for example by incorporating more or less gas, or by varying the proportion of water in the mixture”, explains Ana Cameirao.

Thanks to all these trials, in 2016 the researcher and her team published one of the most comprehensive models of methane hydrate crystallization. “Similar models do already exist, but only for fixed proportions of water. Our model is more extensive: it can integrate any proportion of water. This allows a greater variety of oil wells to be studied, including the oldest ones, where the mixture can consist of up to 90% water!” This model is the product of painstaking work: over 150 experiments have been completed over the last 5 years, each of them representing at least two days of measurements. Above all, it offers new perspectives: “Petrochemical process simulation software is very limited in describing the flow in pipelines when hydrates form. The main task is to invent modules that are able to take this phenomenon into consideration”, analyzes Ana Cameirao.

 

Applications in environmental technology

This is the next step of a project that has just come to fruition: “We are currently aiming to combine our knowledge of hydrate crystallization with that of experts in fluid mechanics, in order to better characterize their flow”. This multidisciplinary approach is the main subject of the international chair Gas Hydrates and Multiphase Flow in Flow Assurance, launched in January 2017 by Mines Saint-Étienne in collaboration with two laboratories from the Federal University of Technology in Parana, Brazil (UTFPR), and the Colorado School of Mines in the US. The chair, which will run for three to five years, also involves industrial partners, first and foremost Total. “Total, which has been a partner of the research center for 15 years, not only offers financial support, but also shares with us its experience of real-world operations”, says Ana Cameirao.

 

Credits: Mines Saint-Étienne

 

A better understanding of hydrate crystallization will facilitate the offshore exploitation of hydrocarbons, but it could also benefit environmental technology over time. Indeed, researchers are working on innovative applications of hydrates, such as the capture of CO2 or new air conditioning techniques. “The idea would be to form hydrate slurries overnight, when energy is available and less expensive, and then circulate them through a climate control system during the daytime. As the hydrates melt, they would absorb heat from the surrounding area”, explains Ana Cameirao. Clearly, it seems that crystallization can lead to anything!

 

[author title=”Ana Cameirao : « Creativity comes first »” image=”https://imtech-test.imt.fr/wp-content/uploads/2017/09/Portrait_Ana_Cameirao.jpg”]

Ana Cameirao chose very early on to pursue a course in engineering in her home country of Portugal. “It was the possibility of applying science that interested me, this potential to have a concrete impact on people’s lives”, she recalls. After finishing her studies in industrial crystallization at IMT Mines Albi, she threw herself into applied research. “It’s a constant challenge, we are always discovering new things”, she marvels, looking back over her ten years at the SPIN center at Mines Saint-Étienne.

Ana Cameirao also calls on creativity in her role as a professor, backed by innovative teaching methods which include projects, specific case studies, independent literature research, and much more. “Students today are no longer interested in two-hour lectures. You need to involve them”, she says. The teacher feels so strongly about this that she decided to complete a MOOC on methods for stimulating creativity, and plans to organize her own workshops on the subject within her school in 2018!

[/author]

 

 


Seald: transparent cryptography for businesses

Since 2015, the start-up Seald has been developing a solution for the encryption of email communication and documents. Incubated at ParisTech Entrepreneurs, it is now offering businesses a simple-to-use cryptography service, with automated operations. This provides an alternative to the software currently on the market, which is generally hard to get to grips with.

 

Cybersecurity has become an essential issue for businesses. Faced with the growing risk of data hacking, they must find defense solutions. One of these is cryptography, which allows businesses to encrypt data so that a malicious hacker who managed to steal it could not read it. This is what is offered by the start-up Seald, founded in 2015 in Berkeley, in the United States, which, after a period in San Francisco in 2016, is now incubated at ParisTech Entrepreneurs. Its defining feature? Its solution is totally transparent for the employees of the business.

“There are already solutions on the market, but they require you to open software and carry out a procedure that can take dozens of clicks just to encrypt a single email”, says Timothée Rebours, co-founder of the start-up. In contrast, Seald is much simpler and faster to use. When a user sends an email, a simple checkbox appears in the email interface; ticking it encrypts the message. It is then guaranteed that neither the content nor any attachments can be read, should the message be intercepted.

If the recipient also has Seald, communication is encrypted at both ends, and messages and documents can be read just as transparently. If they do not have Seald, they can install it for free. However, this is not always possible if the policy of the recipient’s firm prohibits the installation of external applications on workstations. In this case, an online two-step identification system, using a code received via SMS or email, allows them to authenticate themselves and read the document securely.
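By way of illustration only, here is a generic sketch in Python of the principle of client-side authenticated encryption, i.e. encrypting content before it leaves the sender’s machine (it uses the standard cryptography library and invented data; it is not Seald’s actual protocol, API or key-management scheme):

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # authenticated encryption

# Hypothetical example: encrypt an attachment locally before sending it.
key = AESGCM.generate_key(bit_length=256)   # in a real product, keys are provisioned and exchanged per user
nonce = os.urandom(12)                      # must never be reused with the same key
attachment = b"contents of the attached report"

ciphertext = AESGCM(key).encrypt(nonce, attachment, b"to:bob@company-b.com")
# An interceptor sees only ciphertext; the recipient, holding the key, can decrypt and verify integrity:
assert AESGCM(key).decrypt(nonce, ciphertext, b"to:bob@company-b.com") == attachment

The point of the checkbox described above is precisely to hide this kind of machinery from the user.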

For the moment, Seald can be used with the most recent email services, such as Gmail and Outlook. “We are also developing specific implementations for companies using internal messaging services”, explains Timothée Rebours. The start-up’s aim is to cover all possible email applications. “In this way, we are responding to uses that correspond to real problems within businesses”, explains the co-founder. He adds: “Once we have finished what we are currently working on, we will then start integrating other kinds of messaging, but probably not before.”

 

Towards an automated and personalized cryptography

Seald is also hoping to improve its design, which currently requires people sending emails or documents to check a box. The objective is to limit, as far as possible, the risk that they forget to do so. The ideal would therefore be automatic encryption tailored to the sender, the document being sent and the recipient. Reaching this goal is a task which Seald endeavors to fulfil by offering a range of features to the managers of IT systems within businesses.

Administrators already have several parameters they can use to automate data encryption. For example, they can decide to encrypt all messages sent from a company email address to the email addresses of another business. Using this method, if company A starts a project with company B, all emails sent between a company A address and a company B address would be encrypted by default. The security of communications is therefore no longer left in the hands of the employees working on the project, which means they cannot forget to encrypt their documents and saves them valuable time.
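A hypothetical sketch of such a domain-based rule, with invented names and written in Python rather than in Seald’s actual configuration format, could look like this:

# Hypothetical administrator policy: encrypt by default between two partner domains.
ENCRYPT_BETWEEN = {frozenset({"company-a.com", "company-b.com"})}

def must_encrypt(sender: str, recipient: str) -> bool:
    """Return True when the administrator's rule forces encryption for this pair of addresses."""
    domains = frozenset({sender.split("@")[1], recipient.split("@")[1]})
    return domains in ENCRYPT_BETWEEN

print(must_encrypt("alice@company-a.com", "bob@company-b.com"))  # True: encrypted by default
print(must_encrypt("alice@company-a.com", "carol@other.org"))    # False: left to the sender's checkbox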

The start-up is pushing the features offered to IT administrators even further. It allows them to associate each document type with a revocation condition. Encrypted information sent to a third-party company – such as a consulting or communication firm – can be made unreadable after a certain time, for example at the end of a contract. The administrator can also revoke a device’s or user’s rights of access to the encrypted information, for instance if an employee leaves the company on bad terms.

By offering businesses this solution, Seald is changing companies’ perceptions of cryptography, with easy-to-understand functionality. “Our aim has always been to offer encryption to the masses”, says Timothée Rebours. Reaching company employees could be the first step towards raising broader public awareness of cybersecurity and the protection of data in communications.


Mohamed Daoudi

IMT Nord Europe | #PatternRecognition #Imaging

[toggle title=”Find all his articles on I’MTech” state=”open”]

[/toggle]


The fundamentals of language: why do we talk?

Human language is a mystery. In a society where information is so valuable, why do we talk to others without expecting anything in return? Even more intriguing are the processes that shape communication, whether a profound debate or a spontaneous conversation with an acquaintance. These are the questions driving the work of Jean-Louis Dessalles, a computer science researcher at Télécom ParisTech. His work has led him to reconsider the perspective on information adopted by Claude Shannon, a pioneer in the field. He has devised original theories and conversational models which explain trivial discussions just as well as heated debates.

 

Why do we talk? And what do we talk about? Fueled by the optimism of a young researcher, Jean-Louis Dessalles hoped to answer these two questions within a few months of finishing his thesis in 1993. Nearly 24 years have now passed, and the subject of his research has not changed. From his office in the Computing and Networks department at Télécom ParisTech, he continues to study language. His work breaks away from the classic approach adopted by researchers in information and communication science. “The discipline mainly focuses on the ways we can convey messages, but not on what is conveyed or why”, he explains, departing from the approach to communication described by Claude Shannon in 1948.

The reasons for communication, along with the underlying motives for these exchanges of information, are nevertheless legitimate and complex questions. As the researcher explains in the film Le Grand Roman de l’Homme, released in 2014, communication contradicts various behavioral theories. Game theory, for example, sometimes used in economics to describe and analyze behavioral mechanisms, struggles to justify the role of communication between humans. According to this theory, and assuming all information has value, the expected behavior would be for each participant to provide as little information as possible whilst trying to glean the maximum from the other person. However, this is not how humans behave in everyday discussions. “We need to consider the role of communication in a social context”, deduces Jean-Louis Dessalles.

By dissecting the scientific elements of communication situations (interviews, behavior in online forums, discussions, etc.), he has tried to find an explanation for why people offer up useful information. The hypothesis he is putting forward today is compatible with all observable communication types: for him, offering up quality information is not motivated by economic gain, as game theory assumes, but rather by a gain in social reputation. “In technical online forums, for example, experts don’t respond out of altruism, or for monetary gain. They are competing to give the most complete response in order to assert their status as an expert. In this way they gain social standing”, explains the researcher. Talking, and showing our ability to stay informed, is therefore a way of positioning ourselves in a social hierarchy.

 

When the unexpected liberates language

With the question of “why do we talk” cleared up, we still need to find out what it is we talk about. Jean-Louis Dessalles isn’t interested in the subject of discussions per se, but rather in the general mechanisms governing the act of communication. After analyzing tens of hours of recordings in detail, he has come to the conclusion that a large part of spontaneous exchange is structured around the unexpected. The triggers of spontaneous conversation are often events that humans consider unlikely or abnormal, in other words, breaks in the normality of a situation. Seeing a person over 2m tall, a row of parked cars all the same color, or a lotto draw where all the numbers follow on from one another: these are all instances likely to provoke surprise in an individual and encourage them to engage in spontaneous conversation with an interlocutor.

In order to explain this engagement based on the unexpected, Jean-Louis Dessalles has developed Simplicity Theory. According to him, the unexpected corresponds above all to things that are simple to describe. “Simple”, because it is always easy to describe an out-of-the-ordinary situation: one merely has to place the focus on the unexpected element. For example, describing a person who is 2m tall is easy, because this criterion alone is enough to anchor the narrative. In contrast, describing a person of normal height and weight, with standard clothes and a face with no particular distinctive features, would require a much more complex description.
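In published presentations of Simplicity Theory, this intuition is formalized (summarizing the theory’s standard notation rather than quoting the interview) as

U(s) = C_gen(s) - C_desc(s)

where C_gen is the complexity of generating the situation s and C_desc the complexity of describing it; a situation is unexpected, and hence worth telling, when it is much easier to describe than to generate. The 2-meter-tall passer-by has a very low description complexity (one salient feature suffices), which is what makes U large.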

Although simplicity may be a driver of spontaneous conversation, another significant category of discussion also exists: argumentative conversation. In this case, the unexpected no longer applies. This kind of exchange follows a model defined by Jean-Louis Dessalles, called CAN (Conflict, Abduction, Negation). “To start an argument, there has to be a conflict, opposing points of view. Abduction is the following stage, which consists in tracing the conflict back to its cause in order to work on it and deploy arguments. Finally, negation allows the participants to move to counterfactuals in order to reflect on solutions which would resolve the conflict.” Beyond this simple description, the CAN model could help advance the development of artificial intelligence (see text box).
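As a rough procedural reading of that description (a schematic sketch built only from the sentences above, not from Dessalles’ own implementation), the CAN cycle can be written as a loop:

# Schematic reading of the Conflict-Abduction-Negation (CAN) cycle described above.
def can_dialogue(my_view, observed, abduce, negate, max_turns=10):
    """Toy loop: argue until the conflict between a belief and an observation is resolved."""
    for _ in range(max_turns):
        if my_view == observed:              # no conflict left: the argument ends
            return my_view
        cause = abduce(my_view, observed)    # Abduction: trace the conflict back to a presumed cause
        my_view = negate(my_view, cause)     # Negation: revise the belief via a counterfactual on that cause
    return my_view                           # give up after too many turns, conflict unresolved

Each pass through the loop corresponds to one argumentative move: detect the conflict, explain it, then explore what would change if its cause were removed.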

 

[box type=”shadow” align=”” class=”” width=””]

When artificial intelligence looks at language theories

“Machines should be able to hold a reasonable conversation in order to appear intelligent”, asserts Jean-Louis Dessalles. For the researcher, the test invented by Alan Turing, which holds that a machine is intelligent if a human cannot tell the difference between it and another human in conversation, is completely legitimate. His work therefore has a place in the development of artificial intelligence capable of passing this test. It is essential to understand human communication mechanisms in order to transfer them to machines. A machine integrating the CAN model would be better able to debate with a human. In the case of a GPS, it would allow the device to plan routes by incorporating factors other than simply time or distance. Being able to discuss with a GPS, in a logical manner, what we expect from a journey – beautiful scenery, for example – would significantly improve the quality of the human-machine interface.

[/box]

 

In the hours of conversation recorded by the researcher, spontaneous discussions induced by unexpected elements and argumentative discussions accounted for 25% and 75% respectively. He notes, however, that the line separating the two is not strict, since a spontaneous narrative can lead to a deeper debate, which then falls within the scope of the CAN model. These results offer a response to the question “what do we talk about?” and consolidate years of research. For Jean-Louis Dessalles, it is proof that “it pays to be naïve”. His initial boldness eventually led him to theorize, over the course of his career, several models on which humans base their communication, and probably will for a long time to come.

[author title=”Jean-Louis Dessalles, computer scientist, human language specialist” image=”https://imtech-test.imt.fr/wp-content/uploads/2017/09/JL_Dessalles_portrait_bio.jpg”]A graduate of École Polytechnique and Télécom ParisTech, Jean-Louis Dessalles became a researcher in computing after obtaining his PhD in 1993. The link to questions about human language and its origins, normally reserved for linguists or ethnologists, is therefore not obvious. “I chose to focus on a subject relevant to the resources I had available to me, which were computer sciences”, he argues.

He went on to carry out research that contradicts Claude Shannon’s probabilistic approach, which is how he presented it at a conference at the Institut Henri Poincaré in October 2016 marking the centenary of the birth of the father of information theory.

His reflections on information have been the subject of a book, Le fil de la vie, published by Odile Jacob in 2016. He is also the author of several books about the question of language emergence. [/author]

 


GreenTropism, the start-up making matter interact with light

The start-up GreenTropism, a specialist in spectroscopy, won an interest-free loan from the Fondation Mines-Télécom last June. It plans to use it to strengthen its R&D and develop its sales team. Its technology is based on machine learning and is intended for both industrial and academic use, with application prospects ranging from the environment to the IoT.

 

Is your sweater really cashmere? What is the protein and calorie content of your meal? The answers to these questions may come from one single field of study: spectroscopy. Qualifying and quantifying matter is at the heart of the mission of GreenTropism, a start-up incubated at Télécom SudParis. To do this, its founders use spectroscopy. “The discipline studies interactions between light and matter”, explains Anthony Boulanger, CEO of GreenTropism. “We all do spectroscopy without even knowing it, because our eyes actually work as spectrometers: they are light-sensitive and send out signals which are then analyzed by our brains. At GreenTropism, we play the role of the brain for classic spectrometers, using spectral signatures, algorithms and machine learning.”

The old becoming the new

GreenTropism builds on two techniques developed in the 1960s: spectroscopy and machine learning. Mastering the first requires a keen understanding of what a photon is and how it interacts with matter. Depending on the kind of radiation used (X-rays, ultraviolet, visible, infrared, etc.), the spectral responses are not the same. Depending on what we want to observe, one type of radiation will be more or less suitable. UV rays detect, amongst other things, organic molecules with aromatic rings, whilst near infrared allows the assessment of water content, for example.

The machine learning element is managed by data scientists working hand in hand with geologists and biochemists from the R&D team at GreenTropism. “It’s important to fully understand the subject we are working on and not to simply process data”, specifies Anthony Boulanger. The start-up has been developing machine learning in the hope of processing several types of spectral data. “Early on, we set up an analysis lab within Irstea. Here, we assess samples with high-resolution spectrometers. This allows us to supplement our database and therefore create our own algorithms. In spectroscopy, there is great variation of data. These come from the environment (wood, compost, waste, water, etc.), from agriculture, from cosmetics, etc. We can study all types of organic matter”, explains the innovator.
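As an illustration of the kind of model such a spectral database makes possible, here is a generic chemometrics sketch with simulated spectra, using scikit-learn’s partial least squares (PLS) regression; it does not reproduce GreenTropism’s proprietary algorithms, and all the numbers are invented:

import numpy as np
from sklearn.cross_decomposition import PLSRegression  # a classic chemometrics workhorse

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 200, 500
spectra = rng.normal(size=(n_samples, n_wavelengths))                      # simulated absorbance spectra
water_content = 2.0 * spectra[:, 120] + rng.normal(0.0, 0.05, n_samples)   # hidden property to predict

model = PLSRegression(n_components=5)
model.fit(spectra[:150], water_content[:150])    # calibrate on laboratory-labeled samples
predictions = model.predict(spectra[150:])       # instant prediction on new, unlabeled spectra
print(predictions[:3].ravel())

The workflow mirrors what is described here: a reference laboratory provides labeled spectra, and the learned model then turns any new spectrum into the value of interest in real time.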

GreenTropism’s expertise goes even further. A deep understanding of infrared, visible and UV radiation, as well as laser-based techniques (LIBS, Raman), allows the start-up to provide a software platform and agnostic models. This means they can be adjusted to various types of radiation and are independent of the spectrometer used. Anthony Boulanger adds: “our system allows results to be obtained in real time, whereas traditional analyses in a lab can take several hours to several days.”

[box][one_half]

A miniaturized spectrometer.

[/one_half][one_half_last]

A traditional spectrometer.

[/one_half_last] [/box]

Credits: Share Alike 2.0 Generic

Real-time analysis technology for all levels of expertise

“Our technology consists in a machine learning platform for creating spectrum interpretation models. In other words, it is software that transforms a spectrum into a value of interest to a manufacturer who has already mastered spectrometry. This gives them an operational result, allowing them to control and improve the overall quality of their process”, explains the CEO of GreenTropism. By using a traditional spectrometer together with the GreenTropism software, a manufacturer can, for example, verify the quality of a raw material on delivery and ensure that its specification is fulfilled. Continuous analysis also allows the entire production chain to be monitored in real time and in a non-destructive way. As a result, all finished products, as well as those in the transformation process, can be analyzed systematically. In this case, the objective is to characterize the material of a product: it is used, for example, to distinguish between materials or between two species of wood.

GreenTropism also receives support from partnerships with academic institutions such as Irstea or Inrea. These partnerships allow it to extend its fields of expertise, whilst deepening its understanding of matter.

GreenTropism’s technology is also aimed at novices wanting to analyze samples instantly. “In this case, we rely on our lab to build a database proactively, before putting the machine learning platform in place”, adds Anthony Boulanger. Here, it is a question of qualifying matter. Obtaining details about the composition of an element, such as the nutritional content of a food item, is a direct application. “The needs linked to spectroscopy are still loosely defined, since we process organic matter. We can measure general parameters such as the ripeness of a piece of fruit, as well as more specific details such as the quantity of glucose or sucrose a product contains.”

Towards the democratization of spectroscopy

The fields of application are vast: environment, industry, the list goes on. But GreenTropism’s technology is also suited to general public uses through the Internet of Things, mass-market electronics and household appliances. “The advantage of spectroscopy is that there is no need for close contact between the instrument and the matter being analyzed. This allows for potential combinations between everyday devices and spectrometers where the user doesn’t have to worry about technical aspects such as calibration. Imagine coffee machines that allow you to select the caffeine level in your drink. We could also monitor the health of our plants through a smartphone”, explains Anthony Boulanger. This last use would function like a camera: after a flash of light is emitted, the program receives a spectral response. Rather than a photograph, the user would learn, for example, the water level in their flower pot.

In order to make these functions possible, GreenTropism is working on the miniaturization of spectrometers. “Today, spectrometers in labs are 100% reliable. A new, so-called ‘miniaturized’ (hand-held) generation is entering the market. However, there are few scientific publications on these devices’ reliability, which casts doubt on their value. This is why we are working on making this technology reliable at the software level. It is a market which opens up a lot of doors for us, including one leading to the general public”, Anthony Boulanger concludes.


VIGISAT: monitoring and protection of the environment by satellite

Following on from our series on the platforms provided by the Télécom & Société numérique Carnot institute, we now look at VIGISAT, based near Brest. This collaborative hub is also a project focusing on high-resolution satellite monitoring of oceans and continents.

 

On July 12, scientists in Wales observed a drifting iceberg four times the size of London. The imposing block of ice, which broke away from Antarctica, is currently drifting in the Weddell Sea and has already started to crack. This close monitoring of icebergs is made possible by satellite images.

Although perhaps not directly behind this observation, the Breton Observation Station, VIGISAT, is particularly involved in the matter of maritime surveillance. It also gathers useful information on protecting the marine and terrestrial environments. René Garello, a researcher at IMT Atlantique, presents the main issues.

 

What is VIGISAT?

René Garello: VIGISAT is a reception center for satellite data (radar sensors only) operated by CLS (Collecte Localisation Satellites) [1]. The station benefits from the expertise of the Groupement d’Intérêt Scientifique Bretagne Télédétection (BreTel), a community made up of nine academic members and partners from the socio-economic world. Its objective is to demonstrate the relevance of easily accessible data for developing methods of observing the planet. It serves both the research community (its academic partners) and “end users” in the business world.

VIGISAT is also a project within the Breton CPER (Contrat de Plan État-Région) framework, which has been renewed to run until 2020. The station/project concept was named a platform by the Institut Carnot Télécom & Société Numérique at the end of 2014.

 

The VIGISAT station

 

What data does VIGISAT collect and how does it process this?

RG: The VIGISAT station receives data from satellites carrying Synthetic Aperture Radars (better known as SARs). This microwave sensor allows us to obtain very high resolution imaging of the Earth’s surface. The data received by the station come from the Canadian satellite RADARSAT-2 and, in particular, from the new series of European satellites: SENTINEL. These are sun-synchronous satellites [NB: the satellite always passes over a given point at the same solar time], which orbit at an altitude of 800 km and circle the Earth in about 100 minutes.

We receive the raw information collected by the satellites, in other words, data in the form of unprocessed bit streams. The data are then transmitted by fiber optic to the processing center, which is also located on the site. “Radar images” are then constructed from the raw information and the radar’s known parameters. The final data, although in image form, require expert interpretation. In simple terms, the emitted radar wave is sensitive to the properties of the observed surfaces: each type of ground cover (vegetation, bare surfaces, urban areas, etc.) returns its own characteristic signal. Furthermore, the information retrieved depends on the sensor’s intrinsic parameters, such as the wavelength or the polarization.
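By way of illustration, here is a generic radiometric conversion commonly applied to SAR products (a simplified sketch: it is not CLS’s actual processing chain, and the digital numbers and calibration constant are invented):

import numpy as np

dn = np.array([[120.0, 340.0], [90.0, 800.0]])   # hypothetical raw digital numbers from an image tile
calibration = 500.0                              # illustrative calibration coefficient

sigma0 = (dn ** 2) / (calibration ** 2)          # backscatter coefficient on a linear scale
sigma0_db = 10.0 * np.log10(sigma0)              # expressed in decibels, as radar imagery usually is
print(sigma0_db)                                 # bright returns (ships, urban areas) stand out from dark sea

It is on calibrated products of this kind, rather than on the raw bit streams, that the interpretation work described here begins.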

 

What scientific issues are addressed using VIGISAT data?

RG: CLS and researchers from the members of GIS BreTel are working on diverse and complementary issues. At IMT Atlantique and Rennes 1 University, we mainly focus on methodological aspects. For example, for 20 years we have built a high level of expertise in the statistical processing of images. In particular, this allows us to identify areas of interest in terrestrial images, or surface types on the ocean. More recently, we have been confronted with the sheer immensity of the data we collect. We therefore deploy machine learning, data mining and other algorithms in order to fully exploit these databases.

Other GIS institutions, such as Ifremer or IUEM [2], are working on marine and coastal topics, in collaboration with us. For example, research has been carried out on estuary and delta areas, such as the Danube. The aim is to quantify the effect of flooding and its persistence over time.

Finally, continental themes such as urban planning, land use, agronomy and ecology are mainly studied by Rennes 2 University and Agrocampus. In the case of urban planning, satellite observations allow us to locate and map the green urban fabric. This allows us to estimate the allergenic potential of public spaces, for example. It should be noted that much of this work, which began as research, has led to the creation of viable start-ups [3].

What projects has VIGISAT led?

RG: Since 2010, VIGISAT’s privileged access to data has allowed it to support various other research projects. Indeed, it has created a lasting dynamic within the scientific community around land-use planning and the surveillance and controlled exploitation of territories. Amongst the projects currently underway is, for example, CleanSeaNet, which focuses on the detection and monitoring of marine pollution. KALIDEOS-Bretagne looks at changes in land use and landscape along a town-countryside gradient. SESAME deals with the management and exploitation of satellite data for marine surveillance purposes.

 

Who is benefitting from the data analyzed by VIGISAT?

RG: Several targets were identified whilst preparing the CPER 2015-2020 support request. One of these objectives is to generate activity around the use of satellite data by Breton businesses. This includes the development of new public services based on satellite imaging, in order to foster downstream services in keeping with a strategy of developing regional subsidiaries.

One sector that benefits from the data and their processing is undoubtedly the highly reactive socio-economic world (start-ups, SMEs, etc.) built around the uses we discussed earlier. On a larger scale, protection and surveillance services are also addressed through action coordinated between the developers and suppliers of a service, such as the GIS, and the authorities at regional, national and European levels. By way of example, BreTel has been a member of NEREUS (Network of European Regions Using Space technologies) since 2009. This allows the region to hold a strong position as a center of expertise in marine surveillance (including the detection and monitoring of oil pollution), and also to analyze ecological corridors in the context of biodiversity.

[1] CLS is an affiliate of CNES, ARDIAN and Ifremer. It is an international company that has specialized in Earth observation and surveillance solutions since 1986.
[2] European Institute for Marine Studies
[3] Some examples of these start-ups include: e-ODYN, Oceandatalab, Hytech Imaging, Kermap, Exwews, and Unseenlab.

[box type=”info” align=”” class=”” width=””]

On VIGISAT:

The idea for VIGISAT began in 2001 with the start-up BOOST Technologies, a spin-off from IMT Atlantique (formerly Télécom Bretagne). From 2005, proposals were made to various partners, including the Bretagne Region and the Brest Metropolis, to develop an infrastructure like VIGISAT on the campus close to Brest. Following BOOST Technologies’ merger with CLS in 2008, the project took off with the creation of GIS BreTel in 2009. In the same year, the VIGISAT project was successfully presented to the CPER. BreTel then expanded its roadmap by adding the “research” strand, as well as the “training”, “innovation” and “promotion/dissemination” aspects. GIS BreTel is currently focusing on the “activity creation” and “new public services” strands, which are in tune with the philosophy of the Carnot platforms.

BreTel also has a presence on a European level. GIS and its members have gained the title of “Copernicus Academy”. Thanks to this, they receive support from specialists in the European Copernicus program for all their education needs. From the end of 2017, BreTel and its partners will be participating in the Business Incubator Centers at ESA (ESA-BIC) which will cover five regions in Northern France (Brittany, Pays de la Loire, Ile-de-France, Hauts-de-France and Grand-Est), headed by the Brittany region.[/box]

[box type=”shadow” align=”” class=”” width=””]

The TSN Carnot institute, a guarantee of excellence in partnership-based research since 2006

 

Having first received the Carnot label in 2006, the Télécom & Société numérique Carnot institute is the first national “Information and Communication Science and Technology” Carnot institute. Home to over 2,000 researchers, it is focused on the technical, economic and social implications of the digital transition. In 2016, the Carnot label was renewed for the second consecutive time, demonstrating the quality of the innovations produced through collaborations between researchers and companies. The institute encompasses Télécom ParisTech, IMT Atlantique, Télécom SudParis, Télécom École de Management, Eurecom, Télécom Physique Strasbourg and Télécom Saint-Étienne, École Polytechnique (Lix and CMAP laboratories), Strate École de Design and Femto Engineering.[/box]


 


Brennus Analytics: finding the right price

Brennus Analytics offers software solutions, making artificial intelligence available to businesses. Algorithms allow them to determine the optimal sales price, helping bring businesses closer to their objective of gaining market share and margin whilst also satisfying their customers. Currently incubated at ParisTech Entrepreneurs, Brennus Analytics also allows businesses to make well-informed decisions about their number one profitability lever: pricing.

 

Setting the price of a product can be a real headache for businesses. It is, however, a crucial stage which can determine the success or failure of an entire commercial strategy. If the price is too high, customers won’t buy the product. Too low, and the margin is too weak to guarantee sufficient revenue. To help businesses find the right price, the start-up Brennus Analytics, incubated at ParisTech Entrepreneurs, offers software that makes artificial intelligence technology accessible to businesses. Founded in October 2015, the start-up builds on its founders’ own experience in the field, in their former roles as researchers at the Institut de Recherche en Informatique de Toulouse (IRIT).

The start-up is simplifying a task which can prove arduous and time-consuming for businesses. Hundreds, or indeed, thousands of factors have to be considered when setting prices. What is the customer willing to pay for the product? At what point in the year is there the greatest demand? Would a price drop have to be compensated for by an increase in volume? These are just a few simple examples showing the complexity of the problem to be resolved, not forgetting that each business will also have its own individual set of rules and restrictions concerning prices. “A price should be set depending on the product or service, the customer, and the context in which the transaction or contractual negotiation will take place”, emphasizes Emilie Gariel, the Marketing Director at Brennus Analytics.

 

Brennus Analytics

 

In order to achieve this, the team at Brennus Analytics relies on solid knowledge of the pricing problem, combining it with data science and artificial intelligence technology. The technology they choose to implement depends on the problem they are trying to solve. For statistical problems, machine learning, deep learning and similar technologies are used. For more complex cases, Brennus employs an exclusive technology called an “Adaptive Multi-Agent System” (AMAS). This works by representing each factor to be considered as an agent. The optimal price is then obtained through an exchange of information between these agents, taking into account the objectives set by the business. “Our solution doesn’t try to replace human input, it simply provides valuable assistance in decision-making. This is also why we favor transparent artificial intelligence systems; it is crucial that the client understands the suggested price”, affirms Emilie Gariel.
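A highly simplified sketch of the multi-agent idea, with invented agents and numbers purely to illustrate how agents pull a price toward their own objective (it is not Brennus’ AMAS technology):

# Each "agent" embodies one factor and nudges the price toward its own objective.
def margin_agent(price, cost=40.0, target_margin=0.30):
    return +1.0 if price < cost * (1 + target_margin) else 0.0   # pushes the price up to protect margin

def demand_agent(price, willingness_to_pay=60.0):
    return -1.0 if price > willingness_to_pay else 0.0           # pushes the price down to keep customers

def competition_agent(price, competitor_price=58.0):
    return -0.5 if price > competitor_price else 0.0             # mild pressure to stay competitive

price, step = 50.0, 0.5
for _ in range(200):                         # the agents iteratively negotiate an equilibrium price
    adjustment = margin_agent(price) + demand_agent(price) + competition_agent(price)
    if adjustment == 0:
        break
    price += step * adjustment
print(round(price, 2))                       # settles where the competing objectives balance out

A real AMAS involves far more agents and richer interactions, but the principle of an equilibrium emerging from local exchanges is the same.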

The data used to run these algorithms come from the businesses themselves. Most have a transaction history and a large quantity of sales data available. These databases can potentially be supplemented by open-source data. However, the marketing director at Brennus Analytics warns: “We are not a data provider. That said, several start-ups are developing in the field of data mixing and can assist our clients if they are looking, for example, to gather competitors’ prices.” She is careful to add: “Always wanting more data doesn’t really make much sense. It’s better to find a middle ground between exploiting internal data, which is sometimes limited, and joining the race to accumulate information.”

To illustrate Brennus’ solution, Emilie Gariel gives the example of a key player in the distribution of office supplies. “This client was facing intense pressure from its competition and felt it had not always positioned itself well in terms of pricing”, she explains. Its prices were set on the basis of a margin objective per product category. This approach was too generic and disconnected from the customer: it led to prices that were too high for popular products in this competitive market, and prices that were too low for products where customers’ price sensitivity was weaker. “The software allowed an optimization of prices which had a strong impact on the margin, by integrating a dynamic segmentation of products and flexibility in pricing”, she concludes.

The capacity to clarify and then resolve complex problems is likely Brennus’ greatest strength. “Without an intelligent tool like ours, businesses are forced to simplify the problem excessively. They consider fewer factors, simply basing prices on segments and other limited contexts. Their pricing is therefore often sub-optimal. Artificial intelligence, on the other hand, is able to work with thousands of parameters at the same time”, explains Emilie Gariel. The solution offers businesses several ways to increase their profitability by working on the different components of pricing (costs, reductions, promotions, etc.). In this way, it perfectly illustrates the potential of artificial intelligence to improve decision-making processes and profitability in businesses.

 

supply chain management, Matthieu Lauras

What is supply chain management?

Behind each part of your car, your phone or even the tomato on your plate, there is an extensive network of contributors. Every day, billions of products circulate. The management of a logistics chain – or ‘supply chain management’ – organizes these movements on a smaller or larger scale. Matthieu Lauras, a researcher in industrial engineering at IMT Mines Albi, explains what it is all about, the problems associated with supply chain management, and the solutions to them.

 

What is a supply chain?

Matthieu Lauras: A supply chain consists of a network of facilities (factories, shops, warehouses, etc.) and partners, ranging from the suppliers’ suppliers to the customers’ customers. It is the succession of all these participants that provides added value and allows a finished product or service to be created and delivered to the end consumer.

In supply chain management, we focus on the flows of materials and information. The idea is to optimize the overall performance of the network: to be capable of delivering the right product to the right place at the right time, with the right level of quality and cost. I often tell my students that supply chain management is the science of compromise. You have to strike a good balance between several constraints and issues. This is what allows you to maintain a sustainable level of competitiveness.

 

What difficulties do supply chains present?

ML: The biggest difficulty with supply chains is that they are not managed in a centralized way. Within a single business, for example, the CEO can act as a mediator between two departments if there is a problem. At the scale of a supply chain, however, there are several businesses with distinct legal statuses, and no one is in a position to act as mediator. This means that participants have to get along, collaborate and coordinate.

This isn’t easy to do, since one of the characteristics of a supply chain is that the local and global optimums do not fully coincide. For example, I optimize my production by selling my product in 6-packs, to make things quicker, even though this isn’t necessarily what my customers need in order to sell the product on. They may prefer that the product be sold in packs of 10 rather than 6. What I gain by producing 6-packs is therefore lost by the next participant, who has to transform my product. This is just one example of the type of problem we try to tackle through research into supply chain management.
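A back-of-the-envelope sketch, with purely hypothetical costs, makes the gap between the two optimums visible: the 6-pack is the cheapest option for the producer taken alone, but the most expensive for the chain as a whole once the next participant’s repacking cost is counted.

```python
# Toy illustration of the local vs. global optimum gap described above.
# All costs are hypothetical and chosen only to make the arithmetic visible.

units = 600                      # units moving through the chain

# Producer's own cost per unit: 6-packs are cheaper for the producer to fill.
producer_cost = {"pack of 6": 0.50, "pack of 10": 0.58}

# Downstream partner's cost per unit: 6-packs must be re-packed into 10s.
downstream_cost = {"pack of 6": 0.20, "pack of 10": 0.05}

for choice in producer_cost:
    local = units * producer_cost[choice]
    total = units * (producer_cost[choice] + downstream_cost[choice])
    print(f"{choice:10s} producer pays {local:5.0f}, whole chain pays {total:5.0f}")

# pack of 6  : producer pays 300, whole chain pays 420  <- local optimum
# pack of 10 : producer pays 348, whole chain pays 378  <- global optimum
```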

 

What does supply chain management research consist in?

ML: Research in this field spans several levels. There is a lot of information available; the question is how to exploit it. We offer tools which can process this data so it can be passed on to the people (production/logistics managers, operations directors, demand managers, distribution/transport directors, etc.) who are in a position to make decisions and take action.

An important element is uncertainty and variability. The majority of tools used in the supply chain were designed in the 1960s or 1970s. The problem is that they were invented at a time when the economy was relatively stable. A business knew that it would sell a certain volume of a certain product over the next five years. Today, we don’t really know what we’re going to sell in a year. Furthermore, we have no idea about the variations in demand that we will have to deal with, nor the new technological opportunities that may arise in the next six months. We are therefore obliged to ask what developments we can bring to the decision-making tools currently in use, in order to make them better adapted to this new environment.

In practice, research is based on three main stages: first, we design the mathematical models and algorithms that allow us to find an optimal solution to a problem or to compare several potential solutions. Then we develop computing systems able to implement them. Finally, we conduct experiments with real data sets to assess the impact, advantages and disadvantages of the proposed innovations and tools.

Some tools in supply chain management are methodological, but the majority are computer-based. They generally consist of software such as business management packages (ERP-type software containing several general-purpose tools), which can be used at network scale, or APS (‘Advanced Planning and Scheduling Systems’). Four elements are then developed through these tools: planning, collaboration, risk management and lead-time reduction. Among other things, they allow various scenarios to be simulated in order to optimize the performance of the supply chain.

 

What problems are these tools responding to?

ML: Let’s consider planning tools. The supply chain for paracetamol involves a product which needs to be immediately available. However, it takes around nine months between the moment the first component is supplied and the moment the product is actually manufactured. This means we have to anticipate potential demand several months in advance. Based on this forecast, it is possible to plan the supplies of materials needed to manufacture the product, as well as the positioning of stock closer to or further from the customer.
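As a rough sketch of the kind of calculation such a planning tool performs, the snippet below applies the standard reorder-point and safety-stock formulas; the demand figures and service level are assumed, and only the nine-month lead time comes from the example above.

```python
# Minimal planning-tool calculation: with a long lead time and uncertain demand,
# decide how much stock to anticipate. The demand figures and the 95% service
# level are assumed; only the 9-month lead time comes from the example above.

import math

monthly_demand_mean = 100_000     # expected boxes per month (assumed)
monthly_demand_std = 15_000       # month-to-month variability (assumed)
lead_time_months = 9              # from first component supplied to finished product
z = 1.65                          # safety factor for roughly a 95% service level

# Demand expected over the whole lead time, plus a buffer for its variability.
lead_time_demand = monthly_demand_mean * lead_time_months
safety_stock = z * monthly_demand_std * math.sqrt(lead_time_months)
reorder_point = lead_time_demand + safety_stock

print(f"Expected demand over the lead time: {lead_time_demand:,.0f} boxes")
print(f"Safety stock to hold:               {safety_stock:,.0f} boxes")
print(f"Replenish when stock falls below    {reorder_point:,.0f} boxes")
```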

In terms of collaboration, the objective is to avoid conflicts that could paralyze the chain. The tools therefore facilitate the exchange of information and joint decision-making. Take the example of Carrefour and Danone. The latter sets up a TV advertising campaign for its new yogurt range. If this isn’t coordinated with the supermarket, to make sure that the products are in the shops and that there is sufficient space to feature them, Danone risks spending a lot of money on an advertising campaign without being able to meet the demand it creates.

Another range of tools deals with lead-time reduction. A supply chain has a great deal of inertia. A piece of information linked to a change at the end of the chain (higher demand than expected, for example) takes anywhere from a few weeks to several months to have an impact on all participants. This is known as the “bullwhip effect”. To limit it, it is in everyone’s best interest to have shorter chains that are more reactive to changes. Research is therefore looking to reduce waiting times, information transmission times and even transport times between two points.
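A toy simulation can make this bullwhip effect visible. In the sketch below, each upstream tier sees its downstream orders one period late and over-reacts by an assumed factor; both the demand step and the amplification factor are invented for illustration.

```python
# Toy simulation of the bullwhip effect: each upstream tier observes its
# downstream orders one period late and over-orders to rebuild its stock.
# The demand step and the over-reaction factor are invented for illustration.

def upstream_orders(downstream, overreaction=1.3, baseline=100):
    """Each tier orders with a one-period lag, amplifying deviations from baseline."""
    orders = [baseline]                       # no downstream signal yet in period 1
    for demand in downstream[:-1]:
        orders.append(round(baseline + overreaction * (demand - baseline), 1))
    return orders

retail_demand = [100, 100, 120, 120, 120, 120]   # small, lasting rise in period 3
wholesaler = upstream_orders(retail_demand)
factory = upstream_orders(wholesaler)
supplier = upstream_orders(factory)

for name, series in [("retail", retail_demand), ("wholesaler", wholesaler),
                     ("factory", factory), ("supplier", supplier)]:
    print(f"{name:10s} {series}")
# The further upstream, the later and larger the swing: a +20 change at the
# shelf reaches the supplier several periods later as roughly +44.
```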

Finally, today we cannot know exactly what demand will be in six months. This is why we are working on the issue of risk sharing, or “contingency plans”, which allow us to limit the negative impact of risks. This can be implemented by calling upon several suppliers for any given component. If I then have a problem with one of them (a factory fire, liquidation, etc.), I retain my ability to function.
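As a back-of-the-envelope illustration of why this works (the failure probability is assumed and the suppliers are treated as independent), dual sourcing sharply reduces the chance of a complete stoppage.

```python
# Assumed, independent probabilities that a given supplier is unavailable
# during a period (factory fire, liquidation, ...): illustrative figures only.

p_failure = 0.05                       # chance that any one supplier fails

single_source_stoppage = p_failure     # one supplier: its failure stops production
dual_source_stoppage = p_failure ** 2  # two suppliers: both must fail at once

print(f"Single supplier: production stops with probability {single_source_stoppage:.1%}")
print(f"Two suppliers:   production stops with probability {dual_source_stoppage:.2%}")
# 5% becomes 0.25%: the chain keeps functioning unless both sources fail together.
```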

 

Are supply chain management techniques applied to fields other than commercial chains?

ML: Supply chain management is now open to other applications, particularly in the service world, in hospitals and even in banks. The principal aim remains to provide a product or service to a client. In the case of a patient waiting for an operation, resources are needed once they enter the operating theater. All the necessary staff need to be available, from the stretcher bearer who carries the patient to the surgeon who operates on them. It is therefore a question of synchronization of resources and logistics.

Of course, there are also constraints specific to this kind of environment. In humanitarian logistics, for example, the question of the customer does not arise in the same way as in commercial logistics. Indeed, the person benefiting from a service in a humanitarian supply chain is not the person who pays, as they would be in a commercial setting. However, there is still the need to manage the flow of resources in order to maximize the added value produced.