Seald

Seald: transparent cryptography for businesses

Since 2015, the start-up Seald has been developing a solution for the encryption of email communication and documents. Incubated at ParisTech Entrepreneurs, it is now offering businesses a simple-to-use cryptography service, with automated operations. This provides an alternative to the software currently on the market, which is generally hard to get to grips with.

 

Cybersecurity has become an essential issue for businesses. Faced with the growing risk of data hacking, they must find defense solutions. One of these is cryptography, which allows businesses to encrypt data so that a malicious hacker attempting to steal it would not be able to read it. This is what is offered by the start-up Seald, founded in 2015 in Berkeley, USA, which, after a period in San Francisco in 2016, is now incubated at ParisTech Entrepreneurs. Its defining feature? Its solution is totally transparent to all the employees of the business.

“There are already solutions on the market, but they require you to open software and go through a procedure that can take dozens of clicks just to encrypt a single email,” says Timothée Rebours, co-founder of the start-up. In contrast, Seald is much simpler and faster to use. When a user sends an email, a simple icon appears in the messaging interface which can be ticked to encrypt the message. It is then guaranteed that neither the content nor the attachments can be read should the message be intercepted.

If the receiver also has Seald, communication is encrypted at both ends, and messages and documents can be read just as transparently. If they do not have Seald, they can install it for free. However, this is not always possible if the policy of the receiver’s firm prohibits the installation of external applications on workstations. In this case, an online two-factor authentication system using a code received via SMS or email allows them to authenticate themselves and read the document securely.

For the moment, Seald can be used with the most recent email services, such as Gmail and Outlook. “We are also developing specific implementations for companies using internal messaging services,” explains Timothée Rebours. The start-up’s aim is to cover all possible email applications. “In this way, we are responding to real usage needs within businesses,” explains the co-founder. He adds: “Once we have finished what we are currently working on, we will start integrating other kinds of messaging, but probably not before.”

 

Towards an automated and personalized cryptography

Seald is also hoping to improve its design, which currently requires people sending emails or documents to tick a box. The objective is to limit forgetfulness as much as possible. The ideal would be automatic encryption specific to the sender, the document being sent and the receiver. Seald is working towards this goal by offering many features to the managers of IT systems within businesses.

Administrators already have several parameters they can use to automate data encryption. For example, they can decide to encrypt all messages sent from a company email address to the email address of another business. Using this method, if company A starts a project with company B, all emails sent by employees between a company A email address and a company B email address would be encrypted by default. The security of communications is therefore no longer left in the hands of the employees working on the project: they cannot forget to encrypt their documents, and they save valuable time.

The start-up is pushing the features offered to IT administrators even further. It allows them to associate each document type with a revocation condition. Encrypted information sent to a third-party company – such as a consulting or communication firm – can be made unreadable after a certain time, for example at the end of a contract. The administrator can also revoke a device’s or a user’s access rights to the encrypted information, for example when an employee leaves the company or is suspected of malicious intent.

By offering businesses this solution, Seald is changing companies’ perception of cryptography, with easy-to-understand functionalities. “Our aim has always been to offer encryption to the masses,” says Timothée Rebours. Reaching company employees could be the first step towards raising broader public awareness of cybersecurity and data protection in communications.


Mohamed Daoudi

IMT Nord Europe | #PatternRecognition #Imaging

[toggle title=”Find all his articles on I’MTech” state=”open”]

[/toggle]

Language, Artificial Intelligence, Jean-Louis Dessalles

The fundamentals of language: why do we talk?

Human language is a mystery. In a society where information is so valuable, why do we talk to others without expecting anything in return? Even more intriguing are the processes determining communication, whether a profound debate or a spontaneous conversation with an acquaintance. These are the questions driving the work of Jean-Louis Dessalles, a researcher in computer science at Télécom ParisTech. His work has led him to reconsider the perspective on information adopted by Claude Shannon, a pioneer in the field, and to devise original theories and conversational models which explain trivial discussions just as well as heated debates.

 

Why do we talk? And what do we talk about? Fueled by the optimism of a young researcher, Jean-Louis Dessalles hoped to find the answer to these two questions within a few months of finishing his thesis in 1993. Nearly 24 years have now passed, and the subject of his research has not changed. From his office in the Computing and Networks department at Télécom ParisTech, he continues to take an interest in language. His work breaks away from the classic approach adopted by researchers in information and communication science. “The discipline mainly focuses on the ways we can convey messages, but not on what is conveyed or why,” he explains, departing from the approach to communication described by Claude Shannon in 1948.

The reasons for communication, along with the underlying motives for these information exchanges, are nevertheless legitimate and complex questions. As the researcher explains in the film Le Grand Roman de l’Homme, released in 2014, communication contradicts various behavioral theories. Game theory, for example, sometimes used in economics to describe and analyze behavioral mechanisms, struggles to justify the role of communication between humans. According to this theory, and assuming that all information has value, the expected communication situation would be for each participant to provide as little information as possible while trying to glean as much as possible from the other person. However, humans do not follow this logic in everyday discussions. “We need to consider the role of communication in a social context,” concludes Jean-Louis Dessalles.

By dissecting the scientific elements of communication situations (interviews, behavior in online forums, discussions, etc.), he has tried to find an explanation for why people offer up useful information. The hypothesis he is putting forward today is compatible with all observable communication types: for him, offering up quality information is not motivated by economic gain, as game theory assumes, but rather by a gain in social reputation. “In technical online forums, for example, experts don’t respond out of altruism, or for monetary gain. They are competing to give the most complete response in order to assert their status as an expert. In this way they gain social significance,” explains the researcher. Talking and showing our ability to stay informed is therefore a way of positioning ourselves in a social hierarchy.

 

When the unexpected liberates language

With the question of “why do we talk” cleared up, we still need to find out what it is we talk about. Jean-Louis Dessalles isn’t interested in the subject of discussions per se, but rather in the general mechanisms governing the act of communication. After analyzing tens of hours of recordings in detail, he has come to the conclusion that a large part of spontaneous exchange is structured around the unexpected. The triggers of spontaneous conversation are often events that humans consider unlikely or abnormal, in other words, moments when the normality of a situation is broken. Seeing a person over 2 m tall, a row of parked cars all the same color, or a lottery draw where all the numbers follow on from one another: these are all instances likely to provoke surprise in an individual and encourage them to strike up a spontaneous conversation with an interlocutor.

To explain this engagement based on the unexpected, Jean-Louis Dessalles has developed Simplicity Theory. According to him, the unexpected corresponds above all to things which are simple to describe. He says “simple” because it is always easy to describe an out-of-the-ordinary situation, simply by placing the focus on the unexpected element. For example, describing a person who is 2 m tall is easy because this criterion alone is enough to single them out in a narrative. In contrast, describing a person of normal height and weight, with standard clothes and a face with no particular distinctive features, would require a far more complex description.

Although simplicity may be a driver of spontaneous conversation, another significant category of discussion also exists: argumentative conversation. In this case, the unexpected no longer applies. This kind of exchange follows a model defined by Jean-Louis Dessalles, called CAN (Conflict, Abduction, Negation). “To start an argument, there has to be a conflict, opposing points of view. Abduction is the following stage, which consists of tracing the conflict back to its cause in order to shift it and deploy arguments. Finally, negation allows the participants to move on to counterfactuals in order to reflect on solutions which would resolve the conflict.” Beyond this simple description, the CAN model could help advance the development of artificial intelligence (see text box).

 

[box type=”shadow” align=”” class=”” width=””]

When artificial intelligence looks at language theories

“Machines should be able to hold a reasonable conversation in order to appear intelligent,” says Jean-Louis Dessalles. For the researcher, the test invented by Alan Turing, which holds that a machine is intelligent if a human conversing with it cannot tell it apart from another human, is completely legitimate. His work has therefore found a place in the development of artificial intelligence able to pass this test. Understanding human communication mechanisms is essential in order to transfer them to machines. A machine integrating the CAN model would be better able to hold a debate with a human. In the case of a GPS, it would allow the device to plan routes incorporating factors other than simply time or distance. Being able to discuss with a GPS what we expect from a journey – beautiful scenery, for example – in a logical manner would significantly improve the quality of the human-machine interface.

[/box]

 

In the hours of conversation recorded by the researcher, spontaneous discussions triggered by unexpected elements and argumentative discussions accounted for roughly 25% and 75% of exchanges respectively. He notes, however, that the line separating the two is not necessarily strict, since a spontaneous narration can lead to a more profound debate, which then unfolds along the lines of the CAN model. These results offer a response to the question “what do we talk about?” and consolidate years of research. For Jean-Louis Dessalles, it is proof that “it pays to be naïve”. His initial naïvety eventually led him, over the course of his career, to theorize several models of how humans communicate, models on which they will probably continue to rely for a long time to come.

[author title=”Jean-Louis Dessalles, computer scientist, human language specialist” image=”https://imtech-test.imt.fr/wp-content/uploads/2017/09/JL_Dessalles_portrait_bio.jpg”]A graduate of École Polytechnique and Télécom ParisTech, Jean-Louis Dessalles became a researcher in computer science after obtaining his PhD in 1993. At first glance, the link to questions about human language and its origins, normally reserved for linguists or ethnologists, is not obvious. “I chose to approach a subject with the resources I had available to me, which were computer sciences,” he explains.

He went on to carry out research that challenges the probabilistic approach of Claude Shannon, and presented it at a conference at the Institut Henri Poincaré in October 2016 for the centenary of the birth of the father of information theory.

His reflections on information have been the subject of a book, Le fil de la vie, published by Odile Jacob in 2016. He is also the author of several books about the question of language emergence. [/author]

 

GreenTropism, spectroscopy

GreenTropism, the start-up making matter interact with light

The start-up GreenTropism, a specialist in spectroscopy, won an interest-free loan from the Fondation Mines-Télécom last June. It hopes to use this to reinforce its R&D and develop its sales team. Its technology, based on machine learning, is intended for both industrial and academic use, with potential applications ranging from the environment to the IoT.

 

Is your sweater really cashmere? What is the protein and calorie content of your meal? The answers to these questions may come from one single field of study: spectroscopy. Qualifying and quantifying matter is at the heart of the mission of GreenTropism, a start-up incubated at Télécom SudParis. To do this, its team uses spectroscopy. “The discipline studies interactions between light and matter,” explains Anthony Boulanger, CEO of GreenTropism. “We all do spectroscopy without even knowing it, because our eyes actually work as spectrometers: they are light-sensitive and send out signals which are then analyzed by our brains. At GreenTropism, we play the role of the brain for classic spectrometers, using spectral signatures, algorithms and machine learning.”

The old becoming the new

GreenTropism builds on two techniques developed in the 1960s: spectroscopy and machine learning. Getting to grips with the first requires an acute knowledge of what a photon is and how it interacts with matter. Depending on the kind of light used (X-rays, ultraviolet, visible, infrared, etc.), the spectral responses are not the same. Depending on what we want to observe, one type of radiation will be more or less suitable. UV rays detect, amongst other things, organic molecules containing aromatic rings, whilst near infrared allows water content to be assessed, for example.

The machine learning element is handled by data scientists working hand in hand with geologists and biochemists from GreenTropism’s R&D team. “It’s important to fully understand the subject we are working on and not simply process data,” specifies Anthony Boulanger. The start-up has been developing machine learning in order to process several types of spectral data. “Early on, we set up an analysis lab within Irstea. Here, we assess samples with high-resolution spectrometers. This allows us to supplement our database and therefore create our own algorithms. In spectroscopy, data vary greatly. They come from the environment (wood, compost, waste, water, etc.), from agriculture, from cosmetics, and so on. We can study all types of organic matter,” explains the innovator.
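To give a concrete idea of the kind of processing this implies, here is a minimal sketch of a standard chemometrics baseline: partial least squares regression mapping spectra to a property of interest such as water content. It is purely illustrative and uses synthetic data; it is not GreenTropism's pipeline, and every figure in it is a placeholder.

```python
# Illustrative sketch only (not GreenTropism's pipeline): predicting a property
# such as water content from near-infrared spectra with partial least squares,
# a classic chemometrics baseline. All data below are synthetic placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "spectra": 200 samples x 500 wavelengths, with the target property
# loosely encoded in a handful of spectral bands plus measurement noise.
n_samples, n_wavelengths = 200, 500
spectra = rng.normal(size=(n_samples, n_wavelengths))
water_content = 20 + 5 * spectra[:, 100:110].mean(axis=1) + rng.normal(scale=0.5, size=n_samples)

X_train, X_test, y_train, y_test = train_test_split(spectra, water_content, random_state=0)

model = PLSRegression(n_components=10)   # number of latent components, tuned in practice
model.fit(X_train, y_train)

predicted = model.predict(X_test).ravel()
print(f"R^2 on held-out spectra: {r2_score(y_test, predicted):.2f}")
```

In practice, the hard part is exactly what the interview describes: building a representative database of well-characterized samples so that models of this kind generalize to new spectrometers and new materials.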

GreenTropism’s expertise goes even further. Its deep understanding of infrared, visible and UV radiation, as well as laser-based techniques (LIBS, Raman), allows it to provide a software platform and agnostic models. This means they can be adjusted to various types of radiation and are independent of the spectrometer used. Anthony Boulanger adds: “Our system allows results to be obtained in real time, whereas traditional analyses in a lab can take several hours, or even several days.”

[box][one_half]

A miniaturized spectrometer.

[/one_half][one_half_last]

A traditional spectrometer.

[/one_half_last] [/box]

Credits: Share Alike 2.0 Generic

Real-time analysis technology for all levels of expertise

“Our technology consists of a machine learning platform allowing spectrum interpretation models to be created. In other words, it’s software that transforms a spectrum into a value of interest to a manufacturer that has already mastered spectrometry. This allows them to achieve an operational result, since in this way they can control and improve the overall quality of their process,” explains the CEO of GreenTropism. By using a traditional spectrometer together with the GreenTropism software, a manufacturer can verify the quality of raw material at the time of its delivery and ensure that specifications are met, for example. Continuous analysis also makes it possible to monitor the entire production chain in real time and in a non-destructive way. As a result, all finished products, as well as those being transformed, are open to systematic analysis. In this case, the objective is to characterize the material of a product. It is used, for example, to distinguish between materials or between two species of wood. GreenTropism also receives support through partnerships with academic institutions such as Irstea or Inrea. These partnerships allow it to extend its fields of expertise, whilst also deepening its understanding of matter.

GreenTropism technology is also aimed at novices wanting to analyze samples instantly. “In this case, we rely on our lab to construct a database proactively, before putting the machine learning platform in place,” adds Anthony Boulanger. It is then a question of matter qualification. Obtaining details about the composition of an element, such as the nutritional content of a food item, is a direct application. “The needs linked to spectroscopy are still vague when it comes to organic matter. We can measure broad parameters such as the level of ripeness of a piece of fruit, as well as other, more concrete details such as the quantity of glucose or sucrose a product contains.”

Towards the democratization of spectroscopy

The fields of application are vast: environment, industry, and the list goes on. But GreenTropism technology is also suited to general public usage through the Internet of Things, consumer electronics and household appliances. “The advantage of spectroscopy is that there is no need for close contact between light and matter. This allows for potential combinations between everyday devices and spectrometers where the user doesn’t have to worry about technical aspects such as calibration. Imagine coffee machines that let you select the caffeine level in your drink. We could also monitor the health of our plants through our smartphone,” explains Anthony Boulanger. This last usage would work like a camera. After a flash of light is emitted, the program receives a spectral response. Rather than a photograph, the user would, for example, find out the water level in their flower pot.

To make these functions possible, GreenTropism is working on the miniaturization of spectrometers. “Today, spectrometers in labs are 100% reliable. A new, so-called ‘miniaturized’ (hand-held) generation is entering the market. However, these devices lack scientific publications about their reliability, casting doubt on their value. This is why we are working on making this technology reliable at the software level. This is a market which opens up a lot of doors for us, including one which leads to the general public,” concludes Anthony Boulanger.

VIGISAT, surveillance, environment

VIGISAT: monitoring and protection of the environment by satellite

Following on from our series on the platforms provided by the Télécom & Société numérique Carnot institute, we will now look at VIGISAT, based near Brest. This collaborative hub is also a project focusing on the high-resolution satellite monitoring of oceans and continents.

 

On 12th July, scientists in Wales observed a drifting iceberg four times the size of London. The imposing block of ice detached from the Antarctic and is currently drifting in the Weddell Sea, where it has already started to crack. This close monitoring of icebergs is made possible by satellite images.

Although perhaps not directly behind this particular observation, the Breton observation station VIGISAT is closely involved in maritime surveillance. It also gathers useful information for protecting marine and terrestrial environments. René Garello, a researcher at IMT Atlantique, presents the main issues.

 

What is VIGISAT?

René Garello: VIGISAT is a reception center for satellite data (radar sensors only) operated by CLS (Collecte Localisation Satellites) [1]. The station benefits from the expertise of the Groupement d’Intérêt Scientifique Bretagne Télédétection (BreTel), a community made up of nine academic members and partners from the socio-economic world. Its objective is to demonstrate the relevance of easily accessible data for the development of methods for observing the planet. It is at the service of the research community (for academic partners) and of “end users” on the business side.

VIGISAT is also a project within the Breton CPER (Contrat de Plan État-Région) framework, which has been renewed to run until 2020. The station/project concept was named a platform by the Institut Carnot Télécom & Société Numérique at the end of 2014.

 

The VIGISAT station

 

What data does VIGISAT collect and how does it process this?

RG: The VIGISAT station receives data from satellites carrying Synthetic Aperture Radars (better known as SARs). This microwave sensor allows very high resolution imaging of the Earth’s surface. The data received by the station come from the Canadian satellite RADARSAT-2 and, in particular, from the new series of European SENTINEL satellites. These are sun-synchronous satellites [NB: the satellite always passes over a given point at the same solar time], which orbit at an altitude of 800 km and circle the Earth in about 100 minutes.

We receive the raw information collected by the satellites; in other words, the data come in the form of unprocessed bit streams. The data are then transmitted by optical fiber to the processing center, which is also located on the site. “Radar images” are then constructed using the raw information and the known parameters of the radar. The final data, although in image form, require expert interpretation. In simple terms, the radar wave emitted is sensitive to the properties of the observed surfaces: the nature of the ground (vegetation, bare surfaces, urban landscapes, etc.) returns its own characteristic energy. Furthermore, the information acquired depends on the measuring device’s intrinsic parameters, such as the wavelength or the polarization.

 

What scientific issues are addressed using VIGISAT data?

RG: CLS and researchers from the GIS BreTel member institutions are working on diverse and complementary issues. At IMT Atlantique and Rennes 1 University, we mainly focus on the methodological aspects. For example, for 20 years we have built up a high level of expertise in the statistical processing of images. In particular, this allows us to identify areas of interest in terrestrial images or surface types on the ocean. More recently, we have been faced with the sheer volume of the data we collect. We therefore put machine learning, data mining and other algorithms in place in order to fully exploit these databases.

Other GIS institutions, such as Ifremer or IUEM [2], are working on marine and coastal topics, in collaboration with us. For example, research has been carried out on estuary and delta areas, such as the Danube. The aim is to quantify the effect of flooding and its persistence over time.

Finally, continental themes such as urban planning, land use, agronomy and ecology are mainly studied by Rennes 2 University and Agrocampus. In the case of urban planning, satellite observations allow us to locate and map the green urban fabric. This allows us, for example, to estimate the allergenic potential of public spaces. It should be noted that a lot of this work, which began in the field of research, has led to the creation of viable start-ups [3].

What projects has VIGISAT led?

RG: Since 2010, VIGISAT’s privileged access to data has allowed it to back various other research projects. It has created a lasting dynamic within the scientific community around land development, surveillance and the controlled exploitation of territories. Amongst the projects currently underway is, for example, CleanSeaNet, which focuses on the detection and monitoring of marine pollution. KALIDEOS-Bretagne looks at the evolution of land use and landscape along a town-countryside gradient. SESAME deals with the management and exploitation of satellite data for marine surveillance purposes.

 

Who is benefitting from the data analyzed by VIGISAT?

RG: Several targets were identified while preparing the CPER 2015-2020 support request. One of these is to generate activity around the use of satellite data by Breton businesses. This includes developing new public services based on satellite imaging, in order to foster downstream services in line with the region’s development strategy.

One sector that benefits from the data and their processing is undoubtedly the highly reactive socio-economic world (start-ups, SMEs, etc.) that builds on the uses we discussed earlier. On a larger scale, protection and surveillance services are also addressed through action coordinated between the developers and suppliers of a service, such as the GIS, and the authorities at regional, national and European level. By way of example, BreTel has been a member of NEREUS (the Network of European Regions Using Space Technologies) since 2009. This allows us to hold a strong position in the region as a center of expertise in marine surveillance (including the detection and monitoring of oil pollution) and also to analyze ecological corridors in the context of biodiversity.

[1] CLS is an affiliate of CNES, ARDIAN and Ifremer. It is an international company that has specialized in Earth observation and surveillance solutions since 1986.
 [2] European Institute for Marine Studies
[3] Some examples of these start-ups include: e-ODYN, Oceandatalab, Hytech Imaging, Kermap, Exwews, and Unseenlab.

[box type=”info” align=”” class=”” width=””]

On VIGISAT:

The idea for VIGISAT began in 2001 with the start-up BOOST Technologies, a spin-off from IMT Atlantique (formerly Télécom Bretagne). From 2005, proposals were made to various partners, including the Brittany Region and Brest Métropole, to develop an infrastructure like VIGISAT on the campus close to Brest. Following BOOST Technologies’ merger with CLS in 2008, the project flourished with the creation of GIS BreTel in 2009. In the same year, the VIGISAT project was successfully presented to the CPER. BreTel then expanded its roadmap, adding the “research” sector as well as the “training”, “innovation” and “promotion/dissemination” aspects. GIS BreTel is currently focusing on the “activity creation” and “new public services” strands, which are in tune with the philosophy of the Carnot platforms.

BreTel is also present at the European level. The GIS and its members have been awarded the title of “Copernicus Academy”. Thanks to this, they receive support from specialists in the European Copernicus program for all their education needs. From the end of 2017, BreTel and its partners will be participating in ESA’s Business Incubation Centres (ESA BIC) initiative, which will cover five regions in northern France (Brittany, Pays de la Loire, Ile-de-France, Hauts-de-France and Grand-Est), headed by the Brittany region.[/box]

[box type=”shadow” align=”” class=”” width=””]

The TSN Carnot institute, a guarantee of excellence in partnership-based research since 2006

 

Having first received the Carnot label in 2006, the Télécom & Société numérique Carnot institute is the first national “Information and Communication Science and Technology” Carnot institute. Home to over 2,000 researchers, it is focused on the technical, economic and social implications of the digital transition. In 2016, the Carnot label was renewed for the second consecutive time, demonstrating the quality of the innovations produced through collaborations between researchers and companies. The institute encompasses Télécom ParisTech, IMT Atlantique, Télécom SudParis, Télécom École de Management, Eurecom, Télécom Physique Strasbourg and Télécom Saint-Étienne, École Polytechnique (LIX and CMAP laboratories), Strate École de Design and Femto Engineering.[/box]


 

Brennus Analytics

Brennus Analytics: finding the right price

Brennus Analytics offers software solutions that make artificial intelligence available to businesses. Its algorithms determine the optimal sales price, helping businesses move closer to their objectives of market share and margin whilst also satisfying their customers. Currently incubated at ParisTech Entrepreneurs, Brennus Analytics allows businesses to make well-informed decisions about their number one profitability lever: pricing.

 

Setting the price of a product can be a real headache for businesses. It is, however, a crucial stage which can determine the success or failure of an entire commercial strategy. If the price is too high, customers won’t buy the product. Too low, and the margin obtained is too weak to guarantee sufficient revenues. To help businesses find the right price, the start-up Brennus Analytics, incubated at ParisTech Entrepreneurs, offers software that makes artificial intelligence technology accessible to businesses. Founded in October 2015, the start-up builds on its founders’ own experience in the field as former researchers at the Institut de Recherche en Informatique de Toulouse (IRIT).

The start-up is simplifying a task which can prove arduous and time-consuming for businesses. Hundreds or even thousands of factors have to be considered when setting prices. What is the customer willing to pay for the product? At what point in the year is demand greatest? Would a price drop have to be compensated for by an increase in volume? These are just a few simple examples of the complexity of the problem to be solved, not forgetting that each business also has its own set of rules and restrictions concerning prices. “A price should be set depending on the product or service, the customer, and the context in which the transaction or contractual negotiation takes place,” emphasizes Emilie Gariel, Marketing Director at Brennus Analytics.

 


 

To achieve this, the team at Brennus Analytics relies on its solid knowledge of pricing, combined with data science and artificial intelligence technology. The technology chosen depends on the problem to be solved. For statistical problems, machine learning, deep learning and similar technologies are used. For more complex cases, Brennus employs an exclusive technology called an “Adaptive Multi-Agent System” (AMAS). This works by representing each factor that needs to be considered as an agent. The optimal price is then obtained through an exchange of information between these agents, taking into account the objectives set by the business. “Our solution doesn’t try to replace human input, it simply provides valuable assistance in decision-making. This is also why we favor transparent artificial intelligence systems; it is crucial that the client understands the suggested price,” affirms Emilie Gariel.
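To give a rough intuition of how factors represented as agents can converge on a price, here is a deliberately simplified sketch. It is purely illustrative: the AMAS technology itself is proprietary and certainly far richer, and every factor name, weight and figure below is hypothetical.

```python
# Toy illustration only, not Brennus Analytics' AMAS: each pricing factor is
# represented by an agent with a preferred price and a weight, and the price is
# nudged iteratively until the agents' weighted "pressures" roughly cancel out.
# All names and numbers are hypothetical.

factors = [
    {"name": "margin objective",       "target": 120.0, "weight": 1.0},
    {"name": "customer willingness",   "target": 105.0, "weight": 0.8},
    {"name": "competitor positioning", "target": 110.0, "weight": 0.6},
    {"name": "volume objective",       "target": 100.0, "weight": 0.4},
]

price = 100.0          # starting point, e.g. the current list price
learning_rate = 0.1

for _ in range(500):   # iterate until the pressures balance out
    pressure = sum(f["weight"] * (f["target"] - price) for f in factors)
    price += learning_rate * pressure / len(factors)
    if abs(pressure) < 1e-6:
        break

print(f"Suggested price: {price:.2f}")
```

In this toy version the price simply settles at the weighted balance point of the agents' preferences; the only aim is to picture the "exchange of information between agents" mentioned above, not to reproduce how the real system negotiates.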

The data used to run these algorithms come from the businesses themselves. Most have a transaction history and a large quantity of sales data available. These databases can potentially be supplemented by open-source data. However, the Marketing Director at Brennus Analytics warns: “We are not a data provider. However, there are several start-ups developing in the field of data mixing who can assist our clients if they are looking, for example, to collect competitors’ prices.” She is careful to add: “Always wanting more data doesn’t really make much sense. It’s better to find a middle ground between gathering internal data, which is sometimes limited, and joining the race to accumulate information.”

To illustrate Brennus’ solution, Emilie Gariel gives the example of a key player in the distribution of office supplies. “This client was facing intense pressure from its competition, and felt it had not always positioned itself well in terms of pricing,” she explains. Its prices were set on the basis of a margin objective per product category. This approach was too generic and too disconnected from the customer, leading to prices that were too high for popular products in this competitive market and too low for products where customers’ price sensitivity was weaker. “The software allowed prices to be optimized with a strong impact on the margin, by integrating a dynamic segmentation of products and flexibility in pricing,” she concludes.

The capacity to clarify and then resolve complex problems is probably Brennus’ greatest strength. “Without an intelligent tool like ours, businesses are forced to simplify the problem excessively. They consider fewer factors, simply basing prices on segments and other limited contexts. Their pricing is therefore often sub-optimal. Artificial intelligence, on the other hand, is able to work with thousands of parameters at the same time,” explains Emilie Gariel. The solution offers businesses several ways to increase their profitability by working on the different components of pricing (costs, reductions, promotions, etc.). In this way, the start-up neatly illustrates the potential of artificial intelligence to improve decision-making processes and profitability in businesses.

 

supply chain management, Matthieu Lauras

What is supply chain management?

Behind each part of your car, your phone or even the tomato on your plate, there’s an extensive network of contributors. Every day, billions of products circulate. The management of the logistics chain – or ‘supply chain management’ – organizes these movements on a smaller or larger scale. Matthieu Lauras, a researcher in industrial engineering at IMT Mines Albi, explains what it’s all about, the problems associated with supply chain management, and their solutions.

 

What is a supply chain?

Matthieu Lauras: A supply chain consists of a network of facilities (factories, shops, warehouses, etc.) and partners ranging from the supplier’s supplier to the customer’s customer. It’s the succession of all these participants that provides added value and allows a finished consumer product or service to be created and delivered to the end customer.

In supply chain management, we focus on the flows of materials and information. The idea is to optimize the overall performance of the network: to be capable of delivering the right product to the right place at the right time, with the right standard of quality and cost. I often tell my students that supply chain management is the science of compromise. You have to find a good balance between several constraints and issues. This is what allows you to maintain a sustainable level of competitiveness.

 

What difficulties are produced by the supply chain?

ML: The biggest difficulty with supply chains is that they are not managed in a centralized way. Within a single business, for example, the CEO can act as a mediator between two departments if there is a problem. At the scale of a supply chain, however, there are several businesses, each a separate legal entity, and no one is in a position to act as mediator. This means that participants have to get along, collaborate and coordinate.

This isn’t easy, since one of the characteristics of a supply chain is the lack of total coherence between local and global optima. For example, I optimize my production by selling my product in 6-packs, to make things quicker, even though this isn’t necessarily what my customer needs in order to sell the product on. They may prefer the product to be sold in packs of 10 rather than 6. What I gain by producing 6-packs is therefore lost by the next participant, who has to transform my product. This is just one example of the type of problem we try to tackle through research into supply chain management.

 

What does supply chain management research consist in?

ML: Research in this field spans several levels. There is a lot of information available; the question is how to exploit it. We offer tools which can process this data so that it can be passed on to the people (production/logistics managers, operations directors, demand managers, distribution/transport directors, etc.) who are in a position to make decisions and take action.

An important element is that of uncertainty and variability. The majority of tools used in the supply chain were designed in the 1960s or 1970s. The problem is that they were invented at a time when the economy was relatively stable. A business knew that it would sell a certain volume of a certain product over the following five years. Today, we don’t really know what we’re going to sell in a year. Furthermore, we have no idea about the variations in demand that we will have to deal with, nor the new technological opportunities that may arise in the next six months. We are therefore obliged to ask what developments we can bring to the decision-making tools currently in use, in order to adapt them to this new environment.

In practice, research is based on three main stages: first, we design the mathematical models and the algorithms allowing us to find an optimal solution to a problem or to compare several potential solutions. Then we develop computing systems which are able to implement these. Finally, we conduct experiments with real data sets to assess the impact of innovations and suggested tools (the advantages and disadvantages).

Some tools in supply chain management are methodological, but the majority are computer-based. They generally consist of software such as business management software packages (software containing several general-purpose tools) which can be used on a network scale, or Advanced Planning and Scheduling (APS) systems. Four elements are then developed through these tools: planning, collaboration, risk management and lead-time reduction. Amongst other things, these allow simulations of various scenarios to be carried out in order to optimize the performance of the supply chain.

 

What problems are these tools responding to?

ML: Let’s consider planning tools. Take the supply chain for paracetamol: we’re talking about a product which needs to be immediately available. However, around nine months elapse between the moment the first component is supplied and the moment the product is actually manufactured. This means we have to anticipate potential demand several months in advance. Based on this forecast, it is possible to plan the supplies of materials necessary for manufacturing, as well as the positioning of stock closer to or further from the customer.
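As a back-of-the-envelope illustration of this kind of anticipation, the sketch below computes how much stock to plan over a long lead time, using the textbook cycle-stock-plus-safety-stock formula. The nine-month horizon echoes the example above, but all the figures are hypothetical and not taken from the interview.

```python
# Illustrative only: textbook stock planning over a long lead time.
# All figures below are hypothetical.
import math

lead_time_months = 9                      # from first component supply to finished product
forecast_demand_per_month = 1_000_000     # units per month (hypothetical)
forecast_error_std_per_month = 150_000    # uncertainty of the monthly forecast (hypothetical)
service_level_z = 1.65                    # ~95% probability of not running out

# Stock to cover expected demand over the lead time...
cycle_stock = forecast_demand_per_month * lead_time_months
# ...plus a safety margin that grows with the forecast error and the horizon.
safety_stock = service_level_z * forecast_error_std_per_month * math.sqrt(lead_time_months)

print(f"Stock to plan over the lead time: {cycle_stock + safety_stock:,.0f} units")
```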

In terms of collaboration, the objective is to avoid conflicts that could paralyze the chain. The tools therefore facilitate the exchange of information and joint decision-making. Take the example of Carrefour and Danone. The latter sets up a TV advertising campaign for its new yogurt range. If this isn’t coordinated with the supermarket, to make sure the products are in the shops and that there is sufficient space to feature them, Danone risks spending a lot of money on an advertising campaign without being able to meet the demand it creates.

Another range of tools deals with lead-time reduction. A supply chain has considerable inertia. A piece of information linked to a change at the end of the chain (higher demand than expected, for example) can take anywhere from a few weeks to several months to work its way through, affecting all participants along the way. This is the “bullwhip effect”. To limit it, it is in everyone’s best interest to have shorter chains that are more reactive to changes. Research is therefore looking to reduce waiting times, information transmission times and even transport times between two points.
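A minimal simulation makes this amplification visible. In the hypothetical model below, each tier simply orders what it just observed plus an over-reaction to the change it saw; the parameters are invented for the example and only show how a stable end demand turns into much wilder swings further up the chain.

```python
# Illustrative bullwhip-effect simulation: order variability grows as it moves
# upstream. The demand model and the tiers' ordering rule are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
weeks = 52
customer_demand = 100 + rng.normal(scale=5, size=weeks)   # fairly stable end demand

def tier_orders(incoming, reaction=1.0):
    """Each tier orders what it just saw plus an over-reaction to the latest change."""
    placed = [incoming[0]]
    for previous, current in zip(incoming[:-1], incoming[1:]):
        placed.append(current + reaction * (current - previous))
    return np.array(placed)

retailer = tier_orders(customer_demand)
wholesaler = tier_orders(retailer)
factory = tier_orders(wholesaler)

for name, series in [("end customer", customer_demand), ("retailer", retailer),
                     ("wholesaler", wholesaler), ("factory", factory)]:
    print(f"{name:12s} std dev of weekly orders: {series.std():6.1f}")
```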

Finally, today we cannot know exactly what demand will be in six months. This is why we are working on risk sharing, or “contingency plans”, which allow us to limit the negative impact of risks. This can be implemented by calling upon several suppliers for any given component. If I then have a problem with one of them (a factory fire, liquidation, etc.), I retain my ability to function.

 

Are supply chain management techniques applied to any fields other than that of commercial chains?

ML: Supply chain management is now open to other applications, particularly in the service world, in hospitals and even in banks. The principal aim remains to provide a product or service to a client. In the case of a patient waiting for an operation, resources are needed as soon as they enter the operating theater. All the necessary staff must be available, from the stretcher bearer who carries the patient to the surgeon who operates on them. It’s therefore a question of synchronizing resources and logistics.

Of course, there are also constraints specific to this kind of environment. In humanitarian logistics, for example, the question of the customer does not arise in the same way as in commercial logistics. Indeed, the person benefiting from a service in a humanitarian supply chain is not the person who pays, as they would be in a commercial setting. However, there is still the need to manage the flow of resources in order to maximize the added value produced.

 


Müge Ozman

Institut Mines-Telecom Business School | #Innovation #StrategicManagement

[toggle title=”Find all her articles on I’MTech” state=”open”]

[/toggle]

Botfuel, Chatbots

Botfuel: chatbots to revolutionize human-machine interfaces

Are chatbots the future of interactions between humans and machines? These virtual conversation agents have a growing presence on messaging applications, offering us services in tourism, entertainment, gastronomy and much more. However, not all chatbots are equal. Botfuel, a start-up incubated at ParisTech Entrepreneurs, offers its services to businesses wanting to develop top-of-the-range chatbots.

They help us order food, book a trip or discover cultural events and bars. Chatbots are virtual intermediaries providing access to many services, and are becoming ever more present on messaging applications such as Messenger, WhatsApp and Telegram. In 2016, Yan Georget and Javier Gonzalez decided to enter the conversational agent market by founding their start-up, Botfuel, which has been incubated at ParisTech Entrepreneurs for a year. They were not looking, however, to develop simple, low-end chatbots. “Many key players target the mass market, with more potential users and easy-to-produce chatbots,” explains Yan Georget. “This is not our approach.”

Botfuel is aimed at businesses that want to provide their customers with a high quality user experience. The fledgling business offers companies state-of-the-art technological building blocks to help develop these intelligent conversational agents. It therefore differs from the process typically adopted by businesses for designing chatbots. Ordinarily, chatbots operate on a decision-tree basis, which is established in advance. Developers create potential scenarios based on the questions the chatbot will ask the user, trying to predict all possible responses. However, the human mind is unpredictable. There will inevitably be occasions where the user gives an unforeseen response or pushes the conversational agent to its limits. Chatbots that rely on decision-trees soon tend to struggle, often leaving customers disappointed.

“We take a different approach, based on machine learning algorithms,” explains Yan Georget. Every sentence provided by users is analyzed to understand its meaning and identify their possible intentions. To achieve this, Botfuel works with businesses on their databases, trying to understand the primary motivations of web users. For example, the start-up has collaborated with BlaBlaCar, helping them fine-tune their chatbots, which in turn allows customers to find a carshare more easily. Thanks to this design approach, the chatbot knows to attribute the same meaning to the phrases “I want to go to Nantes”, “I would like to get to Nantes by car” and “looking for a carshare to Nantes”, something that is near impossible for traditional chatbots, which dismiss alternative phrasings as soon as the conversation stops matching the expected discussion scenario.
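To make the idea concrete, here is a minimal sketch of an intent classifier in this spirit. It is not Botfuel's implementation: the training phrases, the intent labels and the choice of model are all hypothetical, and a production system would rely on far richer data and models.

```python
# Illustrative only: a tiny intent classifier mapping differently worded
# sentences to the same intent. Phrases, labels and model are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_phrases = [
    ("I want to go to Nantes", "find_ride"),
    ("I would like to get to Nantes by car", "find_ride"),
    ("looking for a carshare to Nantes", "find_ride"),
    ("cancel my booking", "cancel_ride"),
    ("I no longer need my seat", "cancel_ride"),
    ("remove my reservation please", "cancel_ride"),
]
texts, intents = zip(*training_phrases)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, intents)

print(model.predict(["is there a ride to Nantes tomorrow?"]))  # expected: ['find_ride']
```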

Botfuel also uses information extraction algorithms to precisely identify dates, places and desired services, regardless of the order in which they appear in the conversation with the chatbot. Understanding the user is clearly Botfuel’s principal motivation. Building on this, Yan Georget explains the choices made on the issue of language correction. “People make typos when they are writing. We opted for word-by-word correction, using an algorithm based on the frequency of each word in the language and its resemblance to the mistyped word entered by the user. We only correct a user’s mistake if we are sure we know which word they wanted to type.” This approach differs from that of other chatbots, which base correction on the overall meaning of the phrase. That second approach sometimes introduces errors by attributing the wrong intentions to a user. Even though more mistakes may be corrected with it, Botfuel prefers to prioritize the absence of misunderstanding, even if it means the chatbot has to ask the user to reformulate their question.
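A rough sketch of this kind of cautious, word-by-word correction could look as follows. It only illustrates the principle described in the quote, not Botfuel's algorithm: the tiny frequency table, the similarity measure and the confidence thresholds are all hypothetical.

```python
# Illustrative only: correct a typo word by word, combining word frequency with
# string similarity, and only when one candidate clearly dominates.
from difflib import SequenceMatcher

# Hypothetical frequency table; in practice it would be built from a large corpus.
word_frequency = {"carshare": 120, "nantes": 80, "to": 5000, "looking": 300, "for": 4000}

def correct(word, min_similarity=0.75, min_margin=2.0):
    """Return a correction only when we are confident about the intended word."""
    word = word.lower()
    if word in word_frequency:
        return word                                   # known word: leave it untouched
    scored = []
    for candidate, frequency in word_frequency.items():
        similarity = SequenceMatcher(None, word, candidate).ratio()
        if similarity >= min_similarity:
            scored.append((similarity * frequency, candidate))
    scored.sort(reverse=True)
    if not scored:
        return word                                   # nothing plausible: keep the typo
    if len(scored) == 1 or scored[0][0] >= min_margin * scored[1][0]:
        return scored[0][1]                           # confident: apply the correction
    return word                                       # ambiguous: let the bot ask instead

print(correct("carshar"))   # -> "carshare"
print(correct("xyzzy"))     # -> "xyzzy" (no confident correction, left as typed)
```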

 

Chatbots, the future of online services

In addition to BlaBlaCar, many other businesses are interested in Botfuel’s chatbots. French insurance provider April and French bank Société Générale now form part of its clientele. One of the main reasons these new interfaces are attracting so much interest is, according to Yan Georget, that “conversational interfaces are very powerful”. “When using an online purchasing service, you only have to type in what you’re looking for and you’ve practically already made the purchase.” The alternative consists of going through the menus of a vendor’s website and finding the desired product in a list using search filters: several clicks and several wasted minutes, compared with simply typing “I’m looking for Fred Vargas’s latest novel” into the chatbot interface.

For businesses, chatbots also represent a great opportunity to learn more about their customers. Botfuel provides an analysis system allowing businesses to better understand the links between different customer demands. By talking to a chatbot, customers provide far more information than they would by simply browsing the site: they explain their interests in more detail and give a clearer account of any dissatisfaction. These are all elements that can be very valuable to businesses, helping them improve their service for the benefit of the customer.

These new perspectives opened up by chatbots are promising, but Yan Georget is also keen to moderate expectations and alleviate certain fears surrounding the service: “The aim of chatbots is not to replace humans in the customer experience. Their purpose is to use conversation as an interface with machines, in place of the interactions we currently have. When a computer operating system changes, habitual users have to adapt to the new interface. With chatbots, the conversational interface doesn’t change, conversation is flexible. The only thing that changes is the additional features a chatbot can gain with each update.” The co-founder remains cautious about the “intelligent” nature of chatbots. With a doctorate in artificial intelligence, he is aware of the ethical limits and risks associated with an unbridled boom in this field of technology. His caution focuses in particular on unsupervised learning in chatbots, which gives them more autonomy when dealing with customers. For Yan Georget, the example of Tay, the Microsoft chatbot that its users on Twitter turned racist, is telling. “The development of chatbots should be supervised by humans. Auto-learning is too dangerous, and it is not the kind of risk to which we are willing to expose businesses or end users.”

 

[box type=”info” align=”” class=”” width=””]

Chatbots: an educational tool?

On 29th June at Télécom ParisTech, Botfuel and ParisTech Entrepreneurs came together for a meet-up dedicated to chatbots and education. In this sector, intelligent conversational agents represent a potential asset for personalized education programs. From revising for your exams with Roland Garros through the Messenger chatbot “ReviserAvecRG”, to finding your path with the specialist careers-advice bot “Hello Charly”, aimed at young people aged 14 to 24, or even practicing writing in a foreign language, chatbots undeniably offer a real range of tools.

The meet-up provided an opportunity to share and present experiences and concrete examples from specialist businesses. This event comes as part of the launch of the incubator’s “Tech Meet-ups” program, meetings focusing on technology and the future. The event is therefore the first in a series that will be continued in October with the next “Tech Meet-up”, which this time will be dedicated to the blockchain.[/box]

 

Stephan Clémençon, scientific advisor, exhibition

What exactly is a scientific exhibition advisor? A discussion with Stephan Clémençon

Who are the people working behind the scenes at scientific exhibitions? The tasks at hand range from approving content and proposing themes to identifying scientific and societal issues, and much more. To find out more, we interviewed Stephan Clémençon, a researcher specializing in machine learning at Télécom ParisTech and a scientific advisor for Terra Data, the exhibition on digital data organized by the Cité des Sciences et de l’Industrie.

 

What is the role of a scientific exhibition council?

Stephan Clémençon: Organization of the exhibition began about a year and a half before the event. Our council brought complementary skills, since it was made up mainly of technical specialists in IT and data, as well as others focusing on usage and legal issues. Our aim was to identify the topics to be addressed during the exhibition and illustrate them with examples. Above all, we wanted to make the link between data and applications. In a second phase, the exhibition organizers presented the different workshops to us, and what they did was extraordinary.

 

What messages were you wanting to pass on?

SC: We wanted to show that data are not just a way of representing information. For example, we addressed the notion of storage. Often, people aren’t aware of the network aspect, the fact that there are kilometers of fiber-optic cables at the bottom of the oceans. It’s important to show people pictures of that. In practice, people switch on their computer, search for information, and so on, but they have no idea of the physical, concrete side behind it, such as what a data center looks like. The important thing was to demystify data.

 

Which part of the exhibition did you work on the most?

SC: I suggested to the organizers that biometrics could have an impact on the public. The idea was to follow the digital trace of visitors, who had their photo taken at the exhibition entrance. I also worked on the algorithmic aspect with Françoise Soulié-Fogelman [a professor in computing at Tianjin University, China]. We illustrated how recommendation engines work and what their principles are. The objective was to demystify the algorithmic aspect. An algorithm is simply a sequence of tasks leading to a result. We explained that this is nothing new and that algorithms are already used in daily life.

 

What motivated you to participate in this project?

SC: I think that addressing the subject of data is important. People are wary when we talk about artificial intelligence, automatic processing by machines and so on, and rightly so, because we are becoming dependent on these technologies. But this is not specific to machine learning, it applies to technology as a whole. It is therefore very important to explain how these technologies work. Working with the Cité des Sciences also allowed us to reach young people who use technology but don’t necessarily ask themselves how it works. I also took part simply out of curiosity. I had no idea how these exhibitions were put together, and it allowed me to discover this new world.

 

What can you draw from the exhibition and all that it entailed?

SC: I feel that it was a small but well thought-out exhibition. It was educational about data and what they are used for. There was a good balance between mathematical aspects and usage. It would be interesting to generalize this kind of exhibition, because there is a real need to provide society with information about digital technology. Artificial intelligence, which has lately become fashionable, and growing robotization are topics that suffer from many preconceived ideas. They deserve to be presented to the general public in the form of an exhibition. We are on the threshold of some really significant transformations, and it would be good if people started to think about them rather than just letting them happen.