
How is technology changing the management of human resources in companies?

In business, digital technology is revolutionizing more than just production and design. The human resources sector is also being greatly affected, whether through better talent spotting, optimized recruitment processes or getting employees more involved in the company’s vision. This is illustrated through two start-ups incubated at ParisTech Entrepreneurs: KinTribe and Brainlinks.

 

Recruiters in large companies can sometimes store tens of thousands of profiles in their databases. However, it is often difficult to make use of such a substantial pool of information using conventional methods. “It’s impossible to keep such a large file up-to-date, so the data often become obsolete very quickly”, states Chloé Desault, a former in-company recruiter and co-founder of the start-up KinTribe. “Along with Louis Laroche, my co-founder who was also formerly a recruiter, we aim to facilitate the use of these talent pools and improve the daily lives of recruiters”, she adds. The software solution enables recruitment professionals to create a recruitment pool from professional social networks. With KinTribe, they can build a usable database and run complex searches in order to find the best person to contact for a given need, from tens of thousands of available profiles. “This means they no longer have to waste time on people who do not correspond to the target in question”, affirms the co-founder.

The software’s algorithms can then process the collected data to produce a rating for the relevant market. This rating indicates how receptive a person is likely to be to an external recruitment offer. “70% of people on LinkedIn aren’t actively looking for a job, but would still consider a job offer if it were presented to them”, Louis Laroche explains. In order to identify these people, and gauge how likely they are to be interested, the algorithm relies on key factors identified by recruiters. Age, field of work and duration of last employment can all influence how open someone is to a proposition.
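To make the idea concrete, here is a minimal Python sketch of what such a market-rating heuristic could look like. The factors are the ones cited above, but the weights, thresholds and field names are invented for illustration; KinTribe’s actual model is proprietary and certainly more sophisticated.

```python
# Hypothetical market-rating heuristic; all weights and cutoffs are invented.
from dataclasses import dataclass

@dataclass
class Profile:
    age: int
    field: str
    months_in_current_job: int

def openness_score(p: Profile) -> float:
    """Return a 0-1 rating of how receptive a profile may be to an offer."""
    score = 0.0
    # Long tenure in the same position is assumed to correlate with openness.
    score += min(p.months_in_current_job / 60, 1.0) * 0.5
    # Mid-career profiles are assumed (arbitrarily) to be the most mobile.
    score += 0.3 if 28 <= p.age <= 45 else 0.1
    # Fast-moving fields see more turnover.
    score += 0.2 if p.field in {"software", "data science"} else 0.05
    return round(min(score, 1.0), 2)

print(openness_score(Profile(age=34, field="software", months_in_current_job=48)))
```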

One of the start-up’s next goals is to add new sources of data into the mix, allowing their users to search other networks for new talent. Multiplying the available data will also help improve the market rating algorithms. “We want to provide recruiters with the best possible knowledge by aggregating the maximum amount of social data that we can”, summarizes the KinTribe co-founder.

Finally, the two entrepreneurs are also interested in other topics within the field of recruitment. “As a start-up, we have to try to stay ahead of the curve and understand what the market will do next. We dedicated part of our summer to exploring the potential of a new co-optation product”, Chloé concludes.

 

From recruitment to employee involvement

In human resources, software tools represent more than just an opportunity for recruiting new talent. One of their aims is also to get employees involved in the company’s vision, and to listen to them in order to pinpoint their expectations. The start-up Brainlinks was created for this very reason. Today, it offers a mobile app called Toguna for businesses with over 150 people.

The concept is simple: with Toguna, general management or human resources departments can ask employees a question such as: “What is your view of the company of the future?” or “What new things would you like to see in the office?” The employees, who remain anonymous on the app, can then select the questions they are interested in and offer responses that will be made public. If a response made by a colleague is interesting, other employees can vote for it, thus creating a collective form of involvement based on questions about life at work.

In order to make Toguna appeal to the maximum number of people, Brainlinks has opted for a smart, professional design: “Contributions are formatted by the person writing them; they can add an image and choose the font, etc.”, explains Marc-Antoine Garrigue, the start-up’s co-founder. “There is an element of fun that allows each person to make their contributions their own”, he continues. According to Marc-Antoine Garrigue, this feature has helped them reach an average employee participation rate of 85%.

Once the votes have been cast and the propositions collected, managers can analyze the responses. When a response is chosen, it is highlighted on the app, providing transparency in how employee ideas are taken up. One avenue of development for the app is to keep improving the dialogue between managers and employees. “We hope to go even further in constructing a collective history: employees make contributions on a daily basis and in return, the management can explain the decisions they have made after having consulted these contributions”, outlines the co-founder. This is an opportunity that could really help businesses see digital transformation as a vehicle for creativity and collective involvement.

 


No autonomous cars without cybersecurity

Protecting cars from cyber-attacks is an increasingly important concern in developing smart vehicles. As these vehicles become more complex, the number of potential attack paths is growing, along with the constraints on protection algorithms. Research is being carried out to address this problem, as illustrated by the “Connected cars and cybersecurity” chair launched by Télécom ParisTech on October 5. Scientists intend to take on these challenges, which are crucial to the development of autonomous cars.

 

Connected cars already exist. From smartphones connected to the dashboard, to computer-aided maintenance operations, cars are packed with communicating embedded systems. And yet, they still seem a long way from the futuristic vehicles of our imagination. They do not (yet) all communicate with one another or with road infrastructure to provide warnings about dangerous situations, for example. Cars are struggling to make the leap from “connected” to “intelligent”. And without intelligence, they will never become autonomous. Guillaume Duc, a research professor in electronics at Télécom ParisTech who specializes in embedded systems, sums up one of the hurdles to this development: “Autonomous cars will not exist until we are able to guarantee that cyber-attacks will not put a smart vehicle, its passengers or its environment in danger.”

Cybersecurity for connected cars is indeed crucial to their development. Whether rightly or wrongly, no authority will authorize the sale of increasingly intelligent vehicles without first guaranteeing that they will not be out of control on the roads. The topic is of such importance in the industry that researchers and manufacturers have teamed up to find solutions. A “Connected Cars and Cybersecurity” chair bringing together Télécom ParisTech, Fondation Mines-Télécom, Renault, Thalès, Nokia, Valéo and Wavestone was launched on October 5. According to Guillaume Duc, the specific features of connected cars make this a unique research topic.

“The security objectives are obviously the same as in many other systems,” he says, pointing to the problems of information confidentiality or certifying that information has really been sent by one sensor rather than another. “But cars have a growing number of components, sensors, actuators and communication interfaces, making them easier to hack,” he goes on to say. The more devices there are in a car, the more communication points it has with the outside world. And it is precisely these entry points which are the most vulnerable. However, these are not necessarily the instruments that first come to mind, like radio terminals or 4G.

Certain tire pressure sensors use wireless communication to display a possible flat tire on the dashboard. But wireless communication means that without an authentication system to ensure that the received information has truly been sent by this sensor, anyone can pretend to be this sensor from outside the car. And if you think sending incorrect information about tire pressure seems insignificant, think again. “If the central computer expects a value of between 0 and 10 from the sensor and you send it a negative number, for example, you have no idea how it will react,” explains the researcher. This could crash the computer, potentially leading to more serious problems for the car’s controls.
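The two defenses implied here, authenticating the sensor and sanity-checking its values, can be illustrated with a hedged Python sketch. The key handling and frame format are assumptions made for the example, not a description of any real automotive stack.

```python
# Illustrative only: how a central computer might authenticate and
# range-check a wireless tire-pressure frame. Key and layout are invented.
import hmac, hashlib

SENSOR_KEY = b"per-sensor-secret-provisioned-at-factory"

def verify_frame(pressure_bar: float, tag: bytes) -> float:
    # 1. Authenticity: reject frames whose MAC doesn't match, so an attacker
    #    outside the car cannot impersonate the sensor.
    expected = hmac.new(SENSOR_KEY, repr(pressure_bar).encode(),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        raise ValueError("unauthenticated frame")
    # 2. Plausibility: reject values outside the 0-10 bar range the computer
    #    expects, instead of feeding them to downstream logic.
    if not 0.0 <= pressure_bar <= 10.0:
        raise ValueError("implausible sensor value")
    return pressure_bar
```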

 

Adapting cybersecurity mechanisms for cars

The stakes are high for research on how to protect each of these communicating elements. These components have only limited computing power, while algorithms that protect against attacks usually demand a lot of it. “One of the chair’s aims is to successfully adapt algorithms to guarantee security while requiring fewer computing resources,” says Guillaume Duc. This challenge goes hand in hand with another: limiting latency in the car’s execution of critical decisions. Adding algorithms to embedded systems increases computing time when an action is transmitted. But cars cannot afford to take longer to brake. Researchers therefore have their work cut out for them.
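The trade-off can be felt even on an ordinary computer. The sketch below simply times two MAC constructions from Python’s standard library on a short, CAN-frame-sized payload; the point is the methodology (measuring per-message cost against a latency budget), not the absolute figures, which would be far larger on a constrained automotive processor.

```python
# Measure the per-message cost of two MAC choices on a short payload.
import hmac, timeit

key = b"k" * 16
payload = b"\x01\x02\x03\x04\x05\x06\x07\x08"  # short, CAN-frame-sized message

for algo in ("sha256", "blake2s"):
    n = 10_000
    t = timeit.timeit(lambda: hmac.new(key, payload, algo).digest(), number=n)
    print(f"{algo}: {t / n * 1e6:.1f} microseconds per MAC")
```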

In order to address these challenges, they are looking to the avionics sector, which has been facing problems associated with the proliferation of sensors for years. But unlike planes, fleets of cars are not operated in an ultra-controlled environment. And in contrast to aircraft pilots, drivers are masters of their own cars and may handle them as they like. Cars are also serviced less regularly. It is therefore crucial to guarantee that cybersecurity tools installed in cars cannot be altered by their owners’ tinkering.

And since absolute security does not exist and “algorithms may eventually be broken, whether due to unforeseen vulnerabilities or improved attack techniques,” as the researcher explains, these algorithms must also be agile, meaning that they can be adapted, upgraded and improved without automakers having to recall an entire series of cars.

 

Resilience when faced with an attack

But if absolute security does not exist, where does this leave the 100% security guarantee against attacks, which is the critical factor in developing autonomous cars? In reality, researchers do not seek to protect against every possible attack on connected cars. Their goal is rather to ensure that even if an attack is successful, it will not prevent the driver or the car itself from remaining safe. And of course, this must be possible without having to brake suddenly on the highway.

To reach these objectives, researchers are using their expertise to develop resilience in embedded systems. The problem recalls that of critical infrastructures, such as nuclear power plants, which cannot simply shut down when under attack. In the case of cars, a malicious intrusion in the system must first be detected when it occurs. To do so, the vehicle’s behavior is constantly compared to previously-recorded behaviors which are considered normal. If an action is suspicious, it is identified as such. In the event of a real attack, it is crucial to guarantee that the car’s main functions (steering, brakes, etc.) will be maintained and isolated from the rest of the system.
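A toy version of that comparison step might look like the following Python sketch: the “normal” profile and the z-score test are stand-ins chosen for brevity, not the chair’s actual detection method.

```python
# Flag a reading as suspicious when it deviates too far from a recorded
# "normal" profile. Signals and thresholds are invented for illustration.
import statistics

normal_speeds = [48.0, 50.2, 49.5, 51.1, 50.0, 49.3]  # recorded normal behavior
mean = statistics.mean(normal_speeds)
stdev = statistics.stdev(normal_speeds)

def is_suspicious(observed: float, k: float = 3.0) -> bool:
    """Simple z-score test: k standard deviations from normal is suspicious."""
    return abs(observed - mean) > k * stdev

print(is_suspicious(50.4))   # False: consistent with recorded behavior
print(is_suspicious(120.0))  # True: isolate, keep core functions running
```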

Ensuring a car’s resilience from its design phase onwards, known as “resilience by design”, is also the most important condition for cars to keep becoming more autonomous. Automakers can provide valuable insight for researchers in this area, by contributing to discussions about a solution’s technical acceptability or weighing in on economic issues. While it is clear that autonomous cars cannot exist without security, it is equally clear that they will not be rolled out if the cost of security makes them too expensive to find a market.

[box type=”info” align=”” class=”” width=””]

Cars: personal data on wheels!

The smarter the car, the more personal data it contains. Determining how to protect this data will also be a major research area for the “Connected cars and cybersecurity” chair. To address this question it will work in collaboration with another Télécom ParisTech chair, dedicated to “Values and policies of personal information”, which also brings together Télécom SudParis and Télécom École de Management. This collaboration will make it possible to explore the legal and social aspects of exchanging personal data between connected cars and their environment. [/box]


With Ledgys, blockchain becomes a tangible reality

Winner of the BNP blockchain jury prize at VivaTechnology last year, Ledgys hopes to prove its relevance once more this year. From June 15 to 17, 2017, the company will attend the event again to present its Ownest application. The application is aimed at both businesses and consumers, offering a simple way to use blockchain technology in practical cases such as logistics management or certifying the authenticity of a handbag.

 

Beyond the fantasy surrounding blockchain, what is the reality of this technology? Ledgys, a start-up founded in April 2016 and currently incubated at Télécom ParisTech, answers this question in a very appealing way. With its app, Ownest, it offers professionals and individuals easy access to their blockchain portfolio. Users can easily visualize and manage the objects and products they own that are recorded in the decentralized register, whether that be a watch, containers or a pallet on its journey between a distributor and a shop.

To illustrate the application’s potential, Clément Bergé-Lefranc, co-founder of Ledgys, uses the example of a company producing luxury items: “a brand that produces 1,000 individually-numbered bags can create a digital asset for each one that is included in the blockchain. These assets will accompany the product throughout its whole life cycle.” From conception to destruction, including distribution and resale from individual to individual, the digital asset will follow the bag with which it is associated through all phases. Each time the bag moves from one actor of the chain to another, the transaction of the digital asset is recorded in the blockchain, proving that the exchange really took place.

“This transaction is approved by thousands of computers, and its certification is more secure than traditional financial exchanges”, claims Clément Bergé-Lefranc. The blockchain basically submits each transaction to be validated by other users of the technology. The proof of the exchange is therefore certified by all the other participants, and is then recorded alongside all other transactions that have been carried out. It is impossible to then go back and alter this information.
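The tamper-evidence described here can be illustrated with a toy hash chain in Python. This is not Ownest’s implementation (Ledgys builds on a real blockchain with distributed validation); it only shows why altering a past transaction is detectable.

```python
# Toy append-only ledger: each transfer embeds the hash of the previous one,
# so altering history invalidates every later entry.
import hashlib, json

def record_transfer(chain: list, asset_id: str, sender: str, receiver: str) -> None:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"asset": asset_id, "from": sender, "to": receiver, "prev": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)

ledger: list = []
record_transfer(ledger, "bag-0042", "workshop", "distributor")
record_transfer(ledger, "bag-0042", "distributor", "boutique")
# Tampering with the first record breaks the hash link checked by verifiers.
```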

Also read on I’MTech: What is a blockchain?

With Ownest, users have easy access to information on the assets of the products they own. The app allows users to transfer a title to another person in a matter of seconds. There are a whole host of advantages for the consumer. In the example of a bag made by a luxury brand, the asset certifies that the product does indeed come from the manufacturer’s own workshops and that it is not a fake. Should they wish to then resell the bag to an individual, they can prove that the object is authentic. It also solves problems at customs when returning from a journey.

Monitoring the product throughout its life cycle allows businesses to better understand the secondary market. “A collector of luxury bags who only buys second-hand products is totally invisible to a brand”, highlights the Ledgys co-founder. “If they want, the collector can make themselves known to the brand and prove that they own the items, meaning that the brand can offer advantages such as a limited-edition item.” Going beyond customer relations, the blockchain is above all one of the quickest and most effective ways to manage stock levels. The company can see in real time which products have been handed over to distributors and which are still stored in the warehouse. Instead of signing delivery slips on the distribution platforms, a simple transfer of the digital asset from the deliverer to the receiver suffices. This prospect has also attracted one of the world leaders in distribution, now a Ledgys client, which hopes to improve the traceability of its packaging.

“These really are concrete examples of what blockchain technology can do, and it’s what we would like to offer”, Clément Bergé-Lefranc declares enthusiastically. The start-up will present these use cases at the VivaTechnology exhibition in Paris from June 15-17. True to its mission of popularizing the use of blockchain, Ledgys is also collaborating with another start-up for the event: Inwibe, with its app “Wibe me up”. Together, they will offer all participants the chance to vote for the best start-ups. By using blockchain technology to certify the votes, they can ensure a transparent tally of the public’s favorites.

Imagining how blockchain might be used in 10 years’ time

As well as developing Ownest, the Ledgys team is working on a more long-term project. “One of our objectives is to build marketplaces for the exchange of data using blockchain technology”, explains Clément Bergé-Lefranc. Built on Ethereum, the solution would allow people to buy and sell data via blockchain. They still have a while to wait, however, before seeing such marketplaces emerge. “There are technical elements that still need to be resolved for this to be possible and optimized on Ethereum”, admits the Ledgys co-founder. “However, we are already building on what the blockchain will become, and its use in a few years’ time.”

In the meantime, the start-up is working on a proof of concept in developing countries, in collaboration with “The Heart Fund”, a foundation devoted to treating heart disease. The UN-accredited project aims to establish a secure and universal medical file for each patient. Blockchain technology will allow health-related data to be certified and accessible. “The aim is that with time, we will promote the proper use of patients’ medical data”, announces Clément Bergé-Lefranc. By authorizing access for professionals in the medical sector in a transparent and secure way, the quality of healthcare in countries where medical attention is less reliable can be improved. Again, this is an example of Ledgys’ desire to use blockchain not just to fulfil fantasies, but also to resolve concrete problems.

The original version of this article was published on the ParisTech Entrepreneurs incubator website.

 


Data&Musée – Developing data science for cultural institutions

Télécom SudParis and Télécom ParisTech are taking part in Data&Musée, a collaborative project led by Orpheo, launched on September 27, 2017. The project’s aim is to provide a single, open platform for data from cultural institutions in order to develop analysis and forecasting tools to guide them in developing strategies and expanding their activities.

 

Data science is a recent scientific discipline concerned with extracting information, analyses or forecasts from a large quantity of data. It is now widely used in many different industries from energy and transport to the healthcare sector.

However, this discipline has not yet become part of French cultural institutions’ practices. Though institutions collect data at an individual level, until now there has been no initiative to aggregate and analyze all the data from French museums and monuments. And yet, gathering this data could provide valuable information for institutions and visitors alike, whether to analyze cultural offerings in France, measure institutions’ performance or provide visitors with helpful recommendations for museums and monuments to visit.

The Data&Musée project will serve as a testing ground for exploring the potential of data analysis for cultural institutions and determining how this data can help institutions grow. The project is led by the Orpheo group, a provider of guide systems (audio-guide systems, multimedia guides, software, etc.) for cultural and tourist sites, and has brought together researchers and companies specialized in data analysis such as Tech4Team, Kernix and MyOrpheo. The Centre des Monuments Nationaux, an institution which groups together nearly 100 monuments, and Paris Musées, an organization which incorporates 14 museums in Paris, have agreed to test the project on their sites.

A single, open platform for centralizing data

The Data&Musée project strives to usher museums into the data age by grouping together a great number of cultural institutions on Teralab, IMT and GENES’s shared data platform. “This platform provides a neutral, secure and sovereign hosting space. The data will be hosted on the IMT Lille Douai site in France,” explains Antoine Garnier, the head of the project at IMT. “Teralab can host sensitive data in accordance with current regulations and is already recognized as a trustworthy tool.”

In addition, highly sensitive data can be anonymized if necessary. The project could enlist the help of Lamane, a startup specializing in these technical issues, which was created through IMT Atlantique incubators.

Previously-collected individual data from each institution, such as ticketing data or web site traffic, will be combined with new sources collected by Data&Musée and created by visitors using a smart guestbook (currently being developed by the corporate partner GuestViews), social media analysis and an indoor geolocation system.

“Orpheo seeks to enhance the visitor journey but is not certain whether it should be up to the visitor or carried out automatically,” explains Nel Samama, whose research laboratory at Télécom SudParis is working with Orpheo on the geolocation aspect. “Analyzing flows in a fully automatic way means using radio or optical techniques, which function correctly in demonstration mode but are unreliable in real use. Having the visitor participate in this loop would simplify it tremendously.”

Developing indicator, forecasting and recommendation tools

Based on an analysis of this data, the goal is to develop performance indicators for museums and build tools for personalizing the visitor experience.

Other project partners, including Reciproque, a company that provides engineering services for cultural institutions, and the UNESCO ITEN chair (Innovation, Transmission and Digital Publishing), will use the data collected to work on modeling aesthetic taste in order to determine typical visitor profiles and appropriate content recommendations based on these profiles. This tool will increase visitors’ awareness of the rich offerings of French cultural institutions and thereby boost the tourism industry. Jean-Claude Moissinac, a research professor at Télécom ParisTech, is working on this aspect of the project in partnership with Reciproque. “I’m especially interested in data semantics, or web semantics,” explains the researcher. “The idea is to index all the data collected in a homogenous way, then use it to build a graph in order to interlink the information. We can then infer groups, which may be works or users. After that, we use this knowledge to propose different paths.”
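As a sketch of the approach Jean-Claude Moissinac describes, the following Python fragment interlinks visitors and works in a graph and infers groups with a standard community-detection algorithm. The data and the choice of algorithm are illustrative assumptions, not the project’s actual pipeline.

```python
# Interlink visitors and works, then infer communities of shared taste.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()
# Bipartite-style edges: a visitor is linked to works they viewed or liked.
G.add_edges_from([
    ("visitor_a", "monet_nympheas"), ("visitor_a", "renoir_bal"),
    ("visitor_b", "renoir_bal"), ("visitor_b", "degas_danseuses"),
    ("visitor_c", "rodin_penseur"), ("visitor_c", "claudel_valse"),
])

# Densely connected groups suggest shared tastes; a recommender could then
# propose works from inside a visitor's community that they have not seen.
for community in greedy_modularity_communities(G):
    print(sorted(community))
```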

The project plans to set up an interface through which partner institutions may view their regional attendance, visitor seasonality, and segmentation compared to other institutions with similar themes. Performance indicators will also be developed for the museums. The various data collected will be used to develop analytical and predictive models for visiting cultural sites in France and for providing these institutions with recommendations to help them determine strategies for expanding their activities.

With a subscription or contribution system, this structured data could eventually be transmitted to institutions that do not produce data or to third parties with the consent of institutions and users. A business model could therefore emerge, allowing Data&Musée to live on beyond the duration of the project.

Project supported by Cap Digital and Imaginove, with funding from Bpifrance and Région Île-de-France.

 


Rethinking ethics in social networks research

Antonio A. Casilli, Télécom ParisTech – Institut Mines-Télécom, University of Paris-Saclay, and Paola Tubaro, Centre national de la recherche scientifique (CNRS)

[dropcap]R[/dropcap]esearch into social media is booming, fueled by increasingly powerful computational and visualization tools. However, it also raises ethical and deontological issues that tend to escape the existing regulatory framework. The economic implications of large-scale data platforms, the active participation of network members, the specter of mass surveillance, the effects on health, the role of artificial intelligence: a wealth of questions all needing answers. A workshop running from December 5-6, 2017 at Paris-Saclay, organized in collaboration with three international research groups, hopes to make progress in this area.

 

Social networks: what are we talking about?

The expression “social network” has become commonplace, but those who use it to refer to social media such as Facebook or Instagram are often unaware of its origin and true meaning. Studies of social networks began long before the dawn of the digital age. Since the 1930s, sociologists have been conducting studies that attempt to explain the structure of the relationships connecting individuals and groups: their “networks”. These could be, for example, advice relationships between employees of a business, or friendships between pupils in a school. Such networks can be represented as points (the pupils) connected by lines (the relationships).

A graphic representation of a social network (friendships between pupils at a school), created by J.L. Moreno in 1934. Circles = girls, triangles = boys, arrows = friendships. J.L. Moreno, 1934, CC BY

 

Well before any studies into the social aspects of Facebook and Twitter, this research shed significant light on the topic. For example, the role of spouses in a marriage; the importance of “weak connections” in job hunting; the “informal” organization of a business; the diffusion of innovation; the education of political and social elites; and mutual assistance and social support when faced with ageing or illness. The designers of digital platforms such as Facebook now adopt some of the analytical principles that this research was based on, founded on mathematical graph theory (although they often pay less attention to the associated social issues).

Researchers in this field understood very quickly that the classic principles of research ethics (especially the informed consent of participants in a study and the anonymization of any data relating to them) were not easy to guarantee. In social network research, the focus is never on one sole individual, but rather on the links between the participant and other people. If the other people are not involved in the study, it is hard to see how their consent can be obtained. Also, the results may be hard to anonymize, as visuals can often be revealing, even when there is no associated personal identification.

 

Digital ethics: a minefield

Academics have been pondering these ethical problems for quite some time: in 2005, the journal Social Networks dedicated an issue to these questions. The dilemmas faced by researchers are exacerbated today by the increased availability of relational data collected and used by digital giants such as Facebook and Google. New problems arise as soon as the lines between “public” and “private” spheres become blurred. To what extent do we need consent to access the messages that a person sends to their contacts, their “retweets” or their “likes” on friends’ walls?

Information sources are often the property of commercial companies, and the algorithms these companies use tend to offer a biased perspective on the observations. For example, can a contact made by a user through their own initiative be interpreted in the same way as a contact made on the advice of an automated recommendation system? In short, data doesn’t speak for itself, and we must question the conditions of its use and the ways it is created before thinking about processing it. These dimensions are heavily influenced by economic and technical choices as well as by the software architecture imposed by platforms.

But is negotiation between researchers (especially in the public sector) and platforms (which sometimes stem from major multinational companies) really possible? Does access to proprietary data risk being restricted or unequally distributed (potentially at a disadvantage to public research, especially when it doesn’t correspond to the objectives and priorities of investors)?

Other problems emerge when we consider that researchers may even resort to paid crowdsourcing for data production, using platforms such as Amazon Mechanical Turk to ask the masses to respond to a questionnaire, or even to upload their online contact lists. However, these services reopen long-standing questions about working conditions and the ownership of what workers produce. The ensuing uncertainty hinders research that could have positive impacts on knowledge and society in a general sense.

The potential for misappropriation of research results for political or economic ends is multiplied by the availability of online communication and publication tools, which are now used by many researchers. Although the interest among the military and police in social network analysis is already well known (Osama Bin Laden was located and neutralized following the application of social network analysis principles), these appropriations are becoming even more common today, and are less easy for researchers to control. There is an undeniable risk that lies in the use of these principles to restrict civil and democratic movements.

A simulation of the structure of an Al-Qaeda network, “Social Network Analysis for Startups” (fig. 1.7), 2011. Reproduced here with permission from the authors. Kouznetsov A., Tsvetovat M., CC BY

 

Empowering researchers

To break this stalemate, the solution is not to pile up restrictions, which would only aggravate the constraints already inhibiting research. On the contrary, we must create an environment of trust, so that researchers can explore the scope and importance of social networks online and offline, essential as they are to grasping major economic and social phenomena, whilst still respecting people’s rights.

The active role of researchers must be highlighted. Rather than remaining subject to predefined rules, they need to participate in the co-creation of an adequate ethical and deontological framework, drawing on their experience and reflections. This bottom-up approach integrates the contributions not just of academics but also of the public, civil society associations and representatives of public and private research bodies. These ideas and reflections could then be brought to those responsible for establishing regulations (such as ethics committees).

 

An international workshop in Paris

Poster for the RECSNA17 Conference

Such was the focus of the workshop Recent ethical challenges in social-network analysis. The event was organized in collaboration with international teams (the Social Network Analysis Group of the British Sociological Association, BSA-SNAG; Réseau thématique n. 26 “Social Networks” of the French Sociological Association; and the European Network for Digital Labor Studies, ENDLS), with support from the Maison des Sciences de l’Homme de Paris-Saclay and the Institut d’études avancées de Paris. The conference will be held on December 5-6. For more information and to sign up, please consult the event website.

Antonio A. Casilli, Associate Professor at Télécom ParisTech and research fellow at Centre Edgar Morin (EHESS), Télécom ParisTech – Institut Mines-Télécom, University of Paris-Saclay, and Paola Tubaro, Head of Research at LRI, the CNRS Computing Research Laboratory, and teacher at ENS, Centre national de la recherche scientifique (CNRS).

 

The original version of this article was published on The Conversation France.

 


Seald: transparent cryptography for businesses

Since 2015, the start-up Seald has been developing a solution for the encryption of email communication and documents. Incubated at ParisTech Entrepreneurs, it is now offering businesses a simple-to-use cryptography service, with automated operations. This provides an alternative to the software currently on the market, which is generally hard to get to grips with.

 

Cybersecurity has become an essential issue for businesses. Faced with the growing risk of data hacking, they must find defense solutions. One of these is cryptography, which allows businesses to encrypt data so that a malicious hacker attempting to steal it would not be able to read it. This is what is offered by the start-up Seald, founded in 2015 in Berkeley, USA, which, after a period in San Francisco in 2016, is now incubated at ParisTech Entrepreneurs. Its defining feature? Its solution is totally transparent for all the employees of the business.

“There are already solutions on the market, but they require you to open software and carry out a procedure that can take dozens of clicks just to encrypt a single email”, says Timothée Rebours, co-founder of the start-up. In contrast, Seald is a lot simpler and faster to use. When a user sends an email, a simple icon appears on the messenger interface which can be ticked to encrypt the message. It is then guaranteed that neither the content nor any attachments are readable, should the message be intercepted.

If the receiver also has Seald, communication is encrypted at both ends, and messages and documents can be read in an equally transparent way. If they do not have Seald, they can install it for free. However, this is not always possible if the policy of the receiver’s firm prohibits the installation of external applications on workstations. In this case, an online two-factor identification system using a code received via SMS or email allows them to authenticate themselves and subsequently read the document securely.

For the moment, Seald can be used with the more recent email services, such as Gmail and Outlook. “We are also developing specific implementations for companies using internal messaging services”, explains Timothée Rebours. The start-up’s aim is to cover all possible email applications. “In this way, we are responding to a usage that corresponds to real problems within businesses”, explains the co-founder, before adding: “Once we have finished what we are currently working on, we will then start on integrating into other kinds of messaging, but probably not before.”

 

Towards automated and personalized cryptography

Seald is also hoping to improve its design, which currently requires people sending emails or documents to check a box. The objective is to make forgetting to encrypt as unlikely as possible. The ideal would therefore be automatic encryption specific to the sender, the document being sent and the receiver. Seald is working towards this goal by offering many features to the managers of IT systems within businesses.

Administrators already have several parameters in place that they can use to automate data encryption. For example, they can decide to encrypt all messages sent from a company email address to the email addresses of another business. Using this method, if company A starts a project with company B, all emails sent between a company A email address and a company B email address would be encrypted by default. The security of communications is therefore no longer left in the hands of the employees working on the project, which means they can’t forget to encrypt their documents, saving them valuable time.
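A minimal sketch of such a domain-pair rule, in Python, might look as follows; the rule format and function names are invented for illustration and do not reflect Seald’s actual API.

```python
# Hypothetical admin policy: force encryption between two partner domains.
RULES = [{"from_domain": "company-a.com", "to_domain": "company-b.com"}]

def must_encrypt(sender: str, recipients: list) -> bool:
    sender_domain = sender.split("@")[1]
    return any(
        sender_domain == rule["from_domain"]
        and recipient.split("@")[1] == rule["to_domain"]
        for rule in RULES
        for recipient in recipients
    )

print(must_encrypt("alice@company-a.com", ["bob@company-b.com"]))  # True
print(must_encrypt("alice@company-a.com", ["carol@other.org"]))    # False
```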

The start-up is pushing the features offered to IT administrators even further. It allows them to associate each document type with a revocation condition. Encrypted information sent to a third-party company – such as a consulting or communication firm – can be made unreadable after a certain time, for example at the end of a contract. The administrator can also revoke a device’s or user’s rights of access to the encrypted information, for instance when a person leaves the company on bad terms.

By offering businesses this solution, Seald is changing companies’ perceptions of cryptography, with easy-to-understand functionalities. “Our aim has always been to offer encryption to the masses”, assures Timothée Rebours. Reaching the employees of businesses could be the first step towards raising public awareness of cybersecurity and data protection in communications.


The fundamentals of language: why do we talk?

Human language is a mystery. In a society where information is so valuable, why do we talk to others without expecting anything in return? Even more intriguing are the processes that shape communication, whether a profound debate or a spontaneous conversation with an acquaintance. These are the questions driving the current work of Jean-Louis Dessalles, a researcher in computer science at Télécom ParisTech. His work has led him to reconsider the perspective on information adopted by Claude Shannon, a pioneer in the field. He has devised original theories and conversational models which explain trivial discussions just as well as heated debates.

 

Why do we talk? And what do we talk about? Fueled by the optimism of a young researcher, Jean-Louis Dessalles hoped to answer these two questions within a few months of finishing his thesis in 1993. Nearly 24 years have now passed, and the subject of his research has not changed. From his office in the Computing and Networks department at Télécom ParisTech, he continues to study language. His work breaks away from the classic approach adopted by researchers in information science and communication. “The discipline mainly focuses on the ways we can convey messages, but not on what is conveyed or why”, he explains, departing from the approach to communication described by Claude Shannon in 1948.

The reasons for communication, along with the underlying motives for these information exchanges, are nevertheless very legitimate and complex questions. As the researcher explains in the film Le Grand Roman de l’Homme, released in 2014, communication contradicts various behavioral theories. Game theory, for example, sometimes used in economics to describe and analyze behavioral mechanisms, struggles to justify the role of communication between humans. According to this theory, if all information has value, the expected communication situation would consist in each participant providing the minimum information possible whilst trying to glean the maximum from the other person. However, this is not how humans behave in everyday discussions. “We need to consider the role of communication in a social context”, deduces Jean-Louis Dessalles.

By dissecting the scientific elements of communication situations (e.g. interviews, attitudes in online forums, discussions), he has tried to explain why people offer up useful information. The hypothesis he puts forward today is compatible with all observable communication types: for him, offering up quality information is motivated not by economic gain, as game theory assumes, but rather by a gain in social reputation. “In technical online forums, for example, experts don’t respond out of altruism, or for monetary gain. They are competing to give the most complete response in order to assert their status as an expert. In this way they gain social significance”, explains the researcher. Talking, and showing our ability to stay informed, is therefore a way of positioning ourselves in a social hierarchy.

 

When the unexpected liberates language

With the question of “why do we talk” cleared up, we still need to find out what it is we talk about. Jean-Louis Dessalles isn’t interested in the subject of discussions per se, but rather in the general mechanisms governing the act of communication. After analyzing tens of hours of recordings in detail, he has come to the conclusion that a large part of spontaneous exchange is structured around the unexpected. The triggers of spontaneous conversation are often events that humans would consider unlikely or abnormal, in other words, moments when the normality of a situation is broken. Seeing a person over 2m tall, a series of cars of the same color all parked in a row, or a lottery draw where all the numbers follow on from one another: these are all instances likely to provoke surprise in an individual, and to encourage them to engage in spontaneous conversation with an interlocutor.

In order to explain this engagement based on the unexpected, Jean-Louis Dessalles has developed Simplicity Theory. According to him, the unexpected corresponds above all to situations that are simple to describe. He says “simple” because it is always easy to describe an out-of-the-ordinary situation: the focus can simply be placed on the unexpected feature. For example, describing a person who is 2m tall is easy because this criterion alone is enough to single them out. In contrast, a person of normal height and weight, with standard clothes and a face with no particular distinctive features, would require a much more complex description to be identified successfully.
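In Dessalles’ publications on Simplicity Theory, this intuition is formalized as a complexity drop; stated informally (the notation here is a paraphrase, not a quotation from his papers):

U = C_generation - C_description

where C_generation is the complexity for the “world” to produce the situation and C_description is the complexity of describing it. A situation is unexpected, and hence worth telling, when it is much easier to describe than to produce: the 2m-tall passer-by has a tiny C_description, hence a large U.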

Although simplicity may be a driver of spontaneous conversation, another significant category of discussion also exists: argumentative conversation. In this case, the unexpected no longer applies. This kind of exchange follows a model defined by Jean-Louis Dessalles, called CAN (Conflict, Abduction, Negation). “To start an argument, there has to be a conflict, opposing points of view. Abduction is the following stage, which consists in reasoning back to the cause of the conflict in order to shift it and deploy arguments. Finally, negation allows the participants to explore counterfactuals in order to reflect on solutions that would resolve the conflict.” Beyond this simple description, the CAN model could help advance the development of artificial intelligence (see text box).

 

[box type=”shadow” align=”” class=”” width=””]

When artificial intelligence looks at language theories

“Machines should be able to have a reasonable conversation in order to appear intelligent”, assures Jean-Louis Dessalles. For the researcher, the test invented by Alan Turing, which deems a machine intelligent if a human can’t tell the difference between it and another human when having a conversation, is completely legitimate. His work therefore has a place in the development of artificial intelligence able to pass this test, since understanding human communication mechanisms is essential to transferring them to machines. A machine integrating the CAN model would be better able to debate with a human. In the case of a GPS, it would allow the device to plan routes that incorporate factors other than simply time or distance. Discussing with a GPS what we expect from a journey – beautiful scenery, for example – in a logical manner would significantly improve the quality of the human-machine interface.

[/box]

 

In the hours of conversation recorded by the researcher, spontaneous discussions induced by unexpected elements accounted for 25% of exchanges, and arguments for 75%. He notes, however, that the line separating the two is not strict, since a spontaneous narrative can lead to a more profound debate, tipping the conversation into the territory of the CAN model. These results offer a response to the question “what do we talk about?” and consolidate years of research. For Jean-Louis Dessalles, it is proof that “it pays to be naïve”. The boldness of his early ambitions eventually led him to theorize, over the course of his career, several models on which humans base their communication, and probably will for a long time to come.

[author title=”Jean-Louis Dessalles, computer scientist, human language specialist” image=”https://imtech-test.imt.fr/wp-content/uploads/2017/09/JL_Dessalles_portrait_bio.jpg”]A graduate of École Polytechnique and Télécom ParisTech, Jean-Louis Dessalles became a researcher in computing after obtaining his PhD in 1993. The link to questions about human language and its origins, normally the preserve of linguists or ethnologists, is therefore not an obvious one. “I chose to focus on this subject using the resources I had available to me, which were computer sciences”, he argues.

His research contradicts the probabilistic approach of Claude Shannon, and this is how he presented it at a conference at the Institut Henri Poincaré in October 2016, for the centenary of the father of information theory.

His reflections on information have been the subject of a book, Le fil de la vie, published by Odile Jacob in 2016. He is also the author of several books about the question of language emergence. [/author]

 


GreenTropism, the start-up making matter interact with light

The start-up GreenTropism, a specialist in spectroscopy, was awarded an interest-free loan by the Fondation Mines-Télécom last June. It hopes to use this to reinforce its R&D and develop its sales team. Its technology is based on machine learning and is intended for both industrial and academic use, with application prospects ranging from the environment to the IoT.

 

Is your sweater really cashmere? What is the protein and calorie content of your meal? The answers to these questions may come from one single field of study: spectroscopy. Qualifying and quantifying matter is at the heart of the mission of GreenTropism, a start-up incubated at Télécom SudParis. To do this, its innovators use spectroscopy. “The discipline studies interactions between light and matter”, explains Anthony Boulanger, CEO of GreenTropism. “We all do spectroscopy without even knowing it, because our eyes actually work as spectrometers: they are light-sensitive and send out signals which are then analyzed by our brains. At GreenTropism, we play the role of the brain for classic spectrometers, using spectral signatures, algorithms and machine learning.”

The old becoming the new

GreenTropism builds on two techniques first implemented in the 1960s: spectroscopy and machine learning. Getting to grips with the first requires an acute knowledge of what a photon is and how it interacts with matter. Depending on the kind of light used (X-rays, ultraviolet, visible, infrared, etc.), the spectral responses differ, and depending on what we want to observe, one type of radiation will be more or less suitable. UV rays detect, amongst other things, organic molecules with aromatic rings, whilst near infrared allows the assessment of water content, for example.

The machine learning element is managed by data scientists working hand in hand with geologists and biochemists from the R&D team at GreenTropism. “It’s important to fully understand the subject we are working on and not simply process data”, specifies Anthony Boulanger. The start-up has been developing machine learning in order to process several types of spectral data. “Early on, we set up an analysis lab within Irstea. Here, we assess samples with high-resolution spectrometers. This allows us to supplement our database and therefore create our own algorithms. In spectroscopy, there is a great variety of data, coming from the environment (wood, compost, waste, water, etc.), from agriculture, from cosmetics, and so on. We can study all types of organic matter”, explains the innovator.
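For readers wondering what “machine learning on spectra” looks like in practice, here is a minimal sketch using synthetic data and PLS regression, a classic chemometrics baseline. GreenTropism’s own algorithms are not public; this only illustrates the task of mapping a spectrum to a property such as water content.

```python
# Learn to predict a property (here, synthetic "water content") from spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
spectra = rng.normal(size=(200, 500))          # 200 samples x 500 wavelengths
# Pretend one spectral band carries the signal of interest, plus noise.
water = spectra[:, 120] * 2.0 + rng.normal(scale=0.1, size=200)

model = PLSRegression(n_components=5).fit(spectra[:150], water[:150])
print(model.score(spectra[150:], water[150:]))  # R^2 on held-out spectra
```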

GreenTropism’s expertise goes even further. Its deep understanding of infrared, visible and UV radiation, as well as of laser techniques (LIBS, Raman), allows it to provide a software platform and agnostic models: adjustable to various types of radiation and independent of the spectrometer used. Anthony Boulanger adds: “our system allows results to be obtained in real time, whereas traditional analyses in a lab can take several hours, or even days.”

[box][one_half]

A miniaturized spectrometer.

[/one_half][one_half_last]

A traditional spectrometer.

[/one_half_last] [/box]

Credits: Share Alike 2.0 Generic

Real-time analysis technology for all levels of expertise

“Our technology consists of a machine learning platform for creating spectrum interpretation models. In other words, it’s software that transforms a spectrum into a value of interest to a manufacturer that has already mastered spectrometry. This allows them to achieve an operational result, since in this way they can control and improve the overall quality of their process”, explains the CEO of GreenTropism. By using a traditional spectrometer with the GreenTropism software, a manufacturer can, for example, verify the quality of a raw material at the time of delivery and ensure that its specification is fulfilled. Continuous analysis also enables the entire production chain to be monitored in real time and in a non-destructive way. As a result, all finished products, as well as those undergoing transformation, can be analyzed systematically. In this case, the objective is to characterize the material of a product: it is used, for example, to distinguish between materials or between two species of wood. GreenTropism also receives support from academic partnerships, such as with Irstea or Inra. These partnerships allow it to extend its fields of expertise whilst deepening its understanding of matter.

GreenTropism’s technology is also aimed at novices wanting to instantly analyze samples. “In this case, we rely on our lab to build a database proactively, before putting the machine learning platform in place”, adds Anthony Boulanger. Here, it is a question of qualifying matter: a direct application is obtaining details about the composition of an element, such as the nutritional content of a food item. “The needs linked to spectroscopy are still loosely defined when it comes to organic matter. We can measure broad parameters, such as the ripeness of a piece of fruit, as well as other, more specific details, such as the quantity of glucose or sucrose a product contains.”

Towards the democratization of spectroscopy

The fields of application are vast: environment, industry, the list goes on. But GreenTropism’s technology also lends itself to general public use through the Internet of Things, consumer electronics and household appliances. “The advantage of spectroscopy is that no close contact between the light source and the matter is needed. This allows for potential combinations between everyday devices and spectrometers, where the user doesn’t have to worry about technical aspects such as calibration. Imagine coffee machines that let you select the caffeine level in your drink. We could also monitor the health of our plants through our smartphone”, explains Anthony Boulanger. This last usage would function like a camera: after a flash of light is emitted, the program receives a spectral response. Rather than a photograph, the user would, for example, learn the water level in their flower pot.

In order to make these functions possible, GreenTropism is working on the miniaturization of spectrometers. “Today, spectrometers in labs are 100% reliable. A new, so-called ‘miniaturized’ (hand-held) generation is entering the market. However, these devices lack scientific publications attesting to their reliability, casting doubt on their value. This is why we are working on making this technology reliable at the software level. This is a market which opens up a lot of doors for us, including one which leads to the general public”, Anthony Boulanger concludes.


VIGISAT: monitoring and protection of the environment by satellite

Following on from our series on the platforms provided by the Télécom & Société numérique Carnot institute, we will now look at VIGISAT, based near Brest. This collaborative hub is also a project focusing on the satellite monitoring of oceans and continents in high resolution.

 

On July 12, scientists in Wales observed a drifting iceberg four times the size of London. The imposing block of ice broke away from the Antarctic ice shelf and is currently meandering around the Weddell Sea, where it has started to crack. Such close monitoring of icebergs is made possible by satellite images.

Although perhaps not directly behind this particular observation, the Breton observation station VIGISAT is heavily involved in maritime surveillance. It also gathers useful information for protecting marine and terrestrial environments. René Garello, a researcher at IMT Atlantique, presents the main issues.

 

What is VIGISAT?

René Garello: VIGISAT is a reception center for satellite data (radar sensors only) operated by CLS (Collecte Localisation Satellites) [1]. The station benefits from the expertise of the Groupement d’Intérêt Scientifique Bretagne Télédétection (BreTel) community, made up of nine academic members and partners from the socio-economic world. Its objective is to demonstrate the relevance of easily accessible data for developing methods of observing the planet. It serves both the research community (through its academic partners) and “end users” on the business side.

VIGISAT is also a project within the Breton CPER (Contrat de Plan État-Région) framework, which has been renewed to run until 2020. The station/project concept was named a platform by the Institut Carnot Télécom & Société Numérique at the end of 2014.

 

The VIGISAT station

 

What data does VIGISAT collect and how does it process this?

RG: The VIGISAT station receives data from satellites carrying Synthetic Aperture Radars (better known as SARs). This microwave sensor allows us to obtain very high resolution imaging of the Earth’s surface. The data received by the station come from the Canadian satellite RadarSat-2 and, in particular, from the new series of European satellites: SENTINEL. These are sun-synchronous satellites [NB: the satellite always passes over a given point at the same solar time], which orbit at an altitude of 800 km and circle the Earth in about 100 minutes.
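Those figures are consistent with Kepler’s third law; a quick back-of-the-envelope check in Python:

```python
# Period of a circular orbit 800 km above the Earth.
import math

MU = 3.986004418e14          # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000          # mean Earth radius, m
a = R_EARTH + 800_000        # semi-major axis for an 800 km altitude, m

period_s = 2 * math.pi * math.sqrt(a**3 / MU)
print(f"{period_s / 60:.0f} minutes")  # ~101 minutes, matching the quoted ~100
```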

We receive the raw information collected by the satellites; in other words, the data arrive as unprocessed bit streams. The data are then transmitted by optical fiber to the processing center, which is also located on the site. “Radar images” are then constructed from the raw information and the radar’s known parameters. The final data, although in image form, require expert interpretation. In simple terms, the radar wave emitted is sensitive to the properties of the observed surfaces: the nature of the ground (vegetation, bare surfaces, urban landscapes, etc.) returns its own characteristic energy signature. Furthermore, the information acquired depends on the measuring device’s intrinsic parameters, such as the wavelength or the polarization.

 

What scientific issues are addressed using VIGISAT data?

RG: CLS and researchers from the members of GIS BreTel are working on diverse and complementary issues. At IMT Atlantique and Rennes 1 University, we mainly focus on the methodological aspects. For example, for 20 years we have built up a high level of expertise in the statistical processing of images. In particular, this allows us to identify areas of interest in terrestrial images, or surface types on the ocean. More recently, we have been confronted with the sheer immensity of the data we collect. We therefore put machine learning, data mining and other algorithms in place in order to fully process these databases.

Other GIS institutions, such as Ifremer or IUEM [2], are working on marine and coastal topics, in collaboration with us. For example, research has been carried out on estuary and delta areas, such as the Danube. The aim is to quantify the effect of flooding and its persistence over time.

Finally, continental themes such as urban planning, land use, agronomy and ecology are mainly studied by Rennes 2 University and Agrocampus. In the case of urban planning, satellite observations allow us to locate and map the green urban fabric, which in turn allows us to estimate the allergenic potential of public spaces, for example. It should be noted that much of this work, which began as research, has led to the creation of several viable start-ups [3].

What projects has VIGISAT led to?

RG: Since 2010, VIGISAT’s privileged access to data has allowed it to support various other research projects. Indeed, it has created a lasting dynamic within the scientific community around the development, surveillance and controlled exploitation of territories. Among the projects currently underway are, for example, CleanSeaNet, which focuses on the detection and monitoring of marine pollution; KALIDEOS-Bretagne, which looks at changes in land and landscape occupation and use along a town-countryside gradient; and SESAME, which deals with the management and exploitation of satellite data for marine surveillance purposes.

 

Who is benefitting from the data analyzed by VIGISAT?

RG: Several targets were identified while preparing the CPER 2015-2020 support request. One of these objectives is to generate activity around the use of satellite data by Breton businesses. This includes developing new public services based on satellite imaging, in order to foster downstream services in line with a strategy of developing regional affiliates.

One sector that undoubtedly benefits from the data and their processing is the highly reactive socio-economic world (start-ups, SMEs, etc.) built on the uses we discussed earlier. On a larger scale, protection and surveillance services are also addressed through action coordinated between the developers and suppliers of a service, such as the GIS, and the authorities at regional, national and European levels. By way of example, BreTel has been a member of NEREUS (the Network of European Regions Using Space Technologies) since 2009. This allows the region to hold a strong position as a center of expertise in marine surveillance (including the detection and monitoring of oil pollution) and in the analysis of ecological corridors in the context of biodiversity.

[1] CLS, an affiliate of CNES, ARDIAN and Ifremer, is an international business that has specialized in supplying Earth observation and surveillance solutions since 1986.
[2] IUEM: European Institute for Marine Studies
[3] Some examples of these start-ups include e-ODYN, Oceandatalab, Hytech Imaging, Kermap, Exwews and Unseenlab.

[box type=”info” align=”” class=”” width=””]

On VIGISAT:

The idea for VIGISAT began in 2001 with the start-up BOOST Technologies, a spin-off from IMT Atlantique (formerly Télécom Bretagne). From 2005 onwards, proposals were made to various partners, including the Bretagne Region and the Brest Metropolis, to develop an infrastructure like VIGISAT on the campus close to Brest. Following BOOST Technologies’ merger with CLS in 2008, the project flourished with the creation of GIS BreTel in 2009. That same year, the VIGISAT project experienced further success when presented to the CPER. BreTel then grew its roadmap by adding the “research” strand, as well as the “training”, “innovation” and “promotion/dissemination” aspects. GIS BreTel is currently focusing on the “activity creation” and “new public services” strands, which are in tune with the philosophy of the Carnot platforms.

BreTel is also present at the European level. The GIS and its members have gained the title of “Copernicus Academy”, thanks to which they receive support from specialists in the European Copernicus program for all their education needs. From the end of 2017, BreTel and its partners will be participating in an ESA Business Incubation Centre (ESA-BIC) covering five regions in northern France (Brittany, Pays de la Loire, Ile-de-France, Hauts-de-France and Grand-Est), headed by the Brittany region.[/box]

[box type=”shadow” align=”” class=”” width=””]

The TSN Carnot institute, a guarantee of excellence in partnership-based research since 2006

 

Having first received the Carnot label in 2006, the Télécom & Société numérique Carnot institute is the first national “Information and Communication Science and Technology” Carnot institute. Home to over 2,000 researchers, it is focused on the technical, economic and social implications of the digital transition. In 2016, the Carnot label was renewed for the second consecutive time, demonstrating the quality of the innovations produced through collaborations between researchers and companies. The institute encompasses Télécom ParisTech, IMT Atlantique, Télécom SudParis, Télécom École de Management, Eurecom, Télécom Physique Strasbourg and Télécom Saint-Étienne, École Polytechnique (LIX and CMAP laboratories), Strate École de Design and Femto Engineering.[/box]


 

Brennus Analytics

Brennus Analytics: finding the right price

Brennus Analytics offers software solutions that make artificial intelligence available to businesses. Its algorithms determine the optimal sales price, bringing businesses closer to their objectives of gaining market share and margin while also satisfying their customers. Currently incubated at ParisTech Entrepreneurs, Brennus Analytics allows businesses to make well-informed decisions about their number one profitability lever: pricing.

 

Setting the price of a product can be a real headache for businesses. It is, however, a crucial step that can determine the success or failure of an entire commercial strategy. If the price is too high, customers won’t buy the product; too low, and the margin obtained is too weak to guarantee sufficient revenue. To help businesses find the right price, the start-up Brennus Analytics, incubated at ParisTech Entrepreneurs, offers software that makes artificial intelligence technology accessible to businesses. Founded in October 2015, the start-up builds on its founders’ experience in the field as former researchers at the Institut de Recherche en Informatique de Toulouse (IRIT).

The start-up is simplifying a task that can prove arduous and time-consuming for businesses. Hundreds, or even thousands, of factors have to be considered when setting prices. What is the customer willing to pay for the product? At what point in the year is demand greatest? Would a price drop have to be compensated for by an increase in volume? These are just a few simple examples of the complexity of the problem to be solved, not forgetting that each business also has its own set of rules and restrictions concerning prices. “A price should be set depending on the product or service, the customer, and the context in which the transaction or contractual negotiation takes place”, emphasizes Emilie Gariel, Marketing Director at Brennus Analytics.
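To make the volume-versus-margin trade-off concrete, here is a toy calculation under an assumed linear demand curve; the numbers and the demand model are purely illustrative, not Brennus’ method:

```python
UNIT_COST = 8.0  # assumed cost of producing one unit

def demand(price: float) -> float:
    """Assumed linear demand: volume falls as the price rises."""
    return max(0.0, 1000.0 - 40.0 * price)

# Sweep candidate prices: total margin peaks between the extremes.
for price in (10.0, 12.0, 15.0, 18.0):
    volume = demand(price)
    total_margin = (price - UNIT_COST) * volume
    print(f"price {price:5.2f} -> volume {volume:5.0f}, margin {total_margin:7.0f}")
# Margin peaks near 16.50: too cheap wastes margin, too dear kills volume.
```

Real catalogs replace this single demand curve with thousands of interacting factors, which is exactly where such tooling earns its keep.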

 

Brennus Analytics

 

To achieve this, the team at Brennus Analytics relies on its solid knowledge of pricing, combined with data science and artificial intelligence technology. The technology chosen depends on the problem to be solved: for statistical questions, machine learning, deep learning and similar technologies are used. For more complex cases, Brennus employs an exclusive technology called an “Adaptive Multi-Agent System” (AMAS). This works by representing each factor that needs to be considered as an agent. The optimal price is then obtained through an exchange of information between these agents, taking into account the objectives set by the business. “Our solution doesn’t try to replace human input, it simply provides valuable assistance in decision-making. This is also why we favor transparent artificial intelligence systems; it is crucial that the client understands the suggested price”, affirms Emilie Gariel.
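The interview does not detail how AMAS works internally, but the idea of agents that each defend one factor while negotiating a shared price can be caricatured in a few lines. A deliberately naive sketch, in no way the proprietary system:

```python
# Each agent stands for one factor and pulls the shared price toward
# its own target; the loop settles where the pressures balance.
agent_targets = {
    "margin_objective": 120.0,      # finance pushes the price up
    "competitor_price": 95.0,       # market pressure pulls it down
    "customer_willingness": 100.0,  # what the customer would accept
}

price = 110.0  # starting price
STEP = 0.1     # how strongly corrections are applied each round

for _ in range(200):
    corrections = [target - price for target in agent_targets.values()]
    price += STEP * sum(corrections) / len(corrections)

print(f"negotiated price: {price:.2f}")  # settles at ~105, the targets' mean
```

A real system would give the agents far richer behaviors and constraints; the point here is only the cooperative-adjustment mechanism.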

The data used to run these algorithms come from the businesses themselves. Most have a transaction history and a large quantity of sales data available. These databases can potentially be supplemented by open data. However, the Marketing Director at Brennus Analytics warns: “We are not a data provider. That said, there are several start-ups developing in the field of data mixing that can assist our clients if they are looking, for example, to collect the prices of competing products.” She is careful to add: “Always wanting more data doesn’t really make much sense. It’s better to find a middle ground between gathering internal data, which is sometimes limited, and joining the race to accumulate information.”

To illustrate Brennus’ solution, Emilie Gariel gives the example of a key player in the distribution of office supplies. “This client was facing intense pressure from its competition, and felt it had not always positioned itself well in terms of pricing”, she explains. Its prices were set on the basis of a margin objective per product category. This approach was too generic and too disconnected from the customer, leading to prices that were too high for popular products in this competitive market, and prices that were too low for products where customers’ price sensitivity was weaker. “The software enabled a price optimization with a strong impact on margin, by integrating dynamic product segmentation and pricing flexibility”, she concludes.

The capacity to clarify and then resolve complex problems is probably Brennus’ greatest strength. “Without an intelligent tool like ours, businesses are forced to over-simplify the problem. They consider fewer factors, basing prices simply on segments and other limited contexts. Their pricing is therefore often sub-optimal. Artificial intelligence, on the other hand, is able to work with thousands of parameters at the same time”, explains Emilie Gariel. The solution offers businesses several ways to increase their profitability by working on the different components of pricing (costs, reductions, promotions, etc.). In this way, it perfectly illustrates the potential of artificial intelligence to improve decision-making processes and profitability in businesses.