
Understanding the information in language with algorithms

Making algorithms understand what we are talking about

Human language contains different types of information. We understand all of it unconsciously, but explaining it systematically is much more difficult. The same is true for machines. The NoRDF Project Chair “Modeling and Extracting Complex Information from Natural Language Text” seeks to solve this problem: how can we teach algorithms to model and extract complex information from language? Fabian Suchanek and Chloé Clavel, both researchers at Telecom Paris, explain the approaches being explored by this new project.

What aspects of language are involved in making machines understand?

Fabian Suchanek: We need to make them understand more complicated natural language texts. Current systems can understand simple statements. For example, the sentence “A vaccine against Covid-19 has been developed” is simple enough to be understood by algorithms. On the other hand, they cannot understand sentences that go beyond a single statement, such as: “If the vaccine is distributed, the Covid-19 epidemic will end in 2021.” In this case, the machine does not understand that the distribution of the vaccine is the condition required for the Covid-19 epidemic to end in 2021. We also need to make machines understand the emotions and feelings associated with language; this is Chloé Clavel’s specialist area.
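
To see the difference concretely, here is a minimal sketch of flat fact extraction, using spaCy as a stand-in for a simple information extraction system (this is not the chair’s actual tooling, and parses may vary slightly across model versions):

```python
# pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_facts(text):
    """Naively extract one (subject, verb lemma) fact per verb."""
    facts = []
    for token in nlp(text):
        if token.pos_ == "VERB":
            for child in token.children:
                if child.dep_ in ("nsubj", "nsubjpass"):
                    facts.append((child.text, token.lemma_))
    return facts

print(extract_facts("A vaccine against Covid-19 has been developed."))
# roughly [('vaccine', 'develop')] -- the single fact is captured

print(extract_facts("If the vaccine is distributed, "
                    "the Covid-19 epidemic will end in 2021."))
# roughly [('vaccine', 'distribute'), ('epidemic', 'end')]
```

The extractor returns two disconnected facts for the second sentence; nothing in its output represents that one fact holds only if the other does, which is exactly the kind of structure these systems fail to capture.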

What are the preferred approaches in making algorithms understand natural language?

FS: We are developing “neurosymbolic” approaches, which combine symbolic approaches with deep learning. Symbolic approaches use logical rules implemented by humans to simulate human reasoning; for the type of data we process, it is essential to be able to interpret afterwards what the machine has understood. Deep learning is a type of machine learning in which the machine learns by itself. It offers greater flexibility in handling variable data and the ability to integrate more layers of reasoning.
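
As a toy illustration of the neurosymbolic idea (all names here are hypothetical, and the “neural” part is a trivial stand-in for a trained model):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Prediction:
    label: str
    score: float         # confidence from the neural component
    rule: Optional[str]  # which symbolic rule fired, kept for interpretability

def neural_score(sentence: str) -> float:
    """Stand-in for a deep model estimating P(sentence expresses a condition)."""
    return 0.9 if sentence.lower().startswith("if ") else 0.1

def classify(sentence: str) -> Prediction:
    score = neural_score(sentence)
    # Symbolic layer: an explicit, human-written rule constrains the neural
    # output, so we can inspect afterwards why a decision was made.
    if sentence.lower().startswith("if ") and "," in sentence:
        return Prediction("conditional", max(score, 0.95), "IF-clause rule")
    return Prediction("conditional" if score > 0.5 else "plain statement",
                      score, None)

print(classify("If the vaccine is distributed, the epidemic will end in 2021."))
print(classify("A vaccine against Covid-19 has been developed."))
```

The flexible neural component absorbs variable inputs, while the rule layer keeps every decision traceable to a human-readable justification.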

Where does the data you analyze come from?

FS: We can collect data from interactions between humans and company chatbots, in particular those of the project’s partner companies. We can also extract data from comments on web pages, forums and social networks.

Chloé Clavel: We can also extract information about feelings, emotions and social attitudes, especially in dialogues between humans, or between humans and machines.

What are the main difficulties for the machine in learning to process language?

CC: We have to create models that remain robust when contexts and situations change. For example, there is variability in how feelings are expressed from one individual to another: the same feeling may be put into very different words depending on the person. There is also a variability of contexts to take into account. For example, when humans interact with a virtual agent, they do not behave the same way as with a human, so it is difficult to compare data from these different sources of interaction. Yet if we want to move towards more fluid and natural human-agent interactions, we must draw inspiration from interactions between humans.

How do you know whether the machine is correctly analyzing the emotions associated with a statement?

CC: Most of the methods we use are supervised. The data fed into the models is annotated by humans in as objective a way as possible. Because the perception of an emotion can be very subjective, several annotators are asked to annotate the emotion they perceive in each text. The model is then trained on the data for which the annotators reached a consensus. To test the model’s performance, we inject an annotated text into a model trained on similar texts and check whether the annotation it produces is close to those determined by the humans.
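
A minimal sketch of this pipeline, with an invented toy dataset rather than the chair’s actual corpus, might look like this:

```python
from collections import Counter
from sklearn.metrics import cohen_kappa_score  # pip install scikit-learn

texts = ["I waited two hours for support!", "Thanks, that solved it."]
annotations = [
    ["anger", "anger", "sadness"],  # three annotators, text 1
    ["joy", "joy", "joy"],          # three annotators, text 2
]

def consensus(labels, threshold=2):
    """Keep a label only if at least `threshold` annotators agree."""
    label, count = Counter(labels).most_common(1)[0]
    return label if count >= threshold else None

gold = [consensus(a) for a in annotations]
train_set = [(t, g) for t, g in zip(texts, gold) if g is not None]
print(train_set)  # only texts with an agreed-upon label are kept for training

# At test time, compare the trained model's predictions with the human
# labels; Cohen's kappa corrects the agreement score for chance.
model_predictions = ["anger", "joy"]  # placeholder for real model output
print(cohen_kappa_score([g for _, g in train_set], model_predictions))
```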

Since the annotation of emotions is particularly subjective, it is important to determine how the model has actually understood the emotions and feelings present in the text. There are many biases in the representativeness of the data that can interfere with the model and mislead us about the interpretation made by the machine. For example, if in our data younger people are angrier than older people, and these two categories do not express themselves in the same way, the model may end up simply detecting the age of the individuals rather than the anger associated with their comments.
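
One simple way to probe for such a confound, sketched below with invented data, is to break model accuracy down by the suspected confounding attribute:

```python
from collections import defaultdict

# (true emotion, model prediction, age group of the author) -- invented data
results = [
    ("anger",   "anger",   "young"),
    ("anger",   "anger",   "young"),
    ("neutral", "neutral", "older"),
    ("anger",   "neutral", "older"),  # angry text by an older author: missed
    ("neutral", "anger",   "young"),  # neutral text by a young author: flagged
]

per_group = defaultdict(lambda: [0, 0])  # group -> [correct, total]
for gold, pred, group in results:
    per_group[group][0] += int(gold == pred)
    per_group[group][1] += 1

for group, (correct, total) in per_group.items():
    print(f"{group}: {correct}/{total} correct")
# If "anger" is only ever predicted for young authors, the model has likely
# learned age-correlated style rather than anger itself.
```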

Is it possible that the algorithms end up adapting their speech according to perceived emotions?

CC: Research is being conducted on this aspect. Chatbots’ algorithms must be relevant in solving the problems they are asked to solve, but they must also be able to provide a socially relevant response (e.g. to the user’s frustration or dissatisfaction). These developments will improve a range of applications, from customer relations to educational or support robots.
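
In its simplest form, such adaptation could be a policy that conditions the framing of a reply on the detected emotion. The sketch below is purely illustrative, with detect_emotion standing in for a trained classifier:

```python
def detect_emotion(utterance: str) -> str:
    """Placeholder for a trained emotion/sentiment classifier."""
    return "frustration" if "!" in utterance or "still" in utterance else "neutral"

def respond(utterance: str, factual_answer: str) -> str:
    # The factual content is fixed; only the social framing adapts.
    if detect_emotion(utterance) == "frustration":
        return "I'm sorry for the inconvenience. " + factual_answer
    return factual_answer

print(respond("My order still hasn't arrived!",
              "It is scheduled for delivery tomorrow."))
```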

What contemporary social issues are associated with the understanding of human language by machines?

FS: It would notably allow a better understanding of how people perceive news on social media and of how fake news works, and therefore, more generally, of which social group is sensitive to which type of discourse and why. The underlying reasons why different individuals adhere to different types of discourse are still poorly understood today. Beyond the emotional aspect, there are different ways of thinking that are built within argumentative bubbles that do not communicate with each other.

To automate the understanding of human language and exploit the abundant data associated with it, it is therefore important to take as many dimensions into account as possible: the purely logical aspect of what sentences state, as well as the emotions and feelings that accompany them.

By Antonin Counillon