The unintelligence of artificial intelligence
Despite the significant advances made by artificial intelligence, it still struggles to replicate human intelligence. Artificial intelligence remains focused on performing tasks without understanding the meaning of its actions, and its limitations become evident when the context changes or when it has to scale up. Jean-Louis Dessalles outlines these problems in his latest book, Des intelligences très artificielles (Very Artificial Intelligence). The Télécom Paris researcher also suggests avenues of investigation for creating truly intelligent artificial intelligence. He presents some of his ideas in the following interview with I’MTech.
Can artificial intelligence (AI) understand what it is doing?
Jean-Louis Dessalles: It has happened, yes. That was the case with the SHRDLU program, for example, invented by Terry Winograd during his thesis at MIT in 1970. The program simulated a robot that could stack blocks while talking about what it was doing. It was incredible, because it was able to justify its actions. After making a stack, the researchers could ask it why it had moved the green block, which they had not asked it to move. SHRDLU would reply that it was to make space in order to move the blocks around more easily. This was almost 50 years ago, and it has remained one of the rare cases of a program able to understand its own actions. These days, the majority of AI programs cannot explain what they are doing.
Why is this an isolated case?
JLD: SHRDLU was very good at explaining how it stacked blocks in a virtual world of cubes and pyramids. When the researchers wanted to scale the program up to a more complex world, it was considerably less effective. This type of AI became something able to carry out a given task without being able to understand it. Recently, IBM released Project Debater, an AI program that can compete in debating competitions. It is very impressive, but if we analyze what the program is doing, we realize it understands very little. The program searches the Internet, extracts phrases which are logically linked, and puts them together into an argument. When the audience listens, it has the illusion of a logical construction, but it is simply a compilation of phrases from a superficial analysis. The AI program does not understand the meaning of what it says.
IBM’s Project Debater speaking on the statement “We should subsidize preschools”
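To make this description concrete, here is a deliberately naive sketch of argument-by-retrieval: candidate sentences are ranked purely by word overlap with the topic and concatenated. The corpus, scoring function and topic handling are invented for illustration (IBM's actual pipeline is far more elaborate); the point is that this kind of surface-level selection carries no model of meaning.

```python
# Naive "argument by retrieval": sentences are chosen by keyword overlap
# with the debate topic and stitched together. Nothing here models what
# the sentences mean. (Corpus and scoring are purely hypothetical.)

corpus = [
    "Subsidies for preschools can widen access to early education.",
    "Children who attend preschools often show better school readiness.",
    "Large galaxies contain billions of stars.",
    "We should expect early education to reduce later social costs.",
]

def keyword_overlap(topic: str, sentence: str) -> int:
    """Count shared lowercase words -- a purely surface-level signal."""
    topic_words = set(topic.lower().split())
    return len(topic_words & set(sentence.lower().split()))

def assemble_argument(topic: str, k: int = 3) -> str:
    # Rank sentences by overlap and simply concatenate the top k.
    ranked = sorted(corpus, key=lambda s: keyword_overlap(topic, s), reverse=True)
    return " ".join(ranked[:k])

print(assemble_argument("We should subsidize preschools"))
```

The output can look like a coherent case for the motion, yet the program has only counted shared words; replace the topic with nonsense containing the same vocabulary and it will argue just as confidently.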
Does it matter whether AI understands, as long as it is effective?
JLD: Systems that don’t understand end up making mistakes that humans wouldn’t make. Automated translation systems are extremely useful, for example, yet they can make mistakes on simple words because they do not grasp implicit meaning, even where a child could understand it from the context. The AI behind these programs is very effective as long as it remains within a given context, like SHRDLU. As soon as you put it into an everyday situation, where it needs to take context into account, it turns out to be limited, because it does not understand the meaning of what we are asking it.
Are you saying that artificial intelligence is not intelligent?
JLD: There are two fundamental, opposing visions of AI these days. On the one hand, there is a primarily American view which focuses on performance; on the other, Turing’s view: if an AI program cannot explain what it is doing or interact with me, I will not call it “intelligent”. From a utilitarian point of view, the first vision is successful in many ways, but it comes up against major limitations, especially in problem-solving. Take the example of a connected building or house. AI can make optimal decisions, but if those decisions are incomprehensible to humans, they will consider the AI stupid. We want machines to be able to reason sequentially, like us: I want to do this, so I have to change that; and if that creates another problem, I will then change something else. The machine’s multi-criteria optimization sets all the parameters at the same time, which is incomprehensible to us. It will certainly be effective, but ultimately the human will be the one judging whether the decision is appropriate or not, according to their values and preferences, including their desire to understand the decision.
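The contrast can be pictured with a toy sketch for a hypothetical connected building with two settings; the cost function and values below are invented for illustration. Joint optimization returns one answer with no rationale, while the one-change-at-a-time loop produces the kind of trace a person can follow.

```python
# Two decision styles for an invented building controller with two settings.
from itertools import product

def cost(heating: int, blinds: int) -> float:
    """Hypothetical discomfort-plus-energy cost to be minimized."""
    return abs(21 - (18 + heating - blinds)) + 0.5 * heating

# Multi-criteria style: every combination is scored at once; the answer
# arrives without any human-readable chain of reasoning.
best = min(product(range(6), range(4)), key=lambda p: cost(*p))
print("joint optimum:", best)

# Sequential, human-style adjustment: change one setting at a time and
# say why, leaving an explainable trace.
heating, blinds = 0, 0
for _ in range(5):
    if cost(heating + 1, blinds) < cost(heating, blinds):
        heating += 1
        print(f"room too cold -> raise heating to {heating}")
    elif cost(heating, blinds + 1) < cost(heating, blinds):
        blinds += 1
        print(f"too much glare -> lower blinds to {blinds}")
    else:
        break
print("sequential result:", (heating, blinds))
```

In this toy case both styles land on the same settings; the difference lies entirely in whether the human can reconstruct why.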
Why can’t a machine understand the meaning of the actions we ask of it?
JLD: Most of today’s AI programs are based on digital techniques, which do not address the issue of representation. If I have a problem, I set the parameters and variables, and the neural network gives me a result based on a calculation I cannot understand. There is no way of incorporating concepts or meaning. There is also work being done on ontologies, where meaning is represented in the form of preconceived structures in which everything is explicit: a particular idea or concept is paired with a linguistic entity. For example, to give a machine the meaning of the word “marriage”, we associate it with a conceptual description based on a link between a person A and a person B, and the machine will discover for itself that there is geographical proximity between these two people, that they live in the same place, and so on. Personally, I don’t believe that ontologies will bring us closer to an AI which understands what it is doing, and thus one that is truly intelligent in Turing’s sense.
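As a rough illustration of how explicit such a representation has to be, here is a minimal sketch of an ontology-style entry for “marriage”. The slot names are hypothetical and do not come from any standard ontology language; the point is that only what is written down exists for the machine.

```python
# A hand-built, fully explicit concept entry -- the ontology style of
# representation described above, in miniature.
from dataclasses import dataclass, field

@dataclass
class Concept:
    name: str
    relations: dict = field(default_factory=dict)

# "Marriage" as an explicit link between a person A and a person B,
# with each associated fact spelled out by hand.
marriage = Concept(
    name="marriage",
    relations={
        "links": ("person_A", "person_B"),
        "shared_residence": True,      # geographical proximity made explicit
        "legal_status": "registered",
    },
)

# Anything not encoded -- the values, beliefs and subtleties people attach
# to the concept -- is simply absent.
print(marriage.relations.get("shared_residence"))
```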
What do you think is the limitation of ontologies?
JLD: They too have difficulty scaling up. For the example of marriage, the challenge lies in giving the machine the full range of meaning that humans attribute to this concept. Depending on an individual’s values and beliefs, their idea of marriage will differ. Making AI understand this requires constructing representations that are complex, sometimes too complex. Humans understand a concept and its subtleties very quickly, from very little initial description. Nobody spends hours on end teaching a child what a cat is. The child learns alone by observing just a few cats and finding what they have in common. For this, we use special cognitive mechanisms, including the search for simplicity, which enables us to reconstruct the missing part of a half-hidden object, or to pick out the intended sense of a word that can have several meanings.
What does AI lack in order to be truly intelligent and acquire this implicit knowledge?
JLD: Self-observation requires contrast, which is something AI lacks. The meaning of words changes with time and depending on the situation. If I say to you, “put this in the closet”, you will know which piece of furniture to turn to, even though the closet in your office and the one in your bedroom do not look alike, either in their shape or in what they contain. This is what allows us to understand very vague concepts like the word “big”. I can talk about “big bacteria” or a “big galaxy” and you will understand me, because you know that the word “big” does not have an absolute meaning. It is based on a contrast between the designated object and the typical corresponding object, depending on the context. Machines do not yet know how to do this. They would recognize the word “big” as a characteristic of the galaxy, but using “big” to describe bacteria would make no sense to them, for example. They need to be able to make contrasts.
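One way to picture the missing mechanism is a sketch in which “big” is judged against the typical size of the category rather than on an absolute scale. The typical sizes and the threshold below are rough orders of magnitude chosen for illustration only.

```python
# Contrast-based reading of "big": compare the object to the typical
# member of its category, not to an absolute yardstick.

TYPICAL_SIZE_M = {
    "bacterium": 2e-6,   # a couple of micrometres
    "galaxy": 1e21,      # on the order of 100,000 light-years
}

def is_big(category: str, size_m: float, factor: float = 3.0) -> bool:
    """'Big' here means noticeably larger than the typical member of the category."""
    return size_m > factor * TYPICAL_SIZE_M[category]

print(is_big("bacterium", 1e-5))   # True: enormous for a bacterium
print(is_big("galaxy", 1e-5))      # False: absurdly small for a galaxy
```

The same measurement counts as “big” or not depending on the category it is contrasted with, which is exactly the relative judgement the word encodes.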
Is this feasible?
JLD: Quite likely, yes, but we would have to augment digital techniques to do so. AI designers are light years away from addressing this type of question. What they want to figure out is how to improve the performance of their multi-layer neural networks. They do not see the point of striving towards human intelligence. IBM’s Project Debater is a perfect illustration of this: it is above all about classification, with no ability to make contrasts. On the face of it, it is very impressive, but it is not as powerful as human intelligence, with its cognitive mechanisms for extrapolating and making arguments. The IBM program contrasts phrases according to the words they contain, while we contrast them based on the ideas they express. In order to be truly intelligent, AI will need to go beyond simple classification and attempt to reproduce, rather than mimic, our cognitive mechanisms.