
Are artificial and human intelligence comparable? – Polytechnique Insights

Artificial intelligence (AI) is changing the world as we know it. It reaches into every part of our lives, with goals that are more or less desirable and more or less ambitious. Inevitably, AI is compared with human intelligence (HI). This comparison does not come out of nowhere: it can be explained by historical dynamics deeply rooted in the AI project.

A comparison that is not new

AI and HI have co-evolved as fields of study. Since the beginnings of modern computer science, two approaches have stood apart: advancing in parallel with human intelligence, or in deliberate ignorance of it. "The founders of AI split into two camps. On the one hand, those who wanted to analyze human mental processes and reproduce them on a computer, each mirroring the other, so that the two endeavors fed off one another. On the other hand, those who saw HI as more of a limitation than an inspiration. This movement was interested in problem solving, that is, in the result rather than the process," recalls Daniel Andler.

Our tendency to compare AI and HI in so many writings is therefore not a passing fad, but part of the history of AI. What is symptomatic of our time is the tendency to assimilate everything in the digital sphere to AI: "Today we call all of computing AI. We need to go back to the basics of the discipline to understand that an AI is a concrete tool, defined by the computation it performs and the type of task it solves. If the task seems to require a human capability, we ask whether the tool can be called intelligent. That, essentially, is AI," explains Maxime Amblard.

The two branches of the historical tree

The two trends mentioned above gave rise to two main categories of AI:

  • symbolic AI, based on rules of logical reasoning and drawing little on human perception
  • connectionist AI, based on neural networks inspired by human cognition

Maxime Amblard takes us back to the context of the time: "In the middle of the 20th century, the computing capacity of computers was tiny compared to today. So the reasoning was: for systems to be intelligent, the computation must contain expert knowledge, encoded beforehand in the form of rules and symbols. At the same time, other researchers were more interested in how expertise might emerge. The question then becomes: how can we build a probability distribution that gives a good account of how the world works? From there, we understand why these approaches exploded once the availability of data, storage, and computing capacity increased radically."
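To make the contrast concrete, here is a minimal, purely illustrative sketch in Python (not taken from the article; the toy task, names, and data are invented): the same question, "is this message spam?", handled first in the symbolic style, with expertise hand-coded as a rule, then in the statistical style Amblard describes, with probabilities estimated from examples.

    from collections import Counter

    # Symbolic style: expert knowledge encoded up front as an explicit rule.
    def spam_rule(text: str) -> bool:
        suspicious = {"free", "winner", "prize"}
        return any(word in text.lower() for word in suspicious)

    # Statistical style: "expertise" emerges from data as estimated
    # probabilities P(spam | word), computed from observed frequencies.
    def fit_word_probs(examples: list[tuple[str, bool]]) -> dict[str, float]:
        spam_counts, total = Counter(), Counter()
        for text, is_spam in examples:
            for word in text.lower().split():
                total[word] += 1
                if is_spam:
                    spam_counts[word] += 1
        return {word: spam_counts[word] / total[word] for word in total}

    probs = fit_word_probs([("win a free prize", True), ("meeting at noon", False)])
    print(spam_rule("You are a winner"))  # True: matched by the hand-written rule
    print(probs["free"])                  # 1.0: estimated entirely from the toy data

The rule only changes when an expert rewrites it, whereas the estimated probabilities change with every new batch of examples, which is exactly the dependence on data, storage, and computing capacity that the quotation highlights.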

To illustrate the historical development of these two branches, Maxime Amblard uses the metaphor of two skis sliding forward one after the other: "When computing power was scarce, probabilistic models stayed in the background in favor of symbolic models. We are currently witnessing a peak of connectionist AI thanks to its revolutionary results. Nevertheless, the problem of explaining those results leaves the way open for hybrid systems (connectionist and symbolic) that can trace results back to knowledge, by contrast with classical probabilistic approaches."


For her part, Annabelle Blangero points out that today "there is a debate about whether expert systems really count as AI, since we tend to regard as AI only systems that involve machine learning". Nevertheless, Daniel Andler notes that one of the leading minds in AI, Stuart Russell, remains very attached to symbolic AI. Maxime Amblard concurs: "Perhaps my view is too influenced by the history and epistemology of AI, but I think that, to qualify something as intelligent, it matters more to ask how what the computation produces will change the world than to focus on the type of tool used."

Does the machine resemble us?

After this historical and definitional detour, the question arises: are AI and HI two sides of the same coin? Before sketching an answer, we must examine the methodological framework that makes the comparison possible in the first place. For Daniel Andler, "functionalism is the framework par excellence within which the question of comparison arises, provided we call the combined result of cognitive functions 'intelligence'." Yet something is almost certainly still missing for AI to come as close as possible to human intelligence in time and space. "Historically, it was John Haugeland who developed this idea of a missing ingredient in AI. We often think of consciousness, intentionality, autonomy, emotions, or even the body," explains Daniel Andler.

Indeed, AI appears to lack consciousness and the mental states that go with it. For Annabelle Blangero, this missing ingredient is merely a question of technical means: "I come from a neuroscientific school of thought in which consciousness is assumed to arise from the constant evaluation of the environment and the associated sensorimotor responses. On that principle, reproducing human multimodality in a robot should produce the same properties. Today, the architecture of connectionist systems reproduces fairly closely what happens in the human brain. Moreover, we use similar measures of activity in biological and artificial neural networks."

However, Daniel Andler emphasizes: "Today there is no single theory that explains human consciousness. The question of its origin remains largely open and is the subject of many debates in the scientific and philosophical community." For Maxime Amblard, the fundamental difference lies in the drive to make meaning: "People construct explanatory models of what they perceive. We are true meaning-making machines."

The thorny question of intelligence

Despite these arguments, the question of how close AI comes to HI remains unresolved. The problem is above all conceptual, and it concerns the way we define intelligence.

A classic definition would describe intelligence as a set of skills that enable problem solving. In his recent book, "Artificial Intelligence, Human Intelligence: The Double Enigma," Daniel Andler offers an alternative, elegant, inverted definition: "Animals (human or nonhuman) show an ability to adapt to situations. They learn to solve the temporally and spatially situated problems that concern them. They blithely ignore general, decontextualized problems."

This controversial definition has the merit of placing intelligence in context rather than treating it as an invariant concept. The mathematician and philosopher also reminds us of the nature of the concept of intelligence: "Intelligence belongs to what we call dense concepts: it is both descriptive and objective, and evaluative and subjective. Even if, in practice, one can quickly form a judgment about a person's intelligence in a given situation, that judgment can always be debated in principle."

Making AI work for people again

Ultimately, the question of comparison holds little interest if we expect a definitive answer. What matters more is the intellectual path taken, the process. This reflection raises crucial questions: what do we want to entrust to AI? For what purposes? What do we want for the future of our societies?

These essential questions reinvigorate the ethical, economic, legislative, and social challenges facing actors in the world of AI, as well as governments and citizens everywhere. In the end, it matters little whether AI resembles us now or will resemble us one day. The only question that counts is: what do we want to do with it, and why?

Julian Hernández