The illusion of solving intelligence

Why we won’t create artificial general intelligence

Laurent Ach
4 min read · Jan 29, 2023
The mind is not information processing — Laurent Ach x DALL-E

Artificial intelligence (AI) is currently called narrow intelligence because any current machine learning model can only be used for a very limited and well-defined task. Many AI scientists hope that we will soon come up with more powerful techniques forming the basis for artificial general intelligence (AGI), able to solve new problems beyond the ones it has been designed for and trained on, just as humans can deal with any kind of situation. But in the situations we face as humans every day, the problems don't initially have any formal definition: we don't only solve problems, we invent them. From our perception of reality we create concepts and scientific models, and to deal with our interactions in society, we create laws and institutions. Our incredible power is to be able to create an abstract world of symbols that makes sense to us. Our worst weakness is to sometimes get trapped in it and lose its connection with reality, because any formal representation of reality is only good for a limited period of time and to a limited extent. Any scientific theory eventually appears incomplete and is superseded by a new one. Any political institution needs to evolve or be changed after a while. Any value we care for or goal we pursue, as individuals or as social groups, has to be rephrased from time to time. Any symbol needs to be interpreted by humans; it does not carry any meaning by itself. Even if you believe in some religion, the sacred texts need interpretation. Meaning and interpretation are what will always be missing in AI, because any artificial mechanism is based on a formal system that defines everything it takes as input and produces as output in the form of signs that cannot be interpreted from inside the system.

The hope that we can build AGI is based on the belief that a system could be designed to create new concepts by training on examples. It is true that data with a high degree of abstraction can be found in the inner layers of deep artificial neural networks, for instance the concept of color, in the form of a region where the vector representations of different colors are automatically gathered into a cluster without any prior information related to the concept. Large language models certainly generate many abstract and elaborate representations of various human concepts, since they are able to answer difficult questions requiring expertise in some domains. But without human interpretation these concepts have no grounding. It is up to humans to figure out whether internal data representations correspond to interesting concepts or not. The only thing a machine learning model can do is optimize its parameters to minimize an error function for a particular task, and the relevance of the concepts emerging from training is tied to that task. A language model trained to create texts that seem human-written will create internal representations of words and sentences that are relevant for producing combinations of words similar to the texts in the training set. If some concept emerges from that, it means it was already implicitly there and is not new.
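
As a minimal illustration of that last point, here is a hedged sketch (plain numpy, with all data and names invented for the example, not taken from the article): a tiny linear autoencoder is trained purely to minimize reconstruction error on toy color triplets, and any grouping of reddish and bluish inputs in its hidden layer emerges only as a by-product of that objective. Calling that grouping a "concept of color" is entirely the human observer's interpretation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: RGB triplets sampled around two color families (reddish, bluish).
reds = rng.normal(loc=[0.9, 0.1, 0.1], scale=0.05, size=(100, 3))
blues = rng.normal(loc=[0.1, 0.1, 0.9], scale=0.05, size=(100, 3))
X = np.clip(np.vstack([reds, blues]), 0.0, 1.0)

# Tiny linear autoencoder: encode 3 -> 2, decode 2 -> 3.
W_enc = rng.normal(scale=0.1, size=(3, 2))
W_dec = rng.normal(scale=0.1, size=(2, 3))
lr = 0.2

for step in range(5000):
    H = X @ W_enc          # hidden representations
    X_hat = H @ W_dec      # reconstruction
    err = X_hat - X        # the only objective: reconstruction error
    # Descent directions proportional to the gradient of the squared error.
    grad_dec = H.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

# Nothing in the loss mentions "red" or "blue"; if reddish and bluish inputs
# end up in separate regions of the hidden space, that grouping is a side
# effect of the task, and it is up to a human to read it as a concept.
H = X @ W_enc
final_err = H @ W_dec - X
red_center, blue_center = H[:100].mean(axis=0), H[100:].mean(axis=0)
print("reconstruction error:", round(float(np.mean(final_err ** 2)), 5))
print("distance between hidden-space cluster centers:",
      round(float(np.linalg.norm(red_center - blue_center)), 3))
```

The printed distance between the two hidden-space cluster centers only tells us that the optimization happened to separate the groups; nothing in the training signal names, defines, or explains color.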

What's missing in current machine learning models to achieve AGI could be a more symbol-driven approach (symbolic AI) or a more grounded approach (multi-modal AI). This is actually what is promoted by some researchers who acknowledge the current limits of AI, and it may help create better models, but it won't be enough to make AI capable of dealing with novel situations. Only humans can invent new concepts and, the other way around, convert a concept into words that make sense to them. Only they can create from scratch a formal system of interest for dealing with situations in the real world. Science is one place where that happens, and AI is part of it, but it only works in one direction. A formal system is a good basis for taking human thinking a step further, but manipulating information inside the formal system is useless for going beyond the concepts attached to it. In itself, without human interpretation, a formal system has no value at all. What's missing in AI to reach the level of human intelligence is life itself, and particularly a subjective experience that connects the world to the words.

Out of fear of the risks associated with AGI, the research domain of AI alignment has been created, and it is full of confusions and illusions. In principle, it is impossible to expect AI to align with human values, because those values cannot be encapsulated in formulas, which is one reason why human laws need human interpretation. The hope that AI could understand what matters to humans well enough to deal with human affairs without our intervention goes with the belief that there is nothing more in the human mind than information processing. It's a belief, not a scientific stance, and it goes with the opinion, absurd to some people and obvious to others, that the human mind is just a machine and humans are just robots. Consciousness is the characteristic of our brain that makes it possible for us to represent the world by following different approaches such as science, the arts, and philosophy. The human mind, intelligence, and consciousness are inextricably intertwined with the body, and subjective experience is probably a fundamental property of life that has nothing to do with information processing.


Laurent Ach

CTO of Qwant, previously Director Europe of Rakuten Institute of Technology, interested in the essential differences between artificial and human intelligence