Image: DALL·E x Laurent Ach — a huge library, British style, a small computer screen displaying strange words

There Is No Intelligence in Language Models

Laurent Ach
3 min read · Mar 29, 2023


Now that Large Language Models can not only generate texts that are increasingly indistinguishable from human-written ones, but also answer a wide range of complex requests with usually relevant content, what are their limits as they continue to be scaled up? By design, like any artificial neural network model, they can only produce content that is a kind of elaborate interpolation between things they have seen in their training dataset. Nevertheless, some AI scientists consider that multiplying the number of parameters, or adding new modalities such as images (already in GPT-4 and other models), video, sound, or physical interaction, will gradually allow them to reach “human-level intelligence”, whatever that means.
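
To make the word “interpolation” concrete: at its core, a language model maps a sequence of tokens to a probability distribution over the next token, learned from its training corpus, and generating text is nothing more than repeatedly sampling from that distribution. Below is a minimal sketch of that single step, assuming the Hugging Face transformers library and the small public GPT-2 checkpoint, both chosen here purely for illustration:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative choice: the small public GPT-2 checkpoint.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The meaning of a word is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # one score per vocabulary token, per position

# The model's entire "view" of what comes next: a probability distribution
# over tokens, derived from statistics of its training corpus.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([token_id.item()])!r}  p={prob.item():.3f}")
```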

Human-level intelligence, or artificial general intelligence, has no clear definition and is not something measurable, but we all have an intuition of what it means, since we have it. A distinctive characteristic of our intelligence is our ability to deal with any kind of situation and somehow make sense of it. We invent the concepts and the language to represent the world and to reason. In natural language, every word we create and use has a meaning to us, based on our perceptions and experiences of the world. Visual images, memories, emotions, and feelings are associated with the concepts represented by words. The meaning of a word is the reason we use it, and it is what makes the word interesting when we read or hear it. Words help connect human intelligence to the world. When images and other modalities are added to deep learning models, we might expect words to acquire some meaning for these models as well, in the way we associate our different kinds of perception with the concepts we invent from them. When we create robots able to interact with the physical world, we can imagine that they will get an interactive experience of the objects around them, and that this will also add meaning to the sentences they are trained on.

The flaw in this reasoning is simply anthropomorphism. Nothing is experienced by a robot or by a software program, because there is no subject there to have an experience of anything. Life is what creates subjects, and transforming inanimate matter into life certainly requires the complexity of the whole universe at all scales. There is no reason to believe that a pseudo-model of the brain such as an artificial neural network is elaborate enough, and corresponds closely enough to what matters in our brain and body, to produce phenomena we are still very far from explaining. Consciousness is what makes it possible for us to create representations of the world that we can translate into words and symbols; it is what allows us to create philosophy, art, and science. It cannot work the other way around: no single scientific theory can explain the whole of reality, because a theory is a reduction of our perception of the world into a formal system. A machine learning model is also based on a formal representation, which reduces reality. Language models deal with words and symbols that make sense to humans but cannot make sense to the model, for lack of a living subject. Symbols are created from life, and it is absurd to believe that life, or subjective experience, can be created from the two symbols 0 and 1.


Laurent Ach

CTO of Qwant, previously Director Europe of Rakuten Institute of Technology, interested in the essential differences between artificial and human intelligence