There Is No Artificial Intelligence

About fundamental differences between artificial and human intelligence

Laurent Ach
6 min read · Feb 2, 2018

All the recent amazing achievements of artificial intelligence fall under the category of narrow intelligence, but there is a lot of speculation about general and super intelligence. A recent, famous and impressive example of narrow intelligence is AlphaGo (and later AlphaGo Zero) by DeepMind, which can beat the top human players of the game of Go, long considered very difficult for computers to play.

This type of success, and the outstanding performance of deep learning (especially in computer vision) and reinforcement learning (especially for playing games), has led many people to think that in the coming years computers will outperform humans at almost any task. This is probably true for some completely formalized problems, with clearly defined input and output data, in various separate narrow areas. A major challenge facing AI today is to connect and go beyond these narrow areas and progress toward a more general type of intelligence, the ultimate goal being to reach the level of human intelligence, which can adapt to any situation and formalize and solve previously unseen problems.

There is today almost no sign of any progress toward artificial general intelligence, and the best example of current limitations in this domain is natural language dialog: even using advanced deep learning techniques, the chatbots we converse with in 2018 are not much better than the first ones, like Eliza in 1966, which was created with a few hundred lines of code and is still a reference in recent articles (see footnote 1). Natural language dialog is the area that best illustrates the notion of general intelligence because it cannot be entirely formalized and has no clear limits.
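To give a sense of how little machinery such a chatbot needs, here is a minimal Eliza-style sketch in Python (my own illustration, not Weizenbaum’s original script): it rewrites surface patterns with no understanding whatsoever of the words it manipulates.

```python
import re

# A few Eliza-style rules: a regex pattern and a response template.
# The program only rewrites surface patterns; it attaches no meaning to them.
RULES = [
    (re.compile(r"\bi need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bi am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (.*)", re.I), "Tell me more about your {0}."),
]

def respond(sentence):
    """Return the first matching rule's response, or a generic fallback."""
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."  # default when no rule matches

print(respond("I am sad"))        # How long have you been sad?
print(respond("I need a break"))  # Why do you need a break?
```

A handful of such rules already produces the illusion of a conversation partner, which is exactly the point: the appearance of intelligence lives entirely in the human reader’s interpretation.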

Looking beyond artificial general intelligence, the notions of singularity and super intelligence, popularized by Ray Kurzweil and Nick Bostrom, mostly come from the concept of an intelligence explosion introduced by I.J. Good in 1965. It is the simple idea that, if we consider intelligence along a single dimension and extrapolate the progress observed in AI in various separate fields, we can predict that it will surpass human intelligence in the coming years. This would occur at a “singularity point” where the curve of artificial intelligence crosses the curve of human intelligence; from then on, the explosion of artificial intelligence would be driven by the cycle of computers creating better computers.

A common way of illustrating super intelligence, with no definition of intelligence and the assumption that it can simply be measured along a single dimension

There are many interesting and valid arguments against this theory, and I would like to focus on a specific one, related to fundamental differences between human and artificial intelligence. These differences are so essential that we can say there is absolutely no intelligence in what we call artificial intelligence, and there won’t be in any foreseeable future.

This statement may seem strange at a time when AI achieves outstanding and unexpected results in many domains and outperforms humans, often by far, at complex tasks (playing games, recognizing scenes in images, driving cars, detecting cancer in images…). It may also seem strange because the successes we anticipate for AI, combined with the exponentially increasing power of computers, suggest there is no limit to AI progress; so why not also consider the possibility of an artificial subjective experience with emotions and consciousness? The ultimate hope now expressed by the Transhumanist movement is to merge humanity and technology, upload our minds to computers and become immortal.

These predictions are confusing because the intelligence we may attribute to computers is purely a matter of interpretation by human beings and has no actual existence. Current AI techniques are wonderful tools, but what happens inside any computer is just physical phenomena, and no information exists in it without a human to attach meaning to them. As the philosopher John Searle puts it: computers are syntactic, minds are semantic.

The physical signals produced by computer hardware are a convenient way of representing and manipulating information that does not exist outside our minds. We simply decide that some physical property is to be treated as a bit with values 0 and 1, combine bits to represent more complex data, then transform the data to create higher levels of representation, which still exist only in our minds. Adding machine learning on top of that does not change this reality in any way. A computer does not know what 0 or 1 is; it actually knows nothing about the input and output of its program. It is just able to transform physical properties into other physical properties, like any machine.
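The point that bits carry no intrinsic meaning can be made concrete with a short Python sketch (my own example): the very same four bytes become an integer, a floating-point number, or text, depending entirely on the interpretation convention a human chooses to apply.

```python
import struct

# Four bytes of "physical state". By themselves they mean nothing.
raw = b"\x42\xf6\xe9\x79"

# Meaning appears only when we pick an interpretation convention:
as_int = int.from_bytes(raw, byteorder="big")   # big-endian unsigned integer
as_float = struct.unpack(">f", raw)[0]          # IEEE 754 32-bit float
as_text = raw.decode("latin-1")                 # four Latin-1 characters

print(as_int)    # 1123477881
print(as_text)   # Böéy
print(as_float)  # approximately 123.456
```

Nothing in the hardware distinguishes these three readings; the number, the text and the float exist only for the human who decided which convention to apply.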

Computers are so useful today that we would have much trouble living without them, just as the Internet has been for years and AI will soon be. But without human beings to interpret computer input and output, whether they are physical actions or symbolic data, the objective reality of a computer is only the pieces of hardware it is composed of. There is no perception, no meaning, no experience, no intelligence inside a computer. Who cares? It starts to matter when people confuse science fiction scenarios of AI conquering the world with reality, and we see huge misunderstanding and confusion in news headlines, like Facebook being forced to shut down one of its artificial intelligence systems before it evolves into Skynet, or an AI breakthrough taking us one step closer to the Singularity.

Digicomp (computer toy) and punched card (via Wikimedia Commons)

Unlike in science fiction movies, a subjective experience, consciousness, a desire to fight against humanity, or any emotion or feeling will never exist in a computer, as long as computers are defined as machines that manipulate bits, qubits, any higher-level representation, or anything defined by a formal system. However complex an AI model, even one that receives vast amounts of information about the surrounding world and learns about reality by itself, the resulting system will still be pieces of hardware. The semantics, the intelligence we see in a computer, only exists when we are there to see it.

These observations about computers being syntactic, depending on human interpretation to produce meaning, also hold if you consider the brain as a computer. It is up to you to interpret brain phenomena as information processing, but that does not reflect in any way the intrinsic nature of the subjective experience the brain is able to produce through some yet unknown biological mechanism. It is this subjective experience that makes it possible to attribute meaning to objects and concepts, not any information processing, however complex it is.

Without human interpretation, input and output data have no meaning for a computer or any artificial intelligence

A common mistake when comparing AI and human intelligence is to confuse simulation and replication. A neural network has nothing to do with the brain; it is a representation we came up with, based on the study of biology, which reflects our understanding of the brain at some point in time. This model is not to be compared with a real brain because, just like any scientific theory, it is a model of a limited observable reality in a particular context.

We can simulate any phenomenon happening in the brain as long as we can get an idea of how it is generated, directly or through a learning phase, including when we use models of human emotions. These simulated emotions are only real through our interpretation, unlike the emotions we feel when we have a subjective experience, and this will not change with the increasing complexity of artificial neural networks or any other elaborate type of AI.

Our subjective experience is the first objective reality we come in contact with, as René Descartes observed in the 17th century, and we may someday be able to understand how it is produced by the brain. Let’s talk again after that about creating a subjective experience and actual intelligence in a computer.

This article is based on a talk at Rakuten Technology Conference 2017. Many ideas in this article come from the work of John Searle. A good overview of his theory about consciousness and the differences between computer and human intelligence can be seen in a video of a talk he gave at Google at the end of 2015.

(1) Note added in June 2020: natural language processing has recently shown tremendous progress with the introduction of attention mechanisms and Transformers. I still claim that AI lacks understanding of natural language, but that should be the topic of a whole article of its own.


Laurent Ach

CTO of Qwant, previously Director Europe of Rakuten Institute of Technology, interested in the essential differences between artificial and human intelligence