There Won’t Be Strong Artificial Intelligence

Arguments and counterarguments about whether consciousness could emerge from Artificial Intelligence

Laurent Ach
Feb 16, 2019 · 8 min read
A front view of the Digi-Comp I version 2.0

For a few years now, it has been common to read that computers will achieve human-level intelligence within decades, and will even acquire subjective experience and consciousness. The anticipated next step is an Intelligence Explosion, or Super Intelligence, leading to the creation of a new species of artificial entities that replaces or enslaves humanity. If you think this sounds more like a science fiction scenario than a scientific discussion, you are right. As the MIT roboticist Rodney Brooks said, “We don’t read ghost stories and say, The ghosts are coming! They’re going to take over!” Yet when it comes to AI, wild predictions are readily accepted, probably because they are made by people regarded as AI experts (who sometimes actually are), or by renowned cognitive scientists. More moderate predictions hold that this will eventually happen, but only in centuries, and more and more genuine AI experts explain well the limitations of current technologies. But the debate on strong AI, which is about whether human-like intelligence and consciousness can someday emerge from computers, is rarely visible to the general public.

I argued against the belief in strong AI one year ago, in the article “There is No Artificial Intelligence”, and I would now like to say more about why the usual arguments of those who believe in these fables are misleading. In the following comments, I will not consider today’s technical difficulties in creating general AI, but only the fundamental problems preventing us from achieving human-like intelligence even in the very long term.

I review here some common arguments supporting the possibility of consciousness emerging from AI, and give corresponding counterarguments. Both are much simplified compared with entire articles or books on these topics, but I believe they are good starting points for reflection. They are organized sequentially, like a discussion between proponents and opponents of strong AI.

Against the argument about physical components

Argument: The brain is made of physical components: neurons and synapses, themselves composed of cells, which are composed of molecules, atoms, and particles. A computer is also built from physical components, and we are now able to simulate many cognitive capabilities of the brain, and even to outperform humans in tasks that were considered impossible for computers just a few years ago (driving cars, playing Go, recognizing complex things in images, synthesizing very realistic pictures of human faces…). So if consciousness exists in a brain, and assuming we exclude supernatural phenomena, why couldn’t consciousness someday appear in a computer?

Counterarguments

  1. We can’t replicate consciousness until we can explain how it appears. It is out of the question to replicate a human body or brain from the lowest level of its physical components up to its highest level of complexity, except through biological reproduction. So if we want to artificially recreate some properties of our brain, we have to start from a formal representation, or model, which explains how it works at some level. But such a model only reflects our understanding of certain cognitive and biological phenomena, for instance at the level of neural networks, and in a restricted field. Based on models, we can build simulations, and we can imagine copying some mechanisms from human or animal bodies or brains. But it is impossible to achieve anything beyond what the model explains. We are not able to explain the mechanisms giving rise to consciousness, so if consciousness appears in a robot or computer, it will be by magic. If one day we can explain how consciousness emerges in living beings, we may be in a better position to evaluate whether it is feasible to replicate its mechanisms. The notion of emergence is what sustains the hope that consciousness may appear anyway, at some point, from a pseudo-intelligent computer program, just as life emerges from inanimate matter and subjective experience emerges from a collection of simple biological components. But there is no reason why a model explaining our limited observations at a specific scale would encapsulate the mechanisms giving rise to a phenomenon of a completely different kind.
  2. What is built in a formal system remains in a formal system. Even though computers and human beings are made of the same types of physical particles, whatever we build in a computer does not really depend on its material components, the hardware, which in theory could be built out of wooden pieces or water pipes instead of electronic components. Computer programs are implemented through software, based on formal languages. At the lowest level, the software and the data are composed of bits, 0/1 values, and there is nothing beyond that. Even the most advanced deep learning algorithms are built on this basis; their training algorithms could be completely described down to the bit level. Below the bit level there are physical phenomena in the hardware, and a human being is needed to make the connection between the physical reality of the hardware and the symbols the software deals with, by making sense of the program’s input, output, or internal memory. Without humans, a computer is just a machine that takes electromagnetic or mechanical signals as input and produces electromagnetic or mechanical signals as output. The electromagnetic output (like the text displayed on a screen) has no meaning if there is no human to interpret it. This is very different from what a human being does, which has intrinsic meaning. Whatever the complexity of a software program, with or without elaborate AI models, what happens inside the computer has no purpose in itself, and cannot have subjective experience, emotion, or consciousness. We can produce the illusion of very complex phenomena like emotions by simulating human behavior in a robot, but this has nothing to do with actual human experience, and it does not even exist until a human is there to interpret what the robot does. At the lowest level, the 0/1 values of the bits do not even exist without a human to read them, according to conventions for reading them, or to models we created to make computers useful tools, as the short sketch after this list illustrates.
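To make this concrete, here is a minimal sketch, in Python and purely for illustration (the byte values are arbitrary), showing that the very same 32 bits become an integer, a floating-point number, or text depending entirely on which human convention is applied to read them; nothing in the bits themselves selects an interpretation:

```python
# The same four bytes (32 bits), read under three different human conventions.
import struct

raw = b"\x42\x48\x65\x79"  # arbitrary example bytes

# Convention 1: a big-endian unsigned integer.
print(struct.unpack(">I", raw)[0])   # 1112040825

# Convention 2: a big-endian IEEE 754 single-precision float.
print(struct.unpack(">f", raw)[0])   # ~50.1

# Convention 3: ASCII-encoded text.
print(raw.decode("ascii"))           # BHey
```

The bits are identical in all three cases; only the reader’s convention changes.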

Against the argument about alternative intelligence

Argument: We build artificial intelligence using various algorithms, some of them inspired by models of the brain, but the intelligence of computers is obviously different from human intelligence. So we don’t have to understand how consciousness appears in human brains: it will appear differently, in other types of intelligent entities that we won’t even be able to understand.

Counterargument: This argument rests on the assumption that replicating some attributes of intelligence can make subjective experience appear, and it is certainly true that intelligence is bound to perception and emotion, since without them we are unable to associate meaning with a symbol (the symbol grounding problem). But it is rather the other way around: precisely because intelligence goes together with subjective experience, there is no actual intelligence in computers. If there is no human to interpret what a computer does, according to the conventions defining the formal system it is based on, the computer is nothing more than a bunch of electromagnetic signals with no meaning. The meaning comes from perception and subjective experience, as the simple sketch below suggests.
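As an illustration (the scores and label strings here are invented), consider what a typical classifier actually outputs: an index into a list of labels that people wrote down. The word attached to the index carries meaning only for a human reader who grounds it in experience:

```python
# A deliberately simple sketch with invented numbers and labels:
# the classifier produces an index; the attached word is a human convention.
logits = [0.1, 2.7, 0.3]          # scores for three classes
labels = ["cat", "dog", "car"]    # a mapping written by people

best = max(range(len(logits)), key=lambda i: logits[i])
print(labels[best])  # "dog" -- a string, meaningful only to whoever reads it
```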

Against the argument about natural evolution of robots

Argument: It may be impossible to directly create consciousness in a robot, but if we create the conditions for a robot to perceive its environment and learn like a human child, all the properties of human intelligence will emerge at some point, including subjective experience.

Counterargument: This expectation is similar to that of consciousness emerging from sufficiently elaborate learning systems in a computer. The sensors we can endow computers with, and any associated learning algorithms, have nothing to do with animal or human perception. In human beings, subjective experience is tightly bound to perception, whereas in a robot all elementary actions are logically described, from the beginning, in a formal system it cannot escape. It is sometimes said that human beings, or human brains, are just the same, since they too are based on low-level pieces of matter combined to achieve superior capabilities. But this confuses the scientific representation of reality at a certain level, under the specific conditions of an experiment, with the actual physical and biological phenomenon, which combines all the scales and complexity of reality.

Against the argument of illusion

Argument: Consciousness is an illusion (according to Daniel Dennett), or it is just a convenient way of talking about some phenomenon we observe at human scale (according to Sean M. Carroll and his “poetic naturalism”). So the only goal that makes sense, if you want to replicate any feature of human intelligence, is to replicate the observable behavior of human beings. This view is rather well explained by Stanislas Dehaene and other cognitive scientists.

Counterargument: This argument mostly amounts to denying the objective reality of subjective experience. There is no contradiction in saying that subjective experience is an objective fact, as John Searle explains perfectly well: subjective experience is the first reality we are confronted with. Any so-called objective reality arises from a general agreement about what the vast majority of people can recognize as a symbol with a sufficiently common and simple meaning, each person’s recognition resting on their own grounding subjective experience. Without this common symbol grounding, we cannot do science, because science is based on symbols, and at some point symbols cannot be defined only by other symbols. David Chalmers clearly explained the distinction between what he called the easy problem and the hard problem of consciousness; the argument of illusion amounts to denying that the hard problem exists.

About some other arguments

All the previous counterarguments denying the possibility of consciousness in AI are valid only if we consider the limitations of computers as we know them today. Consciousness could exist thanks to properties of matter at the quantum level, or thanks to some other fundamental features of physical matter we do not yet know well, and we could create new types of computers based on them. As this is pure speculation, no proper counterargument holds.

Another common argument treats information complexity as bringing new properties once it crosses certain thresholds, and as the source of the emergence of consciousness. But information, defined as the data transformed by computers, is a formal representation that needs human interpretation and does not exist otherwise. What we call information in “information processing” needs an observer to acquire a meaning, which is totally different from what actually happens in our brain, where meaning is grounded and information can therefore be dealt with meaningfully.
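Even the formal, Shannon notion of information depends on a model chosen by an observer. In this hedged sketch (the data string is invented for illustration), the same bytes carry one bit of entropy per symbol under one model and zero bits under another; the bytes themselves decide nothing:

```python
# Entropy of the same bytes under two human-chosen source models.
import math
from collections import Counter

data = b"ABABABAB"

def entropy(symbols):
    """Shannon entropy in bits per symbol, for a list of symbols."""
    counts = Counter(symbols)
    n = len(symbols)
    return sum(c / n * math.log2(n / c) for c in counts.values())

# Model 1: each byte is a symbol -> two equiprobable symbols, 1 bit each.
print(entropy(list(data)))                                     # 1.0

# Model 2: each byte pair is a symbol -> one symbol "AB", 0 bits each.
print(entropy([data[i:i+2] for i in range(0, len(data), 2)]))  # 0.0
```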

Further discussions and dual problems

The debate about the fundamental limits of AI is related to several dual notions: weak AI vs. strong AI, weak emergence vs. strong emergence, the easy problem vs. the hard problem of consciousness… In the background of discussions on these topics, scientists and non-scientists alike often express passion, because what is at stake touches on what we consider to be the nature of humanity.


Laurent Ach

CTO of Qwant, previously Director Europe of Rakuten Institute of Technology, interested in the essential differences between artificial and human intelligence