Thursday, July 08, 2004

Can We Create an Intelligent, Conscious Machine?

Answer: In principle yes, but the practical challenge is extreme.

Everyone anticipates continued progress in producing faster computers capable of ever more amazing feats. In 1997, IBM's Deep Blue defeated world chess champion Garry Kasparov. Researchers in artificial intelligence (AI) have been working on models of important capabilities such as memory, sense perception, and language, and some have even attempted to bring these together in robots that can interact with their environment. In assessing these efforts, some believe there is no reason to doubt that they will someday result in a machine which matches or exceeds overall human intelligence. Others doubt this is possible. Interwoven into this debate is the question of whether any such machine could be conscious. Again, some would answer yes, while others believe there would always be something essential to human consciousness which, in principle, could not be fabricated.

In addressing these questions, I need to say what I mean by intelligence and consciousness.

I want to follow the common practice of defining ultimate success in artificial intelligence as the creation of capabilities equivalent to human intelligence. Success in some narrow domain, such as chess-playing, would not be sufficient to show human-style intelligence (the chess computer evidently succeeded through rapid "brute-force" computation, something at which we already know computers excel). But what criteria would satisfy a requirement for breadth as well as depth in intelligence? How would an AI researcher know if he or she had succeeded? Well, many have argued along the lines of "if it looks, walks, and quacks like a duck, then it's a duck". Alan Turing, the great English mathematician and pioneer of computer science, advanced what became known as the Turing test for machine intelligence. In abbreviated form, the Turing test says that if a human being and a computer were questioned by a moderator (who didn't know which was which) using text messages, and if the moderator could not distinguish them by their responses, then the machine would qualify as intelligent. For present purposes, I take the Turing test to be a reasonable way to define success in AI. (I should note that Turing actually framed the problem in terms of the question "Can machines think?")
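Since the test is essentially a protocol, its structure can be sketched in code. The sketch below is purely illustrative: the judge and the two respondent functions are hypothetical placeholders supplied by the caller, not anything from Turing's paper, and a real test would of course involve a live back-and-forth conversation.

```python
import random

def turing_test(questions, judge, respond_human, respond_machine):
    """Run one round of the imitation game described above.

    `judge`, `respond_human`, and `respond_machine` are hypothetical
    callables standing in for the moderator and the two participants.
    """
    # Randomly assign the two respondents to anonymous channels "A" and "B".
    channels = {"A": respond_human, "B": respond_machine}
    if random.random() < 0.5:
        channels["A"], channels["B"] = channels["B"], channels["A"]

    # The moderator sees only text: each entry pairs a question with
    # the two anonymized answers.
    transcript = [(q, channels["A"](q), channels["B"](q)) for q in questions]

    # The judge guesses which channel hides the machine ("A" or "B").
    guess = judge(transcript)
    machine_channel = "A" if channels["A"] is respond_machine else "B"

    # The machine passes this round if the judge guesses wrong.
    return guess != machine_channel
```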

What about artificial consciousness (AC)? By consciousness I mean phenomenal awareness: the qualitative, subjective experience of being. To bring the issue into focus, consider the following question: if we built an entity which could pass the Turing test, would its subjective experience be like ours? Would it have an "inner life" at all? Some philosophers and scientists (including, I believe, Turing) would reject these questions as meaningless. After all, I can't even be sure that other people have the same sort of subjective experience as I have! Despite this difficulty, however, I think it is a real and important question. First-person subjective experience is a crucial part of our natural world, and accounting for its presence and manifestation is an appropriate (and extremely important!) challenge for science and philosophy.

Let me give my conclusions, and then try to back them up. First, I believe that success in creating a human-like artificial intelligence is possible in principle. This is because I believe that human intelligence is a natural phenomenon, and there is no need to invoke transcendent or supernatural explanations to account for it. If we can arise in the world, then other intelligent entities can come forth as well. We don’t need any extra “stuff”.

Also, because I have concluded that the raw material of subjective experience must be a ubiquitous part of the natural world, I don’t have a reason to assert that an artificial being couldn’t be conscious. In fact, I believe that if we could succeed in building an artificial intelligence, we would simultaneously create a robustly conscious being. We would not create a robot or zombie which somehow lacks any inner experience.

However, I believe that creating an intelligent and conscious artificial entity in the foreseeable future will be very, very difficult as a practical matter. Specifically, I predict that the digital computer, currently the vehicle for AI work, will likely not be the platform for a successful creation.

Human intelligence and consciousness are the products of a multi-billion-year evolutionary process. Here I mean evolution in the broadest sense: the physical evolution of the universe prior to the first life-forms, the more familiar story of biological evolution on Earth, and the much more recent explosion of cultural evolution.

The elementary building blocks of human intelligence and consciousness were in place in the early universe. The idea that some prototype of consciousness, in particular, exists in the basic components of the universe is a strange and new concept for most people, so let me summarize it. As I have argued at greater length elsewhere, the conclusion that a kind of subjective experience is part of the substance of the universe follows from two simple assertions. First, subjective (first-person) experience is an irrefutable fact of existence which cannot be explained away by reducing it to a traditional third-person, "objective" scientific analysis, the way water is explained by reducing it to H2O. Second, subjective experience could not have simply popped into existence out of nowhere at some point in biological development. The foundation for it had to be there already.

So, human consciousness in its current form is the product of the evolutionary development of increasingly complex forms of organization. The reason rocks differ from human beings lies in the way they are put together, not in their elementary constituents. The good news for AC ambitions is that we do not need to find some magic ingredient in order to create consciousness. The bad news is that the system which gives rise to human consciousness is extraordinarily intricate: we are by far the most complex entities known in the universe. In particular, I believe there is something very special about the way all biological entities "leverage" the proto-conscious experience of their primitive parts up through the levels of organic molecule, cell, and organism. On top of this we humans add the unique complexity of our brain and nervous system. There is a great deal going on at each level of structure. Our cells, for example, have a particularly complex make-up: they are active entities in their own right, with significant internal structure.

As an aside, the complexity of cells means that neurons, specifically, are much more than digital bits. Attempts in AI research to model the brain's neural network on a computer will therefore still greatly oversimplify the organization underlying human experience.
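To illustrate just how drastic that simplification is, here is the classic McCulloch-Pitts-style artificial "neuron" used in such models: the entire cell is reduced to a weighted sum and a threshold. The particular weights and inputs below are arbitrary illustrations, not data from any real model.

```python
# An artificial neuron in the classic style: a weighted sum of inputs
# followed by a threshold. Contrast this with a biological neuron,
# which is itself a complex living system with elaborate internal structure.
def artificial_neuron(inputs, weights, threshold):
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Arbitrary illustrative values: three "synaptic" inputs.
print(artificial_neuron([1, 0, 1], [0.5, -0.2, 0.8], threshold=1.0))  # -> 1
```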

Let’s leave the question of human consciousness briefly and return to the question of intelligence. We are perhaps more used to the idea that the set of capabilities we call intelligence, such as the ability to speak and reason, arose through evolution. While the details are debated, the idea that intelligence developed through natural selection is widely accepted. What is sometimes under-appreciated is the implication that intelligence arose as a method of enhancing survival, rather than popping into the world as an all-purpose analytical engine (like a computer). The broader point I wish to make is that we human beings are intrinsically active, not passive. Our intelligence does not sit in isolation inside our heads, waiting to analyze inputs. We are active agents constantly interfacing with our physical and social/cultural environment, and this environment is therefore also part of the foundation of our being. For AI to fully succeed, an artificial construct will likewise need to be an independent, active entity which can match its internal capabilities to the dynamism of the external world.

Now to tie intelligence back to consciousness: I believe action in the world is always accompanied by some kind of experience. Consider the alternative for a moment. Some philosophers, grappling with the mysteries of consciousness, speculate that there could be an intelligent entity which is not actually conscious: it would go about its business, behaving like a fully intelligent being, without any kind of inner subjective experience (human consciousness in this view is epiphenomenal: it accompanies our actions but plays no necessary role). Science fiction has sometimes explored such scenarios involving intelligent robots: are they persons in the same way we humans are? Do they have feelings? In my view, higher levels of (active) intelligence co-arose with more robust experience through evolution. This implies that human-level intelligence and consciousness couldn't exist without each other. I think the idea that intelligence could exist without consciousness is fostered by the hints of intelligence we see in today's computers. But if I'm right, these hints are still a far cry from full AI success: a fully independent, interactive, and dynamic intelligent artificial agent. Such a being would necessarily be conscious.

To conclude, I think AI and AC are possible in principle. Our human capabilities are natural in origin, and every step in our understanding of nature will also advance our understanding of intelligence and consciousness. On the other hand, because human capabilities arise from such great complexity, we still have a long way to go on this journey. My speculation is that a significant advance, such as quantum computing or a biologically based computing platform, will be necessary to move toward success.
