AI and Consciousness

   Consciousness is perhaps one of the most controversial areas of research in psychology. There is currently no general consensus on how to define or measure conscious awareness. Despite this, researchers and laypersons alike feel that consciousness is a fundamental part of what it means to be human. Not surprisingly, consciousness is just as controversial in the field of AI.

   One fundamental issue is whether conscious awareness is simply a by-product of complex intelligent systems. Those who assume that consciousness is a by-product, or emergent property, argue as follows. In humans, a single neuron has nothing resembling intelligence; yet billions of neurons combine to form a mind that does possess intelligence. The brain, it would appear, is more than the sum of its parts: intelligence emerges once the configuration of neurons is sufficiently complex. It is therefore not inconceivable that other attributes, such as consciousness, creativity, and emotionality, may emerge as by-products of complex artificially intelligent systems. On this view, consciousness is simply a by-product of any sufficiently complex brain, and AI engineers need not try to isolate and recreate it specifically; it will emerge automatically as needed.

   If one assumes, on the other hand, that consciousness is not such a by-product, then an additional question is whether it is possible to define it computationally and simulate it. Thus, in asking whether computers can think, we must inevitably ask whether thinking computers would actually be conscious. In other words, at some point in the enterprise of AI it becomes important to define the relationship between consciousness and intelligence. For example, is consciousness a necessary condition for intelligence, or would intelligent systems necessarily display consciousness?

   Skeptics point out that a fundamental component of consciousness is subjective phenomenal experience, which may be beyond the scope of computational simulation. To illustrate the distinction, imagine a person, totally colourblind from birth, who as a result has never experienced the colour red as any different from an equally bright grey or green. Then imagine that this person studies colour theory, physics, the psychology of perception, and the biology of the eye, until she becomes totally knowledgeable about every factual aspect of the colour red. Skeptics argue that in studying all of this factual information, she is learning the kind of thing a computer might learn: wavelengths, frequencies, and so on. What she is not learning is what red actually looks like; she cannot see it, so her experience is lacking something. The argument, then, is that no amount of factual information - the kind you would give a computer - can supply the subjective experience of red, or of anything else. What makes the argument significant is that the same skeptics hold that such subjective, personal, phenomenal experiences are what constitute consciousness, and thus that without them an AI cannot be conscious.


All contents copyright © 1999.