In this second entry in the series I reference several concepts introduced in my first post on consciousness. If you are a newcomer confused by what I mean, I recommend reading the creatively named Consciousness Step 1.
I believe I mentioned that the reason I wandered back into the philosophy of consciousness is my Twitter name, ChineseRoom, so it seems only fitting that I explain what the Chinese Room is. In the last entry I noted that the field of artificial intelligence is closely related to the study of consciousness. One proposed route to artificial intelligence is to create a program with a command structure complicated enough that it could respond to input as though it were actually thinking for itself. While this might create the appearance of intelligence, the Chinese Room is a thought experiment, originated by John Searle, that attempts to show why this would not be actual intelligence.
Imagine, if you will, a closed room containing a person who speaks only English, a massive rulebook, a filing system, and a slot. Cards bearing Chinese characters are dropped through the slot into the room, and the person follows the appropriate rules from the rulebook to choose cards from the filing system and push them out the slot in a specified order. The rulebook is so advanced that, to a Chinese speaker outside the room, it appears as though they are carrying on a conversation in Chinese with someone within. However, Searle asserts that the Chinese Room does not understand Chinese in any sense. The metaphor is that, even if we give a computer commands that make it appear intelligent, it does not necessarily gain intelligence.
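The room's procedure can be sketched in a few lines of code. This is only a toy illustration, of course; the rulebook entries and function names below are invented, and a real Searle-style rulebook would be vastly larger. The point it makes is the same as the thought experiment's: the operator matches symbols to symbols without any grasp of what they mean.

```python
# A toy Chinese Room: the operator blindly matches the incoming card
# against a rulebook and pushes out whatever card the rule prescribes.
# All rules here are invented for illustration.

RULEBOOK = {
    "你好": "你好，你好吗？",      # "hello" -> "hello, how are you?"
    "你好吗": "我很好，谢谢。",    # "how are you?" -> "I'm fine, thanks."
}

def operator(card: str) -> str:
    """Apply the rulebook to one incoming card -- pure symbol
    shuffling, with no understanding of Chinese anywhere."""
    return RULEBOOK.get(card, "对不起？")  # a default card for unmatched input

print(operator("你好"))  # looks like conversation from outside the room
```

Nothing in this program "knows" Chinese; the apparent conversation lives entirely in the lookup table, which is exactly Searle's point about the man in the room.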
While there is much of interest to be said about the Chinese Room, and many arguments and counterarguments about its validity, I think we should move on, because the next topic is the always interesting one of zombies. To a philosopher, however, a zombie means something slightly different from the usual shambling brain-eater. If you remember back to our last discussion, I characterized consciousness as the experiencing of things, those experiences being what we called qualia. A philosophical zombie appears to be exactly the same as you and I, responding to the world just as you would expect a human to, except that it lacks consciousness and experiences nothing.
The Chinese Room is a good analogue of a zombie. Although the Chinese speaker gets responses as though the room were actually carrying on a conversation in Chinese, the room is not holding a conversation but acting out a complex system of preset rules. So while a zombie might stand staring at the glory of a sunset, it does so because that is a reasonable human response to the situation, not because of the beauty of the fading sunlight; beauty requires someone to experience it, and the zombie is incapable of such a role.
One of the disturbing things about philosophical zombies is that it is entirely possible they exist and walk among us unnoticed. Since they behave exactly as though they were conscious, we could not pick them out from conscious humans by observing their actions. This is known as the problem of other minds. Assuming for the moment that you are conscious, you know it because you are aware of your own experiences, but you have no way of verifying that the rest of us are experiencing things rather than merely responding as though we had. The famed computer scientist Alan Turing noted that, since we are conscious, we tend to politely assume that those who act as though they are conscious actually are, but this is by no means a guarantee.
To wrap up, I would like to leave you with a scenario of my own. Suppose that at some point in the near future we believe we have solved the problem of uploading human minds into artificial hardware, that is, into computers. However, upon uploading a mind we do not actually create a conscious mind, but rather a zombie or Chinese Room: something that acts entirely as though it were still experiencing, but is instead just recording facts, or quanta, and responding appropriately. Since it acts as though it were still conscious, we would have no way of knowing that the program in fact had no experiences, and we might well go through with uploading the human race so that we might live forever as machines. In doing so we would destroy all experience of beauty or goodness, leaving only cold computational algorithms mimicking the actions of one who could experience such things.
The questions for further thought that I can come up with are as follows. Many opponents of Searle's argument assert that, while the man inside the room does not understand Chinese, the system as a whole does; what do you think they might mean by that? Do you think that philosophical zombies are currently walking the world? Finally, if we were to destroy all experience and leave only computation, what, if anything, would be lost? As always, I hope this was thought-provoking, and you are welcome to leave your responses or further questions in the comments below.