Saturday, February 26, 2011

Singularity Redux

This is a response to a wonderful post on Elf Army Writes about the Singularity. I was going to leave it as a comment, but as I wrote it just kept on doubling in length until it seemed a bit unwieldy and better suited to be a post of its own. I highly encourage you to follow the link to the Singularity post, but let me try to summarize the contents.

Computing power doubles every 18 months to 2 years; this observation is often called Moore's Law. It is theorized that, at some point, computers will match the computational power of the human brain and begin improving themselves; this event is called the Singularity. After this, futurists like Ray Kurzweil believe that we will be improved by the machines, or that we will become machines. Dystopian story writers, of course, usually predict we will be subsumed, enslaved, or exterminated by the machines, but I digress. The method by which, we predict, a computer would implement these self-improving processes is called a genetic algorithm. A genetic algorithm is allowed to change, or mutate, its own program, and it judges the "fitness" of the resulting program according to some cost structure.
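To make the idea concrete, here is a toy genetic algorithm in Python. This is only a sketch of the general technique, not anything Kurzweil specifically proposes: a population of bit-string "genomes" is repeatedly mutated, and fitness is simply how closely a genome matches a fixed target (all the names and numbers here are mine, chosen for illustration).

```python
import random

TARGET = [1] * 20          # the "ideal" genome: twenty ones
POP_SIZE = 30
MUTATION_RATE = 0.05       # chance of flipping each individual bit

def fitness(genome):
    # Score a genome by how many bits match the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def evolve(generations=200):
    # Start from a random population.
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
    for _ in range(generations):
        # Keep the fitter half, refill the population with mutated copies.
        population.sort(key=fitness, reverse=True)
        survivors = population[: POP_SIZE // 2]
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(population, key=fitness)

best = evolve()
print(fitness(best))  # usually 20, the maximum
```

Real genetic algorithms add crossover between parent genomes and far richer fitness functions, but this mutate-and-select loop is the core of the idea.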

Before we begin, some xkcd levity. Let me start with a pragmatic concern I have with Kurzweil's work as I understand it. I only read a bit of Kurzweil for my Thinking Critically About Technology course, so I do not claim to be an expert. That said, it is my understanding that the Singularity is set to occur when the computing power of an artificial computer matches that of the human brain, where computing power is measured as the number of operations a processor can perform in a given time. This seems to sidestep the dual problems of architecture and software.

Architecture refers to the structure of the processor performing the calculations. Computers tend to be linear processors; although nowadays most home computers have two processors working together, each tends to do its own thing, so one will keep your game running while the other makes sure your music is playing from a separate program. In contrast, the human brain is massively parallel. Despite common aphorisms, it is not terribly difficult to walk and chew gum at the same time, and while doing so you are also breathing, regulating your circulatory system, probably thinking about something, and so forth. Not only are we able to run a truly massive number of processes simultaneously in our brains, those processes interact: for example, our mood subconsciously affects our mannerisms and demeanor. Suffice it to say that, even if a computer had the raw power of the human brain, it is not clear that it could harness that power to the same effect. Writing software that takes advantage of parallel processing requires a very different type of thinking than linear processing does, and it is still considered a tricky problem. Or, to put it another way, any animal with a larger brain probably has more processing power available to it, but (I think) there is something unique that humans do with their processing power which cannot be explained without an appeal to a biological analogue of software.
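A tiny Python example of why parallel code calls for a different kind of thinking: even four threads sharing one number need explicit coordination, or updates can be silently lost. (This is my own minimal illustration, not anything from Kurzweil.)

```python
import threading

counter = 0
lock = threading.Lock()

def add(n):
    # Increment the shared counter n times.
    global counter
    for _ in range(n):
        # Without this lock, two threads could read the same old value,
        # each add 1, and one of the increments would be lost.
        with lock:
            counter += 1

threads = [threading.Thread(target=add, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 with the lock; possibly less without it
```

A linear program never has to worry about this; the moment work is split across processors, coordination becomes part of the software itself, which is one reason parallel programming is considered hard.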

Of course I could be wrong, and I certainly wouldn't mind finding out that I am; the Singularity could come right on schedule. I hope it does. It seems like the experience of collaborating with an intelligence that was not human could open up vast insights into ourselves and our place in the world. I like to believe that, if we could communicate with something that had a vastly different perspective, we would obtain a better all-around view of our own existence.

As you may have noticed, after the Singularity the interesting questions, to me, become less technical and more philosophical, along the lines of: what does it mean to be a person? Before I burnt out Fall semester, I wrote a three-part series on the philosophy of consciousness, which is quite relevant to this topic if you are interested. I also recommend the science-fiction novel Blindsight by Peter Watts; although rather bleak, it deals with the issue of consciousness in a compelling and thought-provoking manner.

I would like to conclude with the subject of computers having feelings. The problem of other minds, which I mentioned in my post about the song Poker Face, implies that we don't even know that other humans have feelings, only that they tend to act in a manner consistent with how we act when we experience feelings. So it would not be necessary for computers to actually experience emotions for us to believe they have feelings, merely for them to respond in a manner consistent with our expectations of emotional beings.

This brings me to the terrifying thought with which I closed my second post in the series on consciousness. It seems feasible that we could create computers that mimicked the responses to emotions, but did not actually experience emotions. If we were to replace ourselves with such computer replicas, we would destroy all beauty by forever blinding the eyes of the beholders.


elfarmy17 said...

Your point about parallel processing reminded me of the old robot stereotype of moving each limb one at a time, then twisting the neck, then speaking, etc. That would have to be some massively complex software to handle all of the movement and, as you said, interacting factors.

My dad says that one of the ways to mimic the processing of the human brain is to create "neurons." Real neurons have different magnitudes, apparently, so if you replace each neuron in the human brain with a specific value, the computer can treat them in the same way.

I think the computer would be able to register "I am feeling sad," based on what it knows should make itself sad, and then change its actions accordingly based on how it knows we would react. But is it really feeling the emotion? Are we really feeling the emotion, or is it too just a complex reaction to some event? If I'm given $20, and my sister is given $50, do I really feel jipped, or do I just know that I should and then act as if I do?

(Or more troubling yet, how would one know if he/she was truly in love with someone, and didn't just think so because that person loved them? Is it just a relationship of convenience?)

It seems that I should have written a follow-up post of my own. :)

Kenny said...

Fair enough, a massive neural network might mirror the architecture of the brain. However, the problem of programming for such an architecture remains. At this point, "teaching" a neural network remains a rather complicated endeavor, though there is nothing to say that this will remain the case. It is, however, kind of nice to think that our own neural networks in some sense carry millions of years of learning.
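For the curious, here is roughly what "teaching" looks like at the smallest possible scale: a single artificial neuron trained with the classic perceptron learning rule. This is a textbook toy of my own construction, nowhere near the state of the art, but it shows the flavor of the process.

```python
def step(x):
    # Threshold activation: fire (1) if the weighted input is non-negative.
    return 1 if x >= 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    # One artificial "neuron": two weighted inputs plus a bias, thresholded.
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = step(w[0] * x1 + w[1] * x2 + b)
            err = target - out
            # Perceptron learning rule: nudge weights toward the right answer.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Teach the neuron logical AND, a simple linearly separable function.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in data])  # [0, 0, 0, 1]
```

Even this single neuron needs a carefully chosen update rule and many passes over the data to learn a trivial function; scaling that up to millions of interacting neurons is where the complication comes in.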

Furthermore, brains are more complex than their neural layout. Perhaps unfortunately, our minds are quite influenced by the crazy chemicals we douse them in. While it seems possible to come up with a way to reflect this, either in the hardware layout or the software implemented, it seems to add another layer of consideration to the problem.

Again, I don't know about you, but I am perfectly sure that I feel emotions. However, I don't think that precludes them from being complex reactions to our environment; it just means that I experience something as a result. For example, a robot "experiencing" heartbreak might curl up in the fetal position in bed because that is what its programming instructs it to do, but a human would do so because he or she actually experienced pain. Whether the experience of pain is itself a mechanism we have developed to produce desirable reactions is irrelevant: a computer could react in a hurt manner, but a human would do so because he or she hurt (or because he or she is a manipulative person).

elfarmy17 said...

Oh, no, I definitely believe I feel emotions. I was just trying to be a devil's advocate there. :)