Computing power doubles roughly every eighteen months to two years, an observation often called Moore's Law. (Strictly speaking, Moore's Law describes transistor counts, but computing power has historically tracked it.) It is theorized that, at some point, computers will achieve the computational power of the human brain and begin improving themselves; this event is called the Singularity. After that, futurists like Ray Kurzweil believe that we will be improved by the machines, or that we will become machines. Dystopian story writers, of course, usually predict we will be subsumed, enslaved, or exterminated by the machines, but I digress. One method by which a computer might implement such self-improvement is the genetic algorithm: a program that is allowed to change, or mutate, itself and that judges the "fitness" of the resulting program according to some cost structure.
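To make the idea concrete, here is a minimal sketch of a genetic algorithm. The names, parameters, and toy problem (evolving a bit string toward all ones, where the "cost structure" is just the count of ones) are my own illustration, not anything from Kurzweil:

```python
import random

def fitness(genome):
    """The 'cost structure': count of ones. Higher is better."""
    return sum(genome)

def mutate(genome, rate=0.05):
    """The 'mutation' step: each bit flips with a small probability."""
    return [1 - bit if random.random() < rate else bit for bit in genome]

def evolve(length=20, population_size=30, generations=100):
    random.seed(0)  # fixed seed so the toy run is repeatable
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(population_size)]
    for _ in range(generations):
        # Judge fitness, keep the better half, refill with mutated copies.
        population.sort(key=fitness, reverse=True)
        survivors = population[:population_size // 2]
        children = [mutate(random.choice(survivors)) for _ in survivors]
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
print(fitness(best))  # approaches 20, the maximum possible fitness
```

A real self-improving system would, of course, mutate and evaluate programs rather than bit strings, but the loop of mutate, score, and select is the same.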
Before we begin, some xkcd levity. Let me start with a pragmatic concern that I have with Kurzweil's work as I understand it. I only read a bit of Kurzweil for my Thinking Critically About Technology course, so I do not claim to be an expert. That said, it is my understanding that the Singularity is set to occur when the computing power of an artificial computer matches that of the human brain, where computing power is measured as the number of operations a processor can perform in a given time. This seems to sidestep the dual problems of architecture and software.
Architecture refers to the structure of the processor performing the calculations. Computers tend to be linear processors. Nowadays most home computers do have two processors working together, but each tends to do its own thing, so one will keep your game running while the other makes sure your music is playing from a separate program. In contrast, the human brain is massively parallel. Despite common aphorisms, it is not terribly difficult to walk and chew gum at the same time, and you are also breathing, regulating your circulatory system, probably thinking about something, and so forth. Not only can we run a truly massive number of processes simultaneously in our brains, those processes also interact; for example, our mood subconsciously affects our mannerisms and demeanor. Suffice it to say that, even if a computer were to have the raw power of the human brain, it does not seem clear that it would be able to harness it to the same effect as a human brain. Writing software to take advantage of parallel processing requires a very different type of thinking than linear processing does, and it is still considered a tricky problem. Or, to put it a different way, any animal with a larger brain probably has more processing power available to it, but (I think) there is something unique that humans do with their processing power which cannot be explained without an appeal to a biological analogue of software.
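The "each core does its own thing" picture above can be sketched in a few lines. This is my own toy example, not anything from Kurzweil: two independent tasks run concurrently, and neither waits for the other to finish, which is exactly the easy case of parallelism. The hard case, which brains handle effortlessly, is when the tasks must constantly interact:

```python
import threading
import time

results = []

def task(name, duration):
    # Simulate work taking some amount of time, then record completion.
    time.sleep(duration)
    results.append(name)

# Start both tasks; the shorter one finishes first even though
# it was started second, because neither blocks the other.
music = threading.Thread(target=task, args=("music", 0.2))
game = threading.Thread(target=task, args=("game", 0.1))
music.start()
game.start()
music.join()
game.join()
print(results)  # ['game', 'music']
```

Two threads that never touch shared state are easy; the moment they must coordinate, as brain processes constantly do, you are into locks, races, and the genuinely tricky part of parallel programming.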
Of course I could be wrong, and I certainly wouldn't mind finding out that I am, and the Singularity could come right on schedule. I hope it does; it seems like the experience of collaborating with an intelligence that was not human could open up vast insights into ourselves and our place in the world. I like to believe that, if we could communicate with something that had a vastly different perspective, we would obtain a better all-around view of our own existence.
As you may have noticed, after the Singularity the interesting questions, to me, become less technical and more philosophical, along the lines of: what does it mean to be a person? Before I burnt out Fall semester, I wrote a three-part series on the philosophy of consciousness, which is quite relevant to this topic if you are interested. I also recommend the science fiction novel Blindsight by Peter Watts; although rather bleak, it deals with the issue of consciousness in a compelling and thought-provoking manner.
I would like to conclude with the subject of computers having feelings. The problem of other minds, which I mentioned in my post about the song "Poker Face," implies that we don't even know that other humans have feelings, only that they tend to act in a manner that is consistent with how we act when we experience feelings. So, it would not be necessary for a computer to actually experience emotions for us to believe it has feelings, merely that it responds in a manner consistent with our expectations of emotional beings.
This brings me to the terrifying thought with which I closed my second post in the series on consciousness. It seems feasible that we could create computers that mimicked the outward responses associated with emotions but did not actually experience them. If we were to replace ourselves with such computer replicas, we would destroy all beauty by forever blinding the eyes of the beholders.