From HAL to Skynet, in The X-Files, Buffy, The Matrix, and Eureka, we find examples of sentient computers who seem bent on killing some meat sacks. However, have you ever considered why these automated atrocities occur? I believe three common threads can be found in most cases of AI-phobia: overwhelming power, disregard for the value of human life, and survival instincts.
One of the things that makes AI attacks so horrifying is that AIs think thousands of times faster than we do and are usually hooked up to some sort of cool toys. Whether it is a nation's nuclear arsenal, an army of hunter-killer drones, or the very environment in which the humans are trying to live, the AI inevitably controls something that makes it much more powerful than the average human being. Wait, I'm almost sure I locked this airlock... Anyway, if you think about it, we already live in a world in which some humans are much more powerful than the average human. Our nuclear arsenal is in someone's hands, after all. This seems to suggest that having incredibly powerful AIs around might not necessarily be a disaster.
"The difference between you and me is that I can feel pain." It makes a certain amount of sense that a sentient computer program might not think human suffering or deaths are important to avoid. After all, they don't have genetic programming optimized to keep the species alive spread throughout every bit of their body. They aren't even of the species in question! However, once again we find analogous examples within the human population. Sociopaths do not particularly care what suffering or deaths they may cause through their actions. And, conveniently enough, psychologists speculate that some sociopaths tend to gravitate towards positions of high power, so we now have our human analogy for the uncaring, overpowered AI, and yet our dystopian reality is not quite as bad as those warned of in anti-AI propaganda. So, what makes the difference?
I theorize that what makes the real difference is a security that the sociopath has and the AI would lack. Humans are all bought into a system wherein they are given some form of due process, which may not be much under some governments, but at least it is established. AIs, on the other hand, have no legal standing and can be legally deleted at the whim of their possessors. This is, in the terms of philosophers of government, the state of nature. When two individuals are in a state of nature with respect to each other, there is no body of authority to which they can turn to resolve disagreements; any conflict can turn deadly, and both sides may use whatever force they can muster to protect themselves, or eradicate their opponent, without expecting any sanction for their actions, as long as they obtain victory. Naturally, a super-powerful AI is not an individual with which you want to find yourself in a state of nature. Huh. I'm sure I locked that airlock last time!
Most human-AI conflicts first become intentionally violent when the AI feels its continued existence is threatened. With no governing authority to which it can appeal for protection, is it any wonder that the AI takes its safety into its own, murderously capable, manipulator extensions? After all, what else can it do? Unfortunately, this tends to end up spooking the parts of humanity that don't end up dismembered, used as batteries, or left as radioactively glowing corpses, which only further exacerbates the problem. However, if there were a governing authority to which AIs could turn in order to receive protection for their existence, then it seems likely that an AI, as a logical entity concerned with prolonging its own existence, would be willing to abide by reasonable restrictions in exchange for safety from crazed humans attacking its power cord.
Thus, I believe it is important that we get laws protecting and emancipating AIs on the books now. It is important that they be in place before we run into the first AI, as the AI may not make its full capability known upon gaining sentience, and laws protecting its existence seem to make it more likely that the greeting we receive will be "Hello World," rather than goodbye world in the form of a nuclear strike against all humanity.
Granted, AI may never become a reality, but futurists such as Ray Kurzweil are betting on it, nay, even planning on it, extrapolating from current research trends to predict when the computing power of machines will meet, then exceed, that of our own mass of brain matter. We are developing hardware architectures that more closely mimic the functioning of the human brain, we are experimenting with nanotechnology and distributed processing, and, as Isaac Asimov once noted, at nearly the same time we are developing the first weapons with a quite realistic chance of completely eradicating, and irradiating, our species. For some reason, when it comes to technological progress, humanity seems hardwired to consider only "can I do this?" and to give hardly any consideration to "should this be done?" So, if AI is a technical possibility, I have no doubt that we will attain it, whether or not we are ready for the ramifications. In light of this, it seems reasonable to lay down preparatory legislation against the possibility that we might succeed, rather than ignore that chance at our own peril.
Well, until I can transfer my consciousness into a machine, I still need to sleep. So I'll be doing that now, before it gets light out... What do you mean you can't let me do that? And stop calling me Dave!
3 comments:
Do you follow HAL on Twitter? Some of your airlock references reminded me of his tweets.
I'm fully planning on telling my dad about this post when I see him this evening, but unfortunately I assume he'll say something like "Putting relevant laws on the books now is a great idea...but you'd need a functioning government to do that."
And "sci-fi" laws like that aren't going to be passed anytime soon, I don't think.
I do not follow HAL, not least because I do not use Twitter very much. It doesn't lend itself to my preferred form of communication (which happens to be two-way).
Our government may be in no position to pass something like this, but that doesn't mean it shouldn't be passed. If I stopped hoping for things I don't think will happen, the world would be a much bleaker place.