Thought experiments of this sort are called trolley problems, and they come in a wide array of variations. They are an attempt to probe what we think is the right thing to do. For example, more people would switch the train to the side track than would push someone in front of the train to slow it down, even though the end result would be the same: one dead to save five. On the other hand, when told that the person they were pushing was responsible for sabotaging the train, and thus for placing the five workers in danger, people once again become quite willing to sacrifice the one for the many.
A recent article in MSU's newspaper, The State News, reported that a professor was using virtual reality technology to see what people actually decide in this situation. This is, of course, inherently interesting, but it is the response of Professor Lindemann, a professor of philosophy, that I truly wish to address. One must interpret it with a large dose of goodwill, both because who knows how faithfully the story reflects Professor Lindemann's actual thoughts, and because newspapers are not the best forum for detailed philosophical exposition, perhaps to the detriment of our civilization. Professor Lindemann is said to have raised two main objections to the nature of the experiment.
Her first objection is that the evidence from this study is not valuable since the subjects were college-age and "don't have much life experience" (to quote the article, not the professor). I am unsure exactly what this means, to be honest. Unless we happen to be ethicists, what relevant life experience do we gain as we age that would significantly help us make such a decision? I'm significantly older than the average undergraduate, but I don't believe myself any more qualified to make that decision than I was four years ago. Of course, I am still fairly young, or at least that is what I keep telling myself, so maybe I still lack the relevant experience. Which brings me to a further point: if even people in their late twenties lack this experience, then a fairly large portion of the population lacks it, so why should we be uninterested in their moral positions?
However, this first difference of opinion is merely a pedantic point compared to the second. Professor Lindemann goes on to say, "the trouble with the trolley problem is that if you actually test people with it, you only know what their instincts are. It doesn't tell you much about the right thing to do." Prima facie this seems reasonable. The study tests what people do, ergo the results might tell you what people think is the right thing to do, but not what the right thing to do actually is. Yet this overlooks one quirk of ethics that has, on occasion, irked me: our attempts to codify a system of ethics tend to be guided primarily by our intuitions about what is right and wrong. That is to say, aside from the accusation of logical inconsistency, about the only criticism one can level against a well-defined system of ethics is that it holds certain actions to be ethical that one feels should not be.
For Kantian absolutism, which holds that one should choose moral rules to which one would wish everyone to adhere and then follow them in all situations, a common objection runs as follows: if a frightened person ran into your basement, and an angry man with a deadly weapon then came asking after them, you would be prohibited from lying to him, because the rule against lying must be respected at all times. For utilitarianism, the ethical system that advocates maximizing the good, which we can call happiness for simplicity, an analogous objection is that the system would condone murdering someone as long as they were universally disliked and caused significant grief for everyone with whom they interacted. Moral relativism is criticized because it cannot condemn sociopathic serial killers or the Nazis, depending on whether you are discussing personal or cultural relativism. In fact, the only trick for avoiding this type of criticism seems to be remaining vague about what the system actually requires of a moral agent, as social contract theory and care ethics do. Because we use our own moral intuitions as the guide by which we calibrate our ethical theories, it does seem of some practical value to know what our moral instincts are.
Of course, it is certainly possible that this is entirely the wrong way to go about things. If ethics is supposed to codify what the right thing to do is, and we are trying to obtain a system that tells us the right thing to do is whatever we already think is right, one might well ask what use the actual system is. This is a valid concern, but it unfortunately runs afoul of a pragmatic consideration: namely, what else is there to go on? Any method for evaluating different systems of ethics must, by its very nature, contain a judgment about what is good in order to assess the systems. And once you have settled what good is, you have already instituted, at some level, a system of ethics, one which, being integral to the judging process, cannot itself be evaluated by that process.
So, hopefully we can all agree that knowing that about 90% of people will throw the switch is interesting information. But what about the other 10%? Since it is late, and they are quite interesting in their own right, I think I shall leave berating them as a task for another post.