Imagine the following situation: you are taking your morning walk past the primary school in your neighborhood. You walk past the playground, where children are playing until the bell rings and school starts. Then suddenly, out of the blue, a man enters the playground: he is carrying a machine gun. A loaded machine gun, to be exact. He aims his gun at one of the children and yells: ‘The children of today will be the corpses of tomorrow. This is God’s revenge for the tormenting betrayal of the West.’ And while he is pointing his gun at a little girl, you notice that he has dropped his handgun. You pick up the handgun and see that it is loaded. Out of the corner of your eye, you see a child wetting its pants, while sounds of crying and terror fill your ears. You aim the gun at the man and think to yourself: shall I kill him? Or not?
So what should you do? There are two competing philosophical positions that might assist you in making this decision. But before knowing which position to choose, you should answer the following question: should you strive to maximize the overall level of ‘happiness’, irrespective of the act you have to undertake (killing someone, in this case), or should you stick to absolute moral values, regardless of what the immediate consequences of doing so might be? This is the choice between utilitarianism and absolute ethics.
Utilitarianism claims that an action is good if it increases the overall level of good in the world: the good caused by the action minus the suffering caused by it. You can see what, according to this view, you should do in the example: kill that guy. After all, the suffering he will cause the children is (presumably) much greater than the suffering he will incur by being killed. It’s a tradeoff: one human life versus many more. Nothing more, and nothing less.
However, is this how we usually perform moral actions? By just checking whether our actions will maximize the overall level of good? That’s not what we usually associate with acting morally, right? You help your friend because you feel like helping him, not because it increases the overall level of utility, don’t you? Or are we indeed nothing more than walking and talking calculators, adding and subtracting gains and losses in a split second? And if so, how can we be sure about the numbers we include in our calculation? Imagine that the guy in our example isn’t intending to kill any child. We might assume that he is going to kill children, but are we sure about that? He might just be drunk and confused, not planning to do any physical harm. In that case we wouldn’t increase the overall level of utility by killing him, right? My point is: you don’t know what the consequences of someone’s actions will be until you have witnessed them. So how are you going to take that into account?
The competing view is derived from Kant’s moral philosophy, in which the notion of the Categorical Imperative plays a crucial role. According to this Categorical Imperative, you should “act only according to that maxim whereby you can and, at the same time, will that it should become a universal law”. This law has nothing to do with increasing the overall level of good in the world. Instead, you should ask yourself what the world would look like if everyone performed the action you are considering (like killing someone), and then check whether this is a world you could live in, and whether it is a world you would want to live in. If your action doesn’t meet these requirements, it’s an immoral action and you shouldn’t perform it.
So, what if we apply the Categorical Imperative to our case of the (potential) child murderer? What if every one of us killed anyone we expected to kill people? Would that be a world we could and would want to live in? Well, it might not be a world we want to live in. After all, as we’ve just seen, we don’t know for sure whether the man will indeed kill the children; and it would be a little harsh to kill someone because of our inadequate projections, wouldn’t it? But, more importantly, acting according to the aforementioned maxim (“kill anyone you expect is going to kill people”) doesn’t seem to yield a world we could live in. After all, if you are planning on killing someone, like the man with the gun in this example, you should be killed too, right? But who’s going to do that? And shouldn’t that person be killed as well? An infinite regress results. So you see: it is impossible to make this maxim into a universal law; a law that every one of us should (always) stick to.
Ethics is not so easy. So, what do you think?