‘Moral Logic’: a Guide for Political Decision Making?

Modal logic is – as far as I am concerned – all about what might be the case (alethic logic) or about what we know (epistemic logic), but not about what we should or should not do. That is: ethics does not seem to be grounded in modal logic – or any logic, for that matter. And that’s a pity, for I believe that logic can play a valuable role in moral decision making, especially in politics. Let me illustrate this with an example:

Let’s say that a politician proposes a policy A (‘Taxes are increased’). Let’s suppose that it is common knowledge that A leads to B (‘A –> B’), with B being ‘The disposable income of the poor is decreased’. Now, let’s say the politician doesn’t want B (we write ‘–B’). Then you could reasonably say that, by letting ‘–’ follow the rules of negation, and by applying modus tollens, we get –A. That is: the politician does not want A.

This last step requires clarification. Suppose that we know that by increasing taxes (A), the disposable income of the poor will be decreased (B). Then, knowing that the politician doesn’t want the disposable income of the poor to be decreased, the politician should not increase taxes. Finally, assuming that no one wants to do something he should not do (we are dealing with very rational agents here), it follows that the politician does not want A (‘–A’).
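To make the bookkeeping explicit, here is a minimal Python sketch of this inference. The string propositions, the ('not', X) encoding of ‘–’, and the modus_tollens helper are purely illustrative assumptions on my part, not an established formalism.

# A minimal sketch of the tax example. Propositions are plain strings and
# '-X' is modelled as a ('not', X) pair; the names are illustrative only.

def negate(p):
    """Negate a proposition, collapsing double negation."""
    if isinstance(p, tuple) and p[0] == "not":
        return p[1]
    return ("not", p)

def modus_tollens(implication, negated_consequent):
    """From A -> B and not-B, conclude not-A; otherwise conclude nothing."""
    antecedent, consequent = implication
    if negated_consequent == negate(consequent):
        return negate(antecedent)
    return None

A = "Taxes are increased"
B = "The disposable income of the poor is decreased"

# Common knowledge: A -> B. The politician does not want B.
print(modus_tollens((A, B), negate(B)))
# ('not', 'Taxes are increased'), i.e. the politician does not want A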

This ‘logic’ is consequentialist in nature. That is, you decide whether to perform a certain action (A) by looking at its consequences (B). If you want B, you are fine. If you don’t want B (–B), then – by modus tollens – it follows that you should not do A; hence you don’t want A, giving –A. This logic is of course very strict; it follows absolute rules, axioms or principles. Hence it might be best suited to model a moral system that is equally strict. Think of Kantian ethics. On the other hand, a system like utilitarian ethics might be better captured by a different mathematical framework.

Workings
Let’s dive a little deeper into the workings of this moral logic. One way it might work is as follows.

(1) You start with a set of axioms, i.e. propositions you absolutely want, or absolutely don’t want, to be the case:

A
–B
C

(2) Next you look at the actions available, and the consequences these actions entail:

D –> A
D –> B
E –> C

(3) Then you choose an action (in this case either D or E) that does not have any consequences you absolutely don’t want. In this case you should not choose D, for D –> B, and you have –B, hence –D. That is, according to the rules laid down, we don’t want D; hence the only option that remains is E.
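This selection step can be carried out mechanically. Below is a small Python sketch of steps (1)–(3); the set-and-dictionary encoding and the admissible helper are my own illustrative assumptions, since the post does not fix a concrete representation.

# A sketch of steps (1)-(3): rule out every action that entails a consequence
# we absolutely don't want, and keep whatever remains. Illustrative only.

def negate(p):
    return p[1] if isinstance(p, tuple) and p[0] == "not" else ("not", p)

# (1) Axioms: propositions we absolutely want (A, C) or absolutely don't want (-B).
axioms = {"A", ("not", "B"), "C"}

# (2) Available actions and the consequences they entail.
consequences = {
    "D": ["A", "B"],
    "E": ["C"],
}

# (3) An action is admissible if none of its consequences is absolutely unwanted.
def admissible(action):
    return all(negate(c) not in axioms for c in consequences[action])

print([a for a in consequences if admissible(a)])
# ['E'] -- D is ruled out because D -> B while -B is an axiom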

Extended
Of course, this ‘logic’ does not obey all the regular rules of logic; for instance, it does not obey the rule of modal logic that the two modal operators can be expressed in terms of each other (possibility as the negation of necessity of the negation, and vice versa) – we don’t even have two modal operators. But still, applying the very simple rules laid down above can be helpful. I find this logic particularly valuable in analysing arguments used in political decision making, for politics is a prime example of the interplay between actions (the antecedent of our material conditional) and normative consequences (the consequent).

The above logic can be extended to take into account degrees of preference. You could make a hierarchy of consequences, with consequences higher in the hierarchy being morally superior to those below, so that – in case you have more than one action to choose from – you should choose the one whose consequences rank highest in the hierarchy. This would also suit Artificial Intelligence very well.
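As a rough sketch of that extension (continuing the D/E example above), one could rank consequences and pick the action whose best consequence sits highest. The particular hierarchy and the ranking rule (best single consequence wins) are assumptions of mine; the post leaves those details open.

# A sketch of the ranked extension: compare actions by the position of their
# morally best consequence in a preference hierarchy. The hierarchy itself and
# the "best single consequence wins" rule are illustrative assumptions.

hierarchy = ["C", "A", "B"]  # earlier entries are morally preferable
rank = {c: i for i, c in enumerate(hierarchy)}

consequences = {
    "D": ["A", "B"],
    "E": ["C"],
}

def best_rank(action):
    """Rank of the action's best consequence (lower index = morally better)."""
    return min(rank.get(c, len(hierarchy)) for c in consequences[action])

print(min(consequences, key=best_rank))
# 'E' -- its consequence C outranks anything that D brings about

In a richer version you would combine this ranking with the hard exclusions from step (3), so that consequences you absolutely don’t want still veto an action outright.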

What do you guys think of the moral logic?