Partnership Between TheYoungSocrates and the Institute of Arts and Ideas: ‘Everything We Know Is Wrong’

I recently discovered the Institute of Arts and Ideas (IAI), a non-profit organization that attempts to make philosophical thinking more accessible to the general public. They publish podcasts and articles about all sorts of philosophical subjects, such as free will versus determinism, egoism versus altruism and philosophy of science.

I will regularly post their podcasts, starting with ‘Everything We Know Is Wrong’, about (the limits of) the scientific method.

It turns out that many scientific experiments are irreproducible, meaning that if you follow the same methods as a researcher who obtained certain results, it is not at all certain that you will get the same results. This raises questions about the scientific method, and about whether it is a proper way to arrive at the truth, or at least at facts.

It is fair to say that a distinction should be made between the social sciences and psychology on the one hand, and the natural sciences on the other. Experiments in the latter are, it turns out, generally reproducible, while experiments in the former often are not. This raises doubts among certain philosophers and scientists about the scientific status of such fields. But don’t these fields just apply the same methods as physics does? Hence, shouldn’t the results obtained from the social sciences be treated with the same regard as results from the natural sciences?

These are interesting questions, many of which are at the core of Episode 15 of the IAI’s series ‘Philosophy For Our Times’:


‘Moral Logic’: a Guide for Political Decision Making?

Modal logic is – as far as I know – all about what might possibly be the case (alethic logic) or about what we know (epistemic logic), but not about what we should or should not do. That is: ethics seems not to be grounded in modal logic – or any logic, for that matter. And that’s a pity, for I believe that logic can play a valuable role in moral decision making, especially in politics. Let me illustrate this with an example:

Let’s say that a politician proposes a policy A (‘Taxes are increased’). Let’s suppose that it is common knowledge that A leads to B (we write ‘A –> B’), with B being ‘The disposable income of the poor is decreased’. Now, let’s say the politician doesn’t want B (we write ‘–B’). Then you could reasonably say that, by letting ‘–’ follow the rules of negation and by applying modus tollens, we get –A. That is: the politician does not want A.

This last step requires clarification. Suppose we know that by increasing taxes (A), the disposable income of the poor will be decreased (B). Knowing that the politician doesn’t want the income of the poor to be decreased, the politician should not increase taxes. Then, assuming that no one wants to do something he should not do (we are dealing with very rational agents here), it follows that the politician does not want A (‘–A’).
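For those who like to see this spelled out, here is a minimal Python sketch – purely illustrative, with `implies` being my own name for the material conditional – that brute-forces all truth assignments to check that the modus tollens step is valid:

```python
from itertools import product

def implies(p, q):
    """Material conditional: p -> q is false only when p is true and q is false."""
    return (not p) or q

# Modus tollens: in every truth assignment where the premises
# (A -> B) and (-B) both hold, the conclusion (-A) holds too.
valid = all(
    not a                        # conclusion: -A
    for a, b in product([True, False], repeat=2)
    if implies(a, b) and not b   # premises: A -> B and -B
)
print(valid)  # True: the inference is truth-functionally valid
```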

This ‘logic’ is consequentialist in nature. That is, you decide whether to perform a certain action (A) by looking at its consequences (B). In case you want B, you are good. In case you don’t want B (–B), then – by modus tollens – it follows that you should not do A; hence you don’t want A, giving –A. This logic is of course very strict: it follows absolute rules, axioms or principles. Hence it might be best suited to model a moral system that is equally strict – think of Kantian ethics. A system like utilitarian ethics, on the other hand, might be better modelled by a different mathematical framework.

Workings
Let’s dive a little deeper into the workings of this moral logic. One way it might work is as follows.

(1) You start with a set of axioms: propositions you absolutely want, or absolutely don’t want, to be the case:

A
–B
C

(2) Next you look at the actions available, and the consequences these actions entail:

D –> A
D –> B
E –> C

(3) Then you choose an action (in this case either D or E) that does not have any consequences you absolutely don’t want. In this case you should not choose D, for D –> B, and you have –B, hence –D. That is, according to the rules laid down, we don’t want D; hence the only option that remains is E.
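To make this procedure concrete, here is a minimal Python sketch of steps (1)–(3); the names and data structures are my own illustration, not part of any established formalism:

```python
# (1) Axioms: propositions we absolutely want (True) or absolutely reject (False).
axioms = {"A": True, "B": False, "C": True}   # i.e. A, -B, C

# (2) Actions and the consequences they entail.
consequences = {
    "D": ["A", "B"],   # D -> A, D -> B
    "E": ["C"],        # E -> C
}

# (3) An action is permissible only if none of its consequences is rejected.
def permissible(action):
    return all(axioms.get(c, True) for c in consequences[action])

options = [a for a in consequences if permissible(a)]
print(options)  # ['E'] -- D is ruled out because D -> B while we hold -B
```

Note that consequences not mentioned among the axioms are treated as morally neutral here (the `.get(c, True)` default); that is one modelling choice among several.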

Extended
Of course, this ‘logic’ does not obey all the regular rules of logic; for instance, it does not obey the rule of modal logic that the two modal operators can be expressed in terms of each other – we don’t even have two modal operators. But still, the very simple rules laid down above can be helpful. I find this logic particularly valuable in analysing arguments used in political decision making, for politics is a prime example of the interplay between actions (the antecedents of our material conditionals) and normative consequences (the consequents).

The above logic can be extended to take degrees of preference into account. You could make a hierarchy of consequences, with consequences higher in the hierarchy being morally superior to those below, so that – in case you have more than one action to choose from – you should choose the one whose consequences rank highest in the hierarchy. This would also lend itself well to implementation in Artificial Intelligence.
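As a hypothetical sketch of such an extension – the ranking numbers below are arbitrary – one could replace the flat accept/reject axioms with scores and, for instance, pick the action whose worst consequence ranks best:

```python
# Hypothetical ranking of consequences: higher = morally preferable.
ranking = {"A": 2, "B": -3, "C": 1}

consequences = {"D": ["A", "B"], "E": ["C"]}

# One possible decision rule: maximise the worst-ranked consequence.
def worst_outcome(action):
    return min(ranking[c] for c in consequences[action])

best = max(consequences, key=worst_outcome)
print(best)  # 'E': D's worst consequence (B, ranked -3) drags it down
```

Maximising the worst outcome is only one defensible rule; summing the ranks instead would give the logic a more utilitarian flavour.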

What do you guys think of the moral logic?

Public Opinion and Information: A Dangerous Combination

‘That guy is an asshole. The way he treated his wife is absolutely disgusting. I’m glad she left him, she deserves better…much better.’ That’s the response of society when it finds out that a famous soccer player has hit his wife, and that the pair has consequently decided to split the sheets. But on what basis does society form this judgment, or any judgment for that matter? On information, of course! It hears from the tabloids what has occurred, it processes this information, and then it comes to the most ‘reasonable’ conclusion or judgment. It’s pretty much like science, in that it bases its conclusions on data and reasons. But the prime difference between science and gossip/public opinion is that the latter doesn’t actively try to refute its conclusions: it merely responds to the data it receives. And this has some striking consequences.

For what happens when the data changes? What happens when one or two lines in a tabloid form a new and ‘shocking’ announcement? What if it turns out that – while the football player and his wife were still together – the wife had an affair with another guy? Then suddenly the whole situation changes. Then suddenly the wife deserved to be hit. Then suddenly a hit in the face was a mild punishment for what she did. Then suddenly most people would have done the same when confronted with the same situation. Suddenly there is new data that has to be taken into account. But what are the implications of this observation?

Public opinion can be designed and molded by regulating the (limited) amount of information it receives. And this goes not only for gossip, but just as much for more urgent matters like politics and economics. It isn’t society’s duty to gather as much data as possible, compare the evidence for and against positions, and come to the most reasonable conclusion. No, society only has to take the final step: forming the judgment. And if you understand how this mechanism works, you can (ab)use it for your own good. You could, if you were in politics, ‘accidentally’ leak information about a conversation the prime minister had with his colleagues, and thereby change the political game. The prime minister will be forced to respond to these ‘rumors’, thereby validating the (seeming) importance of the issue. For why else would he take the time to respond to it? And suddenly, for the rest of his days, he will be remembered for this rumor, whether it turns out to be true – as it was in Bill Clinton‘s case – or not: where there’s smoke, there is fire.

But let me ask you something: don’t you think that famous people make mistakes every day? Even if only 1 percent of wives were hit by their famous husbands every year, that would still be more than enough to fill each tabloid for the entire year. But what if – of all the ‘beating cases’ – only one or two a year became public? Then – and only then – does the guy who did the hitting become a jerk. Why? Because even though he might have been hitting his wife all along, it is only now that we have the data to back up our judgment. And since we’re reasonable creatures who only jump to conclusions whenever we’ve got the evidence to do so, we are suddenly morally allowed to do so.

We consider ourselves reasonable creatures for basing our judgments solely on the data we receive. We find this a better way to go than simply claiming things we don’t know for sure. And although this might very well be the reasonable way to go, we have to remind ourselves that we’re slaves to the data, and therefore vulnerable to those providing the data. We have to be aware that even though we don’t know about the cases we have no data about, this doesn’t imply that those cases aren’t there. It merely means that the parties involved – whether that is the (ex-)wife of a famous soccer player or anyone else – saw no reason to leak the data. It only means that their interests were more aligned than they were opposed. And we should take people’s interests – and the politics behind them – into account when jumping to judgments based on the data we receive.

But what do you think?

Beliefs, Desires and Coming Up with Reasons

A normal logical inference looks something like the following: (1) C leads to A, (2) C leads to B, (3) A and B are present, so (4) C might be true. In other words, you have got reasons – (1), (2) and (3) – for believing something, and these reasons make you think that something else – (4) – might be true. This is an example of an inference to the best explanation. But do we always act so rationally? Do we always come up with reasons before we come up with the conclusion that is supposed to follow from them? Or do we – sometimes – come up with the conclusion first and then start searching for reasons to validate it? Like when we really want to buy that television, and then start reasoning about why it would be good for us to have that television? Let’s take a look at that.
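As a toy rendering of such an inference to the best explanation – the hypotheses and observations below are made up – one could prefer whichever hypothesis accounts for the most observed facts:

```python
# Which facts each candidate hypothesis would explain.
explains = {
    "C": {"A", "B"},   # C leads to A and to B
    "D": {"A"},        # a rival hypothesis that explains less
}
observed = {"A", "B"}  # (3) A and B are present

# (4) The hypothesis covering the most observations "might be true".
best = max(explains, key=lambda h: len(explains[h] & observed))
print(best)  # 'C'
```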

There’s a difference between beliefs that are based upon reasons (like ‘I see rain dripping off the window’ + ‘I see people wearing trench coats’, so ‘It must be raining outside’) and longings or desires (like ‘I want a television. Period.’). Where we need reasons to hold beliefs, desires are just there. What we see here is a difference in the chronological order of coming up with reasons: in the case of beliefs, we come up with reasons before arriving at the belief; with desires, we have the desire first and then start coming up with reasons for why we should give in to it.

But there is another difference – besides the difference in order – between ‘belief reasoning’ and ‘desire reasoning’. Belief reasoning eventually leads up to an idea, while desire reasoning eventually ends up with an action (or not). The rational component – that is, the Ego – that has to deal with all the inputs or impulses entering our conscious and unconscious mind is called for at different stages of the reasoning trajectory. Where the Ego is present in the first stage of belief reasoning – the part in which we’re deciding whether or not we consider a belief to be true – it enters desire reasoning only after the desire has settled.

So what? Is this a problem? Well, not necessarily: not if the two types of reasoning stay completely separated. Not as long as beliefs are preceded by reasons, and not as long as desires are – or are not – acted upon based upon reasons. It only becomes dangerous when the two become intertwined: especially when we just happen to believe something and only then start coming up with reasons for why we believe it. For unlike desires, beliefs aren’t something you just have. Beliefs are there solely because you’ve got reasons for them. Otherwise, they wouldn’t be beliefs, but merely desires.

So, what can we conclude from this? Well, one conclusion could be that you should watch out for people who – in a discussion, for example – just seem to believe something and then start coming up with reasons for why they happen to believe it. For if these people are confusing the notions of belief and desire, it can be very difficult – or even impossible – for you to change their (unreasonable) beliefs. After all, desires are just there, which is reason enough for having them, while beliefs require reasons. And if this isn’t realized, the discussion might get stuck at the level of implementation: the level at which it is decided how the belief should be implemented – that is, validated by society – instead of asking whether the belief is reasonable in the first place. And we don’t want that to happen, or at least that’s what I believe.

But what do you think?