Beliefs, Desires and Coming Up with Reasons

A normal logical inference looks something like the following: (1) C leads to A, (2) C leads to B, (3) A and B are present, so (4) C might be true. In other words, you have reasons – (1), (2) and (3) – for believing something, and these reasons make you think that something else – (4) – might be true. This is an example of an inference to the best explanation. But do we always act so rationally? Do we always come up with reasons before we come up with the conclusion that is supposed to follow from them? Or do we – sometimes – come up with the conclusion first and then start searching for reasons to validate it? Like when we really want to buy that television and then start reasoning about why it would be good for us to have it? Let’s take a look at that.
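The inference pattern above can be sketched as a toy program. Everything in it – the hypotheses, the observations, the scoring – is my own invented illustration, not a serious model of reasoning: it just picks the hypothesis that explains the most of what we observe.

```python
# Toy sketch of inference to the best explanation (abduction).
# Hypotheses and observations are invented for illustration only.

# Each hypothesis maps to the observations it would explain:
# "it is raining" leads to wet windows AND trench coats, etc.
hypotheses = {
    "it is raining": {"wet windows", "trench coats"},
    "sprinklers on": {"wet windows"},
}

# What we actually see (steps (3) in the text).
observed = {"wet windows", "trench coats"}

def best_explanation(hypotheses, observed):
    """Return the hypothesis that explains the most observations."""
    return max(hypotheses, key=lambda h: len(hypotheses[h] & observed))

print(best_explanation(hypotheses, observed))  # prints "it is raining"
```

Note the direction: the observations come first, and the conclusion is whichever hypothesis best accounts for them – the opposite of starting from the conclusion and hunting for support.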

There’s a difference between having beliefs that are based upon reasons (like ‘I see raindrops on the window’ + ‘I see people wearing trench coats’, so ‘It must be raining outside.’) and longings or desires (like ‘I want a television. Period.’). Whereas beliefs need reasons, desires are just there. What we can see here is a difference in the chronological order of coming up with reasons for a belief or a desire: in the case of beliefs, we come up with reasons before arriving at the belief, while with desires we have the desire first and only then start coming up with reasons for why we should give in to it.

But there is another difference – besides the difference in order – between ‘belief reasoning’ and ‘desire reasoning’. Belief reasoning eventually leads up to an idea, while desire reasoning eventually ends up in an action (or not). The rational component – that is, the Ego – that has to deal with all the inputs or impulses entering our conscious and unconscious mind, is called upon in different stages of the reasoning trajectory. Where the Ego is apparent in the first stage of belief reasoning – the part in which we’re thinking about whether or not we consider a belief to be true – it becomes apparent in desire reasoning only after the desire has settled.

So what? Is this a problem? Well, not necessarily: not if the two types of reasoning stay completely separated. Not for as long as beliefs are preceded by reasons, and not for as long as desires are – or are not – acted upon based upon reasons. It only becomes dangerous when the two become intertwined: especially when we just happen to believe something and then start coming up with reasons for why it is that we just happen to believe it. For, unlike desires, beliefs aren’t something you just have. Beliefs are there solely because you’ve got reasons for them. Otherwise, they wouldn’t be beliefs, but merely desires.

So, what can we conclude from this? Well, a conclusion could be that you should watch out for those people that – in a discussion, for example – just seem to believe something and then start coming up with reasons for why it is that they just happen to believe this something. For, if these people are confusing the notions of belief and desire, it can be very difficult – or even impossible – for you to change their (unreasonable) beliefs. After all, desires are just there, which is reason enough for having them, while beliefs require reasons. And if this isn’t realized, the discussion might get stuck at the level of implementation: the level at which it is decided how the belief should be implemented – that is, validated by society – instead of reasoning about whether or not the belief is reasonable in the first place. And we don’t want that to happen, or that’s at least what I believe.

But what do you think?

Ethics and Mathematics: The Love for Absolute Rules

Ethics is not mathematics. For, unlike mathematics, ethics cannot function solely on the basis of a set of axioms, or ‘absolutely true starting points for reasoning,’ like a + b = b + a. Based on axioms, we can build an entire world (‘mathematics’) in which we can be sure that, simply by following the rules of inference, we will always end up with the truth, the whole truth and nothing but the truth. Hence it’s understandable that philosophers have thought to themselves: ‘Damn, how cool would it be if we could apply the same trick to ethics; that we, confronted with any action, could decide whether the action would be right or wrong?’ Surely, society has tried to build its very own rule-based system: the system of law. But is this a truly axiomatic system? Are there truly fundamental rights from which the rules of justice can be inferred? Let’s take a look at that.

Immanuel Kant made a distinction between hypothetical imperatives and categorical imperatives. These are two ‘kinds of rules’, the first ‘being applicable to someone dependent upon him having certain ends’; for example, if I wish to acquire knowledge, I must learn. Thus we’ve got: desired end (‘knowledge’) + action (‘learning’) = rule. Categorical imperatives, on the other hand, denote ‘an absolute, unconditional requirement that asserts its authority in all circumstances, both required and justified as an end in itself.’ We can see that there is no desired end present in this kind of rule; only the ‘action = rule’ part.

But how could a categorical imperative be applied in practice? A belief leading up to a categorical imperative could, for example, be: gay marriage is okay. Period. That would imply that you believe that, irrespective of the conditions present in a particular environment – thus no matter whether there is a republican or a democratic regime, whether the economy is doing great or not – gay marriage is okay. However, as it stands, this is not yet a categorical imperative, since the claim doesn’t urge you (not) to do something (unlike ‘You shall not kill’, which is a categorical imperative). The rightful categorical imperative would be something like (G): ‘You should accept gay marriage.’ This is an unconditional requirement that asserts its authority in all circumstances and is justified as an end in itself.

Now: let’s assume that you’re talking to someone who doesn’t agree with (G). Because now it gets interesting, for now you have to make a decision: you either stick to (G) or you reformulate (G) into a hypothetical imperative. The first option is clear: you just say ‘I believe that gay marriage should be allowed always and everywhere. Period.’ Seems fair, right? But what if the person you’re talking to would respond by saying, ‘Okay…so even when citizens would democratically decide that gay marriage is unacceptable?’

Now you have got a problem, for this might be a situation in which two of your categorical imperatives are contradictory, such as (G) and (D): ‘Decisions coming about through a democratic process should be accepted.’ Both (G) and (D) are unconditional rules: they should be acted on irrespective of the situation you’re in. But this is clearly impossible, for (G) forces you to accept gay marriage, while (D) forces you to do the opposite.

You could of course say that (G) is merely your belief (you believe that gay marriage should be accepted, not that this particular democratic society should find this too), but then you seem to fall into a form of moral relativism. Given that you don’t want that to happen, you have to decide which one is the true categorical imperative: (G) or (D)? And which one can be turned into ‘merely’ a hypothetical imperative?

You could of course decide to turn (D) into (D.a): ‘You should accept a decision only if you believe it has come about through a democratic process and is a good decision.’ Or you could turn (G) into (G.a): ‘Gay marriage should be accepted only if that decision has come about through a democratic process.’ But is this really how we form our moral judgements? Is (D.a) truly a rule you believe to be ‘fair’? And (G.a): do you truly believe that gay marriage is okay only if it is accepted by society? That is: do you make the moral value of gay marriage dependent upon the norms prevalent within a society? I doubt it.

So we are stuck; stuck in a paradox, a situation in which two absolute rules are contradictory, and the only way out is by turning at least one of them into an unintuitive and seemingly inadequate hypothetical imperative. So what to conclude? We’ve seen that categorical imperatives look powerful; as if they could truly guide our lives once and for all; no more need to search for conditions that might be relevant to our judgements. But we’ve also seen that when two categorical imperatives are contradictory – that is, when two rules cannot be followed at the same time – changes have to be made: at least one of them has to be turned into a hypothetical imperative. In order to do so, a certain ‘value hierarchy’ is required, based upon which these categorization decisions can be made. Hence even Kant’s absolute ethics – with its absolute categorical imperatives – seems to be relative: relative to (the value of) other imperatives, that is. Therefore mathematical ethics, as presented above, seems to be impossible.

But what do you think?

How to Interpret the Notion of Chance?

We all think we’re familiar with the notion of ‘chance’. But are we really? And if so, what consequences should we attach to our interpretation of chance? For instance, are chances purely descriptive in nature – in the sense that they refer only to past events – or do they have a predictive power that might be based upon some kind of underlying ‘natural’ force producing the structured data? And why would it even matter how we interpret chance? Let’s take a look behind the curtains of a probabilistic interpretation of chance, right into its philosophical dimensions.

On average, 12.3 per 100,000 inhabitants of the USA get killed in a traffic accident. Also, 45 percent of Canadian men are expected to develop some form of cancer at some point in their lives. So, what do you think about these data? First of all: does the fact that 12.3 out of 100,000 inhabitants get killed in traffic tell you anything about the likelihood that you are going to be killed in traffic? I guess not. It is merely a descriptive notion invented to condense a large amount of data into an easy-to-read figure. It says nothing about your future, or anyone’s future for that matter. After all: you will either die in traffic or you will not, and you will either get cancer or you will not. At this point in your life you are absolutely clueless about which way it will turn out to be. For all you know, it might be a 50-50 kind of situation.
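What a figure like ‘12.3 per 100,000’ actually is – on the descriptive reading defended here – can be shown in a few lines of code. The data below are entirely made up so that the arithmetic comes out at 12.3; the point is only that the rate is a compression of past records, with no probability attached to any individual.

```python
# A summary like "12.3 per 100,000" is just a compressed description
# of past records. The records below are invented for illustration:
# 123 deaths among 1,000,000 people.
past_records = [{"person": i, "died_in_traffic": i < 123}
                for i in range(1_000_000)]

deaths = sum(r["died_in_traffic"] for r in past_records)
rate_per_100k = deaths * 100_000 / len(past_records)

print(rate_per_100k)  # prints 12.3 – a description of this data set,
                      # not a probability belonging to any one person
```

Each individual record is simply True or False – died or didn’t – which is the ‘you either go one way, or the other’ point; the 12.3 only appears once a million such yes/no facts are added together.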

Although this interpretation of chance might feel counter-intuitive, it seems a more reasonable position to take than believing you are expected to die in traffic with a probability of 12.3/100,000. You are, after all, a unique person and you don’t have 100,000 ways to go. You either go one way, or the other. It is only by adding huge amounts of data together that scientists can come to compressed figures (like chances), thereby describing what has happened in the past. But description does not equal prediction, and totality does not equal uniqueness.

What are the implications of this way of looking at chance for our interpretation of science? What about the inferences scientists make based upon data, like the one about cancer I mentioned above? Are they making unjustified claims by positing that 45 percent of men are expected to develop cancer? I believe this might indeed be the case. If scientists want to be fully justified in their conclusions, they should do away with any claims regarding the likelihood of any event happening in the future. That seems to be the only way of staying 100 percent true to the data available.

But watch it: this is not to say that the scientific enterprise has lost its value. Science can still be the vehicle best-suited for gathering huge amounts of data about the world, and for presenting these data in such a way that we are able to get a decent glimpse of what is going on in the world around us. And that is where – I believe – the value of science resides: in the provision of data in an easy to understand manner. Not in the making of predictions, or inferences of any kind, as many scientists might happen to believe: just the presentation of data, a job which is difficult enough in itself.

You could say that I am not justified in making this claim. You could back up your argument by saying that a distinction should be drawn between the case of ’45 percent of men are expected to get some form of cancer’ and ‘one specific man has a 45 percent chance of getting cancer’. While the latter might be untrue, because one will either get cancer or not, the former might be more justified. That is because it divides a group into units that will either get cancer or not. However, although this might be true to a certain extent, it still seems an unjustified way of making predictions about how the world will turn out to be. After all, considering 100 men to be the unit of selection only replaces the level of the individual with the level of a group. On an even higher level of abstraction, one could consider the 100 men to be one unit, which would subsequently make the conclusions reached unjustified again.

Also, when choosing to make predictions on the level of the group, why does one choose the higher instead of the lower level? Why wouldn’t it be okay to say that, instead of human beings, cells are the true units that either get cancer or not? That’s only a difference in the level of analysis, right?
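The level-of-aggregation point can be made concrete with a toy data set – again entirely invented, arranged so that 45 of 100 men get cancer. Described at the level of individuals, we get a percentage; described at the level of the whole group taken as one unit, only a single yes/no fact remains.

```python
# Invented data: 45 of 100 men got cancer (True = got cancer).
men = [i < 45 for i in range(100)]

# Level 1: individuals as the units -> "45 percent got cancer".
share = sum(men) / len(men)

# Level 2: the whole group as one unit -> a single yes/no fact
# ("at least one member got cancer"), with no percentage left.
group_fact = any(men)

print(share, group_fact)  # prints 0.45 True
```

The same records thus yield different kinds of statements depending on which unit we pick, and nothing in the data itself tells us which level is the right one – which is exactly the worry raised above.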

So, next time you read somewhere that 99 of the 100 people fail in achieving something, interpret this for what it is: a description of what has happened in the past that can inform you in making the decision about what you should do right now. So don’t interpret this as meaning that you only have a one percent chance of being able to achieve a certain goal, because that would be a totally unjustified inference to make: an inference that goes way beyond what the data can support. And don’t consider a scientific fact to be a prediction about the future. Consider it for what it is: a useful description of the past, but no legitimate claim about the future.

But what do you think?