So Little to Say in So Many Words

I just returned from a lecture in Philosophy of Language, a course I attend at my university. It’s a course that deals with the ideas of the “big thinkers” of 20th-century analytic philosophy of language. And although I find the topic very interesting, I couldn’t help but become annoyed by the lecturer’s overdose of irrelevant digressions. It made my thoughts wander off to a more fascinating – and less annoying – place.

Let me ask you: why do people use so many words while saying so damn little? Why do people seem to think that the most important “thing” in communication is to convey their message, and that they should do so regardless of how long their “elucidation” becomes? Don’t people see that using more words, especially when saying the same thing in multiple ways, deflates the value of each word said? How can we – the listeners – know what’s relevant and what’s not if relevant and irrelevant words are mixed into one act of communication? Don’t people see that using more words increases the risk that the words, taken together, convey a contradictory message? That more words imply more meanings, and that more meanings imply more opportunity for confusion to arise?

Being succinct in communicating your thoughts is harder than being elaborate. As Einstein once put it, “Things should be made as simple as possible, but not any simpler.” Only by making things simple can you convey the core of what you mean to say. But it is often the fear of the second part of Einstein’s claim (of making things “too simple”) that makes us digress about what could have been a very simple idea. We believe that by showing the breadth of our vocabulary, we can show our true intelligence. But, to use another quote attributed to Einstein, “If you can’t explain it simply, you don’t understand it well enough.” And that’s completely true. Only in the realms of academia, in which nuance and exceptions should be praised, is the use of “complex” terminology or digression required – and therefore legitimized. But even then one should try to keep the number of words used to an absolute minimum.

That’s why I’ve decided to end this article at this point. I could have written another 200 words, but I don’t think the added value to my message would outweigh the extra words you’d have to read.

But what do you think?

The Butterfly Effect: How Small Decisions Can Change Your Life

The butterfly effect: a term often used in the context of ‘unpredictable systems’ like the weather and other ‘natural’ systems. For those who don’t know it, the butterfly effect refers to a system being ‘(very) sensitive to changes in its initial conditions’. As the name implies, think of a butterfly flapping its wings and, through this small flapping, causing a hurricane to occur at a later point in time and possibly in an entirely different region in space. The butterfly in this example is a symbol for how small changes at an earlier stage can cause huge changes at a later point in time.
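This ‘sensitivity to initial conditions’ can even be made concrete in a few lines of code. The sketch below uses the logistic map, a standard textbook example of a chaotic system (my own illustration, not part of the original idea): two starting points that differ by only 0.00001 end up on completely different trajectories.

```python
# Toy illustration of sensitive dependence on initial conditions,
# using the logistic map x -> r * x * (1 - x) with r = 4 (chaotic regime).

def logistic_map(x0, steps, r=4.0):
    """Iterate x -> r * x * (1 - x) and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_map(0.20000, 50)
b = logistic_map(0.20001, 50)  # initial difference of only 0.00001

print(abs(a[1] - b[1]))                            # still a tiny gap after one step
print(max(abs(x - y) for x, y in zip(a, b)))       # the gap later grows by orders of magnitude
```

The tiny initial difference roughly doubles with each iteration, so after a few dozen steps the two trajectories have nothing to do with each other anymore – the numerical analogue of the butterfly’s wing flap.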

But can’t this concept be applied to life as well? Think about it: have you ever experienced a small phenomenon – like receiving an email, stumbling upon something on the internet, or meeting a person who happens to change the way you think – that, looking back, has influenced your life significantly? Take the example of talking to a person who made you change your mind. I can only speak for myself, but I have definitely had a couple of such experiences in my life. Let me give you an example from my own life that illustrates the effect utter randomness can have on the course of your life:

I didn’t know what kind of Master’s program to attend after finishing my Bachelor’s. While thinking about studying economics in Rotterdam (the Netherlands), I came into contact with a professor of philosophy of science who – at the time – was supervising my bachelor’s thesis. I had always thought about attending a Master’s in philosophy somewhere, but I had difficulties with the ‘vague touch’ philosophy masters tend to have; none of them seemed analytic or logical enough to me.

The professor and I – during one of our supervision sessions – accidentally stumbled upon the question of what I wanted to do after my Bachelor’s in philosophy, so I told him about my plan to go to Rotterdam. When he asked me why I wanted to study economics there, I didn’t really know what to say. I said, ‘Well, I always dreamed about studying abroad at a nice university – Oxford, Cambridge or something along those lines. But there don’t really seem to be Masters over there that suit my interests. That is: thinking about the world in a “non-vague” manner.’ He responded, ‘Have you tried the LSE (London School of Economics and Political Science)? They have a Master’s in Philosophy and Economics and a Master’s in Philosophy of Science. Isn’t that something for you?’ ‘Also,’ he added, ‘a good friend of mine – someone I hang out with on a regular basis – is a member of the selection committee of that Master’s in Philosophy of Science. It might be interesting for you.’ I took a look at this Master’s and was sold right away. I applied, got accepted and studied for a year in London.

What if I hadn’t talked to this professor about my ambitions? What if I had had a different thesis supervisor? What if I had had a headache that day and hadn’t felt like talking? Then my future would very likely have looked very different.

So what can we – or what did I – learn from this story? I learned that I shouldn’t hesitate to take opportunities, no matter how small they might seem, because those small opportunities might cause a stream of new possibilities to arise later on. And the same goes for the opposite: I should avoid bad actions, no matter how small. I remember that – a couple of years ago – I said something mean to my football trainer, and I have regretted it ever since. In other words: small actions can have significant consequences.

But what do you think?

To Kill or Not to Kill, That’s the Question

Imagine the following situation: you are taking your morning walk past the primary school in your neighborhood. You walk past the playground, where children are playing until the bell rings and school starts. And then suddenly, out of the blue, a man enters the playground: he is carrying a machine gun. A loaded machine gun, to be exact. He aims his gun at one of the children and yells: ‘The children of today will be the corpses of tomorrow. This is God’s revenge for the tormenting betrayal of the West.’ And while he is pointing his gun at a little girl, you notice that he has dropped his handgun. You pick up the handgun and see that it is loaded. Out of the corner of your eye, you see a child wetting its pants, while sounds of crying and terror fill your ears. You aim the gun at the man and think to yourself: shall I kill him? Or not?

Because what should you do? There are two competing philosophical positions that might assist you in making this decision. But before knowing which position to choose, you should answer the following question: should you strive to maximize the overall level of ‘happiness’, irrespective of the act you have to undertake (killing someone, in this case), or should you stick to absolute moral values, regardless of what the immediate consequences of doing so might be? This is the choice between utilitarianism and absolute ethics.

Utilitarianism claims that an action is good if it increases the overall level of good in the world: the good caused by the action minus the suffering it causes. You can see what, according to this view, you should do in the example: kill that guy. After all, the suffering he will cause the children is (presumably) much greater than the suffering he will incur by being killed. It’s a tradeoff: one human life versus many more. Nothing more, and nothing less.
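Taken literally, the utilitarian ‘calculation’ really is just arithmetic. The sketch below is purely illustrative – the numbers, names and helper function are my own hypothetical, not anything the utilitarian picture itself prescribes:

```python
# Hypothetical sketch of the utilitarian calculus: an action's moral
# value is the good it causes minus the suffering it causes.

def net_utility(good, suffering):
    """Utilitarian value of an action: total good minus total suffering."""
    return good - suffering

# Made-up numbers for the playground case: say 20 lives are at stake.
shoot = net_utility(good=20, suffering=1)      # 20 children saved, 1 life taken
do_nothing = net_utility(good=0, suffering=20) # 20 children (presumably) killed

best = max([("shoot", shoot), ("do nothing", do_nothing)], key=lambda t: t[1])
print(best[0])  # prints "shoot" – under these made-up numbers, shooting wins
```

The whole moral question, on this view, collapses into comparing two numbers – which is exactly what the next paragraph calls into doubt.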

However, is this how we usually perform moral actions? By just checking whether our actions will maximize the overall level of good? That’s not what we usually associate with acting morally, right? You help your friend because you feel like helping him, not because it increases the overall level of utility, right? Or are we indeed nothing more than walking and talking calculators, adding and subtracting gains and losses in a split second? And if so, how can we be sure about the numbers we include in our calculation? Imagine that the guy in our example isn’t intending to kill any child. We might assume that he is going to kill children, but are we sure about that? He might just have been drunk and confused, not planning to do any physical harm. In that case we wouldn’t increase the overall level of utility by killing him, right? My point is: you don’t know what the consequences of someone’s actions will be until you have witnessed them. So how are you going to take this into account?

The competing view is derived from Kant’s moral philosophy, in which the notion of the Categorical Imperative plays a crucial role. According to the Categorical Imperative, you should “act only according to that maxim whereby you can at the same time will that it should become a universal law”. This law has nothing to do with increasing the overall level of good in the world: you should ask yourself what the world would look like if everyone performed the action you were considering (like killing someone), and you would have to check whether this is a world you could live in and whether this is a world you would want to live in. If your action doesn’t meet these requirements, it’s an immoral action and you shouldn’t perform it.

So, what if we applied the Categorical Imperative to our case of the (potential) child murderer? What if every one of us killed anyone we expected to kill people? Would that be a world we could and would want to live in? Well, it might not be a world we want to live in. After all, as we’ve just seen, we don’t know for sure whether the man will indeed kill the children, and it would be a little harsh to kill someone because of our inadequate projections, wouldn’t it? But, more importantly, acting according to the aforementioned maxim (“kill anyone you expect is going to kill people”) doesn’t seem to yield a world we could live in. After all, if you are planning on killing someone – the man with the gun in this example – you should be killed as well, right? But who’s going to do that? And shouldn’t that person be killed too? An infinite regress results. So you see: it is impossible to turn this maxim into a universal law – a law that every one of us should (always) stick to.

Ethics is not so easy. So, what do you think?