Interest cannot be created. It can only be discovered

You have to discover what you find interesting

‘Are you interested in the stock market?’ I asked a colleague of mine, who works as an economics editor at a newspaper and hence has to write about stocks, markets and the like. ‘I have to be,’ he said. ‘It is part of my job.’ ‘You cannot have to be interested in something,’ I replied. ‘You either are interested or you are not. Period. You can get used to something, but you cannot become interested in it.’ He smiled at me and walked away; I think he agreed.

Intrinsic
There is a huge difference between interests and skills: while you can develop the latter, you cannot develop the former. Interests are an intrinsic part of your nature; they define, to a large extent, who you are. If you are, for whatever reason, interested in history, you will tend to become ‘better at’ history, and maybe even choose a history-related job. But you are good at it because you find it interesting; it does not work the other way around. Being good at something does not make you interested in it.

And that brings us to the difficulty of interests. If someone tells you: ‘Just do whatever you find interesting. Find a job you like, and then just do it,’ it seems like reasonable advice. And if you know what you’re interested in, it might even be helpful advice. But the problem starts if you don’t know what you are interested in. Because, interests being an intrinsic part of your identity, you cannot create an interest in something. You can become better, or worse, at doing something; you can even get used to it. But you cannot become interested in it.

Do something
But what then should you do if you don’t know what you are interested in? If it all starts with knowing what you find interesting, and then just doing that, then it seems like you are at a dead end if you don’t know what you find interesting.

Well, if you don’t know your interests, and given that you cannot create interest in something, you have to choose a different approach: you have to find your interests. And the only way to find them is by engaging in all sorts of activities, so that by doing these activities you can find out what you do (and what you don’t) find interesting. You cannot sit on a chair, thinking deeply (‘soul-searching’) about what you like to do. That only works if you already know what you find interesting; not if you still have to discover it.

Hence, to those of you who are sitting at home, not knowing what to do with your lives, not knowing what kind of job to pursue, I would say the following: get out there, and find what you are interested in. For interest cannot be created. It can only be discovered.

But what do you think?

Read The Life of a Twenty-Something to see why so many people in their twenties don’t know what to do with their lives

The Difference between What You Get and What You Earn

In economic theory, it is claimed that if a market functioned perfectly, people would get for their products and services whatever value they contribute. And the same goes the other way: people would pay whatever they find a product or service to be worth. But when you take a look at real-world markets, and all the actors in them, this principle doesn’t seem to hold. Not at all.

I want to show this by giving one example: that of the banker and the hacker.

A banker invents all kinds of ingenious derivative constructs, futures and other financial products in order to make money. The more complex the better. For if a product is complex, the layman doesn’t understand it. And if the layman doesn’t understand it, it is easy to lure him into what might seem to be an attractive deal, but which in fact is nothing but a ticking time bomb.

It is generally acknowledged that bankers, and especially the bankers referred to above, are at least partially responsible for the credit crisis we have experienced. It is safe to say that a lot of wealth has been lost during the crisis; people lost their homes, their jobs, and governments had to step in to save the day. In other words: these bankers have, at least over the last couple of years, made a negative contribution to the overall utility of society.

Why then do they get paid so much? Why do they receive such high rewards for acting in a manner that ultimately decreases society’s utility? Although I am not interested in explaining this phenomenon in this post, one explanation could be that the bankers seem to contribute a lot of happiness, because they (can) create a lot of money, and – in our capitalist society – money equals happiness.

Luckily, there are also people who do the exact opposite: they don’t get paid anything while making lots of people happy. They are the modern-day equivalent of Robin Hood. An example would be the people contributing content to Wikipedia, but also the people behind Popcorn Time: a digital platform on which you can stream pretty much any movie, all for free. These people make a great many people happy – film distributors excepted, of course – but don’t get paid anything. Even though, in contrast to the banker, their net contribution to society’s utility is positive.

Although we don’t pay the Wikipedia guys and Popcorn Time geeks in terms of money, we can pay them in a currency that is even more valuable: gratitude and respect. Something the bankers cannot count on. Because after all: there is a difference between what you get and what you earn.

But what do you think?

‘Moral Logic’: a Guide for Political Decision Making?

Modal logic is – as I understand it – all about what might possibly be the case (alethic logic), or about what we know (epistemic logic), etc. But not about what we should or should not do. That is, ethics seems not to be grounded in modal logic – or any logic for that matter. And that’s a pity, for I believe that logic can play a valuable role in moral decision making. Especially in politics. Let me illustrate this via an example:

Let’s say that a politician proposes a policy A (‘Taxes are increased’). Let’s say that it is common knowledge that A leads to B (‘A –> B’), with B being ‘The disposable income of the poor is decreased’. Now, let’s say the politician doesn’t want B (we write ‘–B’). Then you could reasonably say that, by letting ‘–’ follow the rules of negation, and by applying modus tollens, we get –A. That is: the politician does not want A.

This last step requires clarification. Suppose that we know that by increasing taxes (A), the disposable income of the poor will be decreased (B), and knowing that the politician doesn’t want the income of the poor to be decreased, the politician should not increase taxes. Then, assuming that no-one wants to do something he should not do (we are dealing with very rational agents here), it follows that the politician does not want A (‘–A’).

This ‘logic’ is consequentialist in nature. That is, you decide whether to perform a certain action (A) by looking at its consequences (B). In case you want B, you are good. In case you don’t want B (–B), then – by modus tollens – it follows that you should not do A. Hence you don’t want A, giving –A. This logic is of course very strict; it follows absolute rules, axioms or principles. Hence it might be suited to model a moral system that is equally strict. Think about Kantian ethics. On the other hand, a system like utilitarian ethics might be better modelled by a different mathematical model.

Workings
Let’s dive a little deeper into the workings of this ‘moral logic’. One way this logic might work is as follows.

(1) You start with a set of axioms; propositions you absolutely want, or absolutely don’t want:

A
–B
C

(2) Next you look at the actions available, and the consequences those actions entail:

D –> A
D –> B
E –> C

(3) Then you choose an action (in this case either D or E), which does not have any consequences you absolutely don’t want. In this case you should not choose D, for D –> B and –B, hence –D. That is, according to the rules laid down, we don’t want D; hence the only option that remains is E.
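The three steps above can be sketched in a few lines of Python. This is only an illustrative sketch of the rule described here; the names (`permissible_actions`, `unwanted`, `consequences`) are mine, not part of any established formalism.

```python
# A minimal sketch of the 'moral logic' action filter from steps (1)-(3).

def permissible_actions(unwanted, consequences):
    """Return the actions none of whose consequences are absolutely unwanted.

    unwanted:     set of propositions we absolutely do not want (the '-X' axioms)
    consequences: dict mapping each action to the set of propositions it entails
    """
    return {
        action for action, effects in consequences.items()
        if not (effects & unwanted)  # rule out D whenever D -> B and -B
    }

# The example above: axioms A, -B, C; actions D and E.
unwanted = {"B"}
consequences = {
    "D": {"A", "B"},  # D -> A, D -> B
    "E": {"C"},       # E -> C
}

print(permissible_actions(unwanted, consequences))  # {'E'}
```

As in the text, D is ruled out because one of its consequences (B) is absolutely unwanted, leaving E as the only option.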

Extended
Of course, this ‘logic’ does not obey all the regular rules of logic; for instance, it does not obey the rule of modal logic that the two modal operators can be expressed in terms of each other – we don’t even have two modal operators. But still, applying the very simple rules laid down above can be helpful. I find this logic particularly valuable in analysing arguments used in political decision making, for politics is a prime example of the interplay between actions (the antecedent of our material conditional) and normative consequences (the consequent).

The above logic can be extended to better take into account preferences. You could make a hierarchy of consequences, with consequences higher at the hierarchy being morally superior to those below, so that – in case you have more than one action to choose from – you should choose the one having the consequences highest in the hierarchy.
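This extension can be sketched too, if we let a numeric ranking stand in for the moral hierarchy. The numbers and names below are illustrative assumptions of mine, not part of any established system:

```python
# A sketch of the preference extension: rank consequences, then pick the
# permissible action whose best consequence sits highest in the hierarchy.

def best_action(unwanted, consequences, rank):
    """rank maps each consequence to a priority (higher = morally superior)."""
    candidates = [
        (action, max(rank.get(c, 0) for c in effects))
        for action, effects in consequences.items()
        if not (effects & unwanted)  # keep only permissible actions
    ]
    # Among the permissible actions, take the one with the top-ranked consequence.
    return max(candidates, key=lambda pair: pair[1])[0] if candidates else None

rank = {"A": 3, "C": 2}  # hypothetical hierarchy: A ranks above C
unwanted = {"B"}
consequences = {"D": {"A", "B"}, "E": {"C"}, "F": {"A"}}

print(best_action(unwanted, consequences, rank))  # 'F'
```

Here D is again excluded outright (it entails the unwanted B), and of the remaining actions F wins because its consequence A sits higher in the hierarchy than E’s consequence C.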

What do you guys think of the ‘moral logic’?

Sex ever more present in Pop Music: problematic or not?

The prevalence of ‘sex’ in pop music

Look at the video clip of Miley Cyrus’s song Wrecking Ball. Now tell me: what do you think? Probably something along the lines of: why is she naked pretty much all the time?

But Cyrus’s ‘shocking’ clip has since been surpassed: it seems we have hit a new peak in the prevalence of sex in pop music. This peak is called Anaconda, and its singer is Nicki Minaj.

The facts
It is not only old people who say that today’s music is all about sex. There are data to back up this claim. Psychology professor Dawn R. Hobbs shows in Evolutionary Psychology (a scientific journal) that approximately 92 percent (!) of the songs that made it into the Billboard Top 10 in 2009 contained ‘reproductive’ messages, with ‘reproductive’ obviously being synonymous with ‘sex’. But while this research shows that today’s popular music is very much about sex, it doesn’t compare this to how it was in ‘the good old days’.

Different research, by the LA Times, does make that comparison, and shows that pop songs today are more about sex than ever before. Ever since the beginning of the 1990s, the word ‘sex’ has been much more prevalent in singles appearing in the Billboard Year-End Hot 100 (see picture).

Hence we can safely say that today’s pop music is very much – or at least much more than two decades ago – about sex.

Problematic?
This is not to say that this trend towards ‘sex-songs’ is problematic. Maybe it is caused by noble motives, such as liberalizing talk about sex.

But it seems that the occurrence of ‘sex’ in today’s pop songs has one reason and one reason only: songs about sex are more popular than songs about other topics. This is shown by the facts presented above. And since the music business is – like any other business – commercial, it intends to sell as much of its goods as possible. Hence it makes sense, from the perspective of money-making businesses, to produce songs that are about sex. Hence it is not nobility, but profit-seeking motives that are responsible for this trend.


Nicki Minaj with Anaconda

This trend might pose a problem if you – a consumer of pop-music – are looking for artists that provide you with an original perspective on society; views on, let’s say, the exploitation of the working class. Views that make you think. Hence the sex-trend might deprive today’s youth of the supply of original views required for them to develop themselves.

But what do you think?

Top Universities, Reputation and Employers

The University of Cambridge: one of the top universities

It is a fact that some universities are more popular among employers than others. See this link for a ranking of the top 10 universities in the world — according to employers in 2013/2014. There are hardly any surprises in this top 10. As always, the University of Oxford, Cambridge and Harvard are included.

The question I ask in this post is: based on what criteria does an employer prefer one university to another? And how reasonable is it for a company to base its preference on these criteria?

Admission standards
It seems fair to say that universities like Oxford and Cambridge have higher admission standards than pretty much any other university in the world. Therefore, being admitted to such a university is by itself an indication that you are ‘better’ (in terms of pre-university academic results etc.) than non-admitted applicants.

Hence one could say that it makes sense for employers, knowing about these strict admission procedures, to be more inclined to pick someone from such a university than from any other university. After all, the ‘top’ universities already have done part of the selecting for them.

Harvard students not necessarily better
But the above reasoning is not valid. For even though it might be true that the Oxfords and Cambridges of this world pick the students who were the best before they entered university, it doesn’t follow that these students are still the best after they have been through university.

It might very well be that someone who didn’t do his utmost best in his undergraduate studies (and therefore was not admitted to a top university) decides to step up his effort when attending a Master’s programme. After all, he knows that there are people from Oxford and Cambridge around, so he has to raise his game in order to get a decent job.

The opposite might be true for a person studying at a top university. He might feel that, now that he has been accepted into this prestigious institution, his chances of finding a good job have increased significantly; so much so that ‘just passing’ his Master’s might be sufficient for him to still obtain a job that suits his criteria.

In other words: getting a degree from a top university doesn’t necessarily make you more educated than someone who has got his degree from a ‘not-top’ university.

Social factors
When we look a little further, we see that social factors play a role too in the hiring process of a company. After all, a company – let’s call it ‘Company A’ – wants the best employees. Therefore it might look at the ‘best’ firms in its industry to see where they get their employees from. Seeing that they get their employees from the top universities, the company believes that it should do so too; after all, these companies are the best in the industry, hence they should have the best employees, right? And given that these employees come from the top universities, these universities must provide the best employees. Hence Company A hires someone from a top university.

Now assume another company enters the industry. This company will be even more inclined to hire someone from a top university because of the increase in the university’s reputation due to Company A employing its students. This points to the fact that companies do not look solely at the capabilities of their potential employees; the reputation of the university the candidates have studied at is of importance as well.

Top universities still good
The above is not to say that employing students is all based on the unjustified supposition that top universities provide the best employees. After all, it seems reasonable to suppose that those entering top universities are motivated, disciplined and will enhance their capabilities while attending the top university. Hence it is likely that they will still be ‘best’ after having gone through their top-university education.

Given that being a good student implies being a good employee, it follows that these students will be good employees. But it should be kept in mind that social factors such as the reputation of a university are self-perpetuating, and hence no watertight indicator of the quality of students.

Tobacco Taxation and Autonomy: How do They Add Up?

According to a survey held by the British “Action on Smoking and Health” society (the “ASH”), 20% of British adults smoke. Is this a good thing? I don’t know. I believe that the act of smoking isn’t intrinsically good or bad; it is something each person should decide for him- or herself. However, what I do believe is intrinsically valuable is human autonomy. By autonomy I mean the right each person has to decide for him- or herself how to live his or her life, without unjustified intervention from third parties. And it is this latter point I want to draw attention to.

According to the ASH, in 2012, 77% of the price of a pack of cigarettes consisted of tax. Multiply that by the packs of cigarettes sold, and you get an amount of £10.5 billion raised through tobacco taxation. This is six times as much as the spending by the NHS (the National Health Service) on tobacco related diseases; these were “merely” £1.7 billion. So the question that comes to mind is: what justifies the £8.8 billion that remains after subtracting the NHS costs from the money raised through tobacco taxation?

The ASH claims that the inequality between the two numbers is no issue, since “tobacco tax is not and never has been a down payment on the cost of dealing with ill health caused by smoking”. But what then is its purpose? They claim that the high level of tax on tobacco in Britain serves two purposes: (A) to reduce smoking through the price incentive, and (B) to raise revenue from a source that has the least impact on the economy. The latter point has been scrutinized extensively by economists, and I don’t think I can add anything of value to that discussion. So let’s focus on the first point: the government’s goal of reducing smoking (through the price incentive).

In making this claim, the ASH implicitly assumes that it is within the government’s set of duties to reduce smoking among its citizens. But is it? One can justify tobacco taxation on the grounds of the health care costs incurred by the non-smoking part of society. However, as we have seen, this amount by no means adds up to the taxes levied on tobacco. So let me rephrase the question: is it within the government’s rights to steer its citizens away from an activity that it (the government) doesn’t value? I believe this question directs us towards the more fundamental question of where the boundary lies between justifiable government intervention and morally objectionable behavior. One could say, as I believe, that it is one thing (and justifiable) to prevent non-smokers from being financially hurt by the actions of smokers, but a completely different thing (and not justifiable) to promote “non-smoking values” among its citizens, merely for the sake of – what appear to be – paternalistic motives.

As with any government intervention in society, the benefits of the intervention should be weighed against the costs. Granted that there might be an intrinsic value in having a non-smoking society – a point the ASH doesn’t provide any argument for – the costs of violating an intrinsically valuable human right (autonomy, that is) should be included in the calculation as well. And until that has been done, the question of whether the £10.5 billion in tobacco taxation is justified remains open for debate.

Exams In the Summer Term: The Optimal Option?

Most universities in the United Kingdom apply what is called the “trimester-structure”: the division of the academic year into a Michaelmas, Lent and Summer Term. In general, although this differs per program and per university, it is the case that by far most of the examinations are due in the Summer Term. The question is: is this the optimal educational structure? There are, I think, at least two main problems with the structure as it is currently being applied: one regarding its didactic implications, and one regarding its (in)efficiency.

Let’s start with the didactics. As numerous scientific studies have shown, feedback – and especially immediate feedback – is of great importance in the learning of new material. This is because, when mastering new material, it is important to be made aware at an early stage of errors that – if not resolved – might turn into significant problems. And although immediate feedback is part of most lectures and seminars, there’s one crucial area in which this aspect seems to be ignored: examinations. As mentioned before, an intrinsic part of the trimester-structure is that by far most of the examination takes place in the last term (i.e., the Summer Term). This implies that material studied in the first term (Michaelmas Term) gets tested in the third term (Summer Term). It seems reasonable to assume that, in this case, the feedback period between absorbing the material and the material being tested is very long (a couple of months), and therefore lacks the impact it could have upon correcting students’ knowledge.

Besides a didactic argument, one could employ what might be called an “economical” view on studying. Scientific research – from Psychology Today – shows that students have the tendency to study more when the exams get nearer. One could say that the “marginal knowledge-output of learning” is higher when the examination period gets nearer. For now it is irrelevant whether this is due to procrastination on the side of the students, or due to an intrinsic part of human motivation. The fact of the matter is that, when applied to the trimester-structure, this tendency implies that most of students’ studying will take place in the (short) period before the Summer Term. But isn’t this an inefficient usage of both the Michaelmas and Lent Term?

There seems to be an easy way in which the current system could be improved upon (in the light of the aforementioned two arguments). One way would be by moving away from 100% examinations in the Summer Term to – let’s say – 33% exams per term. Another option might be to keep the 100% examination structure in place, but simply create more courses that take up one term only, and test these after the respective term. Besides being optimal from an economic perspective, since students will be studying “at full capacity” all the time, these options would drastically shorten the feedback-period between the absorption of new material and the testing of it, therefore being beneficial from a didactic point of view as well.

In conclusion, it might be worthwhile to take a look at these and similar options to improve upon the educational structure currently applied by many universities in the United Kingdom.

A Claim in Favour of Legalizing Euthanasia

It recently came to my attention that euthanasia, the act of deliberately ending a person’s life to relieve suffering, is illegal in the United Kingdom (UK). Being a Dutchman, and the Netherlands being a country in which euthanasia is legal, I was surprised to read this. But I was even more surprised to read that euthanasia is, depending on the circumstances, judged as either manslaughter or murder, and punishable by up to life imprisonment. Just to give you an idea: assisted suicide is illegal too, but punishable by ‘only’ up to 14 years.

Arguments
Being fully aware of the fact that euthanasia is a controversial topic, I want to make a claim in favor of legalizing euthanasia (in the UK, but this goes for any democracy). The first argument for this claim might sound dramatic, but I believe it hits the core of the issue. It is the following: no single individual has decided to come into this world. Our parents ‘decided’ to have a child, and there we were. From this it follows that none of us chose to live a life of perpetual (and incurable) pain, which is the life many terminally ill people live. So, having been put into this world without his consent, and not having chosen the extreme pain he – being a terminally ill person – experiences, it would only be fair for him to be able to ‘opt out’ of life whenever he wants to; in a humane manner, that is, thus excluding suicide.

Note that I am talking about the option of euthanasia for terminally ill people only. And this brings me to my second argument, which has to do with the position of doctors. Let’s ask ourselves the question: what is the duty of doctors? Is it to cure people? If so, then terminally ill people shouldn’t be treated by a doctor in the first place, since – by definition – terminally ill people cannot be cured of whatever it is that they are suffering from. Hence, given that terminally ill people are in fact being treated by doctors, there must be another reason the medical community has for treating them; I presume something in the form of ‘easing their pain’.

Now, given that we have a doctor and that he wants to ‘ease the pain’ of the terminally ill, I assume that he wants to do so in the best manner possible; that is, by using the method that eases the pain most, keeping in mind any future consequences the treatment might have. But what if a patient has crossed a certain ‘pain threshold’, and the doctor knows with great certainty that the patient cannot be cured of his disease? In this case it seems that not performing euthanasia would be equivalent to prolonging the patient’s suffering, without improving the chance of recovery (and recovery is, by definition, absent for terminally ill people). It is in those cases, and those cases only, that euthanasia seems to be the optimal method for ‘easing the pain’, and should therefore be applied by doctors (note that I am, of course, only arguing for euthanasia for those who want to receive it).

NHS
It is not that the National Health Service (the ‘NHS’) hasn’t thought about these arguments. On the contrary; they have an entire webpage devoted to ‘Arguments for and against euthanasia and assisted suicide’. Although I agree with none of the arguments the NHS gives against euthanasia, there is one that I find particularly wrong, and which they call the ‘alternative argument’. The alternative argument states that ‘there is no reason for a person to suffer because effective end of life treatments are available’; hence euthanasia should be no option. One of the ‘alternatives’ the NHS puts forward is that ‘all adults have the right to refuse medical treatment, as long as they have sufficient capacity to make a decision’ (which in practice has the same effect as euthanasia: the patient will die).

But refusing medical treatment is clearly in no way a valid alternative to euthanasia, for the aims of refusing medical treatment and the aims of euthanasia are profoundly different. While refusing medical treatment is about – clearly – the refusal of medical treatment, euthanasia is about wanting (a form of) medical treatment. Therefore, the fact that there might be another way in which the aim of the former can be accomplished is irrelevant and ineffective from the perspective of pursuing the aim of the latter. Also, the cases to which a refusal of medical treatment might apply are likely to be very different from the ones to which euthanasia is applied.

Imagine, for example, a car accident, in which one of the victims is severely injured, and needs acute medical treatment in order not to die. This is an accident, in which no terminally ill people are involved. Refusing medical treatment seems a reasonable option; euthanasia not. Now imagine the life of a cancer patient, who is terminally ill, and who realizes that his suffering will only become worse. Euthanasia seems a reasonable option; refusing medical treatment not.

To end this post on a personal note: I hope that, in this 21st century we are living in, where everyone gets older and older, and where prolonging life seems to be the preferred option a priori, irrespective of the specific circumstances, we can engage in a healthy discussion about a topic as relevant as euthanasia. Of course, many of us are still young and hope not to experience severe illness soon, but looking at the people we love and seeing them suffer unbearably seems to me sufficient reason for not condemning euthanasia straight away.

But what do you think?

Celebrities and Privacy: An Unlucky Combination

I was surfing the internet and stumbled upon a picture of Jennifer Lawrence (the famous actress) having lunch with her boyfriend at a London restaurant. Obviously the photo was shot by a paparazzo. I thought to myself: ‘Why is someone allowed to take a picture of this event?’ The obvious answer is: because it is legal to do so. But then the next question would be: should it be legal to do so? In other words: do we have the moral right to do so?

One could say that, since celebrities are – by definition – famous, we (‘society’) have the right to know what they are doing. But the latter claim clearly doesn’t follow from its premise. For suppose that we would have that right. Then we would be allowed to stalk celebrities each and every minute of the day to see what they are doing; even when they are at home, watching TV or taking a shower. This is clearly absurd. Hence we do not have that right.

A stronger argument for the claim that we have the right to film and take pictures of celebrities would be the following. These people (the celebrities) chose to do a job that was likely to make them well-known. Hence they should accept all the consequences of this decision (including being photographed by the tabloids etc.). But although it might be true that celebrities should accept these consequences on pain of living an unhappy life, it doesn’t follow that all of these consequences are morally justified. We might want to ask ourselves the question: should we as a society want to force anyone – celebrity or not – to accept a consequence that might be immoral? If not, then we might want to reconsider our privacy laws.

Of course, it isn’t unjustified to take pictures of or film every celebrity in every circumstance. A prime minister, for example, should be allowed to be photographed when he is attending an international congress. But this is not because we have the right to know what he is doing in his private life. For even if we had that right, it wouldn’t apply to this case, since the congress is clearly not a private matter. Society has a right to know whether her representatives are doing a good job, and it is solely because of this right that we are allowed to take pictures of the prime minister at the congress. But since not all celebrities are our (legal) representatives, we do not have the (moral) right to photograph all celebrities.

I hope that this post has made you consider the moral dilemma that is at stake. Although we might consider it ‘fun’ to see what Jennifer Lawrence (or any celebrity for that matter) is doing in her spare time, our enjoyment by itself might not be sufficient reason for the paparazzi to be allowed to take photos of her in her private life. The fact that something is legal doesn’t mean that it should be legal.

But what do you think?

Feelings of Shame: Biologically or Socially determined?

We’ve all had it. That feeling of being deeply disappointed in yourself. That feeling of knowing that you’ve done something wrong, even though you might not know exactly what. I’m talking of course about the feeling of shame. But what is shame? Is it nothing but a chemical response our bodies tend to have towards “embarrassing” situations? And if so, how do our bodies decide between embarrassing and non-embarrassing situations? And what role does our social context play in determining our feelings of shame?

Like any feeling, shame has developed to increase our procreation chances. If we didn’t feel any shame, we might never have become the social creatures that we are. Imagine that you are a caveman hunting with your fellow cavemen. While you’re sitting in the bush, you decide to attack a very angry-looking bear, even though the leader of the group explicitly told you not to do so. If you didn’t feel bad – feel “ashamed” – about this situation afterwards, there would be nothing to prevent you from repeating this “stupid” behavior. In other words: there would be nothing withholding you from endangering yourself and your group members again. Sooner or later you would end up banned from the tribe, or dead.

This example might be an oversimplification of the actual workings of our “shame mechanism”, but it should do the job of explaining how our tendency to feel shame came about. Millions and millions of years of evolution have weeded out those who felt no shame, leaving us with a population in which (almost) everyone has the ability to feel shame.

However, while our ability to feel shame is biologically determined, the content of our feelings of shame – that is, what we feel ashamed about – is for the most part socially determined. The reason for that is simple: if the content of our feelings of shame weren’t socially determined, it would always lack “environmental relevance”. What do I mean by that? Well – to return to the example of the cavemen – if we were biologically “tuned” to experience shame whenever we let our fellow hunters down while chasing an angry-looking bear, we would need a great many similar shame mechanisms to prevent us from doing anything shameful or harmful in life. And because our society is ever-changing – at least at a faster pace than our biological makeup – we would always remain tuned to a historical environment; an environment no longer relevant in sifting the fit from the weak in today’s world. That’s why the ability to feel shame is biologically determined, but the instances that trigger our feelings of shame come about (mainly) through our social context.

There are, however, some aspects of life more important in determining one’s chances of procreation than others, the most prominent of course being our sexual capabilities. This could explain why sex seems to take such a prominent position in the whole realm of areas we could be ashamed about; sex-related events simply tend to have a more profound physical effect on us than non-sex-related events. This might be why people have the tendency to feel ashamed about their weight, looks, sexual experience, sexual orientation and so on: all of these have – or have had in the past – a significant effect on one’s chances of procreation.

These are my thoughts on the issue; what are yours?

Mr. Nobody: A True Philosophical Journey

I have just seen the movie “Mr. Nobody”, and I recommend it to everyone interested in philosophy. It’s by far the most philosophical and mind-boggling movie I have ever seen. The movie shows, among other things, the lack of control we have over the course of our lives. At each and every moment in life we “make decisions” that send us one way or another, and this string of decisions is in fact what we call our lives. The movie also presents a rather deterministic view of life. The butterfly effect, as explicated in the movie, is the prime example of this: even the smallest change in the course of history can make our lives turn out completely differently from how they would have turned out otherwise.

Each movie can be interpreted in multiple ways, and that surely goes for Mr. Nobody. However, I believe that from a philosophical point of view at least one issue is very prominent, and that is the struggle between free will on the one hand and determinism on the other.

What follows might be hard to grasp for those who have not seen the movie yet; I therefore assume that, by this point, you have seen it. At first sight, Mr. Nobody is all about choices. That is: what will happen in Nemo’s life given that he has made a certain choice (e.g., to jump on the train or not)? The fact that there are at least two different worlds Nemo could live in (i.e., the one with his mother and the one with his father) seems to imply that Nemo had (in retrospect) the possibility of choosing either option. And it is this element of what seems to be some form of autonomy (i.e., the “free will” element) that returns frequently in the movie. Another instance of it can be found in his meeting with Elise on her doorstep. In one “life” Nemo expresses his feelings for Elise, after which they get married and have children. In another life, Nemo does not express his feelings, and his potential future with Elise never occurs.

However, the true question I asked myself after watching this movie was: does Nemo in fact have the possibility to choose? Or are his “choices” predetermined by whatever occurs in his environment? An instance of the latter can be found in Nemo losing Anna’s number because the paper he wrote it on gets wet (and therefore unreadable). In other words, these circumstances seem to force (or at least push) Nemo in the direction of a life without Anna; a circumstance that results from an unemployed Brazilian boiling an egg – another occurrence of the butterfly effect. So although it might appear that Nemo has the opportunity to make choices, it might in fact be that “the world” (the environment he is living in) has already made these choices for him.

The struggle between the “apparent” existence of free will and the “true” deterministic nature of the world is just one among many philosophical issues raised by this movie. Another is the “arrow of time”: the fact that we cannot alter the past but can influence the future. It is this aspect of time (the fact that it moves in one direction only) that makes the free will versus determinism issue so difficult (if not impossible) to resolve. After all, if we could simply go back in time and see whether we would behave in the same manner under altered circumstances, we might get a much better feel for the nature of free will. If we happened to act more or less the same, irrespective of the circumstances we were put in, we would appear to have something resembling free will. If not, determinism might be the more realistic option.

Nonetheless, this is a very interesting movie that those interested in philosophy will surely enjoy. And to those who have seen it I ask: what did you think of it?

Public Education: an Insult to our Intelligence

More than 30 years ago – in 1979 – Milton Friedman and his wife Rose Friedman published the book Free to Choose, in which they make a compelling claim in favor of returning authority to the free market by taking it away from the government. The arguments they offer in defense of this claim are firmly grounded in empirical evidence, pointing at the inefficient and unequal spending of taxpayers’ money on the “big” issues of society (healthcare, Social Security, public assistance, etc.). I want to zoom in on the expenditures on public education, and in particular on the immoral and degrading effect this spending can have on citizens.

We human beings are intelligent creatures. Some are – without a doubt – better equipped (mentally) for dealing with the whims of the free market than others, but still almost all of us are reasonably capable of fulfilling our needs in life. We can go to the supermarket by ourselves and decide for ourselves what we want to eat for breakfast and dinner; the government doesn’t have to do this for us. We can decide for ourselves how we want to spend our leisure time, whether we want to go to the movies or not; we don’t need the government to decide this for us. Not only because the government cannot know what each one of us wants – and is therefore inevitably inefficient in the spending of its (read: our) resources – but also because we know that we are intelligent human beings, very much capable of making our own decisions in life.

And this intelligence of ours doesn’t have to confine itself to mundane decisions like how to spend our free time. We are equally competent in deciding for ourselves how we want to spend our money on more pressing issues in life: what hospital we want to attend, whether to assist our loved ones financially whenever the need arises, and what school our children should attend. These issues are so important for our well-being – and our children’s – that, instead of putting the government in charge of these decisions, we should be the ones choosing what we consider to be best for our – and our children’s – future.

In 1979, the Friedmans noticed an upward trend in the government taking control of many of these decisions – decisions that have a relatively big impact on our financial resources. The most striking example of this might be the public financing of (elementary, secondary and higher) education. In 1979, the average US citizen paid $2000 per child attending public education, even though not everyone’s child – assuming you even had a child – made use of public educational resources. The Friedmans found this state of affairs harmful to the right of each individual to decide where to spend his money, including the decision to enroll his child in a privately financed educational institution.

They therefore came up with a “voucher plan”, in which every US citizen would – per child – receive a voucher exchangeable for a certain amount of money ($2000, $1500 or $1000) that could be cashed in only if their child attended an appropriate educational institution. This voucher plan would replace the tax each US citizen was obliged to pay, irrespective of whether they had children and irrespective of whether their children attended a public educational institution. The plan would make sure that only those making use of public educational services would be charged, thereby excluding the non-using part of society.

The Friedmans made primarily financial arguments in favor of their voucher plan: on the whole, public educational costs would remain the same, and parents would use their increased autonomy to find the school that best suited the needs of their children. The relatively free market created on the basis of the voucher plan would improve the quality of both public and private education. I believe, however, that one argument in favor of the voucher plan, and the free market in general, has not received the attention it deserves – at least not in the Friedmans’ Free to Choose. And that is the argument of human intelligence.

As pointed out before, humans are – for the most part – perfectly capable of deciding for themselves where to spend their money. We wouldn’t want anyone else to do our groceries and schedule our leisure time for us – at least not for money. However, that is exactly what the government does when it comes to public education. The government proclaims that – as Friedman explains – it is the only actor possessing the professional knowledge required to decide what’s best for our children – thereby implying that it is indispensable for our children to receive a good-quality education.

What this claim comes down to is the government saying – or rather implying – that we (“the crowd”) don’t understand what is and isn’t important in our children’s education, and that – because of that – it should step in and relieve us of this impossible duty of ours. We don’t understand what to do, but luckily it does. The government is the father looking out for us, protecting us from doing harm to our children and to the rest of society.

I find this an insult to the basic level of intelligence that the majority of people possess. We know very well what’s important in our children’s education – likely better than the government does, since – in contrast to the government – we know our children. So besides all the financial benefits of the voucher plan, returning autonomy to the Average Joe is required out of respect for people’s intelligence. It is – just like driving a car – a right each parent should be endowed with, provided the necessary condition (having children, that is) is met. After all, we are no fools, are we?

What do you think?