Why It Is Possible to Make Above Average Returns – Even in Efficient Markets

There is a well-known hypothesis in financial economics, called the Efficient Market Hypothesis (EMH), that spawns a lot of debate. The EMH states that financial markets are ‘informationally efficient’. In other words: a financial asset’s market price always incorporates and reflects all available relevant information. Hence no investor can consistently use such information to find stocks that earn him above average returns. After all: such information is already reflected in the asset’s price; so if there is a lot of ‘positive’ information about the company, the stock’s market price will have increased, and if there’s a lot of ‘negative’ information, the price will have decreased.

I want to make an argument for why, even if the EMH holds, it might still be possible to consistently earn above average returns on investments. The argument is simple. Let’s first recall the EMH. We know that an efficient market is a market in which the price of a financial asset (let’s say a stock) always incorporates and reflects all available information. Hence you cannot use the set of available information to consistently earn above average returns on the asset – or on any asset, for that matter. But does it follow from this that you cannot consistently achieve above average returns? I don’t think so.

Because what if you are consistently better than other investors at anticipating future information? Then, even though the stock’s market price reflects all available information, you can use this anticipated future information to decide whether to buy or sell a stock. And if you can anticipate future information (which is information not yet incorporated and reflected in the stock’s price) better than the average investor, then you can earn above average returns, time after time.

However, anticipating future information and consistently earning above-average returns is no easy feat; it requires extensive research and expertise. While the EMH may hold in theory, the reality of financial markets is much more complex.

This all sounds pretty abstract, so let’s look at an example. Suppose there is a stock of a company that produces wind turbines – call it ‘stock A’. Furthermore, let’s suppose that at this point in time investors are on average not confident about wind energy’s potential. They might think that the cost of producing wind energy is too high, that its profits depend solely on current regulation, or that it will still take a long time before our fossil fuels are depleted, making the switch to wind energy not yet urgent. Given these considerations the stock trades at a price of – let’s say – 10. Let’s assume that this price indeed incorporates and reflects all available information – such as the information contained in annual reports, expert analyses and so on. Hence it seems reasonable to say that you cannot consistently earn above average returns on this stock by utilizing only this pool of existing information.

But what if you believe that, given ever increasing energy consumption and ever decreasing reserves of fossil fuels, society has no choice in the medium term but to turn to alternative forms of energy – forms such as wind energy? If you think this is true, then you can anticipate that future information about the wind-turbine producer will be positive – at least more positive than today’s information. You can anticipate that the future information will show an increase in the firm’s revenues, or – for example, if the firm is close to bankruptcy but you know that its managers have nothing to gain from a bankruptcy – a decrease in its costs. Given that the market is efficient, you know that when this information becomes public, the market price of the stock will rise to reflect it – to a price of, let’s say, 20. If you can anticipate such future information consistently, then you can anticipate the future stock price consistently, allowing you to consistently earn above average returns – despite the perfectly efficient market.

An equivalent way to look at this is to say that you take more information into account than the average investor when calculating the stock’s fair value. Let’s say that you are doing a net present value calculation and you have estimated the firm’s future cash flows. In the case of stock A, investors used estimated cash flows that lead them to a fair value of 10. However, given your anticipation of future information, you estimate these cash flows to be higher – leading you to a higher valuation of the stock. Again: if you can consistently anticipate future information better than the average investor, you can consistently earn above average returns – even in an efficient market.
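Purely as an illustration of that paragraph, here is a minimal sketch of such a net present value calculation. All the numbers – the cash-flow estimates and the 10 percent discount rate – are made up for the example and are not taken from any real firm.

```python
# Minimal discounted-cash-flow sketch; all figures are purely illustrative.

def fair_value(cash_flows, discount_rate):
    """Net present value of a series of expected yearly cash flows."""
    return sum(cf / (1 + discount_rate) ** year
               for year, cf in enumerate(cash_flows, start=1))

discount_rate = 0.10

# Cash flows the average investor expects, leading to a fair value of about 10.
consensus_flows = [2.0, 2.5, 3.0, 3.0, 3.0]

# Higher cash flows you expect, given the future information you anticipate.
anticipated_flows = [3.0, 4.0, 5.0, 6.0, 7.0]

print(f"Consensus fair value:   {fair_value(consensus_flows, discount_rate):.1f}")
print(f"Anticipated fair value: {fair_value(anticipated_flows, discount_rate):.1f}")
```

Run as-is, this prints a consensus value of roughly 10 and an anticipated value of roughly 18. The point is only that a different estimate of future cash flows mechanically produces a different fair value; the particular numbers mean nothing in themselves.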

Public Opinion and Information: A Dangerous Combination

‘That guy is an asshole. The way he treated his wife is absolutely disgusting. I’m glad she left him, she deserves better…much better.’ That’s the response of society when it finds out that a famous soccer player has hit his wife, and that the pair has consequently decided to split up. But on what basis does society form this judgment, or any judgment for that matter? On the basis of information, of course! It hears from the tabloids what has occurred, it processes this information, and then it comes to the most ‘reasonable’ conclusion/judgment. It’s pretty much like science, in that it bases its conclusions on data and reasons. But the prime difference between science and gossip/public opinion is that the latter doesn’t actively try to refute its conclusions: it solely responds to the data it receives. And this has some striking consequences.

Because what happens whenever the data change? What happens when one or two lines in a tabloid form a new and ‘shocking’ announcement? What if it appears that – while the soccer player and his wife were still together – the wife had an affair with another guy? Then suddenly the whole situation changes. Then suddenly the wife deserved to be hit. Then suddenly a hit in the face was a mild punishment for what she did. Then suddenly most people would have done the same when confronted with the same situation. Suddenly there is new data that has to be taken into account. But what are the implications of this observation?

Public opinion can be designed and molded by regulating the (limited) amount of information it receives. And this goes not only for gossip, but just as much for more urgent matters like politics and economics. It isn’t society’s duty to gather as much data as possible, compare the evidence for and against positions, and come to the most reasonable conclusion. No, society only has to take the final step: forming the judgment. And if you understand how this mechanism works, you can (ab)use it for your own good. If you were in politics, you could ‘accidentally’ leak information about a conversation the prime minister had with his colleagues, and thereby change the political game. The prime minister would be forced to respond to these ‘rumors’, thereby validating the (seeming) importance of the issue. For why else would he take the time to respond to it? And suddenly, for the rest of his days, he will be remembered for this rumor, whether it turns out to be true – as it was in Bill Clinton‘s case – or not: where there’s smoke, there is fire.

But let me ask you something: don’t you think famous people make mistakes every day? Even if only 1 percent of the wives got hit by their famous husbands every year, that would still be more than enough to fill every tabloid for the entire year. But what if – of all the ‘beating cases’ – only one or two a year became public? Then – and only then – does the guy who did the hitting become a jerk. Why? Because even though he might have been hitting his wife all along without our knowing it, now we have the data to back up our judgment. And since we’re reasonable creatures who only jump to conclusions when we’ve got the evidence to do so, we are suddenly morally allowed to do so.

We consider ourselves reasonable creatures for basing our judgments solely on the data we receive. We find this a better way to go than simply claiming things we don’t know for sure. And although this might very well be the reasonable way to go, we have to remind ourselves that we are slaves to the data, and therefore vulnerable to those providing the data. We have to be aware that even though we don’t know about the cases we have no data about, this doesn’t imply that those cases aren’t there. It merely means that the parties involved – whether that is the (ex-)wife of a famous soccer player or anyone else – saw no reason to leak the data. It only means that their interests were more aligned than they were opposed. And we should take people’s interests – and the politics behind them – into account when jumping to judgments based on the data we receive.

But what do you think?

The Use of the Panopticon in the Workplace

The Panopticon was a prison designed to “allow a watchman to observe all inmates of an institution, without them being able to tell whether or not they are being watched.” Think of it as God watching – or not watching – from a cloud at what we’re doing and punishing us if we’ve behaved badly. The trick of the Panopticon is that – no matter whether someone (a watchman or God) is actually watching – the watched always feel like they are being watched and therefore will try to make sure that they always stick to the rules.

Interesting concept, huh? An interesting question is: how can we apply this fairly old idea to our modern societies? Well, there are many applications of Panopticon-like structures in our modern Western civilization already. Technologies like cameras and sound recorders can make citizens – for example – feel like they’re being watched at all times. And it is this feeling – not the act of an observer actually watching them – that prevents them from doing bad stuff. Cost-efficient, right?

Now, let’s take a look at the workplace. Social media cost an employer an average of $65,000 per employee per year. That’s some serious money, isn’t it? So you can understand that employers are looking for ways to reduce this – and many other – “work-distracting” activities. An option would be to block all “work-irrelevant” websites. But then the question is: what’s relevant and what’s not? Is checking the news relevant? It could be; it depends on what the news is, right? However, this option would have much less effect if an employee’s time-wasting activities take place away from his computer.

Now let me ask you the following question: if you were an employee, and you knew that your boss could be watching what you were doing at any point in time, would you still “check your Facebook page” or send some “work-related” mails to your friends? Would you still be wasting your valuable working time if you knew that your boss would receive a message whenever you didn’t touch your keyboard for – let’s say – 10 minutes (except during breaks, of course)? I doubt it.
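For what it’s worth, such a “ten minutes of no keyboard activity” alert would be technically trivial to build, which underlines the cost-efficiency point above. Below is a hypothetical sketch, assuming the third-party pynput package is installed; it only prints a message locally rather than notifying anyone, and it is meant as an illustration of how little effort such surveillance takes, not as a recommendation.

```python
# Hypothetical idle-activity alert, assuming the pynput package
# (pip install pynput) is available. It merely prints a local message;
# a real system would report elsewhere, which is exactly the point at issue.
import time
from pynput import keyboard, mouse

IDLE_LIMIT = 10 * 60  # ten minutes, in seconds
last_activity = time.time()

def register_activity(*args, **kwargs):
    """Record the time of the most recent keyboard or mouse event."""
    global last_activity
    last_activity = time.time()

# Listen for any key press or mouse movement/click in the background.
keyboard.Listener(on_press=register_activity).start()
mouse.Listener(on_move=register_activity, on_click=register_activity).start()

while True:
    time.sleep(30)  # check every 30 seconds
    if time.time() - last_activity > IDLE_LIMIT:
        print("No input for over 10 minutes - the 'watchman' would be notified.")
```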

So why isn’t the Panopticon applied in the workplace yet (as far as we – or at least I – know)? Probably because people find it “wrong” for employers to do so. They find it wrong for employees to have the feeling of being watched all the time. But the question is: is this a legitimate reason for not implementing the concept? After all, a production worker is watched by his employer all the time, right? So why not an employee sitting behind his computer? Is sitting behind a computer a free pass for doing whatever you want in your working time? In the time you’re being paid by your employer? Thereby hurting your company’s profits and – indirectly – the security of your job and the jobs of your peers? I don’t think so.

But what do you think? Can we do this, or not?

How to Interpret the Notion of Chance?

We all think we’re familiar with the notion of ‘chance’. But are we really? And if so, what are the consequences we should attach to our interpretation of chance? For instance, are chances purely descriptive in nature – in the sense that they refer only to past events – or do they have a predictive power that might be based upon some kind of underlying ‘natural’ force producing the structured data? And why would it even matter how to interpret chance? Let’s take a look behind the curtains of a probabilistic interpretation of chance, right into its philosophical dimensions.

On average, 12.3 per 100,000 inhabitants of the USA get killed in a traffic accident. Also, 45 percent of Canadian men are expected to develop some form of cancer at some point in their lives. So, what do you think about these data? First of all: does the fact that 12.3 out of 100,000 inhabitants get killed in traffic tell you anything about the likelihood that you are going to be killed in traffic? I guess not. It is merely a descriptive notion invented to condense a large amount of data into an easy-to-read figure. It says nothing about your future, or anyone’s future for that matter. After all: you will either die in traffic or you will not, and you will either get cancer or you will not. At this point in your life you are absolutely clueless which way it will turn out. For all you know, it might be a 50-50 kind of situation.
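To see how purely descriptive such a figure is, here is a small sketch of how a number like ‘12.3 per 100,000’ comes about. The population and fatality counts below are invented for the illustration; the point is only that the figure is last period’s count divided by last period’s population.

```python
# Illustrative only: made-up counts, to show that a rate like "12.3 per 100,000"
# is nothing more than a compressed description of what already happened.
population = 330_000_000           # hypothetical population size
traffic_deaths_last_year = 40_590  # hypothetical number of recorded fatalities

rate_per_100k = traffic_deaths_last_year / population * 100_000
print(f"{rate_per_100k:.1f} deaths per 100,000 inhabitants")  # prints 12.3

# Note what the calculation does NOT contain: anything about you, or about
# next year. It only summarises counts that have already been recorded.
```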

Although this interpretation of chance might feel counter-intuitive, it seems a more reasonable position to take than believing you are expected to die in traffic with a probability of 12.3/100,000. You are after all a unique person and you don’t have 100,000 ways to go. You either go one way, or the other. It is only by adding huge amounts of data together that scientists can come to compressed figures (like chances), thereby describing what has happened in the past. But description does not equal prediction, and totality does not equal uniqueness.

What are the implications of this way of looking at chance for our interpretation of science? What about the inferences scientists make based upon data, like the one about cancer I mentioned above? Are they making unjustified claims by positing that 45 percent of men are expected to develop some form of cancer? I believe this might indeed be the case. If scientists want to be fully justified in their conclusions, they should do away with any claims about the likelihood of events happening in the future. That seems to be the only way to stay 100 percent true to the available data.

But note: this is not to say that the scientific enterprise has lost its value. Science can still be the vehicle best suited for gathering huge amounts of data about the world, and for presenting these data in such a way that we are able to get a decent glimpse of what is going on in the world around us. And that is where – I believe – the value of science resides: in the provision of data in an easy-to-understand manner. Not in the making of predictions, or inferences of any kind, as many scientists might happen to believe: just the presentation of data, a job which is difficult enough in itself.

You could say that I am not justified in making this claim. You could object that a distinction should be made between ’45 percent of men are expected to get some form of cancer’ and ‘one specific man has a 45 percent chance of getting cancer’. While the latter might be untrue – because any one man will either get cancer or not – the former might be more justified, since it divides a group into units that will each either get cancer or not. However, although this might be true to a certain extent, it still seems an unjustified way to make predictions about how the world will turn out. After all, taking 100 men as the unit of selection only replaces the level of the individual with the level of the group. On an even higher level of abstraction, one could consider the 100 men to be one unit, which would make the conclusions reached unjustified again.

Also, when choosing to make predictions at the level of the group, why does one choose the higher rather than the lower level? Why wouldn’t it be okay to say that, instead of human beings, cells are the true units that either get cancer or not? That is only a difference in the level of analysis, right?
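The difference between a group-level frequency and an individual outcome can be made vivid with a tiny simulation. The 45 percent figure and the group size of 100 come from the example above; everything else is an illustrative assumption, and the sketch does not settle whether the 45 percent has any predictive force – it only shows that the frequency is a property of the collection, while each member’s outcome is all-or-nothing.

```python
# Sketch of the group-vs-individual point: the group frequency is a single
# summary number, while every individual outcome is simply 0 or 1.
import random

random.seed(1)
group_size = 100
outcomes = [1 if random.random() < 0.45 else 0 for _ in range(group_size)]

print(f"Fraction of the group affected: {sum(outcomes) / group_size:.2f}")
print(f"Outcome for one specific man:   {outcomes[0]}")  # either 0 or 1, never 0.45

# Zoom out one level and the whole group is itself a single 'individual':
# it either contains at least one case or it does not - again a 0-or-1 fact.
print(f"Group contains at least one case: {any(outcomes)}")
```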

So, next time you read somewhere that 99 out of 100 people fail at achieving something, interpret this for what it is: a description of what has happened in the past, which can inform the decision about what you should do right now. Don’t interpret it as meaning that you only have a one percent chance of achieving that goal, because that would be a totally unjustified inference to make: an inference that goes way beyond what the data can support. And don’t consider a scientific fact to be a prediction about the future. Consider it for what it is: a useful description of the past, not a legitimate claim about the future.

But what do you think?