By Christoph Burkhardt

Why Your Forecasting is Wrong and What to Do About It

For years I have been fascinated by the workings of the human mind when it evaluates a prediction about the future or decides whether something is possible or not. Certainly, our capacity to compute what is going to happen in the future is very limited. We are not very good at forecasting. As we have already discussed, it is not only our inability to know what is coming our way, but also the fact that we quite actively base our forecasts on what we believe will happen rather than on what is likely to happen. The science fiction author and futurist Arthur C. Clarke put it this way:

“When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.”

Clarke is saying that our experience equips us with the ability to see what is possible in the near future. An experienced scientist can make accurate judgments about what is possible. But judging a development to be impossible is an entirely different game, and our experience does not prepare us for it. The two judgments seem to describe very similar processes; after all, “possible” and “impossible” are mutually exclusive terms. Things are either possible or impossible; they cannot be both. But the judgments are not similar when we look at how human cognition processes them. When we use our experience and knowledge to explain why something is possible, we look at all the details that make it happen.


Do we have the technology, the resources, the data, the knowledge, the people, or the infrastructure to make it happen? After going through the list of capabilities and resources we need, we may find that everything is available, so it will be possible. In this respect, our experience sets the stage for the judgment call: we can clearly see the path by which something becomes reality. When we look at things that initially feel unlikely because they lie in the future, we do not see this path clearly. There is a missing link between the status quo and the future state of the world we are imagining. And here is the problem: just because we cannot see how something becomes possible in the future does not make it any less likely than something we can easily imagine happening. Our imagination is not only deeply flawed when we think about the future; it is also a pretty bad predictor of what is impossible. Roy Amara, longtime president of the Institute for the Future, put this point brilliantly in what is now called Amara’s Law:

“We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”

Applying what Amara is saying about new technologies to forecasting in general, I would summarize human cognition as the combination of us being naive in the short run and arrogant in the long run. Our imagination provides us with far-fetched ideas of what is possible in the near future, but it also makes us underestimate the impact of innovation in the long run. This is why, in many science fiction stories and movies, we see robots with human-level intelligence able to do everything a modern-day slave would be able to do. At the same time, we see very few changes in the way people live their lives in these stories. In most cases they still go to work in some form, they own vehicles, and sometimes they even read newspapers (the ones printed on paper).

Thus while some of our expectations for technologies and developments are quite naive, since we will not see robots with human-level intelligence in our homes anytime soon, we are also arrogant about the effects these robots will have on economic and societal life. We will not work and live the way we do today, and new technologies are only a small trigger in the much bigger paradigm shifts unfolding in front of us. Many people are surprised by economic developments or political shifts because our imagination has been too arrogant to see them as long-term consequences, while we think of short-term shifts in new technologies as far greater than they actually are.

When we think about the future of robots with human-level intelligence, we therefore ignore for too long the real shifts taking place, while being totally naive about robots taking over the world. Of course we need to be careful. Of course we need regulation. And of course we need to educate ourselves in detail about how these technologies work. But at the same time, we do not need to worry about the end of our world within the next twenty or thirty years. Forecasting based on fear is very unlikely to be correct. Our frame of mind for thinking about the future has to come from a neutral and historically aware place. We have to be very careful not to develop self-fulfilling prophecies driven by fear. Sometimes the worst-case scenario comes true only because too many people believe it will.
