The Planning Fallacy
One explanation is based on psychological research conducted by Daniel Kahneman and Amos Tversky. In 1979, they discovered that research subjects had a tendency to underestimate the amount of time required to accomplish a task. This cognitive bias is most often referred to as "the planning fallacy."
Since 1979, hundreds of studies have replicated their findings. One took place at the University of Waterloo in Ontario, Canada, where researchers asked college students to predict how long it would take to write their theses. The average estimate was just over 34 days. The students were also asked to guess how long it might take if everything went smoothly, and how long if potential delays got in the way; those averages were roughly 27 and 48 days, respectively. In reality, students took an average of 55 days to turn in their theses, a full week past even the worst-case estimate and over 50% longer than the base estimate.
Time is not the only thing we underestimate; we also underestimate the resources a task might require. In a 2002 study, homeowners who were planning to remodel their kitchens were asked to estimate how much it would cost. The average estimate was $19,000, while the average actual cost was $39,000, more than double the original prediction.
The planning fallacy is not limited to individual judgments; it affects groups and organizations as well. Have you ever heard stories of government projects where a hammer ends up costing over $400, or a toilet seat $600? While these inflated prices are not entirely products of the planning fallacy, it is undoubtedly a factor. What about the Sydney Opera House? There, a $7 million project ended up costing $102 million and took 10 years longer to complete than originally projected. For some actual research data, a 2005 study of railway projects found that, over a period of 30 years, four out of every five projects significantly overestimated how many passengers would end up using the system.
There are a number of theories that try to explain why we are so bad at estimating into the future. One theory holds that a bit of optimism is a self-serving bias: while our beliefs might be less than accurate, they provide the initial momentum required to take on a challenge. To sustain that optimism, our memory of past events becomes less than accurate. When we recall past performance, we tend to attribute any delays to external factors that we safely label "not my fault," factors we rationalize were beyond our control and therefore should not influence future predictions. At the same time, we discount any factors related to our personal performance, telling ourselves that improved skills, greater motivation, or a renewed commitment to the goal will make up the difference next time.
Another theory is not as positively framed; it attributes our poor planning to known limitations of our cognitive abilities. The theory suggests that in making future predictions, individuals construct a mental narrative, a general story in which they accomplish the task. This story includes only the major elements that are known variables. Small, individual distractions and mishaps that will inevitably occur, such as an unexpected phone call, are not factored into the equation. Given that these small distractions are by and large unknowns, any estimate will understandably be incorrect. On this view it is not bias, but rather an inability to account for all the variables.
There are a number of ways to deal with the planning fallacy. The first two methods below are based on scientific research; the last is common practice in the field of project management.
-1- Anchor and Adjust: According to some research, the best approach is to anchor your future predictions in objective past performance. If the last time you went on a diet you lost an average of 1 lb. a week, that is the figure you should use to estimate the next time you want to lose weight. This does not mean ignoring the projected benefits of a new idea, method, or process; it means using past performance to anchor, and then adjusting.
-2- List Delays: Another evidence-backed technique is to intentionally consider setbacks. A 2004 study showed that estimates were more accurate after participants considered three obstacles that might impede their progress. To use this technique, do the same: anytime you set a goal or estimate the time or resources required to finish a task, include a step where you list any potential delays.
-3- Add-X: Similar to listing delays, this method starts with the base assumption that any projected timeline is underestimated by default. The question then is how much time to add. For a simple, well-defined task with no hidden variables, add 10% to the original estimate. For a complex, ill-defined task with unknown variables, add 50%. For a novel task never before attempted, add 100%.
The Add-X method is understandably a judgment call, since what is a simple task for one person is complex for another. If I try to repair my own car, I lack the knowledge and skills; for me it is a task with plenty of unknowns. If a repair manual I find online says the fix should take an hour, I should probably plan for it to take double that. For a professional mechanic, on the other hand, it is a simple task without unknowns, yet the mechanic should still add at least 10% to account for unexpected distractions or mishaps.
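Because the Add-X percentages are just multipliers on a base estimate, the heuristic is easy to express in a few lines of code. Here is a minimal Python sketch: the multipliers (10%, 50%, 100%) come from the method as described above, while the function name, the category labels, and the mapping of the car-repair scenarios onto categories are illustrative choices of mine, not part of any established tool.

```python
# Multipliers from the Add-X heuristic; category names are illustrative.
ADD_X_MULTIPLIERS = {
    "simple": 1.10,   # well-defined task, no hidden variables: add 10%
    "complex": 1.50,  # ill-defined task with unknown variables: add 50%
    "novel": 2.00,    # task never before attempted: add 100%
}

def adjusted_estimate(base_hours: float, complexity: str) -> float:
    """Pad a base time estimate according to the Add-X heuristic."""
    return base_hours * ADD_X_MULTIPLIERS[complexity]

# The one-hour car repair from the text, treating the amateur's
# plenty-of-unknowns attempt as the worst ("novel") category:
print(adjusted_estimate(1.0, "simple"))  # mechanic: prints 1.1
print(adjusted_estimate(1.0, "novel"))   # amateur: prints 2.0
```

The table of multipliers is the whole method; in practice you might tune the percentages to your own track record, which is exactly the anchor-and-adjust idea applied to the heuristic itself.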
The next time you are creating or updating your to-do list, think about the planning fallacy. Think about the science behind our natural tendency to overestimate what we can accomplish and underestimate the time and resources it will take to achieve the goals we set. While no known method eliminates cognitive bias altogether, there are techniques you can use to offset the planning fallacy.