Following his work on geopolitical forecasting, Philip Tetlock co-founded the Good Judgment Project along with decision scientists Barbara Mellers (a UPenn colleague) and Don Moore.
Their further research identified four key steps to improving forecast accuracy, shared on the (now archived) ‘Science’ section of the project’s site:
- Talent spotting: Identifying the naturally better forecasters. They’re curious, highly analytical and numerically savvy, rational, highly open-minded, and quick to revise their views when presented with new evidence. In their thinking, they also structure and disaggregate problems, take an ‘outside view’, and systematically look for base rates. Nowadays, the team uses Good Judgment Open to identify some of its new ‘superforecasters’.
- Training: Forecasting is a skill that can be learned and improved. Training focuses on techniques to reduce cognitive biases and apply structured thinking. Key strategies include breaking down problems, considering alternative outcomes, and updating predictions as new information emerges (a worked updating example appears after this list). The team’s early “cognitive-debiasing” training, known as CHAMPS KNOW, lasted only an hour, yet improved forecast accuracy by 11% over an extended period.
- Teaming: Diverse teams make better predictions than individuals. By grouping forecasters with different perspectives and encouraging collaboration, there’s a “surge of accuracy that goes way beyond what you’d expect”, the researchers said in an interview with Knowledge at Wharton.
- Aggregation: Finally, individual predictions are combined into a single forecast, but not simply through (weighted) averages. A method called log-odds extremising aggregation combines the group’s predictions and adjusts the consolidated forecast to be more extreme where there is consensus: the more forecasters agree, the further the model pushes the combined probability towards certainty (towards 0% or 100%). The idea is that a confident consensus is often more reliable than a simple average would suggest (sketched in code below).
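To make the aggregation step concrete, here is a minimal Python sketch of log-odds extremising under simplifying assumptions: every forecaster is weighted equally and the extremising factor `a` is an illustrative constant. The project’s published algorithm also weights forecasters (for example by track record and recency), which this sketch omits.

```python
import math

def extremised_aggregate(probs, a=2.0):
    """Combine individual probability forecasts via log-odds extremising.

    A minimal, unweighted sketch: convert each probability to log-odds,
    average them, multiply by an extremising factor a > 1, and map back
    to a probability. The factor pushes a confident consensus further
    towards 0% or 100% than a simple average would.
    """
    eps = 1e-6  # keep inputs strictly between 0 and 1
    logits = []
    for p in probs:
        p = min(max(p, eps), 1 - eps)
        logits.append(math.log(p / (1 - p)))
    mean_logit = sum(logits) / len(logits)
    return 1 / (1 + math.exp(-a * mean_logit))

# Five forecasters broadly agree that an event is ~70-80% likely.
forecasts = [0.70, 0.75, 0.80, 0.72, 0.78]
print(sum(forecasts) / len(forecasts))   # simple average: 0.75
print(extremised_aggregate(forecasts))   # extremised aggregate: ~0.90
```

With these example inputs, a simple average gives 75%, while the extremised aggregate lands near 90%, reflecting the group’s unanimity.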
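The Training step’s advice to update predictions as new information emerges can also be made concrete with Bayes’ rule. The scenario and numbers below are hypothetical, and the project’s training taught probabilistic updating as a habit of mind rather than this exact calculation; this is only a sketch of what a disciplined update looks like.

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Revise a probability after seeing one new piece of evidence.

    Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E), where P(E) is the
    total probability of seeing the evidence either way.
    """
    p_evidence = (p_evidence_if_true * prior
                  + p_evidence_if_false * (1 - prior))
    return p_evidence_if_true * prior / p_evidence

# Hypothetical scenario: a forecaster starts from a 30% base rate that a
# treaty is signed this year, then learns negotiators have produced a
# draft text. If drafts appear in 80% of signing years but only 20% of
# non-signing years, the forecast should move to roughly 63%.
print(bayes_update(prior=0.30, p_evidence_if_true=0.80, p_evidence_if_false=0.20))
```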