(Tetlock, Gardner; Crown Publishers, 2015)

Chapter 1: An Optimistic Skeptic

Tom Friedman (globally recognized predictor) vs Bill Flack (virtually unknown predictor who is right quite often).

The One About the Chimp

The average expert was roughly as accurate as a dart-throwing chimpanzee. The author conducted a twenty-year study (1984-2004): he gathered a big group of experts (academics, pundits, etc.) to make thousands of predictions about the economy, stocks, elections, wars, and other issues of the day. Time passed, and when he checked the accuracy of the predictions, he found that the average expert did about as well as random guessing.

People are in front of the cameras for reasons other than their skill at predictions. Old forecasts are like old news--soon forgotten--and pundits are almost never asked to reconcile what they said with what actually happened. The one undeniable talent that talking heads have is their skill at telling a compelling story with conviction.

It was easiest to beat chance on the shortest-range questions that only required looking one year out, and accuracy fell off the further out experts tried to forecast, approaching random-chance levels at 3-5 years out.

Author: "Call me an optimistic skeptic."

The Skeptic

(The story of Mohamed Bouazizi, 17 Dec 2010, who self-immolated in Tunisia after a police shakedown and started the Arab Spring.) Could Tom Friedman have predicted the Arab Spring? Of course not. (Black swan event.)

1972: Meteorologist Edward Lorenz publishes "Predictability: Does the Flap of a Butterfly's Wings in Brazil Set Off a Tornado in Texas?" about how tiny data-entry variations in computer simulations (entering 0.506 instead of 0.506127) could produce dramatically different long-term forecasts. Inspired "chaos theory": in nonlinear systems (like the atmosphere), even small changes in initial conditions can mushroom to enormous proportions. This shifted scientific opinion toward the view that there are hard limits on predictability--a deeply philosophical question.
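(Not from the book--a minimal sketch to make the sensitivity concrete. The rounded initial condition (0.506 vs 0.506127) comes from Lorenz's anecdote about his weather model; the three-equation "Lorenz system" below is the standard textbook stand-in for the same phenomenon, integrated here with plain Euler steps.)

```python
# Hypothetical illustration (not from the book): integrate the classic Lorenz
# system from two starting points that differ only by the rounding Lorenz
# describes (0.506127 vs 0.506) and watch the trajectories drift apart.

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz equations one step using simple Euler integration."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

def simulate(x0, steps=3000):
    """Return the x-coordinate history starting from (x0, 1.0, 1.05)."""
    x, y, z = x0, 1.0, 1.05
    history = []
    for _ in range(steps):
        x, y, z = lorenz_step(x, y, z)
        history.append(x)
    return history

full = simulate(0.506127)   # "full precision" starting point
rounded = simulate(0.506)   # the rounded re-entry
for t in (0, 500, 1000, 2000, 2999):
    print(f"step {t:4d}: {full[t]: .4f} vs {rounded[t]: .4f}"
          f"  (diff {abs(full[t] - rounded[t]):.4f})")
```

The gap starts out around 10^-4 and keeps growing until the two runs bear no resemblance to each other--the "hard limits on predictability" in miniature.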

The Optimist

It is one thing to recognize the limits on predictability, and quite another to dismiss all prediction as an exercise in futility. We make mundane predictions routinely (when to leave the house to avoid traffic, how other drivers will behave, whether people will follow through on their commitments to meetings, etc.). Much of our reality is at least this predictable.

Of course each of these pockets of predictability can be abruptly punctured: a restaurant very likely will open its doors when it says it will, but it may not for a number of reasons (manager sleeping late, fire, bankruptcy, pandemic, nuclear war, ...). There are no certainties in life--not even death or taxes, if we assign a nonzero probability to the invention of technologies that let us upload the contents of our brains into a cloud-computing network and the emergence of a future society so public-spirited and prosperous that the state can be funded with charitable donations.

How predictable something is depends on what we are trying to predict, how far into the future, and under what circumstances.

Forecasters are often groping in the dark--they have no idea how good their forecasts are in the short, medium, or long term, and no idea how good their forecasts could become. At best, they have vague hunches. That's because the forecast-measure-revise cycle operates only within the rarefied confines of high-tech forecasting. It's a demand-side problem: the consumers of forecasting don't demand evidence of accuracy, so there is no measurement, which means no revision, and without revision there can be no improvement.

(History of the Good Judgment Project (GJP) and its efforts.)

Two conclusions emerge from this research: One, foresight is real. Two, what makes these superforecasters so good isn't who they are, it's what they do. Foresight isn't a mysterious gift bestowed at birth; it is the product of particular ways of thinking, of gathering information, of updating beliefs. Broadly speaking, superforecasting demands thinking that is open-minded, careful, curious, and--above all--self-critical. It also demands focus. The kind of thinking that produces superior judgment does not come effortlessly--only the determined can deliver it reasonably consistently, which is why our analyses have consistently found commitment to self-improvement to be the strongest predictor of performance.

A Forecast About Forecasting

(Discussions about human judgment vs computer machine learning/AI.) "I think it's going to get stranger and stranger" for people to listen to the advice of experts whose views are informed only by their subjective judgment--human thought is beset by psychological pitfalls. "So what I want is that human expert paired with a computer to overcome the human cognitive limitations and biases." (David Ferrucci, IBM Watson chief engineer)

We will need to blend computer-based forecasting and subjective judgment in the future.

Chapter 2: Illusions of Knowledge

(Discussions of cognitive biases and people loudly proclaiming incorrect conclusions with no proof.)

Chapter 3: Keeping Score

Chapter 4: Superforecasters

Chapter 5: Supersmart?

Chapter 6: Superquants?

Chapter 7: Supernewsjunkies?

Chapter 8: Perpetual Beta

Chapter 9: Superteams

Chapter 10: The Leader's Dilemma

Chapter 11: Are They Really So Super?

Chapter 12: What's Next?

Appendix: 10 Commandments for Aspiring Superforecasters

For more details visit www.goodjudgment.com

  1. Triage: Focus on questions where your hard work is likely to pay off; don't waste time on either easy "clocklike" questions (where simple rules of thumb can get you close to the right answer) or impenetrable "cloud-like" questions (where even fancy statistical models can't beat the dart-throwing chimp). Concentrate on questions in the Goldilocks zone of difficulty, where effort pays off the most. ... Triage judgment calls get harder as we come closer to home. ... Certain classes of outcomes have well-deserved reputations for being radically unpredictable, but we usually don't discover how unpredictable outcomes are until we have spun our wheels for a time trying to gain analytical traction. Bear in mind the two basic errors it is possible to make here: We could fail to try to predict the potentially predictable, or we could waste our time trying to predict the unpredictable. Which error would be worse in the situation you face?

  2. Break seemingly intractable problems into tractable sub-problems. Decompose the problem into its knowable and unknowable parts. Flush ignorance into the open. Expose and examine your assumptions. Dare to be wrong by making your best guesses. Better to discover errors quickly than to hide them behind vague verbiage.

  3. Strike the right balance between inside and outside views. Nothing is 100% "unique"; uniqueness is a matter of degree. Conduct creative searches for comparison classes even for seemingly unique events. Superforecasters are in the habit of posing the outside-view question: How often do things of this sort happen in situations of this sort? (Summers' estimates: Larry Summers, a Harvard professor, knows about the planning fallacy--estimators tend to underestimate the time they need, often by factors of two or three--so he doubles the estimate, then moves to the next higher time unit. "One hour" becomes "two days", and "two days" becomes "four weeks".)

  4. Strike the right balance between under- and over-reacting to evidence. Belief updating is necessary. Don't suppose that belief updating is always easy just because it sometimes is; skillful updating requires teasing subtle signals from noisy news flows, all the while resisting the lure of wishful thinking. Savvy forecasters learn to ferret out telltale clues before the rest of us, snooping for nonobvious lead indicators--what would have to happen before "X" could happen. The best forecasters tend to be incremental belief updaters, often moving from probabilities of 0.4 to 0.35 or from 0.6 to 0.65. Yet superforecasters also know how to jump (move their probability estimates fast in response to diagnostic signals). (A toy Bayes-updating sketch follows this list.)

  5. Look for the clashing causal forces at work in each problem. For every good policy argument, there is typically a counterargument that is at least worth acknowledging. Each side should list, in advance, the signs that would nudge them toward the other. In classical dialectics, thesis meets antithesis, producing synthesis. Synthesis is an art that requires reconciling irreducibly subjective judgments.

  6. Strive to distinguish as many degrees of doubt as the problem permits but no more. Few things are either certain or impossible, and "maybe" isn't all that informative. The more degrees of uncertainty you can distinguish, the better a forecaster you are likely to be. Translating vague-verbiage hunches into numeric probabilities feels unnatural at first but it can be done with patience and practice. Don't reserve rigorous reasoning for trivial pursuits.

  7. Strike the right balance between under- and over-confidence, between prudence and decisiveness. Superforecasters understand the risks both of rushing to judgment and of dawdling too long near "maybe". Long-term accuracy requires getting good scores on both calibration and resolution. (A Brier-score sketch follows this list.)

  8. Look for the errors behind your mistakes but beware of rearview-mirror hindsight biases. Don't try to justify or excuse your failures--own them! Conduct unflinching postmortems on both failures and successes: Where exactly did I go wrong (or right)? Remember that although the more common error is to learn too little from failure and to overlook flaws in your basic assumptions, it is also possible to learn too much (you may have been basically on the right track but made a minor technical mistake that had big ramifications).

  9. Bring out the best in others and let others bring out the best in you. Master the fine arts of team management, especially perspective-taking (understanding the arguments of the other side so well that you can reproduce them to the other's satisfaction), precision questioning (helping others to clarify their arguments so they are not misunderstood), and constructive confrontation (learning to disagree without being disagreeable).

  10. Master the error-balancing bicycle. Implementing each of these ten commandments requires balancing opposing errors. You can't become a superforecaster by reading training manuals. Learning requires doing, with good feedback that leaves no ambiguity about whether you are succeeding or failing. Practice is not just going through the motions of making forecasts, or casually reading the news and tossing out probabilities--superforecasting is the product of deep, deliberative practice.

  11. Don't treat commandments like commandments. "It is impossible to lay down binding rules because two cases will never be exactly the same." (Helmuth von Moltke) Guidelines are the best we can do in a world where nothing is certain or exactly repeatable. Superforecasting requires constant mindfulness, even when you are dutifully following these commandments.
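
(For commandment 4, a toy sketch of my own--the appendix gives no code--using the odds form of Bayes' rule, a standard way to frame belief updating: weak evidence should only nudge a probability, e.g. 0.40 down to roughly 0.35, while strongly diagnostic evidence justifies a jump.)

```python
# Hypothetical illustration for commandment 4: odds-form Bayes updating.
# likelihood_ratio = P(evidence | event happens) / P(evidence | it doesn't).

def bayes_update(prior_prob, likelihood_ratio):
    """Return the posterior probability after seeing one piece of evidence."""
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

print(round(bayes_update(0.40, 0.8), 2))  # weak contrary signal: 0.40 -> ~0.35
print(round(bayes_update(0.60, 1.2), 2))  # weak supporting signal: 0.60 -> ~0.64
print(round(bayes_update(0.40, 6.0), 2))  # diagnostic signal: 0.40 -> 0.80, a justified "jump"
```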
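
(For commandment 7, another sketch of mine: the book scores forecasts with Brier scores (Chapter 3), and "calibration" and "resolution" correspond to the reliability and resolution terms of the standard Murphy decomposition. The bin width and toy data below are arbitrary, chosen so forecasts fall exactly on the bin values.)

```python
# Hypothetical illustration for commandment 7: Brier score plus the Murphy
# decomposition, Brier = reliability (calibration) - resolution + uncertainty.
from collections import defaultdict

def brier_score(forecasts, outcomes):
    """Mean squared difference between probability forecasts and 0/1 outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

def murphy_decomposition(forecasts, outcomes):
    """Return (reliability, resolution, uncertainty) using 0.1-wide forecast bins."""
    n = len(forecasts)
    base_rate = sum(outcomes) / n
    bins = defaultdict(list)
    for f, o in zip(forecasts, outcomes):
        bins[round(f, 1)].append(o)
    reliability = resolution = 0.0
    for f_bin, obs in bins.items():
        obs_rate = sum(obs) / len(obs)
        reliability += len(obs) / n * (f_bin - obs_rate) ** 2    # calibration gap
        resolution += len(obs) / n * (obs_rate - base_rate) ** 2  # discrimination
    return reliability, resolution, base_rate * (1 - base_rate)

forecasts = [0.9, 0.9, 0.9, 0.7, 0.7, 0.3, 0.3, 0.1, 0.1, 0.1]
outcomes  = [1,   1,   0,   1,   0,   0,   1,   0,   0,   0  ]
rel, res, unc = murphy_decomposition(forecasts, outcomes)
print("Brier:", round(brier_score(forecasts, outcomes), 3))        # 0.202
print("calibration:", round(rel, 3), "resolution:", round(res, 3),
      "uncertainty:", round(unc, 3))                               # 0.035, 0.073, 0.24
```

Good calibration shows up as a small reliability term (forecasted probabilities match observed frequencies); good resolution shows up as forecasts that move decisively away from the base rate when the evidence warrants it.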


Tags: reading   thinking   decision-making  

Last modified 02 October 2024