(by Annie Duke, Portfolio/Penguin, 2018, ISBN 9780735216365)

Chapter 1: Life is Poker, not Chess

The hazards of resulting: we link the quality of results to the quality of decisions, even though it is easy to point to indisputable examples where the two aren't so tightly correlated.

Quick or dead: our brains weren't built for rationality; they evolved to create certainty and order. We are uncomfortable with the idea that luck plays a significant role in our lives. We recognize the existence of luck, but we resist the idea that, despite our best efforts, things might not work out the way we want. It feels better for us to imagine the world as an orderly place, where randomness does not wreak havoc and things are perfectly predictable. We evolved to see the world that way. Creating order out of chaos has been necessary for our survival. When we work backwards from results to figure out why those things happened, we are susceptible to a variety of cognitive traps, like assuming causation when there is only correlation, or cherry-picking data to confirm the narrative we prefer. We will pound a lot of square pegs into round holes to maintain the illusion of a tight relationship between our outcomes and our decisions. Kahneman's "System 1" and "System 2" thinking; also called "reflexive mind" and "deliberative mind" by Gary Marcus (Kluge: The Haphazard Evolution of the Human Mind).

Game theory: "the study of mathematical models of conflict and cooperation between intelligent rational decision-makers."

Poker vs chess: chess contains no hidden information and very little luck. Poker is a game of incomplete information.

"I'm not sure": using uncertainty to our advantage: admitting that we don't know has an undeservedly bad reputation. The first step is understanding what we don't know. Ignorance: How It Drives Science (Stuart Firestein) points out that in science, "I don't know" is a necessary step toward enlightenment. What makes a decision great is not that it has a great outcome. A great decision is the result of good process, and that process must include an attempt to accurately represent our own state of knowledge. That state of knowledge, in turn, is some variation of "I'm not sure". Acknowledging uncertainty is the first step in executing on our goal to get closer to what is objectively true. "I'm not sure" is simply a more accurate representation of the world. When we accept that we can't be sure, we are less likely to fall into the trap of black-and-white thinking. The secret is to make peace with walking around in a world where we recognize that we are not sure and it's OK.

Redefining wrong: When we think in advance about the chances of alternative outcomes and make a decision based on those chances, it doesn't automatically make us wrong when things don't work out. It just means that one event in a set of possible futures occurred. Any prediction that isn't 0% or 100% can't be wrong solely because the most likely future doesn't unfold.
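A minimal simulation sketch of this point (the 76% figure and the code are my illustration, not from the book): a well-calibrated forecast still expects its less-likely branch to happen regularly, so one miss can't falsify it.

```python
import random

# Hypothetical forecast: we judge an outcome 76% likely (a number chosen
# for illustration). Even a perfectly calibrated 76% forecast sees the
# "unlikely" branch about 24% of the time.
P_SUCCESS = 0.76
TRIALS = 100_000

misses = sum(random.random() >= P_SUCCESS for _ in range(TRIALS))
print(f"'unlikely' branch occurred in {misses / TRIALS:.1%} of trials")
# -> roughly 24%. A single miss doesn't make the 76% estimate wrong;
#    only a long run of outcomes can tell us whether it was calibrated.
```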

Chapter 2: Wanna Bet?

We've all been to Des Moines: hiring an employee is not a riskless choice. By treating decisions as bets, poker players explicitly recognize that they are deciding on alternative futures, each with benefits and risks. They also recognize that there are no simple answers. If we follow the example of poker players by making explicit that our decisions are bets, we can make better decisions and anticipate (and take protective measures against) the situations in which irrationality is likely to keep us from acting in our best interest.

All decisions are bets: "bet": "a choice made by thinking about what will probably happen"; "to risk losing (something) when you try to do or achieve something"; "to make decisions that are based on the belief that something will happen or is true". Every decision commits us to some course of action that, by definition, eliminates acting on other alternatives.

Most bets are against ourselves: One of the reasons we don't naturally think of decisions as bets is because we get hung up on the zero-sum nature of the betting that occurs in the gambling world. In most of our decisions, we are not betting against another person; we are betting against all the future versions of ourselves that we are not choosing. Whenever we make a choice, we are betting on a potential future; we are betting that the future version of us that results from the decisions we make will be better off. Ignoring the risk and uncertainty in every decision might make us feel better in the short run, but the cost to the quality of our decision-making can be immense. If we can find ways to become more comfortable with uncertainty, we can see the world more accurately and be better for it.

Our bets are only as good as our beliefs: we bet based on what we believe about the world. Part of the skill in life comes from learning to be a better calibrator, using experience and information to more objectively update our beliefs to more accurately represent the world. The more accurate our beliefs, the better the foundation of the bets we make. There is also skill in identifying when our thinking patterns might lead us astray, no matter what our beliefs are, and in developing strategies to work with (and sometimes around) those thinking patterns.

Hearing is believing: we form beliefs in a haphazard way, believing all sorts of things based just on what we hear out in the world but haven't researched for ourselves. (How to tell whether a man will go bald; how to calculate a dog's age in human years. Both are common misconceptions -- Google "common misconceptions".) How we think we form abstract beliefs: (1) We hear something; (2) We think about it and vet it, determining whether it is true or false; (3) We form our belief. How we actually form abstract beliefs: (1) We hear something; (2) We believe it to be true; (3) Only sometimes, later, if we have the time or inclination, do we think about it and vet it, determining whether it is, in fact, true or false. Daniel Gilbert (Stumbling on Happiness) is responsible for pioneering work on belief formation: "People are credulous creatures who find it very easy to believe and very difficult to doubt." How we form beliefs was shaped by the evolutionary push towards efficiency rather than accuracy. Truthseeking, the desire to know truth regardless of whether the truth aligns with the beliefs we currently hold, is not naturally supported by the way we process information. We might think of ourselves as open-minded and capable of updating our beliefs based on new information, but the research conclusively shows otherwise. Instead of altering our beliefs to fit new information, we do the opposite, altering our interpretation of that information to fit our beliefs.

They saw a game: our pre-existing beliefs influence the way we experience the world. That those beliefs aren't formed in a particularly orderly way leads to all sorts of mischief in our decision-making.

The stubbornness of beliefs: once a belief is lodged, it becomes difficult to dislodge, leading us to notice and seek out evidence confirming our belief, rarely challenge the validity of confirming evidence, and ignore or work hard to actively discredit information contradicting the belief. This is called motivated reasoning.

Being smart makes it worse: the smarter you are, the better you are at constructing a narrative that supports your beliefs, rationalizing and framing the data to fit your argument or point of view. The better you are with numbers, the better you are at spinning those numbers to conform to and support your beliefs.

Wanna bet?: when someone challenges us on a belief, signaling their confidence that our belief is inaccurate in some way, ideally it triggers us to vet the belief, taking an inventory of the evidence that informed us. Being asked if we are willing to bet money on it makes it much more likely that we will examine our information in a less-biased way, be more honest with ourselves about how sure we are of our beliefs, and be more open to updating and calibrating our beliefs. Offering a wager brings the risk out in the open, making explicit what is already implicit (and frequently overlooked).

Redefining confidence: The Half-Life of Facts is a great read about how practically every fact we've ever known has been subject to revision or reversal. We are in a perpetual state of learning, and that can make any prior fact obsolete. We would all be well-advised to take a good hard look at our beliefs; we would be better served as communicators and decision-makers if we thought less about whether we are confident in our beliefs and more about how confident (on a scale of zero to ten) we are; capture the shades of grey. When we work toward belief calibration, we become less judgmental of ourselves. Incorporating percentages or ranges of alternatives into the expression of our beliefs means that our personal narrative no longer hinges on whether we were right or wrong but on how well we incorporate new information to adjust the estimate of how accurate our beliefs are. Declaring our uncertainty in our beliefs to others makes us more credible communicators. Expressing our level of confidence also invites other people to be our collaborators. When we declare something as 100% fact, others might be reluctant to offer up new and relevant information that would inform our beliefs for two reasons: first, they might be afraid they are wrong and so won't speak up; second, even if they are very confident their information is high quality, they might be afraid of making us feel bad or judged. Expressing our beliefs this way also serves our listeners, by signaling that the belief needs further vetting.
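A toy sketch of what tracking calibration could look like in practice (my construction, not a tool from the book; the data is invented): log each belief with the confidence stated at the time, then compare each stated level against how often those beliefs held up.

```python
from collections import defaultdict

# Invented example data: (stated confidence, did the belief turn out true?)
beliefs = [
    (0.9, True), (0.9, True), (0.9, False),
    (0.6, True), (0.6, False), (0.6, True), (0.6, False),
]

# Group outcomes by stated confidence level.
buckets = defaultdict(list)
for confidence, was_true in beliefs:
    buckets[confidence].append(was_true)

# A well-calibrated believer's 90% claims come true ~90% of the time.
for confidence, outcomes in sorted(buckets.items()):
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"stated {confidence:.0%} -> actual {hit_rate:.0%} over {len(outcomes)} beliefs")
```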

Chapter 3: Bet to Learn: Fielding the Unfolding Future

Outcomes are feedback: "Experience is not what happens to a man; it is what a man does with what happens to him." --Aldous Huxley. The difference between getting experience and becoming an expert lies in the ability to identify when the outcomes of our decisions have something to teach us and what that lesson might be. The challenge is that any single outcome can happen for multiple reasons; the unfolding future is a big data dump that we have to sort and interpret, and the world doesn't connect the dots for us between outcomes and causes. To reach our long-term goals, we have to improve at sorting out when the unfolding future has something to teach us, when to close the feedback loop. The first step to doing this well is in recognizing that things sometimes happen because of the other form of uncertainty: luck.

Luck vs skill: fielding outcomes: The way our lives turn out is the result of two things: the influence of skill and the influence of luck. Chalk up an outcome to skill, and we take credit for the result; chalk up an outcome to luck, and it wasn't in our control. It is hard to get this right.

Working backward is hard: the SnackWell's Phenomenon: betting on low fat instead of low sugar. Working backwards from the way things turn out isn't easy. We can get to the same outcome by different routes. Outcomes don't tell us what's our fault and what isn't, what we should take credit for and what we shouldn't. We can't simply work backward from the quality of the outcome to determine the quality of our beliefs or decisions. Rats get tripped up by uncertainty as well: when rats are trained on a fixed reward schedule, they learn pretty fast, and when you remove the reward, the behavior is quickly extinguished; when rats are trained on a variable/intermittent reinforcement schedule, uncertainty is introduced, and when you remove the reward, the behavior extinguishes only after a very long stretch of fruitless effort, sometimes thousands of tries. Worse, outcomes are rarely all skill or all luck.

"If it weren't for luck, I'd win every one": the way we field outcomes is predictably patterned: we take credit for the good stuff, and blame the bad stuff on luck so it won't be our fault. Stanford law professor Robert MacCoun found that in 75% of accidents, the victims blamed someone else for their injuries. In single-vehicle accidents, 37% of drivers still found a way to pin the blame on someone else. Self-serving bias has immediate and obvious consequences for our ability to learn from experience.

People watching: watching is an established learning method. Unfortunately, learning from watching is just as fraught with bias. We field the outcomes of our peers predictably: where we blame our own bad outcomes on bad luck, when it comes to our peers, bad outcomes are clearly their fault. And while our own good outcomes are due to our awesome decision-making, when it comes to other people, good outcomes are because they got lucky.

Other people's outcomes reflect on us: Blaming others for their bad results and failing to give them credit for their good ones is under the influence of ego; knocking down a peer by finding them at fault for a loss lifts our personal narrative. Schadenfreude. We feel it's a zero-sum game. Our genes are competitive; natural selection proceeds by competition among the phenotypes of genes, so we literally evolved to compete, a drive that allowed our species to survive. We think we know the ingredients for happiness; Sonja Lyubomirsky (psych prof at UCR) summarized several reviews of the literature on the elements we commonly consider: "a comfortable income, robust health, a supportive marriage, and lack of tragedy or trauma"; however, "the general conclusion from almost a century of research on the determinants of well-being is that objective circumstances, demographic variables, and life events are correlated with happiness less strongly than intuition and everyday experience tell us they ought to be. By several estimates, all of these variables put together account for no more than 8% to 15% of the variance in happiness." What accounts for most of the variance in happiness is how we're doing comparatively. Would You Rather… earn $70k in 1900 or $70k now? Most choose 1900, even though 1900 has no novocaine, air-conditioning, refrigeration, or computers; we'd rather lap the field in 1900 with an average life expectancy of only forty-seven years than live in the middle of the pack now with an average life expectancy of seventy-six years. A lot of the way we feel about ourselves comes from how we think we compare with others. We can learn better and be more open-minded if we work toward a positive narrative driven by engagement in truthseeking and striving toward accuracy and objectivity: giving others credit when it's due, admitting when our decisions could have been better, and acknowledging that almost nothing is black and white.

Reshaping habit: habits operate in a neurological loop consisting of three parts: the cue, the routine and the reward. To change a habit, you must keep the old cue, and deliver the old reward, but insert a new routine. We can work to change the habit of mind by substituting what makes us feel good. The golden rule of habit change says we don't have to give up the reward of a positive update to our narrative: we can work to get the reward of feeling good from being a good credit-giver, a good learner, and (as a result) a good decision-maker. Instead of feeling bad when we have to admit a mistake, what if the bad feeling came from the thought that we might be missing a learning opportunity just to avoid blame?

Wanna bet? Redux: The key is that by explicitly recognizing that the way we field an outcome is a bet, we consider a greater number of alternative causes more seriously than we otherwise would have. The prospect of a bet makes us examine and refine our beliefs, in this case the belief about whether luck or skill was the main influence in the way things turned out. When we treat outcome fielding as a bet, it pushes us to field outcomes more objectively into the appropriate buckets, because that is how bets are won. Thinking in bets triggers a more open-minded exploration of alternative hypotheses, of reasons supporting conclusions opposite to the routine of self-serving bias; we are more likely to explore the opposite side of an argument more often and more seriously--and that will move us closer to the truth of the matter. Thinking in bets also triggers perspective taking, leveraging the difference between how we field our own outcomes versus others' outcomes to get closer to the objective truth.

Chapter 4: The Buddy System

"Maybe you're the problem, do you think?": Not all situations are appropriate for truthseeking, nor are all people interested in the pursuit. Any of us who wants to get better at thinking in bets would benefit from having more David Lettermans in our lives; Lettermaning needs agreement by both parties to be effective.

The red pill or the blue pill?: Our brains have evolved to make our version of the world more comfortable: our beliefs are nearly always correct; favorable outcomes are the result of our skill; there are plausible reasons why unfavorable outcomes are beyond our control; we compare favorably with our peers; we deny or at least dilute the most painful parts of the message. Giving that up is not the easiest choice; by choosing to exit the matrix, we are asserting that striving for a more objective representation of the world, even if it is uncomfortable at times, will make us happier and more successful in the long run.

Thinking in bets is easier if you have other people to help you. A good decision group is a grown-up version of the buddy system. If we can find a few people who will form a truthseeking pod with us and help us do the hard work connected with it, it will move the needle--just a little bit, but with improvements that accumulate and compound over time. We will be more successful in fighting bias, seeing the world more objectively, and, as a result, we will make better decisions. As long as there are three people in the group (two to disagree and one to referee), the truthseeking group can be stable and productive.

Not all groups are created equal: a well-chartered group can be particularly useful for habits that are difficult to break or change. But while a group can function to be better than the sum of the individuals, it doesn't automatically turn out that way. Being in a group can improve our decision quality by exploring alternatives and recognizing where our thinking might be biased, but a group can also exacerbate our tendency to confirm what we already believe (echo chamber). Confirmatory thought amplifies bias and promotes a love and celebration of one's own beliefs. Exploratory thought encourages an open-minded and objective consideration of alternative hypotheses and a tolerance of dissent to combat bias. Exploratory thought helps the members of a group reason toward a more accurate representation of the world. Without an explicit charter for exploratory thought and accountability to that charter, our tendency when we interact with others follows our individual tendency. "Complex and open-minded thought is most likely to be activated when decision makers learn prior to forming any opinions that they will be accountable to an audience (a) whose views are unknown, (b) who is interested in accuracy, (c) who is reasonably well-informed, and (d) who has a legitimate reason for inquiring into the reasons behind participants' judgments/choices." (Lerner, Tetlock) Groups can improve the thinking of individual decision-makers when the individuals are accountable to a group whose interest is in accuracy; the charter should also encourage and celebrate a diversity of perspectives to challenge biased thinking by individual members. Jonathan Haidt (The Righteous Mind: Why Good People Are Divided by Politics and Religion): "If you put individuals together in the right way, such that some individuals can use their reasoning powers to disconfirm the claims of others, and all individuals feel some common bond or shared fate that allows them to interact civilly, you can create a group that ends up producing good reasoning as an emergent property of the social system. This is why it's so important to have intellectual and ideological diversity within any group or institution whose goal is to find truth." In combination, the advice of these experts in group interaction adds up to a blueprint for a truthseeking charter: (1) a focus on accuracy (over confirmation), which includes rewarding truthseeking, objectivity, and open-mindedness within the group; (2) accountability, for which members have advance notice; and (3) openness to a diversity of ideas.

The group rewards focus on accuracy -- a productive decision group can harness the desire for approval by rewarding accuracy and intellectual honesty with social approval.

"One Hundred White Castles... and a large chocolate shake": how accountability improves decision-making -- accountability is a willingness or obligation to answer for our actions or beliefs to others. A bet is a form of accountability. After spending time in that (poker players) kind of environment, you become hypervigilant about your level of confidence in your beliefs; it is truly putting your money where your mouth is. Accountability also improves our decision-making and information processing when we are away from the group because we know in advance that we will have to answer to the group for our decisions: imagining how the discussion will go helps us to spot more errors on our own and catch them more quickly.

The group ideally exposes us to a diversity of viewpoints -- diversity and dissent are not only checks on fallibility, but the only means of testing the ultimate truth of an opinion. If we take a bunch of people with that limitation [being a human and thus having only one point of view] and put them together into a group, we get exposed to diverse opinions, can test alternative hypotheses, and move toward accuracy. To get a more objective view of the world, we need an environment that exposes us to alternate hypotheses and different perspectives. To view ourselves in a more realistic way, we need other people to fill in our blind spots. A group with diverse viewpoints can help us by sharing the work to combat motivated reasoning and biased outcome fielding; by thinking in bets, we run through a series of questions to examine the accuracy of our beliefs, for example:
* Why might my belief not be true?
* What other evidence might be out there bearing on my belief?
* Are there similar areas I can look toward to gauge whether similar beliefs to mine are true?
* What sources of information could I have missed or minimized on the way to reaching my belief?
* What are the reasons someone else could have a different belief, what's their support, and why might they be right instead of me?
* What other perspectives are there as to why things turned out the way that they did?

By asking these questions, we are taking a big step toward calibration, but there is only so much we can do to answer these questions on our own. It is a lot easier to have someone else offer their perspective than for you to imagine you're another person and think about what their perspective might be. A diverse group can do some of the heavy lifting of de-biasing for us. Dissent channels and red teams are a beautiful implementation of John Stuart Mill's bedrock principle that we can't know the truth of a matter without hearing the other side. Diversity is the foundation of productive group decision-making, but we shouldn't underestimate how hard it is to maintain.

Federal judges: drift happens: Cass Sunstein (Harvard law professor) conducted a massive study on ideological diversity in federal judicial panels; when there was political diversity on the panels, that diversity improved the panel's work--a single panelist from the other party had "a large disciplining effect". The more homogeneous we get, the more the group will promote and amplify confirmatory thought.

Social psychologists: confirmatory drift and Heterodox Academy: First, there is a natural drift toward homogeneity and confirmatory thought; we all experience this gravitation toward people who think like we do. Second, groups with diverse viewpoints are the best protection against confirmatory thought; the opinions of the group members aren't much help if it is a group of clones.

Wanna bet (on science)?: Experts engaging in traditional peer review, providing their opinion on whether an experimental result would replicate, were right 58% of the time. A betting market in which the traders were the exact same experts and those experts had money on the line predicted correctly 71% of the time.

Chapter 5: Dissent to Win

CUDOS to a magician: CUDOS stands for Communism (data belonging to the group), Universalism (apply uniform standards to claims and evidence, regardless of where they came from), Disinterestedness (vigilance against personal conflicts that can influence the group's evaluation) and Organized Skepticism (discussion among the group to encourage engagement and dissent).

Mertonian communism: more is more: the communal ownership of data within groups. Any attempt at accuracy is bound to fall short if the truthseeking group has only limited access to potentially pertinent information; without all the facts, accuracy suffers. As a rule of thumb, if we have an urge to leave out a detail because it makes us uncomfortable or requires even more clarification to explain away, those are exactly the details we must share. The mere fact of our hesitation and discomfort is a signal that such information may be critical to providing a complete and balanced account. We are naturally reluctant to share information that could encourage others to find fault in our decision-making. A group can make this easier by making members feel good about committing themselves to improvement.

Universalism: don't shoot the message: "Truth-claims, whatever their source, are to be subjected to preestablished impersonal criteria." (Merton) Acceptance/rejection of an idea must not "depend on the personal or social attributes of their protagonist." Don't disparage or ignore an idea just because you don't like who or where it came from. The accuracy of the statement should be evaluated independent of its source. Nearly any group can create an exercise to develop and reinforce the open-mindedness universalism requires.

Disinterestedness: we all have a conflict of interest, and it's contagious: Conflicts of interest come in many flavors; our brains have built-in conflicts of interest, interpreting the world around us to confirm our beliefs (and more). We are not naturally disinterested. A group is less likely to succumb to ideological conflicts of interest when they don't know what the interest is. Another way a group can de-bias members is to reward them for skill in debating opposing points of view and finding merit in opposing positions. The group's reinforcement ought to discourage us from creating a straw-man argument when we're arguing against our beliefs, and encourage us to feel good about winning the debate. This is one of the reasons it's good for a group to have at least three members, two to disagree and one to referee.

Organized skepticism: real skeptics make arguments and friends: True skepticism is consistent with good manners, civil discourse, and friendly communications. Skepticism is about approaching the world by asking why things might not be true rather than why they are true. Thinking in bets embodies skepticism by encouraging us to examine what we do and don't know and what our level of confidence is in our beliefs and predictions. This moves us closer to what is objectively true. Without embracing uncertainty, we can't rationally bet on our beliefs, and we need to be particularly skeptical of information that agrees with us because we know that we are biased to just accept and applaud confirming evidence. When we implement the norm of skepticism, we naturally modulate expression of dissent with others. Organized skepticism invites people into a cooperative exploration. Skepticism should be encouraged and, where possible, operationalized.

Communicating with the world beyond our group: we have to take the most constructive, civil elements of truthseeking communication and introduce them carefully. First, express uncertainty. Second, lead with assent; listen for things you agree with, state those and be specific, and then follow with an "and" instead of "but". Third, ask for a temporary agreement to engage in truthseeking; if someone is offloading emotion to us, we can ask them if they are just looking to vent or if they are looking for advice. Finally, focus on the future; rather than rehashing what has already happened, try instead to engage about what the person might do so that things will turn out better going forward.

Chapter 6: Adventures in Mental Time Travel

Let Marty McFly run into Marty McFly: in real-life decision-making, when we bring our past- or future-self into the equation, the space-time continuum doesn't unravel; a visit from past- or future-us helps present-us make better bets.

Night Jerry: "Night Guy always screws Morning Guy." When we make in-the-moment decisions (and don't ponder the past or future) we are more likely to be irrational and impulsive. This is called temporal discounting; we are willing to take an irrationally large discount to get a reward now instead of waiting for a bigger reward later.

Moving regret in front of our decisions: Regret is one of the most intense emotions we feel, but it's arguable whether it's productive or useful. The problem isn't so much whether regret is an unproductive emotion; it's that regret occurs after the fact, instead of before. Suzy Welch developed 10-10-10: "What are the consequences of each of my options in ten minutes? Ten months? Ten years?" We can build on Welch's tool by asking the question through the frame of the past: "How would I feel today if I had made this decision ten minutes ago? Ten months ago? Ten years ago?" Moving regret in front of a decision has numerous benefits. First, it can influence us to make a better decision. Second, it helps us treat ourselves (regardless of the actual decision) more compassionately after the fact. We can anticipate and prepare for negative outcomes; we can devise a plan to respond to a negative outcome instead of just reacting to it; we can also familiarize ourselves with the likelihood of a negative outcome and how it will feel.

A flat tire, the ticker, and a zoom lens: Flat tire in a downpour--how does it feel? Likely, like the worst moment of your life. But if the flat tire had happened a year ago, do you think it would have an effect on your happiness today, or your overall happiness over the past year? Probably not; it wouldn't cause your overall happiness to tick up or down. In our decision-making lives, we aren't that good at taking this kind of perspective; it just feels how it feels in the moment and we react to it.

"Yeah, but what have you done for me lately?": the way we field outcomes is path dependent; it doesn't matter so much where we end up as how we got there. What has happened in the recent past drives our emotional response much more than how we are doing overall. Our in-the-moment emotions affect the quality of the decisions we make in those moments, and we are very willing to make decisions when we are not emotionally fit to do so.

Tilt: from pinball machines; when the emotional center of the brain starts pinging, the limbic system (specifically the amygdala) shuts down the prefrontal cortex. We light up, then we shut down our cognitive control center. By recognizing in advance the verbal and physiological signs that ticker watching is making us tilt, we can commit to developing habit routines for those moments. We can precommit to walk away from the situation when we feel the signs of tilt, taking some space until we calm down and get some perspective, recognizing that when we are on tilt we aren't decision-fit.

Ulysses contracts: time traveling to precommit -- past-us preventing present-us from doing something stupid has become known as a Ulysses contract (Ulysses/Odysseus filled his sailors' ears with wax and had himself tied to the mast so that he alone could safely hear the Sirens). It's the perfect interaction between past-you, present-you, and future-you. Ulysses contracts can be barrier-raising (against irrational or undesirable behavior) or barrier-reducing (to increase the ease of making the desirable decision).

Decision swear jar: a simple kind of precommitment contract to implement accountability.

Reconnaissance: mapping the future -- for us to make better decisions, we need to perform reconnaissance on the future. If a decision is a bet on a particular future based on our beliefs, then before we place a bet we should consider in detail what those possible futures might look like. Figure out the possibilities, then take a stab at the probabilities. Scenario planning: consider a broad range of possibilities for how the future might unfold to help guide long-term planning and preparation. After identifying as many of the possible outcomes as we can, we want to make our best guess at the probability of each of those futures occurring. The reason we do reconnaissance is because we are uncertain; we don't (and likely can't) know how often things will turn out a certain way with exact precision. It's not about approaching our future predictions from a point of perfection; it's about acknowledging that we're already making a prediction about the future every time we make a decision, so we're better off if we make that explicit. If we're worried about guessing, we're already guessing. By at least trying to assign probabilities, we will naturally move away from the default of 0% or 100%, away from being sure it will turn out one way and not another. Scouting various futures has numerous additional benefits. First, scenario planning reminds us that the future is inherently uncertain; by making that explicit in our decision-making process, we have a more realistic view of the world. Second, we are better prepared for how we are going to respond to different outcomes that might result from our initial decision. We can anticipate positive or negative developments and plan our strategy, rather than being reactive. If our reconnaissance has identified situations where we are susceptible to irrationality, we can try to bind our hands with a Ulysses contract. Third, anticipating the range of outcomes also keeps us from unproductive regret (or undeserved euphoria) when a particular future happens. Finally, by mapping out the potential futures and probabilities, we are less likely to fall prey to resulting or hindsight bias.
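A toy expected-value sketch of this kind of reconnaissance (the scenario names, probabilities, and payoffs are invented for illustration, not from the book): name the plausible futures, force the probabilities to be explicit and sum to 1, then look at what the decision is worth on average rather than in the single future we happen to be sure of.

```python
# Invented scenario plan: (probability, payoff) per possible future.
scenarios = {
    "strong success": (0.25, 100),
    "modest success": (0.45, 20),
    "failure":        (0.30, -40),
}

# Making the guesses explicit also makes them checkable.
total_p = sum(p for p, _ in scenarios.values())
assert abs(total_p - 1.0) < 1e-9, "probabilities must cover all futures"

expected_value = sum(p * payoff for p, payoff in scenarios.values())
print(f"expected value: {expected_value:+.1f}")  # 25.0 + 9.0 - 12.0 = +22.0
```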

Backcasting: working backward from a positive future -- If we were contemplating a thousand-mile walk, we'd be better off imagining ourselves looking back from the destination and figuring how we got there. When it comes to advance thinking, standing at the end and looking backward is much more effective than looking forward from the beginning. Imagining a successful future and backcasting from there is a useful time-travel exercise for identifying necessary steps for reaching our goals; working backward helps even more when we give ourselves the freedom to imagine an unfavorable future.

Premortems: working backward from a negative future -- A premortem is where we check our positive attitude at the door and imagine not achieving our goals. Despite the popular wisdom that we achieve success through positive visualization, it turns out that incorporating negative visualization makes us more likely to achieve our goals. Gabriele Oettingen (Rethinking Positive Thinking: Inside the New Science of Motivation) has conducted over twenty years of research, consistently finding that people who imagine obstacles in the way of reaching their goals are more likely to achieve success, a process she has called "mental contrasting". We need to have positive goals, but we are more likely to execute on those goals if we think about negative futures. We start a premortem by imagining that we failed to reach our goal; then we imagine why. All those reasons why we didn't achieve our goal help us anticipate potential obstacles and improve our likelihood of succeeding. The key to a successful premortem is that everyone feels free to look for the most creative, relevant, and actionable reasons why things didn't work out, and they are motivated to scour everything--personal experience, company experience, historical precedent, sports analogies, etc.--to come up with ways a decision or plan can go bad, so the team can anticipate and account for them.

Dendrology and hindsight bias (or, Give the chainsaw a rest): think about time as a tree. The trunk is the past. A tree has only one growing trunk, just as we have only one accumulating past. Branches are the potential futures: thicker branches are the equivalent of more probable futures, thinner branches are less probable ones. As the future becomes the past, what happens to all those branches? The ever-advancing present acts like a chainsaw: when one of those many branches happens to be the way things turn out, present-us cuts off all the other branches that didn't materialize and obliterates them. When we look into the past and see the only thing that happened, it seems to have been inevitable. Why wouldn't it? That's hindsight bias, an enemy of probabilistic thinking. By keeping an accurate representation of what could have happened (and not a version edited by hindsight), memorializing the scenario plans and decision trees we create through a good planning process, we can be better calibrators going forward.
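One way to make that memorializing concrete -- a minimal sketch of my own, not a structure from the book; the Branch/DecisionRecord names and the launch example are invented: record every branch with the probability estimated *before* the outcome, and keep the record after one branch materializes.

```python
from dataclasses import dataclass, field

@dataclass
class Branch:
    name: str
    prior_probability: float  # estimated before the future unfolded
    occurred: bool = False

@dataclass
class DecisionRecord:
    decision: str
    branches: list[Branch] = field(default_factory=list)

    def resolve(self, outcome: str) -> None:
        # Mark which branch materialized; crucially, keep the others.
        for b in self.branches:
            b.occurred = (b.name == outcome)

record = DecisionRecord("launch the product in Q3", [
    Branch("beats forecast", 0.2),
    Branch("meets forecast", 0.5),
    Branch("misses forecast", 0.3),
])
record.resolve("misses forecast")
for b in record.branches:
    print(f"{b.name}: estimated {b.prior_probability:.0%}"
          + ("  <- what happened" if b.occurred else ""))
```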


Tags: reading   books   management   psychology   thinking  

Last modified 16 December 2024