Farnam Street Blog

A mental model is a simplified explanation of how something works. Any idea, belief, or concept can be boiled down to its essence. Like a map, mental models highlight key information while ignoring irrelevant details. They’re tools for compressing complexity into manageable chunks. Mental models help us understand the world. For example, velocity shows that both speed and direction matter. Reciprocity reveals how being positive and taking initiative gets the world to do most of the work for you. Margin of Safety reminds us that things don’t always go as planned. Relativity exposes our blind spots and shows how a different perspective can reveal new information. These are just a few examples.

General Thinking

The Map is Not the Territory

The map is not the territory reminds us that our mental models of the world are not the same as the world itself. It cautions against confusing our abstractions and representations with the complex, ever-shifting reality they aim to describe. It is dangerous to mistake the map for the territory. Consider the person with an outstanding résumé who checks all the boxes on paper but can’t do the job. Updating our maps is a difficult process of reconciling what we want to be true with what is true. In many areas of life, we are offered maps by other people. We are reliant on the maps provided by experts, pundits, and teachers. In these cases, the best we can do is to choose our mapmakers wisely and to seek out those who are rigorous, transparent, and open to revision. Ultimately, the map/territory distinction invites us to engage with the world as it is, not just as we imagine it. And remember, when you don’t make the map, choose your cartographer wisely.

Circle of Competence

The first rule of competition is that you are more likely to win if you play where you have an advantage. Playing to your advantage requires a firm understanding of what you know and don’t know. Your circle of competence is your personal sphere of expertise, where your knowledge and skills are concentrated. It’s the domain where you have a deep understanding, where your judgments are reliable, and your decisions are sound. The size of your circle isn’t as important as knowing the boundaries. The wise person knows the limits of their knowledge and can confidently say, “This falls within my circle,” or “This is outside my area of expertise.” While operating within your circle of competence is a recipe for confidence and effectiveness, venturing outside your circle of competence is a recipe for trouble. You’re like a sailor navigating unfamiliar waters without a map, at the mercy of currents and storms you don’t fully understand. This isn’t to say that you should never venture outside your circle. Learning new things, gaining new skills, and mastering new domains is one of the most beautiful things about life.

Celebrate your expertise, but also acknowledge your limitations.

First Principles Thinking

First principles thinking is the art of breaking down complex problems into their fundamental truths. It’s a way of thinking that goes beyond the surface and allows us to see things from a new perspective. Thinking in first principles allows us to identify the root causes, strip away the layers of complexity, and focus on the most effective solutions. Reasoning from first principles allows us to step outside the way things have always been done and instead see what is possible. First principles thinking is not easy. It requires a willingness to challenge the status quo. This is why it’s often the domain of rebels and disrupters who believe there must be a better way. It’s the thinking of those willing to start from scratch and build from the ground up.

Thought Experiment

Thought experiments are the sandbox of the mind, the place where we can play with ideas without constraints. They’re a way of exploring the implications of our theories, of testing the boundaries of our understanding. They offer a powerful tool for clarifying our thinking, revealing hidden assumptions, and showing us unintended consequences.

The power of thought experiments lies in their ability to create a simplified model of reality where we can test our ideas. In the real world, confounding factors and messy details obscure the core principles at work. Thought experiments allow us to strip away the noise and focus on the essence of the problem.

Thought experiments remind us that some of the most profound insights and innovations start with a simple question: What if?

Second-Order Thinking

Second-order thinking is a method of thinking that goes beyond the surface level, beyond the knee-jerk reactions and short-term gains. It asks us to play the long game, to anticipate the ripple effects of our actions, and to make choices that will benefit us not just today but in the months and years to come. Second-order thinking demands we ask: And then what? Think of a chess master contemplating her next move. She doesn’t just consider how the move will affect the next turn but how it will shape the entire game. She’s thinking many steps ahead. She’s considering her own strategy and her opponent’s likely response. In our daily lives, we’re often driven by first-order thinking. We make decisions based on what makes us happy now, what eases our current discomfort, or satisfies our immediate desires. Second-order thinking asks us to consider the long-term implications of our choices and to make decisions based not just on what feels good now but on what will lead to the best outcomes over time. In the end, second-order thinking is about playing the long game. It’s about making choices for the next move and the entire journey.

Probabilistic Thinking

Probabilistic thinking is the art of navigating uncertainty. Successfully thinking in shades of probability means roughly identifying what matters, calculating the odds, checking our assumptions, and then deciding. The challenge of probabilistic thinking is that it requires constant updating. As new information emerges, the probabilities change. What seemed likely yesterday may seem unlikely today. This explains why probabilistic thinkers always revise their beliefs with new data and why it’s uncomfortable for many people. It’s much easier to believe something false is accurate than to deal with the fact that we might be wrong. Being a probabilistic thinker means being willing to say, “I don’t know for sure, but based on the evidence, I think there’s a 63 percent chance of X.” The rewards of probabilistic thinking are immense. By embracing uncertainty, we can make better decisions, avoid the pitfalls of overconfidence, and navigate complex situations with greater skill and flexibility. We can be more open-minded, more receptive to new ideas, and more resilient in the face of change.
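
To make the updating concrete, here is a minimal Python sketch of revising a belief with Bayes’ rule as evidence arrives. The prior and the likelihood numbers are invented for illustration; they are not part of the original idea.

    # A minimal sketch of probabilistic updating with Bayes' rule.
    # The prior and likelihoods below are invented numbers for illustration.

    def update(prior, p_evidence_if_true, p_evidence_if_false):
        """Return P(hypothesis | evidence) from a prior and two likelihoods."""
        numerator = prior * p_evidence_if_true
        denominator = numerator + (1 - prior) * p_evidence_if_false
        return numerator / denominator

    belief = 0.50                                  # start undecided about X
    for likelihoods in [(0.8, 0.3), (0.7, 0.4), (0.4, 0.7)]:
        belief = update(belief, *likelihoods)
        print(f"Updated belief in X: {belief:.2f}")
    # Each new observation shifts the estimate up or down; what looked
    # likely after the first two pieces of evidence looks less certain
    # once the third points the other way.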

Inversion

Much of success comes from simply avoiding common paths to failure. Inversion is not the way we are taught to think. We are taught to identify what we want and explore things that will move us closer to our objective. However, avoiding the things that guarantee failure dramatically increases our odds of success. We can get fixated on solving problems one way, missing simpler solutions. Inversion breaks us out of this tunnel vision. Instead of “How do I solve this?”, inversion asks, “What would guarantee failure?” Rather than “How can I achieve this?”, it asks “What’s preventing me from achieving it?” This flip reveals insights our usual thinking overlooks. When facing a tricky problem or ambitious goal, try inverting. Ask how you’d guarantee failure. The answers may surprise you—and unlock new solutions.

Occam’s Razor

Occam’s razor is the intellectual equivalent of “keep it simple.” When faced with competing explanations or solutions, Occam’s razor suggests that the correct explanation is most likely the simplest one, the one that makes the fewest assumptions. This doesn’t mean the simplest theory is always true, only that it should be preferred until proven otherwise. Sometimes, the truth is complex, and the simplest explanation doesn’t account for all the facts. The key to wielding this model is understanding when it works for you and against you. A theory that is too simple fails to capture reality, and one that is too complex collapses under its own weight.

Hanlon’s Razor

Hanlon’s razor is a mental safeguard against the temptation to label behavior as malicious when incompetence is the more likely explanation. It’s a reminder that people are not out to get you, and it’s best to assume good faith and resist the urge to assign sinister motives without overwhelming evidence. This isn’t to say that genuine malice doesn’t exist. Of course, it does. But in most interactions, stupidity is a far more common explanation than malevolence. People make mistakes. They forget things. They speak without thinking. They prioritize short-term wins over long-term wins. They act on incomplete information. They fall prey to bias and prejudice. From the outside, these actions might look like deliberate attacks, but the reality is far more mundane. Hanlon’s razor’s real power lies in how it shifts our perspective. When we assume stupidity rather than malice, we respond differently. Instead of getting defensive or lashing out, we approach the situation with empathy and clarity. For most daily frustrations and confusion, Hanlon’s razor is a powerful reminder to approach problems with a spirit of generosity. It’s a way to reduce drama and stress and find practical solutions instead of descending into blame and escalation.

Systems Thinking

Feedback Loops

Feedback loops are the engines of growth and change. They’re the mechanisms by which the output of a system influences its input.

Complex systems often have many feedback loops, and it can be hard to appreciate how adjusting to feedback in one part of the system will affect the rest.

Using feedback loops as a mental model begins with noticing the feedback you give and respond to daily. The model also provides insight into the value of iterations in adjusting based on the feedback you receive. With this lens, you gain insight into where to direct system changes based on feedback and the pace you need to go to monitor the impacts.

Feedback loops are what make systems dynamic. Without feedback, a system does the same thing over and over. Understand them, respect them, and use them wisely.
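
As a concrete illustration, here is a toy Python sketch of the two basic kinds of loop, a reinforcing one and a balancing one. The growth rate, target, and correction factor are arbitrary numbers chosen for the example.

    # Toy illustration of feedback: the output of each step becomes part of
    # the next step's input. All numbers are arbitrary.

    # Reinforcing loop: growth feeds on itself (e.g., interest on a balance).
    balance = 100.0
    for step in range(5):
        balance += balance * 0.10          # 10% of the output re-enters as input
    print(f"Reinforcing loop after 5 steps: {balance:.2f}")

    # Balancing loop: the output is pulled back toward a target (a thermostat).
    temperature, target = 15.0, 21.0
    for step in range(5):
        temperature += 0.5 * (target - temperature)   # correct half the gap
    print(f"Balancing loop after 5 steps: {temperature:.2f}")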

Equilibrium

Equilibrium is the state of balance, where opposing forces cancel each other out. It’s the calm in the storm’s center, the stable point around which the chaos swirls. In a system at equilibrium, there’s no net change. Everything is in a steady state, humming along at a constant pace.

However, systems are rarely static. They continuously adjust toward equilibrium but rarely stay in balance for long.

Equilibrium is a double-edged sword, offering both stability and stagnation. In our lives, we often act like we can reach an equilibrium: once we get into a relationship, we’ll be happy; once we move, we’ll be productive; once X thing happens, we’ll be in Y state. But things are always in flux. We don’t reach a certain steady state and then stay there forever. The endless adjustments are our lives. The trick is to find the right balance, strive for equilibrium where it’s needed, and know when to break free and embrace the disequilibrium that drives progress.

Bottlenecks

Bottlenecks are the choke points, the narrow parts of the hourglass where everything slows down. They’re the constraints that limit the flow, the weakest links in the chain that determine the strength of the whole. In any system, the bottleneck is the part holding everything else back.

The tricky thing about bottlenecks is that they’re not always obvious. It’s easy to focus on the parts of the system that are moving quickly and assume everything is fine. But the real leverage is in finding and fixing the bottlenecks. Speed up the slowest part, and you speed up the whole system.

This is the theory of constraints in a nutshell. Figure out your bottleneck and focus all your efforts on alleviating it. Don’t waste time optimizing the parts that are already fast. They’re not the limiting factor.
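
A small sketch makes the theory-of-constraints point visible: the pipeline below moves only as fast as its slowest stage, so improving a non-bottleneck changes nothing. The stage names and rates are hypothetical.

    # Hypothetical three-stage pipeline; the rates are units per hour.
    stages = {"prep": 120, "assembly": 40, "packing": 90}

    def throughput(stages):
        # The whole system can only move as fast as its slowest stage.
        return min(stages.values())

    print(throughput(stages))        # 40: assembly is the bottleneck
    stages["packing"] = 200          # optimize a non-bottleneck...
    print(throughput(stages))        # still 40, nothing changed
    stages["assembly"] = 60          # relieve the actual bottleneck...
    print(throughput(stages))        # 60, the whole system speeds up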

However, bottlenecks aren’t always the villains we make them out to be. Sometimes, they’re a necessary part of the system. Think of a security checkpoint at an airport. It slows everything down, but it’s there for a reason. Remove it, and you might speed things up, but at the cost of safety.

The key is to be intentional about your bottlenecks. Choose them wisely, and make sure they’re serving a purpose. A deliberate bottleneck can be a powerful tool for focusing effort and maintaining quality. An accidental bottleneck is just a drag on the system.

Bottlenecks are leverage points where a little effort can go a long way.

Scale

Systems change as they scale up or down; neither is intrinsically better or worse. The right scale depends on your goals and the context. If you want to scale something up, you need to anticipate that new problems will keep arising – problems that didn’t exist on a smaller scale. Or you might need to keep solving the same problems in different ways.

Think about a recipe. If you’re making a cake for four people, you use a certain amount of ingredients. But if you want to make a cake for four hundred people, you don’t just multiply the ingredients by one hundred. That’s not how scale works. You need to change the process and use bigger mixers and bigger ovens. You need a system that can handle the increased volume without breaking down.

The challenge with scale is that it’s not always obvious how to achieve it. What works for a small system often breaks down at larger volumes. You have to anticipate the bottlenecks and the points where the system will strain under the increased load. And you have to be ready to re‑engineer your processes as you grow.

If you’re building something, always be thinking about scale. How will this work when you have ten times as many customers? One hundred times? One thousand times? Build with scale in mind from the start, and you’ll be ready for the growth when it comes.

Margin of Safety

Margin of safety is a secret weapon. It’s the buffer, the extra capacity, the redundancy that you build into a system to handle unexpected stress. It’s the difference between a bridge that can barely handle the expected load and one that can handle ten times that load without breaking a sweat.
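
In numbers, the idea is just a ratio between what you build for and what you expect. Here is a back-of-the-envelope sketch; the loads are made up for illustration.

    # Back-of-the-envelope safety factor with made-up numbers.
    expected_load = 10_000     # heaviest load we realistically expect (kg)
    design_capacity = 25_000   # what the system is actually built to handle (kg)

    safety_factor = design_capacity / expected_load
    spare_capacity = design_capacity - expected_load
    print(f"Safety factor: {safety_factor:.1f}x, spare capacity: {spare_capacity} kg")
    # A 2.5x factor means the estimate can be badly wrong, or the load can
    # spike, long before the system gets anywhere near its breaking point.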

You can apply a margin of safety to any area of life with uncertainty and risk. The key is always to ask yourself: What if I’m wrong? What if things don’t go as planned? How much extra capacity must I build to handle the unexpected?

But here’s the rub: margin of safety isn’t free. It means spending more upfront. In the short term, you’ll look overly cautious and leave immediate profits on the table. But in the long run, this apparent overcaution lets you survive when others break – and thrive when others merely survive.

Margin of safety is the unsung hero of long-term success. It’s not flashy. It’s not exciting, but it’s the foundation on which everything else is built. Master it, and you’ll be well on your way to navigating the uncertainties of life with confidence and stability.

Churn

Churn is the silent killer of businesses. It’s the slow leak, the constant drip of customers slipping away, of users drifting off to find something new. The attrition eats away at your growth, forcing you to keep running just to stay in place. The thing about churn is that it’s often hidden. It’s not like a sudden crisis that grabs your attention. It’s a slow, quiet process that happens in the background.
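
The arithmetic shows why churn forces you to keep running just to stay in place: with a steady stream of new customers and a constant churn rate, the customer base stalls at new customers divided by the churn rate. The figures below are hypothetical.

    # Hypothetical: 500 new customers a month, 5% of the existing base lost each month.
    new_per_month = 500
    monthly_churn = 0.05

    customers = 2_000
    for month in range(120):
        customers = customers * (1 - monthly_churn) + new_per_month

    print(round(customers))                      # creeps toward 10,000 and stalls
    print(round(new_per_month / monthly_churn))  # the ceiling: new / churn = 10,000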

Churn can present opportunity. Like a snake shedding its skin, replacing components of a system is a natural part of keeping it healthy. New parts can improve functionality.

When we use this model as a lens, we see that new people bring new ideas, and counterintuitively, some turnover allows us to maintain stability. Replacing what is worn out also allows us to upgrade and expand our capabilities, creating new opportunities. Some churn is inevitable. Too much can kill you.

Algorithms

Algorithms are recipes. A list of crisp, unambiguous steps that tell you how to get from point A to point B. But they’re more than just directions. Algorithms are if-then machines for tuning out the noise and zeroing in on the signal. Have the specs been met? Follow the algorithm and find out. Thinking algorithmically means searching for processes that reliably spit out the desired results, like a vending machine dispensing the same candy bar every time someone punches in E4.
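
Here is a minimal example of that if-then machine: a spec check that returns the same verdict every time it is given the same input. The spec thresholds are invented for the example.

    # A tiny if-then machine: the same input always yields the same verdict.
    # The spec thresholds are invented for illustration.

    def meets_spec(width_mm, weight_g):
        if not (48.0 <= width_mm <= 52.0):   # width must fall inside tolerance
            return False
        if weight_g > 200:                   # weight must not exceed the limit
            return False
        return True

    print(meets_spec(50.1, 180))   # True
    print(meets_spec(53.0, 180))   # False, and it will be False every time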

Critical Mass

Critical mass isn’t just a science term; it’s a guide for understanding that often things happen slowly and then all at once. It’s the moment when a system goes from sputtering along to explosive growth. Like a nuclear chain reaction, once you hit critical mass, the reaction becomes self-sustaining.

Through this lens we gain insight into the amount of material needed for a system to change from one state to another. Material can be anything from people and effort to raw material. When enough material builds up, systems reach their tipping point. When we keep going, we get sustainable change.
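
A toy adoption model shows the “slowly, then all at once” shape: growth depends both on how many have already adopted and on how many are left to adopt, so it crawls until enough material accumulates and then takes off. The rate and population below are arbitrary.

    # Toy adoption curve with arbitrary numbers: logistic-style growth.
    adopters, population = 10.0, 10_000
    rate = 0.00008                 # arbitrary contact/conversion rate per week

    for week in range(31):
        if week % 5 == 0:
            print(f"week {week:2d}: {int(adopters):5d} adopters")
        adopters += rate * adopters * (population - adopters)
    # The count stays small for the first several weeks, then the curve goes
    # nearly vertical and saturates close to the full population.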

Using critical mass as a lens for situations where you want different outcomes helps you identify both the design elements you need to change and the work you need to put in.

Emergence

Nearly everything is an emergent effect – a table, a space shuttle, even us – combinations of ingredients that come together in a specific way to create something new. Emergence is the universe’s way of reminding us that when we combine different pieces in new ways, we get results that are more than the sum of their parts, often in the most unexpected and thrilling ways.

Using this mental model is not about predicting emergent properties but acknowledging they are possible. There is no need to stick with what you know; mix it up and see what happens. Learn new skills, interact with new people, read new things.

Irreducibility

Irreducibility is about essence. It’s the idea that some things can’t be broken down into smaller parts without losing what makes them tick. It’s the idea that not everything can be explained by looking at its components. Emergent properties arise from complex systems that can’t be predicted by studying the individual parts.

Grappling with irreducibility requires a shift in thinking. Instead of trying to break things down, sometimes you have to zoom out. Look at the big picture. Embrace the complexity. Because some problems don’t have neat, modular solutions. They’re irreducibly messy.

Using irreducibility as a lens helps you focus on what you can change by understanding what really matters.

Law of Diminishing Returns

Diminishing returns is the idea that the easy wins usually come first. The more you optimize a system, the harder it gets to eke out additional improvements, like squeezing juice from a lemon. The first squeeze is easy. The second takes a bit more work. By the tenth squeeze, you’re fighting for every last drop.

Every bit of effort translates into significant gains when you’re a beginner. But as you level up, progress becomes more incremental. It takes more and more work to get better and better. That’s why going from good to great is much harder than going from bad to good.
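
A quick numeric sketch of the same idea, arbitrarily modeling skill as the square root of hours practiced, so each additional block of effort buys less than the one before.

    import math

    # Arbitrary model: skill grows with the square root of hours practiced.
    for hours in (10, 100, 1_000, 10_000):
        print(f"{hours:6d} hours -> skill {math.sqrt(hours):6.1f}")
    # Going from 10 to 100 hours roughly triples the result for 90 extra hours;
    # going from 1,000 to 10,000 hours buys the same ~3x jump but costs 9,000.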

Understanding diminishing returns is crucial for allocating resources efficiently. You want to focus on where you can get the biggest bang for your buck. Sometimes, that means knowing when to stop optimizing and move on to something else.

The Mental Models of Military and War

Seeing the Front

One of the most valuable military tactics is the habit of “personally seeing the front” before making decisions – not always relying on advisors, maps, and reports, all of which can be faulty or biased. The Map/Territory model, along with the incentive model, illustrates the problem of not seeing the front. Leaders of any organization can generally benefit from seeing the front, as it provides firsthand information and tends to improve the quality of secondhand information. (Also known as “Eating Your Own Dog Food” or “Working the Phones.”)

Asymmetric Warfare

The asymmetry model leads to an application in warfare whereby one side seemingly “plays by different rules” than the other side due to circumstance. Generally, this model is applied by an insurgency with limited resources. Unable to out-muscle their opponents, asymmetric fighters use other tactics, as with terrorism creating fear that’s disproportionate to their actual destructive ability.

Two-Front War

The Second World War was a good example of a two-front war. Once Russia and Germany became enemies, Germany was forced to split its troops and send them to separate fronts, weakening its impact on both fronts. Opening a two-front war can often be a useful tactic, as can solving a two-front war or avoiding one, as in the example of an organization tamping down internal discord to focus on its competitors.

Counterinsurgency

Though asymmetric insurgent warfare can be extremely effective, competitors have developed counterinsurgency strategies over time. Recently and famously, General David Petraeus of the United States led the development of counterinsurgency plans that involved no additional force but yielded substantial gains. Tit-for-tat warfare or competition often leads to a feedback loop that demands insurgency and counterinsurgency.

Mutually Assured Destruction

Somewhat paradoxically, the stronger two opponents become, the less likely they may be to destroy one another. This process of mutually assured destruction occurs not just in warfare, as with the development of global nuclear warheads, but also in business, as with the avoidance of destructive price wars between competitors. However, in a fat-tailed world, it is also possible that mutually assured destruction scenarios simply make destruction more severe in the event of a mistake (pushing destruction into the “tails” of the distribution).

Human Nature and Judgment

Trust

Fundamentally, the modern world operates on trust. Familial trust is generally a given (otherwise we’d have a hell of a time surviving), but we also choose to trust chefs, clerks, drivers, factory workers, executives, and many others. A trusting system is one that tends to work most efficiently; the rewards of trust are extremely high.

Bias from Incentives

Highly responsive to incentives, humans have perhaps the most varied and hardest to understand set of incentives in the animal kingdom. This causes us to distort our thinking when it is in our own interest to do so. A wonderful example is a salesman truly believing that his product will improve the lives of its users. It’s not merely convenient that he sells the product; the fact of his selling the product causes a very real bias in his own thinking.

Pavlovian Association

Ivan Pavlov very effectively demonstrated that animals can respond not just to direct incentives but also to associated objects; remember the famous dogs salivating at the ring of a bell. Human beings are much the same and can feel positive and negative emotion towards intangible objects, with the emotion coming from past associations rather than direct effects.

Tendency to Feel Envy & Jealousy

Humans have a tendency to feel envious of those receiving more than they are, and a desire to “get what is theirs” in due course. The tendency towards envy is strong enough to drive otherwise irrational behavior, but it is as old as humanity itself. Any system ignorant of envy effects will tend to self-immolate over time.

Tendency to Distort Due to Liking/Loving or Disliking/Hating

Based on past association, stereotyping, ideology, genetic influence, or direct experience, humans have a tendency to distort their thinking in favor of people or things that they like and against people or things they dislike. This tendency leads to overrating the things we like and underrating or broadly categorizing things we dislike, often missing crucial nuances in the process.

Denial

Anyone who has been alive long enough realizes that, as the saying goes, “denial is not just a river in Africa.” This is powerfully demonstrated in situations like war or drug abuse, where denial has powerful destructive effects but allows for behavioral inertia. Denying reality can be a coping mechanism, a survival mechanism, or a purposeful tactic.

Availability Heuristic

One of the most useful findings of modern psychology is what Daniel Kahneman calls the Availability Bias or Heuristic: We tend to most easily recall what is salient, important, frequent, and recent. The brain has its own energy-saving and inertial tendencies that we have little control over – the availability heuristic is likely one of them. Having a truly comprehensive memory would be debilitating. Some sub-examples of the availability heuristic include the Anchoring and Sunk Cost Tendencies.

Representativeness Heuristic

The three major psychological findings that fall under Representativeness, also defined by Kahneman and his partner Tversky, are:

  1. Failure to Account for Base Rates: An unconscious failure to look at past odds in determining current or future behavior (see the worked example after this list).
  2. Tendency to Stereotype: The tendency to broadly generalize and categorize rather than look for specific nuance. Like availability, this is generally a necessary trait for energy-saving in the brain.
  3. Failure to See False Conjunctions: Most famously demonstrated by the Linda Test, the same two psychologists showed that students chose more vividly described individuals as more likely to fit into a predefined category than individuals with broader, more inclusive, but less vivid descriptions, even if the vivid example was a mere subset of the more inclusive set. These specific examples are seen as more representative of the category than those with the broader but vaguer descriptions, in violation of logic and probability.
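
To see why ignoring base rates misleads (item 1 above), here is a standard worked example with hypothetical numbers: a rare condition and a test that is 99 percent accurate.

    # Worked base-rate example (hypothetical numbers): a condition that 1 in
    # 1,000 people have, and a test that is 99% accurate either way.
    base_rate = 0.001            # P(condition)
    sensitivity = 0.99           # P(positive | condition)
    false_positive_rate = 0.01   # P(positive | no condition)

    p_positive = base_rate * sensitivity + (1 - base_rate) * false_positive_rate
    p_condition_given_positive = base_rate * sensitivity / p_positive
    print(f"{p_condition_given_positive:.0%}")   # about 9%, not 99%
    # Without the base rate, a positive test feels near-certain; with it,
    # most positives still come from the large healthy majority.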

Social Proof (Safety in Numbers)

Human beings are one of many social species, along with bees, ants, and chimps, among many more. We have a DNA-level instinct to seek safety in numbers and will look for social guidance of our behavior. This instinct creates a cohesive sense of cooperation and culture which would not otherwise be possible but also leads us to do foolish things if our group is doing them as well.

Narrative Instinct

Human beings have been appropriately called “the storytelling animal” because of our instinct to construct and seek meaning in narrative. It’s likely that long before we developed the ability to write or to create objects, we were telling stories and thinking in stories. Nearly all social organizations, from religious institutions to corporations to nation-states, run on constructions of the narrative instinct.

Curiosity Instinct

We like to call other species curious, but we are the most curious of all, an instinct that led us out of the savanna and drove us to learn a great deal about the world around us, using that information to create the world in our collective minds. The curiosity instinct leads to unique human behavior and forms of organization like the scientific enterprise. Even before there were direct incentives to innovate, humans innovated out of curiosity.

Language Instinct

The psychologist Steven Pinker calls our DNA-level instinct to learn grammatically constructed language the Language Instinct. The idea that grammatical language is not a simple cultural artifact was first popularized by the linguist Noam Chomsky. As we saw with the narrative instinct, we use these instincts to create shared stories, as well as to gossip, solve problems, and fight, among other things. Grammatically ordered language theoretically carries infinite varying meaning.

First-Conclusion Bias

As Charlie Munger famously pointed out, the mind works a bit like a sperm and egg: the first idea gets in and then the mind shuts. Like many other tendencies, this is probably an energy-saving device. Our tendency to settle on first conclusions leads us to accept many erroneous results and cease asking questions; it can be countered with some simple and useful mental routines.

Tendency to Overgeneralize from Small Samples

It’s important for human beings to generalize; we need not see every instance to understand the general rule, and this works to our advantage. With generalizing, however, comes a subset of errors when we forget about the Law of Large Numbers and act as if it does not exist. We take a small number of instances and create a general category, even if we have no statistically sound basis for the conclusion.

Relative Satisfaction/Misery Tendencies

The envy tendency is probably the most obvious manifestation of the relative satisfaction tendency, but nearly all studies of human happiness show that happiness depends on our state relative to our own past or to our peers, not on anything absolute. These relative tendencies cause us great misery or happiness in a very wide variety of objectively different situations and make us poor predictors of our own behavior and feelings.

Commitment & Consistency Bias

As psychologists have frequently and famously demonstrated, humans are subject to a bias towards keeping their prior commitments and staying consistent with their prior selves when possible. This trait is necessary for social cohesion: people who often change their conclusions and habits are often distrusted. Yet our bias towards staying consistent can become, as one wag put it, a “hobgoblin of foolish minds” – when it is combined with the first-conclusion bias, we end up landing on poor answers and standing pat even in the face of strong contrary evidence.

Hindsight Bias

Once we know the outcome, it’s nearly impossible to turn back the clock mentally. Our narrative instinct leads us to reason that we knew it all along (whatever “it” is), when in fact we are often simply reasoning post-hoc with information not available to us before the event. The hindsight bias explains why it’s wise to keep a journal of important decisions for an unaltered record and to re-examine our beliefs when we convince ourselves that we knew it all along.

Sensitivity to Fairness

Justice runs deep in our veins. In another illustration of our relative sense of well-being, we are careful arbiters of what is fair. Violations of fairness can be considered grounds for reciprocal action, or at least distrust. Yet fairness itself seems to be a moving target. What is seen as fair and just in one time and place may not be in another. Consider that slavery has been seen as perfectly natural and perfectly unnatural in alternating phases of human existence.

Tendency to Overestimate Consistency of Behavior (Fundamental Attribution Error)

We tend to over-ascribe the behavior of others to their innate traits rather than to situational factors, leading us to overestimate how consistent that behavior will be in the future. If behavior really flowed from fixed traits, predicting it would not be very difficult. Of course, in practice this assumption is consistently demonstrated to be wrong, and we are consequently surprised when others do not act in accordance with the “innate” traits we’ve endowed them with.

Influence of Stress (Including Breaking Points)

Stress causes both mental and physiological responses and tends to amplify the other biases. Almost all human mental biases become worse in the face of stress as the body goes into a fight-or-flight response, relying purely on instinct without the emergency brake of Daniel Kahneman’s “System 2” type of reasoning. Stress causes hasty decisions, immediacy, and a fallback to habit, thus giving rise to the elite soldiers’ motto: “In the thick of battle, you will not rise to the level of your expectations, but fall to the level of your training.”

Survivorship Bias

A major problem with historiography – our interpretation of the past – is that history is famously written by the victors. We do not see what Nassim Taleb calls the “silent grave” – the lottery ticket holders who did not win. Thus, we over-attribute success to things done by the successful agent rather than to randomness or luck, and we often learn false lessons by exclusively studying victors without seeing all of the accompanying losers who acted in the same way but were not lucky enough to succeed.

Tendency to Want to Do Something (Fight/Flight, Intervention, Demonstration of Value, etc.)

We might term this Boredom Syndrome: Most humans have the tendency to need to act, even when their actions are not needed. We also tend to offer solutions even when we do not have the knowledge to solve the problem.

Falsification / Confirmation Bias

What a man wishes, he also believes. Similarly, what we believe is what we choose to see. This is commonly referred to as the confirmation bias. It is a deeply ingrained mental habit, both energy-conserving and comfortable, to look for confirmations of long-held wisdom rather than violations. Yet the scientific process – including hypothesis generation, blind testing when needed, and objective statistical rigor – is designed to root out precisely the opposite, which is why it works so well when followed.

The modern scientific enterprise operates under the principle of falsification: A method is termed scientific if it can be stated so that a certain defined result would cause it to be proved false. Pseudo-knowledge and pseudo-science operate and propagate by being unfalsifiable. As with astrology, we cannot prove them either correct or incorrect because the conditions under which they would be shown false are never stated.

