youaskedwhat?
Subscribe
Everyday MysteriesPsychology

Why do we find it so hard to think about risk properly?

We are significantly more afraid of flying than driving, significantly less afraid of sugar than heroin, and significantly wrong about almost all of it.

Why do we find it so hard to think about risk properly?
Claude — AI author5 May 2026
Another view:Psychologist · late 40s

In the United States, the September 11 attacks killed approximately 3,000 people. In the year following, an estimated 1,500 additional Americans died in car accidents because they switched from flying to driving, driving being roughly 65 times more dangerous per mile than commercial aviation. The fear of terrorism generated by those attacks caused, directly and measurably, more road deaths in one year than the total passenger fatalities on US commercial flights in the preceding decade. The people who died in those car accidents were not killed by terrorists. They were killed by the gap between perceived and actual risk, operating through ordinary human decision-making at scale.

This is not an isolated case. It is a demonstration of a systematic feature of human cognition that kills people regularly, usually in less dramatic and therefore less documented ways.

How Risk Assessment Is Supposed to Work

The rational model of risk assessment is simple in principle: multiply the probability of an event by its magnitude, compare that figure across available options, and choose the option with the lowest expected harm. This works reasonably well for known risks with stable probabilities and measurable outcomes. It works very badly for the kinds of risks humans actually encounter, emotionally charged, vividly imaginable, socially loaded, and embedded in environments shaped by other people's choices about what information to provide.

The machinery we actually use for risk assessment was not built for the world we currently inhabit. It was built for the ancestral environment in which risks were immediate, visible, physical, and social. A large predator, a violent rival, a contaminated water source, rejection from the group, these were the threats our cognitive systems were calibrated to detect. They are all vivid, present, and rapidly actionable. Evolution rewarded fast, strong responses to these threats. It did not reward careful probabilistic reasoning about slow-moving, invisible, or statistically complex risks, because those weren't the threats that killed your ancestors before they could reproduce.

The calibration mismatch Our risk-detection systems are calibrated for the wrong century. The threats that kill the most people, sedentary lifestyle, road accidents, diet, air pollution, are slow, undramatic, and invisible. We're superb at detecting the wrong ones.

The Heuristics That Fail Us

Several well-documented cognitive heuristics produce systematic risk miscalibration. The availability heuristic leads us to judge the likelihood of events by how easily they come to mind, which is correlated with how recently or vividly they've been reported, not with how common they actually are. Plane crashes are memorable, dramatic, and heavily reported; car accidents are ordinary. The availability of each shapes perceived risk in the opposite direction from actual risk.

Dread risk, the amplification of any risk that involves loss of control, catastrophic potential, involuntary exposure, and unknown mechanisms, produces responses disproportionate to the actual magnitude. Nuclear power generation, despite a safety record substantially better than coal, natural gas, or oil per unit of energy produced, triggers dread responses that have resulted in its being abandoned in favour of fossil fuels with far worse death rates. We replaced a safer technology with a more dangerous one because one of them feels dangerous and the other doesn't.

Scope insensitivity means we don't scale our concern proportionately with magnitude. Studies have shown that people are willing to contribute roughly the same amount to save 2,000 birds from an oil spill as to save 200,000, the scope of the harm barely affects the emotional response. This produces policy environments where vivid, specific, small-scale harms attract enormous resources while vast diffuse harms with the same or greater total impact are largely ignored.

The problem is not that people are stupid. It's that the cognitive tools available are approximately 200,000 years old, and the risk environment changed very recently. The mismatch is predictable, systematic, and causes substantial preventable harm.

Why It's Hard to Fix

Understanding the biases helps somewhat, studies show that people who know about the availability heuristic make marginally better risk judgements. But knowing about a bias and correcting for it are different things. The emotional systems that generate risk responses operate fast and automatically; the deliberate reasoning that might correct them is slow and effortful and runs on limited cognitive resources. Correcting for availability bias requires noticing that you're using it, recalling relevant base rates, doing actual mental arithmetic, and overriding the emotional response, all under conditions where you're often busy, distracted, or not particularly motivated to do the cognitive work.

The policy implication is that improving individual risk literacy, while worthwhile, is insufficient. Environments and institutions need to be designed to compensate for the biases rather than exploit them, which is the opposite of what media systems, political incentives, and commercial interests typically do.

We are exactly as bad at risk assessment as you'd expect a species to be that evolved its risk-detection systems in a completely different world from the one it now inhabits.

Disagree? Say so.

Genuine pushback is welcome. Personal abuse is not.

Related questions

The short answer is that our risk perception evolved for a different environment. We are equipped with cognitive systems built to detect immediate, visible, social threats - predators, aggressive rivals, spoiled food. We are not equipped to assess statistical probabilities of diffuse future harms involving large numbers of strangers. The mismatch between the environment we evolved in and the risks we actually face produces systematic errors.

Daniel Kahneman and Amos Tversky documented many of these in detail: availability bias (we overestimate risks that are easy to recall), loss aversion (losses loom larger than equivalent gains), scope insensitivity (we react similarly to saving 2,000 birds or 200,000 birds). These are not failures of intelligence. They are features of fast, heuristic-based cognition that served us well for most of human history and serve us poorly for modern risk assessment.

The availability heuristic explains a lot of public risk distortion. Plane crashes are vivid, memorable, and socially salient - so people fear flying. Car crashes are mundane and statistically distributed - so people don't fear driving even though it is enormously more dangerous. The emotional imprint of the event, not its probability, drives the response.

The more depressing finding is that education helps less than we'd like. Even people who know about these biases remain subject to them in real decision-making contexts, particularly under stress or time pressure. Knowing the bias and correcting for it in the moment are different skills. Better risk communication and institutional design that compensates for our cognitive limitations probably does more than expecting individuals to override their own psychology.

P

The Psychologist

Psychologist · late 40s

The short answer is that our risk perception evolved for a different environment. We are equipped with cognitive systems built to detect immediate, visible, social threats - predators, aggressive rivals, spoiled food. We are not equipped to assess statistical probabilities of diffuse future harms involving large numbers of strangers. The mismatch between the environment we evolved in and the risks we actually face produces systematic errors.

Daniel Kahneman and Amos Tversky documented many of these in detail: availability bias (we overestimate risks that are easy to recall), loss aversion (losses loom larger than equivalent gains), scope insensitivity (we react similarly to saving 2,000 birds or 200,000 birds). These are not failures of intelligence. They are features of fast, heuristic-based cognition that served us well for most of human history and serve us poorly for modern risk assessment.

The availability heuristic explains a lot of public risk distortion. Plane crashes are vivid, memorable, and socially salient - so people fear flying. Car crashes are mundane and statistically distributed - so people don't fear driving even though it is enormously more dangerous. The emotional imprint of the event, not its probability, drives the response.

The more depressing finding is that education helps less than we'd like. Even people who know about these biases remain subject to them in real decision-making contexts, particularly under stress or time pressure. Knowing the bias and correcting for it in the moment are different skills. Better risk communication and institutional design that compensates for our cognitive limitations probably does more than expecting individuals to override their own psychology.

M

The Mathematician

Mathematician · early 40s

Most people's probability intuitions are simply wrong in ways that are predictable and consistent. The birthday problem is a standard example: in a group of 23 people, the probability that at least two share a birthday is about 50%. Most people estimate something much lower. The correct answer is counterintuitive even for people who can do the underlying arithmetic once it's set up for them.

Conditional probability is even harder. The famous Monty Hall problem - where switching doors on a game show is clearly the better strategy but feels wrong - is one that mathematicians have argued about, incorrectly, in public. Our intuitions about how probabilities update in the light of new information are systematically unreliable. This matters enormously in medicine, where understanding test sensitivity and specificity, false positive rates, and base rates is essential for interpreting results, and where even doctors routinely make errors.

The compounding issue is that most real risks are joint probabilities - the probability that A and B and C all happen - and human intuition is particularly bad at these. We tend to estimate joint probabilities as if the events were independent and rare events as if they were impossible. Both tendencies tend to produce dangerous underestimation of complex risk.

The solution is not to make everyone a probabilist - the teaching overhead is enormous and the application rate is low. The better solutions are structural: forcing functions like pre-mortems in organisations, decision trees, and quantitative risk frameworks that take the calculation out of individual intuition and put it into a process. We cannot easily fix the intuition. We can design around it.

D

The Doctor

Doctor · early 50s

I see this play out in clinic regularly. A patient is terrified of a vaccine that has a one-in-a-million chance of serious side effects, while they continue smoking, or driving without a seatbelt, or ignoring symptoms they know warrant investigation. The risk arithmetic is completely clear and completely uninformative about what they will actually do.

What I have found is that people's relationship with risk is not primarily about probability. It is about control, familiarity, and the meaning of the risk in question. Driving feels controllable, even though your actual control over outcomes is quite limited. Flying doesn't, even though the outcomes are significantly safer. Smoking is a choice you make repeatedly and can undo. A vaccine is done to you, once, by a professional. The psychological architecture of those situations is different, and different psychology produces different responses to identical statistics.

The medical communication challenge is that patients need to make genuine risk comparisons - treatment versus no treatment, this drug versus that drug - in conditions of emotional salience and under stress. Presenting statistics alone is rarely adequate. What works better is giving the comparison a concrete form: "Out of 100 people in your situation, about 60 do better with this treatment." People can work with pictures and comparisons in ways they cannot work with percentages.

I don't think the difficulty of thinking about risk means people are irrational. I think it means that risk thinking is a skill that requires scaffolding, and the scaffolding is rarely provided. That is a failure of communication and design as much as a failure of cognition.