This section builds on Part I by further exploring ways Systems 1 and 2 both fall short of ideal reasoning. Kahneman describes a wide variety of effects, but the overall gist is this: System 1 is a natural storyteller, and much better at narrative than at number crunching. As for System 2, it is not very good at number crunching either, so when it engages and tries to correct System 1’s findings, the results can be illogical. The many ways these shortcomings manifest can be grouped under four headings.

People are suggestible, sometimes bizarrely so.

Kahneman describes a phenomenon known as “anchoring” and offers several illustrations. Example: Suppose people are shown a spinning number wheel that appears to land on a random number but is rigged to stop at either 10 or 65. When then asked what percentage of United Nations member states are African, participants who saw 10 give an average answer of 25 percent, while those who saw 65 average 45 percent. Other examples include real estate agents’ estimates of a house’s market value, hypothetical sentencing decisions by German judges, and the average number of cans of soup purchased at a grocery store.

Kahneman initially believed anchoring to be a form of priming, a System 1 phenomenon, while his longtime research partner, Amos Tversky, took it to be a deliberate (if insufficient) adjustment process carried out by System 2. Later research, Kahneman reports, revealed that both were right: anchoring sometimes works one way, sometimes the other.

People like to keep their thinking simple.

People rely on heuristics, quick-and-easy rules of thumb for forming judgments. By one such rule, called the “availability heuristic,” people judge the frequency of something by how readily instances come to mind. This habit seems reasonable but sometimes produces odd results. Example: People asked to recall six occasions when they were assertive find the task fairly easy and therefore rate themselves as assertive. People asked to recall twelve such occasions manage the first six easily but struggle with the last six, and therefore end up rating themselves as not very assertive.

The defects of the availability heuristic are magnified by the mass media. News reports of a few cases of something worrisome (like a chemical spill) quickly produce a widespread misperception of a serious, common threat to public safety. The fear itself becomes part of the story and prompts additional reporting, leading to the formation of a self-perpetuating availability cascade. Further amplifying the problem is the so-called “affect heuristic,” which bases judgments on feelings and flattens nuances. The complex mix of pros and cons associated with pesticides, for instance, is reduced to a gut-level for or against position.

People trust stereotypes more than statistics.

People tend to make inferences from stereotyped descriptions, even ones known to be unreliable, while ignoring base rates, the background statistical facts that often point to a different conclusion. Example: A person whose alleged personality fits the stereotype of a computer programmer is judged more likely to be a programmer than a farmer, even though farmers are far more numerous and some of them will have programmer-like personalities. Such reliance on stereotypes is called the “representativeness heuristic.”
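The pull of the base rate can be made concrete with a small Bayes’-rule calculation. The numbers below are illustrative assumptions, not figures from the book: suppose farmers outnumber programmers twenty to one, and that a “programmer-like” personality turns up in 40 percent of programmers but only 10 percent of farmers.

```python
# Illustrative Bayes'-rule sketch of base-rate neglect.
# All numbers are made-up assumptions, not figures from Kahneman's text.

p_programmer = 1 / 21           # base rate: 1 programmer for every 20 farmers
p_farmer = 20 / 21

p_desc_given_programmer = 0.40  # chance a programmer fits the stereotype
p_desc_given_farmer = 0.10      # chance a farmer fits it anyway

# P(description) via the law of total probability
p_desc = (p_desc_given_programmer * p_programmer
          + p_desc_given_farmer * p_farmer)

# Posterior probability that someone fitting the description is a programmer
p_programmer_given_desc = p_desc_given_programmer * p_programmer / p_desc

print(f"P(programmer | description) = {p_programmer_given_desc:.2f}")  # ~0.17
```

Even with a description four times more typical of programmers, the sheer number of farmers keeps “farmer” the better bet.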

This heuristic can produce illogical results when people are gauging probabilities. Example: A woman who takes yoga classes will be judged more likely to be a feminist bank teller than to be simply a bank teller—even when both options are presented, and even though every woman who is both a feminist and a bank teller is a bank teller. Kahneman and Tversky dubbed this the “conjunction fallacy.” X is, illogically, judged less likely than X and Y. Something similar happens when a set of valuable baseball cards is judged less valuable, as a set, after a few worthless cards are added to the mix.
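The underlying logic can be checked mechanically: in any population, the people who satisfy two conditions at once are a subset of those who satisfy either condition alone, so the conjunction can never be the more probable option. A minimal sketch, using invented counts:

```python
# Minimal illustration of why P(A and B) can never exceed P(A).
# The population counts are invented for illustration only.

population = 1000
bank_tellers = 50            # assumed number of bank tellers
feminist_bank_tellers = 10   # every one of them is also a bank teller

p_teller = bank_tellers / population
p_feminist_teller = feminist_bank_tellers / population

# The conjunction is a subset of either conjunct, so its probability is smaller.
assert p_feminist_teller <= p_teller
print(p_teller, p_feminist_teller)   # 0.05 vs. 0.01
```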

People see causality where none exists.

A recurring theme, introduced in Chapter 10 and returned to in Chapters 16–18, is people’s eagerness to latch on to causal explanations, even when the correct explanation is purely statistical. Example: During World War II, German rockets landing at random places in London left some neighborhoods unscathed. Many Londoners suspected, wrongly, that these neighborhoods harbored spies. Where base rates are concerned, a purely statistical base rate, such as “Most taxicabs in this city belong to company X,” tends to be ignored, whereas a base rate that hints at causality, such as “Most taxicabs involved in accidents belong to company X,” is far more likely to be taken into account, even when the difference shouldn’t matter.

A related statistical phenomenon is “regression to the mean.” Example: Cadet pilots who perform exceptionally well on one landing will, on average, do worse on the next one; conversely, pilots who land poorly (but in one piece) tend to do better the next time. This doesn’t mean that criticizing the second group led to improvement, nor that praising the first group was counterproductive. All cadet pilots can be expected to have good landings and bad ones, so any group singled out for the kind of landing it just had can be expected to turn in more mixed results on the next attempt. System 1 makes predictions from simple impressions and consistently forgets to account for regression to the mean. (“This is a good bunch of pilots. They will do well tomorrow.”) System 2 is better at remembering regression to the mean, but even System 2 requires training to incorporate this tricky concept into sound predictions about the future.
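A short simulation makes the statistical point concrete. The setup is a deliberately simplified assumption rather than anything from Kahneman’s data: each cadet has a fixed skill level, and every landing adds an independent dose of luck.

```python
# A small simulation of regression to the mean. Illustrative assumptions only:
# each cadet has a fixed skill level, and every landing adds independent luck.
import random
from statistics import mean

random.seed(0)
n_pilots = 10_000
skill = [random.gauss(0, 1) for _ in range(n_pilots)]

def landings():
    # score = stable skill + random luck on that particular landing
    return [s + random.gauss(0, 1) for s in skill]

first, second = landings(), landings()

# Group pilots by how their *first* landing went, with no feedback in between.
ranked = sorted(range(n_pilots), key=lambda i: first[i])
worst, best = ranked[:1000], ranked[-1000:]

print("worst 10%:", round(mean(first[i] for i in worst), 2),
      "->", round(mean(second[i] for i in worst), 2))
print("best 10%: ", round(mean(first[i] for i in best), 2),
      "->", round(mean(second[i] for i in best), 2))
# The worst group improves and the best group slips back toward average,
# purely because the luck component does not repeat.
```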