One line of research on reasoning concerns a pattern known as confirmation bias. This term applies to several different phenomena but, in general, describes a tendency to take evidence that’s consistent with our beliefs more seriously than evidence inconsistent with our beliefs. Thus, when they’re trying to test a belief, people often tend to seek out information that would confirm the belief rather than information that might challenge the belief. Likewise, if we give people evidence that’s consistent with their beliefs, they tend to take this evidence at face value and count it as persuasive, and so they strengthen their commitment to their beliefs. But if we give people evidence that’s contrary to their beliefs, they often greet it with skepticism, look for flaws, or ignore it altogether (Figure 9.10).
This pattern is evident in many procedures. In one classic study, participants were presented with a balanced package of evidence concerned with whether capital punishment acts as a deterrent to crime. Half of the evidence favored the participant’s view, and half challenged that view (C. Lord, Ross, & Lepper, 1979). We might hope that this balanced presentation would remind people that there’s evidence on both sides of this issue, and thus reason to take the opposing viewpoint seriously. This reminder in turn should pull people away from extreme positions and toward a more moderate stance. Thanks to confirmation bias, however, the actual outcome was different. The participants found the evidence consistent with their view to be persuasive and the opposing evidence to be flimsy. Of course, this disparity in the evidence was created by the participants’ (biased) interpretation of the facts, and participants with a different starting position perceived the opposite disparity! Even so, participants were impressed by what they perceived as the uneven quality of the evidence, and this led them to shift to views even more extreme than those they’d had at the start.
Notice the circularity here. Because of their initial bias, participants perceived an asymmetry in the evidence—the evidence offered on one side seemed persuasive; the evidence on the other side seemed weak. The participants then used that asymmetry, created by their bias, to reinforce and strengthen that same bias.
Confirmation bias can also be documented outside the laboratory. Many compulsive gamblers, for example, believe they have a “winning strategy” that will bring them great wealth. Their empty wallets provide powerful evidence against this belief, but they stick with it anyway. How is this possible? In this case, confirmation bias takes the form of influencing how the gamblers think about their past wagers. Of course, they focus on their wins, using those instances to bolster the belief that they have a surefire strategy. What about their past losses? They also consider these, but usually not as losses. Instead, they regard their failed bets as “near wins” (“The team I bet on would have won if not for the ref’s bad call!”) or as chance events (“It was just bad luck that I got a deuce of clubs instead of an ace.”). In this way, confirming evidence is taken at face value, but disconfirming evidence is reinterpreted, leaving the gamblers’ erroneous beliefs intact (Gilovich, 1991; for other examples of confirmation bias, see Schulz-Hardt, Frey, Lüthgens, & Moscovici, 2000; Tweney, Doherty, & Mynatt, 1981; Wason, 1960, 1968).