4 Nothing More Than Feelings

It is remarkable how many horrible ways we could die. Try making a list. Start with the standards like household accidents and killer diseases. After that, move into more exotic fare. 'Hit by bus,' naturally. 'Train derailment,' perhaps, and 'stray bullet fired by drunken revellers.' For those with a streak of black humour, this is where the exercise becomes enjoyable. We may strike a tree while skiing, choke on a bee, or fall into a manhole. Falling airplane parts can kill. So can banana peels. Lists will vary depending on the author's imagination and tolerance for bad taste, but I'm quite sure that near the end of every list will be this entry: 'Crushed by asteroid.'

Everyone knows deadly rocks can fall from the sky, but outside space camps and science fiction conventions the threat of death-by-asteroid is used only as a rhetorical device for dismissing some worry as real but too tiny to worry about. I may have used it myself once or twice. I probably won't again, though, because in late 2004 I attended a conference that brought together some of the world's leading astronomers and geo-scientists to discuss asteroid impacts.

The venue was Tenerife, one of Spain's Canary Islands, which lie off the Atlantic coast of North Africa. Intentionally or not, it was an ideal setting. The conference was not simply about rocks in space, after all. It was about understanding a very unlikely, potentially catastrophic risk. And the Canary Islands are home to two other very unlikely, potentially catastrophic risks.

First, there are the active volcanoes. All the islands were created by volcanic activity and Tenerife is dominated by a colossus called Teide, the third-largest volcano in the world. Teide is still quite active, having erupted three times in the last 300 years.

And there is the rift on La Palma mentioned in the last chapter. One team of scientists believes it will drop a big chunk of the island into the Atlantic, and several hours later people on the east coast of North and South America will become extras in the greatest disaster movie of all time. Other scientists dispute this, saying a much smaller chunk of La Palma is set to go, that it will crumble as it drops, and that the resulting waves won't even qualify as good home video. They do agree that a landslide is possible, however, and that it is likely to happen soon in geological terms - which means it could be 10,000 years from now or tomorrow morning.

Now, one might think the residents of the Canary Islands would find it somewhat unsettling that they could wake to a cataclysm on any given morning. But one would be wrong. Teide's flanks are covered by large, pleasant towns filled with happy people who sleep quite soundly. There are similarly no reports of mass panic among the 85,000 residents of La Palma.

The fact that the Canary Islands are balmy and beautiful probably has something to do with the residents' equanimity in the face of Armageddon. There are worse places to die. The Example Rule is also in play. The last time Teide erupted was in 1909, and no one has ever seen a big chunk of inhabited island disappear. Survivors would not be so sanguine the day after either event. But that can't be all there is to it. Terrorists have never detonated a nuclear weapon in a major city, but the mere thought of that happening chills most people, and governments around the world are working very hard to see that what has never happened never does.
Risk analysts call these low-probability/high-consequence events. Why would people fear some but not others? Asteroid impacts - classic low-probability/high-consequence events - are an almost ideal way to investigate that question.

The earth is under constant bombardment by cosmic debris. Most of what hits us is no bigger than a fleck of dust, but because those flecks enter the earth's atmosphere at speeds of up to 72 kilometres per second, they pack a punch out of all proportion to their mass. Even the smallest fleck disappears in a brilliant flash of light - what we quite misleadingly call a shooting star. The risk to humans from these cosmic firecrackers is zero.

But the debris pelting the planet comes in a sliding scale of sizes. There are bits no bigger than grains of rice, pebbles, throwing stones. They all enter the atmosphere at dazzling speed, and so each modest increase in size means a huge jump in the energy released when they burn. A rock one-third of a metre across explodes with the force of two tons of dynamite when it hits the atmosphere. About a thousand detonations of this size happen each year. A rock one metre across - a size commonly used in landscaping - erupts with the force of 100 tons of dynamite. That happens about 40 times each year. At three metres across, a rock hits with the force of 2,000 tons of dynamite. That's two-thirds of the force that annihilated the city of Halifax in 1917, when a munitions-laden ship exploded in the harbour. Cosmic wallops of that force hit the earth roughly twice a year.

And so it goes up the scale, until, at 30 metres across, a rock gets a name change. It is now called an asteroid, and an asteroid of that size detonates in the atmosphere like two million tons of dynamite - enough to flatten everything on the ground within 10 kilometres. At 100 metres, asteroids pack the equivalent of 80 million tons of dynamite. We have historical experience with this kind of detonation. On June 30, 1908, an asteroid estimated to be 60 metres wide exploded eight kilometres above Tunguska, a remote region in Siberia, smashing flat some 2,000 square kilometres of forest.

Bigger asteroids get really scary. At one kilometre across, an asteroid could dig a crater 15 kilometres wide, spark a fireball that appears 25 times larger than the sun, shake the surrounding region with a magnitude-7.8 earthquake, and, possibly, hurl enough dust into the atmosphere to create a 'nuclear winter.' Civilization may or may not survive such a collision, but at least the species would. Not so the next weight class. A chunk of rock 10 kilometres across would add humans and most other terrestrial creatures to the list of species that once existed. This is what did in the dinosaurs.

Fortunately, there aren't many giant rocks whizzing around space. In a paper prepared for the Organization for Economic Cooperation and Development, astronomer Clark Chapman estimated that the chance of humanity being surprised by a doomsday rock in the next century is one in a million. But the smaller the rock, the more common it is - which means the smaller the rock, the greater the chance of being hit by one. The probability of the earth being walloped by a 300-metre asteroid in any given year is 1 in 50,000, which makes the odds about 1 in 500 over the course of the century. If a rock like that landed in the ocean, it could generate a mammoth tsunami. On land, it would devastate a region the size of a small country.
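To make the per-year and per-century arithmetic explicit, here is a minimal illustrative sketch (Python; the annual probabilities are simply the rounded figures quoted in this chapter, not new data). It converts an annual chance of impact into the chance of at least one impact over a century, both by the simple multiplication used in the text and by the exact complement formula.

```python
# Illustrative sketch: converting annual impact odds to odds over a century.
# The annual probabilities below are the rounded figures quoted in the text.

annual_odds = {
    "300-metre asteroid": 1 / 50_000,
    "100-metre asteroid": 1 / 10_000,
    "30-metre asteroid": 1 / 250,
}

YEARS = 100

for name, p in annual_odds.items():
    approx = p * YEARS              # simple multiplication used in the text
    exact = 1 - (1 - p) ** YEARS    # chance of at least one hit in 100 years
    print(f"{name}: about 1 in {1/approx:,.1f} (approx), "
          f"1 in {1/exact:,.1f} (exact over {YEARS} years)")
```

For the rarer rocks the two methods barely differ, but for the 30-metre case the exact figure is closer to 1 in 3 than 1 in 2.5; the simple multiplication slightly overstates the chance of the more frequent impactors.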
For a 100-metre rock, the odds are 1 in 10,000 in one year and 1 in 100 over the next 100 years. At 30 metres, the odds are 1 in 250 per year and 1 in 2.5 over the next 100 years.

Figuring out a rational response to such low-probability/high-consequence risks is not easy. We generally ignore one-in-a-million dangers because they're just too small and life's too short. Even risks of 1 in 10,000 or 1 in 1,000 are routinely dismissed. So looking at the probability of an asteroid strike, the danger is very low. But it's not zero. And what if it actually happens? It's not one person who's going to die, or even 1,000 or 10,000. It could be millions, even billions. At what point does the scale of the loss make it worth our while to deal with a threat that almost certainly won't come to pass in our lifetime or that of our children or their children?

Reason has a typically cold-hearted answer: It depends on the cost. If it costs little to protect against a low-probability/high-consequence event, it's worth paying up. But if it costs a lot, we may be better off putting the money into other priorities - reducing other risks, for example - and taking our chances. For the most part, this is how governments deal with low-probability/high-consequence hazards. The probability of the event, its consequences, and the cost are all put on the table and considered together. That still leaves lots of room for arguments. Experts endlessly debate how the three factors should be weighted and how the calculation should be carried out. But no one disputes that all three factors have to be considered if we want to deal with these dangers rationally.

With regard to asteroids, the cost follows the same sliding scale as their destructive impact. The first step in mitigating the hazard is spotting the rock and calculating whether it will collide with the earth. If the alarm bell rings, we can then talk about whether it would be worth it to devise a plan to nudge, nuke, or otherwise nullify the threat. But spotting asteroids isn't easy because they don't emit light; they only reflect it. The smaller the rock, the harder and more expensive it is to spot. Conversely, the bigger the rock, the easier and cheaper it is to detect. That leads to two obvious conclusions. First, asteroids at the small end of the sliding scale should be ignored. Second, we definitely should pay to locate those at the opposite end.

And that has been done. Beginning in the early 1990s, astronomers created an international organization called Spaceguard, which coordinates efforts to spot and catalogue asteroids. Much of the work is voluntary, but various universities and institutions have made modest contributions, usually in the form of time on telescopes. At the end of the 1990s, NASA gave Spaceguard annual funding of $4 million (from its $10-billion annual budget). As a result, astronomers believe that by 2008 Spaceguard will have spotted 90 per cent of asteroids bigger than one kilometre across. That comes close to eliminating the risk from asteroids big enough to wipe out every mammal on earth, but it does nothing about smaller asteroids - asteroids capable of demolishing India, for example. Shouldn't we pay to spot them, too? Astronomers think so. So they asked NASA and the European Space Agency for $30 to $40 million a year for 10 years. That would allow them to detect and record 90 per cent of asteroids 140 metres and bigger.
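The cold-hearted calculation described above - probability, consequence, and cost weighed together - can be made concrete with a toy sketch. The damage figure below is a placeholder assumption, purely for illustration; the survey cost is simply the upper end of the astronomers' request.

```python
# Toy expected-value comparison for a low-probability/high-consequence risk.
# The damage figure is an illustrative assumption, not an estimate from this chapter.

annual_probability = 1 / 10_000     # e.g. a 100-metre-class impact in a given year
assumed_damage = 400e9              # hypothetical damage in dollars if it happens
horizon_years = 100
mitigation_cost = 40e6 * 10         # the astronomers' request: up to $40 million a year for 10 years

expected_loss = annual_probability * assumed_damage * horizon_years
print(f"Expected loss over {horizon_years} years: ${expected_loss:,.0f}")
print(f"Mitigation cost: ${mitigation_cost:,.0f}")
print("Worth paying at this price?", mitigation_cost < expected_loss)
```

Under these made-up numbers the expected loss dwarfs the survey cost; with a much smaller damage estimate or a much larger price tag, the same arithmetic can point the other way, which is exactly the room for argument the text describes.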
There would still be a small chance of a big one slipping through, but a survey like that would give the planet a pretty solid insurance policy against cosmic collisions - not bad for a one-time expense of $300 million to $400 million. That's considerably less than the original amount budgeted to build a new American embassy in Baghdad and not a lot more than the $195 million owed by foreign diplomats to New York City for unpaid parking tickets. But despite a lot of effort over many years, the astronomers couldn't get the money to finish the job.

A frustrated Clark Chapman attended the conference in Tenerife. It had been almost 25 years since the risk was officially recognized; the science wasn't in doubt, public awareness had been raised, governments had been warned, but progress was modest. He wanted to know why.

To help answer that question, the conference organizers brought Paul Slovic to Tenerife. With a career that started in the early 1960s, Slovic is one of the pioneers of risk perception research. It's a field that essentially began in the 1970s as a result of proliferating conflicts between expert and lay opinion. In some cases - cigarettes, seat belts, drunk driving - the experts insisted the risk was greater than the public believed. But in more cases - nuclear power was the prime example - the public was alarmed by things most experts insisted weren't so dangerous. Slovic, a professor of psychology at the University of Oregon, co-founded Decision Research, a private research corporation dedicated to figuring out why people react to risks the way they do.

In studies that began in the late 1970s, Slovic and his colleagues asked ordinary people to estimate the fatality rates of certain activities and technologies, to rank them according to how risky they believed them to be, and to provide more details about their feelings. Do you see this activity or technology as beneficial? Something you voluntarily engage in? Dangerous to future generations? Little understood? And so on. At the same time, they quizzed experts - professional risk analysts - on their views.

Not surprisingly, experts and laypeople disagreed about the seriousness of many items. Experts liked to think - and many still do - that this simply reflected the fact that they know what they're talking about and laypeople don't. But when Slovic subjected his data to statistical analyses, it quickly became clear there was much more to the explanation than that.

The experts followed the classic definition of risk that has always been used by engineers and others who have to worry about things going wrong: Risk equals probability times consequence. Here, 'consequence' means the body count. Not surprisingly, the experts' estimates of the fatalities inflicted by an activity or technology corresponded closely with their ranking of the riskiness of each item.

When laypeople estimated how fatal various risks were, they got mixed results. In general, they knew which items were most and least lethal. Beyond that, their judgments varied from modestly incorrect to howlingly wrong. Not that people had any clue that their hunches might not be absolutely accurate. When Slovic asked people to rate how likely it was that an answer was wrong, they often scoffed at the very possibility. One-quarter actually put the odds of a mistake at less than 1 in 100 - although one in eight of the answers rated so confidently were, in fact, wrong.
It was another important demonstration of why intuitions should be treated with caution - and another demonstration that they aren't.

The most illuminating results, however, came out of the ranking of riskiness. Sometimes, laypeople's estimate of an item's body count closely matched how risky they felt the item to be - as it did with the experts. But sometimes there was little or no link between 'risk' and 'annual fatalities.' The most dramatic example was nuclear power. Laypeople, like experts, correctly said it inflicted the fewest fatalities of the items surveyed. But the experts ranked nuclear power as the 20th most risky item on a list of 30, while most laypeople said it was number one. Later studies had 90 items, but again nuclear power ranked first.

Clearly, people were doing something other than multiplying probability and body count to come up with judgments about risk. Slovic's analyses showed that if an activity or technology were seen as having certain qualities, people boosted their estimate of its riskiness regardless of whether it was believed to kill lots of people or not. If it were seen to have other qualities, they lowered their estimates. So it didn't matter that nuclear power didn't have a big body count. It had all the qualities that pressed our risk-perception buttons, and that put it at the top of the public's list of dangers.

1. Catastrophic potential: If fatalities would occur in large numbers in a single event - instead of in small numbers dispersed over time - our perception of risk rises.
2. Familiarity: Unfamiliar or novel risks make us worry more.
3. Understanding: If we believe that how an activity or technology works is not well understood, our sense of risk goes up.
4. Personal control: If we feel the potential for harm is beyond our control - like a passenger in an airplane - we worry more than if we feel in control - the driver of a car.
5. Voluntariness: If we don't choose to engage the risk, it feels more threatening.
6. Children: It's much worse if kids are involved.
7. Future generations: If the risk threatens future generations, we worry more.
8. Victim identity: Identifiable victims rather than statistical abstractions make the sense of risk rise.
9. Dread: If the effects generate fear, the sense of risk rises.
10. Trust: If the institutions involved are not trusted, risk rises.
11. Media attention: More media means more worry.
12. Accident history: Bad events in the past boost the sense of risk.
13. Equity: If the benefits go to some and the dangers to others, we raise the risk ranking.
14. Benefits: If the benefits of the activity or technology are not clear, it is judged to be riskier.
15. Reversibility: If the effects of something going wrong cannot be reversed, risk rises.
16. Personal risk: If it endangers me, it's riskier.
17. Origin: Man-made risks are riskier than those of natural origin.
18. Timing: More immediate threats loom larger, while those in the future tend to be discounted.

Many of the items on Slovic's list look like common sense. Of course something that puts children at risk presses our buttons. Of course something that involves only those who choose to get involved does not. And one needn't have ever heard of the Example Rule to know that a risk that gets more media attention is likely to bother us more than one that doesn't. But for psychologists, one item on the list - 'familiarity' - is particularly predictable, and particularly important.
We are bombarded with sensory input, at every moment, always. One of the most basic tasks of the brain is to swiftly sort that input into two piles: the important stuff that has to be brought to the attention of the conscious mind, and everything else. What qualifies as important? Mostly, it's anything that's new. Novelty and unfamiliarity - surprise - grab our attention like nothing else. Drive the same road you've driven to work every day for the last 12 years and you are likely to pay so little conscious attention that you may not remember a thing you've seen when you pull into the parking lot. That is, if the drive is the same as it always is. But if, on the way to work, you should happen to see a naked, pot-bellied man doing calisthenics on his front lawn, your consciousness will be roused from its slumber and you will arrive at work with a memory you may wish were a little less vivid.

The flip side of this is a psychological mechanism called habituation. It's the process that causes a stimulus we repeatedly experience without positive or negative consequences to gradually fade from our attention. Anyone who wears perfume or cologne has experienced habituation. When you buy a new scent and put it on, you catch a whiff of the fragrance all day long. The next day, the same. But if you wear it repeatedly, you gradually notice it less and less. Eventually, you may smell it only the moment you put it on, and you will hardly pay attention to it even then. If you've ever wondered how the guy in the next cubicle at work can stand to reek of bad cologne all day, wonder no more.

Habituation is particularly important in coping with risk because risk is everywhere. Have a shower in the morning and you risk slipping and breaking your neck. Eat a poached egg and you could be poisoned. Drive to work and you may be crushed, mangled, or burned alive. Walk to work and carcinogenic solar radiation may rain down on you, or you may be hit by a bus, or have a heart attack, or be crushed by an asteroid. Of course, the chance of any of these horrible things happening is tiny - exposure to sunshine excepted - and it would be a waste of our mental resources to constantly be aware of them. We need an 'off' switch. That switch is habituation.

To carry out her famous observations of chimpanzees, primatologist Jane Goodall sat very still in their midst and watched them go about their ordinary business hour after hour, something that was possible only because the chimpanzees essentially ignored Goodall. To get the chimps to do that, Goodall had to show up and sit down, day after day, month after month, until the animals' alarm and curiosity faded and they stopped paying attention to her. The same process can be observed in other species. As I am writing this sentence, there is a black squirrel on my windowsill eagerly chewing bird seed without the slightest regard for the large omnivore sitting in a chair barely one metre away. The birds that share the seed are equally blasé when I am in my backyard, although I would need binoculars to see them up close in a forest.

As for humans, simply recall the white-knuckle grip you had on the steering wheel the first time you drove on a freeway, and then think of the last time the sheer boredom of driving nearly caused you to fall asleep at the wheel. If you had been asked on that first drive how dangerous it is to drive on a freeway, your answer would be a little different than it is now that habituation has done its work.
Habituation generally works brilliantly. The problem with it, as with everything the unconscious mind does, is that it cannot account for science and statistics. If you've smoked cigarettes every hour, every day, for years, without suffering any harm, the cigarette in your hand won't feel threatening. Not even your doctor's warning can change that, because it is your conscious mind that understands the warning, and your conscious mind does not control your feelings. The same process of habituation can also explain why someone can become convinced it isn't so risky to drive a car drunk, or to not wear a seat belt, or to ride a motorcycle without a helmet. And if you live quietly for years in a pleasant Spanish town, you're unlikely to give a second thought to the fact that your town is built on the slopes of the world's third-largest active volcano.

For all the apparent reasonableness of Paul Slovic's list of risk factors, however, its value is limited. The problem is the same one that bedevils focus groups. People know what they like, what they fear, and so on. But what's the source of these judgments? Typically, it is the unconscious mind - Gut. The judgment may come wholly from Gut, or it may have been modified by the conscious mind - Head. But in either case, the answer to why people feel as they do lies at least partly within Gut. Gut is a black box; Head can't peer inside. And when a researcher asks someone to say why she feels the way she does about a risk, it's not Gut she is talking to. It is Head.

Now, if Head simply answered the researcher's question with a humble 'I don't know,' that would be one thing. But Head is a compulsive rationalizer. If it doesn't have an answer, it makes one up. There's plenty of evidence for rationalization, but the most memorable - certainly the most bizarre - was a series of experiments on so-called split-brain patients by neuroscientist Michael Gazzaniga. Ordinarily, the left and right hemispheres of the brain are connected and they communicate in both directions, but one treatment for severe epilepsy is to sever the two sides. Split-brain patients function surprisingly well, but scientists realized that because the two hemispheres handle different sorts of information, each side can learn something that the other isn't aware of. This effect could be induced deliberately in experiments by exposing only one eye or the other to written instructions. In one version of his work, Gazzaniga used this technique to instruct the right hemisphere of a split-brain patient to stand up and walk. The man got up and walked. Gazzaniga then verbally asked the man why he was walking. The left hemisphere handles such 'reason' questions, and even though that hemisphere had no idea what the real answer was, the man immediately responded that he was going for a soda. Variations on this experiment always got the same result: The left hemisphere quickly and ingeniously fabricated explanations rather than admit it had no idea what was going on. And the person whose lips delivered these answers believed every word.

When a woman tells a researcher how risky she thinks nuclear power is, what she says is probably a reliable reflection of her feelings. But when the researcher asks her why she feels the way she does, her answer is likely to be partly or wholly inaccurate. It's not that she is being deceitful. It's that her answer is very likely to be, in some degree, a conscious rationalization of an unconscious judgment.
So maybe it's true that what really bothers people about nuclear power are the qualities on Slovic's checklist. Or maybe that stuff is just Head rationalizing Gut's judgment. Or maybe it's a little of both. The truth is we don't know what the truth is.

Slovic's list was, and still is, very influential in the large and growing business of risk communication because it provided a handy checklist that allowed analysts to quickly and easily come up with a profile for any risk. Is it man-made? Is it involuntary? Its simplicity also appealed to the media. Newspaper and magazine articles about risk still recite the items on the list as if they explain everything we need to know about why people react to some dangers and not others. But Slovic himself acknowledges the list's limitations. 'This was the mid-1970s. At the time we were doing the early work, we had no real appreciation for the unconscious, automatic system of thought. Our approach assumed that this was the way people were analyzing risks, in a very thoughtful way.'

Ultimately, Slovic and his colleagues found a way out of this box with the help of two clues buried within their data. The first lay in the word dread. Slovic found that dread - plain old fear - was strongly correlated with several other items on the list, including catastrophic, involuntary, and inequitable. Unlike some of the other items, these are loaded with emotional content. And he found that this cluster of qualities - which he labelled 'the dread factor' - was by far the strongest predictor of people's reaction to an activity or technology. This was a strong hint that there was more going on in people's brains than cool, rational analysis.

The second clue lay in something that looked, on the surface, to be a meaningless quirk. It turned out that people's ratings of the risks and benefits of the 90 activities and technologies on the list were connected. If people thought the risk posed by something was high, they judged the benefit to be low. The reverse was also true. If they thought the benefit was high, the risk was seen as low. In technical terms, this is an 'inverse correlation.' It makes absolutely no sense here because there's no logical reason that something - say, a new prescription drug - can't be both high risk and high benefit. It's also true that something can be low risk and low benefit - sitting on the couch watching Sunday afternoon football comes to mind. So why on earth did people put risk and benefit at opposite ends of a see-saw? It was curious, but it didn't seem important. In his earliest papers on risk, Slovic mentioned the finding in only a sentence or two.

In the years to come, however, the model of a two-track mind - Head and Gut operating simultaneously - advanced rapidly. A major influence in this development was the work of Robert Zajonc, a Stanford psychologist who explored what psychologists call affect - which we know simply as feeling or emotion. Zajonc insisted that we delude ourselves when we think that we evaluate evidence and make decisions by calculating rationally. 'This is probably seldom the case,' he wrote in 1980. 'We buy cars we "like," choose the jobs and houses we find "attractive," and then justify those choices by various reasons.'

With this new model, Slovic understood the limitations of his earlier research. Working with Ali Alhakami, a Ph.D.
student at the University of Oregon, he also started to realize that the perceived link between risk and benefit he had discovered earlier may have been much more than a quirk. What if people were reacting unconsciously and emotionally at the mention of a risky activity or technology? They hear 'nuclear power' and ... ugh! They have an instantaneous, unconscious reaction. This bad feeling actually happens prior to any conscious thought, and because it comes first, it shapes and colours the thoughts that follow - including responses to the researchers' questions about risk. That would explain why people see risk and benefit as if they were sitting at opposite ends of a see-saw. How risky is nuclear power? Nuclear power is a Bad Thing. Risk is also bad. So nuclear power must be very risky. And how beneficial is nuclear power? Nuclear power is Bad, so it must not be very beneficial. When Gut reacts positively to an activity or technology - swimming, say, or aspirin - it tips the see-saw the other way: Aspirin is a Good Thing, so it must be low risk and high benefit.

To test this hypothesis, Slovic and Alhakami, along with colleagues Melissa Finucane and Stephen Johnson, devised a simple experiment. Students at the University of Western Australia were divided into two groups. The first group was shown various potential risks - chemical plants, cellphones, air travel - on a computer screen and asked to rate the riskiness of each item on a scale from one to seven. Then they rated the benefits of each. The second group did the same, except they had only a few seconds to make their decisions. Other research had shown that time pressure reduces Head's ability to step in and modify Gut's judgment. If Slovic's hypothesis was correct, the see-saw effect between risk and benefit should be stronger in the second group than the first. And that's just what they found.

In a second experiment, Slovic and Alhakami had students at the University of Oregon rate the risks and benefits of a technology (different trials used nuclear power, natural gas, and food preservatives). Then they were asked to read a few paragraphs describing some of the benefits of the technology. Finally, they were asked again to rate the risks and benefits of the technology. Not surprisingly, the positive information they read raised students' ratings of the technology's benefits in about half the cases. But most of those who raised their estimate of the technology's benefits also lowered their estimate of the risk - even though they had not read a word about the risk. Later trials in which only risks were discussed had the same effect in reverse: People who raised their estimate of the technology's risks in response to the information about risk also lowered their estimate of its benefit.

Various names have been used to capture what's going on here. Slovic calls it the affect heuristic. I prefer to think of it as the Good-Bad Rule. When faced with something, Gut may instantly experience a raw feeling that something is Good or Bad. That feeling then guides the judgments that follow: 'Is this thing likely to kill me? It feels good. Good things don't kill. So, no, don't worry about it.'

The Good-Bad Rule helps to solve many riddles. In Slovic's original studies, for example, he found that people consistently underestimated the lethality of all diseases except one: The lethality of cancer was actually overestimated. One reason that might be is the Example Rule.
The media pay much more attention to cancer than to diabetes or asthma, and so people can easily recall examples of deaths caused by cancer even if they don't have personal experience with the disease. But consider how you feel when you read the words diabetes and asthma. Unless you or someone you care about has suffered from these diseases, chances are they don't spark any emotion. But what about the word cancer? It's like a shadow slipping over the mind. That shadow is affect - the 'faint whisper of emotion,' as Slovic calls it. We use cancer as a metaphor in ordinary language - meaning something black and hidden, eating away at what's good - precisely because the word stirs feelings. And those feelings shape and colour our conscious thoughts about the disease.

The Good-Bad Rule also helps explain our weird relationship with radiation. We fear nuclear weapons, reasonably enough, while nuclear power and nuclear waste also give us the willies. Most experts argue that nuclear power and nuclear waste are not nearly as dangerous as the public thinks they are, but people will not be budged. On the other hand, we pay good money to soak up solar radiation on a tropical beach, and few people have the slightest qualms about deliberately exposing themselves to radiation when a doctor orders an X-ray. In fact, Slovic's surveys confirmed that most laypeople underestimate the (minimal) dangers of X-rays.

Why don't we worry about sun-tanning? Habituation may play a role, but the Good-Bad Rule certainly does. Picture this: You, lying on a beach in Mexico. How does that make you feel? Pretty good. And if it is a Good Thing, our feelings tell us, it cannot be all that risky. The same is true of X-rays. They are a medical technology that saves lives. They are a Good Thing, and that feeling eases any worries about the risk they pose. On the other end of the scale are nuclear weapons. They are a Very Bad Thing - which is a pretty reasonable conclusion given that they are designed to annihilate whole cities in a flash. But Slovic has found that feelings about nuclear power and nuclear waste are almost as negative, and when Slovic and some colleagues examined how the people of Nevada felt about a proposal to create a dump site for nuclear waste in that state, they found that people judged the risk of a nuclear waste repository to be at least as great as that of a nuclear plant or even a nuclear weapons testing site. Not even the most ardent anti-nuclear activist would make such an equation. It makes no sense - unless people's judgments are the product of intensely negative feelings toward all things 'nuclear.'

Of course, the Example Rule also plays a role in the public's fear of nuclear power, given the ease with which we latch onto images of the Chernobyl disaster the moment nuclear power is mentioned. But popular fears long predate those images, suggesting there is another unconscious mechanism at work. This illustrates an important limitation in our understanding of how intuitive judgment works, incidentally. By carefully designing experiments, psychologists are able to identify mechanisms like the Example Rule and the Good-Bad Rule, and we can look at circumstances in the real world and surmise that this or that mechanism is involved. But what we can't do - at least not yet - is tease out precisely which mechanisms are doing what. We can only say that people's intuitions about nuclear power may be generated by either the Example Rule or the Good-Bad Rule, or both.
We're not used to thinking of our feelings as the sources of our conscious decisions, but the research leaves no doubt. Studies of insurance, for example, have revealed that people are willing to pay more to insure a car they feel is attractive than one that is not, even when the monetary value is the same. A 1993 study even found that people were willing to pay more for airline travel insurance covering 'terrorist acts' than for insurance covering death from 'all possible causes.' Logically, that makes no sense, but 'terrorist acts' is a vivid phrase dripping with bad feelings, while 'all possible causes' is bland and empty. It leaves Gut cold.

Amos Tversky and psychologist Eric Johnson also showed that the influence of bad feelings can extend beyond the thing generating the feelings. They asked Stanford University students to read one of three versions of a story about a tragic death - the cause being either leukemia, fire, or murder - that contained no information about how common such tragedies are. They then gave the students a list of risks - including the risk in the story and 12 others - and asked them to estimate how often each kills. As we might expect, those who read a tragic story about a death caused by leukemia rated leukemia's lethality higher than did a control group of students who didn't read the story. The same with fire and murder. More surprisingly, reading the stories led to increased estimates for all the risks, not just the one portrayed. The fire story caused an overall increase in perceived risk of 14 per cent. The leukemia story raised estimates by 73 per cent. The murder story led the pack, raising risk estimates by 144 per cent. A 'good news' story had precisely the opposite effect - driving down perceived risks across the board.

So far, I've mentioned things - murder, terrorism, cancer - that deliver an unmistakable emotional wallop. But scientists have shown that Gut's emotional reactions can be much subtler than that. Robert Zajonc, along with psychologists Piotr Winkielman and Norbert Schwarz, conducted a series of experiments in which Chinese ideographs flashed briefly on a screen. Immediately after seeing an ideograph, the test subjects, students at the University of Michigan, were asked to rate the image from one to six, with six being very liked and one not liked at all. (Anyone familiar with the Chinese, Korean, or Japanese languages was excluded from the study, so the images held no literal meaning for those who saw them.) What the students weren't told is that just before each ideograph appeared, another image was flashed. In some cases, it was a smiling face. In others, it was a frowning face or a meaningless polygon. These images appeared for the smallest fraction of a second - such a brief moment that they did not register on the conscious mind, and no student reported seeing them. But even this tiny exposure to a good or bad image had a profound effect on the students' judgment. Across the board, ideographs preceded by a smiling face were liked more than those that weren't positively primed. The frowning face had the same effect in the opposite direction. Clearly, emotion had a powerful influence, and yet not one student reported feeling any emotion. Zajonc and other scientists believe that can happen because the brain system that slaps emotional labels on things - nuclear power: Bad! - is buried within the unconscious mind. So your brain can feel something is good or bad even though you never consciously feel good or bad.
(When the students were asked what they based their judgments on, incidentally, they cited the ideograph's aesthetics, or they said that it reminded them of something, or they simply insisted that they 'just liked it.' The conscious mind hates to admit it simply doesn't know.)

After putting students through the routine outlined above, Zajonc and his colleagues then repeated the test. This time, however, the images of faces were switched around. If an ideograph had been preceded by a smiling face in the first round, it got a frowning face, and vice versa. The results were startling. Unlike the first round, the flashed images had little effect. People stuck to their earlier judgments. An ideograph judged likeable in the first round because - unknown to the person doing the judging - it was preceded by a smiling face was judged likeable in the second round even though it was preceded by a frowning face. So emotional labels stick even if we don't know they exist.

In earlier experiments - since corroborated by a massive amount of research - Zajonc also revealed that positive feeling for something can be created simply by repeated exposure to it, and that positive feelings can be strengthened with more exposure. Now known as the mere-exposure effect, this phenomenon is neatly summed up in the phrase 'familiarity breeds liking.' Corporations have long understood this, even if only intuitively. The point of much advertising is simply to expose people to a corporation's name and logo in order to increase familiarity and, as a result, positive feelings toward them.

The mere-exposure effect has considerable implications for how we feel about risks. Consider chewing tobacco. Most people today have never seen anyone chew a wad, but someone who lives in an environment where it's common is likely to have a positive feeling for it buried within his brain. That feeling colours his thoughts about chewing tobacco - including his thoughts about how dangerous it is. Gut senses chewing tobacco is Good. Good things don't cause cancer. How likely is chewing tobacco to give you cancer? Not very, Gut concludes. Note that the process here is similar to that of habituation, but it doesn't require the level of exposure necessary for habituation to occur. Note also that this is not the warm glow someone may feel at the sight of a tin of tobacco because it brings back memories of a beloved grandfather who was always chewing the stuff. As the name says, the mere-exposure effect requires nothing more than mere exposure to generate at least a little positive feeling. Beloved grandfathers are not necessary.

Much of the research about affect is conducted in laboratories, but when psychologists Mark Frank and Thomas Gilovich found evidence in lab experiments that people have strongly negative unconscious reactions to black uniforms, they dug up corroboration in the real world. All five black-clad teams in the National Football League, Frank and Gilovich found, received more than the league-average number of penalty yards in every season but one between 1970 and 1986. In the National Hockey League, all three teams that wore black through the same period got more than the average number of penalty minutes in every season. The really intriguing thing is that these teams were penalized just as heavily when they wore their alternate uniforms - white with black trim - which is just what you would expect from the research on emotion and judgment.
The black uniform slaps a negative emotional label on the team, and that label sticks even when the team isn't wearing black. Gilovich and Frank even found a near-perfect field trial of their theory in the 1979-80 season of the Pittsburgh Penguins. For the first 44 games of the season, the team wore blue uniforms. During that time, they averaged eight penalty minutes a game. But for the last 35 games of the season, the team wore a new black uniform. The coach and players were the same as in the first half of the season, and yet the Penguins' penalty time rose 50 per cent, to 12 minutes a game.

Another real-world demonstration of the Good-Bad Rule at work comes around once a year. Christmas isn't generally perceived as a killer. It probably didn't even make your list of outlandish ways to die. But it should. 'Tis the season for falls, burns, and electrocutions. In Britain, warns the Royal Society for the Prevention of Accidents (RoSPA), holiday events typically include 'about 1,000 people going to hospital after accidents with Christmas trees; another 1,000 hurt by trimmings or when decorating their homes; and 350 hurt by Christmas tree lights.' The British government has run ad campaigns noting that people are 50 per cent more likely to die in a house fire during the holidays. In the United States, no less an authority than the under-secretary of Homeland Security penned an op-ed in which he warned that fires caused by candles 'increase four-fold during the holidays.' Christmas trees alone start fires in 200 homes. Altogether, 'house fires during the winter holiday season kill 500 and injure 2,000 people,' wrote the under-secretary, 'and cause more than $500 million in damage.'

Now, I am not suggesting we should start fretting about Christmas. Much of the public education around the holiday strikes me as a tad exaggerated, and some of it - like the RoSPA press release that draws our attention to the risk of 'gravy exploding in microwave ovens' - is unintentionally funny. But compared to some of the risks that have grabbed headlines and generated real public worry in the past - shark attacks, 'stranger danger,' Satanic cults, and herpes, to name a few - the risks of Christmas are actually substantial. And yet these annual warnings are annually ignored, or even played for laughs (exploding gravy!) in the media. Why the discrepancy? Part of the answer is surely the powerful emotional content of Christmas. Christmas isn't just a Good Thing. It's a Wonderful Thing. And Gut is sure Wonderful Things don't kill.

The fact that Gut so often has instantaneous, emotional reactions that it uses to guide its judgments has a wide array of implications. A big one is the role of justice in how we react to risk and tragedy. Consider two scenarios. In the first, a little boy plays on smooth, sloping rocks at the seashore. The wind is high and his mother has told him not to go too close to the water. But with a quick glance to make sure his mother isn't looking, the boy edges forward until he can slap his hands on the wet rocks. Intent on his little game, he doesn't see a large wave roar in. It knocks him backward, then pulls him tumbling into the ocean, where strong currents drag him into deep water. The mother sees and struggles valiantly to reach him, but the pounding waves blind her and beat her back. The boy drowns.

Now imagine a woman living alone with her only child, a young boy. In the community, the woman is perfectly respectable. She has a job, friends.
She even volunteers at a local animal shelter. But in private, unknown to anyone, she beats her child mercilessly for any perceived fault. One night, the boy breaks a toy. The woman slaps and punches him repeatedly. As the boy cowers in a corner, blood and tears streaking his face, the woman gets a pot from the kitchen and returns. She bashes the boy's head with the pot, then tosses it aside and orders him to bed. In the night, a blood clot forms in the boy's brain. He is dead by morning.

Two lives lost, two sad stories likely to make the front page of the newspaper. But only one will prompt impassioned letters to the editor and calls to talk radio shows, and we all know which one it is. Philosophers and scholars may debate the nature of justice, but for most of us justice is experienced as outrage at a wrong and satisfaction at the denunciation and punishment of that wrong. It is a primal emotion. The woman who murdered her little boy must be punished. It doesn't matter that she isn't a threat to anyone else. This isn't about safety. She must be punished. Evolutionary psychologists argue that this urge to punish wrongdoing is hard-wired because it is an effective way to discourage bad behaviour. 'People who are emotionally driven to retaliate against those who cross them, even at a cost to themselves, are more credible adversaries and less likely to be exploited,' writes cognitive psychologist Steven Pinker. Whatever its origins, the instinct for blame and punishment is often a critical component in our reactions to risks.

Imagine there is a gas that kills 20,000 people a year in the European Union and another 21,000 a year in the United States. Imagine further that this gas is a by-product of industrial processes and that scientists can precisely identify which industries, even which factories, are emitting the gas. And imagine that all these facts are widely known but no one - not the media, not environmental groups, not the public - is all that concerned. Many people haven't even heard of this gas, while those who have are only vaguely aware of what it is, where it comes from, and how deadly it is. And they're not interested in learning more.

Yes, it is an absurd scenario. We would never shrug off something like that. But consider radon. It's a radioactive gas that can cause lung cancer if it pools indoors at high concentrations, which it does in regions that scientists can identify with a fair degree of precision. It kills an estimated 41,000 people a year in the United States and the European Union. Public health agencies routinely run awareness campaigns about the danger, but journalists and environmentalists have seldom shown much interest, and the public, it's fair to say, has only a vague notion of what this stuff is. The reason for this indifference is clear: Radon is produced naturally in some rocks and soils. The deaths it inflicts are solitary and quiet, and no one is responsible. So Gut shrugs. In Paul Slovic's surveys, the same people who shook at the knees thinking about radiation sources like nuclear waste dumps rated radon - which has undoubtedly killed more people than nuclear waste ever could - a very low risk.

Nature kills, but nature is blameless. No one shakes a fist at volcanoes. No one denounces heat waves. And the absence of outrage is the reason that natural risks feel so much less threatening than man-made dangers.

The Good-Bad Rule also makes language critical. The world does not come with explanatory notes, after all.
In seeing and experiencing things, we have to frame them this way or that to make sense of them, to give them meaning. That framing is done with language.

Picture a lump of cooked ground beef. It is a most prosaic object, and the task of judging its quality shouldn't be terribly difficult. There would seem to be few, if any, ways that language describing it could influence people's judgment. And yet psychologists Irwin Levin and Gary Gaeth did just that in an experiment disguised as marketing research. Here is a sample of cooked beef, the researchers told one group of subjects. It is '75 per cent lean.' Please examine it and judge it; then taste some and judge it again. With a second group, the researchers provided the same beef but described it as '25 per cent fat.' The result: On first inspection, the beef described as '75 per cent lean' got much higher ratings than the '25 per cent fat' beef. After tasting the beef, the bias in favour of the 'lean' beef declined but was still evident.

Life and death are somewhat more emotional matters than lean and fat beef, so it's not surprising that the words a doctor chooses can be even more influential than those used in Levin and Gaeth's experiment. A 1982 experiment by Amos Tversky and Barbara McNeil demonstrated this by asking people to imagine they were patients with lung cancer who had to decide between radiation treatment and surgery. One group was told there was a 68 per cent chance of being alive a year after the surgery. The other was told there was a 32 per cent chance of dying. Framing the decision in terms of staying alive resulted in 44 per cent opting for surgery over radiation treatment, but when the information was framed as a chance of dying, that dropped to 18 per cent. Tversky and McNeil repeated this experiment with physicians and got the same results. In a different experiment, Tversky and Daniel Kahneman also showed that when people were told a flu outbreak was expected to kill 600 people, their judgments about which program should be implemented to deal with the outbreak were heavily influenced by whether the expected results were described in terms of lives saved (200) or lives lost (400).

The vividness of language is also critical. In one experiment, Cass Sunstein - a University of Chicago law professor who often applies psychology's insights to issues in law and public policy - asked students what they would pay to insure against a risk. For one group, the risk was described as 'dying of cancer.' Others were told not only that the risk was death by cancer but that the death would be 'very gruesome and intensely painful, as the cancer eats away at the internal organs of the body.' That change in language was found to have a major impact on what students were willing to pay for insurance - an impact that was even greater than making a large change in the probability of the feared outcome. Feeling trumped numbers. It usually does.

Of course, the most vivid form of communication is the photographic image and, not surprisingly, there's plenty of evidence that awful, frightening photos not only grab our attention and stick in our memories - which makes them influential via the Example Rule - they conjure emotions that influence our risk perceptions via the Good-Bad Rule. It's one thing to tell smokers their habit could give them lung cancer. It's quite another to see the blackened, gnarled lungs of a dead smoker.
That's why several countries, including Canada and Australia, have replaced text-only health warnings on cigarette packs with horrible images of diseased lungs, hearts, and gums. They're not just repulsive. They increase the perception of risk.

Even subtle changes in language can have considerable impact. Paul Slovic and his team gave forensic psychiatrists - men and women trained in math and science - what they were told was another clinician's assessment of a mental patient confined to an institution. Based on this assessment, the psychiatrists were asked, would you release this patient? Half the assessments estimated that patients similar to Mr Jones 'have a 20 per cent chance of committing an act of violence' after release. Of the psychiatrists who read this version, 21 per cent said they would refuse to release the patient. The wording of the second version of the assessment was changed very slightly. It is estimated, the assessment said, that '20 out of every 100 patients similar to Mr Jones' will be violent after release. Of course, '20 per cent' and '20 out of every 100' mean the same thing. But 41 per cent of the psychiatrists who read this second version said they would keep the patient confined, so an apparently trivial change in wording boosted the refusal rate by almost 100 per cent.

How is that possible? The explanation lies in the emotional content of the phrase '20 per cent.' It's hollow, abstract, a mere statistic. What's a 'per cent'? Can I see a 'per cent'? Can I touch it? No. But '20 out of every 100 patients' is very concrete and real. It invites you to see a person. And in this case, the person is committing violent acts. The inevitable result of this phrasing is that it creates images of violence - 'some guy going crazy and killing someone,' as one person put it in post-experiment interviews - which make the risk feel bigger and the patient's incarceration more necessary.

People in the business of public opinion are only too aware of the influence seemingly minor linguistic changes can have. Magnetic resonance imaging (MRI), for example, was originally called 'nuclear magnetic resonance imaging,' but the 'nuclear' was dropped to avoid tainting a promising new technology with a stigmatized word. In politics, a whole industry of consultants has arisen to work on language cues like these - the Republican Party's switch from 'tax cuts' and 'estate tax' to 'tax relief' and 'death tax' being two of its more famous fruits.

The Good-Bad Rule can also wreak havoc on our rational appreciation of probabilities. In a series of experiments conducted by Yuval Rottenstreich and Christopher Hsee, then with the Graduate School of Business at the University of Chicago, students were asked to imagine choosing between $50 cash and a chance to kiss their favourite movie star. Seventy per cent said they'd take the cash. Another group of students was asked to choose between a 1 per cent chance of winning $50 cash and a 1 per cent chance of kissing their favourite movie star. The result was almost exactly the reverse: 65 per cent chose the kiss. Rottenstreich and Hsee saw the explanation in the Good-Bad Rule: The cash carries no emotional charge, and so a 1 per cent chance to win $50 feels as small as it really is; but even an imagined kiss with a movie star stirs feelings that cash does not, and so a 1 per cent chance of such a kiss looms larger. Rottenstreich and Hsee conducted further variations of this experiment that came to the same conclusion.
Then they turned to electric shocks. Students were divided into two groups, with one group told the experiment would involve some chance of a $20 loss and the other group informed that there was a risk of 'a short, painful but not dangerous shock.' Again, the cash loss is emotionally neutral. But the electric shock is truly nasty. Students were then told the chance of this bad thing happening was either 99 per cent or 1 per cent. So how much would you pay to avoid this risk? When there was a 99 per cent chance of losing $20, students said they would pay $18 to avoid this almost-certain loss. When the chance dropped to 1 per cent, they said they would pay just one dollar to avoid the risk. Any economist would love that result. It's a precise and calculated response to probability, perfect rationality. But the students asked to think of an electric shock did something a little different. Faced with a 99 per cent chance of a shock, they said they would pay $10 to stop it. But when the risk was 1 per cent, they were still willing to pay $7 to protect themselves. Clearly, the probability of being zapped had almost no influence. What mattered is that the risk of being shocked is nasty - and they felt it.

Plenty of other research shows that even when we are calm, cool, and thinking carefully, we aren't naturally inclined to look at the odds. Should I buy an extended warranty for my new giant-screen television? The first and most important question I should ask is how likely it is to break down and need repair, but research suggests there's a good chance I won't even think about that. And if I do, I won't be entirely logical about it. Certainty, for example, has been shown to have an outsize influence on how we judge probabilities: A change from 100 per cent to 95 per cent carries far more weight than a decline from 60 per cent to 55 per cent, while a jump from zero per cent to five per cent will loom like a giant over a rise from 25 per cent to 30 per cent. This focus on certainty helps explain our unfortunate tendency to think of safety in black-and-white terms - something is either safe or unsafe - when, in reality, safety is almost always a shade of grey.

And all this is true when there's no fear, anger, or hope involved. Toss in a strong emotion and people can easily become - to use a term coined by Cass Sunstein - 'probability blind.' The feeling simply sweeps the numbers away. In a survey, Paul Slovic asked people if they agreed or disagreed that a one-in-10-million lifetime risk of getting cancer from exposure to a chemical was too small to worry about. That's an incredibly tiny risk - far less than the lifetime risk of being killed by lightning and countless other risks we completely ignore. Still, one-third disagreed; they would worry. That's probability blindness. The irony is that probability blindness is itself dangerous. It can easily lead people to overreact to risks and do something stupid like abandoning air travel because terrorists hijacked four planes.

It's not just the odds that can be erased from our minds by the Good-Bad Rule. It is also costs. 'It's worth it if even one life is saved' is something we often hear of some new program or regulation designed to reduce a risk. That may be true, or it may not. If, for example, the program costs $100 million and it saves one life, it is almost certainly not worth it, because there are many other ways $100 million could be spent that would certainly save more than one life.
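To see the arithmetic behind that claim, here is a minimal illustrative sketch. The alternative programs and their costs per life saved are hypothetical placeholders, not figures from this chapter; only the $100 million budget comes from the passage above.

```python
# Hypothetical comparison of cost per life saved (all program figures are illustrative).

budget = 100_000_000  # the $100 million discussed above

cost_per_life = {
    "proposed regulation": 100_000_000,          # one life saved for the whole budget
    "hypothetical alternative A": 2_000_000,     # placeholder cost per life saved
    "hypothetical alternative B": 500_000,       # placeholder cost per life saved
}

for name, cost in cost_per_life.items():
    lives_saved = budget / cost
    print(f"{name}: about {lives_saved:,.0f} lives saved for ${budget:,}")
```

The point is not the particular numbers but the comparison itself: the same money, spent where each saved life costs less, buys far more safety.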
This sort of cost-benefit analysis is itself a big and frighteningly complex field. One of the many important insights it has produced is that, other things being equal, 'wealthier is healthier.' The more money people and nations have, the healthier and safer they tend to be. Disaster relief workers see this maxim in operation every time there is a major earthquake. People aren't killed by earthquakes. They are killed by buildings that collapse in earthquakes, and so the flimsier the buildings, the more likely people are to die. This is why earthquakes of the same magnitude may kill dozens in California but hundreds of thousands in Iran, Pakistan, or India. The disparity can be seen even within the same city. When a massive earthquake struck Kobe, Japan, in 1995, killing 6,200 people, the victims were not randomly distributed across the city and region. They were overwhelmingly people living in poor neighbourhoods.

Government regulations can reduce risk and save lives. California's buildings are as tough as they are in part because building codes require them to be. But regulations can also impose costs on economic activity, and since wealthier is healthier, economic costs can, if they are very large, put more lives at risk than they keep safe. Many researchers have tried to estimate how much regulatory cost is required to 'take the life' of one person but the results are controversial. What's broadly accepted, however, is the idea that regulations can inflict economic costs and economic costs can reduce health and safety. We have to account for that if we want to be rational about risk.

We rarely do, of course. As political scientist Howard Margolis describes in Dealing With Risk, the public often demands action on a risk without giving the slightest consideration to the costs of that action. When circumstances force us to confront those costs, however, we may change our minds in a hurry. Margolis cites the case of asbestos in New York City's public schools, which led to a crisis in 1993 when the start of the school year had to be delayed several weeks because work to assess the perceived danger dragged on into September. Parents had overwhelmingly supported this work. Experts had said the actual risk to any child from asbestos was tiny, especially compared to the myriad other problems poor kids in New York faced, and the cost would be enormous. But none of that mattered. Like the cancer it can cause, asbestos has the reputation of a killer. It triggers the Good-Bad Rule and once that happens, everything else is trivia. 'Don't tell us to calm down!' one parent shouted at a public meeting. 'The health of our children is at stake.' But when the schools failed to open in September, it was a crisis of another kind for the parents. Who was going to care for their kids? For poor parents counting on the schools opening when they always do, it was a serious burden. 'Within three weeks,' Margolis writes, 'popular sentiment was overwhelmingly reversed.'

Experiences like this, along with the research on the role of emotion in judgment, have led Slovic and other risk researchers to draw several conclusions. One is that experts are wrong to think they can ease fears about a risk simply by 'getting the facts out.' If an engineer tells people they shouldn't worry because the chance of the reactor melting down and spewing vast radioactive clouds that would saturate their children and put them at risk of cancer . . . Well, they won't be swayed by the odds.
Only the rational mind - Head - cares about odds and, as we have seen, most people are not accustomed to the effort required for Head to intervene and correct Gut. Our natural inclination is to go with our intuitive judgment.

Another important implication of the Good-Bad Rule is something it shares with the Rule of Typical Things: It makes us vulnerable to scary scenarios. Consider the story told by the Bush administration in support of the invasion of Iraq. It was possible Saddam Hussein would seek to obtain the materials to build nuclear weapons. It was possible he would start a nuclear weapons program. It was possible the program would successfully create nuclear weapons. It was possible Saddam would give those weapons to terrorists. It was possible that terrorists armed with nukes would seek to detonate them in an American city, and it was possible they would succeed. All these things were possible, but a rational assessment of this scenario would examine the odds of each of these events occurring on the understanding that if even one of them failed to occur, the final disaster would not happen. But that's not how Gut would analyze it with the Good-Bad Rule. It would start at the other end - an American city reduced to radioactive rubble, hundreds of thousands dead, hundreds of thousands more burned and sick - and it would react. This is an Awful Thing. And that feeling would not only colour the question of whether this is likely or not, it would overwhelm it, particularly if the scenario were described in vivid language - language such as the White House's oft-repeated line, 'we don't want the smoking gun to be a mushroom cloud.'

Like terrorists armed with a nuclear weapon, an asteroid can also flatten a city. But asteroids are only rocks. They are not wrapped in the cloak of evil as terrorists are, nor are they stigmatized like cancer, asbestos or nuclear power. They don't stir any particular emotion and so they don't engage the Good-Bad Rule and overwhelm our sense of how very unlikely they are to hurt us. The Example Rule doesn't help, either. The only really massive asteroid impact in the modern era was the Tunguska event, which happened a century ago in a place so remote only a handful of people saw it. There have been media reports of 'near misses' and a considerable amount of attention paid to astronomers' warnings, but while these may raise conscious awareness of the issue, they're very different from the kind of concrete experience our primal brains are wired to respond to. Many people also know of the theory that an asteroid wiped out the dinosaurs but that's no more real and vivid in our memories than the Tunguska event, and so the Example Rule would steer Gut to conclude that the risk is tinier than it actually is. There is simply nothing about asteroids that could make Gut sit up and take notice. We don't feel the risk. For that reason, Paul Slovic told the astronomers at the Tenerife conference, 'it will be hard to generate concern about asteroids unless there is an identifiable, certain, imminent, dreadful threat.' And of course, when there is an identifiable, certain, imminent, dreadful threat, it will probably be too late to do anything about it.

Still, does that matter? It is almost certain that the earth will not be hit by a major asteroid in our lifetime or that of our children.
If we don't take the astronomers' advice and buy a planetary insurance policy, we'll collectively save a few bucks and we will almost certainly not regret it. But still - it could happen. And the $400 million cost of the insurance policy is very modest relative to how much we spend coping with other risks. For that reason, Richard Posner, a U.S. appeals court judge and public intellectual known for his hard-nosed economic analysis, thinks the astronomers should get their funding. 'The fact that a catastrophe is very unlikely to occur is not a rational justification for ignoring the risk of its occurrence,' he wrote.

The particular catastrophe that prompted Posner to write those words wasn't an asteroid strike, however. It was the Indian Ocean tsunami of 2004. Such an event had not happened in the region in all of recorded history and the day before it actually occurred experts would have said it almost certainly would not happen in our lifetime or that of our children. But experts would also have said - and in fact did say, in several reports - that a tsunami warning system should be created in the region because the cost is modest. The experts were ignored and 230,000 people died.

That disaster occurred three weeks after the Canary Islands conference on asteroids ended. Hours after waves had scoured coastlines from Indonesia to Thailand and Somalia, Slava Gusiakov, a Russian expert on tsunamis who had attended the conference, sent an emotional e-mail to colleagues. 'We were repeatedly saying the words low-probability/high-consequence event,' he wrote. 'It just happened.'

A Story About Numbers

Japanese prostitutes were the first women to connect silicone and plumper breasts. It was the 1950s, and American servicemen in Japan preferred breasts like the ones they knew back home, so prostitutes had themselves injected with silicone or liquid paraffin. The manufactured silicone breast implant followed in the early 1960s. In 1976, the United States Food and Drug Administration was given authority over medical devices, which meant the fda could require manufacturers to provide evidence that a device is safe in order to get permission to sell it. Breast implants were considered medical devices but because they had been sold and used for so many years without complaints, the fda approved their continued sale without any further research. It seemed the reasonable thing to do.

The first whispers of trouble came from Japanese medical journals. Some Japanese women were being diagnosed with connective-tissue diseases - afflictions like rheumatoid arthritis, fibromyalgia, and lupus. These women had also been injected, years before, with silicone, and doctors suspected the two facts were linked. In 1982, an Australian report described three women with silicone breast implants and connective-tissue diseases. What this meant wasn't clear. It was well known implants could leak or rupture but could silicone seep into the body and cause these diseases? Some were sure that was happening. The same year as the Australian report, a woman in San Francisco sued implant manufacturers, demanding millions of dollars for making her sick. The media reported both these stories widely, raising concerns among more women and more doctors. More cases appeared in the medical literature. The number of diseases associated with implants grew. So did the number of stories in the media. Fear spread. In 1990, an episode of Face to Face With Connie Chung aired on cbs.
Tearful women told stories of pain, suffering, and loss. They blamed their silicone implants. And Chung agreed. First came the implants, then came the disease. What more needed to be said? The tone of the widely watched episode was angry and accusatory, with much of the blame focused on the fda.

That broke the dam. Stories linking implants with disease - with headlines like 'Toxic Breasts' and 'Ticking Time Bombs' - flooded the media. A Congressional hearing was held. Advocacy groups - including Ralph Nader's Public Citizen - made implants a top target. Feminists - who considered breast augmentation to be 'sexual mutilation,' in the words of best-selling writer Naomi Wolf - attacked implants as a symbol of all that was wrong with modern society. Under intense pressure, the fda told manufacturers in early 1992 that they had 90 days to provide evidence that implants were safe. The manufacturers cobbled together what they could but the fda felt it was inadequate. Meanwhile, a San Francisco jury awarded $7.34 million to a woman who claimed her implants, manufactured by Dow Corning, had given her mixed connective-tissue disease. The fda banned silicone breast implants in April 1992, although it emphasized that the implants were being banned only because they had yet to be proved safe, as the manufacturers were required to do, not because they had been proved unsafe. The roughly one million American women with the implants shouldn't worry, the fda chief insisted.

But they did worry. Along with the successful lawsuit, the fda ban was seen as proof that the implants were dangerous. The media filled with stories of suffering, angry women and 'the trickle of lawsuits became a flood,' wrote Marcia Angell, editor of the New England Journal of Medicine at the time and the author of the definitive book on the crisis, Science on Trial: The Clash Between Medical Science and the Law in the Breast Implant Case. In 1994, the manufacturers agreed to the largest class-action settlement in history. A fund was created with $4.25 billion, including $1 billion for the lawyers who had turned implant lawsuits into a veritable industry. As part of the deal, women would have to produce medical records showing that they had implants and one of the many diseases said to be caused by implants but they didn't have to produce evidence that the disease actually was caused by the implants - either in their case or in women generally. 'Plaintiffs' attorneys sometimes referred clients to clinicians whose practice consisted largely of such patients and whose fees were paid by the attorneys,' wrote Angell. 'Nearly half of all women with breast implants registered for the settlement, and half of those claimed to be currently suffering from implant-related illnesses.' Not even the mammoth settlement fund could cover this. Dow Corning filed for bankruptcy and the settlement collapsed.

The transformation of silicone implants was complete. Once seen as innocuous objects no more dangerous than silicone contact lenses, implants were now a mortal threat. In surveys Paul Slovic conducted around this time, most people rated the implants 'high risk.' Only cigarette smoking was seen as more dangerous. And yet, at this point, there was still no scientific evidence that silicone breast implants actually cause connective-tissue disease or any other disease. As late as 1994, there wasn't even a single epidemiological survey.
'What we saw in the courtroom and in much of the media,' wrote Angell, 'were judgments based on anecdote and speculation.' This dramatic sequence of events was driven by many things, naturally, but the most critical was not the chemistry of silicone, the biology of breasts, the tenacity of activists, the rapaciousness of lawyers, the callousness of corporations, or the irresponsible sensationalism of the media. No, the most fundamental factor was the simple fact that humans are good with stories and bad with numbers.

Every journalist knows that people respond very differently to numbers and stories. A news story that says an event has taken the lives of many people may be able to get a reader's attention for a brief moment, but it needs more to keep it. Think of reports like 'A bus overturned in the Peruvian Andes today, killing 35.' Or 'Flooding in Bangladesh continues - aid groups believe thousands have perished.' These reports scarcely pause the coffee cup at our lips. They are hollow, meaningless. The fact that they're often about people far away may contribute to our lack of concern but more important is their content: They are facts and numbers. If I add some graphic descriptions (the bus tumbled down a mountain pass) or vivid images (survivors clinging to wreckage as corpses float by in the flood water) I am far more likely to draw in readers or viewers. But even that connection will be fleeting. To really grab people's attention, to make them think and feel, the journalist has to make the story personal.

I once sat in a Mexican hotel room idly watching a cnn story about severe flooding in the capital of Indonesia - scores dead, hundreds of thousands homeless - when I turned the channel and saw, at the bottom of the screen of a Spanish-language station, this urgent bulletin: Anna Nicole Smith muere! I know only a few words of Spanish but muere is one of them. I was shocked. 'Anna Nicole Smith is dead,' I called out to my wife in the bathroom. I did not inform her of the Indonesian floods, needless to say, although by any rational measure that story was vastly more important than the untimely loss of a minor celebrity. But Anna Nicole Smith was an identifiable person; the dead in Indonesia were statistics. And the loss of an identifiable person can move us in ways that statistical abstractions cannot. That's just human nature.

Almost 3,000 people were killed that sunny morning in September 2001, but what does that statistic make us feel? It is big, certainly. But it is a cold, empty number. In itself, it makes us feel little or nothing. The best it can do is remind us of the images of the day - the explosion, the collapsing towers, the survivors shuffling through scattered paper and ash - that are infused with the emotions the number lacks. Still more potent are images of a single person, such as the horrifying photo of a man falling head first to his death or the businessman walking away with his briefcase and empty eyes. Then there are the personal stories - like that of Diana O'Connor, 37, the 15th of 16 children in a Brooklyn family, who had worked at three jobs to pay her way through college and whose drive to succeed earned her an executive's office high up in the World Trade Center. Diana O'Connor may have been only one of thousands to die that day but her story, told in a way that allows us to imagine this one person, can move us in a way that the phrase 'almost 3,000 were killed' never can.
There's a reason that statistics have been called 'people with the tears dried off.'

The power of personal stories explains the standard format of most feature reports in newspapers and television: Introduce a person whose story is moving, connect that story to the larger subject at hand, discuss the subject with statistics and analysis, and close by returning to the person with the moving story. It's a sugar-coated pill, and done well it is journalism at its best. It connects the reader emotionally but it also provides the intellectual substance needed to really understand an issue. It is, however, a lot easier to tell someone's touching story and skip the stuff in the middle, and the delightful thing - delightful for the lazy journalist, that is - is that a touching story minus analysis is just as likely to grab and hold the attention of readers and viewers as a touching story with excellent analysis.

People love stories about people. We love telling them and we love hearing them. It's a universal human trait, and that suggests to evolutionary psychologists that storytelling - both the telling and the listening - is actually hard-wired into the species. For that to be true, there must be evolutionary advantages to storytelling. And there are. Storytelling is a good way to swap information, for one thing, which allows people to benefit from each other's experiences. And storytelling is intensely social. Robin Dunbar of the University of Liverpool noted that while chimpanzees don't tell stories, they do spend about 20 per cent of each day picking ticks from each other's fur. They aren't being fastidious; they're being social. Grooming is what chimpanzees and other social primates do to form and maintain personal bonds. Like chimps, humans are social primates. But our hunter-gatherer ancestors lived in larger bands than chimpanzees and our ancestors would have had to spend as much as 50 per cent of their time picking ticks if they were to bond as chimps do. Talking, on the other hand, is something we can do with many people at the same time. We can even talk while doing other things. That makes chat the ideal replacement for tick-picking. Studies of the ordinary daily conversations of modern humans, Dunbar notes, find that little of it is instructional. Most is personal chit-chat - people telling stories about people.

Storytelling can also be a valuable form of rehearsal. 'If survival in life is a matter of dealing with an often inhospitable physical universe, and [of] dealing with members of our own species, both friendly and unfriendly, there would be a general benefit to be derived from imaginatively exercising the mind in order to prepare it for the next challenge,' writes philosopher Denis Dutton. 'Story-telling, on this model, is a way of running multiple, relatively cost-free experiments with life in order to see, in the imagination, where courses of action may lead. Although narrative can deal with the challenges of the natural world, its usual home is, as Aristotle also understood, in the realm of human relations.' Shakespeare may have as much to tell us about psychology as psychologists do, which is why we respond to his plays as we do. When Iago whispers in the ear of Othello and Othello's love for Desdemona turns to hate, and hate to murder, we sense that, yes, this could happen. This is what jealousy and distrust can do. This is true.

But sometimes stories are not true, or at least they are an incomplete guide to what is true.
The stories that led to the banning of silicone breast implants were deeply personal and painful. And there were so many. It seemed so obviously true that implants cause disease. It felt true. Gut said so. 'There are thousands upon thousands of women who have breast implants and complain of terrible pain,' Cokie Roberts reported on abc News's Nightline in 1995. 'Can they all be wrong?'

The answer to that was: Possibly. At the time implants were banned, there were roughly 100 million adult women in the United States. Of those, about 1 per cent had implants and 1 per cent had connective-tissue disease. So 'we could expect by coincidence alone that 10,000 would have both,' Marcia Angell noted - 1 per cent of 1 per cent of 100 million. The tragic stories of women who got silicone breast implants and who suffered connective-tissue disease did not, and could not, demonstrate that the implants caused the disease. What was needed were epidemiological studies to determine whether the rate of disease among women with implants was higher than it was among women without implants. If it was, that wouldn't definitively prove that implants cause disease - there could be a third factor connecting the two - but it would be solid grounds for suspicion and further investigation. But there weren't any epidemiological studies. Scientists opposed to the ban made this point repeatedly. So did the fda, which insisted all along that it was only banning the implants while it awaited word from the epidemiologists. The risk hasn't been proved, the fda emphasized. There is no evidence. This outraged activist groups, whose slogan became 'we are the evidence!' No one could doubt their sincerity but passion and pain are no substitute for reason, and reason said there was no evidence.

Anecdotes aren't data: That's a favourite expression of scientists. Anecdotes - stories - may be illuminating in the manner of Shakespeare. They may also alert us to something that needs scientific investigation. The proliferating stories of breast implants causing disease were certainly grounds for concern and aggressive research. But anecdotes don't prove anything. Only data - properly collected and analyzed - can do that. This has always been true but the advance of science and technology has made it all the more important. We can now measure in microns and light-years and detect in parts per billion. Information and numbers are piling up. To really understand this proliferating information, we must do much more than tell stories.

Unfortunately, what isn't increasing is Gut's skill in handling numbers. Shaped in a world of campfires and flint spears, our intuition is as innately lousy with numbers as it is good with stories. Stanislas Dehaene, a neuroscientist at the Collège de France, notes that animals as varied as dolphins and rats have a very basic grasp of numbers. They can easily tell the difference between two and four and they 'have elementary addition and subtraction abilities.' But as the numbers go up, their abilities go down rapidly. Even numbers as low as six and seven require more time and effort to grasp and use than one or two. It turns out humans' innate skill with numbers isn't much better than that of rats and dolphins. 'We are systematically slower to compute, say, 4 + 5 than 2 + 3,' writes Dehaene. And just as animals have to slow down and think to discriminate between close quantities such as 7 and 8, 'it takes us longer to decide that 9 is larger than 8 than to make the same decision for 9 versus 2.'
Of course humans also have the capacity to move beyond this stage but the struggle every schoolchild has learning the multiplication tables is a reminder of the limits of our natural grasp of numbers. 'Sadly enough, innumeracy may be our normal human condition,' writes Dehaene, 'and it takes considerable effort to become numerate.' How many of us make that effort isn't clear. A Canadian polling company once asked people how many millions there are in a billion. Forty-five per cent didn't know. So how will they react when they're told that the arsenic levels in their drinking water are three parts per billion? Even an informed layperson will have to gather more information and think hard to make sense of that information. But those who don't know what a billion is can only look to Gut for an answer and Gut doesn't have a clue what a billion is. Gut does, however, know that arsenic is a Bad Thing: Press the panic button.

The influence of our ancestral environment is not limited to the strictly innumerate, however. Physicist Herbert York once explained that the reason he designed the nuclear warhead of the Atlas rocket to be one megaton was that one megaton is a particularly round number. 'Thus the actual physical size of the first Atlas warhead and the number of people it would kill were determined by the fact that human beings have two hands with five fingers each and therefore count by tens.'

Numeracy also fails to give numbers the power to make us feel. Charities long ago learned that appeals to help one, identifiable person are far more compelling than references to large numbers of people in need. 'If I look at the mass, I will never act,' wrote Mother Teresa. 'If I look at the one, I will.' The impotence of numbers is underscored by our reactions to death. If the death of one is a tragedy, the death of a thousand should be a thousand times worse, but our feelings simply do not work that way. In the early years of the 1980s, reporting on aids was sparse despite the steadily growing number of victims. That changed in July 1985, when the number of newspaper articles on aids published in the United States soared 500 per cent. The event that changed everything was Rock Hudson's announcement that he had aids: His familiar face did what no statistic could. 'The death of one man is a tragedy, the deaths of millions is a statistic,' said that expert on death, Joseph Stalin.

Numbers may even hinder the emotions brought out by the presence of one, suffering person. Paul Slovic, Deborah Small, and George Loewenstein set up an experiment in which people were asked to donate to African relief. One appeal featured a statistical overview of the crisis, another profiled a seven-year-old girl, and a third provided both the profile and the statistics. Not surprisingly, the profile generated much more giving than the statistics alone, but it also did better than the combined profile-and-statistics pitch - as if the numbers somehow interfered with the empathetic urge to help generated by the profile of the little girl.

Of course, big numbers can impress, which is why activists and politicians are so keen on using them. But big numbers impress by size alone, not by human connection. Imagine standing at mid-field of a stadium filled with 30,000 people. Impressive? Certainly. That's a lot of people. Now imagine the same scenario but with 90,000 people. Again, it's impressive, but it's not three times more impressive because our feelings aren't calibrated to that scale.
The first number is big. The second number is big. That's the best Gut can do.

A curious side effect of our inability to feel large numbers - confirmed in many experiments - is that proportions can influence our thoughts more than simple numbers. When Paul Slovic asked groups of students to indicate, on a scale from 0 to 20, to what degree they would support the purchase of airport safety equipment, he found they expressed much stronger support when told that the equipment could be expected to save 98 per cent of 150 lives than when they were told it would save 150 lives. Even saving '85 per cent of 150 lives' garnered more support than saving 150 lives. The explanation lies in the lack of feeling we have for the number 150. It's vaguely good, because it represents people's lives, but it's abstract. We can't picture 150 lives and so we don't feel 150 lives. We can feel proportions, however. Ninety-eight per cent is almost all. It's a cup filled nearly to overflowing. And so we find saving 98 per cent of 150 lives more compelling than saving 150 lives.

Daniel Kahneman and Amos Tversky underscored the impotence of statistics in a variation of the famous 'Linda' experiment. First, people were asked to read a profile of a man that detailed his personality and habits. Then they were told that this man was drawn from a group that consisted of 70 engineers and 30 lawyers. Now, the researchers asked, based on everything you know, is it more likely this man is a lawyer or an engineer? Kahneman and Tversky ran many variations of this experiment and in every one, the statistics - 70 engineers, 30 lawyers - mattered less than the profile.

Statistical concepts may be even less influential than numbers. Kahneman once discovered that an Israeli flight instructor had concluded, based on personal experience, that criticism improves performance while praise reduces it. How had he come to this strange conclusion? When student pilots made particularly good landings, he praised them - and their subsequent landings were usually not as good. But when they made particularly bad landings, he criticized them - and the subsequent landings got better. Therefore, he concluded, criticism works but praise doesn't. What this intelligent, educated man had failed to account for, Kahneman noted, was 'regression to the mean': If an unusual result happens, it is likely to be followed by a result closer to the statistical average. So a particularly good landing is likely to be followed by a landing that's not as good, and a particularly bad landing is likely to be improved on next time. Criticism and praise have nothing to do with the change. It's just numbers. But because we have no intuitive sense of regression to the mean, it takes real mental effort to catch this sort of mistake.

The same is true of the statistical concept of sample bias. Say you want to know what Americans think of the job the president is doing. That should be simple enough. Just ask some Americans. But which Americans you ask makes all the difference. If you go to a Republican rally and ask people as they leave, it's pretty obvious that your sample will be biased (whether the president is a Republican or a Democrat) and it will produce misleading conclusions about what Americans think. The same would be true if you surveyed only Texans, or Episcopalians, or yoga instructors. The bias in each case will be different and sometimes the way it skews the numbers may not be obvious.
But by not properly sampling the population you are interested in - all Americans - the results you obtain will be distorted and unreliable. Pollsters typically avoid this hazard by randomly selecting telephone numbers from the entire population whose views are being sought, which creates a legitimate sample and meaningful results. (Whether the sample is biased by the increasing rate at which people refuse to answer surveys is another matter.)

In the silicone breast implant scare, what the media effectively did was present a deeply biased sample. Story after story profiled sick women who blamed their suffering on implants. Eventually, the number of women profiled was in the hundreds. Journalists also reported the views of organizations that represented thousands more women. Cumulatively, it looked very impressive. These were big numbers and the stories - I got implants then I got sick - were frighteningly similar. How could you not think there was something to this? But the whole exercise was flawed because healthy women with implants didn't have much reason to join lobby groups or call reporters, and reporters made little effort to find these women and profile them because 'Woman Not Sick' isn't much of a headline. And so, despite the vast volume of reporting on implants, it no more reflected the health of all women with breast implants than a poll taken at a Republican rally would reflect the views of all Americans.

Our failure to spot biased samples is a product of an even more fundamental failure: We have no intuitive feel for the concept of randomness. Ask people to put 50 dots on a sheet of paper in a way that is typical of random placement and they're likely to evenly disperse them - not quite in lines and rows but evenly enough that the page will look balanced. Show people two sets of numbers - 1, 2, 3, 4, 5, 6 and 10, 13, 19, 25, 30, 32 - and they'll say the second set is more likely to come up in a lottery. Have them flip a coin and if it comes up heads five times in a row, they will have a powerful sense that the next flip is more likely to come up tails than heads. All these conclusions are wrong because they're all based on intuitions that don't grasp the nature of randomness. Every flip of a coin is random - as is every spin of the roulette wheel or pull on a slot machine's arm - and so any given flip has an equal chance of coming up heads or tails; the belief that a long streak increases the chances of a different result on the next go is a mistake called the gambler's fallacy. As for lotteries, each number is randomly selected and so something that looks like a pattern - 1, 2, 3, 4, 5, 6 - is as likely to occur as any other result. And it is fantastically unlikely that 50 dots randomly distributed on paper would wind up evenly dispersed; instead, thick clusters of dots will form in some spots while other portions of the paper will be dot-free.

Misperceptions of randomness can be tenacious. Amos Tversky, Tom Gilovich, and Robert Vallone famously analyzed basketball's 'hot hand' - the belief that a player who has sunk his last two, three, or four shots has the 'hot hand' and is therefore more likely to sink his next shot than if he has just missed a shot - and proved with rigorous statistical analysis that the 'hot hand' is a myth. For their trouble, the psychologists were mocked by basketball coaches and fans across the United States.
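Anyone with a computer can check the coin-flip intuition directly. The sketch below simulates a million flips and asks what follows a run of five heads; the specifics (Python, one million flips) are mine, not drawn from the studies above.

    import random

    random.seed(1)
    flips = [random.random() < 0.5 for _ in range(1_000_000)]  # True means heads

    # Collect every flip that immediately follows five heads in a row.
    after_streak = [flips[i] for i in range(5, len(flips)) if all(flips[i - 5:i])]

    print(len(after_streak))                      # tens of thousands of such streaks
    print(sum(after_streak) / len(after_streak))  # ~0.5: tails is never 'due'

Two things fall out of it: streaks of five heads are routine in genuinely random sequences, and the flip that follows them is still a coin toss.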
Our flawed intuitions about randomness generally produce only harmless foibles like beliefs in 'hot hands' and Aunt Betty's insistence that she has to play her lottery numbers next week because the numbers she has played for 17 years have never come up so they're due any time now. Sometimes, though, there are serious consequences. One reason that people often respond irrationally to flooding - why rebuild in the very spot where you were just washed out? - is their failure to grasp randomness. Most floods are, in effect, random events. A flood this year says nothing about whether a flood will happen next year. But that's not what Gut senses. A flood this year means a flood next year is less likely. And when experts say that this year's flood is the 'flood of the century' - one so big it is expected to happen once every 100 years - Gut takes this to mean that another flood of similar magnitude won't happen for decades. The fact that a 'flood of the century' can happen three years in a row just doesn't make intuitive sense. Head can understand that, with a little effort, but not Gut.

Murder is a decidedly non-random event but in cities with millions of people the distribution of murders on the calendar is effectively random (if we set aside the modest influence that seasonal changes in the weather can have in some cities). And because it's random, clusters will occur - periods when far more murders than average happen. Statisticians call this Poisson clumping, after the French mathematician Siméon-Denis Poisson, who came up with a calculation that distinguishes between the clustering one can expect purely as a result of chance and clustering caused by something else. In the book Struck by Lightning, University of Toronto mathematician Jeffrey Rosenthal recounts how five murders in Toronto that fell in a single week generated a flurry of news stories and plenty of talk about crime getting out of control. The city's police chief even said it proved the justice system was too soft to deter criminals. But Rosenthal calculated that Toronto, with an average of 1.5 murders per week, had 'a 1.4 per cent chance of seeing five homicides in a given week, purely by chance. So we should expect to see five homicides in the same week once every 71 weeks - nearly once a year!' The same calculation showed that there is a 22 per cent chance of a week being murder-free purely by chance, and Toronto often does experience murder-free weeks, Rosenthal noted. 'But I have yet to see a newspaper headline that screams "No Murders This Week!"'

Cancer clusters are another frightening phenomenon that owes much to our inability to see randomness. Every year in developed countries, public health authorities field calls from people convinced that their town's eight cases of leukemia or five cases of brain cancer cannot possibly be the result of mere chance. And people always know the real cause. It is pesticides on farm fields, radiation from the region's nuclear plant, or toxins seeping out of a nearby landfill. In almost every case, they don't have any actual evidence linking the supposed threat to the cancers. The mere fact that a suspicious cancer rate exists near something suspect is enough to link them in most people's minds. In the great majority of these panics, officials do some calculations and find that the rate of illness can easily be the result of chance alone. This is explained and that's the end of it.
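The calculation the officials do is essentially the one Rosenthal applied to murder weeks, and it fits in a few lines. A minimal sketch using the Toronto figures quoted above (the code itself is mine, not Rosenthal's):

    import math

    def poisson_pmf(k, lam):
        # Probability of exactly k events in an interval that averages lam events.
        return math.exp(-lam) * lam ** k / math.factorial(k)

    lam = 1.5  # Toronto's average murders per week, per Rosenthal
    print(round(poisson_pmf(0, lam), 3))  # ~0.223: a 22 per cent chance of a murder-free week
    print(round(poisson_pmf(5, lam), 3))  # ~0.014: a 1.4 per cent chance of five in one week
    # 1 / 0.014 is about 71, so a five-murder week should turn up roughly every
    # 71 weeks by chance alone - no crime wave required.

The same arithmetic, fed with different averages, is what turns a town's eight leukemia cases from a sinister pattern into a number well within the range chance can produce.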
But sometimes - usually when residents who don't trust the official explanation take their worries to the media and politicians get involved - full-scale investigations are launched. Almost always, nothing is found. Residents and activists have been known to reject even these findings but their suspicions say far more about the power of Gut-based judgments than they do about cancer.

I don't want to overstate Gut's failings. Even in this world of satellites and microchips, intuition still gets many things right. It's also important to remember that science and statistics have their own limitations. They never fully eliminate uncertainty, for one. Statistics may tell us that an apparent cancer cluster could be a product of chance but they can't tell us that it is a product of chance. And even the most thorough epidemiological studies cannot absolutely prove that farmers' pesticides are or are not causing cancer - they can only suggest it, sometimes weakly, sometimes strongly, but always with some degree of uncertainty. In all forms of scientific inquiry, hard facts and strong explanations are built up slowly, and only with great effort. 'Sometimes Gut does get it right, even before science does,' Paul Slovic notes. 'Other times Gut's intuitions turn science on to a problem that needs examining. Often, science's best answer contains much uncertainty. In such cases, if the benefits are not great and the risks are scary, it may be best to go with Gut.' At least until science tells us more.

It's also heartening to know there is evidence that we can, with a little effort, make ourselves much less vulnerable to Gut's weaknesses. In a series of four studies, a team of psychologists led by Ellen Peters, a colleague of Slovic's at Decision Research, examined whether numeracy makes any difference to the mistakes Gut tends to make. It did, in a big way. The studies repeated several well-known experiments - including some mentioned earlier in this book - but this time participants were also tested to see how skilled they were with numbers and math. The results were unequivocal: The more numerate people were, the less likely they were to be tripped up by Gut's mistakes. It's not clear whether this effect is the result of a numerate person's Head being better able to intervene and correct Gut or if numeracy, like golf, is a skill that can be learned by the conscious mind and then transferred, with lots of practice, to the unconscious mind. But in either case, numeracy helps.

Much less encouraging is what Peters found when she tested the numeracy levels of the people in her experiments. Only 74 per cent were able to answer this question correctly: 'If Person A's chance of getting a disease is 1 in 100 in 10 years, and Person B's risk is double that of Person A, what is B's risk?' Sixty-one per cent got this question right: 'Imagine that we roll a fair, six-sided die 1,000 times. Out of 1,000 rolls, how many times do you think the die will come up even (2, 4, or 6)?' And just 46 per cent figured out this one: 'In the Acme Publishing Sweepstakes, the chance of winning a car is one in 1,000. What per cent of tickets of Acme Publishing Sweepstakes win a car?' (The answers: 2 in 100, 500, and 0.1 per cent.) Peters's test subjects were university students. When even a nation's university-educated elite has such a weak grasp of the numbers that define risk, that nation is in danger of getting risk very wrong.

The breast-implant panic was at its peak in June 1994, when science finally delivered.
A Mayo Clinic epidemiological survey published in the New England Journal of Medicine found no link between silicone implants and connective-tissue disease. More studies followed, all with similar results. Finally, Congress asked the Institute of Medicine (I.O.M.), the medical branch of the National Academies of Science, to survey the burgeoning research. In 1999, the I.O.M. issued its report. 'Some women with breast implants are indeed very ill and the I.O.M. committee is very sympathetic to their distress,' the report concluded. 'However, it can find no evidence that these women are ill because of their implants.'

In June 2004, Dow Corning emerged from nine years of bankruptcy. As part of its reorganization plan, the company created a fund of more than $2 billion in order to pay off more than 360,000 claims. Given the state of the evidence, this might seem like an unfair windfall for women with implants. It was unfair to Dow Corning, certainly, but it was no windfall. Countless women had been tormented for years by the belief that their bodies were contaminated and they could soon sicken and die. In this tragedy, only the lawyers won.

In November 2006, the Food and Drug Administration lifted the ban on silicone breast implants. The devices can rupture and cause pain and inflammation, the fda noted, but the very substantial evidence to date does not indicate that they pose a risk of disease. Anti-implant activists were furious. They remain certain that silicone breast implants are deadly and it seems nothing can convince them otherwise.

The Herd Senses Danger

You are a bright, promising young professional and you have been chosen to participate in a three-day project at the Institute of Personality Assessment and Research at the University of California in sunny Berkeley. The researchers say they are interested in personality and leadership and so they have brought together an impressive group of 100 to take a closer look at how exemplary people like you think and act. A barrage of questions, tests, and experiments follows, including one exercise in which you are asked to sit in a cubicle with an electrical panel. Four other participants sit in identical cubicles next to you, although you cannot see each other. Slides will appear on the panel that will ask you questions, you are told, and you can answer with the switches on the panel. Each of the panels is connected to the others so you can all see each other's answers, although you cannot discuss them. The order in which you will answer will vary.

The questions are simple enough at first. Geometric shapes appear and you are asked to judge which is larger. At the beginning, you are the first person directed to respond. Then you are asked to be the second to answer, which allows you to see the first person's response before you give yours. Then you move to the number three spot. There's nothing that takes any careful consideration at this point so things move along quickly. Finally, you are the last of the group to answer. A slide appears with five lines on it. Which line is longest? It's obvious the longest is number four but you have to wait before you can answer. The first person's answer pops up on your screen: number five. That's odd, you think. You look carefully at the lines. Number four is obviously longer than number five. Then the second answer appears: number five. And the third answer: number five. And the fourth: number five. Now it's your turn to answer. What will it be? You clearly see everyone is wrong.
You shouldn't hesitate to flip the switch for number four. And yet there's a good chance you won't. When this experiment was conducted by Richard Crutchfield and colleagues in the spring of 1953, 15 people out of 50 ignored what they saw and went with the consensus.

Crutchfield's work was a variation on experiments conducted by Solomon Asch in the same era. In one of psychology's most famous experiments, Asch had people sit together in groups and answer questions that supposedly tested visual perception. Only one person was the actual subject of the experiment, however. All the others were instructed, in the later stages, to give answers that were clearly wrong. In total, the group gave incorrect answers 12 times. Three-quarters of Asch's test subjects abandoned their own judgment and went with the group at least once. Overall, people conformed to an obviously false group consensus one-third of the time.

We are social animals and what others think matters deeply to us. The group's opinion isn't everything; we can buck the trend. But even when the other people involved are strangers, even when we are anonymous, even when dissenting will cost us nothing, we want to agree with the group. And that's when the answer is instantly clear and inarguably true. Crutchfield's experiment involved slightly more ambiguous questions, including one in which people were asked if they agreed with the statement 'I believe we are made better by the trials and hardships of life.' Among subjects in a control group that was not exposed to the answers of others, everyone agreed. But among those in the experiment who thought that everyone else disagreed with the statement, 31 per cent said they did not agree. Asked whether they agreed with the statement 'I doubt whether I would make a good leader,' every person in the control group rejected it. But when the group was seen to agree with the statement, 37 per cent of people went along with the consensus and agreed that they doubted themselves. Crutchfield also designed three questions that had no right answer. They included a series of numbers that subjects were asked to complete, which was impossible because the numbers were random. In that case, 79 per cent of participants did not guess or otherwise struggle to come up with their own answer. They simply went with what the group said.

These studies of conformity are often cited to cast humans as sheep, and it certainly is disturbing to see people set aside what they clearly know to be true and say what they know to be false. That's all the more true from the perspective of the early 1950s, when Asch and Crutchfield conducted their classic experiments. The horror of fascism was a fresh memory and communism was a present threat. Social scientists wanted to understand why nations succumbed to mass movements, and in that context it was chilling to see how easy it is to make people deny what they see with their own eyes. But from an evolutionary perspective, the human tendency to conform is not so strange. Individual survival depended on the group working together and cooperation is much more likely if people share a desire to agree. A band of doubters, dissenters, and proud nonconformists would not do so well hunting and gathering on the plains of Africa. Conformity is also a good way to benefit from the pooling of information.
One person knows only what he knows but 30 people can draw on the knowledge and experience of 30, and so when everyone else is convinced there are lions in the tall grass it's reasonable to set aside your doubts and take another route back to camp. The group may be wrong, of course. The collective opinion may have been unduly influenced by one person's irrational opinion or by bad or irrelevant information. But still, other things being equal, it's often best to follow the herd.

It's tempting to think things have changed. The explosion of scientific knowledge over the last five centuries has provided a new basis for making judgments that is demonstrably superior to personal and collective experience. And the proliferation of media in the last several decades has made that knowledge available to anyone. There's no need to follow the herd. We can all be fully independent thinkers now.

Or rather, we can be fully independent thinkers if we understand the following sentence, plucked from the New England Journal of Medicine: 'In this randomized, multicenter study involving evaluators who were unaware of treatment assignments, we compared the efficacy and safety of posaconazole with those of fluconazole or itraconazole as prophylaxis for patients with prolonged neutropenia.' And this one from a physics journal: 'We evaluate the six-fold integral representation for the second-order exchange contribution to the self-energy of a dense three-dimensional electron gas on the Fermi surface.' And then there's this fascinating insight from a journal of cellular biology: 'Prior to microtubule capture, sister centromeres resolve from one another, coming to rest on opposite surfaces of the condensing chromosome.'

Clearly, today's fully independent thinker will have to have a thorough knowledge of biology, physics, medicine, chemistry, geology, and statistics. He or she will also require an enormous amount of free time. Someone who wants to independently decide how risky it is to suntan on a beach, for example, will find there are thousands of relevant studies. It would take months of reading and consideration in order to draw a conclusion about this one, simple risk. Thus if an independent thinker really wishes to form entirely independent judgments about the risks we face in daily life, or even just those we hear about in the news, he or she will have to obtain multiple university degrees, quit his or her job, and do absolutely nothing but read about all the ways he or she may die until he or she actually is dead.

Most people would find that somewhat impractical. For them, the only way to tap the vast pools of scientific knowledge is to rely on the advice of experts - people who are capable of synthesizing information from at least one field and making it comprehensible to a lay audience. This is preferable to getting your opinions from people who know as little as you do, naturally, but it too has limitations. For one thing, experts often disagree. Even when there's widespread agreement, there will still be dissenters who make their case with impressive statistics and bewildering scientific jargon. Another solution is to turn to intermediaries - those who are not experts themselves but claim to understand the science. Does abortion put a woman's health at risk? There's heaps of research on the subject. Much of it is contradictory. All of it is complicated.
But when I take a look at the website of Focus on the Family, a conservative lobby group that wants abortion banned, I see that the research quite clearly proves that abortion does put a woman's health at risk. Studies are cited, statistics presented, scientists quoted. But then when I look at the website of the National Abortion Rights Action League (naral), a staunchly pro-choice lobby group, I discover that the research indisputably shows abortion does not put a woman's health at risk. Studies are cited, statistics presented, scientists quoted. Now, if I happened to trust naral or Focus on the Family, I might decide that their opinion is good enough for me. But a whole lot of people would look at this differently. naral and Focus on the Family are lobby groups pursuing political agendas, they would think. Why should I trust either of them to give me a disinterested assessment of the science? As Homer Simpson sagely observed in an interview with broadcaster Kent Brockman, 'People can come up with statistics to prove anything, Kent. Forty per cent of all people know that.'

There's something to be said for this perspective. On important public issues, we constantly encounter analyses that are outwardly impressive - lots of numbers and references to studies - that come to radically different conclusions even though they all claim to be portraying the state of the science. And these analyses have a suspicious tendency to come to exactly the conclusions that those doing the analyzing find desirable. Name an issue, any issue. Somewhere there are lobbyists, activists, and ideologically driven newspaper pundits who would be delighted to provide you with a rigorous and objective evaluation of the science that just happens to prove that the interest, agenda, or ideology they represent is absolutely right. So, yes, skepticism is warranted. But Homer Simpson isn't merely skeptical. He is cynical. He denies the very possibility of knowing the difference between true and untrue, between the more accurate and the less. And that's just wrong. It may take a little effort to prove that the statistic Homer cites is fabricated, but it can be done. The truth is out there, to quote another staple of 1990s television.

Along with truth, cynicism endangers trust. And that can be dangerous. Researchers have found that when the people or institutions handling a risk are trusted, public concern declines: It matters a great deal whether the person telling you not to worry is your family physician or a tobacco company spokesman. Researchers have also shown, as wise people have always known, that trust is difficult to build and easily lost. So trust is vital. But trust is disappearing fast. In most modern countries, political scientists have found a long-term decline in public trust of various authorities. The danger here is that we will collectively cross the line separating skepticism from cynicism. Where a reasonable respect for expertise is lost, people are left to search for scientific understanding in Google and Internet chat rooms and the sneer of the cynic may mutate into unreasoning, paralyzing fear. That end state can be seen in the anti-vaccination movements growing in the United States, Britain, and elsewhere.
Fuelled by distrust of all authority, anti-vaccination activists rail against the dangers of vaccinating children (some imaginary, some real-but-rare) while ignoring the immense benefits of vaccination - benefits that could be lost if these movements continue to grow. This same poisonous distrust is on display in John Weingart's Waste Is a Terrible Thing to Mind, an account of Weingart's agonizing work as the head of a New Jersey board given the job of finding a site for a low-level radioactive waste disposal facility. Experts agreed that such a facility is not a serious hazard, but no one wanted to hear that. 'At the Siting Board's open houses,' writes Weingart, who is now a political scientist at Rutgers University, 'people would invent scenarios and then dare Board members and staff to say they were impossible. A person would ask, "What would happen if a plane crashed into a concrete bunker filled with radioactive waste and exploded?" We would explain that while the plane and its contents might explode, nothing in the disposal facility could. And they would say, "But what if explosives had been mistakenly disposed of, and the monitoring devices at the facility had malfunctioned so they weren't noticed?" We would head down the road of saying that this was an extremely unlikely set of events. And they would say, "Well, it could happen, couldn't it?"'

Fortunately, we have not entirely abandoned trust and experts can still have great influence on public opinion, particularly when they manage to forge a consensus among themselves. Does hiv cause aids? For a long time, there were scientists who said it did not but the overwhelming majority said it did. The public heard and accepted the majority view. The same scenario is playing out now with climate change - most people in every Western country agree that man-made climate change is real not because they've looked into the science for themselves, but because they know that's what most scientists think. But as Howard Margolis describes in Dealing with Risk, scientists can also find themselves resoundingly ignored when their views go against strong public feelings. Margolis notes that the American Physical Society - an association of physicists - easily convinced the public that cold fusion didn't work but it had no impact when it issued a positive report on the safety of high-level nuclear waste disposal.

So scientific information and the opinions of scientists can certainly play a role in how people judge risks, but - as the continued divisions between expert and lay opinion demonstrate - they aren't nearly as influential as scientists and officials might like. We remain a species powerfully influenced by the unconscious mind and its tools - particularly the Example Rule, the Good-Bad Rule, and the Rule of Typical Things. We also remain social animals who care about what other people think. And if we aren't sure whether we should worry about this risk or that, whether other people are worried makes a huge difference. 'Imagine that Alan says that abandoned hazardous waste sites are dangerous, or that Alan initiates protest action because such a site is located nearby,' writes Cass Sunstein in Risk and Reason. 'Betty, otherwise skeptical or in equipoise, may go along with Alan; Carl, otherwise an agnostic, may be convinced that if Alan and Betty share the relevant belief, the belief must be true. It will take a confident Deborah to resist the shared judgments of Alan, Betty and Carl.
The result of these sets of influences can be social cascades, as hundreds, thousands or millions of people come to accept a certain belief because of what they think other people believe.' Of course it's a big leap from someone in a laboratory going along with the group answer on meaningless questions to 'hundreds, thousands or millions of people' deciding that something is dangerous simply because that's what other people think. After all, people in laboratory experiments know their answers don't really matter. They won't be punished if they make mistakes and they won't be rewarded for doing well. But in the real world, our views do matter. For one thing, we are citizens of democracies in which popular opinion influences how governments respond - or don't respond - to risks. More concretely, what we think about risks can be critical in our personal lives. Will you support the creation of a hazardous waste site in your region? If it goes ahead, it means jobs and economic growth. But it could also mean a threat to the health of your family. Will you shrug off the threat of pedophiles and hoodlums and let your kids play unsupervised in the neighbourhood park? Will you eat foods with trans-fats? Each of these questions matters far more than whether line number four is longer than line number five, so it's hard to believe that on these important matters we would so casually abandon our own judgment and follow the herd. In fact, there is some evidence that the importance of these questions would change the rate at which we conform to the group view. It would increase it. Substantially. Psychologists at the University of Iowa - Robert Baron, Joseph Vandello, and Bethany Brunsman - designed an experiment in which they told participants they were testing the accuracy of eyewitness identifications. The testing would involve slides appearing briefly and questions based on what was shown: Was the man wearing glasses? Did you see the man in the police lineup? Three people took the test each round. This was a ruse, of course. Two of the three test-takers were actually part of the real experiment, which was to see if people would go along with the false answers they gave. Two new variables set this experiment apart from the earlier research on conformity. In some trials, the slides were shown repeatedly or they were left on the screen for a full five seconds. The researchers knew from earlier testing that people almost never make mistakes under those conditions. But in other trials, the slide would flash onscreen only briefly, making the questions much harder to answer. The researchers also gave participants different background information about the purpose of the experiment. Some were told it was merely a pilot study intended to get a general sense of how people perceive things that might be used sometime in the future 'to develop a test that lets us know how accurate people are as witnesses.' This was the 'low importance' version of the experiment. But others were told the researchers had already developed the 'Iowa Witness Identification Test,' which 'will soon be adopted by police departments and courtrooms in cases involving eyewitness testimony.' 'Over the next few weeks,' the researchers said, 'we will be running research participants such as yourselves through the revised I.W.I.T. to establish reliable norms. As a result, we hope that you try your best on this test because establishing accurate norms is crucial.
Most people do try hard on this test because they are interested in seeing how good their eyewitness accuracy is compared to others. But, to increase your interest in doing well on this test, we will be awarding prizes of $20 at the end of the experimental testing period to the participants who score the highest in accuracy.' This was the 'high importance' condition. The first results were an almost exact duplicate of the original conformity experiments: When the task was easy and people thought the experiment was 'low importance,' one-third abandoned their own judgment and conformed to the group answer. Then came the 'easy task/high importance' version. The researchers expected conformity would fall under those conditions, and it did. But it didn't disappear: Between 13 per cent and 16 per cent still followed the group. Things got intriguing when the questions became harder to answer. Among those who thought the test was 'low importance,' a minority conformed to the group, just as they did when the questions were easy to answer. But when the test was 'high importance,' conformity actually went up. The researchers also found that under those conditions, people became more confident about the accuracy of their group-influenced answers. 'Our data suggest,' wrote the researchers, 'that so long as the judgments are difficult or ambiguous, and the influencing agents are united and confident, increasing the importance of accuracy will heighten confidence as well as conformity — a dangerous combination.' Judgments about risk are often difficult and important. If Baron, Vandello, and Brunsman are right, those are precisely the conditions under which people are most likely to conform to the views of the group and feel confident that they are right to do so. But surely, one might think, an opinion based on nothing more than the uninformed views of others is a fragile thing. We are exposed to new information every day. If the group view is foolish, we will soon come across evidence that will make us doubt our opinions. The blind can't go on leading the blind for long, can they? Unfortunately, psychologists have discovered another cognitive bias that suggests that, in some circumstances, the blind can actually lead the blind indefinitely. It's called confirmation bias and its operation is both simple and powerful. Once we have formed a view, we embrace information that supports that view while ignoring, rejecting or harshly scrutinizing information that casts doubt on our view. Any belief will do. It makes no difference whether the thought is about trivia or something important. It doesn't matter if the belief is the product of long and careful consideration or something I believe simply because everybody else in the Internet chat room said so. Once a belief is established, our brains will seek to confirm it. In one of the earliest studies on confirmation bias, psychologist Peter Wason simply showed people a sequence of three numbers - 2, 4, 6 - and told them the sequence followed a certain rule. The participants were asked to figure out what that rule was. They could do so by writing down three more numbers and asking if they were in line with the rule. Once you think you've figured out the rule, the researchers instructed, say so and we will see if you're right. It seems so obvious that the rule the numbers are following is 'even numbers increasing by two.' So let's say you were to take the test. What would you say? Obviously, your first step would be to ask: 'What about 8, 10, 12? Does that follow the rule?' And you would be told, yes, that follows the rule. Now you are really suspicious. This is far too easy. So you decide to try another set of numbers. Does '14, 16, 18' follow the rule? It does. At this point, you want to shout out the answer - the rule is even numbers increasing by two! - but you know there's got to be a trick here. So you decide to ask about another three numbers: 20, 22, 24. Right again! Most people who take this test follow exactly this pattern. Every time they guess, they are told they are right and so, it seems, the evidence that they are right piles up. Naturally, they become absolutely convinced that their initial belief is correct. Just look at all the evidence! And so they stop the test and announce that they have the answer: It is 'even numbers increasing by two.' And they are told that they are wrong. That is not the rule. The correct rule is actually 'any three numbers in ascending order.' Why do people get this wrong? It is very easy to discover that the rule is not 'even numbers increasing by two.' All they have to do is try to disconfirm it. They could, for example, ask if '5, 7, 9' follows the rule. Do that and the answer would be yes, it does - which would instantly disconfirm the hypothesis. But most people do not try to disconfirm. They do the opposite, trying to confirm the rule by looking for examples that fit it. That's a futile strategy. No matter how many examples are piled up, they can never prove that the belief is correct. Confirmation doesn't work.
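Wason's 2-4-6 task can be made concrete with a small simulation. The sketch below is purely illustrative, assuming Python and example guesses of my own choosing rather than anything from the original study; it simply shows why a string of confirming guesses can never expose a wrong hypothesis, while a single disconfirming guess can.

```python
# A minimal, hypothetical sketch of Wason's 2-4-6 task (the guesses and the
# function name are illustrative assumptions, not taken from the study).

def follows_rule(triple):
    """The experimenter's actual rule: any three numbers in ascending order."""
    a, b, c = triple
    return a < b < c

# A tester who only seeks confirmation proposes triples that already fit her
# hypothesis ('even numbers increasing by two'). Every answer comes back
# 'yes', so the wrong hypothesis is never exposed.
confirming_guesses = [(8, 10, 12), (14, 16, 18), (20, 22, 24)]
print([follows_rule(g) for g in confirming_guesses])  # [True, True, True]

# A tester who seeks disconfirmation proposes a triple that should fail if
# her hypothesis were correct. The answer 'yes' instantly falsifies it.
print(follows_rule((5, 7, 9)))  # True -- so the hypothesis must be wrong
```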
Unfortunately, seeking to confirm our beliefs comes naturally, while it feels strange and counterintuitive to look for evidence that contradicts our beliefs. Worse still, if we happen to stumble across evidence that runs contrary to our views, we have a strong tendency to belittle or ignore it. In 1979 — when capital punishment was a top issue in the United States - American researchers brought together equal numbers of supporters and opponents of the death penalty. The strength of their views was tested. Then they were asked to read a carefully balanced essay that presented evidence that capital punishment deters crime and evidence that it does not. The researchers then re-tested people's opinions and discovered that they had only gotten stronger. They had absorbed the evidence that confirmed their views, ignored the rest, and left the experiment even more convinced that they were right and those who disagreed were wrong. Peter Wason coined the term confirmation bias and countless studies have borne out his discovery - or rather, his demonstration of a tendency thoughtful observers have long noted. Almost 400 years ago, Sir Francis Bacon wrote that 'the human understanding when it has once adopted an opinion (either as being a received opinion or as being agreeable to itself) draws all things else to support and agree with it. And though there be a greater number and weight of instances to be found on the other side, yet these it either neglects and despises, or else by some distinction sets aside and rejects; in order that by this great and pernicious predetermination the authority of its former conclusions may remain inviolate . . .' Wise words proven true every day by countless pundits and bloggers. The power of confirmation bias should not be underestimated. During the U.S.
presidential election of 2004, a team of researchers led by Drew Westen at Emory University brought together 30 committed partisans - half Democrats, half Republicans - and had them lie in magnetic resonance imaging (MRI) machines. While their brains were being scanned, they were shown a series of three statements by or about George W. Bush. The second statement contradicted the first, making Bush look bad. Participants were asked whether the statements were inconsistent and were then asked to rate how inconsistent they were. A third statement then followed that provided an excuse for the apparent contradiction between the statements. Participants were asked if perhaps the statements were not as inconsistent as they first appeared. And finally, they were again asked to rate how inconsistent the first two statements were. The experiment was repeated with John Kerry as the focus and a third time with a neutral subject. The superficial results were hardly surprising. When Bush supporters were confronted with Bush's contradictory statements, they rated them to be less contradictory than Kerry supporters did. And when the explanation was provided, Bush supporters considered it to be much more satisfactory than did Kerry supporters. When the focus was on John Kerry, the results reversed. There was no difference between Republicans and Democrats when the neutral subject was tested. All this was predictable. Far more startling, however, was what showed up on the MRI. When people processed information that ran against their strongly held views - information that made their favoured candidate look bad - they actually used different parts of the brain than they did when they processed neutral or positive information. It seems confirmation bias really is hardwired in each of us, and that has enormous consequences for how opinions survive and spread. Someone who forms a belief based on nothing more than the fact that other people around him hold that belief nonetheless has a belief. That belief causes confirmation bias to kick in, so incoming information is screened: If it supports the belief, it is readily accepted; if it goes against the belief, it is ignored, scrutinized carefully or flatly rejected. Thus, if the information that turns up in newspapers, television, and conversation is mixed - and it very often is when risk is involved - this bias will steadily strengthen a belief that originally formed only because it's what everybody else said during a coffee break. That's on the individual level. What happens when people who share a belief get together to discuss it? Psychologists know the answer to that, and it's not pretty. They call it group polarization. It seems reasonable to think that when like-minded people get together to discuss a proposed hazardous waste site, or the breast implants they believe are making them sick, or some other risk, their views will tend to coalesce around the average within the group. But they won't. Decades of research have proved that groups usually come to conclusions that are more extreme than the average view of the individuals who make up the group. When opponents of a hazardous waste site gather to talk about it, they will become convinced the site is more dangerous than they originally believed. When a woman who believes breast implants are a threat gets together with women who feel the same way, she and all the women in the meeting are likely to leave believing they had previously underestimated the danger. The dynamic is always the same.
It doesn't matter what the subject under discussion is. It doesn't matter what the particular views are. When like-minded people get together and talk, their existing views tend to become more extreme. In part, this strange human foible stems from our tendency to judge ourselves by comparison with others. When we get together in a group of like-minded people, what we share is an opinion that we all believe to be correct, and so we compare ourselves with others in the group by asking 'How correct am I?' Inevitably, most people in the group will discover that they do not hold the most extreme opinion, which suggests they are less correct, less virtuous, than others. And so they become more extreme. Psychologists confirmed this theory when they put people in groups and had them state their views without providing reasons why — and polarization still followed. A second force behind group polarization is simple numbers. Prior to going to a meeting of people who believe silicone breast implants cause disease, a woman may have read several articles and studies on the subject. But because the people at the meeting greatly outnumber her, they will likely have information she was not aware of. Maybe it's a study suggesting implants cause a disease she has never heard of, or it's an article portraying the effects of implant-caused diseases as worse than she knew. Whatever it is, it will lead her to conclude the situation is worse than she had thought. As this information is pooled, the same process happens to everyone else in the meeting, with people becoming convinced that the problem is bigger and scarier than they had thought. Of course, it's possible that people's views could be moderated by hearing new information that runs in the opposite direction — an article by a scientist denying that implants cause disease, for example. But remember confirmation bias: Every person in that meeting is prone to accepting information that supports their opinion and ignoring or rejecting information that does not. As a result, the information that is pooled at the meeting is deeply biased, making it ideal for radicalizing opinions. Psychologists have also demonstrated that because this sort of polarization is based on information-sharing alone, it does not require anything like a face-to-face conversation - a fact amply demonstrated every day on countless political blogs. So Alan convinces Betty, which persuades Carl, which settles it for Deborah. Biased screening of information begins and opinions steadily strengthen. Organizations are formed, information exchanged. Views become more extreme. And before you know it, as Cass Sunstein wrote, there are 'hundreds, thousands or millions of people' who are convinced they are threatened by some new mortal peril. Sometimes they're right. It took only a few years for almost everyone to be convinced that AIDS was a major new disease. But they can also be very wrong. As we saw, it wasn't science that transformed the popular image of silicone breast implants from banal objects to toxic killers. Reasonable or not, waves of worry can wash over communities, regions, and nations, but they cannot roll on forever. They follow social networks and so they end where those networks end - which helps explain why the panic about silicone breast implants washed across the United States and Canada (which also banned the implants) but caused hardly a ripple in Europe.
The media obviously play a key role in getting waves started and keeping them rolling because groups make their views known through more than conversations and e-mail. Groups also speak through the media, explicitly but also implicitly. Watch any newscast, read any newspaper: Important claims about hazards - heroin is a killer drug, pollution causes cancer, the latest concern is rapidly getting worse - will simply be stated as true, without supporting evidence. Why? Because they are what 'everybody knows' is true. They are, in other words, group opinions. And like all group opinions, they exert a powerful influence on the undecided. The media also respond to rising worry by producing more reports - almost always emotional stories of suffering and loss - about the thing that has people worried. And that causes the Guts of readers and viewers to sit up and take notice. Remember the Example Rule? The easier it is to recall examples of something happening, Gut believes, the more likely it is to happen. Growing concern about silicone breast implants prompted more stories about women with implants and terrible illnesses. Those stories raised the public's intuitive estimate of how dangerous silicone breast implants are. Concern continued to grow. And that encouraged the media to produce more stories about sick women with implants. More fear, more reporting. More reporting, more fear. Like a microphone held too close to a loudspeaker, modern media and the primal human brain create a feedback loop. 'Against this background,' writes Cass Sunstein, 'it is unsurprising that culturally and economically similar nations display dramatically different reactions to identical risks. Whereas nuclear power enjoys widespread acceptance in France, it arouses considerable fear in the United States. Whereas genetic engineering of food causes immense concern in Europe, it has been a non-issue in the United States, at least until recently. It is also unsurprising that a public assessment of any given risk may change suddenly and dramatically even in the absence of a major change in the relevant scientific information.' So far we've identified two sources - aside from rational calculation — that can shape our judgments about risk. There's the unconscious mind - Gut — and the tools it uses, particularly the Example Rule and the Good-Bad Rule. And there are the people around us, whose opinions we naturally tend to conform to. But if that is all there were to the story, then almost everybody within the same community would have the same opinions about which risks are alarming and which are not. But we don't. Even within any given community, opinions are often sharply divided. Clearly something else is at work, and that something is culture. This is tricky terrain. For one thing, 'culture' is one of those words that mean different things to different people. Moving from psychology to culture also means stepping from one academic field to another. Risk is a major subject within sociology, and culture is the lens through which sociologists peer. But the psychologists who study risk and their colleagues in the sociology departments scarcely talk to each other. In the countless volumes on risk written by sociologists, the powerful insights provided by psychologists over the last several decades typically receive little more than a passing mention, if they are noticed at all. For sociologists, culture counts.
What happens in my brain when someone mentions lying on the beach in Mexico — do I think of tequila or skin cancer? - isn't terribly interesting or important. In effect, a line has been drawn between psychology and culture, but that line reflects the organization of universities far more than it does what's going on inside our skulls. Consider how the Good-Bad Rule functions in our judgment of risk. The thought of lying on a beach in Mexico stirs a very good feeling somewhere in the wrinkly folds of my brain. As we have seen, that feeling will shape my judgment about the risk involved in lying on a beach until I turn the colour of a coconut husk. Even if a doctor were to tell me this behaviour will materially increase my risk of getting skin cancer, the pleasant feeling that accompanies any discussion of the subject will cause me to intuitively downplay the risk: Head may listen to the doctor but Gut is putting on sunglasses. Simple enough. But a piece of the puzzle is missing. Why does the thought of lying on a Mexican beach fill me with positive feelings? Biology doesn't do it. We may be wired to enjoy the feeling of sunlight — it's a good source of heat and vitamin D — but we clearly have no natural inclination to bake on a beach since humans only started doing this in relatively modern times. So where did I learn that this is a Good Thing? Experience, certainly. I did it and it was delightful. But I thought it would be delightful before I did it. That was why I did it. So again, I have to ask the question: Where did I get this idea from? For one, I got it from people who had done it and who told me it's delightful. And I got it from others who hadn't done it but who had heard that it was delightful. And I got it - explicitly or implicitly — from books, magazines, television, radio, and movies. Put all this together and it's clear I got the message that it's delightful to suntan on a Mexican beach from the culture around me. I'm Canadian. Every Canadian has either gone south in the winter or dreamed of it. Tropical beaches are as much a part of Canadian culture as wool hats and hockey pucks, and that is what convinced me that lying on a beach in Mexico is delightful. Even if I had never touched toes to Mexican sand, the thought of lying on a beach in Mexico would trigger nice feelings in my brain - and those nice feelings would influence my judgment of the risks involved. This is a very typical story. There are, to be sure, some emotional reactions that are mainly biological in origin, such as revulsion for corpses and feces, but our feelings are more often influenced by experience and culture. I have a Jewish friend who follows Jewish dietary laws that forbid pork. He always has. In fact, he has internalized those rules so deeply he literally feels nauseated by the sight of ham or bacon. But for me, glazed ham means Christmas and the smell of frying bacon conjures images of sunny Saturday mornings. Obviously, eating pork is not terribly dangerous, but still there is a risk of food poisoning (trichinosis in particular). If my friend and I were asked to judge that risk, the very different feelings we have would lead our unconscious minds - using the Good-Bad Rule - to very different conclusions. The same dynamic plays a major role in our perceptions about the relative dangers of drugs. Some drugs are forbidden. Simply to possess them is a crime. That is a profound stigma, and we feel it in our bones. These are awful, wicked substances.
Sometimes we talk about them almost as if they are sentient creatures lurking in alleyways. With such strong feelings in play, it is understandable that we would see these drugs as extremely dangerous: Snort that cocaine, shoot that heroin, and you'll probably wind up addicted or dead. There's no question drugs can do terrible harm, but there is plenty of reason to think they're not nearly as dangerous as most people feel. Consider cocaine. In 1995, the World Health Organization completed what it touted as 'the largest global study on cocaine use ever undertaken.' Among its findings: 'Occasional cocaine use,' not intensive or compulsive consumption, is 'the most typical pattern of cocaine use' and 'occasional cocaine use does not typically lead to severe or even minor physical or social problems.' Of course it is very controversial to suggest that illicit drugs aren't as dangerous as commonly believed, but exaggerated perceptions of risk are precisely what we would expect to see given the deep hostility most people feel towards drugs. Governments not only know this, they make use of it. Drug-use prevention campaigns typically involve advertising and classroom education whose explicit goal is to increase perceived risk (the WHO's cocaine report described most drug education as 'superficial, lurid, excessively negative'), while drug agencies monitor popular perceptions and herald any increase in perceived risk as a positive development. Whether the perceived risks are in line with the actual risks is not a concern. Higher perceived risk is always better. Then there are the licit drugs. Tobacco is slowly becoming a restricted and stigmatized substance, but alcohol remains a beloved drug in Western countries and many others. It is part of the cultural fabric, the lubricant of social events, the symbol of celebration. A 2003 survey of British television found alcohol routinely appeared in 'positive, convivial, funny images.' We adore alcohol, and for that reason it's no surprise that public health officials often complain that people see little danger in a drug whose consumption can lead to addiction, cardiovascular disease, gastrointestinal disorders, liver cirrhosis, several types of cancer, fetal alcohol syndrome and fatal overdose - a drug that has undoubtedly killed far more people than all the illicit drugs combined. The net effect of the radically different feelings we have for alcohol and other drugs was neatly summed up in a 2007 report of the Canadian Centre on Substance Abuse: Most people 'have an exaggerated view of the harms associated with illegal drug use, but consistently underestimate the serious negative impact of alcohol on society.' That's Gut, taking its cues from the culture. The Example Rule provides another opportunity for culture to influence Gut. That's because the Example Rule - the easier it is to recall examples of something happening, the greater the likelihood of that thing happening - hinges on the strength of the memories we form. And the strength of our memories depends greatly on attention: If I focus strongly on something and recall it repeatedly, I will remember it much better than if I only glance at it and don't think about it again. And what am I most likely to focus on and recall repeatedly? Whatever confirms my existing thoughts and feelings. And what am I least likely to focus on and recall repeatedly? Whatever contradicts my thoughts and feelings.
And what is a common source of the thoughts and feelings that guide my attention and recall? Culture. The people around us are another source of cultural influence. Our social networks aren't formed randomly, after all. We are more comfortable with people who share our thoughts and values. We spend more time with them at work, make them our friends, and marry them. The Young Republican with the Ronald Reagan t-shirt waiting in an airport to catch a flight to Washington, D.C., may find himself chatting with the anti-globalization activist with a Che Guevara beret and a one-way ticket to Amsterdam, but it's not likely he will be adding her to his Christmas card list - unlike the MBA student who collides with the Young Republican at the check-in line because she was distracted by the soaring eloquence of Ronald Reagan's Third State of the Union Address playing on her iPod. So we form social networks that tend to be more like than unlike, and we trust the people in our networks. We value their opinions and we talk to them when some new threat appears in the newspaper headlines. Individually, each of these people is influenced by culture just as we are, and when culture leads them to form a group opinion, we naturally want to conform to it. The manifestations of culture I've discussed so far — Mexican vacations, alcohol and illicit drugs, kosher food - have obvious origins, meaning, and influence. But recent research suggests cultural influences run much deeper. In 2005, Dan Kahan of the Yale Law School, along with Paul Slovic and others, conducted a randomly selected, nationally representative survey of 1,800 Americans. After extensive background questioning, people were asked to rate the seriousness of various risks, including climate change, guns in private hands, gun control laws, marijuana, and the health consequences of abortion. One result was entirely expected. As in many past surveys, non-whites rated risks higher than whites and women believed risks were more serious than men did. Put those two effects together and you get what is often called the white male effect. White men routinely feel hazards are less serious than other people do. Sociologists and political scientists might think that isn't surprising. Women and racial minorities tend to hold less political, economic, and social power than white men and have less trust in government authorities. It makes sense that they would feel more vulnerable. But researchers have found that even after statistically accounting for these feelings, the disparity between white men and everybody else remains. The white male effect also cannot be explained by different levels of scientific education - Paul Slovic has found that female physical scientists rate the risks of nuclear power higher than male physical scientists do, while female members of the British Toxicological Society were far more likely than male members to rate the risk posed by various activities and technologies as moderate or high. It is a riddle. A hint of the answer was found in an earlier survey conducted by Paul Slovic in which he discovered that it wasn't all white males who perceived things to be less dangerous than everybody else. It was only a subset of about 30 per cent of white males. The remaining 70 per cent saw things much as women and minorities did. Slovic's survey also revealed that the confident minority of white men tended to be better-educated, wealthier, and more politically conservative than others.
The 2005 survey was designed in part to figure out what was happening inside the heads of white men. A key component was a series of questions that got at people's most basic cultural world views. These touched on fundamental matters of how human societies should be organized. Should individuals be self-reliant? Should people be required to share good fortune? And so on. With the results from these questions, Kahan slotted people into one of four world views (developed from the Cultural Theory of Risk first advanced by the anthropologist Mary Douglas and political scientist Aaron Wildavsky). In Kahan's terms they were individualist, egalitarian, hierarchist, and communitarian. When Kahan crunched his numbers, he found lots of correlations between risk perceptions and other factors like income and education. But the strongest correlations were between risk perception and world view. If a person were, for example, a hierarchist - someone who believes people should have defined places in society and respect authority - you could quite accurately predict what he felt about various risks. Abortion? A serious risk to a woman's health. Marijuana? A dangerous drug. Climate change? Not a big threat. Guns? Not a problem in the hands of law-abiding citizens. Kahan also found that a disproportionate number of white men were hierarchists or individualists. When he adjusted the numbers to account for this, the white male effect disappeared. So it wasn't race and gender that mattered. It was culture. Kahan confirmed this when he found that although black men generally rated the risks of private gun ownership to be very high, black men found to be individualist rated guns a low risk - just like white men who were individualist. Hierarchists also rated the risk posed by guns to be low. Communitarians and egalitarians, however, feel they are very dangerous. Why? The explanation lies in feelings and the cultures that create them. 'People who've been raised in a relatively individualistic community or who've been exposed to certain kinds of traditional values will have a positive association with guns,' Kahan says. 'They'll have positive emotions because they'll associate them with individualistic virtues like self-reliance or with certain kinds of traditional roles like a protective father. Then they'll form the corresponding perception. Guns are safe. Too much gun control is dangerous. Whereas people who've been raised in more communitarian communities will develop negative feelings toward guns. They'll see them as evidence that people in the community are distrustful of each other. They'll resent the idea that the public function of protection is taken by individuals who are supposed to do it for themselves. People who have an egalitarian sensibility, instead of valuing traditional roles like protector and father and hunter, might associate them with patriarchy or stereotypes that they think treat women unfairly and they'll develop a negative affective orientation toward the gun.' And once an opinion forms, information is screened to suit. In the survey, after people were asked to rate the danger posed by guns, they were then asked to imagine that there was clear evidence that their conclusion about the safety of guns is wrong. Would they still feel the same way about guns? The overwhelming majority said yes, they would. That's pretty clear evidence that what's driving people's feelings about the risks posed by guns is more than the perceived risks posed by guns.
It's the culture, and the perception of guns within it. That culture, Kahan emphasizes, is American, and so the results he got in the poll apply only to the United States. 'What an American who has, say, highly egalitarian views thinks about risk may not be the same as what an egalitarian in France thinks about risk. The American egalitarian is much more worried about nuclear power, for example, than the French egalitarian.' This springs from the different histories that produce different cultures. 'I gave you the story about guns and that story is an American story because of the unique history of firearms in the United States, both as tools for settling the frontier and as instruments for maintaining authority in a slave economy in the South. These created resonances that have persisted over time and have made the gun a symbol that evokes emotions within these cultural groups that then generate risk perceptions. Something completely different could, and almost certainly would, happen some place else that had a different history with weapons.' In 2007, Kahan's team ran another nation-wide survey. This time the questions were about nanotechnology - technology that operates on a microscopic scale. Two results leapt out. First, the overwhelming majority of Americans admitted they knew little or nothing about this nano-whatzit. Second, when asked if they had opinions about the risks and benefits of nanotechnology, the overwhelming majority of Americans said they did, and they freely shared them. How can people have opinions about something they may never have heard of until the moment they were asked if they had an opinion about it? It's pure affect, as psychologists would say. If they like the sound of 'nanotechnology,' they feel it must be low risk and high benefit. If it sounds a little creepy, it must be high risk and low benefit. As might be expected, Kahan found that the results of these uninformed opinions were all over the map, so they really weren't correlated with anything. But at this point in the survey, respondents were asked to listen to a little information about nanotechnology. The information was deliberately crafted to be low key, simple, factual — and absolutely balanced. Here are some potential benefits. Here are some potential risks. And now, the surveyors asked again, do you have an opinion about the risks and benefits of nanotechnology? Sure enough, the information did change many opinions. 'We predicted that people would assimilate balanced information in a way biased by their cultural predispositions toward environmental risks generally,' says Kahan. And they did. Hierarchists and individualists latched onto the information about benefits, and their opinions became much more bullish - their estimate of the benefits rose while the perceived risks fell. Egalitarians and communitarians did exactly the opposite. And so, as a result of this little injection of information, opinions suddenly became highly correlated with cultural world views. Kahan feels this is the strongest evidence yet that we unconsciously screen information about risk to suit our most basic beliefs about the organization of society. Still, it is early days for this research. What is certain at this point is that we aren't the perfectly rational creatures described in outdated economics textbooks, and we don't review information about risks with cool detachment and objectivity. We screen it to make it conform to what we already believe.
And what we believe is deeply influenced by the beliefs of the people around us and of the culture in which we live. In that sense, the metaphor I used at the start of this book is wrong. The intuitive human mind is not a lonely Stone Age hunter wandering a city it can scarcely comprehend. It is a Stone Age hunter wandering a city it can scarcely comprehend in the company of millions of other confused Stone Age hunters. The tribe may be a little bigger these days, and there may be more taxis than lions, but the old ways of deciding what to worry about and how to stay alive haven't changed.

12 Conclusion: There's never been a better time to be alive

In central Ontario, near where my parents live, there is a tiny cemetery filled with rusted ironwork and headstones heaved to odd angles by decades of winter frost and spring thaws. This was farm country once. Pioneers arrived at the end of the 19th century, cut the trees, pulled up the stumps, and discovered, after so much crushing labour, that their new fields amounted to little more than a thin layer of soil stretched across the bare granite of the Canadian Shield. Most farms lasted a generation or two before the fields were surrendered to the forests. Today, only the cemeteries remain. The pioneers were not wealthy people but they always bought the biggest headstones they could afford. They wanted something that declared who they were, something that would last. They knew how easily their own existence could end. Headstones had to endure. 'Children of James and Janey Morden,' announces one obelisk in the cemetery. It's almost six feet tall. The stone says the first to die was Charles W. Morden. He was four years and nine months old. It was the winter of 1902. The little boy would have complained that he had a sore throat. He was tired and his forehead felt a little warm to his mother's hand. A day or two passed and as Charles lay in bed he grew pale. His heart raced. His skin burned and he started to vomit. His throat swelled so that each breath was a struggle and his head was immobilized on the sweat-soaked pillow. His mother, Janey, would have known what was torturing her little boy but with no treatment she likely wouldn't have dared speak its name. Then Charles's little brother, Earl, started to cry. His throat was sore, he moaned. And he was so hot. Albert, the oldest of the boys, said he, too, was tired. And yes, his throat hurt. Charles W. Morden died on Tuesday, January 14, 1902. His father would have had to wrap the little boy's body in a blanket and carry him out through the deepening snow to the barn. The cold would seep into the corpse and freeze it solid until spring, when rising temperatures would thaw the ground and the father could dig his son's grave. The next day, both Earl and Albert died. Earl was two years and ten months old. Albert was six years and four months. Their father would have gotten out two more blankets, wrapped his sons, and taken them out to the barn to freeze. Then the girls started to get sick. On January 18, 1902, the eldest died. Minnie Morden was ten years old. Her seven-year-old sister, Ellamanda, died the same day. On Sunday, January 19, 1902, the fever took little Dorcas, barely 18 months old. For the final time, James Morden bundled one of his children in a blanket, walked through the snow, and laid her down in the cold and dark of the barn, where she and her brothers and sisters would wait through the long winter to be buried.
The same fever that swept away the Morden children in the winter of 1902 leapt from homestead to homestead - the obelisk next to the Mordens' is dedicated to the two children Elias and Laura Ashton lost within weeks of their neighbours. The Ashtons already knew what it felt like to lose children. Their 15-year-old son had died in 1900, and a five-year-old boy had been taken from them eight years before that. It's hard to find a family that did not suffer losses like these in generations past. Cotton Mather, the Puritan minister in late-17th-century New England, named one of his daughters Abigail. She died. So he gave the same name to the next daughter. She too died. So he named a third daughter Abigail. She survived to adulthood but died giving birth. In all, Cotton Mather - a well-to-do man in a prosperous society — lost 13 children to worms, diarrhea, measles, smallpox, accidents, and other causes. 'A dead child is a sign no more surprising than a broken pitcher or blasted flower,' he said in a sermon, and yet, familiar as it was, death never lost its power to make the living suffer. 'The dying of a child is like the tearing of a limb from us,' wrote Increase Mather, Cotton's father. Children were especially vulnerable, but not uniquely so. The plague that swept through the homes of the Mordens and Ashtons and so many others is typical in this regard. It was diphtheria, a disease that is particularly deadly to children but can also kill adults. In 1878, the four-year-old granddaughter of Queen Victoria contracted diphtheria and passed it on to her mother, the queen's daughter. Queen Victoria was wealthy and powerful beyond compare and yet she could do nothing. Both daughter and granddaughter died. That world is not ours. We still know tragedy and sorrow, of course, but in neither the quantity nor the quality of those who came before us. A century ago, most people would have recognized the disease afflicting Charles Morden (the enlarged neck was particularly notorious). Today, we may have heard the word 'diphtheria' once or twice - it comes up when we take our babies into the doctor's office to get their shots - but few of us know anything about it. Why would we? A vaccine created in 1923 all but eradicated the disease across the developed world and drastically reduced its toll elsewhere. The triumph over diphtheria is only one of a long line of victories that created the world we live in. Some are dramatic - the extinction of smallpox is a greater monument to civilization than the construction of the pyramids. Others are considerably less exciting - fortifying foods with vitamins may lack glamour but it eliminated diseases, made children stronger, and contributed greatly to increased lifespans. And some are downright distasteful to talk about — we wrinkle our noses at the mere mention of human waste but the development of sewage disposal systems may have saved more lives than any other invention in history. In 1725, the average baby born in what was to become the United States had a life expectancy of 50 years. The American colonies were blessed with land and resources and American longevity was actually quite high relative to England — where it was a miserable 32 years — and most other places and times. And it was creeping up. By 1800, it had reached 56 years. But then it slipped back, thanks in part to the growth in urban slums. By 1850, it was a mere 43 years. Once again, however, it started inching up. In 1900, it stood at 48 years.
This is the story of life expectancy throughout human history: A little growth is followed by a little decline and the centuries roll on without much progress. But then everything changed. By 1950, American life expectancy had soared to 68. And by the end of the 20th century, it stood at 78 years. The news was as good or better in other developed countries, where life expectancy approached or exceeded 80 years at the turn of the century. In the second half of the century, similarly dramatic gains were made throughout most of the developing world. The biggest factor in this spectacular change was the decline in deaths among children. In 1900, almost 20 per cent of all children born in the United States - one in five - died before they were five years old; by 1960, that had fallen to three per cent; by 2002, it was 0.8 per cent. There have been huge improvements in the developing world, too. Fifty years ago in Latin America, more than 15 per cent of all children died before their fifth birthday; today, that figure is roughly two per cent. Between 1990 and 2006 alone, the child mortality rate fell 47 per cent in China and 34 per cent in India. It is in our nature to become habituated to changes in our environment, and so we think it is perfectly commonplace for the average person to be hale and hearty for more than seven or eight decades and that a baby born today will live an even healthier and longer life. But if we raise our eyes from this moment and look to the history of our species, it is clear this is not commonplace. It is a miracle. And the miracle continues to unfold. 'There are some people, including me, who believe that the increase in life expectancy in the coming century will be about as large as it was in the past century,' says Robert Fogel, the economic historian and Nobel laureate who has spent decades studying health, mortality and longevity. If Fogel is right, the change will be even more dramatic than it sounds. That's because massive reductions in child mortality - the largest source of 20th-century gains - are no longer possible simply because child mortality has already been driven so low. So for equivalent improvements in lifespan to be made in the 21st century, there will have to be huge declines in adult mortality. And Fogel feels there will be. 'I believe that college-age students today, half of them will live to be 100.' Other researchers are not so bullish, but there is a consensus that the progress of the 20th century will continue in the 21st. A 2006 World Health Organization study of global health trends to 2030 concluded that in each of three different scenarios — baseline, optimistic and pessimistic - child mortality will fall and life expectancy will rise in every region of the world. There are clouds on humanity's horizons, of course. If, for example, obesity turns out to be as damaging as many researchers believe it to be, and if obesity rates keep rising in rich countries, it could undermine a great deal of progress. But potential problems like this have to be kept in perspective. 'You can only start worrying about over-eating when you stop worrying about under-eating, and for most of our history we worried about under-eating,' Fogel wryly observes. Whatever challenges we face, it remains indisputably true that those living in the developed world are the safest, healthiest, and richest humans who ever lived. We are still mortal and there are many things that can kill us. Sometimes we should worry. Sometimes we should even be afraid.
But always we should remember how very lucky we are to be alive now. In an interview for a PBS documentary, Linda Birnbaum, a leading research scientist with the U.S. Environmental Protection Agency, struck exactly the right balance between taking potential threats seriously and keeping those threats in perspective. 'I think as parents, we all worry about our children,' said Birnbaum, who, at the time, led a team investigating the hypothesis that endocrine disruptor chemicals in the environment were taking a hidden toll on human health. 'But I think that we have to look at the world our children are living in and realize that they have tremendous access to food, to education, to all the necessities of life plus much more. That their lifespan is likely to be greater than ours is, which is certainly greater than our parents' was and much greater than our grandparents' or great-grandparents'.' Anyone who has spent an afternoon in a Victorian cemetery knows that gratitude, not fear, should be the defining feeling of our age. And yet it is fear that defines us. We worry. We cringe. It seems the less we have to fear, the more we fear. One obvious source of this paradox is simple ignorance. 'Most people don't know the history,' says Fogel. 'They just know their own experience and what's happening around them. So they take all of the great advances for granted.' But there's much more to the explanation of why history's safest humans are increasingly hiding under their beds. There's the omnipresent marketing of fear, for one. Politicians, corporations, activists and non-governmental organizations want votes, sales, donations, support and memberships, and they know that making people worry about injury, disease, and death is often the most effective way of achieving their goals. And so we are bombarded daily with messages carefully crafted to make us worry. Whether that worry is reasonable or not — whether it is based on due consideration of accurate and complete facts — is not a central concern of those pumping out the messages. What matters is the goal. Fear is merely a tactic. And if twisted numbers, misleading language, emotional images, and unreasonable conclusions can more effectively deliver that goal - and they often can - so be it. The media are among those that profit by marketing fear — nothing gives a boost to circulation and ratings like a good panic - but the media also promote unreasonable fears for subtler and more compelling reasons. The most profound is the simple human love of stories and storytelling. For the media, the most essential ingredient of a good story is the same as that of a good movie, play, or tale told by the campfire: It has to be about people and emotions, not numbers and reason. Thus the particularly tragic death of a single child will be reported around the world while a massive and continuing decline in child mortality rates is hardly noticed. This isn't a failing of the media so much as it is a reflection of the hard-wiring of a human brain that was shaped by environments that bore little resemblance to the world we inhabit. We listen to iPods, read the newspaper, watch television, work on computers, and fly around the world using brains beautifully adapted to picking berries and stalking antelope. The wonder is not that we sometimes make mistakes about risks. The wonder is that we sometimes get it right. So why is it that so many of the safest humans in history are scared of their own shadows?
There are three basic components at work: the brain, the media, and the many individuals and organizations with an interest in stoking fears. Wire these three components together in a loop and we have the circuitry of fear. One of the three raises an alarm; the signal is picked up and repeated by the next component and then the other; the alarm returns to the original component and a louder alarm goes out. Fear amplifies. Other alarms are raised about other risks, more feedback loops are created, and the 'unreasoning fear' Roosevelt warned against becomes a fixture of daily life. In part, this is an inevitable condition of modernity. Our Stone Age brains can't change, we won't abandon information technology, and the incentives for marketing fear are growing. But while we may not be able to cut the circuitry of fear, we can at least turn down the volume. The first step is simply recognizing that there are countless individuals and organizations that have their own reasons for inflating risks, and that most journalists not only fail to catch and correct these exaggerations, they add their own. We need to be skeptical, to gather information, to think carefully about it and draw conclusions for ourselves. We also have to recognize that the brain that is doing this careful thinking is subject to the foibles of psychology. This is actually more difficult than it sounds. Psychologists have found that people not only accept the idea that other people's thinking may be biased, they tend to overestimate the extent of that bias. But almost everyone resists the notion that their own thinking may also be biased. One survey of medical residents, for example, found that 61 per cent said they were not influenced by gifts from drug company salespeople but only 16 per cent said the same of other physicians. It's as if each of us recognizes that to err is human, but, happily for us, we are not human. But even if we accept that we, too, are human, coping with the brain's biases is not easy. Researchers have tried to 'debias' thinking by explaining to people what biases are and how they influence us, but that doesn't work. Consider the Anchoring Rule. The reader now knows that when we have to guess a number, we unconsciously grab onto the number we came across most recently and adjust up or down from it. But if I were to mention that Mozart died at the age of 34 and then ask you to guess how many countries have a name beginning with the letter 'A,' your unconscious mind would still deploy the Anchoring Rule and the number 34 would still influence your guess. Not even a conscious decision to ignore the number 34 will make a difference because the directive to ignore it comes from Head and Head does not control Gut. We simply cannot switch off the unconscious mind. What we can do is understand how Gut works and how it sometimes makes mistakes. 'People are not accustomed to thinking hard,' Daniel Kahneman wrote, 'and are often content to trust a plausible judgment that quickly comes to mind.' That is the most important change that has to be made. Gut is good, but it's not perfect, and when it gets risk wrong people can come to foolish conclusions, such as believing that young women are at serious risk of breast cancer while older women are free and clear, or that abandoning airplanes for cars is a good way to stay safe. To protect ourselves against unreasoning fear, we must wake up Head and tell it to do its job. We must learn to think hard. Very often, Head and Gut will agree.
When that happens, we can be confident in our judgments. But sometimes Head will say one thing, Gut another. Then there's reason to be cautious. A quick and final judgment isn't necessary to deal with most of the risks we face today, so when Head and Gut can't agree, we should hold off. Gather more information. Think some more. And if Head and Gut still don't match up, swallow hard and go with Head. After the September 11 attacks, millions of Americans did the opposite and chose to abandon planes for cars. This mistake cost the lives of more than 1,500 people. Putting Head before Gut is not easily done, but for the fears it can ease, and the lives it can save, it is worth the effort. So maybe we really are the safest, healthiest, and wealthiest humans who ever lived. And maybe we can significantly reduce the remaining risks we face simply by eating a sensible diet, exercising, not smoking and obeying all traffic regulations. And maybe we can expect more of this good fortune to extend into the future if current trends persist. But, the determined worrier may ask, what if current trends don't persist? What if catastrophe strikes? Judging by what's on offer in bookstores and newspaper commentary pages, it will strike. Energy depletion, climate chaos, and mass starvation are popular themes. So are nuclear terrorism and annihilating plagues. Catastrophist writing is very much in vogue and it can be terribly depressing. 'Even after the terrorist attacks of September 11, 2001,' wrote James Howard Kunstler, author of The Long Emergency, 'America is still sleepwalking into the future. We have walked out of our burning house and we are now heading off the edge of a cliff.' Perhaps this will be - to use the title of a book by British astronomer and president of the Royal Society Martin Rees - Our Final Hour. Armageddon is in the air. Cormac McCarthy's The Road — a novel about a father and son wandering through a future America devastated by an unknown catastrophe - was released in 2006. A year later came Jim Crace's The Pesthouse, a novel about two people wandering through a future America slightly less devastated by an unknown catastrophe. When two renowned authors working in isolation come up with near-identical plots, they are tapping into the Zeitgeist, and it is grim, indeed. Even Thomas Friedman - the New York Times columnist who made his name as a techno-optimist — occasionally slips into fearful pessimism. In September 2003, Friedman wrote, he took his daughter to college with the sense that 'I was dropping my daughter off into a world that was so much more dangerous than the world she was born into. I felt like I could still promise my daughter her bedroom back, but I could not promise her the world, not in the carefree way that I had explored it when I was her age.' Friedman's story neatly captures a common belief. The past wasn't perfect, but at least we knew where we stood. Now when we look into the future, all we see is a black void of uncertainty in which there are so many ways things could go horribly wrong. This world we live in really is a more dangerous place. Oddly, though, when we look into the past that we think was not so frightening, we find a lot of people who felt about their time as we do about ours. It 'was like the end of the world,' wrote the German poet Heinrich Heine in 1832. Heine was in Paris and cholera was sweeping across France. In a matter of hours, perfectly healthy people would collapse, shrivel like raisins in the sun and die.
Refugees fled their homes only to be attacked by terrified strangers desperate to keep the plague away. Cholera was new to Europe and no one knew how it was spread or how to treat the victims. The terror they felt as it swept the land is unimaginable. Literally so: We know, looking back, that this was not the end of the world - when we imagine 19th-century Paris, we tend to think of the Moulin Rouge, not plague - and that knowledge removes the uncertainty that was the defining feature of the experience for Heine and the others who lived through it. Simply put, history is an optical illusion: The past always appears more certain than it was, and that makes the future feel more uncertain - and therefore frightening - than ever.

The roots of this illusion lie in what psychologists call 'hindsight bias.' In a classic series of studies in the early 1970s, Baruch Fischhoff gave Israeli university students detailed descriptions of events leading up to an 1814 war between Great Britain and the Gurkas of Nepal. The description also included military factors that weighed on the outcome of the conflict, such as the small number of Gurka soldiers and the rough terrain the British weren't used to. One thing missing was the war's outcome. Instead, one group of students was told there were four possible results - British victory, Gurka victory, stalemate with no peace settlement and stalemate with settlement. Now, they were asked, how likely was it that the war would end in each of these outcomes? A second group of students was divided into four sections. Each section was given the same list of four outcomes. But the first section was told that the war actually did end in a British victory (which it did, incidentally). The second section was told it concluded in a Gurka victory. The third section was told it ended in a stalemate with no settlement, and the fourth was told it was a stalemate with a settlement. Now, they were asked, how likely was each of the four outcomes?

Knowing what happened - or at least believing you know - changed everything. Students who weren't told how the war ended gave an average rating of 33.8 per cent to the probability of a British victory. Among students who were told the war ended in a British victory, the chance of that happening was judged to be 57.2 per cent. So knowing how the war ended caused people's estimate of the probability to jump from one-third to better than one-half. Fischhoff ran three other versions of the experiment and consistently got the same results. Then he did the whole thing over again, but with one change: Those who were told the war's outcome were also asked not to let their knowledge of the outcome influence their judgment. It still did.

Fischhoff came up with an ingenious twist on his research in 1972, after Richard Nixon announced he would make an historic visit to China and the U.S.S.R. Prior to the trip, students were told that certain things could happen during Nixon's travels: He may meet personally with Mao; he may visit Lenin's tomb; and so on. They were asked how likely each of those events was. Fischhoff filed that information away and waited. Months after Nixon's trip, he went back to each student and asked them about each event. Do you think it occurred? And do you recall how likely you thought it was to occur? 'Results showed that subjects remembered having given higher probabilities than they actually had to events believed to have occurred,' Fischhoff wrote, 'and lower probabilities to events that hadn't occurred.'
The effect of hindsight bias is to drain the uncertainty out of history. Not only do we know what happened in the past, we feel that what happened was likely to happen. What's more, we think it was predictable. In fact, we knew it all along. So here we are, standing in the present, peering into the frighteningly uncertain future and imagining all the awful things that could possibly happen. And when we look back? It looks so much more settled, so much more predictable. It doesn't look anything like this. Oh yes, these are very scary times.

This is all an illusion. Consider the daughter that Thomas Friedman dropped off at college in 2003 - into a world 'so much more dangerous than the world she was born into.' That daughter was born in 1985. Was the world of 2003 'so much more dangerous' than the world of 1985? Thanks to the foibles of the human mind, it can easily seem that way. But in 1985, the Soviet Union and the United States possessed sufficient nuclear weaponry to kill half the human race and reduce the rest to scavengers scuttling amid the ruins. These weapons were pointed at each other. They could be launched at any moment. Annihilation would come with nothing more than a few minutes' notice and, in 1985, it increasingly looked like it would. The Cold War had been getting hotter since the 1979 Soviet invasion of Afghanistan and the 1980 election of Ronald Reagan. Mikhail Gorbachev became leader of the Soviet Union in 1985, and we know now that Gorbachev and Reagan later met and steadily reduced tensions, that the Cold War ended peacefully, and the Soviet Union dissolved within a few years. But in 1985, that was all in the black void of the future. In 1985, what actually happened would have seemed wildly improbable - which is why almost no one predicted anything like it. But nuclear war? That looked terrifyingly likely. In 1983, The Day After, a nightmarish look at life in small-town America before and after nuclear attack, became the most talked-about TV drama of the era. In 1984, no fewer than seven novels featuring nuclear war were published. The fear was real and intense. It filled the streets of Europe and America with millions of protestors and countless heads with nightmares. 'Suppose I survive,' wrote British novelist Martin Amis. 'Suppose my eyes aren't pouring down my face, suppose I am untouched by the hurricane of secondary missiles that all mortar, metal, and glass has abruptly become: suppose all this. I shall be obliged (and it's the last thing I feel like doing) to retrace that long mile home, through the firestorm, the remains of the thousand-mile-an-hour winds, the warped atoms, the groveling dead. Then - God willing, if I still have the strength, and, of course, if they are still alive - I must find my wife and children and I must kill them.'

And if global incineration weren't enough to worry about, 1985 also saw explosive awareness about the rapid spread of a deadly new virus. There was no treatment for AIDS. Get it and you were certain to die a slow, wasting death. And there was a good chance you would get it because a breakthrough into the heterosexual population was inevitable. 'AIDS has both sexes running scared,' Oprah Winfrey told her audience in 1987. 'Research studies now project that one in five heterosexuals could be dead from AIDS at the end of the next three years. That's by 1990. One in five.' Surgeon General C. Everett Koop called it 'the biggest threat to health this nation has ever faced.'
A member of the president's commission on AIDS went one further, declaring the disease to be 'the greatest threat to society, as we know it, ever faced by civilization - more serious than the plagues of past centuries.' We know now that it didn't work out that way, but at the time there were good reasons to think it would. And to be very, very afraid.

So was the world of 1985 so much safer? Thomas Friedman thought so in 2003, but I think he was the victim of a cognitive illusion. He knew the Cold War ended peacefully and AIDS did not sweep through the United States like the Black Death. That knowledge made those outcomes appear far more likely than they did at the time. And it made him feel that the Thomas Friedman of 1985 was much more confident of those outcomes than the Thomas Friedman of 1985 really was.

I don't mean to knock Friedman. The point is simply that even a renowned commentator on global affairs is vulnerable to this illusion. And he's not alone. In a 2005 book called Expert Political Judgment, Philip Tetlock, a University of California psychologist, presented the results of a 20-year project that involved Tetlock tracking the predictions of 284 political scientists, economists, journalists, and others whose work involved 'commenting or offering advice on political or economic trends.' In all, Tetlock checked the accuracy of 82,361 predictions and found the experts' record was so poor they would have been beaten by random guesses. Tetlock also found, just as Baruch Fischhoff had earlier, that when experts were asked after the fact to recall their predictions and how confident they were, they remembered themselves being more accurate and more certain than they actually were. (Unlike the Israeli students Fischhoff surveyed, however, experts often got defensive when they were told this.)

I certainly don't want to suggest all scary prognostications are wrong. Horrible things do happen, and it's sometimes possible - very difficult but possible - for smart, informed people to foresee them. Each scary prognostication has to be taken on its merits. But anyone rattled by catastrophist writing should also know that many of the horrible (and wonderful) things that come to pass are not predicted, and there is a very long history of smart, informed people foreseeing disasters - they tend to focus on the negative side of things, for some reason - that never come to pass.

In 1967 - a year we remember for the Summer of Love and Sergeant Pepper's Lonely Hearts Club Band - Americans got a remarkably precise warning of pending catastrophe. It would strike in 1975, they were told, and the world would never be the same. Famine - 1975! by brothers William and Paul Paddock may be thoroughly forgotten today but it was a best-seller in 1967. The brothers had solid credentials. One was an agronomist, the other an experienced foreign service officer. The book is loaded with scientific research, studies and data from around the world - everything from post-war Mexican wheat production to Russian economic output. And the Paddocks came to a brutal conclusion: As a result of soaring populations, the world was rapidly running out of food. Massive, worldwide starvation was coming and there was nothing anyone could do to stop it. 'Catastrophe is foredoomed,' they wrote. 'The famines are inevitable.' The Paddocks were not cranks. There were countless experts who agreed with them. Harvard biologist George Wald predicted that, absent emergency measures, 'civilization will end within 15 or 30 years.'
The loudest alarm was raised by Stanford University biologist Paul Ehrlich. 'The battle to feed all of humanity is over,' Ehrlich wrote in The Population Bomb, published in 1968. 'In the 1970s and 1980s, hundreds of millions of people will starve to death in spite of any crash programs embarked upon now.' Like the Paddocks, Ehrlich loaded his book with research, studies and statistics. He also wrote three different scenarios for the unfolding of future events in heavily dramatic style - a technique that would become common in the catastrophist genre and one which, as we have seen, is very likely to trigger the Rule of Typical Things and lead Gut to believe the predicted events are more likely than reason would suggest. 'Even with rationing, a lot of Americans are going to starve unless this climate change reverses,' a frustrated scientist says to his wife in the first scenario. 'We've seen the trends clearly since the early 1970s, but nobody believed it would happen here, even after the 1976 Latin American famine and the Indian Dissolution. Almost a billion human beings starved to death in the last decade, and we managed to keep the lid on by a combination of good luck and brute force.' That scenario ends with the United States launching a pre-emptive nuclear strike on the U.S.S.R. In the second scenario, poverty, starvation, and crowded populations allow a virus to emerge from Africa and sweep the world - one-third of the planet's population dies. In the third scenario, the United States realizes the error of its ways and supports the creation of world bodies that tax rich countries to pay for radical population control measures - one billion people still die of starvation in the 1980s but population growth slows and humanity survives. Ehrlich writes that this last scenario is probably far too optimistic because 'it involves a maturity of outlook and behavior in the United States that seems unlikely to develop in the near future.' In 1970, Ehrlich celebrated the first Earth Day with an essay that narrowed the range of possibilities considerably: Between 1980 and 1989, roughly four billion people, including 65 million Americans, would starve in what he dubbed the 'Great Die-Off.'

The Population Bomb was a huge best-seller. Ehrlich became a celebrity, making countless appearances in the media, including The Tonight Show with Johnny Carson. Awareness of the threat spread and mass starvation became a standard theme in popular culture. In the 1973 movie Soylent Green, the swollen populations of the future are fed rations of a mysterious processed food called 'Soylent Green' - and as we learn in the memorable final line, 'Soylent Green is people!'

Governments did not embark on the emergency measures to control population advocated by Ehrlich and many others. And yet mass starvation never came, for two reasons. First, fertility rates declined and the population did not grow as rapidly as predicted. Second, food production soared. Many experts had said these outcomes were not only unlikely, they were impossible. But they both happened and 40 years after the publication of Famine - 1975!, the world's population was better fed and longer lived than ever. One would think catastrophists would learn to be humble about their ability to predict the future but there is a noticeable absence of humility in the genre.
In 1999, James Howard Kunstler wrote at great length about the disasters - including an economic recession as bad as the Great Depression of the 1930s - that would follow the breakdown of computers afflicted by the Y2K bug. Five years later, he published The Long Emergency, which is filled with great certainty about all manner of horrors to come. As for Paul Ehrlich, he has been repeating essentially the same arguments he made in The Population Bomb for 40 years. On the dust jacket of The Upside of Down, a 2006 book by University of Toronto professor Thomas Homer-Dixon that follows similar themes, there is a blurb from Ehrlich. The book is highly recommended, he writes, for its 'insightful ideas about how to make society more resilient in the face of near-inevitable environmental and social catastrophes.' Apparently the only thing Ehrlich has learned from the past 40 years is to put the word 'near' in front of the word 'inevitable.'

To be fair to Homer-Dixon, his book is nowhere near as alarmist as Ehrlich's writing, or some of the others in the catastrophist genre, although that's how the marketing makes it look. In books, as in so much else, fear sells. Anyone stocking up on canned goods and shotgun shells because they've read some prediction of pending doom should keep that in mind. When Martin Rees wrote a book on threats emerging from scientific advances, he entitled it Our Final Century? But Rees's British publishers didn't find that quite frightening enough, so they dropped the question mark. Rees's American publishers still weren't satisfied and they changed 'century' to 'hour.'

In an interview, Rees is much less gloomy than his marketing. He says we should worry more about nuclear weapons than we do and work harder for disarmament; given that these weapons are actually designed to cause catastrophe, it's hard not to agree. But Rees also thinks it important to acknowledge the astonishing bounty science has heaped upon us. 'We are safer than ever before,' he says. We worry far too much about 'very small risks like carcinogens in food, the risk of train accidents and things like that. We are unduly risk averse and public policy is very risk averse for those minor matters.' A balanced perspective is vital, Rees says. There are real threats that should concern us - threats like nuclear weapons - but we also have to appreciate that 'for most people in the world, there's never been a better time to be alive.'

Proof of this fundamental truth can be found in countless statistics and reports. Or we can simply spend an afternoon reading the monuments to our good fortune erected in every Victorian cemetery.

Notes

. . . of psychologists in this chapter and others that follow, this is drawn from Heuristics and Biases, edited by Thomas Gilovich, Dale Griffin, and Daniel Kahneman. Along with the earlier edition of the same work - edited by Paul Slovic, Amos Tversky, and Daniel Kahneman - it is the definitive text on the subject.

Page 31 '. . . if you give it some careful thought . . .' The answer is five cents.

Chapter Three

Page 36 'Those who heard the higher number, guessed higher.' For the record, both groups were way off. Gandhi was seventy-eight when he died.

Page 39 '. . . produced an average answer almost 150 per cent greater than a low number.' Psychologists Baruch Fischhoff, Sarah Lichtenstein, and Paul Slovic found another use for anchoring numbers in a study that asked people to estimate the toll taken by various causes of death.
Without guidance, people's answers were often wildly inaccurate, ranging from one extreme to the other. But when the researchers told people that 50,000 Americans are killed in car crashes each year (a toll that has dropped since the study was conducted), their answers 'stabilized dramatically' because they started at 50,000 and adjusted up or down. In later trials, the researchers switched the anchoring number to 1,000 dead from electrocution - with the predictable result that people's estimates of deaths by other causes dropped enormously.

Page 45 '. . . as even many black men do . . .' For an explanation and tests of unconscious beliefs that anyone can take, see 'Project Implicit' at www.implicit.harvard.edu.

Page 54 '. . . there is no "just" in imagining . . .' Lottery and casino ads that highlight smiling winners are another form of powerful manipulation. The odds of winning big jackpots are so tiny that almost no one will be able to think of winners in their personal lives. But by advertising examples of people who struck it big - often with personal details that make their stories memorable - lotteries and casinos make it easy for people to recall examples of winners. And that ease of recall boosts Gut's estimate of the likelihood of winning.

Page 59 '. . . only family and friends will hear of a life lost to diabetes . . .' Much of the work of Paul Slovic cited in this book can be found in The Perception of Risk, a compilation of decades of Slovic's papers.

Chapter Four

Page 61 '. . . a conference that brought together some of the world's leading astronomers and geo-scientists to discuss asteroid impacts . . .' The conference papers were compiled in Comet/Asteroid Impacts and Human Society, P. Bobrowsky and H. Rickman eds., Springer Verlag Publishing, New York.

Page 76 '. . . while "all possible causes" is bland and empty. It leaves Gut cold.' See Eric Johnson et al., 'Framing, Probability Distortions, and Insurance Decisions,' Journal of Risk and Uncertainty, 1993.

Page 76 '. . . raising risk estimates by 144 per cent . . .' See 'Affect Generalization and the Perception of Risk,' Journal of Personality and Social Psychology, 45, 20-31.

Page 78 '. . . beloved grandfathers are not necessary.' One of the stranger demonstrations of the mere exposure effect involves asking people to choose which of two images they prefer. One is a photograph of their face the way it actually is. The other is the same image reversed. Most people choose the face that is reversed. Why? Because that's what they see every morning in the mirror.

Page 79 '. . . the Penguins' penalty time rose 50 per cent to 12 minutes a game . . .' Does the black uniform mean referees perceive the team more negatively and therefore judge them more strictly than they otherwise would? Or does the black uniform inspire players to be more aggressive? Both, the researchers concluded. See 'The Dark Side of Social and Self-Perception: Black Uniforms and Aggression in Professional Sport.'

Page 82 '. . . bias in favor of the "lean" beef declined but was still evident.' Irwin Levin and Gary Gaeth, 'How Consumers Are Affected by the Framing of Attribute Information Before and After Consuming the Product,' The Journal of Consumer Research, December 1988.

Page 83 '. . . feeling trumped numbers. It usually does, as we will see.' See Cass Sunstein's Laws of Fear.

Page 85 '. . . there's a good chance I won't even think about that.'
See Robin Hogarth and Howard Kunreuther, 'Decision Making Under Ignorance,' Journal of Risk and Uncertainty, 1995.

Chapter Five

Page 94 '. . . in a way that the phrase "almost 3,000 were killed" never can.' This summary of the life of Diana O'Connor comes from the remarkable obituaries the New York Times prepared of every person who died in the attack. The series ran for months and garnered a huge readership.

Page 96 '. . . only data - properly collected and analyzed - can do that.' The reader will notice that anecdotes abound within this book. My point here is not to dismiss stories, only to note that, however valuable they may be in many circumstances, anecdotes have serious limitations.

Page 97 '. . . "the deaths of millions is a statistic," said that expert on death, Joseph Stalin.' Psychologists call this the 'identifiable victim effect.' For a discussion, see, for example, 'Statistical, Identifiable and Iconic Victims,' George Loewenstein, Deborah Small, and Jeff Strand.

Page 98 '. . . empathetic urge to help generated by the profile of the little girl.' Even in relatively unemotional situations - the sort in which we may assume that calm calculation would dominate - numbers have little sway. Psychologists Eugene Borgida and Richard Nisbett set up an experiment in which groups of students at the University of Michigan were asked to look at a list of courses and circle those they thought they might like to take in future. One group did this with no further information. A second group listened to brief comments about the courses delivered in person by students who had taken the courses previously. These presentations had a 'substantial impact' on students' choices, the researchers found. Finally, a third group was given the average rating earned by each course in a survey of students who had taken the courses previously: In sharp contrast with the personal anecdotes, the data had no influence at all.

Page 99 '. . . the statistics - 70 engineers, 30 lawyers - mattered less than the profile.' Kahneman and Tversky dubbed this 'base-rate neglect.'

Page 102 '. . . more about the power of Gut-based judgments than they do about cancer.' For a good overview, see Atul Gawande, 'The Cancer-Cluster Myth,' The New Yorker, February 8, 1998.

Chapter Six

Page 112 '. . . go along with the false answers they gave.' Robert Baron, Joseph Vandello, and Bethany Brunsman, 'The Forgotten Variable in Conformity Research: The Impact of Task Importance on Social Influence,' Journal of Personality and Social Psychology, 71, 915-27.

Page 115 '. . . The correct rule is actually "any three numbers in ascending order."' There are other rules that would also work. What matters is simply that the rule is not 'even numbers increasing by two.'

Page 115 '. . . more convinced that they were right and those who disagreed were wrong.' Charles Lord, Lee Ross, Mark Lepper, 'Biased Assimilation and Attitude Polarization: The Effects of Prior Theories on Subsequently Considered Evidence,' Journal of Personality and Social Psychology, 1979.

Page 116 '. . . when they processed neutral or positive information.' See Westen's 2007 book The Political Brain.

Page 122 '. . . most drug education as "superficial, lurid, excessively negative . . ."' Why have you never heard of this report? Because the government of the United States successfully buried it. In a May 1995 meeting, according to the WHO's records - and confirmed to me by a WHO spokesman - Neil Boyer, the American representative to the organization, 'took the view that . . .
(the WHO's) program on substance abuse was headed in the wrong direction. ... If WHO activities relating to drugs failed to reinforce proven drug-control approaches, funds for the relevant programs should be curtailed.' Facing a major loss of funding, the WHO backed down at the last minute. Although a press release announcing the report was issued, the report itself was never officially released. The author has a copy on file.

Page 122 '. . . illicit drugs aren't as dangerous as commonly believed.' A thorough discussion of the real risks of drugs is outside the scope of this book but readers who wish to pursue it are encouraged to read Jacob Sullum's 'Saying Yes.'

Page 122 'Higher perceived risk is always better.' See, for example, the website of the U.S. Office of National Drug Control Policy (www.whitehousedrugpolicy.gov), where documents routinely tout higher risk perceptions as evidence of success but never - so far as I can see - consider whether those perceptions are out of line with reality.

Page 122 '. . . killed far more people than all the illicit drugs combined.' A 2004 article in the New England Journal of Medicine, for example, puts the annual death toll inflicted by alcohol at 85,000; all illicit drugs were responsible for 17,000 deaths. See 'Actual Causes of Death in the United States, 2000.' Gaps between the two causes of death are even larger in other countries.

Page 125 '. . . not a problem in the hands of law-abiding citizens.' Kahan's research and more is available on the website of the Cultural Cognition Project at the Yale University Law School, http://research.yale.edu/culturalcognition/

Chapter Seven

Page 137 '. . . three-quarters of all Americans would be considered "diseased."' See 'Changing Disease Definitions: Implications for Disease Prevalence,' Effective Clinical Practice, March/April 1999.

Page 144 '. . . self-interest and sincere belief seldom part company.' For a thorough and entertaining look at how the brain is wired for self-justification, see 'Mistakes Were Made (But Not By Me): Why We Justify Foolish Beliefs, Bad Decisions, and Hurtful Acts' by Carol Tavris and Elliot Aronson.

Page 150 '. . . one out of eight American children is going hungry tonight.' See It Ain't Necessarily So: How the Media Make and Unmake the Scientific Picture of Reality, David Murray, Joel Schwartz, and S. Robert Lichter, 2001.

Page 152 '. . . half-truths, quarter truths, and sort-of-truths.' Another fine example is reported in 'Is the Tobacco Control Movement Misrepresenting the Acute Cardiovascular Health Effects of Secondhand Smoke Exposure?' in Epidemiologic Perspectives and Innovations. The author, Michael Siegel, a professor in the School of Public Health at Boston University, shows how anti-smoking groups promoted smoking bans by exaggerating the danger of second-hand smoke. And another: In the late 1980s and early 1990s, activists and officials struggling to convince people that HIV-AIDS was not only a 'gay man's disease' had an unfortunate tendency to spin numbers. The U.S. Centers for Disease Control, for example, reported that 'women accounted for 19 per cent of adult/adolescent AIDS cases in 1995, the highest proportion yet reported among women.' That frightening news made headlines across the United States. But as David Murray, Joel Schwartz, and Robert Lichter point out in It Ain't Necessarily So, the CDC did not include the actual numbers of AIDS cases in the summary that garnered the headlines.
Those numbers actually showed a small decline in the number of women with AIDS. The proportion of AIDS cases involving women had gone up because there had been a much bigger drop in the number of men with AIDS. By the mid-1990s, it was becoming increasingly obvious that activists and agencies had been hyping the risk of heterosexual infection in the United States, the United Kingdom, and elsewhere, but some thought that was just fine. 'The Government has lied, and I am glad,' wrote Mark Lawson in a 1996 column in The Guardian.

Page 156 '. . . I hope that means being both.' Many critics of environmentalists have repeated this quotation in a form that cuts off the last several sentences, thereby making it look as if Schneider endorsed scare-mongering. He did not.

Page 158 '. . . may be misleading but it certainly gets the job done.' Another technique for making uncertain information exciting is to dispense with the range of possible outcomes that often accompanies uncertainty and instead cite one number. Naturally, the number cited is not the lowest number, nor the average within the range. It is the biggest and scariest. Thus, when former World Bank chief economist Nicholas Stern estimated in 2006 that the costs of climate change to the global economy under a range of scenarios would be 5 to . . .

Chapter Twelve

Page 305 '. . . In 1900, it stood at 48 years.' See The Escape From Hunger and Premature Death, 1700-2100, Robert Fogel.

Page 305 '. . . died before they were five years old.' See Fatal Years by Samuel Preston and Michael Haines. The authors show that the toll was distributed throughout American society. Losing children was a common experience even for the very wealthy.

Page 306 '. . . life expectancy will rise in every region of the world.' See 'Projections of Global Mortality and Burden of Disease from 2002 to 2030,' Colin Mathers and Dejan Loncar.

Page 308 '. . . to err is human, but, happily for us, we are not human.' In a series of experiments, psychologists Emily Pronin, Daniel Lin, and Lee Ross gave Stanford University students booklets describing eight biases identified by psychologists. They then asked how susceptible the 'average American' is to each of these biases. The average student at Stanford? You? In every case, the students said the average student was quite susceptible, but they were much less so. The researchers got the same results when they ran a version of the test at San Francisco International Airport. In more elaborate experiments, Pronin, Lin, and Ross sat people down in pairs and had them take what they said was a 'social intelligence' test. The test was bogus. One of the two test-takers - chosen randomly - was given a high score. The other was given a low score. Then they were asked whether they thought the test was an accurate measure of social intelligence. In most cases, the person who got the high score said it was, while the poor guy who got the low score insisted it was not. That's a standard bias at work - psychologists call it the 'self-serving bias.' But then things got interesting. The researchers explained what the 'self-serving bias' is and then they asked whether that bias might have had any influence on their judgment. Why, yes, most said. It did influence the other guy's judgment. But me? Not really.

Page 314 '. . . much more confident of those outcomes than the Thomas Friedman of 1985 really was.' In 2005, four out of five Canadians agreed that 'the world is not as safe a place today as it was when I was growing up.'
Particularly amazing is that 85 per cent of Canadians born during or prior to the Second World War agreed with this statement. Thus, almost everyone who grew up in an era characterized by the rise of totalitarian nightmares, economic collapse, and world war agreed that the world today is more dangerous than that. (See Reginald Bibby, The Boomer Factor, 2006.)

Page 315 '. . . they tend to focus on the negative side of things, for some reason.' Just as it is possible to look into the future and imagine horrible things happening, it is possible to dream up wondrous changes. Vaccines for malaria and AIDS would save the lives of hundreds of millions of people. Genetically engineered crops could bring an abundance of cheap food to the world's masses. Hyper-efficient forms of alternative energy may make fossil fuels obsolete and radically mitigate climate change. In combination, they may usher in an unparalleled Golden Age - which is as likely as some of the more outlandish scenarios in catastrophist writing.

Page 317 '. . . satisfied and they changed "century" to "hour."' The full, terrifying title is Our Final Hour: A Scientist's Warning: How Terror, Error, and Environmental Disaster Threaten Humankind's Future in This Century - On Earth and Beyond. Much of the book is purely speculative. Rees notes, for example, that if nanotechnology got out of control and became self-replicating it could turn the world into 'grey goo.' This is far beyond any technology humanity has invented or will invent for the foreseeable future, Rees acknowledges, and the only thing making it even a theoretical possibility is the fact that it doesn't violate any laws of physics. As British science writer Oliver Morton wrote in reviewing Rees's book, 'If we're to take the risk seriously, we need something more to gnaw on than the fact that it breaks no laws of physics. Neither do invisible rabbits.'