Chapter 9

Cognition

It should be fairly apparent to the reader why personality- and belief-based theories "fit" under the general heading of dispositionism rather than situationism; if all individuals tended to behave the same way when placed in the same objective situation, there would be little point in studying the mindsets of particular individuals. If the situationists are correct, then there is little to be gained by looking "inside people's heads." According to them, we can get all the necessary information about people's behavior by specifying the nature of the situation the individual faces, not by considering his or her dispositions. Again, however, dispositionists assume that individuals vary in their responses to situations, and they ask what specific factors seem to produce this variation. Since the 1980s, moreover, political psychologists working in fields as diverse as foreign policy decision-making and voting behavior have increasingly sought to explain these individual differences by examining the knowledge structures or cognitive architecture inside our heads.

During the 1970s, psychology underwent what is often referred to as the "cognitive revolution." The study of cognition (which loosely means thought processes or knowledge, from the Latin term cognoscere, "to know") has come to dominate the discipline ever since. How do we make decisions? How do we solve problems? What mental processes shape our reasoning? How do we process information? How do we acquire knowledge? How do we access knowledge when it is required? What factors shape our perceptions of the world? How do we learn? The study of questions like these involves an analysis of cognitive processes and an appreciation of the ways in which the human mind works.
While modern psychology has retained Freud's idea that many of our mental processes are unconscious ones—as we shall see later on in this chapter, we often use various cognitive "short cuts" almost without knowing we are doing so—today's cognitive psychology represents a far cry from the psychoanalytic ideas we began with. And since political psychology borrows heavily from its mother discipline, this has meant that many of its proponents have become preoccupied with political cognition, the ways in which we think and reason politically.1 Many of the assumptions of the model we described in Chapter 1 as Homo psychologicus are directly derived from cognitive psychology and the broader field of what has become known as cognitive science. This is broader in the sense that while it essentially covers the same topic matter as cognitive psychology, it draws upon linguistics, computer science, neuroscience, philosophy, and other disciplines as well. David Green and his colleagues define cognitive science as:

the interdisciplinary scientific study of the mind ... it seeks to understand how the mind works in terms of processes operating on representations. Mind, and hence the basis of intelligent action in the world, is viewed in terms of computations or information-processes.2

The human mind is an incredible tool, and the study of artificial intelligence (AI) has not yet come close to creating computer programs that replicate the range of tasks it performs. As Steven Pinker notes, computers and robots lack what humans call intuition or "common sense." While the human mind is usually able to exercise this faculty with ease, that attribute is exceptionally difficult (if not impossible) to program into the "mind" of a computer. The following example illustrates the flavor of the difficulty. Suppose that a female friend of yours has just moved into town.
She is throwing a housewarming party and is unattached, but she has a problem: the only people she knows here are other women, so she asks you to invite some bachelors to the party. You are really busy at work, so you give your robot "Robbie" the task of inviting the male guests. At the workshop Robbie was programmed with the standard information that a bachelor is an adult male who is not married, and he uses this definition to send out invitations from a list of your known friends and acquaintances. To your jaw-dropping surprise, you discover at the party that Robbie has invited the following:

Guest #1: Arthur, who has been living happily with Alice for the last five years. They have a two-year-old daughter but have never officially married.

Guest #2: Charlie is seventeen years old. He lives at home with his parents and is still in high school.

Guests #3 and #4: Eli and Edgar are homosexual lovers who have been living together for many years.

Guest #5: Faisal is allowed by the law of his native Abu Dhabi to have three wives. He currently has two and is interested in meeting another potential wife.

Guest #6: Father Gregory is the bishop of the Catholic Cathedral at Groton Upon Thames.3

Computers, of course, do some things far better than we do—given the limitations of human memory, computers are generally far better at data storage and retrieval, for instance. But another example that (indirectly) suggests the comparative brilliance of the human mind is John Searle's Chinese Room thought experiment, which directly challenges the notion that computers can "think" like humans do.4 Searle's argument is designed to debunk computationalism, the view proposed by some hard-line advocates of AI—popularized in movies like 2001: A Space Odyssey and Demon Seed—that computers are capable of attaining genuine understanding or developing what we often call "consciousness."
Searle asks us to imagine a computer that looks as if it understands Chinese. When a native Chinese speaker inputs a question into the computer, it answers back correctly in Chinese. In fact, it is so good that it seems to the Chinese speaker that he or she is actually talking to another Chinese person. Searle asks us to imagine, however, that we ourselves are inside the computer, or "Chinese Room." I personally don't speak a word of Chinese, and probably most of you don't either (if you do, just imagine that the computer is speaking another language instead that you don't know). However, what I do when I receive a question in Chinese is simply to consult a rule book. That tells me that when I get a certain set of Chinese characters, I should respond with another set. And so I mindlessly produce this string of characters in response. I have no idea what they mean, just as a computer has no idea what the English symbols being keyed into it mean. But to the Chinese person on the outside, it still looks just as if the computer understands Chinese and is genuinely conversing as a human being would. Searle's broader point is of course that computers simply reproduce the sets of rules and syntactic symbols with which we have programmed them; they are not capable of genuine understanding. No matter how good or how convincing our attempts to mimic human behavior in artificial form, computers or robots will probably never be able to do the things that the human mind is capable of.

Given the sheer complexity of the human brain—and the fact that the study of it is still in its infancy—there are naturally plenty of differences in the ways that cognitive scientists approach the questions about information processing posed at the beginning of this chapter.
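Searle's Room can be caricatured in a few lines of code. This is a toy sketch with an invented two-entry "rule book" (the entries are mine, purely for illustration): the program returns plausible replies by pure symbol lookup, which is exactly Searle's point about syntax without semantics.

```python
# A toy "Chinese Room": the program answers by pure symbol lookup.
# The rule book below is invented for illustration; a real one would be
# vastly larger, but the principle is the same.
RULE_BOOK = {
    "你好吗": "我很好",    # "How are you?" -> "I am fine"
    "你会说中文吗": "会",  # "Do you speak Chinese?" -> "Yes"
}

def chinese_room(question: str) -> str:
    # The "operator" matches the incoming characters against the rule
    # book and copies out the prescribed reply, understanding neither.
    return RULE_BOOK.get(question, "请再说一遍")  # "Please say that again"

print(chinese_room("你好吗"))  # the room "answers" without understanding
```

The lookup succeeds or fails on exact character matches alone; nothing in the program represents what the questions or answers mean, which is why Searle denies that such a system understands anything.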
Here, however, we will focus on two theories that have had a particular impact on political psychology in recent years: attribution theory (the "naive scientist") and schema theory (the "cognitive miser"), after first setting the context with a discussion of cognitive consistency theory. Finally, we will conclude this chapter by examining the related topic of analogical reasoning. As we shall see, the topics of attributions, schemas, scripts, and analogies are so closely related or interconnected that theories based upon them are far more complementary than they are competitive.

Cognitive Consistency Theory

Nowadays, the kind of behaviorism discussed in Chapter 3 has lost most of its adherents within mainstream psychology. This is largely a result of failures like the one noted above, but it is also the product of a turn towards cognition and the inner workings of the mind which reasserted itself at about the same time. Cognitive consistency theory (mentioned in Chapter 2, where we told the story of "Marion Keech") became especially popular in the 1950s and 1960s. When people act in ways contrary to their own beliefs, for instance, this theory suggests that they experience a state of psychological discomfort, as long as the mismatch between behavior and beliefs is perceived. The assumption here is that people do not like to act in ways that violate their own beliefs, dislike holding beliefs that are incompatible with one another, and avoid information or situations that cause such incompatibilities to become clear. Leon Festinger called this mismatch a state of cognitive dissonance.5 A somewhat similar theory of cognitive "balance" was previously developed by Fritz Heider.6 Strong party identifiers, for instance, may find themselves at odds with their party on a key issue such as abortion or civil rights, or may disapprove of the presidential or vice-presidential candidate their party has nominated.
The theory assumes that in the face of such dissonance, the voter becomes strongly motivated to bring things back into balance (what Festinger called "consonance"). This could be done by rationalizing away the issue disagreement or candidate choice as unimportant ("the Civil Rights Act won't change things around here," "Joe Biden will never be president anyway"), or perhaps by adding some extra belief which reduces the dissonance. Finally, one could switch one's party allegiance altogether and so bring one's voting behavior more into line with one's choice of party, though many models of voting suggest that this is unlikely for strong party identifiers.

Why did cognitive consistency theory gradually fall out of favor? Susan Fiske and Shelley Taylor offer some reasons:

Consistency theories ceased to dominate the field, ironically, as they proliferated, partly because the variants on a theme became indistinguishable. Moreover, it was difficult to predict what a person would perceive as inconsistent and to what degree, and which route to resolving inconsistency a person would take. Finally, people do in fact tolerate a fair amount of inconsistency, so the motivation to avoid it—as an overriding principle—was called into doubt.7

In these approaches, cognition was also subservient to motivational processes; people would change their beliefs or behavior only when motivated to do so by powerful negative emotional states. As noted above, advocates of this approach conceded that it was difficult to say in advance precisely how a given individual would seek to reduce dissonance, and in politics many individuals seemed to have a high tolerance for inconsistent beliefs.
When deciding how to vote, for instance, advocates of the party identification model (examined in Chapter 12—in which voting is regarded as an almost automatic or "kneejerk" process, based on long-term party loyalties rather than consideration of the issues or candidates) argued that most voters simply ignored or downplayed manifest differences between their own issue positions and those of their party. In 2008, for instance, many neoconservative voters and members of the Christian Right disapproved of the past issue positions taken by their own party's nominee, John McCain; he had dismissed some leaders of the Christian Right as "agents of intolerance," for instance. Faced with this kind of scenario, however, the party identification approach predicted that most conservative Republicans would simply "hold their noses" in the voting booth and vote for McCain anyway, ignoring the presence of cognitive dissonance. The same thing happened in 2012, when conservatives took a long time to warm to their presidential candidate Mitt Romney, not least because he had been a relatively "liberal" Governor of Massachusetts some years before and had even pushed through a prototypical version of "Obamacare" there. But if dissonance is such a powerful force, why do people so seldom change their beliefs or electoral behavior?

As a result of growing dissatisfaction with the cognitive consistency model, during the 1970s both cognitive and social psychologists increasingly began to turn to two newer approaches in particular: attribution theory and schema theory.

Attribution Theory

Rather than viewing human beings as "consistency seekers," attribution theory sees individuals as "naive scientists" or problem-solvers. Instead of being motivated to constantly restore balance among their own beliefs, or between those beliefs and their own behavior, attribution theory suggests that human beings are mainly concerned to uncover the causes of their own behavior and that of others.
People are constantly looking for causes and effects—"why did this happen?"—albeit in a far less sophisticated manner than a scientist working in a laboratory would. They are continually looking to make sense of the world around them, and they draw upon a range of assumptions about themselves and others in doing so. Harold Kelley, Richard Nisbett, and Lee Ross have all been especially influential in developing this approach to cognition.8

Attribution theory becomes particularly interesting to us in terms of the distinction we have been drawing between situationism and dispositionism, in part because it is advocates of that theory who have popularized the distinction. According to this approach, we sometimes attribute the causes of someone's behavior to the situation they are in, while at other times we attribute that behavior to the person's internal dispositions. Unfortunately, we frequently make quite substantial errors and mistakes when we try to do this. As Fiske and Taylor note, people are not always very careful when they make attributions. "On an everyday basis, people often make attributions in a relatively thoughtless fashion. The cognitive system is limited in capacity, so people take short cuts."9

One particularly notable kind of error with potentially major political consequences is called the fundamental attribution error. When we are explaining our own actions, we very often use situational attributions, and in fact we often overestimate the extent to which our actions are the result of the situation. On the other hand, when asked to explain why someone else acted as they did, we often make the opposite kind of mistake: we underestimate the extent to which the situation mattered (and hence overestimate the importance of that person's dispositions). How might this be of interest to students of politics?
As the political psychologist and expert on foreign policy decision-making Deborah Welch Larson puts it in her classic study of the birth of Cold War containment:

Policymakers tend to infer that the actions of their own state were compelled by circumstances, even while they attribute the behavior of other states to the fundamental "character" of the nation or its leaders. Applied to the problem of explaining the change in U.S. foreign policymakers' orientation toward the Soviet Union, attribution theory would suggest that Washington officials were too willing to impute ideological, expansionist motives to Soviet actions that could just as plausibly reflect security calculations similar to those that prompted analogous policies pursued by the United States.10

That some policy-makers fall into the trap of making such false attributions does not mean that we are all condemned to do so. During the Cuban missile crisis, for instance, attribution judgments became a matter of life and death. "Why have the Soviets placed missiles in Cuba?" members of the ExComm asked themselves almost immediately in those first, tense meetings. "What are their intentions?" Air Force General Curtis LeMay appears to have attributed dark dispositionist motives to the Soviet leadership, while others like Robert McNamara, Ambassador Tommy Thompson, and President Kennedy seem to have been more attuned to the possibility that Khrushchev's actions might have been compelled or encouraged by situationist forces. Both the Americans and the Soviets recognized the possibility, moreover, that situationist forces might take over the process and cause the outbreak of an inadvertent war. The exercise of empathy—placing ourselves in the shoes of our adversary—is a useful antidote to the kind of attributional errors that naive scientists often make in international relations, but there are dangers and biases attached to this as well.
As Yaacov Vertzberger notes, we may also have a special tendency to invoke dispositionist attributions for others' behavior when we have a strongly negative attitude towards them. "Dislike tends to evoke dispositional explanations for undesirable actions by others, while empathy biases explanations of such behavior toward situational attributions," Vertzberger points out.11

Supporters of attribution theory argue that two short cuts or heuristic devices are especially important in human decision-making and reasoning: the representativeness heuristic and the availability heuristic. As Samuel Popkin notes, "representativeness is a heuristic, a rule of thumb, for judging the likelihood that a person will be of a particular kind by how similar he is to the stereotype of that kind of person."12 Popkin argues that this is how we make assessments of candidates in presidential primaries, about whom most of us know little at first. This heuristic is also used to estimate the likelihood of something occurring by assessing whether it "fits" a particular category. Perceived similarity is what matters here, but one major problem—which makes the science "naive"—is that people usually ignore base-rate information or statistical probabilities when making these kinds of assessments. When asked to estimate the likelihood that Saddam Hussein is "another Hitler," for instance, most people attempt to match apparent similarities between the two (each was an expansionist, repressive domestically, and so on). What most people fail to do is to look at the statistical probability that "Hussein is a Hitler" (as a scientist presumably would). Arguably, there have been very few genuinely Hitler-like leaders in recent history, but this is not how most people estimate probability.
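What the "naive scientist" skips here can be made precise. The following is a minimal Bayes' rule sketch, with invented illustrative numbers (they are not real estimates of anything), showing why a strong stereotype "fit" still implies a low probability when the base rate is tiny.

```python
# Why ignoring base rates misleads: a Bayes' rule sketch.
# All numbers below are invented for illustration only.
def posterior(base_rate: float, hit_rate: float, false_alarm: float) -> float:
    """P(genuinely Hitler-like | fits the stereotype), via Bayes' rule."""
    # Total probability of "fitting the stereotype" across both groups.
    evidence = hit_rate * base_rate + false_alarm * (1 - base_rate)
    return hit_rate * base_rate / evidence

# Suppose genuinely Hitler-like leaders are rare (1 in 200), most of them
# fit the stereotype (0.9), but so do many ordinary dictators (0.2).
p = posterior(base_rate=0.005, hit_rate=0.9, false_alarm=0.2)
print(round(p, 3))  # 0.022: a strong "fit" still yields a low probability
```

The point of the sketch is structural rather than numerical: unless the base rate is factored in, similarity matching alone wildly overstates the probability of the rare category.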
When people use the availability heuristic, on the other hand, they estimate the likelihood of something based on how cognitively available it is to them. Often something is available in our memories simply because it happened recently or because it constituted a very vivid experience that we're unlikely to forget. World War II and Vietnam are especially vivid for makers of U.S. foreign policy, and new situations tend to be compared disproportionately to these two events. This too is clearly "unscientific" because it ignores statistical likelihood. Viewing something as likely to happen simply because something similar happened recently, or because you were especially influenced by some vividly memorable event, is obviously a poor way of estimating probability.

Schema Theory

As John Sullivan and his colleagues note, the actual term "schema" has gone somewhat out of fashion within political psychology since the 1990s, especially in the study of mass behavior. This is in part due to the claims made by some political psychologists that it added little to their understanding of existing political concepts,13 but it is probably more a product of the fact that most scholars have accepted the basic idea that we have such knowledge structures in our heads and have now become more interested in how these affect political behavior. "While schema theory itself may be out of vogue, the ideas that were once packaged together under that appellation are still alive and well, only refashioned in more innocuous terminology such as 'cognitive representations' or cognitive 'categories' or stereotypes," Rahn, Sullivan, and Rudolph point out.14 Cognitive psychologists also use the term "schema" less than they did, usually preferring nowadays to talk of "associationist networks" or "the computational theory of the mind." But this is really a matter of labels rather than substance.
The term "schema" is broadly familiar to most political psychologists, and we will use the label here because it neatly ties together a large amount of work that has been done in a variety of areas across the elite–mass divide. Indeed, this is probably the one theory (or rather body of theories) that has brought together under a single tent those who study international relations from a cognitive perspective and those who focus on mass behaviors such as voting.15

In common with attribution theory, schema theory assumes that human beings possess limited cognitive capacities, and it is in many ways compatible with the former. We are bombarded every day with information. Rather than assuming that individuals search for cause-and-effect patterns or resemble naive scientists, however, schema theory treats human beings as categorizers or labelers. To cope with information overload, we engage in mental economics; we are "cognitive misers." Rather than treating each piece of new information sui generis or on its own merits, we assimilate knowledge into pre-existing categories (usually known as schemas or scripts). This is cognitively efficient, and relatively easy to do.16

The term "schema" is often used rather more loosely than it should be, and it has been given a variety of definitions. As defined here, though, a schema is essentially a kind of stereotype stored in memory that provides information on the typical features of an object, event, or person. Schemas are generic collections of knowledge: general concepts, rules, lessons, and stereotypes stored in memory. They go beyond any one example to provide information on what is usually the case, and we use such schemas both to categorize newly encountered information and to make inferences that go beyond the information given. We can also think of a schema as a mental box containing the typical or "default values" associated with a thing we are familiar with.
Suppose I were to present you with the following very simple puzzle: I'm thinking of "a thing." This thing has fur. It has a tail. It has paws. You take it for walks. Once we are given the last piece of information, the thing I am talking about becomes obvious. But why is it obvious? I still haven't told you what the thing is, but we all somehow know I am talking about a dog. When you think about it, that's a rather amazing feat of collective cognitive activity, and schema theory would explain it this way. First of all, you absorbed each piece of information ("the thing has fur," and so on). Then you compared these attributes to the default values stored in your memory that correspond to various schemas. You then matched them to the generic category "dog" and drew a conclusion about what I was thinking of. In other words, you used the information both to categorize and to go beyond the information actually given.

Notice, though, that the use of schemas is fraught with potential error, precisely because it does involve making inferences that exceed the data you have been given. Until I mentioned the last attribute—"you take it for walks"—I could have been talking about a cat, but no one in their right mind takes the family cat for a walk. Without that last piece of information, you could easily have erroneously slotted the attributes not only into the schema for a cat, but into the schemas for any number of our furry friends. As with false attributions, this does not mean that we are necessarily "trapped" by schemas or bound to make errors when we use them, but that is the drawback inherent in any cognitive short cut. Nor are we bound to ignore the differences between a prototypical example and an actual one.
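The matching process just described can be sketched as a toy program. The schemas and their attribute sets below are invented default values for illustration, not anything drawn from the psychological literature.

```python
# The "I'm thinking of a thing" puzzle as default-value matching.
# Each schema is just a set of typical attributes (invented toy values).
SCHEMAS = {
    "dog": {"fur", "tail", "paws", "taken for walks", "barks"},
    "cat": {"fur", "tail", "paws", "purrs"},
    "fish": {"fins", "scales", "tail"},
}

def best_match(observed: set) -> str:
    # Pick the schema whose default values cover the most observed cues.
    return max(SCHEMAS, key=lambda name: len(SCHEMAS[name] & observed))

clues = {"fur", "tail", "paws"}
# With only these three cues, "dog" and "cat" overlap equally: the
# inference is underdetermined, which is exactly the source of error.
clues.add("taken for walks")
print(best_match(clues))  # the last cue breaks the tie in favor of "dog"
```

Note how crude the matcher is: it counts overlapping cues and nothing more, which is why a misleading cue set slots the "thing" into the wrong box, just as the text warns.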
Julian Hochberg, for instance, argues that "any individual object is recognized first by identifying its schema, and then by noting a small number of features that identify the object more specifically and set it off from other examples of the schema to which it belongs."17 But the fact that schemas are devices of mental economy does mean that they can mislead us on occasion, and in politics this can have serious consequences.

How is all of this relevant to politics? It is relevant because elite decision-makers (and voters too) must almost always make decisions with only incomplete information about the situation at hand. Political actors can and do make incorrect inferences by fitting individuals or events into the wrong categories or schemas based on purely superficial similarities. Again, Deborah Welch Larson provides us with a classic example. Thomas Pendergast was Harry Truman's old party boss and mentor in Missouri. In those days, the party bosses essentially ran the political system, especially at the local level, and to rise to prominence at the national level you first had to move through various levels of patronage. Truman never forgot the impact Pendergast had had on his career, and he became a powerful role model for the future president. It was from Pendergast that Truman learned the importance of keeping one's word, and what he called the "code of the politician": if you make a promise, always keep your word.18

It just so happened that the Soviet leader Joseph Stalin resembled Pendergast and reminded Truman very much of his old mentor. Because of this superficial resemblance, moreover, Truman initially reacted warmly towards the Soviet leader. In 1945 the two men met for the first time, and Truman was greatly impressed by Stalin:

His meeting with Stalin reinforced Truman's belief that the Russian dictator was like Boss Pendergast. Truman remarked admiringly to an aide: "Stalin is as near like Tom Pendergast as any man I know."
Truman went beyond Stalin's superficial resemblance to Pendergast to infer that the Russian shared personality characteristics with the Missouri boss. Truman told his staff that "Stalin was one, who, if he said something one time, would say the same thing the next time ... he could be depended upon." Truman inferred that Stalin, like Pendergast, could be trusted to keep his word. "I got the impression Stalin would stand by his agreements and also that he had a Politburo on his hands like the 80th Congress," Truman recalled.19

While this cognitive error had no effects over the longer term once Stalin had proved that he could not be trusted to deliver on his promises, in the short term it led Truman astray by causing him to trust the Soviet leader far more than he should have, and Truman even continued to view Stalin somewhat positively after he realized that the latter had betrayed him.20

Even more commonplace examples can be drawn from voting behavior. We will deal with this topic in more detail in Chapter 12, but for now we will merely note that Wendy Rahn's argument that many people rely on party or ideological identification as a kind of cognitive short cut captures a perspective that has proven very popular among scholars of electoral choice. When people have specific information about a candidate at an election, they are perfectly capable of attending to that information and weighing its value in shaping their vote, Rahn finds. However, when voters have both particular information and party stereotypes at their disposal:

they prefer to rely on heuristic-based processing. They neglect policy information in reaching evaluations; they use the label rather than policy attributes in drawing inferences; and they are perceptually less responsive to inconsistent information.
Not even extreme party-issue inconsistency prompted individuals entirely to forsake theory-driven processing.21

Also consider for a moment how we make decisions about candidates for office we know little about. As we have seen before, party identification is a frequently used cognitive short cut, especially when voters know little or nothing about a candidate other than his or her party. But this is far from being the only economical mental device that voters employ. How, for instance, do we make decisions during the presidential primary season, when all the candidates come from our preferred party? Samuel Popkin has developed a theory of candidate appeal in primaries that draws on schema-type notions. We often know very little about the candidates who run for their party's presidential nomination. Many are governors of states we know little about, or senators we may never have heard of. During the 2008 primary season, for instance, how many people knew what Barack Obama's voting record in the Senate was, or what his specific policy proposals were? How many people actually knew who Sam Brownback or Bill Richardson was? With the exception of "big guns" like John McCain and Hillary Clinton, most of the candidates lacked any known national profile.

How, then, do we choose between various unknowns? Popkin argues that we base our decisions on only a few pieces of observable "data." We then use these to fill in missing information about the candidate (default values), and we reach a conclusion about how representative a candidate is of some ideal (or non-ideal) stereotype. This is another way of saying that we fit candidates, however imperfectly and imprecisely, into schemas we already have stored in our heads.
Like the dog schema example earlier, we use a few knowns to fill in the unknowns, in order to come to a more general conclusion or assessment. As Popkin puts it:

voters will decide what kind of governor Jimmy Carter was and what kind of president he will be not on the basis of knowledge about his performance as governor of Georgia but on their assessment of how likely it is that Jimmy Carter, as a person, was a good governor.22

Popkin borrows not just from schema theory here but from attribution theory, since he argues that fitting candidates into one stereotype or another involves judgments about representativeness.

Scripts can be thought of as "event schemas," a particular kind of schema which provides typical default values for an event of some kind, such as climbing the stairs, or going to the cinema or a restaurant. How is it that human beings can climb a staircase they have never seen before, even the rather exotic spiral version? This may seem like a silly question to ask, but the reason it seems silly is that we all have a script or schema stored in our heads that deals with climbing stairs. It tells us how to approach the staircase, to place a single foot first on the initial stair, follow that with the next foot on the higher step, and so on. Equally, we have no problem watching a movie at a cinema we have never visited; we simply use our default values to guide our behavior. By the same token, if I tell you that I actually went to see a movie in town last night, you can easily use the same default values you keep in your head for typical visits to the cinema to guess how my evening probably went. I probably bought a ticket first, then gave my ticket to the attendant. Like most people, I probably bought a Coke and some popcorn. I sat down in the movie theater showing the film I'd selected and I watched it until the end. When it was over, I left and did something else.

Again, however, scripts can mislead. Suppose I really did go to the cinema last night.
However, professorial salaries where I work in Florida are miserly, and I decided to slip past the ticket attendant without paying when her attention was distracted. I'm on a diet, so I rejected the soda and popcorn they try to sell you before you go in. I found the movie so boring that I fell asleep in it. I was awoken by an angry lady behind me who objected to my snoring, and I stumbled out into the daylight, regretting that I'd wasted my time on another overly hyped Hollywood movie (even though I hadn't bothered to pay). In this case, your default values have led you astray, and again the reason partly relates to our cognitive miserliness. We make assumptions based on typical or prototypical behaviors which may be entirely misleading or incorrect.

The use of historical scripts is very common in international politics. The Munich script, for instance, tells us a story about what happens when a ruthless, expansionist leader is appeased, suggesting that if you don't confront a threat early on, you will most assuredly have to face it later. World War I had had a devastating effect on Europe, and British Prime Minister Neville Chamberlain (as well as other European leaders) not unnaturally wanted to avoid another war of perhaps even greater devastation. In 1938 a peace conference was convened in Munich at which Hitler agreed to restrain his aggressive ambitions in return for part of what was then Czechoslovakia. Chamberlain famously emerged from the conference waving the agreement that had been reached and promising "peace in our time." This policy of course was a terrible failure, and the word "appeasement" became a dirty word in international relations, ruining the political careers of those who (like then Ambassador Joseph Kennedy in the United States) had advocated it.
Hitler violated the terms of the Munich agreement the following year, invading one European state after another and eventually leading the United States to intervene on behalf of the beleaguered and financially ruined Allies. This same script was later evoked on numerous occasions during the Cold War, and most famously by George H.W. Bush after Saddam Hussein invaded Kuwait in 1990; Bush argued that if Hussein's aggression was not confronted early on—if Hussein were appeased, in effect—the rest of the Middle East would soon fall to his expansionist designs.

Analogical Reasoning

Another (very similar) way of thinking about historical events and scripts is through what has become known as analogical reasoning. When we reason by analogy, we compare a new situation to something similar we have faced in the past (or rather, something that appears to be similar). We very often use historical analogies when discussing international affairs and foreign policy; indeed, according to former Secretary of State Alexander Haig, "international politics attracts analogies the way honey attracts bears."24 Since the 1970s, the debate over American foreign policy has often seemed like a war between two historical analogies: Munich/World War II and Vietnam. The first—derived from the unpopularity and ultimate failure of the policy to appease Adolf Hitler—stresses the need to confront an enemy both early and head on, using massive military force; the second—informed by America's inability to defeat Communist North Vietnam despite its overwhelming conventional superiority—suggests the dangers of doing the first. Phrases like "bogged down," "body bags," and "exit strategy" date from this period of American foreign policy history, suggesting the profound dangers of using military force, at least without very careful planning about precise objectives and the nature of the enemy faced.
There is a now well-established literature on the subject of analogical reasoning in the disciplines of cognitive and social psychology, and a number of significant discoveries about human problem-solving are especially noteworthy. Foremost among these is the fact that analogical reasoning is a cognitive mechanism that tends to be used under conditions of high uncertainty or ambiguity, such as when an individual is confronted by novel or unusual circumstances or a highly stressful situation. Eysenck and Keane note that much of the existing psychological research on human problem-solving examines how people deal with familiar, routine, and recurring situations, but "people can also solve unfamiliar or novel problems. Sometimes we can produce creative solutions when we have no directly applicable knowledge about the problem situation."25 We can do this by finding something in our experience that seems, to us at least, to resemble the task at hand.

A second central finding—which relates primarily to the processes through which analogical reasoning occurs—is that analogizing involves what several authors have referred to as a "mapping" process. As Eysenck and Keane put it, "various theorists have characterized this analogical thinking as being the result of processes that map the conceptual structure of one set of ideas (called the base domain) into another set of ideas (called a target domain)."26 The innovators in developing this mapping theory have been Dedre Gentner, Paul Thagard, Mary Gick, and Keith Holyoak. According to Gick and Holyoak, for instance, "the essence of analogical thinking is the transfer of knowledge from one situation to another by a process of mapping—finding a set of one-to-one correspondences (often incomplete) between aspects of one body of information and aspects of another."27 In analogizing, "isomorphic" relationships are discovered between one event, situation, or object and another.
A third, closely related point to note is that analogical reasoning is a structural process. An analogy, Dedre Gentner finds, is not simply a statement that something is like something else; rather, it is a comparison in which the subject assumes that the perceived similarities are "structural" (or causally significant) as opposed to merely "superficial."28 In practice, of course, individuals do often draw analogies between things or events that exhibit only a superficial similarity. In the laboratory psychologists can usually set up experiments where it is easy to tell the difference, but in the complex world of foreign policy decision-making, things are rarely so cut-and-dried. The appeal of the Korean analogy to Lyndon Johnson and Dean Rusk during the 1965 debate about escalation in Vietnam was probably enhanced by the fact that Vietnam and Korea are both in Asia.29 In policy-making, surface similarities are usually easy to confuse with underlying structural ones. Plausible causal or higher order relations must be mapped between the base (that is, the original situation from the past to which the analogy refers) and the target (the new situation being confronted in the present) in order for the analogy to be useful for predictive purposes, but this is rarely easy to do in political decision-making. Reliance on superficial similarity naturally leads to errors and biases, however, not least because analogical reasoning usually involves drawing conclusions from a single case—a practice which any good methodology student knows to be fraught with potential error.
The first political psychologist to reflect extensively upon the use of analogies was Robert Jervis, who devotes a chapter of his Perception and Misperception in International Politics to the use of history by decision-makers, and almost all recent work in the field of analogizing has taken its inspiration from him.30 Jervis's analysis stresses the origin of analogical reasoning in the past personal experiences of decision-makers, showing how analogies can lead the policy-maker to misperceive the character of situations and/or to arrive at policy choices poorly suited to the task at hand. Later work by supporters of the cognitive approach to decision-making has sought to apply Jervis's observations to various case studies, drawn almost exclusively from the United States.

Yuen Foong Khong's book Analogies at War is by far the most sustained and in-depth analysis of analogizing in foreign policy to appear to date. Khong examines the decisions by the Johnson administration to escalate U.S. involvement in the Vietnam War in 1965, and finds that analogies played a prominent part in the reasoning processes of both those who opposed the escalation and those who supported it. Under-Secretary of State George Ball, for instance, argued that increased American involvement there would soon lead to "another Dien Bien Phu," a repeat of the disastrous French experience in Indochina in which the French increasingly proved unable to defeat Communist and nationalist insurgents in a guerrilla war and were eventually forced to relinquish their former colony. For President Johnson and many of his other advisers (such as Dean Rusk), however, Korea was the analogy of choice. "To be sure, Johnson was informed by many lessons of many pasts," Khong argues, but Korea preoccupied him . . .
Whatever it was that attracted Johnson to the Korean precedent, a major lesson he drew from it was that the United States made a mistake in leaving Korea in June 1949; the withdrawal emboldened the Communists, forcing the United States to return to Korea one year later to save the South. Johnson was not predisposed toward repeating the same mistake in Vietnam.31

Others, like McGeorge Bundy and Henry Cabot Lodge, drew on the perceived lessons of the Munich-World War II experience in predicting the scenarios they believed would occur if the United States did not intervene. Khong argues that we can think of analogies as "diagnostic devices" that assist policy-makers in performing six crucial functions: they "(1) help define the nature of the situation confronting the policymaker, (2) help assess the stakes, and (3) provide prescriptions. They help evaluate alternative options by (4) predicting the chances of success, (5) evaluating their moral rightness, and (6) warning about dangers associated with the options."33 He develops what he calls the "AE (analogical explanation) framework," essentially a shorthand term for the belief that analogies are genuine cognitive devices which perform the tasks specified above. The primary research purpose of Khong's book is to argue against the view that analogies are used solely to "prop up one's prejudices" or to justify decisions that have already been decided upon using some other rationale, and he finds that the Johnson people tended to use historical analogies which drew upon recent events such as the missile crisis, the Berlin crises, Korea, Pearl Harbor, and Munich.
Khong also shows rather convincingly that in choosing a historical analogy which seemed to "make sense" of Vietnam, Johnson's advisers picked a historical example on the basis of its superficial or surface similarities to the case in hand.34

In similar vein, it has been argued that many aspects of the Iran hostage crisis of 1979-81—especially the decisions taken by both Iranian radicals and officials in the Carter administration—can be explained using analogical reasoning.35 In November 1979 radical Iranian students clambered over the walls of the U.S. embassy in Tehran, initially taking sixty-six Americans captive. When Iran's de facto leader at the time, the Ayatollah Khomeini, refused to return the hostages to America, this sparked a major crisis that dragged on for 444 days and helped destroy the presidency of Jimmy Carter. In 1953, the American and British intelligence services had helped to overthrow Iran's elected leader, Mohammed Mossadegh, and the hostage-takers' main motive seems to have been the suspicion that the CIA was about to depose the Ayatollah in similar fashion. Initially, Carter tried to get the hostages out by diplomatic means. Drawing on his experience of the Pueblo hostage crisis in 1968—in which a similar crisis precipitated by the North Koreans had eventually been resolved through negotiation—Secretary of State Cyrus Vance argued that this strategy would work again if Carter was willing to show sufficient patience. Others, notably National Security Adviser Zbigniew Brzezinski, were unwilling to wait, however. Brzezinski in particular drew on an analogy with the Entebbe raid, a highly successful military rescue operation launched by the Israelis in 1976.36 In early 1980 the president ordered a mission to rescue the hostages in Tehran. The operation was a miserable failure, but it was in part the cognitive image of successfully pulling off "another Entebbe" that proved irresistible to Carter and his colleagues.
There were many differences between the two situations which made the Tehran operation much more difficult in a military sense, but analogies can seduce and mislead decision-makers into ignoring or disregarding these.

It is important to reiterate, however, that decision-makers reason not just by using case-based forms of reasoning like analogies, but by drawing on more general, abstract, or rule-based reasoning as well (schema-type reasoning). Despite the obvious prevalence of analogical reasoning in the making of foreign policy decisions, it may not be as prevalent as other cognitive processes. Marijke Breuning, for instance, points out that more attention should be paid to forms of reasoning other than the analogical variety:

Abstract reasoning entails the application of general rules or principles. Rather than comparing two or more cases, the problem solver examines the problem to determine whether it has certain structural properties and, hence, belongs to a certain class of problems. It has a more deductive flavor than case-based reasoning. One form of abstract reasoning is explanation-based reasoning, which relies on causal assertions and "if . . ., then . . ." statements.37

Examining the U.S. Senate debate on foreign aid in 1950, Breuning finds that abstract reasoning was more prevalent in the deliberations of the senators than its analogical cousin. This concurs with the conclusion of Donald Sylvan and his colleagues that "reasoning in the area of foreign policy seems to be slightly more explanation based."38

Conclusion: A Variety of Complementary Concepts

How are attributions, schemas, scripts, and analogies related to one another? The basic answer to this depends on who you ask. Cognitive science is still in its infancy, and there is not yet a consensus on which concepts and labels are best to use.
Some cognitive scientists see analogical reasoning as so central to the way humans think that they generally dispense with talk about other organizing categories. Most political psychologists—including the present author—are eclectic on this issue, however, and see these concepts as so closely related that they refer to pretty much the same cognitive processes. It probably makes little difference whether we say that "the president used a historical script," "the president used an event schema," or "the president used an analogy," for instance, since what we are really interested in is the cognitive process by which a decision was reached, and all of these are saying the same thing using a different label.

The analogical reasoning approach is also probably intimately connected to schemas in at least two ways. First of all, the use of schemas involves the same "matching" mechanisms used in analogical reasoning; when you used the dog schema, for instance, you matched the attributes given to you about fur, tails, walks, and so on to the general category for a canine (this also involved use of the representativeness heuristic, because the example required you to assess how representative the nameless "thing" was of various categories or concepts). Second, analogical reasoning appears to play a key role in schema formation, because it seems to aid the construction of general rules for solving a particular category of problem. Analogical reasoning is seen by many psychologists and cognitive scientists as closely related to schematic processing in this sense.
According to Gick and Holyoak, for instance, when an individual has solved a problem successfully in the same way on two or more occasions, he or she will eventually form a general "problem schema," a set of abstract principles for dealing with that problem type which derives from particular analogical cases but which acquires an independent identity of its own.39 In this way general rules may be formed which derive from—and yet go beyond—any particular case, abstract beliefs for which analogies supply examples and provide concrete support. The statement that "aggression must be stopped early" is a schematic rule divorced from any particular case, but the statement that "Saddam Hussein is another Hitler" is an analogy or specific comparison between two cases. Nevertheless, the two are obviously related. Our general aggression schema might be composed of various individual cases or analogies involving Hitler, Mussolini, Hussein, and others.

Most political psychologists do not treat attribution theory and analogical reasoning as opposing theories either. Many scholars in the field of foreign policy analysis, for instance, mix and match concepts drawn from attribution theory, schema theory, and analogical reasoning, as do (more informally) scholars of electoral choice. Khong, for instance, argues that the availability heuristic explains George Ball's use of the Dien Bien Phu analogy: Ball had worked as a lawyer for the French during the last years of France's colonial control of Indochina, and so the Dien Bien Phu experience was personal to him in a way that it wasn't for most of President Johnson's advisers. The representativeness heuristic, Khong argues, also affected LBJ's reasoning, since he was impressed by the superficial similarities between Korea and Vietnam.
Similarly, the events of 1979 seemed to most Iranians representative of those of 1953, the Pueblo analogy was cognitively available to Cyrus Vance because he had been sent to South Korea as a presidential envoy during that crisis, and Entebbe was especially available to Brzezinski because he happened to be in Israel as the operation was being planned, and had discussed the idea of a rescue mission with Israeli officials at the time. In short, attribution theory, schema theory, and analogical reasoning are far more complementary as approaches than they are competitive.

One key difference between attribution and schema theory is worth noting especially in the context of this book, however. Schema theory is essentially dispositionist, in the sense that different people carry different "mental baggage" with them. People use different analogies in response to the same objective situation, for instance, depending in part on the varied experiences to which they have been exposed. Individuals therefore vary in their attitudes. Similarly, as we shall see when we look at how schema theory explains the use of racial stereotypes, individuals vary in the extent to which they both develop and activate different mental categories. Attribution theory, on the other hand, is at least partly situationist in nature, in the sense that it allows for both situationism and dispositionism: although I have included the discussion of attribution theory under the dispositionist section for the sake of analytical convenience, the fundamental attribution error allows for the fact that the behavior of others may be situationally determined, while allowing that our own behavior may sometimes be influenced most heavily by our dispositions. We will return to this point in the final chapter of the book, however, and so will defer further discussion of this issue until then.
Notes

1 Two books in particular ushered in the new cognitive phase in political psychology during the 1980s: Susan Fiske and Shelley Taylor, Social Cognition (Reading, MA: Addison-Wesley, 1984) and Richard Lau and David Sears, Political Cognition (Hillsdale, NJ: Lawrence Erlbaum, 1986).
2 David Green and others, Cognitive Science: An Introduction (Cambridge, MA: Basil Blackwell, 1996), p.5.
3 The guest examples are taken from Steven Pinker, How The Mind Works (New York: W.W. Norton, 1997), p.13, but originally come from the computer scientist Terry Winograd.
4 John Searle, "Minds, Brains and Programs," Behavioral and Brain Sciences, 3: 417-57, 1980. Searle's argument has produced a huge debate in cognitive science and AI which continues to this day, and has even featured in novels. See, for instance, David Lodge's brilliant Thinks . . . (New York: Viking Press, 2001).
5 Leon Festinger, A Theory of Cognitive Dissonance (Stanford, CA: Stanford University Press, 1957).
6 Fritz Heider, "Attitudes and Cognitive Organization," Journal of Psychology, 21: 107-12, 1946; Heider, The Psychology of Interpersonal Relations (New York: Wiley, 1958).
7 Fiske and Taylor, Social Cognition, p.10.
8 See for instance Edward Jones, David Kanouse, Harold Kelley, Richard Nisbett, Stuart Valins, and Bernard Weiner (eds.), Attribution: Perceiving the Causes of Behavior (Morristown, NJ: General Learning Press, 1972) and Richard Nisbett and Lee Ross, Human Inference: Strategies and Shortcomings of Social Judgment (Englewood Cliffs, NJ: Prentice-Hall, 1980).
9 Fiske and Taylor, Social Cognition, p.11.
10 Deborah Welch Larson, Origins of Containment: A Psychological Explanation (Princeton, NJ: Princeton University Press, 1985), p.38.
11 Yaacov Vertzberger, The World In Their Minds: Information Processing, Cognition and Perception in Foreign Policy Decisionmaking (Stanford, CA: Stanford University Press, 1990), pp.162-63.
12 Samuel Popkin, "Decision Making in Presidential Primaries," in Shanto Iyengar and William McGuire (eds.), Explorations in Political Psychology (Durham, NC: Duke University Press, 1993), p.363.
13 See for instance James Kuklinski, Robert Luskin, and John Bolland, "Where's the Schema? Going Beyond the 'S' Word in Political Psychology," American Political Science Review, 85: 1341-56, 1991.
14 Wendy Rahn, John Sullivan, and Thomas Rudolph, "Political Psychology and Political Science," in James Kuklinski (ed.), Thinking About Political Psychology (New York: Cambridge University Press, 2002), p.171.
15 Deborah Welch Larson, "The Role of Belief Systems and Schemas in Foreign Policy Decision-Making," Political Psychology, 15: 17-33, 1994.
16 Susan Fiske and Philip Linville, "What Does the Schema Concept Buy Us?," Personality and Social Psychology Bulletin, 6: 543-57, 1980.
17 Julian Hochberg, Perception, second edition (Englewood Cliffs, NJ: Prentice-Hall, 1978), p.190.
18 Larson, Origins of Containment, p.132.
19 Ibid., p.197.
20 Ibid., p.197.
21 Wendy Rahn, "The Role of Partisan Stereotypes in Information Processing about Political Candidates," American Journal of Political Science, 37: 472-96, 1993, p.492.
22 Popkin, "Decision Making in Presidential Primaries."
23 Ibid., p.365.
24 See in particular Ernest May, Lessons of the Past (New York: Oxford University Press, 1973); Robert Jervis, Perception and Misperception in International Politics (Princeton, NJ: Princeton University Press, 1976), pp.217-87; Richard Neustadt and Ernest May, Thinking in Time: The Uses of History for Decision Makers (New York: Free Press, 1986).
25 Michael Eysenck and Mark Keane, Cognitive Psychology: A Student's Handbook (Hove: Lawrence Erlbaum, 1990), p.399.
26 Ibid., p.401.
27 Mary Gick and Keith Holyoak, "Schema Induction and Analogical Transfer," Cognitive Psychology, 15: 1-38, 1983, p.2.
28 Dedre Gentner, "Structure Mapping: A Theoretical Framework for Analogy," Cognitive Science, 7: 155-70, 1983.
29 Yuen Foong Khong, Analogies At War: Korea, Munich, Dien Bien Phu, and the Vietnam Decisions of 1965 (Princeton, NJ: Princeton University Press, 1992).
30 Alex Hybel, How Leaders Reason: U.S. Intervention in the Caribbean Basin and Latin America (Cambridge, MA: Basil Blackwell, 1990); Khong, Analogies At War; David Patrick Houghton, "The Role of Analogical Reasoning in Novel Foreign-Policy Situations," British Journal of Political Science, 26: 523-52, 1996; Christopher Hemmer, Which Lessons Matter? American Foreign Policy Decision Making in the Middle East, 1979-1987 (Albany, NY: State University of New York Press, 2000); Houghton, U.S. Foreign Policy and the Iran Hostage Crisis (New York: Cambridge University Press, 2001).
31 Khong, Analogies At War, pp.110-11.
32 Ibid., p.134.
33 Ibid., p.10.
34 Ibid., pp.217-18.
35 Houghton, U.S. Foreign Policy and the Iran Hostage Crisis.
36 The Entebbe raid was famous in the 1970s and has been depicted in movies a number of times, the most recent being The Last King of Scotland.
37 Marijke Breuning, "The Role of Analogies and Abstract Reasoning in Decision-Making," International Studies Quarterly, 47: 229-45, 2003.
38 Donald Sylvan, Thomas Ostrom, and Katherine Gannon, "Case-Based, Model-Based, and Explanation-Based Styles of Reasoning in Foreign Policy," International Studies Quarterly, 38: 61-90, 1994, p.88.
39 Gick and Holyoak, "Schema Induction and Analogical Transfer," p.32.

Suggested Further Reading

Yuen Foong Khong, Analogies At War: Korea, Munich, Dien Bien Phu, and the Vietnam Decisions of 1965 (Princeton, NJ: Princeton University Press, 1992).
Alex Mintz and Karl DeRouen, Understanding Foreign Policy Decision Making (New York: Cambridge University Press, 2010).
Steven Pinker, How The Mind Works (New York: W.W. Norton, 1997).
Yaacov Vertzberger, The World In Their Minds: Information Processing, Cognition and Perception in Foreign Policy Decisionmaking (Stanford, CA: Stanford University Press, 1990).

Chapter 10

Affect and Emotion

It is clear that no account of the psychology of politics would be remotely complete without an account of the role that emotion—or "affect" as it is sometimes called—plays within it. Many phenomena in politics involve emotion and feelings rather than just the "cold" kind of information-processing we examined in the previous chapter; virtually all political concepts are charged with emotion, either positive or negative, something that many psychologists refer to as "hot cognitions."1 Political stimuli often provoke strong emotions, feelings such as liking, dislike, happiness, sadness, anger, guilt, gratitude, disgust, revenge, joy, insecurity, fear, anxiety, and so on. We do not look at politics neutrally, as some kind of super-advanced, artificially intelligent computer might. Very few people can look at a photograph of George W. Bush or Hillary Clinton, for instance, or a picture of an airplane slamming into the World Trade Center on September 11, 2001, without feeling something. Few Americans can look at a picture of the late Osama Bin Laden and not feel anger, contempt, or some other negative emotion, just as many radical Islamists in the Middle East look at the same picture and feel pride, admiration, and other positive responses. And this phenomenon is not confined to politics, of course. As the psychologist Robert Zajonc notes:

one cannot be introduced to a person without experiencing some immediate feeling of attraction or repulsion and without gauging such feelings on the part of the other. We evaluate each other constantly, we evaluate each other's behavior, and we evaluate the motives and consequences of their behavior.
Setting aside social situations, moreover, "there are probably very few perceptions and cognitions in everyday life that do not have a significant affective component, that aren't hot, or in the very least tepid."2