CHAPTER 12 Social Dilemmas and Other Noncooperative Games: We're All in This Together

Several members of a well-respected, decentralized research and development department in an organization had a need for laboratory space. The department made a decision several years ago that each organizational member should develop and maintain his or her own laboratory space so as to allow for specialization and reduce bureaucratic control and overhead. The organization realized that such a system might lead to an eventual shortage of laboratory space, however, and it was agreed that, each year, members would make a laboratory facilities request that would be considered by a committee. The committee was charged with the task of allocating research space on a yearly basis. External funding was the primary criterion that determined allocations: The greater one's external funding, the more laboratory resources were granted. The committee began to encounter a problem as the organization grew in personnel, but not in physical facilities. Every year, each member requested his or her previous allocation of laboratory facilities and augmented this with an additional request for more resources. Each year, the committee struggled to meet basic needs, and turned down all new requests. Precedent became the key factor in determining allocations. As the organization aged, it acquired more and more organizational members, each with their own idiosyncratic laboratory needs. The physical facilities of the organization were at a maximum. This had several undesirable consequences: First, organizational members who did not require laboratory space continued to request it for fear of not having space in the future when and if they needed it. Second, and most seriously, the space shortage drastically curtailed the selection of new organizational members. Although the department realized that attracting new members was ultimately necessary for the survival and development of the organization as a whole, old-timers were reluctant and often outright opposed to recruiting new members, especially those who had expansive laboratory needs. When the department approached the larger organization about the dilemma, they were politely told to solve the problem internally, and it was recommended that members share space. The organizational members were strongly opposed to this suggestion. Morale and cooperation in the department declined steadily. After several lean hiring seasons, the organization's reputation and overall productivity suffered seriously. When asked to explain the ultimate demise of the R&D group, individual members pointed to the unbridled greed of others in the department.

It seems preposterous that intelligent and well-meaning people could engage in behaviors that seem so senseless. The R&D group is not unique. Similar examples of competitive behavior leading to huge financial losses occurred in the cola advertising wars and the airline wars of the 1980s, and the telephone long-distance rate wars of the 1990s. Other examples include population control, free trade, the budget deficit, cartels, and unionization. So far, we have focused on cooperative bargaining situations, wherein people must be in mutual agreement for an outcome to be binding. In noncooperative bargaining situations, parties are interdependent with respect to outcomes, but make independent decisions.
Such bargaining situations are noncooperative, because parties do not need to cooperate for an outcome to be reached. For example, a cola company may decide to engage in a negative advertising campaign. The company's main competitor may then start to advertise negatively. As a result, both companies suffer. Suppose that you are the head of a large airline company and it has been announced that a smaller airline is for sale. You would love to acquire the smaller airline. However, you know your competitors would also like to acquire the airline. Do you make an offer? Will you engage in a bidding war?

Noncooperative games focus on the distributive dimension of negotiation—that is, how much of the pie each party claims. Nevertheless, the integrative dimension is still present: Because most situations involve costs for delay, parties have an incentive to settle quickly; their outcomes deteriorate by prolonging settlement. However, this places negotiators in a weak position; if a negotiator has costs for time, and the other party knows this, he or she may be exploited. These situations—cola wars and bidding wars—are social dilemmas because the negotiator is unsure how to make choices. Some choices risk exploitation; other choices risk antagonizing others.

COMMON MYTHS ABOUT INTERDEPENDENT DECISION MAKING

In approaching dilemma-type situations, people are often guilty of mythological thinking. Next, we expose the three leading myths that impair decision making before presenting what we believe is an effective decision-making strategy.

Myth 1: "It's a Game of Wits: I Can Outsmart Them"

Many people believe that they can stay one move ahead of their competitor. However, effective decision making in noncooperative situations is not about outsmarting people. It is unrealistic to believe we can consistently outwit others—an egocentric illusion. A better and more realistic goal is to understand the incentive structure in the situation and take the perspective of the other party. For example, imagine that the person you are dealing with is your identical twin—it's pretty hard to outwit yourself!

Myth 2: "It's a Game of Strength: Show 'em You're Tough"

This negotiator goes into battle fighting fire with fire. The problem is that this behavior can unnecessarily escalate conflict situations, especially if people have a false sense of uniqueness (see chapter 8).

Myth 3: "It's a Game of Chance: Hope for the Best"

This negotiator believes that outcomes are not predictable and depend on ever-changing aspects of the situation: personality, mood, time of day, and so on. This person erroneously believes that it either takes a long time to figure out a good strategy or it's just downright impossible. In this chapter, we suggest that dilemma-type situations are not games of wits, strength, or chance, but decision opportunities. We'll use the principles of logic, game theory, and psychology to optimally deal with these situations.

DILEMMAS

In organizational and group life, our outcomes depend upon the actions of others. The situation that results when people engage in behaviors that seem perfectly rational on an individual basis but lead to collective disaster, such as a bidding war or what happened in the R&D department, is a social dilemma. Next, we discuss two kinds of social dilemmas: two-person dilemmas and multiperson dilemmas.
The two-person dilemma is the prisoner's dilemma; the multiperson dilemma is a social dilemma. We'll discuss both dilemmas as problems facing people and groups in conflict and how they may be effectively handled.

THE PRISONER'S DILEMMA

Thelma and Louise are common criminals who have just been arrested on suspicion of burglary. There is enough evidence to convict each suspect of a minor breaking-and-entering crime, but insufficient evidence to convict the suspects on a more serious felony charge of burglary and assault. The district attorney separates Thelma and Louise immediately after their arrest. Each suspect is approached separately and presented with two options: confess to the serious burglary charge or remain silent (not confess). The consequences of each course of action depend on what the other decides to do. The catch is that Thelma and Louise must make their choices independently. They cannot communicate in any way prior to making an independent, irrevocable decision. The decision situation that faces each suspect is illustrated in Figure 12-1.

                                Thelma
                                Do not confess        Confess
                                (remain silent)
  Louise
  Do not confess                A: T = 1 yr           B: T = 0 yrs
  (remain silent)                  L = 1 yr              L = 15 yrs
  Confess                       C: T = 15 yrs         D: T = 10 yrs
                                   L = 0 yrs             L = 10 yrs

FIGURE 12-1. Consequences of Thelma and Louise's Behaviors
Note: Entries represent prison term length. T = Thelma's term length; L = Louise's term length.

Inspection of Figure 12-1 indicates that Thelma and Louise will go to prison for between 0 and 15 years, depending upon what the other partner chooses. Obviously, this is an important decision. Imagine that you are an advisor to Thelma. Your concern is not morality or ethics; you're simply trying to get her a shorter sentence. What do you advise her to do? Ideally, it is desirable for both suspects to not confess, thereby minimizing the prison sentence to one year for each (cell A). This option is risky, however. If one confesses, then the suspect who does not confess goes to prison for the maximum sentence of 15 years—an extremely undesirable outcome (cell B or C). In fact, the most desirable situation from the standpoint of each suspect would be to confess and have the other person not confess. This would mean that the confessing suspect would be released and her partner would go to prison for the maximum sentence of 15 years. Given these contingencies, what should Thelma do? Before reading further, stop and think about what you think her best course of action is.

THE DILEMMA AND THE PARADOX

The answer is not easy, which is why the situation is a dilemma. It will soon be demonstrated that when each person pursues the course of action that is most rational from her point of view, the result is mutual disaster: Both Thelma and Louise go to prison for 10 years (cell D). The paradox of the prisoner's dilemma is that the pursuit of individual self-interest leads to collective disaster. There is a conflict between individual and collective well-being that derives from rational analysis. It is easy for Thelma and Louise (as well as for the R&D department members) to see that each could do better by cooperating, but it is not easy to know how to implement this behavior. The players can get there only with coordinated effort.

Cooperation and Defection as Unilateral Choices

We will use the prisoner's dilemma situation depicted in Figure 12-1 to analyze players' decision making.
We will refer to the choices that players make in this game as cooperation and defection, depending upon whether they remain silent or confess. The language of cooperation and defection allows the prisoner's dilemma game structure to be meaningfully extended to other situations that do not involve criminals, but nevertheless have the same underlying structure, such as whether an airline company should bid for a smaller company, whether a cola company should engage in negative advertising, or whether a department member should demand lab space.

Rational Analysis

We will use the logic of game theory to provide a rational analysis of this situation. In our analysis, we will consider three different cases: (1) one-shot, nonrepeated play situations (as in the case of Thelma and Louise); (2) the case in which the decision is repeated for a finite number of terms (such as might occur in a yearly election for a position on a five-year task force); and (3) the case in which the decision is repeated for a potentially infinite number of trials or the end is unknown (such as might occur in industries like phone companies, airlines, and cola manufacturers).

Case 1: One-shot Decision

As we noted in chapter 4, game theoretic analysis relies on the principle of dominance detection. That is, the first and easiest principle we should try to invoke is to detect whether one strategy, confession or remaining silent, is dominant. We want to search for a strategy that will result in a better outcome no matter what the other decides to do. Suppose that you are Thelma and your partner in crime is Louise. First, consider what happens if Louise remains silent (does not confess). Thus, we are focusing on the first row in Figure 12-1. If you remain silent, this puts you in cell A: You both get one year. This outcome is not too bad, but maybe you could do better. Suppose that you decide to confess. This puts you in cell B: You get 0 years and Louise gets 15 years. If Louise remains silent, what do you want to do? Certainly, no prison sentence is much better than a one-year sentence, so confession seems like the optimal choice for you to make, given that Louise does not confess. Now, what happens if Louise confesses? In this situation, we are focusing on row 2. If you remain silent, this puts you in cell C: You get 15 years and Louise gets 0 years. This is not very good for you. Now, suppose that you confess; this puts you in cell D: You both get 10 years. Neither outcome is splendid, but 10 years is certainly better than 15 years. Given that Louise confesses, what do you want to do? The choice amounts to whether you want to go to prison for 15 years or 10 years. Again, confession is the optimal choice for you.

We have just illustrated the principle of dominance detection: No matter what Louise does (remains silent or confesses), it is better for Thelma to confess. Confession is a dominant strategy; under all possible states of the world, players in this game should choose to confess. We know that Louise is smart and has looked at the situation and its contingencies in the same way as Thelma and has reached the same conclusion. In this sense, mutual defection is an equilibrium outcome: No player can unilaterally (single-handedly) improve her outcome by making a different choice.
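The dominance check can also be done mechanically. The following Python sketch is ours, not part of the original analysis; it simply encodes the prison terms of Figure 12-1 and tests whether one of Thelma's choices is better against everything Louise might do (the names are ours; the payoffs come straight from the figure):

```python
# A minimal sketch: testing for a dominant strategy in the game of
# Figure 12-1. Payoffs are prison years, so lower is better.
SILENT, CONFESS = "remain silent", "confess"

# years[(thelma_move, louise_move)] = (thelma_years, louise_years)
years = {
    (SILENT,  SILENT):  (1, 1),    # cell A
    (CONFESS, SILENT):  (0, 15),   # cell B
    (SILENT,  CONFESS): (15, 0),   # cell C
    (CONFESS, CONFESS): (10, 10),  # cell D
}

def thelma_dominant_move():
    """Return the move that gives Thelma fewer years no matter what
    Louise does, or None if neither move dominates."""
    for move, other in ((SILENT, CONFESS), (CONFESS, SILENT)):
        if all(years[(move, louise)][0] < years[(other, louise)][0]
               for louise in (SILENT, CONFESS)):
            return move
    return None

print(thelma_dominant_move())  # -> 'confess'
```

By symmetry, the same check applied to Louise's years also returns confession, which is why mutual defection (cell D) is the equilibrium just described.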
Thus, both Thelma and Louise are led through rational analysis to confess, and they collectively end up in cell D, where they both go to prison for a long period of time. This seems both unfortunate and avoidable. Certainly, both suspects would prefer to be in cell A than in cell D. Is there some way to escape the tragic outcomes produced by the prisoner's dilemma? Are we doomed to collective disaster in such situations?

It would seem that players might extricate themselves from the dilemma if they could communicate, but we have already noted that communication is outside of the bounds of the noncooperative game. Further, because the game structure is noncooperative, any deals that players might make with one another are nonbinding. For example, antitrust legislation prohibits companies from price fixing, which means that any communication that occurs between companies regarding price fixing is unenforceable, not to mention punishable by law. What other mechanism might allow parties in such situations to avoid the disastrous outcome produced by mutual defection? One possibility is to have both parties make those decisions over multiple trials. That is, suppose that the parties did not make a single choice but instead made a choice, received feedback about the other player's choice, experienced the consequences, and then made another choice. Perhaps repeated interaction with the other person would provide a mechanism for parties to coordinate their actions. If the game is to be played more than once, players might reason that by cooperating on the first round, cooperation may be elicited in subsequent periods. We consider that situation next.

Case 2: Repeated Interaction over a Fixed Number of Trials

Instead of making a single choice and living with the consequence, suppose that Thelma and Louise were to play the game in Figure 12-1 a total of ten times. It might seem strange to think about criminals doing this, so it may be useful to think about two political candidates deciding whether or not to engage in negative campaigning (hereafter referred to as campaigning). There are term limits in their state—they can run and hold office for a maximum of five years. There is an election every year. During each election period, each candidate makes an independent choice (to campaign or not), then learns of the other's choice (to campaign or not) and experiences the resultant outcome. After the election, the candidates consider the same alternatives once again and make an independent choice; this continues for five separate elections.

How does game theory allow us to analyze this situation? We will use the concept of dominance as applied above, but we need another tool that tells us how to analyze the repeated nature of the game. Fortunately, game theory provides such a mechanism. Backward induction is the mechanism by which a person decides what to do in a repeated game situation, by looking backward from the last stage of the game. We begin by examining what players should do in election 5 (the last election). If the candidates are making their choices in the last election, the game is identical to that analyzed in the one-shot case above. Thus, the logic of dominant strategies applies, and we are left with the conclusion that each candidate will choose to campaign. Now, given that we know that each candidate will campaign in the last election, what will they do in the fourth election?
From a candidate's standpoint, the only reason to cooperate (that is, not to campaign) is to influence the behavior of the other party in the subsequent election. That is, a player might signal a willingness to cooperate by making a cooperative choice in the period before. We have already determined that it is a foregone conclusion that both candidates will defect (choose to campaign) in the last election, so it is futile to choose the cooperative (no campaigning) strategy in the fourth election. So, what about the third election? Given that candidates will not cooperate in the last election, nor in the second-to-last election, there is little point to cooperating in the third-to-last election, for the same reason that cooperation was deemed to be ineffective in the second-to-last election. As it turns out, this logic can be applied to every election in such a backward fashion. Moreover, this is true in any situation with a finite number of elections. This leaves us with the conclusion that defection remains the dominant strategy even in the repeated trial case. Formally, if the prisoner's dilemma is repeated finitely, all Nash equilibria of the resulting sequential games have the property that the noncooperative outcome, which is Pareto-inferior, occurs in each period, no matter how large the number of periods.

This is surely disappointing news. It suggests that cooperation is not possible even in long-term relationships. This runs counter to intuition, observation, and logic, however. We must consider another case, arguably more representative of the situations we want to study in most circumstances, in which repeated interaction continues for an infinite or indefinite amount of time.

Case 3: Repeated Interaction for an Infinite or Indefinite Amount of Time

In the case in which parties interact with one another for an infinite or indefinite period of time, the logic of backward induction breaks down. There is no identifiable end point from which to reason backward. We are left with forward-thinking logic. If we anticipate playing a prisoner's dilemma game with another person for an infinitely long or uncertain length of time, we reason that we might influence their behavior with our own behavior. That is, we may signal a desire to cooperate on a mutual basis by making a cooperative choice in an early trial. Similarly, we can reward and punish their behavior through our actions. Under such conditions, the game theoretic analysis indicates that cooperation in the first period is the optimal choice (Kreps, Milgrom, Roberts, and Wilson, 1982). Should our strategy be to cooperate no matter what? No! If a person adopted cooperation as a general strategy, this would surely lead to exploitation. So, what strategy would be optimal to adopt? Before reading further, stop and think about what a good strategy would be.

The Tournament of Champions

In 1981, Robert Axelrod, a leading game theorist, posed this very question to readers in an article in Science magazine. Axelrod spelled out the contingencies of the prisoner's dilemma game and invited members of the scientific community to submit a strategy to play in a prisoner's dilemma tournament. To play in the tournament, a person had to submit a strategy (a plan that would tell a decision maker what to do in every trial under all possible conditions) in the form of a computer program.
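To give a concrete feel for what such a submission involved, here is a minimal round-robin harness of our own construction (the original entries were FORTRAN programs and are not reproduced here). The 5/3/1/0 point values are the ones conventionally used in such tournaments, not figures from this chapter, and a strategy is simplified to a function of the opponent's past moves; one of the rules sketched here, a simple reciprocating rule, figures prominently in what follows.

```python
# A minimal, illustrative repeated prisoner's dilemma harness.
# Point values are the conventional 5/3/1/0 (higher is better).
# "C" = cooperate, "D" = defect.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

# Here a "strategy" is simplified to a function of the opponent's
# past moves (real entries could also consult their own history).
def always_cooperate(opp_moves): return "C"
def always_defect(opp_moves):    return "D"
def tit_for_tat(opp_moves):
    # Cooperate on the first trial; then copy the opponent's last move.
    return "C" if not opp_moves else opp_moves[-1]

def play(strategy_a, strategy_b, trials=200):
    """Play two strategies against each other; return their total points."""
    moves_a, moves_b, score_a, score_b = [], [], 0, 0
    for _ in range(trials):
        a, b = strategy_a(moves_b), strategy_b(moves_a)
        pts_a, pts_b = PAYOFF[(a, b)]
        score_a, score_b = score_a + pts_a, score_b + pts_b
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, always_cooperate))  # (600, 600): mutual cooperation
print(play(tit_for_tat, always_defect))     # (199, 204): never ahead of its
                                            # opponent, but only exploited once
```

Note that in a round-robin, a strategy's score is summed across all opponents, so a rule can place first overall without ever beating anyone head-to-head, a property that turns out to matter below.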
Axelrod explained that each strategy would play all other strategies across 200 trials of a prisoner's dilemma game. He further explained that the strategies would be evaluated in terms of the maximization of gains across all opponents they faced. Hundreds of strategies were submitted by eminent scholars from around the world.

The Winner Is a Loser

The winner of the tournament was the simplest strategy that was submitted. The FORTRAN code was only four lines long. The strategy was called tit-for-tat and was submitted by Anatol Rapoport. Tit-for-tat accumulated the greatest number of points across all trials with all of its opponents. The basic principle for tit-for-tat is simple: Tit-for-tat always cooperates on the first trial, and, on subsequent trials, it does whatever its opponent did on the previous trial. For example, suppose that tit-for-tat played against someone who cooperated on the first trial, defected on the second trial, and then cooperated on the third trial. Tit-for-tat would cooperate on the first trial and the second trial, defect on the third trial, and cooperate on the fourth trial.

Tit-for-tat never beat any of the strategies it played against. Because it cooperates on the first trial, it can never do better than its opponent. The most tit-for-tat can do is earn as much as its opponent. If it never wins (i.e., beats its opponent), how can tit-for-tat be so successful in maximizing its overall gains? The answer is that it induces cooperation from its opponents. How does it do this? Several characteristics make tit-for-tat an especially effective strategy for inducing cooperation.

Psychological Analysis of Why Tit-for-Tat Is Effective

Not Envious

One reason why tit-for-tat is effective is that it is not an envious strategy. That is, it does not care that it can never beat the opponent. Tit-for-tat can never earn more than any strategy it plays against. Rather, the tit-for-tat strategy is designed to maximize its own gain in the long run.

Nice

Tit-for-tat always begins the interaction by cooperating. Furthermore, it is never the first to defect. Thus, tit-for-tat is a nice strategy. This is an important feature because it is difficult for people to recover from initial defections. Competitive, aggressive behavior often sours a relationship. Moreover, aggression often begets aggression. The tit-for-tat strategy neatly avoids the costly mutual escalation trap that can lead to the demise of both parties.

Tough

Although tit-for-tat is a nice strategy, it is not a pushover. A strategy of solid cooperation would be easily exploitable by an opponent. Tit-for-tat can be provoked—it will defect if the opponent invites competition. Tit-for-tat reciprocates defection. This is an important feature of its strategy. By reciprocating defection, tit-for-tat conveys the message that it cannot be taken advantage of.

Forgiving

We have noted that tit-for-tat is tough in that it reciprocates defection. It is also a forgiving strategy in the sense that it reciprocates cooperation. This is another important feature of the tit-for-tat strategy. It is often difficult for people in conflict to recover from defection and end an escalating spiral of aggression. Tit-for-tat's eye-for-an-eye strategy ensures that its responses to aggression from the other side will never be more than it receives.

Not Clever

Ironically, one reason why tit-for-tat is so effective is that it is not very clever.
It is an extremely simple strategy, and other people can quickly figure out what to expect from a player who follows it. This predictability has important psychological properties. When people are uncertain or unclear about what to expect, they are more likely to engage in defensive behavior. When uncertainty is high, people often assume the worst about another person. Predictability increases interpersonal attraction.

Not Egotistical

Another feature that makes tit-for-tat an effective strategy is that it is not egotistical in the sense of believing it may gain an advantage over the other. Tit-for-tat realizes it cannot outsmart its opponent. Tit-for-tat is successful because it induces the opponent to cooperate to maximize individual gain. The norm of reciprocity is an extremely powerful form of social influence and behavior.

In summary, tit-for-tat is an extremely stable strategy. Players who follow it often induce their opponents to adopt the tit-for-tat strategy. But tit-for-tat is not uniquely stable; there are other strategies that are stable as well. For example, solid defection is a stable strategy. Two players who defect on every trial have little reason to do anything else. The message is that once someone has defected, it is very difficult to recover from breaches of trust.

How to Build Cooperation

We have worked through the logic of the rational analysis of the prisoner's dilemma game, and we have examined the characteristics of the tit-for-tat strategy as a general principle, but what about behavior in real situations? What do people do when faced with dilemmas like these? Many individual characteristics of people have been studied, such as men versus women, different races, Machiavellianism, status, age, and so on (for a review, see Rubin and Brown, 1975). There are few, if any, reliable individual differences that predict behavior in a prisoner's dilemma game. People cooperate more than rational analysis would predict: Many investigations use a single trial or fixed number of trials, in which the rational strategy is solid defection. When the game is infinite or the number of trials is indefinite, however, people cooperate less than they should. What steps can the manager take to build greater cooperation and trust among organization members?

Social Contracts: A Foot in the Door Is a Bird in the Hand

People are more likely to cooperate when they promise to cooperate. Although any such promises are nonbinding and are therefore "cheap talk," people nevertheless act as if they are binding. Why is this? According to the norm of commitment, people feel psychologically committed to follow through with their word (Cialdini, 1993). The norm of commitment is so powerful that people often do things that are completely at odds with their preferences or that are highly inconvenient. For example, once people agree to let a salesperson demonstrate a product in their home, they are more likely to buy it. Homeowners are more likely to consent to have a large (over 10 feet), obtrusive sign in their front yard that says "Drive Carefully" when they have agreed to a small request made the week before (Freedman and Fraser, 1966). Why is this? In the foot-in-the-door technique, people are asked to agree to a small request, and then they are confronted with a larger, more costly request. They often agree. Why?
After complying with the smaller request, they view themselves as the kind of person who helps out on important community issues. Once this self-image is in place, people are more likely to agree to subsequent (often outlandish) requests (Cialdini, 1993).

Metamagical Thinking and Superrationality

In the prisoner's dilemma game, people make choices simultaneously, and therefore one's choice cannot influence the choice that the other person makes on a given trial—only in subsequent trials. That is, when Thelma makes her decision to confess or not, this does not influence Louise, unless she is telepathic. However, people act as if their behavior influences the behavior of others, even though it logically cannot. In an intriguing analysis of this perception, Hofstadter wrote a letter, published in Scientific American, to 20 friends (see Box 12-1).

BOX 12-1 Letter from Douglas Hofstadter to 20 Friends in Scientific American

Dear ___:

I am sending this letter by special delivery to 20 of you (namely, various friends of mine around the country). I am proposing to all of you a one-round Prisoner's Dilemma game, the payoffs to be monetary (provided by Scientific American). It is very simple. Here is how it goes. Each of you is to give me a single letter: C or D, standing for "cooperate" or "defect." This will be used as your move in a Prisoner's Dilemma with each of the 19 other players. Thus, if everyone sends in C, everyone will get $57, whereas if everyone sends in D, everyone will get $19. You can't lose! And, of course, anyone who sends in D will get at least as much as everyone else. If, for example, 11 people send in C and nine send in D, then the 11 C-ers will get $3 apiece from each of the other C-ers (making $30) and will get nothing from the D-ers. Therefore, C-ers will get $30 each. The D-ers, in contrast, will pick up $5 apiece from each of the C-ers (making $55) and will get $1 from each of the other D-ers (making $8), for a grand total of $63. No matter what the distribution is, D-ers always do better than C-ers. Of course, the more C-ers there are, the better everyone will do! By the way, I should make it clear that in making your choice you should not aim to be the winner but simply to get as much money for yourself as possible. Thus, you should be happier to get $30 (say, as a result of saying C along with 10 others, even though the nine D-sayers get more than you) than to get $19 (by saying D along with everyone else, so that nobody "beat" you). Furthermore, you are not supposed to think that at some later time you will meet with and be able to share the goods with your co-participants. You are not aiming at maximizing the total number of dollars Scientific American shells out, only at maximizing the number of dollars that come to you! Of course, your hope is to be the unique defector, thereby really cleaning up: with 19 C-ers, you will get $95 and they will each get 18 times $3, namely $54. But why am I doing the multiplication or any of this figuring for you? You are very bright. So are the others. All about equally bright, I would say. Therefore, all you need to do is tell me your choice. I want all answers by telephone (call collect, please) the day you receive this letter. It is to be understood (it almost goes without saying, but not quite) that you are not to try to consult with others who you guess have been asked to participate. In fact, please consult with no one at all.
The purpose is to see what people will do on their own, in isolation. Finally, I would appreciate a short statement to go along with your choice, telling me why you made this particular one.

Yours, Doug H.

Source: Hofstadter, D. (1983). Metamagical themas. Scientific American, 248, 14-28.

Hofstadter raised the question of whether one person's action in this situation can be taken as an indication of what all people will do. He concluded that if players are indeed rational, they will either all choose to defect or all choose to cooperate. Given that all players are going to submit the same answer, which choice would be more logical? It would seem that cooperation is best (each player gets $57 when all cooperate and only $19 when they all defect). At this point, the logic seems like magical thinking: A person's choice at a given time influences the behavior of others at the same time. Another example: People explain that they've decided to vote in an election so that others will, too. Of course, it is impossible that one person's voting behavior could affect others in a given election, but people act as if it does. Hofstadter argues that decision makers wrestling with such choices must give others credit for seeing the logic that they themselves have seen. Thus, we need to believe that others are rational (like ourselves) and that they believe that everyone is rational. Hofstadter calls this rationality superrationality. For this reason, choosing to defect undermines the very reasons for choosing it. In Hofstadter's game, 14 people defected and 6 cooperated. The defectors received $43; the cooperators received $15. Robert Axelrod was one of the participants who defected, remarking that in a one-shot game, there was no reason to cooperate.

Imagine that you are playing a prisoner's dilemma game like that described in the Thelma and Louise case. You are told about the contingencies and payoffs in the game and then asked to make a choice. The twist in the situation is that you are told that your opponent either: (1) has already made his choice earlier that day, (2) will make his choice later that day, or (3) will make his choice at the same time as you. In all cases, you will not know the other person's choice before making your own. When faced with this situation, people are more likely to cooperate when their opponent's decision is temporally contiguous with their own decision—that is, when the opponent will make his decision at the same time (Morris, Sim, and Girotto, 1995). Temporal contiguity fosters a causal illusion: the idea that our behavior at a given time can influence the behavior of others. This illusion cannot arise in the time-delayed decisions.

Do the Right Thing: It's the Name of the Game

According to rational analysis, only the payoffs of the game should influence one's strategy. However, our behavior in social dilemmas is influenced by our perceptions about what kinds of behavior are appropriate and expected in a given context. In an intriguing examination of this idea, people engaged in a prisoner's dilemma task. The game was not described to participants as a "prisoner's dilemma," though. In one condition, the game was called the "Wall Street game," and in another condition, the game was called the "community game" (Ross and Samuels, 1993). Otherwise, the game, the payoffs, and the choice were identical.
Whereas rational analysis predicts that defection is the optimal strategy no matter what the name, in fact, the incidence of cooperation was three times as high in the community game as in the Wall Street game, indicating that people are sensitive to situational cues as trivial as the name of the game.

Impression Management

Still another reason why people cooperate is that they want to believe that they are nice people. For example, one person attributed his decision to make a cooperative choice in the 20-person prisoner's dilemma game to the fact that he did not want the readers of Scientific American to think that he was a defector (Hofstadter, 1983). This is a type of impression management (Goffman, 1959). Impression management raises the question of whether people's behavior is different when it is anonymous than when it is public. The answer appears to be yes. However, it is not always the case that public behavior is more cooperative than private behavior. For example, negotiators who are accountable to a constituency often bargain harder and are more competitive than those who are not accountable for their behavior (see Carnevale, Pruitt, and Seilheimer, 1981).

Signaling and Publicizing Commitment

In November of 1995, USAir announced that it was putting its company up for sale (for a full treatment, see Diekmann, Tenbrunsel, and Bazerman, 1998). Financial analysts speculated that the sale of USAir would lead to a bidding war between the major airlines, because whichever airline acquired USAir would have a market advantage. None of the airlines wanted to be in the position of not acquiring USAir and seeing another airline buy the company. Following the announcement of the sale of USAir, however, something happened that financial analysts had not forecast: No one bid for USAir. Analysts did not realize that the major airlines had learned an important principle through their experience in the 1980s with the frequent flyer and triple mile programs. The frequent flyer and triple mile programs were designed to be competitive strategies to capture market share. However, the approach backfired in the airline industry when all of the major airlines developed frequent flyer programs and a price war began, which resulted in the loss of millions of dollars among airlines. Robert Crandall, the chairman of American Airlines, was effective in averting a costly escalation war for USAir. How did he do this? Before reading further, stop and indicate how you would attempt to nip a bidding war in the bud.

Robert Crandall wrote and published an open letter to the employees of American Airlines that appeared in the Chicago Tribune (see Box 12-2). The letter clearly indicated that American Airlines was interested in avoiding a costly bidding war with United Airlines for USAir. The letter clearly stated the intention of American not to make an opening bid for USAir. The letter further indicated that American would bid competitively if United initiated bidding for USAir. The letter effectively signaled the intentions of American Airlines in a way that made bidding behavior seem too costly. Although the letter was addressed to the employees of American Airlines, it is obvious that the real targets of this message were the other airlines.
BOX 12-2 Letter to the Employees of American Airlines from Robert Crandall, Chairman

We continue to believe, as we always have, that the best way for American to increase its size and reach is by internal growth—not by consolidation. So we will not be the first to make a bid for USAir. On the other hand, if United seeks to acquire USAir, we will be prepared to respond with a bid, or by other means as necessary, to protect American's competitive position.

Source: S. Ziemba, Chicago Tribune, November 10, 1995.

Align Incentives

To the extent possible, managers interested in building cooperation should align the incentives of organizational actors to make defection less attractive. For example, "high-occupancy vehicle" lanes on major highways motivate single drivers to carpool.

Personalize Others

People often behave as if they were interacting with an entity or organization rather than a person. For example, an embittered customer claims that the airline refused her a refund when in fact it was a representative of the airline who did not issue a refund. To the extent that others can be personalized, people are more motivated to cooperate than if they believe they are dealing with a dehumanized bureaucracy.

Recovering from Breaches of Trust

Suppose that you're the manager of a large HMO. The managed health care industry is highly competitive, with different companies vying to capture market share by touting low deductibles and so on. You've analyzed the situation with your competitors to be a noncooperative game. You've thought about how your competitor must view the situation, and you've decided to take a cooperative approach and not engage in negative advertising. Later that week, you learn that your opponent has taken out a full-page ad in The Wall Street Journal that denigrates your HMO by publicizing questionable statistics about your mortality rates, quotes from angry patients, and charges against your physicians. You counter with some negative TV spots. You are spending a lot of money and are angry. Can you recover from an initial breach of trust? Probably so, if you consider the following three strategies.

Make Situational Attributions

We often blame the incidence of escalating, mutually destructive conflict on others' ill will and evil intentions. We fail to realize that we might have done the same thing as our competitor had we been in his or her shoes. Why? We punctuate events differently than do our opponents. We see our behavior as a defensive response to the other. In contrast, we view the other as engaging in unprovoked acts of aggression. The solution is to see the other side's behavior as a response to our own actions. In the situation above, your competitor's negative ad campaign may be a payback for your campaign a year ago.

Tiny Steps

Trust is not rebuilt in a day. We rebuild trust incrementally by taking tiny steps. For example, the GRIT (graduated and reciprocated initiatives in tension reduction) strategy calls for conflicting parties to offer small concessions. This reduces the risk for the party making the concession.

Getting Even and Catching Up

As we saw in chapter 11, people are hyperconcerned with fairness. The perception of inequity is a major threat to the continuance of relationships. One way of rebuilding trust is to let the other party "get even" and catch up.
The resurrection of a damaged relationship may depend on repentance on the part of the injurer and forgiveness on the part of the injured (Bottom, Gibson, Daniels, and Murnighan, 1996). Even more surprising, it's the thought that counts: Small amends are as effective as large amends in generating future cooperation.

Not the Only Game in Town

The prisoner's dilemma is not the only noncooperative game of interest for the organizational actor. Other games that characterize organizational life include chicken, spite, battle of the sexes, and the volunteer dilemma (Murnighan, 1992). The volunteer dilemma is a situation in which at least one person in a group must sacrifice his or her own interests to better the group. An example is a group of friends who want to go out for an evening of drinking and celebration. The problem is that all cannot drink if one person must safely drive everyone home. A "designated" driver is a volunteer for the group. Most organized entities would not function if no one volunteered. The act of volunteering strengthens group ties.

SOCIAL DILEMMAS

Since the 1970s, there has been a trend toward "comparative advertising," in which companies compare their product with competitors' products and point out the advantages of their own product and the disadvantages of the competitors' products. Hardly any industry has managed to avoid comparative advertising. Advertisers have battled over milk quality, fish oil, beer taste, electric shavers, cola, coffee, magazines, cars, telephone service, banking, credit cards, and peanut butter. The ads attack the products and services of other companies. What is the effect of the attack ad? For the consumer, attack ads keep prices down and quality high. However, they can also lead to consumer resentment toward the industry. The effect is much more serious for the advertisers, who can effectively run each other out of business.

The Tragedy of the Commons

Imagine that you are a farmer. You own several cows and share a grazing pasture known as a "commons" with other farmers. There are 100 farmers who share the pasture. Each farmer is allowed to have one cow graze. Because the commons is not policed, it is tempting for you to add one more cow without fear of detection. By adding another cow, a herdsman can double his or her utility and no one will really suffer. If everyone does this, however, the commons will be overrun and the grazing area depleted. The cumulative result will be disastrous. What should you do in this situation if you want to keep your family alive?

The analysis of the "tragedy of the commons" (from Hardin, 1968) may be applied to many real-world problems, such as pollution, use of natural resources, and overpopulation. In these situations, people are tempted to maximize their own gain, reasoning that their pollution, failure to vote, and Styrofoam cups in the landfill won't have a measurable impact on others. However, if everyone engages in this behavior, the collective outcome is disastrous: Air will be unbreathable, there will not be enough votes in an election, and landfills will be overrun. Thus, in the social dilemma, the rational pursuit of self-interest produces collective disaster. In the social dilemma situation, each person makes behavioral choices similar to those in the prisoner's dilemma: to benefit oneself or the group. As in the prisoner's dilemma, the choices are referred to as cooperation and defection. A simple numerical sketch of the commons, shown below, makes this incentive structure concrete.
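The sketch uses illustrative numbers of our own choosing (the regrowth rate, the pasture capacity, and the 50-season horizon are assumptions, not figures from Hardin's analysis); only the 100 farmers and the one-cow-versus-two-cows choice come from the example above.

```python
# A commons simulation with illustrative numbers: 100 farmers share a
# pasture. Each season a cooperator grazes 1 cow and a defector grazes 2;
# each cow eats 1 unit of grass. The pasture regrows 15% of its remaining
# stock per season, up to its capacity.
def run_commons(n_defectors, n_farmers=100, capacity=1000.0,
                regrowth=0.15, seasons=50):
    stock = capacity
    total_grazed = 0.0
    for _ in range(seasons):
        cows = (n_farmers - n_defectors) + 2 * n_defectors
        grazed = min(cows, stock)  # cannot graze grass that is not there
        stock = min(capacity, (stock - grazed) * (1 + regrowth))
        total_grazed += grazed
    return round(total_grazed), round(stock)

print(run_commons(n_defectors=0))    # (5000, 1000): fully sustainable
print(run_commons(n_defectors=1))    # (5050, 1000): one extra cow, no visible harm
print(run_commons(n_defectors=100))  # stock collapses within a few seasons;
                                     # total harvest is far below 5000
```

Running it shows the structure of the dilemma: a single defector doubles his own harvest while the pasture stays healthy, but when all 100 farmers defect, the stock collapses and everyone's long-run harvest is far lower than under universal cooperation.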
The defecting choice always results in better personal outcomes, at least in the immediate future, but universal defection results in poorer outcomes for everyone than does universal cooperation. A hallmark characteristic of social dilemmas is that the rational pursuit of self-interest is detrimental to collective welfare. This has very serious and potentially disastrous implications. In this sense, social dilemmas contradict the principle of hedonism and laissez-faire economics. That is, unless some limits are placed on the pursuit of personal goals, the entire society may suffer.

Types of Social Dilemmas

There are two major forms of the social dilemma: resource conservation dilemmas (also known as collective traps) and public goods dilemmas (also known as collective fences; see Messick and Brewer, 1983). In the resource conservation dilemma, individuals take or harvest resources from a common pool (like the herdsmen in the commons). Examples of the detrimental effects of individual interest include pollution, burning of fossil fuels, and water shortages. The defecting choice occurs when people consume too much. The result of overconsumption is collective disaster. For groups to sustain themselves, the rate of consumption cannot exceed the rate of replenishment of resources. In public goods dilemmas, individuals contribute or give resources to a common pool or community. Examples include donations for public radio and television, payment of taxes, and voting. The defecting choice is to not contribute. Those who fail to contribute are known as defectors or free riders. Those who pay while others free ride are affectionately known as suckers.

Determinants of Cooperation in Social Dilemmas

Most social groups could be characterized as social dilemma situations. Some people view groups within organizations as social dilemmas (Kramer, 1991; Mannix, 1993). Members are left to their own devices to decide how much to take or contribute for common benefit. Consider an organization in which members are allowed to monitor their use of supplies and equipment, such as computer manuals, Xerox paper, stamps, and envelopes. Each member may be tempted to overuse or hoard resources, thereby contributing to a rapid depletion of supply. What determines whether and how much people will cooperate in social dilemma situations? The key determinant of cooperation is communication (Messick and Brewer, 1983; Liebrand, Messick, and Wilke, 1992; Komorita and Parks, 1994; Sally, in press). If people are allowed to communicate with the members of the group prior to making their choices, the incidence and level of cooperation increases dramatically (Sally, in press). Why is this?

The Power of Commitment

Communication is thought to be important for increasing cooperation for two reasons. First, communication provides a mechanism by which members of groups may make commitments to one another. Although the social dilemma, like its cousin the prisoner's dilemma, is a noncooperative game, such that players may not make binding agreements, players in social dilemmas nevertheless treat verbal commitments as if they were binding. When people communicate with one another in social dilemma situations, they elicit commitments of cooperation from one another. Verbal commitments in such situations do two important things. First, they indicate the willingness of others to cooperate.
In this sense, they reduce the uncertainty that people have about others in such situations and provide a