CHAPTER FOUR: CLASSICAL GAME THEORY

Figure 4.4 A Two-Person, Non-Zero-Sum Game without Dominant Strategies:

                  Player 2
                  s1       s2
    Player 1  S1  (1,4)    (0,2)
              S2  (-1,0)   (5,1)

M1(Si;sj) > M1(S;sj). There are parallel definitions for Player 2. A dominant strategy is a strict best reply against all of the other player's strategies.

Return to Figure 4.1. Player 1 has a dominant strategy, S2. Player 2's best reply to Player 1's dominant strategy is s2. The pair of strategies (S2;s2) is stable. Neither player wants to change his or her strategy given that they know what strategy the other player is playing. Assuming that both players know the game and reason about how the other will play the game, Player 2 should anticipate that Player 1 will play S2. She should then play s2.

In some games, neither player has a dominant strategy. Neither player has a dominant strategy in the two-person, non-zero-sum game in Figure 4.4. S1 and S2 are Player 1's best replies to s1 and s2, respectively. Similarly, s1 and s2 are Player 2's best replies to S1 and S2, respectively. Each player's best strategy depends upon the other player's chosen strategy. We define a Nash equilibrium as a situation where each player's strategy is a best reply to the other player's strategy. When both players are playing best replies against each other's strategies, they have no incentive to change their strategies. The game in Figure 4.4 has two Nash equilibria in pure strategies, (S1;s1) and (S2;s2). S1 is Player 1's best reply to s1, and s1 is Player 2's best reply to S1. Similarly, S2 is Player 1's best reply to s2, and s2 is Player 2's best reply to S2.

Definition: A pair of strategies Si and sj forms a Nash equilibrium iff the strategies are best replies to each other. Alternatively, a pair of strategies forms a Nash equilibrium iff

    M1(Si;sj) >= M1(S;sj) for all S != Si

and

    M2(Si;sj) >= M2(Si;s) for all s != sj.
A Nash equilibrium is stable because neither player has an incentive to deviate unilaterally from its equilibrium strategy. Either player reduces its payoff if it defects from its equilibrium strategy. However, this observation does not imply that an equilibrium is the best outcome for either player. Nor are equilibria "fair" in any common meaning of "fairness." Instead, Nash equilibrium is a minimal condition for a solution to a game if the players can correctly anticipate each other's strategies. Nash equilibria entail stable, mutual anticipations of the other players' strategies. If such anticipations exist, neither player has an incentive to change its strategy unilaterally. To do so would reduce its payoff.

Nash equilibria in pure strategies can be found easily from the strategic form. Player 1 can choose the row of the strategic form, and Player 2, the column. Sometimes Players 1 and 2 are called Row and Column for this reason. Within a given column, that is, for a fixed strategy for Player 2, Player 1's best reply is the strategy that produces the largest payoff for him in that column. Within a given row, Player 2's best reply is the strategy that produces the largest payoff for her in that row. Nash equilibria can be found, then, by searching for outcomes where the first payoff is the greatest in its column and the second payoff is the greatest in its row. The strategies that produce such outcomes form a Nash equilibrium in pure strategies.

Exercise 4.2: Find the Nash equilibria in pure strategies for the games in Figure 4.5.

In a two-person, zero-sum game, the players' payoff at an equilibrium must be the greatest in its column and the least in its row. Player 1 has control over the row chosen and can benefit by defection if some outcome in the same column has a greater payoff. Conversely, Player 2 wishes to minimize Player 1's payoff (recall that M2 = -M1 in a zero-sum game) and has control over the column chosen within a given row.
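The column-and-row search just described is mechanical enough to sketch in code. The following Python snippet is illustrative only (the function name and the list-of-lists payoff encoding are mine, not the chapter's); it scans every cell and keeps those whose first payoff is greatest in its column and whose second payoff is greatest in its row.

```python
def pure_nash_equilibria(payoffs):
    """Pure-strategy Nash equilibria of a two-player game.

    payoffs[i][j] = (M1, M2): payoffs to Players 1 (Row) and 2 (Column)
    when Player 1 plays row i and Player 2 plays column j.
    """
    rows, cols = len(payoffs), len(payoffs[0])
    equilibria = []
    for i in range(rows):
        for j in range(cols):
            m1, m2 = payoffs[i][j]
            # Player 1's payoff must be the greatest in its column ...
            best_for_row = all(m1 >= payoffs[k][j][0] for k in range(rows))
            # ... and Player 2's payoff the greatest in its row.
            best_for_col = all(m2 >= payoffs[i][k][1] for k in range(cols))
            if best_for_row and best_for_col:
                equilibria.append((i, j))
    return equilibria

# The game of Figure 4.4: rows S1, S2; columns s1, s2.
figure_4_4 = [[(1, 4), (0, 2)],
              [(-1, 0), (5, 1)]]
print(pure_nash_equilibria(figure_4_4))  # [(0, 0), (1, 1)]
```

The result lists the two equilibria of Figure 4.4, (S1;s1) and (S2;s2), as index pairs.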
Equilibria of zero-sum games are sometimes called minmax points because they are simultaneously row minima and column maxima.

Exercise 4.3: Find the equilibria of the two-person, zero-sum games in Figure 4.6.

Mixed Strategies

Some games do not have Nash equilibria in pure strategies. Figure 4.7 gives the strategic form of the Matching Pennies game from Chapter Three (using the convention for payoffs in zero-sum games). Matching Pennies does not have a Nash equilibrium in pure strategies. Player 1's best replies to h and t are H and T, respectively; Player 2's best replies to H and T are t and h, respectively.

Figure 4.5 Exercise 4.2 [payoff matrices not legible in this copy]

Put simply, Player 1 wants to match Player 2's strategy, and Player 2 wants to avoid matching Player 1's strategy. How should they play this game? Each player wants to make the other player uncertain about which strategy he or she will play. If a player can predict whether the other player will play heads or tails, the first player can take advantage of that prediction to win the game. We use probabilities to represent degrees of uncertainty about what strategy the other player will play.

Definition: A mixed strategy for a player is a probability distribution on the set of its pure strategies. Mixed strategies are denoted by (p1S1, ..., pnSn), where pi is the probability of playing strategy Si and the player has n pure strategies.

Figure 4.6 Exercise 4.3 [payoff matrices not legible in this copy]
Figure 4.7 Matching Pennies:

                  Player 2
                  h    t
    Player 1  H   1   -1
              T  -1    1

Figure 4.8 Player 2's Best Reply Correspondence for Matching Pennies [graph: p(Player 1 plays H) on the horizontal axis, p(Player 2 plays h) on the vertical axis]

Figure 4.9 Both Players' Best Reply Correspondences for Matching Pennies [graph: the two correspondences cross at the mixed strategy equilibrium]

Figure 4.10 A Two-Person, Zero-Sum Game without a Pure Strategy Nash Equilibrium:

                  Player 2
                  s1   s2
    Player 1  S1   3    1
              S2   2    4

A player's set of mixed strategies includes all of its pure strategies; (1S1, 0S2, ..., 0Sn) is the same strategy as S1. A player's best reply to a mixed strategy of the other player can be calculated with expected utilities. In Matching Pennies, calculate Player 2's best reply if Player 1 plays a mixed strategy of (1/3 H, 2/3 T). If she plays h, her expected utility against Player 1's mixed strategy will be the following:

    u2(h) = p(H)[-M(H;h)] + p(T)[-M(T;h)]
          = (1/3)(-1) + (2/3)(1) = 1/3.

If she plays t, her expected utility against Player 1's mixed strategy will be the following:

    u2(t) = p(H)[-M(H;t)] + p(T)[-M(T;t)]
          = (1/3)(1) + (2/3)(-1) = -1/3.

Because u2(h) > u2(t), heads is Player 2's best reply against this mixed strategy for Player 1.

A best reply correspondence maps out a player's best reply to every mixed strategy of the other player. Figure 4.8 gives Player 2's best reply correspondence for Matching Pennies. The horizontal axis of Figure 4.8 gives the probability that Player 1 will play H in a mixed strategy. Player 1's pure strategies correspond to the ends of this axis. When p(H) = 0, he plays T; when p(H) = 1, he plays H. The vertical axis gives the probability that Player 2 will play h in her mixed strategy. Each point in Figure 4.8 specifies a pair of mixed strategies, one for each player.
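The expected utility calculation above can be checked numerically. This Python sketch (the matrix encoding and function name are my own) computes Player 2's expected utility for each of her pure strategies against Player 1's mix of (1/3 H, 2/3 T):

```python
from fractions import Fraction

# Player 1's payoffs in Matching Pennies (zero-sum: Player 2 gets the negative).
# Rows are H, T for Player 1; columns are h, t for Player 2.
M = [[1, -1],
     [-1, 1]]

def u2_against_mix(p_H, col):
    """Player 2's expected utility for pure strategy `col` (0 = h, 1 = t)
    when Player 1 plays H with probability p_H and T with 1 - p_H."""
    return p_H * (-M[0][col]) + (1 - p_H) * (-M[1][col])

p = Fraction(1, 3)           # Player 1 plays (1/3 H, 2/3 T)
print(u2_against_mix(p, 0))  # u2(h) = 1/3
print(u2_against_mix(p, 1))  # u2(t) = -1/3
```

Since u2(h) > u2(t), the code confirms that heads is Player 2's best reply to this mix.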
The thick, dashed line gives Player 2's best reply correspondence. She should play h if she believes the chance that Player 1 will play H is less than one-half and t if that chance is greater than one-half. If she thinks there is exactly a one-half chance that Player 1 will play H, then any strategy she plays, mixed or pure, is a best reply. We can also calculate Player 1's best reply correspondence. Figure 4.9 maps it on the same diagram as Player 2's best reply correspondence. The two correspondences intersect when both players play heads half the time and tails the other half. These strategies form the mixed strategy equilibrium [(1/2 H, 1/2 T); (1/2 h, 1/2 t)]. They are mutual best replies.

In some sense, this mixed strategy equilibrium is natural and obvious. In Matching Pennies, you want the other player to think you are equally likely to play either heads or tails. If the other player can predict your move, he or she can take advantage of you. But what happens in a more complicated zero-sum game?

Example: The game in Figure 4.10 has no pure strategy equilibria. No matter what pair of pure strategies we select, one of the two players has an incentive to change strategy. What is the mixed strategy equilibrium of this game?

When pure strategy equilibria do not exist, each player may wish to introduce some uncertainty about what strategy it will play. This uncertainty forces the other player to consider playing all of its undominated strategies. Otherwise we cannot find strategies that are mutual best replies. We find a player's equilibrium mixed strategy by choosing the probability of picking each strategy such that the other player is indifferent among its undominated strategies. Then the other player has no incentive to play any one strategy over another, and any of those strategies is a best reply to the mixing player's strategy.
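We can also verify directly that the fifty-fifty mixes are mutual best replies. In the sketch below (names mine), Player 1's expected payoff is computed for an arbitrary pair of mixes; against q = 1/2, every choice of p yields the same payoff, zero, so no deviation helps him, and by the symmetry of the game the same holds for Player 2.

```python
from fractions import Fraction

M = [[1, -1],
     [-1, 1]]  # Player 1's payoffs in Matching Pennies

def u1(p_H, q_h):
    """Player 1's expected payoff when he mixes (p_H, 1 - p_H) over (H, T)
    and Player 2 mixes (q_h, 1 - q_h) over (h, t)."""
    return sum(pi * qj * M[i][j]
               for i, pi in enumerate((p_H, 1 - p_H))
               for j, qj in enumerate((q_h, 1 - q_h)))

half = Fraction(1, 2)
# Against q = 1/2, Player 1's payoff is 0 no matter what p he chooses,
# so p = 1/2 is (one) best reply and neither player can gain by deviating.
print([u1(p, half) for p in (Fraction(0), half, Fraction(1))])  # [0, 0, 0]
```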
To find such a mixed strategy in a game with two pure strategies, we calculate the other player's expected utility for playing either pure strategy and set them equal.

Example: In the game in Figure 4.10, Player 2 wants to play s2 if Player 1 plays S1 and play s1 if Player 1 plays S2. To calculate Player 1's mixed strategy that makes Player 2 indifferent between her two strategies, let p be the probability that he will play S1. Player 1 plays the mixed strategy [pS1, (1 - p)S2]. Find p such that Player 2 is indifferent between playing s1 and s2:

    -3p + (-2)(1 - p) = -1p + (-4)(1 - p)
    -p - 2 = 3p - 4
    p = 1/2.

Then (1/2 S1, 1/2 S2) is Player 1's mixed strategy that makes Player 2 indifferent between her two pure strategies.

We can find a player's expected value for playing the game from the other player's mixed strategy. We calculate what payoff the player expects from playing the game against the other player's mixed strategy. This expected value is often called the player's value for the game. The players' values for a game can vary with the equilibrium in a non-zero-sum game. In a pure strategy equilibrium, the players' values are just their payoffs from that equilibrium.

Example: Either side of the first equality above gives Player 2's value for the game against Player 1's mixed strategy. Substitute 1/2 for p in either side, and we get Player 2's utility if Player 1 plays its mixed strategy. From the left side of the equality above, we have the following:

    v2 = (-3)(1/2) + (-2)(1 - 1/2) = -2 1/2.

Exercise 4.4: Find Player 2's mixed strategy to make Player 1 indifferent between his two strategies and his value for the game in the above example.

There is a critical observation about mixed strategies that must not be missed here. Mixed strategies are calculated to neutralize the other player's choice of strategy, not to maximize the mixing player's payoff. Mixed strategies expand the set of strategies that are a best reply for the other player.
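The indifference calculation generalizes to any 2x2 zero-sum game without a saddle point. Here is a hedged Python sketch (the helper name is mine; it assumes the indifference equation has a solution in [0, 1], which holds when no pure strategy equilibrium exists):

```python
from fractions import Fraction

def indifference_mix_row(M):
    """For a 2x2 zero-sum game with Player 1's payoff matrix M, find the
    probability p of Player 1's first row that makes Player 2 indifferent
    between her two columns:
        -(a*p + c*(1-p)) = -(b*p + d*(1-p)).
    """
    (a, b), (c, d) = M
    # Rearranging the indifference equation gives p*[(a-c) - (b-d)] = d - c.
    return Fraction(d - c, (a - c) - (b - d))

M = [[3, 1],
     [2, 4]]  # the game of Figure 4.10
p = indifference_mix_row(M)
print(p)  # 1/2, so Player 1 plays (1/2 S1, 1/2 S2)
# Player 2's value against this mix (either column gives the same answer):
print(-(M[0][0] * p + M[1][0] * (1 - p)))  # -5/2
```

This reproduces the example: p = 1/2 and Player 2's value of -2 1/2.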
When a player is indifferent among its undominated strategies, any of them is a best reply. When both players play their mixed strategies, the pair of strategies forms a Nash equilibrium. Game theory is about strategic interaction; the strategic effect of a player's strategy on the other player's choice of strategy is as important as the immediate consequences of that strategy for the player's payoffs. Whenever a mixed strategy is a best reply to a given strategy, every pure strategy in the mixed strategy is also a best reply to the given strategy. Whenever we have a mixed strategy equilibrium, each player plays its mixed strategy in order that the other player can play its mixed strategy. If a player deviates from its equilibrium mixed strategy, then the other player's best reply is no longer its equilibrium mixed strategy.

Mixed strategy equilibria can be understood in two ways. First, they could be exact instructions to randomize over pure strategies. In Matching Pennies, Player 2 could tell Player 1 that she is going to flip her coin to make her move after Player 1 has selected his move. Mixed strategies with more complicated probabilities could be carried out by drawing lots. Second, mixed strategies can be interpreted as representing the other player's uncertainty about what strategy the mixing player will play. A mixed strategy under this interpretation does not give random chances of the pure strategies. Rather, it delineates the uncertainty in the mind of the other player about the mixing player's strategy. The other player's indifference between its strategies captures the degree of uncertainty where the other player cannot exploit the mixing player.

What happens when a player plays a mixed strategy? Mixed strategies make one's actions unpredictable. When a player can profit by knowing the other player's strategy, the latter must obscure its intentions to prevent the former from taking advantage of its knowledge.
Sports provide many examples of mixed strategies; for example, calling plays in football is a zero-sum game. If a pure strategy equilibrium existed in football, each team would always run the same play and set the same defense conditional on the down, distance, and score. But they don't. Football coaches say that if you know the play, there is a perfect defense to stop it, and if you know the defense, a perfect play to attack it. There is no pure strategy equilibrium in football.1 Both sides should choose randomly among their plays and defenses to prevent the other side from predicting what they are planning on any given play. The exact set of plays and defenses used in a team's mixed strategy involves careful judgments about the strengths and weaknesses of a team and its opponent. The set of plays also should vary with the down, distance, and score. Both of these judgments make football a fascinating strategic game.

Figure 4.11 Exercise 4.5:

    a)            Player 2      b)            Player 2      c)            Player 2
                  s1   s2                     s1   s2                     s1   s2
    Player 1  S1   3   -4       Player 1  S1  -3    1       Player 1  S1   5   -4
              S2  -2    1                 S2   0   -5                 S2   1    2

Figure 4.12 Exercise 4.6:

    a)            Player 2              b)            Player 2
                  s1      s2                          s1      s2
    Player 1  S1  (4,2)   (-5,6)        Player 1  S1  (1,1)   (0,4)
              S2  (-1,5)  (0,-2)                  S2  (-1,3)  (3,-5)

Exercise 4.5: Find the mixed strategy equilibria and values of both players in the two-person, zero-sum games in Figure 4.11.

Exercise 4.6: Verify that the two-person, non-zero-sum games in Figure 4.12 do not have Nash equilibria in pure strategies. Find a Nash equilibrium in mixed strategies and the value of the game for both players in the mixed strategy equilibrium for each game in Figure 4.12.

The Minmax Theorem and Equilibria of Two-Person, Zero-Sum Games

Mixed strategies expand the set of strategies available to the players. We have found Nash equilibria in mixed strategies. Do equilibria always exist in the set of mixed strategies? To answer this question, I begin with two-person, zero-sum games. Two-person, zero-sum games are very special.
It is common knowledge that the players' interests are perfectly opposed. Both players' self-interest dictates that they try to reduce the opponent's payoff. A player's security level for a strategy Si is the minimum payoff it can obtain if it declares in advance that it will play Si. The security level of a strategy is important for two-person, zero-sum games because a player must assume that its opponent will strive to limit it to its minimum payoff. Players in a two-person, zero-sum game should choose their strategies to maximize their security levels.

Example: In the game in Figure 4.10, Player 1's security level for S1 is 1. If Player 2 knows that Player 1 is playing S1, she will play s2. Similarly, Player 1's security level for S2 is 2. Player 2's security levels for her pure strategies are -3 for s1 and -4 for s2. But the players' mixed strategies that were calculated to make the other player indifferent between its pure strategies have higher security levels than those of their pure strategies. The mixed strategy (1/2 S1, 1/2 S2) provides Player 1 with a security level of 2 1/2, and (3/4 s1, 1/4 s2) provides Player 2 with a security level of -2 1/2.

This example illustrates the Minmax Theorem, the basic result about two-person, zero-sum games. All two-person, zero-sum games have at least one equilibrium in mixed strategies. The equilibrium strategies maximize both players' security levels.

Theorem (the Minmax Theorem): For every two-person, zero-sum game with a finite number of pure strategies, there exists a number, v, a mixed strategy for Player 1 that guarantees him a total payoff of at least v, and a mixed strategy for Player 2 that guarantees that Player 1 gets at most v. These mixed strategies are in equilibrium, and any pair of mixed strategies that are in equilibrium yield v for Players 1 and 2 and are also in equilibrium with the equilibrium strategies that produce v.

The Minmax Theorem is immensely powerful for the special case of two-person, zero-sum games.
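The security levels in this example can be verified in a few lines of Python (the variable names and encoding are mine; the two equilibrium mixes are the ones given in the example above):

```python
from fractions import Fraction

M = [[3, 1],
     [2, 4]]  # Player 1's payoffs in the game of Figure 4.10

# Security level of a pure strategy: the worst payoff a player can be
# held to once that strategy is announced.
p1_security = [min(row) for row in M]                              # [1, 2]
p2_security = [min(-M[i][j] for i in range(2)) for j in range(2)]  # [-3, -4]

# Security levels of the equilibrium mixed strategies:
p, q = Fraction(1, 2), Fraction(3, 4)  # (1/2 S1, 1/2 S2) and (3/4 s1, 1/4 s2)
p1_mix_security = min(p * M[0][j] + (1 - p) * M[1][j] for j in range(2))
p2_mix_security = min(-(q * M[i][0] + (1 - q) * M[i][1]) for i in range(2))
print(p1_mix_security, p2_mix_security)  # 5/2 -5/2
```

The mixed strategies guarantee 2 1/2 and -2 1/2, better than any pure strategy's security level, just as the Minmax Theorem promises.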
Every two-person, zero-sum game can be solved in mixed strategies. If there are multiple equilibria, all equilibria produce the same value, and any equilibrium strategy for one player is in equilibrium with any equilibrium strategy for the other player. In other words, all equilibrium strategies are interchangeable and produce the same value. The interchangeability of strategies makes the practical solution of two-person, zero-sum games easier. Any equilibrium strategy is in equilibrium with any equilibrium strategy of the other player. If an equilibrium strategy is found, we need look no further for strategic purposes; it is equally effective against any equilibrium strategy the opponent plays.

The proof of the Minmax Theorem can be sketched as follows. For any pair of strategies, find each player's best reply in the set of mixed strategies to the other player's strategy in that pair. This process defines a best reply transformation in the space of mixed strategies from the original pair of strategies to the pair of best replies to them. There are theorems in mathematics that state that all such transformations have a fixed point, a pair of strategies that is mapped onto itself by the transformation. The fixed point of the best reply transformation is the equilibrium; it is a pair of strategies that are best replies to each other. Given that an equilibrium exists, the statements about the value of the game and the interchangeability of equilibrium strategies follow from some inequalities.

However, the Minmax Theorem states only that an equilibrium exists and does not explain how to find that equilibrium. To give you an idea of the limits of the Minmax Theorem, chess is a two-person, zero-sum game. Thus there is an equilibrium for chess. There exists a strategy for chess that guarantees either a victory for one side or a draw for both. Strategically, then, chess is not an interesting game.
Of course, determining this strategy is not a trivial problem, which is why chess is an interesting game to play. No one has ever solved for the optimal chess strategy for either player. The strategy space is immense, and the interaction of the players' strategies extraordinarily complex.

The Minmax Theorem does not hold for all possible two-person, zero-sum games. For example, games with infinite strategy sets can fail to have equilibria:

Example: Consider the game in which each player picks a number and the higher number wins. This game has no equilibrium because for any number chosen, any larger number is a best reply. There is no pair of numbers that are best replies to each other in this game, so no pure strategy equilibrium exists. Similarly, there is no mixed strategy equilibrium. If there were such a mixed strategy equilibrium, either player would be better off reducing the probability of playing the least number it might play and raising the probability of any number greater than the lowest number that the opponent plays in its mixed strategy.

The Minmax Theorem fails to hold for this case because the players have an infinite set of pure strategies and the payoff function is not, as mathematicians say, "well-behaved." There are theorems that apply to games with infinite strategy sets that explain how "well-behaved" the payoff function must be for Nash equilibria to exist in pure and mixed strategies. See Fudenberg and Tirole 1991 (34-36, 484-89) for some such theorems.

Characteristics of Nash Equilibria

The previous section discussed two-person, zero-sum games. Equilibria of two-person, zero-sum games have many nice properties. Nash equilibrium is the extension of equilibrium in two-person, zero-sum games to non-zero-sum games. Both types of equilibria require players' strategies to be mutual best replies. Unfortunately, most of the nice properties of equilibria of two-person, zero-sum games do not extend to Nash equilibria in non-zero-sum games.
This section identifies and illustrates which properties of the equilibria of two-person, zero-sum games no longer hold in Nash equilibria of non-zero-sum games.

In a Nash equilibrium, neither player can better itself acting on its own. Nash equilibria are stable against unilateral defections. However, they may not be stable against coordinated defections. Both players could benefit if they both changed their strategies together. Such coordinated defections are never desirable in a zero-sum game because the players have opposed interests. If one player benefits from the joint shift in strategies, the other must lose from that shift.

Example: The game in Figure 4.13 has two Nash equilibria in pure strategies, (S1;s1) and (S2;s2). But (S2;s2) is not stable against both players' defecting at the same time. Both players prefer shifting from (S2;s2) to (S1;s1).

Non-zero-sum games with multiple Nash equilibria pose problems that two-person, zero-sum games with multiple equilibria do not. If a two-person, zero-sum game has multiple equilibria, all the equilibria produce identical values for the players and all equilibrium strategies are interchangeable. Neither proposition is true for non-zero-sum games with multiple equilibria.

Example: Battle of the Sexes. The Battle of the Sexes game has a cute slice of life in the 1950s built into its story. As Luce and Raiffa (1957, 91) put it, "A man, player 1, and a woman, player 2, each have two choices for an evening's entertainment. Each can either go to a prize fight [S1 and s1] or to a ballet [S2 and s2].
Figure 4.13 A Two-Person, Non-Zero-Sum Game with Two Nash Equilibria:

                  Player 2
                  s1      s2
    Player 1  S1  (2,2)   (0,0)
              S2  (0,0)   (1,1)

Figure 4.14 Battle of the Sexes:

                  Player 2
                  s1      s2
    Player 1  S1  (2,1)   (0,0)
              S2  (0,0)   (1,2)

Following the usual cultural stereotype, the man much prefers the fight and the woman the ballet; however, to both it is more important that they go out together than that each see the preferred entertainment." In the 1990s version of this game, now called "Contest of the Individuals with Neither Gender nor Sexual Orientation Specified," Chris and Pat decide whether they wish to vacation at the beach or in the mountains.2 No matter what we call this game, both players want to coordinate their strategies, but they disagree about which outcome is better. Figure 4.14 gives the strategic form of Battle of the Sexes. This game has two Nash equilibria in pure strategies: (S1;s1) and (S2;s2).

Battle of the Sexes illustrates how Nash equilibria lack many of the nice properties of the equilibria of two-person, zero-sum games. First, different Nash equilibria of the same game can have different values. In Battle of the Sexes, each player's value for the game at equilibrium is different across the two equilibria. Unlike the equilibria of two-person, zero-sum games, different Nash equilibria of a game can have different values for a player. Second, equilibrium strategies are not interchangeable. (S1;s2) is not a Nash equilibrium of Battle of the Sexes, even though both S1 and s2 are equilibrium strategies for some pure strategy Nash equilibrium of the game.

Exercise 4.7: Find a mixed strategy equilibrium and its value to both players for Battle of the Sexes.

Nash equilibria do not possess all the nice properties of the equilibria of two-person, zero-sum games. But Nash equilibria always exist in mixed strategies.

Theorem (Nash): Every finite non-zero-sum game has at least one Nash equilibrium in mixed strategies.
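The indifference method used for zero-sum games also finds mixed strategy equilibria of non-zero-sum 2x2 games: choose each player's mix so that the other player is indifferent between its two strategies. A sketch (function name and encoding mine, assuming an interior mixed equilibrium exists), applied here to the game of Figure 4.13:

```python
from fractions import Fraction

def mixed_equilibrium_2x2(game):
    """Mixed strategy equilibrium of a 2x2 non-zero-sum game.
    game[i][j] = (u1, u2). Each player's mix makes the OTHER player
    indifferent between its two pure strategies."""
    (a1, a2), (b1, b2) = game[0]
    (c1, c2), (d1, d2) = game[1]
    # p = P(row 1), chosen so Player 2 is indifferent between columns:
    #   a2*p + c2*(1-p) = b2*p + d2*(1-p)
    p = Fraction(d2 - c2, (a2 - c2) - (b2 - d2))
    # q = P(column 1), chosen so Player 1 is indifferent between rows:
    #   a1*q + b1*(1-q) = c1*q + d1*(1-q)
    q = Fraction(d1 - b1, (a1 - b1) - (c1 - d1))
    return p, q

figure_4_13 = [[(2, 2), (0, 0)],
               [(0, 0), (1, 1)]]
print(mixed_equilibrium_2x2(figure_4_13))  # (Fraction(1, 3), Fraction(1, 3))
```

So besides its two pure strategy equilibria, the game of Figure 4.13 has a mixed strategy equilibrium in which each player puts probability 1/3 on its first strategy.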
The proof of the existence of Nash equilibria parallels the proof of the Minmax Theorem. The best reply transformation must have a fixed point, and that fixed point is a Nash equilibrium. But the proof does not tell us how to find Nash equilibria; it tells us only that they exist. There are no general techniques for finding Nash equilibria other than simple searches and basic intuition. One way to find Nash equilibria is to think about how each player should play the game and look for strategy pairs where neither player can benefit by changing its strategy.

Example: Another classic game, Chicken. Chicken, like Prisoner's Dilemma and Battle of the Sexes, has a cute story attached to it. Back in the 1950s, teenaged males suffering from excessive hormones would engage in a contest of manhood known as Chicken. The two contestants would meet on a deserted stretch of road, each driving their favorite machine. They would face each other at some distance and drive their cars directly at each other until one driver "chickened" out by swerving off the road. The other would be the winner and proclaimed the man with the most hormones on the block. Sometimes neither driver would chicken out, and many hormones would be spilt on the pavement. Let S1 be a driver's choice to swerve and S2 the choice to hold firm to the course. Figure 4.15 gives the strategic form of Chicken. The best outcome is to win the game by having your opponent "chicken" out, but better to be a live chicken than a dead duck.

Figure 4.15 Chicken:

                  Player 2
                  s1      s2
    Player 1  S1  (5,5)   (0,10)
              S2  (10,0)  (-5,-5)

Exercise 4.8: a) Find the pure strategy Nash equilibria of Chicken. b) Are there any mixed strategy equilibria? If so, what are they, and what value do they produce for the players?

Classic two-by-two games, such as Prisoner's Dilemma and Chicken, are often used as models of strategic interaction in international relations.
Although such simple models can illustrate some important strategic problems, they have two significant limitations as models. First, two-by-two games assume simultaneous moves for the players. Some games, such as Prisoner's Dilemma, are unchanged by adding an order to the players' moves. Others, such as Chicken, are completely changed by introducing a sequence to the moves. The assumption of simultaneous moves also prevents the analysis from addressing the question of how the players respond to earlier moves.

Exercise 4.9: a) State the strategic form of Chicken when the second player can observe the first player's move before choosing her move. b) What are the pure strategy Nash equilibria of this game? c) Do all these Nash equilibria make sense to you?

Second, strategic situations with only two choices are quite limited. Although all game models are simplifications of reality, reducing the players' choices to two may abstract questions of strategic interest into nonexistence. All too often, two-by-two games have been seen as the only possible models. Analysts have used those games to make inappropriate arguments about situations with more than two relevant strategic choices.

Nash Equilibria and Common Conjectures

Many games have multiple Nash equilibria. How do the players know which equilibrium to play, and how do they coordinate their expectations to play that equilibrium? In a two-person, zero-sum game, the players do not have to worry about multiple equilibria. All equilibria produce the same value, and all equilibrium strategies are interchangeable. The players do not have to try to predict which equilibrium the other player will use when choosing their own strategy. Any of a player's equilibrium strategies is in equilibrium with all of its opponent's equilibrium strategies, and all those equilibria produce identical values for the players.
But Nash equilibrium strategies are not interchangeable in non-zero-sum games. Nash equilibrium strategies are best replies to each other; they are not necessarily best replies to any other Nash equilibrium strategy of the other player. The players need to know the other player's strategy to know that their own strategy is a best reply in a non-zero-sum game. The concept of a Nash equilibrium assumes that the players hold a common conjecture about how the game is going to be played. In a Nash equilibrium, a player cannot make itself better off by deviating from its equilibrium strategy if it knows that the other player will play its corresponding equilibrium strategy. When players hold a common conjecture, they correctly anticipate one another's strategies.

Figure 4.16 A Two-Person, Non-Zero-Sum Game with Two Nash Equilibria:

                  Player 2
                  s1      s2
    Player 1  S1  (2,2)   (0,0)
              S2  (0,0)   (1,1)

Common conjectures could arise for many reasons. This section discusses two possible sources of common conjectures, communication and focal points. The common conjecture assumed may also distinguish some Nash equilibria from others in games with multiple Nash equilibria. The exact common conjecture used can help us choose among multiple Nash equilibria of a game.

When the players have identical interests in a game, communication may be sufficient to create the common conjecture needed for a Nash equilibrium. The game in Figure 4.16 has two Nash equilibria in pure strategies: (S1;s1) and (S2;s2). If the players can communicate, one would expect them to agree to play (S1;s1). If pregame communication is the source of the common conjecture, then one would expect players to coordinate on a Pareto-optimal equilibrium. The players should not choose an equilibrium if both can do better in another equilibrium. Some game theorists suggest Pareto optimality as a condition for selecting among multiple Nash equilibria.
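Pareto comparisons between equilibrium payoff profiles are easy to mechanize. A small Python sketch (names mine) covering both the weak and strict senses of dominance:

```python
def pareto_dominates(x, y, strict=False):
    """Does payoff profile x Pareto dominate profile y?
    Weak: every player at least as well off, and some player strictly better.
    Strict: every player strictly better off."""
    if strict:
        return all(xi > yi for xi, yi in zip(x, y))
    return (all(xi >= yi for xi, yi in zip(x, y))
            and any(xi > yi for xi, yi in zip(x, y)))

# Figure 4.16: equilibrium payoffs (2,2) and (1,1).
print(pareto_dominates((2, 2), (1, 1), strict=True))  # True
# Battle of the Sexes: neither pure equilibrium dominates the other.
print(pareto_dominates((2, 1), (1, 2)))               # False
```

The check confirms that (S1;s1) strictly Pareto dominates (S2;s2) in Figure 4.16, while Pareto optimality cannot rank the two pure equilibria of Battle of the Sexes.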
Definition: An outcome x Pareto dominates an outcome y iff for all players i, ui(x) >= ui(y) and for some player j, uj(x) > uj(y). Outcome x strictly Pareto dominates outcome y iff for all players i, ui(x) > ui(y).

In the game in Figure 4.16, there are two Nash equilibria in pure strategies. (S1;s1) strictly Pareto dominates (S2;s2). It seems implausible that the players would play (S2;s2) over (S1;s1) if they communicate to agree on their strategies. Pareto optimality is not as useful for selecting among Nash equilibria when the players do not have identical interests. Battle of the Sexes has two pure strategy equilibria, but neither Pareto dominates the other. Further, communication alone may not create a common conjecture for Battle of the Sexes. Suppose the players quarreled about which pure strategy equilibrium to play, with Player 1 holding out for (S1;s1) and Player 2 arguing for (S2;s2). What strategies do you think they would play after such an argument?

Figure 4.17 The Effect of a Focal Point in a Game with Two Identical Equilibria:

                  Player 2                        Player 2
                  s1      s2                      s1      s2
    Player 1  S1  (1,1)   (0,0)     Player 1  S1  (1,1)    (0,0)
              S2  (0,0)   (1,1)               S2  (0,0)  * (1,1) *

Second, factors beyond the game may create a common conjecture. The players may share ideas that lead them to focus on some strategy pairs over others. For example, the two games in Figure 4.17 are the same. They have two Nash equilibria in pure strategies: (S1;s1) and (S2;s2). If the players cannot communicate and they play the game using the strategic form on the left, we might expect them to play (S1;s1) simply because S1 comes before S2. If they use the strategic form on the right, (S2;s2) seems more likely. The stars, arrows, and large print make that equilibrium more salient to the players. In real life, Schelling's (1960, 55-57) famous example of where to meet in New York City illustrates the idea of focal points.
His audience in New Haven, Connecticut, selected Grand Central Station, their focal point in Manhattan. This observation captures the idea of focal points. If there are multiple Nash equilibria, some may be distinguished in ways that lead the players to those equilibria over the others. Focal points could be distinctive outcomes that attract attention out of a large set of Nash equilibria. If the players have played the game before, their prior experiences could create a common conjecture about how the game should be played. A common culture could lead the players toward some strategies over others.

Symmetry can be thought of as a focal point. Consider the game where two players can divide one hundred dollars if they can agree on how to divide it. Each player must simultaneously write down how they will divide the sum. If the two entries match, the players get the money, divided in the way they have agreed. Any division of the money is a Nash equilibrium of this game. But we might expect that the players would choose to divide the money equally. Equal division is a natural focal point in the set of equilibria. If we accept symmetry as a focal point, symmetric games should have symmetric equilibria. If a game treats the two players the same (gives them the same choices and produces the same outcomes from the corresponding choices), the equilibrium should reward them with the same outcome.

Definition: A game is symmetric iff both players have the same strategy set, S, and for all si, sj ∈ S, u1(si;sj) = u2(sj;si).

A game is symmetric if the game does not change when we relabel the players. The players must have identical strategy sets. When we exchange the strategies the players play, the players also exchange payoffs. Both Chicken and Battle of the Sexes are symmetric games. The players differ only in their labels.3 Symmetric games should have equilibria where the players receive equal payoffs.
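The symmetry condition u1(si;sj) = u2(sj;si) is easy to verify mechanically. The sketch below is my own illustration; the payoff numbers for Chicken are assumed for the example (the text gives only the game's structure):

```python
# game[i][j] = (u1, u2) when Player 1 plays strategy i and Player 2 plays j.
# A game is symmetric iff both players share one strategy set and
# u1(si; sj) == u2(sj; si) for every pair of strategies.
def is_symmetric(game):
    n = len(game)
    if any(len(row) != n for row in game):
        return False  # the players' strategy sets differ in size
    return all(game[i][j][0] == game[j][i][1]
               for i in range(n) for j in range(n))

# Chicken with illustrative payoffs: strategy 0 = swerve, 1 = drive straight.
chicken = [[(3, 3), (2, 4)],
           [(4, 2), (1, 1)]]
print(is_symmetric(chicken))  # True: relabeling the players changes nothing

# The game of Figure 4.4, by contrast, is not symmetric.
fig_4_4 = [[(1, 4), (0, 2)],
           [(-1, 0), (5, 1)]]
print(is_symmetric(fig_4_4))  # False
```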
Such games provide no way to differentiate the players except through the labels we give them. One argument for equilibrium selection holds that Nash equilibria of symmetric games where the players receive different payoffs should be discarded. Neither Chicken nor Battle of the Sexes has a pure-strategy Nash equilibrium that satisfies the symmetry criterion. Instead, the mixed-strategy equilibria of the two games are selected. They do produce equal payoffs for the players. However, the symmetry condition conflicts with Pareto optimality in Battle of the Sexes. Both players are better off in either of the pure-strategy equilibria than they are in the mixed-strategy equilibrium.

Game theory does not have a theory of focal points at this time. One would have to explain how focal points arise from cultural influences, shared experiences, or moral systems. But focal points draw our attention to an important point about Nash equilibria. Nash equilibria assume that the players share a common conjecture about what strategies they each will play. Otherwise, the players cannot know that their strategies are best replies to each other. Focal points could be one source of a common conjecture.

How strong does the common conjecture have to be for a Nash equilibrium? Two theorems set forth by Aumann and Brandenburger (1991) clarify what shared knowledge is needed for Nash equilibrium. Before looking at these theorems, we have to define the term mutual knowledge. Recall the definition of common knowledge from Chapter Three. Something is common knowledge if all players know it, all players know that all other players know it, and so on. A weaker level of knowledge is mutual knowledge.

Definition: An aspect of a game is mutual knowledge if all players know it.

Mutual knowledge, unlike common knowledge, does not require that the players know that the other players know it. In this sense, events that are mutual knowledge are not as well known as those that are common knowledge.
For pure-strategy Nash equilibria, mutual knowledge of strategies is sufficient to guarantee that the players know their strategies are mutual best replies.

Theorem: If the players are rational and the pure strategies they are playing are mutual knowledge, then those strategies must form a Nash equilibrium.

If strategies are mutual knowledge, then rational players must select best replies. Strategies that do not form a Nash equilibrium cannot be mutual knowledge among rational players. Mixed-strategy Nash equilibria require greater knowledge. The players are uncertain about what pure strategy the other players will play in a mixed-strategy Nash equilibrium. Consequently, they cannot be certain that their mixed strategies are best replies.

Theorem: For two-person zero-sum games, if the game, the rationality of the players, and their mixed strategies are mutual knowledge, then those mixed strategies form a Nash equilibrium.

As in the previous theorem, if the mixed strategies are mutual knowledge, they must form a Nash equilibrium. Otherwise, one of the players would not be playing a best reply, and it would wish to change its strategy. However, this theorem does not generalize to games with three or more players. Stronger knowledge is needed. I suggest you see Brandenburger 1992 for a further discussion of Nash equilibrium and common conjectures.

Rationalizability

What happens when the players do not share a common conjecture about how the game will be played? Are there any limits on what strategies rational players would play? Recall the idea of a strictly dominated strategy. When a player has a strictly dominated strategy, it always does better by playing the strategy that dominates its dominated strategy. A rational player would never play a strictly dominated strategy, no matter what it conjectures about the other players' strategies.
Rational players can use common knowledge of the game being played and their rationality to restrict the set of strategies from which they choose. Rational players will not play strictly dominated strategies, and they know other rational players will not play such strategies. They need only focus their attention on undominated strategies. If a player eliminates another player's strictly dominated strategies, it could find that some of its strategies are now strictly dominated within the smaller game left after the elimination. It should not play a strategy that has become strictly dominated; against every strategy the other players still consider, the dominating strategy does better. This observation suggests a way of reducing games to those strategies that rational players could select. In a dominance-elimination procedure, we eliminate all strictly dominated strategies from the game, and then continue searching for strategies that are now dominated in the set of remaining strategies. This procedure is carried out until no more strategies can be eliminated. The process is an iterated dominance-elimination procedure. If an iterated dominance-elimination procedure ends in only one strategy per player, the game is dominance solvable. Prisoner's Dilemma is dominance solvable.

Figure 4.18  A Game to Illustrate the Elimination of Dominated Strategies

               Player 2
             s1       s2       s3
     S1    (0,1)   (-2,3)   (4,-1)
Player 1
     S2    (0,3)    (3,1)    (6,4)
     S3    (1,5)    (4,2)    (5,2)

Figure 4.19  The Game in Figure 4.18 after Dominated Strategies Are Eliminated
[The same matrix with row S1 and column s2 blacked out.]

Example: Conduct an iterated dominance-elimination procedure on the game in Figure 4.18. S1 is strictly dominated by S3: Player 1 always does better playing S3 rather than S1. If we delete S1 from consideration, then s1 strictly dominates s2 for Player 2. In Figure 4.19, these two eliminated strategies and the payoffs they produce are blacked out.
Figure 4.19 cannot be reduced further by an iterated dominance-elimination procedure. Neither S2 nor S3 strictly dominates the other for Player 1. Neither s1 nor s3 strictly dominates the other for Player 2. The game in Figure 4.18 is not dominance solvable, then. The strategies that result from an iterated dominance-elimination procedure are called rationalizable.4 A rational player could hypothesize strategies for the other players where a rationalizable strategy is a best reply. In the reduced game in Figure 4.19, either player can rationalize both of its remaining strategies given the proper hypothesis about the other player's strategy. If Player 1 thinks that Player 2 will play s3, he prefers playing S2. If he thinks that she will play s1, he prefers playing S3. Similarly, if Player 2 thinks that Player 1 will play S2, she prefers playing s3, and if she thinks that he will play S3, she prefers playing s1. Simultaneous choices of S2 and s1 can be rationalized if both players' hypotheses about the other's strategy are incorrect; in this case, Player 1 thinks that Player 2 will play s3, and Player 2 thinks that Player 1 will play S3. If either player knew its hypothesis about the other player's strategy was incorrect, it would want to change its strategy. Rationalizability, unlike Nash equilibrium, allows for incorrect hypotheses about the other player's strategy. In Nash equilibrium, we assume that the players' hypotheses about one another's strategies are correct. Any strategy that is part of a Nash equilibrium is rationalizable. If it were not, it would be strictly dominated at some point during the iterated dominance-elimination procedure, and the strategy that dominated it would then be a better reply to the other players' Nash equilibrium strategies than the player's equilibrium strategy. Nash equilibria, then, are a subset of the set of rationalizable strategy pairs. Many games have multiple Nash equilibria.
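The claim that Nash equilibria use only rationalizable strategies can be checked directly on the game of Figure 4.18. The sketch below (my own illustration, with assumed function names) finds the pure-strategy equilibria by the best-reply search described earlier in the chapter:

```python
# game[i][j] = (u1, u2). A cell is a pure-strategy Nash equilibrium when
# Player 1's payoff is the largest in its column and Player 2's payoff is
# the largest in its row.
def pure_nash(game):
    rows, cols = len(game), len(game[0])
    found = []
    for i in range(rows):
        for j in range(cols):
            u1, u2 = game[i][j]
            if (all(game[k][j][0] <= u1 for k in range(rows)) and
                    all(game[i][k][1] <= u2 for k in range(cols))):
                found.append((i, j))
    return found

# The game of Figure 4.18 (0-indexed: row 1 is S2, column 2 is s3).
fig_4_18 = [[(0, 1), (-2, 3), (4, -1)],
            [(0, 3), (3, 1), (6, 4)],
            [(1, 5), (4, 2), (5, 2)]]
print(pure_nash(fig_4_18))  # [(1, 2), (2, 0)]: (S2;s3) and (S3;s1)
# Both equilibria use only strategies that survive iterated elimination
# (S2, S3 for Player 1 and s1, s3 for Player 2), as the text asserts.
```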
The set of rationalizable strategy pairs always includes all Nash equilibria and typically is larger than the set of all Nash equilibria.

Exercise 4.10: Conduct an iterated dominance-elimination procedure on the game in Figure 4.20. Is this game dominance solvable? If so, what are the resulting strategies? If not, what does the game look like after the procedure, and what are the Nash equilibria of the remaining game?

Figure 4.20  Exercise 4.10

               Player 2
             s1        s2        s3        s4        s5
     S1    (4,-1)    (3,0)    (-3,1)    (-1,4)    (-2,0)
     S2    (-1,1)    (2,2)     (2,3)    (-1,0)     (2,5)
Player 1
     S3    (2,1)    (-1,-1)    (0,4)    (4,-1)     (0,2)
     S4    (1,6)    (-3,0)    (-1,4)     (1,1)    (-1,4)
     S5    (0,0)     (1,4)    (-3,1)    (-2,3)    (-1,-1)

In general, there is no completely satisfactory solution concept for two-person, non-zero-sum games. Nash equilibria often provide multiple solutions with no commonly accepted way to judge among them. The logic of mutual best replies alone does not explain why the players share a common conjecture about how the game should be played. Because non-zero-sum games involve mixed motives for all players, threats and bargaining strategies are common in them. But the exploration of threats and bargaining tactics requires the specification of the nature of communication among the players and a greater knowledge of the structure of the game. Different sequences of moves can greatly affect the efficacy of threats and bargaining tactics. I leave these topics for my discussion of games in the extensive form. Now I present two examples of games applied to electoral politics.

Political Reform in Democracies

Reform of the civil service has been a recurrent problem in democracies. Patronage is the traditional source for government employees. Victorious parties reward their workers and followers with government jobs. Patronage often leads to inefficiency and corruption, and the demand for reform soon follows.
But civil service reform is difficult to achieve after the demand for it arises. The fight to establish merit-based hiring over patronage is often blocked by established political parties. Why is the move to meritocracy difficult to achieve? This section presents two simple models from Geddes 1991 that help us understand the political motivations behind the success and failure of civil service reform in democracies. Reform occurs only if politicians approve of it. The models here focus on the incentives that politicians face when the issue of political reform arises in competitive democracies. Both patronage and reform have electoral value to politicians seeking office. Patronage is a reward for loyal campaign workers and so produces a cadre of veterans for the political organizations of parties in office. Reform is an effective campaign issue with voters when the inefficiencies of patronage become obvious to all. The two models here address the questions of when politicians use the promise of patronage jobs to their campaign workers and when they support reform as an issue in their campaigns. The model shown in Figure 4.21 displays the incentives of politicians to use patronage, that is, promise their workers jobs, during a campaign. Exam-