The Puzzle of Cooperation

by Robert Ellickson (1991)



The Rational-Actor Model

The Vocabulary of Game Theory

The Prisoner's Dilemma

A Specialized Labor Game

The Source of Hope: Repeated Play

Game theory provides a set of tools for the systematic dissection of the problem of human order. The advantage of game theory is that it forces its users to make explicit assumptions about human motivations and capabilities, and to identify the features of "games" - that is, interpersonal interactions - that are apt to influence conduct.[1]

The Rational-Actor Model

Game theorists adopt the rational-actor model that is currently dominant among social scientists of a positivist bent, especially those working in economics and public-choice theory.[2] The rational-actor model has two basic underlying tenets. It assumes, first, that each individual pursues self-interested goals and, second, that each individual rationally chooses among various means for achieving those goals.[3]

To be self-interested is not necessarily to act selfishly at every opportunity. A rational actor may choose to pass up a short-run gain to garner a long-run gain of greater present value. When rational-actor theorists observe ostensibly altruistic behavior they therefore tend to see it as part of a continuing, mutually beneficial pattern of exchange. Apart from interactions among kin, however, they tend to doubt the possibility of unalloyed altruism.[4]

The assumption that people are rational is a useful simplification that is known to be overdrawn. As Herbert Simon in particular has emphasized, people have limited cognitive capacities.[5] Some limitations in cognitive abilities are rather easily reconciled with the basic rational-actor model. For example, to assume people are rational does not presuppose that they endlessly calculate their every move. Because deliberation is time-consuming and endless innovation is risky, a rational actor may choose a course of action, not by calculating from scratch, but rather by drawing upon general cultural traditions, role models, or personal habits developed after trial-and-error experimentation.[6] These shorthand methods reduce decision-making costs, but actors who rely on them will tend to lag in adapting to changes in their surroundings.

In some contexts a person's perceptions seem to be distorted not by lack of cognitive capacity but rather by cracks in his lens. For example, in the face of much contrary evidence, Shasta County cattlemen adhered to the folklore that the "motorist buys the cow in open range." Psychologists theorize that cognitive dissonance may cause an individual to suppress information whose acceptance would make him feel foolish.[7] Similarly, Daniel Kahneman and Amos Tversky have assembled evidence that the framing of an outcome as a loss, as opposed to a forgone gain, has more effect on decision making than the rational-actor model would predict.[8]


The rational-actor model has drawn its heaviest fire from scholars who are suspicious of either reductionist theory or, at the extreme, the possibility of objective (positive) social science. Law-and-society scholars tend to be positivists, but they are also skeptical about model building. Although something close to a rational-actor model has surfaced in the work of law-and-society stalwarts such as Stewart Macaulay and David Trubek,[9] most law-and-society scholars seem to regard it as too simple to have heuristic value. At the extreme among the nonpositivist critics are the Critical Legal scholars. Using more intuitive epistemologies, they assert that human nature and human tastes are highly contingent on historical circumstance. They would likely regard rational-actor theorists as fundamentally mistaken, for example, in taking the self-interestedness of individuals as a given.[10]

The subsequent analysis applies the rational-actor model and also makes considerable use of game theory. I am a positivist and am therefore interested in making and testing predictions. If a theory lacks assumptions about human motivations and decision-making processes, it cannot generate predictions. In my view, despite the undoubted simplicity of the rational-actor model, social scientists possess no technique with greater heuristic power.

The Vocabulary of Game Theory

Game theorists analyze interactions between two or more people ("games") in which the individual outcomes ("payoffs") for the people involved ("players") depend on their independent choices among plays. Some key variables in games are (1) the number of players, (2) the number of choices a player has available, (3) the patterns of payoffs under different conjunctions of player choices, and (4) the number of periods in which a game is to be played. Game theorists usually assume that the players know perfectly the matrix that shows the individual payoffs associated with different combinations of player choices, but that players cannot change those payoffs or communicate with each other except by making choices.

Game theory aspires to predict what players would choose to do in particular game situations. Because game theorists make use of the rational-actor model, they assume that players want to maximize their individual payoffs. Theorists call a choice "dominant" in a period of play if it would be in a player's self-interest for that period of play regardless of what the other player(s) were to choose to do.

The outcome of a game will be referred to here as "cooperative," or "welfare maximizing," when the players' choices have combined to deliver the largest total objective[11] payoff available, regardless of how individual players happen to share in that total.

In some games, those of pure coordination, the payoffs are structured such that the players have strong individual incentives to choose strategies that will conjoin to produce cooperative results. Every motorist, for example, recognizes that there will be gains from a convention that requires all to drive on the right (or left) side of the highway; every user of a language gains if there is a consensus about the meaning of given words. It is unremarkable that players reach cooperative outcomes in these sorts of games.[12]
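The stability of such conventions can be made concrete with a minimal sketch in Python (the payoff numbers, labels, and function names are illustrative assumptions, not drawn from the text). Driving on the same side as the oncoming motorist is an equilibrium whichever side the convention selects:

    # A pure coordination game: two motorists each choose a side of the road.
    # Payoffs are hypothetical: matching sides avoids a collision (1 each);
    # mismatching causes one (0 each). Entries are (Player One, Player Two).
    PAYOFFS = {
        ("left", "left"): (1, 1),   ("left", "right"): (0, 0),
        ("right", "left"): (0, 0),  ("right", "right"): (1, 1),
    }

    def is_equilibrium(c1, c2):
        """True if neither player gains by unilaterally switching sides."""
        p1, p2 = PAYOFFS[(c1, c2)]
        best1 = max(PAYOFFS[(c, c2)][0] for c in ("left", "right"))
        best2 = max(PAYOFFS[(c1, c)][1] for c in ("left", "right"))
        return p1 == best1 and p2 == best2

    for pair in PAYOFFS:
        print(pair, "stable" if is_equilibrium(*pair) else "unstable")
    # Both ("left", "left") and ("right", "right") are stable conventions.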

The Prisoner's Dilemma

Theorists of cooperation therefore concentrate on more nettlesome situations in which rational players seem likely to make choices that will not conjoin to produce cooperative outcomes. The most famous game of this sort is the Prisoner's Dilemma. In a Prisoner's Dilemma the matrix of payoffs is structured so that the rational pursuit of self-interest seems destined to be an engine of Hobbesian impoverishment rather than of welfare production. Most analysts of cooperation assume that if players can achieve cooperative outcomes under the adverse circumstances of the Prisoner's Dilemma, they could certainly achieve cooperative outcomes under more favorable game conditions.[13]

Table 9.1 - An Illustrative Prisoner's Dilemma

                             Player Two
                        Cooperate     Defect
Player One  Cooperate     3, 3         0, 5
            Defect        5, 0         1, 1

Note: The payoffs to Player One are listed first.

Table 9.1 is based on a simple Prisoner's Dilemma set out in Robert Axelrod's important book on cooperation.[14] Two players play each other just once. Each has two choices, "Cooperate" or "Defect." The four cells in the matrix indicate the payoffs, in units of the prevailing currency, that would result from each possible conjunction of choices. Each cell contains two numbers, the first the payoff for Player One, and the second the payoff for Player Two. For example, if Player One were to Cooperate and Player Two were to Defect in this particular game, Player One would receive 0 and Player Two would receive 5. Observe that the cooperative, welfare-maximizing outcome is the upper-left quadrant, which is reached when both players Cooperate. The sum of the individual payoffs (6) is greater for that quadrant than for any other.

For a game to be a Prisoner's Dilemma, the pattern of payoffs must satisfy three conditions. First, Defecting must be the dominant choice for each player. Second, mutual decisions to Defect must produce individual payoffs for both players that are lower than the payoffs they each would have received had they both "irrationally" chosen to Cooperate. And third, the total payoff in the upper-left cell, which represents mutual Cooperation, must be larger than the total payoff in either the upper-right or the lower-left cells, which would be reached if one player Cooperated and the other Defected.[15]

Anyone who has not previously encountered the Prisoner's Dilemma should spend a moment studying Table 9.1 to see that it indeed meets these devilish conditions. Imagine how Player One would analyze the situation. Because the game rules prevent the players from communicating prior to choosing what to do, Player One would not know whether Player Two was about to Cooperate or about to Defect. Suppose Player Two were about to Cooperate. According to the matrix of payoffs, Player One would gain 5 by Defecting, but only 3 by Cooperating; therefore an egoistic Player One would conclude that it would be wise to Defect if Player Two were about to Cooperate. Now suppose Player Two were about to Defect. In that case Player One would gain 1 by Defecting, but 0 by Cooperating. Thus, Player One would conclude that Defecting was his dominant choice; it would make him better off regardless of the choice Player Two was about to make. In Table 9.1 the payoffs and incentives are symmetrical, and Defecting would be Player Two's dominant choice as well. Mutual Defection, apparently the inexorable result of rational, self-interested play of the game, would produce payoffs of 1, 1, an outcome worse for both players than the 3, 3 results they would have obtained had both Cooperated. Because the total payoff of mutual Cooperation (6) is larger than the total payoff in either the upper-right or lower-left quadrants of the matrix (5), the third and final requirement for a Prisoner's Dilemma is satisfied.
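For readers who prefer to see the three conditions checked mechanically, here is a short Python sketch, added as a reader's aid (the dictionary encoding of the matrix is an assumption of convenience). It encodes Table 9.1 and tests each condition in turn:

    # Payoffs from Table 9.1, keyed by (Player One's choice, Player Two's
    # choice); each value is (payoff to Player One, payoff to Player Two).
    PD = {
        ("C", "C"): (3, 3), ("C", "D"): (0, 5),
        ("D", "C"): (5, 0), ("D", "D"): (1, 1),
    }

    # Condition 1: Defecting is dominant for each player.
    cond1 = (all(PD[("D", c)][0] > PD[("C", c)][0] for c in "CD") and
             all(PD[(c, "D")][1] > PD[(c, "C")][1] for c in "CD"))

    # Condition 2: mutual Defection leaves each player worse off than
    # mutual Cooperation would have.
    cond2 = all(PD[("D", "D")][i] < PD[("C", "C")][i] for i in (0, 1))

    # Condition 3: mutual Cooperation yields the largest total payoff.
    cond3 = sum(PD[("C", "C")]) > max(sum(PD[("C", "D")]), sum(PD[("D", "C")]))

    print(cond1, cond2, cond3)  # True True True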

Table 9.2 - An Algebraic Prisoner's Dilemma

                             Player Two
                        Cooperate     Defect
Player One  Cooperate     B, B         D, A
            Defect        A, D         C, C

Note: A > B > C > D and 2B > A + D

Table 9.2 presents the structure of the Prisoner's Dilemma in simple algebraic terms. As an everyday example, imagine that the players in the Prisoner's Dilemma are adjoining landowners and that the game is over the construction of a boundary fence. To Cooperate in this example would be to contribute labor and materials to a cost-justified fence project; to Defect would be to fail to contribute. If the boundary-fence situation were indeed structured like a Prisoner's Dilemma, the best result for either landowner would be for the other to build the fence as a solo project. Because 2B > A + D in a Prisoner's Dilemma, if both adjoiners were to work together on a fence they would exploit economies of scale, perhaps of the type Robert Frost suggested in the poem "Mending Wall," that would not be exploited if one of them were to build it alone. In this example, a cooperative fence project would be better for each neighbor than no fence at all. For each neighbor Defection is, however, the dominant strategy, and the rational-actor model predicts that short-sighted neighbors would fail to build the fence.[16]

A Specialized Labor Game

Another game, slightly different from the Prisoner's Dilemma, illustrates a common social situation that may also pose problems for egoists. Table 9.3 presents the game in algebraic form. Table 9.4 presents a numerical example, which can again be taken to involve the potential construction of a boundary fence. For reasons that will be apparent, this second game will be called "Specialized Labor."

Table 9.3 - An Algebraic Version of the Specialized Labor Game

                             Player Two
                          Work        Shirk
Player One  Work          B, B        D, A
            Shirk         A, E        C, C

Note: A > B > C > D > E and 2B < A + D

Specialized Labor differs from the Prisoner's Dilemma in two respects. First, to reach the cooperative outcome the players must act differently. In this game the highest sum of payoffs is achieved when Player One Works and Player Two Shirks. In contrast to the Prisoner's Dilemma, joint-labor projects in Specialized Labor are welfare reducing: because 2B < A + D, the two players working together produce a smaller total than the abler player would produce working alone. In the fence-building context, for example, one adjoining landowner would be able to build a boundary fence more cheaply than two could build it.

Second, in Specialized Labor the sum of the payoffs in the upper-right quadrant cannot be the same as the sum in the lower-left quadrant. In both Tables 9.3 and 9.4, the higher sum happens to be in the upper-right quadrant. This means that Player One has some special ability, not possessed by Player Two, to act in a way that will maximize joint welfare. Inspired by Calabresi's notion of the "cheapest cost-avoider," let us call this specially capable person the "cheapest labor-provider." It is characteristic of a Specialized Labor game that a cheapest labor-provider is always present.[17]

Table 9.4 - An Illustrative Specialized Labor Game

                               Player Two
                         Build Fence    Shirk
Player One  Build Fence     3, 3         0, 7
            Shirk           7, -2        1, 1

Examination of Tables 9.3 and 9.4 will quickly reveal that Shirking is the dominant choice for each of the players in Specialized Labor. Mutual Shirking is a poor outcome. In Table 9.4, the resulting total payoff is 2, the lowest total for any quadrant. In Table 9.3, the resulting total is 2C, which is stipulated to be less than 2B (the sum if both were to Work), which is in turn less than A + D (the sum if only the cheapest labor-provider were to Work).
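The same mechanical check can be run on Table 9.4. The Python sketch below (a reader's aid; "Work" stands in for "Build Fence") confirms that Shirking is dominant for both players and that the upper-right quadrant, where only Player One Works, carries the largest total payoff:

    # Payoffs from Table 9.4, keyed by (Player One's choice, Player Two's
    # choice); "Work" stands for "Build Fence."
    SL = {
        ("Work", "Work"): (3, 3),   ("Work", "Shirk"): (0, 7),
        ("Shirk", "Work"): (7, -2), ("Shirk", "Shirk"): (1, 1),
    }
    moves = ("Work", "Shirk")

    # Shirking is dominant for each player.
    assert all(SL[("Shirk", c)][0] > SL[("Work", c)][0] for c in moves)
    assert all(SL[(c, "Shirk")][1] > SL[(c, "Work")][1] for c in moves)

    # Total payoff per quadrant; the maximum marks the welfare-maximizing
    # division of labor and identifies the cheapest labor-provider.
    totals = {cell: sum(pay) for cell, pay in SL.items()}
    print(max(totals, key=totals.get))  # ('Work', 'Shirk'): Player One builds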

If transaction costs were zero and the players could negotiate in advance, it would be in their mutual interest in Specialized Labor situations to negotiate a contract obligating Player One to Work, permitting Player Two to Shirk, and obligating Player Two to make an appropriate side-payment to Player One.[18] In Table 9.3, the side-payment would have to be at least equal to C - D (Player One's costs of Working), but could not exceed A - C (Player Two's benefits from Player One's Work). More concretely, in Table 9.4, the fence would cost Player One a net of 1 to build alone (1 - 0), and would confer benefits of 6 on Player Two (7 - 1). In that situation both parties would be better off if Player Two were to contract to pay Player One some sum between 1 and 6 to compensate Player One for building the fence as a solo project. If such contracting were impossible, the logic of game theory suggests that the players would simply miss out on these gains from trade.
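A back-of-the-envelope version of this bargaining range (a sketch, using the Table 9.4 numbers mapped onto the symbols of Table 9.3):

    # Table 9.4 values expressed in the symbols of Table 9.3.
    A, B, C, D = 7, 3, 1, 0

    # Player One must recover the net cost of building alone: the payoff C
    # (mutual Shirking) forgone, less the payoff D received for solo Work.
    minimum = C - D   # 1
    # Player Two will pay no more than the gain from Player One's project.
    maximum = A - C   # 6

    print(f"A side-payment between {minimum} and {maximum} benefits both.")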

The Source of Hope: Repeated Play

People who interact often expect that their current encounter will be but one incident in a series that will continue into the future. Game theorists call continuing relationships "iterated games" and each encounter a "period" of play. For each period a player has "choices." For an iterated game, however, a player can also adopt a "strategy," that is, a plan of action that determines the player's choices in all periods.

Theorists who have investigated repeated games have tended to focus on the iterated Prisoner's Dilemma, in part to see whether players can succeed in cooperating under relatively inauspicious circumstances.[19] The usual format involves two players who confront an identical, symmetric, Prisoner's Dilemma matrix period after period. The number of periods may be finite, have a finite expected value, or be infinite.

Thanks to Axelrod, the best-known strategy for the iterated Prisoner's Dilemma is Tit-for-Tat. A Tit-for-Tat player Cooperates in the first period and thereafter chooses the move that the other player chose for the previous period. Tit-for-Tat is thus never the first to Defect; in Axelrod's terminology, it is a "nice" strategy. A Tit-for-Tat player is not a patsy, however, because he immediately penalizes a Defection by the other player by Defecting himself in the next period.[20] A Tit-for-Tat player nevertheless bears no grudges; once he has squared accounts, he is willing to Cooperate thereafter as long as the other player also Cooperates.
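As a concrete rendering, here is a minimal Python sketch of the strategy (the function name and the "C"/"D" move encoding are illustrative assumptions):

    def tit_for_tat(opponent_history):
        """Cooperate ("C") in the first period; thereafter echo the
        opponent's previous move. `opponent_history` lists the opponent's
        past choices, each "C" (Cooperate) or "D" (Defect)."""
        return "C" if not opponent_history else opponent_history[-1]

    # Against an opponent who Defects once, Tit-for-Tat retaliates exactly
    # once and then resumes Cooperating.
    opponent = ["C", "C", "D", "C", "C"]
    print([tit_for_tat(opponent[:t]) for t in range(len(opponent))])
    # ['C', 'C', 'C', 'D', 'C']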

Axelrod conducted several computer tournaments in which various strategies were paired against one another in a round-robin of iterated Prisoner's Dilemmas. In the tournaments the success of a strategy was measured according to the total payoffs it individually earned in its round-robin matches. Tit-for-Tat turned out to be the most successful of the strategies submitted, and much more successful than most of its competitors.[21] Nice strategies did best in Axelrod's tournaments because, when paired, they produced strings of mutually Cooperative outcomes. Moreover, by credibly threatening to punish Defections in later rounds, a Tit-for-Tat player encouraged Cooperation by forcing opponents to lower their estimates of the long-term gains associated with decisions to Defect. To show that his tournaments were not simply laboratory fun, Axelrod amassed anecdotal evidence that Tit-for-Tat strategies are frequently observed in practice. His most arresting example is that of British and German troops facing each other from opposing trenches in World War I, who often followed the Tit-for-Tat strategy of live and let live.[22]
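A toy version of such a round-robin can be staged in a few lines. The Python sketch below uses the Table 9.1 payoffs; the four-strategy pool, the 200-period match length, and the scoring rule are illustrative assumptions rather than Axelrod's actual tournament design, and with so small a pool the exact ranking depends heavily on the mix of entrants:

    # Iterated Prisoner's Dilemma round-robin using the Table 9.1 payoffs.
    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def tit_for_tat(opp):
        return "C" if not opp else opp[-1]

    def always_defect(opp):
        return "D"

    def always_cooperate(opp):
        return "C"

    def grim_trigger(opp):
        # Cooperate until the opponent's first Defection, then Defect forever.
        return "D" if "D" in opp else "C"

    def match(s1, s2, periods=200):
        """Return each side's total payoff over an iterated match."""
        h1, h2, t1, t2 = [], [], 0, 0
        for _ in range(periods):
            m1, m2 = s1(h2), s2(h1)      # each sees only the other's history
            p1, p2 = PAYOFF[(m1, m2)]
            h1.append(m1); h2.append(m2)
            t1 += p1; t2 += p2
        return t1, t2

    strategies = [tit_for_tat, always_defect, always_cooperate, grim_trigger]
    scores = {s.__name__: 0 for s in strategies}
    for s1 in strategies:                # every strategy meets every other,
        for s2 in strategies:            # including its own twin
            scores[s1.__name__] += match(s1, s2)[0]
    print(sorted(scores.items(), key=lambda kv: -kv[1]))
    # The "nice" strategies finish at the top of this particular pool.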

Axelrod's work has obvious implications for students of informal systems of social control. As he presented it, Tit-for-Tat is a second-party system of social control. It is a strategy that is simple for a player to administer and for an opponent to recognize. In the language of behavioral psychology, a Tit-for-Tat strategy is a relentless system of operant conditioning. It promptly rewards cooperation and promptly punishes defections.

Tit-for-Tat, however, is operable only under a highly restrictive set of conditions. Axelrod's computer tournaments involved only two-player interactions and presented each player with only two choices per period. Payoffs were symmetrical and did not change from period to period.[23] A player had perfect knowledge of the history of each of his own dyadic matches but knew nothing of the outcomes of matches between others. A player's groupwide reputation was therefore never at stake.[24] In addition, Axelrod's basic format assumed players costlessly and perfectly administered their strategies. They could not, for example, accidentally "push the wrong button."[25]

Despite Axelrod's results, which provide hope that social and evolutionary processes may work to favor cooperative behavior, game theorists have not been able to deduce from plausible axioms that players in iterated Prisoner's Dilemmas will actually settle into a cooperative mode. Indeed, most game theorists accept the "folk theorem" that asserts that any equilibrium, including an uncooperative one, can be stable as long as each player could do even worse.[26]

That a result cannot be deduced from axioms does not mean that it cannot be induced from observations. Evidence about how people actually behave suggests that the folk theorem is too pessimistic.[27] The next step is to articulate on the basis of field evidence a somewhat more upbeat hypothesis about the reality of social life.



Notes

1. Scholars from diverse fields have turned to game theory to sharpen their analyses of the phenomenon of cooperation. See, e.g., Robert Axelrod, The Evolution of Cooperation (1984) (political scientist); Russell Hardin, Collective Action (1982) (philosopher); John Maynard Smith, Evolution and the Theory of Games (1982) (biologist); Robert Sugden, The Economics of Rights, Co-operation, and Welfare (1986) (economist); Michael Taylor, Anarchy and Cooperation (1976) (philosopher); Edna Ullmann-Margalit, The Emergence of Norms (1977) (philosopher).

2. Social scientists in other disciplines commonly make use of the rational-actor model. See, e.g., George C. Homans, Social Behavior 15-50 (rev. ed. 1974); James Q. Wilson and Richard J. Herrnstein, Crime and Human Nature 41-66 (1985).

3. Jack Hirshleifer, "The Expanding Domain of Economics," 76 Am. Econ. Rev. 53, 54-62 (1985), succinctly presents the rational-actor model and also summarizes criticisms of it. See also Ellickson, "Bringing Culture and Human Frailty to Rational Actors: A Critique of Classical Law and Economics," 65 Chi.-Kent L. Rev. 23 (1989). One of the model's most serious limitations is its failure to explain how people come to hold particular preferences. A plea for theory and research on this issue is Aaron Wildavsky, "Choosing Preferences by Constructing Institutions," 81 Am. Pol. Sci. Rev. 3 (1987).

4. Howard Margolis, Selfishness, Altruism, and Rationality (1982), develops a somewhat less egocentric model of human behavior.

5. See, e.g., Herbert A. Simon, Reason in Human Affairs 3-35 (1983). See generally Rational Choice: The Contrast between Economics and Psychology (Robin M. Hogarth and Melvin W. Reder eds. 1986).

6. On the transmission of norms and culture, see Robert Boyd and Peter J. Richerson, Culture and the Evolutionary Process (1985); J. Maynard Smith, supra note 1, at 170-172.

7. See Elliot Aronson, The Social Animal 85-139 (2d ed. 1976); Leon Festinger, A Theory of Cognitive Dissonance (1957).

8. See, e.g., Amos Tversky and Daniel Kahneman, "Rational Choice and the Framing of Decisions," 59 J. Business S251 (no. 4, pt. 2, Oct. 1986); see also Jack L. Knetsch, "The Endowment Effect and Evidence of Non-Reversible Indifference Curves," 79 Am. Econ. Rev. 1277 (1989).

9. Stewart Macaulay, "Non-Contractual Relations in Business: A Preliminary Study," 28 Am. Soc. Rev. 55, 66 (1963); David M. Trubek, "Studying Courts in Contexts," 15 Law & Soc'y Rev. 485, 498-499 (1980-81).

10. See, e.g., Gerald E. Frug, "The City as a Legal Concept," 93 Harv. L. Rev. 1059, 1149-1150 (1980) ("We can transform society as much or as little as we want" in pursuit of the goal of empowering cities); Robert W. Gordon, "Historicism in Legal Scholarship," 90 Yale L.J. 1017, 1019-1020 (1981) (critics assert historical contingency of social life). See generally Steven H. Shiffrin, "Liberalism, Radicalism, and Legal Scholarship," 30 UCLA L. Rev. 1103, 1116-1119 (1983). There is a long tradition of opposition to the notion that human nature constrains human institutions in important ways. Karl Polanyi, for example, argued in a 1944 book that Adam Smith's "economic man" hardly existed before Smith wrote and is entirely a product of culture. The Great Transformation 44, 249-250 (1st Beacon paperback ed. 1957). But see, e.g., Edward O. Wilson, On Human Nature (1978); G. Homans, supra note 2, at 217: "Ours is the doctrine that 'human nature is the same the world over.'" On the long-standing tension between these opposing intellectual traditions, see Clifford Geertz, The Interpretation of Cultures 33-54 (1973); Thomas Sowell, A Conflict of Visions 18-39 (1987); infra Chapter 14, text accompanying notes 10-12.

11. See infra Chapter 10, text accompanying notes 14-22, for an explanation of the inclusion of this adjective.

12. On these games, see generally Thomas C. Schelling, The Strategy of Conflict 89-99 (1960); E. Ullmann-Margalit, supra note 1, at 74-133; David K. Lewis, Convention (1969).

13. See, e.g., Rational Man and Irrational Society (Brian Barry and Russell Hardin eds. 1982) and the sources in note 1 supra.

14. The Evolution of Cooperation 8 (1984). Readers already familiar with the Prisoner's Dilemma may prefer to skip to the next section. The phrase Prisoner's Dilemma came into use when early game theorists illustrated this type of game with an example in which two separately confined prisoners accused of a joint crime each had to decide whether or not to confess that they both were guilty.

15. When all conditions except this last one are met, the game is Specialized Labor. See infra text following note 16.

16. This example is highly unrealistic both because neighbors are typically situated in a continuing, not a one-time, relationship, and also because they usually have no trouble communicating with one another before making their choices.

17. One can readily imagine slight variations of this game. For example, the players could be equally skilled but still face diseconomies of scale in returns to work. Or the cooperative outcome might be achievable only if the players performed slightly different tasks. These sorts of variations will not be analyzed.

18. For simplicity, the discussion assumes that the payoffs reflect how players both subjec- tively and objectively value outcomes. On this distinction, see infra Chapter 10, text at notes 14-18.

19. Much of this work involves complex mathematical modeling. See, e.g., Abraham Neyman, "Bounded Complexity Justifies Cooperation in the Finitely Repeated Prisoners' Dilemma," 19 Econ. Lett. 227 (1985); Ariel Rubinstein, "Finite Automata Play the Repeated Prisoner's Dilemma," 39 J. Econ. Theory 83 (1986).

20. An examination of Table 9.1 will reveal that a defection by Player One hurts Player Two regardless of the choice that Player Two makes.

21. R. Axelrod, supra note 1, at 30-43 and Appendix A.

22. Id. at 73-87.

23. It is not clear that Tit-for-Tat would have fared as well if, for example, the payoffs in every fifth round had been tripled.

24. In this respect, Axelrod's tournament lacked a structural feature that is powerfully conducive to the evolution of cooperation. See infra Chapter 10, text accompanying notes 45-47.

25. On occasion, however, Axelrod has introduced into his computer tournaments the possibility of errors in perception. See R. Axelrod, supra note 1, at 182-183. A variety of ways of enriching the iterated Prisoner's Dilemma are discussed infra Chapter 12, text accompanying notes 39-48.

26. See, e.g., Drew Fudenberg and Eric Maskin, "The Folk Theorem in Repeated Games with Discounting or with Incomplete Information," 54 Econometrica 533 (1986).

27. See infra Chapters 11-14. Robert Aumann, an esteemed game theorist, asserted in a talk at Stanford University on August 19, 1986, that he intuitively regarded the folk theorem as too gloomy.


For a graphic simulation of tit-for-tat and other strategies, see The Evolution of Trust.
