ALGORITHMIC JUSTICE

Alan Wolfe*

Julius stopped in front of his friend. "Listen, Rupert. If there were a perfectly just judge I would kiss his feet and accept his punishments upon my knees. But these are merely words and feelings. There is no such being and even the concept of one is empty and senseless. I tell you, Rupert, it's an illusion, an illusion."

"I don't believe in a judge," said Rupert, "but I believe in justice. And I suspect you do too, or you wouldn't be getting so excited."

"No, no, if there is no judge there is no justice, and there is no one, I tell you, no one."1

I.

If there are no metanarratives, is justice possible? With the exception of religious belief -- to which it is often compared2 -- the quest for justice invariably has involved grand stories that take people beyond the concerns of the material world into considerations of the transcendental. The just act, the just person, and the just society have been viewed as possessing an other-worldly nature, as if only heroic action on the part of heroic actors could achieve, or even approximate, them. Plato's stories may be, in Geertzian language,3 "thick," while Rawls' are "thin," but neither points toward standards directly observable in the everyday course of social practice, lying, as they are, either hidden in shadows or behind a veil of ignorance.

Those philosophical dispositions known as postmodernism, poststructuralism, and deconstruction4 -- in questioning whether there exist any standards of meaning, evaluation, taste, truth, or morality outside of the specific ways we make contingent rhetorical arguments about such contested terrains -- lead inexorably to the conclusion that no transcendental metanarratives of justice can exist. To be sure, skeptics such as Stanley Fish would claim, this does not mean that no standards of justice are possible, just as Barbara Herrnstein Smith argues that the absence of any uncontested aesthetic standard does not mean that objects of art cannot be assigned value.5

* Presented to the Seminar on Law and the Humanities, Cardozo Law School, Yeshiva University, and the Graduate Faculty, New School for Social Research, March 1990. Alan Wolfe is the Michael E. Gellert Professor of Sociology and Political Science and Dean of the Graduate Faculty, New School for Social Research.
1. I. Murdoch, A Fairly Honorable Defeat 218 (1970).
2. S. Levinson, Constitutional Faith 9-53 (1988).
3. C. Geertz, Thick Description: Toward an Interpretive Theory of Culture, in The Interpretation of Cultures 3-30 (1973).
4. There are obvious differences between these terms, as the polemics between Derrida and Foucault indicate. Nonetheless, for purposes of the argument presented here, all these approaches are suspicious of transcendental standards of justice that can be embodied in texts like constitutions, and thereby can be linked in this presentation -- contingently, of course.
Such arguments instead try to show that the standards we develop for such matters as justice and truth are the products of specific language games, conventions, shared normative understandings, or community practices, due to change when new contingencies arise from whatever source, including pure happenstance.

There seems little question that the air admitted to discussions of law through the windows opened by postmodernism has been refreshing. Perhaps interpreters of texts will never again be quite so certain in insisting that certain conclusions -- including ones dealing with the lives and liberties of real people -- follow directly and automatically from materials written down generations ago. But the epistemological skepticism introduced by the overlap between law and literary criticism does not resolve fundamental issues involved in the quest for justice so much as it alters their focus. If meaning does not exist in texts but instead in the interpretations brought to those texts by readers, what we require, instead of a theory of the text, is a theory of the reader.

People read the texts that other people write. (Although the structure of DNA has been compared to a text, to date I have not seen any efforts to deconstruct the texts of living species besides our own; similarly, non-living species, such as computers, can generate texts which humans, if they wish, can deconstruct, but machines do not seem capable of putting into any interpretative context the instructions given to them.) Moreover, not all people read and write. Infants do not, and neither do the illiterate nor the brain dead. The capacity to read and write is a potential, something that can only be undertaken by a self: a mature, socialized human individual who has grown up in a society and possesses the tools of culture given to her by that society.6 No adequate theory of readers is possible without a sociological theory of the self, without some notions, coming perhaps from writers like Mead, Schutz, Garfinkel, or Goffman, which seek to define the self, not as found in nature (for in nature there are no selves), but only as the product of society and its dynamics.7 Sociological theory since the nineteenth century has been premised upon one or another form of philosophical anthropology. Theorists may differ in how they claim humans to be a special and unique species, but it is common to all the great thinkers in the sociological tradition that humans do have special capacities and that these capacities are a product of the way they organize their artificial environments.

5. S. Fish, Doing What Comes Naturally: Change, Rhetoric, and the Practice of Theory in Literary and Legal Studies (1989); B. Herrnstein Smith, Contingencies of Value: Alternative Perspectives for Critical Theory (1988).
6. For a study of the impact that the ability to write (and, by implication, the ability to read) has on the structure of society, see J. Goody, The Logic of Writing and the Organization of Society (1986). The capacity of our reading material to link us together in a society sharing a moral framework is emphasized in W. Booth, The Company We Keep: An Ethics of Fiction (1988).
Since interpretation, at the very least, assumes that human beings can be reflective agents who can assign meaning to texts -- including those texts by which their affairs are regulated -- and adjust the meanings they find in those texts to meet the contexts and contingencies within which they find themselves, contemporary philosophical skepticism ought to find itself in historic continuity with the philosophical anthropology of sociological theory. And yet it does not. "Essentialism" runs against the grain of the contingency and relativism so characteristic of these philosophical tendencies, since, in assigning fundamental qualities to the human species, it assumes that at least some things are certain and exist in spite of the interpretations we give them. Indeed the distinction between nature and culture which lies at the heart of sociological theory, according to theorists like Herrnstein Smith, needs to be disarmed "of its ideological power. . . . With respect to human preferences," she writes, "nothing is uniform, universal, natural, fixed, or determined in advance, either for the species generally, or for any specific individual, or for any portion or fraction of the species, by whatever principle, sociological or other, it is segmented and classified . . . ."8

In short, postmodern philosophical perspectives are not only not neutral toward the way sociologists have defined the self, but actively hostile. Foucault's description of man as a historically contingent creation about to be washed away from the sand by the next epistemological wave hovers over nearly all such contemporary approaches to knowledge.9 In the heady Nietzschean atmosphere of contemporary thought, talk of the self verges close to humanism -- only humans, remember, have selves -- and that particular combination of naivete and arrogance alleged to be characteristic of Enlightenment thought. From a postmodernist perspective, one is led to address such apparently human matters as desire and need without positing the existence of autonomous human agents, as, for example, again in the case of Herrnstein Smith, who coins the phrase "desired/able" in order to indicate that the valued effect in question "need not have been specifically desired (sought, wanted, imagined, or intended) as such by any subject. In other words, its value for certain subjects may have emerged independent of any specific human intention or agency and, indeed, may have been altogether a product of the chances of history or, as we say, a matter of luck."10

Smith's concern is with the process of evaluation, with the way in which we establish standards of aesthetic judgement. But the question raised by her denial of human agency can be raised for theories of justice as well, since conceptions of justice always involve questions of evaluation. Indeed the stakes involved in developing a theory of human agency are higher when we discuss justice than when we discuss taste, for one can imagine people living in the absence of any transcendental standard of the latter -- although I am not sure I would want to -- but it is almost impossible to imagine them, at least in human form, living in the absence of the former.

7. For a conception of the self close to the one I am discussing here, see C. Taylor, Sources of the Self (1989).
8. B. Herrnstein Smith, supra note 5, at 78.
9. M. Foucault, The Order of Things: An Archaeology of the Human Sciences 387 (1970).
Yet legal theorists attracted to postmodernism are as reluctant as literary theorists to acknowledge the existence of an autonomous self; Thomas Heller, for example, discussing the indeterminacy of the law, points out that it "does not arise because the standpoint of the human individual is in some way privileged or central. Rather, indeterminacy is an element of the grammar of complex systems or a feature of the observation/system relationship."11

The minimal condition for a theory of justice is that we find a justification or legitimation for constraint. Other participants in the intellectual division of labor, especially economists, may argue for freedom, although it is not too difficult to perceive that the market is a prison as well as an opportunity.12 Law talk, by contrast, is always explicitly about regulation, the intellectual problem at hand being one of understanding -- and in some cases trying to change -- rules that make possible life in groups. Note that even those most committed to a skeptical epistemological stance in no way deny the constraints involved in thinking about law; their point is simply that those constraints have no ultimate justifications, only local, contingent, and socially constructed ones. Given that there will be constraints, we can judge a theory of judgment by the legitimacy of the standards it establishes for restraining our actions. The postmodernist reply that there are no non-contingent standards is useless; for even the most radical versions of postmodern theory, as we have seen, still presuppose a human ability to interpret the contexts in which people find themselves. If we accept that minimal philosophical anthropology as our non-contingent standard, the question we can ask is: how, in the absence of both a theory of texts and a theory of people, can postmodern theories of justice legitimize obedience to rules in such a way as to make those who are subject to such rules better interpreters of the rules that rule over them?

Epistemological skeptics imagine two ways by which human affairs will be regulated if we deny the possibility of any standards of justice outside the purely contingent and local. One was suggested by Thrasymachus -- the first postmodernist -- and is repeated, in more elegant form, by Foucault and those inspired by him. Everything being power, the only antidote to oppression is a transformation in the relations of power. Appeals to justice, from such a perspective, are naive and self-defeating, a lingering symptom of woolly-headed humanism. One might just as well ask an earthquake to stop rumbling as ask holders of power to bind their actions in accord with some preexisting standard of justice. Replace all justice discourse by power discourse and then we can begin to talk about who makes the rules and how. "Does might make right?" Stanley Fish asks.

10. B. Herrnstein Smith, supra note 5, at 193.
11. Heller, Accounting for Law, in Autopoietic Law: A New Approach to Law and Society 307 (G. Teubner ed. 1988).
12. For relevant demonstrations of this point see, e.g., Lindblom, The Market as Prison, 44 J. of Pol. 324 (1982); Preston, Freedom, Markets, and Voluntary Exchange, 78 Am. Pol. Sci. Rev. 959 (1984); West, Authority, Autonomy, and Choice: The Role of Consent in the Moral and Political Visions of Franz Kafka and Richard Posner, 99 Harv. L. Rev. 384 (1985).
"In a sense the answer I must give is yes, since in the absence of a perspective independent of interpretation some interpretive perspective will always rule by having won out over its competitors."' 3 Or, more epigrammatically, "the gun is always at your head."' 4 It seems doubtful that an approach emphasizing the ubiquity of power and force in human affairs could generate an account of justice that takes cognizance of human interpretative capacities, although someone may come up with an argument to that effect. The conventional response, in this case, seems like the correct one: if all knowledge reflects the power of contending forces, then the way to constrain individuals is not to rely on persuasion but coercion. Fish, who believes that persuasion--e.g., rhetoric-is coercion, consequently holds that human agents have strikingly little freedom in 13 S. Fish, supra note 5, at 10. 14 Id. at 520. 1990] 1413 CARDOZO LAW REVIEW these matters: "In the end we are always self-compelled, coerced by forces-beliefs, convictions, reasons, desires-from which we cannot move one inch away."15 The theory of the self associated with any such answer to the quest for justice is a theory asserting that there can be no self, or at least not a very autonomous one. As an alternative to the justice-lies-in-the-interests-of-thestronger kind of argument, there is another way to think about constraint contained within postmodern approaches to legal regulation, and it is the one on which I want to focus in this paper. There being no truths or standards outside the operation of a system, this way of thinking argues, then the rules that structure the system lie within the system. Each system is governed by its own laws, and such laws have as their goal the reproduction of whatever system in which they are found. The inspiration for such ways of thinking about rules comes, not from the grand tradition of Western metanarratives about justice, but instead from cybernetics, information theory, economics, population ecology, quantum physics, cellular autonoma, linguistics, sociobiology, artificial intelligence, DNA, and chaos theory. I will call such conceptions of justice algorithmic. They offer a different solution to the nihilism that seems to lie within deconstruction. We do not, if we follow such an approach, haveto conclude that because there are no metanarratives there are no rules. Rather we can govern our affairs. and at the same time avoid privileging any one version of the good by imagining our rules to be self-referential to that activity, whatever it is, in which we find ourselves engaged. Although algorithmic notions of the good avoid the stark view of coercion inherent in arguments that equate knowledge and power. they are even less charitable toward the possibility of an interpretative and autonomous self. Algorithms are rules designed to be followed with as little interpretive variation as possible. They may help explain how computers function and how species other than our own regulate their affairs, although, as I will try to show, there is a strong case that even in those cases non-algorithmic rule following is more important than researchers, at first, realized. They can, however, only be applied to human affairs if we accept the notion that humans are precoded rule followers. 
Yet if humans are following instructions algorithmically, then they will have no interpretive capacities, will be unable to read texts, will not be able to supply meaning to documents that can inherently have no meaning, and, as a result, will be subject to a fate of following rules without any input into how those rules are formulated and applied. Surely that is a conclusion that postmodernists would wish to avoid at all costs.

At one level, postmodernists certainly do wish to avoid such a conclusion; Fish, for example, finds in Chomskian linguistics an almost complete algorithmic system, a formalism of truly nightmarish dimensions.16 One alternative to Chomsky, of course, would be to develop a kind of sociolinguistics -- such as that associated with ethnomethodology in sociology -- in which meaning would be understood as what human speakers provide in the contexts within which their conversations take place.17 But a move in this direction is a move toward the self, constituting a step back up the slippery slope of essentialism that we have just, in turning to postmodernism, slid down. Postmodern theories of justice, I will argue, faced with a choice between making a commitment to a theory of the self demanded by their interpretive face and relying on algorithmic conceptions of rule following associated with their skeptical face, tend to adopt the latter. Sometimes this is explicit, as in the case of Niklas Luhmann, Gunther Teubner, and others attracted by cybernetics and information theory.18 At other times the move toward algorithmic justice is more reluctant, opting not for "hard" algorithms, such as those associated with artificial intelligence and Chomskian linguistics, but instead for "soft" algorithms associated with the automatic and "natural" following of the rules of a practice. Still, hard or soft, what characterizes algorithmic justice is a lack of appreciation for the rule-making, rule-applying, and rule-interpreting capacities of human beings and an emphasis instead on their rule-following character. The price postmodernism pays for its flirtation with algorithmic conceptions of justice is a very high one: the denial of the liberation, play, and spontaneity that inspired radical epistemologies in the first place.

II.

To provide legitimacy for the enormously difficult task of coordinating our actions toward common goals without relying on force, a conception of justice must mean something to those who will be governed by its imperatives. Yet meaning is precisely what texts cannot possess according to much of the philosophical inclination under discussion here. Texts nonetheless contain words. Do those words convey anything if they do not convey meaning? At least for some thinkers working within postmodern philosophical assumptions, texts, if not capable of conveying meaning, are capable of conveying information. It ought to be immediately clear that information and meaning are not only not the same thing, but that they can work at cross purposes.

15. Id.
16. Id. at 315-20.
17. See, e.g., Schegloff & Sacks, Opening Up Closings, in Ethnomethodology: Selected Readings 233-64 (R. Turner ed. 1979).
18. See the bulk of the essays in Autopoietic Law: A New Approach to Law and Society, supra note 11. One of the exceptions, the arguments of which overlap to some degree with those here, is Rottleuthner, Biological Metaphors in Legal Thought, in Autopoietic Law: A New Approach to Law and Society, supra note 11, at 97.
(American voters, for example, have more information than ever before about the candidates for whom they vote, yet seem to cast votes that are less meaningful than ever before, in the sense of making sense of how they behave.) Meaning is a macro phenomenon that involves making larger sense out of smaller bits, while information, especially in the computer age, reduces larger complexity into smaller, and presumably more manageable, units. If we accept one distinction between symbols and signs -- that the former work top down and the latter bottom up19 -- then symbols can have meaning, while signs convey information.

From the standpoint of a theory of communication, information has remarkable properties, ones that have been seized upon by theorists to develop information processing machines of great potential. When it was discovered that certain phenomena found in nature, such as the structure of DNA, were also understandable as information processing mechanisms, a unified theory of cognition began to seem possible. Surely a number of human activities, such as language, could be understood as the reduction of complexity through information processing, and, since thinking was believed to take place in its own language,20 it was a short step to the conclusion that the human brain was also an information processor. Once that insight was accepted, then all human activities -- including not only how we speak, but how we write poetry, compose music, make our laws, conduct our economic activities, and everything else -- could be understood to be governed by similar dynamics. The unified theory of cognition promised by the information processing model, in other words, offered not only to unify what we understand to be the sciences, but to link the sciences together with both the social sciences and the humanities.

What is often called postmodernism is fascinated by the potential of information processing. This is certainly true of the inventor of the phrase, Jean-François Lyotard, who has a tendency to take extreme, and rather dubious, positions vis-à-vis the capabilities of information processing, such as suggesting that artificial intelligence will be capable of translating from one "natural" language to another, that computers could "aid groups discussing metaprescriptives by supplying them with the information they usually lack for making knowledgeable decisions," and that data banks will serve as "nature" for postmodern individuals.21 But the fascination with information processing is not just a Lyotardian quirk. The writings of Deleuze and Guattari, for example, are filled with images of machines that program other machines in ever-recurring fashion, down to the notion that the structure of desire takes the form of a binary system.22 It may well be the case that the tendency to attribute to information all the capacities that one has stripped from meaning characterizes many thinkers who believe that knowledge is defined by relationships among signs, rather than by reference to any "reality" -- including symbols containing meaning -- standing behind the signs themselves.

19. See, for a typical account, H. Pagels, The Dreams of Reason 192-94 (1988).
20. J. Fodor, The Language of Thought (1975).
Information theory is usually thought of in connection with developments in cognitive science, mathematics, linguistics, decision theory, and rational choice theory -- all of them closer in spirit to the epistemological certainty and rationalistic clarity that postmodernism rejects. Yet the matter is clearly more complicated than that. The two intellectual giants who created the framework for postmodernism and deconstruction -- Nietzsche and Saussure -- were both attracted to cybernetic notions of self-regulating systems, because the rules governing such systems made it possible for the relationships between things to keep them suspended in air without being either ground down to a reality beneath them or tied to a reality above them.

The case of Nietzsche is particularly instructive in this regard, given the importance that he has assumed in the law and literature debates.23 But it is not the Nietzsche whose perspectivism is so attractive to critical legal scholars that is important here, but instead the Nietzsche of Zarathustra. In speaking of the metamorphosis of the lion into the child in one of his early speeches, Zarathustra introduces the image of a "self-propelled wheel," preparing the way for his later discussion of the eternal recurrence -- images and concepts quite similar to the ideas of Gödel, Escher, and Bach which have been found to be compatible with the age of information machines.24 Moreover, even if we do not accept the notion of the eternal recurrence as a cosmology -- which Nehamas, in defending Nietzsche, asks that we do not -- we can still accept it not as a "theory of the world but a view of the self."25 Nietzsche's somewhat mysterious references to the notion that if we could live our lives over again we would live them in exactly the same form as we have can, therefore, be read as a kind of gedankenexperiment designed to show that the world is still possible without selves that can be defined by essential, non-contested, features.

A fascination with eternal recurrence, with the notion that automatic processes can generate exactly similar responses over and over again, would seem to characterize all those thinkers who are skeptical of the possibility of autonomous, choosing selves. Considering the importance attached to notions about the death of the author associated with Barthes -- let alone the Derridean suspicion of there being anything outside the text -- self-recurrence takes on a special fascination in the literary culture inspired by postmodernism, which recognizes in Borges, for example, the postmodernist par excellence.26 One ought not to be surprised, consequently, that postmodernist thought, which is so inspired by Nietzsche, can also overlap so significantly with the "self-propelled wheels" now known as Turing machines or computers. What they all have in common is a distrust of the active self and, as a result, an attraction to algorithmic imagery.

21. J. Lyotard, The Postmodern Condition 4, 51, 67 (1984).
22. See, e.g., G. Deleuze & F. Guattari, Anti-Oedipus: Capitalism and Schizophrenia 14 (1977).
23. Levinson, Law as Literature, in Interpreting Law and Literature: A Hermeneutic Reader 155-73 (1988); for an argument that Levinson has misapplied Nietzsche's ideas about how texts can be interpreted, see Weisberg, On the Use and Abuse of Nietzsche for Modern Constitutional Theory, in id. at 181-92.
Writing about artificial intelligence, for example, Sherry Turkle points out that:

If mind is a program, where is the self? [AI] puts into question not only whether the self is free, but whether there is one at all. . . . In its challenge to the humanistic subject, AI is subversive in a way that takes it out of the company of rationalism and puts it into the company of psychoanalysis and radical philosophical schools such as deconstructionism. . . . Artificial intelligence is to be feared as are Freud and Derrida, not as are Skinner and Carnap.27

From this perspective, the contrast between the rationalism of cognitive science and the irrationalism of postmodernism and deconstruction takes a backseat to their common attitude toward the non-autonomy of the self.

The extreme representative of the common ground shared by information theory and literary postmodernism is Michel Serres, who has incorporated all the reference points for information theory -- entropy, Maxwell's demon, the second law of thermodynamics, Claude Shannon, and Boltzmannian quantum physics -- into a theory of the origins of language. Information theory allows Serres to develop a theory of communication without there necessarily being any communicators. In contrast, for example, to Habermas, who specifies two parties to a communication (and who, in so doing, inspires heavy-handed critique from Lyotard),28 Serres shows how language may be possible without knowing anything about its origins:

I know who the final observer is, the receiver at the chain's end: precisely he who utters language. But I do not know who the initial dispatcher is at the other end. I am confronted indefinitely with a black box, a box of boxes, and so forth. In this way, I may proceed as far as I wish, all the way to cells and molecules, as long, of course, as I change the object under observation.29

As might be expected, Serres's theory about the origins of language has little to do with the notion of an autonomous self:

There is only one type of knowledge and it is always linked to an observer, an observer submerged in a system or in its proximity. And this observer is structured exactly like what he observes. . . . There is no more separation between the subject, on the one hand, and the object, on the other. . . .30

As Serres's remarks would seem to indicate, communication is possible within the terms of information theory, but interpretation is not. Information can only be transmitted, not read. The act of reading, by bringing an interpreting self in confrontation with a text, can only be viewed, from the perspective of information theory, as noise. Although one may argue that Serres's approach provides "a unique example of the possibilities opened up by bringing literary culture and scientific thought into play with one another,"31 it is hard to see how.

24. D. Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid (1979); F. Nietzsche, Thus Spoke Zarathustra, in The Portable Nietzsche 139 (1954).
25. A. Nehamas, Nietzsche: Life as Literature 150 (1985).
26. Foucault's The Order of Things, which ends with the image of man being washed away in the sand, begins with a discussion of a passage from Borges. M. Foucault, supra note 9, at xv. Poe and Roussel, two other novelists fascinated with the entrapping imagery of machines, are also important literary sources for postmodernist speculations.
27. Turkle, Artificial Intelligence and Psychoanalysis: A New Alliance, 117 Daedalus 241, 245 (1988).
Hence Paulson, who wants literary critics to take information theory seriously, winds up concluding that even though literature, as an artifact of culture, may only be a form of noise, still "[w]hat literature solicits of the reader is not simply reception but the active, independent, autonomous construction of meaning."32 Without ever explicitly suggesting so, Paulson's study suggests that although there are strong similarities between deconstruction and information theory, the former at least allows for readers, even if it does not theorize much about them, while the latter does not, and cannot. In pushing information theory to its logical conclusion, these efforts make clear why a purely algorithmic approach to communication is inappropriate to the texts that human beings write and read: meaning exists when human selves attribute characteristics to the symbols around them, while information requires only relationships between signs, irrespective of whether there exist selves reading into those signs anything whatsoever. Surely the attraction of information theory is its promise that it can bypass the problem of meaning in a philosophical culture where meaning has become so problematic. In doing so, however, it renders the readers of texts into passive receivers of information, as if they were computer programs or DNA molecules.

Recent developments in both artificial intelligence and biology suggest, more than ironically, that the notion of an algorithmic transmittal of information is not only of little relevance to humans, but also not completely characteristic of what takes place either in machines or in other living species. The recent history of artificial intelligence ("AI"), in fact, constitutes a major attack on the notion of algorithmic rule following. Although a number of starts were made in AI research that were non-algorithmic in nature -- including Frank Rosenblatt's notions of perceptrons and the expert systems approach adopted by Newell and Simon33 -- the early decades of work in AI were inspired by efforts to represent the real world in machines through the device of giving machines programs written as precisely as possible.34 If the software instructions were precise enough, the argument ran, then whether the von Neumann architecture of a central processing unit actually modelled the way human brains worked was irrelevant.

28. J. Lyotard, supra note 21, at 60-67. Lyotard also has strongly critical views toward Luhmann, yet, at least with respect to their joint fascination with cybernetics and the reduction of complexity, they share more than he is prepared to admit. See N. Luhmann, The Differentiation of Society (1982).
29. Serres, The Origin of Language: Biology, Information Theory, and Thermodynamics, in Hermes: Literature, Science, Philosophy 82 (1982).
30. Id. at 83.
31. W. Paulson, The Noise of Culture: Literary Texts in a World of Information 31 (1988).
32. Id. at 139.
33. A. Newell & H. Simon, Human Problem Solving (1972); F. Rosenblatt, Principles of Neurodynamics (1962).
34. Even the simplest instructions imagined in the Turing machine, however, can, from Wittgensteinian premises, be understood as something more than the simple following of a rule. See on this point Shanker, Wittgenstein Versus Turing on the Nature of Church's Thesis, 24 Notre Dame J. of Formal Logic 615 (1987).
It turned out, however, that the scripts and frames proposed by researchers such as Minsky and Schank to represent the real world were so brittle in nature that the limits of a purely algorithmic approach to artificial intelligence were quickly reached.35 As one critic pointed out, the problem with such an approach was that for the machine to know anything, it first had to know everything.36 Machines, in short, could clearly be programmed to follow rules, but whether such rule following constituted intelligence in anything like the way that quality is generally understood was another matter. Intelligence, at least in human form, is, according to two neurobiologists and one mathematician, non-algorithmic in nature; human brains work, not by following rules, but by recognizing realities in the larger world and thereby incorporating experience and context into the thinking process.37

The failure of "software" approaches to AI was hailed in some quarters of the artificial intelligence community, especially among those who believed that the proper way to design machines was not by creating software programmed with precise instructions, but literally by designing machines to resemble the presumed architecture of human brains. Connectionist, neural net, or parallel distributed processing models -- as they came to be called -- did not begin with the assumption that memory could be stored in a CPU, to be accessed through instructions in the form of rules. Instead, networks of electrical charges were constructed in such a manner that machines could "learn" by using the strengths between connections to narrow down a problem until a solution was found that was correct or, at least, less incorrect than a series of possible solutions that were rejected.38 Precision -- what I have been calling, following Nietzsche, eternal recurrence -- was sacrificed in such approaches for the flexibility introduced by allowing machines to "settle in" to solutions.39 Algorithms, in short, already found to be inappropriate to humans, were similarly found to be inappropriate to machines.

35. R. Schank & R. Abelson, Scripts, Plans, Goals, and Understanding (1977); Minsky, A Framework for Representing Knowledge, in Mind Design 95 (J. Haugeland ed. 1981).
36. I. Rosenfield, The Invention of Memory 112 (1988).
37. The neurobiologists are I. Rosenfield, id. at 144-45, and G. Edelman, Neural Darwinism 44 (1987). The mathematician is R. Penrose, The Emperor's New Mind 405-18 (1989).
38. For various approaches, see P. Churchland, Neurophilosophy: Toward a Unified Science of the Mind-Brain (1986); Foundations (Parallel Distributed Processing: Explorations in the Microstructure of Cognition No. 1, 1986); S. Grossberg, Neural Networks and Natural Intelligence (1988); C. Mead, Analog VLSI and Neural Systems (1989); Churchland & Sejnowski, Perspectives on Cognitive Neuroscience, 242 Science 741 (1988).
39. Although PDP approaches are less attracted to the kinds of eternal braids discussed by Hofstadter, they resonate with postmodern themes in another way. Like deconstruction, connectionist approaches to AI believe that meaning does not lie behind any text but only emerges, if it exists at all, as a relationship between signs. As D. A. Norman puts it, "I believe the point is that PDP mechanisms can set up almost any arbitrary relationship. Hence, to the expert, once a skill has been acquired, meaningfulness of the relationships is irrelevant." Norman, Reflections on Cognition and Parallel Distributed Processing, in Psychological and Biological Models 531, 544 (Parallel Distributed Processing: Explorations in the Microstructure of Cognition No. 2, 1986).
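The contrast between being handed a rule and "settling in" to one can be suggested with a deliberately tiny sketch (again mine, written in Python, and not taken from the connectionist literature cited above; the learning rate and number of passes are arbitrary assumptions). Nothing in the program states the rule for logical AND; the machine is shown only examples, and the strengths of its two connections are nudged after each error until its answers stop being wrong.

    # A toy single-unit network that "learns" logical AND from examples
    # by adjusting connection strengths, rather than being given the rule.

    examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

    weights = [0.0, 0.0]   # connection strengths
    bias = 0.0
    rate = 0.1             # how far each error nudges the connections

    def output(x):
        s = weights[0] * x[0] + weights[1] * x[1] + bias
        return 1 if s > 0 else 0

    for _ in range(25):                      # repeated exposure to the examples
        for x, target in examples:
            error = target - output(x)       # no error, no adjustment
            weights[0] += rate * error * x[0]
            weights[1] += rate * error * x[1]
            bias += rate * error

    print([output(x) for x, _ in examples])  # settles to [0, 0, 0, 1]

The "rule" the network ends up with is nothing but a particular configuration of connection strengths, which is the sense in which such systems are said to learn without being told.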
Purely algorithmic understandings of information transmittal have received a blow from another quarter: the process by which DNA sends instructions through its replication and thereby makes possible species development. The roughly parallel achievements of realizing Turing machines of great power and uncovering the structure of DNA presented an irresistible challenge to sociobiologists in particular: genes present information in the form of instructions which determine the trajectory by which a species evolves. As Richard Dawkins put the matter, "We are survival machines -- robot vehicles blindly programmed to preserve the selfish molecules known as genes."40 So convinced was Dawkins of the appropriateness of the computer metaphor to the evolutionary process that he developed a software program that would enable the user to trace the patterns of many different evolutionary possibilities by specifying relevant features at the beginning of the process, in order to understand how even very slight flaws in the transmission of messages (such as those contained in DNA) create fantastic variation over enormous periods of time.41

One should note, therefore, that both of the leading representatives of sociobiological thinking -- Dawkins on the one hand and Edward Wilson on the other -- found that a purely algorithmic understanding of genetic transmission ultimately could not explain the speed of human evolutionary changes. Dawkins, for example, after spending an entire book discussing selfish genes, concluded that the day of the gene was past; in the future, cultural transmission -- represented in what he called memes (from mimesis) -- would take over and, being superior, drive out genetic transmission entirely.42 Meanwhile Wilson, together with Charles Lumsden, rejected the analogy with computers completely, on the grounds that the memory capacity of human brains would have to be larger than we can imagine to contain all the precoded instructions sufficient to account for human evolution.43 Instead Lumsden and Wilson argued for what they called "gene-culture coevolution,"44 a process by which the biological and the social share in determining the course of human evolution. They introduced a distinction between primary and secondary epigenetic rules as a way of recognizing that the latter allowed for the possibility of autonomous minds affecting the course of evolutionary development,45 an important concession, but one that still left open the possibility of third-order epigenetic rules (and others beyond that) in which mind was understood to play an even greater role than they were prepared to admit.

40. R. Dawkins, The Selfish Gene ix (1976).
41. R. Dawkins, The Blind Watchmaker (1986).
42. R. Dawkins, supra note 40, at 205-14. Dawkins does not address the issue of whether memes reproduce themselves less algorithmically than genes. He does write, however, that "[t]he computers in which memes live are human brains." Id. at 211. Since, as we have seen, human brains are believed to function non-algorithmically, the shift from genetic to cultural transmission would seem to indicate a less algorithmic process.
43. C. Lumsden & E. Wilson, Genes, Mind, and Culture 332 (1981).
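The mechanism on which Dawkins's program is said to trade -- faithful copying marred by occasional errors, compounded over many generations -- can be suggested in a few lines. The sketch below is my own illustration in Python, not Dawkins's program; the alphabet, error rate, and number of generations are arbitrary assumptions. It shows two lineages copied from a common ancestor drifting apart simply because each copy is very slightly imperfect.

    # An illustrative toy, not Dawkins's actual program: copying a message
    # with a small per-character error rate, generation after generation.
    import random

    random.seed(1)
    ALPHABET = "ACGT"                        # arbitrary stand-in for a genetic alphabet

    def copy_with_errors(message, error_rate=0.01):
        # Each character is copied faithfully except, rarely, a random substitution.
        return "".join(
            c if random.random() > error_rate else random.choice(ALPHABET)
            for c in message
        )

    ancestor = "ACGT" * 10                   # a 40-character ancestral "message"
    lineage_a = lineage_b = ancestor
    for generation in range(500):            # 500 rounds of imperfect copying
        lineage_a = copy_with_errors(lineage_a)
        lineage_b = copy_with_errors(lineage_b)

    differences = sum(a != b for a, b in zip(lineage_a, lineage_b))
    print(f"After 500 generations the lineages differ at {differences} of {len(ancestor)} sites")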
Their concessions to their critics, in short, were probably not enough to explain how evolution took over and produced such enormous variation in the development of our species in such a remarkably short (by evolutionary standards) amount of time. Algorithmic understandings of evolutionary dynamics, in any case, have been found as problematic in biology as they have been in computer science.

Perhaps the information processing model associated with these sciences is flawed, even in areas where, unlike with humans, it had been expected to work. If so, then there may be reason to question some of the assumptions of information theory. Oyama, for example, has argued that the notion of self-reproducing feedback loops so essential to information theory is a metaphor developed because the existence of computers provided the relevant imagery. But one would be incorrect, in her view, to adopt a preformationist attitude toward information, that is, to conclude that information always exists before the means developed for its transmission are imagined. "The developmental system," she writes, ". . . does not have a final form, encoded before its starting point and realized at maturity."46 We need to conceptualize the transmission of information, rather, as developmental, as a process that adds something in the course of its evolution rather than spinning around within the same already existing information. Oyama's arguments, of course, are not a refutation of information theory, and one may still argue, as some biologists do, that life itself is "autopoietic" in the sense that "living beings are characterized in that, literally, they are continually self-[re]producing."47 Still, the possibility that self-recurrence may not be a characteristic found in nature does raise the question of whether it is a helpful way to think about society, since society is generally held to be populated by human agents whose actions can alter purely algorithmic codings.

Two conclusions, then, can be reached about the search for the perfect algorithm. One is that if there are perfect algorithms, they are incompatible with the notion of freely choosing autonomous selves. Systems, not their components, have autonomy in a purely algorithmic world, just as, in some of the more mechanistic views of sociologists like Durkheim and Parsons, social structures, not individuals, determine consequences.48 At the same time, however, we have also seen that even if imaginable in theory, a perfectly algorithmic system is in practice far more difficult to realize than at first understood. Algorithmic machines are too brittle to resemble human intelligence. Genetic transmission of information, especially in the case of humans, takes place over time periods far too short for algorithmic principles to be able to explain them. Information theory, far from providing the basis for a unified theory of cognition, may be highly limited in its applications to relatively contrived situations. The search for the perfect algorithm is both futile and self-defeating.

44. Id. at 19-34.
45. Id. at 53-58.
46. S. Oyama, The Ontogeny of Information 23 (1985).
47. H. Maturana & F. Varela, The Tree of Knowledge 43 (1987). As might be expected, Maturana and Varela are fascinated by ideas of self-recurrence, using a drawing from Escher to make their point. Id. at 25.

III.
If algorithmic notions are as problematic as they seem, even in areas such as artificial intelligence and genetic transmission, they seem even less likely to be of use in such human and social activities as reflecting on justice (or knowledge, morality, and taste). They do, nonetheless, possess one feature which makes them attractive to postmodernist thinkers: their denial of the possibility of autonomous human agency overlaps with the suspicion of humanism and the negation of the self so characteristic of Derrida, Lacan, Barthes, Foucault, and other influential thinkers. Hence although algorithmic thinking is highly formalistic and anti-interpretative, many contemporary theorists cannot avoid the temptation to introduce algorithmic conceptions into their arguments.

Perhaps the most interesting example of the power of algorithmic imagery is the effort by Barbara Herrnstein Smith to make a case against any non-contingent standards of evaluative judgement. For Herrnstein Smith, the Western humanistic tradition has sought standards of "transcendence, endurance, and universality"49 in its evaluation of literary works, but her own personal relationship to Shakespeare's sonnets convinces her instead that "everything is always in motion with respect to everything else."50 If value is therefore never a fixed attribute of any particular product under evaluation, how do certain cultural products come to be seen as worthy, while others are assigned to the dust bin of culture? Herrnstein Smith relies on economics for an answer: each of us has a personal economy of needs and resources. "Like any other economy, moreover, this too is a continuously fluctuating or shifting system, for our individual needs, interests, and resources are themselves functions of our continuously changing states in relation to an environment that may be relatively stable but is never absolutely fixed."51 Markets, then, play a role in the creation of literary standards, and not only in the narrow economic sense of money. But the important question is what kind of market this is: are we talking of the kinds of rigged and fixed markets which radical critics since Marx believe drive capitalist societies, or instead of the purely automatic, homeostatic markets envisioned by eighteenth-century liberals?

48. "The division of labor does not present individuals to one another, but social functions." E. Durkheim, The Division of Labor in Society 407 (1964). Interestingly enough, theorists in artificial intelligence, stumped by how essentially dumb bits of information can be linked together into something smart called intelligence, utilize images strikingly similar to Durkheim's: "Each mental agent by itself can only do some simple thing that needs no mind or thought at all. Yet when we join these agents in societies -- in certain very special ways -- this leads to true intelligence." M. Minsky, The Society of Mind 17 (1986).
For Herrnstein Smith, it is clearly the latter: markets are interesting to her because they work independently of the desires of the agents in the market.52 Like contemporary rational choice theorists, Herrnstein Smith argues that self-interest drives everything we do: "We are always, so to speak, calculating how things 'figure' for us -- always pricing them, so to speak, in relation to the total economy of our personal universe."53 But unlike rational choice theorists, Herrnstein Smith does not believe that "we" do this calculating consciously and as autonomous choosers:

Most of these "calculations," however, are performed intuitively and inarticulately, and many of them are so recurrent that the habitual arithmetic becomes part of our personality and comprises the very style of our being and behavior, forming what we may call our principles or tastes -- and what others may call our biases and prejudices.54

Since we find in Herrnstein Smith's account a picture of the market which is far more invasive than anything found in writers like Gary Becker and Richard Posner, who see markets everywhere, we do not wonder that algorithmic conceptions of self-regulating systems come to dominate her account of how standards of taste become established. So much is in motion at such speeds that the only possible regulation of the whole process is automatic regulation, or what Herrnstein Smith calls an "evaluative feedback loop":

Every literary work -- and, more generally, artwork -- is thus the product of a complex evaluative feedback loop that embraces not only the ever-shifting economy of the artist's own interests and resources as they evolve during and in reaction to the process of composition, but also all the shifting economies of her assumed and imagined audiences, including those who do not yet exist but whose emergent interests, variable conditions of encounter, and rival sources of gratification she will attempt to predict -- or will intuitively surmise -- and to which, among other things, her own sense of the fittingness of each decision will be responsive.55

Not surprisingly, therefore, Herrnstein Smith finds attractive any way of thinking that emphasizes algorithmic processes.

49. B. Herrnstein Smith, supra note 5, at 28.
50. Id. at 15.
51. Id. at 31.
52. There are occasions in her text, however, where Smith takes the opposite tack and argues that markets are rigged: "The linguistic market can no more be a 'free' one than any other market, for verbal agents do not characteristically enter it from positions of equal advantage or conduct their transactions on an equal footing." Id. at 111.
53. Id. at 42.
54. Id. at 43.
In the course of her discussion, she touches on the possibility that human brains may be cognitively hard-wired in predetermined ways;56 criticizes the Habermasian notion of rational communicative standards on the ground that we speak, as we spend, only out of self-interest, so that honesty in speech, if it ever exists, is the product of a Mandevillian lack of intention;57 adopts information theory as the model for an epistemology in which "what is traditionally referred to as 'perception,' 'knowledge,' 'belief,' . . . would be an account of how the structures, mechanisms, and behaviors through which subjects interact with -- and, accordingly, constitute -- their environments are modified by those very interactions";58 uses information theory to explain that evaluative classifications exist so that "energy need not . . . be expended on the process of classification and evaluation each time a similar array is produced";59 argues that often such classifications are "fixed in the DNA";60 and relies on Brownian motion and Nietzsche's "play of forces" to criticize any who suggest an "overall, underlying, or ultimate governing outcome toward which each instance of human productive-acquisitive or consummatory-expenditure activity (all making, getting, and spending, we might say) is directed . . . ."61

Most illustrative of all, however, is Herrnstein Smith's account of why certain products of culture have entered our canon. In answering this question, she is not only more economistic than the most committed rational choice economist, she is also more taken with genetic theories of evolution than most sociobiologists. Artistic texts survive the way species do: "These interactions are, in certain respects, analogous to those by virtue of which biological species evolve and survive and also analogous to those through which artistic choices evolve and are found 'fit' or fitting by the individual artist."62 Evolutionary feedback loops allow Herrnstein Smith to resolve the question in aesthetic theory proposed by Hume: why do we consider Homer great? The answer is not that Homer survived because he was great but that because he survived he is considered great. "Nothing endures like endurance."63 Images of eternal recurrence combine with Durkheimian functionalism to explain the secret of Homer's success:

Repeatedly cited and recited, translated, taught and imitated, and thoroughly enmeshed in the network of intertextuality that continuously constitutes the high culture of the orthodoxly educated population of the West (and the Western-educated population of the rest of the world), that highly variable entity we refer to as "Homer" recurrently enters our experience in relation to a large number and variety of our interests and thus can perform a large number of various functions for us. . . .64

Although Barbara Herrnstein Smith is not writing about justice but about evaluation, her analysis demonstrates the linkage between the position that there are no non-contingent standards in the world and the need, consequently, for automatically functioning regulatory mechanisms. When we turn to writers who are directly concerned with justice, we find exactly the same linkage.

The clearest example is Luhmann, who finds in cybernetics and information theory an answer to the question of what makes society possible. Luhmann wants to understand how societies -- which are not only enormously complex, but are also, in their modern form, more complex than ever before -- reproduce themselves through time. Arguing, in a tradition that goes back to Mandeville and Adam Smith, that no conscious direction can ever guide a system so complex, Luhmann turns to methods by which systems reduce their complexity. Computers, of course, reduce complexity by dividing all information into bits that can be expressed as zeros and ones. So, argues Luhmann, do legal systems. A legal system can exhaust the entire realm of the possible through the legal/illegal dichotomy. That distinction, in a sense, constitutes the "hardware" of a legal system.

55. Id. at 45.
56. Id. at 101.
57. Id. at 108-09.
58. Id. at 95.
59. Id. at 122.
60. Id.
61. Id. at 144.
62. Id. at 47.
63. Id. at 50.
64. Id. at 53.
In order for the system to take a decision in a specific case, "software" programs access the system, feeding back into the "memory" and thereby creating new rules that anticipate future programs.65 For Luhmann, the dynamics of specific cases introduced into the system continuously redefine the binary codes, interacting again with new programs in ways that resemble eternal recurrence. The whole system, he argues,

is a matter of a specific technique for dealing with highly structured complexity. In practice this technique requires an endless, circular re-editing of the law: the assumption is that something will happen, but how it will happen and what its consequences will be has to be awaited. When these consequences begin to reveal themselves they can be perceived as problems and provide an occasion for new regulations in law itself as well as in politics. Unforeseeable consequences will also occur and it will be impossible to determine if and to what extent they apply to that regulation. Again, this means an occasion for new regulation, waiting, new consequences, new problems, new regulation and so on.66

"Autopoietic law" thereby, according to Luhmann, avoids many of the problems faced by other philosophies of law. It explains how a legal system can change, for example, and it provides a way of thinking about the law that guarantees its autonomy from other systems. Luhmann's theories about the law overlap with postmodernism because autopoietic systems are non-hierarchical. Being circular in their dynamics, they avoid privileging any one set of legal norms over any other; as Luhmann expresses it, "There can therefore be no norm hierarchies," or, somewhat more self-reflectively, "legal forms are valid because they are valid."67

But if legal scholars turn to autopoietic theories of justice out of a generalized commitment to principles of equality, they may find the equality not worth having. Since notions of self-regulation come primarily from biology on the one hand and artificial intelligence on the other, any legal system designed by such principles cannot incorporate specifically human capabilities, such as the possibility that autonomous human subjects can interpret the instructions given to them out of their history and contexts. For if the agents ruled by laws can interpret laws, then automatic self-regulation no longer exists. Any conception of justice that might emerge from such a system would have to be just in the way ant colonies are just, or the evolution of different species of worms is just, or computer programs are just. Justice would thus be defined as having reached some kind of stable equilibrium that makes possible the continued reproduction of the system. What justice could not be, under such a conception of law, is a quality that enhances a specific human capacity to bring meaning to situations and contexts in order to guide them toward any purpose defined by a community of autonomous actors.

Stanley Fish's reflections on the law illustrate the dynamics of algorithmic justice in a slightly different way than Luhmann's. Fish is well known for his insistence that standards do not transcend the particular points of view of the communities that interpret them. Whether he is correct or not is not the point on which I want to focus.

65. N. Luhmann, Ecological Communication 64-66 (1989).
66. Id. at 66.
67. Luhmann, The Unity of the Legal System, in Autopoietic Law: A New Approach to Law and Society, supra note 11, at 21, 23.
I want to argue instead that by introducing the term "community,"68 Fish is under a certain obligation to talk sociology: to discuss what a community is, how its members act, what relationship exists between individual needs and community concerns, and other typical concerns of sociological theory. After all, an enormous emphasis is being placed by Fish's approach on the practices carried out by human agents, including not only judges, but professors of law and literature, readers of texts, and, presumably, all those affected by the legal decisions which in turn are affected by how judges and legal intellectuals make their arguments. Surprisingly, however, questions involving sociological practice play relatively little role in Fish's writings.

Consider his answer to the question of why we ought to be concerned about the interpretation of legal texts in the first place. For Fish the overlap between law and literary criticism is the result of a glitch in democratic theory: the existence of judicial review, which, in enabling judges to overrule democratic decisions in the name of fidelity to an earlier text, builds counter-majoritarian tendencies into our political system.69 Yet why do we have constitutional texts, and procedures for reviewing them, at all? Surely both the Constitution and the practice of judicial review illustrate a larger sociological problem: one identified quite clearly by the rational choice philosopher Jon Elster in his discussion of Ulysses and the Sirens. Constitutions, as Elster argues, deal with the problem of binding, the ways in which one generation attempts, like Ulysses tying himself to the mast, to make it possible for the next generation not to be seduced by the temptations of immediate gratification and self-interest.70 Judicial review, by contrast, grows out of the recognition that the bonds, if tied too tightly, result in bondage. Far from being a quirk in the system, judicial review exists as part of a dynamic process by which societies continuously reform themselves and their institutions over time to insure a balance between the contradictory goals of adhering to foundational norms and allowing for change.

68 S. Fish, Is There a Text in This Class? (1980).
69 S. Fish, supra note 5, at 338-39.

As is the case with so many other aspects of legal practice, judicial review makes it necessary to pose a host of questions about the agents doing the review: Who reviews the founding document? What standards of practice ought to guide them? Are they freely choosing agents or part of a larger social structure? What qualities of mind do they have? What qualities of mind ought they to have? Not to address sociological questions about the nature of real people in discussing the interpretation of legal texts is like discussing the plot of Ulysses without reference to the character of the man who tried to save his ship and his men.71

Although the question of how human beings follow practices thus assumes great importance for Fish, his analysis of what it means to follow a practice amounts to quoting a pitcher for the Baltimore Orioles. Like Dennis Martinez, who just throws the ball to get the batter out without thinking about ultimate goals, an agent "need not look to something in order to determine where he is or where he now might go because that determination is built into, comes along with, his already-in-place sense of being a competent member of the enterprise."72
Agents, in Fish's view, are part of a preformative chain, not one, to be sure, of the automatic transmittal of information without consciousness, but nonetheless one that works automatically and without requiring autonomy and self-judgement. There is no autonomy in this view, or, more precisely, all autonomy lies with the historical events that determined the patterns of a practice. Individuals, being "deeply situated,"73 just follow rules, rules which themselves are so deeply situated that individuals may not, and probably are not, aware that they are following them. "'Be the best you can be,'" Fish writes in response to Dworkin, "finally means nothing more than 'act in the way your understanding of your role in the institution tells you to act.'"74

70 J. Elster, Ulysses and the Sirens (rev. ed. 1984).
71 Although I am in strong disagreement with his moral philosophy, I am in agreement with MacIntyre that it is impossible to understand Greek conceptions of virtue without discussing the matter of character. A. MacIntyre, After Virtue (1981).
72 S. Fish, supra note 5, at 388.
73 Id. at 387.
74 Id. at 391.

Since agents naturally follow the rules determined by their roles in the institutions that define their practices, little would be amiss if Fish were to focus on those institutions themselves. After all, if the members of a professional subcommunity developed through some kind of democratic practice an agreed-upon set of norms, little would be wrong in expecting that each of them would then bind themselves to the rules that defined their practice. Yet no subcommunity can possibly develop such standards fairly without agreement upon larger normative and procedural issues, such as that their decisions will be made by majority rule, that the practices to which they adhere will not violate Judeo-Christian beliefs, that their behavior will be guided by law, etc. In other words, the development of a theory about how agents act is intimately linked to a discussion of the larger, normative standards of the society (not just the subcommunity) in which agents practice. If there are norms viewed as just in the society and practices viewed as procedurally correct in the professional subcommunity, then agents can follow rules naturally, or even algorithmically, without violating their autonomy.

The problem, of course, is that Fish believes that transcendental standards of just or moral behavior cannot exist. Within the confines of that argument, the failure to look beyond the automatic and natural following of the rules of a practice becomes a serious matter indeed. As he often points out, Fish is neither an anarchist nor a nihilist. Quite the contrary. Like many thinkers attracted to algorithmic imagery, he imagines structures that are so tightly organized as to make anarchism impossible. For this reason, Fish, for all his distaste for formal algorithmic thinking, concludes with a position not all that distinct from Chomsky's. To be sure, the rules of transformational grammar work in such a way that the personal proclivities of the speaker are irrelevant, whereas the rules of thumb of a practice depend very much on the idiosyncrasies of the person engaging in the practice.75 Yet in both cases particulars are embedded in generals, in the one case the rules of grammar, in the other "the individual who is always constrained by the local or community standards and criteria of which his judgement is an extension."76

75 Id. at 317.
76 Id. at 323.
IV.

Those skeptical of the possibility of any transcendental standard of justice posit that none of the theories we may have about justice-both of the transcendental and anti-transcendental sort-matter. Even the most skeptical, such as Fish, do, however, believe that theory talk matters, even if theory does not.77 But what if theory talk is theory? It would be under the assumptions I have been making in this paper. To be sure, there is no such thing as a "view from nowhere,"78 universally valid, from which we can deduce standards of justice that bind our actions for all time. Yet purely contingent understandings of what binds us together, being contingent, cannot bind, or at least cannot bind what we need to be bound. Already existing interpretative sub-communities, which by definition share normative standards, are not the ones that need grand narratives about justice, but instead communities seeking to answer questions such as these: How do we resolve our disputes with one another and make those resolutions binding? How do we aim to make our resolutions as fair as possible to the parties to the conflict? How does a community exist in time, passing on the rules by which it regulates its affairs to those who did not participate in the original making of such rules? What do we expect from newly admitted members of our community in return for their membership in the community? Who makes the rules? Who follows them? Who questions them? Who changes them? Social justice exists when diverse communities can be knitted together because, whatever their other differences, the one thing they require is some normative consensus, however vague, about the purposes that define their society.

77 Id. at 14.
78 T. Nagel, The View from Nowhere (1986).

Faced with the dilemma of how we can develop standards of justice that are more than contingent and local but less than universal and permanent, we rely on the minimal philosophical anthropology that even postmodernism, indirectly, concedes. Since we are all members of a larger community governed by a text and the interpretations of that text we bring to it, we seek standards of justice that recognize and allow us to develop our capacity as readers and interpreters. Those who do what Fish calls theory talk are engaged in the process of sharpening and refining the standards by which real human beings interpret texts. They are setting an example, using their powers as thinkers and writers to create provisional standards of justice, recognizing that these are socially created rather than found in nature or theology, and that, because these standards are recognized as minimally transcendental, we expect that they will, before changing, last for a considerable period of time-say across two generations-and, during that time, be accepted as generally binding.

If that is the case, then the most likely place to find a standard of justice lies, after all, in the major texts that frame talk about theory-the Constitution, decisions of the important courts, and articles in law reviews and similar outlets that debate both specific laws as well as standards of interpretation. From such a point of view, writers like Fish, by engaging in theory talk, contribute to theory. What they say matters a great deal and, moreover, matters in a transrhetorical way, just as I believe that what I am saying, while rhetorically presented, also involves more than rhetoric.
One embarks, Fish claims, on a slippery slope down the anti-formalist road: once you question formalism's first assumption, you have no choice but to question them all. Yet, as I have tried to show in this paper, there is a road back up the slope again. Once we understand that whatever permanence and universality we lose in any transcendental standard of justice is more than compensated for by the recognition that the transcendence we get, however temporary, is a product of our own efforts, we require a sociological theory of the self to put back into people and their efforts and practices the meanings about justice that we have, rightly, stripped away from texts.

The failure of postmodern theorists of justice to develop an adequate philosophical anthropology dealing with the capacities of human selves undermines much of the strength of their critique of intentionalism and other problematic theories of interpretation. The notion that truths are not embedded in texts in such a fashion that we can divine what to do simply by reading the words is a profound idea. In turning the question of justice back to us, to those who read and interpret texts, postmodernism makes possible a self-governing political community capable of interpreting its rules for the benefit of its members. But that potential can only be realized, not by denying humanism, but by welcoming it, by recognizing that what makes us human is our ability to shape and interpret rules according to the contexts in which we find ourselves. If that means that we have to accept at least some minimal transcendental standards and distinctions-that, for example, there is a difference between nature and culture, that humans do have special abilities, that the socially constructed can be transcendental without necessarily being permanent-then this is a small price to pay for gaining control over the rules that we simultaneously make and follow. Why bother to argue that the rules are not made by God or nature only to argue instead that, however they are made, our only choice is to follow them rather than remake them through all the practices in which we engage?