As Art Is Lifelike: Evolution, Art, and the Readymade

Nell Tenhaaf

Leonardo, Vol. 31, No. 5, Sixth Annual New York Digital Salon (1998), pp. 397-404. Published by The MIT Press.

Abstract

The early development and current concerns of artificial life practices are outlined in relation to both biology and art. The pragmatic side of a-life is presented, together with a consideration of how it compares with the biological sciences and a description of its methodologies for studying nature through computer simulation. A-life is proposed here as a place to locate art practice for artists who are interested in technoscience, and who are concerned with the "two cultures" gap between the humanities and the sciences. Mythical narratives that underpin new computational techniques, such as the dream of transformation or even generation of life, are not dismissed but become the impetus for resituating a-life as a set of representational strategies with great creative potential. A-life is linked to a particular aspect of 20th century art: how artists have developed and expressed the conviction that art and everyday life are inextricably enmeshed.

Although it piques the curiosity of anyone encountering it for the first time, artificial life can be added to the list of late 20th century technoscientific research wonders that seem to promise more than they deliver. Yet it remains compelling even to the initiated because of the mythical quality of its underlying premise: the dream of engineering life in the laboratory. Similarly, even if few current practitioners of artificial life (or a-life) have retained the frontier spirit that permeated its moment of origin in the late 1980s, the transcendent vision of higher evolution attached to it places a-life within a trajectory that runs from alchemical wizardry through Faustian metaphysics to contemporary reproductive technologies and cloning.
A-life is based on the hypothesis that computer simulation of evolution can determine not just how evolution works but also how it progresses; that is, that simulations of living systems can shape the development of species. One would have thought, then, that a-life arose in an experimental biology lab. In reality, the founders of a-life were a small group of hackers in the southwest of the United States, first clustered in the T-13 Complex Studies Group at Los Alamos National Laboratory and then at the Santa Fe Institute, both in New Mexico. The original a-lifers did insist that their ultimate goal was simulated life that would be the successor to biological life, although a confusion between such a grand mission and their personal investment in a cosmic "evolutionary consciousness" was usually quite transparent [1]. The principal feature of the first "life" generated by the founders was that the computer programs performing the simulations were deemed capable of self-propagation, a key feature of organic life. The significance of such simple life-as-it-could-be simulation was then amplified in evolutionary terms, and interpreted as the first step in generating entire artificial worlds. In the 1990s, a second generation of a-lifers are reworking these principles but for the most part define their activity and research objectives in a much more pragmatic way. That is, their goal is to model living systems using purely computational methods so as to understand those systems better. What reconnects them to the frontier spirit and reinserts some of its disturbance factor is that, even if humans have intervened in natural systems for as long as we've been conscious, the possibilities for synthetic enhancement and extension of life by technoscientific means continue to expand dramatically as this century comes to a close. A-life models can be seen, at least in theory, as formulations for potential intervention in the life of real organisms. Underpinning a-life practices is a composite of current physics, chemistry, and biology theory which, if it can be summarized at all, is concerned with the dynamics of order in a broad continuum of nonliving and living systems. It is also concerned with synergistic methods for representing such dynamics. The representational dimension of a-life is key to understanding its impact and importance: its foundational features, including the very obvious one of creation itself, are driven by the pull of analogy and the power of metaphor, which operate conceptually to establish the parameters of research and also determine the development of the representational tools themselves. Today, these tools have extended from computer simulations into evolutionary robotics and evolvable hardware. A-life models make heavy interpretive demands on the modeler. They are not so much scientifically provable or repeatable as they are readable in their own context. That is, the significance of an a-life simulation reflects its creator's understanding of what counts as mutation, adaptation, or emergence. Although informed by theoretical biology, these terms have a certain fluidity in a-life, and they circulate more as shared conventions than as fixed rules.
These features connect a-life with art; they forge a link with art practices that are as old as human intelligence: the urge to develop a symbolic logic and representational system that teases out some kind of order and meaning from a chaotic surround. In the contemporary scene, and on a less grand scale, a-life and art practices share some basic concerns with modeling narratives of life in its social sense, as it is lived and experienced by the organism in its surround. A-life research makes formalized models for life processes, at both micro and macro levels and across many kinds of interaction within and between organisms. The models are built on algorithmic descriptive methods, but they also carry rich associative connotations. In particular, they are inextricable from the fabulous narratives about accelerated evolution that circulate in technoculture and permeate all aspects of life as we live it. Further, within a broad cultural perspective, a-life resituates the relationship between art and science, which has been an increasingly problematic one since the two areas split apart during the Enlightenment. As bioscientific issues now become the order of the day, very present in the media and visibly affecting art practices, the question of how artists can treat scientific information becomes pressing. The question could be framed as how to represent life and the interventions it is the object of, not as atomized under the microscope, while wanting to keep the rigor and sheer interest of that order of information in the picture. Scientific methodologies will continue to discourage or exclude the operation of cultural narratives. But from its inception, a-life has allowed various degrees of play, whether deliberately or implicitly, and thus offers ways in which complex, layered representations can keep subjectivity, the social world, and also the natural world fully in view. Because nature is increasingly indistinguishable from manipulations of nature—which is evident in the ease with which we accept pharmaceutical regulation of our moods or the rationalized management of what we still call wilderness—such a point of view is important for any designer of a reality model, be it artist, theorist, engineer, or scientist.

What Is A-Life?

The core activity of a-life research is to synthesize lifelike phenomena in artificial media such as computers or robots, in an attempt to understand living systems in all their complexity. Although a-life surpasses the definition of computer science as engineering performed with computers, it is in fact computing research and has arisen within that history. A-life practitioner and theorist Melanie Mitchell, one of the post-founder generation working at the Santa Fe Institute, delineates three key areas of development in computation: neural networks for the project of modeling the brain, machine learning to simulate human learning, and evolutionary computation for simulating biological evolution [2]. A-life has become most invested in the third of these areas, while the first two are still considered the purview of artificial intelligence (AI). In certain ways, a-life is a successor of AI. From its inception, a-life shared with AI a commitment to the idea of studying key features of life such as cognition, behavior, or evolution, without a material body as substrate, so as to shed useless constraints and operate in an unbounded field of symbolic potential.
The central modeling hypothesis of classical AI, called computationalism, sees cognitive systems based on either brain or mind as disembodied and so like computers that they can be replicated in some combination of computer architecture, software, or robot behavior. This is also known as strong AI. The premise is that if one can derive a complete enough set of rules, essentially conceived in a top-down and rational fashion, then the simulated rationality of an "expert system" follows. Similarly, a-life research assumes that one can separate the logical form of an organism from its material base, and that its "aliveness," its capacity to live and reproduce, is a property of the form, not the matter.... Therefore it is possible to synthesize life (genuine living behavior) on the basis of computational principles. [3] But what has always distinguished a-life research is that it relies on the principle of overlapping and interlocked systems built from the bottom up, an approach that at least in theory can be based on organizational features of the whole biological system in its environment, rather than on any isolated part of it. Some current a-life situates itself almost antithetically in relation to classical AI by upping its investment in materiality, and thus resituating some of the central concerns of AI with attention to physical manifestations of the natural world. The two areas are currently by no means distinct. AI continues to invest in ever more powerful computing methods such as connectionism, a computational architecture inspired by neural systems. But connectionist rules follow the simple-leads-to-complex principles of a-life rather than top-down rule-making. Conversely, the class of bottom-up a-life robots and computer critters that are placed in an environment to perform and adapt are considered by AI researcher Phoebe Sengers to be "alternative AI," improving upon but not solving the problem of the deranged autonomous agent living in an isolated "fabricated world of its own making" [4]. In this analysis, a-life continues to construct the "schizophrenic" agents of AI because the conception of artificial subjectivity in a-life remains invested in the objective knowledge production of science: the conditions for creation of the artificial subject are removed, erasing the role of either creator or audience in the agent's behavior. Constituting an even fuzzier overlap between AI and a-life is an emerging field within AI called embodied AI or embodied cognition, based on phenomenological premises of grounding consciousness in the body. It uses a dynamic systems approach, generally in the development of robot behavior. For example, cybernetics researcher Kerstin Dautenhahn attempts to build into her autonomous robot agents various aspects of social understanding, such as the bodily dynamics of empathy or the human mechanism of subjectively reconstructing an interaction so as to understand it [5]. So where does this leave a-life, not to mention art? One strategy that distinguishes a-life from AI is evolutionary computation, whose explicit origins lie in the study of evolution as an optimization tool for engineering problems dating from the 1950s and 1960s [6]. Its role in a-life research is to provide the methods for a formal domain of encoded phenomena to simulate a naturally evolving system, so that "evolve" becomes a technical term and not just a metaphor.
Yet the methodology remains fundamentally metaphoric, because the techniques developed for evolutionary computation rely on a very broad use of terms from classic Darwinian theory and from genetics: genetic algorithms (GAs) and fitness landscapes are the two key methodologies of evolutionary computation. They both assume an operative equality between binary code and genotype, and then proceed to collapse genotype and phenotype in a quite reductive way. Nonetheless, because they act as formal rules that operate on and shape a multi-dimensional space of possibilities, these methodologies are both computationally and figuratively powerful. A GA is a search procedure that carries out a transformation on a set of randomly generated candidate solutions for a problem, which are symbolically encoded as "chromosomes" or, in computer terms, bit strings. Crossover and mutation are performed on the bit strings as the population searches for the best solution to the problem. As further possible solutions are generated or fed in, they are selected according to a "fitness function," that is, a definition of success in the task. A fitness landscape represents the space of all possible genotypes or bit string candidate solutions, along with their fitnesses as calculated once the GA has run through them. Its hills and valleys show the population of genotypes shifting toward and away from fitness peaks or spaces of success.
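To make these mechanics concrete, the following is a minimal GA sketch in Python. It is an illustration only, not a reconstruction of any system discussed here: the fitness function is the toy "OneMax" problem (count the 1s in a bit string), and the population size, mutation rate, and generation count are arbitrary assumptions.

    import random

    BITS, POP, GENERATIONS = 32, 50, 100

    def fitness(bits):
        # OneMax: a candidate's fitness is simply its number of 1 bits
        return sum(bits)

    def mutate(bits, rate=0.02):
        # flip each bit with a small probability
        return [b ^ 1 if random.random() < rate else b for b in bits]

    def crossover(a, b):
        # single-point crossover of two parent "chromosomes"
        point = random.randrange(1, BITS)
        return a[:point] + b[point:]

    population = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]
    for generation in range(GENERATIONS):
        # selection: the fitter half survives and breeds the replacements
        population.sort(key=fitness, reverse=True)
        parents = population[:POP // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP - len(parents))]
        population = parents + children

    print(max(fitness(b) for b in population))  # approaches 32 as the GA converges

In these terms, the fitness landscape is the graph of fitness over all 2^32 possible bit strings; with each generation the population drifts across that landscape toward its peaks.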
The pull of analogy between genetic code and computer code, as well as the power of evolution theory as a description for the transformation of dynamic systems, is plainly visible in these methodologies. If it is not simply computer science, because of its recourse to biological and evolutionary tropes and its relation to theories of dynamic systems that have arisen in mathematics and the sciences, a-life is not quite like scientific research either. It checks back to the natural world in ways that distinguish it from science. An a-life model is a kind of metaphor for a living system because there is no direct connection between the two, because computational form is basically disengaged from matter. "A characteristic feature of these simulations is that not all entities decode into something in the real world" [7], although it remains a goal of many a-life researchers to have the relationship between the two be much stronger. Clearly, there is an unresolved issue for a-life in comparing the "life-as-it-could-be" features of simulation in the computer with messy, fragile life as it is lived. A-life research does, after all, take place outside the organism even when it operates from a whole-system premise. To get past these chronic issues of reductivist tropes translated into method, evolution terminology such as adaptation and fitness has come to be used in a-life in a very abstract way, at the physical and chemical level of simulated molecular evolution [8]. Katherine Hayles exposes the relation between a-life and theoretical biology as strained by the tautology of assuming the computer as the "natural" medium of a-life [9], although a weakness in this argument is that it overlooks the extent to which biology research in general relies on computer modeling methods. Speaking as a developer of evolutionary computation techniques, Mitchell suggests that biology be assumed as metaphor in a basically unproblematic way and interpreted strictly within the domain of theoretical computation rather than in the terms of evolutionary biology. Corresponding to this dilemma of what kind of science a-life can be, there has been a strong pull within a-life toward using behavioral models and other forms of symbolic logic to control robots that operate with sensors and effectors, as these "animats" are considered to at least have a physicality that can be interacted with in direct, measurable ways.

A-Life Is to Nature as Art Is to Life

A-life is a speculative platform that focuses on pattern and process embedded within nature. In parallel, art is latent and embedded within life. To come into existence, each set of practices relies on semiotic, interpretive strategies. These are culturally pre-determined, an aspect that becomes really interesting when it is foregrounded within the representational process itself. The strategies artists contrive for bringing forth art from its surround are a function of the cultural climate, current modes of perception, and a concern with issues of the times, among them technological and scientific issues. While art remains preoccupied with philosophical, perceptual, and human issues, a-life has a much more direct link to the biosciences in its concern with modeling nature. But because it doesn't use the methods of pure science to derive these models, because its modeling parameters arise more from computational ingenuity than from methods of observation of natural phenomena, a-life is a fundamentally creative platform. Perhaps the most expansive way in which a-life and art can be viewed in parallel is that, in their preoccupation with philosophical and theoretical questions regarding the nature of life, they are focused on the creation of means for life's representation. Each is of necessity concerned with how the mode of development of representational apparatuses or technologies affects the very kinds of representations that can be made. Interpretation of the representations is dependent on the material, lived context of the interpreter, and tends also to be influenced by her technologized environment. In the art context, self-consciousness with regard to medium has been at stake since the development of photography in the 19th century shattered the convention of medium transparency, the window onto the natural world. And more recently, it has been much in evidence as artists work with electronic media that reflect the impact of mass media. In these practices, the nature and outcome of artistic investigation are shaped by conceptual and critical awareness of the technologies used. Perhaps the clearest parallel between these concerns and a-life is to be found in evolutionary electronics. It suggests a link between theoretical advances in the design of semiconductors, the ultimate substrate of computation, and social issues of economics and marketing. An approach to chip design put forward by Adrian Thomson requires throwing out many conventional premises. Its methodology is to use artificial evolution to craft electronic circuits on a physical silicon medium, a reconfigurable chip. Thomson asks what appear to be very simple questions, looking at correspondences between natural conditions and evolutionary electronics and focusing especially on the effect of temperature variation on the fitness of circuits that are evolved. Thomson remarks, "It is beneficial to do this [construct these systems] in a more natural way than simply forbidding all analog continuous-time dynamics, as conventional digital design does" [10].
Thomson exposes a weak point in the "digital revolution," which is that the compulsion to keep pace with it means that we only think within its conventions rather than reconsidering basic design parameters. If the human role in evolution is a privileged one largely because of our tool development, through which we direct the course of evolution in more and more self-conscious ways, then a reexamination of premises that are now locked in due to market forces is certainly a priority. Alongside evolving hardware, the algorithmic programs of a-life are, equally, technologies that determine the results obtained. The computational strategies of a-life are engineered following principles established a priori as the key features of synthesized life. Emergent properties or behaviors are aimed for in simulated entities because emergence has been conceived as a hallmark of a-life—something arises that is not preprogrammed, an unforeseen feature on the order of "the whole is more than the sum of its parts." The a-life system is built so as to lend itself to an unpredicted transformation, and then, conventionally, to eliminate all traces of its authorship in the interpretation of the results. This applies also to self-organization, the other defining feature of a-life, which overlaps with emergence. Self-organization is a concern with implicit order and pattern in physical systems, order that arises inexorably from chemical and particle activity and that is continuous from nonliving to living systems. The hypothesis, in summary form, is that the universe has a natural tendency to organize itself: "Evolution is not just 'chance caught on the wing.' It is emergent order honored and honed by selection" [11]. Conceived as a counterbalance to the force of evolution driven through time by randomness and chance events, self-organization resituates the argument about grand design in the universe by shifting the emphasis to inherent properties of the physical world that are studied via complex adaptive systems. Such systems in simulation have again a certain tautological or pre-ordained quality: they are constructed so as to generate the phenomena under study. But as a methodology, this pervades much of contemporary model-making. Consider, for instance, so-called sand-pile physics—the study of the dynamic behavior of granular substances. The theory of "self-organized criticality," put forward in the late 1980s by Danish physicist Per Bak, proposes that as gradual processes like the accumulation of a sand pile develop, critical points of avalanche effect are reached, which abruptly and radically reorganize the whole system. The most recent news in sand-pile physics was the discovery in late 1996 by a group of researchers in Texas that a thin layer of tiny brass spheres spread over the flat bottom of a container, when jiggled at a very precise rhythm by vibrating the platform at certain frequencies and amplitudes, causes patterns remarkably like atoms, molecules, or crystal lattices to appear [12].
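Bak's avalanche mechanism is simple enough to state in code. The sketch below is the standard Bak-Tang-Wiesenfeld sandpile model, offered as a generic illustration of self-organized criticality rather than as the Texas experiment; the grid size and grain count are arbitrary assumptions. Grains are dropped one at a time onto a grid, and any cell holding four or more grains topples, passing one grain to each neighbor and possibly triggering a cascade.

    import random

    SIZE = 20
    grid = [[0] * SIZE for _ in range(SIZE)]

    def topple(x, y):
        # shed four grains to the neighbors; grains falling off the edge are lost
        grid[x][y] -= 4
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= x + dx < SIZE and 0 <= y + dy < SIZE:
                grid[x + dx][y + dy] += 1

    def relax():
        # keep toppling until every cell holds fewer than 4 grains;
        # the number of topplings is the size of the avalanche
        topplings = 0
        unstable = True
        while unstable:
            unstable = False
            for x in range(SIZE):
                for y in range(SIZE):
                    if grid[x][y] >= 4:
                        topple(x, y)
                        topplings += 1
                        unstable = True
        return topplings

    avalanches = []
    for _ in range(20000):
        x, y = random.randrange(SIZE), random.randrange(SIZE)
        grid[x][y] += 1          # drop a single grain at a random site
        avalanches.append(relax())

    print(max(avalanches))

The "criticality" lies in the statistics: most grains trigger no avalanche at all, but the distribution of avalanche sizes follows a power law, so a single grain occasionally reorganizes the entire pile.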
With some controversy among scientists, the phenomena of physical self-organization have been taken up as analogies for the emergence of life, and even for the development of human idiosyncrasy. The tautological features that seem to undermine a-life practice tend also to result from the extrapolation of its practices into broad analogy. In particular, there is a huge contradiction implicit in the promise of unprogrammed lifelike behavior, in that it makes an appeal to an unspecified ordering principle of digital logic, which resembles all too closely the traditional objective authority of science. Yet the impact of research that looks for emergent and self-organizing properties depends on a certain latitude in interpretation and application. This is precisely what makes it interesting for "integrative science" and also for culture in general. Stuart Kauffman, a biophysicist and biochemist whose ideas about order and chaos permeate complexity theory, describes integrative science as theory-based conjecture about putting back together what reductionist science has taken apart [13]. As is true for any representational strategy, the interest it holds is not so much an issue of its correctness as of what questions it poses, how clearly they are asked, and how significant they are for the expansion of knowledge and understanding in a cultural and social context. For critical purposes, modeling in a-life has to be seen as sharing the features of any cultural representation: it is a signifying system that necessarily has specified parameters and references. In a-life, interpretation cannot help but be shaped by the fact that computer simulation is the core technique for representation. The creativity of a-life involves conjecture, in the form of computational representations, about interpretive exchange within and among organisms at all levels of organization. Artistry can be seen in the relationship between a-life and science, then, in the way that a-life translates various branches of biology, including evolutionary biology, into computational language and in the process takes up biology as a "readymade." In 1913, Marcel Duchamp conceptualized the readymade in response to both the "retinality" of painting and the monetarization of art, factors which had caused him to withdraw from painting the year before. The readymade has had a notable presence in art practice ever since [14]. It is an avant-gardist notion of response to aesthetic convention, and so it is dated in terms of our contemporary outlook, which has long been inured to the "shock of the new." But in our current cultural circumstances of extreme mediation of experience, it has a renewed currency because living and representations of living are so easily conflated. The readymade has an ironic and subversive authenticity in that it declares ideas and images in their unceasing circulation to be already named and defined by an indefinable someone. Similarly, biology is not nature but is already a representation. Biology and biotechnology are telling us what we think life is as we become more and more accustomed to intensive mediation of nature through scientific imaging and computer simulation. An example of this is the Human Genome Project, an internationally coordinated quest to construct a database that identifies all of the genes in a typical human body. Its persuasive promise to reveal a genetically determined key to life is a substitution of the map for the territory. A-life practitioners don't become artists when they create readymades in their use of biology, but they nonetheless contribute to the process of engaging art with science.
They place quotation marks around a segment of nature and make explicit in their models its encoding within a particular set of technoscientific practices, which are thereby revealed as representational practices. For example, Charlotte Hemelrijk makes a simulation of cooperation among virtual entities starting from a dynamic of conflict and competition [15]. The object of her simulation is to demonstrate ways in which cooperation emerges systemically when, under certain circumstances, it provides an advantage over the killer instinct. Because the model quotes accepted narratives of nature and relies on artifacts of existing research, so as to duplicate and extend them in computer terms, it operates like a readymade. It permits us to reconsider the original biological, or for that matter natural, premise. It asks us to look again at the assumptions hidden in functional forms, whether that be a urinal signed R. Mutt or aggressively competing virtual entities [16]. By virtue of its elaboration in a systems dynamic, it can also in turn inform the practices of biology.
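Hemelrijk's own model is not reproduced here, but the logic of cooperation outperforming the killer instinct can be illustrated with a classic stand-in, the iterated prisoner's dilemma of the Axelrod tradition. In this minimal Python sketch (the payoff values are the conventional ones, assumed for illustration), unconditional defection wins any single encounter, yet reciprocating cooperators earn far more over repeated play.

    def play(strategy_a, strategy_b, rounds=50):
        # payoffs: mutual cooperation 3/3, mutual defection 1/1,
        # lone defector 5, exploited cooperator 0
        payoff = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
                  ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}
        history_a, history_b = [], []
        score_a = score_b = 0
        for _ in range(rounds):
            move_a = strategy_a(history_b)  # each player sees the other's past moves
            move_b = strategy_b(history_a)
            gain_a, gain_b = payoff[(move_a, move_b)]
            score_a, score_b = score_a + gain_a, score_b + gain_b
            history_a.append(move_a)
            history_b.append(move_b)
        return score_a, score_b

    def always_defect(opponent_history):
        return 'D'  # the pure killer instinct

    def tit_for_tat(opponent_history):
        # cooperate first, then mirror the opponent's previous move
        return opponent_history[-1] if opponent_history else 'C'

    print(play(always_defect, tit_for_tat))    # (54, 49): defection wins the encounter
    print(play(tit_for_tat, tit_for_tat))      # (150, 150): reciprocity wins over time
    print(play(always_defect, always_defect))  # (50, 50): mutual defection stagnates

Under circumstances of repeated encounters, the cooperative strategy's higher joint payoff is the kind of systemic advantage such simulations are built to expose.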
Anti-Art, Anti-Science

Throughout the 20th century, art has been expanded by important moments of formal and philosophical reinvention of its relation to the real. Anti-art is a concept derived from these reformulations of the relationship between art and everyday life. Its precedents include the Russian Constructivists' and the Futurists' merging of art and revolutionary politics. Anti-art was adopted by the dadaists in western Europe after World War I to express an inseparability between social and political concerns and art manifestations. The readymade and other features of Duchampian thought are considered anti-art, as well as the extension of these into '60s and '70s Happenings, and "new media" such as video and performance. These practices are open to and even embrace social causes, but it is in their expression of dissatisfaction with the ability of established art practices to connect with lived reality that they become anti-art. Although anti-art is a critique of institutionalized art and its role in maintaining social convention, this description places it in a negative position and doesn't adequately account for its dadaist origins of radical breakaway spirit, its commentary on intolerable conditions of social distress, or its iconoclastic humor. In parallel, a-life could be considered principally as the ground for another version of science critique calling attention to the sociopolitical imbrications of science in the face of its insistence on objectivity. But a-life becomes a more compelling epistemological field if we consider it as the emergence of an alternative, para-scientific practice, an "anti-science": it doesn't seek to negate its terms of reference or their knowledge base; rather, it depends on them so as to propose reinventing them. Anti-science can expand our thinking about science in a way that parallels how anti-art reorders the symbolic systems we use to interpret and constantly reinvent everyday life. Anti-art shows that, once art and life are perceived as enmeshed, the transformative potential of art increases exponentially. Similarly, awareness of how we construct nature through science and technology on a daily basis could deliver a comparable empowerment. The tendency in recent debates about the interpretation and use of science is to polarize experts and amateurs [17]. Even if the sciences can now be understood on many more levels than that of explaining the universal structural properties of nature, they are still not freely accessible to the nonscientist without complications of expertise and specialization. A-life has its own requirements of expertise, but it offers a representational territory that lies in between the acute demands of conforming to the natural world that are implicit in science research and the very wide range of imaginary transformation of the real that is available to the artist. Representational issues have been the purview of art, art theory and criticism, and literary criticism. In the past few decades there have been debates in these domains about very fundamental issues of representation, which expand upon anti-art principles in their attention to social and cultural context: assumptions of naturalism have been undermined by theories about the construction of the gaze and of subjectivity itself, and the overall dominance of the visual as a modernist trope in art has been put into question [18]. There is a comparable epistemological challenge in a-life: In what way are primary methodologies of a-life, such as genetic algorithms and fitness functions, cultural constructions? And how can this be exposed rather than concealed in a-life models? The significance of responses to these questions is not limited to a-life research, but sheds light on 20th-century technoscience in general.

A-Life Epistemology and the Evolution of Code

Genetic algorithms and fitness functions are constructed within received narratives of evolution. But even more important is their relation to the ideology surrounding codification, which has had godlike attributes of pervasiveness and infinite possibility ascribed to it since the decade just after World War II. At that time, mathematicians, scientists, computer engineers, information theorists, and cyberneticists were fashioning a model combining computers, Cold War secrecy, genetics, and semiotics that established the absolute power of algorithmic processes. An algorithm is by definition a formal, logical, mechanical, and always reliable process, and although that in itself is an old idea, Daniel Dennett tells us that it was the work of Alan Turing, Kurt Gödel, and Alonzo Church in the 1930s that determined our current computational understanding of the term. Dennett goes on to characterize "Darwin's dangerous idea" of natural selection as algorithmic, because logical form supersedes any material substrate, because there is no overarching design (he calls this "underlying mindlessness"), and because the results are guaranteed [19]. The meaning of computation is debated in the a-life world; that is, whether it is a contained set of materially based processes or an algorithmic "universal, abstract, dynamics to which even the laws of physics must conform" [20]. The transcendent metaphor which proposes that "we hitch a ride on the huge ongoing computation of the universe by nature" certainly has its place in a-life thought. It is, after all, a history of hackers. There are obvious streaks of computational determinism in a-life thinking; for instance, in the self-fulfilling argument that models such as cellular automata and artificial neural nets pass the Turing test. The idea is that they demonstrate a "fitness equivalence" between the formally defined, computed system and its natural physical implementation.
The argument relies on the interpretive weight of whoever interacts with these systems and is upheld even though, or perhaps precisely because, the computer-based system isolates processes from the noisy reality of life. The epistemological and critical dilemma for programmers steeped in code is that there is no "outside" remaining in the notion that "the universe is a map of a computation" [21]; there is no place from which to make an interpretation of what a simulation means. It means itself, in its inarguable and "mindless" inevitability. This is not so much threatening as it is fatalistic and disempowering, like the worst pre-Enlightenment religious doom scenarios. For those fixated on either computational or genetic code as the key to the mystery of life, it simply has inviolable origins in the abstract laws of mathematics and innate principles of the universe. What more is there to say? But as a history encompassing quantum physics theory, the birth and growth of molecular biology, and the development of weapons systems for war, the story of code is much more complex. It is a narrative of interwoven technical progress, scientific insight, and sociopolitical circumstances. It involves a profound tension in the relation between abstract natural law and organic matter: code has come to be characterized as the (masculine) determinant of human fate while matter is the (feminine) burden of flesh to be mastered and shaped. The term information was misapplied to genetic code after the synergistic moment of its conceptualization in the domains of physics, biology, and information theory in the late 1940s [22]. DNA and RNA came to be inexorably perceived as consisting of orders or instructions that specified their own interpretation. By the time of Francis Crick's "central dogma," formulated in 1958, DNA was perceived as having the power to fully determine all characteristics of an organism; it carries inherent and irreversible meaning. With this confusion of terminology, the conflation of genetic code with the colloquial rather than technical sense of information took hold, and subsequently it also began to parallel the meteoric rise of algorithmic code for processing information, that is, computer code. The parallel has stuck; it is evident today in the perception of a natural fit between computer techniques and genetics research, or in electronic devices seen as metaphors for or extensions of biological systems. The conflation of genetic code and computer code in a-life is shaped by this historical backdrop. Even if it is used as a functional and creative tool, its history of contested knowledge and its legacy of the power of naming and describing should be kept in view. A-lifers who subscribe to the notion that self-organization at every level means implicit informational and cognitive design waiting to be revealed as wondrous cosmic order forget what François Jacob calls attention to in The Possible and the Actual: "As complexity increases, history plays a greater part" [23]. In parallel, there is a fallacy in accepting the overarching power of algorithmic description of natural order at the expense of any other consideration of the generation and development of living systems.
An alternative is to see algorithmic principles, which describe both organic and inorganic phenomena, as the substructure of evolutionary and developmental processes, with many relational aspects of intersubjectivity and interagency layered on top of them, whether among human, animal, molecular, or machinic agents. Attributing signs of life to an AI or a-life agent, in a way that is divorced from its creator, its users, and its fellow entities, is a symptom of the dead-end urge to arrive at an absolute design and at the ultimate algorithm [24]. Interesting work in current a-life research tends to be historically and conceptually much more specific. Giles Mayley is an a-life researcher in the United Kingdom who is studying the influence of learning on evolution [25]. This research does not revive Lamarckianism, but in effect it credits Lamarck with posing questions that Darwin didn't account for. Mayley's research is based on the Baldwin effect, a theory put forward in the late 19th century to explain cases of apparent inheritance of acquired characteristics without resorting to the Lamarckian idea that learning affects the genome and is transmitted directly. The theory has not had much currency in biology but has been taken up in the past decade by the a-life community. Mayley's work demonstrates that learning does speed up evolution, as proposed by the Baldwin effect, but that another factor he calls "the Hiding effect" counterbalances it over the time and process of generational flux. This interaction of processes can be shown through the dynamics of an evolving population on a fitness landscape. Philosophically, this research moves evolution discourse away from the brutality of "fitness equals survival" in the strictest sense, as well as from a conception of adaptation as genomically driven purpose and progress. Representationally, the method has parallels across a-life in that it involves constructing and observing pathways through a complex state space generated over time from specified initial states. Such models are also used to construct statistical mechanics for describing complex dynamical systems, such as the immune system or a neural network [26].
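Below is a minimal sketch of how learning can guide evolution, in the spirit of Hinton and Nowlan's well-known 1987 experiment rather than of Mayley's own model; the parameters roughly follow their setup, and the all-ones target is an arbitrary assumption. Alleles are either hard-wired (0 or 1) or plastic ("?"), and plastic alleles can be "learned" during a lifetime of random guessing. Learning turns a needle-in-a-haystack fitness landscape into a climbable slope, and selection then gradually hard-wires what was first learned.

    import random

    LENGTH, TRIALS, POP = 20, 1000, 1000
    TARGET = [1] * LENGTH   # the lone adaptive phenotype: a needle in a haystack

    def fitness(genome):
        # a wrong hard-wired allele can never be learned around
        if any(g != '?' and g != t for g, t in zip(genome, TARGET)):
            return 1.0
        # lifetime learning: random guesses at the plastic alleles;
        # the sooner the organism finds the target, the fitter it is
        chance = 0.5 ** genome.count('?')   # odds of guessing right in one trial
        for trial in range(TRIALS):
            if random.random() < chance:
                return 1.0 + 19.0 * (TRIALS - trial) / TRIALS
        return 1.0

    def random_genome():
        # roughly half the alleles start out plastic
        return [random.choice([0, 1, '?', '?']) for _ in range(LENGTH)]

    def crossover(a, b):
        cut = random.randrange(1, LENGTH)
        return a[:cut] + b[cut:]

    population = [random_genome() for _ in range(POP)]
    for generation in range(50):
        # fitness-proportional selection with single-point crossover
        weights = [fitness(g) for g in population]
        population = [crossover(*random.choices(population, weights=weights, k=2))
                      for _ in range(POP)]

    print(sum(g.count('?') for g in population) / POP)

The average count of plastic alleles falls over the generations as evolution hard-wires what learning first discovered, although in Hinton and Nowlan's runs some residual plasticity famously persisted, a counterweight of the same family as the one Mayley studies.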
The time-space search metaphor, like the GA, is a rich one. The visualization of time translated into space in a fitness landscape describes a search in n-dimensional space that is very much like a Duchampian capturing of chance in spatial form: Duchamp described the construction of Le Grand Verre: La Mariée mise à nu par ses célibataires, même (The Large Glass) as an exploration in n-dimensional space in which energy flowed from an unspecifiable source in a self-propagating circuit [27]. There are many parallels between concepts of anti-art and anti-science. Without expecting that a-life as an anti-science will revolutionize science practices, it is conceivable that it may inject new life into science, just as art must be in a continuous state of reinvigoration and renewal in relation to life. There is a line that can't be crossed between the scientific method of studying the real, material world through experimentally testable models and an artist's contrivance of a metaphorical material world through techniques of representation and interpretation. But overall, a-life offers a cross-platform to both science and art for renewal, a site for building awareness about assumptions and biases that shape perception.

Acknowledgments

Thanks to Steven Kurtz and Scott Draves for important clarifications in this text.

References and Notes

1. The key people in the original a-life world were Chris Langton, Doyne Farmer, Norman Packard, John Holland, Danny Hillis, and Thomas S. Ray, according to Steven Levy in "Artificial Life: The Quest for a New Creation," in Whole Earth Review, Artificial Life issue (Fall 1992) p. 22. One of the origin stories of a-life focuses on programmer Chris Langton's hang glider crash, which he relates led him to a spiritual revelation about how "latent patterns" in his mind are just like computer processes. See Stefan Helmreich, "The Spiritual in Artificial Life," in Science as Culture 6, Part 3, No. 28 (1997) p. 375. See also Howard Rheingold in the introduction to the Whole Earth Review cited above: "Maybe we will end up creating God, or the Devil, depending on how our minds' children evolve a billion years from now." About Thomas S. Ray's Tierra, and for a probing analysis of the first-generation Santa Fe a-lifers, see Katherine Hayles, "Narratives of Artificial Life," in FutureNatural: Nature/Science/Culture (London and New York: Routledge, 1996). For an overview of a-life practices and their relation to art practice, see Simon Penny, "The Darwin Machine: Artificial Life and Interactive Art," in New Formations No. 29 (Autumn 1996).

2. Melanie Mitchell, An Introduction to Genetic Algorithms (Cambridge, MA: MIT Press, 1996).

3. Claus Emmeche, "Life as an Abstract Phenomenon: Is Artificial Life Possible?," in Francisco J. Varela and Paul Bourgine (eds.), Toward a Practice of Autonomous Systems: Proceedings of the First European Conference on Artificial Life (Cambridge, MA: MIT Press, 1992) pp. 466-474. See also Chris Langton (ed.), Artificial Life (Redwood City, CA: Addison-Wesley, 1989).

4. Phoebe Sengers, "Fabricated Subjects: Relocation, Schizophrenia, Artificial Intelligence," in ZKP2@5Cyberconf, Madrid, June 1996.

5. Kerstin Dautenhahn, "I Could Be You: The Phenomenological Dimension of Social Understanding," in Cybernetics and Systems 28, No. 5 (July-August 1997).

6. Mitchell [2] p. 2.

7. Erich Prem, "Grounding and the Entailment Structure in Robots and Artificial Life," in Moran, Moreno, Merelo, and Chacon (eds.), Advances in Artificial Life: Third European Conference on Artificial Life (Berlin/Heidelberg: Springer-Verlag, 1995) p. 41.

8. Prem [7] pp. 45-46.

9. Hayles [1] p. 157.

10. Adrian Thomson (University of Sussex), "Temperature in Natural and Artificial Systems," in Phil Husbands and Inman Harvey (eds.), Proceedings of the Fourth European Conference on Artificial Life (Cambridge, MA: MIT Press, 1997) p. 388. ECAL '97 made it plain that the European approach to a-life is not as steeped in Santa Fe origin lore as the American perspective. I presented a poster on "Aesthetics of A-life" within the conference and also showed two works in an exhibition held in conjunction with the conference at the Brighton Media Centre, "LikeLife," curated by Joe Faith, Medeni Fordham, Inman Harvey, and Phil Husbands.

11. Stuart A. Kauffman, The Origins of Order: Self-Organization and Selection in Evolution (Oxford: Oxford University Press, 1993) p. 644.

12. Malcolm W. Browne, New York Times, Tuesday, Sept. 3, 1996.

13. From an interview I conducted with Stuart Kauffman in October 1997, posted as "Semiosis, Evolution, Energy: Interviews with Three Scientists," on the electronic journal Nettime (www.factory.org/nettime).
For further discussion of the relevance of theories of representation to a-life tautologies, see my "Simorg Culture," in Parachute 72, "La Bestiaire/Endangered Species" issue (1993).

14. And if, as Thierry de Duve proposes, the readymade is an orphan because it has no mother but is born fully formed from its bachelor father, then a-life gives it a mother, via a biological matrix. De Duve put forward his proposal in a public discussion with Sylvie Blocher, York University, Toronto, Nov. 18, 1997 (unpublished).

15. Charlotte Hemelrijk, "Cooperation Without Genes, Games, or Cognition," in Husbands and Harvey [10].

16. Fountain is Duchamp's infamous readymade of 1917. The first readymade, Bicycle Wheel, was made in 1913, the year after Duchamp left artmaking. I am indebted to Jeanne Randolph for her perception of technology as a found object, a recurring theme in discussions with her and in her writing. See for example the catalogue Influencing Machines: The Relationship Between Art and Technology (Toronto: YYZ, 1984).

17. Most notorious in this respect are Paul R. Gross and Norman Levitt, authors of Higher Superstition: The Academic Left and Its Quarrels with Science (Baltimore: Johns Hopkins University Press, 1994). These writers, in parallel with their attempt to dismiss all critical discourse—even by scientists—construe the public as mere passive consumers of science.

18. See for instance Teresa de Lauretis and Stephen Heath (eds.), The Cinematic Apparatus (New York: St. Martin's Press, 1980) and Jonathan Crary, Techniques of the Observer: On Vision and Modernity in the Nineteenth Century (Cambridge, MA: MIT Press, 1990).

19. Daniel Dennett, "Darwin's Dangerous Idea," in The Sciences, May/June 1995, p. 36. Dennett has since published a book of the same title (Touchstone Books, 1996).

20. H.H. Pattee, "Artificial Life Needs a Real Epistemology," in Moran, Moreno, Merelo, and Chacon [7] p. 30. I am using "computation" and "computational" here in a generalized sense to refer to computer methods for modeling dynamic systems, as distinct from the computationalism of artificial intelligence discussed above.

21. H.H. Pattee [20] p. 31.

22. Evelyn Fox Keller, Refiguring Life: Metaphors of Twentieth-Century Biology (New York: Columbia University Press, 1995). Physicist Erwin Schrödinger developed the idea of a "code-script" contained within the gene molecule, which is composed of its atoms or "code-particles." The code-script is the operative factor bringing about development, a constraint on infinite diversity or complexity, and thus it counters the tendency to entropy. These ideas are linked to Schrödinger's "negentropy" theory of thermodynamical information, specifically that information in a physical system is the capacity for the system to do work (or, put another way, to transform energy). That this idea of code-script becomes linked with the information theory that Claude Shannon was formulating in the late 1940s is then no surprise. Shannon posits information as a "precise quantitative measure of the complexity of linear codes," a patterning in units of communication, which in his terms is also the negative of thermodynamical disorder. However, in Shannon's definition, information is devoid of meaning in itself, it is a container, whereas for Schrödinger the code-particle consists of both container and content.

23. François Jacob, The Possible and the Actual (New York: Pantheon Books, 1982) p. 31.
24. The same fallacy is evident in Richard Dawkins' and Daniel Dennett's idea of memes: "The radically new kind of entity created when a particular kind of animal is properly furnished (or infested) with memes [or, roughly speaking, ideas] is commonly called a person" (in Dennett [19] p. 40). The implication is that memes exist outside of their continual genesis in relational fluidity, although the word has been taken up culturally in a less deterministic way.

25. Giles Mayley, "Guiding or Hiding: Explorations into the Effects of Learning on the Rate of Evolution," in Husbands and Harvey [10] pp. 135-144.

26. Inman Harvey and Terry Bossomaier, "Time Out of Joint: Attractors in Asynchronous Random Boolean Networks," in Husbands and Harvey [10] pp. 67-75.

27. See Marcel Duchamp, Notes and Projects for the Large Glass, Arturo Schwarz, ed. (New York: Abrams, 1969). See also Craig E. Adcock, Marcel Duchamp's Notes from the Large Glass: An N-Dimensional Analysis (Ann Arbor: University of Michigan Research Press, 1983). Adcock says (p. 48), "Both the readymade and the fourth dimension were methods of getting away from conventional systems," which Duchamp saw as preventing an openness to the unexpected that art should have.

Nell Tenhaaf is an electronic media artist and writer. She has exhibited widely and published numerous reviews and articles. Recently she has been presenting an Internet-based performance called Neonudism and has been included in the exhibitions "Odd Bodies/Corps étrangers" at the National Gallery of Canada and "Techno-seduction" at Cooper Union (New York). Her article "Mysteries of the Bioapparatus" appeared in the 1996 book Immersed in Technology: Art, Culture and Virtual Environments (The Banff Centre and MIT Press). Tenhaaf teaches in the visual arts department at York University (Toronto), and prior to this taught in the art department of Carnegie Mellon University (Pittsburgh).