Art and Artificial Life – a Primer

Simon Penny
University of California, Irvine
penny@uci.edu

INTRODUCTION

It was not until the late 1980s that the term ‘Artificial Life’ arose as a descriptor of a range of (mostly) computer-based research practices which sought alternatives to conventional Artificial Intelligence methods as a source of (quasi-) intelligent behavior in technological systems and artifacts. These practices included reactive and bottom-up robotics, computational systems which simulated evolutionary and genetic processes, and a range of other activities informed by biology and complexity theory. A general desire was to capture, harness or simulate the generative and ‘emergent’ qualities of ‘nature’ - of evolution, co-evolution and adaptation. ‘Emergence’ was a keyword in the discourse. Two decades later, the discourse of Artificial Life continues to have intellectual force, mystique and generative quality within the ‘computers and art’ community. This essay is an attempt to contextualise Artificial Life Art by providing an historical overview, and by providing background in the ideas which helped to form the Artificial Life movement in the late 1980s and early 1990s. It is prompted by the exhibition Emergence – Art and Artificial Life (Beall Center for Art and Technology, UCI, December 2009), which is a testament to the enduring and inspirational intellectual significance of ideas associated with Artificial Life.

Artificial Life could not have emerged as a persuasive paradigm without the easy availability of computation. This is not simply to proclaim, as did Christopher Langton, that Artificial Life was an exploration of life on a non-carbon substrate, but that Artificial Life is ‘native’ to computing in the sense that large-scale iterative process is crucial to the procedures which generate (most) artificial life phenomena. The notion that Artificial Life is life created an ethico-philosophical firestorm concerning intelligence, creativity and generativity in evolving and adaptive non-carbon-based life-forms.
Unfortunately but inescapably, such debate was often muddied by Extropian rhetoric asserting that in computers and robotics, humans were building the machine successors to biological (human) life. Artificial Life burst onto the cultural scene in the early 1990s, when artists and theorists were struggling with the practical and theoretical implications of computing – that is, it was contemporaneous with virtual reality, bottom-up robotics, autonomous agents, real-time computer graphics, the emergence of the internet and the web, and a general interest in interactivity and human-computer interaction. In part due to the interdisciplinarity of the moment, it was also a time when paradigms accepted within the scientific and technical communities were under interrogation – dualism, reductivism, cognitivism, and AI rooted in the ‘physical symbol system hypothesis’ among them. There were inklings of a Kuhnian paradigm shift in the wind.

Amongst the (small) community of interdisciplinary computer artists, a smaller subset was highly attentive to the emergence and activities of the Artificial Life community, because in these techniques was the promise of a kind of autonomously behaving art which could make its own decisions, based on its own interpretations of its world. That is, the methods of Artificial Life promised the possibility of the holy grail of machine creativity. The artist would become a gardener, a metadesigner, imposing constraints upon the environments of his creatures, which would then respond in potentially surprising ways. In some cases this activity was clad in overtly religious terms of ‘playing god’. One of the enduring fascinations of Alife is that simulated evolutionary systems did and do produce increasingly well adapted, efficient forms, which solve their problems in surprising ways and, in many cases, are structurally incomprehensible to programmers; that is, they are resistant to reverse engineering. Before discussing such artwork, it is necessary to recap some of the technical and philosophical pre-history of Artificial Life.

BIOLOGY, COMPUTING, AND ARTIFICIAL LIFE

Vitalism, Emergence and Self-Organisation

The question of what it is that distinguishes the living from the non-living has been a constant theme in philosophy and science. Henri Bergson posited the idea of an ‘élan vital’ or life force, an idea which was subsequently received with ridicule by mechanist scientists, who characterised the élan vital as the phlogiston of the life sciences. The spirit of vitalism has recurred in various discourses around emergence and self-organisation, ideas which have been central in cybernetics and artificial life. G. H. Lewes used the term emergence in its current sense as early as 1875, indicating the philosophical context for Bergson’s élan vital. J. S. Mill embraced such ideas: in A System of Logic (1843) he gave the term “heteropathic causation” to situations where an effect is the result of multiple combined causes. In his writings of the 1920s Samuel Alexander proposed a general theory of emergence which purported to explain the transition from non-living to living and from non-conscious to conscious. Such ideas were influential in fields as diverse as sociology and embryology. Hans Driesch, one of the founders of experimental embryology, subscribed to a notion of entelechy, a form of emergence. The mechanist/vitalist tension persisted throughout the twentieth century, and is easily detected in Artificial Life discourse.
Cybernetics and Biology

Biological and ecological metaphors were the stock-in-trade of cybernetics, as it was preoccupied with the integration of an entity within a context, and with the study of such systems of entities. In 1937, biologist Ludwig von Bertalanffy first presented his General Systems Theory, and this became central to the emerging field of cybernetics during the formative Macy Conferences of 1946-53. Ross Ashby coined the term ‘self-organising system’ in 1947, and it was taken up by Norbert Wiener among others. The term self-organisation refers to processes whereby global patterns arise from multiple or iterated interactions at lower levels of the system. Canonical examples are the organisation of social insects and the emergence of mind from neural processes. Other cybernetic luminaries, such as Stafford Beer and Heinz von Foerster, were likewise preoccupied with self-organisation, an idea grouped in the early cybernetic literature with ‘adaptive’, ‘purposive’ and even ‘teleological’ systems. As a meta-discipline, cybernetics wielded significant influence in the ’60s, in biology (systems ecology), sociology (Luhmann), business management (Beer) and the arts (Burnham).

Systems, Information, Software

As I have discussed elsewhere, two qualities of computing paradigms and emerging discourse made cybernetic approaches increasingly incomprehensible. First, an increasing commitment to the concept of intelligence-as-reasoning (the physical symbol system hypothesis of Newell and Simon), as opposed to intelligence-as-adaptation. Second, an increasing commitment to the hardware/software dualism, which made the idea of the integration of intelligence within (biological) matter itself problematic. The clear distinction between information and matter was not part of the cybernetic paradigm.

In 1948, Claude Shannon published his "A Mathematical Theory of Communication", in which he formalized his ‘information theory’. [18] Earlier, he had done fundamental work applying Boolean logic to electrical engineering, had written his PhD thesis developing an algebra for genetics (!), and had worked with Alan Turing during the second world war. In the context of this discussion, it is important therefore to note that the very concept of ‘information’ was in the process of being technically formalized at the time (the post-war years). Arguably the most significant development was the formulation of the notion of ‘software’ as a (portable) information artifact without material existence, which became axiomatic to the construction of computer science. The ramifications of this reification were slow to unfold. The idea of ‘code’ (a computer program) as something other than custom and handcrafted was also slow to develop, as was the notion of ‘platform independence’. Software as a ‘stand-alone’ information artifact was not reified as a commodity until well into the ’80s.

The influence of cybernetics waned in later decades, in part due to the ascendancy of approaches related to the development of digital computing. Cybernetics went undercover, so to speak, as systems theory and as control theory. Ideas of self-organisation and emergent order percolated through the more systems-oriented parts of the Alife community. Many of the ideas central to cybernetics reappear under slightly different terminology in artificial life discourse.
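This notion of self-organisation, global order arising from iterated local interaction, is easy to demonstrate computationally. The sketch below is a minimal illustration only: it uses Wolfram’s elementary cellular automata, a 1980s formalism rather than anything from the cybernetic literature, and the rule number, grid width and number of generations are arbitrary choices. Each cell consults only itself and its two immediate neighbours, yet the global pattern that unfolds is complex and unpredictable.

```python
# A one-dimensional cellular automaton (Wolfram's elementary Rule 30).
# Each cell updates from purely local information (itself and its two
# neighbours), yet a complex, unpredictable global pattern emerges.
# RULE, width and generation count are arbitrary illustrative choices.

RULE = 30  # try 90 or 110 for other characteristic patterns

def step(cells, rule=RULE):
    """Update every cell from its 3-cell neighbourhood (ends wrap around)."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

cells = [0] * 79
cells[len(cells) // 2] = 1                  # a single 'on' cell to start
for _ in range(40):                         # print 40 generations
    print(''.join('#' if c else ' ' for c in cells))
    cells = step(cells)
```

No cell ‘knows’ the global pattern; the order (and disorder) at the macro level is entirely a product of iterated local interaction, which is the sense of ‘emergence’ at stake in both the cybernetic and the Artificial Life literatures.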
Central to cybernetic thinking were questions of self-organisation and purposive behavior, the relationship of an entity to its (changing) environment, and its real-time response and adaptability - interactions characterised as ‘feedback’. In artificial life, these ideas are clad in terms of autonomous agents, reactive insect-like robots, simulated evolution in fitness landscapes, emergence and self-organized criticality. And indeed, theorists like Peter Cariani [6] and others explicitly bring systems theory and cybernetic theory to bear on Artificial Life.

Biotech and Alife

In 1953, Watson and Crick first announced the structure of DNA, building on work of Linus Pauling, Rosalind Franklin and others. Analogies from both cryptography and computer programming are everywhere in the language of genetics, and seem to have been from the outset. (Note that a Univac, the first ‘mass-produced’ computer, was installed in the US Census Bureau in 1951.) Watson and Crick made explicit analogies between computer code and genetic code, with DNA codons being conceived as words in a DNA codescript. They explicitly described DNA in computer terms as the genetic ‘code’, comparing the egg cell to a computer tape. The Human Genome Project began in 1990, and was headed by none other than James Watson.

Like any structuring metaphor, computer analogies doubtless had significant influence on the way DNA and genetics are thought about, particularly by laying the fallacious hardware/software binary back onto biological matter - constructing DNA as ‘information’ as opposed to the presumably information-free cellular matter. What is seldom noted is that the conception of computer code and computer programming in 1950 was radically different from what it became fifty years later. The analogy of DNA to machine code has some validity; the analogy of bio-genetic operations to contemporary high-level programming environments is rather more complex and tenuous, and certainly demands critical interrogation. The treatment of DNA as computer code laid the conceptual groundwork for mixings of genetics and computing, such as genetic algorithms and biological computing – deploying genetic and biological processes as components in Boolean and similar computational processes. This unremarked historical drift of denotation has also permitted not-always-entirely-principled mixings of biology and computing, such as the construction of the possibility of living computer code (i.e. artificial life).

DNA, matter and information

Cybernetics and digital computing deployed differing metaphors from biology, and as we have seen, the conception of genetic information owed much to the conception of the computer program. The conception of the genetic program as deployed by Watson and Crick did not specifically dissociate the genetic ‘information’ from its materiality, but by the late ’80s it was possible for Artificial Life adherents to speak in these terms. A basic premise of Artificial Life, in the words of one of its major proponents, Christopher Langton, is the possibility of separation of the ‘informational content’ of life from its ‘material substrate’. By contrast, embryological research indicates that the self-organising behavior of large molecules provides (at least) a structural armature upon which the DNA can do its work. That is: some of the ‘information’ necessary for reproduction and evolution is not in the DNA but elsewhere, integrated into the ‘material substrate’.
Alvaro Moreno argues for a ‘deeply entangled’ relationship between explicit genetic information and the implicit self-organising capacity of organisms.

Hard and Soft Artificial Life

Chris Langton, an outspoken advocate for Artificial Life, referred to it as "a biology of the possible", and was wont to make proclamations such as: “We would like to build models that are so lifelike that they would cease to be models of life and becomes [sic] examples of life themselves”. [12] In what may have been a rhetorical push-start to the Artificial Life movement, the community purported to divide itself into hard and soft factions. The Hard Alifers maintained that silicon-based ‘life’ was indeed alive by any reasonable definition. They argued that biology must include the study of digital life, and must arrive at some universal laws concerning "wet life" and digital life.

A major example for these discussions was Tom Ray’s Tierra system, created around 1990. Tierra executes in a ‘virtual computer’ within the host computer, in which small programs compete for CPU cycles and memory space. Tierra generated a simulated ecosystem in which species of digital entities breed, hybridise and compete for resources. Tierra would be set to run overnight, then inspected for new forms. While Tierra was framed in ecological and biological language, it does not employ Genetic Algorithms per se; its code was in fact based on the early and esoteric computer game Core War. The major significance of Tierra was that not only did forms evolve to compete better, but various kinds of biological survival strategies emerged, such as host/parasite relations, and cycles of new aggressive behaviors and new defensive behaviors. Some years later, Ray made a proposal to promote “digital biodiversity": a distributed digital wildlife preserve on the internet in which digital organisms might evolve, circumnavigating diurnally to available CPUs. He noted that "evolution just naturally comes up with useful things" and argued that these creatures would evolve unusual and unpredictable abilities (such as good net navigation and CPU-sensing abilities), and that these organisms could then be captured and domesticated. [17]
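Tierra’s self-replicating machine-code organisms are too involved to reproduce here, but the general scheme of simulated evolution running through this work, random variation plus selection on a fitness landscape, can be suggested with a toy genetic algorithm. The sketch below is a minimal illustration only: the all-ones ‘target’, population size, generation count and mutation rate are arbitrary assumptions of mine, and this is a conventional genetic algorithm, not Tierra’s ecology.

```python
# A toy genetic algorithm: variation plus selection on a simple fitness
# landscape. This is NOT Tierra (whose organisms are self-replicating
# machine code); it only suggests the general shape of simulated evolution.
# TARGET, POP, GENS and MUT are arbitrary illustrative parameters.
import random

TARGET = [1] * 32                 # the fittest genome in this toy 'environment'
POP, GENS, MUT = 60, 80, 0.02     # population size, generations, mutation rate

def fitness(genome):
    # Number of loci matching the target: a trivial fitness landscape.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    # Flip each bit with small probability MUT.
    return [1 - g if random.random() < MUT else g for g in genome]

def crossover(a, b):
    # Single-point crossover of two parent genomes.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP)]
for gen in range(GENS):
    pop.sort(key=fitness, reverse=True)         # rank by fitness
    if fitness(pop[0]) == len(TARGET):          # stop if fully adapted
        break
    parents = pop[:POP // 2]                    # truncation selection
    pop = parents + [mutate(crossover(*random.sample(parents, 2)))
                     for _ in range(POP - len(parents))]
print(f"generation {gen}: best fitness {fitness(pop[0])}/{len(TARGET)}")
```

Even at this toy scale the population reliably climbs its fitness landscape without any central plan or explicit design; it is this property that led artists to see in such techniques a route toward machine creativity.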
Mimesis, Art and Artificial Life

One of the major preoccupations of western art has been mimesis, the desire to create persuasive likeness. Although the modern period saw a move away from this idea in the fine arts toward various notions of abstraction, mimesis is the preoccupation of popular media culture: cinema, television, computer games. "Abstract" television is a rare thing indeed! For the fine arts, the prototypical mimetic moment is the story of Parrhasius and Zeuxis: "[Parrhasius] entered into a competition with Zeuxis. Zeuxis produced a picture of grapes so dexterously represented that birds began to fly down to eat from the painted vine. Whereupon Parrhasius designed so life-like a picture of a curtain that Zeuxis, proud of the verdict of the birds, requested that the curtain should now be drawn back and the picture displayed. When he realized his mistake, with a modesty that did him honour, he yielded up the palm, saying that whereas he had managed to deceive only birds, Parrhasius had deceived an artist."

Although we regard classical Greek sculpture as a high point of mimesis, I contend that at the time the static nature of sculpture was not regarded as an esthetic requirement; it was purely a technical constraint. The Greeks stuccoed and painted their sculptures in a highly lifelike manner. My guess is that if the Greeks could have made soft fleshy sculpture, they would have. Hero of Alexandria was renowned for his pneumatic automata, which combined static sculptural mimesis with human-like (if repetitive) movement. The famous clockwork automata of the C18th were capable of much more complex behavior than Hero's pneumatic automata. The "Scribe" by Jaquet-Droz could dip its pen and write lines of elegant script. Vaucanson's famous Duck is said to have been able to flap its wings, eat, and, with a characteristically duck-like wag of the tail, excrete foul-smelling waste matter! It is of note not simply that these clockworks were contemporary with the first programmable device, the Jacquard weaving loom, but also that their behavior was constructed from mechanical "logic" much like that which Babbage used for his difference engine. We should further note that these automata were not regarded as fine art but simply as amusements.

The industrial era equipped the automaton with reliable structure and mechanism, and the possibility of autonomy via untethered power sources, first steam, then electric. The image of the mechanical man became a cultural fixture. Literature was populated with a veritable army of mechanical men (and women): from pathetic representations like the Tin Man in the Wizard of Oz, to the mechanical girlfriend built by Thomas Edison in Tomorrow's Eve by Villiers de l'Isle-Adam, and the dystopic portrayals of Mary Shelley's Frankenstein, Fritz Lang's Metropolis and Karel Čapek's R.U.R. (Rossum's Universal Robots), the dramatic work in which the term "robot" originated. It was the move into the electronic that began to offer first the simulation of reflexes, then a modicum of intelligence. In the context of this historical trajectory, we must consider Artificial Intelligence as a continuation of this broad cultural anthropomorphic and mimetic trajectory. Indeed, Alan Turing defined the entire project as anthropomorphic with his test for artificial intelligence, now referred to as the "Turing Test". Simply put, this test says that if you can't tell it's not a person, then it has human intelligence.

ARTIFICIAL LIFE AT 21

Roughly speaking, Artificial Life and Artificial Life Art have existed for two decades. Over that period, the computational capability of consumer computer technology has advanced profoundly, as has our acculturation to it. Daily, we casually do things on our phones (and complain about them) that were out of reach of million-dollar supercomputers two decades ago.
We must bear this changing reality in mind when viewing early Artificial Life Art. The ongoing lively interest in this interdisciplinary field is attested by the fact that the VIDA Art and Artificial Life Award is now in its 13th year. As is the case with much research in computational techniques, much of the basic Artificial Life Art research has now found its way into, or influenced, larger-scale and commodity products. As mentioned above, the computer game Spore (Will Wright/Maxis) is a clear descendant of numerous similar art projects. Less obvious is the fact that the vast array of techniques for generating synthetic but natural-looking landscapes, weather patterns, vegetation and plants, animals, and synthetic characters and crowds (and their autonomous and group behaviors), which we see in movies, computer games and virtual environments, all have some connection to the Artificial Life research of the 1990s.

The foregoing is but a cursory introduction to the history and theory of Artificial Life Art. I hope that it creates interest and provides a context for further research.

Simon Penny, August-November 2009