Lev Manovich
The Language of New Media

To Norman Klein / Peter Lunenfeld / Vivian Sobchack

What is New Media?

What is new media? We may begin answering this question by listing the categories commonly discussed under this topic in the popular press: the Internet, Web sites, computer multimedia, computer games, CD-ROMs and DVDs, virtual reality. Is this all new media is? For instance, what about television programs which are shot on digital video and edited on computer workstations? Or what about feature films which use 3D animation and digital compositing? Shall we count these as new media? In this case, what about all the images and text-image compositions — photographs, illustrations, layouts, ads — which are also created on computers and then printed on paper? Where shall we stop?

As can be seen from these examples, the popular definition of new media identifies it with the use of a computer for distribution and exhibition, rather than with production. Therefore, texts distributed on a computer (Web sites and electronic books) are considered to be new media; texts distributed on paper are not. Similarly, photographs which are put on a CD-ROM and require a computer to view them are considered new media; the same photographs printed as a book are not.

Shall we accept this definition? If we want to understand the effects of computerization on culture as a whole, I think it is too limiting. There is no reason to privilege the computer in the role of a media exhibition and distribution machine over the computer used as a tool for media production or as a media storage device. All have the same potential to change existing cultural languages. And all have the same potential to leave culture as it is.

The last scenario is unlikely, however. What is more likely is that just as the printing press in the fifteenth century and photography in the nineteenth century had a revolutionary impact on the development of modern society and culture, today we are in the middle of a new media revolution — the shift of all of our culture to computer-mediated forms of production, distribution and communication. This new revolution is arguably more profound than the previous ones, and we are just beginning to sense its initial effects. Indeed, the introduction of the printing press affected only one stage of cultural communication — the distribution of media. In the case of photography, its introduction affected only one type of cultural communication — still images. In contrast, the computer media revolution affects all stages of communication, including acquisition, manipulation, storage and distribution; it also affects all types of media — text, still images, moving images, sound, and spatial constructions.

How shall we begin to map out the effects of this fundamental shift? What are the ways in which the use of computers to record, store, create and distribute media makes it "new"?

In the section "How Media Became New" I show that new media represents a convergence of two separate historical trajectories: computing and media technologies. Both begin in the 1830s with Babbage's Analytical Engine and Daguerre's daguerreotype. Eventually, in the middle of the twentieth century, a modern digital computer is developed to perform calculations on numerical data more efficiently; it takes over from the numerous mechanical tabulators and calculators already widely employed by companies and governments since the turn of the century.
In parallel, we witness the rise of modern media technologies which allow the storage of images, image sequences, sounds and texts using different material forms: a photographic plate, film stock, a gramophone record, etc. The synthesis of these two histories? The translation of all existing media into numerical data accessible to computers. The result is new media: graphics, moving images, sounds, shapes, spaces and texts which become computable, i.e., simply another set of computer data.

In "Principles of New Media" I look at the key consequences of this new status of media. Rather than focusing on familiar categories such as interactivity or hypermedia, I suggest a different list. This list reduces all principles of new media to five: numerical representation, modularity, automation, variability and cultural transcoding.

In the last section, "What New Media is Not," I address other principles which are often attributed to new media. I show that these principles can already be found at work in older cultural forms and media technologies such as cinema, and therefore they by themselves are not sufficient to distinguish new media from the old.

How Media Became New

On August 19, 1839, the Palace of the Institute in Paris was filled with curious Parisians who had come to hear the formal description of the new reproduction process invented by Louis Daguerre. Daguerre, already well known for his Diorama, called the new process daguerreotype. According to a contemporary, "a few days later, opticians' shops were crowded with amateurs panting for daguerreotype apparatus, and everywhere cameras were trained on buildings. Everyone wanted to record the view from his window, and he was lucky who at first trial got a silhouette of roof tops against the sky." [10] The media frenzy had begun. Within five months more than thirty different descriptions of the technique were published all around the world: Barcelona, Edinburgh, Halle, Naples, Philadelphia, Saint Petersburg, Stockholm. At first, daguerreotypes of architecture and landscapes dominated the public's imagination; two years later, after various technical improvements to the process, portrait galleries opened everywhere — and everybody rushed in to have their picture taken by the new media machine. [11]

In 1833 Charles Babbage began designing a device he called the Analytical Engine. The Engine contained most of the key features of the modern digital computer. Punch cards were used to enter both data and instructions. This information was stored in the Engine's memory. A processing unit, which Babbage referred to as a "mill," performed operations on the data and wrote the results to memory; final results were to be printed out on a printer. The Engine was designed to be capable of doing any mathematical operation; not only would it follow the program fed into it by cards, but it would also decide which instructions to execute next, based upon intermediate results. However, in contrast to the daguerreotype, not a single copy of the Engine was ever completed. So while the invention of this modern media tool for the reproduction of reality impacted society right away, the impact of the computer was yet to be measured.

Interestingly, Babbage borrowed the idea of using punch cards to store information from an earlier programmed machine. Around 1800, J.M. Jacquard invented a loom which was automatically controlled by punched paper cards. The loom was used to weave intricate figurative images, including Jacquard's portrait.
This specialized graphics computer, so to speak, inspired Babbage in his work on the Analytical Engine, a general computer for numerical calculations. As Ada Augusta, Babbage's supporter and the first computer programmer, put it, "the Analytical Engine weaves algebraical patterns just as the Jacquard loom weaves flowers and leaves." [12] Thus a programmed machine was already synthesizing images even before it was put to processing numbers. The connection between the Jacquard loom and the Analytical Engine is not something historians of computers make much of, since for them computer image synthesis represents just one application of the modern digital computer among thousands of others; but for a historian of new media it is full of significance.

We should not be surprised that both trajectories — the development of modern media and the development of computers — began around the same time. Both media machines and computing machines were absolutely necessary for the functioning of modern mass societies. The ability to disseminate the same texts, images and sounds to millions of citizens, thus assuring that they would hold the same ideological beliefs, was as essential as the ability to keep track of their birth records, employment records, medical records, and police records. Photography, film, the offset printing press, radio and television made the former possible, while computers made the latter possible. Mass media and data processing are the complementary technologies of a modern mass society; they appear together and develop side by side, making this society possible.

For a long time the two trajectories ran in parallel without ever crossing paths. Throughout the nineteenth and the early twentieth century, numerous mechanical and electrical tabulators and calculators were developed; they gradually became faster and their use more widespread. In parallel, we witness the rise of modern media which allows the storage of images, image sequences, sounds and texts in different material forms: a photographic plate, film stock, a gramophone record, etc.

Let us continue tracing this joint history. In the 1890s modern media took another step forward as still photographs were put in motion. In January 1893, the first movie studio — Edison's "Black Maria" — started producing twenty-second shorts which were shown in special Kinetoscope parlors. Two years later the Lumière brothers showed their new Cinématographe camera/projector hybrid, first to a scientific audience and later, in December 1895, to the paying public. Within a year, audiences in Johannesburg, Bombay, Rio de Janeiro, Melbourne, Mexico City, and Osaka were subjected to the new media machine, and they found it irresistible. [13] Gradually the scenes grew longer, the staging of reality before the camera and the subsequent editing of its samples became more intricate, and the copies multiplied. They would be sent to Chicago and Calcutta, to London and St. Petersburg, to Tokyo and Berlin, and to thousands and thousands of smaller places. Film images would soothe movie audiences, who were only too eager to escape the reality outside, a reality which could no longer be adequately handled by their own sampling and data processing systems (i.e., their brains). Periodic trips into the dark relaxation chambers of movie theaters became a routine survival technique for the subjects of modern society.

The 1890s was the crucial decade not only for the development of media, but also for computing.
If individuals' brains were overwhelmed by the amounts of information they had to process, the same was true of corporations and of governments. In 1887, the U.S. Census Office was still interpreting the figures from the 1880 census. For the 1890 census, the Census Office adopted electric tabulating machines designed by Herman Hollerith. The data collected for every person was punched into cards; 46,804 enumerators completed forms for a total population of 62,979,766. The Hollerith tabulator opened the door for the adoption of calculating machines by business; during the next decade electric tabulators became standard equipment in insurance companies, public utility companies, railroads and accounting departments. In 1911, Hollerith's Tabulating Machine Company was merged with three other companies to form the Computing-Tabulating-Recording Company; in 1914 Thomas J. Watson was chosen as its head. Ten years later its business had tripled, and Watson renamed the company the International Business Machines Corporation, or IBM. [14]

We are now in the new century. The year is 1936. That year the British mathematician Alan Turing wrote a seminal paper entitled "On Computable Numbers." In it he provided a theoretical description of a general-purpose computer later named after its inventor: the Universal Turing Machine. Even though it was capable of only four operations, the machine could perform any calculation which could be done by a human, and it could also imitate any other computing machine. The machine operated by reading and writing numbers on an endless tape. At every step the tape would be advanced to retrieve the next command, to read the data, or to write the result. Its diagram looks suspiciously like a film projector. Is this a coincidence?

If we believe the word cinematograph, which means "writing movement," the essence of cinema is recording and storing visible data in a material form. A film camera records data on film; a film projector reads it off. This cinematic apparatus is similar to a computer in one key respect: a computer's program and data also have to be stored in some medium. This is why the Universal Turing Machine looks like a film projector. It is a kind of film camera and film projector at once: reading instructions and data stored on an endless tape and writing them to other locations on this tape. In fact, the development of a suitable storage medium and a method for coding data represent important parts of the pre-histories of both cinema and the computer. As we know, the inventors of cinema eventually settled on using discrete images recorded on a strip of celluloid; the inventors of the computer — which needed much greater speed of access as well as the ability to quickly read and write data — eventually came to store it electronically in binary code.

In the same year, 1936, the two trajectories came even closer together. Starting that year, and continuing into the Second World War, the German engineer Konrad Zuse was building a computer in the living room of his parents' apartment in Berlin. Zuse's computer was the first working digital computer. One of his innovations was program control by punched tape. The tape Zuse used was actually discarded 35 mm movie film. [15] One of the surviving pieces of this film shows binary code punched over the original frames of an interior shot. A typical movie scene — two people in a room involved in some action — becomes a support for a set of computer commands.
Whatever meaning and emotion was contained in this movie scene has been wiped out by its new function as a data carrier. The pretense of modern media to create simulations of sensible reality is similarly canceled; media is reduced to its original condition as an information carrier, nothing less, nothing more. In a technological remake of the Oedipal complex, a son murders his father. The iconic code of cinema is discarded in favor of the more efficient binary one. Cinema becomes a slave to the computer.

But this is not yet the end of the story. Our story has a new twist — a happy one. Zuse's film, with its strange superimposition of binary code over iconic code, anticipates the convergence which got underway half a century later. The two separate historical trajectories finally meet. Media and computer — Daguerre's daguerreotype and Babbage's Analytical Engine, the Lumière Cinématographe and Hollerith's tabulator — merge into one. All existing media are translated into numerical data accessible to computers. The result: graphics, moving images, sounds, shapes, spaces and texts become computable, i.e., simply another set of computer data. In short, media becomes new media.

This meeting changes the identity both of media and of the computer itself. No longer just a calculator, a control mechanism or a communication device, the computer becomes a media processor. Before, the computer could read a row of numbers, outputting a statistical result or a gun trajectory. Now it can read pixel values, blurring the image, adjusting its contrast, or checking whether it contains the outline of an object. Building upon these lower-level operations, it can also perform more ambitious ones: searching image databases for images similar in composition or content to an input image; detecting shot changes in a movie; or synthesizing the movie shot itself, complete with setting and actors. In a historical loop, the computer has returned to its origins. No longer just an Analytical Engine, suitable only for crunching numbers, the computer has become Jacquard's loom — a media synthesizer and manipulator.

Principles of New Media

The identity of media has changed even more dramatically. Below I summarize some of the key differences between old and new media. In compiling this list of differences I have tried to arrange them in a logical order. That is, principles 3 through 5 are dependent on principles 1 and 2. This is not dissimilar to axiomatic logic, where certain axioms are taken as starting points and further theorems are proved on their basis.

Not every new media object obeys these principles. They should be considered not as absolute laws but rather as general tendencies of a culture undergoing computerization. As computerization affects deeper and deeper layers of culture, these tendencies will manifest themselves more and more.

1. Numerical Representation

All new media objects, whether they are created from scratch on computers or converted from analog media sources, are composed of digital code; they are numerical representations. This has two key consequences:

1.1. A new media object can be described formally (mathematically). For instance, an image or a shape can be described using a mathematical function.

1.2. A new media object is subject to algorithmic manipulation. For instance, by applying appropriate algorithms, we can automatically remove "noise" from a photograph, improve its contrast, locate the edges of shapes, or change its proportions. In short, media becomes programmable.
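Principle 1.2 can be made concrete in a few lines of code. What follows is a minimal illustrative sketch, not any particular program's implementation: it treats a greyscale image as an array of numbers and "improves its contrast" by stretching the pixel values to span the full 0-255 range.

```python
import numpy as np

def stretch_contrast(image: np.ndarray) -> np.ndarray:
    """Linearly stretch pixel values to span the full 0-255 range."""
    lo, hi = image.min(), image.max()
    if hi == lo:                 # a flat image has no contrast to stretch
        return image.copy()
    stretched = (image - lo) * (255.0 / (hi - lo))
    return stretched.astype(np.uint8)

# A tiny 2x3 "image" whose values occupy only part of the available range.
img = np.array([[50, 80, 110],
                [60, 90, 120]], dtype=np.uint8)
print(stretch_contrast(img))     # values now span 0..255
```

Because the image is now simply another set of computer data, the same few lines apply equally to a scanned photograph, a video frame, or a rendered 3D view.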
When new media objects are created on computers, they originate in numerical form. But many new media objects are converted from various forms of old media. Although most readers understand the difference between analog and digital media, a few notes should be added on the terminology and on the conversion process itself. The process assumes that the data is originally continuous, i.e., "the axis or dimension that is measured has no apparent indivisible unit from which it is composed." [16] Converting continuous data into a numerical representation is called digitization. Digitization consists of two steps: sampling and quantization. First, the data is sampled, most often at regular intervals, such as the grid of pixels used to represent a digital image. Technically, a sample is defined as "a measurement made at a particular instant in space and time, according to a specified procedure." The frequency of sampling is referred to as resolution. Sampling turns continuous data into discrete data, that is, data occurring in distinct units: people, pages of a book, pixels. Second, each sample is quantized, i.e., assigned a numerical value drawn from a defined range (such as 0-255 in the case of an 8-bit greyscale image). [17]

While some old media such as photography and sculpture are truly continuous, most involve a combination of continuous and discrete coding. One example is motion picture film: each frame is a continuous photograph, but time is broken into a number of samples (frames). Video goes one step further by also sampling the frame along the vertical dimension (scan lines). Similarly, a photograph printed using a halftone process combines discrete and continuous representations: such a photograph consists of a number of orderly dots (i.e., samples), although the diameters and areas of the dots vary continuously. As this last example demonstrates, while old media contain level(s) of discrete representation, the samples were never quantized. This quantization of samples is the crucial step accomplished by digitization.
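A minimal sketch of these two steps in code, under simplifying assumptions (a one-dimensional signal, uniform sampling, a signal that is not flat), rather than a description of any actual device:

```python
import math

def digitize(signal, duration, num_samples, levels=256):
    """Step 1: sample a continuous signal at regular intervals.
    Step 2: quantize each sample to one of `levels` integer values."""
    step = duration / num_samples          # sampling interval (the resolution)
    samples = [signal(i * step) for i in range(num_samples)]
    lo, hi = min(samples), max(samples)    # assumes a non-flat signal
    return [round((s - lo) / (hi - lo) * (levels - 1)) for s in samples]

# A continuous "signal": a sine wave standing in for sound, light, etc.
wave = lambda t: math.sin(2 * math.pi * t)
print(digitize(wave, duration=1.0, num_samples=8))
# -> [128, 218, 255, 218, 128, 37, 0, 37]
```

Raising num_samples increases the resolution; raising levels gives finer quantization (256 levels corresponds to the 8-bit greyscale image mentioned above).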
But why, we may ask, were modern media technologies often in part discrete? The key assumption of modern semiotics is that communication requires discrete units. Without discrete units, there is no language. As Roland Barthes put it, "language is, as it were, that which divides reality (for instance the continuous spectrum of the colors is verbally reduced to a series of discontinuous terms)." [18] In postulating this, semioticians took human language as the prototypical example of a communication system. A human language is discrete on most scales: we speak in sentences; a sentence is made of words; a word consists of morphemes, and so on. If we follow the assumption that any form of communication requires discrete representation, we may expect that media used in cultural communication will have discrete levels. At first this explanation seems to work. Indeed, a film samples the continuous time of human existence into discrete frames; a drawing samples visible reality into discrete lines; and a printed photograph samples it into discrete dots. The assumption does not hold universally, however: photographs, for instance, do not have any apparent units. (Indeed, in the 1970s semiotics was criticized for its linguistic bias, and most semioticians came to recognize that a language-based model of distinct units of meaning cannot be applied to many kinds of cultural communication.) More importantly, the discrete units of modern media are usually not units of meaning, the way morphemes are. Neither film frames nor halftone dots have any relation to how a film or a photograph affects the viewer (except in modern art and avant-garde film — think of the paintings of Roy Lichtenstein and the films of Paul Sharits — which often make the "material" units of media into units of meaning).

The more likely reason why modern media has discrete levels is that it emerged during the Industrial Revolution. In the nineteenth century, a new organization of production known as the factory system gradually replaced artisan labor. It reached its classical form when Henry Ford installed the first assembly line in his factory in 1913. The assembly line relied on two principles. The first was the standardization of parts, already employed in the production of military uniforms in the nineteenth century. The second, newer principle was the separation of the production process into a set of repetitive, sequential, and simple activities that could be executed by workers who did not have to master the entire process and could be easily replaced.

Not surprisingly, modern media follows the logic of the factory, not only in terms of division of labor as witnessed in Hollywood film studios, animation studios or television production, but also on the level of its material organization. The invention of typesetting machines in the 1880s industrialized publishing while leading to the standardization of both type design and the number and types of fonts used. In the 1890s cinema combined automatically produced images (via photography) with a mechanical projector. This required the standardization of both image dimensions (size, frame ratio, contrast) and of the temporal sampling rate (see the "Digital Cinema" section for more detail). Even earlier, in the 1880s, the first television systems already involved the standardization of sampling in both time and space. These modern media systems also followed factory logic in that once a new "model" (a film, a photograph, an audio recording) was introduced, numerous identical media copies would be produced from this master. As I will show below, new media follows, or actually runs ahead of, a quite different logic of post-industrial society — that of individual customization rather than mass standardization.

2. Modularity

This principle can be called the "fractal structure of new media." Just as a fractal has the same structure on different scales, a new media object has the same modular structure throughout. Media elements, be they images, sounds, shapes, or behaviors, are represented as collections of discrete samples (pixels, polygons, voxels, characters, scripts). These elements are assembled into larger-scale objects but continue to maintain their separate identities. The objects themselves can be combined into even larger objects — again, without losing their independence. For example, a multimedia "movie" authored in the popular Macromedia Director software may consist of hundreds of still images, QuickTime movies, and sounds which are all stored separately and loaded at run time. Because all elements are stored independently, they can be modified at any time without having to change the Director movie itself. These movies can be assembled into a larger "movie," and so on. Another example of modularity is the concept of "object" used in Microsoft Office applications. When an object is inserted into a document (for instance, a media clip inserted into a Word document), it continues to maintain its independence and can always be edited with the program originally used to create it.
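The Director example above can be rendered in a few lines of code. The sketch below is a generic illustration of the pattern (elements stored separately, assembled at run time, each retaining its identity), not Macromedia's actual architecture; the file names and helper functions are invented.

```python
def load(source):
    """Stand-in for reading a separately stored media file from disk."""
    return f"<data from {source}>"

def render(kind, data):
    print(f"rendering {kind}: {data}")

# Each media element is stored independently and keeps its own identity.
movie = [
    {"type": "image", "source": "intro.jpg"},
    {"type": "sound", "source": "theme.aif"},
    {"type": "video", "source": "clip01.mov"},
]

def play(composition):
    """Assemble the larger object at run time from its independent parts."""
    for element in composition:
        render(element["type"], load(element["source"]))

play(movie)

# Because the elements stay separate, replacing one never touches the others,
# and a movie can in turn become part of a still larger composition.
larger_movie = movie + [{"type": "image", "source": "credits.gif"}]
play(larger_movie)
```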
Yet another example of modularity is the structure of an HTML document: with the exception of text, it consists of a number of separate objects — GIF and JPEG images, media clips, VRML scenes, Shockwave and Flash movies — which are all stored independently, locally and/or on a network. In short, a new media object consists of independent parts which, in their turn, consist of smaller independent parts, and so on, down to the level of the smallest "atoms" such as pixels, 3D points or characters.

The World Wide Web as a whole is also completely modular. It consists of numerous Web pages, each in turn consisting of separate media elements. Every element can always be accessed on its own. Normally we think of elements as belonging to their corresponding Web sites, but this is just a convention, reinforced by commercial Web browsers. The Netomat browser, which extracts elements of a particular media type from different Web pages (for instance, only images) and displays them together without identifying the Web sites they come from, highlights for us this fundamentally discrete and non-hierarchical organization of the Web (see the introduction to the "Interface" chapter for more on this browser).

In addition to using the metaphor of a fractal, we can also make an analogy between the modularity of new media and structured computer programming. Structured programming, which became standard in the 1970s, involves writing small, self-sufficient modules (called in different computer languages subroutines, functions, procedures, or scripts) which are assembled into larger programs. Many new media objects are in fact computer programs which follow structured programming style. For example, most interactive multimedia applications are programs written in Macromedia Director's Lingo. A Lingo program defines scripts which control various repeated actions, such as clicking on a button; these scripts are assembled into larger scripts. In the case of new media objects which are not computer programs, an analogy with structured programming can still be made, because their parts can be accessed, modified or substituted without affecting the overall structure of the object. This analogy, however, has its limits. If a particular module of a computer program is deleted, the program will not run. In contrast, just as is the case with traditional media, deleting parts of a new media object does not render it meaningless. In fact, the modular structure of new media makes such deletion and substitution of parts particularly easy. For example, since an HTML document consists of a number of separate objects, each represented by a line of HTML code, it is very easy to delete, substitute or add new objects. Similarly, since in Photoshop the parts of a digital image are usually placed on separate layers, these parts can be deleted and substituted with a click of a button.

3. Automation

The numerical coding of media (principle 1) and the modular structure of a media object (principle 2) make it possible to automate many operations involved in media creation, manipulation and access. Thus human intentionality can be removed from the creative process, at least in part. [19]

The following are some examples of what can be called "low-level" automation of media creation, in which the computer user modifies or creates from scratch a media object using templates or simple algorithms.
These techniques are robust enough that they are included in most commercial software for image editing, 3D graphics, word processing, graphic layout, and so on. Image editing programs such as Photoshop can automatically correct scanned images, improving contrast range and removing noise. They also come with filters which can automatically modify an image, from creating simple variations of color to changing the whole image as though it were painted by Van Gogh, Seurat or another brand-name artist. Other computer programs can automatically generate 3D objects such as trees, landscapes and human figures, as well as detailed ready-to-use animations of complex natural phenomena such as fire and waterfalls. In Hollywood films, flocks of birds, ant colonies and crowds of people are automatically created by AL (artificial life) software. Word processing, page layout, presentation and Web creation programs come with "agents" which can automatically create the layout of a document. Writing software helps the user create literary narratives using highly formalized genre conventions. Finally, in what may be the most familiar experience of automated media generation for most computer users, many Web sites automatically generate Web pages on the fly when the user reaches the site. They assemble the information from databases and format it using generic templates and scripts.

Researchers are also working on what can be called "high-level" automation of media creation, which requires a computer to understand, to a certain degree, the meanings embedded in the objects being generated, i.e., their semantics. This research can be seen as part of a larger initiative of artificial intelligence (AI). As is well known, the AI project has achieved only very limited success since its beginnings in the 1950s. Correspondingly, work on media generation which requires an understanding of semantics is also still in the research stage and is rarely included in commercial software. Beginning in the 1970s, computers were often used to generate poetry and fiction. In the 1990s, the users of Internet chat rooms became familiar with bots — computer programs which simulate human conversation. Researchers at New York University demonstrated a "virtual theater" composed of a few "virtual actors" which adjust their behavior in real time in response to the user's actions. [20] The MIT Media Lab developed a number of projects devoted to "high-level" automation of media creation and use: a "smart camera" which can automatically follow the action and frame the shots given a script; [21] ALIVE, a virtual environment where the user interacts with animated characters; [22] and a new kind of human-computer interface where the computer presents itself to the user as an animated talking character. The character, generated by the computer in real time, communicates with the user using natural language; it also tries to guess the user's emotional state and to adjust the style of interaction accordingly. [23]

The area of new media where the average computer user encountered AI in the 1990s was not, however, the human-computer interface, but computer games. Almost every commercial game includes a component called an AI engine: the part of the game's computer code which controls its characters — the car drivers in a racing simulation, the enemy forces in a strategy game such as Command & Conquer, the individual enemies which keep attacking the user in first-person shooters such as Quake.
AI engines use a variety of approaches to simulate human intelligence, from rule-based systems to neural networks. Like AI expert systems, these characters have expertise in some well-defined but narrow area, such as attacking the user. But because computer games are highly codified and rule-based, these characters function very effectively. That is, they respond effectively to the few things the user is allowed to ask them to do: run forward, shoot, pick up an object. They cannot do anything else, but the game does not give the user an opportunity to test this. For instance, in a martial arts fighting game, I cannot ask questions of my opponent, nor do I expect him or her to start a conversation with me. All I can do is "attack" my opponent by pressing a few buttons, and within this highly codified situation the computer can "fight" me back very effectively. In short, computer characters can display intelligence and skill only because the programs put severe limits on our possible interactions with them. Put differently, computers can pretend to be intelligent only by tricking us into using a very small part of who we are when we communicate with them. So, to use another example, at the 1997 SIGGRAPH convention I was playing against both human and computer-controlled characters in a VR simulation of a non-existent sports game. All my opponents appeared as simple blobs covering a few pixels of my VR display; at this resolution, it made absolutely no difference who was human and who was not.

Along with "low-level" and "high-level" automation of media creation, another area of media use being subjected to increasing automation is media access. The switch to computers as a means of storing and accessing enormous amounts of media material — exemplified by the "media assets" stored in the databases of stock agencies and global entertainment conglomerates, as well as by the public "media assets" distributed across numerous Web sites — created the need to find more efficient ways to classify and search media objects. Word processors and other text management software have long provided the ability to search for specific strings of text and to automatically index documents. The UNIX operating system has also always included powerful commands to search and filter text files. In the 1990s software designers started to provide media users with similar abilities. Virage introduced its VIR Image Engine, which makes it possible to search for visually similar image content among millions of images, as well as a set of video search tools to allow the indexing and searching of video files. [24] By the end of the 1990s, the key Web search engines already included options to search the Internet by specific media type, such as images, video and audio.

The Internet, which can be thought of as one huge distributed media database, also crystallized the basic condition of the new information society: the over-abundance of information of all kinds. One response was the popular idea of software "agents" designed to automate the search for relevant information. Some agents act as filters, delivering small amounts of information given the user's criteria. Others allow users to tap into the expertise of other users, following their selections and choices.
For example, the MIT Software Agents Group developed such agents as BUZZwatch, which "distills and tracks trends, themes, and topics within collections of texts across time," such as Internet discussions and Web pages; Letizia, "a user interface agent that assists a user browsing the World Wide Web by… scouting ahead from the user's current position to find Web pages of possible interest"; and Footprints, which "uses information left by other people to help you find your way around." [25]

By the end of the twentieth century, the problem was no longer how to create a new media object such as an image; the new problem was how to find an object which already exists somewhere. That is, if you want a particular image, chances are it already exists — but it may be easier to create one from scratch than to find the existing one. Beginning in the nineteenth century, modern society developed technologies which automated media creation: the photo camera, the film camera, the tape recorder, the video recorder, etc. These technologies allowed us, over the course of one hundred and fifty years, to accumulate an unprecedented amount of media material: photo archives, film libraries, audio archives… This led to the next stage in media evolution: the need for new technologies to store, organize and efficiently access these media materials. The new technologies are all computer-based: media databases; hypermedia and other ways of organizing media material, such as the hierarchical file system itself; text management software; programs for content-based search and retrieval. Thus the automation of media access is the next logical stage of a process already put into motion when the first photograph was taken. The emergence of new media coincides with this second stage of media society, now concerned as much with accessing and re-using existing media as with creating new media. [26] (See the "Database" section for more on databases.)

4. Variability

A new media object is not something fixed once and for all, but something that can exist in different, potentially infinite versions. This is another consequence of the numerical coding of media (principle 1) and the modular structure of a media object (principle 2). Other terms often used in relation to new media which would be appropriate instead of "variable" are "mutable" and "liquid."

Old media involved a human creator who manually assembled textual, visual and/or audio elements into a particular composition or sequence. This sequence was stored in some material form, its order determined once and for all. Numerous copies could be run off from the master and, in perfect correspondence with the logic of an industrial society, they were all identical. New media, in contrast, is characterized by variability. Instead of identical copies, a new media object typically gives rise to many different versions. And rather than being created completely by a human author, these versions are often in part automatically assembled by a computer. (The already quoted example of Web pages automatically generated from databases using templates created by Web designers can be invoked here as well.) Thus the principle of variability is closely connected to automation.

Variability would also not be possible without modularity. Stored digitally, rather than in some fixed medium, media elements maintain their separate identities and can be assembled into numerous sequences under program control.
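This program-controlled assembly is easy to sketch. The few lines below, in the spirit of the database-driven Web sites already mentioned, keep media elements in a small "database," separate from any fixed sequence, and assemble a different version of the page for each request. The data, template, and translation step are invented for illustration.

```python
# Media elements kept in a "database," separate from any fixed sequence.
database = {
    "headline": "New Media Revolution Continues",
    "story":    "All existing media are being translated into numerical data.",
    "image":    "front_page.jpg",
}

# A generic template; the page itself does not exist until it is requested.
template = "<h1>{headline}</h1><img src='{image}'><p>{story}</p>"

def translate(text, lang):
    return f"[{lang}] {text}"        # stand-in for a real translation service

def serve_page(db, user_language="en"):
    """Assemble a customized page on the fly for each visitor."""
    page = template.format(**db)
    if user_language != "en":
        page = translate(page, user_language)
    return page

print(serve_page(database))          # one version of the object...
print(serve_page(database, "fr"))    # ...and another, from the same elements
```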
In addition, because the elements themselves are broken into discrete samples (for instance, an image is represented as an array of pixels), they can also be created and customized on the fly.

The logic of new media thus corresponds to the post-industrial logic of "production on demand" and "just in time" delivery, which were themselves made possible by the use of computers and computer networks at all stages of manufacturing and distribution. Here the "culture industry" (a term coined by Theodor Adorno and Max Horkheimer in the 1940s) is actually ahead of the rest of industry. The idea that a customer determines the exact features of her car at the showroom, the data is then transmitted to the factory, and hours later the new car is delivered, remains a dream; in the case of computer media, however, it is already a reality. Since the same machine is used as both showroom and factory — the same computer generates and displays media — and since the media exists not as a material object but as data which can be sent through wires at the speed of light, a customized version created in response to the user's input is delivered almost immediately. Thus, to continue with the same example, when you access a Web site, the server immediately assembles a customized Web page.

Here are some particular cases of the variability principle (most of them will be discussed in more detail in later chapters):

4.1. Media elements are stored in a media database; a variety of end-user objects, which vary in resolution, form and content, can be generated from this database, either beforehand or on demand. At first we may think that this is simply a particular technological implementation of the variability principle, but, as I will show in the "Database" section, in the computer age the database comes to function as a cultural form of its own. It offers a particular model of the world and of human experience. It also affects how the user conceives of the data it contains.

4.2. It becomes possible to separate the levels of "content" (data) and interface. A number of different interfaces can be created to the same data. A new media object can be defined as one or more interfaces to a multimedia database (see the introduction to the "Interface" chapter and the "Database" section for more discussion of this principle). [27]

4.3. Information about the user can be used by a computer program to automatically customize the media composition as well as to create the elements themselves. Examples: Web sites use information about the type of hardware and browser, or the user's network address, to automatically customize the site which the user will see; interactive computer installations use information about the user's body movements to generate sounds, shapes, and images, or to control the behavior of artificial creatures.

4.4. A particular case of 4.3 is branching-type interactivity (sometimes also called menu-based interactivity). This term refers to programs in which all the possible objects which the user can visit form a branching tree structure. When the user reaches a particular object, the program presents her with choices and lets her pick. Depending on the value chosen, the user advances along a particular branch of the tree. For instance, in Myst each screen typically contains a left and a right button; clicking on a button retrieves a new screen, and so on. In this case the information used by the program is the output of the user's cognitive process, rather than her network address or body position. (See "Menus, Filters, Plug-ins" for more discussion of this principle.)
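Branching interactivity reduces to a simple data structure: a tree whose nodes are already-generated screens and whose edges are the user's choices. A minimal sketch, with invented screens loosely in the spirit of the Myst example:

```python
# Each screen is a node; the user's choices select a branch of the tree.
tree = {
    "hallway":     {"left": "library",   "right": "observatory"},
    "library":     {"left": "book",      "right": "hallway"},
    "observatory": {"left": "telescope", "right": "hallway"},
    "book": {},
    "telescope": {},
}

def explore(node="hallway"):
    """Walk the fixed branching structure, driven by the user's input."""
    while tree[node]:
        choice = input(f"You are at the {node}. Go left or right? ")
        if choice in tree[node]:
            node = tree[node][choice]
    print(f"You have reached the {node}; this branch ends here.")

# explore()   # uncomment to navigate the tree interactively
```

All possible versions of the work exist in advance as paths through this fixed tree; nothing is generated on the fly. In the terms introduced below, this is closed rather than open interactivity.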
4.5. Hypermedia is another popular new media structure, conceptually close to branching-type interactivity (because quite often the elements are connected using a branching tree structure). In hypermedia, the multimedia elements making up a document are connected through hyperlinks. Thus the elements and the structure are independent of each other, rather than hardwired together as in traditional media. The World Wide Web is a particular implementation of hypermedia in which the elements are distributed throughout the network. Hypertext is a particular case of hypermedia which uses only one media type — text. How does the principle of variability work in this case? We can conceive of all possible paths through a hypermedia document as different versions of it. By following the links the user retrieves a particular version of the document.

4.6. Another way in which different versions of the same media object are commonly generated in computer culture is through periodic updates. Networks allow the content of a new media object to be periodically updated while keeping its structure intact. For instance, modern software applications can periodically check for updates on the Internet, then download and install these updates, sometimes without any action from the user. Most Web sites are also periodically updated, either manually or automatically, when the data in the databases which drive the sites changes. A particularly interesting case of this "updateability" feature is sites which update some information, such as stock prices or weather, continuously.

4.7. One of the most basic cases of the variability principle is scalability, in which different versions of the same media object can be generated at various sizes or levels of detail. The metaphor of a map is useful in thinking about the scalability principle. If we equate a new media object with a physical territory, different versions of this object are like maps of this territory generated at different scales. Depending on the scale chosen, a map provides more or less detail about the territory. Indeed, different versions of a new media object may vary strictly quantitatively, i.e., in the amount of detail present: for instance, a full-size image and its icon, automatically generated by Photoshop; a full text and its shorter version, generated by the AutoSummarize command in Microsoft Word 97; or the different versions which can be created using the Outline command in Word. Beginning with version 3 (1997), Apple's QuickTime format also made it possible to embed a number of different versions, differing in size, within a single QuickTime movie; when a Web user accesses the movie, a version is automatically selected depending on connection speed. A conceptually similar technique, called "distancing" or "level of detail," is used in interactive virtual worlds such as VRML scenes. A designer creates a number of models of the same object, each with progressively less detail. When the virtual camera is close to the object, a highly detailed model is used; if the object is far away, a less detailed version is automatically substituted by the program, saving the computation of detail which cannot be seen anyway.
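The quantitative case of scalability is easy to make concrete: from one master image, versions at several scales can be generated mechanically. A minimal sketch, assuming a square greyscale image whose size is divisible by each scaling factor:

```python
import numpy as np

def downscale(image: np.ndarray, factor: int) -> np.ndarray:
    """Produce a smaller version by averaging each factor x factor block."""
    h, w = image.shape
    return image.reshape(h // factor, factor,
                         w // factor, factor).mean(axis=(1, 3))

# One "master" image (here just a gradient) yields versions at several scales.
master = np.arange(64 * 64, dtype=float).reshape(64, 64)
for f in (2, 4, 8):
    print(f"1/{f} scale:", downscale(master, f).shape)
# -> (32, 32), (16, 16), (8, 8)
```

Each version, like a map drawn at a different scale, presents more or less detail of the same territory.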
New media also allows the creation of versions of the same object which differ from each other in more substantial ways. Here the comparison with maps of different scales no longer works. Examples of commands in commonly used software packages which allow the creation of such qualitatively different versions are Variations and Adjustment Layers in Photoshop 5 and the "writing style" option in Word's "Spelling and Grammar" command. More examples can be found on the Internet, where, beginning in the mid-1990s, it became common to create a few different versions of a Web site. The user with a fast connection can choose a rich multimedia version, while the user with a slow connection can settle for a more bare-bones version which loads faster.

Among new media artworks, David Blair's WaxWeb, a Web site which is an "adaptation" of an hour-long video narrative, offers a more radical implementation of the scalability principle. While interacting with the narrative, the user can at any point change the scale of representation, going from an image-based outline of the movie to a complete script or a particular shot, or a VRML scene based on this shot, and so on. [28] Another example of how the scalability principle can create a dramatically new experience of an old media object is Stephen Mamber's database-driven representation of Hitchcock's The Birds. Mamber's software generates a still for every shot of the film; it then automatically combines all the stills into a rectangular matrix. Every cell in the matrix corresponds to a particular shot from the film. As a result, time is spatialized, similar to how it was done in Edison's early Kinetoscope cylinders (see "The Myths of New Media"). Spatializing the film allows us to study its different temporal structures, which would be hard to observe otherwise. As in WaxWeb, the user can at any point change the scale of representation, going from the complete film to a particular shot.

As can be seen, the principle of variability is useful in allowing us to connect many important characteristics of new media which at first sight may appear unrelated. In particular, such popular new media structures as branching (or menu) interactivity and hypermedia can be seen as particular instances of the variability principle (4.4 and 4.5, respectively). In the case of branching interactivity, the user plays an active role in determining the order in which already generated elements are accessed. This is the simplest kind of interactivity; more complex kinds are also possible, in which both the elements and the structure of the whole object are either modified or generated on the fly in response to the user's interaction with the program. We can refer to such implementations as open interactivity, to distinguish them from the closed interactivity which uses fixed elements arranged in a fixed branching structure. Open interactivity can be implemented using a variety of approaches, including procedural and object-oriented computer programming, AI, AL, and neural networks. As long as there exists some kernel, some structure, some prototype which remains unchanged throughout the interaction, open interactivity can be thought of as a subset of the variability principle. Here a useful analogy can be made with Wittgenstein's theory of family resemblance, later developed into the influential theory of prototypes by the cognitive psychologist Eleanor Rosch. In a family, a number of relatives will share some features, although no single family member may possess all of them. Similarly, according to the theory of prototypes, the meanings of many words in a natural language derive not from a logical definition but from proximity to a certain prototype.
Hypermedia, the other popular structure of new media, can also be seen as a particular case of the more general principle of variability. According to the definition by Halasz and Schwartz, hypermedia systems "provide their users with the ability to create, manipulate and/or examine a network of information-containing nodes interconnected by relational links." [29] Since in new media the individual media elements (images, pages of text, etc.) always retain their individual identities (the principle of modularity), they can be "wired" together into more than one object. Hyperlinking is a particular way of achieving this wiring. A hyperlink creates a connection between two elements, for example between two words in two different pages, or a sentence on one page and an image on another, or two different places within the same page. The elements connected through hyperlinks can exist on the same computer or on different computers connected in a network, as in the case of the World Wide Web. If in traditional media the elements are "hardwired" into a unique structure and no longer maintain their separate identities, in hypermedia the elements and the structure are separate from each other. The structure of hyperlinks — typically a branching tree — can be specified independently of the contents of a document. To make an analogy with the grammar of a natural language as described in Noam Chomsky's early linguistic theory, [30] we can compare a hypermedia structure which specifies the connections between nodes with the deep structure of a sentence; a particular hypermedia text can then be compared with a particular sentence in a natural language. Another useful analogy is with computer programming. In programming, there is a clear separation between algorithms and data. An algorithm specifies the sequence of steps to be performed on any data, just as a hypermedia structure specifies a set of navigation paths (i.e., connections between the nodes) which can potentially be applied to any set of media objects.

The principle of variability also exemplifies how, historically, changes in media technologies are correlated with social change. If the logic of old media corresponded to the logic of industrial mass society, the logic of new media fits the logic of post-industrial society, which values individuality over conformity. In industrial mass society everybody was supposed to enjoy the same goods — and to hold the same beliefs. This was also the logic of media technology. A media object was assembled in a media factory (such as a Hollywood studio). Millions of identical copies were produced from a master and distributed to all the citizens. Broadcasting, cinema and print media all followed this logic. In a post-industrial society, every citizen can construct her own custom lifestyle and "select" her ideology from a large (but not infinite) number of choices. Rather than pushing the same objects and information to a mass audience, marketing now tries to target each individual separately. The logic of new media technology reflects this new social logic. Every visitor to a Web site automatically gets her own custom version of the site, created on the fly from a database. The language of the text, the contents, the ads displayed — all these can be customized by interpreting the information about where on the network the user is coming from; or, if the user has previously registered with the site, her personal profile can be used for this customization.
According to a report in USA Today (November 9, 1999), "Unlike ads in magazines or other real-world publications, 'banner' ads on Web pages change with every page view. And most of the companies that place the ads on the Web site track your movements across the Net, 'remembering' which ads you've seen, exactly when you saw them, whether you clicked on them, where you were at the time and the site you visited just before." [31]

More generally, every hypertext reader gets her own version of the complete text by selecting a particular path through it. Similarly, every user of an interactive installation gets her own version of the work. And so on. In this way new media technology acts as the most perfect realization of the utopia of an ideal society composed of unique individuals. New media objects assure users that their choices — and therefore, their underlying thoughts and desires — are unique, rather than pre-programmed and shared with others. As though trying to compensate for their earlier role in making us all the same, the descendants of Jacquard's loom, the Hollerith tabulator and Zuse's cinema-computer are now working to convince us that we are all unique.

The principle of variability as presented here is not dissimilar to how the artist and curator Jon Ippolito uses the same concept. [32] We differ, I believe, in two key respects. First, Ippolito uses variability to describe a characteristic shared by recent conceptual art and some digital art, while I see variability as a basic condition of all new media. Second, Ippolito follows the tradition of conceptual art, in which an artist can vary any dimension of the artwork, even its content; my use of the term aims to reflect the logic of mainstream culture, where versions of an object share some well-defined "data." This "data," which can be a well-known narrative (Psycho), an icon (the Coca-Cola sign), a character (Mickey Mouse) or a famous star (Madonna), is referred to in the media industry as a "property." Thus all cultural projects produced by Madonna will be automatically united by her name. Using the theory of prototypes, we can say that the property acts as a prototype, and different versions are derived from this prototype. Moreover, when a number of versions are commercially released based on some "property," usually one of these versions is treated as the source of the "data," with the others positioned as derived from this source. Typically the version which is in the same medium as the original "property" is treated as the source. For instance, when a movie studio releases a new film, along with a computer game based on it, product tie-ins, music written for the movie, etc., the film is usually presented as the "base" object from which the other objects are derived. So when George Lucas releases a new Star Wars movie, it refers back to the original property — the original Star Wars trilogy. The new movie becomes the "base" object, and all other media objects released along with it refer to this object. Conversely, when computer games such as Tomb Raider are re-made into movies, the original computer game is presented as the "base" object.

While I have deduced the principle of variability from more basic principles of new media — numerical representation (1) and modularity of information (2) — it can also be seen as a consequence of the computer's way of representing data and modeling the world itself: as variables rather than constants.
As the new media theorist and architect Marcos Novak notes, a computer — and computer culture in its wake — substitutes every constant with a variable. [33] In designing all functions and data structures, a computer programmer tries always to use variables rather than constants. On the level of the human-computer interface, this principle means that the user is given many options to modify the performance of a program or a media object, be it a computer game, a Web site, a Web browser, or the operating system itself. The user can change the profile of a game character, modify how folders appear on the desktop, how files are displayed, which icons are used, and so on. If we apply this principle to culture at large, it would mean that every choice responsible for giving a cultural object a unique identity can potentially remain always open. Size, degree of detail, format, color, shape, interactive trajectory, trajectory through space, duration, rhythm, point of view, the presence or absence of particular characters, the development of the plot — to name just a few dimensions of cultural objects in different media — all these can be defined as variables, to be freely modified by a user.

Do we want, or need, such freedom? As the pioneer of interactive filmmaking Grahame Weinbren has argued in relation to interactive media, making a choice involves a moral responsibility. [34] By passing these choices to the user, the author also passes on the responsibility to represent the world and the human condition in it. (This is paralleled by the use of phone- or Web-based automated menu systems by all big companies to handle their customers; while the companies do this in the name of "choice" and "freedom," one of the effects of this automation is that labor is passed from the company's employees to the customer. If before a customer would get information or buy a product by interacting with a company employee, now she has to spend her own time and energy navigating through numerous menus to accomplish the same result.)

The moral anxiety which accompanies the shift from constants to variables, from tradition to choices in all areas of life in contemporary society, and the corresponding anxiety of a writer who has to portray it, are well rendered in the closing passage of a short story by the contemporary American writer Rick Moody (the story is about the death of his sister): [35]

I should fictionalize it more, I should conceal myself.
I should consider the responsibilities of characterization, I should conflate her two children into one, or reverse their genders, or otherwise alter them, I should make her boyfriend a husband, I should explicate all the tributaries of my extended family (its remarriages, its internecine politics), I should novelize the whole thing, I should make it multigenerational, I should work in my forefathers (stonemasons and newspapermen), I should let artifice create an elegant surface, I should make the events orderly, I should wait and write about it later, I should wait until I’m not angry, I shouldn’t clutter a narrative with fragments, with mere recollections of good times, or with regrets, I should make Meredith’s death shapely and persuasive, not blunt and disjunctive, I shouldn’t have to think the unthinkable, I shouldn’t have to suffer, I should address her here directly (these are the ways I miss you), I should write only of affection, I should make our travels in this earthy landscape safe and secure, I should have a better ending, I shouldn’t say her life was short and often sad, I shouldn’t say she had demons, as I do too.

5. Transcoding

Beginning with the basic, “material” principles of new media — numeric coding and modular organization — we moved to more “deep” and far-reaching ones — automation and variability. The last, fifth principle of cultural transcoding aims to describe what in my view is the most substantial consequence of the computerization of media. As I have suggested, computerization turns media into computer data. While from one point of view computerized media still displays a structural organization which makes sense to its human users — images feature recognizable objects; text files consist of grammatical sentences; virtual spaces are defined along the familiar Cartesian coordinate system; and so on — from another point of view, its structure now follows the established conventions of the computer's organization of data. Examples of these conventions are different data structures such as lists, records and arrays; the already mentioned substitution of all constants by variables; the separation between algorithms and data structures; and modularity.

The structure of a computer image is a case in point. On the level of representation, it belongs to the side of human culture, automatically entering into dialog with other images, other cultural “semes” and “mythemes.” But on another level, it is a computer file which consists of a machine-readable header followed by numbers representing the RGB values of its pixels. On this level it enters into a dialog with other computer files. The dimensions of this dialog are not the image's content, meanings or formal qualities, but file size, file type, the type of compression used, the file format and so on. In short, these dimensions belong to the computer's own cosmogony rather than to human culture.

Similarly, new media in general can be thought of as consisting of two distinct layers: the “cultural layer” and the “computer layer.” Examples of categories on the cultural layer are the encyclopedia and the short story; story and plot; composition and point of view; mimesis and catharsis; comedy and tragedy. Examples of categories on the computer layer are process and packet (as in data packets transmitted through the network); sorting and matching; function and variable; a computer language and a data structure.
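The two layers of a single image can be made concrete in a few lines of code. The sketch below writes a minimal image in the PPM (P6) format, chosen only because its header is legible; the filename and pixel values are arbitrary, but the structure, a machine-readable header followed by numbers encoding RGB values, is the "computer layer" described above.

    # A 2x1 image: on the cultural layer, a red and a blue field;
    # on the computer layer, a header plus six numbers.
    width, height = 2, 1
    pixels = bytes([255, 0, 0,   0, 0, 255])  # RGB of pixel 1, RGB of pixel 2

    with open("tiny.ppm", "wb") as f:
        f.write(f"P6\n{width} {height}\n255\n".encode("ascii"))  # the header
        f.write(pixels)                                          # the pixel data

    # On this level the file's relevant dimensions are size, type and layout,
    # not the image's meaning.
    import os
    print(os.path.getsize("tiny.ppm"), "bytes on disk")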
Since new media is created on computers, distributed via computers, and stored and archived on computers, the logic of the computer can be expected to significantly influence the traditional cultural logic of media; that is, we may expect that the computer layer will affect the cultural layer. The ways in which the computer models the world, represents data and allows us to operate on it; the key operations behind all computer programs (such as search, match, sort and filter); the conventions of HCI — in short, what can be called the computer's ontology, epistemology and pragmatics — influence the cultural layer of new media: its organization, its emerging genres, its contents.

Of course what I call the computer layer is not itself fixed but changes over time. As hardware and software keep evolving and as the computer is used for new tasks and in new ways, this layer undergoes continuous transformation. The new use of the computer as a media machine is a case in point. This use is having an effect on the computer's hardware and software, especially on the level of the human-computer interface, which looks more and more like the interfaces of older media machines and cultural technologies: the VCR, the tape player, the photo camera.

In summary, the computer layer and the media/culture layer influence each other. To use another concept from new media, we can say that they are being composited together. The result of this composite is the new computer culture: a blend of human and computer meanings, of the traditional ways in which human culture modeled the world and the computer's own means of representing it.

Throughout the book, we will encounter many examples of the principle of transcoding at work. For instance, the section “The Language of Cultural Interfaces” will look at how the conventions of the printed page, cinema and traditional HCI interact in the interfaces of Web sites, CD-ROMs, virtual spaces and computer games. The “Database” section will discuss how the database, originally a computer technology for organizing and accessing data, is becoming a new cultural form in its own right. But we can also reinterpret some of the principles of new media already discussed above as consequences of the transcoding principle. For instance, hypermedia can be understood as one cultural effect of the separation between an algorithm and a data structure, which is essential to computer programming: just as algorithms and data structures exist independently of each other in programming, in hypermedia data is separated from the navigation structure. (For another example of the cultural effect of the algorithm/data structure dichotomy, see the “Database” section.) Similarly, the modular structure of new media can be seen as an effect of the modularity of structured computer programming. Just as a structured computer program consists of smaller modules, which in their turn consist of even smaller modules, a new media object has a modular structure, as I explained in my discussion of modularity above.

In new media lingo, to “transcode” something is to translate it into another format. The computerization of culture gradually accomplishes a similar transcoding in relation to all cultural categories and concepts. That is, cultural categories and concepts are substituted, on the level of meaning and/or language, by new ones which derive from the computer's ontology, epistemology and pragmatics. New media thus acts as a forerunner of this more general process of cultural re-conceptualization.
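The separation of data from navigation structure that hypermedia inherits from programming can also be sketched briefly. In the hypothetical example below, the same three media elements (the data) are traversed by two different navigation structures, a fixed linear order and a set of hyperlinks; neither traversal changes the data itself. All names are illustrative.

    # The data: media elements stored once, independent of any ordering.
    pages = {
        "home":    "Welcome",
        "essay":   "On variability",
        "gallery": "Twelve images",
    }

    # Two independent navigation structures over the same data.
    linear_order = ["home", "essay", "gallery"]
    links = {"home": ["gallery", "essay"], "gallery": ["home"]}

    def traverse_linear(order):
        # An old-media reading order: fixed and sequential.
        return [pages[name] for name in order]

    def traverse_links(start, depth=2):
        # A hypertext reading order: follow links outward from a start page.
        seen, frontier = [start], [start]
        for _ in range(depth):
            frontier = [n for p in frontier for n in links.get(p, [])
                        if n not in seen]
            seen.extend(frontier)
        return [pages[name] for name in seen]

    print(traverse_linear(linear_order))
    print(traverse_links("home"))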
Given the process of “conceptual transfer” from the computer world to culture at large, and given the new status of media as computer data, what theoretical framework can we use to understand it?

Since on one level new media is old media which has been digitized, it seems appropriate to look at new media from the perspective of media studies. We may compare new media and old media, such as print, photography, or television. We may also ask about the conditions of distribution and reception and the patterns of use. We may also ask about similarities and differences in the material properties of each medium and how these affect their aesthetic possibilities. This perspective is important, and I use it frequently in this book, but it is not sufficient. It cannot address the most fundamental new quality of new media, which has no historical precedent — programmability. Comparing new media to print, photography, or television will never tell us the whole story. For while from one point of view new media is indeed another medium, from another it is simply a particular type of computer data, something stored in files and databases, retrieved and sorted, run through algorithms and written to an output device. That the data represents pixels and that the device happens to be a screen is beside the point. The computer may perform perfectly the role of the Jacquard loom, but underneath it is fundamentally Babbage's Analytical Engine — after all, this was its identity for one hundred and fifty years. New media may look like media, but this is only the surface.

New media calls for a new stage in media theory, whose beginnings can be traced back to the revolutionary works of Harold Innis and Marshall McLuhan in the 1950s. To understand the logic of new media we need to turn to computer science. It is there that we may expect to find the new terms, categories and operations which characterize media that became programmable. From media studies, we move to something which can be called software studies; from media theory — to software theory. The principle of transcoding is one way to start thinking about software theory. Another way, which this book experiments with, is to use concepts from computer science as categories of new media theory; the examples here are “interface” and “database.” And, last but not least, I follow the analysis of the “material” and logical principles of computer hardware and software in this chapter with two chapters on the human-computer interface and on the interfaces of the software applications used to author and access new media objects.

What New Media is Not

Having proposed a list of the key differences between new and old media, I would now like to address other potential candidates, which I have omitted. The following are some of the popularly held notions about the difference between new and old media which this section will subject to scrutiny:

1. New media is analog media converted to a digital representation. In contrast to analog media, which is continuous, digitally encoded media is discrete.

2. All digital media (text, still images, visual or audio time data, shapes, 3D spaces) share the same digital code. This allows different media types to be displayed using one machine, i.e., a computer, which acts as a multimedia display device.

3. New media allows for random access. In contrast to film or videotape, which store data sequentially, computer storage devices make it possible to access any data element equally fast.

4. Digitization involves inevitable loss of information. In contrast to an analog representation, a digitally encoded representation contains a fixed amount of information.
5. In contrast to analog media, where each successive copy loses quality, digitally encoded media can be copied endlessly without degradation.

6. New media is interactive. In contrast to traditional media, where the order of presentation is fixed, the user can now interact with a media object. In the process of interaction the user can choose which elements to display or which paths to follow, thus generating a unique work. In this way the user becomes the co-author of the work.

Cinema as New Media

If we place new media within a longer historical perspective, we will see that many of these principles are not unique to new media and can already be found in older media technologies. I will illustrate this using the example of the technology of cinema.

(1). “New media is analog media converted to a digital representation. In contrast to analog media, which is continuous, digitally encoded media is discrete.”

Indeed, any digital representation consists of a limited number of samples. For example, a digital still image is a matrix of pixels — a 2D sampling of space. However, as I have already noted, cinema was itself based on sampling — the sampling of time. Cinema sampled time twenty-four times a second. So we can say that cinema already prepared us for new media. All that remained was to take this already discrete representation and quantify it. But this is simply a mechanical step; what cinema accomplished was the much more difficult conceptual break from the continuous to the discrete.

Cinema is not the only media technology which, emerging towards the end of the nineteenth century, employed a discrete representation. If cinema sampled time, fax transmission of images, starting in 1907, sampled 2D space; even earlier, the first television experiments (Carey, 1875; Nipkow, 1884) already involved the sampling of both time and space. [36] However, reaching mass popularity much earlier than these other technologies, cinema was the first to make the principle of a discrete representation of the visual public knowledge.

(2). “All digital media (text, still images, visual or audio time data, shapes, 3D spaces) share the same digital code. This allows different media types to be displayed using one machine, i.e., a computer, which acts as a multimedia display device.”

Before computer multimedia became commonplace around 1990, filmmakers had already been combining moving images, sound and text (be it the intertitles of the silent era or the title sequences of the later period) for a whole century. Cinema was thus the original modern “multimedia.” We can also find much earlier examples of multiple-media displays, such as medieval illuminated manuscripts, which combined text, graphics and representational images.

(3). “New media allows for random access. In contrast to film or videotape, which store data sequentially, computer storage devices make it possible to access any data element equally fast.”

For example, once a film is digitized and loaded into the computer's memory, any frame can be accessed with equal ease. Therefore, if cinema sampled time but still preserved its linear ordering (subsequent moments of time become subsequent frames), new media abandons this “human-centered” representation altogether — in order to put represented time fully under human control.
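This equality of access can be put in a single line of code. A hypothetical sketch, assuming a digitized film held in memory as a list of frames:

    # Cinema's sampling rate of time becomes an index into an array:
    # every recorded moment is equally reachable, with no winding of tape.
    FPS = 24

    # A stand-in for one minute of digitized film; frame i carries its index.
    frames = [f"frame {i}" for i in range(FPS * 60)]

    def frame_at(seconds):
        # Random access: the cost of reaching any moment is the same.
        return frames[int(seconds * FPS)]

    print(frame_at(0.0))   # the first moment
    print(frame_at(59.0))  # nearly a minute in, retrieved just as fast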
Time is mapped onto two-dimensional space, where it can be managed, analyzed and manipulated more easily. Such mapping was already widely used in nineteenth-century pre-cinematic machines. The Phenakisticope, the Zootrope, the Zoopraxiscope, the Tachyscope and Marey's photographic gun were all based on the same principle: placing a number of slightly different images around the perimeter of a circle. Even more striking is the case of Thomas Edison's first cinema apparatus. In 1887 Edison and his assistant, William Dickson, began experiments to adapt the already proven technology of the phonograph record to the recording and display of motion pictures. Using a special picture-recording camera, tiny pinpoint-size photographs were placed in spirals on a cylindrical cell similar in size to the phonograph cylinder. A cylinder was to hold 42,000 images, each so small (1/32 inch wide) that a viewer would have to look at them through a microscope. [37] The storage capacity of this medium was twenty-eight minutes — twenty-eight minutes of continuous time taken apart, flattened on a surface and mapped onto a two-dimensional grid. (In short, time was prepared to be manipulated and re-ordered, something which was soon to be accomplished by film editors.)

The Myth of the Digital

Discrete representation, random access, multimedia — cinema already contained these principles. So they cannot help us to separate new media from old media. Let us continue interrogating these principles. If many principles of new media turn out to be not so new, what about the idea of digital representation? Surely, this is the one idea which radically redefines media? The answer is not so straightforward. This idea acts as an umbrella for three unrelated concepts: analog-to-digital conversion (digitization), a common representational code, and numerical representation. Whenever we claim that some quality of new media is due to its digital status, we need to specify which of these three concepts is at work. For example, the fact that different media can be combined into a single digital file is due to the use of a common representational code, whereas the ability to copy media without introducing degradation is an effect of numerical representation. Because of this ambiguity, I try to avoid using the word “digital” in this book. “Principles of New Media” focused on the concept of numerical representation as the really crucial one of the three. Numerical representation turns media into computer data, thus making it programmable. And this indeed radically changes what media is.

In contrast, as I will show below, the alleged principles of new media which are often deduced from the concept of digitization — that analog-to-digital conversion inevitably results in a loss of information, and that digital copies are identical to the original — turn out not to hold under closer examination. That is, although these principles are indeed logical consequences of digitization, they do not apply to concrete computer technologies as they are currently used.

(4). “Digitization involves inevitable loss of information. In contrast to an analog representation, a digitally encoded representation contains a fixed amount of information.”

In his important study of digital photography, The Reconfigured Eye, William Mitchell explains this principle as follows: “There is an indefinite amount of information in a continuous-tone photograph, so enlargement usually reveals more detail but yields a fuzzier and grainier picture...
A digital image, on the other hand, has precisely limited spatial and tonal resolution and contains a fixed amount of information.” [38] From a logical point of view, this principle is a correct deduction from the idea of digital representation. A digital image consists of a finite number of pixels, each having a distinct color or tonal value, and this number determines the amount of detail an image can represent. Yet in reality this difference does not matter. By the end of the 1990s, even cheap consumer scanners were capable of scanning images at resolutions of 1200 or 2400 pixels per inch. So while a digitally stored image is still comprised of a finite number of pixels, at such resolutions it can contain much finer detail than was ever possible with traditional photography. This nullifies the whole distinction between an “indefinite amount of information in a continuous-tone photograph” and a fixed amount of detail in a digital image. The more relevant question is how much of the information in an image can be useful to the viewer. By the end of new media's first decade, technology had already reached the point where a digital image can easily contain much more information than anybody would ever want.

But even the pixel-based representation, which appears to be the very essence of digital imaging, cannot be taken for granted. Some computer graphics software has bypassed the main limitation of the traditional pixel grid — fixed resolution. Live Picture, an image-editing program, converts a pixel-based image into a set of mathematical equations. This allows the user to work with an image of virtually unlimited resolution. Another paint program, Matador, makes it possible to paint on a tiny image which may consist of just a few pixels as though it were a high-resolution image (it achieves this by breaking each pixel into a number of smaller sub-pixels). In both programs, the pixel is no longer a “final frontier”; as far as the user is concerned, it simply does not exist. Texture-mapping algorithms make the notion of a fixed resolution meaningless in a different way: they often store the same image at a number of different resolutions, and during rendering a texture map of arbitrary resolution is produced by interpolating between the two stored images closest to that resolution (the sketch at the end of this section illustrates the technique). A similar method is used by virtual-world software, which stores a number of versions of a single object at different degrees of detail. Finally, certain compression techniques eliminate the pixel-based representation altogether, instead representing an image via different mathematical constructs (such as transforms).

(5). “In contrast to analog media, where each successive copy loses quality, digitally encoded media can be copied endlessly without degradation.”
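The multi-resolution storage used by texture mapping, referred to above, can be sketched briefly. The sketch uses one-dimensional "images" (lists of brightness values) to stay short; in computer graphics this structure is known as a mip-map, and all names here are illustrative rather than drawn from any particular program.

    def downsample(img):
        # Halve the resolution by averaging neighboring samples.
        return [(img[i] + img[i + 1]) / 2 for i in range(0, len(img) - 1, 2)]

    def sample(img, x):
        # Read the image at fractional position x in [0, 1].
        if len(img) == 1:
            return img[0]
        pos = x * (len(img) - 1)
        i = min(int(pos), len(img) - 2)
        t = pos - i
        return img[i] * (1 - t) + img[i + 1] * t

    # Store the same image at resolutions 8, 4, 2 and 1.
    image = [0, 10, 20, 30, 40, 50, 60, 70]
    pyramid = [image]
    while len(pyramid[-1]) > 1:
        pyramid.append(downsample(pyramid[-1]))

    def lookup(pyramid, x, resolution):
        # Produce a map of arbitrary resolution by interpolating between
        # the two stored versions closest to the requested resolution.
        for finer, coarser in zip(pyramid, pyramid[1:]):
            if len(finer) >= resolution >= len(coarser):
                t = (len(finer) - resolution) / (len(finer) - len(coarser))
                return sample(finer, x) * (1 - t) + sample(coarser, x) * t
        return sample(pyramid[0], x)

    print(lookup(pyramid, 0.5, 6))  # a resolution the pyramid never stored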
"' !e0+,4O!>!D7+3*6,-!T,-43,/6.M,O!"$#(! "8 !L400/!@./67-!Z,-17M!05-6.464!I),E!l7-]Q!@05!)746-05*67+06.75!7G!Y.CF6!G-7+!TF767C-03F=!67!D7+3*6,-!@.4.75Og! e1,/6-75./!D*16*-,Q!B,/F5717C=!053,-6*-,O!"??8J^!g;033.5C!Y30/,Q! T,-43,/6.M,O!20<0-!05Th!d?%!@.4*01!T-7/,,<.5C4O! ,<.6,D;O!"??%J(!!! $# !F663QRREEE(+-1(5=*(,<*R.+3-7MRO!0//,44,D;X?'R01.G,X/0/+?'(F6+1O! 0//,44,M056XV0-<,!04!Y7G6E0-,OP!.5!_46-05,5.,O!,<.6,D;!I),E!l7-]Q!>D;O!"??&JO!%#(! %# !)70+!DF7+4]=O!Y=560/6./!Y6-*/6*-,4O!-,3-.56!,<.6.75!IT,6,-!a05C!T*A1.4F.5CO! "?9:J(! %" !Kh7E!;0-],6,-4!nT-7G.1,o![4,-4OP![Y>!B7<0=!I)7M,+A,-!?O!"???JO!$>(!! %$ !Y,,!F663QRREEE(6F-,,(7-C(!_*-!/75M,-406.754!F,13,5C,1,4O!W*5,!8O!"???(! %& !V-00+,!b,.5A-,5O!L5!6F,!_/,05!7G!Y6-,0+4!7G!Y67-=O!;.11,55.*+!c.1+!W7*-501! $:!IY3-.5C!"??'JO! F663QRREEE(4M0(,<*R;cWRf7*-50130C,4R;cW$:RVb_De>)(hB;a(! %' !2./]!;77<=O!H,+75717C=O!G.-46!3*A1.4F,*C*46!"???JO!::X:?(!! %8 !>1A,-6!>A-0+475O!e1,/6-75./!;76.75!T./6*-,4(!>!h.467-=!7G!B,1,M.4.75!D0+,-0! IU,-],1,=Q![5.M,-4.6=!7G!D01.G7-5.0!T-,44O!"?''JO!"'X$&(! %9 !DF0-1,4!;*44,-O!BF,!e+,-C,5/,!7G!D.5,+0Q!BF,!>+,-./05!Y/-,,5!67!"?#9! IU,-],1,=Q![5.M,-4.6=!7G!D01.G7-5.0!T-,44O!"??&JO!8'(! %: !;.6/F,11O!BF,!2,/75G.C*-,-6!05!Y6*<=!.5!6F,! T4=/F717C=!7G!T./67-.01!2,3-,4,5606.75!IT-.5/,675Q!T-.5/,675![5.M,-4.6=!T-,44O! "?8#J(!!! 282 &$ !BF,!576.75!6F06!/7+3*6,-!.56,-0/6.M,!0-6!F04!.64!7-.C.54!.5!5,E!0-6!G7-+4!7G!6F,! "?8#4!.4!,\317-,-6Og!LYe>!IL56,-506.7501!Y=+374.*+!75!e1,/6-75./!>-6J!"??&!T-7/,,<.5C4! IF663QRREEE(*.0F(G.RA77]4F73R.4,0q3-7/R5,\6C,5R#:(F6+1O!0//,44,*C*46!"$O! "??:J^!gc-7+!T0-6./.306.75!67!L56,-0/6.75Q!B7E0--6Og! .5!a=55!h,-4F+05!a,,475O!,<(!D1./].5C!L5Q!h76!a.5]4!67!0!H.C.601!D*16*-,! IY,0661,Q!U0=!T-,44O!"??8JQ!$9?X$?#(!Y,,!0147!Y.+75!T,55=O!KD754*+,-!D*16*-,! 05-6.46!.5!H060430/,O!.5!Y.+75!T,55=O!,<(O! D-../01!L44*,4!.5!e1,/6-75./!;,<.0!I>10A05=O!),E!l7-]Q!Y606,![5.M,-4.6=!7G!),E! l7-]!T-,44O!"??%JQ!&9X9&(! &% !BF.4!0-C*+,56!-,1.,4!75!0!/7C5.6.M.46!3,-43,/6.M,!EF./F!46-,44,4!6F,!0/6.M,! +,5601!3-7/,44,4!.5M71M,-6Q!05!L56-7<*/6.75^!H0M.]0<,+.,!k*+!H-.66,5!W0F-60*4,51105!Y,]*10O!gBF,!U7<=!05-/F.M,Og!_/67A,-!%?!I"?:9JQ!'"(! &8 !h*C7!;r546,-A,-CO!BF,!TF767310=Q!>!T4=/F717C./01!Y6*<=!I),E!l7-]Q!H(! >311,675!s!D7(O!"?"8JO!&"(! &9 !Y,-C,.!e.4,546,.5O!g)76,4!G7-!0!c.1+!7G!dD03.601Odg!6-054(!;0/.,f!Y1.E7E4].O!W0=! a,*<0O!0555,66,!;./F,1475O!_/67A,-!$!I"?98JQ!"#(! &: !B.+76F=!H-*/]-,=O!g2,M,5C,!7G!6F,!),-<4(!>5!L56,-M.,E!E.6F!W0-75!a05.,-Og! >G6,-.+0C,!I;0=!"??"JO!?(! &? !c-,<-./!W0+,475O!BF,!T-.475XF7*4,!7G!a05C0*C,Q!0!D-.6./01!>//7*56!7G! Y6-*/6*-01.4+!05/6.75O!6-054(!BF7+04! ;/D0-6F=!IU74675O!U,0/75!T-,44O!/"?:&XJ(!! '" !H-*/]-,=O!g2,M,5C,!7G!6F,!),-<4OP!8(! '$ !Y.C+*5!U,C.55,-d4!T4=/F717C=!I),E!l7-]Q!BF,! ;0/+.1105!D7+305=O!"?"'JO!""&(!! 283 '& !V,7-C,!a0]7GGO!gD7C5.6.M,!a.5C*.46./4Og!@,-4*4!&&R&'!I"?:8JQ!"&?(!! '' !TF.1.3!W7F5475Xa0.-1F*44,-!.56-7<*/,330-06*4,4!I)76,4!B7E0-<4!05! L5M,46.C06.75JO!.5!a,5.5!05(!;=,-4O!g>!U-.,G!h.467-=!7G!h*+05!D7+3*6,-!L56,-0/6.75! B,/F5717C=Og!6,/F5./01!-,37-6!D;[XDYX?8X"8%!05!IL56,-506.7501!Y=+374.*+!75!e1,/6-75./!>-64J!"??'O! F663QRREEE(\4&011(51Rm+3,4/,R.4,0],=(F6+1O!0//,44,