Although the negativity in this statement is largely a generational issue (it was written by an author in his forties who had not visited a cinema for many years previously), it does illustrate, once again, the extent to which market forces shaped the form and combination of technologies used to deliver moving images to the public. Audiences from the 1990s onwards generally responded enthusiastically to recordings with more channels and to the higher overall sound levels at which they were reproduced, be that in cinema, other public spaces such as nightclubs, television, domestic music recordings, radio or video games. Those who came to it with the expectations of an earlier generation of media technologies, as with the critics of the 1930s who wanted to bring back silent films, were often disillusioned.

Conclusion

The evolution of the technologies used for cinema exhibition, from the Kinetoscope of 1892 to the digital sound reproducers of 1992, established a pattern and then followed it. This was based on the simple principle of playing back a single recorded sequence of images and sounds before as many paying customers as possible, thereby exploiting that recording to the maximum commercial extent. Technologies which upgraded, or at an audience's subjective level 'improved', the experience of that reproduction were developed, tested and commercially introduced at regular intervals during the first century of this industry's existence. But in order to do so, they had to meet certain technical and economic criteria. Given the sheer volume of hardware and infrastructure involved after cinemas had started to be built in significant numbers from the late 1900s onwards, economies of scale dictated a more cautious approach to the rollout of new technologies. They had to conform to a framework of established technical standards, and the most successful ones embodied a degree of 'backwards compatibility'. The conversion to sound involved equipment which was simply added to existing installations in the projection booth. The widescreen systems which eventually established themselves used conventional 35mm film and required little investment in equipment from the exhibitor. Dolby stereo variable area tracks could also be played in mono on non-Dolby equipment, and so on. The technology we shall consider in the next chapter extends that model even further. In the case of television and (domestic) videotape recording, the technology in use at the receiving end did not have to be standardised and mass-produced across tens of thousands of cinemas: the principle had to be applied across hundreds of millions of homes worldwide.

Chapter Six: Television and Video

'If Hollywood ever makes a movie about the invention of television, a suitable title might be Three Hundred Men and a Baby.'1
'TV was probably the first invention that was achieved by committee.'2
'Television might have gone the way of the airship, nothing more than an interesting footnote in history.'3
'Gentlemen, you have now invented the biggest time waster of all time. Use it well.'4

From the previous chapter it will be apparent that, during the first half of the twentieth century, the medium through which the overwhelming majority of the world's population who experienced moving images did so was 35mm film projected in front of a mass audience sitting inside a building specifically designed for this purpose.
By the final quarter of the twentieth century, the technology used to deliver moving images as a mass medium had relocated from public buildings to private homes (the cinema still existed, but it was no longer a mass medium). As we will see later in this chapter, the technologies of film and television are closely linked - in fact, it is unlikely that television would have evolved in the way it did had film-based technologies not represented a multi-million dollar global industry by the time the first scheduled, public broadcasts began. For example, not only did film provide the cultural precedents, but for the first two decades of television's existence it was the only means available of permanently recording live broadcasts and of broadcasting material that was not produced live in front of a camera. But there are also fundamental differences, both in the technologies themselves and in the cultural, political and economic frameworks through which they were developed, established and sold. Film records images through the use of chemical compounds that react to the presence of light, and in which the results of that reaction can then be made visible to the naked eye. Television and video recording work - like electrical audio recording - by representing a visual image as a series of changing electrical modulations which can either be transmitted by radio or recorded onto magnetic tape. Later, these changing electrical modulations were replaced by representing the image as digital data which is encoded and decoded by a computer.

This had a number of implications. The chapters on film-based technologies have emphasised the role played by independently articulated technical standards which are observed by competing suppliers of products and services. This is because without these standards, the basic economic principle on which the film industry depends for profitability - that of reproducing a single recording in as many different locations and to as many paying customers as possible - becomes less effective or even impossible. Let us suppose, for example, that 35mm film was not a universal, worldwide standard, and that 28mm had survived to become a widespread release format alongside it. Assuming that each gauge achieved an approximate 50 per cent market share, any studio wishing to make a feature film available to all the world's cinemas would be forced to produce both types of print and to absorb the extra distribution overheads. With television the stakes are even higher, because instead of the estimated 130,000 35mm cinema installations in use worldwide in 2003, there are literally billions of television sets. The technical properties of the broadcast signal therefore have to be standardised in order to ensure that all the receivers in a given area are able to display the images and play back the audio correctly. For this reason, and because the broadcasting 'bandwidth' (the number of modulated signals or digital data streams which can be broadcast at any one time) is limited, broadcast standards and bandwidth use have been regulated by national governments from within a few years of sound-only radio becoming established as an industry. Perhaps the best known of these government agencies in technical circles is the United States' National Television Standards Committee, of which more below.
One of the key roles played by governments in the technology of broadcasting is to allocate and regulate the use of bandwidth by privately-owned broadcasters,5 though some countries have gone even further, by establishing state-run broadcasters financed by direct taxation. The United Kingdom (which was the first country in which regularly scheduled public television broadcasting took place) evolved its unique hybrid in the form of the British Broadcasting Corporation (BBC), a (supposedly) non-commercial organisation established by Royal Charter, but which is institutionally separate from the government executive. It is financed by a law which requires anyone in the UK who uses a television set in their home or workplace to buy an annual licence for the premises on which it is installed. This scale of political intervention has never been equalled in any other area of moving image technology in the developed world. Apart from the former USSR and Eastern Bloc, there have never been a significant number of state-owned cinemas, and nowhere near the same scale of state involvement in the production of content.

The scale on which television operates as a mass medium therefore results in a very different model of regulation and standardisation to that of film, a model which is in evidence throughout virtually every stage of technological development and change. Hence Albert Abramson's remark that television was the first invention to be achieved by committee. In fact film had its fair share of committees too, and almost from the very beginning. But because these committees (or at least, the ones that mattered) were all industry self-regulation bodies (such as the MPPC or the SMPE), their role in developing and imposing standards was less obvious. In other words, the film industry was still essentially 'business, pure and simple', as D. W. Griffith put it, committees and all. The technical logistics of television initially ruled out standardisation through market forces. While a vertically integrated film industry could absorb the cost of Vitaphone proving not to be up to the job and of having to throw a few thousand turntables into skips - even in the depths of the century's worst economic depression - individual consumers would not (and, to a certain extent, still will not) risk a substantial capital investment on a piece of domestic equipment which is likely to go the same way, as Robson's research on British television in the 1930s reveals:

The pace of sales was always sluggish, partly due to a lack of confidence in the technology that the BBC was using. The public had its major concern that the broadcast standard was still experimental and liable to change in the near future, and this apprehension was hardly allayed by government pronouncements at the time.6

Governments, therefore, had to start making pronouncements, and in regulating the television industry they have been doing so ever since. This makes it all the more ironic that when George Orwell wrote 1984 in 1948, just as television was on the verge of starting to grow from an experimental technology into a mass medium, he settled on the 'telescreen' as his symbol of state intrusion into the private sphere.

Television - initial research, development and rollout

The timescale in which television evolved from a theoretical possibility into scheduled broadcasting and cabinets in living rooms was similar to that of film, but the process took place approximately half a century later.
Both the illusion of continuous movement and the basics of photochemistry were understood and starting to be applied by the end of the 1830s; from there the research and development needed to put two and two together and produce film took a further half century. Likewise, the basic radio and electronic technologies needed to make broadcast television a reality were in place by the 1890s, and were evolved by a number of individuals and organisations over the first half of the twentieth century, to the point of becoming a mass medium in the years immediately following the end of World War Two. This work took place predominantly in the United States, and in Britain to a lesser extent. The only other country in which public television broadcasting took place before World War Two was Nazi Germany, and even then only on what was effectively an experimental basis.

The media technologies that were developed by the Edison/Eastman cartel in the late nineteenth century - i.e. still photography, audio recording and reproduction and film-based moving image technology - all had one crucial attribute in common: they were initially intended as a means for consumers to create and play back their own recordings. It was only in the case of still photography that this application proved to be commercially viable, even in part. Where audio and film were concerned, the law of diminishing returns came quickly into play and resulted in an industry based on the commercial exploitation of multiple copies of a single recording. As we have seen in chapter four, the arrival of radio in the 1920s heralded a new variant of this business model. Broadcasters consciously differentiated themselves from the record industry by promoting the fact that almost everything they transmitted was live. As David Morton's research has revealed, the broadcasting of pre-recorded radio programmes was initially perceived as being culturally inferior, a perception that was deliberately encouraged by the radio industry of the 1920s and 1930s in order to distinguish their product from records and, to a lesser extent, films.7 The radio industry was following the lead of record and film producers up to a point, in that it was making money by selling the same content to an almost infinite number of customers. But broadcasting had the added attraction of making the process time-specific. The idea of television offered the same potential selling point, only with moving images into the bargain.

The first generation of television inventors and pioneers were looking to exploit the same underlying technical principle as their counterparts in radio. This principle had been discovered as early as 1832, when the engineer Samuel Morse established that electrical currents could be transmitted along metal conductors (wires) of almost infinite length. While the most widespread use for this discovery was the delivery of 'mains' electricity from power stations to homes and businesses, Morse also realised that by periodically interrupting the current to create a sequence of pulses, that sequence could be detected and reproduced at the other end of the wire, thereby forming a method of data transmission. This was the 'Morse code', in which the receiving end of the wire was connected to a buzzer or a light bulb, thereby enabling the sequence to be reproduced and interpreted as the 'dots' and 'dashes' which formed letters of the alphabet.
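As a brief illustration of the pulse principle just described, the sketch below (in Python, purely illustrative and not taken from the source; the dot/dash codes and unit timings follow standard Morse conventions) turns a short message into the kind of on/off sequence a receiving buzzer or lamp would reproduce:

```python
# Illustrative sketch: encoding a short message as the kind of timed on/off
# pulse sequence on which Morse's scheme relies.
MORSE = {'S': '...', 'O': '---'}

def to_pulses(message):
    """Return a list of (state, duration_in_units) pairs for a message."""
    pulses = []
    for letter in message:
        for symbol in MORSE[letter]:
            pulses.append(('on', 1 if symbol == '.' else 3))  # dot = 1 unit, dash = 3
            pulses.append(('off', 1))                          # gap between symbols
        pulses.append(('off', 2))                              # extra gap between letters
    return pulses

print(to_pulses('SOS'))
```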
In the late nineteenth century the German scientist Heinrich Hertz established that it was possible to induce changes in the wavelength of an electromagnetic energy source, which could then be detected from a remote location without the need for any wires. While this did not enable the transmission of significant volumes of power, it did enable 'wireless' data transmission using a modulated signal, e.g. of Morse code. The invention of the first crude microphone by Alexander Graham Bell (see chapter four) enabled a representation of audible sounds to be modulated and transmitted, and it was the Irish-Italian engineer Guglielmo Marconi who, in 1896, combined these technologies to carry out the earliest known voice transmission. Lee de Forest's triode valve marked another milestone (although, in chapter four, this is discussed primarily in the context of film sound, it is worth keeping in mind that valves were originally developed to enable the construction of long-distance telephone networks) in that it enabled the modulating signal to be amplified to the point at which broadcasting over a significant distance (i.e. radio as mass-communication) became feasible.

Just as records and films did not catch on as 'home media' to anything like the same extent that they became vehicles for mass-communication, radio was not developed and marketed as a form of wireless telephone. There were two key technical reasons for this, namely bandwidth limitations and the fact that no scrambling or encryption technology existed. In order to receive a wired transmission, it was necessary to establish a physical connection to the wire which carried it; but anyone with a receiver tuned to the appropriate wavelength can hear a radio transmission, whether the broadcaster wishes them to or not. It was precisely this characteristic which initially made radio a useless medium for one-to-one communication, but ideal for mass-broadcasting. As well as these technical reasons, records and films had, by the early 1920s, established the cultural precedent for electronically mediated mass-communication, even if the media were pre-recorded. Therefore, the broadcasting of moving images along with sounds was a logical next step from all the development which had been going on in the area of media technology during the late nineteenth and early twentieth centuries; and furthermore, a cultural framework already existed for its commercial exploitation by the early 1920s.

The process that would eventually lead to the emergence of broadcast television as a mass medium in the form we know it today involved two technological approaches and a number of individuals and organisations associated with each. To some extent, the two technologies can be analogised with the battle between sound-on-disc and sound-on-film in the 1920s film industry. The first system, which used an electromechanical approach to encoding and displaying the transmitted signal, established a cultural and business case for television as a medium, but eventually encountered technical limitations which caused it to fall by the wayside. The electronic method which emerged shortly afterwards would eventually provide the technological basis for television as it exists today.

Electromechanical television

This technology was associated first and foremost with John Logie Baird, a British engineer born near Glasgow in 1888 who began his career in the electrical supply industry.
After settling in south-east England in 1923 he began a series of experiments designed to exploit the potential of the 'Nipkow disc' in encoding visual images for radio transmission. Television, like film, relies on the illusion of continuous movement: it electronically encodes and reproduces a pattern of light which is detected by the human eye as the impression of a continuously moving image. But whereas film does this by regulating the rate at which whole images are recorded and reproduced, analogue television has to divide the picture up into a series of horizontal segments for transmission. The first practicable method of achieving this was devised and demonstrated by the German engineer Paul Nipkow in 1883. This consisted of a rotating metal disc (not dissimilar to the shutter in a film camera or projector), in front of which was a lens and behind which was a photoelectric cell (the same sort as would be found in the optical sound reproducer on a cinema projector). The disc contained a series of small holes around the perimeter, gradually spiralling inwards. When this apparatus was placed in front of a stationary subject and the disc rotated, the first hole in the sequence would move horizontally across the lens' field of vision; the second would do likewise on a line just below it, and so on until, at the end of the revolution, the entire picture had been 'scanned' and the modulated signal for each line of the picture had been generated by the photocell. Working largely in isolation and financed mainly by private collaborators, Baird undertook a process of research and development which culminated in a viable television system based on the Nipkow disc. Initially this took the form of a low-resolution, 30-line signal, but it was eventually uprated to a 240-line system by 1935.

Electronic television

Baird had a North American counterpart, who was also a largely self-taught engineer working in isolation. Philo Farnsworth, born in Utah in 1906, was, however, working on a fundamentally different approach to transmitting visual images to that of Baird. His 'image dissector tube', first successfully demonstrated in 1927 (when its inventor was aged only 21!), consisted of a large glass bulb coated with a photoelectric compound. This surface was scanned continuously by a beam of electrons focused from the narrow end of the bulb, in order for the signal modulation generated by the photoelectric surface in response to light to be detected and transmitted.8 The image was displayed by reversing the process in the 'cathode ray tube' (CRT) which, instead of being coated with a photosensitive compound, was coated with one that produced light in response to electrical stimulation. Put like this, the technique sounds simple and straightforward. In fact, it was anything but, and Farnsworth and his competitors expended a great deal of time and energy attempting to improve the reliability and image definition of their tubes. One key problem was the relatively low sensitivity of the photoelectric compounds which were available in the 1920s and 1930s (as will be recalled from chapter four, the inventors of optical sound-on-film systems had exactly the same problem), which became more of an issue as the number of lines of horizontal resolution was increased.
In particular, the phosphors in the receiving tube could not be made to illuminate for long enough to maintain the screen's brightness between scans, as a result of which the technique of interlacing was developed in 1929. In many ways this can be seen as television's equivalent of the developments in cinema projector shutters in the 1920s. Instead of scanning and broadcasting each line of the picture in succession (known as progressive scanning), lines 1-3-5 would be scanned first, followed by 2-4-6 (interlaced). This would maintain overall picture brightness, because as the odd-numbered lines started to fade, the evens would be freshly scanned and bright. Although today's CRTs are more than capable of displaying a progressively scanned image with uniform brightness, the television broadcast signals in use today still use interlacing.

Interlacing was just one innovation produced by researchers working for the Radio Corporation of America (RCA), and is indicative of the point having been reached at which the development of broadcast television technology diverges radically from the romantic image of the innovations of lone inventors such as Baird and Farnsworth, and becomes a process driven by a unique combination of big business and political regulation. From the early 1930s onwards, RCA in the United States and the BBC in Britain became the key players - the 'invention by committee' described by Albert Abramson. The formation of RCA was essentially a political act. At the end of World War One, the only significant provider of radio communications in the United States was a subsidiary of the British Marconi Company, which led to concerns in the US government and military about the security implications of the radio communications infrastructure being foreign-owned. On 17 October 1919, the Radio Corporation of America was formed, primarily as an offshoot of General Electric, which took over American Marconi shortly afterwards. With the Marconi infrastructure at its disposal RCA soon established a dominant position in radio communications, and by the mid-1920s had also become heavily involved in broadcasting to the public, both as a manufacturer of equipment and an owner of radio stations. Throughout this period RCA enjoyed close relations with the US government, which were to prove especially important when it began to enter the field of commercial television broadcasting in the late 1930s.

There were two driving forces behind RCA's involvement in television, both of them Russian émigrés. David Sarnoff was born near Minsk in 1891 and emigrated to the US as a child. A self-taught radio operator during the 1910s, Sarnoff joined RCA as commercial manager upon its formation and became its president in 1930, a position he continued to hold until his death in 1971. He was essentially the commercial and political force behind the establishment of television in the US. The technology was supplied by Vladimir Zworykin, a scientist born in 1889 who emigrated to the US in 1919. During his early career in Russia he had researched CRT technology thoroughly, though his initial interest was in its application to X-ray imaging rather than television. In 1920 he joined Westinghouse as a researcher working on photoelectric cells, and wrote a PhD thesis on increasing their sensitivity. By this stage he was actively interested in television, and in 1924 took out patents for both camera (termed 'Iconoscope') and display tubes.
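Stepping back briefly to the interlacing technique described above, the difference between progressive and interlaced scanning orders can be sketched in a few lines of Python (an illustrative toy, not broadcast code):

```python
# Illustrative sketch: the order in which picture lines are scanned under
# progressive and interlaced schemes, for a (tiny) six-line picture.
def progressive_order(num_lines):
    return list(range(1, num_lines + 1))            # 1, 2, 3, 4, 5, 6

def interlaced_order(num_lines):
    odd_field = list(range(1, num_lines + 1, 2))    # field 1: lines 1, 3, 5
    even_field = list(range(2, num_lines + 1, 2))   # field 2: lines 2, 4, 6
    return odd_field + even_field

print(progressive_order(6))   # [1, 2, 3, 4, 5, 6]
print(interlaced_order(6))    # [1, 3, 5, 2, 4, 6]
```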
There were some subtle differences between Zworykin's tubes and Farnsworth's which resulted in litigious wrangles throughout the 1930s, but when Zworykin joined RCA in 1929 and persuaded Sarnoff of the commercial potential of television as a mass medium, the ball started seriously rolling - a process which would culminate in the launch of public television broadcasting in the United States, in New York, on 20 April 1939.

In Britain, the corporate impetus behind television was also a nominally independent organisation, but one which in reality had close links to the institution of government. The BBC began life as the British Broadcasting Company in 1922 before being established under Royal Charter as a corporation in 1926. As it had a legally-enshrined monopoly in UK radio broadcasting, it was obliged, under orders from the Postmaster General (the government official responsible for allocating transmission frequencies), to co-operate with Baird when he approached it asking for infrastructural support to begin public broadcasting using his mechanical scanning system. The first experimental transmissions took place in 1930 and continued for some years, with Baird gradually improving the resolution of his system from 30 lines to 240 during the first half of that decade.

Electronic television was also being developed in Britain. Independently of Farnsworth or Zworykin, work on cathode ray tubes was also being carried out by Electrical and Musical Industries (EMI), based in West London, which had achieved an all-electronic television transmission and display system by 1932. By April 1934, EMI was able to demonstrate a 180-line, all-electronic version based on its 'Emitron' camera tube, which already offered a number of crucial advantages over the Baird system, most notably a camera which could generate live images (at that point Baird's could only produce images by scanning film, i.e. it was a telecine device). In the summer of that year the government established what was to all intents and purposes a committee to invent television, i.e. to make recommendations on the feasibility of beginning regular public broadcasts and on which system should be used. Largely in order to protect the BBC's reputation for political neutrality, the Selsdon Report, published the following year, resulted in a competition. For a three-month trial period, between November 1936 and February 1937, the 240-line Baird system and the 405-line EMI system would broadcast in a series of alternate, scheduled transmissions, and the outcome of that trial would dictate which of the two systems the BBC would adopt for its 'high-definition' television service. The last 30-line Baird broadcast took place on 11 September 1935.

The outcome of the trial was an unqualified success for the electronic method, so much so that in its aftermath Baird's company sub-licensed some of Farnsworth's patents and attempted to market cameras and receivers based on them in the UK. The Emitron cameras could be used for live outside broadcasts, whereas the Baird system could only broadcast footage shot outside a studio which had been originated on film. EMI offered almost double the resolution, and its receivers proved to be far more reliable. From the BBC's point of view, the test period was an exercise in carrying out through committees and officialdom the process of standardisation which, in film, had been left to market forces.
As Lord Selsdon said in a televised address in the opening public broadcast of the test period, on 2 November 1936:

From the technical point of view, I wish to say that my committee hopes to be able, after some experience of the working of the public service, definitely to recommend certain standards as to the number of lines, frame frequency and ratio of synchronising impulse to picture. Once these have been fixed, the construction of receivers will be considerably simplified.9

The test certainly achieved that objective, with the 405-line, 50 hertz EMI system being adopted by the BBC as the television broadcast standard. EMI was declared the winner in February 1937, while Baird's Nipkow disc went the same way as Warner Bros.' Vitaphone disc - rapidly into obsolescence. Regular, scheduled transmissions on the EMI system continued for an average of two hours per day until the outbreak of World War Two in September 1939, when broadcasting was suspended for the duration of the conflict. Approximately 20,000 sets had been sold by that point, though Robson argues that the first two years of television broadcasting in Britain were broadly unsuccessful:

The arrival of TV as we would recognise it - the most potent communication medium of the twentieth century - should have been an outstanding success, the 'must-have' for all middle-class Londoners in the years leading up to World War Two. In fact, the numbers of viewers never grew large enough for it to have any social force, and the short-lived experiment, lasting less than three years up to 1939, completely failed to achieve its potential.10

This judgement is surely on the harsh side, given that this service was the first of its kind anywhere in the world and that the cost of the early receivers was around half that of an average family car: its significance is more in the technical precedents set than as a 'social force'.

In America, RCA went through the same process, only using Zworykin's standards and technologies, approximately two years later. This time the inaugural event was the World's Fair in New York on 20 April 1939, when the RCA-owned National Broadcasting Company (NBC) transmitted images of David Sarnoff declaring that 'now we add sight to sound'.11 As with Baird and EMI in Britain, a standards row was also underway in the US, the main protagonists being RCA and Farnsworth. In the absence of a BBC-type organisation, the US Government's Federal Communications Commission (FCC) had been established to regulate the radio industry, allocating frequencies and licensing broadcasters. In 1940 it established the National Television Standards Committee (NTSC) to do likewise with television, and one of its first tasks was to standardise the broadcast system. This was eventually established as the 525-line, 60 hertz interlaced system with which the term 'NTSC' is synonymous today, and the FCC authorised commercial broadcasting using this standard as of 1 July 1941. As in Britain, public service broadcasting and the commercial manufacture of equipment ground to a halt during the war, though in the US this was as of 7 December 1941 (the bombing of Pearl Harbor) rather than 3 September 1939 (Germany's invasion of Poland).

There is one footnote to add to the story of pre-war television, which is that Nazi Germany also operated a small-scale broadcasting infrastructure, using a 180-line mechanical system.
The first transmissions took place in March 1935, and it is estimated that between 200 and 1,000 receivers were manufactured. Very little is known about the technical details of this system (except that it was totally incompatible with any of those being developed in the UK or the US), and it is not believed that the Nazi broadcasts were seen by anything like the number of viewers for pre-war UK and US programming.12

The post-war period: colour, PAL and the emergence of a mass medium

The two decades following the end of World War Two saw television make the transition from an embryonic prototype into the mass medium that succeeded cinema exhibition as the principal means through which consumers viewed moving images. Broadcasting in the UK resumed on 7 June 1946, and by 1949 150,000 licences had been sold.13 In America the growth in sales was even more dramatic, including, for example, a 500 per cent increase in the number of receivers sold between 1947 and 1948.14 In terms of the consumer hardware, economies of scale began to kick in with a vengeance as consumer confidence in the broadcast standard and an upturn in the Western economies, helped by an unprecedented level of political stability following the end of World War Two, encouraged broadcasters and consumers alike to invest. One key milestone was the launch of the RCA 630-TS television receiver, dubbed the 'Model T' of television, in 1946.15 Priced at $385, it had sold 10,000 units by the end of that year alone. As the mass-manufacturing techniques (especially in relation to the cathode ray tubes themselves) evolved and became more reliable, prices of television receivers fell substantially in real terms.

The next major technological change to be researched, developed and then rolled out was colour, and this followed a very similar pattern to that of the establishment of mass broadcasting: the basic techniques were established, shown to work reliably, proven to be economically viable on a large scale and finally subjected to the political processes which enshrined them through technical standardisation. Unlike pretty much every film-based colour system used from the end of World War Two onwards, colour in television is exclusively additive (see chapter three for an explanation of additive and subtractive colour). Both Baird in 1928 and Herbert Ives of Bell Labs in the following year demonstrated crude mechanical colour systems based on multiple Nipkow discs. Instead of using a single sequence of spiralled holes in the disc which scanned a monochrome image based on the intensity of light which passed through it, the colour version had three: one for each of the primary colours (red, green and blue). The receivers contained gas cells which produced light of corresponding colours in the reproduction process. Although these systems suffered from all the same drawbacks as monochrome mechanical television, they established the principle of splitting the scanning process into three stages and 'adding' colour information to the existing monochrome scan that way. As mechanical television became obsolete, engineers began looking for ways of capturing, broadcasting and reproducing colour information using the electronic scanning methods which had been established by EMI in the UK and RCA in the US.
Fig. 6.1 A British advertisement published in 1950. 'His Master's Voice' was a trade name of EMI, hence the claim 'made by the people who gave the world its first electronic television system'. The light output from the first generation of mass-produced cathode ray tubes was usually insufficient for viewing in anything other than subdued light, and 15 inches across was about the largest cathode ray tube it was possible to mass-manufacture at the time these receivers were sold.16

The first system to have been systematically demonstrated was in fact a mechanical/electronic hybrid, proposed by Peter Goldmark, a scientist working for RCA's main commercial rival in the radio industry, Columbia Broadcasting System (initially the Columbia Broadcasting Company, but known by the initials CBS following a series of mergers in 1926). He positioned a rotating filter wheel containing red, green and blue segments between the lens and a camera tube. When synchronised with the tube's scanning beam, and with the scanning rate increased threefold, each frame was scanned six times in sequences of red-green-blue, the latter sequence being interlaced with the former. In the receiver tube the process was reversed, with the electron gun receiving the signal firing through a reciprocating filter to illuminate the phosphors at the end of the tube accordingly. In essence, the principle was the same as with the Kinemacolor film system (see chapter three), but with an electron beam scanning device rather than panchromatic film as the receiving medium.

When it was demonstrated in July 1940, Goldmark's system was generally acknowledged to produce an acceptable quality of colour reproduction. As a result, he faced intense opposition from RCA, which at the time was on the verge of rolling out commercial monochrome broadcasting. This opposition took two forms: firstly by emphasising the mechanical nature of the system (i.e. trying to associate it with the proven drawbacks of the Nipkow disc, and characterising its claims to being electronic as 'counterfeit'), and secondly by highlighting compatibility issues. As the broadcast signal consisted only of the red, green and blue scanned colour records, there was no means of reproducing a monochrome image from it, and the signal could not be received on a set designed for the existing NTSC system. As RCA had undertaken a long process of intense political lobbying to get its system adopted as the national broadcast standard, Sarnoff was not likely to sit around while a rival, incompatible system which also offered colour into the bargain was allowed to compete in the marketplace. The result was that the CBS/Goldmark system never progressed beyond a few experimental broadcasts.

Further significant research and development was interrupted by World War Two, after which the broadcasters' and electronics industry's emphasis - both in the US and the UK - was to increase economies of scale to the point at which monochrome television became a viable mass medium.
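The field order produced by the Goldmark filter wheel described above can be sketched as follows (a purely illustrative reconstruction from the description in this chapter, simplifying the synchronisation details):

```python
# Illustrative sketch of field-sequential colour: each frame is scanned six
# times through the rotating red/green/blue filter wheel, the second
# red-green-blue pass being interlaced with the first.
def goldmark_fields(num_frames):
    colours = ['red', 'green', 'blue']
    fields = []
    for frame in range(num_frames):
        for scan in range(6):
            line_set = 'odd lines' if scan < 3 else 'even lines'
            fields.append((frame, colours[scan % 3], line_set))
    return fields

for field in goldmark_fields(1):
    print(field)
# (0, 'red', 'odd lines'), (0, 'green', 'odd lines'), ... (0, 'blue', 'even lines')
```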
There was also the growth of TV in other parts of the world to consider, something to which the American and British hardware manufacturers gave a high priority in order to recoup the large investment they had made in research and development since the late 1920s. Japan became the first country to begin regular broadcasting in the post-war period, starting in 1951, while the rest of the developed world followed gradually during the decade. Work on colour did resume, with the start of a two-decade process that would culminate in the technology becoming universal.

The next major advance was a purely electronic system developed by RCA and publicly demonstrated in June 1951. It was, in effect, an electronic, additive version of three-strip Technicolor. The camera contained three separate tubes and an optical system behind the lens which split the incoming light into its constituent primary colours of red, green and blue. However, unlike in Technicolor, the resulting images were not recorded on separate strips of monochrome film in the form of subtractive negatives - instead, the resulting electronic signal was modulated and broadcast. In the receiver tube, three separate electron guns reversed the process, illuminating phosphors which would glow with a colour and intensity corresponding to those captured in the camera. The superiority of RCA's system lay not only in the absence of any moving parts. The Goldmark/CBS method was completely incompatible with the existing, monochrome signal standard which had been established by the NTSC in 1941: a CBS colour set could not receive the existing black-and-white service. The RCA system broadcast two separate modulated signals. The first was identical to the pre-existing black-and-white standard (the 'luminance', or 'Y', signal): in a colour camera, it was produced by combining the output of all three tubes to generate a single measure of brightness. The second signal contained the scanning information needed for the three electron guns in the receiving tube to reproduce the colour information captured by the camera, and was known as the 'chrominance' or 'C' signal. A black-and-white television set simply ignored the 'C' signal, while a colour one combined both to reproduce the image as it had been shot by the three-tube camera. Conversely, black-and-white cameras could continue to be used, because the colour receivers could reproduce the images from them by simply firing all three electron guns with equal intensity (remember: in additive colour, combining all three primaries gives you white) in order to display varying shades of monochrome. The RCA system, therefore, could be introduced without any of the hardware which had been sold during the 1940s being rendered obsolete.

However, the successful conclusion of RCA's research programme came too late to prevent the FCC from licensing the CBS method, following a protracted legal wrangle between Sarnoff and CBS. The latter accused the former of attempting to monopolise the television industry, and pointed out that because the two systems were completely incompatible, the public should have their chance to decide between them in the marketplace. In May 1951, the US Supreme Court authorised CBS to begin commercial broadcasting, though in truth this turned out to be a purely symbolic victory. In the following months sales of CBS sets were negligible, and, at the request of the military, production was then suspended throughout the Korean War.
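The compatibility principle at the heart of the RCA approach can be illustrated numerically. The luminance weightings in the sketch below are the ones standardised for NTSC rather than figures taken from this chapter, and the chrominance is shown simply as two colour-difference values rather than the modulated subcarrier actually broadcast:

```python
# Illustrative sketch of 'compatible colour': a luminance (Y) signal that a
# black-and-white set can display on its own, plus colour-difference signals
# that a colour set combines with Y to recover red, green and blue.
def encode(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b   # NTSC luminance weighting
    return y, (r - y), (b - y)              # Y plus two colour-difference signals

def decode(y, r_minus_y, b_minus_y):
    r = y + r_minus_y
    b = y + b_minus_y
    g = (y - 0.299 * r - 0.114 * b) / 0.587  # green recovered from the other two
    return r, g, b

y, cr, cb = encode(0.8, 0.4, 0.2)                # an orange-ish picture element
print(round(y, 3))                               # a monochrome set displays this grey level
print([round(v, 3) for v in decode(y, cr, cb)])  # a colour set recovers R, G, B
```

A monochrome receiver displays the first value alone as a shade of grey; a colour receiver uses all three to recover red, green and blue, which is why the sets sold during the 1940s were not rendered obsolete.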
In the aftermath of the Korean War, and following another round of politicking, the RCA method was eventually adopted by the FCC, to be known officially as 'NTSC color'. The first sets went on the market in 1954, and commercial broadcasting on a small scale began soon afterwards.17 But it was to be another decade before colour television became widespread. Because RCA/NTSC was fully compatible with existing black-and-white hardware, there was little incentive for studios or consumers to make a fresh round of major investment - for all the same economic reasons that the film industry had ignored widescreen in the aftermath of the conversion to sound. By 1960, NTSC colour sets accounted for only one in fifty of total receiver sales in the United States.18 A big part of the problem lay in the technology itself, as the three-gun colour tubes were expensive to manufacture, had a high wastage rate and were relatively unreliable (according to contemporary accounts, early models had the frequent and unfortunate habit of exploding). A useful comparison would be with the laptop computers sold in the early twenty-first century: even though they have been available for over a decade, they remain about 30 per cent more expensive than their full-sized counterparts, because manufacturing techniques and the availability of raw materials have not developed enough to reduce the unit cost of the high-power batteries and TFT screens which account for most of what a consumer pays in the shops. By the same token, until the late 1960s colour broadcasting was limited and the cost of the receivers was substantially more than most consumers were able or willing to pay.

In the case of colour television, the development which broke this vicious circle came not from the United States, but from Japan. The Trinitron tube marked a huge improvement in colour resolution and picture definition and eventually proved a lot cheaper to mass-manufacture than the RCA tubes. It was the invention that made Sony a household name and marked the start of a process which would culminate in Japanese manufacturers dominating the consumer electronics market in the 1970s and 1980s. The major drawback with RCA receivers was the use of a focusing system termed a 'shadow mask', through which the three electron guns were discharged. This was needed to align the three beams to the viewing end of the tube accurately (and in some ways can be considered analogous to the beam-splitting device in a Technicolor camera), but it absorbed a lot of light and tended to wear out after a few years' use, resulting in a blurred and indistinct picture.

By the early 1960s Sony had not entered the colour market at all, and its technical director, Masaru Ibuka, was determined that it should not do so until the company had developed a way of overcoming the drawbacks of the RCA tube design. Sony's first attempt was an adaptation of a US military radar display screen known as 'Chromatron': instead of using separate electron guns for each of the primary colours, the tube consisted of a single gun which illuminated three separate formulations of phosphor. In this way the need for a shadow mask was overcome, but the Chromatron tubes proved impossible to mass-manufacture reliably (initially, only around 0.25 per cent of tubes which came off the assembly line were usable, and this figure could not be improved substantially).
The eventual solution, introduced in 1968, was the 'Trinitron' tube - a single cathode ray tube which contained three cathodes (negative electrodes). The focusing method was known as an 'aperture grille', which absorbed only a fraction of the light of an RCA shadow mask. Both systems were able to receive the same NTSC broadcast signal. While shadow mask technology had also been steadily improving throughout the 1960s, the Sony method marked the breakthrough which would enable colour television to become a mainstream broadcast and consumer medium. This was partly due to the clarity and definition of the Trinitron's colour image, though the advances in tube design during the 1960s and 1970s were as much to do with process control and the ability to apply economies of scale to the manufacturing process as with the image itself.19

Both shadow mask and Trinitron-type display tubes continue to be manufactured today, although the use of CRT technology in television imaging has declined as new microelectronic and computer-based methods have emerged. In terms of recording electronic moving images, the charge coupled device (CCD) largely replaced CRT cameras for this purpose from the mid-1980s onwards. It was initially developed by George Smith and Willard Boyle, two scientists working for Bell Laboratories in the US on optical communication devices, lasers and semiconductors. It is, in effect, an electronic memory which is 'written' by exposure to light and sequentially discharged in the form of electrical energy shortly afterwards. Although the first designs were not intended for television and video imaging, ongoing research into this technology resulted in its eventual use for this purpose. CCDs are a fraction of the size and weight of a CRT (the ones in today's cameras are around half the surface area of an average postage stamp), consume an insignificant amount of power and have an almost unlimited lifetime (although, in some circumstances, they can be damaged or destroyed by overexposure). CCDs record colour information in one of two ways: professional studio cameras and telecine devices usually contain separate chips for each of the primary colours, while the cheaper units used in domestic video camcorders (see below) and still cameras use a single CCD fitted with a filter device known as a 'Bayer mask'. This divides the individual photosensitive components, or 'pixels',20 into a mixture of red, green and blue sensitivities.

Alternative technologies to CRTs have also been developed to display television and other types of electronic moving image. The liquid crystal display (LCD) has its origins in the discovery of liquid crystals in the late nineteenth century, when the Austrian chemist Friedrich Reinitzer (1857-1927) discovered a liquid substance similar to cholesterol which could be made opaque or clear by applying intense levels of heat (similar to the phenomenon whereby fat in a frying pan will become clear as it melts, and then opaque as it resolidifies after the heat has been switched off - only liquid crystals will not solidify). Research on the properties of liquid crystals developed slowly throughout the early twentieth century, but stalled in the 1940s due to the absence of any perceived industrial application. A resurgence in research activity in the 1960s resulted in the development of liquid crystals, the reflectivity of which could be permanently adjusted by applying an electric current.
Most modern LCD devices are based on a method invented by the American physicist James Fergason and first publicly demonstrated in 1971, known as the 'twisted nematic field' effect. A more recent development in LCD technology was the invention of the 'thin film transistor' (TFT) display, which uses an 'active matrix' of microscopic transistors to form the image. Once energised, these will retain their chrominance and opacity for as long as is needed, effectively doing away with interlace issues, vastly improving the definition possible from an LCD and significantly reducing power consumption. For this reason, the most widespread use of TFT displays at the time of writing has been in laptop computers, though their use as a moving image display medium has been steadily growing and TFT-based television receivers are now readily available in the developed world (albeit at a substantially higher price than their CRT counterparts).

By the mid-1960s, NTSC was still the only nationally adopted broadcast standard capable of encoding a colour signal, used principally in the USA and Japan. Most of the rest of the world used the 405-line system adopted by the BBC in 1937, which was black-and-white only and of visibly lower definition relative to NTSC. This led to the rollout of the Phase Alternate Line (PAL) standard, which increased the definition to 625 lines but kept the 405-line system's scanning rate of 50 hertz (equivalent to 25fps). Some changes to the way in which the luminance and chrominance signals are modulated for transmission further improved the image resolution over that of NTSC. PAL was essentially a German invention, having been developed by engineers working for Telefunken, though the French, in the form of the state broadcaster Radiodiffusion-Télévision Française, had also come up with a higher-resolution broadcast standard: Système Electronique Couleur Avec Mémoire (SECAM), first demonstrated in 1957, in which each line of colour information was sequentially interlaced with a line of luminance.21 As this is the only difference between PAL and SECAM, the two systems are compatible in black-and-white only (i.e. a PAL television can receive a SECAM broadcast, but will only display it in black-and-white, and vice versa). From summer 1967 most of Western Europe adopted the PAL system (beginning in Britain, where regular colour broadcasting began on 2 December 1967 following experimental transmissions earlier in the year)22 while France, the Eastern Bloc and the Soviet Union converted to SECAM.23 RCA had been hoping to further entrench its dominance in the television market by trying to establish NTSC as a global standard, exploiting the fact that the British 405-line system was not compatible with colour, and in a publicity brochure issued in 1964 (three years before the launch of PAL in the UK) had warned that:

In Europe, government authorities concerned with telecommunications will endeavour, within the next few months, to reach agreement on standards of television for European countries. The responsibility is a serious one, for these standards must anticipate future as well as present requirements, and they must provide the best possible colour television system for all countries concerned.24

There were no significant changes to the broadcast standards in use during the last three decades of the twentieth century.
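For reference, the nominal parameters of the analogue standards discussed in this chapter can be summarised as follows (an illustrative sketch; the figures are the nominal ones quoted above, and colour NTSC in practice runs at a field rate fractionally below 60Hz):

```python
# Nominal parameters of the analogue broadcast standards discussed in this
# chapter. With interlacing, two fields make up one complete frame.
standards = {
    '405-line': {'lines': 405, 'field_rate_hz': 50},
    'NTSC':     {'lines': 525, 'field_rate_hz': 60},
    'PAL':      {'lines': 625, 'field_rate_hz': 50},
    'SECAM':    {'lines': 625, 'field_rate_hz': 50},
}

for name, p in standards.items():
    frames_per_second = p['field_rate_hz'] / 2   # two interlaced fields per frame
    print(f"{name}: {p['lines']} lines, {p['field_rate_hz']} Hz fields, "
          f"{frames_per_second:.0f} frames per second")
```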
In various examples related to film we have seen how, once established through a significant hardware and software base, technical standards in moving image technology become very difficult to supersede, even after advances in the state of the art have rendered them obsolete. This applies equally to television. In the UK, 405-line transmissions were not finally switched off until 1985 (although by that time, all but a tiny fraction of consumers had made the change to PAL receivers), while the US still uses the 'NTSC color' standard pretty much as RCA defined it in the 1950s. Digital terrestrial broadcasting has already begun in some developed countries (and will be discussed in further depth in chapter eight), but even then the encoded picture is of identical characteristics and similar definition to its analogue predecessors (and can therefore be displayed on existing receivers in conjunction with an external digital-to-analogue conversion device). In most cases, the increased 'bandwidth' offered by digital television has been used to increase the number of channels being broadcast, not to improve the picture definition. Attempts to introduce 'High Definition [analogue] Television' (HDTV), principally by Sony in the late 1980s, ran aground. This was mainly due to consumer resistance to being forced to purchase more expensive hardware, which led to a chicken-and-egg situation in which manufacturers were unwilling to invest in production lines and broadcasters were unwilling to incur the increased production costs.

Television and film

As with radio in the 1920s, television initially traded on being a live medium - literally, radio with pictures. It did not take broadcasters long, however, to realise that they needed a method of recording content 'offline' for subsequent transmission, and of recording live broadcasts for future use. For the first two decades of regular, scheduled transmissions, there was no effective means in existence for achieving this electronically, because the volume of 'bandwidth' (analogue signal information) consumed by a broadcast television signal was very much greater than with audio. Film, therefore, became the de facto recording medium, as the technology needed to encode film images electronically and capture television images on film had been developed to the point of reliability almost as quickly as television itself. The 'telecine' device initially used an electronic camera tube in conjunction with a modified film projector. The scanning phases of the TV camera were synchronised to the shutter movement of the projector and its light output regulated in order to enable an even exposure. This process was formally known as 'photoconductive' telecine. It was in routine use by both the BBC and American broadcasters from the first scheduled transmissions onwards: as the authors of the standard textbook on telecine technology note, 'the film and video industries have been intertwined almost since the beginning of television broadcasting'.25 The earliest company which was formed specifically to market telecine equipment was Cinema Television Ltd., known as Cintel, founded in 1938 to service the BBC. The next significant development was the advent of 'flying spot' telecines, in which a small beam of light from a cathode ray tube progressively 'scanned' the film surface line by line (i.e. was projected through the film), with the resulting flow of light being detected by a photoelectric device on the other side.
This had one crucial advantage over the photoconductive method: it enabled the film to be transported in continuous motion rather than intermittently, as in a projector. Illumination was more even, and film damage or interrupted transmission due to a breakdown in projection was significantly reduced. From the late 1980s onwards, CCDs have largely replaced CRTs as the principal imaging device in use in telecine technology.

Two issues which have proven a continuing problem in the use of film as a storage medium for television are speed compatibility and aspect ratios. By the end of the 1920s, 24fps was established as the standard shooting and projection speed for 35mm film worldwide. Film projected at this speed cannot be captured directly by any television imaging device, as the scanning rates for both 405-line/PAL (50Hz) and NTSC (60Hz) are determined by the frequency of the mains power supply in the countries in which they are used. In the case of 50Hz systems the solution was very simple - the speed of the projector was increased to 25fps and each frame scanned twice per cycle. The slight speed increase is imperceptible and inaudible to most viewers, and in any case footage which is shot specifically for PAL television broadcast is usually filmed at 25fps in the first place. The only practical restriction in this method was that until analogue frame store and subsequently digital video imaging techniques became available in the 1980s and 1990s, pre-1930 footage (and amateur film taken since) which was shot at a significantly lower speed than 24fps had to be televised at 25fps. This technical issue is largely responsible for the popular image of silent films today as being exaggeratedly fast, as, for almost four decades, this was the only way they could be shown on television. With NTSC the problem was more complicated, as 60 cannot be divided evenly by 24, nor by any number remotely close to it. The solution was the 3:2 (or sometimes 2:3) pulldown, whereby frames of film shot at 24fps were scanned alternately for three fields (1/20 of a second of interlaced video) followed by two (1/30 of a second). As with the speed increase necessitated by PAL telecine scanning, what should logically be a slightly uneven movement with the NTSC 3:2 method is to all intents and purposes invisible.

Aspect ratios were not a problem at first, because the Academy ratio was universal to all film and television. The advent of widescreen cinema introduced the problem of how material originated in this format could be made to fit the ratio of a television tube, which could not be changed in the same way that cinemas and projectors could be modified. Two methods were developed. The first, known as 'letterboxing', simply displays the widescreen image in a section of the television screen, with the unused area appearing black. As this results in a substantially smaller image than would be the case with Academy ratio material, a technique known as 'panning and scanning' evolved. This involves selecting an Academy-shaped section within a widescreen frame, using it to fill the television screen and cropping the rest. Paradoxically, although this enables material originated on widescreen film to be broadcast without losing any of the television screen, it does mean losing a substantial amount of the original film image (almost half in the case of CinemaScope).
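The NTSC 3:2 pulldown and the proportion of a widescreen frame lost to panning and scanning can both be expressed in a short illustrative sketch (the aspect ratios are the nominal figures used in this chapter):

```python
# Illustrative sketch of two film-to-television conversions discussed above.

def pulldown_3_2(film_frames):
    """Map 24fps film frames to 60Hz video fields: 3 fields, then 2, alternately."""
    fields = []
    for i, frame in enumerate(film_frames):
        fields.extend([frame] * (3 if i % 2 == 0 else 2))
    return fields

# Four film frames (1/6 of a second at 24fps) become ten fields (1/6 of a second at 60Hz).
print(pulldown_3_2(['A', 'B', 'C', 'D']))
# ['A', 'A', 'A', 'B', 'B', 'C', 'C', 'C', 'D', 'D']

def pan_scan_loss(source_ratio, target_ratio=4/3):
    """Fraction of the original frame width lost when cropping to the target ratio."""
    return 1 - target_ratio / source_ratio

print(f"{pan_scan_loss(2.35):.0%} of a 2.35:1 CinemaScope frame is cropped away")  # ~43%
```

The 43 per cent figure is consistent with the 'almost half' quoted above for CinemaScope material.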
Because of this loss of the original image, technically aware consumers and enthusiasts generally prefer letterboxing; but when films are broadcast this way, TV stations inevitably receive a barrage of complaints, including those from people who have needlessly called out repair men in the belief that the black bars indicated a technical fault with their receivers. While panning and scanning is disliked by purists and film industry professionals because of the image loss, broadcasters preferred it because 'the supposition was that viewers neither cared nor noticed'.25 Given the relatively poor resolution of early generations of CRT displays and, in the United States, an FCC requirement that broadcasters fill the screen, panning and scanning was, and by many broadcasters still is, considered the preferable option.

'Telerecording', or 'Kinescope' as the technique is known in US English, is the reverse of telecine: namely, the capturing of electronically originated images on film. The equipment consisted of a high-definition CRT display and a 16mm or 35mm film camera. The camera's shutter was synchronised to the scanning beam of the CRT, and during the three decades (mid-1940s to mid-1970s, approximately) when the technique was in mainstream use, a number of film emulsions were developed specifically for the purpose.27 As a BBC engineer noted, telerecording quality 'greatly improved' between 1946 and 1971. In later models, the two sets of interlaced lines were exposed onto the film separately in order to improve the definition of each individual line.28 Before the use of videotape in broadcast studios became widespread from the mid-1960s onwards, telerecording was the only method available for making a permanent recording of moving image content originated by a television camera. Given the high cost of film stock, its use was restricted: even major national broadcasters would only telerecord broadcasts where repeat transmissions had been scheduled, the content was deemed to have significant commercial value (e.g. for syndication to broadcasters in other territories) or legal reasons dictated that a recording had to be kept (e.g. copyright registration requirements).

Video - initial development of broadcast systems

Both telecine and telerecording technology were initially developed because no reliable means existed of recording the broadcast signal electronically, hence the need to use film as a production and storage medium for television content. Although, for a number of reasons, attempts had been ongoing to develop the means of recording television electronically for almost as long as the medium had existed, they proved to be technically unsuccessful until the mid-1950s, and economically unviable (for use on a mass scale) until the early 1970s. 'Video' - from the Latin verb videre, meaning 'to see' - is a term which has been variously used and abused over the second half of the twentieth century. For the purpose of this book it will refer specifically to the technologies involved in recording television images on magnetic tape, both in analogue and digital form. The earliest attempt to carry out such a recording did not, in fact, use tape at all, but instead encoded the image as grooves on a record. This was the 'Phonovision' system, invented by John Logie Baird as a means of recording his 30-line mechanical images.
Given the comparatively low bandwidth of the signal, Baird found that it was possible to modulate it as audible tones which were then recorded using a slightly modified acetate disc cutter.29 Phonovision could not be considered successful in anything other than an experimental sense, though, as Baird never managed to reproduce the modulated signal as a moving image. However, experiments using computer software to analyse and play back the signal on surviving discs were successfully carried out in the late 1990s, thereby proving that Baird had been able to make an accurate recording.30

The capture of magnetic tape technology from the Nazis in the aftermath of World War Two stimulated research into video recording just as it did for audio. Possibly the first working prototype came from an unusual source: the American singer and entertainer Bing Crosby. In the late 1940s he became one of the first major radio performers to pre-record his broadcasts on tape, primarily in order to reduce his workload and avoid the need to repeat live performances for broadcast in different time zones. When in 1948 he started to perform regularly on commercial television, Crosby asked the engineer and former Army Signal Corps officer John J. 'Jack' Mullin to produce a magnetic tape recorder for video signals. The result was the Crosby-Mullin video tape recorder (VTR), which recorded a black-and-white picture on 1/2-inch tape and was successfully demonstrated on 11 November 1951. The main technical problem in using magnetic tape to record video was accommodating the vastly increased bandwidth. It was found that this could be accomplished in one of two ways: by increasing the tape speed to the point at which the entire modulation could be recorded as a straightforward linear signal, or by recording a number of parallel, diagonally positioned modulated tracks by means of rotating heads. The Crosby-Mullin VTR used the former method, as did the BBC's Visual Electronic Recording Apparatus (VERA) machine, first demonstrated in 1952 and used on a limited scale in 1958.31 This method was soon found to be problematic, because the very high tape speeds it required (240 inches per second for Crosby-Mullin, 200 for VERA) severely limited the running time available from each reel, increased the cost of tape and resulted in high levels of mechanical wear. It was the latter form of recording that would eventually enable videotape to become a standard production technology in television studios worldwide and subsequently to be adapted for consumer use. The American Ampex corporation32 had been founded in 1944 specifically to exploit magnetic audiotape technology commercially, but by the 1950s it was working on ways to reduce the tape speed and improve on the image quality obtainable from the first generation of linear VTRs. The result was the model VRX-1000, first demonstrated at an American trade show in March 1956. This device used four rotating heads (hence the term 'quadruplex' used to describe this type of head assembly) mounted in a cylindrical assembly, across which 2-inch tape passed at a speed of 30 inches per second. Television images recorded and broadcast using this machine were effectively indistinguishable from those broadcast live when viewed in the home, and over the following years it began to be used extensively in studios, primarily as a 'time shifting' device.
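The difference the quadruplex approach made to tape consumption can be illustrated with some simple arithmetic. The Python sketch below compares the running time obtainable from a single reel at the linear tape speeds quoted above against the 30 inches per second of the VRX-1000; the 7,200-foot reel length is a hypothetical figure chosen purely for illustration, not a specification of any of these machines.

```python
# Hypothetical reel length chosen purely for illustration; actual reel sizes
# varied between machines and manufacturers.
REEL_FEET = 7_200
REEL_INCHES = REEL_FEET * 12

tape_speeds_ips = {
    "Crosby-Mullin (linear)": 240,        # inches per second, as cited above
    "BBC VERA (linear)": 200,
    "Ampex VRX-1000 (quadruplex)": 30,
}

for machine, ips in tape_speeds_ips.items():
    minutes = REEL_INCHES / ips / 60
    print(f"{machine}: {ips} ips -> roughly {minutes:.0f} minutes per reel")
```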
Several drawbacks inhibited its use as a primary production medium, a role which continued to be filled mainly by 16mm film. Key among them were the cost of tape, the fact that a recording could only be played back by the same machine which had recorded it, and the difficulty of editing, which could only be done by physically cutting and splicing the tape at the same angle as the modulated signal (a process which required the use of chemicals to make the modulated magnetic oxide visible). Furthermore, an edited tape could not be reused, so to start with VTRs were only used to record complete broadcasts for subsequent or repeat transmission. Writing as late as 1972, a BBC engineer stressed the restrictive nature of early broadcast videotape technology:

Video-tape operation is not cheap. The machines themselves are costly, since they are built to a high standard of precision, and include complex servo mechanisms for maintaining the movement of the tape and of the head-wheel to the accuracy required for replaying in synchronism with other sources. Additional units are needed to maintain the even higher accuracy required for colour. The recording heads are expensive and have to be replaced after a few hundred hours. Running costs include the cost of tapes (which can, however, be reused many times), tape storage, maintenance and editing. At first editing was done by cutting and joining the tape, but electronic editing was provided by the second generation of video-tape recorders. This permitted the timing of cuts to be determined during the recording session, so that the producer was closely concerned with the editing.33

The key advantages of videotape were that the medium was cheaper than film, recordings could be played back instantly (with no need to send exposed film elements to a lab for processing) and the tapes could be re-recorded many times over. Sadly, this last feature of the new technology led to the inadvertent loss of what many archivists would now consider to be culturally important programming, now 'missing, believed wiped'. Rotating head technology took another step forward with the introduction of the 'helical scan' method in 1961, which was a significant refinement of the quadruplex drum and reduced the number of recording heads needed to one. From then on the technology was gradually refined over the following three decades to add the functionality that users of broadcast video systems were taking for granted by the close of the century. NTSC colour recording followed with the launch of the Ampex VR2000, available from 1965,34 with PAL equivalents coming on the market shortly after that standard's mass rollout in European markets. By 1972, the BBC had 42 machines.35 Electronic video editing, which is achieved by selectively copying content from one tape to another, was also introduced by various manufacturers during the 1960s. This eliminated the need to physically cut and splice the tapes, and increased the volume of tape stock which a studio could recycle. The introduction of the 'U-matic' format by Sony in 1970 marked a further milestone, in that it was the first large-scale format to use cassettes rather than separate spools in the tape transport mechanism. From the 1970s onwards videotape technology became progressively more 'user friendly', with even broadcast-standard equipment being operable by non-technical staff, e.g. broadcast news journalists and schedulers.
As one technical writer noted in 1981, the era of 'electronic news gathering' (ENG) was here to stay:

The replacement of 16mm film for news gathering by U-matic cassette recorders had obvious advantages. The machine was portable, it eliminated high film developing costs, it could easily be edited using the new electronic control system and it was able to bring more immediate news coverage to the viewer. Cost savings were estimated at that time to be around 70 to 80 per cent. So began the ENG era.36

By the late 1980s, videotape had superseded film as the primary 'offline' medium in the television industry for everything except the broadcast of feature films and some high-budget, technically complex drama and documentary productions in which 16mm was still used, either due to its superior image quality or its technical versatility in other respects (for example, high-speed nature filming).

Consumer videotape systems

From the mid-1970s video cassette recorders (VCRs) designed specifically for use by domestic consumers began to be mass-manufactured and sold. Their antecedents go back to U-matic and the late 1960s, when a number of Japanese and American manufacturers sought to grow the market for videotape technology by extending it to homes as well as businesses. But, as with the Trinitron tube turning colour television into a successful mass medium, it was Sony which cracked the mass-manufacturing problems and found the application that would make the VCR an attractive proposition. Sony's founder, Akio Morita, came up with a catchphrase for it: 'time shift', meaning to record broadcast television programmes off-air for subsequent viewing at a time more convenient to the viewer.37 The machine which Morita came up with to sell this concept was markedly different from its studio-based predecessors. Launched in September 1975, it used a format named Betamax. This consisted of a feed and take-up spool mounted inside a plastic cassette which contained a number of safety devices for preventing accidental damage to the tape caused by mishandling. The recording method was helical scan on 1/2-inch tape, using a slightly more compact tracking geometry than broadcast-standard formats. The result was a cassette around the size of a small paperback book which, initially, could hold an hour of footage. The VCRs which Sony produced for the format contained two features which were not included in studio versions of the technology: a television tuner, which enabled them to record off-air signals, and a timing device which allowed recordings to be made unattended. To start with, Betamax was aggressively marketed as a 'time shifting' device. Advertisements were broadcast in which Count Dracula is shown using the machine to record primetime evening programmes ('If you work nights like I do, you miss an awful lot of programmes'), though it was never suggested - to start with, at any rate - that videotape was intended to allow consumers to accumulate their own permanent collections of material. This, however, was precisely the concern of many broadcasters and film studios, with the result that Universal (which owned two hit television shows that were broadcast simultaneously on different channels and therefore considered itself a key target for 'time shifters') sued for breach of copyright. The case dragged on into the 1980s, eventually culminating in the 'Betamax ruling'.
In it, the US Supreme Court ruled that the sale of equipment which could potentially be used to facilitate copyright theft could not be restricted unless that equipment had been designed specifically and exclusively for that purpose. It is this decision which forms the legal basis for domestic VCR use worldwide, in effect ensuring that it is not an offence to record 'off air' material simply for the purpose of a single viewing later, as long as the intention is not to keep the recording permanently. Of course this has proven impossible to enforce, and in 1995 a prominent television industry body published its belief that the average VCR owner possesses a permanent library of between 100 and 200 hours of programming. But by the time the Betamax case was finally resolved, the format itself was dead in the water.

Two contenders emerged during the late 1970s. In one camp, led principally by RCA, were a number of short-lived disc-based systems. Two underwent an attempt at mass-marketing. RCA's 'Capacitance Electronic Disc' (CED), a vastly more sophisticated version of Baird's Phonovision, was marketed from March 1981 to June 1986.38 The modulated video signal was encoded as 'hill and dale' grooves which were scanned by measuring the varying capacitance between the electrode on the stylus and the groove's surface as the disc rotated. The 'laserdisc', initially developed by Philips and marketed from 1978, encoded the picture signal optically as indentations in a highly reflective surface. These were 'read' by bouncing laser light off the surface and using a photoelectric device to register the variations in the reflected beam. The format was relaunched, with digital audio, under the trade name 'CD Video' in the early 1990s. It had the advantage over CED that, as the reading device never made physical contact with the disc's surface, the signal quality would not erode with repeated playings. Film studios and broadcasters were initially very keen on these disc formats, and for one reason: as with records (and, for the first 15 years or so of their existence, CDs), they were 'read only' formats as far as the consumer was concerned, i.e. they did not allow time-shifting. But, unlike with the audio technology of Edison's day (when there was no source material to record in the form of broadcast media), the inability to record was seen as a substantial disadvantage by consumers, and no videodisc system achieved significant sales until the launch of DVD in 1997 (see chapter eight).

The medium which replaced Betamax as the principal domestic videotape format was the 'Video Home System' (VHS),39 developed by another Japanese manufacturer, JVC (Japanese Victor Corporation), and adopted by all the Japanese majors except Toshiba and Sanyo. Crucial to VHS's success were two factors. Firstly, the format sacrificed picture quality on the altar of capacity, with running times of up to three hours for each cassette (thereby making it possible to view an entire feature film uninterrupted). Secondly, JVC persuaded a number of Hollywood distributors to enter into agreements under which copies of feature films would be produced on VHS cassettes, either for rental or for outright sale to consumers. As this British newspaper article from 1969, headlined 'A Sinatra movie on your shopping list', makes clear, it was an idea which had caught consumers' imagination since the first experimental domestic video formats materialised:

On Friday night my wife goes to the supermarket with her weekend shopping list. She checks off what she needs: cereal ... eggs ...
butter ... sausages. Then she says to me: 'What film do you want to show at home over the weekend: a Frank Sinatra or an Olivier?' And she'll buy the film, just the size of a jumbo postcard, along with the cigarettes at the check-out desk of the supermarket. At home I'll slip the film cartridge into the back of my TV set and we will see the film at whatever time we want.

Ease of use was considered another consumer 'plus':

The real point about the cassette revolution is that for the first time it brings 'home movies' into real life - with none of that fiddling around with reels or lamps or screens.40

The VHS system could also be used for time-shifting, although this capability was not explicitly advertised to anything like the same extent as with Betamax. There was another, more subtle attribute of the format which, although not explicitly endorsed by major content providers, probably went some way to allaying their fears over unauthorised recording and copying: it was far less resilient and its picture quality was a lot lower than that of Betamax. VHS encoded a 'colour under' signal - that is, it combined the separate luminance (Y) and chrominance (C) information as broadcast into a single, phase-shifted pattern of modulation on the tape. Although this is capable of producing an acceptable picture on an average-quality television set when a first-generation recording is decoded and played back, any attempt to copy VHS tapes will result in a clearly visible loss of quality in each subsequent generation. Thanks to Sony's high-profile advertising, time shifting was here to stay, though JVC had successfully managed to defuse the tensions which had caused the Sony lawsuit. In this way JVC had hedged its bets between the two potential applications for domestic video technology, and had secured the cooperation of the major content producers to use the medium as an extra revenue stream. By the early 1980s VHS had effectively pushed Betamax out of the domestic market, and by 1995 the rental and sale of prerecorded VHS tapes accounted for over half of Hollywood's revenue.41 Ironically, Betamax turned out to have been of such high quality relative to the needs of consumer use that the cassette design and tape transport mechanism were recycled, in only slightly modified form, in three generations of broadcast VT formats (Betacam, Beta SP and Digibeta), the latter two of which remain in mainstream use at the time of writing.

For almost two decades, VHS has dominated the market for consumer VCRs. For prerecorded content its market share is gradually being subsumed by the Digital Versatile Disc (DVD), which, being a digital medium, will be discussed in chapter eight. The speed at which domestic VCR sales accelerated is in many ways remarkable (from 1.8 million to 86 million units in use in the USA between 1980 and 1995, at which point 90 per cent of television owners also owned a VCR; in Britain ownership went from less than 15,000 units in 1976 to 3 million by 198342), so much so that Brian Winston argues that they 'penetrated society more quickly than any other [mass communications] technology'.44 Economies of scale made the VHS format gradually cheaper over those decades, so much so that, in real terms, a VCR in 2000 cost 8 per cent of the retail price of the first models to be sold in 1976.
A startling illustration of the extent to which the accessibility of VHS has enabled its expansion can be found in the fact that it is apparently used as a primary production medium on quite a significant scale in some third world and developing countries. One example can be found in the so-called 'video films' produced in Nigeria. In the absence of a reliable broadcasting network, and given that most Nigerians cannot afford to own television receivers in their homes, these consist mainly of drama series, not unlike the Western soap opera. They are produced, duplicated and distributed entirely on VHS:

Aimed primarily at the urban youth market, these films are made for profit by small-scale video production companies and individuals and currently comprise an essential part of modern Nigerian culture which dwarfs any 35mm film culture as such. They are produced on VHS on very low budgets, and distributed through a thriving network of video rental shops and communal viewing centres in Nigerian cities.45

Despite what must be the very poor quality of an edited VHS master duplicated hundreds of times over, this format has presumably been found to be ideal for the purpose due to its low cost and reliability.

Piracy

The issue of piracy is one which has affected all commercially produced media content using every form of moving image technology yet invented. It is covered in this chapter because consumer videotape formats have been the principal focus of this activity above all others; but that is not to say that piracy has not affected either film distribution before it or the digital technologies which will supersede it (it has, with a vengeance - of which more in chapter eight). Chambers' Dictionary defines the verb 'to pirate' in this context as 'to publish or reproduce without permission of the copyright owner'. It is not surprising that this should be a major issue where moving image content is concerned. The two crucial elements are reproduction and copyright. As I hope has become apparent thus far, the economics of moving image technology as a mass medium depend almost entirely on a single characteristic of that technology: the ability to produce multiple copies from a single original at little extra cost and with little significant loss of perceived image quality. This, of course, is a double-edged sword. One edge enables the individuals or companies who invested in an original product to distribute or broadcast it to a mass audience cheaply and easily. The other enables unauthorised third parties to do likewise. This is really how the idea of copyright came to exist. Simply put, it accords legal protection to the 'creators' of media artefacts against third parties who seek to exploit them, by unauthorised copying and/or direct exploitation, for financial gain. By 'media artefact' I mean any means of mass-reproducing intellectual property, from Caxton's printing press onwards. The definition of 'creator' varies slightly between different legal systems. For example, in British and US copyright law, the copyright owner of moving image footage is generally understood to be the individual or organisation which put up the money that enabled a film or television programme to be made in the first place.
The French, however, work mainly on the principle of the 'droit d'auteur' (right of the author), emphasising the right of the person who came up with the conceptual idea to control its commercial exploitation, even if he or she was only able to bring it to cinema or television screens with other people's money. However, just because laws exist which say that one must not do something, that does not mean that people will not try. And this is where the specific attributes of moving image technology - that double-edged sword - come into play.

Piracy is nothing new. Its earliest incarnation took the form of duplicating release prints, either by using miniature 35mm cameras smuggled into cinemas, or by the owners of the original elements duplicating them by printing (something which was easier to do before the establishment of the production/distribution/exhibition system, when prints were sold outright to exhibitors). As a countermeasure, some production companies began to incorporate a logo or symbol into a costume or studio set (in much the same way that many television programmes are now transmitted with a 'spoiler', consisting of the broadcaster's logo visible in one corner of the screen) that would be visible on any unauthorised copies, of which surviving examples date back to the late 1890s.46 When the production/distribution/exhibition system based on the rental of film prints for projection in permanent buildings replaced the outright sale of prints to itinerant exhibitors, two new forms of copyright fraud emerged to exploit the new economic structure. The first was the outright theft of prints, most of which were exported and shown in foreign territories with less stringent copyright legislation (no cinema in the same territory as a film's production would be willing to book it from an unauthorised distributor, for obvious reasons). In America this problem grew to the point at which the Motion Picture Producers' and Distributors' Association of America (MPPDA) formed a film theft committee in 1922. Among its recommendations were that distributors should take greater care in ensuring that prints were returned at the end of their run and institute tracking procedures for ensuring that unwanted prints were destroyed. The other problem was that of cinemas falsifying their box office returns. As the amount payable by cinemas to distributors was calculated as a percentage of total ticket sales, rather than a fixed sum, an exhibitor could cheat a distributor out of revenue by claiming to have sold fewer tickets than they actually did.47 While these forms of fraud remained a significant problem throughout the period before consumer video technology became widespread, their extent was limited because 35mm film prints were large, heavy, expensive to produce, required technical expertise to handle and could only be projected in cinemas. No technology existed which enabled criminals to produce copies for direct sale to individual consumers. Any systematic box-office fraud could be detected relatively easily. In fact, a significant proportion of the widespread illicit copying and exhibition which took place in the developed world during this period was done in order to circumvent censorship rather than to make money.
The advent of domestic videotape systems let a technological genie out of its bottle as far as piracy was concerned, for the simple reason that video enabled film and television footage to be shown in a wide range of venues, using equipment that was relatively cheap and did not need any technical skill to operate. Furthermore, even the legitimate use of videotape had a negative impact on the film industry: as Kerry Segrave put it, 'Hollywood hit its roughest period as the VCR and the videocassette arrived and became ubiquitous'.48 The VHS system had been specifically designed as a way for copies of feature films to be rented or sold directly to consumers for domestic viewing (the 'Sinatra on your shopping list'). Although these copies could not legally be used for any other purpose (i.e. in exchange for the rental or sale price, the copyright owner grants permission for private viewing in the home only - hence the warning notice which appears at the start of commercially produced video recordings stating that they must not be shown in, for example, oil rigs, prisons and schools), there was no technical barrier preventing their use in other settings, as there was with film. This rapidly became a serious worry for the cinema industry, which feared that illegal forms of 'non-theatrical exhibition' using videotapes (i.e. screenings to an assembled group, but not in a cinema) posed a threat to its revenue. In 1981, the representative body for UK cinemas described the problem as being 'of very special importance', identifying two specific issues:

The first relates to pirated material, that is, material recorded without the consent of the copyright holder and used without his permission. Into this category fall a large and increasing number of feature films which, in certain cases, have been shown in public houses at the same time or even prior to their local theatrical release ... Secondly, but much more difficult to control, are the feature films made available for domestic use but which are improperly shown other than within the home.49

As with the 'taping' of commercially sold music recordings and radio broadcasts which had been going on for the previous decade, the introduction of consumer videotape technology resulted in an ongoing level of piracy which the industry has been unable to eliminate. In the two decades during which the VHS VCR has existed as a mainstream consumer technology, the film and television industries have, in general terms, adopted a two-pronged approach. To combat professional piracy (i.e. criminals illegally duplicating recordings on a large scale and then selling them directly to consumers for profit) the response has been a combination of legal action (i.e. suing and prosecuting the people who produce the copies) and public relations measures. The latter have often taken the form of advertising campaigns trying to dissuade consumers from the idea that piracy is a victimless crime and/or suggesting that it is linked to more serious organised crime, including international terrorism. Against 'low level' piracy (i.e. individual consumers making a small number of copies which are usually given away to friends and relatives) the industry's defence has been technical rather than legal. As has been noted above, one such measure is incorporated into the design of the VHS system itself, in that any copy will inevitably be of much poorer quality than the original.
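This defence by degradation can be caricatured with a toy model of analogue generation loss. The Python sketch below assumes a starting signal-to-noise ratio and a fixed penalty per re-recording; both figures are invented for illustration and are not measured VHS specifications. The point is simply that each analogue copy inherits and adds to the noise of its source, so quality falls off steadily with every generation.

```python
# Illustrative figures only - not measured VHS specifications.
START_SNR_DB = 43        # assumed signal-to-noise ratio of a first-generation tape
LOSS_PER_COPY_DB = 3     # assumed penalty added by each further re-recording

snr = START_SNR_DB
for generation in range(2, 7):       # copies of copies, generations 2 to 6
    snr -= LOSS_PER_COPY_DB
    print(f"generation {generation}: roughly {snr} dB")
```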
Another widespread copy protection device used with commercial VHS recordings is called Macrovision, named after a California-based company which was formed in 1983. It works by manipulating the 'automatic gain control' (AGC), a circuit found in domestic VCRs which detects the optimum modulation strength for recording an incoming signal onto blank tape. The Macrovision signal on a source tape fools the AGC into thinking that the video and audio modulation it receives is weaker or stronger than it actually is, so that the resulting (copied) recording will be distorted and unwatchable. This does not affect playback of the original, which appears normally because the television set to which the source VCR is connected does not have an AGC. Someone with a basic knowledge of electronics could easily defeat Macrovision by making minor modifications to the VCR being used to record the copy (instructions for which are readily available on the Internet). But the industry works on the assumption that the majority of consumers do not have such knowledge and that, therefore, systems such as Macrovision will prevent most domestic piracy from taking place.

Conclusion

Television and video fundamentally changed the culture and economics of moving image technology, starting from the point at which film was the only viable medium. The initial commercial impetus was simple: radio with pictures. From the outset, the technologies used in producing, receiving and recording televisual images had to grapple with the conflicting demands of live and recorded content, both in terms of their capabilities and limitations and also of the cultural expectations resulting from their use. The two decades during which broadcast television was a commercial reality but videotape recording was not saw the establishment of a complex relationship between film and broadcasting which, although scaled down by the arrival of VT, remained in existence until the end of the century.

As with film, there was also the standards issue. Brian Winston argues that the videotape revolution was characterised by the 'proliferation of mutually incompatible boxes'.50 In fact, commercial competition between incompatible technical standards shaped the evolution of practically all the technologies related to television right from the outset. Whereas the 35mm standard in film was determined in the absence of any serious alternatives and established itself as a bedrock of the industry during the first decade of its life, the birth of television was characterised by the BBC-mediated battle between Baird and EMI, promoting electromechanical and cathode ray tube image scanning respectively. As the subsequent fight between David Sarnoff and CBS over colour demonstrated, the establishment and operation of technical standards also acquired a political dimension where television was concerned. In this respect it followed the lead of radio, in which the political regulation of 'bandwidth' was a key influence on the development of the medium, as distinct from film, in which the only significant legislation affecting the technology was that which addressed the health and safety issues associated with nitrate. Videotape technology was initially used only by programme makers and broadcasters, who sought the versatility and economy of a recording medium which, unlike film, could be replayed instantly, re-recorded without practical limit and used to make permanent recordings of live transmissions without the expense of telerecording onto film.
As VT became more reliable and easier to edit, a process began which saw it replace film for all except high-budget productions which required film's higher image quality as an origination medium. It was only a matter of time before the technology would be offered for sale directly to consumers. The opening of the home video age was marked by yet another battle over standardisation, but on this occasion the impetus was economic rather than political. Sony's Betamax system was technically superior to JVC's VHS and marketed for a purpose which even required the invention of a new piece of technical vocabulary - 'time shifting'. VHS, on the other hand, was developed in conjunction with the media industry and intended primarily for consumers to rent or purchase prerecorded video copies of Hollywood films. Once that genie was out of the bottle, what had been the biggest economic strength of all moving image technologies to date - the ability to produce multiple copies quickly and cheaply from a single source - quickly became their worst enemy, as the illegal exploitation of 'official' recordings and the unauthorised production of new ones suddenly became very easy. We have seen in this chapter how the political and economic factors influencing technical standardisation shaped the emergence of the related technologies of television and video recording. The next chapter examines the issues these and other factors raise in an activity which no one who designed the mass-produced moving image technologies ever really considered.

chapter seven: archival preservation and restoration

The paradox of Theseus's ship, which poses the problem of conservation and restoration, has always fascinated me. The ship in which Theseus, slayer of the Minotaur, returned home from Crete, was kept like a sacred relic by the Athenians for centuries. Over the years the old pieces of wood had to be changed to save the vessel from the ravages of time, to such an extent that after so many replacements and substitutions, it could quite legitimately be claimed that the original ship no longer existed.'

Surely it is the images that are the important thing, not the base that they are on ... Give me acetate or polyester any day: I'll take my chances with vinegar syndrome - at least it is easy to identify - and I'll sleep easy knowing that my vault is not going to go 'wallop' in the middle of the night!2

Throughout this book so far I have used the adjective 'permanent' in relation to a number of recording media. All these uses make the implicit assumption that as soon as a reel of film emerges from the fixing bath, or a Vitaphone master is removed from the electroplating machine, or a videotape is wound onto its take-up spool, a moving image or sound recording has been created which is indelible and will last forever, unless someone makes the conscious decision to dispose of it or (in the case of magnetic tape) overwrite the medium with a new recording. It has not and it will not. Although political and regulatory influences have played a part (e.g. in determining technical standards), the invention and development of all the technologies used to record moving images have been commercial ventures. The investors behind them believed - correctly, in the short term - that the money which was to be made out of the media content recorded using these technologies would be generated in a relatively short time following production.
So by 'permanent', the individuals and organisations which used film and videotape usually thought in terms of weeks; months at most. Simply put, they neither knew nor cared what would happen to those media after that time. The idea of systematically preserving moving image content as a public record did not come from within the media industries, but instead from museum curators, librarians and academics who believed that its cultural value was significant, even