chapter five

cinema exhibition

'When he saw the living pictures, he jumped up like a hare; and it took nine ushers just to hold him in his chair.'1

The first half of this book has examined the evolution and implementation of the various technologies used to record and manipulate moving image content with photographic film as the primary medium. The issues covered have included materials chemistry (film bases), photographic chemistry (emulsions, dyes and colour processes), physics (light and its manipulation in order to record a photographic image), mechanical engineering (cameras, printers, lenses and acoustic sound recording), electronics (electrical sound recording), economics (the business models through which technologies have succeeded or failed in the marketplace) and politics (the legal regulation of technology and the role of technical standards). In the following two chapters we turn our attention to the other side of the fence and consider the technological factors which shaped and influenced the way this content has been delivered to its consumers. These are cinema exhibition - the projection of films in front of a mass audience - and the allied technologies of television and videotape. While the latter are also a significant primary production medium, it is estimated that some 80 per cent of the total footage broadcast on public television stations over the history of the medium was originated on film, and over 90 per cent of the world's videocassette recorders are to be found in private homes. For the purposes of this discussion, therefore, we will consider television and video primarily as delivery systems.

At first sight, the rationale for treating the technology of cinema exhibition as a separate issue from those of film production, post-production and distribution might not seem obvious. There are, however, two powerful reasons for doing so. Firstly, although it is now a relatively small industry, for over half the total lifetime that moving image technology has existed as a mass medium, it was a major industrial and cultural force and the only significant means through which content was delivered to its customers. Simply put, until the late 1950s (in the developed world; even later in other regions), going to the cinema was the one and only way in which most people saw any form of moving image. Secondly, the sheer infrastructural size of this sector of the film industry had a fundamental effect on the economic model through which it operated, one which had a very specific effect on both its own technologies and those of the studios which supplied it. At the sector's economic peak in the mid-1940s, there were probably several thousand cinema projectors in use for every one studio camera. The technological costs at the exhibition end, therefore, were a far bigger factor affecting development and change in this area than they were in studio production. The role of standardisation was thus especially significant, because economies of scale made the stakes higher. This chapter, therefore, will examine the ways in which cinema projection technology has been established, evolved and changed in the 110 years since what is believed to have been the first public film projection took place. In doing so, I hope to show that the capabilities and limitations of this technology have been a fundamentally important influence in all areas of film's use as a mass medium, and one which is a key factor in the use of moving images within society and our relationship to them.
The beginnings: 1889-1914

For the first six years during which film was being used to record moving images, there was no known method in widespread use of substantially enlarging the picture and displaying it before an audience. When the Edison/Dickson camera and Eastman's film were combined to produce a viable means of recording moving images in 1889, the initial expectation (as discussed in chapter one) was that it would be marketed primarily as a technology for individual viewing. It is worth bearing in mind that Eastman Kodak had hitherto been mainly producing materials and hardware for amateur still photography, and that Edison's only other audiovisual interest before he became involved in film - the Phonograph - was also designed and marketed for home use. Cultural technologies which involved communal viewing (e.g. lanterns and slides) were not a market in which either company had taken any systematic interest.

The only form of moving image 'playback' which was used on any significant scale during the early 1890s was the Edison Kinetoscope (designed by Dickson and patented in 1891). This consisted of a wooden cabinet in which a 50-foot endless loop of film was transported by an electric motor over a series of rollers between an incandescent light bulb and a rotating disc shutter. Above the shutter was a lens, similar to that in a magnifying glass, through which the user saw the image. From chapter one it will be recalled that the key discovery which enabled moving images to be photographed was the ability to create the illusion of continuous movement, and that the invention of a flexible, transparent solid (i.e. film) cleared the way for this to be exploited in a camera mechanism. All successful motion picture camera mechanisms, from Dickson's onwards, were designed to stop the film movement completely during the moment of exposure. The Kinetoscope did not: it was a continuous motion machine, perhaps as a result of continuing difficulties which Dickson had experienced with the intermittent mechanism on his 'projecting Kinetograph' in 1889-90.2 In order to minimise visible flicker, a film speed of 40fps was found to be necessary, almost double what would eventually become the industry standard of 24fps and almost three times the effective minimum needed for flicker-free projection using an intermittent mechanism (16fps). A Kinetoscope had a film capacity of 50 feet, giving a total viewing time of just over 20 seconds.3 For a few years from April 1894 a number of 'Kinetoscope parlours' opened in American cities, in which these machines were used on a coin-operated basis. But apart from the illusion of movement, they did not offer customers any real advance on the optical toys of the late Victorian period. By the turn of the century they had been superseded by the method of film exhibition which, in the developed world, dominated the industry until the mass rollout of television in the 1950s: projection.

The event which many historians argue marked the start of the film industry as we now know it took place in a Paris cafe on 28 December 1895, when two brothers who had previously run a business manufacturing glass plates for still photography, Auguste and Louis Lumiere, demonstrated their 'Cinematographe' to a paying audience. This screening set both a technological and an economic precedent. The Cinematographe served three functions in one machine - it was a combined camera, printer and projector.
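The running times quoted here follow from simple arithmetic: standard 35mm film carries 16 frames per foot, so footage, frame rate and duration are rigidly linked. The sketch below (the function name and layout are mine, not from any period source) simply makes that relationship explicit.

```python
# Footage <-> running time for standard 35mm film, which carries 16 frames per foot.
FRAMES_PER_FOOT = 16

def running_time_seconds(feet: float, fps: float) -> float:
    """Screen time of a given length of 35mm film at a given frame rate."""
    return feet * FRAMES_PER_FOOT / fps

# The Kinetoscope's 50-foot loop at 40fps:
print(running_time_seconds(50, 40))          # 20.0 seconds
# For comparison, a standard 1,000-foot projection reel at 16fps:
print(running_time_seconds(1000, 16) / 60)   # about 16.7 minutes
```

The same arithmetic also shows why the Kinetoscope's 40fps was expensive in film: doubling the frame rate halves the viewing time obtainable from a given length of stock.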
Projection was achieved by removing the back cover from the mechanism and placing a light source behind it, which passed through the exposed film and the lens. About 30 feet in front of the Cinematographe was a matt white screen, approximately 10 feet by 7 in size, from which the light source was reflected, thereby enabling an audience sitting in the room to view the photographic image on the film. The reproduction of movement was very straightforward. The mechanism which enabled the exposure of still images in rapid succession when the Cinematographe was used as a camera served exactly the same function in projection, only with light travelling in the opposite direction. A claw-and-cam system, similar to that used to insert and retract the needle in a modern sewing machine, was mechanically linked to a rotating disc shutter similar to the one used in the Kinetoscope. While the shutter blade obscured the flow of light, a claw would be inserted through perforations in the film and pull it down through a gate for the length of one frame. The claw would then retract as the shutter blade rotated away from the light path, thereby enabling the stationary frame to be projected through the lens. The process was then repeated when the shutter blade completed a revolution and returned to block the flow of light.

The economic precedent was that of exhibiting the same film before a gathered audience. Only one person could view each Kinetoscope screening, meaning that substantial numbers of machines had to be installed in each parlour in order for the operation to generate any significant revenue. The Cinematographe, however, was able to service a much larger customer base (it was estimated that the audience for the December 1895 show numbered about 100) for each screening, and thus recoup the investment in manufacture and research and development far more quickly. This principle turned out to be one of the main drivers behind the growth of film as a mass medium during the following three decades: by the time the film industry's 'classical' period was established in the mid-1920s, ever increasing audiences for each screening ensured that the cost of the technology which recorded, duplicated and reproduced the moving images themselves had become a relatively small component in the industry's overall running costs. In other words, it would be impossible to pay Hollywood stars vast amounts of money for each appearance if there were no cheap and efficient means available of screening it to thousands of audiences of hundreds, spread over a large geographical area.

The following spring the Cinematographe made its London debut, showing, as it had in Paris, a 20-minute programme of actuality shorts. With the business case for projected film screenings thus established, the design and manufacture of purpose-built projectors began very quickly, by several companies and individuals, and on an industrial scale. The next significant development came in the United States, where the engineer Thomas Armat began public screenings using his 'Vitascope' projector in 1896. Later models of this projector incorporated a technique developed by another American, Woodville Latham, at around the same time. The 'Latham loop' was a length of film positioned between the continuously-moving sprockets which transported the film through the mechanism, and the gate in which it was intermittently held stationary.
The loops (above and below the gate) acted as a shock absorber, minimising contact between the film and any hard surfaces as it was advanced intermittently through the gate, and have been incorporated into virtually every cinema projector built to this day.

Fig. 5.1 The 'Latham loop' below the intermittent sprocket on a modern cinema projector.

The final core element of technology which formed the projector mechanism as it would enter mainstream use for the following century was an intermittent mechanism that was capable of withstanding the heat generated by the light source needed to illuminate a large screen, while at the same time inflicting minimal wear and tear on the film's perforations. This was the 'Maltese cross' movement, a device which in essence converts the continuous drive from a cranking handle or motor into the intermittent movement of a shaft with a sprocket at one end, which engages the film's perforations. This device, which was developed during the 1890s and early 1900s, gradually replaced earlier forms of mechanical intermittent and by the mid-1900s had almost totally superseded them in mass-manufactured 35mm projectors. Unlike any of the claw-based alternatives, the sprocket teeth inserted and retracted through the perforations at a far gentler angle, and the downward force was exerted by at least two sets of teeth at any one time. This minimised perforation damage and enhanced picture stability by providing a more accurate length of pull-down than the claw-and-cam based alternatives. A technical manual from the 1920s noted that 'no other intermittent movement has yet been evolved which compares with the Geneva [Maltese] movement for accuracy',4 a statement which, for projection, holds true to this day. It appears to have emerged in Europe and America more or less simultaneously: in New York, Thomas Armat's 'Vitascope' projector, based on a Maltese cross intermittent, was used in America's first significant commercial film projection, when a programme consisting mainly of Kinetoscope shorts was shown at Koster and Bial's Music Hall on 23 April 1896. In Europe, the earliest known patent for a Maltese cross mechanism was granted to the German film pioneer Oskar Messter in 1902,6 although projectors using this technology were being manufactured some years earlier. Until the turn of the century, however, the claw-and-cam system remained the most widely manufactured form of intermittent, not least because this had now become an 'off the shelf' technology which could be lifted from a range of popular camera designs.

Fig. 5.2 A schematic of the Maltese cross intermittent mechanism.
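The action of the Maltese cross shown schematically in fig. 5.2 can also be expressed numerically. The sketch below is an illustrative kinematic model of a generic four-slot Geneva movement of the kind described above, not a reproduction of any particular manufacturer's design: it shows the continuously rotating crank pin indexing the sprocket shaft through a quarter-turn and leaving it locked for the remaining three-quarters of each revolution, which is the period during which the stationary frame is projected.

```python
import math

def geneva_wheel_angle(crank_deg: float, slots: int = 4) -> float:
    """Angle (degrees) of a Geneva ('Maltese cross') wheel for a given crank angle.

    The crank angle is measured from the position of deepest engagement (pin on
    the line joining the two centres). Outside the engagement window the wheel
    is held stationary by the locking ring.
    """
    half_window = 90.0 - 180.0 / slots                # 45 degrees for four slots
    phi = (crank_deg + 180.0) % 360.0 - 180.0         # normalise to -180..180
    if abs(phi) > half_window:                        # pin disengaged: wheel locked
        return math.copysign(half_window, phi)
    r_over_c = math.sin(math.pi / slots)              # crank radius / centre distance
    phi_rad = math.radians(phi)
    psi = math.atan2(r_over_c * math.sin(phi_rad),
                     1.0 - r_over_c * math.cos(phi_rad))
    return math.degrees(psi)

# One crank revolution: the wheel (and hence the sprocket and film) moves only
# during a quarter of the cycle - the pull-down - and is stationary for the rest.
for crank in range(-150, 151, 30):
    print(f"crank {crank:4d}  wheel {geneva_wheel_angle(crank):6.1f}")
```

The table this prints also suggests why the movement is gentle on the perforations: the wheel starts and finishes its quarter-turn at zero velocity and accelerates smoothly in between, rather than being snatched forward as with a simple claw.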
In the Cinematographe and the first generation of purpose-built projectors, the light source was identical to that used to project lantern slides. Indeed it was often the same lantern, with the operator mounting a slide or film projector mechanism in front of it as required. This was a form of lighting which was used extensively in theatres and other auditoria throughout the second half of the nineteenth century, and was produced chemically. Known as limelight, it was generated as follows:

An intensely hot flame was directed against a stick of unslaked lime. The flame was produced by burning oxygen and hydrogen gases in proper combination. One tank, known as the saturator, supplied the hydrogen gas by vapourising sulphuric ether. Oxygen was generated in a larger tank, where water dripped into an oxygen-generating compound.7

It is worth noting at this point that at around the turn of the century, mains electricity was not universally available even in many city centres of Europe and North America, let alone in suburban or rural areas. Furthermore, until the late 1900s, cinema exhibition was essentially an itinerant business - in Britain and America, cinema buildings designed exclusively for the purpose did not start to go up on any significant scale until 1908-10. The most usual venues for film projection were music halls, variety shows, fairgrounds and 'nickelodeons' (or 'penny gaffs' in Britain - both epithets refer to the price of admission) - hastily converted premises, usually shops, which entrepreneurs would turn into ersatz cinemas for the smallest possible up-front investment. The projection equipment, therefore, had to be easily portable and not dependent on external services such as an electricity supply.

It does not take a PhD in chemistry to work out that the combination of nitrate film, the intense, localised heat from a limelight flame and several gallons of highly volatile substances - all in an unprotected auditorium containing an audience of hundreds - is not exactly conducive to health and safety. The risk from fire became a major influence in the design of cinema projectors within a few months of their commercial use and the first fires resulting therefrom. The earliest is believed to have been at an auditorium operated by the British film pioneer Birt Acres on 10 June 1896: no one was hurt and the damage was minimal, probably because the quantities of film being handled were not sufficient to cause a serious conflagration. Later that year a temporary pavilion in Berlin being used to show Edison films with a Cinematographe was burnt to the ground, though it is believed that an electrical fault rather than a film fire was to blame. The following year came an incident which placed health and safety at the forefront of cinema exhibition practice, legislative regulation, building and equipment design for the next fifty years. On 4 May 1897 a fire broke out in a projector in Paris, it is believed as a result of its operator using a match to illuminate the refilling of the volatile ether in the darkened room; 125 people were killed and many more were seriously injured. The event had been a charity screening, and many of those killed were the wives and children of prominent French politicians, industrialists and society figures, with the result that the fire received extensive media coverage.8

Over the next decade, fire prevention moved gradually up the agenda, though until mass-manufactured projectors and the buildings used for film exhibition started to incorporate safety features (such as segregated projection booths), cinema fires with three-figure death tolls happened regularly during the 1900s. The highest casualties in a US cinema fire came in Boyertown, Pennsylvania on 13 January 1908, when 167 were killed, and although it was subsequently established that the projector was not the cause, the incident is generally credited with providing the impetus for systematic regulation of the American film exhibition industry. In the following year came the incident which caused the highest known death toll in a cinema fire started by igniting nitrate, when over 250 lost their lives in Acapulco, Mexico.
By the middle of the following decade, film fires that claimed lives in double figures had become very unusual (especially given the vast increase in cinema buildings and ticket sales compared to a decade earlier), but by no means unheard of: Britain's worst incident came as late as 31 December 1929, when a nitrate fire which started during a children's matinee at Paisley, Scotland, killed 69 and seriously injured a similar number (see also chapter 1).9 As early as 1901 an advertisement for the British Prestwich model 5 projector stressed the safety benefits of its 'automatic light cut-off' feature (i.e. the most probable cause of ignition would be removed in the event of a film jam),10 and within a decade this and other safety features would be a legal requirement for public film screenings in many countries.

Towards the end of the 1900s the design and manufacture of equipment and buildings used for film exhibition gradually moved from a cottage industry onto an industrial footing. This mirrored the trend seen across all sectors of the industry as a whole: technical standardisation began to impose economies of scale, the techniques for editing individual shots in order to form extended narratives were perfected, the outright sale of film prints was replaced by the rental distribution system and the economic interests which controlled film production, distribution and exhibition began to consolidate. The development and production of the technology used in these processes gradually came to be supplied by an independent sector, functioning on a service provision basis for producers, laboratories and exhibitors.

Establishing the standards: 1914-26

The second main phase in the development of cinema exhibition technology consisted of two major processes. Firstly, the production of projectors and related equipment shifted from a large number of designs manufactured in small quantities to the production line manufacture of a much smaller number of models (the functionality of which was to all intents and purposes identical). Secondly, the venues in which films were shown began to be designed and built specifically for the purpose. The latter trend would culminate in the 'picture palaces' of the 1920s and 1930s, built with the unique requirements of the audience and the technology in mind. The projector designs which emerged during the 1910s and which were in mainstream use throughout America and Western Europe by the early 1920s had a number of standardised features which had evolved to meet the needs of film exhibition in the emerging 'classical' period and which would remain largely unchanged until the arrival of sound. In essence, these were:

• the use of a Maltese cross mechanism to advance the film intermittently;
• the use of 'Latham loops' to prevent film damage as its motion changed from continuous to intermittent and then back again;
• a concentric disc shutter consisting of two or more blades;
• a spool capacity of between 1,000 and 2,000 feet (considered to be the safest maximum quantity of nitrate film which could be placed at risk of accidental ignition);
• a range of fire safety precautions, including enclosed spoolboxes for the feed and take-up reels, spring-loaded blades which would automatically cut the film above and below the mechanism in the event of ignition, liquid-cooled gates (a system of pipes positioned around the gate area through which water was pumped, removing heat by radiation) and built-in fire extinguishers.
In the United States, two manufacturers emerged during the 1910s which, between them, would account for the bulk of projector sales throughout most of the twentieth century. In 1898 Alvah C. Roebuck, of the Sears-Roebuck mail order empire, was making a healthy profit supplying magic lantern equipment, but soon turned his attention to the film business, believing that this was where the future of moving image technology lay. He founded the Enterprise Optical Manufacturing Company, which, using the trade name Motiograph, progressively refined its line of products, culminating in the launch of the model 1A in 1908. The model E in 1916 and model F in 1921 introduced improvements to the intermittent mechanism, shutter, safety features and picture stability of the mechanism.11 During the early 1900s the pioneer filmmaker Edwin S. Porter (of The Great Train Robbery fame) and two engineers designed the Simplex projector, first sold in 1909, which sold thousands of units each year until its replacement with the Super Simplex in 1928. Although the company was taken over in 1983, projectors continue to be sold under the Simplex brand almost seventy years later. In Europe, the German textiles manufacturer Ernemann diversified into the film industry, launching its first model in 1904. The following year, Bauer of Stuttgart entered the business, initially servicing and repairing Pathe machines, then producing its own models from 1910.12 Kalee in Britain, Philips of the Netherlands and Cinemeccanica and Prevost of Italy were all marketing cinema equipment by the late 1920s.

During this period electricity replaced limelight as the main light source for cinema projection, and electric motors increasingly replaced hand-cranking as the means of driving the mechanism during the 1920s. The basic principles of carbon arc illumination were first demonstrated by the British engineer Sir Humphry Davy (better known as the inventor of the safety lamps used by coal miners) in the early nineteenth century. He discovered that if an electrical current is passed between two thin rods of carbon, combustion of the resulting gases produces light. For use in cinema projection the two rods of copper-coated carbon electrode are mounted on stands in front of a mirror which focuses the beam of light on the projector gate. The light is 'struck' by placing the electrodes in contact with each other and applying the current. The positive electrode is then 'trimmed' by retracting it a short distance from the negative. This creates an electrical spark which crosses the resulting gap (the arc), which in turn ignites the vapours produced by the heated carbon. It is these burning vapours which produce the light. The carbon electrodes are gradually consumed in this process and must then be replaced. Carbon arc lighting, which was also used in studio production until superseded by incandescent light in the late 1920s,13 offered significant advantages for projection over limelight. The running costs were lower, and it was a lot safer than storing large quantities of combustible materials close to the projector.

Fig. 5.3 The British Carey-Gavey 'flickerless' projector - a typical (apart from the four-blade shutter) design of the early 1920s.14 Note the hand-cranked mechanism, enclosed spoolboxes and film cutter (for protection from burning nitrate), and shutter positioned in front of the lens. By the early 1930s, the majority of projector designs positioned the shutter between the mechanism and light source, in order to reduce heat exposure to the film.
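Some rough numbers, offered purely as an illustration, show why the arc needed the dedicated supply equipment described below. A projection arc burns at a low voltage but a very high current - the opposite of what the mains provides - so the power has to be transformed and rectified before it reaches the lamphouse. The arc voltage, mains voltage and conversion efficiency in this sketch are assumed round figures, not values taken from this chapter; the two currents correspond roughly to a small early installation and a large picture-palace one.

```python
# Illustrative only: the electrical mismatch between the mains and a carbon arc.
# The 60 V arc voltage, 230 V mains and 85% conversion efficiency are assumptions
# chosen for round numbers, not figures quoted in this chapter.
ARC_VOLTAGE = 60.0
MAINS_VOLTAGE = 230.0
CONVERSION_EFFICIENCY = 0.85

def arc_power_kw(arc_current_amps: float) -> float:
    """Electrical power dissipated at the arc, in kilowatts."""
    return ARC_VOLTAGE * arc_current_amps / 1000.0

def mains_current_amps(arc_current_amps: float) -> float:
    """Approximate current drawn from the mains to feed the arc."""
    return arc_power_kw(arc_current_amps) * 1000.0 / (MAINS_VOLTAGE * CONVERSION_EFFICIENCY)

for arc_amps in (30, 100):
    print(f"{arc_amps:3d} A at the arc: about {arc_power_kw(arc_amps):.1f} kW, "
          f"drawing roughly {mains_current_amps(arc_amps):.0f} A from the mains")
```

Several kilowatts dissipated a few feet from a nitrate print also helps to explain why so much of the heat-management and fire-safety engineering described in this chapter centres on the lamphouse and the gate.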
A separate machine, known as a rectifier, was needed to convert the high-voltage, low-current mains supply to the high-current, low-voltage power used to feed the arc. Although these generated a significant amount of heat and also contained highly toxic chemicals, including mercury, they could be (and usually were) placed in a separate room and connected to the projector's lamphouse by heavy duty cables running under the projection booth floor. This was one example of how, by the 1920s, fire safety was not only a consideration in the design of projection equipment itself, but also of the buildings in which it was operated. In the US, state statutes and federal regulations usually decreed that any film which was not actually being projected had to be stored and worked on in a separate 'rewind room' fitted with a fire-retardant door, as did rectifiers and transformers. Two separate exits, one of which led directly to the exterior of the building, were required, as were iron shutters mounted on a system of runners covering the projection portholes: in the event of any film fire, they would automatically fall into place, like a guillotine blade, in order to minimise the risk of fumes or flames entering the auditorium.15

The standard length for each reel of a release print remained 1,000 feet (which was shipped to and from the cinema in a fire-retardant solid steel container) until after the conversion to sound, when 'double' reels of 2,000 feet started to be introduced. From the 1910s onwards, almost all purpose-built cinemas were equipped with at least two 35mm projectors (usually more) which were aligned to the same screen, which allowed a continuous programme of any length to be shown. At first changeovers were carried out manually, with an operator working on each projector. As the outgoing reel finished, a cue sheet provided by the distributor described the scene in the finishing reel at which the projectionist operating the incoming machine was to start his motor and open the 'douser' blocking the flow of light from the carbon arc. The conversion to sound required changeovers to be carried out with a greater degree of precision, due to the risk of cutting dialogue if the timing was not almost perfect. A system of electrical interlocking between the two projectors combined with visible cues on the screen was therefore introduced. Standardisation was fast becoming a necessity anyway: in 1927 a projectionist writing in the Transactions of the SMPE complained of the 'punch mark nuisance', noting that home-made cue marks consisting of 'stickers of all shapes, sizes and descriptions' were becoming a major distraction for cinema audiences.16 By the early 1930s the latter had been standardised as a mark which was printed on four consecutive frames of each print (usually by punching holes out of the internegative used to strike it), positioned in a corner of the picture. The reel in the incoming projector would be threaded to a pre-determined point. When the projectionist saw the first cue mark, he would start the motor and strike the arc. On seeing a second mark, which appeared approximately six seconds later, a changeover switch would be operated.
This had the effect of opening an additional shutter, simultaneously closing another one on the machine projecting the reel which had just finished, and switching the sound source between the two projectors. If this procedure was carried out successfully, the audience would be totally unaware of it having taken place (the cue marks appear too briefly for anyone who is not aware of their existence to notice them).

The other major developments in projection technology which took place during the late 1910s and early 1920s were precipitated by the big increases in the size of auditoria which were typically built during the decade. These were a gradual increase in projection speeds and a move towards reducing the typical number of shutter blades from three to two. The increased 'throws' (distance between projector and screen) and screen sizes of the new cinemas resulted in a consequent increase in the light output needed from a projector, and in the importance of maximising its efficiency. This was because of the increased light absorption resulting from the thicker lens elements needed for longer throws and the large, smoky auditoria of the 1920s picture palaces. As a technical manual from the period notes, 'projecting conditions have changed. Whereas 20 to 30 amperes was considered sufficient for projection, owing to longer throws necessary in the larger theatres, many houses are now using considerably over 100 amperes.'17

The design of shutters was a crucial factor in the light efficiency of a projector mechanism, and one which in turn was affected by another parameter which would eventually be standardised - the film (transport) speed. As explained in chapter one, the recording and reproduction of 'moving' images exploits the ability to create the illusion of continuous movement. In relation to film-based moving images, this describes the effect whereby, if a sequence of still images is captured and displayed in quick enough succession, it will appear to the viewer as being a continuously moving image. The function of a shutter, in both a camera and a projector, is to separate the recording and reproduction of each image, or 'frame', by blocking the flow of light as the film is advanced. Through a process of trial and error, the first generation of filmmakers established that in order for the flow of movement to appear fluent, a speed of 16fps is, for most people, a bare minimum. Any less than this and the amount of time which elapses between the exposure of one image and the next, especially if rapid movement is present within the subject (e.g. a shot of someone running), will be so great that a 'jerk', or uneven movement, will be perceived when the sequence is reproduced. There is also a maximum period of time which may elapse while the light flow from a projector is obscured during the intermittent mechanism's 'pulldown' cycle. If the light source is restored within that time, the human brain will not have time to register the fact that the screen has gone dark. If it is not, a gradual fading of the image will be seen, followed by a reciprocating increase in light intensity when the following frame has been pulled into place. This will be perceived as a flicker, and from the 1920s onwards one of the major problems facing projector designers was to maximise the use of available light while minimising the appearance of any flicker.
The issue of flicker reduction in itself went back even further, as this description of a projector sold from 1897 makes clear:

The movement is a six to one, i.e. the film is stationary for five-sixths and is being changed in the remaining one sixth, so that the shutter used is but a very small one, and only one-sixth of the light obstruction (sic), which reduces flicker to a minimum.18

Yet before the post-World War One cinema building boom the problem was vastly simplified in that light intensity was not a major problem, given the short throws to small screens which characterised the first generation of temporary auditoria. The only crucial requirements were to minimise the time taken by the pulldown and to ensure that the flow of light was even. The appearance of a perceived flicker was not affected by the total light output over the complete duration of the shutter cycle, only by the time of each 'obscure' phase within it. It did not matter if, during an hour-long film presentation, there was no light being projected on the screen for over thirty minutes of the total, just as long as each individual period of darkness did not exceed a certain limit. In order to produce an acceptably consistent picture with a film speed of 16fps, it was eventually established that a concentric shutter with at least three blades was needed. Two of those were 'cycle' blades, and the pulldown took place while the third covered the light source. The greater the number of blades, the more even the screen illumination appeared - hence the projector shown in fig. 5.3, which was advertised as being 'flickerless', featured a shutter with as many as four blades.

The quid pro quo for increasing the number of shutter blades was a reduction in light output: the greater the proportion of the shutter cycle spent with the light obscured, the less bright the image will appear to the human eye. When the 1920s picture palaces started to be built this became a major issue. In order to maximise the economies of scale for their operations, exhibitors put up cinemas with higher and higher capacities: by the end of the decade, auditoria containing 2,000 seats were not uncommon. This necessitated a far greater throw than had ever been needed in pre-classical times, with the result that even the output capacity of the newly developed carbon arc lamps was being stretched, not to mention the consequent increase in film fire risk from the additional heat they generated. Reducing the number of shutter blades from three to two could potentially increase the amount of light on the screen at no extra cost and without making the projector any hotter. In 1921 another projectionist training manual advised that 'the three equal sector shutter is scientifically most perfect', although other designs were being developed to boost light output.19 The problem was that without a compensating increase in the film speed, a two-bladed shutter would introduce flicker.

By the end of the 1900s, 16fps had effectively established itself as a default speed for studio production, one which was consolidated after the introduction of the Bell and Howell 2709 camera (see chapter two), which exposed 16 frames for every two revolutions of the cranking handle. A projection manual from 1922 describes 60 feet per minute (16fps) as the 'standard speed'.20 But until the introduction of sound forced the issue, there was no formally standardised shooting and projection speed; and nor could there be, while the majority of cameras and projector mechanisms were hand-cranked.
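The arithmetic underlying the shutter-blade problem is worth setting out, because it also explains the speed increases described next. What the eye responds to is the number of times per second the light is interrupted, which is simply the frame rate multiplied by the number of shutter blades; the figure of roughly 48 interruptions per second used below is a commonly quoted rule of thumb for cinema screen brightness, not a number taken from this chapter.

```python
# Flicker arithmetic: interruptions per second = frame rate x shutter blades.
# ~48 per second is a commonly quoted minimum at cinema screen brightness
# (a rule-of-thumb assumption, not a figure from this chapter).
FLICKER_THRESHOLD = 48

def interruptions_per_second(fps: float, shutter_blades: int) -> float:
    return fps * shutter_blades

for fps, blades in [(16, 3), (16, 2), (24, 2)]:
    rate = interruptions_per_second(fps, blades)
    verdict = "acceptable" if rate >= FLICKER_THRESHOLD else "visible flicker"
    print(f"{fps}fps with a {blades}-blade shutter: {rate:.0f} per second -> {verdict}")
```

On this arithmetic a three-blade shutter at 16fps and a two-blade shutter at 24fps produce the same interruption rate, while a two-blade shutter at 16fps falls well below the threshold - which is exactly why dropping a blade without raising the film speed introduced the flicker described above.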
As the perceived accuracy of reproduction did not depend upon standardisation until sound came along - as an engineer quoted in the discussion of format standardisation in chapter two noted, shooting and projection speeds 'can vary over wide limits without apparent falsity' - introducing it was not believed to offer any economic benefit. Indeed, not only was there no standardised speed but both producers and exhibitors exploited the leeway this offered within their own industrial practices. In the studio, for example, a camera would deliberately be 'undercranked' (operated at a lower speed than was intended for projection) in order to heighten special effects or stunt scenes. In other cases, the shooting speed was dictated by technical considerations:

... the cameraman will, under adverse light conditions, use as large a lens opening [aperture] as is practicable and slow down as much as he can, in order to obtain sufficient exposure. Conversely, when the light is strong, the tendency is to speed up.21

Although the major studios usually supplied information to cinemas making it clear what speed a feature was supposed to be projected at, this was often ignored, especially in the cheaper 'fleapit' houses, where films would be shown as fast as the manager thought he could get away with, in order to maximise the number of screenings that could be fitted into a day. 'Picture racing', as it was termed, was said to be a significant problem by the mid-1910s: 'an evil existing mainly in cheap, poorly-run theatres, but which once in a while pokes its sinisterly rapid head among the seats that retail at a quarter or half a dollar'.22 One projectionist gave this example:

I remember running 1,000 feet in 12 minutes in the old days of hand-cranking at the 8 o'clock show, and in the afternoon I used to project the same reel so slow that it took Maurice Costello ages to cross the set. Those were my manager's orders.23

Fig. 5.4 A British Kalee 11 model projector, typical of those in use during the 1920s. Note the manual cranking handle linked to the lower sprocket, and the speed indicator above the gate. Picture courtesy of BFI Stills, Posters and Designs.

Around the late 1910s to early 1920s, therefore, it became apparent that a significant increase in the routine cinema projection speeds in use was becoming necessary. It did not help that studio practices appeared to be determined by an ad hoc combination of achieving certain artistic effects within specific technical limitations, nor did the fact that exhibitors appeared to be deciding projection speeds according to local economic factors. Nevertheless, by 1918, a prominent Hollywood projection engineer was calling for an increase:
Our present rate is too slow with present powerful illumination and semi-reflective screens. It is not sufficiently high to eliminate flicker under those conditions, especially if the local shutter conditions be bad. Seventy [feet per minute, i.e. 18.6fps] will, on the other hand, place no unduly heavy burden on the film itself, or upon projection machinery, and will eliminate flicker in all but the very worst cases.24

Implementing this suggestion was not as simple as instructing projectionists to crank a little faster, however. It was soon discovered that the mechanical tolerances of most projector designs then in widespread use could not withstand a speed increase significantly in excess of 16fps without affecting picture stability and causing excessive wear.25 Fears were also expressed that the increased mechanical stress caused by higher speed could exacerbate the risk of film breaking, and consequently catching fire.26 Nevertheless, projection speeds did gradually increase throughout the decade, prompted by the refinement of mechanism designs featuring electric motors and two-bladed shutters, and which were specifically designed for use with high-current carbon arc lamps.

Sound and consolidation: 1926-48

The standardisation of identical shooting and projection speeds was the first significant effect of the conversion to sound to be felt. Whereas the reproduction of movement can, in some cases, vary over quite wide limits without the effect being perceived by the untrained eye, the reproduction of analogue audio cannot, for the simple reason that varying the speed of playback of a recording also varies its pitch. In September 1927 the SMPE's Standards and Nomenclature Committee undertook a fact-finding exercise in order to establish what speeds the emerging sound systems were using. The two which were entering commercial use (Vitaphone and Movietone) both used 24fps. The RCA variable area system, still in development at that point, used 22fps, while De Forest Phonofilms (which had virtually ceased production by that point) ran at 20fps. Accepting that trends in exhibition practice over the previous decade and decisions made by the designers of the two most successful sound systems had effectively standardised 24fps by default ('we may as well accept facts and acknowledge that the speeds were adopted in the days of our ignorance'), the committee proposed that it be enshrined as a formal standard, which took place shortly afterwards.27
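The reason the committee could not simply let each system keep its own speed is the pitch relationship noted above: running a soundtrack at anything other than the speed at which it was recorded shifts every frequency in it by the ratio of the two speeds. A minimal sketch of that relationship (the conversion into semitones is standard musical arithmetic, not something taken from this chapter):

```python
import math

def pitch_shift_semitones(recorded_fps: float, played_fps: float) -> float:
    """Pitch shift, in semitones, when sound recorded at one film speed is run at another."""
    return 12.0 * math.log2(played_fps / recorded_fps)

# A soundtrack recorded at 24fps, if run at the other speeds the 1927 survey found in use:
for playback_fps in (24, 22, 20):
    shift = pitch_shift_semitones(24, playback_fps)
    print(f"recorded at 24fps, played at {playback_fps}fps: {shift:+.1f} semitones")
```

Running a 24fps recording at 20fps, for example, drops everything in it by more than three semitones - an error no audience would miss - whereas the same variation in a silent picture would barely have been noticed.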
In the first phase of the conversion to sound, the equipment for playback in cinemas was designed and manufactured by the same companies which produced the studio recording equipment. However, when it became apparent that a full-scale conversion was taking place towards the end of the decade, the supply and installation of sound equipment evolved into a service sector along similar lines to that of projector manufacture. Many of the key components of the sound reproduction technology were also used in a variety of other electronic products (such as radios), thereby further encouraging the trend. Although both RCA and Western Electric supplied and serviced optical sound reproducers, amplifiers and loudspeakers across North America and Europe throughout the following four decades, a wide range of other manufacturers designed equipment specifically for cinema use. As early as 1929, a training manual for projectionists and engineers, written specifically as a guide to operating and maintaining the new equipment, provided a detailed technical description of over twenty product lines of cinema sound equipment, and by the middle of the 1930s it was estimated that over 300 companies were selling equipment in the United States alone.28

Fig. 5.5 The operating procedure adopted in the projection booths of a chain of cinemas on the West Coast of America:29

Fox Movietone and Vitaphone Installations
1. Place the film in the upper magazine, emulsion side toward light.
2. Be certain that the main blade of the shutter is up in position and that the intermittent has just completed one full movement.
3. Thread the mechanism in the usual manner. Be sure that the STARTING MARK on the film is in frame at the aperture.
4. Make the loop between the intermittent and the lower sprocket, so that the film rests against the index finger held across the opening of the lower film shield. It is important that this loop be of proper length so that the film between aperture and the sound gate will measure 14½ inches.
5. Thread the film to the left lower idler under the lower sprocket. Draw tight to the sound gate sprocket. Then raise the film two holes before closing the idlers.
6. For film-recorded sound pictures, it is important that film be perfectly centred on the slit in the sound gate and that the gate be tightly closed.
7. Select the disk corresponding to the number of the film. Place the disk on the turntable and clean it with the especially provided record cleaner.
8. Select a perfect needle. Insert it in the reproducer securely and place it exactly on the starting mark indicated on the disk.
9. Check all loops, idlers, and sprockets.
10. Make sure that the number on the disk and the film correspond.
11. Be certain that the starting mark on the film is in perfect frame and that the needle is fastened securely, tracks correctly, and is placed exactly on the starting mark indicated on the disk.
12. Turn down the record the required number of full turns indicated on the cue sheet, making sure that the film and the needle are tracking properly during this operation.
13. Strike the arc and start the motor on the cue. When all is up to speed, raise the dowser and on the proper cue bring the fader up to the required mark as per cue sheet.
14. Give the second projectionist the cue to strike the arc on the second machine, and stand by for change-over.
15. Stand by for the cue indicating the end of the record. Make the change-over on the fader to the second projector at the cue.

The conversion also necessitated an additional range of technical skills on the part of projectionists. During the silent period, the job had consisted primarily of film handling and operating mechanical equipment. With the arrival of sound, a substantial knowledge of electronics was also needed. Initially, the projection of sound films also called for a not inconsiderable amount of manual dexterity. Sound on disc took longer to disappear for cinema playback than it did as a primary production format, for the simple reason that a significant number of venues worldwide had invested in disc-based reproduction systems, but were not equipped for optical sound-on-film reproduction. For several years after the system's inflexibility and lower sound quality had forced it out of production use, the main Hollywood and European studios had to operate a 'dual inventory' system for supplying release prints. A Vitaphone wax master was cut from the final mix optical sound negative for each reel, and discs pressed for supply to those cinemas which had not installed optical reproducers. There were other reasons why Vitaphone was also doomed as a reproduction format.
While the use of electronically interlocked turntable and projector motors should theoretically have ensured perfect synchronisation between the two sources, the synchronisation was lost if the record stylus jumped during playback or if film prints were not repaired properly following any damage which resulted in lost footage. Technical journals and trade papers from the period give the impression that these were both significant problems, and in any case the records themselves had a much shorter lifetime than the film prints, thereby adding to the costs of continuing to support the format. Vitaphone had to all intents and purposes disappeared from the US by the end of 1931, although the distribution of discs continued in Europe for a little longer: in Britain, for example, Warner Bros. ceased supplying them in February 1932.30

One other significant area of standardisation was established as a direct result of the conversion to sound: the 'Academy' aspect ratio of approximately 1:1.38. The reasons for this are explained in chapter two, and it will suffice to note here that it was largely precipitated by the methods used by projectionists to get round the problem of Vitaphone and sound-on-film prints having different frame dimensions. This was the first occasion on which projectionists had ever needed to deal with more than one ratio interchangeably. As a result most projectors built from the early 1930s onwards featured lenses and aperture plates (a steel plate with a hole punched out of it corresponding to the frame area on a release print, and which was positioned within a projector's gate assembly) which could easily be removed and replaced.

By about the end of 1932, therefore, the technology which could be found in a typical cinema projection booth in North America or any European country was in roughly the form it would remain for the following two decades. As was noted in the previous chapter, the conversion to sound had necessitated an unprecedented level of investment within the exhibition industry, and it is interesting to note that almost all the significant developments and advances in film-related technology which took place during the late 1930s and 1940s were either not rolled out or could be delivered without any significant cost to exhibitors. Refinements to the RCA and Western Electric optical sound systems during the 1930s did not affect the ability of release prints to be played back on existing equipment. Although the experimental use of stereo variable area tracks began during this decade, multi-channel sound was not introduced on any significant scale until the mid-1950s, and did not become standard until the late 1980s. The brief widescreen launch of the early 1930s did result in some notable developments in projector technology, most importantly the 70mm variant of the Super Simplex designed for the Fox Grandeur process; but again, this research and development ended up in mothballs for the following two decades. Three-strip Technicolor release prints, introduced in 1935, were projected in exactly the same way as their black-and-white counterparts - in fact, many other systems which had competed with Technicolor for market share during the 1910s and 1920s (e.g. Kinemacolor and lenticular Kodacolor) fell by the wayside precisely because they depended on non-standard equipment for projection.
As the main cinema building boom in North America and Europe had taken place during the 1920s, the acoustic properties of auditoria were not a significant architectural consideration in the majority of cases. Unlike the introduction of widescreen a generation later, the conversion to sound had very little impact on the auditoria themselves, as cable runs were usually concealed in pre-existing false ceilings and loudspeakers were usually positioned beside or above the screen. Perforated screens, which allowed speakers to be placed directly behind them without any loss of sound quality, were not commercially produced until the 1970s. Acoustic panelling and fabric was used to some extent in auditoria where excessive reverberation was found to be present.31

The only other major development in film exhibition during this period was the introduction of 16mm film soundtracks in 1933, which marked the start of this format's use as the primary 'non-theatrical' exhibition medium for the following half-century. Portable projectors which used incandescent light (i.e. generated by a filament inside a sealed glass tube, similar to a domestic light bulb) and with a self-contained optical sound reader and amplifier began to appear, and were used extensively in schools, businesses, community groups and even some wealthy homes: in short, anywhere films needed to be shown outside a purpose-built cinema. As 16mm was a 'safety only' gauge from the outset (see chapter two), it represented the ideal medium for such screenings. Although 35mm projectors designed for non-cinema use had previously been marketed on a limited scale (a 1931 research report on the use of films in schools described a 'portable' 35mm sound film projection kit occupying five trunks!),32 their use had proven logistically impossible in the majority of situations, not least because the volume of 35mm safety stock manufactured before the 1948-50 conversion to triacetate was negligible.

Adjusting to industrial change: 1948-76

The three decades following the end of World War Two saw a number of wide-ranging changes in the technology of film exhibition, ones which addressed and reflected the changing social and economic demands of cinemagoing. During the 1930s and 1940s, cinema buildings consisted mainly of single auditoria seating a thousand customers or more. They were usually located in town and city centres, and reached on foot or by public transport. By the 1950s, major changes were underway. In America and Europe, the working-class generation who had been in their teens and twenties before the war, who had tended to live close to urban centres and who constituted the bulk of the film industry's customer base, started to move out to newly-built suburbs and start families. The pattern of leisure activities began to change, prompted not least by the growth of television (though many social historians believe that this effect has tended to be overstated), which is discussed in the following chapter. The 'classical' model of film industry vertical integration began to fall apart due to a combination of political change (in particular the 'Paramount case' of 1948, in which the US government successfully argued that the practice of studios, distribution infrastructure and cinemas being owned by the same company was unfair and monopolistic) and reduced consumer demand for its output.
The technological model of film exhibition which had been stabilised by the mid-1930s, therefore, was no longer sufficient to safeguard the industry's long-term economic future. As a result, two trends of technological change began to emerge and, eventually, to converge. The first was an attempt to reinvent the prestigious city centre venue in a way which foregrounded hitherto unmarketed image and sound technologies, in a deliberate form of product differentiation both from television and from the cinema of the previous decade. This was represented principally by the commercial rollout of widescreen and stereo sound in the mid-1950s. The second was the introduction of technologies which reduced the running costs of cinemas and increased their operating efficiency. Chief among them were the introduction of safety film, xenon illumination and 'long play' film transport systems, which enabled existing cinema buildings to be divided into multiple smaller auditoria without significant extra running costs, and eventually the multiplex boom of the 1980s.

As will be recalled from chapter one, safety film had been around in one form or another since the mid-1900s, and had been mass-produced for use in amateur film formats since at least 1923, when the 16mm gauge was launched. The discovery of 'high acetyl' cellulose triacetate in 1948 eventually yielded a product which offered mechanical and tensile strength similar to that of nitrate, and which could be manufactured without significant extra cost. By film industry standards the ensuing conversion took place very quickly: Eastman Kodak ceased to produce nitrate in February 1950, and the base had effectively disappeared from release print circulation within a couple of years. This had major implications for projection practice, in that the extensive and expensive fire safety precautions needed within projection booths and equipment used in handling nitrate could potentially be abolished, including the use of separate 'rewind rooms', fire extinguishers built into projectors, costly fire-retardant fabric within auditoria and - most significantly - the level of projection booth staffing needed to comply with nitrate handling regulations in most countries. As with the obsolescence of Vitaphone, the advantages of safety film were felt last of all in the cinema. The information pamphlet on safety film reproduced as fig. 1.6 on page 21 advises that the safety precautions associated with nitrate must not be relaxed until 'every last foot' was removed from routine circulation, a process which was not complete until the mid-1950s.

The next major developments were widescreen and stereo sound. Despite two distinctly negative precedents - Edison's Motion Picture Patents Company in the 1910s, and attempts to package widescreen with the conversion to sound in the early 1930s - the two were initially sold as a package. The technical principles behind three of the four widescreen systems launched during the 1950s (Cinerama, CinemaScope, VistaVision and Todd-AO) had been established at around the time of the conversion to sound, but attempts at commercial introduction proved unsuccessful. The sound systems that were designed to go with their widescreen counterparts were, however, based on a fundamentally new technology: magnetic recording.
It would be another two decades before Alan Blumlein's work on stereo optical sound in the 1930s would be developed to the point of commercial rollout; in the meantime magnetic sound, which had entered studio use as a production medium soon after the first tape recorders had been liberated from Nazi Germany, was also being seen as the future of sound playback in the cinema. Leslie Knopp, technical advisor to the British Cinema Exhibitors' Association, spoke for many:

It is now recognised that magnetically recorded sound is superior in quality to the optical soundtrack and I think it is the view held by the majority of technicians that the future system of sound recording and reproduction will be by means of the magnetic track or tracks.33

In particular, magnetic tracks were seen as offering two distinct advantages. Firstly, there was the perceived quality of the sound itself, as even the first generation of magnetic sound technology used in the West was capable of recording and reproducing a greater frequency range than any optical technology then in use. Secondly, this method could easily be adapted for mixing and playing back multiple 'channels' synchronised to the same picture (i.e. stereo sound), without the costly modification of existing reproducers. Therefore, all three of the widescreen systems which incorporated a stereo sound process used magnetic reproduction - a separate strip of magnetically coated 35mm film carrying six channels and synchronised electronically in the case of Cinerama, and a number of oxide 'stripes' on the release prints in both the CinemaScope and Todd-AO formats. All three combinations achieved notable success in the prestigious first-run city centre market, where admission prices were high and the technical quality of the projection and sound was an explicit selling point. Indeed these systems were consciously marketed as distinct theatrical events which were intended to represent a break from the standard, mass-produced technical medium which cinema had operated as during the 1930s and 1940s. To start with, only a relatively small number of auditoria were fully equipped with the new technologies, and in the case of the inaugural Todd-AO release of Oklahoma!, seats had to be booked weeks in advance and customers were required to attend the screenings wearing full evening dress.

Both widescreen and stereo eventually made the transition from a novelty to a technological standard. The ways in which they did so upheld existing precedents for the rollout of new technologies, and re-established them when it came to subsequent change. Whereas the introduction of sound had necessitated investment in new equipment, widescreen also required significant architectural modifications to existing cinema buildings. It is therefore hardly surprising to note that the systems which needed the highest equipment investment and structural alteration of auditoria achieved the lowest market saturation. Cinerama, with its very large, deeply curved screen and three separate projection booths, was arguably the least successful. The number of auditoria was believed never to have exceeded thirty, and by 1959 (seven years after the format's launch), only 22 Cinerama venues were operating worldwide.34 The Todd-AO principle of using 70mm film to enhance the picture definition on a large screen and to carry multiple, high-quality magnetic soundtracks survived, and to the present day remains a niche format used in a small number of venues.
Unlike the modified Simplex projectors used for the Fox Grandeur system, the new generation of 70mm machines, starting with the Philips DP-70 (sold as the Norelco AA in North America) which was used to launch Todd-AO (of which more below), were backwards compatible: that is, they could also be used to project conventional 35mm film. CinemaScope is perhaps the most interesting case in point, as the way it produced a wide picture (the anamorphic lens) and the precedent it established of an aspect ratio significantly greater than twice the width to the height did become widespread technology within two decades of its launch, accounting for approximately a third of Hollywood features by the mid-1970s. But this was not in the form originally envisaged. Its promoters, Twentieth Century Fox (TCF), soon discovered that many exhibitors were not prepared to invest in stereo sound at the same time as having the stages of their auditoria rebuilt and buying the new projection lenses, just as exhibitors had been disinclined to invest in widescreen in the immediate aftermath of the conversion to sound. Indeed, there were many recorded instances of cinemas being wired up to mix all four channels of the CinemaScope magnetic prints into a single mono one for playback, in order to avoid having to buy additional amplifiers and speakers. However, unlike the companies behind Cinerama and Todd-AO, TCF and Paramount adapted their systems to meet customer demand rather than dogmatically continuing to promote them as roadshow-only formats. TCF produced a modified version of CinemaScope which was compatible with existing mono sound systems,35 while Paramount developed a means of producing release prints from VistaVision originals which were fully compatible with existing cinema projection equipment but which offered significantly higher image quality. In both cases, the only significant item of capital equipment which the cinema needed to purchase was an extra set of projection lenses. Furthermore, it soon proved possible to adapt smaller auditoria for widescreen projection with only minimal rebuilding, although the resulting image was frequently a lot smaller than its Academy ratio predecessor.

Fig. 5.6 A small cinema in Cardiff, Wales, following its conversion for widescreen in 1954. The original stage area has been left largely unchanged, with wider ratios obtained by lowering a vertical masking system (author's collection).

Therefore, through an emphasis on backwards compatibility and on reducing the level of investment needed at the exhibition end, widescreen successfully made the transition from a high-end roadshow format to an industrial norm, while preserving one of the key reasons for introducing it in the first place - using technology to market cinema as a product distinct from television. Stereo sound would eventually follow it, but not before a system became available which also fulfilled these criteria.

In notable contrast to the generally American-led development of film-based technologies, the next two important products to have a major influence on cinema exhibition practice both had their origins in Europe. When it was introduced in the 1920s, the carbon arc lamp had offered significant advantages over its chemically-lit predecessor: it was safer, cheaper, easier to operate and produced light at a more even colour temperature. There were, however, two key drawbacks.
As the flame which produces the light in a carbon arc is caused by using electricity to burn the substance of the positive electrode, eventually the carbon rod which forms that electrode will burn away completely and need replacing. During the burning process itself, the distance between the two electrodes needs to be constantly adjusted, or 'trimmed', to prevent the gap between them from becoming so great that the flame goes out. Although some mechanisms were devised for automating this process towards the end of the period when carbon arcs were in everyday use, the bottom line was that these lamps required frequent attention from the projectionist. Furthermore, most designs of lamphouse took carbon rods which had a maximum burning time of approximately 30 minutes before they needed to be replaced, thereby making it impossible to show anything approaching an entire feature film using only one projector. During the nitrate period this did not matter, as fire safety considerations dictated that 2,000 feet (about 20 minutes) was the maximum length of film that it was considered safe to risk igniting at any one time. But when the conversion to acetate removed this restriction, cinema owners began to think about the reduction in staff costs and the possibility of constructing multiple auditoria within a single building which could potentially be enabled by technology that projected an entire programme unattended. With the risk of fire greatly diminished by the conversion from nitrate to acetate (cellulose triacetate is said to have burning characteristics similar to those of paper), there was no theoretical reason why a single reel of up to several hours (or 10,000 feet plus) could not be assembled, if equipment could be devised to handle it. In reality, two problems needed to be overcome: the limited burning time of a carbon arc lamp, and the fact that the vast majority of projectors on the market still had a maximum spool capacity of only 2,000 feet.

The limited continuous burning time of a carbon arc was overcome with the introduction of xenon arc illumination in the 1950s, a technology which succeeded in combining the high light intensity and colour temperature of a carbon arc with the flexibility and ease of use of incandescent light. Incandescent illumination - in which a thin wire, or filament, is enclosed within a glass bulb and produces light when electricity is passed through it - had been in widespread use for domestic lighting since the late nineteenth century. This technology was unsuitable for use in cinema film projection because of the limited amount of light which each bulb could reliably generate, and its relatively low colour temperature. In crude terms, incandescent bulbs produce a 'yellowish' light which would distort the appearance of the silver image or colour dyes in projection. From the 1920s onwards it was found that performance and colour temperature could be improved by filling the sealed bulbs with gas (initially inert gases such as argon and nitrogen, later the halogen fillings used in the tungsten-halogen lamps still in common use), and this enabled the use of incandescent light in the studio and in projectors lighting smaller screens (such as slide projectors and small-gauge projectors for home and small hall use). But so far it has proven impossible to design an incandescent bulb which can generate enough light to illuminate a full-sized cinema screen. The xenon arc lamp provided a compromise which overcame this problem.
Like a carbon arc lamp, it generates light by producing a spark across two electrodes, which in turn causes a gas to glow. But unlike in a carbon arc, that gas is not created by burning away a carbon rod. Instead, the two alloy electrodes are enclosed within a quartz bulb which is filled with xenon gas under high pressure. The arc is struck by momentarily applying a very high voltage (usually several thousand volts) across the electrodes, which overcomes the resistance of the gap between them and establishes the arc. The electricity supply then changes to a low-voltage, high-current one, which maintains it. The heat generated inside the bulb causes the xenon gas to glow, producing light of a similar intensity and colour temperature to that of a carbon arc. The lamp will then continue to operate for as long as is needed, until the power supply is switched off. The earliest models of xenon lamp had service lives of 100-200 hours, while those on sale at the time of writing are typically guaranteed for between 1,500 and 2,000 hours. Apart from the use of protective clothing when fitting and removing them (the pressure of the xenon gas inside the bulb creates a small risk of explosion), no special maintenance or safety precautions are needed. Originally developed in Germany, the first xenon arc lamps for projector illumination were demonstrated by Zeiss Ikon at the Photokina trade show in Cologne in 1954. By the end of the decade they were reportedly in use in 50 per cent of German screens.36 Interviewed in 1957, the chief engineer of a British projector manufacturing firm noted that 'the new xenon light sources now under development will make the light on the screen independent of the operator',37 acknowledging that one of the key advantages of this new technology would be that the cinema industry could potentially use it to cut staff overheads.

But although there was now a light source available which removed the time restriction of a carbon arc, the limited spool capacity of projectors still had to be overcome. In the wake of the conversion to sound, 2,000 feet had been established as the standard reel length for release prints, meaning that two projectors set up for changeover operation were needed in order to present a complete feature film without interruption, and that a projectionist needed to be present throughout the screening. During the nitrate period this was not seen as being in any way restrictive, as the safety regulations in most countries required a higher level of staffing than was strictly needed to present the film anyway. With the introduction of safety film and xenon illumination this was no longer the case, and exhibitors therefore started looking for ways of projecting entire programmes using a single machine. The first systems to enter the market, from the mid-1950s onwards, consisted mainly of high-capacity spools of up to 12,000 feet, which were mounted on heavy-duty spindles. These were either built into an enlarged pedestal underneath the projector mechanism and lamphouse, or positioned vertically on a separate structure known as a tower, situated behind or to one side of the projector. These spool carriers usually contained separate motors to power the take-up and rewind.
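The footage figures quoted above can be related to running times using a rule of thumb which is standard 35mm practice rather than something stated in the text: the film carries 16 frames to the foot, so at 24 frames per second it passes through the projector at 90 feet per minute. A minimal back-of-envelope sketch:

# Back-of-envelope conversion between 35mm footage and running time.
# Standard figures (not from the source text): 16 frames per foot of 35mm film,
# projected at 24 frames per second, i.e. 90 feet per minute.
FRAMES_PER_FOOT = 16
FRAMES_PER_SECOND = 24
FEET_PER_MINUTE = FRAMES_PER_SECOND * 60 / FRAMES_PER_FOOT   # = 90.0

def running_time_minutes(feet):
    """Approximate running time of a length of 35mm film at sound speed."""
    return feet / FEET_PER_MINUTE

print(running_time_minutes(2_000))   # ~22 min: a conventional reel ('about 20 minutes')
print(running_time_minutes(10_000))  # ~111 min: a typical feature
print(running_time_minutes(12_000))  # ~133 min: the largest long-play tower spools

On these figures a 12,000-foot spool comfortably holds a feature plus its supporting programme, which is what made single-projector, single-operator presentation plausible once the lamp problem had been solved.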
Although release prints were still supplied to cinemas in 2,000-foot reels, the projectionist would 'make up', or assemble, a complete programme onto a single large spool, consisting of the feature film and any other material which was to be shown (such as advertisements or trailers). Using a xenon arc lamp, this programme could then be shown using a single projector, completely unattended. The film-carrying system which would eventually make it economically viable to construct multiple auditoria within a single cinema building is known as the 'platter' (or sometimes the 'non-rewind' or 'cake-stand'). The first prototype was developed in 1964 as the result of a collaboration between Willi Burth, a cinema owner in southern Germany, and H. P. Zoller, an engineer, and was manufactured commercially from 1968.38 The device consisted of two or more horizontal turntables which held an assembled programme of up to three hours in length. Film is drawn from the centre of the roll through a regulating device as the feeding plate is rotated by an electric motor, and after passing through the projector is wound around a central support, or 'collar', on the take-up plate. At the end of a show the collar is pulled out of the taken-up film and replaced with a feeder, with the result that the film can be shown again without the need to rewind. The time taken to rethread a projector and platter between screenings can be as little as two to three minutes, and, as with any other device which enables an entire feature to be screened using a single projector, such a system can be left unattended during each performance.

Fig. 5.6 A film transport platter system, as installed in a modern cinema. The film is paid out from the feeder on the top plate, and taken up around the 'collar' at the bottom. The centre plate can be used to assemble and take down programmes while the film is running.

The final piece in the jigsaw of technology which would reshape cinema exhibition in the 1970s and 1980s was the introduction of devices which automated a number of projection functions that had previously been under the control of human beings. The first electromechanical automation systems were developed in the mid-1960s, and consisted of a control unit and a reader which was either attached to the projector itself or placed elsewhere in the film path. The reader detected 'cues' which were placed on the film surface, usually in the form of reflective adhesive labels. In response to these cues the reader would generate an electrical signal which caused the control unit to carry out functions such as lowering the auditorium lighting, opening the screen curtains or adjusting the sound volume, and some systems even allowed performances to be started on a timer.
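The cue-and-response logic just described is, in essence, a simple sequential controller: each detected label triggers the next action in a list fixed when the programme is made up. The following sketch is purely hypothetical and is not modelled on any particular manufacturer's automation unit; the cue names and house functions are illustrative assumptions.

# Hypothetical sketch of a cue-driven cinema automation unit.
class AutomationUnit:
    def __init__(self, programme):
        # The order of actions is fixed when the programme is made up;
        # each detected cue simply triggers the next one in the list.
        self.programme = programme
        self.position = 0

    def on_cue_detected(self):
        """Called whenever the reader senses a reflective label on the film."""
        action = self.programme[self.position]
        self.position = (self.position + 1) % len(self.programme)
        return action  # a real unit would switch relays for lights, curtains, sound

unit = AutomationUnit([
    "dim_house_lights",    # start of adverts and trailers
    "open_side_masking",   # feature titles: change to the wider screen ratio
    "raise_sound_level",   # feature soundtrack
    "end_of_show",         # lights up, curtains closed, projector motor off
])
print(unit.on_cue_detected())  # -> dim_house_lights
print(unit.on_cue_detected())  # -> open_side_masking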
In less than two decades, therefore, cinema exhibition had been transformed from a dangerous, highly skilled, time-specific and labour-intensive process into one which could be managed according to economies of scale and carried out with far lower staffing levels.

The multiplex and beyond: 1976-2005

The evolution in cinema projection technology that took place between the end of World War Two and the mid-1970s equipped it to meet the demands of a very different commercial reality from that which had existed during the 'classical' period following the conversion to sound. The combination of safety film, xenon arc illumination, the platter and automated projection had an impact on the cinema exhibition industry, and on the culture of cinemagoing in general, which is difficult to overestimate. In the 1930s, cinema provided not just the only routine access to moving images on offer but, in the developed world, represented the largest individual sector of the leisure industry. Its technical standards were shaped by a combination of commercial forces within the industry itself and the technologies on offer through other media. Cinemas offered electrically recorded sound when the technology became available in a form which integrated with the film industry's existing norms, and because it had also become available to consumers in other ways, principally radio and records. Widescreen was rolled out in the 1950s, when moving images in the Academy ratio became available through television and leisure time was increasingly being spent in a domestic setting. During the following two decades it gradually made the transition from a roadshow technology to a routine fixture in every high-street cinema, as that product differentiation was found to be economically necessary. Three new cinema exhibition technologies emerged during the final quarter of the twentieth century, and, like all their predecessors, they became established through the same process of market forces.

The multiplex cinema - that is, a building which, instead of containing one large auditorium seating 1,000 or more, was subdivided into several smaller ones, some housing as few as 50 - was essentially a 1980s phenomenon, but one which had its origins in the conversion of cinema buildings in the 1960s and 1970s. This period was not an encouraging time for the cinema business, as is graphically illustrated by the statistics.39 British cinema admissions peaked in 1946 with 1,635 million ticket sales, declined gradually to 1,100 million admissions in 1956 and then began a relentless year-on-year fall, reaching a low of 54 million in 1984. Over two-thirds of the UK's licensed cinema premises closed during this 28-year period. As has been noted elsewhere in this book, a widespread perception developed within the exhibition sector and beyond that television and changing leisure patterns were largely responsible for the industry's virtual collapse. One of the strategies used by the industry to fight back was the combination of new technologies and an attempt to reinvent the social context of cinemagoing, as embodied by phenomena such as Cinerama and Todd-AO (and subsequently, to a lesser extent, Imax). The other was to replace spectacle with consumer choice. This process began in the late 1960s, when the operators of town- and city-centre sites built during the cinema boom of the 1920s and 1930s began to 'twin' or 'triple' them. This usually involved enclosing an auditorium's circle or balcony to form a (relatively) large cinema with a capacity of 500 seats or so, and building a partition wall down the middle of the main floor space below to form two smaller ones. The use of xenon-lit projectors fed by platters, and usually operated under the control of a cinema automation unit, enabled the reconfigured building to be operated at no significant extra cost compared with its predecessor. Although the screens were a lot smaller, programming was more flexible.
This had become a necessity, because customers were increasingly attending cinemas to see a specific film rather than as a default leisure activity.40 As the results of this restructuring started to filter through, the decline in cinema attendance slowed markedly through the 1970s, before plummeting by almost 50 per cent between 1980 and 1984. Interestingly, this period also saw an equivalent rise in the number of prerecorded videocassette rentals of feature films. During the period 1984 to 1994 this trend was reversed, largely thanks to the arrival of the multiplex cinema, which extended the principle of consumer choice to all aspects of the exhibition process. This new form of cinema, which began in the United States in the early 1980s before being rolled out to Europe and beyond in the middle of the decade, consisted of a large, aircraft-hangar type building, usually situated on a motorway or city ring road and with sufficient car parking space for all of its customers. The building itself was divided into a large number of auditoria (as many as 32 in the case of Britain's largest multiplex, at the time of writing), with a gallery along its centre forming the projection booth. Computer-controlled automation meant that, whereas in the 1930s it was not uncommon for up to sixteen projectionists to be needed to maintain a picture on one screen, one projectionist could now maintain sixteen. The bottom line was that customers wanted the flexibility to see a specific film at a time which was convenient to them, and the multiplex concept proved very efficient at delivering that. The extent to which the growth of the multiplex enabled a revival of the film exhibition industry is striking. During that ten-year period, ticket sales more than doubled, from 54 million to 123.5 million. In 1984 the UK contained 660 sites and 1,271 screens (i.e. just under two auditoria in each building), but a decade later this had increased to 734 buildings and 1,969 screens (just under three).

The rising number of screens caused by the multiplex phenomenon also had a profound effect on laboratory practice, owing to the vastly increased number of prints needed for each run. In 1945 it was estimated that an average of 40 prints of a newly-released Hollywood 'A' feature were needed for UK distribution;41 by the late 1990s runs of over a thousand prints for a 'blockbuster' title were not uncommon. High-speed contact printers and developing machines were designed for the purpose, with some laboratories offering the capacity to print and process up to a million feet per day. The changed film handling environment in the multiplex projection booth also resulted in the almost complete conversion from triacetate to polyester stock for release printing in the early to mid-1990s, largely in the wake of campaigning by the National Association of Theatre Owners (see chapter one), as it was more resilient and repelled the dust which would otherwise tend to accumulate on a print sitting for long periods on an exposed platter deck.
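A quick arithmetic check of the exhibition and laboratory figures quoted above, using only the numbers given in the text plus one assumed value (the typical length of a 35mm feature print, which the source does not state):

# Sanity check on the UK exhibition statistics and laboratory workload above.
sites_1984, screens_1984 = 660, 1_271
sites_1994, screens_1994 = 734, 1_969
admissions_1984, admissions_1994 = 54.0, 123.5    # millions of tickets sold

print(screens_1984 / sites_1984)          # ~1.9 screens per site ('just under two')
print(screens_1994 / sites_1994)          # ~2.7 screens per site ('just under three')
print(admissions_1994 / admissions_1984)  # ~2.3: attendance more than doubled

FEET_PER_PRINT = 10_000                   # assumption: a feature a little under two hours
blockbuster_run_feet = 1_000 * FEET_PER_PRINT
print(blockbuster_run_feet / 1_000_000)   # ~10 days' output for a lab printing 1m ft/day

On that (assumed) print length, a thousand-print blockbuster run represents something approaching ten million feet of release stock, which puts the million-feet-per-day laboratory capacity into perspective.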
The other two developments during this period were both related to sound. The stereo film systems launched during the 1950s had reached only a limited audience, due mainly to exhibitors' rejection of the industry's attempt to package them with widescreen. But as with electrical recording in the 1920s, consumers gradually started to experience stereo through other media: firstly in the form of long-playing stereo records (first sold in America in 1958) and later through VHF stereo radio broadcasting. Given that as late as the early 1970s over 90 per cent of cinema auditoria were still only equipped to play the single-channel, limited-range 'Academy curve' optical mono prints which had been the industry standard since the early 1930s, the market was ready for an upgrade. Once again, such an upgrade materialised and was rolled out when it became available in a form which was compatible with pre-existing industrial practice. It was developed by Ray Dolby as a spin-off from his noise reduction technologies, and enabled four channels of soundtrack to be recorded onto two Blumlein-style variable area optical tracks, which together occupied the same area of the film as a conventional mono track. A technique termed 'phase shifting' by Dolby enabled each variable area track to carry two discrete channels, which were decoded in a sound processor linked to the projector's optical pickup cell. The crucial advantage of this system over the magnetic stereo methods which had preceded it was that it was totally backwards compatible: a Dolby-encoded print could still be played in mono in cinemas which did not have the new processors. This avoided the problem of 'dual inventory' print distribution (i.e. a situation in which some sorts of print are incompatible with the equipment in some cinemas) which had ultimately sunk lenticular colour, Vitaphone, VistaVision as a release medium and CinemaScope in its original form, among other systems. Even if cinemas were slow to purchase the new sound equipment, this would not prevent the rollout of Dolby prints, thereby increasing the range of titles available and making the system more attractive to potential purchasers. The original Dolby system, known as 'A type', was launched in 1975 with the release of Lisztomania (1975, dir. Ken Russell). The 1980s multiplexes nearly all installed stereo playback equipment from the start (which, along with the computer-designed sightlines and acoustic properties of the auditoria, was a major selling point), and by the end of the decade stereo sound in one form or another had become standard equipment in most cinema auditoria in Europe and North America. An improved version of the Dolby system, known as 'Spectral Recording', was introduced in 1988 which, while keeping the same four channels in each mix (left, centre, right and surround), increased the dynamic range available from each channel. Again, the prints were totally backwards compatible, both with mono systems and with those which had A-type decoders.
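The 'phase shifting' described above is a form of 4:2:4 matrix encoding. The coefficients are not given in the text; the widely documented matrix of this general type combines the left (L), centre (C), right (R) and surround (S) channels into the two printed tracks, L_t and R_t, roughly as follows (a sketch only, with j denoting a 90-degree phase shift and 1/\sqrt{2} a 3 dB attenuation):

\[
L_t = L + \tfrac{1}{\sqrt{2}}\,C + \tfrac{1}{\sqrt{2}}\,jS,
\qquad
R_t = R + \tfrac{1}{\sqrt{2}}\,C - \tfrac{1}{\sqrt{2}}\,jS .
\]

Broadly speaking, the decoder recovers the centre channel from the sum of the two tracks and the surround from their difference, while a mono reproducer, which effectively sums the full track width, hears L + R + \sqrt{2}\,C with the phase-shifted surround components cancelling out; this is why such prints remain playable on unmodified equipment.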
Exactly the same system of precedents applied to the mass-marketing of digital sound for cinema playback in 1992. Digital audio had already been available to consumers in the form of the compact disc for a decade (and some LPs had been advertised as 'digitally recorded' or 'digitally mastered' as far back as the late 1970s), and the three competing playback systems which were all introduced to the cinema market in that year offered ease of installation and backwards compatibility in the print format as key selling points. Eastman Kodak had made an abortive attempt to bring digital sound to the market two years earlier with its 'Cinema Digital Sound' (CDS) system, in which the conventional optical track on a 35mm print was replaced with photographically recorded digital data.42 But with the analogue track removed, the prints could only be shown in CDS-equipped venues, and CDS was ultimately undermined by two factors: the need for a dual print inventory, and the absence of any analogue track, which meant that there was no backup in the event of the digital sound reproduction being lost. The three systems introduced in 1992 were Dolby Digital (originally called SR-D), in which optical data 'blocks' are printed between the perforations on one side of the print; the Digital Theater Systems (DTS) method, in which an optical timecode on the film is used to synchronise playback from data held on one or more compact discs read from drives inside the sound processor; and Sony's 'Dynamic Digital Sound' (SDDS) system, in which data is printed between the perforations and the outer edge of the film on both sides. While Dolby and DTS encoded six channels (directional surround and a separate sub-bass channel were added), Sony's method had the capacity to carry up to eight. All three systems initially established a market foothold. DTS is generally perceived to offer the highest quality sound, on account of the lower compression ratio (compression in digital audio and video is discussed in chapter eight) enabled by the decision to store the audio data separately on an offboard carrier, though the extra cost of producing and shipping data discs along with each film print means that it has never established the same market share as Dolby. The number of SDDS film releases never matched the level of either Dolby or DTS, and this was not helped by the fact that the data tracks on the prints could easily be made unreadable by even slight edge damage to the film.

Almost two decades after the commercial rollout of the first mass-marketed cinema stereo sound system, therefore, stereo has become a standard technology in nearly all cinema auditoria in the Western world. Echoing the comments of those who believed that silent films had a future in the early 1930s, its introduction has caused some critics and spectators to believe that a perceived increase in the sound levels in cinema auditoria is actually detrimental to their experience of the films being shown:

Once there was a time when the enjoyment of cinema could be compromised by a neighbour rustling his sweet packaging. During The Lord of the Rings in the seat next door someone could have been road-testing a jackhammer and you would have remained in blissful ignorance.43

Although the negativity in this statement is largely a generational issue (it was written by an author in his forties who had not visited a cinema for many years previously), it does illustrate, once again, the extent to which market forces shaped the form and combination of technologies used to deliver moving images to the public. Audiences from the 1990s onwards generally responded enthusiastically to more channels in each recording and to the higher overall sound levels at which it was reproduced, whether in the cinema, in other public spaces such as nightclubs, or through television, domestic music recordings, radio or video games. Those who came to these media with the expectations of an earlier generation of technologies, like the critics of the 1930s who wanted to bring back silent films, were often disillusioned.
Conclusion

The evolution of the technologies used for cinema exhibition, from the Kinetoscope of 1892 to the digital sound reproducers of 1992, established a pattern and then followed it. This was based on the simple principle of playing back a single recorded sequence of images and sounds before as many paying customers as possible, thereby commercially exploiting that recording to the maximum extent possible. Technologies which upgraded or, at the audience's subjective level, 'improved' the experience of that reproduction were developed, tested and commercially introduced at regular intervals during the first century of the industry's existence. But in order to do so, they had to meet certain technical and economic criteria. Given the sheer volume of hardware and infrastructure involved after cinemas had started to be built in significant numbers from the late 1900s onwards, economies of scale dictated a more cautious approach to the rollout of new technologies. They had to conform to a framework of established technical standards, and the most successful ones embodied a degree of 'backwards compatibility'. The conversion to sound involved equipment which was simply added to existing installations in the projection booth. The widescreen systems which eventually established themselves used conventional 35mm film and required little investment in equipment from the exhibitor. Dolby stereo variable area tracks could also be played in mono on non-Dolby equipment, and so on and so forth. The technology we shall consider in the next chapter extends that model even further. In the case of television and (domestic) videotape recording, the technology in use at the receiving end did not merely have to be standardised and mass-produced across tens of thousands of cinemas: the principle had to be applied across hundreds of millions of homes worldwide.

chapter six
television and video

'If Hollywood ever makes a movie about the invention of television, a suitable title might be Three Hundred Men and a Baby.'1
'TV was probably the first invention that was achieved by committee.'2
'Television might have gone the way of the airship, nothing more than an interesting footnote in history.'3
'Gentlemen, you have now invented the biggest time waster of all time. Use it well.'4

From the previous chapter it will be apparent that, during the first half of the twentieth century, the medium through which the overwhelming majority of those who experienced moving images did so was 35mm film projected in front of a mass-audience sitting inside a building specifically designed for the purpose. By the final quarter of the twentieth century, the technology used to deliver moving images as a mass medium had relocated from public buildings to private homes (the cinema still existed, but it was no longer a mass medium). As we will see later in this chapter, the technologies of film and television are closely linked - in fact, it is unlikely that television would have evolved in the way it did if film-based technologies had not represented a multi-million dollar global industry by the time the first scheduled public broadcasts began. For example, not only did film provide the cultural precedents, but for the first two decades of television's existence it was the only means available of permanently recording live broadcasts and of broadcasting material that was not produced live in front of a camera.
But there are also fundamental differences, both in the technologies themselves and in the cultural, political and economic frameworks through which they were developed, established and sold. Film records images through the use of chemical compounds that react to the presence of light, and in which the results of that reaction can then be made visible to the naked eye. Television and video recording work - like electrical audio recording - by representing a visual image as