UCL Centre for Advanced Spatial Analysis
University College London, 1-19 Torrington Place, Gower Street, London WC1E 7HB
Tel: +44 (0)20 7679 1782 | casa@ucl.ac.uk | www.casa.ucl.ac.uk

Working Papers Series, Paper 124 - Sept 07
ISSN 1467-1298

Digital Urban - The Visual City

Dr Andy Hudson-Smith

Abstract

Nothing in the city is experienced by itself, for a city's perception is the sum of its surroundings. To paraphrase Lynch (1960), at every instant there is more than we can see and hear. This is the reality of the physical city, and thus in order to replicate the visual experience of the city within digital space, the space itself must convey to the user a sense of place. This is what we term the "Visual City": a visually recognisable city built out of the digital equivalent of bricks and mortar, namely polygons, textures and, most importantly, data.

Recently there has been a revolution in the production and distribution of digital artefacts which represent the visual city. Digital city software that was once the domain of high-powered personal computers, research labs and professional packages is now in the hands of the public at large, through both the web and low-end home computing. These developments have gone hand in hand with the re-emergence of geography and geographic location as a way of tagging information within non-proprietary web-based software such as Google Maps, Google Earth, Microsoft's Virtual Earth, ESRI's ArcExplorer and NASA's World Wind, amongst others. The move towards 'digital earths' for the distribution of geographic information has, without doubt, opened up a widespread demand for the visualization of our environment, where the emphasis is now on the third dimension. While the third dimension is central to the development of the digital or visual city, it is not the only way the city can be visualized, for a number of emerging tools and 'mashups' are enabling visual data to be tagged geographically using a cornucopia of multimedia systems. We explore these social, textual, geographical and visual technologies throughout this chapter.

1. The Development of Digital Space

Digital space takes many forms. In terms of the visual city, however, we are concerned with the creation of space that allows us to generate a visual understanding of our built environment. Knowledge of space is hard-wired into us: insubstantial and invisible, space is yet somehow there and here, penetrating all around us. Space for most of us hovers between ordinary, physical existence and something given; thus it alternates in our minds between the analytical and the absolutely given (Benedikt, 1996). Our interpretation of space, and the resulting sense of location and place that it engenders, influences our perception of space in both real and digital terms.

Bell (1996) identifies three different kinds of space: visual, informational and perceptual. Visual space is, unsurprisingly, all that we can see. It is the array of objects that surround us, creating, when viewed collectively, our environment. Each of the objects in any such space has a multitude of different attributes, from variations in light and colour to reflectivity. These objects create a reality which is a fully immersive environment in Cartesian space, space that can be interrogated and explored in three dimensions. If these objects are broken down to singular levels, then each can be viewed as being made up of a combination of primitives.
Primitives in turn are a collection of graphic tokens such as points, lines and polygons, forming two-dimensional or three-dimensional arrangements, and it pays us to think of visual space as populated by these tokens (Mitchell, 1994). If these points, lines and polygons can be recreated in digital space, along with their attributes, then digital space can mimic sufficient aspects of reality, in terms of the urban dimensions necessary, to create what we have called the visual city.

Informational space can be seen as an overlay to visual space, and it is in this space that we communicate and receive information. From urban signage to oral communication, information is communicated in visual space. In terms of the visual city, information should not be viewed as a separate space but as an additional attribute or, in more prosaic terms, a new layer. Digital information takes the form of an embedding of data within digital space. This combination of informational and visual space can be seen as forming the basis of Google Earth and other digital globes. With the addition of user-friendly communication to convey such informational space, an overlap occurs with the third form of space, that of social or perceptual space.

Social space defines the user's identity and role in relation to other users in the social environment. In digital space the social dimension is increasingly important, as seen in the rise of social networks such as MySpace, Facebook and Twitter, to name but a few. Of interest is the fact that these social spaces allow either the creation of visual space in the form of multi-user, three dimensional environments such as the virtual world Second Life, or more direct mashups which combine geo-located photographs from general users, displayed and accessed through Flickr within Google Maps. These applications are key to the Visual City and we will come back to them in more detail later.

2. Creating Place and Space

In our research group, the Centre for Advanced Spatial Analysis (CASA), we have built a 3D model of Greater London which we consider represents a Visual City. The production of the model has only been made possible by the development of 3D GIS and related tools, in our case ESRI's ArcScene, 3D Studio Max and various online visualization packages, most notably Google Earth. A recurring theme in the development of such 3D city models is the way in which emerging technologies are enabling us to query, manipulate and construct our environment remotely. The Visual City can now effectively be streamed and developed over the internet, opening up a range of possibilities not only for visualization but also for displaying attributes of the population, in the form of socio-economic geographic data, agent-based models of how cities function, or even actual users engaging with the software. Indeed it is fair to say that we are at a tipping point in city-based information systems, both in the way they are used and in the way they are created.

The goal of our Virtual London project is to develop a truly virtual city which can be occupied, queried and manipulated by citizens within a collaborative environment. This development route has entailed a combination of data capture, model development and optimisation. The acquisition of suitable digital data is central to the development of Visual Cities and their use in the emerging online 3D GIS systems.
In terms of pure visualization, the production of photorealistic models of the built environment is key to the creation of visual space, yet it is a time-consuming, manual process and one that until recently was the domain of professional photogrammetry. The standard approach to producing a photogrammetric reconstruction of the city has been through the use of calibrated images and matching control points. Figure 1 illustrates the development of one of the key buildings along the north bank of the Thames, modelled using a combination of oblique photography captured from a helicopter and ground-based imagery. The model took approximately two days to produce.

Figure 1 Photogrammetric Modelling

In today's Google-led world, based on releasing free software with high levels of functionality combined with low levels of required expertise, it is now possible to reduce considerably the time taken to produce such models. Google SketchUp is a unique program that is available in both professional and freeware versions. The differences between the free and professional versions are negligible, being significant only in terms of importing and exporting data. This means that the public at large are now able to photomodel through SketchUp and produce their own sections of the city. Google SketchUp is linked directly with Google Earth, which we examine further a little later, and it is linked for a good reason: for users to develop free content.

Creating a Visual City, one that reflects the actual built form, is a huge and therefore time-consuming task. So far, the only groups that have been able to get close to representing the city visually are games companies such as Sony and, more recently, Microsoft. Games such as 'The Getaway 3' on the PlayStation 3 represent the cutting edge in city visualization. The Getaway originally appeared on the PlayStation 2 as a 3D rendition of London covering approximately 10 square miles (26 square kilometres). The team behind the model produced a wire-frame model based on a photographic survey of London and then projected the resulting textures onto the geometry. The results of such developments are impressive, but the costs are typically in the order of tens of millions of dollars, and the models are only of use for gaming. They cannot easily be ported into contexts where geographical analysis is required or where the public at large can interact with them, largely due to the way they are constructed. Therefore, to reduce cost, Google released their SketchUp software so that users at large could produce the city themselves, block by block, building by building. Of note is the latest version of SketchUp which, at the time of writing, allows users to import and calibrate their own photography, modelling directly over the imagery. Although not as accurate as traditional photogrammetry, it does allow rapid modelling and the widespread adoption of photorealistic content. Figure 2 illustrates a streetscape modelled in less than a day using SketchUp. The image is shown without textures to illustrate how architectural detail can be added to the model quickly and easily.

Figure 2 Rapid Modelling Using Google SketchUp

The release of Google SketchUp has in turn led to the development of the Google 3D Warehouse, an online repository which is directly linked with the SketchUp program. Currently there are 252 user-submitted buildings in London, ranging from landmarks to people's houses.
Buildings are uploaded to the warehouse automatically from the program, creating a quick and easy way to populate the city. This of course leads to duplication and worries about quality, ruling it out for real-world applications such as architectural impact analysis or planning application work, but it does provide a quick and effective visual insight into the city. The best of the models are selected by Google for viewing in the Community Layer of Google Earth, thus completing the cycle of the public at large creating the Visual City.

One of the additional techniques we have used extensively in our group at CASA to communicate a visual sense of the city is panoramic imagery. The use of panoramas is not a new phenomenon; indeed the first panorama was patented in 1787 (Wyeld, 2006). Panoramic visualization is not three dimensional per se, in that it consists of a series of photographs or computer-rendered views stitched together to create a seamless image. Rigg (2000) defines a panorama as an unusually wide picture that shows at least as much width-ways as the eye is capable of seeing. As such, it provides greater left-to-right views than we can actually see (i.e., it shows content behind the viewer as well as in front). It was not until 1994 and the introduction of QuickTime Virtual Reality (QTVR) for the Apple Macintosh that panoramic production became available on home computers. Software was released that allowed a series of photographs to be seamlessly stitched to form a single complete 360 x 180 degree view, and we illustrate an example panorama in Figure 3.

Figure 3 Swiss Re 360x180 Degree Panorama

Although panoramas are essentially two dimensional, they can be inserted into a three dimensional scene to provide an instant sense of location and place. The field of view of a panorama equates to the coverage of a sphere. As such, by draping a panoramic view onto a sphere and then moving the viewpoint to the sphere's centre, or nodal point, the lines in the image straighten out, providing a close replica of the human eye's line of sight from that location (a minimal sketch of this mapping is given below). The ability to drape onto a sphere allows the panorama to be depicted in x-y-z three dimensional space, and it can be embedded in other models of the Visual City. Figure 4 illustrates a panoramic sphere embedded within Google Earth.

Figure 4 Panoramic Images in Google Earth

The images are placed on the reverse face of the sphere, allowing the user to look inside, while wrapping around the user once they enter the nodal point of the view. The panoramas, or 'Urban Spheres' as we call them, are open source files linking to imagery outside of Google Earth on sites such as Flickr, which quickly enables us to create a sense of location and place. Google Earth has been fundamental to the development of the Visual City and it is to this that we now turn.
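Before turning to Google Earth itself, the sphere mapping described above can be summarised in a few lines of code. The following Python fragment is a minimal sketch, assuming the panorama is stored as a standard equirectangular (360 x 180 degree) image; the image dimensions are arbitrary and the function simply reports which pixel is seen when looking in a given direction from the nodal point.

def direction_to_pixel(yaw_deg, pitch_deg, width, height):
    """Map a viewing direction from the sphere's nodal point to a pixel in a
    360 x 180 degree equirectangular panorama.

    yaw_deg:   0-360, rotation about the vertical axis (left-right)
    pitch_deg: -90 (straight down) to +90 (straight up)
    width, height: panorama dimensions in pixels
    """
    # Longitude maps linearly across the image width,
    # latitude maps linearly down the image height.
    x = (yaw_deg % 360.0) / 360.0 * (width - 1)
    y = (90.0 - pitch_deg) / 180.0 * (height - 1)
    return int(round(x)), int(round(y))

# A yaw of 90 degrees at the horizon (pitch 0) in an 8000 x 4000 pixel
# panorama maps to a pixel a quarter of the way across, on the middle row.
print(direction_to_pixel(90, 0, 8000, 4000))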
4. Visual Cities and the Visual Earth

The World Wide Web has provided a revolution in the way we obtain, distribute and react to information, and we now take for granted the ability to search, edit and publish information regardless of location. The first commercially available browser, Netscape, based on the earlier Mosaic, was released in 1994, and much has happened in terms of the way we distribute, manipulate and visualize data since that time. It is arguable that we also take for granted the ability to zoom into any location on the globe and view various levels of informational and visual data in a three dimensional environment. Yet it is barely 24 months, at the time of writing, since the original Keyhole Earth Browser was re-branded and launched as Google Earth. Google Earth is the current buzzword in terms of geographic information and is covered in detail by Michael Goodchild in Chapter 2.

The importance of Google Earth to the Visual City is threefold. Firstly, there is the ability to view the city in two dimensions via high resolution aerial imagery. Levels of detail vary according to location, with the 'Googleplex' (the Google campus) providing the highest current resolution at 2.54cm per pixel. The use of high resolution digital imagery allows the user to gain a visual overview of a city from the air. This is a recent development and has led to the rise of sites that track sightseeing locations within Google Earth. Although the highest resolutions are limited to urban areas, Google Earth sightseeing is a global phenomenon.

To hone this discussion back to the city, the move to the third dimension has probably had a larger impact on the visualization of geographic information than any other development. Although predominantly based in the US and Japan, due to copyright issues on data to which we return later in terms of our own model, Google's three dimensional cities are fundamental to the idea of the Visual City. They represent a significant development in the visualization of city environments, not only in terms of our ability to view building outlines and polygons but also because of their location in true geographical space. Geographical location thus provides the third issue of importance in Google Earth: the ability to add data and visualize information using the three dimensional Visual City as a backdrop or canvas for other data sources. The ability to visualize and overlay information opens up a number of applications for the Visual City, applications which were once in the domain of the professional user, and it is to these that we now turn.

4.1 Applications in the Visual City

With the rise of computing power has come an increase in publicly accessible GIS information and, with it, the ability to visualize in three dimensions, leading to a massive demand for city models. In terms of Virtual London, the fully functional complete model has been developed in different ways for different audiences. This is of some importance as each audience requires a different level of interaction and a different interface. While the visual use of the city is almost universally similar between different users, what changes is the level of data mining possible, the delivery method and the interface. Broadly it is possible to identify two main categories of use. First, we have fully professional usage, which includes the use of the model by architects, developers, planners and other professionals who are keen to use its full data query and visualization capabilities. For example, an architect might place a building within the model and use this to assess a variety of issues, from its basic visualization to the impact it might have on traffic and surrounding land use. In terms of our London model, the fully professional application has been our main focus. The 3D model has been rolled out to all 33 London Boroughs, providing London with its first city-wide 3D GIS system. This raises a number of issues in terms of the software, hardware and expertise required to manage and view the model. As such, we have rolled out alongside the professional model a customised version written to load dynamically according to the viewpoint in Google Earth.
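By way of illustration, view-dependent loading of this kind is typically achieved in Google Earth through a KML NetworkLink that refreshes its content whenever the camera stops moving. The short Python sketch below (not the Virtual London implementation itself) writes out such a file; the server address is purely hypothetical and stands in for whatever service returns model sections for the requested extent.

# A minimal sketch of view-dependent loading in Google Earth: a KML
# NetworkLink asks a (hypothetical) server for fresh content each time the
# camera stops moving, passing the visible bounding box as parameters.

NETWORK_LINK_KML = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <NetworkLink>
    <name>Dynamic city model</name>
    <Link>
      <!-- hypothetical endpoint returning KML for the requested extent -->
      <href>http://example.org/citymodel/kml</href>
      <viewRefreshMode>onStop</viewRefreshMode>
      <viewRefreshTime>2</viewRefreshTime>
      <viewFormat>BBOX=[bboxWest],[bboxSouth],[bboxEast],[bboxNorth]</viewFormat>
    </Link>
  </NetworkLink>
</kml>
"""

with open("dynamic_city.kml", "w") as f:
    f.write(NETWORK_LINK_KML)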
Figure 5 illustrates a section of the model in Google Earth.

Figure 5 Virtual London in Google Earth

The Google Earth version was developed specifically for the non-GIS user. In terms of professional use the level of functionality is compromised, but the ability to navigate and overlay other datasets is increased. This is a common trade-off between functionality on the one hand and cost and ease of use on the other. As such, the choice to roll out a Google Earth version is important as it allows any local government employee to view the model. This links in to our second level of user: the concerned citizen engaging in public participation. Initially this was seen as the main focus for using the model, but it has had to be restricted due to issues of copyright with the Ordnance Survey base data used to present this version.

Restrictions on data use have been central to city visualization and to GIS in general, especially outside the academic community. In short, data costs money to collect, and the licence to use the data is therefore often restrictive in terms of further distribution. In terms of Virtual London, this led to the withdrawal of the public access version, illustrating the difficulty faced by Ordnance Survey in adapting its licensing policies for the new age (Cross, 2007). The public face of Virtual London is therefore currently limited to movie files and, as such, to indirect visualization of data within the city model. While this is restrictive in terms of public participation and access to the data, it does result in improved visual output, since with movie files one is not concerned with real-time visualization. A good example is the way a three dimensional city model can effectively communicate data in the visualization of air pollution, where such levels of visualization are currently not possible in real-time but are possible offline, as we show in Figure 6.

Figure 6 Air Pollution Rendering

Figure 6 illustrates air pollution data from the Environmental Research Group at King's College London, where a pollutant surface based on nitrogen dioxide is draped in three dimensions over the cityscape (a simple sketch of the general approach is given at the end of this section). The move to visualize data in three dimensions is controversial and is often seen as mere 'eye candy' by some specialists in the field. Yet in terms of a communication tool, it illustrates the areas of intense air pollution arguably more effectively than any two dimensional map. This may partly be due to the visual nature of the medium allowing a stronger sense of location and place to be obtained than from a top-down two dimensional view. As such, any amount of data can be visualized with the model. Figure 7 illustrates how the city can be flooded as the result of sea-level rise. With the animation file, it is possible to watch the water level rise and therefore identify which areas are more at risk according to the degree of rise. Again, this is a use of the Visual City in offline mode, where we can sensibly embed data to visualize important outcomes.

Figure 7 London Flooding

The Visual City does not necessarily need to be three dimensional. Indeed, as we argue later, there are a number of emerging two dimensional technologies that create a Visual City. Neither should a Visual City be seen as purely data or informational space, for social space is becoming increasingly important in its development, as we will now show.
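First, the sketch of the pollution-draping idea promised above. The Python fragment below is purely illustrative and is not the King's College London or Virtual London pipeline: it turns a small, invented grid of nitrogen dioxide values into extruded, colour-coded KML polygons so that a pollutant surface can be draped over the cityscape in Google Earth.

# Illustrative only: convert a grid of NO2 values into extruded, colour-coded
# KML polygons. The grid values and coordinates below are invented.

def cell_to_kml(lon, lat, size_deg, no2, scale=50.0):
    """Return one extruded KML polygon whose height and colour encode NO2."""
    height = no2 * scale                        # metres above ground
    red = min(255, int(no2 * 2.55))             # simple green-to-red ramp
    green = 255 - red
    colour = "7f00{:02x}{:02x}".format(green, red)   # KML colours are aabbggrr
    ring = [(lon, lat), (lon + size_deg, lat), (lon + size_deg, lat + size_deg),
            (lon, lat + size_deg), (lon, lat)]
    coords = " ".join("{},{},{}".format(x, y, height) for x, y in ring)
    return ("<Placemark><Style><PolyStyle><color>{c}</color></PolyStyle></Style>"
            "<Polygon><extrude>1</extrude><altitudeMode>relativeToGround</altitudeMode>"
            "<outerBoundaryIs><LinearRing><coordinates>{xy}</coordinates></LinearRing>"
            "</outerBoundaryIs></Polygon></Placemark>").format(c=colour, xy=coords)

# Invented 2 x 2 grid of NO2 values (arbitrary units) over central London.
grid = [(-0.13, 51.50, 80), (-0.12, 51.50, 60), (-0.13, 51.51, 40), (-0.12, 51.51, 95)]
kml = ('<?xml version="1.0" encoding="UTF-8"?>'
       '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
       + "".join(cell_to_kml(lon, lat, 0.01, v) for lon, lat, v in grid)
       + "</Document></kml>")

with open("no2_surface.kml", "w") as f:
    f.write(kml)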
5. The Development of Virtual Social Space

Terms and phrases come into and out of fashion. Cyberspace, a once common term for describing the Internet, is now passé, as is the term Metaverse for describing multiuser worlds. Yet it is Stephenson's (1992) textual definition of the Metaverse which is closest to today's visual virtual cities. Stephenson's novel Snow Crash depicts life in the Metaverse as follows:

"As Hiro approaches the Street, he sees two young couples, probably using their parents' computer for a double date in the Metaverse, climbing down out of Port Zero, which is the local port of entry and monorail stop. He is not seeing real people of course. This is all part of the moving illustration drawn by his computer according to the specifications coming down the fiber-optic cable. The people are pieces of software called avatars." Neal Stephenson, Snow Crash (1992, p.35).

Avatars are an individual's embodiment in the Visual City, providing the all-important visual and social presence in the digital environment. They are the citizens, the occupants and the commuters of the digital realm; indeed they are the inhabitants of the Visual City in all but real physical presence. The term avatar, in the context of digital environments, was first used by Chip Morningstar, the creator of Habitat, the first networked graphical virtual environment, developed in 1985. The term originates from the Hindu religion as an incarnation of a deity, hence an embodiment or manifestation of an idea or greater reality. Figure 8 illustrates typical designs for avatars in a virtual world, in this case Second Life.

Figure 8 Avatars in Second Life

Second Life, launched in 2003, currently represents the most successful social/visual space on the Internet. It differs from more game-based systems such as the popular World of Warcraft in that it does not have any quests or goals. The system is purely a social geographic space within which its users are able to construct the environment entirely themselves. From the elevation of the landscape to the scale of a city, every part of Second Life's visual space is editable. It is as close to the Metaverse as current technology allows and provides a unique insight into the future of the Visual City. Benedikt (1996) states that virtual worlds are not real in the material sense; many of the axioms of topology and geometry so compellingly observed to be an integral part of nature can therefore be violated or reinvented, as can many of the laws of physics. It is this reinvention that allows attributes to be enhanced and emphasised and the laws of gravity, density and mass to be excluded, allowing buildings to be moved or deleted with the click of a mouse and allowing the user to fly above or anywhere within the environment.

As such, Second Life is a Visual City which does not correspond to the cities in Google Earth. It is a landscape of fictional space existing only on one of the 3000 servers that power Second Life. The lack of gravity and the ability of avatars to fly or teleport to locations creates a cityscape which differs considerably from the real world. Combined with limited design control (there are no planners or architects), the ability of any user to build results in a virtual sprawl of spiralling urbanity mixed with eccentric retail areas and recreational land-use parcels. In terms of the Visual City, you would not necessarily expect textual information to allow the creation of a cityscape.
Yet combined with a social network, text-based communication can provide a uniquely visual view of the city as a whole. Text-based messages via mobile phones are now part of everyday life. The first text message was sent in December 1992, while SMS (short messaging service) was launched commercially for the first time in 1995 (Wilson, 2005). Text-based messaging is, in general, a one-to-one communication system. To create a social space, the SMS needs to be shared via a wider network, and thus it becomes one-to-many in its communicative potential through newly emerging services such as Twitter.

Twitter is representative of the recent trend in social networking sites allowing people to connect and communicate. Where it differs from sites such as MySpace is that it is based purely on the SMS format of 140 characters maximum, with the text entry box on Twitter asking the simple question 'What are you doing?' As such, the system lends itself to short and often pithy updates on a person's activity, sent via mobile phone, instant messaging device or the Twitter website. The question is how this text-based information source can create a Visual City. The answer lies partly in the sheer number of users on Twitter (in excess of 200,000) and the ability to include a user's location in the messages. Combining the location of Twitter posts, known as Tweets, with a Google Maps mashup makes it possible to visualize what people are doing at different locations in a city in real-time. We illustrate the location of Tweets in Central London in Figure 9, and a minimal sketch of this kind of mashup follows below.

Figure 9 Tweets in Central London

New visualizations of Tweets are currently emerging on an almost daily basis, allowing the concept to scale to the global level with the ability to visualize real-time feeds of people's thoughts and 'what they are doing'. Using Microsoft Live (Microsoft's web-based mapping service), it is possible to visualize these one-way conversation flows, updated every five seconds, with either a global or street level view. Using a system known as Atlas together with GeoRSS, a start-up company, Freshlogic, has developed a mapping system that updates these feeds geographically. Of interest in terms of the city is the overview gained when viewing from above. By simply letting the system run, it will zoom into each new location, complete with address, the user's photograph and Tweet, every five seconds. The system also works with geotagged photographs via Flickr. Using the same Atlas system, the map will update with new photography of places at each predetermined time interval. These again are live feeds into a web-based geographical visualization system, something that was unheard of and hardly imaginable a mere 18 months ago.
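The sketch below illustrates the general shape of such a mashup. It assumes a handful of geo-located Tweets have already been retrieved (the Twitter API call itself is omitted and the messages shown are invented); they are written out as KML placemarks that can be overlaid on Google Earth or imported into Google Maps.

# Minimal sketch of a Tweet-mapping mashup: geo-located Tweets (assumed to
# have been fetched already; the messages below are invented) are written
# out as KML placemarks for overlay on Google Earth or Google Maps.

from xml.sax.saxutils import escape

tweets = [
    ("Stuck on the Northern Line again", 51.5308, -0.1238),
    ("Coffee by the river before work",  51.5074, -0.0997),
]

placemarks = "".join(
    "<Placemark><name>Tweet</name><description>{}</description>"
    "<Point><coordinates>{},{},0</coordinates></Point></Placemark>".format(
        escape(text), lon, lat)   # KML expects longitude,latitude order
    for text, lat, lon in tweets
)

kml = ('<?xml version="1.0" encoding="UTF-8"?>'
       '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
       + placemarks + '</Document></kml>')

with open("tweets.kml", "w") as f:
    f.write(kml)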
The key to this rise in geographical information is data, but not data as we would traditionally view it, held in large information sets in a central and often government-based repository; rather it is personally gathered data. The move towards low cost yet powerful software such as SketchUp creates a geographically tagged database of three dimensional objects, mainly relating to our built environment. Yet this is only one aspect, as we have seen. The move towards the increasing miniaturisation of hardware and the demand for remote access to information is pushing forward hand-held personal digital assistants (PDAs) and, more importantly, the mobile phone market. These hardware innovations come in waves, with each new wave adding increasingly complex functionality within increasingly easy-to-use interfaces.

In the late 1990s, PDAs were the 'must-have' gadget for remote access to information. Functionality was limited to email and internet access, first via slow modem connections linked to mobile phones and later via wi-fi hotspots. Such devices allow access to information, but not in the geographic sense per se. As with all waves of innovation, PDAs fell out of favour and are only just re-emerging, this time integrated within mobile phones, making a portable digital toolkit for capturing the Visual City available to the public at large. The latest of these devices is the Nokia N95, a phone which features a 5 megapixel camera, wi-fi and, more importantly, a built-in GPS. As such it makes the perfect tool for both capturing and communicating within the built environment: a portable tool to create the Visual City. The camera has a high enough resolution for use in photomodelling and SketchUp, as well as opening up the possibility of panoramic capture. The GPS unit allows the tracking of routes and the uploading of data to Google Earth.

Figure 10 Personal GPS Tracking Data

Figure 10 illustrates my route into Waterloo Station, London, tracked using the N95. The height of the route represents speed, providing a unique insight into my own travel into the city (a minimal sketch of how such a track can be produced is given below). The integration of GPS into devices such as mobile phones allows them to be used outside the traditional car-based environment, and thus they become part of our navigational abilities on foot. The ability to navigate through the physical city while capturing digital data in real-time, or sending Tweets or geotagged photographs to Flickr, represents a key development in the Visual City. People generate data, data which up until now has generally not been logged, let alone sent to a digital earth for visualization.
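The following Python fragment sketches the general technique behind Figure 10; it is not the exact workflow used for the N95 track, and the fixes shown are invented. Consecutive GPS points are converted into a KML line in which altitude is proportional to the speed between fixes, so faster sections of the journey appear higher above the ground when loaded into Google Earth.

# Sketch of the technique behind Figure 10 (not the exact N95 workflow):
# consecutive GPS fixes become a KML LineString whose altitude encodes the
# speed between fixes. The track points below are invented.

import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in metres."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# (latitude, longitude, seconds since start) - hypothetical fixes near Waterloo.
track = [(51.5031, -0.1132, 0), (51.5036, -0.1125, 30), (51.5045, -0.1120, 60)]

coords = []
for (lat1, lon1, t1), (lat2, lon2, t2) in zip(track, track[1:]):
    speed = haversine_m(lat1, lon1, lat2, lon2) / (t2 - t1)   # metres per second
    coords.append("{},{},{}".format(lon2, lat2, speed * 20))  # exaggerated for display

kml = ('<?xml version="1.0" encoding="UTF-8"?>'
       '<kml xmlns="http://www.opengis.net/kml/2.2"><Placemark>'
       '<LineString><altitudeMode>relativeToGround</altitudeMode>'
       '<coordinates>' + " ".join(coords) + '</coordinates>'
       '</LineString></Placemark></kml>')

with open("speed_track.kml", "w") as f:
    f.write(kml)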
In terms of the Visual City, it should not be assumed that there is one Visual City for each urban area. Indeed, the number ranges from one or two full three dimensional city models to hundreds of thousands of individual city visualizations. There is not, as such, a single platform or database for the increasing amount of information that can be captured. Google Earth provides a good basis with its Community Layer, which presents information gathered by the public at large. The sheer amount of information can however be overwhelming and, in general, this layer is left switched off and is therefore unseen by the majority of users. The sheer density of population in a city, and thus the amount of information that could be input into a system such as Google Earth, is resulting in vast amounts of data of varying quality. While such data is of interest on a number of levels for display, the move seems to be towards a personal, yet shared, Visual City rather than a single collaborative database.

6. The Future: The Personal City

A familiar theme is the decrease in the knowledge required to create and present geographical information, which is leading to a direct increase in the amount of information available. As we have seen, user-created data can be visualized within a global system, such as Tweets and Flickr photographs via Atlas and Microsoft Live, or as personal tracks via mobile devices within Google Earth. While all these data streams can be built into one Visual City, such as our Virtual London, there is also a move to more personalised geographic data.

The editing of Google Maps to create a location previously involved the manual editing of code and a moderate knowledge of XML, but with the release of Google's My Maps it is now possible to create one's own map in a matter of minutes. The My Maps application is a web-based service which allows the user to add points, lines and polygons as an overlay to Google Maps. This again is a significant addition to the visualization of the cityscape, in both two and three dimensions, as the overlays created can be exported to Google Earth or indeed any KML viewer. In addition to the ability to add points, polygons and lines to the map is the integration of video via either Google Video or YouTube. In essence, we are but at the beginning of what will be a revolution in social, visual and informational data plotted geographically by general users.

The ability to create one's own map of the cityscape is of prime importance as these maps can be either public or private. If the user chooses the public option, which is the default, the map becomes searchable within the general Google search engine. Information embedded in the map, if searched for, thus links directly to the map. As such, the map, be it city based or otherwise, becomes the key interface to informational space.

The rise of social networks provides us with the ability to look down on the city and view the activities that its citizens are involved in. This ability provides unique social data and an insight into how citizens are thinking, working and socialising. At the moment, Tweet visualizations are two dimensional, but it is a short step to move these data streams into a three dimensional world such as Google Earth. If you then combine this with avatars, as in Second Life, you not only have a Visual City with visual and informational space, you also introduce perceptual space into the context. This is more than a Visual City, for we now stand at the threshold of a Visual Earth.

7. References

Bell, J. (2006) Virtual Spaces and Places; Cyberspace; Space and Place in Computing; Presence Research, http://pegasus.cc.ucf.edu/~janzb/place/virtual.htm

Benedikt, M. (1996) Information in Space is Space in Information, in Michelson, A., and Stjernfelt, F. (eds) Images from Afar. Scientific Visualisation – An Anthology, Copenhagen: Akademisk Forlag, 161-171.

Cross, M. (2007) Copyright Sinks Virtual Planning, The Guardian, 24/01/2007.

Lynch, K. (1960) The Image of the City, Cambridge, MA and London: MIT Press.

Mitchell, W.J. (1994) Picture Theory, Chicago: University of Chicago Press.

Rigg, J. (2007) What is a Panorama, PanoGuide, http://www.panoguide.com/reference/panorama.html

Stephenson, N. (1992) Snow Crash, New York: Bantam Spectra.

Wilson, F. R. (2005) A History of SMS, http://prismspectrum.blogspot.com/2005_11_01_archive.html

Wyeld, G. T., and Andrew, A. (2006) The Virtual City: Perspectives on the Dystopic Cybercity, The Journal of Architecture, 11(5), 613-620.

Examples of the Visual City can be found at the author's blog: http://www.digitalurban.blogspot.com