Aspen Movie Map

The Aspen Movie Map was a hypermedia system developed at MIT that enabled the user to take a virtual tour through the city of Aspen, Colorado. It was developed by a team working with Andrew Lippman in 1978 with funding from DARPA.

Features

The Aspen Movie Map enabled the user to take a virtual tour through the city of Aspen, Colorado (a form of surrogate travel), and is an early example of a hypermedia system.

A gyroscopic stabilizer with four 16mm stop-frame film cameras was mounted on top of a car, with an encoder that triggered the cameras every ten feet. The distance was measured by an optical sensor attached to the hub of a bicycle wheel towed behind the vehicle. The cameras were mounted so as to capture front, back, and side views as the car made its way through the city. Filming took place daily between 10 a.m. and 2 p.m. to minimize lighting discrepancies, and the car was carefully driven down the center of every street in Aspen to enable registered match cuts.
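
The capture logic amounts to distance-triggered rather than time-triggered exposure. The sketch below illustrates the idea; the wheel circumference, encoder resolution, and controller interface are assumptions for illustration, not details of the original rig.

```python
# Hypothetical sketch of the distance-triggered capture described above:
# an encoder on a towed bicycle wheel measures distance, and all four
# cameras fire one frame every ten feet. Constants are illustrative.

WHEEL_CIRCUMFERENCE_FT = 7.0   # assumed wheel size
TICKS_PER_REVOLUTION = 100     # assumed encoder resolution
CAPTURE_INTERVAL_FT = 10.0     # one exposure every ten feet

class CaptureController:
    def __init__(self):
        self.distance_ft = 0.0
        self.next_capture_ft = 0.0

    def on_encoder_tick(self, ticks: int = 1) -> None:
        """Advance measured distance and fire the cameras as needed."""
        self.distance_ft += ticks * WHEEL_CIRCUMFERENCE_FT / TICKS_PER_REVOLUTION
        while self.distance_ft >= self.next_capture_ft:
            self.fire_cameras()
            self.next_capture_ft += CAPTURE_INTERVAL_FT

    def fire_cameras(self) -> None:
        # One stop-frame exposure on each of the four views.
        for view in ("front", "back", "left", "right"):
            print(f"expose {view} camera at {self.distance_ft:.1f} ft")
```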

The film was assembled into a collection of discontinuous scenes (one segment per view per city block) and then transferred to laserdisc, the analog-video precursor to modern digital optical disc storage technologies such as DVDs. A database correlated the layout of the video on the disc with the two-dimensional street plan. Thus linked, the user could choose an arbitrary path through the city, the only restrictions being that they stay in the center of the street, move ten feet between steps, and view the street from one of the four orthogonal camera angles.
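
The original database schema is not documented here; the sketch below assumes a minimal layout in which each street-block segment stores, per view, the range of disc frames holding its footage, so that a position expressed in ten-foot steps maps directly to a frame number.

```python
# A minimal sketch of a database correlating disc layout with the street
# plan, as described above. Streets, blocks, and frame numbers are
# assumptions for illustration.

# Each street-block segment stores, per view, the first and last frame
# of its footage on the laserdisc (one frame per ten feet of travel).
segments = {
    ("main_st", "block_3"): {
        "front": (10_400, 10_452),
        "back":  (22_100, 22_152),
        "left":  (31_000, 31_052),
        "right": (40_300, 40_352),
    },
}

def frame_for_position(street, block, view, steps_from_block_start):
    """Return the disc frame for a position expressed in ten-foot steps."""
    first, last = segments[(street, block)][view]
    frame = first + steps_from_block_start
    if frame > last:
        raise ValueError("position is past the end of this block")
    return frame

print(frame_for_position("main_st", "block_3", "front", 5))  # -> 10405
```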

The interaction was controlled through a dynamically generated menu overlaid on top of the video image: speed and viewing angle were modified by selecting the appropriate icon through a touch-screen interface, a harbinger of the now-ubiquitous interactive-video kiosk. Commands were sent from the client process handling the user input and overlay graphics to a server that accessed the database and controlled the laserdisc players. Another interface feature was the ability to touch any building in the current field of view and, in a manner similar to the ISMAP feature of web browsers, jump to a façade of that building. Selected buildings contained additional data (e.g., interior shots, historical images, restaurant menus, and video interviews with city officials), allowing the user to take a virtual tour through those buildings.
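
A hedged sketch of this client/server split follows. The state layout, function names, and player interface are invented for illustration; only the division of labor (the client interprets touches, the server resolves frames and drives the disc player) comes from the description above.

```python
# Illustrative sketch: a touched overlay icon is interpreted client-side,
# then the server side looks up the frame and seeks the laserdisc player.

class StubDatabase:
    """Stand-in for the server-side database of disc frames."""
    def frame_for(self, position, view):
        offset = {"front": 0, "left": 1, "back": 2, "right": 3}[view]
        return 10_000 + position * 4 + offset

class StubPlayer:
    """Stand-in for a laserdisc player under server control."""
    def seek(self, frame):
        print(f"seek laserdisc to frame {frame}")

def handle_touch(icon, state, database, player):
    """Client side: interpret the icon; server side: resolve and display."""
    if icon == "faster":
        state["speed"] += 1                       # frames advanced per step
    elif icon == "turn_left":
        state["view"] = {"front": "left", "left": "back",
                         "back": "right", "right": "front"}[state["view"]]
    player.seek(database.frame_for(state["position"], state["view"]))

state = {"speed": 1, "view": "front", "position": 12}
handle_touch("turn_left", state, StubDatabase(), StubPlayer())
# -> seek laserdisc to frame 10049
```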

The facades of buildings were texture-mapped onto 3D models; the same 3D model was used to translate 2D screen coordinates into a database of buildings in order to provide hyperlinks to additional data.
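
The system's actual picking method is not specified above; one common technique consistent with the caption is an "ID buffer": render each building's identifier into an off-screen buffer using the same 3D model and camera, then index that buffer with the touch point. The buffer contents, building ID, and linked record below are illustrative.

```python
# Minimal sketch (not the original implementation) of ID-buffer picking:
# a touched screen coordinate is mapped back to the building it shows.

WIDTH, HEIGHT = 512, 512

# Pretend ID buffer: 0 means "no building"; other values are building IDs.
id_buffer = [[0] * WIDTH for _ in range(HEIGHT)]
for y in range(200, 300):          # stand-in for rasterizing one facade
    for x in range(100, 220):
        id_buffer[y][x] = 17       # building 17 occupies this region

# Hypothetical record linking a building ID to its additional data.
building_links = {17: {"name": "example building", "media": ["interior.mov"]}}

def pick_building(x: int, y: int):
    """Translate a touch at (x, y) into the linked building record."""
    building_id = id_buffer[y][x]
    return building_links.get(building_id)

print(pick_building(150, 250))  # -> record for building 17
```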

In a later implementation, the metadata, which was in large part automatically extracted from the animation database, was encoded as a digital signal in the analog video. The data encoded in each frame contained all the necessary information to enable a full-featured surrogate-travel experience.
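
The exact encoding is not specified above. As a purely illustrative sketch, the following packs a small per-frame record into black-and-white pixels of a single video line, the general approach used for in-band data such as vertical-blanking-interval codes; the record layout and all field widths are assumptions.

```python
# Illustrative only: serialize a per-frame record and render its bits as
# 0/255 pixels on one video line, then recover it by thresholding.

import struct

def encode_metadata_line(frame: int, street_id: int, view: int, width: int = 512):
    """Pack (frame, street, view) and render the bits as pixels."""
    payload = struct.pack(">IHB", frame, street_id, view)   # 7-byte record
    bits = []
    for byte in payload:
        bits.extend((byte >> i) & 1 for i in range(7, -1, -1))
    pixels = [255 if b else 0 for b in bits]
    return pixels + [0] * (width - len(pixels))             # pad the line

def decode_metadata_line(pixels):
    """Threshold pixels back into bits and unpack the record."""
    bits = [1 if p > 127 else 0 for p in pixels[:56]]
    value = 0
    for b in bits:
        value = (value << 1) | b
    return struct.unpack(">IHB", value.to_bytes(7, "big"))

line = encode_metadata_line(10_405, 3, 0)
print(decode_metadata_line(line))  # -> (10405, 3, 0)
```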

Another feature of the system was a navigation map overlaid above the horizon at the top of the frame; it indicated the user's current position in the city (along with a trace of the streets already explored) and allowed the user to jump to a two-dimensional city map, which offered an alternative way of moving through the city. Additional features of the map interface included the ability to jump back and forth between correlated aerial photographs and cartoon renderings, with routes and landmarks highlighted, and to zoom in and out à la Charles and Ray Eames's film Powers of Ten.

Aspen was filmed in early fall and in winter. The user could change seasons on demand, in situ, while moving down a street or looking at a façade. A three-dimensional polygonal model of the city was also generated using the Quick and Dirty Animation System (QADAS), which featured three-dimensional texture-mapping of the facades of landmark buildings using an algorithm designed by Paul Heckbert. These computer-graphic images, also stored on the laserdisc, were correlated to the video, enabling the user to view an abstract rendering of the city in real time.
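
The season switch can be sketched under the assumption, consistent with the description above, that the fall footage, the winter footage, and the QADAS renderings were indexed by the same positions, so that toggling changes only which disc segment is displayed:

```python
# Illustrative frame tables: the same ten-foot step index addresses the
# matching footage in any season (or the abstract QADAS rendering).
footage = {
    "fall":   {("main_st", "front"): 10_400},   # first frame of segment
    "winter": {("main_st", "front"): 55_700},
    "qadas":  {("main_st", "front"): 80_200},   # computer-graphic rendering
}

def frame_for(mode, street, view, steps):
    """Look up the disc frame for a position in the chosen rendering mode."""
    return footage[mode][(street, view)] + steps

pos = ("main_st", "front", 5)
print(frame_for("fall", *pos))    # -> 10405
print(frame_for("winter", *pos))  # -> 55705, same place, new season
```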

Credits

MIT undergraduate Peter Clay, with help from Bob Mohl and Michael Naimark, filmed the hallways of MIT with a camera mounted on a cart. The film was transferred to a laserdisc as part of a collection of projects being done at the Architecture Machine Group (ArcMac).

The Aspen Movie Map was filmed in the fall of 1978, in the winter of 1979, and briefly again (with an active gyro stabilizer) in the fall of 1979. The first version was operational in the early spring of 1979.

Many people were involved in the production, most notably:

- Nicholas Negroponte, founder and director of the Architecture Machine Group, who found support for the project from the Cybernetics Technology Office of DARPA
- Andrew Lippman, principal investigator
- Bob Mohl, who designed the map overlay system and ran user studies of the system's efficacy for his PhD thesis
- Richard Leacock (Ricky), who headed the MIT Film/Video section and, along with MS student Marek Zalewski, shot the cinéma vérité interviews placed behind the facades of key buildings
- John Borden, of Peace River Films in Cambridge, Massachusetts, who designed the stabilization rig
- Kristina Hooper Woolsey, of UCSC
- Rebecca Allen
- Scott Fisher, who matched historical-society photos of Aspen from its silver-mining days to the same scenes in 1978 and experimented with anamorphic imaging of the city (using a Volpe lens)
- Walter Bender, who designed and built the interface, the client/server model, and the animation system
- Steve Gregory
- Stan Sasaki, who built much of the electronics
- Steve Yelick, who worked on the laserdisc interface and anamorphic rendering
- Eric "Smokehouse" Brown, who built the metadata encoder/decoder
- Paul Heckbert, who worked on the animation system
- Mark Shirley and Paul Trevithick, who also worked on the animation
- Ken Carson
- Howard Eglowstein
- Michael Naimark, who was at the Center for Advanced Visual Studies and was responsible for the cinematography design and production

The Ramtek 9000 series image display system was used for the project; Ramtek built a 32-bit interface to the Interdata for this purpose. Ramtek's display systems offered the square resolutions (256x256 or 512x512) that its competitors did, but also screen-matched resolutions such as 320x240, 640x512, and 1280x1024. (The original GE CAT scanners all used the Ramtek 320x240 display.) Some prices of the day may be of interest: a keyboard, joystick, or trackball each sold for around $1,200; a 19-inch CRT, purchased from Ikegami in Japan, had an OEM price of around $5,000; and producing a single laserdisc master (around 13 inches) cost $300,000.

Purpose and applications

DARPA funding during the late 1970s was subject to the military-application requirements of the Mansfield Amendment, introduced by Senator Mike Mansfield (which had severely limited funding for hypertext researchers such as Douglas Engelbart).

The Aspen Movie Map's military application was to solve the problem of quickly familiarizing soldiers with new territory. The Department of Defense had been deeply impressed by the success of Operation Entebbe in 1976, in which Israeli commandos had quickly built a crude replica of the airport and practiced in it before attacking the real thing. The DOD hoped that the Movie Map would show the way to a future in which computers could instantly create a three-dimensional simulation of a hostile environment, at much lower cost and in less time (see virtual reality).

While the Movie Map has been referred to as an early example of interactive video, it is perhaps more accurate to describe it as a pioneering example of interactive computing. Video, audio, still images, and metadata were retrieved from a database and assembled on the fly by the computer (an Interdata minicomputer running the MagicSix operating system), which redirected its actions based upon user input; video was the principal, but not the sole, affordance of the interaction.
