Route panorama

A route panorama is a continuous 2D image that includes all the scenes visible from a route. It first appeared in Zheng and Tsuji's work on panoramic representations for route recognition, begun in 1990. [1]

Overview

Different from a local panorama taken at a static viewpoint, a digital route panorama is constructed from partial views at consecutive viewpoints along a path. [2] A general approach to obtaining a complete route panorama is to mount a line camera or slit camera on a vehicle that moves smoothly along the path. The camera scans temporal scenes to the side of the path, and a program that processes the temporal image or video data connects them into one spatial image. The route panorama can extend over a long distance for indexing scenes and for navigation on the Internet. The long image can further be transmitted to and scrolled on computer screens or handheld devices as a moving panorama for access to geospatial locations, navigation, georeferencing, [3] etc.
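The scanning-and-connecting step described above can be sketched in a few lines of code: one vertical pixel line (a slit) is sampled from each video frame, and the temporal sequence of slits is stacked into one long spatial image. The function name and the synthetic frames below are illustrative, not from the source.

```python
import numpy as np

def route_panorama(frames, slit_x=None):
    """Build a route panorama by stacking one vertical pixel line
    (a 'slit') sampled from each frame of a side-facing video.

    frames : iterable of H x W x 3 uint8 arrays (temporal image data)
    slit_x : column index of the slit; defaults to the frame centre
    """
    columns = []
    for frame in frames:
        h, w = frame.shape[:2]
        x = w // 2 if slit_x is None else slit_x
        # Each sampled slit becomes one column of the spatial image.
        columns.append(frame[:, x])
    # Concatenating the temporal slits yields a long 2D route panorama
    # whose width grows with the distance travelled.
    return np.stack(columns, axis=1)

# Synthetic example: 200 frames of 120x160 video -> 120x200 panorama
frames = [np.full((120, 160, 3), t % 256, dtype=np.uint8) for t in range(200)]
pano = route_panorama(frames)
print(pano.shape)  # (120, 200, 3)
```

In a real system the slit sampling rate would be tied to the vehicle speed, and shake removal (inter-frame matching) would run before the slit is extracted.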

Mathematically, the route panorama employs a parallel-and-perspective projection, [4] a continuous and extreme case of the multi-perspective view in which every pixel line has its own viewpoint. The aspect ratio of an object may therefore differ from what a normal perspective projection generates. In practice, a video camcorder can produce the route panorama by taking only one pixel line from each video frame, relying on the camcorder's auto-exposure function and removing shake through inter-frame matching.
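As a sketch of this projection (the notation here is assumed for illustration, not taken from the source): suppose the camera moves at constant speed $V$ along the $X$ axis with focal length $f$ and samples a vertical slit, and let $s$ be the image scrolling speed in pixels per unit time. A scene point $(X, Y, Z)$ at depth $Z$ is scanned at time $t = X/V$, giving image coordinates

```latex
u = s\,\frac{X}{V} \quad \text{(parallel projection along the path)},
\qquad
v = f\,\frac{Y}{Z} \quad \text{(perspective projection in the vertical direction)}.
```

Since $u$ is independent of the depth $Z$ while $v$ scales as $1/Z$, distant objects keep their horizontal extent but shrink vertically, which produces the altered aspect ratios described above.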

If the depth of the scenes along the path has a dominant layer, a route panorama can also be created on that layer by stitching discrete photos taken consecutively along the path [5] using photomontage. Under the same circumstance, a dynamic slit selected in the video frame can generate a route panorama with less shape distortion. [6] [7]
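Under that dominant-layer assumption, the stitching step can be sketched as follows. The per-photo shifts are assumed to come from a separate image-alignment step (e.g. template matching on the dominant layer), which is omitted here; the function name and toy data are illustrative.

```python
import numpy as np

def stitch_dominant_layer(photos, shifts):
    """Stitch photos taken at consecutive positions along a path into
    a route panorama on the dominant depth layer.

    shifts[i] is the horizontal motion (in pixels) of the dominant
    layer between photo i and photo i+1, as estimated by aligning
    consecutive photos on that layer.
    """
    strips = []
    for photo, shift in zip(photos, shifts):
        w = photo.shape[1]
        # Cut a strip as wide as the inter-photo motion from the centre
        # of each photo; on the dominant layer consecutive strips abut.
        x0 = (w - shift) // 2
        strips.append(photo[:, x0:x0 + shift])
    return np.concatenate(strips, axis=1)

# Toy example: five 100x200 photos with a constant 20-pixel shift
photos = [np.full((100, 200, 3), i * 40, dtype=np.uint8) for i in range(5)]
pano = stitch_dominant_layer(photos, [20] * 5)
print(pano.shape)  # (100, 100, 3)
```

Objects off the dominant layer move by a different amount between photos, so they appear duplicated or truncated at strip seams; the dynamic-slit methods cited above reduce this distortion.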

Panoramic View of Old City lane (Pole) Ahmedabad.jpg
A multipoint panorama of one of the streets of the old city of Ahmedabad
RoutePanorama.jpg
A route panorama captured with a video camera on a vehicle along Alabama St., Indianapolis, Indiana, United States
Along the River During the Qingming Festival (Qing Court Version).jpg
Ancient painting scroll: Along the River During the Qingming Festival, an 18th-century remake of a 12th-century original by Chinese artist Zhang Zeduan

Related Research Articles

Computer vision tasks include methods for acquiring, processing, analyzing and understanding digital images, and extraction of high-dimensional data from the real world in order to produce numerical or symbolic information, e.g. in the form of decisions. Understanding in this context means the transformation of visual images into descriptions of the world that make sense to thought processes and can elicit appropriate action. This image understanding can be seen as the disentangling of symbolic information from image data using models constructed with the aid of geometry, physics, statistics, and learning theory.

Video: Electronic moving image

Video is an electronic medium for the recording, copying, playback, broadcasting, and display of moving visual media. Video was first developed for mechanical television systems, which were quickly replaced by cathode-ray tube (CRT) systems, which, in turn, were replaced by flat-panel displays of several types.

Motion blur: Photography artifact from moving objects

Motion blur is the apparent streaking of moving objects in a photograph or a sequence of frames, such as a film or animation. It results when the image being recorded changes during the recording of a single exposure, due to rapid movement or long exposure.

Microscope image processing is a broad term that covers the use of digital image processing techniques to process, analyze and present images obtained from a microscope. Such processing is now commonplace in a number of diverse fields such as medicine, biological research, cancer research, drug testing, metallurgy, etc. A number of manufacturers of microscopes now specifically design in features that allow the microscopes to interface to an image processing system.

Camcorder: Video camera with built-in video recorder

A camcorder is a self-contained portable electronic device with video capture and recording as its primary function. It is typically equipped with an articulating screen mounted on the left side, a strap on the right side to facilitate holding, a hot-swappable battery facing the user, hot-swappable recording media, and an internally contained quiet optical zoom lens.

Panoramic photography is a technique of photography, using specialized equipment or software, that captures images with horizontally elongated fields of view. It is sometimes known as wide format photography. The term has also been applied to a photograph that is cropped to a relatively wide aspect ratio, like the familiar letterbox format in wide-screen video.

Slit-scan photography: Photographic and cinematographic process

The slit-scan photography technique is a photographic and cinematographic process where a moveable slide, into which a slit has been cut, is inserted between the camera and the subject to be photographed.

QuickTime VR is an image file format developed by Apple Inc. for QuickTime, and discontinued along with QuickTime 7. It allows the creation and viewing of VR photography, photographically captured panoramas, and the viewing of objects photographed from multiple angles. It functions as a plugin for the QuickTime Player and for the QuickTime Web browser plugin.

HDV: Magnetic tape-based HD videocassette format for camcorders

HDV is a format for recording of high-definition video on DV videocassette tape. The format was originally developed by JVC and supported by Sony, Canon, and Sharp. The four companies formed the HDV Consortium in September 2003.

Image resolution is the level of detail of an image. The term applies to digital images, film images, and other types of images. "Higher resolution" means more image detail. Image resolution can be measured in various ways. Resolution quantifies how close lines can be to each other and still be visibly resolved. Resolution units can be tied to physical sizes, to the overall size of a picture, or to angular subtense. Instead of single lines, line pairs are often used, composed of a dark line and an adjacent light line; for example, a resolution of 10 lines per millimeter means 5 dark lines alternating with 5 light lines, or 5 line pairs per millimeter. Photographic lenses are most often quoted in line pairs per millimeter.

Image stitching: Combining multiple photographic images with overlapping fields of view

Image stitching or photo stitching is the process of combining multiple photographic images with overlapping fields of view to produce a segmented panorama or high-resolution image. Commonly performed through the use of computer software, most approaches to image stitching require nearly exact overlaps between images and identical exposures to produce seamless results, although some stitching algorithms actually benefit from differently exposed images by doing high-dynamic-range imaging in regions of overlap. Some digital cameras can stitch their photos internally.

Sony HDR-HC1: Digital camera model

The Sony HDR-HC1, introduced in mid-2005, is the first consumer HDV camcorder to support 1080i.

A rotating line camera is a digital camera that uses a linear CCD array to assemble a digital image as the camera rotates. The CCD array may consist of three sensor lines, one for each RGB color channel. Advanced rotating line cameras may have multiple linear CCD arrays on the focal plate and may capture multiple panoramic images during their rotation.

VR photography: Interactive panoramic photo viewing format

VR photography is the interactive viewing of panoramic photographs, generally encompassing a 360-degree circle or a spherical view. The result is known as a VR photograph, 360-degree photo, photo sphere, or spherical photo, as well as an interactive panorama or immersive panorama.

Rolling shutter: Image capture method

Rolling shutter is a method of image capture in which a still picture or each frame of a video is captured not by taking a snapshot of the entire scene at a single instant in time but rather by scanning across the scene rapidly, vertically, horizontally or rotationally. In other words, not all parts of the image of the scene are recorded at exactly the same instant. This produces predictable distortions of fast-moving objects or rapid flashes of light. This is in contrast with "global shutter" in which the entire frame is captured at the same instant.

Strip photography: Type of photographic technique

Strip photography, or slit photography, is a photographic technique of capturing a two-dimensional image as a sequence of one-dimensional images over time, in contrast to a normal photo which is a single two-dimensional image at one point in time. A moving scene is recorded, over a period of time, using a camera that observes a narrow strip rather than the full field. If the subject is moving through this observed strip at constant speed, they will appear in the finished photo as a visible object. Stationary objects, like the background, will be the same the whole way across the photo and appear as stripes along the time axis; see examples on this page.

This glossary defines terms that are used in the document "Defining Video Quality Requirements: A Guide for Public Safety", developed by the Video Quality in Public Safety (VQIPS) Working Group. It contains terminology and explanations of concepts relevant to the video industry. The purpose of the glossary is to inform the reader of commonly used vocabulary terms in the video domain. This glossary was compiled from various industry sources.

Pixel Camera: Camera application developed by Google for Pixel devices

Pixel Camera, formerly Google Camera, is a camera phone application developed by Google for the Android operating system. Development for the application began in 2011 at the Google X research incubator led by Marc Levoy, which was developing image fusion technology for Google Glass. It was publicly released for Android 4.4+ on the Google Play on April 16, 2014. It was initially supported on all devices running Android 4.4 KitKat and higher, but became only officially supported on Google Pixel devices in the following years. The app was renamed Pixel Camera in October 2023, with the launch of the Pixel 8 and Pixel 8 Pro.

In computer vision, rigid motion segmentation is the process of separating regions, features, or trajectories from a video sequence into coherent subsets of space and time. These subsets correspond to independent rigidly moving objects in the scene. The goal of this segmentation is to differentiate and extract the meaningful rigid motion from the background and analyze it. Image segmentation techniques label pixels that share certain characteristics at a particular time. Here, pixels are segmented by their relative movement over a period of time, i.e. the duration of the video sequence.

Event camera: Type of imaging sensor

An event camera, also known as a neuromorphic camera, silicon retina or dynamic vision sensor, is an imaging sensor that responds to local changes in brightness. Event cameras do not capture images using a shutter as conventional (frame) cameras do. Instead, each pixel inside an event camera operates independently and asynchronously, reporting changes in brightness as they occur, and staying silent otherwise.

References

  1. Jiang Yu Zheng, Saburo Tsuji, "Panoramic representation for route recognition by a mobile robot", International Journal of Computer Vision 9(1): 55–76 (1992).
  2. Jiang Yu Zheng, "Digital Route Panorama", IEEE MultiMedia 10(3): 57–67 (2003).
  3. Jiang Yu Zheng, Min Shi, "Mapping cityscapes into cyberspace for visualization", Journal of Visualization and Computer Animation 16(2): 97–107 (2005).
  4. Jiang Yu Zheng, Yu Zhou, Panayiotis Mili, "Scanning Scene Tunnel for City Traversing", IEEE Transactions on Visualization and Computer Graphics 12(2): 155–167 (2006).
  5. Aseem Agarwala et al., "Photographing long scenes with multi-viewpoint panoramas", SIGGRAPH 2006.
  6. Augusto Román, Hendrik P. A. Lensch, "Automatic Multiperspective Images", Eurographics Symposium on Rendering (2006).
  7. Alex Rav-Acha, Giora Engel, Shmuel Peleg, "Minimal Aspect Distortion (MAD) Mosaicing of Long Scenes", International Journal of Computer Vision 78(2–3): 187–206 (2008). doi:10.1007/s11263-007-0101-9.