Softwarp

Softwarp is a software technique for warping an image so that it can be projected onto a curved screen. This can be done in real time by inserting the softwarp as the last step of the rendering cycle. The central problem is determining how the image should be warped to look correct on the curved screen. There are several techniques for automatically calibrating the warp by projecting a pattern and observing it with cameras and/or sensors. The information from the sensors is sent to the software, which analyzes the data and calculates the curvature of the projection screen.
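A minimal sketch of this idea, purely for illustration: calibration is assumed to have produced, for every output pixel, the coordinate in the undistorted rendered frame that should appear there, and the final pass simply resamples the frame through that lookup table. The names softwarp, map_x and map_y are hypothetical, and a real implementation would normally run on the GPU with interpolated sampling.

```python
import numpy as np

def softwarp(frame, map_x, map_y):
    """Apply a pre-computed warp as the final step of the rendering cycle.

    frame        : (H, W, 3) rendered image.
    map_x, map_y : (H, W) arrays from calibration; for each output pixel
                   they give the source position in the undistorted frame.
    """
    xs = np.clip(np.rint(map_x), 0, frame.shape[1] - 1).astype(int)
    ys = np.clip(np.rint(map_y), 0, frame.shape[0] - 1).astype(int)
    return frame[ys, xs]  # nearest-neighbour resampling through the lookup table

# Example: an identity map leaves the frame unchanged.
frame = np.random.rand(600, 800, 3)
ys, xs = np.mgrid[0:600, 0:800]
assert np.array_equal(softwarp(frame, xs.astype(float), ys.astype(float)), frame)
```

Because the lookup table is fixed after calibration, the per-frame cost is a single resampling pass.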

Usage

Softwarp can be used to project virtual views onto curved walls and domes, which are typically used in vehicle simulators, for instance boat, car and airplane simulators. Covering a dome with a 360-degree view requires several projectors. A problem with using several projectors on the same screen is that the overlap regions between the projected images receive roughly twice the amount of light. This is solved with a technique called edge blending: a "filter" is applied along each overlap edge that fades the image from 100% light output (luminance) down to 0% (the lowest achievable luminance depends on the contrast ratio of the projector).
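As a rough illustration of edge blending, the ramp can be computed once per projector and multiplied into each frame after warping; the neighbouring projector uses the mirrored ramp so that the summed light in the overlap stays roughly constant. The sketch below assumes a single horizontal overlap and an approximate display gamma; names such as edge_blend_ramp and blend_px are illustrative.

```python
import numpy as np

def edge_blend_ramp(width_px, blend_px, gamma=2.2):
    """Per-column weights for a projector whose right edge overlaps a neighbour.

    Inside the overlap the desired light output falls linearly from 1.0 to 0.0;
    raising the ramp to 1/gamma roughly pre-corrects for the projector's response.
    """
    weights = np.ones(width_px)
    ramp = np.linspace(1.0, 0.0, blend_px)            # linear fade in light output
    weights[width_px - blend_px:] = ramp ** (1.0 / gamma)
    return weights

# Apply to a rendered frame (H, W, 3): scale every column by its weight.
frame = np.ones((600, 800, 3))
weights = edge_blend_ramp(800, blend_px=120)
blended = frame * weights[np.newaxis, :, np.newaxis]
```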

History

The first warping technologies used a hardware image-processing unit to warp the image. This processing unit was inserted between the graphics card and the projector. The drawback of this approach is that correct warping depends on the type and quality of the signal coming from the graphics card. The processing unit also needs to buffer several lines of image data before it can start sending out the warped image, which adds latency to the display system; this can be a problem in simulators that need fast response times, for instance fighter jet simulators. Softwarping eliminates this added latency.

Related Research Articles

Scanline rendering: 3D computer graphics image rendering method

Scanline rendering is an algorithm for visible surface determination, in 3D computer graphics, that works on a row-by-row basis rather than a polygon-by-polygon or pixel-by-pixel basis. All of the polygons to be rendered are first sorted by the top y coordinate at which they first appear, then each row or scan line of the image is computed using the intersection of a scanline with the polygons on the front of the sorted list, while the sorted list is updated to discard no-longer-visible polygons as the active scan line is advanced down the picture.
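As a much simplified, purely illustrative sketch of the row-by-row idea, the following fills a single convex polygon one scanline at a time; the full algorithm instead keeps every scene polygon in a y-sorted list and maintains an active table of the polygons crossing the current scanline.

```python
def scanline_fill(polygon, height, width):
    """Rasterise a convex polygon row by row (simplified scanline approach).

    polygon: list of (x, y) vertices. Returns the set of filled (x, y) pixels.
    """
    filled = set()
    for y in range(height):
        xs = []
        # Intersect the scanline at y + 0.5 with every polygon edge.
        for (x0, y0), (x1, y1) in zip(polygon, polygon[1:] + polygon[:1]):
            if (y0 <= y + 0.5) != (y1 <= y + 0.5):        # edge crosses the scanline
                t = (y + 0.5 - y0) / (y1 - y0)
                xs.append(x0 + t * (x1 - x0))
        xs.sort()
        # Fill between pairs of intersections (even-odd rule).
        for x_start, x_end in zip(xs[::2], xs[1::2]):
            for x in range(int(x_start), int(x_end) + 1):
                if 0 <= x < width:
                    filled.add((x, y))
    return filled

# Example: fill an axis-aligned rectangle.
pixels = scanline_fill([(2, 2), (10, 2), (10, 8), (2, 8)], height=12, width=14)
```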

Gamma correction or gamma is a nonlinear operation used to encode and decode luminance or tristimulus values in video or still image systems. Gamma correction is, in the simplest cases, defined by the following power-law expression: V_out = A·V_in^γ, where the non-negative real input value V_in is raised to the power γ and multiplied by the constant A to get the output value V_out. In the common case of A = 1, inputs and outputs are typically in the range 0 to 1.
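For illustration only, a tiny sketch of the simple power-law case with A = 1 and a typical display gamma of about 2.2 (the function names are just examples):

```python
def gamma_encode(v_linear, gamma=2.2):
    """Compress a linear luminance value in 0..1 for storage or transmission (A = 1)."""
    return v_linear ** (1.0 / gamma)

def gamma_decode(v_encoded, gamma=2.2):
    """Expand an encoded value back to linear luminance."""
    return v_encoded ** gamma

assert abs(gamma_decode(gamma_encode(0.25)) - 0.25) < 1e-9
```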

Texture mapping: Method of defining surface detail on a computer-generated graphic or 3D model

Texture mapping is a method for mapping a texture on a computer-generated graphic. Texture here can be high frequency detail, surface texture, or color.

Ray casting: Methodological basis for 3D CAD/CAM solid modeling and image rendering

Ray casting is the methodological basis for 3D CAD/CAM solid modeling and image rendering. It is essentially the same as ray tracing for computer graphics, where virtual light rays are "cast" or "traced" on their path from the focal point of a camera through each pixel in the camera sensor to determine what is visible along the ray in the 3D scene. The term "ray casting" was introduced by Scott Roth while at the General Motors Research Labs in 1978–1980. His paper, "Ray Casting for Modeling Solids", describes how to model solid objects by combining primitive solids, such as blocks and cylinders, using the set operators union (+), intersection (&), and difference (-). The general idea of using these binary operators for solid modeling is largely due to Voelcker and Requicha's geometric modelling group at the University of Rochester. See solid modeling for a broad overview of solid modeling methods. Roth's 1979 ray casting system was used, for example, to model a U-joint from cylinders and blocks combined in a binary tree.
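As a purely illustrative sketch (not Roth's system), the following casts one ray per pixel from a pinhole camera against a single sphere and prints an ASCII image; a CSG ray caster would instead intersect the ray with each primitive and combine the resulting intervals with the union, intersection and difference operators.

```python
import math

def cast_ray(origin, direction, center, radius):
    """Distance along the ray to the first hit with a sphere, or None if it misses."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t > 0.0 else None

# One ray per pixel from a camera at the origin looking down -z at a sphere.
width, height = 60, 30
for j in range(height):
    row = ""
    for i in range(width):
        x = (i / width - 0.5) * 2.0      # horizontal extent of the image plane
        y = (0.5 - j / height) * 1.0     # vertical extent of the image plane
        hit = cast_ray((0.0, 0.0, 0.0), (x, y, -1.0), (0.0, 0.0, -3.0), 1.0)
        row += "#" if hit is not None else "."
    print(row)
```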

Cave automatic virtual environment: Immersive virtual reality environment

A cave automatic virtual environment is an immersive virtual reality environment where projectors are directed to between three and six of the walls of a room-sized cube. The name is also a reference to the allegory of the Cave in Plato's Republic in which a philosopher contemplates perception, reality, and illusion.

Compositing: Combining of visual elements from separate sources into single images

Compositing is the process or technique of combining visual elements from separate sources into single images, often to create the illusion that all those elements are parts of the same scene. Live-action shooting for compositing is variously called "chroma key", "blue screen", "green screen" and other names. Today, most, though not all, compositing is achieved through digital image manipulation. Pre-digital compositing techniques, however, go back as far as the trick films of Georges Méliès in the late 19th century, and some are still in use.
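As a small digital example, the standard "over" operation places a foreground element with a coverage (alpha) mask, such as one extracted by chroma keying, onto a background; non-premultiplied colours are assumed and the names are illustrative.

```python
import numpy as np

def over(fg, alpha, bg):
    """Composite a foreground onto a background using the foreground's alpha mask.

    fg, bg : (H, W, 3) colour images with values in 0..1.
    alpha  : (H, W) foreground opacity/coverage.
    """
    a = alpha[..., np.newaxis]
    return fg * a + bg * (1.0 - a)
```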

Tone mapping: Image processing technique

Tone mapping is a technique used in image processing and computer graphics to map one set of colors to another to approximate the appearance of high-dynamic-range (HDR) images in a medium that has a more limited dynamic range. Print-outs, CRT or LCD monitors, and projectors all have a limited dynamic range that is inadequate to reproduce the full range of light intensities present in natural scenes. Tone mapping addresses the problem of strong contrast reduction from the scene radiance to the displayable range while preserving the image details and color appearance important to appreciate the original scene content.
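As a minimal illustration, one widely used global curve (the simple Reinhard operator) maps scene luminance L to L / (1 + L), compressing an unbounded range into 0..1; practical tone mappers additionally handle colour, exposure and local contrast.

```python
import numpy as np

def reinhard(luminance):
    """Simple global Reinhard curve: map HDR luminance into the displayable 0..1 range."""
    return luminance / (1.0 + luminance)

L = np.array([0.01, 0.5, 2.0, 100.0, 10000.0])   # luminances spanning several orders of magnitude
print(reinhard(L))                                # every value now falls in 0..1
```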

Projection screen: Apparatus for displaying a projected image

A projection screen is an installation consisting of a surface and a support structure used for displaying a projected image for the view of an audience. Projection screens may be permanently installed on a wall, as in a movie theater; mounted to or placed in a ceiling using a rollable projection surface that retracts into a casing; painted on a wall; or portable, with tripod or floor-rising models, as in a conference room or other non-dedicated viewing space. Another popular type of portable screen is the inflatable screen, used for outdoor movie screenings.

Fulldome refers to immersive dome-based video display environments. The dome, horizontal or tilted, is filled with real-time (interactive) or pre-rendered (linear) computer animations, live capture images, or composited environments.

Large-screen television technology: Technology rapidly developed in the late 1990s and 2000s

Large-screen television technology developed rapidly in the late 1990s and 2000s. Prior to the development of thin-screen technologies, rear-projection television was standard for larger displays, and jumbotron, a non-projection video display technology, was used at stadiums and concerts. Various thin-screen technologies are being developed, but only liquid crystal display (LCD), plasma display (PDP) and Digital Light Processing (DLP) have been publicly released. Recent technologies like organic light-emitting diode (OLED) as well as not-yet-released technologies like surface-conduction electron-emitter display (SED) or field emission display (FED) are in development to supersede earlier flat-screen technologies in picture quality.

Image warping: Digital image distortion

Image warping is the process of digitally manipulating an image such that any shapes portrayed in the image have been significantly distorted. Warping may be used for correcting image distortion as well as for creative purposes. The same techniques are equally applicable to video.

Computer graphics: Graphics created using computers

Computer graphics deals with generating images and art with the aid of computers. Computer graphics is a core technology in digital photography, film, video games, digital art, cell phone and computer displays, and many specialized applications. A great deal of specialized hardware and software has been developed, with the displays of most devices being driven by computer graphics hardware. It is a vast and recently developed area of computer science. The phrase was coined in 1960 by computer graphics researchers Verne Hudson and William Fetter of Boeing. It is often abbreviated as CG, or typically in the context of film as computer generated imagery (CGI). The non-artistic aspects of computer graphics are the subject of computer science research.

Indoor golf

Indoor golf is an umbrella term for all activities in golf which can be carried out indoors. Venues include indoor driving ranges, chipping areas, putting greens, machines and home golf simulators. Many of these indoor facilities are businesses that include additional entertainment options as well as food and drink for customers.

A variety of computer graphic techniques have been used to display video game content throughout the history of video games. The predominance of individual techniques has evolved over time, primarily due to hardware advances and restrictions such as the processing power of central or graphics processing units.

Multi-image

Multi-image is the now largely obsolete practice and business of using 35mm slides (diapositives) projected by single or multiple slide projectors onto one or more screens in synchronization with an audio voice-over or music track. Multi-image productions are also known as multi-image slide presentations, slide shows and diaporamas and are a specific form of multimedia or audio-visual production.

Luminance HDR

Luminance HDR, formerly Qtpfsgui, is graphics software used for the creation and manipulation of high-dynamic-range images. Released under the terms of the GPL, it is available for Linux, Windows and Mac OS X. Luminance HDR supports several High Dynamic Range (HDR) as well as Low Dynamic Range (LDR) file formats.

Image Geometry Correction is the process of digitally manipulating image data so that the projected image precisely matches a specific projection surface or shape. Image geometry correction compensates for the distortion created by off-axis projector or screen placement, or by a non-flat screen surface, by applying a pre-compensating inverse distortion to the image in the digital domain.
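As a hedged sketch of the idea (not any particular product's method), a simple radial model can be pre-applied by sampling each output pixel from a radially scaled source position; the coefficient k and its sign would come from calibration of the actual projector and screen geometry.

```python
import numpy as np

def precompensate_radial(frame, k):
    """Pre-distort a frame with a simple radial model so that an opposite
    radial distortion later in the projection path roughly cancels out.

    k > 0 pulls edge content inwards; k < 0 pushes it outwards.
    """
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    u = (xs - w / 2.0) / (w / 2.0)            # normalised coordinates, centre at (0, 0)
    v = (ys - h / 2.0) / (h / 2.0)
    scale = 1.0 + k * (u * u + v * v)         # radial scaling per output pixel
    src_x = np.clip(np.rint(u * scale * (w / 2.0) + w / 2.0), 0, w - 1).astype(int)
    src_y = np.clip(np.rint(v * scale * (h / 2.0) + h / 2.0), 0, h - 1).astype(int)
    return frame[src_y, src_x]
```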

Warpalizer is a professional warp and blend software application made by Univisual Technologies AB in Sweden, for use in simulators.

A curved screen is an electronic display device that, contrasting with the flat-panel display, features a concave viewing surface. Curved screen TVs were introduced to the consumer market in 2013, primarily due to the efforts of Korean companies Samsung and LG, while curved screen projection displays, such as the Cinerama, have existed since the 1950s.

Virtual reality headset: Head-mounted device that provides virtual reality for the wearer

A virtual reality headset is a head-mounted device that uses 3D near-eye displays and positional tracking to provide a virtual reality environment for the user. VR headsets are widely used with VR video games, but they are also used in other applications, including simulators and trainers. VR headsets typically include a stereoscopic display, stereo sound, and sensors like accelerometers and gyroscopes for tracking the pose of the user's head to match the orientation of the virtual camera with the user's eye positions in the real world.