Focus-plus-context screen

The original focus-plus-context screen prototype consisted of an 18" LCD screen embedded in a 5' front-projected screen. The callout shows the different resolutions of the focus and context areas.

A focus-plus-context screen is a specialized type of display device that consists of one or more high-resolution "focus" displays embedded into a larger low-resolution "context" display. Image content is displayed across all display regions, such that the scaling of the image is preserved, while its resolution varies across the display regions.
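The scale-preservation property described above can be made concrete with a small sketch. The function below maps one image onto a low-resolution context display with an embedded high-resolution focus region, so that both regions show the image at the same physical scale while the focus region is sampled at a higher pixel density. All sizes, densities, and the function name are illustrative assumptions, not the specifications of any actual prototype.

```python
def split_image(image_w_px, context_cm, focus_cm, focus_offset_cm,
                context_ppcm, focus_ppcm):
    """Map one image onto a low-res context display with an embedded
    high-res focus region, keeping physical scale identical in both.

    Returns the pixel width of the context rendering, and for the focus
    region a (crop_start, crop_width, output_width) triple in pixels.
    """
    # The image spans the full physical width of the context display,
    # which fixes the image's physical scale (image pixels per cm).
    px_per_cm_of_image = image_w_px / context_cm

    # Context rendering: resample the whole image to the projector's
    # (low) pixel density.
    context_out_w = round(context_cm * context_ppcm)

    # Focus rendering: crop the slice of the image that lies under the
    # embedded display, then resample it to the LCD's (high) density.
    # Physical scale is unchanged; only sampling density increases.
    crop_start = round(focus_offset_cm * px_per_cm_of_image)
    crop_w = round(focus_cm * px_per_cm_of_image)
    focus_out_w = round(focus_cm * focus_ppcm)

    return context_out_w, (crop_start, crop_w, focus_out_w)
```

For example, a 3000-pixel-wide image on a 150 cm context display (12 px/cm) with a 45 cm focus region starting 50 cm from the left edge (40 px/cm) yields a 1800-pixel context rendering, while the focus display shows the 900-image-pixel slice under it rendered at 1800 pixels: the same physical extent, at over three times the density.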


The original focus-plus-context screen prototype consisted of an 18"/45 cm LCD screen embedded in a 5'/150 cm front-projected screen. Alternative designs have been proposed that achieve the mixed-resolution effect by combining two or more projectors with different focal lengths [1].

While the high-resolution area of the original prototype was fixed in place, follow-up projects have achieved a movable focus area by using a Tablet PC.

Patrick Baudisch [2] invented focus-plus-context screens in 2000, while at Xerox PARC.



References

  1. Ashdown, M.; Robinson, P. (2005). "Escritoire: A Personal Projected Display". IEEE MultiMedia. 12: 34–42. doi:10.1109/MMUL.2005.18. hdl:11025/1625.
  2. Patrick Baudisch