Cave automatic virtual environment


A cave automatic virtual environment (better known by the recursive acronym CAVE) is an immersive virtual reality environment in which projectors are directed at between three and six of the walls of a room-sized cube. The name is also a reference to the allegory of the cave in Plato's Republic, in which a philosopher contemplates perception, reality, and illusion.


The CAVE was invented by Carolina Cruz-Neira, Daniel J. Sandin, and Thomas A. DeFanti at the Electronic Visualization Laboratory of the University of Illinois at Chicago in 1992.[1] The images on the walls were displayed in stereo to give a depth cue.[2]

General characteristics

A CAVE is typically a video theater situated within a larger room. The walls of a CAVE are usually rear-projection screens, although large-scale LED displays are becoming more common. The floor can be a downward-projection screen, a bottom-projected screen, or a flat-panel display. The projection systems are very high-resolution because the short viewing distance requires very small pixels to retain the illusion of reality. The user wears 3D glasses inside the CAVE to see the 3D graphics it generates. People using the CAVE can see objects apparently floating in the air and can walk around them, getting a proper view of what they would look like in reality.

Tracking was initially done with electromagnetic sensors but has since moved to infrared cameras. The frames of early CAVEs had to be built from non-magnetic materials such as wood to minimize interference with the electromagnetic sensors; the change to infrared tracking removed that limitation. A CAVE user's movements are tracked by sensors typically attached to the 3D glasses, and the video continually adjusts to retain the viewer's perspective. Computers control both this aspect of the CAVE and the audio. There are typically multiple speakers placed at multiple angles in the CAVE, providing 3D sound to complement the 3D video.[citation needed]

Technology

A lifelike visual display is created by projectors positioned outside the CAVE and driven by the physical movements of a user inside it. A motion capture system records the user's position in real time. Stereoscopic LCD shutter glasses convey a 3D image: the computers rapidly generate a pair of images, one for each of the user's eyes, based on the motion capture data, and the glasses are synchronized with the projectors so that each eye sees only the correct image. Since the projectors are positioned outside the cube, mirrors are often used to reduce the distance required between the projectors and the screens. One or more computers drive the projectors; clusters of desktop PCs are popular for running CAVEs because they cost less and run faster.
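Because the viewer is usually not centered in front of a wall, each wall image is rendered with an off-axis (asymmetric) viewing frustum computed from the tracked eye position. The sketch below is illustrative only, with invented names and the simplifying assumption of a single axis-aligned front wall; it shows how per-eye positions and the frustum extents at the near plane might be derived.

```python
# Sketch of off-axis ("asymmetric frustum") projection for one CAVE wall.
# Assumes an axis-aligned front wall; all names and dimensions are illustrative.

def eye_positions(head, ipd=0.065):
    """Left/right eye positions from the tracked head position (metres).
    Eyes are offset along x only; a real system would use head orientation."""
    hx, hy, hz = head
    return (hx - ipd / 2, hy, hz), (hx + ipd / 2, hy, hz)

def offaxis_frustum(eye, wall_z, xl, xr, yb, yt, near=0.1):
    """Frustum extents (left, right, bottom, top) at the near plane for a
    wall in the plane z = wall_z spanning [xl, xr] x [yb, yt], viewed from
    `eye` (with eye z > wall_z)."""
    ex, ey, ez = eye
    dist = ez - wall_z           # perpendicular distance from eye to wall
    scale = near / dist          # project the wall edges onto the near plane
    return ((xl - ex) * scale, (xr - ex) * scale,
            (yb - ey) * scale, (yt - ey) * scale)

# A 3 m front wall at z = 0; the user stands 1.5 m back, slightly left of centre.
left_eye, right_eye = eye_positions((-0.2, 1.7, 1.5))
print(offaxis_frustum(left_eye, 0.0, -1.5, 1.5, 0.0, 3.0))
print(offaxis_frustum(right_eye, 0.0, -1.5, 1.5, 0.0, 3.0))
```

The two frustums differ slightly because of the interpupillary offset; this is what produces the stereo disparity the shutter glasses present to each eye.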

Software and libraries designed specifically for CAVE applications are available, and there are several techniques for rendering the scene. Three scene graphs are in popular use today: OpenSG, OpenSceneGraph, and OpenGL Performer. OpenSG and OpenSceneGraph are open source; OpenGL Performer is free, but its source code is not included.
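A scene graph such as those named above is a tree of nodes whose transforms compose from root to leaf. The toy sketch below is not any of these libraries' actual APIs; it only illustrates the traversal idea, restricted to translation-only transforms for brevity.

```python
# Toy scene graph: world positions accumulate along the path from the root.
# Illustrative only; real scene graphs use full 4x4 transform matrices.

class Node:
    def __init__(self, name, translation=(0.0, 0.0, 0.0)):
        self.name, self.translation, self.children = name, translation, []

    def add(self, child):
        self.children.append(child)
        return child

def draw_positions(node, parent=(0.0, 0.0, 0.0)):
    """Depth-first traversal: each node's world position is the sum of the
    translations along its path from the root."""
    world = tuple(parent[i] + node.translation[i] for i in range(3))
    yield node.name, world
    for child in node.children:
        yield from draw_positions(child, world)

root = Node("cave")
wall = root.add(Node("wall", (0.0, 0.0, -1.5)))
wall.add(Node("teapot", (0.5, 1.0, 0.0)))
print(dict(draw_positions(root)))
```

Moving the "wall" node moves the "teapot" with it, which is the property that makes scene graphs convenient for arranging virtual content around a tracked viewer.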

Calibration

To create an image that is not distorted or out of place, the displays and sensors must be calibrated. The calibration process depends on the motion capture technology being used. Optical or inertial-acoustic systems only require configuring the origin and the axes used by the tracking system. Calibration of electromagnetic sensors (such as those used in the first CAVE) is more complex. In that case, a person puts on the special glasses needed to see the images in 3D, and the projectors fill the CAVE with many one-inch boxes set one foot apart. The person then takes an instrument called an "ultrasonic measurement device", which has a cursor in the middle of it, and positions the device so that the cursor is visually in line with each projected box. This process continues until almost 400 different blocks have been measured. Each time the cursor is placed inside a block, a computer program records the location of that block and sends it to another computer. If the points are calibrated accurately, there should be no distortion in the images projected in the CAVE. Calibration also allows the CAVE to correctly identify where the user is located and to precisely track their movements, so the projectors can display images based on where the person is inside the CAVE.[3]
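The measured-versus-true point pairs collected in such a sweep amount to a correction table for the distorted electromagnetic readings. The sketch below uses invented names and a simple nearest-neighbour correction (a real system would interpolate between surrounding grid points) to illustrate how the table might be applied.

```python
# Illustrative correction table built from an electromagnetic calibration
# sweep: each measured sensor reading is paired with the known true position
# of the projected marker; later readings are corrected by the offset of the
# nearest calibrated point. Names and values are hypothetical.

def build_table(pairs):
    """pairs: list of (measured_xyz, true_xyz) from the calibration sweep.
    Stores each measured point with its measured-to-true offset."""
    return [(m, tuple(t[i] - m[i] for i in range(3))) for m, t in pairs]

def correct(reading, table):
    """Apply the offset of the nearest calibrated point (nearest-neighbour;
    a real system would interpolate between surrounding grid points)."""
    measured, offset = min(
        table, key=lambda e: sum((reading[i] - e[0][i]) ** 2 for i in range(3)))
    return tuple(reading[i] + offset[i] for i in range(3))

# Two calibrated points, each with a different field distortion.
table = build_table([((0.0, 0.0, 0.0), (0.1, 0.0, 0.0)),
                     ((1.0, 0.0, 0.0), (1.0, 0.2, 0.0))])
print(correct((0.1, 0.0, 0.0), table))
```

With a dense enough grid (the roughly 400 blocks described above), the residual error between grid points becomes small enough that the projected images appear undistorted.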

Applications

The concept of the original CAVE has been reapplied in a variety of fields, and many universities own CAVE systems. Many engineering companies use CAVEs to enhance product development.[4][5] Prototypes of parts can be created and tested, interfaces can be developed, and factory layouts can be simulated, all before any money is spent on physical parts, giving engineers a better idea of how a part will behave in the product as a whole. CAVEs are also increasingly used for collaborative planning in the construction sector.[6] Researchers can use CAVE systems to conduct studies more accessibly and effectively; for example, a CAVE was used to investigate training subjects to land an F-16 aircraft.[7]

The EVL team at UIC released CAVE2 in October 2012.[8] Like the original CAVE, it is an immersive 3D environment, but it is based on LCD panels rather than projection.


References

  1. Cruz-Neira, Carolina; Sandin, Daniel J.; DeFanti, Thomas A.; Kenyon, Robert V.; Hart, John C. (1 June 1992). "The CAVE: Audio Visual Experience Automatic Virtual Environment". Commun. ACM. 35 (6): 64–72. doi:10.1145/129888.129892. ISSN 0001-0782. S2CID 19283900.
  2. Carlson, Wayne E. (2017-06-20). "17.5 Virtual Spaces". The Ohio State University. Retrieved 2024-04-12.
  3. "The CAVE (CAVE Automatic Virtual Environment)". Archived from the original on 2007-01-09. Retrieved 2006-06-27.
  4. Ottosson, Stig. "Virtual reality in the product development process". Journal of Engineering Design. 13 (2): 159–172. doi:10.1080/09544820210129823. S2CID 110260269.
  5. Product Engineering: Tools and Methods Based on Virtual Reality. 2007-06-06. Retrieved 2014-08-04.
  6. Nostrad (2014-06-13). "Collaborative Planning with Sweco Cave: State-of-the-art in Design and Design Management". Slideshare.net. Retrieved 2014-08-04.
  7. Repperger, D. W.; Gilkey, R. H.; Green, R.; Lafleur, T.; Haas, M. W. (2003). "Effects of Haptic Feedback and Turbulence on Landing Performance Using an Immersive Cave Automatic Virtual Environment (CAVE)". Perceptual and Motor Skills. 97 (3): 820–832. doi:10.2466/pms.2003.97.3.820. PMID 14738347. S2CID 41324691.
  8. EVL (2009-05-01). "CAVE2: Next-Generation Virtual-Reality and Visualization Hybrid Environment for Immersive Simulation and Information Analysis". Retrieved 2014-08-07.