Extended reality (XR) is an umbrella term for augmented reality (AR), mixed reality (MR), and virtual reality (VR). The technology is intended to combine or mirror the physical world with a "digital twin world" able to interact with it,[1][2] giving users an immersive experience of being in a virtual or augmented environment.
The fields of virtual reality and augmented reality are rapidly growing and being applied in a wide range of areas such as entertainment, cinema, marketing, real estate, training, education, maintenance[3] and remote work.[4] Extended reality can be used for workplace collaboration, training, education, therapeutic treatments, and data exploration and analysis.
Extended reality works by acquiring visual data that is either processed locally or transmitted over a network and delivered to the human senses. By enabling real-time responses to virtual stimuli, these devices create customized experiences. Advances in 5G and edge computing – a type of computing done "at or near the source of data" – could improve data rates, increase user capacity, and reduce latency. These applications will likely expand extended reality into the future.
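The latency argument above can be made concrete with a back-of-the-envelope budget. The sketch below compares remote rendering from a distant data centre against a nearby edge node; all numbers are illustrative assumptions, not measurements, and the 20 ms comfort threshold is a commonly cited rule of thumb rather than a fixed standard.

```typescript
// Illustrative motion-to-photon latency budget for remotely rendered XR.
// Numbers are hypothetical; the point is that network round-trip time
// dominates, which is why edge computing helps.

interface LatencyBudget {
  networkRoundTripMs: number; // time to reach the compute node and back
  renderMs: number;           // remote rendering time
  displayMs: number;          // local decode and display scan-out
}

function motionToPhotonMs(b: LatencyBudget): number {
  return b.networkRoundTripMs + b.renderMs + b.displayMs;
}

// Commonly cited comfort threshold for head-mounted displays.
const MAX_COMFORTABLE_MS = 20;

// Hypothetical figures: distant cloud data centre vs. nearby edge node.
const cloud: LatencyBudget = { networkRoundTripMs: 40, renderMs: 8, displayMs: 5 };
const edge: LatencyBudget = { networkRoundTripMs: 4, renderMs: 8, displayMs: 5 };

console.log(motionToPhotonMs(cloud) <= MAX_COMFORTABLE_MS); // false (53 ms)
console.log(motionToPhotonMs(edge) <= MAX_COMFORTABLE_MS);  // true (17 ms)
```

Under these assumed figures only the edge deployment fits the budget, which matches the article's claim that edge computing could reduce latency enough to expand XR use cases.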
Around one-third of the global extended reality market is attributed to Europe.[ citation needed ]
Virtual reality (VR) is a simulated experience that employs 3D near-eye displays and pose tracking to give the user an immersive feel of a virtual world. Applications of virtual reality include entertainment, education and business. VR is one of the key technologies in the reality-virtuality continuum. As such, it is different from other digital visualization solutions, such as augmented virtuality and augmented reality.
A wearable computer, also known as a body-borne computer, is a computing device worn on the body. The definition of 'wearable computer' may be narrow or broad, extending to smartphones or even ordinary wristwatches.
Augmented reality (AR) is an interactive experience that combines the real world and computer-generated 3D content. The content can span multiple sensory modalities, including visual, auditory, haptic, somatosensory and olfactory. AR can be defined as a system that incorporates three basic features: a combination of real and virtual worlds, real-time interaction, and accurate 3D registration of virtual and real objects. The overlaid sensory information can be constructive or destructive. As such, it is one of the key technologies in the reality-virtuality continuum.
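The "accurate 3D registration" feature can be sketched as a geometry problem: a virtual object's world-space anchor is transformed into camera space and projected onto the image, so it appears locked to the real scene as the camera moves. The sketch below is a minimal illustration, not a real AR framework API; it assumes an identity camera rotation and a pinhole camera model.

```typescript
// Minimal sketch of 3D registration in AR: world-space anchor -> camera
// space -> image pixel. A real system also estimates camera rotation and
// lens distortion; those are omitted here for clarity.

type Vec3 = [number, number, number];

// Simplifying assumption: camera rotation is identity, so the pose is
// just a translation in world space.
interface CameraPose { position: Vec3; }

function worldToCamera(p: Vec3, pose: CameraPose): Vec3 {
  return [p[0] - pose.position[0], p[1] - pose.position[1], p[2] - pose.position[2]];
}

// Pinhole projection with focal length f (pixels) and principal point (cx, cy).
function project(p: Vec3, f: number, cx: number, cy: number): [number, number] {
  return [cx + (f * p[0]) / p[2], cy + (f * p[1]) / p[2]];
}

// A virtual label anchored 2 m in front of the camera and 0.5 m to the right.
const anchor: Vec3 = [0.5, 0, 2];
const pose: CameraPose = { position: [0, 0, 0] };
const [u, v] = project(worldToCamera(anchor, pose), 800, 640, 360);
console.log(u, v); // 840 360
```

Re-running the projection every frame with the freshly tracked pose is what keeps the overlay registered to the real object.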
Computer-mediated reality refers to the ability to add to, subtract information from, or otherwise manipulate one's perception of reality through the use of a wearable computer or hand-held device such as a smartphone.
Mixed reality (MR) is a term used to describe the merging of a real-world environment and a computer-generated one. Physical and virtual objects may co-exist in mixed reality environments and interact in real time.
The metaverse is a loosely defined term referring to virtual worlds in which users represented by avatars interact, usually in 3D and focused on social and economic connection.
Interactive media normally refers to products and services on digital computer-based systems which respond to the user's actions by presenting content such as text, moving image, animation, video and audio. Since its early conception, various forms of interactive media have emerged with impacts on educational and commercial markets. With the rise of decision-driven media, concerns have arisen about its impacts on cybersecurity and societal distraction.
Ambient intelligence (AmI) refers to environments with electronic devices that are aware of and can recognize the presence of human beings and adapt accordingly. This concept encompasses various technologies in consumer electronics, telecommunications, and computing. Its primary purpose is to enhance user interactions through context-aware systems.
Locative media or location-based media (LBM) is a virtual medium of communication functionally bound to a location. The physical implementation of locative media, however, is not bound to the same location to which the content refers.
In virtual reality (VR), immersion is the perception of being physically present in a non-physical world. The perception is created by surrounding the user of the VR system in images, sound or other stimuli that provide an engrossing total environment.
The term "Supranet" was introduced by Luca Delgrossi and Domenico Ferrari in 1997 during the 7th International Workshop on Network and Operating System Support for Digital Audio and Video (NOSSDAV). It was originally defined as "a virtual network for a group on top of any integrated-services internetwork." Another interpretation, as defined by Gartner, an American technological research firm, describes Supranet as the "fusion of the physical and digital worlds."
A projection augmented model is an element sometimes employed in virtual reality systems. It consists of a physical three-dimensional model onto which a computer image is projected to create a realistic looking object. Importantly, the physical model is the same geometric shape as the object that the PA model depicts.
In computing, 3D interaction is a form of human-machine interaction where users are able to move and perform interaction in 3D space. Both human and machine process information where the physical position of elements in the 3D space is relevant.
Augmented learning is an on-demand learning technique where the environment adapts to the learner. By providing remediation on-demand, learners can gain greater understanding of a topic while stimulating discovery and learning. Technologies incorporating rich media and interaction have demonstrated the educational potential that scholars, teachers and students are embracing. Instead of focusing on memorization, the learner experiences an adaptive learning experience based upon the current context. The augmented content can be dynamically tailored to the learner's natural environment by displaying text, images, video or even playing audio. This additional information is commonly shown in a pop-up window for computer-based environments.
Wearable technology is any technology that is designed to be used while worn. Common types of wearable technology include smartwatches and smartglasses. Wearable electronic devices are often close to or on the surface of the skin, where they detect, analyze, and transmit information such as vital signs and ambient data, in some cases allowing immediate biofeedback to the wearer.
A digital twin is a digital model of an intended or actual real-world physical product, system, or process that serves as a digital counterpart of it for purposes such as simulation, integration, testing, monitoring, and maintenance.
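The monitoring use of a digital twin can be illustrated with a small sketch: a virtual counterpart that ingests telemetry from a physical asset and is queried instead of the asset itself. The pump, its telemetry fields, and the overheating rule are hypothetical examples, not a standard model.

```typescript
// Minimal digital-twin sketch: the twin mirrors a physical asset's state
// so that monitoring rules can be evaluated against the virtual copy.

interface Telemetry {
  timestamp: number;
  temperatureC: number;
  rpm: number;
}

class PumpTwin {
  private history: Telemetry[] = [];

  // Called whenever the physical pump reports a new reading.
  ingest(reading: Telemetry): void {
    this.history.push(reading);
  }

  latest(): Telemetry | undefined {
    return this.history[this.history.length - 1];
  }

  // Simple monitoring rule evaluated against the mirrored state,
  // without touching the physical asset.
  overheating(limitC: number): boolean {
    const last = this.latest();
    return last !== undefined && last.temperatureC > limitC;
  }
}

const twin = new PumpTwin();
twin.ingest({ timestamp: 1, temperatureC: 65, rpm: 1400 });
twin.ingest({ timestamp: 2, temperatureC: 92, rpm: 1450 });
console.log(twin.overheating(85)); // true
```

The same mirrored history could feed simulation or predictive maintenance, the other purposes the definition lists.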
WebXR Device API is a Web application programming interface (API) that describes support for accessing augmented reality and virtual reality devices, such as the HTC Vive, Oculus Rift, Meta Quest, Google Cardboard, HoloLens, Apple Vision Pro, Android XR-based devices, Magic Leap or Open Source Virtual Reality (OSVR), in a web browser. The WebXR Device API and related APIs are standards defined by W3C groups, the Immersive Web Community Group and Immersive Web Working Group. While the Community Group works on the proposals in the incubation period, the Working Group defines the final web specifications to be implemented by the browsers.
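Typical WebXR usage follows a feature-detect, check-support, request-session pattern. The entry points used below (`navigator.xr`, `isSessionSupported`, `requestSession`) are defined by the WebXR Device API specification; the sketch wraps them behind a narrow interface so the flow is clear, but actually running it requires a browser with XR support and a connected device.

```typescript
// Sketch of the standard WebXR session bootstrap. The XRLike interface is
// a narrowed stand-in for the spec's XRSystem; in a browser you would pass
// navigator.xr directly.

interface XRLike {
  isSessionSupported(mode: string): Promise<boolean>;
  requestSession(mode: string): Promise<unknown>;
}

async function startImmersiveVR(xr: XRLike | undefined): Promise<unknown | null> {
  // Step 1: feature-detect. Browsers without WebXR expose no navigator.xr.
  if (!xr) return null;
  // Step 2: ask whether an immersive VR session is possible on this device.
  if (!(await xr.isSessionSupported("immersive-vr"))) return null;
  // Step 3: request the session; rendering is then driven by its frame loop.
  return xr.requestSession("immersive-vr");
}

// In a browser this would be invoked from a user gesture, e.g.:
//   button.onclick = () => startImmersiveVR(navigator.xr);
```

Falling back to `null` when the API or device is absent mirrors the progressive-enhancement style the Immersive Web groups recommend, since most visitors will not have an XR device attached.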
OpenXR is an open, royalty-free standard for access to virtual reality and augmented reality platforms and devices. It is developed by a working group managed by the Khronos Group consortium. OpenXR was announced by the Khronos Group on February 27, 2017, during GDC 2017. A provisional version of the standard was released on March 18, 2019, to enable developers and implementers to provide feedback on it. On July 29, 2019, OpenXR 1.0 was released to the public by Khronos Group at SIGGRAPH 2019, and on April 15, 2024, OpenXR 1.1 was released by Khronos.
Spatial computing is any of various human–computer interaction techniques that are perceived by users as taking place in the real world, in and around their natural bodies and physical environments, instead of constrained to and perceptually behind computer screens. This concept inverts the long-standing practice of teaching people to interact with computers in digital environments, and instead teaches computers to better understand and interact with people more naturally in the human world. This concept overlaps with and encompasses others including extended reality, augmented reality, mixed reality, natural user interface, contextual computing, affective computing, and ubiquitous computing. Usage of these terms for labeling and discussing adjacent technologies is imprecise.
Human-City Interaction is the intersection between human-computer interaction and urban computing. The area involves data-driven methods, such as analysis tools and prediction methods, to present solutions to urban design problems. Practitioners, designers, and software engineers in this area employ large sets of user-centric data to design urban environments with high levels of interactivity. This discipline mainly focuses on the user perspective and devises various interaction designs between the citizen (user) and various urban entities. Common examples in the discipline include interactivity between humans and buildings, interaction between humans and IoT devices, and participatory and collective urban design. The discipline attracts growing interest from people of various backgrounds, such as designers, urban planners, computer scientists, and even architects. Although the design canvas between human and city is broad, Lee et al. proposed a framework considering these multi-disciplinary interests together, in which emerging technologies such as extended reality (XR) can serve as a platform for such co-design purposes.