Mixed reality (MR) is a term used to describe the merging of a real-world environment and a computer-generated one. Physical and virtual objects may co-exist in mixed reality environments and interact in real time.
Mixed reality that incorporates haptics has sometimes been referred to as visuo-haptic mixed reality. [1] [2]
In a physics context, the term "interreality system" refers to a virtual reality system coupled with its real-world counterpart. [3] A 2007 paper describes an interreality system comprising a real physical pendulum coupled to a pendulum that exists only in virtual reality. [4] This system has two stable states of motion: a "dual reality" state in which the motions of the two pendula are uncorrelated, and a "mixed reality" state in which the pendula exhibit stable, highly correlated phase-locked motion. The terms "mixed reality" and "interreality" are clearly defined in the context of physics and may be used slightly differently in other fields; in general, however, mixed reality is understood as "bridging the physical and virtual world". [5]
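The transition between the two states can be illustrated with a toy model. The sketch below is a minimal Kuramoto-style simulation with made-up frequencies and coupling strengths, not the pendulum model from the cited paper: two slightly detuned phase oscillators lock together only when the coupling is strong enough.

```python
import math

def phase_locked(coupling, steps=30000, dt=0.001):
    """Integrate two coupled phase oscillators (Euler method) and report
    whether their phase difference settles (the 'mixed reality' state)
    or keeps drifting (the 'dual reality' state)."""
    w1, w2 = 2.0, 2.2          # slightly detuned natural frequencies (rad/s)
    p1, p2 = 0.0, 1.0          # initial phases
    tail = []
    for i in range(steps):
        d1 = (w1 + coupling * math.sin(p2 - p1)) * dt
        d2 = (w2 + coupling * math.sin(p1 - p2)) * dt
        p1 += d1
        p2 += d2
        if i >= steps - 1000:  # watch the last second of simulated time
            tail.append(p2 - p1)
    # Locked if the phase difference stayed essentially constant.
    return max(tail) - min(tail) < 1e-3

print(phase_locked(1.0))    # strong coupling: phases lock
print(phase_locked(0.01))   # weak coupling: phases drift apart
```

Below the critical coupling strength the phase difference drifts indefinitely, mirroring the uncorrelated "dual reality" state; above it, the oscillators settle into the phase-locked "mixed reality" state.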
Mixed reality has been used in applications across fields including design, education, entertainment, military training, healthcare, product content management, and human-in-the-loop operation of robots.
Simulation-based learning includes VR- and AR-based training and interactive, experiential learning. There are many potential use cases for mixed reality in both educational and professional training settings. In education, AR has been used to simulate historical battles, providing an immersive experience for students and potentially enhancing their learning. [6] In addition, AR has shown effectiveness in university education for health science and medical students within disciplines that benefit from 3D representations of models, such as physiology and anatomy. [7] [8]
From television shows to game consoles, mixed reality has many applications in the field of entertainment.
The 2004 British game show Bamzooki called upon child contestants to create virtual "Zooks" and watch them compete in a variety of challenges. [9] The show used mixed reality to bring the Zooks to life. The television show ran for four seasons, ending in 2010. [9]
The 2003 game show FightBox also called upon contestants to create competitive characters and used mixed reality to allow them to interact. [10] Unlike Bamzooki's generally non-violent challenges, the goal of FightBox was for contestants to create the strongest fighter to win the competition. [10]
In 2009, researchers presented at the International Symposium on Mixed and Augmented Reality (ISMAR) their social product called "BlogWall", which consisted of a projected screen on a wall. [11] Users could post short text clips or images on the wall and play simple games such as Pong. [11] The BlogWall also featured a poetry mode, in which it would rearrange the messages it received to form a poem, and a polling mode, in which users could ask others to answer their polls. [11]
Mario Kart Live: Home Circuit is a mixed reality racing game for the Nintendo Switch that was released in October 2020. The game allows players to use their home as a race track. [12] Within the first week of release, 73,918 copies were sold in Japan, making it the country's best-selling game of the week. [13]
Other research has examined the potential for mixed reality to be applied to theatre, film, and theme parks. [14]
The first fully immersive mixed reality system was the Virtual Fixtures platform, developed in 1992 by Louis Rosenberg at the Armstrong Laboratories of the United States Air Force. [15] It enabled human users to control robots in real-world environments that included real physical objects and 3D virtual overlays ("fixtures") added to enhance human performance of manipulation tasks. Published studies showed that by introducing virtual objects into the real world, significant performance increases could be achieved by human operators. [15] [16] [17]
Combat reality can be simulated and represented using complex, layered data and visual aids, most of which are delivered via head-mounted displays (HMDs), a term encompassing any display technology that can be worn on the user's head. [18] Military training solutions are often built on commercial off-the-shelf (COTS) technologies, such as Improbable's synthetic environment platform, Virtual Battlespace 3, and VirTra, with the latter two platforms used by the United States Army. As of 2018, VirTra is being used by both civilian and military law enforcement to train personnel in a variety of scenarios, including active shooter, domestic violence, and military traffic stops. [19] [20] Mixed reality technologies have been used by the United States Army Research Laboratory to study how such stress affects decision-making. With mixed reality, researchers may safely study military personnel in scenarios that soldiers would be unlikely to survive in reality. [21]
In 2017, the U.S. Army was developing the Synthetic Training Environment (STE), a collection of technologies for training purposes that was expected to include mixed reality. As of 2018, STE was still in development without a projected completion date. Stated goals of STE included enhancing realism, increasing simulation training capabilities, and making STE available to other systems. [22]
It was claimed that mixed-reality environments like STE could reduce training costs, [23] [24] for example by reducing the amount of ammunition expended during training. [25] In 2018, it was reported that STE would be able to represent any part of the world's terrain for training purposes. [26] STE would offer a variety of training opportunities for squad, brigade, and combat teams, including Stryker, armory, and infantry teams. [27]
A blended space is a space in which a physical environment and a virtual environment are deliberately integrated in a close-knit way. The aim of blended space design is to give people the experience of feeling a sense of presence in the blended space and of acting directly on its content. [28] [29] Examples of blended spaces include augmented reality devices such as the Microsoft HoloLens and games such as Pokémon Go, as well as many smartphone tourism apps, smart meeting rooms, and applications such as bus tracker systems.
The idea of blending comes from the ideas of conceptual integration, or conceptual blending introduced by Gilles Fauconnier and Mark Turner.
Manuel Imaz and David Benyon introduced blending theory to look at concepts in software engineering and human-computer interaction. [30]
The simplest implementation of a blended space requires two features. The first is input, which can range from tactile interaction to changes in the environment. The second is notifications received from the digital space. The correspondences between the physical and digital spaces have to be abstracted and exploited by the design of the blended space. Seamless integration of the two spaces is rare; blended spaces need anchoring points, or technologies, to link the spaces. [29]
A well-designed blended space advertises and conveys the digital content in a subtle and unobtrusive way. Presence can be measured using physiological, behavioral, and subjective measures derived from the space. [30]
There are two main components to any blended space:

Physical space – a physical space is one that affords spatial interaction. [31] This kind of spatial interaction greatly impacts the user's cognitive model. [32]

Digital space – the digital space (also called the information space) consists of all the information content, which can take any form. [33]

For presence in a blended space, there must be both a physical space and a digital space. The richer the communication between the physical and digital spaces, the richer the experience. [28] This communication happens through correspondences, which relay the state and nature of objects.

For the purpose of looking at blended spaces, the nature and characteristics of any space can be represented by factors such as:

Ontology – the different types of objects present in the space, the total number of objects, and the relationships between objects and the space.
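As a rough sketch of these ideas, a blended space can be modeled as a physical space and a digital space linked by explicit correspondences that act as anchoring points. The class and field names below are illustrative, not drawn from the cited literature:

```python
from dataclasses import dataclass, field

@dataclass
class PhysicalSpace:
    """Objects available for spatial interaction in the real environment."""
    objects: set = field(default_factory=set)

@dataclass
class DigitalSpace:
    """The information space: all digital content, keyed by identifier."""
    content: dict = field(default_factory=dict)

@dataclass
class BlendedSpace:
    """Links the two spaces through explicit correspondences (anchors)."""
    physical: PhysicalSpace
    digital: DigitalSpace
    anchors: dict = field(default_factory=dict)  # physical object -> content key

    def lookup(self, physical_object):
        """Follow a correspondence from a physical object to its digital content."""
        key = self.anchors.get(physical_object)
        return self.digital.content.get(key)

# A bus-tracker app viewed as a blended space: a physical bus stop is
# anchored to live arrival data in the digital space.
space = BlendedSpace(
    physical=PhysicalSpace(objects={"stop_42"}),
    digital=DigitalSpace(content={"arrivals_42": "Route 7 in 3 min"}),
    anchors={"stop_42": "arrivals_42"},
)
print(space.lookup("stop_42"))  # -> Route 7 in 3 min
```

An object with no anchor simply has no digital counterpart, reflecting the point above that seamless integration is rare and the design must supply the links.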
Mixed reality allows a global workforce of remote teams to work together on an organization's business challenges. No matter where they are physically located, employees can wear a headset and noise-canceling headphones and enter a collaborative, immersive virtual environment. Because such applications can translate language in real time, language barriers become less of an obstacle. The process also increases flexibility: while many employers still use inflexible models of fixed working time and location, there is evidence that employees are more productive when they have greater autonomy over where, when, and how they work. Some employees prefer loud work environments, while others need silence; some work best in the morning, others at night. Employees also benefit from autonomy in how they work because people process information differently: the classic model of learning styles differentiates between visual, auditory, and kinesthetic learners. [34]
Machine maintenance can also be executed with the help of mixed reality. Larger companies with multiple manufacturing locations and a lot of machinery can use mixed reality to educate and instruct their employees. Machines need regular checkups and occasional adjustments; since these adjustments are mostly done by humans, employees need to be informed about them. By using mixed reality, employees from multiple locations can wear headsets and receive live instruction about the changes. Instructors can operate the representation that every employee sees, gliding through the production area, zooming in on technical details, and explaining every change needed. Employees completing a five-minute training session with such a mixed-reality program have been shown to attain the same learning results as reading a 50-page training manual. [35] An extension of this environment incorporates live data from operating machinery into the virtual collaborative space, associated with three-dimensional virtual models of the equipment. This enables the training and execution of maintenance, operational, and safety work processes that would otherwise be difficult in a live setting, while making use of expertise regardless of its physical location. [36]
Mixed reality can be used to build mockups that combine physical and digital elements. With the use of simultaneous localization and mapping (SLAM), mockups can interact with the physical world to provide more realistic sensory experiences, [37] such as object permanence, which would normally be infeasible or extremely difficult to track and analyze without both digital and physical aids. [38] [39]
Smartglasses can be incorporated into the operating room to aid in surgical procedures, for example by displaying patient data conveniently while overlaying precise visual guides for the surgeon. [40] [41] Mixed reality headsets like the Microsoft HoloLens have been theorized to allow for efficient sharing of information between doctors, in addition to providing a platform for enhanced training. [42] [41] In some situations (e.g., a patient with a contagious disease), this can improve doctor safety and reduce PPE use. [43] While mixed reality has considerable potential for enhancing healthcare, it also has drawbacks. [41] The technology may never fully integrate into scenarios where a patient is present, as there are ethical concerns about the doctor not being able to see the patient. [41] [39] Mixed reality is also useful for healthcare education. For example, according to a 2022 report from the World Economic Forum, 85% of first-year medical students at Case Western Reserve University reported that using mixed reality to teach anatomy was "equivalent" or "better" than the in-person class. [44]
Before the advent of mixed reality, product content management consisted largely of brochures, with little customer-product engagement outside this two-dimensional realm. [45] With improvements in mixed reality technology, new forms of interactive product content management have emerged. Most notably, three-dimensional digital renderings of normally two-dimensional products have increased the reach and effectiveness of consumer-product interaction. [46]
Recent advances in mixed-reality technologies have renewed interest in alternative modes of communication for human-robot interaction. [47] Human operators wearing mixed reality glasses such as the HoloLens can interact with (control and monitor) machines such as robots and lifting equipment [48] on site in a digital factory setup. This use case typically requires real-time data communication between the mixed reality interface and the machine, process, or system, which can be enabled by incorporating digital twin technology. [48]
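The data path just described can be sketched in a few lines. The class names and telemetry fields below are illustrative placeholders, not the API of any real digital twin product: a twin mirrors the machine's last known state, and an MR overlay renders whatever the twin currently holds.

```python
import time

class Machine:
    """Stand-in for a physical machine publishing telemetry."""
    def __init__(self):
        self.temperature = 20.0

class DigitalTwin:
    """Mirrors the machine's last known state for remote viewers."""
    def __init__(self, machine):
        self.machine = machine
        self.state = {}

    def sync(self):
        # In a real deployment this update would arrive over a message
        # bus or industrial protocol rather than a direct attribute read.
        self.state = {"temperature": self.machine.temperature,
                      "timestamp": time.time()}

class MixedRealityOverlay:
    """What a headset would render next to the physical machine."""
    def __init__(self, twin):
        self.twin = twin

    def label(self):
        t = self.twin.state.get("temperature")
        return f"temp: {t:.1f} C" if t is not None else "no data"

machine = Machine()
machine.temperature = 75.5        # live reading from the shop floor
twin = DigitalTwin(machine)
twin.sync()
print(MixedRealityOverlay(twin).label())  # -> temp: 75.5 C
```

The twin decouples the headset from the machine: many remote viewers can read one synchronized state without each opening a connection to the equipment itself.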
Mixed reality allows sellers to show customers how a product will suit their needs. A seller may demonstrate how a product would fit into a buyer's home; with this assistance, the buyer can virtually pick up the item, rotate it, and place it wherever desired. This improves the buyer's confidence in making a purchase and reduces the number of returns. [49]
Architectural firms can allow customers to virtually visit their desired homes.
While mixed reality refers to the intertwining of the virtual world and the physical world at a high level, there are a variety of digital mediums used to accomplish a mixed reality environment. They may range from handheld devices to entire rooms, each having practical uses in different disciplines. [50] [51]
The cave automatic virtual environment (CAVE) is an environment, typically a small room located in a larger outer room, in which a user is surrounded by projected displays to the sides, above, and below. [50] 3D glasses and surround sound complement the projections to provide the user with a sense of perspective that aims to simulate the physical world. [50] Since their development, CAVE systems have been adopted by engineers developing and testing prototype products. [52] They allow product designers to test their prototypes before expending resources to produce a physical prototype, while also opening doors for "hands-on" testing of non-tangible objects such as microscopic environments or entire factory floors. [52] After developing the CAVE, the same researchers eventually released the CAVE2, which addresses the original CAVE's shortcomings. [53] The original projections were replaced with 37-megapixel 3D LCD panels, network cables integrate the CAVE2 with the internet, and a more precise camera system allows the environment to shift as the user moves through it. [53]
A head-up display (HUD) is a display that projects imagery directly in front of a viewer without heavily obstructing their view of the environment. A standard HUD is composed of three elements: a projector, which overlays the HUD's graphics; a combiner, the surface onto which the graphics are projected; and a computer, which integrates the other two components and performs any real-time calculations or adjustments. [54] Prototype HUDs were first used in military applications to aid fighter pilots in combat, but eventually evolved to aid in all aspects of flight, not just combat. [55] HUDs were then standardized across commercial aviation and eventually spread to the automotive industry. One of the first applications of HUDs in automotive transport came with Pioneer's heads-up system, which replaces the driver-side sun visor with a display that projects navigation instructions onto the road ahead of the driver. [56] Major manufacturers such as General Motors, Toyota, Audi, and BMW have since included some form of head-up display in certain models.
A head-mounted display (HMD), worn over the entire head or in front of the eyes, is a device that uses one or two optics to project an image directly in front of the user's eyes. Its applications range across medicine, entertainment, aviation, and engineering, providing a layer of visual immersion that traditional displays cannot achieve. [57] Head-mounted displays are most popular with consumers in the entertainment market, where major tech companies have developed HMDs to complement their existing products. [58] [59] These consumer headsets, however, are mostly virtual reality displays and do not integrate the physical world. Augmented reality HMDs, by contrast, have found more favor in enterprise environments. Microsoft's HoloLens is an augmented reality HMD with applications in medicine, giving doctors more profound real-time insight, as well as in engineering, overlaying important information on top of the physical world. [60] Another notable augmented reality HMD has been developed by Magic Leap, a startup with a similar product aimed at both the private sector and the consumer market. [61]
Mobile devices, including smartphones and tablets, have continued to increase in computing power and portability. Many modern mobile devices come equipped with toolkits for developing augmented reality applications, [51] which allow developers to overlay computer graphics onto video of the physical world. The first augmented reality mobile game with widespread success was Pokémon GO, which was released in 2016 and accumulated 800 million downloads. [62] While entertainment applications utilizing AR have proven successful, productivity and utility apps have also begun integrating AR features. Google has released updates to Google Maps that include AR navigation directions overlaid onto the streets in front of the user, and has expanded its Translate app to overlay translated text onto physical writing in over 20 languages. [63] Mobile devices are a unique display technology in that they are commonly carried at all times.
Virtual reality (VR) is a simulated experience that employs 3D near-eye displays and pose tracking to give the user an immersive feel of a virtual world. Applications of virtual reality include entertainment, education and business. VR is one of the key technologies in the reality-virtuality continuum. As such, it is different from other digital visualization solutions, such as augmented virtuality and augmented reality.
Augmented reality (AR) is an interactive experience that combines the real world and computer-generated 3D content. The content can span multiple sensory modalities, including visual, auditory, haptic, somatosensory and olfactory. AR can be defined as a system that incorporates three basic features: a combination of real and virtual worlds, real-time interaction, and accurate 3D registration of virtual and real objects. The overlaid sensory information can be constructive, or destructive. As such, it is one of the key technologies in the reality-virtuality continuum.
Computer-mediated reality refers to the ability to add to, subtract information from, or otherwise manipulate one's perception of reality through the use of a wearable computer or hand-held device such as a smartphone.
A head-mounted display (HMD) is a display device, worn on the head or as part of a helmet, that has a small display optic in front of one or each eye. HMDs have many uses including gaming, aviation, engineering, and medicine.
In virtual reality (VR), immersion is the perception of being physically present in a non-physical world. The perception is created by surrounding the user of the VR system in images, sound or other stimuli that provide an engrossing total environment.
A projection augmented model is an element sometimes employed in virtual reality systems. It consists of a physical three-dimensional model onto which a computer image is projected to create a realistic-looking object. Importantly, the physical model has the same geometric shape as the object that the PA model depicts.
In computing, 3D interaction is a form of human-machine interaction where users are able to move and perform interaction in 3D space. Both human and machine process information where the physical position of elements in the 3D space is relevant.
A virtual touch screen (VTS) is a user interface system that augments virtual objects into reality either through a projector or optical display using sensors to track a person's interaction with the object. For instance, using a display and a rear projector system a person could create images that look three-dimensional and appear to float in midair. Some systems utilize an optical head-mounted display to augment the virtual objects onto the transparent display utilizing sensors to determine visual and physical interactions with the virtual objects projected.
Computer-generated imagery (CGI) is a specific technology or application of computer graphics for creating or improving images in art, printed media, simulators, videos, and video games. These images are either static or dynamic. CGI refers to both 2D and 3D computer graphics used to design characters, virtual worlds, or scenes and special effects. The application of CGI to creating or improving animations is called computer animation, or CGI animation.
An ARTag is a fiducial marker system to support 3D registration (alignment) and pose tracking in augmented reality. ARTags can be used to facilitate the appearance of virtual objects, games, and animations within the real world. Like the earlier ARToolKit system, they allow for video tracking capabilities that calculate a camera's position and orientation relative to physical markers in real time. Once the camera's position is known, a virtual camera can be positioned at the same point, revealing the virtual object at the location of the ARTag. It thus addresses two of the key problems in augmented reality: viewpoint tracking and virtual object interaction.
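The final step above, placing a virtual camera at the recovered pose so the virtual object lands on the marker, reduces to a coordinate transform plus a pinhole projection. A minimal sketch follows; the pose values and camera intrinsics are made up for illustration, whereas a real system would recover them from the detected marker corners:

```python
import math

def marker_to_camera(point, yaw, t):
    """Transform a point from marker coordinates into camera coordinates,
    assuming the marker is rotated by `yaw` about the camera's y-axis
    and translated by t = (tx, ty, tz)."""
    x, y, z = point
    c, s = math.cos(yaw), math.sin(yaw)
    return (c * x + s * z + t[0], y + t[1], -s * x + c * z + t[2])

def project(point_cam, focal=800.0, cx=320.0, cy=240.0):
    """Pinhole projection of a camera-space point to pixel coordinates."""
    x, y, z = point_cam
    return (cx + focal * x / z, cy + focal * y / z)

# A marker 2 m straight ahead of the camera: its origin projects to the
# image center, so a virtual object anchored there is drawn exactly on
# top of the physical tag.
center = project(marker_to_camera((0.0, 0.0, 0.0), 0.0, (0.0, 0.0, 2.0)))
print(center)  # -> (320.0, 240.0)
```

Because the virtual camera shares the pose recovered from the marker, any point expressed in marker coordinates renders at the correct pixel, which is what keeps the virtual object registered to the real world as the camera moves.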
zSpace is a technology firm based in San Jose, California that combines elements of virtual and augmented reality in a computer. zSpace mostly provides AR/VR technology to the education market. It allows teachers and learners to interact with simulated objects in virtual environments.
Windows Mixed Reality (WMR) is a discontinued platform by Microsoft which provides augmented reality and virtual reality experiences with compatible head-mounted displays.
Microsoft HoloLens is an augmented reality (AR)/mixed reality (MR) headset developed and manufactured by Microsoft. HoloLens runs the Windows Mixed Reality platform under the Windows 10 operating system. Some of the positional tracking technology used in HoloLens can trace its lineage to the Microsoft Kinect, an accessory for Microsoft's Xbox 360 and Xbox One game consoles that was introduced in 2010.
Extended reality (XR) is an umbrella term to refer to augmented reality (AR), virtual reality (VR), and mixed reality (MR). The technology is intended to combine or mirror the physical world with a "digital twin world" able to interact with it, giving users an immersive experience by being in a virtual or augmented environment.
Sidekick is a project developed by NASA and Microsoft, started in December 2015 on the International Space Station, which provides virtual assistance to astronauts using Microsoft HoloLens augmented reality headsets.
Industrial augmented reality (IAR) is related to the application of augmented reality (AR) and head-up displays to support an industrial process. The use of IAR dates back to the 1990s with the work of Thomas Caudell and David Mizell on the application of AR at Boeing. Since then, several applications of this technique have been proposed, showing its potential for supporting industrial processes. Although there have been several advances in technology, IAR is still considered to be at an early stage of development.
In virtual reality (VR) and augmented reality (AR), a pose tracking system detects the precise pose of head-mounted displays, controllers, other objects or body parts within Euclidean space. Pose tracking is often referred to as 6DOF tracking, for the six degrees of freedom in which the pose is often tracked.
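A 6DOF pose is commonly represented as a 3D position plus a unit quaternion for orientation. The sketch below is a generic formulation of that representation, not any particular tracker's API: it maps a point from a tracked object's local frame into the world frame.

```python
import math

def _cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

class Pose:
    """A 6DOF pose: position (x, y, z) plus orientation as a unit
    quaternion (w, x, y, z) covering the three rotational degrees."""
    def __init__(self, position=(0.0, 0.0, 0.0), orientation=(1.0, 0.0, 0.0, 0.0)):
        self.p = position
        self.q = orientation

    def rotate(self, v):
        """Rotate v by the quaternion: v' = v + w*t + u x t, with t = 2 (u x v)."""
        w, u = self.q[0], self.q[1:]
        t = tuple(2.0 * c for c in _cross(u, v))
        uxt = _cross(u, t)
        return tuple(v[i] + w * t[i] + uxt[i] for i in range(3))

    def transform(self, v):
        """Map a point from the tracked object's local frame to the world frame."""
        r = self.rotate(v)
        return tuple(r[i] + self.p[i] for i in range(3))

# A headset 1 m above the origin, turned 90 degrees about the vertical (y) axis.
half = math.radians(45.0)  # quaternions encode half the rotation angle
head = Pose(position=(0.0, 1.0, 0.0),
            orientation=(math.cos(half), 0.0, math.sin(half), 0.0))
print(head.transform((0.0, 0.0, -1.0)))  # one metre "forward" in local coordinates
```

Tracking systems report a stream of such poses each frame; the renderer applies the inverse transform to place virtual content correctly relative to the moving display or controller.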
Virtual reality (VR) is a computer application which allows users to experience immersive, three-dimensional visual and audio simulations. According to Pinho (2004), virtual reality is characterized by immersion in the 3D world, interaction with virtual objects, and involvement in exploring the virtual environment. The feasibility of virtual reality in education has been debated due to several obstacles, such as the affordability of VR software and hardware. The psychological effects of virtual reality are also a negative consideration. However, recent technological progress has made VR more viable and promises new learning models and styles for students. These facets of virtual reality have found applications within the primary education sphere in enhancing student learning, increasing engagement, and creating new opportunities for addressing learning preferences.
There are many applications of virtual reality. Applications have been developed in a variety of domains, such as education, architectural and urban design, digital marketing and activism, engineering and robotics, entertainment, virtual communities, fine arts, healthcare and clinical therapies, heritage and archaeology, occupational safety, social science and psychology.
Microsoft Holoportation is a project from Microsoft Research that demonstrates real-time holographic communications with the Microsoft HoloLens. Holoportation is described as "a new type of 3D capture technology that allows high-quality 3D models of people to be reconstructed, compressed and transmitted anywhere in the world in real time. This allows users wearing virtual or augmented reality displays to see, hear and interact with remote participants in 3D, almost as if they were present in the same physical space. From an audio-visual perspective, communicating and interacting with remote users edges closer to face-to-face communication." The project was launched by Shahram Izadi and his Microsoft team in 2016. In March 2016, Alex Kipman performed a live demonstration of the technology at the TED conference as part of his talk. In 2020, Microsoft Mesh was launched, offering Holoportation capabilities to "project yourself as your most lifelike, photorealistic self in mixed reality to interact as if you are there in person".