Industrial augmented reality (IAR) is the application of augmented reality (AR) and heads-up displays to support an industrial process. [1] The use of IAR dates back to the 1990s with the work of Thomas Caudell and David Mizell on the application of AR at Boeing. [2] Since then, several applications of the technique have been proposed, showing its potential for supporting industrial processes. Despite many advances in technology, IAR is still considered to be at an infant developmental stage. [3] [4] [5]
Some challenging factors of IAR development relate to the interdisciplinary knowledge it requires in areas such as object recognition, computer graphics, artificial intelligence, and human-computer interaction. At least partial context understanding is needed to adapt to unexpected conditions and to interpret the user's actions and intentions. In addition, intuitive user interfaces remain a challenge, as do hardware improvements such as better sensors and displays. [4] [6] [7]
Further, some controversy remains about the boundaries that define IAR and about its actual benefits for some activities given currently available technology. [8]
Although the origins of augmented reality date from the 1960s, when Ivan Sutherland created the first head-mounted display, [9] it did not gain strength until the early 1990s, when David Mizell and Thomas Caudell developed the first industrial AR application at Boeing. They used a head-mounted display (HMD) to superimpose a computer-generated diagram of the manufacturing process onto the real world, with real-time registration based on the calculated position of the user's head. They coined the name augmented reality for this technology. [2] [10]
Around the same time, several prototypes were proposed to demonstrate AR's application to manufacturing: [11] a laser-printer maintenance application was presented in 1993 by Steven K. Feiner and coauthors, introducing the concept of knowledge-based AR for maintenance assistance (KARMA). [12] [13] Ross Whitaker et al. proposed a system that displays the name of the engine part at which the user points. [14]
By the 2000s, interest in AR had grown considerably, and several important research groups were funded: [10] ARVIKA, the largest consortium for IAR, backed by Germany's Federal Ministry of Education and Research, with the aim of researching and implementing AR in relevant German industries; [15] several projects funded by the European Community, including Service and Training through Augmented Reality (STAR), a collaboration between institutes and companies from Europe and the US, [16] and Advanced Augmented Reality Technologies for Industrial Service Applications (ARTESAS), derived from ARVIKA and focused on developing AR for automotive and aerospace maintenance. [17] Similar initiatives in other countries, such as Sweden, Australia, and Japan, likewise aimed to encourage IAR development.
From the beginning of the 2010s until today, advances in hardware devices such as the wearable Google Glass, the reduced cost of mobile devices, and increasing user familiarity with this technology have opened new scenarios for IAR. [7] So has growing product-development complexity, with products becoming more versatile and intricate through multiple variations and mass customization. [18]
One of the most promising fields for AR application is industrial manufacturing, where it can support activities in product development and manufacturing [19] by making information available that reduces and simplifies the user's decisions. [20] The general issues in the development of an AR system can be classified into: [21]
Several technologies are needed to build AR systems. Some relate directly to the performance of the software and hardware that enable AR deployment, such as displays, sensors, processors, recognition, tracking, and registration, among others. [21] AR thus uses different approaches to integrate the virtual and real worlds, and several technologies influence its usability and applicability. [22]
Some common unsolved issues concern tracking systems suited to industrial scenarios, which involve poorly textured objects with smooth surfaces and strong lighting variation; object recognition from natural features when markers cannot be used; [7] the accuracy and latency of registration; [4] and 3D scene capture to allow context awareness. [6]
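At the core of registration is projecting virtual content into the camera image using the tracked pose. A minimal pinhole-camera sketch, where all names and numbers are illustrative rather than taken from any cited system:

```python
def project(point, R, t, fx, fy, cx, cy):
    """Project a 3D world point into pixel coordinates using a tracked
    camera pose (rotation matrix R, translation t) and pinhole intrinsics."""
    # Transform the world point into camera coordinates: Xc = R @ X + t
    xc = [sum(R[i][j] * point[j] for j in range(3)) + t[i] for i in range(3)]
    # Perspective division plus intrinsic scaling gives the pixel position
    u = fx * xc[0] / xc[2] + cx
    v = fy * xc[1] / xc[2] + cy
    return (u, v)

# Identity pose, point 2 m straight ahead of the camera (illustrative values)
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(project((0.0, 0.0, 2.0), I, [0, 0, 0], fx=800, fy=800, cx=320, cy=240))
# → (320.0, 240.0): the point lands at the image center
```

In a working IAR system, R and t would come from the tracking pipeline every frame; registration error then appears as a visible mismatch between the projected overlay and the physical object, which is why accuracy and latency matter.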
The limited understanding of human factors is likely obstructing the spread of IAR beyond laboratory prototypes. [4] Studying them requires overcoming technological issues (deficiencies in resolution, field of view, brightness, contrast, and tracking systems, among others) in order to separate AR performance from interface factors and technological limitations. [23]
It has been suggested [10] that for an IAR application to be successful in a commercial environment, it has to be "user friendly", meaning that it must be easy and safe to set up, learn, use, and customize, and that the user should feel free to move while using the AR system. [8] The use of natural interfaces that control AR through natural body movements has also motivated a good deal of research, because usability depends not only on the system's stability but also on the quality of the control interface. [22]
Further, the user interface should avoid overloading the user with information, and over-reliance on it should be prevented so that important cues from the environment are not lost. [24] Other issues relate to improving collaboration among multiple users. [6]
The final challenge is for an ideal AR system (hardware, software, and an intuitive interface) to be accepted and become part of the user's daily life. [21]
Consequently, one of the most important factors in the adoption of any new technology is the perception of usefulness, and AR needs to show a clear cost-benefit relation. [25] Some studies suggest that, for AR to be perceived as useful, the difficulty of the task should be high enough to require its use. [26]
Other important but as yet unaddressed issues for technology acceptance relate to fashion, ethics, and privacy. [21]
Assembly is the process of putting together several separate components to create a functional unit. It can be performed at different stages of a product's life. [27] Even though many assembly operations are automated nowadays, some still require human assistance, and in many cases the relevant information is detached from the equipment. Workers must therefore alternate their attention, which decreases productivity and increases errors and injuries. [18]
The use of AR is encouraged by the premise that instructions might be easier to understand if, instead of being provided as manuals, they are superimposed upon the actual equipment. [11] Some of the uses of AR in support of assembly can be categorized into: [6]
Similarly, AR makes it possible to simulate the user's motion during assembly in order to acquire accurate and realistic movement of virtual parts. [28]
On the other hand, some of the critical issues in supporting assembly tasks relate to the dynamic reconfiguration of the state diagram, which allows the system to automatically identify the current assembly step and to adapt to unexpected actions or errors by the user. [6] [7] Thus, defining 'what', 'where', and 'when' to display information becomes a challenge, since it requires at least a minimal understanding of the surrounding scene. [29]
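A dynamic step-identification mechanism of this kind can be sketched as a small state machine that advances when the expected parts are detected and re-identifies the step after unexpected observations. This is a simplified illustration: the step definitions and part names are hypothetical, not from any cited system.

```python
class AssemblyTracker:
    """Sketch of a state machine that tracks the current assembly step
    from sets of detected part IDs (illustrative only)."""

    def __init__(self, steps):
        # steps: ordered list of sets of part IDs expected once each step is done
        self.steps = steps
        self.current = 0  # index of the next step to complete

    def observe(self, detected):
        """Update the state from a set of detected part IDs."""
        if self.current < len(self.steps) and self.steps[self.current] <= detected:
            self.current += 1  # expected parts are present: advance
            return "advance"
        # Unexpected observation: re-identify the step that best matches
        best = max(range(len(self.steps)),
                   key=lambda i: len(self.steps[i] & detected))
        if best != self.current:
            self.current = best  # recover by jumping to the best-matching step
            return "recovered"
        return "waiting"  # no change: keep showing the current instruction

tracker = AssemblyTracker([{"base"}, {"base", "bolt"}, {"base", "bolt", "cover"}])
print(tracker.observe({"base"}))  # → advance
```

Deciding 'what', 'where', and 'when' to display then reduces to rendering the instruction associated with the tracked step, anchored to the parts involved in that step.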
Like assembly, maintenance serves as a natural application for AR because it requires keeping the user's attention on a specific area and the synthesis of additional information such as complex sequences, component identification, and textual data. [30]
Furthermore, AR can improve the efficiency and speed of maintenance processes by quickly displaying relevant information about an unfamiliar piece of equipment to a technician. [31] Similarly, AR can support maintenance tasks by providing "x-ray"-like vision, or by delivering information from sensors directly to the user. [24]
It can also be employed in repair tasks. For instance, in the diagnosis of modern cars, whose status information can be read via a plug-in connector, AR can be used to display the diagnosis directly on the engine. [3]
Many industries must perform complex activities that require prior training. To learn a new skill, technicians need training in sensorimotor and cognitive sub-skills, which can be challenging. This kind of training can be supported by AR. [32]
Additionally, it has been suggested that AR can motivate both trainees and students by enhancing the realism of practice. [33] By providing instructions through AR, the immediate ability to perform the task can also be achieved. [34]
Other advantages of using AR for training are that students can interact with real objects while having access to guidance information, and that interaction with real objects provides tactile feedback. [32]
AR can also display information about manufacturing components in real time. For instance, Volkswagen has used it to verify parts by analyzing their interfering edges and variance. AR has likewise been used in automobile development to display and verify car components and to carry out ergonomic tests in reality. By superimposing the original 3D model over the real surface, [35] deviations can easily be identified, and sources of error can therefore be corrected. [36] [37] [38]
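The deviation check described above can be illustrated by comparing scanned surface points against the reference model and flagging points whose distance to the model exceeds a tolerance. This is a minimal sketch assuming both surfaces are available as already-registered point sets; the sample data is invented, and real systems work with meshes and far denser scans.

```python
import math

def deviations(scanned_points, model_points, tolerance):
    """For each scanned point, find its distance to the nearest model point
    and flag it if the deviation exceeds the given tolerance (same units)."""
    flagged = []
    for p in scanned_points:
        d = min(math.dist(p, q) for q in model_points)
        if d > tolerance:
            flagged.append((p, round(d, 3)))
    return flagged

# Reference model: sampled points on the plane z = 0 (illustrative only)
model = [(x * 0.1, y * 0.1, 0.0) for x in range(11) for y in range(11)]
# Scanned surface: one point deviates from the model by 0.5 in z
scan = [(0.5, 0.5, 0.0), (0.3, 0.3, 0.5)]
print(deviations(scan, model, tolerance=0.1))  # flags only the deviating point
```

The flagged points, rendered in a contrasting color over the superimposed model, are what makes deviations "easy to identify" to the inspector.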
The use of AR has been proposed for interacting with scientific data in shared environments: compared with virtual reality, it allows 3D interaction with real objects while letting the user move freely in the real world. [39] Similar systems allow multiple users with HMDs to interact with dynamic visual simulations of engineering processes. [40]
In the same way, an AR simulation of working machinery can be checked from mobile devices, along with other information such as temperature and time of use, which can reduce workers' movements and stress. [19]
The benefits of implementing AR in industrial activities have attracted high interest. However, there is still debate, since the current level of the technology conceals much of its potential. [4] [8]
It has been reported that, in maintenance and repair, the use of AR can reduce the time needed to locate a task and decrease head and neck movements, along with other disadvantages related to bulky, low-resolution display hardware. [41] In addition, in training it has been claimed to improve performance by up to 30% and, at the same time, reduce costs by 25%. [42]
Similar benefits have been reported by Juha Sääski et al. in a comparative study of AR versus paper instructions for supporting the assembly of a tractor's accessory power unit, which showed reduced time and errors (six times fewer). [43]
On the other hand, long-term use has been reported to cause stress and strain in the user. Johannes Tümler et al. compared the stress and strain produced by picking parts from a rack using AR versus using paper as a reference; the results showed a change in strain that could be attributed to the non-optimal system. [44]
Additionally, it has been suggested [32] that one potential danger of using AR for training is user dependence on the technology, with the consequence that the user cannot perform the task without it. To overcome this situation, the information available during training needs to be controlled.
The boundaries that define IAR are still not clear. For instance, one of the most accepted definitions of AR [11] implies that the virtual elements need to be registered. In the industrial field, however, performance is a main goal, and there has therefore been extensive research on presenting virtual components in AR according to the type of task. This research has shown that the optimal type of visual aid may vary with the difficulty of the task. [45]
Finally, it has been suggested [10] that in order to have commercial IAR solutions, they must be:
A wearable computer, also known as a body-borne computer, is a computing device worn on the body. The definition of 'wearable computer' may be narrow or broad, extending to smartphones or even ordinary wristwatches.
In the industrial design field of human–computer interaction, a user interface (UI) is the space where interactions between humans and machines occur. The goal of this interaction is to allow effective operation and control of the machine from the human end, while the machine simultaneously feeds back information that aids the operators' decision-making process. Examples of this broad concept of user interfaces include the interactive aspects of computer operating systems, hand tools, heavy machinery operator controls and process controls. The design considerations applicable when creating user interfaces are related to, or involve such disciplines as, ergonomics and psychology.
Augmented reality (AR) is an interactive experience that combines the real world and computer-generated content. The content can span multiple sensory modalities, including visual, auditory, haptic, somatosensory and olfactory. AR can be defined as a system that incorporates three basic features: a combination of real and virtual worlds, real-time interaction, and accurate 3D registration of virtual and real objects. The overlaid sensory information can be constructive, or destructive. This experience is seamlessly interwoven with the physical world such that it is perceived as an immersive aspect of the real environment. In this way, augmented reality alters one's ongoing perception of a real-world environment, whereas virtual reality completely replaces the user's real-world environment with a simulated one.
Computer-mediated reality refers to the ability to add to, subtract information from, or otherwise manipulate one's perception of reality through the use of a wearable computer or hand-held device such as a smartphone.
Mixed reality (MR) is a term used to describe the merging of a real-world environment and a computer-generated one. Physical and virtual objects may co-exist in mixed reality environments and interact in real time.
A virtual retinal display (VRD), also known as a retinal scan display (RSD) or retinal projector (RP), is a display technology that draws a raster display directly onto the retina of the eye.
The Electronic Visualization Laboratory (EVL) is an interdisciplinary research lab and graduate studies program at the University of Illinois at Chicago, bringing together faculty, students and staff primarily from the Art and Computer Science departments of UIC. The primary areas of research are in computer graphics, visualization, virtual and augmented reality, advanced networking, and media art. Graduates of EVL earn either a Master's or a doctoral degree in Computer Science.
Virtual reality in telerehabilitation is a method used first in the training of musculoskeletal patients using asynchronous patient data uploading, and an internet video link. Subsequently, therapists using virtual reality-based telerehabilitation prescribe exercise routines via the web which are then accessed and executed by patients through a web browser. Therapists then monitor the patient's progress via the web and modify the therapy asynchronously without real-time interaction or training.
Immersion into virtual reality (VR) is a perception of being physically present in a non-physical world. The perception is created by surrounding the user of the VR system in images, sound or other stimuli that provide an engrossing total environment.
A projection augmented model is an element sometimes employed in virtual reality systems. It consists of a physical three-dimensional model onto which a computer image is projected to create a realistic looking object. Importantly, the physical model is the same geometric shape as the object that the PA model depicts.
In computing, 3D interaction is a form of human-machine interaction where users are able to move and perform interaction in 3D space. Both human and machine process information where the physical position of elements in the 3D space is relevant.
Augmented reality-based testing (ARBT) is a test method that combines augmented reality and software testing to enhance testing by inserting an additional dimension into the testers field of view. For example, a tester wearing a head-mounted display (HMD) or Augmented reality contact lenses that places images of both the physical world and registered virtual graphical objects over the user's view of the world can detect virtual labels on areas of a system to clarify test operating instructions for a tester who is performing tests on a complex system.
Visuo-haptic mixed reality (VHMR) is a branch of mixed reality that merges visual and tactile perceptions of both virtual and real objects in a collocated way: it adds to a real scene the ability to see and touch virtual objects. The first known system to overlay augmented haptic perceptions on direct views of the real world was the Virtual Fixtures system developed in 1992 at the US Air Force Research Laboratories. VHMR requires see-through display technology for visually mixing real and virtual objects, and haptic devices to provide haptic stimuli while the user interacts with the virtual objects. A VHMR setup allows the user to perceive visual and kinesthetic stimuli in a co-located manner, i.e., the user can see and touch virtual objects at the same spatial location. This overcomes the limits of the traditional setup of a separate display and haptic device, because the visuo-haptic co-location of the user's hand and a virtual tool improves the sensory integration of multimodal cues and makes the interaction more natural. Like any emerging technology, VHMR faces challenges, which here concern enhancing multi-modal human perception with the user-computer interfaces and interaction devices currently available.
Extended reality (XR) is a catch-all term for augmented reality (AR), virtual reality (VR), and mixed reality (MR). The technology is intended to combine or mirror the physical world with a "digital twin world" able to interact with it, giving users an immersive experience in a virtual or augmented environment.
Steven K. Feiner is an American computer scientist, serving as Professor for computer science at Columbia University in the field of computer graphics. He is well-known for his research in augmented reality (AR), and co-author of Computer Graphics: Principles and Practice. He directs the Columbia University Computer Graphics and User Interface Lab.
Ronald Azuma is an American computer scientist, widely recognized for contributing to the field of augmented reality (AR). His work A survey of augmented reality became the most cited article in the AR field and is one of the most influential MIT Press papers of all time. Azuma is considered to provide a commonly accepted definition of AR and is often named one of AR’s most recognized experts.
In virtual reality (VR) and augmented reality (AR), a pose tracking system detects the precise pose of head-mounted displays, controllers, other objects or body parts within Euclidean space. Pose tracking is often referred to as 6DOF tracking, for the six degrees of freedom in which the pose is often tracked.
Virtual reality (VR) is a computer application which allows users to experience immersive, three dimensional visual and audio simulations. According to Pinho (2004), virtual reality is characterized by immersion in the 3D world, interaction with virtual objects, and involvement in exploring the virtual environment. The feasibility of the virtual reality in education has been debated due to several obstacles such as affordability of VR software and hardware. The psychological effects of virtual reality are also a negative consideration. However, recent technological progress has made VR more viable and promise new learning models and styles for students. These facets of virtual reality have found applications within the primary education sphere in enhancing student learning, increasing engagement, and creating new opportunities for addressing learning preferences.
Gudrun Johanna Klinker is a German computer scientist known for her work on augmented reality.
GRADE is a CERN research programme. The programme was approved by the CERN Research Board in December 2015.