| Ronald Azuma | |
|---|---|
| Citizenship | United States |
| Alma mater | University of North Carolina [1] |
| Scientific career | |
| Fields | Augmented reality |
| Institutions | Intel Laboratories |
| Thesis | Predictive tracking for augmented reality (1995) |
| Doctoral advisor | T. Gary Bishop |
| Website | ronaldazuma |
Ronald Azuma is an American computer scientist, widely recognized for his contributions to the field of augmented reality (AR). His paper *A survey of augmented reality* [2] became the most cited article in the AR field and is one of the most influential MIT Press papers of all time. [3] Azuma is credited with providing a commonly accepted definition of AR and is often named among the field's most recognized experts. [3] [4] [5]
Azuma was named a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) in 2016 for his contributions to augmented reality. [6]
His most cited patents, according to Google Scholar, are listed below: [7]
Through his research and publications with publishers such as MIT Press and the IEEE, Azuma has contributed internationally to the computer science field of augmented reality. His most cited articles, according to Google Scholar, are listed below: [7]