An EyeTap [1] [2] [3] is a concept for a wearable computing device worn in front of the eye that acts both as a camera, recording the scene available to the eye, and as a display, superimposing computer-generated imagery on that scene. [3] [4] This structure allows the user's eye to operate as both a monitor and a camera: the EyeTap takes in the world around the user and augments the image the user sees, overlaying computer-generated data on top of the world the user would normally perceive.
In order to capture what the eye is seeing as accurately as possible, an EyeTap uses a beam splitter [5] to send the same scene (with reduced intensity) to both the eye and a camera. The camera then digitizes the reflected image of the scene and sends it to a computer. The computer processes the image and then sends it to a projector. The projector sends the image to the other side of the beam splitter so that this computer-generated image is reflected into the eye to be superimposed on the original scene. Stereo EyeTaps modify light passing through both eyes, but many research prototypes (mainly for reasons of ease of construction) only tap one eye.
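The signal path just described (scene to camera, camera to computer, computer to projector, projector back to the eye) amounts to a capture-process-display loop. The following is a minimal sketch of such a loop, assuming Python with the OpenCV library and using an ordinary webcam and an on-screen window as stand-ins for the beam-splitter-fed camera and the projector; it illustrates the loop structure only and is not the EyeTap's actual software.

```python
import cv2

# Minimal capture-process-display loop standing in for the EyeTap's
# scene -> camera -> computer -> projector -> eye signal path.
# The webcam and the on-screen window are illustrative assumptions;
# the real device uses a beam splitter and a projector instead.
cap = cv2.VideoCapture(0)               # camera fed by the beam splitter

while cap.isOpened():
    ok, frame = cap.read()              # digitize the reflected scene
    if not ok:
        break
    # "The computer processes the image": superimpose generated imagery.
    cv2.putText(frame, "EyeTap sketch", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("mediated view", frame)  # stands in for the projector
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```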
EyeTap is also the name of an organization founded by inventor Steve Mann [6] [7] [8] [9] to develop and promote EyeTap-related technologies such as wearable computers. [4] [10]
An EyeTap is somewhat like a head-up display (HUD). The important difference is that the scene available to the eye is also available to the computer that projects the head-up display, which enables the EyeTap to modify the computer-generated scene in response to the natural scene. One use, for instance, would be a sports EyeTap: the wearer, while in a stadium, could follow a particular player on the field and have the EyeTap display that player's statistics as a floating box above the player (a sketch of such an overlay follows below). Another practical use would be on a construction site, where the EyeTap could let the user compare blueprints, rendered in 3D, against the current state of the building, display a list of materials and their current locations, and perform basic measurements. The EyeTap also has great potential in the business world, where it could deliver constantly updated information on the stock market, the user's corporation, and meeting statuses. On a more day-to-day basis, some of Steve Mann's first uses for the technology were keeping track of names of people and places, his to-do lists, and other daily tasks. [11]

The EyeTap Criteria are an attempt to define how close a real, practical device comes to such an ideal. EyeTaps could have great use in any field where the user would benefit from real-time interactive information that is largely visual in nature. This is sometimes referred to as computer-mediated reality, [12] [13] commonly known as augmented reality. [14]
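The floating statistics box from the sports example can be made concrete with a short sketch. The sketch below assumes Python with OpenCV and NumPy, a hypothetical hard-coded player position standing in for a real tracker's output, and made-up placeholder statistics; it demonstrates only the drawing of a labelled box above a tracked point, not any actual EyeTap software.

```python
import cv2
import numpy as np

def draw_stats_box(frame, player_xy, lines):
    """Draw a floating statistics box just above a tracked position.

    In a real system player_xy would come from a tracker following the
    player; here it is a hypothetical, hard-coded coordinate.
    """
    x, y = player_xy
    box_w, box_h = 180, 18 * len(lines) + 10
    top = max(0, y - box_h - 20)   # float the box above the player
    cv2.rectangle(frame, (x, top), (x + box_w, top + box_h), (40, 40, 40), -1)
    for i, text in enumerate(lines):
        cv2.putText(frame, text, (x + 5, top + 18 * (i + 1)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 1)
    return frame

# Synthetic stand-in for one frame of the stadium scene; the statistics
# shown are placeholder values, not real data.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
frame = draw_stats_box(frame, (300, 400), ["No. 9", "Goals: 2", "Speed: 7.1 m/s"])
cv2.imwrite("overlay_demo.png", frame)
```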
The EyeTap has been explored as a potential tool for individuals with visual disabilities because of its ability to direct visual information to parts of the retina that function well. [15] The EyeTap's role in sousveillance has also been explored by Mann, Jason Nolan and Barry Wellman. [16] [17] [18]
Users may experience side effects such as headaches and difficulty sleeping if they use the device shortly before sleep. Mann reports that, because of his extensive use of the device, he feels "nauseous, unsteady, naked" when he removes it. [2]
The EyeTap has applications in the world of cyborg logging, as it allows the user to perform real-time visual capture of their daily life from their own point of view. In this way, the EyeTap could be used to create a lifelong cyborg log, or "glog", of the user's life and the events they participate in, potentially recording enough media to allow producers centuries in the future to present the user's life as interactive entertainment (or historical education) to consumers of that era.
Steve Mann created the first version of the EyeTap, which consisted of a computer in a backpack wired to a camera and its viewfinder, which in turn were rigged to a helmet. Since this first version, the EyeTap has gone through multiple models as wearable computing has evolved, shrinking to smaller and lighter designs.
Currently, the EyeTap consists of an eyepiece used to display the images; a keypad with which the user interfaces with the EyeTap and directs it to perform the desired tasks; a CPU that can be attached to most articles of clothing; and, in some cases, a Wi-Fi device so that the user can access the Internet and online data.
The EyeTap is essentially a half-silvered mirror placed in front of the user's eye, reflecting some of the incoming light into a sensor. The sensor sends the image to the aremac ("camera" spelled backwards), a display device capable of displaying data at any fitting depth. The output rays from the aremac are reflected off the half-silvered mirror back into the user's eye, where they are superimposed on the original light rays.
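As a rough numerical illustration (the symbols below are introduced here for exposition and do not come from the source), an ideal lossless half-silvered mirror with transmittance \(T\) and reflectance \(R\), where \(T + R \approx 1\), delivers

\[
I_{\text{camera}} = R\,I_{\text{scene}}, \qquad I_{\text{eye}} = T\,I_{\text{scene}} + R\,I_{\text{aremac}},
\]

so a 50/50 splitter (\(T \approx R \approx 0.5\)) sends the scene to the camera and to the eye at roughly half its original intensity, consistent with the "reduced intensity" noted earlier.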
Accompanying figures show an EyeTap viewing infrared light, a design schematic of how the EyeTap manipulates light rays, [19] and a conceptual diagram of an EyeTap.
CCD (charge-coupled device) cameras have historically been one of the most common types of digital camera.
A wearable computer, also known as a body-borne computer, is a computing device worn on the body. The definition of 'wearable computer' may be narrow or broad, extending to smartphones or even ordinary wristwatches.
Augmented reality (AR) is an interactive experience that combines the real world and computer-generated 3D content. The content can span multiple sensory modalities, including visual, auditory, haptic, somatosensory and olfactory. AR can be defined as a system that incorporates three basic features: a combination of real and virtual worlds, real-time interaction, and accurate 3D registration of virtual and real objects. The overlaid sensory information can be constructive (adding to the natural environment) or destructive (masking parts of it). As such, it is one of the key technologies in the reality-virtuality continuum.
William Stephen George Mann is a Canadian engineer, professor, and inventor who works in augmented reality, computational photography, and high-dynamic-range imaging, particularly in wearable computing. Mann has sometimes been labeled the "Father of Wearable Computing" for early inventions and continuing contributions to the field. He cofounded InteraXon, makers of the Muse brain-sensing headband, and is also a founding member of the IEEE Council on Extended Intelligence (CXI). Mann is currently CTO and cofounder of Blueberry X Technologies and Chairman of MannLab. He was born in Canada and currently lives in Toronto with his wife and two children. In 2023, Mann unsuccessfully ran for mayor of Toronto.
Stereoscopy is a technique for creating or enhancing the illusion of depth in an image by means of stereopsis for binocular vision. The word stereoscopy derives from Greek στερεός (stereos) 'firm, solid' and σκοπέω (skopeō) 'to look, to see'. Any stereoscopic image is called a stereogram. Originally, stereogram referred to a pair of stereo images which could be viewed using a stereoscope.
Sousveillance is the recording of an activity by a member of the public, rather than a person or organisation in authority, typically by way of small wearable or portable personal technologies. The term, coined by Steve Mann, stems from the contrasting French words sur, meaning "above", and sous, meaning "below", i.e. "surveillance" denotes the "eye-in-the-sky" watching from above, whereas "sousveillance" denotes bringing the means of observation down to human level, either physically or hierarchically.
Computer-mediated reality refers to the ability to add to, subtract information from, or otherwise manipulate one's perception of reality through the use of a wearable computer or hand-held device such as a smartphone.
A keyer is an electronic device used for signaling by hand, by way of pressing one or more switches. The technical term keyer has two very similar yet distinct meanings: one for telegraphy and the other for accessory devices built for computer-human communication.
A head-mounted display (HMD) is a display device, worn on the head or as part of a helmet, that has a small display optic in front of one or each eye. HMDs have many uses including gaming, aviation, engineering, and medicine.
A virtual retinal display (VRD), also known as a retinal scan display (RSD) or retinal projector (RP), is a display technology that draws a raster display directly onto the retina of the eye.
Equiveillance is a state of equilibrium, or a desire to attain a state of equilibrium, between surveillance and sousveillance. It is sometimes confused with transparency. The balance (equilibrium) provided by equiveillance allows individuals to construct their own cases from evidence they gather themselves, rather than merely having access to surveillance data that could possibly incriminate them.
Light painting, painting with light, light drawing, light art performance photography, or sometimes freezelight are terms that describe photographic techniques of moving a light source while taking a long-exposure photograph, either to illuminate a subject or space, to shine light at the camera to 'draw', or to move the camera itself during exposure of light sources. Practiced since the 1880s, the technique is used for both scientific and artistic purposes, as well as in commercial photography.
High dynamic range (HDR), also known as wide dynamic range, extended dynamic range, or expanded dynamic range, is a signal with a higher dynamic range than usual.
Lifestreaming is an act of documenting and sharing aspects of one's daily experiences online, via a lifestream website that publishes things of a person's choosing.
A lifelog is a personal record of one's daily life in a varying amount of detail, for a variety of purposes. The record contains a comprehensive dataset of a human's activities. The data could be used to increase knowledge about how people live their lives. In recent years, some lifelog data has been automatically captured by wearable technology or mobile devices. People who keep lifelogs about themselves are known as lifeloggers.
A cyborg (a portmanteau of cybernetic and organism) is a being with both organic and biomechatronic body parts. The term was coined in 1960 by Manfred Clynes and Nathan S. Kline. In contrast to biorobots and androids, the term cyborg applies to a living organism that has restored function or enhanced abilities due to the integration of some artificial component or technology that relies on feedback.
SixthSense is a gesture-based wearable computer system developed at the MIT Media Lab by Steve Mann in 1994, 1997, and 1998, and further developed by Pranav Mistry in 2009; both developed hardware and software for headworn and neckworn versions of it. It comprises a headworn or neck-worn pendant that contains both a data projector and a camera. Headworn versions built at the MIT Media Lab in 1997 combined cameras and illumination systems for interactive photographic art, and also included gesture recognition.
In computing, a natural user interface (NUI) or natural interface is a user interface that is effectively invisible, and remains invisible as the user continuously learns increasingly complex interactions. The word "natural" is used because most computer interfaces use artificial control devices whose operation has to be learned. Examples include voice assistants such as Alexa and Siri, touch and multitouch interactions on today's mobile phones and tablets, and touch interfaces invisibly integrated into the textiles of furniture.
Smartglasses or smart glasses are eye or head-worn wearable computers. Many smartglasses include displays that add information alongside or to what the wearer sees. Alternatively, smartglasses are sometimes defined as glasses that are able to change their optical properties, such as smart sunglasses that are programmed to change tint by electronic means. Alternatively, smartglasses are sometimes defined as glasses that include headphone functionality.
Cyborg data mining is the practice of collecting data produced by an implantable device that monitors bodily processes for commercial interests. Whereas an android is a human-like robot, a cyborg is an organism whose physiological functioning is aided by or dependent upon a mechanical or electronic device that relies on some sort of feedback.
Egocentric vision or first-person vision is a sub-field of computer vision that entails analyzing images and videos captured by a wearable camera, which is typically worn on the head or on the chest and naturally approximates the visual field of the camera wearer. Consequently, visual data capture the part of the scene on which the user focuses to carry out the task at hand and offer a valuable perspective to understand the user's activities and their context in a naturalistic setting.