SixthSense is a gesture-based wearable computer system developed at the MIT Media Lab: Steve Mann built headworn gestural-interface versions in 1994 and 1997 and a neckworn version in 1998, and Pranav Mistry (also at the MIT Media Lab) developed it further in 2009; both developed hardware and software for headworn and neckworn versions. It comprises a headworn or neck-worn pendant that contains both a data projector and a camera. The headworn versions Mann built at the MIT Media Lab in 1997 combined cameras and illumination systems for interactive photographic art, and also included gesture recognition (e.g. finger-tracking using colored tape on the fingers). [3] [4] [5] [6]
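Finger tracking with colored tape of this kind is commonly implemented as color segmentation followed by blob tracking. The following is a minimal sketch in Python using OpenCV, not the actual SixthSense implementation; the camera index and the HSV thresholds for a red marker are assumptions for illustration.

import cv2
import numpy as np

# Minimal sketch: track one red tape marker on a fingertip by HSV thresholding (OpenCV 4.x).
cap = cv2.VideoCapture(0)                  # assumed webcam index
lower = np.array([0, 120, 120])            # assumed HSV range for the red tape
upper = np.array([10, 255, 255])

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower, upper)  # 255 where the marker color is present
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        c = max(contours, key=cv2.contourArea)   # largest blob is taken as the marker
        (x, y), _ = cv2.minEnclosingCircle(c)
        print("fingertip at", int(x), int(y))    # feed this position into the gesture layer
    cv2.imshow("marker mask", mask)
    if cv2.waitKey(1) == 27:                     # Esc quits
        break
cap.release()
cv2.destroyAllWindows()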
SixthSense is a name for the extra information supplied by a wearable computer, such as that provided by devices including EyeTap (Mann), Telepointer (Mann), and "WuW" (Wear yoUr World) by Pranav Mistry. [9] [10]
Sixth Sense technology (a camera combined with a light source) was developed in 1997 as a headworn device and in 1998 as a neckworn object, but the name was not published until 2001, when Mann coined the term "Sixth Sense" to describe such devices. [11] [12]
Mann referred to this wearable computing technology as affording a "Synthetic Synesthesia of the Sixth Sense", believing that wearable computing and digital information could act as an additional sense alongside the five traditional senses. [13] Ten years later, Pattie Maes, also of the MIT Media Lab, used the term "Sixth Sense" in the same context in a TED talk.
Similarly, other inventors have used the term sixth-sense technology to describe new capabilities that augment the traditional five human senses. For example, in U.S. patent no. 9,374,397, Timo Platt et al. refer to their new communications invention as creating a new social and personal sense, i.e., a "metaphorical sixth sense", enabling users (while retaining their privacy and anonymity) to sense and share the "stories" and other attributes and information of those around them.
Ubiquitous computing is a concept in software engineering, hardware engineering and computer science where computing is made to appear anytime and everywhere. In contrast to desktop computing, ubiquitous computing can occur using any device, in any location, and in any format. A user interacts with the computer, which can exist in many different forms, including laptop computers, tablets, smartphones and terminals in everyday objects such as a refrigerator or a pair of glasses. The underlying technologies that support ubiquitous computing include the Internet, advanced middleware, operating systems, mobile code, sensors, microprocessors, new I/O and user interfaces, computer networks, mobile protocols, location and positioning, and new materials.
A wearable computer, also known as a body-borne computer, is a computing device worn on the body. The definition of 'wearable computer' may be narrow or broad, extending to smartphones or even ordinary wristwatches.
William Stephen George Mann is a Canadian engineer, professor, and inventor who works in augmented reality, computational photography, particularly wearable computing, and high-dynamic-range imaging. Mann is sometimes labeled the "Father of Wearable Computing" for early inventions and continuing contributions to the field. He cofounded InteraXon, makers of the Muse brain-sensing headband, and is also a founding member of the IEEE Council on Extended Intelligence (CXI). Mann is currently CTO and cofounder at Blueberry X Technologies and Chairman of MannLab. Mann was born in Canada, and currently lives in Toronto, Canada, with his wife and two children. In 2023, Mann unsuccessfully ran for mayor of Toronto.
An EyeTap is a concept for a wearable computing device worn in front of the eye that acts both as a camera, recording the scene available to the eye, and as a display, superimposing computer-generated imagery on that scene. This structure allows the user's eye to operate as both a monitor and a camera: the EyeTap takes in the world around it and augments what the user sees, overlaying computer-generated data on top of the normal world the user would otherwise perceive.
A handheld projector is an image projector in a handheld device. It was developed as a computer display device for compact portable devices such as mobile phones, personal digital assistants, and digital cameras, which have sufficient storage capacity to handle presentation materials but are too small to accommodate a display screen that an audience can see easily. Handheld projectors involve miniaturized hardware, and software that can project digital images onto a nearby viewing surface.
Gesture recognition is an area of research and development in computer science and language technology concerned with the recognition and interpretation of human gestures. A subdiscipline of computer vision, it employs mathematical algorithms to interpret gestures.
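As a concrete illustration of the algorithmic side, one entry-level approach is template matching: a recorded stroke is resampled to a fixed number of points, normalized for position and scale, and compared against stored gesture templates by average point-to-point distance. The sketch below is a simplified, hypothetical classifier in that spirit, not any specific published recognizer.

from math import hypot

def _path_length(pts):
    return sum(hypot(x2 - x1, y2 - y1) for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

def resample(pts, n=32):
    # Resample a stroke (list of (x, y)) into n points spaced evenly along its path.
    pts = list(pts)
    interval = _path_length(pts) / (n - 1)
    if interval == 0:
        return [pts[0]] * n
    out, d, i = [pts[0]], 0.0, 1
    while i < len(pts):
        (x1, y1), (x2, y2) = pts[i - 1], pts[i]
        seg = hypot(x2 - x1, y2 - y1)
        if seg > 0 and d + seg >= interval:
            t = (interval - d) / seg
            q = (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
            out.append(q)
            pts.insert(i, q)      # continue measuring from the newly created point
            d = 0.0
        else:
            d += seg
        i += 1
    while len(out) < n:           # guard against floating-point shortfall
        out.append(pts[-1])
    return out[:n]

def normalize(pts):
    # Translate the centroid to the origin and scale into a unit bounding box.
    cx = sum(x for x, _ in pts) / len(pts)
    cy = sum(y for _, y in pts) / len(pts)
    pts = [(x - cx, y - cy) for x, y in pts]
    s = max(max(abs(x) for x, _ in pts), max(abs(y) for _, y in pts)) or 1.0
    return [(x / s, y / s) for x, y in pts]

def classify(stroke, templates):
    # templates: dict mapping gesture name -> list of (x, y) points.
    probe = normalize(resample(stroke))
    def dist(name):
        ref = normalize(resample(templates[name]))
        return sum(hypot(ax - bx, ay - by) for (ax, ay), (bx, by) in zip(probe, ref)) / len(probe)
    return min(templates, key=dist)

For example, classify(recorded_points, {"circle": circle_points, "swipe": swipe_points}) returns the name of the closest-matching template.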
Equiveillance is a state of equilibrium, or a desire to attain a state of equilibrium, between surveillance and sousveillance. It is sometimes confused with transparency. The balance (equilibrium) provided by equiveillance allows individuals to construct their own cases from evidence they gather themselves, rather than merely having access to surveillance data that could possibly incriminate them.
Rosalind Wright Picard is an American scholar and inventor who is Professor of Media Arts and Sciences at MIT, founder and director of the Affective Computing Research Group at the MIT Media Lab, and co-founder of the startups Affectiva and Empatica.
In computing, multi-touch is technology that enables a surface to recognize the presence of more than one point of contact with the surface at the same time. Multi-touch originated at CERN, MIT, the University of Toronto, Carnegie Mellon University and Bell Labs in the 1970s; CERN was using multi-touch screens as early as 1976 for the controls of the Super Proton Synchrotron. Capacitive multi-touch displays were popularized by Apple's iPhone in 2007. Multi-touch may be used to implement additional functionality, such as pinch to zoom, or to activate certain subroutines attached to predefined gestures using gesture recognition.
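Pinch to zoom, for instance, is typically computed from the distance between two contact points: the ratio of the current distance to the distance at the start of the gesture gives the zoom factor. A minimal sketch follows; the class and method names are illustrative, not taken from any particular toolkit.

from math import hypot

class PinchZoom:
    # Tracks two touch points and reports a zoom scale relative to the start of the gesture.
    def __init__(self):
        self.start_dist = None

    def update(self, p1, p2):
        # p1, p2: current (x, y) positions of the two touch points.
        dist = hypot(p2[0] - p1[0], p2[1] - p1[1])
        if self.start_dist is None:
            self.start_dist = dist or 1.0  # remember the distance when the gesture began
        return dist / self.start_dist      # > 1 means zoom in, < 1 means zoom out

    def reset(self):
        # Call when a finger lifts, so the next pinch starts fresh.
        self.start_dist = None

In use, the application calls update((x1, y1), (x2, y2)) each time both touches move and applies the returned scale to the viewport.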
Microsoft's SenseCam is a lifelogging camera with a fisheye lens and trigger sensors such as accelerometers, heat sensing, and audio, invented by Lyndsay Williams, with a patent granted in 2009. Usually worn around the neck, SenseCam is used for the MyLifeBits project, a lifetime storage database. Early developers were James Srinivasan and Trevor Taylor.
Humanistic Intelligence (HI) is defined, in the context of wearable computing, by Marvin Minsky, Ray Kurzweil, and Steve Mann, as follows:
Humanistic Intelligence [HI] is intelligence that arises because of a human being in the feedback loop of a computational process, where the human and computer are inextricably intertwined. When a wearable computer embodies HI and becomes so technologically advanced that its intelligence matches our own biological brain, something much more powerful emerges from this synergy that gives rise to superhuman intelligence within the single “cyborg” being.
A lifelog is a personal record of one's daily life in a varying amount of detail, for a variety of purposes. The record contains a comprehensive dataset of a human's activities. The data could be used to increase knowledge about how people live their lives. In recent years, some lifelog data has been automatically captured by wearable technology or mobile devices. People who keep lifelogs about themselves are known as lifeloggers.
A cyborg, a portmanteau of cybernetic and organism, is a being with both organic and biomechatronic body parts. The term was coined in 1960 by Manfred Clynes and Nathan S. Kline. In contrast to biorobots and androids, the term cyborg applies to a living organism that has restored function or enhanced abilities due to the integration of some artificial component or technology that relies on feedback.
In computing, a natural user interface (NUI) or natural interface is a user interface that is effectively invisible, and remains invisible as the user continuously learns increasingly complex interactions. The word "natural" is used because most computer interfaces use artificial control devices whose operation has to be learned. Examples include voice assistants, such as Alexa and Siri, and touch and multi-touch interactions on today's mobile phones and tablets, but also touch interfaces invisibly integrated into textiles and furniture.
Pranav Mistry is a computer scientist and inventor. He is the former President and CEO of STAR Labs. He is currently the founder and CEO of TWO, an Artificial Reality startup. He is best known for his work on SixthSense, Samsung Galaxy Gear and Project Beyond.
Telepointer is a neck-worn gestural interface system developed by MIT Media Lab student Steve Mann in 1998. Mann originally referred to the device as "Synthetic Synesthesia of the Sixth Sense". In the 1990s and early 2000s Mann used this project as a teaching example at the University of Toronto.
SPARSH is a novel interaction method for transferring data between digital devices through touch gestures. The Sparsh prototype system was designed and developed by Pranav Mistry and Suranga Nanayakkara of the MIT Media Lab. Sparsh lets the user touch whatever data item he or she wants to copy from a device; at that moment, the item is conceptually saved to the user. Next, the user touches the other device into which he or she wants to paste the saved content. Sparsh uses these touch-based interactions as indications of what to copy and where to pass it. Technically, the actual transfer of media happens via the information cloud, and user authentication is achieved by face recognition, fingerprint detection or a username-password combination. Sparsh thus lets the user conceptually transfer media from one digital device to the body and pass it to another digital device with simple touch gestures. At present, the Sparsh system supports the Android and Windows platforms.
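Conceptually, this reduces to a cloud-backed clipboard keyed to an authenticated user: the copy touch records the item under the user's identity, and the paste touch on another device retrieves it. The following is a hypothetical, in-memory sketch of that flow, not the actual Sparsh implementation; the class and method names are illustrative.

class CloudClipboard:
    # Toy model of a per-user, cloud-backed clipboard (a dict stands in for the cloud service).
    def __init__(self):
        self._store = {}  # user_id -> last copied data item

    def copy(self, user_id, item):
        # "Copy" touch on device A: save the item against the authenticated user.
        self._store[user_id] = item

    def paste(self, user_id):
        # "Paste" touch on device B: retrieve whatever the same user last copied.
        return self._store.get(user_id)

clipboard = CloudClipboard()
clipboard.copy(user_id="alice", item={"type": "photo", "url": "https://example.org/pic.jpg"})
print(clipboard.paste(user_id="alice"))  # the photo reference appears on the second device

In the real system the store would live in the cloud and user_id would come from face recognition, fingerprint detection or a username-password login, as described above.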
Irfan Aziz Essa is a professor in the School of Interactive Computing of the College of Computing, and adjunct professor in the School of Electrical and Computer Engineering at the Georgia Institute of Technology. He is an associate dean in Georgia Tech's College of Computing and the director of the new Interdisciplinary Research Center for Machine Learning at Georgia Tech (ML@GT).
Chris Harrison is a British-born, American computer scientist and entrepreneur, working in the fields of human–computer interaction, machine learning and sensor-driven interactive systems. He is a professor at Carnegie Mellon University and director of the Future Interfaces Group within the Human–Computer Interaction Institute. He has previously conducted research at AT&T Labs, Microsoft Research, IBM Research and Disney Research. He is also the CTO and co-founder of Qeexo, a machine learning and interaction technology startup.
Egocentric vision or first-person vision is a sub-field of computer vision that entails analyzing images and videos captured by a wearable camera, which is typically worn on the head or on the chest and naturally approximates the visual field of the camera wearer. Consequently, visual data capture the part of the scene on which the user focuses to carry out the task at hand and offer a valuable perspective to understand the user's activities and their context in a naturalistic setting.