Microsoft's SenseCam is a lifelogging camera with a fisheye lens and trigger sensors such as accelerometers, heat sensors, and audio sensors, invented by Lyndsay Williams, with a patent [1] granted in 2009. Usually worn around the neck, the SenseCam is used in the MyLifeBits project, a lifetime storage database. Early developers were James Srinivasan and Trevor Taylor.
Earlier work on neck-worn sensor cameras with fisheye lenses was done by Steve Mann and published in 2001. [2] [3]
The Microsoft SenseCam, Mann's earlier sensor cameras, and subsequent similar products such as the Autographer, Glogger, and the Narrative Clip are all examples of wearable computing. [4]
Wearable neck-worn cameras make it easier to collect and index one's daily experiences by unobtrusively taking photographs whenever a change in temperature, movement, or lighting triggers an internal sensor. The SenseCam [5] is also equipped with an accelerometer, which is used to trigger image capture and can also stabilise images to reduce blur. The camera is usually worn around the neck on a lanyard.
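This trigger behaviour can be pictured as a simple sensor-polling loop: a photo is captured whenever any sensor reading has changed by more than some threshold since the last capture. The Python sketch below is purely illustrative; the sensor set, threshold values, and polling interval are assumptions made for the example, not the device's actual firmware.

```python
import time

# Illustrative thresholds; the real firmware's values are not published here.
LIGHT_DELTA = 50      # change in light level (lux)
TEMP_DELTA = 1.0      # change in heat reading (°C), e.g. a person passing by
ACCEL_DELTA = 0.5     # change in acceleration magnitude (m/s²)
POLL_INTERVAL = 0.2   # seconds between sensor polls

def should_capture(prev, curr):
    """Trigger a capture when any sensor changes by more than its threshold."""
    return (abs(curr["light"] - prev["light"]) > LIGHT_DELTA
            or abs(curr["temp"] - prev["temp"]) > TEMP_DELTA
            or abs(curr["accel"] - prev["accel"]) > ACCEL_DELTA)

def capture_loop(read_sensors, take_photo):
    """read_sensors() -> dict of readings; take_photo() stores one JPEG."""
    prev = read_sensors()
    while True:
        time.sleep(POLL_INTERVAL)
        curr = read_sensors()
        if should_capture(prev, curr):
            take_photo()
            prev = curr  # reset the baseline only after each capture
```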
The photos represent almost every experience of the wearer's day. They are taken through a wide-angle lens so that each image is likely to contain most of what the wearer can see. The SenseCam uses flash memory that can store upwards of 2,000 photos per day as .jpg files, though with the larger and faster memory cards of more recent models a wearer typically stores up to 4,000 images per day. These files can then be uploaded and automatically viewed as a daily movie, which can be easily reviewed and indexed using a custom viewer application running on a PC, making it possible to replay the images from a single day in a few minutes. [5] An alternative is to have a day's worth of data automatically segmented into 'events' and to use an event-based browser, which displays each event (of 50, 100, or more individual SenseCam images) using a keyframe chosen to represent that event.
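Event segmentation of this kind can be sketched as follows: compute a feature vector per image, start a new event wherever consecutive images differ by more than a threshold, and pick as keyframe the image closest to its event's mean feature vector. This is an illustration of the general approach, not the published SenseCam browser algorithm; the feature representation (e.g. colour histograms), the distance measure, and the threshold are placeholder assumptions.

```python
import numpy as np

def segment_events(features, threshold=0.5):
    """Split a day's images into events at points of high visual change.

    features: (n_images, dim) array, one feature vector per image
    (e.g. a colour histogram). Returns (start, end) pairs, end exclusive.
    """
    boundaries = [0]
    for i in range(1, len(features)):
        if np.linalg.norm(features[i] - features[i - 1]) > threshold:
            boundaries.append(i)  # large change => a new event starts here
    boundaries.append(len(features))
    return list(zip(boundaries[:-1], boundaries[1:]))

def choose_keyframe(features, start, end):
    """Pick the image closest to the event's mean feature as its keyframe."""
    event = features[start:end]
    mean = event.mean(axis=0)
    return start + int(np.argmin(np.linalg.norm(event - mean, axis=1)))
```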
SenseCams have mostly been used in medical applications, particularly to aid those with poor memory as a result of disease or brain trauma. Several studies have been published by Chris Moulin, Aiden R. Doherty and Alan F. Smeaton [6] showing how reviewing one's SenseCam images can lead to what Martin A. Conway, a memory researcher from the University of Leeds, calls "Proustian moments", [7] characterised as floods of recalled details of some event in the past. SenseCams have also been used in lifelogging, and one researcher at Dublin City University, Ireland, has been wearing a SenseCam for most of his waking hours since 2006 and has generated over 13 million SenseCam images of his life. [8]
In October 2009, SenseCam technology was licensed to Vicon and is now available as a product, the Vicon Revue. [9]
There is a wiki dedicated to SenseCam technical issues, software, and news, and to research activities and publications about, and using, the SenseCam. [10]
With the SenseCam, Microsoft Research has contributed a device that serves lifeloggers among several potential user groups. The SenseCam was first developed to help people with memory loss, and the camera is currently being tested as an aid for those with serious cognitive memory impairment. The SenseCam produces images very similar to one's memories, particularly episodic memory, which usually takes the form of visual imagery. [11] By reviewing the day's filmstrip, patients with Alzheimer's disease, amnesia, and other memory impairments have found it much easier to retrieve lost memories.
Microsoft Research has also tested internal audio level detection and audio recording for the SenseCam, although there are currently no plans to build these into the research prototypes. The research team is also exploring the potential of including sensors that monitor the wearer's heart rate, body temperature, and other physiological signals, along with an electrocardiogram recorder, while pictures are captured.
Other possible applications include using the camera's records for ethnographic studies of social phenomena, monitoring food intake, and assessing an environment's accessibility for people with disabilities. [12]
A wearable computer, also known as a body-borne computer, is a computing device worn on the body. The definition of 'wearable computer' may be narrow or broad, extending to smartphones or even ordinary wristwatches.
Augmented reality (AR) is an interactive experience that combines the real world and computer-generated content. The content can span multiple sensory modalities, including visual, auditory, haptic, somatosensory and olfactory. AR can be defined as a system that incorporates three basic features: a combination of real and virtual worlds, real-time interaction, and accurate 3D registration of virtual and real objects. The overlaid sensory information can be constructive or destructive. This experience is seamlessly interwoven with the physical world such that it is perceived as an immersive aspect of the real environment. In this way, augmented reality alters one's ongoing perception of a real-world environment, whereas virtual reality completely replaces the user's real-world environment with a simulated one.
In photography, angle of view (AOV) describes the angular extent of a given scene that is imaged by a camera. It is used interchangeably with the more general term field of view.
An accelerometer is a device that measures proper acceleration. Proper acceleration is the acceleration of a body in its own instantaneous rest frame; this is different from coordinate acceleration, which is acceleration in a fixed coordinate system. For example, an accelerometer at rest on the surface of the Earth will measure an acceleration due to Earth's gravity, straight upwards, of g ≈ 9.81 m/s². By contrast, an accelerometer in free fall will measure zero.
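The distinction can be made concrete with a one-axis example, taking "up" as positive and ignoring sensor noise and orientation (simplifying assumptions for illustration only): the reading is the coordinate acceleration minus the gravitational acceleration.

```python
G = 9.81  # m/s², magnitude of gravitational acceleration at Earth's surface

def proper_acceleration(coord_accel_z):
    """Vertical accelerometer reading (up positive), noise/orientation ignored.

    Proper acceleration = coordinate acceleration minus gravitational
    acceleration (-G, pointing down) in this sign convention.
    """
    return coord_accel_z - (-G)

print(proper_acceleration(0.0))    # at rest: reads +9.81 (straight upwards)
print(proper_acceleration(-9.81))  # free fall: reads 0.0
```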
Sousveillance is the recording of an activity by a member of the public, rather than a person or organisation in authority, typically by way of small wearable or portable personal technologies. The term, coined by Steve Mann, stems from the contrasting French words sur, meaning "above", and sous, meaning "below", i.e. "surveillance" denotes the "eye-in-the-sky" watching from above, whereas "sousveillance" denotes bringing the means of observation down to human level, either physically or hierarchically.
A fisheye lens is an ultra wide-angle lens that produces strong visual distortion intended to create a wide panoramic or hemispherical image. Fisheye lenses achieve extremely wide angles of view, well beyond any rectilinear lens. Instead of producing images with straight lines of perspective, fisheye lenses use a special mapping, which gives images a characteristic convex non-rectilinear appearance.
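One common fisheye mapping (real lenses use several variants) is the equidistant projection, in which the image radius grows linearly with the angle θ from the optical axis, r = f·θ, rather than the rectilinear r = f·tan θ, which diverges as θ approaches 90°. The comparison below assumes the equidistant mapping and an illustrative focal length:

```python
import math

def rectilinear_radius(f, theta):
    """Rectilinear lens: r = f * tan(theta); blows up as theta -> 90°."""
    return f * math.tan(theta)

def fisheye_radius(f, theta):
    """Equidistant fisheye mapping (one common choice): r = f * theta."""
    return f * theta

f = 8.0  # focal length in mm (illustrative assumption)
for deg in (10, 45, 80):
    t = math.radians(deg)
    print(deg, round(rectilinear_radius(f, t), 1), round(fisheye_radius(f, t), 1))
# The fisheye radius stays finite at wide angles, so a near-hemispherical
# view fits on the sensor, at the cost of curving straight lines.
```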
A light field camera, also known as a plenoptic camera, is a camera that captures information about the light field emanating from a scene; that is, the intensity of light in a scene, and also the precise direction that the light rays are traveling in space. This contrasts with conventional cameras, which record only light intensity at various wavelengths.
Image stabilization (IS) is a family of techniques that reduce blurring associated with the motion of a camera or other imaging device during exposure.
MyLifeBits is a life-logging experiment begun in 2001. It is a Microsoft Research project inspired by Vannevar Bush's hypothetical Memex computer system. The project includes full-text search, text and audio annotations, and hyperlinks. The "experimental subject" of the project is computer scientist Gordon Bell, and the project aims to collect a lifetime of storage on and about Bell. Jim Gemmell of Microsoft Research and Roger Lueder were the architects and creators of the system and its software.
A panomorph lens is a particular type of wide-angle lens specifically designed to improve optical performance in predefined zones of interest, or across the whole image, compared to traditional fisheye lenses. Some examples of improved optical parameters include the number of pixels, the MTF or the relative illumination.
A lifelog is a personal record of one's daily life in a varying amount of detail, for a variety of purposes. The record contains a comprehensive dataset of a human's activities. The data could be used to increase knowledge about how people live their lives. In recent years, some lifelog data has been automatically captured by wearable technology or mobile devices. People who keep lifelogs about themselves are known as lifeloggers.
In photography, an omnidirectional camera, also known as 360-degree camera, is a camera having a field of view that covers approximately the entire sphere or at least a full circle in the horizontal plane. Omnidirectional cameras are important in areas where large visual field coverage is needed, such as in panoramic photography and robotics.
A time-of-flight camera, also known as a time-of-flight sensor, is a range imaging camera system for measuring distances between the camera and the subject for each point of the image based on time-of-flight, the round-trip time of an artificial light signal, as provided by a laser or an LED. Laser-based time-of-flight cameras are part of a broader class of scannerless LIDAR, in which the entire scene is captured with each laser pulse, as opposed to point-by-point with a laser beam such as in scanning LIDAR systems. Time-of-flight camera products for civil applications began to emerge around 2000, as semiconductor processes allowed the production of components fast enough for such devices. The systems cover ranges of a few centimeters up to several kilometers.
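The distance computation itself is simple: d = c·t/2, where c is the speed of light and t the measured round-trip time (halved because the light travels to the subject and back). A minimal sketch:

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_seconds):
    """Distance = (speed of light × round-trip time) / 2."""
    return C * round_trip_seconds / 2.0

# A round trip of about 6.67 nanoseconds corresponds to roughly 1 metre:
print(tof_distance(6.67e-9))  # ≈ 1.0 m
```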
The Narrative Clip is a small wearable lifelogging camera. Its development began in 2012 at the Swedish company Memoto after successful crowdfunding via Kickstarter. It can automatically take a picture every 30 seconds while being worn throughout the day, a practice known as "life-logging". At the end of the day, the Clip uploads the photos and videos it has taken to the vendor's cloud service, where they are processed and organized into collections called Moments, available to the user through a web client or mobile apps. The Moments, or individual photos and videos, can be shared through other apps or through the company's own social network.
Autographer is a hands-free, wearable digital camera developed by OMG Life. The camera uses five different sensors to determine when to automatically take photos and can take up to 2,000 pictures a day. It was released in July 2013 and is used primarily for lifelogging, entertainment and travel. As of 16 October 2016, OMG Life, the company behind Autographer, has discontinued operations.
Cathal Gurrin is an Irish Professor and lifelogger. He is the Head of the Adapt Centre at Dublin City University, a Funded Investigator of the Insight Centre, and the director of the Human Media Archives research group. He was previously the deputy head of the School of Computing.
Tango was an augmented reality computing platform, developed and authored by the Advanced Technology and Projects (ATAP), a skunkworks division of Google. It used computer vision to enable mobile devices, such as smartphones and tablets, to detect their position relative to the world around them without using GPS or other external signals. This allowed application developers to create user experiences that include indoor navigation, 3D mapping, physical space measurement, environmental recognition, augmented reality, and windows into a virtual world.
Smartglasses or smart glasses are eye- or head-worn wearable computers that offer useful capabilities to the user. Many smartglasses include displays that add information alongside or to what the wearer sees. Alternatively, smartglasses are sometimes defined as glasses that are able to change their optical properties, such as smart sunglasses that are programmed to change tint by electronic means, or as glasses that include headphone functionality.
The Samsung Gear 360 is the first 360-degree camera by Samsung Electronics. It was released as part of the Samsung Gear family of devices. It uses two cameras to take 360° photos and videos.
Egocentric vision or first-person vision is a sub-field of computer vision that entails analyzing images and videos captured by a wearable camera, which is typically worn on the head or on the chest and naturally approximates the visual field of the camera wearer. Consequently, visual data capture the part of the scene on which the user focuses to carry out the task at hand and offer a valuable perspective to understand the user's activities and their context in a naturalistic setting.