| Bird | |
|---|---|
| Developer | MUV Interactive |
| Type | Active human sensing |
| Generation | 1 |
| Release date | October 2015 |
| Website | muvinteractive.com (archived from August 11, 2020) |
Bird is an interactive input device designed by MUV Interactive, an Israel-based startup that develops technology for wearable interfaces. [1] [2] [3] Bird connects to computers to turn any surface into an interactive 3D environment. The device offers remote touch, touchpad swipe control, gesture control, touchscreen capability, voice command recognition, and a laser pointer, among other features. [4] [5] [6]
Rami Parham, CEO and founder of MUV Interactive, [3] established the company with his brother in 2011, aiming to create a more advanced way of interacting with connected devices. [7] [8] The company was founded in Herzliya, Israel, with Yuval Ben-Zeev as COO. [9] [10]
In 2013, MUV Interactive raised seed funding from investors, including the OurCrowd funding platform, to develop Bird. [5] [9] [11] [12] Pre-orders for Bird began in 2015, and the device shipped to thousands of customers worldwide the following year. Bird is currently used in corporate, educational, and personal settings. [13]
Bird is worn on the index finger and lets users interact with their digital content. The wearable uses motion-sensing technology to turn a TV or projected image into an interactive display, either up close like a touchscreen or from a distance. [2] [14] Up to five Bird devices can be used on the same surface. [15] [16] The device operates through ten different sensors, including accelerometer, motion, and proximity sensors. Algorithms analyze the sensor data in real time, including the wearer's position in space, pointing direction, hand posture, voice commands, and pressure levels. [5] [17] Bird's sensors accurately detect input up to 100 feet away from the interactive area. [5] [14] [18]
Bird's features let the user interact with the display in several ways. Remote touch acts as a wireless mouse, controlling content from up to 100 feet away. The built-in touchpad lets the user scroll up, down, left, and right. Gesture control lets a user drive content with large hand gestures, which can make presentations more engaging. The touch feature turns any surface into a touchscreen. Bird can also serve as a controller for smart appliances, such as smart light bulbs and thermostats, and can steer drones using gestures. [1] [17]
In computing, a pointing device gesture or mouse gesture is a way of combining pointing device or finger movements and clicks that the software recognizes as a specific computer event and responds to accordingly. They can be useful for people who have difficulties typing on a keyboard. For example, in a web browser, a user can navigate to the previously viewed page by pressing the right pointing device button, moving the pointing device briefly to the left, then releasing the button.
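The browser example above can be sketched as a simple classifier over pointer samples recorded while the gesture button is held down. The function, thresholds, and action names below are illustrative, not taken from any particular browser or toolkit.

```python
# Hypothetical sketch: classify a mouse gesture from pointer samples
# captured between button press and release. All values are invented
# for illustration.

def classify_gesture(points, min_dist=30):
    """points: list of (x, y) samples from press to release."""
    if len(points) < 2:
        return None
    dx = points[-1][0] - points[0][0]
    dy = points[-1][1] - points[0][1]
    if abs(dx) < min_dist and abs(dy) < min_dist:
        return None  # movement too small to count as a gesture
    if abs(dx) >= abs(dy):
        return "left" if dx < 0 else "right"
    return "up" if dy < 0 else "down"

# A browser might map recognized strokes to navigation commands:
ACTIONS = {"left": "history_back", "right": "history_forward"}

stroke = [(400, 300), (360, 302), (310, 305), (250, 303)]
print(ACTIONS.get(classify_gesture(stroke)))  # prints "history_back"
```

The dominant-axis comparison keeps the classifier robust to small vertical wobble during a mostly horizontal stroke.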
A pointing device is a human interface device that allows a user to input spatial data to a computer. CAD systems and graphical user interfaces (GUI) allow the user to control and provide data to the computer using physical gestures by moving a hand-held mouse or similar device across the surface of the physical desktop and activating switches on the mouse. Movements of the pointing device are echoed on the screen by movements of the pointer and other visual changes. Common gestures are point and click and drag and drop.
A touchpad or trackpad is a type of pointing device. Its largest component is a tactile sensor: an electronic device with a flat surface that detects the motion and position of a user's fingers and translates them to a position on a screen, to control a pointer in a graphical user interface. Touchpads are common on laptop computers, in contrast with desktop computers, where mice are more prevalent. Trackpads are sometimes used on desktops where desk space is scarce. Because trackpads can be made small, they can be found on personal digital assistants (PDAs) and some portable media players. Wireless touchpads are also available as detached accessories.
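The core job of a touchpad driver, translating finger motion on the sensor into pointer motion on screen, can be sketched as follows. The fixed gain and the flat linear mapping are simplifying assumptions; real drivers typically add acceleration curves and palm rejection.

```python
# Illustrative sketch of relative pointer motion from touchpad input.
# The gain value and screen size are invented for the example.

def update_pointer(pointer, prev_touch, touch, gain=2.5,
                   screen=(1920, 1080)):
    """Map a finger move on the pad to a clamped pointer position."""
    dx = (touch[0] - prev_touch[0]) * gain
    dy = (touch[1] - prev_touch[1]) * gain
    x = min(max(pointer[0] + dx, 0), screen[0] - 1)
    y = min(max(pointer[1] + dy, 0), screen[1] - 1)
    return (x, y)

pos = (960, 540)
pos = update_pointer(pos, prev_touch=(50, 40), touch=(58, 36))
print(pos)  # prints (980.0, 530.0): finger moved right and up
```

Clamping to the screen bounds mirrors what users observe: the pointer stops at the edge while the finger keeps moving.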
Haptic technology is technology that can create an experience of touch by applying forces, vibrations, or motions to the user. These technologies can be used to create virtual objects in a computer simulation, to control virtual objects, and to enhance remote control of machines and devices (telerobotics). Haptic devices may incorporate tactile sensors that measure forces exerted by the user on the interface. The word haptic, from the Greek: ἁπτικός (haptikos), means "tactile, pertaining to the sense of touch". Simple haptic devices are common in the form of game controllers, joysticks, and steering wheels.
A touchscreen or touch screen is the assembly of both an input and output ('display') device. The touch panel is normally layered on the top of an electronic visual display of an electronic device.
Synaptics Incorporated is a publicly owned San Jose, California-based developer of human-machine interface (HMI) hardware and software, including touchpads for laptop computers; touch, display driver, and fingerprint biometrics technology for smartphones; and touch, video, and far-field voice technology for smart home devices and automotive applications. Synaptics sells its products to original equipment manufacturers (OEMs) and display manufacturers.
Gesture recognition is an area of research and development in computer science and language technology concerned with the recognition and interpretation of human gestures. A subdiscipline of computer vision, it employs mathematical algorithms to interpret gestures. Gestures can originate from any bodily motion or state, but commonly originate from the face or hand. One area of the field is emotion recognition derived from facial expressions and hand gestures. Users can make simple gestures to control or interact with devices without physically touching them. Many approaches use cameras and computer vision algorithms to interpret sign language; the identification and recognition of posture, gait, proxemics, and human behaviors are also subjects of gesture recognition techniques. Gesture recognition is a path for computers to begin to better understand and interpret human body language, which was not possible through text-based or graphical user interfaces (GUIs).
A virtual keyboard is a software component that allows the input of characters without the need for physical keys. The interaction with the virtual keyboard happens mostly via a touchscreen interface, but can also take place in a different form in virtual or augmented reality.
In computing, multi-touch is technology that enables a surface to recognize the presence of more than one point of contact with the surface at the same time. The origins of multi-touch lie at CERN, MIT, the University of Toronto, Carnegie Mellon University, and Bell Labs in the 1970s. CERN started using multi-touch screens as early as 1976 for the controls of the Super Proton Synchrotron. Capacitive multi-touch displays, a form of gesture recognition hardware, were popularized by Apple's iPhone in 2007. Plural-point awareness may be used to implement additional functionality, such as pinch to zoom or to activate certain subroutines attached to predefined gestures.
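Pinch to zoom, mentioned above, reduces to comparing the distance between two contact points over time: the zoom factor is the ratio of the current finger separation to the separation when the contacts first appeared. A minimal sketch, with invented coordinates:

```python
import math

# Minimal pinch-to-zoom sketch: two (x, y) contacts sampled at the
# start of the gesture and at the current instant.

def pinch_zoom(start_pts, current_pts):
    """Return a zoom factor from two touch points at two instants."""
    d0 = math.dist(start_pts[0], start_pts[1])
    d1 = math.dist(current_pts[0], current_pts[1])
    return d1 / d0 if d0 else 1.0

start = [(100, 100), (200, 100)]   # fingers 100 px apart
now   = [(80, 100), (230, 100)]    # fingers 150 px apart
print(pinch_zoom(start, now))      # prints 1.5: zoom in by 50%
```

A factor above 1 means the fingers spread apart (zoom in); below 1 means they moved together (zoom out).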
In electrical engineering, capacitive sensing is a technology, based on capacitive coupling, that can detect and measure anything that is conductive or has a dielectric constant different from air. Many types of sensors use capacitive sensing, including sensors to detect and measure proximity, pressure, position and displacement, force, humidity, fluid level, and acceleration. Human interface devices based on capacitive sensing, such as touchpads, can replace the computer mouse. Digital audio players, mobile phones, and tablet computers will sometimes use capacitive sensing touchscreens as input devices. Capacitive sensors can also replace mechanical buttons.
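A common way a capacitive touch controller decides that a finger is present is to compare the raw reading against a slowly adapting baseline: a touch adds capacitance, pushing the reading well above the baseline, while the baseline itself tracks slow environmental drift. The sketch below uses invented units, threshold, and filter constant.

```python
# Hedged sketch of capacitive touch detection via baseline tracking.
# Sample values, threshold, and filter constant are invented.

def detect_touches(samples, threshold=8, alpha=0.02):
    """Return indices of samples that register as touches."""
    baseline = samples[0]
    events = []
    for i, raw in enumerate(samples):
        if raw - baseline > threshold:
            events.append(i)           # finger adds capacitance
        else:
            # Update the baseline only when idle, so it follows
            # slow drift (temperature, humidity) but not touches.
            baseline += alpha * (raw - baseline)
    return events

readings = [100, 100, 101, 100, 115, 116, 115, 101, 100]
print(detect_touches(readings))  # prints [4, 5, 6]
```

Freezing the baseline during a touch is the key design choice: otherwise a long press would be absorbed into the baseline and the touch would appear to end on its own.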
In computing, an input device is a piece of equipment used to provide data and control signals to an information processing system, such as a computer or information appliance. Examples of input devices include keyboards, mice, scanners, cameras, joysticks, and microphones.
In computing, a stylus is a small pen-shaped instrument whose tip position on a computer monitor can be detected. It is used to draw, or make selections by tapping. While devices with touchscreens such as newer computers, mobile devices, game consoles, and graphics tablets can usually be operated with a fingertip, a stylus provides more accurate and controllable input. The stylus has the same function as a mouse or touchpad as a pointing device; its use is commonly called pen computing.
Skinput is an input technology that uses bio-acoustic sensing to localize finger taps on the skin. When augmented with a pico-projector, the device can provide a direct manipulation, graphical user interface on the body. The technology was developed by Chris Harrison, Desney Tan, and Dan Morris at Microsoft Research's Computational User Experiences Group. Skinput represents one way to decouple input from electronic devices with the aim of allowing devices to become smaller without simultaneously shrinking the surface area on which input can be performed. While other systems, such as SixthSense, have attempted this with computer vision, Skinput employs acoustics, which take advantage of the human body's natural sound conductive properties. This allows the body to be annexed as an input surface without the need for the skin to be invasively instrumented with sensors, tracking markers, or other items.
PrimeSense was an Israeli 3D sensing company based in Tel Aviv. PrimeSense had offices in Israel, North America, Japan, Singapore, Korea, China and Taiwan. PrimeSense was bought by Apple Inc. for $360 million on November 24, 2013.
Microsoft Tablet PC is a term coined by Microsoft for tablet computers conforming to a set of hardware specifications devised by Microsoft and announced in 2001 for a pen-enabled personal computer, running a licensed copy of the Windows XP Tablet PC Edition operating system or a derivative thereof.
OmniTouch is a wearable computer, depth-sensing camera, and projection system that enables interactive multi-touch interfaces on everyday surfaces. Beyond the shoulder-worn system, there is no instrumentation of the user or the environment. For example, the shoulder-worn implementation allows users to manipulate interfaces projected onto the environment, held objects, and their own bodies. On such surfaces, without any calibration, OmniTouch provides capabilities similar to those of a touchscreen: X and Y location in 2D interfaces and whether fingers are "clicked" or hovering. This enables a wide variety of applications, similar to what one might find on a modern smartphone. A user study assessing the pointing accuracy of the system suggested buttons needed to be 2.3 cm (0.91 in) in diameter to achieve reliable operation on the hand, and 1.6 cm (0.63 in) on walls. This approaches the accuracy of capacitive touchscreens, like those found in smartphones, but on arbitrary surfaces.
A smudge attack is an information extraction attack that discerns the password input of a touchscreen device, such as a cell phone or tablet computer, from fingerprint smudges. A team of researchers at the University of Pennsylvania was the first to investigate this type of attack, in 2010. An attack occurs when an unauthorized user is in possession of, or near, the device of interest. The attacker relies on detecting the oily smudges produced and left behind by the user's fingers to find the pattern or code needed to access the device and its contents. Simple cameras, lights, fingerprint powder, and image processing software can be used to capture the fingerprint deposits created when the user unlocks their device. Under proper lighting and camera settings, the finger smudges can be easily detected, and the heaviest smudges can be used to infer the most frequent input swipes or taps from the user.
An optical head-mounted display (OHMD) is a wearable device that can reflect projected images while allowing the user to see through it. In some cases, this may qualify as augmented reality (AR) technology. OHMD technology has existed since 1997 in various forms but, despite a number of attempts from industry, has yet to achieve major commercial success.
Smartglasses or smart glasses are eye- or head-worn wearable computers that offer useful capabilities to the user. Many smartglasses include displays that add information alongside or on top of what the wearer sees. Alternatively, smartglasses are sometimes defined as glasses that can change their optical properties, such as smart sunglasses programmed to change tint by electronic means, or as glasses that include headphone functionality.
A smart ring is a compact wearable electronic device that combines mobile technology with features for convenient on-the-go use. These devices, typically designed to fit on a finger like a traditional ring, offer functionalities like mobile payments, access control, gesture control, and activity tracking. Smart rings can connect to smartphones or other devices, and some can operate independently, communicating with cloud-based systems or performing standalone tasks. While lacking traditional displays, they respond to contextual cues, such as proximity to payment terminals or specific gestures.