Displair

Displair is a 3D interactive raster display technology developed by a Russian company of the same name. The Displair projects images onto a sheet of water droplets suspended in the air, giving the illusion of a hologram. Unlike other cold-fog projection technologies, the images projected by the Displair respond rapidly to multi-touch manipulation, and the system also allows taste and aroma to be incorporated.

Russian-made Displair screenless 3D multi-touch display being demonstrated at the Consumer Electronics Show (CES) 2013, Las Vegas

History

Developer Maxim Kamanin introduced the Displair at the Seliger 2010 forum. In July of that year he chose "Displair" as the name of both the product and the company, a portmanteau of the English words "display" and "air".[1] The company subsequently obtained investment for further development of the prototype, technology licensing, and small-scale commercial production.[2] Applications to date have included displays for in-store advertising and kiosks.[3]

Technology

The Displair device projects still and moving images onto a "screenless" display consisting of a stable flow of cold air containing water particles produced by cavitation. The particles are small enough not to leave traces of moisture, and their surface tension is high enough for the image surface to remain stable after contact with physical objects or wind.[4]

Displair uses third-party computerised multi-touch technology to allow images to be controlled with the fingers or other objects. The display can track up to 1,500 touch points simultaneously with a response delay of less than 0.2 seconds.[5] This allows manipulation by more than one user at a time, as well as recognition of more complex gestures than similar 3D display systems. The company is also working on incorporating flavoring and taste interaction with projected images in the future.[6]
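The cited sources do not describe Displair's control software, but the figures above (up to 1,500 simultaneous touch points, under 0.2 seconds of delay) can be read as constraints on a touch-input pipeline. The sketch below is purely illustrative: the TouchPoint structure, the process_frame function and the filtering logic are assumptions made for demonstration, not Displair's actual API.

```python
# Illustrative sketch only: Displair's software is proprietary, so this models
# a generic multi-touch pipeline with hypothetical names. It enforces the two
# published limits: at most 1,500 simultaneous contacts, each handled within a
# 0.2-second latency budget.

import time
from dataclasses import dataclass


@dataclass
class TouchPoint:
    touch_id: int       # stable identifier for one finger/object contact
    x: float            # normalised horizontal position, 0.0-1.0
    y: float            # normalised vertical position, 0.0-1.0
    timestamp: float    # time the tracker reported the contact


MAX_POINTS = 1500       # simultaneous contacts cited for the display
LATENCY_BUDGET = 0.2    # seconds between contact and on-screen response


def process_frame(points: list[TouchPoint]) -> list[TouchPoint]:
    """Drop contacts that are already stale and cap the count at MAX_POINTS."""
    now = time.monotonic()
    fresh = [p for p in points if now - p.timestamp <= LATENCY_BUDGET]
    return fresh[:MAX_POINTS]


if __name__ == "__main__":
    # Simulate a frame with three fingers touching the fog screen.
    frame = [TouchPoint(i, 0.1 * i, 0.2 * i, time.monotonic()) for i in range(3)]
    for p in process_frame(frame):
        print(f"touch {p.touch_id} at ({p.x:.2f}, {p.y:.2f})")
```

A real gesture-tracking system would additionally smooth positions and match contacts across frames; this sketch only illustrates the published point-count and latency limits.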

See also

Graphics tablet
Touchscreen
Volumetric display
Interactive kiosk
Gesture recognition
Heliodisplay
Screenless video
Multi-touch
Pen computing
Microsoft PixelSense
Surface computing
Holographic display
Capacitive sensing
Natural user interface
Holographic screen
Ideum
Virtual touch screen
Zytronic
Fog display
Sprout (computer)

References

  1. Vasily Romantsov (2012-02-07). "Русский гаджет: Максим Каманин" [The Russian Gadget: Maxim Kamanin]. Бизнес-журнал (in Russian). Archived from the original on 2012-06-22. Retrieved 2012-06-10.
  2. "Russian air screen was invested with $1M". Lenta.ru. 2012-05-24. Retrieved 2012-06-10.
  3. "In thin air: Could touch display projected on mist replace physical screens?". CNN. 2013-12-24. Retrieved 2013-12-27.
  4. "Displair company from Astrakhan demonstrated interactive screenless display". kiosks.ru. 2012-04-17. Retrieved 2012-06-08.
  5. "Move over Kinect — Displair from Russia is a gesture interface in thin air". TechCrunch. Retrieved 2013-12-27.
  6. "Astrakhan company Displair showed interactive multi-touch screen". Retrieved 2013-12-27.