Jefferson Y. "Jeff" Han (born 1975) is a computer scientist who worked for New York University's (NYU) Courant Institute of Mathematical Sciences until 2006. He is one of the main developers of "multi-touch sensing", which, unlike older touch-screen interfaces, can recognize multiple simultaneous points of contact.
Han also works on other projects in the fields of autonomous robot navigation, motion capture, real-time computer graphics, and human-computer interaction.
He presented his multi-touch sensing work in February 2006 at the TED (Technology Entertainment Design) Conference in Monterey, California. TED released the video online six months later and it spread quickly on YouTube. [1]
Han founded a company called Perceptive Pixel to develop his touch-screen technology further, and has shipped touch screens to parts of the military. [2] Han's technology has been featured most notably as the "Magic Wall" on CNN's Election Center coverage. [3] Microsoft acquired Perceptive Pixel in 2012, and Han became Partner General Manager of the Perceptive Pixel (later Surface Hub) group. He left Microsoft in late 2015, shortly before the Surface Hub's launch. [4]
He is the son of middle-class Korean parents who immigrated to the United States in the 1970s.
Han graduated from The Dalton School in New York in 1993 and studied computer science and electrical engineering for three years at Cornell University before leaving to join a start-up that commercialized CU-SeeMe, the video-conferencing software he had helped develop as an undergraduate. [2]
Han was named to Time magazine's 2008 list of the 100 most influential people in the world. [5]
A touchscreen is a type of display that can detect touch input from a user, combining an input device (the touch panel) and an output device (the visual display). The touch panel is typically layered on top of the electronic visual display of a device. Touchscreens are commonly found in smartphones, tablets, laptops, and other electronic devices.
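As a minimal sketch of the input side, the snippet below reads contact coordinates through the browser's standard Touch Events API (TypeScript). The element id "screen" is a hypothetical placeholder, not anything from a particular device.

```typescript
// Minimal sketch: reading touch contacts via the standard DOM Touch Events API.
// Assumes a browser environment; the element id "screen" is hypothetical.
const panel = document.getElementById("screen")!;

panel.addEventListener("touchstart", (e: TouchEvent) => {
  // Each Touch object reports where a finger contacts the display,
  // linking the input device (touch panel) to the output device (screen).
  for (const t of Array.from(e.touches)) {
    console.log(`contact ${t.identifier} at (${t.clientX}, ${t.clientY})`);
  }
});
```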
Gesture recognition is an area of research and development in computer science and language technology concerned with the recognition and interpretation of human gestures. A subdiscipline of computer vision, it employs mathematical algorithms to interpret gestures.
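To make "interpreting gestures with mathematical algorithms" concrete, here is a deliberately simplified touch-based sketch, not any production recognizer: it classifies a one-finger swipe by the dominant axis of its displacement vector. The 30-pixel threshold is an illustrative assumption.

```typescript
// Toy gesture recognizer: classify a single-finger swipe by its
// displacement vector. The 30 px threshold is an illustrative choice.
type Swipe = "left" | "right" | "up" | "down" | "none";

function classifySwipe(
  start: { x: number; y: number },
  end: { x: number; y: number },
  minDistance = 30
): Swipe {
  const dx = end.x - start.x;
  const dy = end.y - start.y;
  if (Math.hypot(dx, dy) < minDistance) return "none"; // too short: treat as a tap
  // The larger component of the displacement decides the gesture.
  if (Math.abs(dx) > Math.abs(dy)) return dx > 0 ? "right" : "left";
  return dy > 0 ? "down" : "up";
}
```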
In computing, multi-touch is technology that enables a surface to recognize the presence of more than one point of contact with the surface at the same time. Multi-touch research began at CERN, MIT, the University of Toronto, Carnegie Mellon University, and Bell Labs in the 1970s; CERN was using multi-touch screens as early as 1976 for the controls of the Super Proton Synchrotron. Capacitive multi-touch displays were popularized by Apple's iPhone in 2007. Multi-touch may be used to implement additional functionality, such as pinch-to-zoom, or to trigger predefined actions through gesture recognition, as sketched below.
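A hedged sketch of pinch-to-zoom using the browser's Touch Events API: the zoom scale is simply the ratio of the current distance between two contact points to their distance when the second finger touched down. The element id "surface" is a hypothetical placeholder.

```typescript
// Pinch-to-zoom sketch: scale = current finger spacing / initial spacing.
// Browser environment assumed; the element id "surface" is hypothetical.
function spacing(a: Touch, b: Touch): number {
  return Math.hypot(b.clientX - a.clientX, b.clientY - a.clientY);
}

let initialSpan = 0;
let scale = 1;
const surface = document.getElementById("surface")!;

surface.addEventListener("touchstart", (e: TouchEvent) => {
  if (e.touches.length === 2) {
    // Second finger down: remember the starting distance between contacts.
    initialSpan = spacing(e.touches[0], e.touches[1]);
  }
});

surface.addEventListener(
  "touchmove",
  (e: TouchEvent) => {
    if (e.touches.length === 2 && initialSpan > 0) {
      e.preventDefault(); // keep the browser from panning/zooming the page
      scale = spacing(e.touches[0], e.touches[1]) / initialSpan;
      surface.style.transform = `scale(${scale})`; // apply the zoom
    }
  },
  { passive: false } // allow preventDefault on touchmove
);
```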
A touch user interface (TUI) is a computer-pointing technology based upon the sense of touch (haptics). Whereas a graphical user interface (GUI) relies upon the sense of sight, a TUI not only enables the sense of touch to activate computer-based functions, but also allows users, particularly those with visual impairments, an added level of interaction based upon tactile or Braille input.
Perceptive Pixel was a developer and producer of multi-touch interfaces. It was purchased by Microsoft in 2012. Its technology is now used in fields including broadcast, defense, geo-intelligence, energy exploration, industrial design and medical imaging.
Pen computing refers to any computer user interface that uses a pen or stylus and tablet, rather than input devices such as a keyboard or mouse.
Microsoft PixelSense was an interactive surface computing platform that allowed one or more people to use and touch real-world objects and share digital content at the same time. The PixelSense platform consisted of software and hardware products that combined vision-based multi-touch PC hardware, 360-degree multiuser application design, and Windows software to create a natural user interface (NUI).
A surface computer is a computer that interacts with the user through the surface of an ordinary object, rather than through a monitor, keyboard, mouse, or other physical hardware.
Surface computing is the use of a specialized computer GUI in which traditional GUI elements are replaced by intuitive, everyday objects. Instead of a keyboard and mouse, the user interacts with a surface. Typically the surface is a touch-sensitive screen, though other surface types, such as non-flat three-dimensional objects, have been implemented as well. This is intended to replicate more closely the familiar hands-on experience of everyday object manipulation.
A text entry interface or text entry device is an interface used to enter text information into an electronic device. A commonly used device is the mechanical computer keyboard. Most laptop computers have an integrated mechanical keyboard, and desktop computers are usually operated primarily using a keyboard and mouse. On devices such as smartphones and tablets, interfaces such as virtual keyboards and voice recognition have become increasingly popular text entry systems.
The Multi-Touch Collaboration Wall is a large monitor invented by Jeff Han that employs multi-touch technology, and is marketed by Han's company, Perceptive Pixel. Han initially developed the technology for military applications. The wall has received most of its publicity because of its use by news network CNN during its coverage of the 2008 US presidential election. Usually operated by John King, it is often referred to as CNN's "Magic Wall", or the "Magic Map."
In human–computer interaction, an organic user interface (OUI) is defined as a user interface with a non-flat display. After Engelbart and Sutherland's graphical user interface (GUI), which was based on the cathode ray tube (CRT), and Kay and Weiser's ubiquitous computing, which was based on the flat-panel liquid-crystal display (LCD), OUI represents one possible third wave of display interaction paradigms, pertaining to multi-shaped and flexible displays. In an OUI, the display surface is always the focus of interaction, and may actively or passively change shape upon analog inputs. These inputs are provided through direct physical gestures rather than through indirect point-and-click control. The term "organic" in OUI derives from organic architecture, referring to the adoption of natural form to design a better fit with human ecology; it also alludes to the use of organic electronics for this purpose.
In computing, a natural user interface (NUI) or natural interface is a user interface that is effectively invisible, and remains invisible as the user continuously learns increasingly complex interactions. The word "natural" is used because most computer interfaces rely on artificial control devices whose operation has to be learned. Examples include voice assistants such as Alexa and Siri, touch and multi-touch interactions on mobile phones and tablets, and touch interfaces invisibly integrated into the textiles of furniture.
The DiamondTouch table is a multi-touch, interactive PC interface product from Circle Twelve Inc. It is a human interface device that has the capability of allowing multiple people to interact simultaneously while identifying which person is touching where. The technology was originally developed at Mitsubishi Electric Research Laboratories (MERL) in 2001 and later licensed to Circle Twelve Inc in 2008. The DiamondTouch table is used to facilitate face-to-face collaboration, brainstorming, and decision-making, and users include construction management company Parsons Brinckerhoff, the Methodist Hospital, and the US National Geospatial-Intelligence Agency (NGA).
A virtual touch screen (VTS) is a user interface system that augments virtual objects into reality, either through a projector or an optical display, using sensors to track a person's interaction with the objects. For instance, using a display and a rear-projector system, a person could create images that look three-dimensional and appear to float in midair. Some systems use an optical head-mounted display to augment the virtual objects onto the transparent display, using sensors to determine visual and physical interactions with the virtual objects projected.
Tablet computers and their associated special operating software are an example of pen computing technology, and the development of tablets has deep historical roots. The first patent for a system that recognized handwritten characters by analyzing the handwriting motion was granted in 1914. The first publicly demonstrated system using a tablet and handwriting recognition, instead of a keyboard, to work with a modern digital computer dates to 1956.
Microsoft Tablet PC is a term coined by Microsoft for pen-enabled tablet computers that conform to hardware specifications devised by Microsoft, announced in 2001, and run a licensed copy of the Windows XP Tablet PC Edition operating system or a derivative thereof.
Joe Belfiore is an American business executive who has held various roles at Microsoft since August 1990, mostly in the field of user experience. A frequent speaker, Belfiore has appeared at many Microsoft conferences, often giving demos on stage or acting as a spokesperson for the company. In 2018, he was named the #1 Microsoft influencer for fans to follow on Twitter. In 2004, he gave a TED Talk in person at the TED Conference in Monterey, CA. In summer 2023 he retired from Microsoft and is now active as chair of a non-profit board.
The Surface Hub is a brand of interactive whiteboard developed and marketed by Microsoft, as part of the Microsoft Surface family. The Surface Hub is a wall-mounted or roller-stand-mounted device with either a 55-inch (140 cm) 1080p or an 84-inch (210 cm) 4K 120 Hz touchscreen with multi-touch and multi-pen capabilities, running the Windows 10 operating system. The devices are targeted for businesses to use while collaborating and videoconferencing.
Chris Harrison is a British-born, American computer scientist and entrepreneur, working in the fields of human–computer interaction, machine learning and sensor-driven interactive systems. He is a professor at Carnegie Mellon University and director of the Future Interfaces Group within the Human–Computer Interaction Institute. He has previously conducted research at AT&T Labs, Microsoft Research, IBM Research and Disney Research. He is also the CTO and co-founder of Qeexo, a machine learning and interaction technology startup.