Patrick Baudisch

Patrick Baudisch is a computer science professor and the chair of the Human Computer Interaction Lab at the Hasso Plattner Institute, University of Potsdam. While his early research interests revolved around natural user interfaces and interactive devices, his research focus shifted to virtual reality and haptics in the late 2000s and to digital fabrication, such as 3D printing and laser cutting, in the 2010s. [1] Before teaching and researching at the Hasso Plattner Institute, Baudisch was a research scientist at Microsoft Research and Xerox PARC. He has been a member of the CHI Academy since 2013 and an ACM Distinguished Scientist since 2014. He holds a PhD in computer science from Technische Universität Darmstadt, Germany. [1]

Current Research Interest: Personal Fabrication

Baudisch's main research interest lies in digital fabrication, focusing largely on software systems that allow users to design and fabricate physical objects using 3D printers (such as TrussFab [2] and http://brickify.it) and laser cutters (https://kyub.com).

Baudisch publishes primarily at the ACM CHI and ACM UIST conferences. [3]

An overview of Baudisch's research agenda on personal fabrication can be found in Patrick Baudisch and Stefanie Mueller (2017), "Personal Fabrication", Foundations and Trends in Human–Computer Interaction: Vol. 10: No. 3–4, pp. 165–293. http://dx.doi.org/10.1561/1100000055

Research Interests prior to 2012

Prior to 2012, Baudisch's research interests were natural user interfaces and interactive devices, including miniature mobile devices, touch input, interactive floors, and interactive rooms. [1]

From this era, his five most cited publications on Google Scholar Citations are: "Precise selection techniques for multi-touch screens", "Halo: a technique for visualizing off-screen objects", "Shift: a technique for operating pen-based interfaces using touch", "Drag-and-pop and drag-and-pick: techniques for accessing remote screen content on touch- and pen-operated systems", and "Lucid touch: a see-through mobile device". [4]

Precise selection techniques for multi-touch screens

The paper "Precise selection techniques for multi-touch screens" was published in April 2006 by Hrvoje Benko, Andrew D. Wilson, and Patrick Baudisch, and has been cited 530 times. [5]

The paper presents five techniques, called Dual Finger Selections, that leverage multi-touch-sensitive displays to help users select very small on-screen targets. The techniques address problems of touch-screen interaction caused by factors such as large finger size and limited sensing precision. With these techniques, a user adjusts the control-display ratio with a secondary finger while controlling the cursor with the primary finger. The paper also introduces SimPress, a clicking technique that reduces motion errors during clicking and enables a hover state on devices that cannot sense finger proximity. [6]
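
The interplay between the two fingers can be illustrated with a short sketch in the spirit of the Dual Finger Stretch variant: the primary finger moves the cursor, while the spread to the secondary finger scales down the control-display ratio. This is a minimal Python illustration, not the paper's implementation; the names, constants, and exact scaling rule are assumptions.

# Illustrative sketch of a dual-finger cursor with an adjustable
# control-display (CD) ratio: the primary finger moves the cursor,
# while the spread between primary and secondary finger slows the
# cursor down for precise selection of small targets.
# Names and the scaling rule are assumptions for illustration only.

def cd_ratio(primary, secondary, max_slowdown=10.0, max_spread=300.0):
    """Map the distance between the two fingers to a slow-down factor.

    A larger spread between the fingers yields a stronger slow-down
    (i.e., more cursor precision), capped at max_slowdown.
    """
    dx = primary[0] - secondary[0]
    dy = primary[1] - secondary[1]
    spread = (dx * dx + dy * dy) ** 0.5
    return 1.0 + (max_slowdown - 1.0) * min(spread / max_spread, 1.0)

def move_cursor(cursor, primary_delta, primary, secondary):
    """Advance the cursor by the primary finger's motion, scaled down."""
    ratio = cd_ratio(primary, secondary)
    return (cursor[0] + primary_delta[0] / ratio,
            cursor[1] + primary_delta[1] / ratio)

# Example: with the fingers spread 150 px apart, a 10 px finger motion
# moves the cursor only ~1.8 px, making very small targets reachable.
cursor = (100.0, 100.0)
print(move_cursor(cursor, (10.0, 0.0), (400.0, 300.0), (250.0, 300.0)))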

The user study in the paper reported that, in terms of error-rate reduction, the three chosen techniques (Stretch, X-Menu, and Slider) outperformed the control technique and were favoured by the participants, with Stretch showing the best performance and the highest preference overall. [6]

Halo: a technique for visualizing off-screen objects

The paper "Halo: a technique for visualizing off-screen objects" was published in April 2003 by Patrick Baudisch and Ruth Rosenholtz, and has been cited 453 times. [7]

The paper introduces Halo, a visualization technique that shows users the location of off-screen objects and thereby supports spatial cognition. Halo achieves this by displaying arcs of rings in the border region of the display, each ring centred on an off-screen object. Users can then infer the exact location of an off-screen object from the position and visible portion of its ring. [8]
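
The geometric idea can be sketched in a few lines: each off-screen object becomes the centre of a ring whose radius is chosen so that an arc of the ring reaches a fixed depth into the display, so the arc's position and curvature encode direction and distance. The following Python sketch illustrates that idea under assumed screen dimensions and intrusion depth; names and constants are illustrative, not taken from the paper.

# Illustrative sketch of Halo-style ring geometry: an off-screen point
# becomes the centre of a ring whose radius is chosen so that the ring
# reaches a fixed intrusion depth inside the screen edge. The visible
# arc then tells the user where, and roughly how far away, the
# off-screen object is. Constants and names are assumptions.

SCREEN_W, SCREEN_H = 320, 240   # display size in pixels (assumed)
INTRUSION = 20                  # how far the arc reaches into the screen

def halo_radius(obj_x, obj_y):
    """Radius of the ring centred on the off-screen object (obj_x, obj_y)."""
    # Distance from the object to the nearest point of the screen rectangle.
    dx = max(0 - obj_x, 0, obj_x - SCREEN_W)
    dy = max(0 - obj_y, 0, obj_y - SCREEN_H)
    distance_to_screen = (dx * dx + dy * dy) ** 0.5
    # Make the ring poke INTRUSION pixels past the edge into the display.
    return distance_to_screen + INTRUSION

# Example: an object 100 px beyond the right edge produces a ring of
# radius 120 px, so only a shallow arc is visible at that edge; a
# farther object produces a flatter (larger-radius) arc.
print(halo_radius(SCREEN_W + 100, 120))   # -> 120.0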

The paper reported that users completed tasks 16-33% faster with Halo, with no significant difference in error rates within the scope of the study. [8]

Shift: a technique for operating pen-based interfaces using touch

The paper "Shift: a technique for operating pen-based interfaces using touch" was published on April 29, 2007, by Daniel Vogel and Patrick Baudisch, and has been cited 429 times. [9]

The paper proposes Shift, a pointing technique that lets users precisely select small targets on screens designed for styluses using their fingers rather than a stylus. Shift reduces targeting times and error rates by showing a copy of the occluded screen area in a separate, non-occluded location, together with a pointer marking the actual selection point of the finger. Shift is invoked only when the user needs it, so the conventional touch-screen experience remains available otherwise. [10]
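
A rough sketch of this escalation step, copying the occluded patch under the finger into a callout placed away from the finger together with a crosshair for the actual selection point, might look as follows. This is a Python illustration; all sizes, offsets, and names are assumptions rather than the paper's implementation.

# Illustrative sketch of a Shift-style callout: when the finger lands
# near a small target, the occluded patch under the finger is copied
# into a callout placed above the contact point, together with a
# crosshair showing where the touch would actually select.
# All sizes, offsets, and names are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class Callout:
    src_x: int      # top-left corner of the occluded patch to copy
    src_y: int
    size: int       # width/height of the copied patch
    dst_x: int      # where the copy is displayed (non-occluded)
    dst_y: int
    pointer_x: int  # actual selection point, drawn as a crosshair
    pointer_y: int

def make_callout(touch_x, touch_y, patch=48, offset=60):
    """Build the callout for a touch at (touch_x, touch_y)."""
    # Place the copy above the finger; fall back to below near the top edge.
    dst_y = touch_y - offset - patch
    if dst_y < 0:
        dst_y = touch_y + offset
    return Callout(src_x=touch_x - patch // 2, src_y=touch_y - patch // 2,
                   size=patch, dst_x=touch_x - patch // 2, dst_y=dst_y,
                   pointer_x=touch_x, pointer_y=touch_y)

# Example: a touch at (200, 90) is too close to the top edge for the
# callout to fit above the finger, so it is shown below instead.
print(make_callout(200, 90))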

The reported results show that participants made far fewer errors than with an ordinary touch screen and, for larger targets, were faster overall than with the Offset Cursor technique. [10]

Drag-and-pop and drag-and-pick: Techniques for accessing remote screen content on touch- and pen-operated systems

The paper "Drag-and-pop and drag-and-pick: techniques for accessing remote screen content on touch- and pen-operated systems" was published in August 2003 by Patrick Baudisch, Edward Cutrell, Dan Robbins, Mary Czerwinski, Peter Tandler, Benjamin Bederson, and Alex Zierlinger, and has been cited 405 times. [11]

The paper presents the interaction techniques drag-and-pop and drag-and-pick, designed for pen- and touch-operated display systems; they give users access to screen content that would otherwise be hard to reach. Drag-and-pop extends traditional drag-and-drop: when the user starts dragging an icon, potential target icons temporarily move toward the user's cursor, so that only a small hand movement is needed to complete the drop. Drag-and-pick extends this idea further by also allowing users to activate icons, for example to open folders or applications. [12]
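
The choice of which icons move toward the cursor can be sketched as a simple angular filter on the drag direction. The following Python sketch is an illustration under assumed thresholds and names, not the published implementation.

# Illustrative sketch of a drag-and-pop-style step: icons that lie
# roughly in the direction the user is dragging are treated as
# candidates to be shown as "tip icons" near the cursor, so the drop
# requires only a short hand movement. Thresholds and names are
# assumptions for illustration.

import math

def candidate_targets(drag_origin, drag_dir, icons, max_angle_deg=30):
    """Return icons whose direction from the drag origin is within
    max_angle_deg of the current drag direction."""
    ox, oy = drag_origin
    dlen = math.hypot(drag_dir[0], drag_dir[1])
    selected = []
    for name, (ix, iy) in icons.items():
        vx, vy = ix - ox, iy - oy
        vlen = math.hypot(vx, vy)
        if vlen == 0 or dlen == 0:
            continue
        cos_angle = (vx * drag_dir[0] + vy * drag_dir[1]) / (vlen * dlen)
        angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
        if angle <= max_angle_deg:
            selected.append(name)
    return selected

# Example: dragging toward the right edge of a large display selects
# the two icons in that direction as candidates, but not a folder in
# the opposite direction.
icons = {"Recycle Bin": (1800, 500), "Printer": (1750, 620), "Folder": (100, 900)}
print(candidate_targets((400, 550), (1.0, 0.05), icons))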

Results of the study report that the drag-and-pop interface enabled participants to file icons up to 3.7 times faster than traditional drag-and-drop on a 15-foot-wide display. [12]

Lucid touch: a see-through mobile device

The paper "Lucid touch: a see-through mobile device" was published on October 7, 2007, by Daniel Wigdor, Clifton Forlines, Patrick Baudisch, John Barnwell, and Chia Shen, and has been cited 303 times. [13]

The paper focuses on a difficult aspect of touch screens: touching the small screen of a mobile device can be awkward because the user's hands and fingers occlude the very content they want to interact with. The paper introduces LucidTouch, a mobile device that users control by touching its back. LucidTouch overlays an image of the user's hands on the screen, creating the illusion that the device is transparent when in fact it is not. This pseudo-transparency lets users acquire targets with greater precision. Furthermore, like an ordinary mobile touch screen, LucidTouch senses multiple touch points at the same time, so users can perform multi-touch actions. [14]
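
One way to picture the pseudo-transparency is as a coordinate mapping: touch points sensed on the rear surface are mirrored horizontally before being drawn on the front display, so fingers appear where they would be seen through a truly transparent device. The Python sketch below illustrates this under an assumed sensor orientation and display resolution; it is not based on the LucidTouch implementation.

# Illustrative sketch of mapping rear touches onto the front display of
# a pseudo-transparent device: coordinates sensed on the back are
# mirrored horizontally so that fingers appear roughly where the user
# would see them if the device were see-through. Multiple simultaneous
# contacts are handled. Resolution and names are assumptions.

SCREEN_W, SCREEN_H = 480, 320  # front display resolution (assumed)

def rear_to_front(rear_points):
    """Mirror rear-surface touch points into front-display coordinates."""
    return [(SCREEN_W - 1 - x, y) for (x, y) in rear_points]

# Example: two rear contacts near one edge of the back sensor appear
# near the opposite edge of the front display.
print(rear_to_front([(20, 100), (25, 180)]))   # -> [(459, 100), (454, 180)]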

Initial study results indicate that, because of factors such as improved accuracy and an unoccluded view of the screen, many users preferred interacting with LucidTouch over conventional devices. [14]

Awards

ACM Awards

ACM CHI Academy

Patrick Baudisch was inducted into the CHI Academy in 2013. [1]

ACM Distinguished Scientist

Patrick Baudisch became an ACM Distinguished Scientist in 2014. [1]

Recent Publication Awards

CHI 2015 Best Paper Award

Winning Paper: Affordance++: Allowing Objects to Communicate Dynamic Use [15] [16]

UIST 2013 Best Paper Award

Winning Paper: Fiberio: A Touchscreen That Senses Fingerprints [15] [16]

CHI 2013 Best Paper Award

Winning Paper: LaserOrigami: Laser-Cutting 3D Objects [15] [16]

CHI 2010 Best Paper Award

Winning Paper: Lumino: Tangible Blocks for Tabletop Computers Based on Glass Fiber Bundles [15] [16]

CHI 2007 Best Paper Award

Winning Paper: Shift: A Technique for Operating Pen-Based Interfaces Using Touch [15] [17]

References

  1. Baudisch, Patrick. "patrick baudisch's biography". patrick baudisch. Retrieved 3 April 2018.
  2. Baudisch, Patrick. "CURRICULUM VITAE Patrick Baudisch" (PDF). patrick baudisch. Retrieved 3 April 2018.
  3. "Patrick Baudisch - Google Scholar Citations". Google Scholar. Retrieved 3 April 2018.
  4. "Precise selection techniques for multi-touch screens - Google Scholar Citations". Google Scholar. Retrieved 3 April 2018.
  5. Benko, Hrvoje; Wilson, Andrew D.; Baudisch, Patrick (22 April 2006). "Precise Selection Techniques for Multi-Touch Screens". Proceedings of CHI 2006, pp. 1263-1272. CiteSeerX 10.1.1.556.558.
  6. "Halo: a technique for visualizing off-screen objects - Google Scholar Citations". Google Scholar. Retrieved 3 April 2018.
  7. Baudisch, Patrick; Rosenholtz, Ruth (3 April 2003). "Halo: a Technique for Visualizing Off-Screen Locations". Proceedings of CHI 2003, p. 481. Retrieved 3 April 2018.
  8. "Shift: a technique for operating pen-based interfaces using touch - Google Scholar Citations". Google Scholar. Retrieved 3 April 2018.
  9. Vogel, Daniel; Baudisch, Patrick (29 April 2007). "Shift: A Technique for Operating Pen-Based Interfaces Using Touch" (PDF). Proceedings of CHI 2007, p. 657. Retrieved 3 April 2018.
  10. "Drag-and-pop and drag-and-pick: Techniques for accessing remote screen content on touch- and pen-operated systems - Google Scholar Citations". Google Scholar. Retrieved 3 April 2018.
  11. Baudisch, Patrick; Cutrell, Edward; Robbins, Dan; Czerwinski, Mary; Tandler, Peter; Bederson, Benjamin; Zierlinger, Alex (August 2003). "Drag-and-Pop and Drag-and-Pick: Techniques for Accessing Remote Screen Content on Touch- and Pen-Operated Systems" (PDF). Proceedings of INTERACT 2003, p. 57. Retrieved 3 April 2018.
  12. "Lucid touch: a see-through mobile device - Google Scholar Citations". Google Scholar. Retrieved 3 April 2018.
  13. Wigdor, Daniel; Forlines, Clifton; Baudisch, Patrick; Barnwell, John; Shen, Chia (7 October 2007). "LucidTouch: A See-Through Mobile Device" (PDF). Proceedings of UIST 2007, p. 269. Retrieved 3 April 2018.
  14. Baudisch, Patrick. "patrick baudisch's publications". patrick baudisch. Retrieved 3 April 2018.
  15. "Publications" (in German). 20 April 2018. Retrieved 5 May 2018.
  16. "SIGCHI Announces Best of CHI 2007 Award Winners" (PDF). ACM SIGCHI. Retrieved 5 May 2018.