Powerwall

[Image: A user performing gesture interactions with the 53.7 million pixel Powerwall at the University of Leeds]

A Powerwall is a large, ultra-high-resolution display constructed from a matrix of smaller displays, which may be either monitors or projectors. It is important to differentiate between Powerwalls and displays that are merely large, for example the single-projector displays used in many lecture theatres. Such displays rarely have a resolution higher than 1920 × 1080 pixels, and so present no more information than a standard desktop display. With Powerwall displays, users can view the display from a distance and see an overview of the data (context), but can also move to within arm's length and see the data in great detail (focus). This technique of moving around the display is known as physical navigation, [1] and can help users to better understand their data.
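
The gain over a conventional display is easy to quantify. The following Python sketch is purely illustrative: the tile grid and panel resolution are assumed values, not those of any installation cited in this article.

    # Illustrative sketch: aggregate resolution of a hypothetical tiled Powerwall
    # built from a 7 x 4 matrix of 2560 x 1440 monitors (assumed values),
    # compared with a single 1920 x 1080 desktop display.

    def wall_resolution(cols: int, rows: int, tile_w: int, tile_h: int):
        """Return (width, height, total_pixels) for a cols x rows matrix of tiles."""
        width, height = cols * tile_w, rows * tile_h
        return width, height, width * height

    w, h, pixels = wall_resolution(cols=7, rows=4, tile_w=2560, tile_h=1440)
    hd_pixels = 1920 * 1080
    print(f"Wall: {w} x {h} = {pixels / 1e6:.1f} megapixels")          # ~103.2
    print(f"Ratio to a single HD display: {pixels / hd_pixels:.0f}x")  # ~50x

In practice the usable resolution is slightly lower, because monitor bezels or projector overlap regions must be accounted for when the tiles are calibrated.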

History

The first Powerwall display was installed at the University of Minnesota [2] in 1994. It was made of four rear-projection displays, providing a resolution of approximately 7.7 million pixels (3200 × 2400 pixels). Increases in graphics processing power, combined with decreases in hardware costs, mean that less hardware is now required to drive such displays. In 2006, a 50–60 megapixel Powerwall display required a cluster of seven machines to drive it; by 2012 the same display could be driven by a single machine with three graphics cards, and by 2015 by a single graphics card alone. Rather than reducing the use of PC clusters, this has led to cluster-driven Powerwall displays with even higher resolutions. Currently, the highest-resolution display in the world is the Reality Deck, [3] which runs at 1.5 billion pixels and is powered by a cluster of 18 nodes.

Interaction

Both software and hardware techniques have been proposed to aid interaction with Powerwalls. Several devices use pointing for selection. [4] This type of interaction is well suited to collaboration, as it makes it possible for multiple users to interact simultaneously. Touch interfaces also support collaboration, and multi-touch interfaces are increasingly being overlaid on top of large displays. [5] The physical size of the display, however, can leave users prone to fatigue. Mobile devices such as tablets can be used as interaction devices, but the secondary screen can distract users' attention; it has been found that this issue can be addressed by adding physical widgets to the tablet's screen. [6] Finally, software techniques such as modifying the window management interface or providing a lens for selecting small targets have been found to speed up interaction. [7]
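
As a concrete illustration of the lens idea, the Python snippet below maps a click inside a circular magnification lens back to the underlying data point, so that small targets become easier to acquire. This is only a minimal sketch, not the implementation evaluated in [7]; the radius and magnification values are assumptions.

    import math

    def lens_pick(cx: float, cy: float, sx: float, sy: float,
                  radius: float = 200.0, magnification: float = 4.0):
        """Map a screen point (sx, sy) inside a lens centred at (cx, cy)
        back to the data-space point it represents, or None if outside."""
        dx, dy = sx - cx, sy - cy
        if math.hypot(dx, dy) > radius:
            return None                       # click landed outside the lens
        # Content under the lens is drawn magnified, so the selected data
        # point lies closer to the lens centre than the raw click position.
        return cx + dx / magnification, cy + dy / magnification

    # A click 120 px to the right of the lens centre selects the data point
    # only 30 px from the centre, effectively making small targets 4x larger.
    print(lens_pick(1000, 500, 1120, 500))    # (1030.0, 500.0)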

Visualisation

In the field of medical visualisation, Powerwall displays have been used to render high-resolution, digitally scanned histology slides, [8] [9] where the high pixel count increases the volume of data that can be rendered at any one time, and the context offered by the size of the display provides a spatial reference, aiding navigation through the visualisation. The same principle applies to geographical data such as maps, where it has been found that the large display real estate improves performance in searching and route-tracing tasks. [10] Rather than flooding the large display real estate with data, tools such as ForceSPIRE make use of semantic interaction to enable analysts to spatially cluster data. [11]
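
When such data sets are driven by a cluster, each node typically renders only the portion of the scanned image that falls on its own tiles. The sketch below illustrates one straightforward way to compute those regions; the image size and grid are hypothetical, and the cited systems may partition their data differently.

    # Illustrative sketch: split a large scanned image (e.g. a digitised slide)
    # into per-tile source regions for a cols x rows Powerwall. Dimensions are
    # assumed for the example, not taken from the cited systems.

    def tile_regions(image_w: int, image_h: int, cols: int, rows: int):
        """Yield (col, row, x0, y0, x1, y1) source regions, one per tile."""
        for row in range(rows):
            for col in range(cols):
                x0, x1 = col * image_w // cols, (col + 1) * image_w // cols
                y0, y1 = row * image_h // rows, (row + 1) * image_h // rows
                yield col, row, x0, y0, x1, y1

    # A hypothetical 100,000 x 60,000 pixel slide spread across a 7 x 4 wall:
    for region in tile_regions(100_000, 60_000, cols=7, rows=4):
        print(region)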

Collaboration

Research on collaboration with Powerwall displays is related to research on tabletops, which suggests that partitioning the display space is crucial to efficient collaboration, and that distinct territories may be identified in the spatial layout of information. Physical movement, however, influences performance with large displays, [1] and the relative distance between collaborators also influences their interaction. [12] Yet most tabletop studies have participants sit down and stay put. A recent study found that during a collaborative sensemaking session in front of a multi-touch Powerwall display, the ability to physically navigate allowed users to fluidly shift between shared and personal spaces. [5]


References

  1. Ball, Robert; North, Chris; Bowman, Doug A. (2007). "Move to improve". Proceedings of the SIGCHI Conference on Human Factors in Computing Systems - CHI '07. pp. 191–200. doi:10.1145/1240624.1240656. ISBN 978-1-59593-593-9.
  2. University of Minnesota PowerWall - http://www.lcse.umn.edu/research/powerwall/powerwall.html
  3. Stony Brook Reality Deck - http://labs.cs.sunysb.edu/labs/vislab/reality-deck-home/
  4. Davis, James; Chen, Xing (2002). "Lumipoint: Multi-user laser-based interaction on large tiled displays". Displays. 23 (5): 205–11. doi:10.1016/S0141-9382(02)00039-2.
  5. Jakobsen, Mikkel; Hornbæk, Kasper (2012). "Proximity and physical navigation in collaborative work with a multi-touch wall-display". Proceedings of the 2012 ACM Annual Conference Extended Abstracts on Human Factors in Computing Systems - CHI EA '12. pp. 2519–24. doi:10.1145/2212776.2223829. ISBN 978-1-4503-1016-1.
  6. Jansen, Yvonne; Dragicevic, Pierre; Fekete, Jean-Daniel (2012). "Tangible remote controllers for wall-size displays". Proceedings of the 2012 ACM Annual Conference on Human Factors in Computing Systems - CHI '12. pp. 2865–74. doi:10.1145/2207676.2208691. ISBN 978-1-4503-1015-4.
  7. Rooney, Chris; Ruddle, Roy (2012). "Improving Window Manipulation and Content Interaction on High-Resolution, Wall-Sized Displays" (PDF). International Journal of Human-Computer Interaction. 28 (7): 423–32. doi:10.1080/10447318.2011.608626.
  8. Treanor, Darren; Jordan-Owers, Naomi; Hodrien, John; Wood, Jason; Quirke, Phil; Ruddle, Roy A. (2009). "Virtual reality Powerwall versus conventional microscope for viewing pathology slides: An experimental comparison" (PDF). Histopathology. 55 (3): 294–300. doi:10.1111/j.1365-2559.2009.03389.x. PMID 19723144.
  9. The Leeds Virtual Microscope - http://www.comp.leeds.ac.uk/royr/research/rti/lvm.html
  10. Ball, R.; Varghese, M.; Sabri, A.; Cox, D.; Fierer, C.; Peterson, M.; Cartensen, B.; North, C. (2005). "Evaluating the benefits of tiled displays for navigating maps". Proceedings of the International Conference on HCI. pp. 66–71. http://www.actapress.com/Abstract.aspx?paperId=22477
  11. Endert, Alex; Fiaux, Patrick; North, Chris (2012). "Semantic interaction for visual text analytics". Proceedings of the 2012 ACM Annual Conference on Human Factors in Computing Systems - CHI '12. pp. 473–82. doi:10.1145/2207676.2207741. ISBN 978-1-4503-1015-4.
  12. Ballendat, Till; Marquardt, Nicolai; Greenberg, Saul (2010). "Proxemic interaction". ACM International Conference on Interactive Tabletops and Surfaces - ITS '10. pp. 121–30. doi:10.1145/1936652.1936676. ISBN 978-1-4503-0399-6.