Roberta Klatzky

Born: January 6, 1947
School: Perceptual psychology
Main interests: haptic perception, human-computer interaction
Notable ideas: the human haptic system is capable of accurate geometric shape perception and object recognition; the UCSB Personal Guidance System

Roberta "Bobby Lou" Klatzky is a Professor of Psychology at Carnegie Mellon University (CMU). She specializes in human perception and cognition, particularly relating to visual and non-visual perception and representation of space and geometric shapes. Klatzky received a B.A. in mathematics from the University of Michigan in 1968 and a Ph.D. in psychology from Stanford University in 1972. She has done extensive research on human haptic and visual object recognition, navigation under visual and nonvisual guidance, and perceptually guided action.

Her work has applications to navigation aids for the blind, haptic interfaces, exploratory robotics, teleoperation, and virtual environments. In particular, alongside Jack Loomis and the late Reginald Golledge, Klatzky played a major part in the development of the UCSB Personal Guidance System, a GPS-based navigation system for the blind. The impact of Klatzky's research in psychology and human-computer interaction has been recognized by numerous organizations: she has been elected a fellow of the American Psychological Association, the Association for Psychological Science, the Society of Experimental Psychologists, and the American Association for the Advancement of Science. She has also received the Humboldt Senior Research Award from the Alexander von Humboldt Foundation and the Kurt Koffka Medal from Justus Liebig University Giessen, Germany. Prior to working at CMU, Klatzky was employed at the University of California, Santa Barbara.

Klatzky is a member of the Center for the Neural Basis of Cognition and the Human-Computer Interaction Institute at CMU. She has also done editorial work for a number of prestigious journals in cognitive and perceptual psychology, including IEEE publications, Acta Psychologica, and Perception & Psychophysics, and she is listed in Outstanding Scientists of the 20th Century.

Related Research Articles

Perception: Interpretation of sensory information

Perception is the organization, identification, and interpretation of sensory information in order to represent and understand the presented information or environment. All perception involves signals that go through the nervous system, which in turn result from physical or chemical stimulation of the sensory system. Vision involves light striking the retina of the eye; smell is mediated by odor molecules; and hearing involves pressure waves.

A tactile illusion is an illusion that affects the sense of touch. Some tactile illusions require active touch, whereas others can be evoked passively. In recent years, growing interest among perceptual researchers has led to the discovery of new tactile illusions and to coverage of tactile illusions in the popular science press. Some tactile illusions are analogous to visual and auditory illusions, suggesting that these sensory systems may process information in similar ways; other tactile illusions lack obvious visual or auditory analogs.

Attention: Psychological process of selectively perceiving and prioritising discrete aspects of information

Attention is the concentration of awareness on some phenomenon to the exclusion of other stimuli. It is a process of selectively concentrating on a discrete aspect of information, whether considered subjective or objective. William James (1890) wrote that "Attention is the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Focalization, concentration, of consciousness are of its essence." Attention has also been described as the allocation of limited cognitive processing resources. Attention is manifested by an attentional bottleneck that limits the amount of data the brain can process each second; for example, in human vision, less than 1% of the visual input data can enter the bottleneck, leading to inattentional blindness.

Subitizing: Assessing the quantity of objects in a visual scene without individually counting each item

Subitizing is the rapid, accurate, and confident judgment of the number of items in a small set. The term was coined in 1949 by E. L. Kaufman et al. and is derived from the Latin adjective subitus ("sudden"); it captures the feeling of immediately knowing how many items lie within the visual scene when the number of items present falls within the subitizing range. Sets larger than about four items cannot be subitized unless the items appear in a pattern with which the person is familiar. Large sets might instead be counted one by one, or a person could estimate their number, a skill similar to, but different from, subitizing.

Multisensory integration, also known as multimodal integration, is the study of how information from the different sensory modalities may be integrated by the nervous system. A coherent representation of objects combining modalities enables animals to have meaningful perceptual experiences. Indeed, multisensory integration is central to adaptive behavior because it allows animals to perceive a world of coherent perceptual entities. Multisensory integration also deals with how different sensory modalities interact with one another and alter each other's processing.

Within computer technology, the gaze-contingency paradigm is a general term for techniques allowing a computer screen display to change in function depending on where the viewer is looking. Gaze-contingent techniques are part of the eye movement field of study in psychology.

Haptic perception literally means the ability "to grasp something". Perception in this case is achieved through the active exploration of surfaces and objects by a moving subject, as opposed to passive contact by a static subject during tactile perception.

Reginald George Golledge was an Australian-born American Professor of Geography at the University of California, Santa Barbara. He was named Faculty Research Lecturer for 2009. During his career he wrote or edited 16 books and 100 chapters for other books, and wrote more than 150 academic papers.

Perceptual learning

Perceptual learning is the learning of improved perceptual skills, such as differentiating two musical tones from one another or categorizing spatial and temporal patterns relevant to real-world expertise. Examples include reading, seeing relations among chess pieces, and judging whether or not an X-ray image shows a tumor.

Haptic memory is the form of sensory memory specific to touch stimuli. Haptic memory is used regularly when assessing the forces necessary for gripping and interacting with familiar objects. It may also influence one's interactions with novel objects of an apparently similar size and density. Similar to visual iconic memory, traces of haptically acquired information are short-lived and prone to decay after approximately two seconds. Haptic memory is best for stimuli applied to areas of the skin that are more sensitive to touch. Haptics involves at least two subsystems: cutaneous (everything skin-related) and kinesthetic (joint angles and the relative positions of body parts). Haptics generally involves active, manual exploration and is quite capable of processing the physical traits of objects and surfaces.

Action-specific perception, or perception-action, is a psychological theory that people perceive their environment and events within it in terms of their ability to act. This theory hence suggests that a person's capability to carry out a particular task affects how they perceive the different aspects and methods involved in that task. For example, softball players who are hitting better see the ball as bigger. Tennis players see the ball as moving slower when they successfully return the ball. In the field of human-computer interaction, alterations in accuracy impact both the perception of size and time, while adjustments in movement speed impact the perception of distance. Furthermore, the perceiver's intention to act is also critical; while the perceiver's ability to perform the intended action influences perception, the perceiver's abilities for unintended actions have little or no effect on perception. Finally, the objective difficulty of the task appears to modulate size, distance, and time perception.

Representational momentum is a small but reliable error in our visual perception of moving objects. Representational momentum was discovered and named by Jennifer Freyd and Ronald Finke. Instead of registering the exact location of a moving object, viewers actually perceive it as being a bit further along its trajectory as time goes forward. For example, people viewing an object moving from left to right that suddenly disappears will report seeing it a bit further to the right than where it actually vanished. Though small, the error has been found in a variety of different events, ranging from simple rotations to camera movement through a scene. The name "representational momentum" initially reflected the idea that the forward displacement resulted from the perceptual system having internalized, or evolved to include, basic principles of Newtonian physics, but it has come to mean forward displacements that continue a presented pattern along a variety of dimensions, not just position or orientation. As with many areas of cognitive psychology, theories can focus on bottom-up or top-down aspects of the task. Bottom-up theories of representational momentum highlight the role of eye movements and stimulus presentation, while top-down theories highlight the role of the observer's experience and expectations regarding the presented event.

Spatial cognition is the acquisition, organization, utilization, and revision of knowledge about spatial environments. It concerns how animals, including humans, behave within space and the knowledge they build about it, rather than space itself. These capabilities enable individuals to manage basic and high-level cognitive tasks in everyday life. Numerous disciplines work together to understand spatial cognition in different species, especially in humans. Spatial cognition studies have thereby helped to link cognitive psychology and neuroscience: scientists in both fields work together to figure out what role spatial cognition plays in the brain and to determine the underlying neurobiological infrastructure.

Object-based attention refers to the relationship between an ‘object’ representation and a person’s visually stimulated, selective attention, as opposed to a relationship involving either a spatial or a feature representation; although these types of selective attention are not necessarily mutually exclusive. Research into object-based attention suggests that attention improves the quality of the sensory representation of a selected object, and results in the enhanced processing of that object’s features.

Perceptual load theory is a psychological theory of attention. It was presented by Nilli Lavie in the mid-nineties as a potential resolution to the early/late selection debate.

Spatial ability

Spatial ability or visuo-spatial ability is the capacity to understand, reason, and remember the visual and spatial relations among objects or space.

Farley Norman is a professor of psychological sciences at Western Kentucky University. He is a co-director of the Gustav Fechner Perception Laboratory at Western Kentucky University, along with his wife, Hideko Norman.

Ensemble coding, also known as ensemble perception or summary representation, is a theory in cognitive neuroscience about the internal representation of groups of objects in the human mind. Ensemble coding proposes that such information is recorded via summary statistics, particularly the average or variance. Experimental evidence tends to support the theory for low-level visual information, such as shapes and sizes, as well as some high-level features such as face gender. Nonetheless, the extent to which ensemble coding applies to high-level or non-visual stimuli remains unclear, and the theory is the subject of active research.

Susan J. Lederman is a Canadian experimental psychologist. She is a professor emerita in the Department of Psychology at Queen's University in Kingston, Ontario, Canada. She is recognized for her contributions to the field of haptics.

Lola L. Cuddy is a Canadian psychologist recognized for her contributions to the field of music psychology. She is a professor emerita in the Department of Psychology at Queen's University in Kingston, Ontario.