Publications


Featured research published by Roberta L. Klatzky.


Cognitive Psychology | 1987

Hand Movements: A Window into Haptic Object Recognition.

Susan J. Lederman; Roberta L. Klatzky

Two experiments establish links between desired knowledge about objects and hand movements during haptic object exploration. Experiment 1 used a match-to-sample task, in which blindfolded subjects were directed to match objects on a particular dimension (e.g., texture). Hand movements during object exploration were reliably classified as “exploratory procedures,” each procedure defined by its invariant and typical properties. The movement profile, i.e., the distribution of exploratory procedures, was directly related to the desired object knowledge that was required for the match. Experiment 2 addressed the reasons for the specific links between exploratory procedures and knowledge goals. Hand movements were constrained, and performance on various matching tasks was assessed. The procedures were considered in terms of their necessity, sufficiency, and optimality of performance for each task. The results establish that in free exploration, a procedure is generally used to acquire information about an object property, not because it is merely sufficient, but because it is optimal or even necessary. Hand movements can serve as “windows,” through which it is possible to learn about the underlying representation of objects in memory and the processes by which such representations are derived and utilized.


Journal of Experimental Psychology: General | 1993

Nonvisual Navigation by Blind and Sighted: Assessment of Path Integration Ability

Jack M. Loomis; Roberta L. Klatzky; Reginald G. Golledge; Joseph G. Cicinelli; James W. Pellegrino; Phyllis A. Fry

Blindfolded sighted, adventitiously blind, and congenitally blind subjects performed a set of navigation tasks. The more complex tasks involved spatial inference and included retracing a multisegment route in reverse, returning directly to an origin after being led over linear segments, and pointing to targets after locomotion. As a group, subjects responded systematically to route manipulations in the complex tasks, but performance was poor. Patterns of error and response latency are informative about the internal representation used; in particular, they do not support the hypothesis that only a representation of the origin of locomotion is maintained. The slight performance differences between groups varying in visual experience were neither large nor consistent across tasks. Results provide little indication that spatial competence strongly depends on prior visual experience.


Perception & Psychophysics | 1985

Identifying objects by touch: An “expert system”

Roberta L. Klatzky; Susan J. Lederman; Victoria A. Metzger

How good are we at recognizing objects by touch? Intuition may suggest that the haptic system is a poor recognition device, and previous research with nonsense shapes and tangible-graphics displays supports this opinion. We argue that the recognition capabilities of touch are best assessed with three-dimensional, familiar objects. The present study provides a baseline measure of recognition under those circumstances, and it indicates that haptic object recognition can be both rapid and accurate.


Attention, Perception, & Psychophysics | 2009

Haptic perception: A tutorial

Susan J. Lederman; Roberta L. Klatzky

This tutorial focuses on the sense of touch within the context of a fully active human observer. It is intended for graduate students and researchers outside the discipline who seek an introduction to the rapidly evolving field of human haptics. The tutorial begins with a review of peripheral sensory receptors in skin, muscles, tendons, and joints. We then describe an extensive body of research on “what” and “where” channels, the former dealing with haptic perception of objects, surfaces, and their properties, and the latter with perception of spatial layout on the skin and in external space relative to the perceiver. We conclude with a brief discussion of other significant issues in the field, including vision-touch interactions, affective touch, neural plasticity, and applications.


Lecture Notes in Computer Science | 1998

Allocentric and Egocentric Spatial Representations: Definitions, Distinctions, and Interconnections

Roberta L. Klatzky

Although the literatures on human spatial cognition and animal navigation often make distinctions between egocentric and allocentric (also called exocentric or geocentric) representations, the terms have not generally been well defined. This chapter begins by making formal distinctions between three kinds of representations: allocentric locational, egocentric locational, and allocentric heading representations. These distinctions are made in the context of whole-body navigation (as contrasted, e.g., with manipulation). They are made on the basis of primitive parameters specified by each representation, and the representational distinctions are further supported by work on brain mechanisms used for animal navigation. From the assumptions about primitives, further inferences are made as to the kind of information each representation potentially makes available. Empirical studies of how well people compute primitive and derived spatial parameters are briefly reviewed. Finally, the chapter addresses what representations humans may use for processing spatial information during physical and imagined movement, and work on imagined updating of spatial position is used to constrain the connectivity among representations.


Psychological Science | 1998

Spatial Updating of Self-Position and Orientation During Real, Imagined, and Virtual Locomotion

Roberta L. Klatzky; Jack M. Loomis; Andrew C. Beall; Sarah S. Chance; Reginald G. Golledge

Two studies investigated updating of self-position and heading during real, imagined, and simulated locomotion. Subjects were exposed to a two-segment path with a turn between segments; they responded by turning to face the origin as they would if they had walked the path and were at the end of the second segment. The conditions of pathway exposure included physical walking, imagined walking from a verbal description, watching another person walk, and experiencing optic flow that simulated walking, with or without a physical turn between the path segments. If subjects failed to update an internal representation of heading, but did encode the pathway trajectory, they should have overturned by the magnitude of the turn between the path segments. Such systematic overturning was found in the description and watching conditions, but not with physical walking. Simulated optic flow was not by itself sufficient to induce spatial updating that supported correct turn responses.


Presence: Teleoperators and Virtual Environments | 1998

Navigation System for the Blind: Auditory Display Modes and Guidance

Jack M. Loomis; Reginald G. Golledge; Roberta L. Klatzky

The research we are reporting here is part of our effort to develop a navigation system for the blind. Our long-term goal is to create a portable, self-contained system that will allow visually impaired individuals to travel through familiar and unfamiliar environments without the assistance of guides. The system, as it exists now, consists of the following functional components: (1) a module for determining the traveler’s position and orientation in space, (2) a Geographic Information System comprising a detailed database of our test site and software for route planning and for obtaining information from the database, and (3) the user interface. The experiment reported here is concerned with one function of the navigation system: guiding the traveler along a predefined route. We evaluate guidance performance as a function of four different display modes: one involving spatialized sound from a virtual acoustic display, and three involving verbal commands issued by a synthetic speech display. The virtual display mode fared best in terms of both guidance performance and user preferences.


Acta Psychologica | 1993

Extracting object properties through haptic exploration.

Susan J. Lederman; Roberta L. Klatzky

This paper reviews some of our recent research on haptic exploration, perception and recognition of multidimensional objects. We begin by considering the nature of manual exploration in terms of the characteristics of various exploratory procedures (EPs) or stereotypical patterns of hand movements. Next, we explore their consequences for the sequence of EPs selected, for the relative cognitive salience of material versus geometric properties, and for dimensional integration. Finally, we discuss several applications of our research programme to the development of tangible graphics displays for the blind, autonomous and teleoperated haptic robotic systems, and food evaluation in the food industry.


Contemporary Educational Psychology | 2003

Teachers’ gestures facilitate students’ learning: A lesson in symmetry

Laura Valenzeno; Martha W. Alibali; Roberta L. Klatzky

This study investigated whether teachers’ gestures influence students’ comprehension of instructional discourse, and thereby influence students’ learning. Pointing and tracing gestures “ground” teachers’ speech by linking abstract, verbal utterances to the concrete, physical environment. We hypothesize that such grounding should facilitate students’ comprehension, and therefore their learning, of instructional material. Preschool children viewed one of two videotaped lessons about the concept of symmetry. In the verbal-plus-gesture lesson, the teacher produced pointing and tracing gestures as she explained the concept. In the verbal-only lesson, the teacher did not produce any gestures. On the posttest, children were asked to judge six items as symmetrical or asymmetrical, and to explain their judgments. Children who saw the verbal-plus-gesture lesson scored higher on the posttest than children who saw the verbal-only lesson. Thus, teachers’ gestures can indeed facilitate student learning. The results suggest that gestures may play an important role in instructional communication.


Perception | 1991

Similarity of tactual and visual picture recognition with limited field of view

Jack M. Loomis; Roberta L. Klatzky; Susan J. Lederman

Subjects attempted to recognize simple line drawings of common objects using either touch or vision. In the touch condition, subjects explored raised line drawings using the distal pad of the index finger or the distal pads both of the index and of the middle fingers. In the visual condition, a computer-driven display was used to simulate tactual exploration. By moving an electronic pen over a digitizing tablet, the subject could explore a line drawing stored in memory; on the display screen a portion of the drawing appeared to move behind a stationary aperture, in concert with the movement of the pen. This aperture was varied in width, thus simulating the use of one or two fingers. In terms of average recognition accuracy and average response latency, recognition performance was virtually the same in the one-finger touch condition and the simulated one-finger vision condition. Visual recognition performance improved considerably when the visual field size was doubled (simulating two fingers), but tactual performance showed little improvement, suggesting that the effective tactual field of view for this task is approximately equal to one finger pad. This latter result agrees with other reports in the literature indicating that integration of two-dimensional pattern information extending over multiple fingers on the same hand is quite poor. The near equivalence of tactual picture perception and narrow-field vision suggests that the difficulties of tactual picture recognition must be largely due to the narrowness of the effective field of view.

Collaboration


Top co-authors of Roberta L. Klatzky:

Jack M. Loomis, University of California
Bing Wu, Carnegie Mellon University
Yoky Matsuoka, Carnegie Mellon University
Ralph L. Hollis, Carnegie Mellon University
Bambi R. Brewer, Carnegie Mellon University
John M. Galeotti, Carnegie Mellon University