Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Mary Hayhoe is active.

Publication


Featured research published by Mary Hayhoe.


Trends in Cognitive Sciences | 2005

Eye movements in natural behavior

Mary Hayhoe; Dana H. Ballard

The classic experiments of Yarbus over 50 years ago revealed that saccadic eye movements reflect cognitive processes. But it is only recently that three separate advances have greatly expanded our understanding of the intricate role of eye movements in cognitive function. The first is the demonstration of the pervasive role of the task in guiding where and when to fixate. The second has been the recognition of the role of internal reward in guiding eye and body movements, revealed especially in neurophysiological studies. The third important advance has been the theoretical developments in the fields of reinforcement learning and graphic simulation. All of these advances are proving crucial for understanding how behavioral programs control the selection of visual information.


Journal of Cognitive Neuroscience | 1995

Memory representations in natural tasks

Dana H. Ballard; Mary Hayhoe; Jeff B. Pelz

The very limited capacity of short-term or working memory is one of the most prominent features of human cognition. Most studies have stressed delimiting the upper bounds of this memory in memorization tasks rather than the performance of everyday tasks. We designed a series of experiments to test the use of short-term memory in the course of a natural hand-eye task where subjects have the freedom to choose their own task parameters. In this case subjects choose not to operate at the maximum capacity of short-term memory but instead seek to minimize its use. In particular, reducing the instantaneous memory required to perform the task can be done by serializing the task with eye movements. These eye movements allow subjects to postpone the gathering of task-relevant information until just before it is required. The reluctance to use short-term memory can be explained if such memory is expensive to use with respect to the cost of the serializing strategy.
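
The cost argument in the final sentence can be made concrete with a toy model. The sketch below is illustrative only: the per-item memory cost, per-fixation cost, and fixation counts are invented, and simply show how a just-in-time strategy wins whenever holding an item in working memory is expensive relative to re-acquiring it with an eye movement.

```python
# Toy cost comparison for the two strategies described above. All costs and
# counts are hypothetical, chosen only to illustrate the trade-off.

def memorize_all(n_blocks, memory_cost=3.0, fixation_cost=1.0):
    """Encode the whole model pattern up front and hold it in working memory."""
    fixations = n_blocks          # one encoding look per block
    load = n_blocks               # items held simultaneously
    return fixations * fixation_cost + load * memory_cost

def just_in_time(n_blocks, memory_cost=3.0, fixation_cost=1.0):
    """Serialize the task: refixate the model just before the information is needed."""
    fixations = 2 * n_blocks      # assumed ~2 looks per block (pickup, placement)
    load = 1                      # never hold more than one item at a time
    return fixations * fixation_cost + load * memory_cost

for n in (4, 8):
    # Serializing is cheaper whenever memory_cost is high relative to fixation_cost.
    print(n, memorize_all(n), just_in_time(n))
```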


Journal of Vision | 2011

Eye guidance in natural vision: reinterpreting salience

Benjamin W. Tatler; Mary Hayhoe; Michael F. Land; Dana H. Ballard

Models of gaze allocation in complex scenes are derived mainly from studies of static picture viewing. The dominant framework to emerge has been image salience, where properties of the stimulus play a crucial role in guiding the eyes. However, salience-based schemes are poor at accounting for many aspects of picture viewing and can fail dramatically in the context of natural task performance. These failures have led to the development of new models of gaze allocation in scene viewing that address a number of these issues. However, models based on the picture-viewing paradigm are unlikely to generalize to a broader range of experimental contexts, because the stimulus context is limited, and the dynamic, task-driven nature of vision is not represented. We argue that there is a need to move away from this class of model and find the principles that govern gaze allocation in a broader range of settings. We outline the major limitations of salience-based selection schemes and highlight what we have learned from studies of gaze allocation in natural vision. Clear principles of selection are found across many instances of natural vision and these are not the principles that might be expected from picture-viewing studies. We discuss the emerging theoretical framework for gaze allocation on the basis of reward maximization and uncertainty reduction.
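
The closing sentence names the framework concisely; a minimal sketch of the idea, with an assumed priority rule, weights, and decay rate (not the authors' model), looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)
expected_reward = rng.uniform(0, 1, size=10)  # task value of fixating each location
uncertainty = np.ones(10)                     # how stale each location's estimate is
beta = 0.5                                    # assumed reward/uncertainty trade-off

for t in range(5):
    # Each fixation goes to the best mix of expected reward and uncertainty reduction.
    priority = expected_reward + beta * uncertainty
    target = int(np.argmax(priority))
    uncertainty[target] = 0.0                 # a fixation resolves uncertainty there
    uncertainty = np.minimum(uncertainty + 0.1, 1.0)  # unattended estimates grow stale
    print(f"fixation {t}: location {target}")
```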


Vision Research | 1997

Task Constraints in Visual Working Memory

Mary Hayhoe; David G. Bensinger; Dana H. Ballard

This paper examines the nature of the visual representations that direct ongoing performance in sensorimotor tasks. Performance of such natural tasks requires relating visual information from different gaze positions. To explore this we used the technique of making task-relevant display changes during saccadic eye movements. Subjects copied a pattern of colored blocks on a computer monitor, using the mouse to drag the blocks across the screen. Eye position was monitored using a dual-Purkinje-image eye tracker, and the color of blocks in the pattern was changed at different points in task performance. When the target of the saccade changed color during the saccade, the duration of fixations on the model pattern increased, depending on the point in the task at which the change was made. Thus different fixations on the same visual stimulus served different purposes. The results also indicated that the visual information retained across successive fixations depends on moment-by-moment task demands. This is consistent with previous suggestions that visual representations are limited and task dependent. Changes in blocks in addition to the saccade target led to greater increases in fixation duration, indicating that some global aspect of the pattern was retained across different fixations. Fixation durations revealed effects of the display changes that were not revealed in perceptual report. This can be understood by distinguishing between processes that operate at different levels of description and on different time scales. Our conscious experience of the world may reflect events over a longer time scale than those underlying the substructure of the perceptuo-motor machinery.
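
The saccade-contingent display-change technique reduces to a simple per-frame rule. The sketch below is schematic, assuming a hypothetical frame callback and a plain velocity threshold for saccade detection; a real experiment would use an eye-tracker SDK and a display toolkit.

```python
SACCADE_VELOCITY = 30.0  # deg/s; assumed threshold for detecting saccade onset

def on_frame(eye_velocity, change_pending, display):
    """Run once per video frame with the latest eye-tracker sample (hypothetical API)."""
    if change_pending and eye_velocity > SACCADE_VELOCITY:
        display["target_color"] = "red"  # swap the block's color mid-saccade,
        change_pending = False           # while visual sensitivity is suppressed
    return change_pending
```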


Vision Research | 2002

Eye movements in iconic visual search

Rajesh P. N. Rao; Gregory J. Zelinsky; Mary Hayhoe; Dana H. Ballard

Visual cognition depends critically on the moment-to-moment orientation of gaze. To change gaze to a new location in space, that location must be computed and used by the oculomotor system. One of the most common sources of information for this computation is the visual appearance of an object. A crucial question is: how is the appearance information contained in the photometric array converted into a target position? This paper proposes a model that accomplishes this calculation. The model uses iconic scene representations derived from oriented spatiochromatic filters at multiple scales. Visual search for a target object proceeds in a coarse-to-fine fashion, with the target's largest-scale filter responses being compared first. Task-relevant target locations are represented as saliency maps, which are used to program eye movements. A central feature of the model is that it separates the targeting process, which changes gaze, from the decision process, which extracts information at or near the new gaze point to guide behavior. The model provides a detailed explanation for the center-of-gravity saccades observed in many previous experiments. In addition, the model's targeting performance has been compared with the eye movements of human subjects under identical conditions in natural visual search tasks. The results show good agreement both quantitatively (the search paths are strikingly similar) and qualitatively (the fixations of false targets are comparable).
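
The search pipeline the abstract describes can be sketched compactly. The illustration below substitutes plain multi-scale Gaussian responses for the model's oriented spatiochromatic filters; the scales and the matching score are assumptions made for the example.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

SCALES = (8, 4, 2)  # coarsest first: the target's largest-scale response is compared first

def search(scene, target_patch):
    """Return the pixel the model would saccade to next."""
    saliency = np.zeros_like(scene)
    for sigma in SCALES:
        scene_resp = gaussian_filter(scene, sigma)
        target_resp = gaussian_filter(target_patch, sigma).mean()
        # Locations whose filter response matches the target's are salient.
        saliency += -np.abs(scene_resp - target_resp)
    # The saliency map programs the eye movement: fixate its maximum.
    return np.unravel_index(np.argmax(saliency), saliency.shape)

scene = np.random.default_rng(0).random((64, 64))
target = scene[20:28, 30:38]  # a patch cut from the scene serves as the search target
print(search(scene, target))
```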


Experimental Brain Research | 2001

The coordination of eye, head, and hand movements in a natural task

Jeff B. Pelz; Mary Hayhoe; Russ Loeber

Relatively little is known about movements of the eyes, head, and hands in natural tasks. Normal behavior requires spatial and temporal coordination of the movements in more complex circumstances than are typically studied, and usually provides the opportunity for motor planning. Previous studies of natural tasks have indicated that the parameters of eye and head movements are set by global task constraints. In this experiment, we explore the temporal coordination of eye, head, and hand movements while subjects performed a simple block-copying task. The task involved fixations to gather information about the pattern, as well as visually guided hand movements to pick up and place blocks. Subjects used rhythmic patterns of eye, head, and hand movements in a fixed temporal sequence or coordinative structure. However, the pattern varied according to the immediate task context. Coordination was maintained by delaying the hand movements until the eye was available for guiding the movement. This suggests that observers maintain coordination by setting up a temporary, task-specific synergy between the eye and hand. Head movements displayed considerable flexibility and frequently diverged from the gaze change, appearing instead to be linked to the hand trajectories. This indicates that the coordination of eye and head in gaze changes is usually the consequence of a synergistic linkage rather than an obligatory one. These temporary synergies simplify the coordination problem by reducing the number of control variables, and consequently the attentional demands, necessary for the task.


Visual Cognition | 2000

Vision Using Routines: A Functional Account of Vision

Mary Hayhoe

This paper presents the case for a functional account of vision. A variety of studies have consistently revealed "change blindness", or insensitivity to changes in the visual scene during an eye movement. These studies indicate that only a small part of the information in the scene is represented in the brain from moment to moment. It is still unclear, however, exactly what is included in visual representations. This paper reviews experiments using an extended visuo-motor task, showing that display changes affect performance differently depending on the observer's place in the task. These effects are revealed by increases in fixation duration following a change. Different task-dependent increases suggest that the visual system represents only the information that is necessary for the immediate visual task. This allows a principled exploration of the stimulus properties that are included in the internal visual representation. The task specificity also has a more general implication: that vision should be conceptualized as an active process executing special-purpose "routines" that compute only the currently necessary information. Evidence for this view and its implications for visual representations are discussed. Comparison of the change blindness phenomenon and fixation durations shows that conscious report does not reveal the extent of the representations computed by the routines.
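
A toy rendering of the routines idea, with invented scene structure and routine names, makes the contrast with a full scene representation explicit:

```python
# Instead of building a complete scene representation, the system runs a small
# task-specific procedure that computes only what the current step needs.
# Everything here is invented for illustration.

scene = {(0, 0): {"color": "blue"}, (1, 2): {"color": "red"}}

def where_is(color):
    """Routine invoked only when the task needs a location."""
    return next(loc for loc, obj in scene.items() if obj["color"] == color)

def color_at(loc):
    """Routine invoked only when the task needs a color."""
    return scene[loc]["color"]

# The task schedules routines on demand; nothing else about the scene is computed.
print(where_is("red"))   # -> (1, 2)
print(color_at((0, 0)))  # -> 'blue'
```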


Vision Research | 1987

The time-course of multiplicative and subtractive adaptation processes

Mary Hayhoe; N.I. Benimoff; Donald C. Hood

This paper examines, for foveal cone vision, the processes which mediate the transition to a steady state of adaptation following a change of illumination. In the steady state, the signal from an adapting field is attenuated not only by a multiplicative factor (reduction in gain) but also by a subtractive signal. We show that the multiplicative change is accomplished very rapidly following the onset of an adapting field (within about 50 msec). Much of the subtractive change is also accomplished rapidly, but it takes several seconds to complete. At the offset of the field, the multiplicative process takes over 200 msec to recover. This slower time-course at offset may be a consequence of receptoral persistence.
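
A simple simulation shows how the two time courses combine. The exponential forms and asymptotic values below are assumptions chosen to match the reported time constants, not the paper's fitted model.

```python
import numpy as np

dt = 0.001                      # 1 ms steps
t = np.arange(0.0, 5.0, dt)     # 5 s following field onset

tau_gain = 0.05                 # multiplicative change completes within ~50 ms
tau_sub = 1.5                   # subtractive change takes several seconds

gain = 1.0 - 0.5 * (1.0 - np.exp(-t / tau_gain))   # gain falls rapidly to 0.5
subtractive = 0.3 * (1.0 - np.exp(-t / tau_sub))   # subtractive signal builds slowly

field = 1.0                                        # steady adapting field intensity
adapting_signal = gain * field - subtractive       # attenuated adapting signal
print(adapting_signal[0], adapting_signal[-1])     # ~1.0 at onset, ~0.2 near steady state
```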


Perception | 1991

Integration of Form across Saccadic Eye Movements

Mary Hayhoe; Joel Lachter; Jerome A. Feldman

To perceive a stable world, one must somehow be able to relate visual information from successive fixations. Little is known, however, about the nature of the integrative process. By using a task which requires the integration of spatial position information from different fixations, it is demonstrated that visual information from previous fixations is preserved in a world-centered representation which is precise enough to support judgements of geometric shape. It is also shown that successive views are aligned with respect to common visual features, indicating that visual stability may be normally accomplished by a visual matching strategy in combination with cancellation by an eye-position signal.
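
The two alignment cues the abstract contrasts can be illustrated in a few lines: cancellation by a (noisy) eye-position signal versus matching a common visual feature across the saccade. Coordinates and noise levels are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

feature_world = np.array([3.0, 1.0])              # a landmark, in world coordinates
eye_before = np.array([0.0, 0.0])
eye_after = np.array([2.0, 0.5])

retinal_before = feature_world - eye_before       # where the landmark fell pre-saccade
retinal_after = feature_world - eye_after         # and where it falls post-saccade

# Cue 1: cancellation by an eye-position signal (noisy efference copy).
efference = (eye_after - eye_before) + rng.normal(0.0, 0.2, size=2)

# Cue 2: visual matching, i.e. the retinal displacement of the common feature.
visual_shift = retinal_before - retinal_after

aligned_by_cancellation = retinal_before - efference   # carries motor noise
aligned_by_matching = retinal_before - visual_shift    # exact by construction
print(aligned_by_cancellation, aligned_by_matching, retinal_after)
```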


The Journal of Neuroscience | 2009

Perceptual relearning of complex visual motion after V1 damage in humans

Krystel R. Huxlin; Tim Martin; Kristin N. Kelly; Meghan Riley; Deborah I. Friedman; W. Scott Burgin; Mary Hayhoe

Damage to the adult, primary visual cortex (V1) causes severe visual impairment that was previously thought to be permanent, yet several visual pathways survive V1 damage, mediating residual, often unconscious functions known as “blindsight.” Because some of these pathways normally mediate complex visual motion perception, we asked whether specific training in the blind field could improve not just simple but also complex visual motion discriminations in humans with long-standing V1 damage. Global direction discrimination training was administered to the blind field of five adults with unilateral cortical blindness. Training returned direction integration thresholds to normal at the trained locations. Although retinotopically localized to trained locations, training effects transferred to multiple stimulus and task conditions, improving the detection of luminance increments, contrast sensitivity for drifting gratings, and the extraction of motion signal from noise. Thus, perceptual relearning of complex visual motion processing is possible without an intact V1 but only when specific training is administered in the blind field. These findings indicate a much greater capacity for adult visual plasticity after V1 damage than previously thought. Most likely, basic mechanisms of visual learning must operate quite effectively in extrastriate visual cortex, providing new hope and direction for the development of principled rehabilitation strategies to treat visual deficits resulting from permanent visual cortical damage.

Collaboration


Dive into Mary Hayhoe's collaborations.

Top Co-Authors

Dana H. Ballard
University of Texas at Austin

Brian Sullivan
University of Texas at Austin

Matthew Tong
University of Texas at Austin

Jason A. Droll
University of California

Constantin A. Rothkopf
Frankfurt Institute for Advanced Studies

Jonathan Matthis
University of Texas at Austin

Jeff B. Pelz
Rochester Institute of Technology

Chia-Ling Li
University of Texas at Austin