Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Monica S. Castelhano is active.

Publication


Featured research published by Monica S. Castelhano.


Psychological Review | 2006

Contextual guidance of eye movements and attention in real-world scenes: The role of global features in object search.

Antonio Torralba; Aude Oliva; Monica S. Castelhano; John M. Henderson

Many experiments have shown that the human visual system makes extensive use of contextual information for facilitating object search in natural scenes. However, the question of how to formally model contextual influences is still open. On the basis of a Bayesian framework, the authors present an original approach of attentional guidance by global scene context. The model comprises 2 parallel pathways; one pathway computes local features (saliency) and the other computes global (scene-centered) features. The contextual guidance model of attention combines bottom-up saliency, scene context, and top-down mechanisms at an early stage of visual processing and predicts the image regions likely to be fixated by human observers performing natural search tasks in real-world scenes.
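The two-pathway combination described in this abstract can be illustrated as a product of a local saliency map and a global scene-context prior. The following is a minimal sketch with synthetic maps, not the authors' implementation; the Gaussian band prior, the exponent weighting, and all parameter values are assumptions for illustration only.

```python
import numpy as np

def contextual_guidance(saliency, context_prior, gamma=0.3):
    """Combine a bottom-up saliency map with a scene-context prior.

    In the spirit of the contextual guidance model, the two maps are
    multiplied, with the saliency term down-weighted by an exponent
    (gamma is a free parameter in this sketch).
    """
    combined = (saliency ** gamma) * context_prior
    return combined / combined.sum()  # normalize to a probability map

# Synthetic 2D example: random local saliency, plus a context prior
# that expects the target in a horizontal band of the image
# (e.g. people tend to appear at "street level" in a city scene).
rng = np.random.default_rng(0)
saliency = rng.random((64, 64))
rows = np.arange(64)
band = np.exp(-((rows - 40) ** 2) / (2 * 6.0 ** 2))  # Gaussian band at row 40
context_prior = np.tile(band[:, None], (1, 64))

priority = contextual_guidance(saliency, context_prior)
# The peak of the combined map falls inside the context band, even
# though the most salient pixel alone may lie anywhere in the image.
peak_row = np.unravel_index(priority.argmax(), priority.shape)[0]
```

The multiplicative combination means regions that are salient but contextually implausible (outside the band) are suppressed, which is the qualitative behavior the model predicts for human fixations.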


international conference on image processing | 2003

Top-down control of visual attention in object detection

Aude Oliva; Antonio Torralba; Monica S. Castelhano; John M. Henderson

Current computational models of visual attention focus on bottom-up information and ignore scene context. However, studies in visual cognition show that humans use context to facilitate object detection in natural scenes by directing their attention or eyes to diagnostic regions. Here we propose a model of attention guidance based on global scene configuration. We show that the statistics of low-level features across the scene image determine where a specific object (e.g. a person) should be located. Human eye movements show that regions chosen by the top-down model agree with regions scrutinized by human observers performing a visual search task for people. The results validate the proposition that top-down information from visual context modulates the saliency of image regions during the task of object detection. Contextual information provides a shortcut for efficient object detection systems.


Eye Movements: A Window on Mind and Brain | 2007

Visual saliency does not account for eye movements during visual search in real-world scenes

John M. Henderson; James R. Brockmole; Monica S. Castelhano; Michael L. Mack

This chapter tests the hypothesis that fixation locations during scene viewing are primarily determined by visual salience. Eye movements were collected from participants who viewed photographs of real-world scenes during an active search task. Visual salience as determined by a popular computational model did not predict region-to-region saccades or saccade sequences any better than did a random model. Consistent with other reports in the literature, intensity, contrast, and edge density differed at fixated scene regions compared to regions that were not fixated, but these fixated regions also differed in rated semantic informativeness. Therefore, any observed correlations between fixation locations and image statistics cannot be unambiguously attributed to these image statistics. The chapter concludes that visual saliency does not account for eye movements during active search. The existing evidence is consistent with the hypothesis that cognitive factors play the dominant role in active gaze control.


Journal of Vision | 2009

Viewing task influences eye movement control during active scene perception

Monica S. Castelhano; Michael L. Mack; John M. Henderson

Expanding on the seminal work of G. T. Buswell (1935) and A. L. Yarbus (1967), we investigated how task instruction influences specific parameters of eye movement control. In the present study, 20 participants viewed color photographs of natural scenes under two instruction sets: visual search and memorization. Results showed that task influenced a number of eye movement measures including the number of fixations and gaze duration on specific objects. Additional analyses revealed that the areas fixated were qualitatively different between the two tasks. However, other measures such as average saccade amplitude and individual fixation durations remained constant across the viewing of the scene and across tasks. The present study demonstrates that viewing task biases the selection of scene regions and aggregate measures of fixation time on those regions but does not influence other measures, such as the duration of individual fixations.


Journal of Experimental Psychology: Human Perception and Performance | 2007

Initial scene representations facilitate eye movement guidance in visual search

Monica S. Castelhano; John M. Henderson

What role does the initial glimpse of a scene play in subsequent eye movement guidance? In 4 experiments, a brief scene preview was followed by object search through the scene via a small moving window that was tied to fixation position. Experiment 1 demonstrated that the scene preview resulted in more efficient eye movements compared with a control preview. Experiments 2 and 3 showed that this scene preview benefit was not due to the conceptual category of the scene or identification of the target object in the preview. Experiment 4 demonstrated that the scene preview benefit was unaffected by changing the size of the scene from preview to search. Taken together, the results suggest that an abstract (size invariant) visual representation is generated in an initial scene glimpse and that this representation can be retained in memory and used to guide subsequent eye movements.


Journal of Experimental Psychology: Learning, Memory, and Cognition | 2006

Contextual cueing in naturalistic scenes: Global and local contexts

James R. Brockmole; Monica S. Castelhano; John M. Henderson

In contextual cueing, the position of a target within a group of distractors is learned over repeated exposure to a display with reference to a few nearby items rather than to the global pattern created by the elements. The authors contrasted the role of global and local contexts for contextual cueing in naturalistic scenes. Experiment 1 showed that learned target positions transfer when local information is altered but not when global information is changed. Experiment 2 showed that scene-target covariation is learned more slowly when local, but not global, information is repeated across trials than when global but not local information is repeated. Thus, in naturalistic scenes, observers are biased to associate target locations with global contexts.


Visual Cognition | 2005

Incidental visual memory for objects in scenes

Monica S. Castelhano; John M. Henderson

Three experiments were conducted to investigate the existence of incidentally acquired, long-term, detailed visual memory for objects embedded in previously viewed scenes. Participants performed intentional memorization and incidental visual search learning tasks while viewing photographs of real-world scenes. A visual memory test for previously viewed objects from these scenes then followed. Participants were not aware that they would be tested on the scenes following incidental learning in the visual search task. In two types of memory tests for visually specific object information (token discrimination and mirror-image discrimination), performance following both the memorization and visual search conditions was reliably above chance. These results indicate that recent demonstrations of good visual memory during scene viewing are not due to intentional scene memorization. Instead, long-term visual representations are incidentally generated as a natural product of scene perception.


Psychology and Aging | 2009

Eye Movements and the Perceptual Span in Older and Younger Readers

Keith Rayner; Monica S. Castelhano; Jinmian Yang

The size of the perceptual span (or the span of effective vision) in older readers was examined with the moving window paradigm (G. W. McConkie & K. Rayner, 1975). Two experiments demonstrated that older readers have a smaller and more symmetric span than that of younger readers. These 2 characteristics (smaller and more symmetric span) of older readers may be a consequence of their less efficient processing of nonfoveal information, which results in a riskier reading strategy.


Canadian Journal of Experimental Psychology | 2008

Stable Individual Differences Across Images in Human Saccadic Eye Movements

Monica S. Castelhano; John M. Henderson

Individual differences in eye movements during picture viewing were examined across image format, content, and foveal quality in 3 experiments. Experiment 1 demonstrated that an individual's fixation durations were strongly related across 3 types of scene formats and that saccade amplitudes followed the same pattern. In Experiment 2, a similar relationship was observed for fixation durations across faces and scenes, although the amplitude relationship did not hold as strongly. In Experiment 3, the duration and amplitude relationships were observed when foveal information was degraded and even removed. Eye movement characteristics differ across individuals, but there is a great deal of consistency within individuals when viewing different types of images.


Attention, Perception, & Psychophysics | 2003

Eye movements and picture processing during recognition

John M. Henderson; Carrick C. Williams; Monica S. Castelhano; Richard J. Falk

Eye movements were monitored during a recognition memory test of previously studied pictures of full-color scenes. The test scenes were identical to the originals, had an object deleted from them, or included a new object substituted for an original object. In contrast to a prior report (Parker, 1978), we found no evidence that object deletions or substitutions could be recognized on the basis of information acquired from the visual periphery. Deletions were difficult to recognize peripherally, and the eyes were not attracted to them. Overall, the amplitude of the average saccade to the critical object during the memory test was less than 4.5° of visual angle in all conditions and averaged 4.1° across conditions. We conclude that information about object presence and identity in a scene is limited to a relatively small region around the current fixation point.

Collaboration


Dive into Monica S. Castelhano's collaborations.

Top Co-Authors

Keith Rayner
University of California

Aude Oliva
Massachusetts Institute of Technology

Alexander Pollatsek
University of Massachusetts Amherst

Antonio Torralba
Massachusetts Institute of Technology

Jinmian Yang
University of Massachusetts Amherst