Publication


Featured research published by Gregory J. Zelinsky.


Journal of Experimental Psychology: Human Perception and Performance | 1999

Influence of Attentional Capture on Oculomotor Control

Jan Theeuwes; Arthur F. Kramer; Sowon Hahn; David E. Irwin; Gregory J. Zelinsky

Previous research has shown that when searching for a color singleton, top-down control cannot prevent attentional capture by an abrupt visual onset. The present research addressed whether a task-irrelevant abrupt onset would affect eye movement behavior when searching for a color singleton. Results show that in many instances the eye moved in the direction of the task-irrelevant abrupt onset. There was evidence that top-down control could neither entirely prevent attentional capture by visual onsets nor prevent the eye from starting to move in the direction of the onset. Results suggest parallel programming of 2 saccades: 1 voluntary goal-directed eye movement toward the color singleton target and 1 stimulus-driven eye movement reflexively elicited by the abrupt onset. A neurophysiologically plausible model that can account for the current findings is discussed.


Vision Research | 2002

Eye movements in iconic visual search

Rajesh P. N. Rao; Gregory J. Zelinsky; Mary Hayhoe; Dana H. Ballard

Visual cognition depends critically on the moment-to-moment orientation of gaze. To change the gaze to a new location in space, that location must be computed and used by the oculomotor system. One of the most common sources of information for this computation is the visual appearance of an object. A crucial question is: How is the appearance information contained in the photometric array converted into a target position? This paper proposes a model that accomplishes this calculation. The model uses iconic scene representations derived from oriented spatiochromatic filters at multiple scales. Visual search for a target object proceeds in a coarse-to-fine fashion, with the target's largest-scale filter responses being compared first. Task-relevant target locations are represented as saliency maps which are used to program eye movements. A central feature of the model is that it separates the targeting process, which changes gaze, from the decision process, which extracts information at or near the new gaze point to guide behavior. The model provides a detailed explanation for center-of-gravity saccades that have been observed in many previous experiments. In addition, the model's targeting performance has been compared with the eye movements of human subjects under identical conditions in natural visual search tasks. The results show good agreement both quantitatively (the search paths are strikingly similar) and qualitatively (the fixations of false targets are comparable).
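The model itself is not reproduced here, but the coarse-to-fine matching idea can be sketched: filter responses for the target and the scene are compared scale by scale, the accumulated match acts as a saliency map, and the next eye movement is programmed toward the best-matching location. In the minimal Python sketch below, isotropic Gaussian blurs stand in for the paper's oriented spatiochromatic filters, and the scales, scene, and target are made-up assumptions for illustration, not the published implementation.

```python
# Minimal sketch of coarse-to-fine, filter-based target search (illustrative only).
# Isotropic Gaussian blurs stand in for the paper's oriented spatiochromatic filters.
import numpy as np
from scipy.ndimage import gaussian_filter

def filter_responses(image, scales=(8, 4, 2)):
    """Return one response map per scale, coarsest first."""
    return [gaussian_filter(image, sigma=s) for s in scales]

def saliency_map(scene, target_patch, scales=(8, 4, 2)):
    """Higher values mark scene locations whose filter responses, accumulated
    coarse to fine, better match a scalar summary of the target's responses."""
    scene_resp = filter_responses(scene, scales)
    target_resp = filter_responses(target_patch, scales)
    mismatch = np.zeros_like(scene)
    for s_map, t_map in zip(scene_resp, target_resp):
        mismatch += (s_map - t_map.mean()) ** 2       # low mismatch = target-like region
    return -mismatch                                  # negate: high salience = good match

def next_fixation(scene, target_patch):
    """Program the next eye movement toward the most salient (best-matching) point."""
    sal = saliency_map(scene, target_patch)
    return np.unravel_index(np.argmax(sal), sal.shape)

# Toy usage: a bright 16x16 target embedded in a noisy 128x128 scene.
rng = np.random.default_rng(0)
scene = rng.normal(0.0, 0.1, size=(128, 128))
scene[40:56, 80:96] += 1.0                            # target region
target = np.ones((16, 16))
print(next_fixation(scene, target))                   # a point inside the target region
```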


Journal of Experimental Psychology: Human Perception and Performance | 1997

Eye movements during parallel–serial visual search.

Gregory J. Zelinsky; David L. Sheinberg

Two experiments (one using O- and Q-like stimuli and the other using colored-oriented bars) investigated the oculomotor behavior accompanying parallel-serial visual search. Eye movements were recorded as participants searched for a target in 5- or 17-item displays. Results indicated the presence of parallel-serial search dichotomies and 2:1 ratios of negative to positive slopes in the number of saccades initiated during both search tasks. This saccade number measure also correlated highly with search times, accounting for up to 67% of the reaction time (RT) variability. Weak correlations between fixation durations and RTs suggest that this oculomotor measure may be related more to stimulus factors than to search processes. A third experiment compared free-eye and fixed-eye searches and found a small RT advantage when eye movements were prevented. Together these findings suggest that parallel-serial search dichotomies are reflected in oculomotor behavior.
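The two analyses named above, set-size slopes for target-absent versus target-present trials and the proportion of RT variance explained by saccade count, can be illustrated with a short sketch. The data below are synthetic and the generating parameters are assumptions chosen only to show the shape of the analysis, not values from the paper.

```python
# Illustrative sketch (synthetic data) of two analyses described in the abstract:
# per-item slopes of saccade count over set size for target-present vs. target-absent
# trials, and the proportion of RT variance explained by saccade count (r**2).
import numpy as np

rng = np.random.default_rng(1)
set_sizes = np.array([5, 17])

# Synthetic per-trial saccade counts, constructed so the target-absent slope is
# about twice the target-present slope (the 2:1 pattern mentioned above).
present_saccades = np.concatenate([rng.poisson(1 + 0.15 * n, 50) for n in set_sizes])
absent_saccades  = np.concatenate([rng.poisson(1 + 0.30 * n, 50) for n in set_sizes])
x = np.repeat(set_sizes, 50)

present_slope = np.polyfit(x, present_saccades, 1)[0]   # saccades per item, target present
absent_slope  = np.polyfit(x, absent_saccades, 1)[0]    # saccades per item, target absent
print(f"absent/present slope ratio: {absent_slope / present_slope:.1f}")  # roughly 2 here

# RT modeled as a per-saccade cost plus residual noise; r**2 of saccade count vs. RT.
rt = 250 + 180 * present_saccades + rng.normal(0, 120, present_saccades.size)
r = np.corrcoef(present_saccades, rt)[0, 1]
print(f"saccade count explains {r**2:.0%} of RT variance in this synthetic set")
```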


Psychological Review | 2008

A Theory of Eye Movements during Target Acquisition

Gregory J. Zelinsky

The gaze movements accompanying target localization were examined in human observers and a computational model (target acquisition model [TAM]). Search contexts ranged from fully realistic scenes to toys in a crib to Os and Qs, and manipulations included set size, target eccentricity, and target-distractor similarity. Observers and the model always previewed the same targets and searched identical displays. Behavioral and simulated eye movements were analyzed for acquisition accuracy, efficiency, and target guidance. TAM's behavior generally fell within the behavioral mean's 95% confidence interval for all measures in each experiment/condition. This agreement suggests that a fixed-parameter model using spatiochromatic filters and a simulated retina, when driven by the correct visual routines, can be a good general-purpose predictor of human target acquisition behavior.
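The agreement criterion used above, whether a simulated measure falls within the 95% confidence interval of the behavioral mean, is easy to make concrete. The sketch below shows one standard way to compute that interval from per-observer values; the fixation counts and model value are hypothetical numbers, not results from the paper.

```python
# Minimal sketch of the model-vs.-behavior agreement check: does a simulated
# measure fall within the 95% confidence interval of the human mean?
import numpy as np
from scipy import stats

def within_95ci(human_values, model_value):
    """True if model_value lies inside the 95% CI of the mean of human_values."""
    human_values = np.asarray(human_values, dtype=float)
    mean = human_values.mean()
    sem = stats.sem(human_values)                              # standard error of the mean
    half_width = stats.t.ppf(0.975, df=human_values.size - 1) * sem
    return (mean - half_width) <= model_value <= (mean + half_width)

# Hypothetical example: mean number of fixations to acquire the target, per observer.
human_fixations = [3.1, 2.8, 3.4, 3.0, 2.6, 3.3, 2.9, 3.2]
print(within_95ci(human_fixations, model_value=3.05))          # True for this made-up data
```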


Vision Research | 2006

Scene context guides eye movements during visual search

Mark Neider; Gregory J. Zelinsky

How does scene context guide search behavior to likely target locations? We had observers search for scene-constrained and scene-unconstrained targets, and found that scene-constrained targets were detected faster and with fewer eye movements. Observers also directed more initial saccades to target-consistent scene regions and devoted more time to searching these regions. However, final checking fixations on target-inconsistent regions were common in target-absent trials, suggesting that scene context does not strictly confine search to likely target locations. We interpret these data as evidence for a rapid top-down biasing of search behavior by scene context to the target-consistent regions of a scene.


Attention, Perception, & Psychophysics | 2002

Eye movements and scene perception: Memory for things observed

David E. Irwin; Gregory J. Zelinsky

In this study, we examined the characteristics of on-line scene representations, using a partial-report procedure. Subjects inspected a simple scene containing seven objects for 1, 3, 5, 9, or 15 fixations; shortly after scene offset, a marker cued one scene location for report. Consistent with previous research, the results indicated that scene representations are relatively sparse; even after 15 fixations on a scene, the subjects remembered the position/identity pairings for only about 78% of the objects in the scene, or the equivalent of about five objects' worth of information. Report of the last three objects that were foveated and of the object about to be foveated was very accurate, however, suggesting that recently attended information in a scene is represented quite well. Information about the scene appeared to accumulate over multiple fixations, but the capacity of the on-line scene representation appeared to be limited to about five items. Implications for recent theories of scene representation are discussed.
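The "about five objects' worth" figure follows from a simple product of report accuracy and scene set size, as the abstract indicates; the brief check below just makes that arithmetic explicit.

```python
# Arithmetic behind the capacity estimate quoted above: report accuracy times
# the number of objects in the scene gives an effective capacity in objects.
n_objects = 7                 # objects per scene
report_accuracy = 0.78        # proportion of position/identity pairings reported
print(f"effective capacity ~ {report_accuracy * n_objects:.1f} objects")   # ~5.5
```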


Cognition | 2008

Coordinating Cognition: The Costs and Benefits of Shared Gaze During Collaborative Search

Susan E. Brennan; Xin Chen; Christopher A. Dickinson; Mark Neider; Gregory J. Zelinsky

Collaboration has its benefits, but coordination has its costs. We explored the potential for remotely located pairs of people to collaborate during visual search, using shared gaze and speech. Pairs of searchers wearing eyetrackers performed an O-in-Qs search task either alone or in one of three collaboration conditions: shared-gaze (with one searcher seeing a gaze cursor indicating where the other was looking, and vice versa), shared-voice (by speaking to each other), and shared-gaze-plus-voice (by using both gaze cursors and speech). Although collaborating pairs performed better than solitary searchers, search in the shared-gaze condition was best of all: twice as fast and efficient as solitary search. People can successfully communicate and coordinate their searching labor using shared gaze alone. Strikingly, shared-gaze search was even faster than shared-gaze-plus-voice search; speaking incurred substantial coordination costs. We conclude that shared gaze affords a highly efficient method of coordinating parallel activity in a time-critical spatial task.


Vision Research | 1996

Using eye saccades to assess the selectivity of search movements

Gregory J. Zelinsky

The degree of selectivity or guidance underlying search was tested by having subjects search for a target (a red vertical or green horizontal bar) among Similar (red horizontal and green vertical bars) and Dissimilar distractors (blue and yellow diagonal bars). If search is indeed a guided process, then the Dissimilar items should not be given the same scrutiny as elements sharing a feature with the target. The frequency of eye movements directed to the two distractor types was used as an indicator of this scrutiny. The analysis revealed almost equal percentages of saccades to Similar and Dissimilar elements (55% and 45%, respectively). Although indicating some evidence for selectivity during oculomotor search, this finding suggests that simpler and less optimal strategies may undermine the more efficient guided search algorithm.
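The selectivity measure described above can be made explicit with a small sketch: the proportion of distractor-directed saccades landing on Similar items is compared against the chance level expected from unguided selection and the near-100% level expected from strictly guided search. The chance level of 0.5 assumes equal numbers of Similar and Dissimilar distractors per display, which is an illustrative assumption.

```python
# Sketch of the selectivity measure: proportion of distractor-directed saccades
# that land on Similar (target-feature-sharing) items, versus two baselines.
similar_saccades = 55          # per 100 distractor-directed saccades (from the abstract)
dissimilar_saccades = 45

p_similar = similar_saccades / (similar_saccades + dissimilar_saccades)
chance_level = 0.5             # assumes equal numbers of Similar and Dissimilar distractors
guided_level = 1.0             # strictly guided search would rarely fixate Dissimilar items
print(f"observed: {p_similar:.2f}  chance: {chance_level}  fully guided: {guided_level}")
print(f"selectivity above chance: {p_similar - chance_level:+.2f}")   # small -> weak guidance
```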


Vision Research | 2006

Real-world visual search is dominated by top-down guidance

Xin Chen; Gregory J. Zelinsky

How do bottom-up and top-down guidance signals combine to guide search behavior? Observers searched for a target either with or without a preview (top-down manipulation) or a color singleton (bottom-up manipulation) among the display objects. With a preview, reaction times were faster and more initial eye movements were guided to the target; the singleton failed to attract initial saccades under these conditions. Only in the absence of a preview did subjects preferentially fixate the color singleton. We conclude that the search for realistic objects is guided primarily by top-down control. Implications for saliency map models of visual search are discussed.


International Conference on Computer Vision | 2007

Real-time Accurate Object Detection using Multiple Resolutions

Wei Zhang; Gregory J. Zelinsky; Dimitris Samaras

We propose a multi-resolution framework inspired by human visual search for general object detection. Different resolutions are represented using a coarse-to-fine feature hierarchy. During detection, the lower resolution features are initially used to reject the majority of negative windows at relatively low cost, leaving a relatively small number of windows to be processed in higher resolutions. This enables the use of computationally more expensive higher resolution features to achieve high detection accuracy. We applied this framework to Histograms of Oriented Gradients (HOG) features for object detection. Our multi-resolution detector produced better performance for pedestrian detection than state-of-the-art methods (Dalal and Triggs, 2005), and was faster during both training and testing. Testing our method on motorbikes and cars from the VOC database revealed similar improvements in both speed and accuracy, suggesting that our approach is suitable for real-time general object detection applications.
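The coarse-to-fine rejection structure described above can be sketched without the paper's actual HOG classifiers: a cheap low-resolution score discards most sliding windows early, and only the survivors are evaluated with the more expensive high-resolution score. The scoring functions, window size, and thresholds below are placeholder assumptions chosen only to show the cascade, not the detectors used in the paper.

```python
# Illustrative two-stage rejection cascade (not the paper's HOG detector).
import numpy as np

def coarse_score(window):
    """Cheap stand-in for a low-resolution feature + classifier score."""
    return float(window[::4, ::4].mean())             # 1/4-resolution brightness summary

def fine_score(window):
    """More expensive stand-in for a full-resolution feature + classifier score."""
    gy, gx = np.gradient(window)
    return float(np.hypot(gx, gy).mean())             # mean gradient magnitude

def detect(image, win=32, stride=16, coarse_thresh=0.5, fine_thresh=0.05):
    """Slide a window over the image; reject most windows at the coarse stage."""
    detections = []
    h, w = image.shape
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            window = image[y:y + win, x:x + win]
            if coarse_score(window) < coarse_thresh:  # cheap early rejection
                continue
            if fine_score(window) >= fine_thresh:     # costly confirmation on survivors only
                detections.append((y, x))
    return detections

# Toy usage: one bright, textured 32x32 patch in an otherwise empty image.
rng = np.random.default_rng(2)
img = np.zeros((128, 128))
img[64:96, 32:64] = 0.6 + 0.4 * rng.random((32, 32))
print(detect(img))                                    # [(64, 32)]: the window on the patch
```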

Collaboration


Gregory J. Zelinsky's top co-authors:

Joseph Schmidt, University of South Carolina

Mark Neider, University of Central Florida

Xin Chen, Stony Brook University