Publication


Featured research published by Jeremy M. Wolfe.


Psychonomic Bulletin & Review | 1994

Guided Search 2.0: A revised model of visual search

Jeremy M. Wolfe

An important component of routine visual behavior is the ability to find one item in a visual world filled with other, distracting items. This ability to perform visual search has been the subject of a large body of research in the past 15 years. This paper reviews the visual search literature and presents a model of human search behavior. Built upon the work of Neisser, Treisman, Julesz, and others, the model distinguishes between a preattentive, massively parallel stage that processes information about basic visual features (color, motion, various depth cues, etc.) across large portions of the visual field and a subsequent limited-capacity stage that performs other, more complex operations (e.g., face recognition, reading, object identification) over a limited portion of the visual field. The spatial deployment of the limited-capacity process is under attentional control. The heart of the guided search model is the idea that attentional deployment of limited resources is guided by the output of the earlier parallel processes. Guided Search 2.0 (GS2) is a revision of the model in which virtually all aspects of the model have been made more explicit and/or revised in light of new data. The paper is organized into four parts: Part 1 presents the model and the details of its computer simulation. Part 2 reviews the visual search literature on preattentive processing of basic features and shows how the GS2 simulation reproduces those results. Part 3 reviews the literature on the attentional deployment of limited-capacity processes in conjunction and serial searches and shows how the simulation handles those conditions. Finally, Part 4 deals with shortcomings of the model and unresolved issues.


Journal of Experimental Psychology: Human Perception and Performance | 1989

Guided search: an alternative to the feature integration model for visual search.

Jeremy M. Wolfe; Kyle R. Cave; Susan L. Franzel

Subjects searched sets of items for targets defined by conjunctions of color and form, color and orientation, or color and size. Set size was varied and reaction times (RT) were measured. For many unpracticed subjects, the slopes of the resulting RT × Set Size functions are too shallow to be consistent with Treisman's feature integration model, which proposes serial, self-terminating search for conjunctions. Searches for triple conjunctions (Color × Size × Form) are easier than searches for standard conjunctions and can be independent of set size. A guided search model similar to Hoffman's (1979) two-stage model can account for these data. In the model, parallel processes use information about simple features to guide attention in the search for conjunctions. Triple conjunctions are found more efficiently than standard conjunctions because three parallel processes can guide attention more effectively than two.
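The guidance idea in this abstract can be caricatured in a few lines of code. The sketch below is purely illustrative and is not the authors' simulation; all numbers (activation values, noise level, set size) are invented. Each guiding feature channel contributes activation, the target matches every channel while each conjunction distractor matches only one, and attention visits items in descending order of summed activation:

```python
import random

def guided_search_rank(n_items, n_features, noise=0.5):
    """Return the rank at which attention reaches the target (1 = visited first).

    Caricature of guidance: the target matches all n_features guiding
    channels, each conjunction distractor matches exactly one, and every
    channel contributes Gaussian noise. Attention visits items in order
    of summed activation.
    """
    activations = []
    for i in range(n_items):
        signal = n_features if i == 0 else 1.0  # item 0 is the target
        act = signal + sum(random.gauss(0.0, noise) for _ in range(n_features))
        activations.append(act)
    order = sorted(range(n_items), key=lambda i: -activations[i])
    return order.index(0) + 1

random.seed(0)
trials = 2000
# two guiding channels (e.g., color x form) vs. three (color x size x form)
mean_rank_2 = sum(guided_search_rank(12, 2) for _ in range(trials)) / trials
mean_rank_3 = sum(guided_search_rank(12, 3) for _ in range(trials)) / trials
print(mean_rank_2, mean_rank_3)
```

With a third guiding channel the target's summed activation separates further from the distractors, so attention tends to reach it sooner, mirroring the finding that triple conjunctions are found more efficiently than standard conjunctions.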


Nature Reviews Neuroscience | 2004

What attributes guide the deployment of visual attention and how do they do it?

Jeremy M. Wolfe; Todd S. Horowitz

As you drive into the centre of town, cars and trucks approach from several directions, and pedestrians swarm into the intersection. The wind blows a newspaper into the gutter and a pigeon does something unexpected on your windshield. This would be a demanding and stressful situation, but you would probably make it to the other side of town without mishap. Why is this situation taxing, and how do you cope?


Cognitive Psychology | 1990

Modeling the role of parallel processing in visual search

Kyle R. Cave; Jeremy M. Wolfe

Treisman's Feature Integration Theory and Julesz's Texton Theory explain many aspects of visual search. However, these theories require that parallel processing mechanisms not be used in many visual searches for which they would be useful, and they imply that visual processing should be much slower than it is. Most importantly, they cannot account for recent data showing that some subjects can perform some conjunction searches very efficiently. Feature Integration Theory can be modified so that it accounts for these data and helps to answer these questions. In this new theory, which we call Guided Search, the parallel stage guides the serial stage as it chooses display elements to process. A computer simulation of Guided Search produces the same general patterns as human subjects in a number of different types of visual search.


Psychological Science | 1998

What Can 1 Million Trials Tell Us About Visual Search?

Jeremy M. Wolfe

In a typical visual search experiment, observers look through a set of items for a designated target that may or may not be present. Reaction time (RT) is measured as a function of the number of items in the display (set size), and inferences about the underlying search processes are based on the slopes of the resulting RT x Set Size functions. Most search experiments involve 5 to 15 subjects performing a few hundred trials each. In this retrospective study, I examine results from 2,500 experimental sessions of a few hundred trials each (approximately 1 million total trials). These data represent a wide variety of search tasks. The resulting picture of human search behavior requires changes in our theories of visual search.
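The slope-based inference described here is an ordinary least-squares fit of mean RT against set size. A minimal sketch, with invented numbers that are not taken from the paper:

```python
def rt_slope(set_sizes, mean_rts):
    """Least-squares slope (ms per item) of the RT x Set Size function."""
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(mean_rts) / n
    num = sum((x - mx) * (y - my) for x, y in zip(set_sizes, mean_rts))
    den = sum((x - mx) ** 2 for x in set_sizes)
    return num / den

# Invented example: a moderately inefficient search, target-present trials
set_sizes = [4, 8, 12, 16]
mean_rts = [520, 620, 730, 820]  # mean RT in ms at each set size
slope = rt_slope(set_sizes, mean_rts)
print(slope)  # ms/item added per extra display element
```

A slope near 0 ms/item suggests efficient, parallel-like search, while slopes of 20 ms/item and up are conventionally read as inefficient search; the paper's point is that a million trials complicate such simple category boundaries.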


Nature | 1998

Visual search has no memory

Todd S. Horowitz; Jeremy M. Wolfe

Humans spend a lot of time searching for things, such as roadside traffic signs, soccer balls or tumours in mammograms. These tasks involve the deployment of attention from one item in the visual field to the next. Common sense suggests that rejected items should be noted in some fashion so that effort is not expended in re-examining items that have been attended to and rejected. However, common sense is wrong. Here we asked human observers to search for a letter ‘T’ among letters ‘L’. This search demands visual attention and normally proceeds at a rate of 20–30 milliseconds per item. In the critical condition, we randomly relocated all letters every 111 milliseconds. This made it impossible for the subjects to keep track of the progress of the search. Nevertheless, the efficiency of the search was unchanged. Theories of visual search all assume that search relies on accumulating information about the identity of objects over time. Such theories predict that search efficiency will be drastically reduced if the scene is continually shuffled while the observer is trying to search through it. As we show that efficiency is not impaired, the standard theories must be revised.


Journal of Experimental Psychology: Human Perception and Performance | 1992

The role of categorization in visual search for orientation

Jeremy M. Wolfe; Stacia R. Friedman-Hill; Marion I. Stewart; Kathleen M. O'Connell

Visual search for 1 target orientation is fast and virtually independent of set size if all of the distractors are of a single, different orientation. However, in the presence of distractors of several orientations, search can become inefficient and strongly dependent on set size (Exp. 1). Search can be inefficient even if only 2 distractor orientations are used and even if those orientations are quite remote from the target orientation (e.g. 20 degrees or even 40 degrees away, Exp. 2). Search for 1 orientation among heterogeneous distractor orientations becomes more efficient if the target orientation is the only item possessing a categorical attribute such as steep or shallow (Exp. 3), tilted left or tilted right (Exp. 4), or simply tilted (Exps. 5 and 6). Orientation categories appear to be 1 of several strategies used in visual search for orientation. These strategies serve as a compromise between the limits on parallel visual processing and the demands of a complex visual world.


Vision Research | 2004

How fast can you change your mind? The speed of top-down guidance in visual search

Jeremy M. Wolfe; Todd S. Horowitz; Naomi M. Kenner; Megan Hyle; Nina Vasan

Most laboratory visual search tasks involve many searches for the same target, while in the real world we typically change our target with each search (e.g. find the coffee cup, then the sugar). How quickly can the visual system be reconfigured to search for a new target? Here observers searched for targets specified by cues presented at different stimulus onset asynchronies (SOAs) relative to the search stimulus. Search for different targets on each trial was compared to search for the same target over a block of trials. Experiments 1 and 2 showed that an exact picture cue acts within 200 ms to make varied target conjunction search as fast and efficient as blocked conjunction search. Word cues were slower and never as effective. Experiment 3 replicated this result with a task that required top-down information about target identity. Experiment 4 showed that the effects of an exact picture cue were not mandatory. Experiments 5 and 6 used pictures of real objects to cue targets by category level.


Journal of Experimental Psychology: General | 2007

Low target prevalence is a stubborn source of errors in visual search tasks

Jeremy M. Wolfe; Todd S. Horowitz; Michael J. Van Wert; Naomi M. Kenner; Skyler S. Place; Nour Kibbi

In visual search tasks, observers look for targets in displays containing distractors. Likelihood that targets will be missed varies with target prevalence, the frequency with which targets are presented across trials. Miss error rates are much higher at low target prevalence (1%-2%) than at high prevalence (50%). Unfortunately, low prevalence is characteristic of important search tasks such as airport security and medical screening where miss errors are dangerous. A series of experiments shows that this prevalence effect is very robust. In signal detection terms, the prevalence effect can be explained as a criterion shift and not a change in sensitivity. Several efforts to induce observers to adopt a better criterion fail. However, a regime of brief retraining periods with high prevalence and full feedback allows observers to hold a good criterion during periods of low prevalence with no feedback.
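The "criterion shift, not sensitivity change" account can be illustrated with standard equal-variance Gaussian signal detection theory. This is a generic SDT sketch, not the paper's analysis; the d' value and the size of the criterion shift are invented for illustration:

```python
from math import erf, sqrt

def Phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def miss_and_fa_rates(d_prime, criterion):
    """Equal-variance Gaussian SDT: noise ~ N(0,1), signal ~ N(d',1).

    Evidence above 'criterion' yields a 'target present' response.
    Returns (miss rate, false alarm rate).
    """
    miss = Phi(criterion - d_prime)  # signal sample falls below criterion
    fa = 1.0 - Phi(criterion)        # noise sample exceeds criterion
    return miss, fa

d = 2.0                  # sensitivity held constant in both conditions
neutral = d / 2.0        # unbiased criterion, midway between distributions
shifted = neutral + 1.0  # conservative shift, as at low prevalence (illustrative)

miss_hi, fa_hi = miss_and_fa_rates(d, neutral)
miss_lo, fa_lo = miss_and_fa_rates(d, shifted)
print(miss_hi, miss_lo)
```

With d' unchanged, moving the criterion upward trades false alarms for misses: the miss rate jumps (here from about 16% to 50%) even though the observer's ability to discriminate targets from distractors is identical, which is the pattern the prevalence experiments report.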


Perception & Psychophysics | 2001

Asymmetries in visual search: An introduction

Jeremy M. Wolfe

In visual search tasks, observers look for a target stimulus among distractor stimuli. A visual search asymmetry is said to occur when a search for stimulus A among stimulus B produces different results from a search for B among A. Anne Treisman made search asymmetries into an important tool in the study of visual attention. She argued that it was easier to find a target that was defined by the presence of a preattentive basic feature than to find a target defined by the absence of that feature. Four of the eight papers in this symposium in Perception & Psychophysics deal with the use of search asymmetries to identify stimulus attributes that behave as basic features in this context. Another two papers deal with the long-standing question of whether novelty can be considered to be a basic feature. Asymmetries can also arise when one type of stimulus is easier to identify or classify than another. Levin and Angelone’s paper on visual search for faces of different races is an examination of an asymmetry of this variety. Finally, Previc and Naegele investigate an asymmetry based on the spatial location of the target. Taken as a whole, these papers illustrate the continuing value of the search asymmetry paradigm.

Collaboration


Dive into Jeremy M. Wolfe's collaborations.

Top Co-Authors

Todd S. Horowitz (Brigham and Women's Hospital)
Melissa L.-H. Võ (Goethe University Frankfurt)
Stephen J. Flusberg (Brigham and Women's Hospital)
David E. Fencsik (Brigham and Women's Hospital)
Krista A. Ehinger (Brigham and Women's Hospital)
Michael J. Van Wert (Brigham and Women's Hospital)
Aude Oliva (Massachusetts Institute of Technology)