Jeffrey Y. Lin
University of Washington
Publications
Featured research published by Jeffrey Y. Lin.
Psychonomic Bulletin & Review | 2008
Steven Franconeri; Jeffrey Y. Lin; Zenon W. Pylyshyn; Brian D. Fisher; James T. Enns
Everyday tasks often require us to keep track of multiple objects in dynamic scenes. Past studies show that tracking becomes more difficult as objects move faster. In the present study, we show that this trade-off may not be due to increased speed itself but may, instead, be due to the increased crowding that usually accompanies increases in speed. Here, we isolate changes in speed from variations in crowding, by projecting a tracking display either onto a small area at the center of a hemispheric projection dome or onto the entire dome. Use of the larger display increased retinal image size and object speed by a factor of 4 but did not increase interobject crowding. Results showed that tracking accuracy was equally good in the large-display condition, even when the objects traveled far into the visual periphery. Accuracy was also not reduced when we tested object speeds that limited performance in the small-display condition. These results, along with a reinterpretation of past studies, suggest that we might be able to track multiple moving objects as fast as we can a single moving object, once the effect of object crowding is eliminated.
Psychological Science | 2008
Jeffrey Y. Lin; Steven Franconeri; James T. Enns
How observers distribute limited processing resources across regions of a scene depends on a dynamic balance between current goals and reflexive tendencies. Past research showed that these reflexive tendencies include orienting toward objects that expand as if they were looming toward the observer, presumably because this signal indicates an impending collision. Here we report that during visual search, items that loom abruptly capture attention more strongly when they approach from the periphery rather than from near the center of gaze (Experiment 1), and target objects are more likely to be attended when they are on a collision path with the observer rather than on a near-miss path (Experiment 2). Both effects are exaggerated when search is performed in a large projection dome (Experiment 3). These findings suggest that the human visual system prioritizes events that are likely to require a behaviorally urgent response.
PLOS Biology | 2010
Jeffrey Y. Lin; Amanda Pype; Scott O. Murray; Geoffrey M. Boynton
What determines whether a scene is remembered or forgotten? Our results show how visual scenes are encoded into memory at behaviorally relevant points in time.
Current Biology | 2009
Jeffrey Y. Lin; Scott O. Murray; Geoffrey M. Boynton
Visual images that convey threatening information can automatically capture attention. One example is an object looming in the direction of the observer, presumably because such a stimulus signals an impending collision. A critical question for understanding the relationship between attention and conscious awareness is whether awareness is required for this type of prioritized attentional selection. Although it has been suggested that visual spatial attention can be affected only by consciously perceived events, we show that automatic allocation of attention can occur even without conscious awareness of impending threat. We used a visual search task to show that a looming stimulus on a collision path with an observer captures attention but a looming stimulus on a near-miss path does not. Critically, observers were unaware of any difference between collision and near-miss stimuli even when explicitly asked to discriminate between them in separate experiments. These results counter traditional salience-based models of attentional capture, demonstrating that in the absence of perceptual awareness, the visual system can extract behaviorally relevant details from a visual scene and automatically categorize threatening versus nonthreatening images at a level of precision beyond our conscious perceptual capabilities.
Journal of Vision | 2011
Jeffrey Y. Lin; Bjorn Hubert-Wallander; Scott O. Murray; Geoffrey M. Boynton
Performance on a visual task is improved when attention is directed to relevant spatial locations or specific visual features. Spatial attention can be directed either voluntarily (endogenously) or automatically (exogenously). However, feature-based attention has only been shown to operate endogenously. Here, we show that an exogenous cue to a visual feature can lead to improved performance in visual search. Response times were measured as subjects detected or discriminated a target oval among an array of disks, each with a unique color. An uninformative colored cue was flashed at the beginning of each trial that sometimes matched the location and/or color of the target oval. Subjects detected or discriminated the target faster when the color of the cue matched the color of the target, regardless of the cue's location relative to the target. Our results provide evidence for a previously unknown exogenous cuing mechanism for feature-based attention.
Journal of General Internal Medicine | 2008
Kathlyn E. Fletcher; Francine C. Wiest; Lakshmi Halasyamani; Jeffrey Y. Lin; Victoria Nelson; Samuel R. Kaufman; Sanjay Saint; Marilyn M. Schapira
Archive | 2010
Jeffrey Y. Lin; Ione Fine; Geoffrey M. Boynton; Scott O. Murray
Journal of Vision | 2010
Amanda Pype; Jeffrey Y. Lin; Scott O. Murray; Geoffrey M. Boynton
Journal of Vision | 2012
Jeffrey Y. Lin; Bjorn Hubert-Wallander; Sung Jun Joo; Scott O. Murray; Geoffrey M. Boynton
F1000Research | 2012
Bjorn Hubert-Wallander; Jeffrey Y. Lin; Sung Jun Joo; Scott O. Murray; Geoffrey M. Boynton