Publications


Featured research published by Daniel B. Vatterott.


Psychology of Learning and Motivation | 2014

Chapter Eight – The Control of Visual Attention: Toward a Unified Account

Shaun P. Vecera; Joshua D. Cosman; Daniel B. Vatterott; Zachary J. J. Roper

Visual attention is deployed through visual scenes to find behaviorally relevant targets. This attentional deployment—or attentional control—can be based on either stimulus factors, such as the salience of an object or region, or goal relevance, such as the match between an object and the target being searched for. Decades of research have measured attentional control by examining attentional interruption by a completely irrelevant distracting object, which may or may not capture attention. Based on the results of attentional capture tasks, the literature has distilled two alternative views of attentional control and capture: one focused on stimulus-driven factors and the other based on goal-driven factors. In the current paper, we propose an alternative in which stimulus-driven control and goal-driven control are not mutually exclusive but instead related through task dynamics, specifically experience. Attentional control is initially stimulus-driven. However, as participants gain experience with all aspects of a task, attentional control rapidly becomes increasingly goal-driven. We present four experiments that examine this experience-dependent attentional tuning. We show that to resist capture and be highly selective based on target properties, attention must be configured to aspects of a task through experience.


Frontiers in Psychology | 2013

Prolonged disengagement from distractors near the hands

Daniel B. Vatterott; Shaun P. Vecera

Because items near our hands are often more important than items far from our hands, the brain processes visual items near our hands differently than items far from our hands. Multiple experiments have attributed this processing difference to spatial attention, but the exact mechanism behind how spatial attention near our hands changes is still under investigation. The current experiments sought to differentiate between two of the proposed mechanisms: a prioritization of the space near the hands and a prolonged disengagement of spatial attention near the hands. To differentiate between these two accounts, we used the additional singleton paradigm in which observers searched for a shape singleton among homogeneously shaped distractors. On half the trials, one of the distractors was a different color. Both the prioritization and disengagement accounts predict differently colored distractors near the hands will slow target responses more than differently colored distractors far from the hands, but the prioritization account also predicts faster responses to targets near the hands than far from the hands. The disengagement account does not make this prediction, because attention does not need to be disengaged when the target appears near the hand. We found support for the disengagement account: Salient distractors near the hands slowed responses more than those far from the hands, yet observers did not respond faster to targets near the hands.


Attention, Perception, & Psychophysics | 2014

Visual statistical learning can drive object-based attentional selection

Libo Zhao; Joshua D. Cosman; Daniel B. Vatterott; Prahlad Gupta; Shaun P. Vecera

Recent work on statistical learning has demonstrated that environmental regularities can influence aspects of perception, such as familiarity judgments. Here, we ask if statistical co-occurrences accumulated from visual statistical learning could form objects that serve as the units of attention (i.e., object-based attention). Experiment 1 demonstrated that, after observers first viewed pairs of shapes that co-occurred in particular spatial relationships, they were able to recognize the co-occurring pairs, and were faster to discriminate two targets when they appeared within a learned pair (“object”) than when the targets appeared between learned pairs, demonstrating an equivalent of an object-based attention effect. Experiment 2 replicated the results of Experiment 1 using a different set of shape pairs, and revealed a negative association between the attention effect and familiarity judgments of the co-occurred pairs. Experiment 3 reports three control experiments that validated the task procedure and ruled out alternative accounts.


Visual Cognition | 2015

The attentional window configures to object and surface boundaries

Daniel B. Vatterott; Shaun P. Vecera

Attention can select items based on location or features. Belopolsky and colleagues posited the attentional window hypothesis, which theorized that spatial and featural selection interact such that featural selection occurs within a “window” of spatial selection. Kerzel and colleagues recently found that the attentional window can take complex shapes, but cannot configure around non-contiguous locations. The current experiments investigated whether perceptual grouping cues, which produce perceptual objects or surfaces, enable the attentional window to configure around non-contiguous locations. Using the additional singleton paradigm, we reasoned that observers (1) would be slowed by a colour singleton distractor that appears within the observers’ attentional window and (2) would be unaffected by distractors that do not appear within the attentional window. In separate blocks of trials, a target appeared upon one of two objects. Observers were cued to the relevant surface, and we asked if responses were affected by distractors on the cued task-relevant surface, and on the uncued irrelevant surface. Colour singleton distractors slowed responses when they appeared on the cued surface, even when those locations were non-contiguous locations. Distractors on the irrelevant surface did not affect responses. The results support a highly adaptable attentional window that is configurable to the surfaces and boundaries established by intermediate-level vision.


Attention, Perception, & Psychophysics | 2018

Rejecting salient distractors: Generalization from experience

Daniel B. Vatterott; Michael C. Mozer; Shaun P. Vecera

Distraction impairs performance of many important, everyday tasks. Attentional control limits distraction by preferentially selecting important items for limited-capacity cognitive operations. Research in attentional control has typically investigated the degree to which selection of items is stimulus-driven versus goal-driven. Recent work finds that when observers initially learn a task, the selection is based on stimulus-driven factors, but through experience, goal-driven factors have an increasing influence. The modulation of selection by goals has been studied within the paradigm of learned distractor rejection, in which experience over a sequence of trials enables individuals eventually to ignore a perceptually salient distractor. The experiments presented examine whether observers can generalize learned distractor rejection to novel distractors. Observers searched for a target and ignored a salient color-singleton distractor that appeared in half of the trials. In Experiment 1, observers who learned distractor rejection in a variable environment rejected a novel distractor more effectively than observers who learned distractor rejection in a less variable, homogeneous environment, demonstrating that variable, heterogeneous stimulus environments encourage generalizable learned distractor rejection. Experiments 2 and 3 investigated the time course of learned distractor rejection across the experiment and found that after experiencing four color-singleton distractors in different blocks, observers could effectively reject subsequent novel color-singleton distractors. These results suggest that the optimization of attentional control to the task environment can be interpreted as a form of learning, demonstrating experience’s critical role in attentional control.


Visual Cognition | 2012

The attentional window configures to object boundaries

Daniel B. Vatterott; Shaun P. Vecera

Research on attentional capture has shown that the efficiency of task-relevant target selection is often affected by salient task-irrelevant events. The attentional window hypothesis (Belopolsky, Zwaan, Theeuwes, & Kramer, 2007) offers one explanation for why targets are, at times, selected efficiently in the face of more salient events. In this hypothesis, observers’ attention can be diffuse or focused, and only irrelevant events within this window-like space can capture attention. One unanswered question is whether the attentional window can be configured to noncontiguous spatial locations within an object or functions as a zoom-lens, which must maintain a spotlight-like distribution (Eriksen & St. James, 1986). Object-based attention research has shown that objects control attentional selection (e.g., Egly, Driver, & Rafal, 1994; Vecera, 1994). In fact, Cosman and Vecera (2012) demonstrated that object-based attention modulates the extent of distractor processing. These findings suggest an observer’s attentional window may naturally configure to objects. For example, Figure 1b represents a situation where attention might spread through a cued object. If the attentional window functions like a zoom-lens and is unable to configure to object boundaries, then colour singletons at all the locations will capture attention. On the other hand, if the attentional window accommodates object boundaries, then only colour singletons on the cued object will capture attention. Kerzel, Born, and Schonhammer (in press) recently found that observers were able to constrain the attentional window to only an inner or outer ring of items, but observers were not able to constrain capture to a set of items without this spatial separation between the relevant groups. We suggest that observers were not able to constrain capture to the relevant groups because there were not strong enough perceptual grouping cues (e.g., object boundaries).


Psychonomic Bulletin & Review | 2012

Experience-dependent attentional tuning of distractor rejection

Daniel B. Vatterott; Shaun P. Vecera


Journal of Experimental Psychology: Human Perception and Performance | 2014

Location-Specific Effects of Attention During Visual Short-Term Memory Maintenance

Michi Matsukura; Joshua D. Cosman; Zachary J. J. Roper; Daniel B. Vatterott; Shaun P. Vecera


Archive | 2014

The Control of Visual Attention

Shaun P. Vecera; Joshua D. Cosman; Daniel B. Vatterott; Zachary J. J. Roper


Archive | 2018

Attention and Processing Speed

Benjamin D. Lester; Daniel B. Vatterott; Shaun P. Vecera

Collaboration


Dive into Daniel B. Vatterott's collaborations.

Top Co-Authors

Benjamin D. Lester
University of Iowa Hospitals and Clinics

Michael C. Mozer
University of Colorado Boulder