Publication


Featured research published by Aaron R. Seitz.


Trends in Cognitive Sciences | 2008

Benefits of multisensory learning

Ladan Shams; Aaron R. Seitz

Studies of learning, and in particular perceptual learning, have focused on learning of stimuli consisting of a single sensory modality. However, our experience in the world involves constant multisensory stimulation. For instance, visual and auditory information are integrated in performing many tasks that involve localizing and tracking moving objects. Therefore, it is likely that the human brain has evolved to develop, learn and operate optimally in multisensory environments. We suggest that training protocols that employ unisensory stimulus regimes do not engage multisensory learning mechanisms and, therefore, might not be optimal for learning. However, multisensory-training protocols can better approximate natural settings and are more effective for learning.


Trends in Cognitive Sciences | 2005

A unified model for perceptual learning

Aaron R. Seitz; Takeo Watanabe

Perceptual learning in adult humans and animals refers to improvements in sensory abilities after training. These improvements had been thought to occur only when attention is focused on the stimuli to be learned (task-relevant learning) but recent studies demonstrate performance improvements outside the focus of attention (task-irrelevant learning). Here, we propose a unified model that explains both task-relevant and task-irrelevant learning. The model suggests that long-term sensitivity enhancements to task-relevant or irrelevant stimuli occur as a result of timely interactions between diffused signals triggered by task performance and signals produced by stimulus presentation. The proposed mechanism uses multiple attentional and reinforcement systems that rely on different underlying neuromodulators. Our model provides insights into how neural modulators, attentional and reinforcement learning systems are related.


Nature | 2003

Psychophysics: Is subliminal learning really passive?

Aaron R. Seitz; Takeo Watanabe

Perceptual learning can occur as a result of exposure to a subliminal stimulus, without the subject having to pay attention and without relevance to the particular task in hand — but is this type of learning purely passive? Here we show that perceptual learning is not passive, but instead results from reinforcement by an independent task. As this learning occurred on a subliminal feature, our results are inconsistent with attentional learning theories in which learning occurs only on stimuli to which attention is directed. Instead, our findings suggest that the successful recognition of a relevant stimulus can trigger an internal reward and give rise to the learning of irrelevant and even subliminal features that are correlated with the occurrence of the reward.


Neuron | 2009

Rewards Evoke Learning of Unconsciously Processed Visual Stimuli in Adult Humans

Aaron R. Seitz; Dongho Kim; Takeo Watanabe

The study of human learning is complicated by the myriad of processing elements involved in conducting any behavioral task. In the case of visual perceptual learning, there has been significant controversy regarding the task processes that guide the formation of this learning. However, there is a developing consensus that top-down, task-related factors are required for such learning to take place. Here we challenge this idea by use of a novel procedure in which human participants, who were deprived of food and water, passively viewed visual stimuli while receiving occasional drops of water as rewards. Visual orientation stimuli, which were temporally paired with the liquid rewards, were viewed monocularly and rendered imperceptible by continuously flashing contour-rich patterns to the other eye. Results show that visual learning can be formed in human adults through stimulus-reward pairing in the absence of a task and without awareness of the stimulus presentation or reward contingencies.


Current Opinion in Neurobiology | 2007

A common framework for perceptual learning

Aaron R. Seitz; Hubert R. Dinse

In this review, we summarize recent evidence that perceptual learning can occur not only under training conditions but also in situations of unattended and passive sensory stimulation. We suggest that the key to learning is to boost stimulus-related activity that is normally insufficient to exceed a learning threshold. We discuss how factors such as attention and reinforcement have crucial, permissive roles in learning. We observe, however, that highly optimized stimulation protocols can also boost responses and promote learning. This helps to reconcile observations of how learning can occur (or fail to occur) in seemingly contradictory circumstances, and argues that different processes that affect learning operate through similar mechanisms that are probably based on, and mediated by, neuromodulatory factors.


Current Biology | 2006

Sound facilitates visual learning.

Aaron R. Seitz; Robyn Kim; Ladan Shams

Numerous studies show that practice can result in performance improvements on low-level visual perceptual tasks [1-5]. However, such learning is characteristically difficult and slow, requiring many days of training [6-8]. Here, we show that a multisensory audiovisual training procedure facilitates visual learning and results in significantly faster learning than unisensory visual training. We trained one group of subjects with an audiovisual motion-detection task and a second group with a visual motion-detection task, and compared performance on trials containing only visual signals across ten days of training. Whereas observers in both groups showed improvements of visual sensitivity with training, subjects trained with multisensory stimuli showed significantly more learning both within and across training sessions. These benefits of multisensory training are particularly surprising given that the learning of visual motion stimuli is generally thought to be mediated by low-level visual brain areas [6, 9, 10]. Although crossmodal interactions are ubiquitous in human perceptual processing [11-13], the contribution of crossmodal information to perceptual learning has not been studied previously. Our results show that multisensory interactions can be exploited to yield more efficient learning of sensory information and suggest that multisensory training programs would be most effective for the acquisition of new skills.


Vision Research | 2009

The Phenomenon of Task-Irrelevant Perceptual Learning

Aaron R. Seitz; Takeo Watanabe

Task-irrelevant perceptual learning (TIPL) has captured a growing interest in the field of perceptual learning. The basic phenomenon is that stimulus features that are irrelevant to a subject's task (i.e. convey no useful information for that task) can be learned due to their consistent presentation during task performance. Here we review recent research on TIPL and focus on two key aspects: (1) the mechanisms gating learning in TIPL, and (2) what is learned through TIPL. We show that TIPL is gated by learning signals that are triggered by task processing or by rewards. These learning signals operate to enhance processing of individual stimulus features and appear to result in plasticity at early stages of visual processing. Furthermore, we discuss recent research demonstrating that TIPL does not stand in opposition to theories of attention; instead, TIPL operates in concert with attention. Whereas attentional learning serves to enhance (or suppress) processing of stimuli of known task relevance, TIPL serves to enhance perception of stimuli that are initially inadequately processed by the brain.


PLOS ONE | 2008

Benefits of Stimulus Congruency for Multisensory Facilitation of Visual Learning

Robyn Kim; Aaron R. Seitz; Ladan Shams

Background: Studies of perceptual learning have largely focused on unisensory stimuli. However, multisensory interactions are ubiquitous in perception, even at early processing stages, and thus can potentially play a role in learning. Here, we examine the effect of auditory-visual congruency on visual learning.

Methodology/Principal Findings: Subjects were trained over five days on a visual motion coherence detection task with either congruent or incongruent audiovisual stimuli. Comparing performance on visual-only trials, we find that training with congruent audiovisual stimuli produces significantly better learning than training with incongruent audiovisual stimuli or with only visual stimuli.

Conclusions/Significance: This advantage from stimulus congruency during training suggests that the benefits of multisensory training may result from audiovisual interactions at a perceptual rather than a cognitive level.


Applied Cognitive Psychology | 1999

Changing beliefs and memories through dream interpretation

Giuliana Mazzoni; Elizabeth F. Loftus; Aaron R. Seitz; Steven Jay Lynn

Autobiographical memory is malleable, but how much can we change people's beliefs and memories about the past? We approached this question with a method designed to supply subjects with a highly personalized suggestion about what probably happened in their childhood. In the current study, one group of subjects (the 'Dream' subjects) had their dreams interpreted to indicate that they had experienced a critical childhood event (e.g. being harassed by a bully) before the age of 3. Relative to control subjects who did not receive a personalized suggestion, the Dream subjects were more likely to increase their belief that they had had the critical experience, and approximately half of these also produced concrete memory reports. These findings are discussed in terms of their implications for autobiographical memory and for psychotherapy practice.


The Journal of Neuroscience | 2014

Prolonged training at threshold promotes robust retinotopic specificity in perceptual learning.

Shao-Chin Hung; Aaron R. Seitz

Human perceptual learning is classically thought to be highly specific to the trained stimulus's retinal location. Together with evidence that specific learning effects can result in corresponding changes in early visual cortex, researchers have theorized that specificity implies regionalization of learning in the brain. However, other research suggests that specificity can arise from learned readout in decision areas or through top-down processes. Notably, recent research using a novel double-training paradigm reveals dramatic generalization of perceptual learning to untrained locations when multiple stimuli are trained. These data provoked significant controversy in the field and challenged extant models of perceptual learning. To resolve this controversy, we investigated mechanisms that account for retinotopic specificity in perceptual learning. We replicated findings of transfer after double training; however, we show that prolonged training at threshold, which leads to a greater number of difficult trials during training, preserves location specificity when double training occurs at the same location or sequentially at different locations. Likewise, we find that prolonged training at threshold determines the degree of transfer in single training of a peripheral orientation discrimination task. Together, these data show that retinotopic specificity depends strongly on the particulars of the training procedure. We suggest that perceptual learning can arise from decision rules, attention learning, or representational changes, and that small differences in the training approach can emphasize some of these over the others.
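"Training at threshold" in psychophysics is commonly implemented with an adaptive staircase that keeps stimulus difficulty near the observer's perceptual threshold, so that most trials are hard. As a minimal illustrative sketch only (not code from the paper; the simulated-observer model, step size, and all parameters here are hypothetical), a standard 3-down/1-up staircase can be simulated like this:

```python
import random

def staircase_threshold(true_threshold, n_trials=400, start=1.0,
                        step=0.05, n_down=3, seed=0):
    """Simulate a 3-down/1-up adaptive staircase.

    Intensity decreases after 3 consecutive correct responses and
    increases after each error, so the track converges near the
    intensity yielding ~79% correct.
    """
    rng = random.Random(seed)
    intensity = start
    correct_run = 0
    reversals = []          # intensities where the track changed direction
    last_direction = 0      # -1 = moving down, +1 = moving up

    for _ in range(n_trials):
        # Hypothetical observer: more likely correct when intensity
        # is well above the (assumed) true threshold.
        p_correct = 0.5 + 0.5 * min(1.0, intensity / (2 * true_threshold))
        correct = rng.random() < p_correct

        if correct:
            correct_run += 1
            if correct_run == n_down:
                correct_run = 0
                intensity = max(step, intensity - step)
                if last_direction == 1:
                    reversals.append(intensity)
                last_direction = -1
        else:
            correct_run = 0
            intensity += step
            if last_direction == -1:
                reversals.append(intensity)
            last_direction = 1

    # Estimate threshold as the mean intensity at the last few reversals.
    tail = reversals[-6:] if len(reversals) >= 6 else reversals
    return sum(tail) / len(tail)
```

Because the staircase holds the observer near threshold, nearly every trial is difficult, which is the condition the study identifies as promoting retinotopic specificity.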

Collaboration


Dive into Aaron R. Seitz's collaborations.

Top Co-Authors

Ladan Shams, University of California
Jenni Deveau, University of California
Robyn Kim, University of California
Kristina Visscher, University of Alabama at Birmingham