Publications


Featured research published by Eric A. Reavis.


Human Brain Mapping | 2014

Neural Mechanisms of Feature Conjunction Learning: Enduring Changes in Occipital Cortex After a Week of Training

Sebastian M. Frank; Eric A. Reavis; Peter U. Tse; Mark W. Greenlee

Most visual activities, whether reading, driving, or playing video games, require rapid detection and identification of learned patterns defined by arbitrary conjunctions of visual features. Initially, such detection is slow and inefficient, but it can become fast and efficient with training. To determine how the brain learns to process conjunctions of visual features efficiently, we trained participants over eight consecutive days to search for a target defined by an arbitrary conjunction of color and location among distractors with a different conjunction of the same features. During each training session, we measured brain activity with functional magnetic resonance imaging (fMRI). The speed of visual search for feature conjunctions improved dramatically within just a few days. These behavioral improvements were correlated with increased neural responses to the stimuli in visual cortex. This suggests that changes in neural processing in visual cortex contribute to the speeding up of visual feature conjunction search. We find evidence that this effect is driven by an increase in the signal‐to‐noise ratio (SNR) of the BOLD signal for search targets over distractors. In a control condition where target and distractor identities were exchanged after training, learned search efficiency was abolished, suggesting that the primary improvement was perceptual learning for the search stimuli, not task‐learning. Moreover, when participants were retested on the original task after nine months without further training, the acquired changes in behavior and brain activity were still present, showing that this can be an enduring form of learning and neural reorganization. Hum Brain Mapp 35:1201–1211, 2014.


NeuroImage | 2013

Pattern classification precedes region-average hemodynamic response in early visual cortex

Peter Köhler; Sergey V. Fogelson; Eric A. Reavis; Ming Meng; J. Swaroop Guntupalli; Michael Hanke; Yaroslav O. Halchenko; Andrew C. Connolly; James V. Haxby; Peter U. Tse

How quickly can information about the neural response to a visual stimulus be detected in the hemodynamic response measured using fMRI? Multi-voxel pattern analysis (MVPA) uses pattern classification to detect subtle stimulus-specific information from patterns of responses among voxels, including information that cannot be detected in the average response across a given brain region. Here we use MVPA in combination with rapid temporal sampling of the fMRI signal to investigate the temporal evolution of classification accuracy and its relationship to the average regional hemodynamic response. In primary visual cortex (V1), stimulus information can be detected in the pattern of voxel responses more than a second before the average hemodynamic response of V1 deviates from baseline, and classification accuracy peaks before the peak of the average hemodynamic response. Both of these effects are restricted to early visual cortex, with higher-level areas showing no difference or, in some cases, the opposite temporal relationship. These results have methodological implications for fMRI studies using MVPA because they demonstrate that information can be decoded from hemodynamic activity more quickly than previously assumed.
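The core idea behind MVPA described in this abstract — that a classifier trained on the pattern of voxel responses can detect stimulus information that is invisible in the region's average response — can be illustrated with a minimal sketch. This is synthetic data and a simple nearest-centroid classifier, not the paper's actual fMRI pipeline; all names and parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 50

# Each voxel carries a small, signed stimulus preference. Because the
# preferences are mixed in sign, they largely cancel in the region
# average, leaving the mean response uninformative about the stimulus.
pref = rng.choice([-0.3, 0.3], size=n_voxels)
labels = rng.integers(0, 2, size=n_trials)        # stimulus A = 0, B = 1
signs = np.where(labels == 1, 1.0, -1.0)
patterns = signs[:, None] * pref[None, :] + rng.normal(0.0, 1.0, (n_trials, n_voxels))

# Region-average analysis: compare the mean response between conditions.
avg_diff = patterns[labels == 1].mean() - patterns[labels == 0].mean()

# MVPA-style analysis: nearest-centroid pattern classifier with a
# simple split-half cross-validation.
train, test = np.arange(0, 100), np.arange(100, 200)
c0 = patterns[train][labels[train] == 0].mean(axis=0)
c1 = patterns[train][labels[train] == 1].mean(axis=0)
d0 = np.linalg.norm(patterns[test] - c0, axis=1)
d1 = np.linalg.norm(patterns[test] - c1, axis=1)
pred = (d1 < d0).astype(int)
accuracy = (pred == labels[test]).mean()

print(f"region-average difference: {avg_diff:+.3f}")   # near zero
print(f"pattern-classifier accuracy: {accuracy:.2f}")  # well above chance
```

The sketch reproduces the qualitative point of the abstract: the multivariate pattern is decodable even though the univariate regional average carries almost no signal, which is why MVPA can pick up stimulus information earlier and more sensitively than a region-average analysis.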


Human Brain Mapping | 2016

Neural correlates of context-dependent feature conjunction learning in visual search tasks

Eric A. Reavis; Sebastian M. Frank; Mark W. Greenlee; Peter U. Tse

Many perceptual learning experiments show that repeated exposure to a basic visual feature such as a specific orientation or spatial frequency can modify perception of that feature, and that those perceptual changes are associated with changes in neural tuning early in visual processing. Such perceptual learning effects thus exert a bottom‐up influence on subsequent stimulus processing, independent of task‐demands or endogenous influences (e.g., volitional attention). However, it is unclear whether such bottom‐up changes in perception can occur as more complex stimuli such as conjunctions of visual features are learned. It is not known whether changes in the efficiency with which people learn to process feature conjunctions in a task (e.g., visual search) reflect true bottom‐up perceptual learning versus top‐down, task‐related learning (e.g., learning better control of endogenous attention). Here we show that feature conjunction learning in visual search leads to bottom‐up changes in stimulus processing. First, using fMRI, we demonstrate that conjunction learning in visual search has a distinct neural signature: an increase in target‐evoked activity relative to distractor‐evoked activity (i.e., a relative increase in target salience). Second, we demonstrate that after learning, this neural signature is still evident even when participants passively view learned stimuli while performing an unrelated, attention‐demanding task. This suggests that conjunction learning results in altered bottom‐up perceptual processing of the learned conjunction stimuli (i.e., a perceptual change independent of the task). We further show that the acquired change in target‐evoked activity is contextually dependent on the presence of distractors, suggesting that search array Gestalts are learned. Hum Brain Mapp 37:2319–2330, 2016.


NeuroImage | 2015

Caudate nucleus reactivity predicts perceptual learning rate for visual feature conjunctions

Eric A. Reavis; Sebastian M. Frank; Peter U. Tse

Useful information in the visual environment is often contained in specific conjunctions of visual features (e.g., color and shape). The ability to quickly and accurately process such conjunctions can be learned. However, the neural mechanisms responsible for such learning remain largely unknown. It has been suggested that some forms of visual learning might involve the dopaminergic neuromodulatory system (Roelfsema et al., 2010; Seitz and Watanabe, 2005), but this hypothesis has not yet been directly tested. Here we test the hypothesis that learning visual feature conjunctions involves the dopaminergic system, using functional neuroimaging, genetic assays, and behavioral testing techniques. We use a correlative approach to evaluate potential associations between individual differences in visual feature conjunction learning rate and individual differences in dopaminergic function as indexed by neuroimaging and genetic markers. We find a significant correlation between activity in the caudate nucleus (a component of the dopaminergic system connected to visual areas of the brain) and visual feature conjunction learning rate. Specifically, individuals who showed a larger difference in activity between positive and negative feedback on an unrelated cognitive task, indicative of a more reactive dopaminergic system, learned visual feature conjunctions more quickly than those who showed a smaller activity difference. This finding supports the hypothesis that the dopaminergic system is involved in visual learning, and suggests that visual feature conjunction learning could be closely related to associative learning. However, no significant, reliable correlations were found between feature conjunction learning and genotype or dopaminergic activity in any other regions of interest.


Vision Research | 2013

Effects of attention on visual experience during monocular rivalry

Eric A. Reavis; Peter Köhler; Gideon Caplovitz; Thalia Wheatley; Peter U. Tse

There is a long-running debate over the extent to which volitional attention can modulate the appearance of visual stimuli. Here we use monocular rivalry between afterimages to explore the effects of attention on the contents of visual experience. In three experiments, we demonstrate that attended afterimages are seen for longer periods, on average, than unattended afterimages. This occurs both when a feature of the afterimage is attended directly and when a frame surrounding the afterimage is attended. The results of these experiments show that volitional attention can dramatically influence the contents of visual experience.


Attention, Perception, & Psychophysics | 2018

Learning efficient visual search for stimuli containing diagnostic spatial configurations and color-shape conjunctions

Eric A. Reavis; Sebastian M. Frank; Peter U. Tse

Visual search is often slow and difficult for complex stimuli such as feature conjunctions. Search efficiency, however, can improve with training. Search for stimuli that can be identified by the spatial configuration of two elements (e.g., the relative position of two colored shapes) improves dramatically within a few hundred trials of practice. Several recent imaging studies have identified neural correlates of this learning, but it remains unclear what stimulus properties participants learn to use to search efficiently. Influential models, such as reverse hierarchy theory, propose two major possibilities: learning to use information contained in low-level image statistics (e.g., single features at particular retinotopic locations) or in high-level characteristics (e.g., feature conjunctions) of the task-relevant stimuli. In a series of experiments, we tested these two hypotheses, which make different predictions about the effect of various stimulus manipulations after training. We find relatively small effects of manipulating low-level properties of the stimuli (e.g., changing their retinotopic location) and some conjunctive properties (e.g., color-position), whereas the effects of manipulating other conjunctive properties (e.g., color-shape) are larger. Overall, the findings suggest conjunction learning involving such stimuli might be an emergent phenomenon that reflects multiple different learning processes, each of which capitalizes on different types of information contained in the stimuli. We also show that both targets and distractors are learned, and that reversing learned target and distractor identities impairs performance. This suggests that participants do not merely learn to discriminate target and distractor stimuli; they also learn stimulus identity mappings that contribute to performance improvements.


Cerebral Cortex | 2016

Pretraining Cortical Thickness Predicts Subsequent Perceptual Learning Rate in a Visual Search Task

Sebastian M. Frank; Eric A. Reavis; Mark W. Greenlee; Peter U. Tse


Journal of Vision | 2010

Attention modulates perceptual rivalry within after-images

Peter U. Tse; Peter Köhler; Eric A. Reavis


Handbook of Experimental Phenomenology: Visual Perception of Shape, Space and Appearance | 2013

How Attention can Alter Appearances

Peter U. Tse; Eric A. Reavis; Peter Köhler; Gideon Caplovitz; Thalia Wheatley


Journal of Vision | 2014

Integration of visual features over time: Behavior and brain activity

Sebastian M. Frank; Eric A. Reavis; Mark W. Greenlee; Peter U. Tse
