
Publication


Featured research published by Thomas Sprague.


Current Biology | 2014

Reconstructions of Information in Visual Spatial Working Memory Degrade with Memory Load

Thomas Sprague; Edward F. Ester; John T. Serences

Working memory (WM) enables the maintenance and manipulation of information relevant to behavioral goals. Variability in WM ability is strongly correlated with IQ [1], and WM function is impaired in many neurological and psychiatric disorders [2, 3], suggesting that this system is a core component of higher cognition. WM storage is thought to be mediated by patterns of activity in neural populations selective for specific properties (e.g., color, orientation, location, and motion direction) of memoranda [4-13]. Accordingly, many models propose that differences in the amplitude of these population responses should be related to differences in memory performance [14, 15]. Here, we used functional magnetic resonance imaging and an image reconstruction technique based on a spatial encoding model [16] to visualize and quantify population-level memory representations supported by multivoxel patterns of activation within regions of occipital, parietal, and frontal cortex while participants precisely remembered the location(s) of zero, one, or two small stimuli. We successfully reconstructed images containing representations of the remembered (but not forgotten) locations within regions of occipital, parietal, and frontal cortex using delay-period activation patterns. Critically, the amplitude of representations of remembered locations and behavioral performance both decreased with increasing memory load. These results suggest that differences in visual WM performance between memory load conditions are mediated by changes in the fidelity of large-scale population response profiles distributed across multiple areas of human cortex.


Neuron | 2016

Restoring Latent Visual Working Memory Representations in Human Cortex

Thomas Sprague; Edward F. Ester; John T. Serences

Working memory (WM) enables the storage and manipulation of limited amounts of information over short periods. Prominent models posit that increasing the number of remembered items decreases the spiking activity dedicated to each item via mutual inhibition, which irreparably degrades the fidelity of each item's representation. We tested these models by determining if degraded memory representations could be recovered following a post-cue indicating which of several items in spatial WM would be recalled. Using an fMRI-based image reconstruction technique, we identified impaired behavioral performance and degraded mnemonic representations with elevated memory load. However, in several cortical regions, degraded mnemonic representations recovered substantially following a post-cue, and this recovery tracked behavioral performance. These results challenge pure spike-based models of WM and suggest that remembered items are additionally encoded within latent or hidden neural codes that can help reinvigorate active WM representations.


Journal of Cognitive Neuroscience | 2016

Decoding and reconstructing the focus of spatial attention from the topography of alpha-band oscillations

Jason Samaha; Thomas Sprague; Bradley R. Postle

Many aspects of perception and cognition are supported by activity in neural populations that are tuned to different stimulus features (e.g., orientation, spatial location, color). Goal-directed behavior, such as sustained attention, requires a mechanism for the selective prioritization of contextually appropriate representations. A candidate mechanism of sustained spatial attention is neural activity in the alpha band (8–13 Hz), whose power in the human EEG covaries with the focus of covert attention. Here, we applied an inverted encoding model to assess whether spatially selective neural responses could be recovered from the topography of alpha-band oscillations during spatial attention. Participants were cued to covertly attend to one of six spatial locations arranged concentrically around fixation while EEG was recorded. A linear classifier applied to EEG data during sustained attention demonstrated successful classification of the attended location from the topography of alpha power, although not from other frequency bands. We next sought to reconstruct the focus of spatial attention over time by applying inverted encoding models to the topography of alpha power and phase. Alpha power, but not phase, allowed for robust reconstructions of the specific attended location beginning around 450 msec postcue, an onset earlier than previous reports. These results demonstrate that posterior alpha-band oscillations can be used to track activity in feature-selective neural populations with high temporal precision during the deployment of covert spatial attention.
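The classification step described above can be illustrated with a toy sketch. Everything here is invented for illustration (simulated "alpha-power topographies", electrode count, noise level, and a simple nearest-centroid linear decoder standing in for the paper's classifier); it only shows the shape of the analysis, not the authors' actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins for alpha-power topographies: n_elec posterior electrodes,
# six cued spatial locations, each with a distinct mean scalp pattern.
n_elec, n_loc, n_train, n_test = 32, 6, 60, 18
templates = rng.normal(size=(n_loc, n_elec))  # hypothetical class-mean patterns

def simulate(n_per_loc):
    # Each trial = the location's template plus Gaussian noise.
    X = np.vstack([t + 0.8 * rng.normal(size=(n_per_loc, n_elec))
                   for t in templates])
    y = np.repeat(np.arange(n_loc), n_per_loc)
    return X, y

X_train, y_train = simulate(n_train // n_loc)
X_test, y_test = simulate(n_test // n_loc)

# Nearest-centroid classifier (a simple linear decoder): assign each test
# topography to the class whose mean training pattern is closest.
centroids = np.vstack([X_train[y_train == k].mean(axis=0)
                       for k in range(n_loc)])
dists = ((X_test[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
y_pred = dists.argmin(axis=1)
accuracy = (y_pred == y_test).mean()  # chance level is 1/6
```

On real EEG the same logic would be run within a cross-validation loop on band-limited power, and above-chance accuracy for alpha (but not other bands) is what licenses the paper's conclusion.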


Trends in Cognitive Sciences | 2015

Visual attention mitigates information loss in small- and large-scale neural codes

Thomas Sprague; Sameer Saproo; John T. Serences

The visual system transforms complex inputs into robust and parsimonious neural codes that efficiently guide behavior. Because neural communication is stochastic, the amount of encoded visual information necessarily decreases with each synapse. This constraint requires that sensory signals are processed in a manner that protects information about relevant stimuli from degradation. Such selective processing, or selective attention, is implemented via several mechanisms, including neural gain and changes in tuning properties. However, examining each of these effects in isolation obscures their joint impact on the fidelity of stimulus feature representations by large-scale population codes. Instead, large-scale activity patterns can be used to reconstruct representations of relevant and irrelevant stimuli, thereby providing a holistic understanding about how neuron-level modulations collectively impact stimulus encoding.


The Journal of Neuroscience | 2014

Changing the Spatial Scope of Attention Alters Patterns of Neural Gain in Human Cortex

Sirawaj Itthipuripat; Javier O. Garcia; Nuttida Rungratsameetaweemana; Thomas Sprague; John T. Serences

Over the last several decades, spatial attention has been shown to influence the activity of neurons in visual cortex in various ways. These conflicting observations have inspired competing models to account for the influence of attention on perception and behavior. Here, we used electroencephalography (EEG) to assess steady-state visual evoked potentials (SSVEP) in human subjects and showed that highly focused spatial attention primarily enhanced neural responses to high-contrast stimuli (response gain), whereas distributed attention primarily enhanced responses to medium-contrast stimuli (contrast gain). Together, these data suggest that different patterns of neural modulation do not reflect fundamentally different neural mechanisms, but instead reflect changes in the spatial extent of attention.


eNeuro | 2016

How Do Visual and Parietal Cortex Contribute to Visual Short-Term Memory?

Edward F. Ester; Rosanne L. Rademaker; Thomas Sprague

Visual short-term memory (VSTM) enables the representation and manipulation of information no longer present in the sensorium. VSTM storage has long been associated with sustained increases in univariate activity (e.g., averaged single-neuron spike counts or fMRI activation levels) across a broad network of frontal and parietal cortical areas (for review, see D’Esposito and Postle, 2015). More recently, several research groups have used multivariate analytical techniques to “decode” or infer the identity of a remembered visual stimulus from multivariate fMRI responses measured in human visual cortical areas (e.g., V1–V4) during the delay period of a VSTM task, even though these areas typically do not show sustained increases in activity during VSTM storage (Harrison and Tong, 2009; Serences et al., 2009; Riggall and Postle, 2012; Emrich et al., 2013; van Bergen et al., 2015). However, virtually all of these studies have used simple designs that require participants to remember information over a blank delay period. In many real-world scenarios, information must be stored despite a constant barrage of dynamic and unpredictable sensory input. How does the brain accomplish this goal? A recent human neuroimaging paper by Bettencourt and Xu (2016) attempted to answer precisely this question. In their Experiment 1, participants were shown a sequence of two tilted gratings and retroactively cued to remember the orientation of either the first or the second grating. Following a blank delay period, participants judged whether the orientation of a probe grating was tilted slightly clockwise or anticlockwise of the remembered orientation. During the first half of experimental blocks, participants remembered the cued orientation over a blank delay (no-distractor blocks). During the second half of blocks, the delay period was filled with a sequence of …


The Journal of Neuroscience | 2017

Spatial tuning shifts increase the discriminability and fidelity of population codes in visual cortex

Vy Vo; Thomas Sprague; John T. Serences

Selective visual attention enables organisms to enhance the representation of behaviorally relevant stimuli by altering the encoding properties of single receptive fields (RFs). Yet we know little about how the attentional modulations of single RFs contribute to the encoding of an entire visual scene. Addressing this issue requires (1) measuring a group of RFs that tile a continuous portion of visual space, (2) constructing a population-level measurement of spatial representations based on these RFs, and (3) linking how different types of RF attentional modulations change the population-level representation. To accomplish these aims, we used fMRI to characterize the responses of thousands of voxels in retinotopically organized human cortex. First, we found that the response modulations of voxel RFs (vRFs) depend on the spatial relationship between the RF center and the visual location of the attended target. Second, we used two analyses to assess the spatial encoding quality of a population of voxels. We found that attention increased fine spatial discriminability and representational fidelity near the attended target. Third, we linked these findings by manipulating the observed vRF attentional modulations and recomputing our measures of the fidelity of population codes. Surprisingly, we discovered that attentional enhancements of population-level representations largely depend on position shifts of vRFs, rather than changes in size or gain. Our data suggest that position shifts of single RFs are a principal mechanism by which attention enhances population-level representations in visual cortex.

SIGNIFICANCE STATEMENT Although changes in the gain and size of RFs have dominated our view of how attention modulates visual information codes, such hypotheses have largely relied on the extrapolation of single-cell responses to population responses. Here we use fMRI to relate changes in single voxel receptive fields (vRFs) to changes in population-level representations. We find that vRF position shifts contribute more to population-level enhancements of visual information than changes in vRF size or gain. This finding suggests that position shifts are a principal mechanism by which spatial attention enhances population codes for relevant visual information. This poses challenges for labeled line theories of information processing, suggesting that downstream regions likely rely on distributed inputs rather than single neuron-to-neuron mappings.


eNeuro | 2018

Inverted Encoding Models Assay Population-Level Stimulus Representations, Not Single-Unit Neural Tuning

Thomas Sprague; Kirsten Adam; Joshua J. Foster; Masih Rahmati; David W. Sutterer; Vy Vo

Inverted encoding models (IEMs) are a powerful tool for reconstructing population-level stimulus representations from aggregate measurements of neural activity (e.g., fMRI or EEG). In a recent report, Liu et al. (2018) tested whether IEMs can provide information about the underlying tuning of single units. Here, we argue that using stimulus reconstructions to infer properties of single neurons, such as neural tuning bandwidth, is an ill-posed problem with no unambiguous solution. Instead of interpreting results from these methods as evidence about single-unit tuning, we emphasize the utility of these methods for assaying population-level stimulus representations. These can be compared across task conditions to better constrain theories of large-scale neural information processing across experimental manipulations, such as changing sensory input or attention.

Neuroscience methods range astronomically in scale. In some experiments, we record subthreshold membrane potentials in individual neurons, while in others we measure aggregate responses of thousands of neurons at the millimeter scale. A central goal in neuroscience is to bridge insights across all scales to understand the core computations underlying cognition (Churchland and Sejnowski, 1988). However, inferential problems arise when moving across scales: single-unit response properties cannot be inferred from fMRI activation in single voxels, subthreshold membrane potential cannot be inferred from extracellular spike rate, and the state of single ion channels cannot be inferred from intracellular recordings. These are all examples of an inverse problem in which an observation at a larger scale is consistent with an enormous number of possible observations at a smaller scale. Recent analytical advances have circumvented challenges inherent in inverse problems by instead transforming aggregate signals from their native “measurement space” (e.g., activation pattern across fMRI voxels) into a …
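The two-step IEM logic (fit channel-to-measurement weights, then invert them on held-out data) can be sketched with synthetic data. All names and parameters here are invented for illustration (idealized cosine tuning channels, random mixing weights, noise level); it shows the generic technique, not any specific paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: n_chan feature-selective channels tiling a circular
# stimulus space (e.g., orientation), measured across n_vox voxels.
n_chan, n_vox, n_trials = 8, 50, 160
centers = np.linspace(0, 2 * np.pi, n_chan, endpoint=False)

def channel_responses(stim):
    # Idealized tuning curves: half-wave-rectified cosine raised to a power.
    d = stim[:, None] - centers[None, :]
    return np.maximum(np.cos(d), 0) ** 5          # (n_trials, n_chan)

# Simulate training data: known mixing weights W map channels -> voxels.
stims = rng.uniform(0, 2 * np.pi, n_trials)
C = channel_responses(stims)                      # (n_trials, n_chan)
W = rng.normal(size=(n_chan, n_vox))
B = C @ W + 0.1 * rng.normal(size=(n_trials, n_vox))  # measured signals

# Step 1 (training): estimate weights by ordinary least squares.
W_hat, *_ = np.linalg.lstsq(C, B, rcond=None)

# Step 2 (inversion): reconstruct channel responses for a held-out trial
# via the pseudoinverse of the estimated weights.
test_stim = np.array([np.pi])
B_test = channel_responses(test_stim) @ W
C_hat = B_test @ np.linalg.pinv(W_hat)            # (1, n_chan)

# The reconstructed channel profile should peak at the channel nearest pi.
peak = centers[np.argmax(C_hat)]
```

Note that `C_hat` is a population-level representation in channel space; as the abstract argues, its shape reflects both the (assumed) channel basis and the measurement, which is exactly why it cannot be read as single-unit tuning.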


bioRxiv | 2018

Reconciling fMRI and EEG indices of attentional modulations in human visual cortex

Sirawaj Itthipuripat; Thomas Sprague; John T. Serences

Functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) are the two most popular non-invasive methods used to study the neural mechanisms underlying human cognition. These approaches are considered complementary: fMRI has higher spatial resolution but sluggish temporal resolution, whereas EEG has millisecond temporal resolution, but only at a broad spatial scale. Beyond the obvious fact that fMRI measures properties of blood and EEG measures changes in electric fields, many foundational studies assume that, aside from differences in spatial and temporal precision, these two methods index the same underlying neural modulations. We tested this assumption by using EEG and fMRI to measure attentional modulations of neural responses to stimuli of different visual contrasts. We found that equivalent experiments performed using fMRI and EEG on the same participants revealed remarkably different patterns of attentional modulations: event-related fMRI responses provided evidence for an additive increase in responses across all contrasts equally, whereas early stimulus-evoked event-related potentials (ERPs) showed larger modulations with increasing stimulus contrast and only a later negative-going ERP and low-frequency oscillatory EEG signals showed effects similar to fMRI. These results demonstrate that there is not a one-to-one correspondence between the physiological mechanisms that give rise to modulations of fMRI responses and the most commonly used ERP markers, and that the typical approach of employing fMRI and EEG to gain complementary information about localization and temporal dynamics is over-simplified. Instead, fMRI and EEG index different physiological modulations and their joint application affords synergistic insights into the neural mechanisms supporting human cognition.


bioRxiv | 2017

Category learning biases sensory representations in visual cortex.

Edward F. Ester; Thomas Sprague; John T. Serences

Category learning distorts perceptual space by enhancing the discriminability of physically similar yet categorically distinct exemplars. These distortions could in part reflect changes in how sensory neural populations selective for category-defining features encode information. Here, we tested this possibility by using fMRI and EEG to quantify the feature-selective information content of signals measured in early visual cortical areas after participants learned to classify a set of oriented stimuli into discrete categories. Reconstructed representations of orientation in early visual areas were systematically biased by category membership. These biases were strongest for orientations near the category boundary where they would be most beneficial, predicted category discrimination performance, and emerged rapidly after stimulus onset, suggesting that category learning can produce significant changes in how neural populations in early visual areas respond to incoming sensory signals.

Categorization allows organisms to generalize existing knowledge to novel stimuli and to discriminate between physically similar yet conceptually different stimuli. Humans, nonhuman primates, and rodents can readily learn arbitrary categories defined by low-level visual features, and learning distorts perceptual sensitivity for category-defining features such that differences between physically similar yet categorically distinct exemplars are enhanced while differences between equally similar but categorically identical stimuli are reduced. We report a basis for these distortions in human occipitoparietal cortex. In three experiments, we used an inverted encoding model to recover population-level representations of stimuli from multivoxel and multi-electrode patterns of human brain activity while human participants (both sexes) classified continuous stimulus sets into discrete groups. In each experiment, reconstructed representations of to-be-categorized stimuli were systematically biased towards the center of the appropriate category. These biases were largest for exemplars near a category boundary, predicted participants’ overt category judgments, emerged shortly after stimulus onset, and could not be explained by mechanisms of response selection or motor preparation. Collectively, our findings suggest that category learning can influence processing at the earliest stages of cortical visual processing.

Significance Statement Category learning enhances perceptual sensitivity for physically similar yet categorically different stimuli. We report a possible mechanism for these distortions in human occipitoparietal cortex. In three experiments, we used an inverted encoding model to recover population-level representations of stimuli from multivariate patterns in occipitoparietal cortex while participants categorized sets of continuous stimuli into discrete groups. The recovered representations were systematically biased by category membership, with larger biases for exemplars adjacent to a category boundary. These results suggest that mechanisms of categorization shape information processing at the earliest stages of the visual system.

Collaboration


Dive into Thomas Sprague's collaborations.

Top Co-Authors

Avatar
Top Co-Authors

Avatar
Top Co-Authors

Avatar
Top Co-Authors

Avatar

Vy Vo (University of California)
