
Publication


Featured research published by Tatyana O. Sharpee.


Nature | 2006

Adaptive filtering enhances information transmission in visual cortex.

Tatyana O. Sharpee; Hiroki Sugihara; Andrei V. Kurgansky; Sergei Rebrik; Michael P. Stryker; Kenneth D. Miller

Sensory neuroscience seeks to understand how the brain encodes natural environments. However, neural coding has largely been studied using simplified stimuli. In order to assess whether the brain's coding strategy depends on the stimulus ensemble, we apply a new information-theoretic method that allows unbiased calculation of neural filters (receptive fields) from responses to natural scenes or other complex signals with strong multipoint correlations. In the cat primary visual cortex we compare responses to natural inputs with those to noise inputs matched for luminance and contrast. We find that neural filters adaptively change with the input ensemble so as to increase the information carried by the neural response about the filtered stimulus. Adaptation affects the spatial frequency composition of the filter, enhancing sensitivity to under-represented frequencies in agreement with optimal encoding arguments. Adaptation occurs over 40 s to many minutes, longer than most previously reported forms of adaptation.


Neural Computation | 2004

Analyzing Neural Responses to Natural Signals: Maximally Informative Dimensions

Tatyana O. Sharpee; Nicole C. Rust; William Bialek

We propose a method that allows for a rigorous statistical analysis of neural responses to natural stimuli that are nongaussian and exhibit strong correlations. We have in mind a model in which neurons are selective for a small number of stimulus dimensions out of a high-dimensional stimulus space, but within this subspace the responses can be arbitrarily nonlinear. Existing analysis methods are based on correlation functions between stimuli and responses, but these methods are guaranteed to work only in the case of gaussian stimulus ensembles. As an alternative to correlation functions, we maximize the mutual information between the neural responses and projections of the stimulus onto low-dimensional subspaces. The procedure can be done iteratively by increasing the dimensionality of this subspace. Those dimensions that allow the recovery of all of the information between spikes and the full unprojected stimuli describe the relevant subspace. If the dimensionality of the relevant subspace indeed is small, it becomes feasible to map the neuron's input-output function even under fully natural stimulus conditions. These ideas are illustrated in simulations on model visual and auditory neurons responding to natural scenes and sounds, respectively.
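
The core idea, finding a stimulus direction that maximizes the information carried by its projection about spiking, can be illustrated with a toy sketch. This is not the published implementation: the model neuron, the histogram-based information estimate, and the random-perturbation hill climbing (the paper uses simulated annealing and gradient ascent) are all simplifying assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def spike_info(stimuli, spikes, v, nbins=15):
    """Single-spike information (bits) carried by the projection of the
    stimulus onto direction v, estimated from quantile-binned histograms."""
    z = stimuli @ (v / np.linalg.norm(v))
    edges = np.quantile(z, np.linspace(0, 1, nbins + 1))
    idx = np.clip(np.searchsorted(edges, z, side="right") - 1, 0, nbins - 1)
    p_z = np.bincount(idx, minlength=nbins) / len(z)
    p_z_spk = np.bincount(idx[spikes > 0], minlength=nbins) / spikes.sum()
    ok = (p_z > 0) & (p_z_spk > 0)
    return float(np.sum(p_z_spk[ok] * np.log2(p_z_spk[ok] / p_z[ok])))

# Toy neuron: spikes whenever the stimulus projects strongly onto a
# hidden unit direction v_true (a stand-in for the neuron's filter).
D, N = 20, 20000
v_true = rng.standard_normal(D)
v_true /= np.linalg.norm(v_true)
stim = rng.standard_normal((N, D))
spikes = (stim @ v_true > 1.5).astype(int)

# Maximize information by random-perturbation hill climbing.
v = rng.standard_normal(D)
best = spike_info(stim, spikes, v)
for _ in range(2000):
    cand = v + 0.1 * rng.standard_normal(D)
    info = spike_info(stim, spikes, cand)
    if info > best:
        v, best = cand, info

alignment = abs(v @ v_true) / np.linalg.norm(v)
```

Because the objective depends on the full projection distribution rather than on second-order statistics, the recovered direction does not rely on the stimuli being gaussian, which is the point of the method.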


Neuron | 2008

Cooperative Nonlinearities in Auditory Cortical Neurons

Craig A. Atencio; Tatyana O. Sharpee; Christoph E. Schreiner

Cortical receptive fields represent the signal preferences of sensory neurons. Receptive fields are thought to provide a representation of sensory experience from which the cerebral cortex may make interpretations. While it is essential to determine a neuron's receptive field, it remains unclear which features of the acoustic environment are specifically represented by neurons in the primary auditory cortex (AI). We characterized cat AI spectrotemporal receptive fields (STRFs) by finding both the spike-triggered average (STA) and stimulus dimensions that maximized the mutual information between response and stimulus. We derived a nonlinearity relating spiking to stimulus projection onto two maximally informative dimensions (MIDs). The STA was highly correlated with the first MID. Generally, the nonlinearity for the first MID was asymmetric and often monotonic in shape, while the second MID nonlinearity was symmetric and nonmonotonic. The joint nonlinearity for both MIDs revealed that most first and second MIDs were synergistic and thus should be considered conjointly. The difference between the nonlinearities suggests different possible roles for the MIDs in auditory processing.
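
The nonlinearity relating spiking to a stimulus projection can be estimated with Bayes' rule, P(spike | z) = P(spike) p(z | spike) / p(z). The sketch below is a minimal illustration of that step, assuming a hypothetical model neuron with a sigmoidal (asymmetric, monotonic) nonlinearity of the kind described above for the first MID; it is not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(2)

def nonlinearity(stimuli, spikes, v, nbins=12):
    """Estimate P(spike | z) for z = stimulus projection onto feature v,
    via Bayes: P(spike | z) = P(spike) * p(z | spike) / p(z)."""
    z = stimuli @ (v / np.linalg.norm(v))
    edges = np.quantile(z, np.linspace(0, 1, nbins + 1))
    idx = np.clip(np.searchsorted(edges, z, side="right") - 1, 0, nbins - 1)
    p_z = np.bincount(idx, minlength=nbins) / len(z)
    p_z_spk = np.bincount(idx[spikes > 0], minlength=nbins) / spikes.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, spikes.mean() * p_z_spk / p_z

# Toy neuron: spike probability is a sigmoid of the projection onto v.
D, N = 8, 40000
v = rng.standard_normal(D)
stim = rng.standard_normal((N, D))
z = stim @ (v / np.linalg.norm(v))
spikes = (rng.random(N) < 1 / (1 + np.exp(-3 * (z - 1)))).astype(int)

centers, rate = nonlinearity(stim, spikes, v)
```

A symmetric, nonmonotonic second-MID nonlinearity would show up in the same estimate as a U- or bump-shaped `rate` curve instead of a monotonic one.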


Neuron | 2013

Associative learning enhances population coding by inverting interneuronal correlation patterns.

James M. Jeanne; Tatyana O. Sharpee; Timothy Q. Gentner

Learning-dependent cortical encoding has been well described in single neurons. But behaviorally relevant sensory signals drive the coordinated activity of millions of cortical neurons; whether learning produces stimulus-specific changes in population codes is unknown. Because the pattern of firing rate correlations between neurons (an emergent property of neural populations) can significantly impact encoding fidelity, we hypothesized that it is a target for learning. Using an associative learning procedure, we manipulated the behavioral relevance of natural acoustic signals and examined the evoked spiking activity in auditory cortical neurons in songbirds. We show that learning produces stimulus-specific changes in the pattern of interneuronal correlations that enhance the ability of neural populations to recognize signals relevant for behavior. This learning-dependent enhancement increases with population size. The results identify the pattern of interneuronal correlation in neural populations as a target of learning that can selectively enhance the representations of specific sensory signals.
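
Why the sign of interneuronal correlations matters for encoding fidelity can be seen in a two-neuron toy model, which is only a textbook-style illustration of the general principle, not the population analysis from the paper. For similarly tuned neurons read out by summation, negative noise correlations cancel shared variability, while positive ones compound it.

```python
import numpy as np

rng = np.random.default_rng(3)

def dprime(rho, d_mu=1.0, n_trials=100000):
    """Discriminability (d') of the summed response of two similarly tuned
    neurons for two stimuli, given noise correlation rho between them."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    r_a = rng.multivariate_normal([0.0, 0.0], cov, n_trials).sum(axis=1)
    r_b = rng.multivariate_normal([d_mu, d_mu], cov, n_trials).sum(axis=1)
    pooled_sd = np.sqrt(0.5 * (r_a.var() + r_b.var()))
    return (r_b.mean() - r_a.mean()) / pooled_sd

# Inverting the correlation pattern from positive to negative raises d'.
d_neg, d_pos = dprime(-0.5), dprime(0.5)
```

Analytically, d' = 2 * d_mu / sqrt(2 * (1 + rho)) here, so rho = -0.5 gives d' = 2 while rho = +0.5 gives d' of about 1.15.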


Proceedings of the National Academy of Sciences of the United States of America | 2009

Hierarchical computation in the canonical auditory cortical circuit

Craig A. Atencio; Tatyana O. Sharpee; Christoph E. Schreiner

Sensory cortical anatomy has identified a canonical microcircuit underlying computations between and within layers. This feed-forward circuit processes information serially from granular to supragranular and to infragranular layers. How this substrate correlates with an auditory cortical processing hierarchy is unclear. We recorded simultaneously from all layers in cat primary auditory cortex (AI) and estimated spectrotemporal receptive fields (STRFs) and associated nonlinearities. Spike-triggered averaged STRFs revealed that temporal precision, spectrotemporal separability, and feature selectivity varied with layer according to a hierarchical processing model. STRFs from maximally informative dimension (MID) analysis confirmed hierarchical processing. Of two cooperative MIDs identified for each neuron, the first comprised the majority of stimulus information in granular layers. Second MID contributions and nonlinear cooperativity increased in supragranular and infragranular layers. The AI microcircuit provides a valid template for three independent hierarchical computation principles. Increases in processing complexity, STRF cooperativity, and nonlinearity correlate with the synaptic distance from granular layers.


The Journal of Neuroscience | 2009

Preserving Information in Neural Transmission

Lawrence C. Sincich; Jonathan C. Horton; Tatyana O. Sharpee

Along most neural pathways, the spike trains transmitted from one neuron to the next are altered. In the process, neurons can either achieve a more efficient stimulus representation, or extract some biologically important stimulus parameter, or succeed at both. We recorded the inputs from single retinal ganglion cells and the outputs from connected lateral geniculate neurons in the macaque to examine how visual signals are relayed from retina to cortex. We found that geniculate neurons re-encoded multiple temporal stimulus features to yield output spikes that carried more information about stimuli than was available in each input spike. The coding transformation of some relay neurons occurred with no decrement in information rate, despite output spike rates that averaged half the input spike rates. This preservation of transmitted information was achieved by the short-term summation of inputs that geniculate neurons require to spike. A reduced model of the retinal and geniculate visual responses, based on two stimulus features and their associated nonlinearities, could account for >85% of the total information available in the spike trains and the preserved information transmission. These results apply to neurons operating on a single time-varying input, suggesting that synaptic temporal integration can alter the temporal receptive field properties to create a more efficient representation of visual signals in the thalamus than the retina.


The Journal of Neuroscience | 2011

Emergence of learned categorical representations within an auditory forebrain circuit.

James M. Jeanne; Jason V. Thompson; Tatyana O. Sharpee; Timothy Q. Gentner

Many learned behaviors are thought to require the activity of high-level neurons that represent categories of complex signals, such as familiar faces or native speech sounds. How these complex, experience-dependent neural responses emerge within the brain's circuitry is not well understood. The caudomedial mesopallium (CMM), a secondary auditory region in the songbird brain, contains neurons that respond to specific combinations of song components and respond preferentially to the songs that birds have learned to recognize. Here, we examine the transformation of these learned responses across a broader forebrain circuit that includes the caudolateral mesopallium (CLM), an auditory region that provides input to CMM. We recorded extracellular single-unit activity in CLM and CMM in European starlings trained to recognize sets of conspecific songs and compared multiple encoding properties of neurons between these regions. We find that the responses of CMM neurons are more selective between song components, convey more information about song components, and are more variable over repeated components than the responses of CLM neurons. While learning enhances neural encoding of song components in both regions, CMM neurons encode more information about the learned categories associated with songs than do CLM neurons. Collectively, these data suggest that CLM and CMM are part of a functional sensory hierarchy that is modified by learning to yield representations of natural vocal signals that are increasingly informative with respect to behavior.


Current Opinion in Neurobiology | 2011

Hierarchical Representations in the Auditory Cortex

Tatyana O. Sharpee; Craig A. Atencio; Christoph E. Schreiner

Understanding the neural mechanisms of invariant object recognition remains one of the major unsolved problems in neuroscience. A common solution that is thought to be employed by diverse sensory systems is to create hierarchical representations of increasing complexity and tolerance. However, in the mammalian auditory system many aspects of this hierarchical organization remain undiscovered, including the prominent classes of high-level representations (that would be analogous to face selectivity in the visual system or selectivity to the bird's own song in songbirds) and the dominant types of invariant transformations. Here we review the recent progress that begins to probe the hierarchy of auditory representations, and the computational approaches that can be helpful in achieving this feat.


Annual Review of Neuroscience | 2013

Computational Identification of Receptive Fields

Tatyana O. Sharpee

Natural stimuli elicit robust responses of neurons throughout sensory pathways, and therefore their use provides unique opportunities for understanding sensory coding. This review describes statistical methods that can be used to characterize neural feature selectivity, focusing on the case of natural stimuli. First, we discuss how such classic methods as reverse correlation/spike-triggered average and spike-triggered covariance can be generalized for use with natural stimuli to find the multiple relevant stimulus features that affect the responses of a given neuron. Second, ways to characterize neural feature selectivity while assuming that the neural responses exhibit a certain type of invariance, such as position invariance for visual neurons, are discussed. Finally, we discuss methods that do not require one to make an assumption of invariance and instead can determine the type of invariance by analyzing relationships between the multiple stimulus features that affect the neural responses.


PLOS Computational Biology | 2011

Second order dimensionality reduction using minimum and maximum mutual information models.

Jeffrey D. Fitzgerald; Ryan J. Rowekamp; Lawrence C. Sincich; Tatyana O. Sharpee

Conventional methods used to characterize multidimensional neural feature selectivity, such as spike-triggered covariance (STC) or maximally informative dimensions (MID), are limited to Gaussian stimuli or are only able to identify a small number of features due to the curse of dimensionality. To overcome these issues, we propose two new dimensionality reduction methods that use minimum and maximum information models. These methods are information theoretic extensions of STC that can be used with non-Gaussian stimulus distributions to find relevant linear subspaces of arbitrary dimensionality. We compare these new methods to the conventional methods in two ways: with biologically inspired simulated neurons responding to natural images and with recordings from macaque retinal and thalamic cells responding to naturalistic time-varying stimuli. With non-Gaussian stimuli, the minimum and maximum information methods significantly outperform STC in all cases, whereas MID performs best in the regime of low dimensional feature spaces.
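
For reference, the classic STC baseline that these information-theoretic models extend works as follows: eigenvectors of the difference between the spike-triggered and prior stimulus covariances reveal features that a spike-triggered average misses. The sketch below is a standard-textbook illustration with a hypothetical quadratic model neuron and Gaussian stimuli (exactly the regime where plain STC is valid), not the minimum/maximum information models proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def sta_stc(stimuli, spikes):
    """Spike-triggered average and covariance-difference matrix."""
    trig = stimuli[spikes > 0]
    sta = trig.mean(axis=0)
    # STC: covariance of the spike-triggered ensemble minus the prior covariance.
    dstc = np.cov(trig, rowvar=False) - np.cov(stimuli, rowvar=False)
    return sta, dstc

# Toy quadratic neuron: fires when the squared projection onto a hidden
# axis (dimension 3) is large. The symmetric nonlinearity makes the STA
# uninformative, but the axis appears as a large-eigenvalue STC dimension.
D, N = 10, 50000
w = np.zeros(D)
w[3] = 1.0
stim = rng.standard_normal((N, D))
spikes = ((stim @ w) ** 2 > 2.0).astype(int)

sta, dstc = sta_stc(stim, spikes)
evals, evecs = np.linalg.eigh(dstc)
recovered = evecs[:, np.argmax(np.abs(evals))]
```

With correlated, non-Gaussian stimuli the same recipe produces biased features, which is the failure mode the minimum and maximum information models are designed to avoid.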

Collaboration


Dive into Tatyana O. Sharpee's collaborations.

Top Co-Authors

Johnatan Aljadeff (Salk Institute for Biological Studies)
Ryan J. Rowekamp (Salk Institute for Biological Studies)
Jeffrey D. Fitzgerald (Salk Institute for Biological Studies)
Minjoon Kouh (Massachusetts Institute of Technology)