Network


Latest external collaboration at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Jan W. H. Schnupp is active.

Publication


Featured research published by Jan W. H. Schnupp.


The Journal of Neuroscience | 2006

Plasticity of temporal pattern codes for vocalization stimuli in primary auditory cortex.

Jan W. H. Schnupp; Thomas M. Hall; Rory F. Kokelaar; Bashir Ahmed

It has been suggested that “call-selective” neurons may play an important role in the encoding of vocalizations in primary auditory cortex (A1). For example, marmoset A1 neurons often respond more vigorously to natural than to time-reversed twitter calls, although the spectral energy distribution in the natural and time-reversed signals is the same. Neurons recorded in cat A1, in contrast, showed no such selectivity for natural marmoset calls. To investigate whether call selectivity in A1 can arise purely as a result of auditory experience, we recorded responses to marmoset calls in A1 of naive ferrets, as well as in ferrets that had been trained to recognize these natural marmoset calls. We found that training did not induce call selectivity for the trained vocalizations in A1. However, although ferret A1 neurons were not call selective, they efficiently represented the vocalizations through temporal pattern codes, and trained animals recognized marmoset twitters with a high degree of accuracy. These temporal patterns needed to be analyzed at timescales of 10–50 ms to ensure efficient decoding. Training led to a substantial increase in the amount of information transmitted by these temporal discharge patterns, but the fundamental nature of the temporal pattern code remained unaltered. These results emphasize the importance of temporal discharge patterns and cast doubt on the functional significance of call-selective neurons in the processing of animal communication sounds at the level of A1.
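The timescale analysis described above can be illustrated with a simple template-matching decoder applied to binned spike trains. The sketch below is a hypothetical illustration, assuming NumPy and invented function names; it is not the decoder used in the study, but it shows how decoding accuracy could be compared across bin widths (for example, 10-50 ms versus much coarser bins).

```python
import numpy as np

def bin_spike_train(spike_times, duration, bin_width):
    """Convert spike times (in seconds) into a vector of counts per time bin."""
    n_bins = int(np.ceil(duration / bin_width))
    counts, _ = np.histogram(spike_times, bins=n_bins, range=(0.0, duration))
    return counts

def decode_trial(train_trials, test_spikes, duration, bin_width):
    """Nearest-template decoder: assign the test trial to the stimulus whose
    average binned response it most closely resembles (Euclidean distance).

    train_trials: dict mapping stimulus label -> list of spike-time arrays.
    """
    templates = {
        stim: np.mean([bin_spike_train(t, duration, bin_width) for t in trials], axis=0)
        for stim, trials in train_trials.items()
    }
    test_vec = bin_spike_train(test_spikes, duration, bin_width)
    return min(templates, key=lambda s: np.linalg.norm(templates[s] - test_vec))
```

Running such a decoder with bin widths ranging from 0.01 s up to several hundred milliseconds, and comparing the resulting accuracies, is one way to expose the timescale at which a temporal pattern code is most informative.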


Journal of Computational Neuroscience | 2005

Encoding Stimulus Information by Spike Numbers and Mean Response Time in Primary Auditory Cortex

Israel Nelken; Gal Chechik; Thomas D. Mrsic-Flogel; Andrew J. King; Jan W. H. Schnupp

Neurons can transmit information about sensory stimuli via their firing rate, spike latency, or by the occurrence of complex spike patterns. Identifying which aspects of the neural responses actually encode sensory information remains a fundamental question in neuroscience. Here we compared various approaches for estimating the information transmitted by neurons in auditory cortex in two very different experimental paradigms, one measuring spatial tuning and the other responses to complex natural stimuli. We demonstrate that, in both cases, spike counts and mean response times jointly carry essentially all the available information about the stimuli. Thus, in auditory cortex, whereas spike counts carry only partial information about stimulus identity or location, the additional availability of relatively coarse temporal information is sufficient in order to extract essentially all the sensory information available in the spike discharge pattern, at least for the relatively short stimuli (< ∼ 100 ms) commonly used in auditory research.
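One way to picture the comparison of coding schemes is to compute the mutual information between stimuli and a discretized response symbol built from the spike count and the mean response time of each trial. The plug-in estimator below is a minimal sketch assuming NumPy; the published analysis presumably used bias-corrected information estimates that are omitted here.

```python
import numpy as np

def mutual_information(stimulus_labels, response_labels):
    """Plug-in estimate of I(S;R) in bits from paired discrete labels."""
    stims, s_idx = np.unique(stimulus_labels, return_inverse=True)
    resps, r_idx = np.unique(response_labels, return_inverse=True)
    joint = np.zeros((len(stims), len(resps)))
    for s, r in zip(s_idx, r_idx):
        joint[s, r] += 1
    joint /= joint.sum()
    p_s = joint.sum(axis=1, keepdims=True)   # marginal over stimuli
    p_r = joint.sum(axis=0, keepdims=True)   # marginal over responses
    nonzero = joint > 0
    return np.sum(joint[nonzero] * np.log2(joint[nonzero] / (p_s @ p_r)[nonzero]))
```

Applying the same estimator to response labels built from spike counts alone, from mean response times alone, or from the joint pair allows the information carried by each code to be compared directly.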


Nature | 2001

Linear processing of spatial cues in primary auditory cortex.

Jan W. H. Schnupp; Thomas D. Mrsic-Flogel; Andrew J. King

To determine the direction of a sound source in space, animals must process a variety of auditory spatial cues, including interaural level and time differences, as well as changes in the sound spectrum caused by the direction-dependent filtering of sound by the outer ear. Behavioural deficits observed when primary auditory cortex (A1) is damaged have led to the widespread view that A1 may have an essential role in this complex computational task. Here we show, however, that the spatial selectivity exhibited by the large majority of A1 neurons is well predicted by a simple linear model, which assumes that neurons additively integrate sound levels in each frequency band and ear. The success of this linear model is surprising, given that computing sound source direction is a necessarily nonlinear operation. However, because linear operations preserve information, our results are consistent with the hypothesis that A1 may also form a gateway to higher, more specialized cortical areas.
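The linear model amounts to predicting each neuron's firing rate as an additive combination of the sound levels reaching each ear in each frequency band. The sketch below, assuming NumPy and ordinary least-squares fitting, is an illustrative reconstruction of such an additive model rather than the exact fitting procedure used in the paper.

```python
import numpy as np

def fit_linear_spatial_model(levels, responses):
    """Fit per-band, per-ear weights by least squares.

    levels: (n_directions, n_features) matrix of sound levels, where the
            features are the per-frequency-band levels at the left and right
            ears (plus a column of ones for the baseline term);
    responses: (n_directions,) measured firing rates.
    """
    weights, *_ = np.linalg.lstsq(levels, responses, rcond=None)
    return weights

def predict_rate(levels, weights):
    """Predicted rate = additive (linear) combination of the band/ear levels."""
    return levels @ weights
```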


Neuron | 2011

Contrast Gain Control in Auditory Cortex

Neil C. Rabinowitz; Ben D. B. Willmore; Jan W. H. Schnupp; Andrew J. King

The auditory system must represent sounds with a wide range of statistical properties. One important property is the spectrotemporal contrast in the acoustic environment: the variation in sound pressure in each frequency band, relative to the mean pressure. We show that neurons in ferret auditory cortex rescale their gain to partially compensate for the spectrotemporal contrast of recent stimulation. When contrast is low, neurons increase their gain, becoming more sensitive to small changes in the stimulus, although the effectiveness of contrast gain control is reduced at low mean levels. Gain is primarily determined by contrast near each neuron's preferred frequency, but there is also a contribution from contrast in more distant frequency bands. Neural responses are modulated by contrast over timescales of ∼100 ms. By using contrast gain control to expand or compress the representation of its inputs, the auditory system may be seeking an efficient coding of natural sounds.
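In this framework, contrast can be summarized per frequency band as the standard deviation of the level envelope relative to its mean, and "partial compensation" can be pictured as gain scaling with contrast raised to a power between 0 and 1. The snippet below is a simplified, hypothetical illustration of that relationship, not the linear-nonlinear model fits reported in the paper.

```python
import numpy as np

def spectrotemporal_contrast(envelope):
    """Contrast of one frequency band's level envelope: std relative to mean."""
    return np.std(envelope) / np.mean(envelope)

def contrast_rescaled_gain(base_gain, contrast, compensation=0.5):
    """Gain that partially compensates for stimulus contrast.

    compensation=1 would hold gain * contrast constant (full compensation);
    values between 0 and 1 model the partial compensation described above.
    """
    return base_gain / (contrast ** compensation)
```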


The Journal of Neuroscience | 2009

Interdependent Encoding of Pitch, Timbre, and Spatial Location in Auditory Cortex

Jennifer K. Bizley; Kerry M. M. Walker; Bernard W. Silverman; Andrew J. King; Jan W. H. Schnupp

Because we can perceive the pitch, timbre, and spatial location of a sound source independently, it seems natural to suppose that cortical processing of sounds might separate out spatial from nonspatial attributes. Indeed, recent studies support the existence of anatomically segregated “what” and “where” cortical processing streams. However, few attempts have been made to measure the responses of individual neurons in different cortical fields to sounds that vary simultaneously across spatial and nonspatial dimensions. We recorded responses to artificial vowels presented in virtual acoustic space to investigate the representations of pitch, timbre, and sound source azimuth in both core and belt areas of ferret auditory cortex. A variance decomposition technique was used to quantify the way in which altering each parameter changed neural responses. Most units were sensitive to two or more of these stimulus attributes. Although indicating that neural encoding of pitch, location, and timbre cues is distributed across auditory cortex, significant differences in average neuronal sensitivity were observed across cortical areas and depths, which could form the basis for the segregation of spatial and nonspatial cues at higher cortical levels. Some units exhibited significant nonlinear interactions between particular combinations of pitch, timbre, and azimuth. These interactions were most pronounced for pitch and timbre and were less commonly observed between spatial and nonspatial attributes. Such nonlinearities were most prevalent in primary auditory cortex, although they tended to be small compared with stimulus main effects.
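The variance decomposition can be sketched as an ANOVA-style partition of each unit's response variance into components attributable to pitch, timbre, and azimuth. The code below, assuming NumPy, computes only main-effect fractions and ignores interaction terms and bias corrections, so it illustrates the idea rather than reproducing the exact analysis in the paper.

```python
import numpy as np

def variance_decomposition(responses, pitch, timbre, azimuth):
    """Fraction of response variance explained by each stimulus attribute.

    responses: 1-D array of spike counts, one per trial;
    pitch, timbre, azimuth: 1-D arrays of factor labels, one per trial.
    """
    grand_mean = responses.mean()
    total_ss = np.sum((responses - grand_mean) ** 2)
    fractions = {}
    for name, factor in [("pitch", pitch), ("timbre", timbre), ("azimuth", azimuth)]:
        ss = 0.0
        for level in np.unique(factor):
            mask = factor == level
            # between-level sum of squares for this factor's main effect
            ss += mask.sum() * (responses[mask].mean() - grand_mean) ** 2
        fractions[name] = ss / total_ss
    return fractions
```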


Trends in Cognitive Sciences | 2001

The shape of ears to come: dynamic coding of auditory space

Andrew J. King; Jan W. H. Schnupp; Timothy P. Doubell

In order to pinpoint the location of a sound source, we make use of a variety of spatial cues that arise from the direction-dependent manner in which sounds interact with the head, torso and external ears. Accurate sound localization relies on the neural discrimination of tiny differences in the values of these cues and requires that the brain circuits involved be calibrated to the cues experienced by each individual. There is growing evidence that the capacity for recalibrating auditory localization continues well into adult life. Many details of how the brain represents auditory space and of how those representations are shaped by learning and experience remain elusive. However, it is becoming increasingly clear that the task of processing auditory spatial information is distributed over different regions of the brain, some working hierarchically, others independently and in parallel, and each apparently using different strategies for encoding sound source location.


The Journal of Neuroscience | 2010

Neural Ensemble Codes for Stimulus Periodicity in Auditory Cortex

Jennifer K. Bizley; Kerry M. M. Walker; Andrew J. King; Jan W. H. Schnupp

We measured the responses of neurons in auditory cortex of male and female ferrets to artificial vowels of varying fundamental frequency (f0), or periodicity, and compared these with the performance of animals trained to discriminate the periodicity of these sounds. Sensitivity to f0 was found in all five auditory cortical fields examined, with most of those neurons exhibiting either low-pass or high-pass response functions. Only rarely was the stimulus dependence of individual neuron discharges sufficient to account for the discrimination performance of the ferrets. In contrast, when analyzed with a simple classifier, responses of small ensembles, comprising 3-61 simultaneously recorded neurons, often discriminated periodicity changes as well as the animals did. We examined four potential strategies for decoding ensemble responses: spike counts, relative first-spike latencies, a binary “spike or no-spike” code, and a spike-order code. All four codes represented stimulus periodicity effectively, and, surprisingly, the spike count and relative latency codes enabled an equally rapid readout, within 75 ms of stimulus onset. Thus, relative latency codes do not necessarily facilitate faster discrimination judgments. A joint spike count plus relative latency code was more informative than either code alone, indicating that the information captured by each measure was not wholly redundant. The responses of neural ensembles, but not of single neurons, reliably encoded f0 changes even when stimulus intensity was varied randomly over a 20 dB range. Because trained animals can discriminate stimulus periodicity across different sound levels, this implies that ensemble codes are better suited to account for behavioral performance.
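The four candidate codes can be pictured as four different ways of summarizing an ensemble's response on a single trial before handing it to a classifier. The helper below is a hypothetical sketch assuming NumPy; in particular, defining "relative latency" with respect to the earliest first spike in the ensemble is an assumption for illustration. Each resulting feature vector could then be fed to the same nearest-template classifier so that the codes can be compared on equal footing.

```python
import numpy as np

def ensemble_features(spike_trains, duration, code):
    """Summarize one trial's ensemble response under one candidate code.

    spike_trains: list of spike-time arrays, one per simultaneously
    recorded neuron; duration: trial length in seconds.
    """
    if code == "count":
        return np.array([len(t) for t in spike_trains], dtype=float)
    if code == "binary":
        return np.array([1.0 if len(t) else 0.0 for t in spike_trains])
    first = np.array([t[0] if len(t) else duration for t in spike_trains])
    if code == "latency":   # first-spike latency relative to the ensemble's earliest spike
        return first - first.min()
    if code == "order":     # rank order in which neurons fired their first spikes
        return np.argsort(np.argsort(first)).astype(float)
    raise ValueError(f"unknown code: {code}")
```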


Nature Neuroscience | 2009

On hearing with more than one ear: lessons from evolution.

Jan W. H. Schnupp; Catherine E. Carr

Although ears capable of detecting airborne sound have arisen repeatedly and independently in different species, most animals that are capable of hearing have a pair of ears. We review the advantages that arise from having two ears and discuss recent research on the similarities and differences in the binaural processing strategies adopted by birds and mammals. We also ask how these different adaptations for binaural and spatial hearing might inform and inspire the development of techniques for future auditory prosthetic devices.


Hearing Research | 2007

Physiological and behavioral studies of spatial coding in the auditory cortex.

Andrew J. King; Victoria M. Bajo; Jennifer K. Bizley; Robert A. A. Campbell; Fernando R. Nodal; Andreas L. Schulz; Jan W. H. Schnupp

Despite extensive subcortical processing, the auditory cortex is believed to be essential for normal sound localization. However, we still have a poor understanding of how auditory spatial information is encoded in the cortex and of the relative contribution of different cortical areas to spatial hearing. We investigated the behavioral consequences of inactivating ferret primary auditory cortex (A1) on auditory localization by implanting a sustained-release polymer containing the GABA(A) agonist muscimol bilaterally over A1. Silencing A1 led to a reversible deficit in the localization of brief noise bursts in both the horizontal and vertical planes. In other ferrets, large bilateral lesions of the auditory cortex, which extended beyond A1, produced more severe and persistent localization deficits. To investigate the processing of spatial information by high-frequency A1 neurons, we measured their binaural-level functions and used individualized virtual acoustic space stimuli to record their spatial receptive fields (SRFs) in anesthetized ferrets. We observed the existence of a continuum of response properties, with most neurons preferring contralateral sound locations. In many cases, the SRFs could be explained by a simple linear interaction between the acoustical properties of the head and external ears and the binaural frequency tuning of the neurons. Azimuth response profiles recorded in awake ferrets were very similar, and further analysis suggested that the slopes of these functions and location-dependent variations in spike timing are the main information-bearing parameters. Studies of sensory plasticity can also provide valuable insights into the role of different brain areas and the way in which information is represented within them. For example, stimulus-specific training allows accurate auditory localization by adult ferrets to be relearned after manipulating binaural cues by occluding one ear. Reversible inactivation of A1 resulted in slower and less complete adaptation than in normal controls, whereas selective lesions of the descending corticocollicular pathway prevented any improvement in performance. These results reveal a role for auditory cortex in training-induced plasticity of auditory localization, which could be mediated by descending cortical pathways.


The Journal of Neuroscience | 2011

The Laminar and Temporal Structure of Stimulus Information in the Phase of Field Potentials of Auditory Cortex

Francois D. Szymanski; Neil C. Rabinowitz; Cesare Magri; Stefano Panzeri; Jan W. H. Schnupp

Recent studies have shown that the phase of low-frequency local field potentials (LFPs) in sensory cortices carries a significant amount of information about complex naturalistic stimuli, yet the laminar circuit mechanisms and the aspects of stimulus dynamics responsible for generating this phase information remain essentially unknown. Here we investigated these issues by means of an information theoretic analysis of LFPs and current source densities (CSDs) recorded with laminar multi-electrode arrays in the primary auditory area of anesthetized rats during complex acoustic stimulation (music and broadband 1/f stimuli). We found that most LFP phase information originated from discrete “CSD events” consisting of granular–superficial layer dipoles of short duration and large amplitude, which we hypothesize to be triggered by transient thalamocortical activation. These CSD events occurred at rates of 2–4 Hz during both stimulation with complex sounds and silence. During stimulation with complex sounds, these events reliably reset the LFP phases at specific times during the stimulation history. These facts suggest that the informativeness of LFP phase in rat auditory cortex is the result of transient, large-amplitude events, of the “evoked” or “driving” type, reflecting strong depolarization in thalamo-recipient layers of cortex. Finally, the CSD events were characterized by a small number of discrete types of infragranular activation. The extent to which infragranular regions were activated was stimulus dependent. These patterns of infragranular activations may reflect a categorical evaluation of stimulus episodes by the local circuit to determine whether to pass on stimulus information through the output layers.
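The starting point of such an analysis is the instantaneous phase of the low-frequency LFP, typically obtained by band-pass filtering and applying the Hilbert transform, then discretizing the phase into a few sectors so that stimulus information can be estimated. The sketch below, assuming NumPy and SciPy, illustrates this preprocessing step; the frequency band, filter order, and number of phase bins are placeholders rather than the values used in the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def lfp_phase(lfp, fs, band=(1.0, 8.0), order=3):
    """Instantaneous phase of a low-frequency LFP band (band-pass + Hilbert)."""
    nyq = fs / 2.0
    b, a = butter(order, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, lfp)
    return np.angle(hilbert(filtered))

def discretize_phase(phase, n_bins=4):
    """Map phase values in (-pi, pi] to one of n_bins equal sectors."""
    return np.floor((phase + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
```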

Collaboration


Dive into Jan W. H. Schnupp's collaboration.

Top Co-Authors
Robert A. A. Campbell

Cold Spring Harbor Laboratory
