Sarah M. N. Woolley
Columbia University
Publications
Featured research published by Sarah M. N. Woolley.
The Journal of Neuroscience | 2006
Sarah M. N. Woolley; Patrick R. Gill; Frédéric E. Theunissen
Physiological studies in vocal animals such as songbirds indicate that vocalizations drive auditory neurons particularly well. But the neural mechanisms whereby vocalizations are encoded differently from other sounds in the auditory system are unknown. We used spectrotemporal receptive fields (STRFs) to study the neural encoding of song versus the encoding of a generic sound, modulation-limited noise, by single neurons and the neuronal population in the zebra finch auditory midbrain. The noise was designed to match song in frequency, spectrotemporal modulation boundaries, and power. STRF calculations were balanced between the two stimulus types by forcing a common stimulus subspace. We found that 91% of midbrain neurons showed significant differences in spectral and temporal tuning properties when birds heard song and when birds heard modulation-limited noise. During the processing of noise, spectrotemporal tuning was highly variable across cells. During song processing, the tuning of individual cells became more similar; frequency tuning bandwidth increased, best temporal modulation frequency increased, and spike timing became more precise. The outcome was a population response to song that encoded rapidly changing sounds with power and precision, resulting in a faithful neural representation of the temporal pattern of a song. Modeling responses to song using the tuning to modulation-limited noise showed that the population response would not encode song as precisely or robustly. We conclude that stimulus-dependent changes in auditory tuning during song processing facilitate the high-fidelity encoding of the temporal pattern of a song.
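To make the STRF concept concrete, here is a minimal sketch of STRF estimation by ridge-regularized reverse correlation on a synthetic spectrogram. The dimensions, regularization constant, and simulated response below are illustrative assumptions, not the estimation pipeline used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stimulus: a "spectrogram" with 16 frequency bands, 2000 time bins.
n_freq, n_time, n_lags = 16, 2000, 20
spec = rng.standard_normal((n_freq, n_time))

# Ground-truth STRF used to simulate a firing rate (illustrative only).
true_strf = rng.standard_normal((n_freq, n_lags)) * np.hanning(n_lags)

# Design matrix of lagged spectrogram slices: one row per time bin.
X = np.zeros((n_time - n_lags, n_freq * n_lags))
for t in range(n_lags, n_time):
    X[t - n_lags] = spec[:, t - n_lags:t].ravel()

rate = X @ true_strf.ravel() + 0.5 * rng.standard_normal(n_time - n_lags)

# Ridge-regularized reverse correlation: strf = (X'X + lam*I)^-1 X'y.
lam = 10.0
strf_hat = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ rate)
strf_hat = strf_hat.reshape(n_freq, n_lags)

print("correlation with ground truth:",
      round(np.corrcoef(strf_hat.ravel(), true_strf.ravel())[0, 1], 3))
```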
The Journal of Neuroscience | 2004
Anne Hsu; Sarah M. N. Woolley; Thane Fremouw; Frédéric E. Theunissen
We examined the neural encoding of synthetic and natural sounds by single neurons in the auditory system of male zebra finches by estimating the mutual information in the time-varying mean firing rate of the neuronal response. Using a novel parametric method for estimating mutual information with limited data, we tested the hypothesis that song and song-like synthetic sounds would be preferentially encoded relative to other complex, but non-song-like synthetic sounds. To test this hypothesis, we designed two synthetic stimuli: synthetic songs that matched the power of spectral-temporal modulations but lacked the modulation phase structure of zebra finch song and noise with uniform band-limited spectral-temporal modulations. By defining neural selectivity as relative mutual information, we found that the auditory system of songbirds showed selectivity for song-like sounds. This selectivity increased in a hierarchical manner along ascending processing stages in the auditory system. Midbrain neurons responded with highest information rates and efficiency to synthetic songs and thus were selective for the spectral-temporal modulations of song. Primary forebrain neurons showed increased information to zebra finch song and synthetic song equally over noise stimuli. Secondary forebrain neurons responded with the highest information to zebra finch song relative to other stimuli and thus were selective for its specific modulation phase relationships. We also assessed the relative contribution of three response properties to this selectivity: (1) spiking reliability, (2) rate distribution entropy, and (3) bandwidth. We found that rate distribution and bandwidth but not reliability were responsible for the higher average information rates found for song-like sounds.
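As a toy illustration of mutual information as a selectivity measure, the sketch below applies a simple plug-in estimator to simulated spike counts; the paper's parametric estimator for time-varying rates with limited data is more involved, and all numbers here are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def plugin_mutual_information(counts):
    """Plug-in MI (bits) from a joint count table over (stimulus, response)."""
    p = counts / counts.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

# Toy experiment: two stimulus classes; spike counts are Poisson with means
# that differ strongly for the "selective" neuron, weakly for the other.
n_trials = 5000
stims = rng.integers(0, 2, n_trials)

for name, (mu0, mu1) in [("selective neuron  ", (2.0, 8.0)),
                         ("unselective neuron", (4.0, 5.0))]:
    spikes = rng.poisson(np.where(stims == 0, mu0, mu1))
    joint = np.zeros((2, spikes.max() + 1))
    for s, k in zip(stims, spikes):
        joint[s, k] += 1
    print(name, "MI ~", round(plugin_mutual_information(joint), 3), "bits")
```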
PLOS ONE | 2011
Ana Calabrese; Joseph W. Schumacher; David M. Schneider; Liam Paninski; Sarah M. N. Woolley
In the auditory system, the stimulus-response properties of single neurons are often described in terms of the spectrotemporal receptive field (STRF), a linear kernel relating the spectrogram of the sound stimulus to the instantaneous firing rate of the neuron. Several algorithms have been used to estimate STRFs from responses to natural stimuli; these algorithms differ in their functional models, cost functions, and regularization methods. Here, we characterize the stimulus-response function of auditory neurons using a generalized linear model (GLM). In this model, each cell's input is described by: 1) a stimulus filter (STRF); and 2) a post-spike filter, which captures dependencies on the neuron's spiking history. The output of the model is given by a series of spike trains rather than an instantaneous firing rate, allowing the prediction of spike train responses to novel stimuli. We fit the model by maximum penalized likelihood to the spiking activity of zebra finch auditory midbrain neurons in response to conspecific vocalizations (songs) and modulation-limited (ml) noise. We compare this model to normalized reverse correlation (NRC), the traditional method for STRF estimation, in terms of predictive power and the basic tuning properties of the estimated STRFs. We find that a GLM with a sparse prior predicts novel responses to both stimulus classes significantly better than NRC. Importantly, we find that STRFs from the two models derived from the same responses can differ substantially and that GLM STRFs are more consistent between stimulus classes than NRC STRFs. These results suggest that a GLM with a sparse prior provides a more accurate characterization of spectrotemporal tuning than does the NRC method when responses to complex sounds are studied in these neurons.
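The GLM structure described here (a stimulus filter plus a post-spike filter driving Poisson spiking) can be sketched in a few lines. For simplicity this toy uses a one-dimensional stimulus and an L2 penalty in place of the paper's sparse prior; every filter and constant is an assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy one-dimensional stimulus standing in for a spectrogram channel.
n_time, n_k, n_h = 4000, 12, 8
stim = rng.standard_normal(n_time)

k_true = np.exp(-np.arange(n_k) / 3.0)                # stimulus filter (STRF)
h_true = -1.5 * np.exp(-np.arange(1, n_h + 1) / 2.0)  # post-spike filter
bias = -2.0

# Simulate spikes: rate(t) = exp(stimulus drive + spike-history drive + bias).
spikes = np.zeros(n_time)
start = max(n_k, n_h)
for t in range(start, n_time):
    drive = (k_true @ stim[t - n_k:t][::-1]
             + h_true @ spikes[t - n_h:t][::-1] + bias)
    spikes[t] = rng.poisson(min(np.exp(drive), 10.0))  # clipped for stability

# Design matrix: lagged stimulus, lagged spike history, intercept.
X = np.array([np.concatenate((stim[t - n_k:t][::-1],
                              spikes[t - n_h:t][::-1], [1.0]))
              for t in range(start, n_time)])
y = spikes[start:]

# Penalized Poisson maximum likelihood by gradient ascent.
w, lam = np.zeros(X.shape[1]), 1.0
for _ in range(3000):
    mu = np.exp(np.clip(X @ w, -10, 10))
    w += 0.1 * (X.T @ (y - mu) - lam * w) / len(y)

print("stimulus-filter recovery (corr):",
      round(np.corrcoef(w[:n_k], k_true)[0, 1], 3))
```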
The Journal of Neuroscience | 2009
Sarah M. N. Woolley; Patrick R. Gill; Thane Fremouw; Frédéric E. Theunissen
Auditory perception depends on the coding and organization of the information-bearing acoustic features of sounds by auditory neurons. We report here that auditory neurons can be classified into functional groups, each of which plays a specific role in extracting distinct complex sound features. We recorded the electrophysiological responses of single auditory neurons in the songbird midbrain and forebrain to conspecific song, measured their tuning by calculating spectrotemporal receptive fields (STRFs), and classified them using multiple cluster analysis methods. Based on STRF shape, cells clustered into functional groups that divided the space of acoustical features into regions that represent cues for the fundamental acoustic percepts of pitch, timbre, and rhythm. Four major groups were found in the midbrain, and five major groups were found in the forebrain. Comparing STRFs in midbrain and forebrain neurons suggested that both inheritance and emergence of tuning properties occur as information ascends the auditory processing stream.
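A minimal sketch of the clustering step: vectorize STRF-like patches and group them with PCA followed by k-means. The paper applied multiple cluster analysis methods to measured STRFs; the Gabor-like synthetic STRFs, the feature count, and the choice of k below are illustrative only.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)

# Synthetic "STRFs": Gabor-like patches on a (frequency x time-lag) grid,
# drawn from three shape families with different spectral/temporal widths.
f = np.linspace(-1, 1, 16)[:, None]
t = np.linspace(0, 1, 20)[None, :]

def make_strf(freq_width, temp_period):
    return (np.exp(-(f / freq_width) ** 2)
            * np.sin(2 * np.pi * t / temp_period) * np.exp(-t / 0.4))

families = [(0.3, 0.2), (0.9, 0.8), (0.5, 0.4)]
strfs = np.array([(make_strf(fw, tp)
                   + 0.2 * rng.standard_normal((16, 20))).ravel()
                  for fw, tp in families for _ in range(50)])

# Reduce dimensionality, then cluster STRF shapes with k-means.
features = PCA(n_components=10).fit_transform(strfs)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

# The recovered clusters should largely match the three families.
for i in range(3):
    print("family", i, "-> cluster counts:",
          np.bincount(labels[i * 50:(i + 1) * 50], minlength=3))
```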
Journal of Neurophysiology | 2010
David M. Schneider; Sarah M. N. Woolley
Many social animals including songbirds use communication vocalizations for individual recognition. The perception of vocalizations depends on the encoding of complex sounds by neurons in the ascending auditory system, each of which is tuned to a particular subset of acoustic features. Here, we examined how well the responses of single auditory neurons could be used to discriminate among bird songs and we compared discriminability to spectrotemporal tuning. We then used biologically realistic models of pooled neural responses to test whether the responses of groups of neurons discriminated among songs better than the responses of single neurons and whether discrimination by groups of neurons was related to spectrotemporal tuning and trial-to-trial response variability. The responses of single auditory midbrain neurons could be used to discriminate among vocalizations with widely varying success, ranging from chance to 100%. The ability to discriminate among songs using single neuron responses was not correlated with spectrotemporal tuning. Pooling the responses of pairs of neurons generally led to better discrimination than both the average of the two inputs and the more discriminating of the two inputs. Pooling the responses of three to five single neurons continued to improve neural discrimination. The increase in discriminability was largest for groups of neurons with similar spectrotemporal tuning. Further, we found that groups of neurons with correlated spike trains achieved the largest gains in discriminability. We simulated neurons with varying levels of temporal precision and measured the discriminability of responses from single simulated neurons and groups of simulated neurons. Simulated neurons with biologically observed levels of temporal precision benefited more from pooling correlated inputs than did neurons with highly precise or imprecise spike trains. These findings suggest that pooling correlated neural responses with the levels of precision observed in the auditory midbrain increases neural discrimination of complex vocalizations.
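A toy version of pooled-response discrimination: classify single-trial responses by their correlation with trial-averaged templates, for a single neuron versus pooled neurons with similar tuning. This is a stand-in for, not a reproduction of, the paper's biologically realistic pooling models; all response statistics are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

n_neurons, n_songs, n_bins = 4, 5, 100

# Neurons share similar tuning: each neuron's song template is a common
# song signature plus a small neuron-specific deviation.
signatures = rng.standard_normal((n_songs, n_bins))
templates = signatures + 0.3 * rng.standard_normal((n_neurons, n_songs, n_bins))

def trial_response(neurons, song):
    """Pooled single-trial response: templates plus private noise, summed."""
    return sum(templates[n, song] + 4.0 * rng.standard_normal(n_bins)
               for n in neurons)

def accuracy(neurons, n_reps=1000):
    pooled = templates[list(neurons)].sum(axis=0)  # pooled templates
    correct = 0
    for _ in range(n_reps):
        song = rng.integers(n_songs)
        resp = trial_response(neurons, song)
        sims = [np.corrcoef(resp, pooled[c])[0, 1] for c in range(n_songs)]
        correct += int(np.argmax(sims) == song)
    return correct / n_reps

print("single neuron:", accuracy([0]))
print("pooled pair  :", accuracy([0, 1]))
print("pooled four  :", accuracy([0, 1, 2, 3]))
```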
Journal of Neurophysiology | 2008
Patrick R. Gill; Sarah M. N. Woolley; Thane Fremouw; Frédéric E. Theunissen
High-level sensory neurons encoding natural stimuli are not well described by linear models operating on the time-varying stimulus intensity. Here we show that firing rates of neurons in a secondary sensory forebrain area can be better modeled by linear functions of how surprising the stimulus is. We modeled auditory neurons in the caudal lateral mesopallium (CLM) of adult male zebra finches under urethane anesthesia with linear filters convolved not with stimulus intensity, but with stimulus surprise. Surprise was quantified as the logarithm of the probability of the stimulus given the local recent stimulus history and expectations based on conspecific song. Using our surprise method, the predictions of neural responses to conspecific song improved by 67% relative to those obtained using stimulus intensity. Similar prediction improvements cannot be replicated by assuming CLM performs derivative detection. The explanatory power of surprise increased from the midbrain through the primary forebrain and to CLM. When the stimulus presented was a random synthetic ripple noise, CLM neurons (but not neurons in lower auditory areas) were best described as if they were expecting conspecific song, finding the inconsistencies between birdsong and noise surprising. In summary, spikes in CLM neurons indicate stimulus surprise more than they indicate stimulus intensity features. The concept of stimulus surprise may be useful for modeling neural responses in other higher-order sensory areas whose functions have been poorly understood.
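Stimulus surprise can be sketched as the negative log probability of each stimulus sample under a predictive model of its recent history. The toy below fits a generic autoregressive model in place of the paper's song-based expectations; the stimulus, model order, and jump statistics are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy 1-D stimulus: mostly predictable dynamics with occasional large jumps.
n_time, p = 3000, 5
stim = np.zeros(n_time)
for t in range(1, n_time):
    scale = 5.0 if rng.random() < 0.02 else 0.3
    stim[t] = 0.9 * stim[t - 1] + scale * rng.standard_normal()

# Fit an AR(p) predictive model by least squares (a stand-in for
# expectations based on local history and conspecific song statistics).
X = np.array([stim[t - p:t][::-1] for t in range(p, n_time)])
y = stim[p:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ coef
resid_var = np.var(y - pred)

# Surprise = -log P(stimulus | recent history) under a Gaussian AR model.
surprise = (0.5 * np.log(2 * np.pi * resid_var)
            + (y - pred) ** 2 / (2 * resid_var))

# The most surprising bins should coincide with the stimulus jumps.
print("top surprise bins:", np.sort(np.argsort(surprise)[-5:] + p))
```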
Journal of Neurophysiology | 2011
Joseph W. Schumacher; David M. Schneider; Sarah M. N. Woolley
The majority of sensory physiology experiments have used anesthesia to facilitate the recording of neural activity. Current techniques allow researchers to study sensory function in the context of varying behavioral states. To reconcile results across multiple behavioral and anesthetic states, it is important to consider how and to what extent anesthesia plays a role in shaping neural response properties. The role of anesthesia has been the subject of much debate, but the extent to which sensory coding properties are altered by anesthesia has yet to be fully defined. In this study we asked how urethane, an anesthetic commonly used for avian and mammalian sensory physiology, affects the coding of complex communication vocalizations (songs) and simple artificial stimuli in the songbird auditory midbrain. We measured spontaneous and song-driven spike rates, spectrotemporal receptive fields, and neural discriminability from responses to songs in single auditory midbrain neurons. In the same neurons, we recorded responses to pure tone stimuli ranging in frequency and intensity. Finally, we assessed the effect of urethane on population-level representations of birdsong. Results showed that intrinsic neural excitability is significantly depressed by urethane but that spectral tuning, single neuron discriminability, and population representations of song do not differ significantly between unanesthetized and anesthetized animals.
Developmental Psychobiology | 2012
Sarah M. N. Woolley
Songbirds, like humans, are highly accomplished vocal learners. The many parallels between speech and birdsong and conserved features of mammalian and avian auditory systems have led to the emergence of the songbird as a model system for studying the perceptual mechanisms of vocal communication. Laboratory research on songbirds allows the careful control of early life experience and high-resolution analysis of brain function during vocal learning, production, and perception. Here, I review what songbird studies have revealed about the role of early experience in the development of vocal behavior, auditory perception, and the processing of learned vocalizations by auditory neurons. The findings of these studies suggest general principles for how exposure to vocalizations during development and into adulthood influences the perception of learned vocal signals.
Journal of Comparative Physiology A: Neuroethology, Sensory, Neural, and Behavioral Physiology | 2007
Mark E. Hauber; Phillip Cassey; Sarah M. N. Woolley; Frédéric E. Theunissen
Female choice plays a critical role in the evolution of male acoustic displays. Yet there is limited information on the neurophysiological basis of female songbirds' auditory recognition systems. To understand the neural mechanisms of how non-singing female songbirds perceive behaviorally relevant vocalizations, we recorded responses of single neurons to acoustic stimuli in two auditory forebrain regions, the caudal lateral mesopallium (CLM) and Field L, in anesthetized adult female zebra finches (Taeniopygia guttata). Using various metrics of response selectivity, we found consistently higher response strengths for unfamiliar conspecific songs compared to tone pips and white noise in Field L but not in CLM. We also found that neurons in the left auditory forebrain had lower response strengths to synthetic sounds, leading to overall higher neural selectivity for song in neurons of the left hemisphere. This laterality effect is consistent with previously published behavioral data in zebra finches. Overall, our results from Field L parallel, and our results from CLM contrast with, the patterns of response selectivity reported for conspecific songs over synthetic sounds in male zebra finches, suggesting some degree of sexual dimorphism of auditory perception mechanisms in songbirds.
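One common way to quantify response strength and selectivity, not necessarily the exact metrics used in the paper, is a z-score of driven firing relative to spontaneous activity; the sketch below uses simulated trial-by-trial rates.

```python
import numpy as np

rng = np.random.default_rng(6)

def response_strength(driven, spontaneous):
    """z-scored response strength: driven rate relative to spontaneous."""
    return (driven.mean() - spontaneous.mean()) / spontaneous.std(ddof=1)

# Simulated trial-by-trial firing rates (spikes/s) for one neuron.
spont = rng.normal(5.0, 1.5, 30)
rates = {"conspecific song": rng.normal(14.0, 3.0, 30),
         "tone pips":        rng.normal(7.0, 2.0, 30),
         "white noise":      rng.normal(6.0, 2.0, 30)}

z = {stim: response_strength(r, spont) for stim, r in rates.items()}
for stim, value in z.items():
    print(f"{stim:17s} z = {value:.2f}")

# A simple contrast index: song response versus mean synthetic response.
synth = (z["tone pips"] + z["white noise"]) / 2
print("song selectivity index:",
      round((z["conspecific song"] - synth) / (z["conspecific song"] + synth), 2))
```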
Neuron | 2013
David M. Schneider; Sarah M. N. Woolley
Vocal communicators such as humans and songbirds readily recognize individual vocalizations, even in distracting auditory environments. This perceptual ability is likely subserved by auditory neurons whose spiking responses to individual vocalizations are minimally affected by background sounds. However, auditory neurons that produce background-invariant responses to vocalizations in auditory scenes have not been found. Here, we describe a population of neurons in the zebra finch auditory cortex that represent vocalizations with a sparse code and that maintain their vocalization-like firing patterns in levels of background sound that permit behavioral recognition. These same neurons decrease or stop spiking in levels of background sound that preclude behavioral recognition. In contrast, upstream neurons represent vocalizations with dense and background-corrupted responses. We provide experimental evidence suggesting that sparse coding is mediated by feedforward suppression. Finally, we show through simulations that feedforward inhibition can transform a dense representation of vocalizations into a sparse and background-invariant representation.
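The proposed feedforward-suppression mechanism can be caricatured in a few lines: inhibition that tracks the overall background level is subtracted from excitation before a spiking threshold, so responses stay vocalization-like at moderate noise and shut down as noise grows. Every parameter below is invented for illustration and is not fit to the paper's data.

```python
import numpy as np

rng = np.random.default_rng(7)

# Upstream drive: a rectified "vocalization" signal over 300 time bins.
n_time = 300
signal = np.clip(rng.standard_normal(n_time), 0, None) * 10.0

def downstream(noise_level, threshold=6.0, inh_gain=3.0):
    """Cartoon sparse neuron: excitation minus background-tracking
    feedforward inhibition, then a spiking threshold."""
    noise = np.clip(rng.standard_normal(n_time), 0, None) * noise_level
    excitation = signal + noise
    ff_inhibition = inh_gain * noise.mean()  # scales with background level
    return np.clip(excitation - ff_inhibition - threshold, 0, None)

for noise_level in [0, 5, 10, 20]:
    r = downstream(noise_level)
    active = (r > 0).mean()
    fidelity = np.corrcoef(r, signal)[0, 1] if r.any() else 0.0
    print(f"noise {noise_level:2d}: fraction active {active:.2f}, "
          f"corr with vocal signal {fidelity:.2f}")
```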