Thane Fremouw
University of California, Berkeley
Publications
Featured research published by Thane Fremouw.
Nature Neuroscience | 2005
Sarah M. N. Woolley; Thane Fremouw; Anne Hsu; Frédéric E. Theunissen
Vocal communicators discriminate conspecific vocalizations from other sounds and recognize the vocalizations of individuals. To identify neural mechanisms for the discrimination of such natural sounds, we compared the linear spectro-temporal tuning properties of auditory midbrain and forebrain neurons in zebra finches with the statistics of natural sounds, including song. Here, we demonstrate that ensembles of auditory neurons are tuned to auditory features that enhance the acoustic differences between classes of natural sounds and among the songs of individual birds. Tuning specifically avoids the spectro-temporal modulations that are redundant across natural sounds and therefore provide little information; instead, it overlaps with the temporal modulations that differ most across sounds. By comparing the real tuning with a less selective model of spectro-temporal tuning, we found that the real modulation tuning increases the neural discrimination of different sounds. Additionally, auditory neurons discriminate among zebra finch song segments better than among synthetic sound segments.
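The tuning-versus-statistics comparison above rests on the spectro-temporal modulation spectrum of a sound, i.e. the 2D Fourier transform of its spectrogram. A minimal numpy sketch of that computation, with an invented three-band toy spectrogram standing in for a real natural-sound analysis:

```python
import numpy as np

# Toy log-amplitude spectrogram: 3 frequency bands x 200 time bins (1 ms each).
# A real analysis would compute band envelopes of natural sounds; these
# sinusoidal envelopes are invented so each band carries one known temporal
# modulation frequency.
t = np.arange(200) / 1000.0
spec = np.stack([np.sin(2 * np.pi * f * t) for f in (5.0, 10.0, 20.0)])

# Modulation spectrum: magnitude of the 2D FFT of the spectrogram.
# Its axes are spectral modulation (cycles/kHz) and temporal modulation (Hz).
mod_spectrum = np.abs(np.fft.fftshift(np.fft.fft2(spec)))

# Collapsing over the spectral axis gives the temporal-modulation profile;
# its energy sits at the modulation rates present in the bands (5, 10, 20 Hz).
temporal_profile = mod_spectrum.sum(axis=0)
```

Comparing where neural STRF tuning overlaps such a spectrum, versus where natural-sound energy is redundant, is the style of analysis the abstract describes.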
Archive | 2002
John H. Casseday; Thane Fremouw; Ellen Covey
The inferior colliculus (IC) (Fig. 7.1) occupies a strategic position in the central auditory system. Evidence reviewed in this chapter indicates that it is an interface between lower brainstem auditory pathways, the auditory cortex, and motor systems (For abbreviations see Table 7.1). The IC receives ascending input, via separate pathways, from a number of auditory nuclei in the lower brainstem. Moreover, it receives crossed input from the opposite IC and descending input from auditory cortex. These connections suggest that (1) the IC integrates information from various auditory sources and (2) at least some of the integration utilizes cortical feedback. The IC also receives input from ascending somatosensory pathways, suggesting that auditory information is integrated with somatosensory information at the midbrain. Motor-related input to the IC arises from the substantia nigra and globus pallidus. These connections raise the possibility that sensory processing in the IC is modulated by motor action. The major output of the IC is to the auditory thalamocortical system. However, it also transmits information to motor systems such as the deep superior colliculus, and the cerebellum, via the pontine gray. These connections suggest that processing in the IC not only prepares information for transmission to higher auditory centers but also modulates motor action in a direct fashion. In short, the IC is ideally suited to process auditory information based on behavioral context and to direct information for guiding action in response to this information (Aitkin 1986; Casseday and Covey 1996).
Annals of the New York Academy of Sciences | 2004
Frédéric E. Theunissen; Noopur Amin; Sarita S. Shaevitz; Sarah M. N. Woolley; Thane Fremouw; Mark E. Hauber
Abstract: The sensorimotor neurons found in the song‐system nuclei are responsive to the sounds of the bird's own song. This selectivity emerges during vocal learning and appears to follow the development of the bird's song vocalization in two ways: at each stage, the neurons are most selective for the bird's current vocalizations, and this selectivity increases as the bird learns to produce a stable adult song. Also, because of their location in the sensori‐vocal pathway and because their physiological properties are correlated with the motor program, it is postulated that these neurons play a crucial role in interpreting auditory feedback during song to preserve a desirable vocal output. The neurons found in presynaptic auditory areas lack this selectivity for the bird's own song. Auditory neurons in the secondary auditory areas, caudal nidopallium and caudal mesopallium, show specific responses to familiar songs or behaviorally relevant songs. These auditory areas might therefore be involved in perceptual tasks. Neurons in the primary forebrain auditory area are selective for the spectrotemporal modulations that are common in song, yielding an efficient neural representation of those sounds. Neurons that are particularly selective for the tutor song at the end of the sensory period have not yet been described in any area. Although these three levels of selectivity, found in the primary auditory forebrain areas, the secondary auditory forebrain areas, and the song system, suggest a form of hierarchical sensory processing, the functional connectivity between these areas and the mechanisms generating the specific selectivity for songs that are behaviorally relevant or crucial in song learning and production have yet to be revealed.
The Journal of Neuroscience | 2004
Anne Hsu; Sarah M. N. Woolley; Thane Fremouw; Frédéric E. Theunissen
We examined the neural encoding of synthetic and natural sounds by single neurons in the auditory system of male zebra finches by estimating the mutual information in the time-varying mean firing rate of the neuronal response. Using a novel parametric method for estimating mutual information with limited data, we tested the hypothesis that song and song-like synthetic sounds would be preferentially encoded relative to other complex, but non-song-like synthetic sounds. To test this hypothesis, we designed two synthetic stimuli: synthetic songs that matched the power of spectral-temporal modulations but lacked the modulation phase structure of zebra finch song and noise with uniform band-limited spectral-temporal modulations. By defining neural selectivity as relative mutual information, we found that the auditory system of songbirds showed selectivity for song-like sounds. This selectivity increased in a hierarchical manner along ascending processing stages in the auditory system. Midbrain neurons responded with highest information rates and efficiency to synthetic songs and thus were selective for the spectral-temporal modulations of song. Primary forebrain neurons showed increased information to zebra finch song and synthetic song equally over noise stimuli. Secondary forebrain neurons responded with the highest information to zebra finch song relative to other stimuli and thus were selective for its specific modulation phase relationships. We also assessed the relative contribution of three response properties to this selectivity: (1) spiking reliability, (2) rate distribution entropy, and (3) bandwidth. We found that rate distribution and bandwidth but not reliability were responsible for the higher average information rates found for song-like sounds.
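The parametric estimator used in this study is tailored to time-varying firing rates, but the underlying quantity, mutual information between stimulus and response, can be illustrated with a simple plug-in estimate on spike counts. Everything below (rates, trial numbers, Poisson spiking) is invented for illustration and is not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(1)

def entropy(p):
    """Shannon entropy (bits) of a probability vector."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Toy experiment: two equiprobable stimulus classes drive a neuron at
# different mean rates; the response is a trial spike count.
n_trials = 5000
rates = {"song": 8.0, "noise": 2.0}
counts = np.concatenate([rng.poisson(r, n_trials) for r in rates.values()])

# Plug-in estimate: I(S; R) = H(R) - mean over s of H(R | S = s).
bins = np.arange(counts.max() + 2)
h_r = entropy(np.histogram(counts, bins=bins)[0] / counts.size)
h_r_given_s = np.mean([
    entropy(np.histogram(counts[i * n_trials:(i + 1) * n_trials],
                         bins=bins)[0] / n_trials)
    for i in range(len(rates))
])
mi_bits = h_r - h_r_given_s
```

With two equiprobable classes the information cannot exceed 1 bit; defining selectivity as information relative to other stimuli, as in the abstract, then ranks how preferentially a neuron encodes song-like sounds.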
The Journal of Neuroscience | 2009
Sarah M. N. Woolley; Patrick R. Gill; Thane Fremouw; Frédéric E. Theunissen
Auditory perception depends on the coding and organization of the information-bearing acoustic features of sounds by auditory neurons. We report here that auditory neurons can be classified into functional groups, each of which plays a specific role in extracting distinct complex sound features. We recorded the electrophysiological responses of single auditory neurons in the songbird midbrain and forebrain to conspecific song, measured their tuning by calculating spectrotemporal receptive fields (STRFs), and classified them using multiple cluster analysis methods. Based on STRF shape, cells clustered into functional groups that divided the space of acoustical features into regions that represent cues for the fundamental acoustic percepts of pitch, timbre, and rhythm. Four major groups were found in the midbrain, and five major groups were found in the forebrain. Comparing STRFs in midbrain and forebrain neurons suggested that both inheritance and emergence of tuning properties occur as information ascends the auditory processing stream.
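The clustering step can be sketched in miniature: flatten each STRF into a vector and group the vectors. Below is a hand-rolled k-means on invented one-dimensional "STRFs"; real STRFs are frequency-by-lag matrices fit to neural data, and the study used multiple cluster-analysis methods, not just k-means:

```python
import numpy as np

rng = np.random.default_rng(4)

# Invented "STRFs": two shape families (a Gaussian bump at lag 8 or lag 20)
# plus noise, 20 cells per family.
def toy_strf(center, lags=np.arange(30)):
    return np.exp(-0.5 * ((lags - center) / 3.0) ** 2)

strfs = np.array([toy_strf(c) + 0.1 * rng.normal(size=30)
                  for c in [8] * 20 + [20] * 20])

# Minimal k-means on the flattened STRFs (k chosen by hand).
k = 2
centers = strfs[[0, -1]].copy()          # deterministic initialisation
for _ in range(20):
    dist = ((strfs[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    assign = dist.argmin(axis=1)         # nearest center per cell
    centers = np.array([strfs[assign == j].mean(axis=0) for j in range(k)])
```

Here the two generated shape families fall into the two clusters, the toy analogue of STRF shape defining functional groups.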
Journal of Computational Neuroscience | 2006
Patrick R. Gill; Junli Zhang; Sarah M. N. Woolley; Thane Fremouw; Frédéric E. Theunissen
The spectro-temporal receptive field (STRF) of an auditory neuron describes the linear relationship between the sound stimulus in a time-frequency representation and the neural response. Time-frequency representations of a sound in turn require a non-linear operation on the sound pressure waveform, and many different forms for this non-linear transformation are possible. Here, we systematically investigated the effects of four factors in the non-linear step in the STRF model: the choice of logarithmic or linear filter frequency spacing, the time-frequency scale, stimulus amplitude compression and adaptive gain control. We quantified the goodness of fit of these different STRF models on data obtained from auditory neurons in the songbird midbrain and forebrain. We found that adaptive gain control and the correct stimulus amplitude compression scheme are paramount to correctly modelling neurons. The time-frequency scale and frequency spacing also affected the goodness of fit of the model but to a lesser extent, and the optimal values were stimulus dependent.
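The model structure being compared can be written down compactly: a non-linear front-end turns the pressure waveform into a time-frequency representation S[f, t], and the STRF is then a linear filter on S. The sketch below shows only the compression choice and the linear stage, with an invented random "filter-bank" output in place of a real cochlear front-end:

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented filter-bank envelopes (frequency bands x time bins); a real
# front-end would be an STFT or gammatone bank applied to the waveform.
n_bands, n_t = 16, 500
envelopes = np.abs(rng.normal(size=(n_bands, n_t)))

# One factor in the non-linear step: amplitude compression.
linear_input = envelopes                 # no compression
log_input = np.log(envelopes + 1e-3)     # dB-like logarithmic compression

# The STRF itself is linear on the chosen representation:
#   r(t) = sum over bands f and lags u of STRF[f, u] * S[f, t - u]
n_lags = 20
strf = rng.normal(size=(n_bands, n_lags))

def strf_predict(stim, strf):
    """Predicted rate: convolve the STRF with the stimulus representation."""
    nb, nl = strf.shape
    nt = stim.shape[1]
    rate = np.zeros(nt)
    for u in range(nl):
        rate[u:] += strf[:, u] @ stim[:, :nt - u]
    return rate

rate = strf_predict(log_input, strf)
```

Model comparison then amounts to asking which front-end (compression scheme, frequency spacing, scale, gain control) lets the fitted linear stage best predict the recorded firing; per the abstract, compression and adaptive gain control mattered most.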
Behavioral Neuroscience | 1997
Thane Fremouw; Pamela Jackson-Smith; Raymond P. Kesner
Hippocampal processing is often crucial for normal spatial learning and memory in both birds and mammals, suggesting a general similarity in avian and mammalian hippocampal function. However, few studies using birds have examined the effect of hippocampal lesions on spatial tasks analogous to those typically used with mammals. Therefore, we examined how hippocampal lesions would affect the performance of pigeons in a dry version of the water maze. Experiment 1 showed that hippocampal-lesioned birds were impaired in acquiring the location of hidden food in the maze. Experiment 2 showed that hippocampal-lesioned birds were not impaired when a single cue indicated the location of hidden food. These results support the notion that avian and mammalian hippocampal functions are quite similar, in terms of the tasks for which their processing is crucial and the tasks for which it is not.
Journal of Neurophysiology | 2008
Patrick R. Gill; Sarah M. N. Woolley; Thane Fremouw; Frédéric E. Theunissen
High-level sensory neurons encoding natural stimuli are not well described by linear models operating on the time-varying stimulus intensity. Here we show that firing rates of neurons in a secondary sensory forebrain area can be better modeled by linear functions of how surprising the stimulus is. We modeled auditory neurons in the caudal lateral mesopallium (CLM) of adult male zebra finches under urethane anesthesia with linear filters convolved not with stimulus intensity, but with stimulus surprise. Surprise was quantified as the logarithm of the probability of the stimulus given the local recent stimulus history and expectations based on conspecific song. Using our surprise method, the predictions of neural responses to conspecific song improved by 67% relative to those obtained using stimulus intensity. Similar prediction improvements cannot be replicated by assuming CLM performs derivative detection. The explanatory power of surprise increased from the midbrain through the primary forebrain and to CLM. When the stimulus presented was a random synthetic ripple noise, CLM neurons (but not neurons in lower auditory areas) were best described as if they were expecting conspecific song, finding the inconsistencies between birdsong and noise surprising. In summary, spikes in CLM neurons indicate stimulus surprise more than they indicate stimulus intensity features. The concept of stimulus surprise may be useful for modeling neural responses in other higher-order sensory areas whose functions have been poorly understood.
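The paper's surprise is defined against expectations learned from conspecific song; the computation itself, the negative log probability of the current input given recent history, can be shown with a deliberately simple stand-in predictor (a Gaussian "next sample ≈ previous sample" model on an invented intensity trace):

```python
import numpy as np

# Toy stimulus intensity trace: smooth, with one abrupt step at t = 200.
# The trace and the predictive model are invented for illustration.
x = np.sin(np.arange(300) / 15.0)
x[200:] += 2.0

# Stand-in expectation: the next sample should be near the previous one,
# with Gaussian uncertainty sigma.
sigma = 0.2
resid = x[1:] - x[:-1]

# Surprise = -log p(x_t | history): large when the stimulus deviates
# from what the recent past predicts.
surprise = 0.5 * (resid / sigma) ** 2 + np.log(sigma * np.sqrt(2 * np.pi))

print(int(np.argmax(surprise)) + 1)  # prints 200: the step is most surprising
```

A surprise-driven linear model then convolves a filter with this trace rather than with raw intensity, which is the substitution the abstract reports improving predictions in CLM.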
Journal of Experimental Psychology: Animal Behavior Processes | 1998
Thane Fremouw; Walter T. Herbranson; Charles P. Shimp
Humans can shift attention between parts and wholes, as shown in experiments with complex hierarchical stimuli, such as larger, global letters constructed from smaller, local letters. In these experiments, a target stimulus appears at either the local or the global level, with a distractor at the other level. A shift of attention between levels can be induced by a form of base-rate priming, in which targets at one level are presented with a higher probability than targets at the other level. This priming facilitates responding to targets at the primed level, as seen in shorter reaction times. Experiment 1 demonstrated such a priming effect in pigeons. Experiment 2 confirmed this priming by showing that accuracy remained high for familiar targets at either level, even when distractors at the other level were novel.
Annals of the New York Academy of Sciences | 2004
Frédéric E. Theunissen; Sarah M. N. Woolley; Anne Hsu; Thane Fremouw
Abstract: Understanding song perception and singing behavior in birds requires the study of auditory processing of complex sounds throughout the avian brain. We can divide the basics of auditory perception into two general processes: (1) encoding, the process whereby sound is transformed into neural activity, and (2) decoding, the process whereby patterns of neural activity take on perceptual meaning and therefore guide behavioral responses to sounds. In birdsong research, most studies have focused on the decoding process: what are the responses of the specialized auditory neurons in the song control system, and what do they mean for the bird? Recently, new techniques addressing both encoding and decoding have been developed for use in songbirds. Here, we first describe some powerful methods for analyzing which acoustical aspects of complex sounds like songs are encoded by auditory processing neurons in the songbird brain. These methods include the estimation and analysis of spectro‐temporal receptive fields (STRFs) for auditory neurons. Then we discuss the decoding methods that have been used to understand how songbird neurons may discriminate among different songs and other sounds based on mean spike‐count rates.