Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Dennis L. Barbour is active.

Publication


Featured research published by Dennis L. Barbour.


The Journal of Neuroscience | 2008

Excitatory Local Connections of Superficial Neurons in Rat Auditory Cortex

Dennis L. Barbour; Edward M. Callaway

The mammalian cerebral cortex consists of multiple areas specialized for processing information for many different sensory modalities. Although the basic structure is similar for each cortical area, specialized neural connections likely mediate unique information processing requirements. Relative to primary visual (V1) and somatosensory (S1) cortices, little is known about the intrinsic connectivity of primary auditory cortex (A1). To better understand the flow of information from the thalamus to and through rat A1, we made use of a rapid, high-throughput screening method exploiting laser-induced uncaging of glutamate to construct excitatory input maps of individual neurons. We found that excitatory inputs to layer 2/3 pyramidal neurons were similar to those in V1 and S1; these cells received strong excitation primarily from layers 2–4. Both anatomical and physiological observations, however, indicate that inputs and outputs of layer 4 excitatory neurons in A1 contrast with those in V1 and S1. Layer 2/3 pyramids in A1 have substantial axonal arbors in layer 4, and photostimulation demonstrates that these pyramids can connect to layer 4 excitatory neurons. Furthermore, most or all of these layer 4 excitatory neurons project out of the local cortical circuit. Unlike S1 and V1, where feedback to layer 4 is mediated exclusively by indirect local circuits involving layer 2/3 projections to deep layers and deep feedback to layer 4, layer 4 of A1 integrates thalamic and strong layer 4 recurrent excitatory input with relatively direct feedback from layer 2/3 and provides direct cortical output.


Nature Neuroscience | 2008

Specialized neuronal adaptation for preserving input sensitivity

Paul V. Watkins; Dennis L. Barbour

Some neurons in auditory cortex respond to recent stimulus history by adapting their response functions to track stimulus statistics directly, as might be expected. In contrast, some neurons respond to loud sounds by adjusting their response functions away from high intensities and consequently remain sensitive to softer sounds. In marmoset monkey auditory cortex, the latter type of adaptation appears to exist only in neurons tuned to stimulus intensity.


The Journal of Neuroscience | 2011

Nonuniform High-Gamma (60–500 Hz) Power Changes Dissociate Cognitive Task and Anatomy in Human Cortex

Charles M. Gaona; Mohit Sharma; Zachary V. Freudenburg; Jonathan D. Breshears; David T. Bundy; Jarod L. Roland; Dennis L. Barbour; Eric C. Leuthardt

High-gamma-band (>60 Hz) power changes in cortical electrophysiology are a reliable indicator of focal, event-related cortical activity. Despite discoveries of oscillatory subthreshold and synchronous suprathreshold activity at the cellular level, there is an increasingly popular view that high-gamma-band amplitude changes recorded from cellular ensembles are the result of asynchronous firing activity that yields wideband and uniform power increases. Others have demonstrated independence of power changes in the low- and high-gamma bands, but to date, no studies have shown evidence of any such independence above 60 Hz. Based on nonuniformities in time-frequency analyses of electrocorticographic (ECoG) signals, we hypothesized that induced high-gamma-band (60–500 Hz) power changes are more heterogeneous than currently understood. Using single-word repetition tasks in six human subjects, we showed that functional responsiveness of different ECoG high-gamma sub-bands can discriminate cognitive task (e.g., hearing, reading, speaking) and cortical locations. Power changes in these sub-bands of the high-gamma range are consistently present within single trials and have statistically different time courses within the trial structure. Moreover, when consolidated across all subjects within three task-relevant anatomic regions (sensorimotor, Broca's area, and superior temporal gyrus), these behavior- and location-dependent power changes evidenced nonuniform trends across the population. Together, the independence and nonuniformity of power changes across a broad range of frequencies suggest that a new approach to evaluating high-gamma-band cortical activity is necessary. These findings show that in addition to time and location, frequency is another fundamental dimension of high-gamma dynamics.
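The sub-band analysis described above rests on comparing signal power within separate slices of the 60–500 Hz range. As an illustration only (the paper's exact sub-band partition and spectral estimator are not given here, so the band edges below are hypothetical), a minimal periodogram-based band-power computation might look like:

```python
import numpy as np

def band_power(signal, fs, bands):
    """Power of `signal` within each (lo, hi) Hz band, via the FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    return {band: psd[(freqs >= band[0]) & (freqs < band[1])].sum() for band in bands}

# Synthetic stand-in for an ECoG trace: an 80 Hz component plus noise
fs = 2000  # Hz; sampling rate must exceed twice the top band edge
rng = np.random.default_rng(0)
t = np.arange(0, 1, 1 / fs)
sig = np.sin(2 * np.pi * 80 * t) + 0.1 * rng.standard_normal(t.size)

# Hypothetical high-gamma sub-band edges spanning 60-500 Hz
sub_bands = [(60, 120), (120, 250), (250, 500)]
powers = band_power(sig, fs, sub_bands)
```

With this synthetic input, the lowest sub-band captures most of the power; in the study, diverging power time courses across such sub-bands are what discriminate task and location.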


Journal of Biomedical Optics | 2011

Photoacoustic microscopy of microvascular responses to cortical electrical stimulation

Vassiliy Tsytsarev; Song Hu; Junjie Yao; Konstantin Maslov; Dennis L. Barbour; Lihong V. Wang

Advances in the functional imaging of cortical hemodynamics have greatly facilitated the understanding of neurovascular coupling. In this study, label-free optical-resolution photoacoustic microscopy (OR-PAM) was used to monitor microvascular responses to direct electrical stimulations of the mouse somatosensory cortex through a cranial opening. The responses appeared in two forms: vasoconstriction and vasodilatation. The transition between these two forms of response was observed in single vessels by varying the stimulation intensity. Marked correlation was found between the current-dependent responses of two daughter vessels bifurcating from the same parent vessel. Statistical analysis of twenty-seven vessels from three different animals further characterized the spatial-temporal features and the current dependence of the microvascular response. Our results demonstrate that OR-PAM is a valuable tool to study neurovascular coupling at the microscopic level.


Cerebral Cortex | 2011

Level-Tuned Neurons in Primary Auditory Cortex Adapt Differently to Loud versus Soft Sounds

Paul V. Watkins; Dennis L. Barbour

The responses of auditory neurons tuned to stimulus intensity (i.e., nonmonotonic rate-level responders) have typically been analyzed with stimulus paradigms that eliminate neuronal adaptation to recent stimulus statistics. This procedure is usually accomplished by presenting individual sounds with long silent periods between them. Studies using such paradigms have led to hypotheses that nonmonotonic neurons may play a role in amplitude spectrum coding or level-invariant representations of complex spectral shapes. We have previously proposed an alternate hypothesis that level-tuned neurons may represent specialized coders of low sound levels because they preserve their sensitivity to low levels even when average sound level is relatively high. Here we demonstrate that nonmonotonic neurons in awake marmoset primary auditory cortex accomplish this feat by adapting their upper dynamic range to encode sounds with high mean level, leaving the lower dynamic range available for encoding relatively rare low-level sounds. This adaptive behavior manifests in nonmonotonic relative to monotonic neurons as (1) a lesser amount of overall shifting of rate-level response thresholds and (2) a nonmonotonic gain adjustment with increasing mean stimulus level.


Frontiers in Human Neuroscience | 2012

Temporal evolution of gamma activity in human cortex during an overt and covert word repetition task

Eric C. Leuthardt; Xiaomei Pei; Jonathan D. Breshears; Charles M. Gaona; Mohit Sharma; Zac Freudenberg; Dennis L. Barbour

Several scientists have proposed different models for cortical processing of speech. Classically, the regions participating in language were thought to be modular with a linear sequence of activations. More recently, modern theoretical models have posited a more hierarchical and distributed interaction of anatomic areas for the various stages of speech processing. Traditional imaging techniques can only define the location or time of cortical activation, which impedes the further evaluation and refinement of these models. In this study, we take advantage of recordings from the surface of the brain [electrocorticography (ECoG)], which can accurately detect the location and timing of cortical activations, to study the time course of ECoG high gamma (HG) modulations during an overt and covert word repetition task for different cortical areas. For overt word production, our results show substantial perisylvian cortical activations early in the perceptual phase of the task that were maintained through word articulation. However, this broad activation is attenuated during the expressive phase of covert word repetition. Across the different repetition tasks, the utilization of the different cortical sites within the perisylvian region varied in the degree of activation dependent on which stimulus was provided (auditory or visual cue) and whether the word was to be spoken or imagined. Taken together, the data support current models of speech that have been previously described with functional imaging. Moreover, this study demonstrates that the broad perisylvian speech network activates early and maintains suprathreshold activation throughout the word repetition task that appears to be modulated by the demands of different conditions.


Hearing Research | 2011

Rate-level responses in awake marmoset auditory cortex.

Paul V. Watkins; Dennis L. Barbour

Investigations of auditory neuronal firing rate as a function of sound level have revealed a wide variety of rate-level function shapes, including neurons with nonmonotonic or level-tuned functions. These neurons have an unclear role in auditory processing but have been found to be quite common. In the present study of awake marmoset primary auditory cortex (A1) neurons, 56% (305 out of 544), when stimulated with tones at the highest sound level tested, exhibited a decrement in driven rate of at least 50% from the maximum. These nonmonotonic neurons demonstrated significantly lower response thresholds than monotonic neurons, although both populations exhibited thresholds skewed toward lower values. Nonmonotonic neurons significantly outnumbered monotonic neurons in the frequency range 6-13 kHz, which is the frequency range containing most marmoset vocalization energy. Spontaneous rate was inversely correlated with threshold in both populations, and spontaneous rates of nonmonotonic neurons had significantly lower values than spontaneous rates of monotonic neurons, although distributions of maximum driven rates were not significantly different. Finally, monotonicity was found to be organized within electrode penetrations like characteristic frequency but with less structure. These findings are consistent with the hypothesis that nonmonotonic neurons play a unique role in representing sound level, particularly at the lowest sound levels and for complex vocalizations.
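The abstract's classification criterion is concrete: a neuron counts as nonmonotonic when its driven rate at the highest sound level tested falls at least 50% below its maximum driven rate. A minimal sketch of that rule (the rate-level functions below are invented for illustration):

```python
import numpy as np

def is_nonmonotonic(rates, criterion=0.5):
    """Classify a rate-level function as nonmonotonic (level-tuned) when the
    driven rate at the highest level tested falls at least `criterion` (here
    50%) below the maximum driven rate, per the criterion in the abstract.
    `rates` lists driven rates at increasing sound levels."""
    rates = np.asarray(rates, dtype=float)
    return rates[-1] <= (1 - criterion) * rates.max()

# Hypothetical driven rates (spikes/s) across increasing sound levels
monotonic = [2, 5, 12, 20, 28, 30]   # saturating growth with level
level_tuned = [2, 10, 25, 18, 8, 3]  # peaks at a preferred level, then falls
```

Applied to the invented examples, `is_nonmonotonic(level_tuned)` is true and `is_nonmonotonic(monotonic)` is false; in the study, 305 of 544 neurons (56%) met the nonmonotonic criterion.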


Neuroscience & Biobehavioral Reviews | 2011

Intensity-invariant coding in the auditory system.

Dennis L. Barbour

The auditory system faithfully represents sufficient details from sound sources such that downstream cognitive processes are capable of acting upon this information effectively even in the face of signal uncertainty, degradation or interference. This robust sound source representation leads to an invariance in perception vital for animals to interact effectively with their environment. Due to unique nonlinearities in the cochlea, sound representations early in the auditory system exhibit a large amount of variability as a function of stimulus intensity. In other words, changes in stimulus intensity, such as for sound sources at differing distances, create a unique challenge for the auditory system to encode sounds invariantly across the intensity dimension. This challenge and some strategies available to sensory systems to eliminate intensity as an encoding variable are discussed, with a special emphasis upon sound encoding.


Biological Cybernetics | 2009

A computational framework for topographies of cortical areas

Paul V. Watkins; Thomas L. Chen; Dennis L. Barbour

Self-organizing feature maps (SOFMs) represent a dimensionality-reduction algorithm that has been used to replicate feature topographies observed experimentally in primary visual cortex (V1). We used the SOFM algorithm to model possible topographies of generic sensory cortical areas containing up to five arbitrary physiological features. This study explored the conditions under which these multi-feature SOFMs contained two features that were mapped monotonically and aligned orthogonally with one another (i.e., “globally orthogonal”), as well as the conditions under which the map of one feature aligned with the longest anatomical dimension of the modeled cortical area (i.e., “dominant”). In a single SOFM with more than two features, we never observed more than one dominant feature, nor did we observe two globally orthogonal features in the same map in which a dominant feature occurred. Whether dominance or global orthogonality occurred depended upon how heavily weighted the features were relative to one another. The most heavily weighted features are likely to correspond to those physical stimulus properties transduced directly by the sensory epithelium of a particular sensory modality. Our results imply, therefore, that in the primary cortical area of sensory modalities with a two-dimensional sensory epithelium, these two features are likely to be organized globally orthogonally to one another, and neither feature is likely to be dominant. In the primary cortical area of sensory modalities with a one-dimensional sensory epithelium, however, this feature is likely to be dominant, and no two features are likely to be organized globally orthogonally to one another. Because the auditory system transduces a single stimulus feature (i.e., frequency) along the entire length of the cochlea, these findings may have particular relevance for topographic maps of primary auditory cortex.
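The core of the SOFM algorithm referenced above is simple to state: units on a 2-D grid compete for each input sample, and the winner and its grid neighbors move their weights toward that sample, so nearby units come to represent nearby feature values. A minimal Kohonen-map sketch (grid size, decay schedules, and the uniform two-feature input are illustrative choices, not the study's configuration):

```python
import numpy as np

def train_sofm(samples, grid_shape=(10, 10), epochs=20, lr0=0.5, sigma0=3.0):
    """Minimal Kohonen self-organizing feature map: units on a 2-D grid learn
    a topography of the input feature space."""
    rng = np.random.default_rng(0)
    h, w = grid_shape
    weights = rng.random((h, w, samples.shape[1]))
    # Grid coordinates, used by the neighborhood function
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    n_steps, step = epochs * len(samples), 0
    for _ in range(epochs):
        for x in rng.permutation(samples):
            frac = step / n_steps
            lr = lr0 * (1 - frac)              # learning rate decays linearly
            sigma = sigma0 * (1 - frac) + 0.5  # neighborhood radius shrinks
            # Best-matching unit: the grid cell whose weights are closest to x
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(dists.argmin(), dists.shape)
            # Gaussian neighborhood pulls the BMU and nearby units toward x
            grid_d2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
            influence = np.exp(-grid_d2 / (2 * sigma ** 2))
            weights += lr * influence[..., None] * (x - weights)
            step += 1
    return weights

# Two arbitrary features drawn uniformly, standing in for physiological features
data = np.random.default_rng(1).random((500, 2))
trained = train_sofm(data)
```

In the study's terms, the relative weighting of features (here equal) is what governs whether the trained map exhibits a dominant feature or two globally orthogonal ones.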


Ear and Hearing | 2015

Fast, Continuous Audiogram Estimation Using Machine Learning.

Xinyu D. Song; Brittany M. Wallace; Jacob R. Gardner; Noah M. Ledbetter; Kilian Q. Weinberger; Dennis L. Barbour

Objectives: Pure-tone audiometry has been a staple of hearing assessments for decades. Many different procedures have been proposed for measuring thresholds with pure tones by systematically manipulating intensity one frequency at a time until a discrete threshold function is determined. The authors have developed a novel nonparametric approach for estimating a continuous threshold audiogram using Bayesian estimation and machine learning classification. The objective of this study was to assess the accuracy and reliability of this new method relative to a commonly used threshold measurement technique. Design: The authors performed air conduction pure-tone audiometry on 21 participants between the ages of 18 and 90 years with varying degrees of hearing ability. Two repetitions of automated machine learning audiogram estimation and one repetition of conventional modified Hughson-Westlake ascending–descending audiogram estimation were acquired by an audiologist. The estimated hearing thresholds of these two techniques were compared at standard audiogram frequencies (i.e., 0.25, 0.5, 1, 2, 4, 8 kHz). Results: The two threshold estimate methods delivered very similar estimates at standard audiogram frequencies. Specifically, the mean absolute difference between estimates was 4.16 ± 3.76 dB HL. The mean absolute difference between repeated measurements of the new machine learning procedure was 4.51 ± 4.45 dB HL. These values compare favorably with those of other threshold audiogram estimation procedures. Furthermore, the machine learning method generated threshold estimates from significantly fewer samples than the modified Hughson-Westlake procedure while returning a continuous threshold estimate as a function of frequency. Conclusions: The new machine learning audiogram estimation technique produces continuous threshold audiogram estimates accurately, reliably, and efficiently, making it a strong candidate for widespread application in clinical and research audiometry.
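The study's estimator is a nonparametric Gaussian-process classifier over the whole frequency-intensity plane, which is beyond a short sketch; the simplified single-frequency analogue below conveys only the Bayesian core: each heard/not-heard tone response updates a posterior over the threshold through a sigmoid psychometric model. The grid, slope, and simulated listener are all assumptions for illustration:

```python
import numpy as np

def posterior_threshold(trials, thresholds=np.arange(-10, 91), slope=5.0):
    """Grid-based Bayesian estimate of a single-frequency hearing threshold.
    Each trial is (intensity_dB, heard); detection probability is modeled as
    a sigmoid psychometric function of intensity minus threshold. This is a
    simplified stand-in for the paper's Gaussian-process classifier."""
    log_post = np.zeros_like(thresholds, dtype=float)  # flat prior over grid
    for intensity, heard in trials:
        p_heard = 1 / (1 + np.exp(-(intensity - thresholds) / slope))
        log_post += np.log(p_heard if heard else 1 - p_heard)
    post = np.exp(log_post - log_post.max())  # normalize in a stable way
    post /= post.sum()
    return thresholds[post.argmax()], post

# Simulated listener with a true 40 dB HL threshold (hypothetical data)
rng = np.random.default_rng(0)
true_thresh, slope = 40.0, 5.0
tones = rng.uniform(0, 80, size=60)
heard = rng.random(60) < 1 / (1 + np.exp(-(tones - true_thresh) / slope))
estimate, post = posterior_threshold(list(zip(tones, heard)))
```

The sample-efficiency advantage reported in the paper comes from extending this idea: the Gaussian-process model shares information across frequencies, so each tone informs the entire continuous threshold curve rather than a single frequency.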

Collaboration


Dive into Dennis L. Barbour's collaborations.

Top Co-Authors

Paul V. Watkins
Washington University in St. Louis

Eric C. Leuthardt
Washington University in St. Louis

Noah M. Ledbetter
Washington University in St. Louis

Xinyu D. Song
Washington University in St. Louis

Jacob R. Gardner
Washington University in St. Louis

Kiron A. Sukesan
Washington University in St. Louis

Ruiye Ni
Washington University in St. Louis

Thomas L. Chen
University of Washington