Publication


Featured research published by Kristofer E. Bouchard.


Nature | 2013

Functional organization of human sensorimotor cortex for speech articulation

Kristofer E. Bouchard; Nima Mesgarani; Keith Johnson; Edward F. Chang

Speaking is one of the most complex actions that we perform, but nearly all of us learn to do it effortlessly. Production of fluent speech requires the precise, coordinated movement of multiple articulators (for example, the lips, jaw, tongue and larynx) over rapid time scales. Here we used high-resolution, multi-electrode cortical recordings during the production of consonant-vowel syllables to determine the organization of speech sensorimotor cortex in humans. We found speech-articulator representations that are arranged somatotopically on ventral pre- and post-central gyri, and that partially overlap at individual electrodes. These representations were coordinated temporally as sequences during syllable production. Spatial patterns of cortical activity showed an emergent, population-level representation, which was organized by phonetic features. Over tens of milliseconds, the spatial patterns transitioned between distinct representations for different consonants and vowels. These results reveal the dynamic organization of speech sensorimotor cortex during the generation of multi-articulator movements that underlies our ability to speak.


The Journal of Neuroscience | 2012

Cannabinoid receptor 2 signaling in peripheral immune cells modulates disease onset and severity in mouse models of Huntington's disease

Jill Bouchard; Jennifer Truong; Kristofer E. Bouchard; Diana Dunkelberger; Sandrine Desrayaud; Saliha Moussaoui; Sarah J. Tabrizi; Nephi Stella; Paul J. Muchowski

Peripheral immune cells and brain microglia exhibit an activated phenotype in premanifest Huntington's disease (HD) patients that persists chronically and correlates with clinical measures of neurodegeneration. However, whether activation of the immune system contributes to neurodegeneration in HD, or is a consequence thereof, remains unclear. Signaling through cannabinoid receptor 2 (CB2) dampens immune activation. Here, we show that the genetic deletion of CB2 receptors in a slowly progressing HD mouse model accelerates the onset of motor deficits and increases their severity. Treatment of mice with a CB2 receptor agonist extends life span and suppresses motor deficits, synapse loss, and CNS inflammation, while a peripherally restricted CB2 receptor antagonist blocks these effects. CB2 receptors regulate blood interleukin-6 (IL-6) levels, and IL-6 neutralizing antibodies partially rescue motor deficits and weight loss in HD mice. These findings support a causal link between CB2 receptor signaling in peripheral immune cells and the onset and severity of neurodegeneration in HD, and they provide a novel therapeutic approach to treat HD.


The Journal of Neuroscience | 2014

Control of Spoken Vowel Acoustics and the Influence of Phonetic Context in Human Speech Sensorimotor Cortex

Kristofer E. Bouchard; Edward F. Chang

Speech production requires the precise control of vocal tract movements to generate individual speech sounds (phonemes) which, in turn, are rapidly organized into complex sequences. Multiple productions of the same phoneme can exhibit substantial variability, some of which is inherent to control of the vocal tract and its biomechanics, and some of which reflects the contextual effects of surrounding phonemes (“coarticulation”). The role of the CNS in these aspects of speech motor control is not well understood. To address these issues, we recorded multielectrode cortical activity directly from human ventral sensory-motor cortex (vSMC) during the production of consonant-vowel syllables. We analyzed the relationship between the acoustic parameters of vowels (pitch and formants) and cortical activity on a single-trial level. We found that vSMC activity robustly predicted acoustic parameters across vowel categories (up to 80% of variance), as well as different renditions of the same vowel (up to 25% of variance). Furthermore, we observed significant contextual effects on vSMC representations of produced phonemes that suggest active control of coarticulation: vSMC representations for vowels were biased toward the representations of the preceding consonant, and conversely, representations for consonants were biased toward upcoming vowels. These results reveal that vSMC activity for phonemes is not invariant and provide insight into the cortical mechanisms of coarticulation.
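
The decoding result here is essentially a cross-validated regression from single-trial cortical activity to acoustic parameters. Below is a minimal sketch of that kind of analysis, not the paper's actual pipeline; the ridge regressor, data shapes, and synthetic signals are all illustrative assumptions.

```python
# Minimal sketch (not the paper's pipeline): cross-validated ridge
# regression from single-trial vSMC high-gamma features to a vowel
# acoustic parameter, scored as fraction of variance explained (R^2).
# All shapes and signals below are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_electrodes = 300, 64
X = rng.standard_normal((n_trials, n_electrodes))  # high-gamma per trial and electrode
w = rng.standard_normal(n_electrodes)
f1 = X @ w + 0.5 * rng.standard_normal(n_trials)   # stand-in for formant F1 (z-scored)

model = RidgeCV(alphas=np.logspace(-2, 3, 20))
scores = cross_val_score(model, X, f1, cv=5, scoring="r2")
print(f"variance explained: {scores.mean():.2f}")
```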


The Journal of Neuroscience | 2015

Dynamic encoding of speech sequence probability in human temporal cortex

Matthew K. Leonard; Kristofer E. Bouchard; Claire Tang; Edward F. Chang

Sensory processing involves identification of stimulus features, but also integration with the surrounding sensory and cognitive context. Previous work in animals and humans has shown fine-scale sensitivity to context in the form of learned knowledge about the statistics of the sensory environment, including relative probabilities of discrete units in a stream of sequential auditory input. These statistics are a defining characteristic of one of the most important sequential signals humans encounter: speech. For speech, extensive exposure to a language tunes listeners to the statistics of sound sequences. To address how speech sequence statistics are neurally encoded, we used high-resolution direct cortical recordings from human lateral superior temporal cortex as subjects listened to words and nonwords with varying transition probabilities between sound segments. In addition to their sensitivity to acoustic features (including contextual features, such as coarticulation), we found that neural responses dynamically encoded the language-level probability of both preceding and upcoming speech sounds. Transition probability first negatively modulated neural responses, followed by positive modulation of neural responses, consistent with coordinated predictive and retrospective recognition processes, respectively. Furthermore, transition probability encoding was different for real English words compared with nonwords, providing evidence for online interactions with high-order linguistic knowledge. These results demonstrate that sensory processing of deeply learned stimuli involves integrating physical stimulus features with their contextual sequential structure. Despite not being consciously aware of phoneme sequence statistics, listeners use this information to process spoken input and to link low-level acoustic representations with linguistic information about word identity and meaning.
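
The statistic at the center of this study is the transition probability between successive speech sounds. The following is a minimal sketch of how such bigram probabilities are estimated from a corpus; the toy phoneme strings are an illustrative assumption, not the study's stimuli.

```python
# Minimal sketch of the statistic in question: maximum-likelihood
# transition probability P(next | current) over segment bigrams,
# estimated from a toy corpus (an assumption, not the study's stimuli).
from collections import Counter

corpus = [["s", "t", "r", "i", "t"], ["s", "t", "a", "r"], ["t", "r", "i"]]

bigrams, contexts = Counter(), Counter()
for word in corpus:
    for a, b in zip(word, word[1:]):
        bigrams[(a, b)] += 1
        contexts[a] += 1

def transition_prob(a, b):
    """P(b | a): probability that segment b follows segment a."""
    return bigrams[(a, b)] / contexts[a] if contexts[a] else 0.0

print(transition_prob("s", "t"))  # 1.0: in this corpus 's' is always followed by 't'
print(transition_prob("t", "r"))  # ~0.67: 'r' follows 't' in 2 of its 3 transitions
```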


Current Opinion in Neurobiology | 2014

Speech map in the human ventral sensory-motor cortex

David F. Conant; Kristofer E. Bouchard; Edward F. Chang

The study of spatial maps of the ventral sensory-motor cortex (vSMC) dates back to the earliest cortical stimulation studies. This review surveys recent and historical reports on the features and function of spatial maps within the vSMC as they relate to the human behavior of speaking. Representations of the vocal tract, like those of other body parts, are arranged in a somatotopic fashion within the ventral SMC. This region has unique features and connectivity that may give insight into its specialized function in speech production. New methods allow us to probe further into the functional role of this organization by studying the spatial dynamics of the vSMC during natural speaking in humans.


International Conference of the IEEE Engineering in Medicine and Biology Society | 2014

Neural decoding of spoken vowels from human sensory-motor cortex with high-density electrocorticography

Kristofer E. Bouchard; Edward F. Chang

We present the first demonstration of single-trial neural decoding of vowel acoustic features during speech production with high performance. The ability to predict trial-by-trial fluctuations in speech production was facilitated by using high-density, large-area electrocorticography (ECoG) combined with an adaptive principal components regression. In experiments from two human neurosurgical patients with a high-density 256-channel ECoG grid implanted over speech cortices, we demonstrate that as much as 81% of the acoustic variability across vowels could be accurately predicted from the spatial patterns of neural activity during speech production. These results demonstrate continuous, single-trial decoding of vowel acoustics.
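
The abstract names the decoder family: principal components regression, which projects the multichannel activity onto a few principal components and then fits a linear regression on them. Here is a minimal, fixed-dimension sketch; the paper's variant is adaptive, and the toy data and component count here are assumptions.

```python
# Minimal sketch of principal components regression: reduce the
# 256-channel activity to a few components, then regress the acoustic
# feature on them. Fixed component count and random data are assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 256))      # 200 trials x 256 ECoG channels (toy)
y = X[:, :10] @ rng.standard_normal(10)  # toy acoustic feature driven by a few channels

pcr = make_pipeline(PCA(n_components=20), LinearRegression())
print(cross_val_score(pcr, X, y, cv=5, scoring="r2").mean())
```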


The Journal of Neuroscience | 2013

Neural Encoding and Integration of Learned Probabilistic Sequences in Avian Sensory-Motor Circuitry

Kristofer E. Bouchard; Michael S. Brainard

Many complex behaviors, such as human speech and birdsong, reflect a set of categorical actions that can be flexibly organized into variable sequences. However, little is known about how the brain encodes the probabilities of such sequences. Behavioral sequences are typically characterized by the probability of transitioning from a given action to any subsequent action (which we term “divergence probability”). In contrast, we hypothesized that neural circuits might encode the probability of transitioning to a given action from any preceding action (which we term “convergence probability”). The convergence probability of repeatedly experienced sequences could naturally become encoded by Hebbian plasticity operating on the patterns of neural activity associated with those sequences. To determine whether convergence probability is encoded in the nervous system, we investigated how auditory-motor neurons in vocal premotor nucleus HVC of songbirds encode different probabilistic characterizations of produced syllable sequences. We recorded responses to auditory playback of pseudorandomly sequenced syllables from the bird's repertoire, and found that variations in responses to a given syllable could be explained by a positive linear dependence on the convergence probability of preceding sequences. Furthermore, convergence probability accounted for more response variation than other probabilistic characterizations, including divergence probability. Finally, we found that responses integrated over >7–10 syllables (∼700–1000 ms) with the sign, gain, and temporal extent of integration depending on convergence probability. Our results demonstrate that convergence probability is encoded in sensory-motor circuitry of the song system, and suggest that encoding of convergence probability is a general feature of sensory-motor circuits.
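
Divergence and convergence probability differ only in how a transition count is normalized: by the frequency of the preceding syllable, or by the frequency of the following one. A worked toy example, with made-up syllable sequences, makes the contrast concrete.

```python
# Worked toy example of the paper's two statistics. For a transition
# a -> b: divergence probability normalizes the count by how often 'a'
# leads anywhere; convergence probability normalizes by how often 'b'
# is arrived at. The syllable strings are made up for illustration.
from collections import Counter

songs = ["abcd", "abce", "fbcd"]  # toy syllable sequences

pair, first, second = Counter(), Counter(), Counter()
for song in songs:
    for a, b in zip(song, song[1:]):
        pair[(a, b)] += 1
        first[a] += 1   # 'a' begins a transition
        second[b] += 1  # 'b' ends a transition

divergence = lambda a, b: pair[(a, b)] / first[a]    # P(b | a)
convergence = lambda a, b: pair[(a, b)] / second[b]  # P(a preceded b)

print(divergence("b", "c"), convergence("b", "c"))  # 1.0 1.0: 'b' and 'c' always pair
print(divergence("c", "d"), convergence("f", "b"))  # ~0.67 ~0.33
```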


PLOS ONE | 2016

High-Resolution, Non-Invasive Imaging of Upper Vocal Tract Articulators Compatible with Human Brain Recordings

Kristofer E. Bouchard; David F. Conant; Gopala K. Anumanchipalli; Benjamin K. Dichter; Kris S. Chaisanguanthum; Keith Johnson; Edward F. Chang

A complete neurobiological understanding of speech motor control requires determination of the relationship between simultaneously recorded neural activity and the kinematics of the lips, jaw, tongue, and larynx. Many speech articulators are internal to the vocal tract, and therefore simultaneously tracking the kinematics of all articulators is nontrivial—especially in the context of human electrophysiology recordings. Here, we describe a noninvasive, multi-modal imaging system to monitor vocal tract kinematics, demonstrate this system in six speakers during production of nine American English vowels, and provide new analysis of such data. Classification and regression analysis revealed considerable variability in the articulator-to-acoustic relationship across speakers. Non-negative matrix factorization extracted basis sets capturing vocal tract shapes allowing for higher vowel classification accuracy than traditional methods. Statistical speech synthesis generated speech from vocal tract measurements, and we demonstrate perceptual identification. We demonstrate the capacity to predict lip kinematics from ventral sensorimotor cortical activity. These results demonstrate a multi-modal system to non-invasively monitor articulator kinematics during speech production, describe novel analytic methods for relating kinematic data to speech acoustics, and provide the first decoding of speech kinematics from electrocorticography. These advances will be critical for understanding the cortical basis of speech production and the creation of vocal prosthetics.
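
One analysis step the abstract describes, factoring vocal tract measurements with non-negative matrix factorization and classifying vowels from the resulting weights, can be sketched as follows. The data here are random non-negative stand-ins and the component count is an assumption, so the classifier should score near chance.

```python
# Minimal sketch of the NMF step described above: factor non-negative
# vocal tract measurements into a small basis of tract "shapes" plus
# per-trial weights, then classify vowels from the weights.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((270, 100))       # 270 productions x 100 contour points (non-negative)
vowel = rng.integers(0, 9, 270)  # 9 American English vowels, coded 0..8

W = NMF(n_components=8, init="nndsvda", max_iter=500).fit_transform(X)  # trial weights
clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, W, vowel, cv=5).mean())  # chance is about 1/9 here
```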


Journal of Neural Engineering | 2016

Spatial resolution dependence on spectral frequency in human speech cortex electrocorticography

Leah Muller; Liberty S. Hamilton; Erik Edwards; Kristofer E. Bouchard; Edward F. Chang

OBJECTIVE: Electrocorticography (ECoG) has become an important tool in human neuroscience and has tremendous potential for emerging applications in neural interface technology. Electrode array design parameters are outstanding issues for both research and clinical applications, and these parameters depend critically on the nature of the neural signals to be recorded. Here, we investigate the functional spatial resolution of neural signals recorded at the human cortical surface. We empirically derive spatial spread functions to quantify the shared neural activity for each frequency band of the electrocorticogram.

APPROACH: Five subjects with high-density (4 mm center-to-center spacing) ECoG grid implants participated in speech perception and production tasks while neural activity was recorded from the speech cortex, including superior temporal gyrus, precentral gyrus, and postcentral gyrus. The cortical surface field potential was decomposed into traditional EEG frequency bands. Signal similarity between electrode pairs for each frequency band was quantified using a Pearson correlation coefficient.

MAIN RESULTS: The correlation of neural activity between electrode pairs was inversely related to the distance between the electrodes; this relationship was used to quantify spatial falloff functions for cortical subdomains. As expected, lower frequencies remained correlated over larger distances than higher frequencies. However, both the envelope and phase of gamma and high gamma frequencies (30-150 Hz) are largely uncorrelated (<90%) at 4 mm, the smallest spacing of the high-density arrays. Thus, ECoG arrays smaller than 4 mm have significant promise for increasing signal resolution at high frequencies, whereas less additional gain is achieved for lower frequencies.

SIGNIFICANCE: Our findings quantitatively demonstrate the dependence of ECoG spatial resolution on the neural frequency of interest. We demonstrate that this relationship is consistent across patients and across cortical areas during activity.
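
The core analysis, pairwise Pearson correlations fit against inter-electrode distance to obtain a spatial spread function, can be sketched as below. The 4x4 grid, the smoothed-noise signals, and the exponential falloff form are illustrative assumptions.

```python
# Minimal sketch of the spatial-falloff analysis: Pearson correlation
# for every electrode pair, fit against inter-electrode distance with
# an exponential spread function. Grid and signals are synthetic.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
t = 5000
xy = np.array([(4.0 * i, 4.0 * j) for i in range(4) for j in range(4)])  # 4 mm pitch
field = gaussian_filter(rng.standard_normal((t, 4, 4)), sigma=(0, 1, 1)) # spatially smooth noise
sig = field.reshape(t, 16).T                                             # 16 electrodes x time

r = np.corrcoef(sig)
dist = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)
iu = np.triu_indices(16, k=1)  # unique electrode pairs

falloff = lambda d, a, tau: a * np.exp(-d / tau)  # spatial spread function
(a, tau), _ = curve_fit(falloff, dist[iu], r[iu], p0=(1.0, 4.0))
print(f"correlation length ~ {tau:.1f} mm")
```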


The Journal of Neuroscience | 2018

Human sensorimotor cortex control of directly measured vocal tract movements during vowel production

David F. Conant; Kristofer E. Bouchard; Matthew K. Leonard; Edward F. Chang

During speech production, we make vocal tract movements with remarkable precision and speed. Our understanding of how the human brain achieves such proficient control is limited, in part due to the challenge of simultaneously acquiring high-resolution neural recordings and detailed vocal tract measurements. To overcome this challenge, we combined ultrasound and video monitoring of the supralaryngeal articulators (lips, jaw, and tongue) with electrocorticographic recordings from the cortical surface of 4 subjects (3 female, 1 male) to investigate how neural activity in the ventral sensory-motor cortex (vSMC) relates to measured articulator movement kinematics (position, speed, velocity, acceleration) during the production of English vowels. We found that high-gamma activity at many individual vSMC electrodes strongly encoded the kinematics of one or more articulators, but less so for vowel formants and vowel identity. Neural population decoding methods further revealed the structure of kinematic features that distinguish vowels. Encoding of articulator kinematics was sparsely distributed across time and primarily occurred around vowel onset and offset. In contrast, encoding was low during the steady-state portion of the vowel, despite sustained neural activity at some electrodes. Significant representations were found for all kinematic parameters, but speed was the most robust. These findings, enabled by direct vocal tract monitoring, provide novel insights into the articulatory kinematic parameters encoded in the vSMC during speech production.

SIGNIFICANCE STATEMENT: Speaking requires precise control and coordination of the vocal tract articulators (lips, jaw, and tongue). Despite the impressive proficiency with which humans move these articulators during speech production, our understanding of how the brain achieves such control is rudimentary, in part because the movements themselves are difficult to observe. By simultaneously measuring speech movements and the neural activity that gives rise to them, we demonstrate how neural activity in sensorimotor cortex produces complex, coordinated movements of the vocal tract.
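
At the single-electrode level, an encoding analysis of this kind amounts to regressing high-gamma amplitude on kinematic features (position, velocity, speed, acceleration). A minimal sketch under synthetic-data assumptions:

```python
# Minimal single-electrode encoding-model sketch in the spirit of the
# analysis: regress high-gamma amplitude on articulator kinematics,
# scored by cross-validated R^2. Trajectory and neural signal are
# synthetic assumptions, not the paper's data.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 2000)
pos = np.c_[np.sin(t), np.cos(1.7 * t)]  # toy 2-D articulator trajectory
vel = np.gradient(pos, t, axis=0)
acc = np.gradient(vel, t, axis=0)
speed = np.linalg.norm(vel, axis=1)
feats = np.c_[pos, vel, speed, acc]      # position, velocity, speed, acceleration

hg = feats @ rng.standard_normal(feats.shape[1]) + rng.standard_normal(len(t))  # stand-in high-gamma

enc = RidgeCV(alphas=np.logspace(-2, 3, 20))
print(cross_val_score(enc, feats, hg, cv=5, scoring="r2").mean())
```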

Collaboration


Dive into Kristofer E. Bouchard's collaborations.

Top Co-Authors

Prabhat

Lawrence Berkeley National Laboratory

Max Dougherty

Lawrence Berkeley National Laboratory

Oliver Rübel

Lawrence Berkeley National Laboratory
