Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where John F. Houde is active.

Publication


Featured research published by John F. Houde.


Journal of Cognitive Neuroscience | 2002

Modulation of the Auditory Cortex during Speech: An MEG Study

John F. Houde; Srikantan S. Nagarajan; Kensuke Sekihara; Michael M. Merzenich

Several behavioral and brain imaging studies have demonstrated a significant interaction between speech perception and speech production. In this study, auditory cortical responses to speech were examined during self-production and feedback alteration. Magnetic field recordings were obtained from both hemispheres in subjects who spoke while hearing controlled acoustic versions of their speech feedback via earphones. These responses were compared to recordings made while subjects listened to a tape playback of their production. The amplitude of tape playback was adjusted to match the amplitude of self-produced speech. Recordings of evoked responses to both self-produced and tape-recorded speech were obtained free of movement-related artifacts. Responses to self-produced speech were weaker than were responses to tape-recorded speech. Responses to tones were also weaker during speech production, when compared with responses to tones recorded in the presence of speech from tape playback. However, responses evoked by gated noise stimuli did not differ for recordings made during self-produced speech versus recordings made during tape-recorded speech playback. These data suggest that during speech production, the auditory cortex (1) attenuates its sensitivity and (2) modulates its activity as a function of the expected acoustic feedback.


Frontiers in Human Neuroscience | 2011

Speech Production as State Feedback Control

John F. Houde; Srikantan S. Nagarajan

Spoken language exists because of a remarkable neural process. Inside a speaker's brain, an intended message gives rise to neural signals activating the muscles of the vocal tract. The process is remarkable because these muscles are activated in just the right way that the vocal tract produces sounds a listener understands as the intended message. What is the best approach to understanding the neural substrate of this crucial motor control process? One of the key recent modeling developments in neuroscience has been the use of state feedback control (SFC) theory to explain the role of the CNS in motor control. SFC postulates that the CNS controls motor output by (1) estimating the current dynamic state of the thing (e.g., arm) being controlled, and (2) generating controls based on this estimated state. SFC has successfully predicted a great range of non-speech motor phenomena, but as yet has not received attention in the speech motor control community. Here, we review some of the key characteristics of speech motor control and what they say about the role of the CNS in the process. We then discuss prior efforts to model the role of the CNS in speech motor control, and argue that these models have inherent limitations – limitations that are overcome by an SFC model of speech motor control, which we describe. We conclude by discussing a plausible neural substrate of our model.
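The two SFC ingredients named above (estimating the current state from predicted versus actual sensory feedback, and generating controls from that estimate) can be illustrated with a minimal discrete-time sketch. This is a generic observer-plus-controller toy, not the authors' model; the plant matrices, gains, and target below are illustrative placeholders.

import numpy as np

# Minimal state feedback control (SFC) sketch, assuming a linear plant.
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # plant dynamics (e.g., articulator position/velocity)
B = np.array([[0.0], [0.1]])             # effect of the motor command on the state
C = np.array([[1.0, 0.0]])               # sensory mapping: state -> feedback (e.g., a formant)
L = np.array([[0.5], [0.3]])             # observer gain for correcting the estimate
K = np.array([[2.0, 1.0]])               # control gain
target = np.array([[1.0], [0.0]])        # desired state
x_hat = np.zeros((2, 1))                 # internal estimate of the current state

def sfc_step(y_observed):
    """One control cycle: correct the state estimate with the sensory
    prediction error, generate a control from it, then predict forward."""
    global x_hat
    y_pred = C @ x_hat                          # efference-copy-based feedback prediction
    x_hat = x_hat + L @ (y_observed - y_pred)   # (1) estimate the current dynamic state
    u = -K @ (x_hat - target)                   # (2) generate controls from that estimate
    x_hat = A @ x_hat + B @ u                   # forward prediction of the next state
    return u

u = sfc_step(np.array([[0.2]]))          # example: one cycle with observed feedback of 0.2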


Journal of Cognitive Neuroscience | 2009

Motor-induced suppression of the auditory cortex

Sheye O. Aliu; John F. Houde; Srikantan S. Nagarajan

Sensory responses to stimuli that are triggered by a self-initiated motor act are suppressed when compared with the response to the same stimuli triggered externally, a phenomenon referred to as motor-induced suppression (MIS) of sensory cortical feedback. Studies in the somatosensory system suggest that such suppression might be sensitive to delays between the motor act and the stimulus onset, and a recent study in the auditory system suggests that such MIS develops rapidly. In three MEG experiments, we characterize the properties of MIS by examining the M100 response from the auditory cortex to a simple tone triggered by a button press. In Experiment 1, we found that MIS develops for zero delays but does not generalize to nonzero delays. In Experiment 2, we found that MIS developed for 100-msec delays within 300 trials and occurs in excess of auditory habituation. In Experiment 3, we found that unlike MIS for zero delays, MIS for nonzero delays does not exhibit sensitivity to sensory, delay, or motor-command changes. These results are discussed in relation to suppression to self-produced speech and a general model of sensory motor processing and control.


NeuroImage | 2013

Language mapping with navigated repetitive TMS: proof of technique and validation.

Phiroz E. Tarapore; Anne M. Findlay; Susanne Honma; Danielle Mizuiri; John F. Houde; Mitchel S. Berger; Srikantan S. Nagarajan

OBJECTIVE: Lesion-based mapping of speech pathways has been possible only during invasive neurosurgical procedures using direct cortical stimulation (DCS). However, navigated transcranial magnetic stimulation (nTMS) may allow for lesion-based interrogation of language pathways noninvasively. Although not lesion-based, magnetoencephalographic imaging (MEGI) is another noninvasive modality for language mapping. In this study, we compare the accuracy of nTMS and MEGI with DCS.

METHODS: Subjects with lesions around cortical language areas underwent preoperative nTMS and MEGI for language mapping. nTMS maps were generated using a repetitive TMS protocol to deliver trains of stimulations during a picture naming task. MEGI activation maps were derived from adaptive spatial filtering of beta-band power decreases prior to overt speech during picture naming and verb generation tasks. The subjects subsequently underwent awake language mapping via intraoperative DCS. The language maps obtained from each of the 3 modalities were recorded and compared.

RESULTS: nTMS and MEGI were performed on 12 subjects. nTMS yielded 21 positive language disruption sites (11 speech arrest, 5 anomia, and 5 other) while DCS yielded 10 positive sites (2 speech arrest, 5 anomia, and 3 other). MEGI isolated 32 sites of peak activation with language tasks. Positive language sites were most commonly found in the pars opercularis for all three modalities. In 9 instances the positive DCS site corresponded to a positive nTMS site, while in 1 instance it did not. In 4 instances, a positive nTMS site corresponded to a negative DCS site, while 169 instances of negative nTMS and DCS were recorded. The sensitivity of nTMS was therefore 90%, specificity was 98%, the positive predictive value was 69%, and the negative predictive value was 99% as compared with intraoperative DCS. MEGI language sites for verb generation and object naming correlated with nTMS sites in 5 subjects, and with DCS sites in 2 subjects.

CONCLUSION: Maps of language function generated with nTMS correlate well with those generated by DCS. Negative nTMS mapping also correlates with negative DCS mapping. In our study, MEGI lacks the same level of correlation with intraoperative mapping; nevertheless, it provides useful adjunct information in some cases. nTMS may offer a lesion-based method for noninvasively interrogating language pathways and be valuable in managing patients with peri-eloquent lesions.
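The accuracy figures reported above follow directly from the stated site counts (9 true positives, 1 false negative, 4 false positives, 169 true negatives). A quick check of that arithmetic:

# Diagnostic accuracy of nTMS relative to DCS, from the counts reported above.
tp, fn, fp, tn = 9, 1, 4, 169

sensitivity = tp / (tp + fn)   # 9/10    = 0.90
specificity = tn / (tn + fp)   # 169/173 ~ 0.977
ppv = tp / (tp + fp)           # 9/13    ~ 0.692
npv = tn / (tn + fn)           # 169/170 ~ 0.994

print(f"sensitivity {sensitivity:.0%}, specificity {specificity:.0%}, "
      f"PPV {ppv:.0%}, NPV {npv:.0%}")   # 90%, 98%, 69%, 99%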


Proceedings of the National Academy of Sciences of the United States of America | 2013

Human cortical sensorimotor network underlying feedback control of vocal pitch

Edward F. Chang; Caroline A. Niziolek; Robert T. Knight; Srikantan S. Nagarajan; John F. Houde

The control of vocalization is critically dependent on auditory feedback. Here, we determined the human peri-Sylvian speech network that mediates feedback control of pitch using direct cortical recordings. Subjects phonated while a real-time signal processor briefly perturbed their output pitch (speak condition). Subjects later heard the same recordings of their auditory feedback (listen condition). In posterior superior temporal gyrus, a proportion of sites had suppressed responses to normal feedback, whereas other spatially independent sites had enhanced responses to altered feedback. Behaviorally, speakers compensated for perturbations by changing their pitch. Single-trial analyses revealed that compensatory vocal changes were predicted by the magnitude of both auditory and subsequent ventral premotor responses to perturbations. Furthermore, sites whose responses to perturbation were enhanced in the speaking condition exhibited stronger correlations with behavior. This sensorimotor cortical network appears to underlie auditory feedback-based control of vocal pitch in humans.


BMC Neuroscience | 2009

Speech target modulates speaking induced suppression in auditory cortex

Maria I. Ventura; Srikantan S. Nagarajan; John F. Houde

Background: Previous magnetoencephalography (MEG) studies have demonstrated speaking-induced suppression (SIS) in the auditory cortex during vocalization tasks, wherein the M100 response to a subject's own speaking is reduced compared to the response when they hear playback of their speech.

Results: The present MEG study investigated the effects of utterance rapidity and complexity on SIS: the greatest difference between speak and listen M100 amplitudes (i.e., most SIS) was found in the simple speech task. As the utterances became more rapid and complex, SIS was significantly reduced (p = 0.0003).

Conclusion: These findings are highly consistent with our model of how auditory feedback is processed during speaking, where incoming feedback is compared with an efference-copy derived prediction of expected feedback. Thus, the results provide further insights about how speech motor output is controlled, as well as the computational role of auditory cortex in transforming auditory feedback.
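The speak-versus-listen comparison described above reduces to a simple difference measure. A minimal sketch, assuming per-trial M100 amplitudes have already been extracted for each condition (the function and example values are illustrative, not data from the study):

import numpy as np

def sis_index(m100_listen, m100_speak):
    """Speaking-induced suppression: how much the speak-condition M100
    is reduced relative to the listen (playback) condition."""
    return float(np.mean(m100_listen)) - float(np.mean(m100_speak))

# Illustrative values only: SIS is largest for the simple utterance and
# shrinks as utterances become more rapid and complex.
simple_sis = sis_index([42.0, 40.5, 43.1], [30.2, 29.8, 31.0])
complex_sis = sis_index([41.0, 42.3, 40.7], [38.5, 39.1, 37.9])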


NeuroImage | 2014

Optimal timing of pulse onset for language mapping with navigated repetitive transcranial magnetic stimulation

Sandro M. Krieg; Phiroz E. Tarapore; Thomas Picht; Noriko Tanigawa; John F. Houde; Nico Sollmann; Bernhard Meyer; Peter Vajkoczy; Mitchel S. Berger; Florian Ringel; Srikantan S. Nagarajan

OBJECT: Within the primary motor cortex, navigated transcranial magnetic stimulation (nTMS) has been shown to yield maps strongly correlated with those generated by direct cortical stimulation (DCS). However, the stimulation parameters for repetitive nTMS (rTMS)-based language mapping are still being refined. For this purpose, the present study compares two rTMS protocols, which differ in the timing of pulse train onset relative to picture presentation onset during object naming. Results were then correlated with DCS language mapping during awake surgery.

METHODS: Thirty-two patients with left-sided perisylvian tumors were examined by rTMS prior to awake surgery. Twenty patients underwent rTMS pulse trains starting 300 ms after picture presentation onset (delayed TMS), whereas another 12 patients received rTMS pulse trains starting at picture presentation onset (onset TMS). These rTMS results were then evaluated for correlation with intraoperative DCS results, as the gold standard, in terms of differential consistencies in receiver operating characteristic (ROC) statistics. Logistic regression analyses by protocol and brain region were conducted.

RESULTS: Within and around Broca's area, there was no difference in sensitivity (onset TMS: 100%, delayed TMS: 100%), negative predictive value (NPV) (onset TMS: 100%, delayed TMS: 100%), or positive predictive value (PPV) (onset TMS: 55%, delayed TMS: 54%) between the two protocols compared to DCS. However, specificity differed significantly (onset TMS: 67%, delayed TMS: 28%). In contrast, for posterior language regions, such as the supramarginal gyrus, angular gyrus, and posterior superior temporal gyrus, early pulse train onset stimulation showed greater specificity (onset TMS: 92%, delayed TMS: 20%), NPV (onset TMS: 92%, delayed TMS: 57%), and PPV (onset TMS: 75%, delayed TMS: 30%) with comparable sensitivity (onset TMS: 75%, delayed TMS: 70%). Logistic regression analysis also confirmed the greater fit of the predictions by rTMS with pulse train onset coincident with picture presentation onset when compared to the delayed stimulation. Analyses of differential disruption patterns of mapped cortical regions were further able to distinguish clusters of cortical regions standardly associated with the semantic and pre-vocalization phonological networks proposed in various models of word production. Repetitive nTMS predictions by both protocols correlate well with DCS outcomes, especially in Broca's region, particularly with regard to TMS-negative predictions.

CONCLUSIONS: With this study, we have demonstrated that rTMS stimulation onset coincident with picture presentation onset improves the accuracy of preoperative language maps, particularly within posterior language areas. Moreover, immediate and delayed pulse train onsets may have complementary disruption patterns that could differentially capture cortical regions causally necessary for semantic and pre-vocalization phonological networks.


Acoustics Research Letters Online-arlo | 2005

Compensatory responses to brief perturbations of speech amplitude

Theda H. Heinks-Maldonado; John F. Houde

One of the key questions about speech production is the role that auditory feedback plays in the process. Here, it was investigated whether speakers show fast compensatory changes to brief perturbations of the loudness of their feedback. Speech from each subject was amplified and fed into an Eventide Ultraharmonizer that introduced brief (400 ms), ±10-dB perturbations to the volume of the subject's audio feedback. Compensatory responses to both amplitude perturbation types were found in all subjects, and these responses occurred at latencies (−10 dB: 171 ms; +10 dB: 287 ms) similar to those previously observed in responses to pitch perturbations.
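A ±10-dB perturbation corresponds to multiplying the feedback waveform by 10^(±10/20), i.e., roughly 3.16 or 0.316, for 400 ms. The study applied this in real time with an Eventide Ultraharmonizer; the sketch below is only an illustrative offline equivalent, and the function name and parameters are our own.

import numpy as np

def perturb_amplitude(signal, fs, onset_s, dur_s=0.4, shift_db=10.0):
    """Apply a brief gain perturbation (shift_db of +10 or -10 mimics the
    ±10-dB, 400-ms feedback perturbations described above)."""
    out = np.array(signal, dtype=float)      # work on a copy of the waveform
    gain = 10.0 ** (shift_db / 20.0)         # dB -> linear amplitude factor
    start = int(onset_s * fs)
    out[start:start + int(dur_s * fs)] *= gain
    return out

# Example: a -10 dB dip starting 1 s into a 3-s, 16-kHz placeholder recording.
fs = 16000
speech = np.random.randn(3 * fs)
dipped = perturb_amplitude(speech, fs, onset_s=1.0, shift_db=-10.0)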


Annals of Neurology | 2012

Dynamics of hemispheric dominance for language assessed by magnetoencephalographic imaging

Anne M. Findlay; Josiah B. Ambrose; Deborah A. Cahn-Weiner; John F. Houde; Susanne Honma; Leighton B. Hinkley; Mitchel S. Berger; Srikantan S. Nagarajan; Heidi E. Kirsch

The goal of the current study was to examine the dynamics of language lateralization using magnetoencephalographic (MEG) imaging, to determine the sensitivity and specificity of MEG imaging, and to determine whether MEG imaging can become a viable alternative to the intracarotid amobarbital procedure (IAP), the current gold standard for preoperative language lateralization in neurosurgical candidates.


Language and Speech | 2012

Partial Compensation for Altered Auditory Feedback: A Tradeoff with Somatosensory Feedback?

Shira Katseff; John F. Houde; Keith Johnson

Talkers are known to compensate only partially for experimentally induced changes to their auditory feedback. In a typical experiment, talkers might hear their F1 feedback shifted higher (so that /ε/ sounds like /æ/, for example), and compensate by lowering F1 in their subsequent speech by about a quarter of that distance. Here, we sought to characterize and understand partial compensation by examining how talkers respond to each step on a staircase of increasing shifts in auditory feedback. Subjects wore an apparatus that altered their real-time auditory feedback. They were asked to repeat visually presented hVd stimulus words while feedback was altered stepwise over the course of 360 trials. We used a novel analysis method to calculate each subject's compensation at each step relative to their baseline. Results demonstrated that subjects compensated more for small feedback shifts than for larger shifts. We suggest that this pattern is consistent with vowel targets that incorporate auditory and somatosensory information, and a speech motor control system that is driven by differential weighting of auditory and somatosensory feedback.
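The differential-weighting account of partial compensation can be made concrete with a toy cost model: if a compensatory change c trades an auditory error of (shift − c) against a somatosensory error of c, the optimal c is a fixed fraction of the shift. The weights below are hypothetical, and this toy reproduces only partial compensation, not the shift-size dependence reported above.

def compensation(shift_hz, w_aud=0.25, w_som=0.75):
    """Minimize w_aud * (shift - c)**2 + w_som * c**2 over the produced change c.
    The optimum, c = shift * w_aud / (w_aud + w_som), is partial compensation."""
    return shift_hz * w_aud / (w_aud + w_som)

# With these illustrative weights, a 100-Hz F1 feedback shift yields only a
# 25-Hz compensatory change (about a quarter of the shift, as described above).
print(compensation(100.0))   # 25.0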

Collaboration


Dive into John F. Houde's collaborations.

Top Co-Authors

Shira Katseff, University of California
Hardik Kothare, University of California
Susanne Honma, University of California