Network


Latest external collaborations at the country level.

Hotspot


Research topics where Frank H. Guenther is active.

Publications


Featured research published by Frank H. Guenther.


Brain and Language | 2006

Neural Modeling and Imaging of the Cortical Interactions Underlying Syllable Production

Frank H. Guenther; Satrajit S. Ghosh; Jason A. Tourville

This paper describes a neural model of speech acquisition and production that accounts for a wide range of acoustic, kinematic, and neuroimaging data concerning the control of speech movements. The model is a neural network whose components correspond to regions of the cerebral cortex and cerebellum, including premotor, motor, auditory, and somatosensory cortical areas. Computer simulations of the model verify its ability to account for compensation to lip and jaw perturbations during speech. Specific anatomical locations of the model's components are estimated, and these estimates are used to simulate fMRI experiments of simple syllable production.
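
As a rough illustration of the control scheme described above (not the published model), the Python sketch below combines a learned feedforward command with a feedback correction driven by auditory error, so that an unexpected perturbation is partially compensated; the gains, dimensions, and perturbation values are illustrative assumptions.

import numpy as np

n_steps = 50
target = np.array([700.0, 1200.0])            # desired auditory state (e.g., F1, F2 in Hz; hypothetical)
feedforward = np.tile(target, (n_steps, 1))   # feedforward trajectory, assumed already learned
k_fb = 0.3                                    # feedback gain (illustrative)

state = np.array([500.0, 1500.0])             # current auditory state
perturbation = np.array([150.0, 0.0])         # unexpected shift applied from step 20 onward

for t in range(n_steps):
    actual = state + (perturbation if t >= 20 else 0.0)
    error = feedforward[t] - actual           # mismatch between expected and actual auditory signal
    command = feedforward[t] + k_fb * error   # feedforward command plus feedback correction
    state = state + 0.5 * (command - state)   # toy plant: articulation moves partway toward the command

print("final produced auditory state:", np.round(state + perturbation, 1))
print("target:", target)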


NeuroImage | 2006

An fMRI investigation of syllable sequence production

Jason W. Bohland; Frank H. Guenther

Fluent speech comprises sequences that are composed from a finite alphabet of learned words, syllables, and phonemes. The sequencing of discrete motor behaviors has received much attention in the motor control literature, but relatively little of this work has focused directly on speech production. In this paper, we investigate the cortical and subcortical regions involved in organizing and enacting sequences of simple speech sounds. Sparse event-triggered functional magnetic resonance imaging (fMRI) was used to measure responses to preparation and overt production of non-lexical three-syllable utterances, parameterized by two factors: syllable complexity and sequence complexity. The comparison of overt production trials to preparation only trials revealed a network related to the initiation of a speech plan, control of the articulators, and to hearing one's own voice. This network included the primary motor and somatosensory cortices, auditory cortical areas, supplementary motor area (SMA), the precentral gyrus of the insula, and portions of the thalamus, basal ganglia, and cerebellum. Additional stimulus complexity led to increased engagement of the basic speech network and recruitment of additional areas known to be involved in sequencing non-speech motor acts. In particular, the left hemisphere inferior frontal sulcus and posterior parietal cortex, and bilateral regions at the junction of the anterior insula and frontal operculum, the SMA and pre-SMA, the basal ganglia, anterior thalamus, and the cerebellum showed increased activity for more complex stimuli. We hypothesize mechanistic roles for the extended speech production network in the organization and execution of sequences of speech sounds.
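
A toy sketch of how a two-factor design of this kind can be tested with a general linear model follows; the condition labels, indicator regressors, synthetic data, and contrast are simplified assumptions, not the study's analysis pipeline.

import numpy as np

rng = np.random.default_rng(0)
conditions = ["simple syll / simple seq", "simple syll / complex seq",
              "complex syll / simple seq", "complex syll / complex seq"]
n_scans = 200

# One indicator regressor per condition (HRF convolution omitted for brevity)
X = np.zeros((n_scans, len(conditions)))
for i in range(n_scans):
    X[i, rng.integers(len(conditions))] = 1.0
X = np.column_stack([X, np.ones(n_scans)])        # intercept

# Synthetic regional response with a larger effect for complex sequences
y = X[:, :4] @ np.array([1.0, 1.4, 1.1, 1.8]) + rng.normal(0.0, 0.5, n_scans)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
seq_complexity = np.array([-1.0, 1.0, -1.0, 1.0, 0.0])   # main effect of sequence complexity
print("contrast estimate:", round(float(seq_complexity @ beta), 3))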


Psychological Review | 1995

Speech sound acquisition, coarticulation, and rate effects in a neural network model of speech production

Frank H. Guenther

This article describes a neural network model of speech motor skill acquisition and speech production that explains a wide range of data on variability, motor equivalence, coarticulation, and rate effects. Model parameters are learned during a babbling phase. To explain how infants learn language-specific variability limits, speech sound targets take the form of convex regions, rather than points, in orosensory coordinates. Reducing target size for better accuracy during slower speech leads to differential effects for vowels and consonants, as seen in experiments previously used as evidence for separate control processes for the 2 sound types. Anticipatory coarticulation arises when targets are reduced in size on the basis of context; this generalizes the well-known look-ahead model of coarticulation. Computer simulations verify the model's properties.
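
The convex-region-target idea can be illustrated with a small Python sketch in which a target is a box, slower or more careful speech shrinks the box toward its center, and the articulators move only to the nearest point inside it; the coordinates and shrink factor are hypothetical, and the article's targets are learned regions in orosensory space rather than hand-specified boxes.

import numpy as np

def shrink(lo, hi, factor):
    """Shrink a box target about its center by `factor` in (0, 1]."""
    center = (lo + hi) / 2.0
    return center - factor * (hi - lo) / 2.0, center + factor * (hi - lo) / 2.0

def nearest_point_in_box(x, lo, hi):
    """Closest point to x inside the (convex) box target."""
    return np.clip(x, lo, hi)

lo, hi = np.array([0.2, 0.5]), np.array([0.6, 0.9])   # hypothetical target region
current = np.array([0.9, 0.1])                        # articulator state left by the prior context

fast = nearest_point_in_box(current, lo, hi)                 # large target: minimal, context-dependent movement
slow = nearest_point_in_box(current, *shrink(lo, hi, 0.3))   # reduced target: endpoint closer to the center
print("fast speech endpoint:", fast)
print("careful speech endpoint:", slow)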


NeuroImage | 2008

Neural mechanisms underlying auditory feedback control of speech

Jason A. Tourville; Kevin J. Reilly; Frank H. Guenther

The neural substrates underlying auditory feedback control of speech were investigated using a combination of functional magnetic resonance imaging (fMRI) and computational modeling. Neural responses were measured while subjects spoke monosyllabic words under two conditions: (i) normal auditory feedback of their speech and (ii) auditory feedback in which the first formant frequency of their speech was unexpectedly shifted in real time. Acoustic measurements showed compensation to the shift within approximately 136 ms of onset. Neuroimaging revealed increased activity in bilateral superior temporal cortex during shifted feedback, indicative of neurons coding mismatches between expected and actual auditory signals, as well as right prefrontal and Rolandic cortical activity. Structural equation modeling revealed increased influence of bilateral auditory cortical areas on right frontal areas during shifted speech, indicating that projections from auditory error cells in posterior superior temporal cortex to motor correction cells in right frontal cortex mediate auditory feedback control of speech.
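
As an illustration of the latency measurement reported above (not the study's analysis code), the sketch below estimates a compensation onset time by finding when a simulated produced F1 trace first deviates from baseline in the direction opposing an upward feedback shift; the frame rate, threshold, and trajectories are invented for the example.

import numpy as np

fs = 1000                                   # 1 kHz frame rate (assumed)
t = np.arange(0, 0.5, 1 / fs)               # 500 ms utterance
baseline_f1 = np.full_like(t, 700.0)        # flat baseline F1 (toy data)

# Simulated shifted-feedback production: after ~140 ms the speaker lowers F1
# to oppose an upward shift of the auditory feedback.
shifted_f1 = baseline_f1.copy()
shifted_f1[t >= 0.14] -= 40.0 * (1 - np.exp(-(t[t >= 0.14] - 0.14) / 0.05))

threshold = 5.0                             # Hz of deviation counted as compensation (assumed)
diff = baseline_f1 - shifted_f1             # positive = opposing an upward shift
onset_idx = np.argmax(diff > threshold)
print(f"estimated compensation onset: {t[onset_idx] * 1000:.0f} ms")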


Journal of Cognitive Neuroscience | 1993

A self-organizing neural model of motor equivalent reaching and tool use by a multijoint arm

Daniel Bullock; Stephen Grossberg; Frank H. Guenther

This paper describes a self-organizing neural model for eye-hand coordination. Called the DIRECT model, it embodies a solution of the classical motor equivalence problem. Motor equivalence computations allow humans and other animals to flexibly employ an arm with more degrees of freedom than the space in which it moves to carry out spatially defined tasks under conditions that may require novel joint configurations. During a motor babbling phase, the model endogenously generates movement commands that activate the correlated visual, spatial, and motor information that is used to learn its internal coordinate transformations. After learning occurs, the model is capable of controlling reaching movements of the arm to prescribed spatial targets using many different combinations of joints. When allowed visual feedback, the model can automatically perform, without additional learning, reaches with tools of variable lengths, with clamped joints, with distortions of visual input by a prism, and with unexpected perturbations. These compensatory computations occur within a single accurate reaching movement. No corrective movements are needed. Blind reaches using internal feedback have also been simulated. The model achieves its competence by transforming visual information about target position and end effector position in 3-D space into a body-centered spatial representation of the direction in 3-D space that the end effector must move to contact the target. The spatial direction vector is adaptively transformed into a motor direction vector, which represents the joint rotations that move the end effector in the desired spatial direction from the present arm configuration. Properties of the model are compared with psychophysical data on human reaching movements, neurophysiological data on the tuning curves of neurons in the monkey motor cortex, and alternative models of movement control.
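
The directional, motor-equivalent control idea can be sketched with a redundant planar arm in which a spatial direction vector is converted into joint rotations. The DIRECT model learns this transform during babbling; in the sketch below a Jacobian pseudoinverse stands in for the learned mapping, purely to illustrate how redundancy lets many joint configurations reach the same target. Segment lengths, angles, and the target are arbitrary.

import numpy as np

def forward(theta, lengths):
    """End-effector position of a planar arm with given joint angles."""
    angles = np.cumsum(theta)
    return np.array([np.sum(lengths * np.cos(angles)),
                     np.sum(lengths * np.sin(angles))])

def jacobian(theta, lengths, eps=1e-6):
    """Numerical Jacobian of end-effector position w.r.t. joint angles."""
    J = np.zeros((2, len(theta)))
    for i in range(len(theta)):
        d = np.zeros_like(theta)
        d[i] = eps
        J[:, i] = (forward(theta + d, lengths) - forward(theta - d, lengths)) / (2 * eps)
    return J

lengths = np.array([0.3, 0.3, 0.2])          # arm segment lengths (lengthen the last one to mimic a tool)
theta = np.array([0.4, 0.3, 0.2])            # initial joint angles
target = np.array([0.4, 0.5])

for _ in range(200):
    direction = target - forward(theta, lengths)     # spatial direction vector
    if np.linalg.norm(direction) < 1e-3:
        break
    dtheta = np.linalg.pinv(jacobian(theta, lengths)) @ direction   # motor direction vector
    theta += 0.2 * dtheta                            # small step along it

print("reached:", np.round(forward(theta, lengths), 3), "target:", target)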


Journal of the Acoustical Society of America | 1996

The perceptual magnet effect as an emergent property of neural map formation

Frank H. Guenther; Marin N. Gjaja

The perceptual magnet effect is one of the earliest known language-specific phenomena arising in infant speech development. The effect is characterized by a warping of perceptual space near phonemic category centers. Previous explanations have been formulated within the theoretical framework of cognitive psychology. The model proposed in this paper builds on research from both psychology and neuroscience in working toward a more complete account of the effect. The model embodies two principal hypotheses supported by considerable experimental and theoretical research from the neuroscience literature: (1) sensory experience guides language-specific development of an auditory neural map, and (2) a population vector can predict psychological phenomena based on map cell activities. These hypotheses are realized in a self-organizing neural network model. The magnet effect arises in the model from language-specific nonuniformities in the distribution of map cell firing preferences. Numerical simulations verify that the model captures the known general characteristics of the magnet effect and provides accurate fits to specific psychophysical data.
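
A minimal sketch of the two hypotheses follows: the preferred values of model map cells adapt to a nonuniform exposure distribution, and a population-vector readout of their activity then pulls probe stimuli toward the category prototype. The competitive learning rule below is a simple stand-in for the paper's self-organizing map, and all parameters are illustrative.

import numpy as np

rng = np.random.default_rng(1)
prototype = 1000.0                              # hypothetical category center (Hz)
stimuli = rng.normal(prototype, 60.0, 5000)     # exposure concentrated near the prototype

prefs = rng.uniform(600.0, 1400.0, 100)         # map cells' preferred stimulus values
for s in stimuli:                               # competitive learning: the best-matching cell moves toward the stimulus
    winner = np.argmin(np.abs(prefs - s))
    prefs[winner] += 0.05 * (s - prefs[winner])

def perceive(probe, width=100.0):
    """Population-vector readout: activity-weighted mean of preferred values."""
    activity = np.exp(-0.5 * ((prefs - probe) / width) ** 2)
    return np.sum(activity * prefs) / np.sum(activity)

for probe in (800.0, 900.0, 1000.0):
    print(f"probe {probe:.0f} Hz -> perceived {perceive(probe):.0f} Hz")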


Language and Cognitive Processes | 2011

The DIVA model: A neural theory of speech acquisition and production

Jason A. Tourville; Frank H. Guenther

The DIVA model of speech production provides a computationally and neuroanatomically explicit account of the network of brain regions involved in speech acquisition and production. An overview of the model is provided along with descriptions of the computations performed in the different brain regions represented in the model. The latest version of the model, which contains a new right-lateralised feedback control map in ventral premotor cortex, will be described, and experimental results that motivated this new model component will be discussed. Application of the model to the study and treatment of communication disorders will also be briefly described.


Biological Cybernetics | 1994

A neural network model of speech acquisition and motor equivalent speech production

Frank H. Guenther

This article describes a neural network model that addresses the acquisition of speaking skills by infants and subsequent motor equivalent production of speech sounds. The model learns two mappings during a babbling phase. A phonetic-to-orosensory mapping specifies a vocal tract target for each speech sound; these targets take the form of convex regions in orosensory coordinates defining the shape of the vocal tract. The babbling process wherein these convex region targets are formed explains how an infant can learn phoneme-specific and language-specific limits on acceptable variability of articulator movements. The model also learns an orosensory-to-articulatory mapping wherein cells coding desired movement directions in orosensory space learn articulator movements that achieve these orosensory movement directions. The resulting mapping provides a natural explanation for the formation of coordinative structures. This mapping also makes efficient use of redundancy in the articulator system, thereby providing the model with motor equivalent capabilities. Simulations verify the model's ability to compensate for constraints or perturbations applied to the articulators automatically and without new learning and to explain contextual variability seen in human speech production.
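
A small sketch of the babbling idea follows: random articulator movements and the orosensory changes they cause supply the training pairs for a directional inverse mapping, which can then drive the orosensory state toward a target. The toy "vocal tract" and the linear least-squares fit are stand-ins for the article's learned neural mapping, used only to show the structure of the learning problem.

import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(2, 4))                    # toy forward map: 4 articulators -> 2 orosensory dimensions

# Babbling: random articulator movements and the orosensory changes they cause
d_artic = rng.normal(size=(500, 4))
d_oro = d_artic @ W.T

# Learn the inverse directional mapping (orosensory direction -> articulator direction)
M, *_ = np.linalg.lstsq(d_oro, d_artic, rcond=None)

# Use it: move the articulators so the orosensory state heads toward a target
artic = np.zeros(4)
target = np.array([0.8, -0.3])
for _ in range(50):
    direction = target - W @ artic             # desired orosensory movement direction
    artic += 0.2 * (direction @ M)             # articulator movement that achieves it

print("orosensory state:", np.round(W @ artic, 3), "target:", target)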


PLOS ONE | 2009

A Wireless Brain-Machine Interface for Real-Time Speech Synthesis

Frank H. Guenther; Jonathan S. Brumberg; E. Joseph Wright; Alfonso Nieto-Castanon; Jason A. Tourville; Mikhail Panko; Robert Law; Steven A. Siebert; Jess Bartels; Dinal Andreasen; Princewill Ehirim; Hui Mao; Philip R. Kennedy

Background
Brain-machine interfaces (BMIs) involving electrodes implanted into the human cerebral cortex have recently been developed in an attempt to restore function to profoundly paralyzed individuals. Current BMIs for restoring communication can provide important capabilities via a typing process, but unfortunately they are only capable of slow communication rates. In the current study we use a novel approach to speech restoration in which we decode continuous auditory parameters for a real-time speech synthesizer from neuronal activity in motor cortex during attempted speech.

Methodology/Principal Findings
Neural signals recorded by a Neurotrophic Electrode implanted in a speech-related region of the left precentral gyrus of a human volunteer suffering from locked-in syndrome, characterized by near-total paralysis with spared cognition, were transmitted wirelessly across the scalp and used to drive a speech synthesizer. A Kalman filter-based decoder translated the neural signals generated during attempted speech into continuous parameters for controlling a synthesizer that provided immediate (within 50 ms) auditory feedback of the decoded sound. Accuracy of the volunteer's vowel productions with the synthesizer improved quickly with practice, with a 25% improvement in average hit rate (from 45% to 70%) and 46% decrease in average endpoint error from the first to the last block of a three-vowel task.

Conclusions/Significance
Our results support the feasibility of neural prostheses that may have the potential to provide near-conversational synthetic speech output for individuals with severely impaired speech motor control. They also provide an initial glimpse into the functional properties of neurons in speech motor cortical areas.
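
As a rough sketch of the decoding approach (not the study's fitted decoder), the following Kalman filter estimates two continuous formant-like parameters from simulated neural features under a linear observation model; the number of units, dynamics, and noise levels are assumed values.

import numpy as np

rng = np.random.default_rng(3)
n_units, n_steps = 16, 300
A = np.eye(2)                                 # state dynamics: formant parameters drift slowly
Q = 1e-4 * np.eye(2)                          # process noise covariance
H = rng.normal(size=(n_units, 2))             # linear tuning of each unit to the two parameters
R = 0.5 * np.eye(n_units)                     # observation noise covariance

# Simulate an intended trajectory and the neural features it evokes
true_state = np.cumsum(rng.normal(0, 0.01, size=(n_steps, 2)), axis=0)
obs = true_state @ H.T + rng.normal(0, np.sqrt(0.5), size=(n_steps, n_units))

x, P = np.zeros(2), np.eye(2)                 # initial estimate and covariance
decoded = []
for z in obs:
    # Predict
    x, P = A @ x, A @ P @ A.T + Q
    # Update with the current neural observation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    decoded.append(x.copy())

err = np.mean(np.abs(np.array(decoded) - true_state))
print(f"mean absolute decoding error: {err:.3f} (arbitrary normalized units)")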


NeuroImage | 2003

Region of interest based analysis of functional imaging data

Alfonso Nieto-Castanon; Satrajit S. Ghosh; Jason A. Tourville; Frank H. Guenther

fMRI analysis techniques are presented that test functional hypotheses at the region of interest (ROI) level. An SPM-compatible Matlab toolbox has been developed that allows the creation of subject-specific ROI masks based on anatomical markers and the testing of functional hypotheses on the regional response using multivariate time-series analysis techniques. The combined application of subject-specific ROI definition and region-level functional analysis is shown to appropriately compensate for intersubject anatomical variability, offering finer localization and increased sensitivity to task-related effects than standard techniques based on whole-brain normalization and voxel or cluster-level functional analysis, while providing a more direct link between discrete brain region hypotheses and the statistical analyses used to test them.
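
A conceptual sketch of a region-level analysis follows: it averages a toy BOLD signal over a hypothetical subject-specific ROI mask and tests a task regressor on the regional time series. This is not the toolbox's implementation; array shapes, the design, and the injected effect are illustrative.

import numpy as np

rng = np.random.default_rng(4)
n_scans = 120
data = rng.normal(size=(16, 16, 10, n_scans))        # toy 4D fMRI volume (x, y, z, time)
roi_mask = np.zeros((16, 16, 10), dtype=bool)
roi_mask[4:8, 4:8, 3:6] = True                       # subject-specific ROI (hypothetical)

task = np.tile([1.0] * 10 + [0.0] * 10, n_scans // 20)   # simple block regressor
data[roi_mask] += 0.4 * task                         # inject a task effect inside the ROI

roi_ts = data[roi_mask].mean(axis=0)                 # regional mean time series
X = np.column_stack([task, np.ones(n_scans)])        # design matrix: task + intercept
beta, res, *_ = np.linalg.lstsq(X, roi_ts, rcond=None)
dof = n_scans - X.shape[1]
t_stat = beta[0] / np.sqrt((res[0] / dof) * np.linalg.inv(X.T @ X)[0, 0])
print(f"ROI task effect: beta = {beta[0]:.3f}, t({dof}) = {t_stat:.2f}")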

Collaboration


Frank H. Guenther's frequent collaborators.

Top Co-Authors

Joseph S. Perkell | Massachusetts Institute of Technology
Satrajit S. Ghosh | Massachusetts Institute of Technology
Majid Zandipour | Massachusetts Institute of Technology
Melanie L. Matthies | Massachusetts Institute of Technology
Mark Tiede | Massachusetts Institute of Technology