
Publication


Featured research published by Jack Gandour.


Journal of Cognitive Neuroscience | 2000

A Crosslinguistic PET Study of Tone Perception

Jack Gandour; Donald Wong; Li Hsieh; Bret Weinzapfel; Diana Van Lancker; Gary D. Hutchins

In studies of pitch processing, a fundamental question is whether shared neural mechanisms at higher cortical levels are engaged for pitch perception of linguistic and nonlinguistic auditory stimuli. Positron emission tomography (PET) was used in a crosslinguistic study to compare pitch processing in native speakers of two tone languages (that is, languages in which variations in pitch patterns are used to distinguish lexical meaning), Chinese and Thai, with that in native speakers of English, a nontone language. Five subjects from each language group were scanned under three active tasks (tone, pitch, and consonant), which required focused-attention, speeded-response auditory discrimination judgments, and one passive baseline condition (silence). Subjects were instructed to judge pitch patterns of Thai lexical tones in the tone condition, pitch patterns of nonspeech stimuli in the pitch condition, and syllable-initial consonants in the consonant condition. Analysis was carried out by paired-image subtraction. When comparing the tone to the pitch task, only the Thai group showed significant activation in the left frontal operculum. Activation of the left frontal operculum in the Thai group suggests that phonological processing of suprasegmental as well as segmental units occurs in the vicinity of Broca's area. Baseline subtractions showed significant activation in the anterior insular region for the English and Chinese groups, but not Thai, providing further support for the existence of possibly two parallel, separate pathways projecting from the temporo-parietal to the frontal language area. More generally, these differential patterns of brain activation across language groups and tasks support the view that pitch patterns are processed at higher cortical levels in a top-down manner according to their linguistic function in a particular language.
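
The paired-image subtraction mentioned in this abstract can be pictured as a voxel-wise comparison of task and baseline images across subjects. The sketch below is only a minimal illustration of the idea, not the study's actual pipeline; the array names, image dimensions, threshold, and data are all hypothetical and randomly generated.

```python
import numpy as np
from scipy import stats

# Hypothetical data: 5 subjects, each with a task image and a baseline image
# of shape (X, Y, Z), assumed already spatially normalized and globally scaled.
rng = np.random.default_rng(0)
tone_task = rng.normal(50.0, 5.0, size=(5, 16, 16, 16))
baseline = rng.normal(48.0, 5.0, size=(5, 16, 16, 16))

# Paired-image subtraction: per-subject difference images, then the group mean.
diff = tone_task - baseline              # (subjects, X, Y, Z)
mean_diff = diff.mean(axis=0)            # average activation change per voxel

# Voxel-wise paired t-test across subjects to flag candidate activation foci.
t_map, p_map = stats.ttest_rel(tone_task, baseline, axis=0)
candidate_voxels = (p_map < 0.001) & (mean_diff > 0)   # illustrative threshold
print("voxels exceeding the illustrative threshold:", int(candidate_voxels.sum()))
```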


Brain and Language | 2001

Functional Heterogeneity of Inferior Frontal Gyrus Is Shaped by Linguistic Experience

Li Hsieh; Jack Gandour; Donald Wong; Gary D. Hutchins

A crosslinguistic, positron emission tomography (PET) study was conducted to determine the influence of linguistic experience on the perception of segmental (consonants and vowels) and suprasegmental (tones) information. Chinese and English subjects (10 per group) were presented binaurally with lists consisting of five Chinese monosyllabic morphemes (speech) or low-pass-filtered versions of the same stimuli (nonspeech). The first and last items were targeted for comparison; the time interval between target tones was filled with irrelevant distractor tones. A speeded-response, selective attention paradigm required subjects to make discrimination judgments of the target items while ignoring intervening distractor tones. PET scans were acquired for five tasks presented twice: one passive listening to pitch (nonspeech) and four active (speech = consonant, vowel, and tone; nonspeech = pitch). Significant regional changes in blood flow were identified from comparisons of group-averaged images of active tasks relative to passive listening. Chinese subjects show increased activity in left premotor cortex, pars opercularis, and pars triangularis across the four tasks. English subjects, on the other hand, show increased activity in left inferior frontal gyrus regions only in the vowel task and in right inferior frontal gyrus regions in the pitch task. Findings suggest that functional circuits engaged in speech perception depend on linguistic experience. All linguistic information signaled by prosodic cues engages left-hemisphere mechanisms. Storage and executive processes of working memory that are implicated in phonological processing are mediated in discrete regions of the left frontal lobe.


Neuroreport | 1998

Pitch processing in the human brain is influenced by language experience

Jack Gandour; Donald Wong; Gary D. Hutchins

Positron emission tomography (PET) was used in a cross-linguistic study to compare pitch processing in native speakers of English, a nontone language, with that in native speakers of Thai, a tone language. When discriminating pitch patterns in Thai words, only the Thai subjects showed activation in the left frontal operculum. Activation of this region near the classically defined Broca's area suggests that the brain recognizes functional properties, rather than simply acoustic properties, of complex auditory cues in accessing language-specific mechanisms in pitch perception.


Journal of Cognitive Neuroscience | 2002

A Cross-Linguistic fMRI Study of Spectral and Temporal Cues Underlying Phonological Processing

Jack Gandour; Donald Wong; Mark J. Lowe; Mario Dzemidzic; Nakarin Satthamnuwong; Yunxia Tong; Xiaojian Li

It remains a matter of controversy precisely what kind of neural mechanisms underlie functional asymmetries in speech processing. Whereas some studies support speech-specific circuits, others suggest that lateralization is dictated by relative computational demands of complex auditory signals in the spectral or time domains. To examine how the brain processes linguistically relevant spectral and temporal information, a functional magnetic resonance imaging study was conducted using Thai speech, in which spectral processing associated with lexical tones and temporal processing associated with vowel length can be differentiated. Ten Thai and 10 Chinese subjects were asked to perform discrimination judgments of pitch and timing patterns presented in the same auditory stimuli under two different conditions: speech (Thai) and nonspeech (hums). In the speech condition, tasks required judging Thai tones (T) and vowel length (VL); in the nonspeech condition, homologous pitch contours (P) and duration patterns (D). A remaining task required listening passively to nonspeech hums (L). Only the Thai group showed activation in the left inferior prefrontal cortex in speech minus nonspeech contrasts for spectral (T vs. P) and temporal (VL vs. D) cues. Thai and Chinese groups, however, exhibited similar fronto-parietal activation patterns in nonspeech hums minus passive listening contrasts for spectral (P vs. L) and temporal (D vs. L) cues. It appears that lower level specialization for acoustic cues in the spectral and temporal domains cannot be generalized to abstract higher order levels of phonological processing. Regardless of the neural mechanisms underlying low-level auditory processing, our findings clearly indicate that hemispheric specialization is sensitive to language-specific factors.


Brain and Language | 1984

Voice onset time in aphasia: Thai II. Production

Jack Gandour; Rochana Dardarananda

The aim of this study was to investigate voice onset time (VOT) production in homorganic word-initial stops in Thai in order to explore the nature of speech production deficits across clinical varieties of aphasia. Thai exhibits a three-category distinction in bilabial (/b, p, ph/) and alveolar (/d, t, th/) stops, and a two-category distinction in velar (/k, kh/) stops. Subjects included three Broca's aphasics, one transcortical motor aphasic, two global aphasics, one conduction aphasic, one Wernicke's aphasic, one nonaphasic dysarthric patient, one right-brain-damaged patient, and five normal controls. Test stimuli consisted of eight monosyllabic real words. The results of VOT measurements indicated that Broca's and global aphasics exhibited a more severe production disorder than Wernicke's, conduction, or transcortical motor aphasics. The right-brain-damaged patient showed no impairment in VOT production. Comparisons are drawn to earlier studies of VOT production in aphasia in two-category languages. Issues concerning the underlying basis of the production deficit for nonfluent aphasics, fluent aphasics, and nonaphasic dysarthrics, as well as the relation between perception and production of VOT, are discussed.
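
As a rough aside on the VOT measure itself, the sketch below computes VOT as the interval from stop release to voicing onset and bins it into the three Thai bilabial categories. The landmark times and category boundaries are invented for illustration and are not values from this study.

```python
# Illustrative VOT computation for Thai bilabial stops.
# Times are in milliseconds, taken from hypothetical hand-labeled
# acoustic landmarks (burst release, onset of voicing).

def vot_ms(burst_release_ms: float, voicing_onset_ms: float) -> float:
    """VOT = voicing onset minus burst release (negative values = prevoicing)."""
    return voicing_onset_ms - burst_release_ms

def classify_bilabial(vot: float) -> str:
    """Assign a Thai bilabial category using rough, assumed VOT boundaries."""
    if vot < 0:
        return "/b/  (prevoiced)"
    if vot < 30:                        # boundary chosen for illustration only
        return "/p/  (short-lag, unaspirated)"
    return "/ph/ (long-lag, aspirated)"

for burst, voicing in [(100.0, 20.0), (100.0, 112.0), (100.0, 175.0)]:
    v = vot_ms(burst, voicing)
    print(f"VOT = {v:+.0f} ms -> {classify_bilabial(v)}")
```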


Phonetica | 1996

Acoustic Correlates of Stress in Thai

Siripong Potisuk; Jack Gandour; Mary P. Harper

Acoustic correlates of stress [duration, fundamental frequency (Fo), and intensity] were investigated in a language (Thai) in which both duration and Fo are employed to signal lexical contrasts. Stimuli consisted of 25 pairs of segmentally/tonally identical, syntactically ambiguous sentences. The first member of each sentence pair contained a two-syllable noun-verb sequence exhibiting a strong-strong stress pattern; the second member contained a two-syllable noun compound exhibiting a weak-strong stress pattern. Measures were taken of five prosodic dimensions of the rhyme portion of the target syllable: duration, average Fo, Fo standard deviation, average intensity, and intensity standard deviation. Results of linear regression indicated that duration is the predominant cue in signaling the distinction between stressed and unstressed syllables in Thai. Discriminant analysis showed a stress classification accuracy rate of over 99%. Findings are discussed in relation to the varying roles that Fo, intensity, and duration play in different languages given their phonological structure.
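
The discriminant analysis reported above can be illustrated with a small sketch: the five prosodic measures of each syllable rhyme feed a linear discriminant classifier that separates stressed from unstressed tokens. The feature values below are synthetic placeholders generated for illustration, not the study's measurements.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Synthetic stand-ins for the five prosodic measures of each syllable rhyme:
# [duration_ms, mean_f0_hz, f0_sd_hz, mean_intensity_db, intensity_sd_db]
rng = np.random.default_rng(1)
stressed = rng.normal([220, 180, 20, 70, 4], [25, 15, 5, 3, 1], size=(50, 5))
unstressed = rng.normal([140, 175, 18, 66, 4], [25, 15, 5, 3, 1], size=(50, 5))

X = np.vstack([stressed, unstressed])
y = np.array([1] * 50 + [0] * 50)       # 1 = stressed, 0 = unstressed

# Linear discriminant analysis, scored with simple 5-fold cross-validation.
lda = LinearDiscriminantAnalysis()
accuracy = cross_val_score(lda, X, y, cv=5).mean()
print(f"cross-validated stress classification accuracy: {accuracy:.2%}")
```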


Aphasiology | 1996

Hemispheric Specialization In Processing Intonation Contours

Judy M. Perkins; Jane A. Baran; Jack Gandour

Hemispheric specialization in processing linguistic and non-linguistic intonation contours during sentence processing was examined in four experiments using subjects with unilateral left (LHD) or right hemisphere damage (RHD). When subjects were asked to identify intonation contours as questions or statements in semantically neutral sentences, the LHD group demonstrated a significantly poorer performance than the control group. No significant differences were found between the RHD and control groups. When subjects were asked to identify syntactically ambiguous sentences through the perception of intonation cues located at syntactic boundaries, the same pattern of results emerged. In discriminating between the aforementioned segmentally identical sentences, no significant differences were found between groups. However, when the segmental information was degraded and subjects were asked to discriminate between isolated prosodic structures, the RHD group demonstrated a significantly poorer performance...


Human Brain Mapping | 2003

Neural Correlates of Segmental and Tonal Information in Speech Perception

Jack Gandour; Yisheng Xu; Donald Wong; Mario Dzemidzic; Mark J. Lowe; Xiaojian Li; Yunxia Tong

The Chinese language provides an optimal window for investigating both segmental and suprasegmental units. The aim of this cross-linguistic fMRI study is to elucidate neural mechanisms involved in extraction of Chinese consonants, rhymes, and tones from syllable pairs that are distinguished by only one phonetic feature (minimal) vs. those that are distinguished by two or more phonetic features (non-minimal). Triplets of Chinese monosyllables were constructed for three tasks comparing consonants, rhymes, and tones. Each triplet consisted of two target syllables with an intervening distracter. Ten Chinese and English subjects were asked to selectively attend to targeted sub-syllabic components and make same-different judgments. Direct between-group comparisons in both minimal and non-minimal pairs reveal increased activation for the Chinese group in predominantly left-sided frontal, parietal, and temporal regions. Within-group comparisons of non-minimal and minimal pairs show that frontal and parietal activity varies for each sub-syllabic component. In the frontal lobe, the Chinese group shows bilateral activation of the anterior middle frontal gyrus (MFG) for rhymes and tones only. Within-group comparisons of consonants, rhymes, and tones show that rhymes induce greater activation in the left posterior MFG for the Chinese group when compared to consonants and tones in non-minimal pairs. These findings collectively support the notion of a widely distributed cortical network underlying different aspects of phonological processing. This neural network is sensitive to the phonological structure of a listener's native language.


Brain and Language | 1992

Lexical tones in Thai after unilateral brain damage

Jack Gandour; Suvit Ponglorpisit; Fuangfa Khunadorn; Sumalee Dechongkit; Prasert Boongird; Rachanee Boonklam; Siripong Potisuk

An acoustic perceptual investigation of the five lexical tones of Thai was conducted to evaluate the nature of tonal disruption in patients with unilateral lesions in the left and right hemisphere. Subjects (n = 48) included 10 young normal adults, 10 old normal adults, 11 right hemisphere nonaphasics, 9 left hemisphere fluent aphasics, and 8 left hemisphere nonfluent aphasics. The five Thai tones (mid, low, falling, high, rising) were produced in isolated monosyllables, presented for tonal identification judgments, and measured for fundamental frequency (Fo) and duration. Results of an analysis of variance indicated that left hemisphere nonfluent speakers signaled tonal contrasts at a lower level of proficiency. The extent of their impairment varied depending on severity level of aphasia. When compared to normal speakers, tonal identification for less severe nonfluent aphasics differed more in degree than in kind, and for more severe nonfluent aphasics differed both in kind and in degree. Acoustic analysis revealed that, with the exception of one left nonfluent speaker, average Fo contours were comparable in shape across speaker groups. Variability in Fo production, however, was greater in left nonfluent speakers than in any of the other four groups of speakers. Issues are discussed regarding the extent and nature of tonal disruption in aphasia and hemispheric specialization for tone production.
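
As a compressed illustration of the kind of group comparison described above (one acoustic measure compared across the five speaker groups with a one-way analysis of variance), the sketch below uses synthetic F0-variability scores chosen only so that one group is noticeably more variable; none of the numbers come from the study.

```python
import numpy as np
from scipy import stats

# Synthetic per-speaker F0 variability scores, one array per speaker group;
# group sizes follow the abstract, but all values are invented.
rng = np.random.default_rng(2)
groups = {
    "young normal":           rng.normal(1.0, 0.2, 10),
    "old normal":             rng.normal(1.1, 0.2, 10),
    "right hemisphere":       rng.normal(1.2, 0.2, 11),
    "left fluent aphasic":    rng.normal(1.2, 0.3, 9),
    "left nonfluent aphasic": rng.normal(2.0, 0.5, 8),   # the more variable group
}

# One-way ANOVA across the five speaker groups.
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```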


Brain and Language | 1994

Speech Timing at the Sentence Level in Thai After Unilateral Brain Damage

Jack Gandour; Sumalee Dechongkit; Suvit Ponglorpisit; Fuangfa Khunadorn

The present study examined temporal characteristics of spoken sentences in Thai to evaluate timing control in brain-damaged patients with unilateral left and right hemisphere lesions. Subjects included 10 young and 10 old normal adults, 14 right hemisphere patients, and 9 left hemisphere patients (2 nonfluent and 7 fluent aphasics). Utterances were produced at a conversational speaking rate. Duration measures were taken from wide-band spectrograms. Results indicated that left hemisphere patients exhibited abnormal timing on both absolute and relative timing measures, whereas right hemisphere patients did not. Among left hemisphere patients, fluent as well as nonfluent aphasics exhibited aberrant temporal patterns. Left hemisphere patients were also more variable than right hemisphere patients, who, in turn, were more variable than normals in their production of sentences. In comparison to earlier findings on timing at the segment and word levels, it was found that deficit profiles varied between fluent and nonfluent aphasics as a function of the size of the linguistic unit, with nonfluent aphasics being affected at the segment, word, and sentence levels but fluent aphasics affected at the sentence level only. Findings are discussed in relation to issues pertaining to hemispheric specialization and the nature of timing deficits in nonfluent and fluent aphasic patients.
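
The distinction between absolute and relative timing measures can be made concrete with a minimal sketch: absolute timing is the raw duration of each measured unit, while relative timing expresses each unit as a proportion of the whole utterance. The segment durations below are hypothetical, not data from the study.

```python
# Absolute vs. relative timing from a hypothetical spectrographic segmentation.
# Durations are in milliseconds; the values are invented for illustration.
sentence_segments_ms = {"syll1": 180.0, "syll2": 140.0, "syll3": 210.0, "syll4": 160.0}

total_ms = sum(sentence_segments_ms.values())            # absolute sentence duration
relative = {k: v / total_ms for k, v in sentence_segments_ms.items()}

print(f"absolute sentence duration: {total_ms:.0f} ms")
for syllable, proportion in relative.items():
    print(f"{syllable}: {proportion:.2%} of the sentence")
```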
