Grace H. Yeni-Komshian
University of Maryland, College Park
Publications
Featured research published by Grace H. Yeni-Komshian.
Journal of the Acoustical Society of America | 1973
Alfonso Caramazza; Grace H. Yeni-Komshian; Edgar Zurif; E. Carbone
Cross‐language studies have shown that Voice Onset Time (VOT) is a sufficient cue to separate initial stop consonants into phonemic categories. The present study used VOT as a linguistic cue in examining the perception and production of stop consonants in three groups of subjects: unilingual Canadian French, unilingual Canadian English, and bilingual French‐English speakers. Perception was studied by having subjects label synthetically produced stop‐vowel syllables while production was assessed through spectrographic measurements of VOT in word‐initial stops. Six stop consonants, common to both languages, were used for these tasks. On the perception task, the two groups of unilingual subjects showed different perceptual crossovers with those of the bilinguals occupying an intermediate position. The production data indicate that VOT measures can separate voicing contrasts for speakers of Canadian English, but not for speakers of Canadian French. The data also show that language switching in bilinguals is w...
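As an illustration of the perceptual crossover analysis described above, the sketch below fits a logistic function to hypothetical labeling data from a synthetic VOT continuum and reads off the 50% point. The VOT steps and response proportions are invented for illustration, not taken from the study.

```python
# Illustrative sketch only: fitting a logistic function to labeling data from a
# synthetic VOT continuum and reading off the 50% perceptual crossover.
# The VOT steps and response proportions below are invented, not the study's data.
import numpy as np
from scipy.optimize import curve_fit

vot_ms = np.array([-10, 0, 10, 20, 30, 40, 50, 60], dtype=float)
prop_voiceless = np.array([0.02, 0.05, 0.10, 0.35, 0.80, 0.95, 0.98, 1.00])

def logistic(x, x0, k):
    """Two-parameter logistic: x0 is the 50% crossover, k the slope."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

(x0, k), _ = curve_fit(logistic, vot_ms, prop_voiceless, p0=[25.0, 0.2])
print(f"Estimated perceptual crossover: {x0:.1f} ms VOT (slope {k:.2f})")
```

Comparing fitted crossovers across unilingual and bilingual groups is one way the intermediate position of the bilinguals' crossovers could be quantified.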
Bilingualism: Language and Cognition | 2000
Grace H. Yeni-Komshian; James Emil Flege; Serena Liu
This study examined pronunciation proficiency in both the first (Korean) and second (English) languages of bilinguals. The participants were adult immigrants whose age of arrival in the USA ranged from 1 to 23 years. English and Korean sentences were rated by native listeners to obtain measures of pronunciation proficiency. English pronunciation of participants with ages of arrival of 1–5 years was close to that of monolinguals; heavier accents were noted as ages of arrival increased from 6 to 23 years. Korean pronunciation of participants with ages of arrival of 1–7 years was distinctly accented, while that of participants with ages of arrival of 12–23 years was rated the same as monolinguals. Participants with ages of arrival of 1–9 years pronounced English better than Korean, whereas the reverse was true for ages of arrival of 12–23 years. Overall, the results were more consistent with the view that deviations from native pronunciation result from interactions between the languages of bilinguals than with the view of a maturationally defined critical period for language learning.
Applied Psycholinguistics | 2000
Susan G. Guion; James Emil Flege; Serena H. Liu; Grace H. Yeni-Komshian
Research has shown that L2 utterances diverge increasingly from target-language phonetic norms as the age of L2 learning increases. Other research has suggested that L2 speakers produce longer utterances than native speakers. The aim of this study was to determine whether L2 utterance durations increase as age of learning increases. Fluently produced English sentences spoken by 240 native speakers of Italian and of Korean (selected on the basis of age of arrival [AOA]) were examined. For both L1 groups, the duration of English sentences was positively correlated with AOA. The AOA effect remained significant even when confounding variables were partialed out. These results are taken as preliminary support for the proposal that the more established the L1 is at the time of first exposure to the L2, the more it will interfere with L2 production, thus requiring greater processing resources to suppress it.
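The "partialing out" mentioned above can be illustrated with a small sketch: a partial correlation computed by residualizing both variables on a confound and then correlating the residuals. The confound name and all simulated values below are assumptions for illustration, not the study's data or its exact statistical model.

```python
# Illustrative sketch only: a partial correlation of sentence duration with AOA
# after regressing out a confound. The variable "residence" and all simulated
# values are assumptions, not the study's data.
import numpy as np

rng = np.random.default_rng(0)
aoa = rng.uniform(2, 23, 240)                           # age of arrival (years)
residence = 40 - aoa + rng.normal(0, 3, 240)            # hypothetical confound
duration = 2.0 + 0.02 * aoa + rng.normal(0, 0.1, 240)   # sentence duration (s)

def residualize(y, x):
    """Residuals of y after removing the linear effect of x."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

r_partial = np.corrcoef(residualize(duration, residence),
                        residualize(aoa, residence))[0, 1]
print(f"Partial correlation of sentence duration with AOA: {r_partial:.2f}")
```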
Journal of the Acoustical Society of America | 2006
Sandra Gordon-Salant; Grace H. Yeni-Komshian; Peter J. Fitzgibbons; Jessica Barrett
This study investigated age-related differences in sensitivity to temporal cues in modified natural speech sounds. Listeners included young noise-masked subjects, elderly normal-hearing subjects, and elderly hearing-impaired subjects. Four speech continua were presented to listeners, with stimuli from each continuum varying in a single temporal dimension. The acoustic cues varied in separate continua were voice-onset time, vowel duration, silence duration, and transition duration. In separate conditions, the listeners identified the word stimuli, discriminated two stimuli in a same-different paradigm, and discriminated two stimuli in a 3-interval, 2-alternative forced-choice procedure. Results showed age-related differences in the identification function crossover points for the continua that varied in silence duration and transition duration. All listeners demonstrated shorter difference limens (DLs) for the three-interval paradigm than the two-interval paradigm, with older hearing-impaired listeners showing larger DLs than the other listener groups for the silence duration cue. The findings support the general hypothesis that aging can influence the processing of specific temporal cues that are related to consonant manner distinctions.
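Purely as an illustration of how a difference limen for a temporal cue might be estimated in a forced-choice task, the sketch below runs a 2-down/1-up adaptive track against a simulated listener. The procedure, step size, and simulated listener are assumptions and may differ from the study's actual psychophysical methods.

```python
# Illustrative sketch only: estimating a temporal difference limen (DL) with a
# 2-down/1-up adaptive track run against a simulated listener. The simulated
# listener and all parameter values are assumptions, not the study's procedure.
import random

def simulated_listener(delta_ms, dl_true=20.0):
    """Probability of a correct forced-choice response grows with the cue difference."""
    p_correct = 0.5 + 0.5 * min(delta_ms / (2 * dl_true), 1.0)
    return random.random() < p_correct

def two_down_one_up(start_delta=60.0, step=4.0, n_trials=120):
    """Track the cue difference; converges near the 70.7%-correct point."""
    delta, n_correct, direction, reversals = start_delta, 0, None, []
    for _ in range(n_trials):
        if simulated_listener(delta):
            n_correct += 1
            if n_correct == 2:                 # two correct in a row -> make it harder
                n_correct = 0
                if direction == "up":
                    reversals.append(delta)
                direction = "down"
                delta = max(delta - step, 1.0)
        else:                                  # any error -> make it easier
            n_correct = 0
            if direction == "down":
                reversals.append(delta)
            direction = "up"
            delta += step
    last = reversals[-6:]                      # average the final reversal points
    return sum(last) / max(len(last), 1)

random.seed(1)
print(f"Estimated DL: {two_down_one_up():.1f} ms")
```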
Brain and Language | 1988
Frances Shipley-Brown; William Orr Dingwall; Charles I. Berlin; Grace H. Yeni-Komshian; Sandra Gordon-Salant
In a previous study of the comprehension of linguistic prosody in brain-damaged subjects, S. R. Grant and W. O. Dingwall (1984. The role of the right hemisphere in processing linguistic prosody, presentation at the Academy of Aphasia, 1984) demonstrated that the right hemisphere (RH) of nonaphasic patients plays a prominent role in the processing of stress and intonation. The present study examines laterality for affective and linguistic prosody using the dichotic listening paradigm. Both types of prosody elicited a significant left ear advantage. This advantage was more pronounced for affective than for linguistic prosody. These findings strongly support previously documented evidence of RH involvement in the processing of affective prosody (R. G. Ley & M. P. Bryden, 1982. A dissociation of right and left hemispheric effects for recognizing emotional tone and verbal content, Brain and Cognition, 1, 3-9). They also provide support for the previously mentioned demonstration of RH involvement in the processing of linguistic intonation (S. Blumstein & W. E. Cooper, 1974. Hemispheric processing of intonation contours, Cortex, 10, 146-158; Grant & Dingwall, 1984).
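A left ear advantage of the kind reported above is commonly summarized with a laterality index computed from each ear's correct reports. The sketch below shows one such computation on invented scores; it is not the study's analysis.

```python
# Illustrative sketch only: a laterality index from dichotic listening scores.
# Negative values indicate a left ear advantage; the scores below are invented.
def laterality_index(right_correct, left_correct):
    """(R - L) / (R + L): positive -> right ear advantage, negative -> left."""
    return (right_correct - left_correct) / (right_correct + left_correct)

# Hypothetical per-listener (right-ear, left-ear) correct-report counts.
affective = [(18, 27), (15, 24), (20, 26)]
linguistic = [(21, 25), (22, 24), (20, 23)]

for label, scores in [("affective", affective), ("linguistic", linguistic)]:
    mean_li = sum(laterality_index(r, l) for r, l in scores) / len(scores)
    print(f"Mean laterality index, {label} prosody: {mean_li:+.2f}")
```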
Neuropsychologia | 1983
Brenda J. Spiegler; Grace H. Yeni-Komshian
Hand preference data were obtained for 1816 university students, 4793 siblings, and 3632 parents. The results support the following conclusions. (1) There is currently a 13.8% incidence of left-handedness among young adults, representing a dramatic increase over past generations. (2) Left- and right-handed respondents do not differ in terms of familial sinistrality. (3) Mothers' left-handedness is associated with an increased incidence of sinistrality in both sons and daughters, whereas fathers' left-handedness is related only to sons.
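As a sketch of how a familial association like conclusion (3) could be tested, the example below applies a chi-square test of independence to a 2x2 table of maternal and offspring handedness. The counts are invented and the choice of test is an assumption, not the paper's reported analysis.

```python
# Illustrative sketch only: chi-square test of association between maternal
# handedness and offspring handedness. The counts below are invented.
import numpy as np
from scipy.stats import chi2_contingency

#                   offspring right-handed, offspring left-handed
table = np.array([[1400, 180],    # mother right-handed
                  [130, 40]])     # mother left-handed

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.1f}, df = {dof}, p = {p:.4f}")
```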
Journal of the Acoustical Society of America | 2010
Sandra Gordon-Salant; Grace H. Yeni-Komshian; Peter J. Fitzgibbons
This study investigated the effects of age and hearing loss on the perception of accented speech presented in quiet and in noise. The relative importance of alterations in phonetic segments versus temporal patterns in a carrier phrase with accented speech was also examined. English sentences recorded by a native English speaker and a native Spanish speaker, together with hybrid sentences that varied the native language of the speaker of the carrier phrase and of the final target word of the sentence, were presented in quiet and in noise to younger and older listeners with normal hearing and to older listeners with hearing loss. Effects of age and hearing loss were observed in both listening environments but varied with speaker accent. All groups exhibited lower recognition performance for the final target word spoken by the accented speaker than for that spoken by the native speaker, indicating that alterations in segmental cues due to accent play a prominent role in intelligibility. Effects of the carrier phrase were minimal. The findings indicate that recognition of accented speech, especially in noise, is a particularly challenging communication task for older people.
Journal of the Acoustical Society of America | 2010
Sandra Gordon-Salant; Grace H. Yeni-Komshian; Peter J. Fitzgibbons
This investigation examined the effects of listener age and hearing loss on the recognition of accented speech. The speech materials were isolated English words and sentences that featured phonemes often mispronounced by non-native speakers of English whose first language is Spanish. These stimuli were recorded by a native speaker of English and by two non-native speakers of English: one with a mild accent and one with a moderate accent. The stimuli were presented in quiet to younger and older adults with normal hearing and to older adults with hearing loss. Analysis of percent-correct recognition scores showed that all listeners performed more poorly with increasing accent, and that older listeners with hearing loss performed more poorly than the younger and older normal-hearing listeners in all accent conditions. Context and age effects were minimal. Consonant confusion patterns in the moderate-accent condition showed that the error patterns of all listeners reflected temporal alterations with accented speech, with major errors involving word-final consonant voicing in stops and fricatives and word-initial fricatives.
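The consonant confusion patterns mentioned above can be tabulated as a confusion matrix over stimulus/response pairs. The sketch below builds such a matrix from a handful of invented trials; it is illustrative only, not the study's data.

```python
# Illustrative sketch only: tabulating a consonant confusion matrix from
# (intended consonant, reported consonant) pairs. The trials below are invented.
from collections import Counter

trials = [("b", "p"), ("b", "p"), ("b", "b"), ("z", "s"),
          ("z", "s"), ("d", "t"), ("d", "d"), ("v", "f")]

confusions = Counter(trials)
consonants = sorted({c for pair in trials for c in pair})

print("    " + " ".join(f"{c:>3}" for c in consonants))
for target in consonants:
    row = " ".join(f"{confusions.get((target, resp), 0):>3}" for resp in consonants)
    print(f"{target:>3} {row}")
```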
Journal of the Acoustical Society of America | 2008
Sandra Gordon-Salant; Grace H. Yeni-Komshian; Peter J. Fitzgibbons
Prior investigations, using isolated words as stimuli, have shown that older listeners tend to require longer temporal cues than younger listeners to switch their percept from one word to its phonetically contrasting counterpart. The present study investigated the extent to which this age effect occurs in sentence contexts. The hypothesis was that the perception of temporal cues differs for words presented in isolation and in a sentence context, and that this effect may vary between younger and older listeners. Younger and older listeners with normal hearing and older listeners with hearing loss identified phonetically contrasting word pairs in natural speech continua that each varied a single temporal cue: voice-onset time, vowel duration, transition duration, or silent-interval duration. The words were presented in isolation and in sentences. A context effect was found for most continua, in which listeners required longer temporal cues in sentences than in isolated words. Additionally, older listeners required longer cues at the crossover points than younger listeners for most, but not all, continua. In general, the findings support the conclusion that older listeners tend to require longer target temporal cues than younger normal-hearing listeners in identifying phonetically contrasting word pairs in both isolation and sentence contexts.
Journal of the Acoustical Society of America | 1977
Grace H. Yeni-Komshian; Moise H. Goldstein
A vocoder-type speech analyzer interfaced to the tactile display of an Optacon was used to investigate how subjects learned, over a six-week period, to identify the vibrotactile patterns of different speech signals. Closed sets of four vowel durations, three different vowels, and four spondee words constituted the test material. Learning to identify vibrotactile patterns required many hours of training. Subjects in this study showed significant improvement in the identification of all three types of speech signals, especially the vowel durations and spondee words. Tests for transfer at the end of training showed that shifting the locus of stimulation did not result in a decrement in performance.
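A rough sketch of the kind of vocoder-style analysis described above is given below: band-pass filtering, envelope extraction, and downsampling to frame rates suitable for driving a multichannel tactile display. The band edges, channel count, filter orders, and placeholder signal are all assumptions, not the specifications of the analyzer used in the study.

```python
# Illustrative sketch only: a minimal channel-vocoder analysis front end.
# Band edges, channel count, filter orders, and the chirp stand-in for speech
# are assumptions, not the parameters of the analyzer used in the study.
import numpy as np
from scipy.signal import butter, sosfiltfilt, chirp

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
speech = chirp(t, f0=200, f1=4000, t1=0.5)   # placeholder signal standing in for speech

def band_envelope(signal, low_hz, high_hz, fs, env_cut_hz=30):
    """Band-pass filter, rectify, and low-pass to get a slowly varying envelope."""
    sos = butter(4, [low_hz, high_hz], btype="bandpass", fs=fs, output="sos")
    band = sosfiltfilt(sos, signal)
    sos_env = butter(2, env_cut_hz, btype="lowpass", fs=fs, output="sos")
    return sosfiltfilt(sos_env, np.abs(band))

edges = np.geomspace(200, 5000, 25)          # 24 contiguous analysis bands
envelopes = [band_envelope(speech, lo, hi, fs) for lo, hi in zip(edges[:-1], edges[1:])]
frames = [env[::160] for env in envelopes]   # ~100 envelope frames per second per channel
print(f"{len(frames)} channels x {len(frames[0])} frames for the tactile display")
```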