Amy T. Neel
University of New Mexico
Publications
Featured research published by Amy T. Neel.
Dysphagia | 2005
Phyllis M. Palmer; Timothy M. McCulloch; Debra Jaffe; Amy T. Neel
A sour bolus has been used as a modality in the treatment of oropharyngeal dysphagia based on the hypothesis that this stimulus provides an effective preswallow sensory input that lowers the threshold required to trigger a pharyngeal swallow. The result is a more immediate swallow onset time. Additionally, the sour bolus may invigorate the oral muscles resulting in stronger contractions during the swallow. The purpose of this investigation was to compare the intramuscular electromyographic activity of the mylohyoid, geniohyoid, and anterior belly of the digastric muscles during sour and water boluses with regard to duration, strength, and timing of muscle activation. Muscle duration, swallow onset time, and pattern of muscle activation did not differ for the two bolus types. Muscle activation time was more tightly approximated across the onsets of the three muscles when a sour bolus was used. A sour bolus also resulted in a stronger muscle contraction as evidenced by greater electromyographic activity. These data support the use of a sour bolus as part of a treatment paradigm.
Journal of the Acoustical Society of America | 1994
Diane Kewley-Port; Xiaofeng Li; Yijian Zheng; Amy T. Neel
The present experiments examined the effect of fundamental frequency (F0) on thresholds for the discrimination of formant frequency for male vowels. Thresholds for formant-frequency discrimination were obtained for six vowels with two fundamental frequencies: normal F0 (126 Hz) and low F0 (101 Hz). Four well-trained subjects performed an adaptive tracking task under low stimulus uncertainty. Comparisons between the normal-F0 and the low-F0 conditions showed that formants were resolved more accurately for low F0. These thresholds for male vowels were compared to thresholds for female vowels previously reported by Kewley-Port and Watson [J. Acoust. Soc. Am. 95, 485-496 (1994)]. Analyses of the F0 sets demonstrated that formant thresholds were significantly degraded for increases both in formant frequency and in F0. A piece-wise linear function was fit to each of the three sets of delta F thresholds as a function of formant frequency. The shape of the three parallel functions was similar such that delta F was constant in the F1 region and increased with formant frequency in the F2 region. The capability for humans to discriminate formant frequency may therefore be described as uniform in the F1 region (< 800 Hz) when represented as delta F and also uniform in the F2 region when represented as a ratio of delta F/F. A model of formant discrimination is proposed in which the effects of formant frequency are represented by the shape of an underlying piece-wise linear function. Increases in F0 significantly degrade overall discrimination independently from formant frequency.
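As an illustration of the piecewise-linear threshold model described in this abstract (delta F roughly constant in the F1 region below about 800 Hz, and delta F growing in proportion to formant frequency in the F2 region), here is a minimal sketch in Python. The breakpoint and threshold values are placeholders chosen for illustration, not the fitted parameters reported in the paper.

```python
# Illustrative sketch of a piecewise-linear formant-discrimination threshold
# function. The numbers below are placeholders, not the values fitted by
# Kewley-Port and colleagues.

def formant_threshold_hz(formant_hz, breakpoint_hz=800.0,
                         delta_f_low=14.0, weber_fraction=0.015):
    """Return an illustrative discrimination threshold (delta F, in Hz).

    Below the breakpoint (roughly the F1 region), delta F is treated as
    constant; above it (the F2 region), delta F grows in proportion to
    formant frequency, so delta F / F is constant.
    """
    if formant_hz <= breakpoint_hz:
        return delta_f_low
    return weber_fraction * formant_hz

for f in (500, 800, 1500, 2500):
    print(f, "Hz ->", round(formant_threshold_hz(f), 1), "Hz")
```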
Acoustics Research Letters Online-arlo | 2004
Amy T. Neel
Changes in formant frequency over time are important for vowel identification: listeners identify stimuli containing time-varying formants better than stimuli with steady-state formants. Statistically based pattern classifiers used as models for human perception have shown that very coarse representations of formant change over time result in accurate classification of American English vowels. In this study, using synthetic stimuli with five levels of formant contour detail, human listeners achieved maximum vowel identification for relatively coarse representations of formant movement containing information about onset, offset, and midpoint frequencies. More detailed representations of contour did not improve identification for most vowels.
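The coarse contour representation described here can be pictured as reducing a full formant track to just its onset, midpoint, and offset frequencies. The sketch below shows that reduction under the assumption that the track is simply a sequence of frequency samples; the actual stimulus synthesis details are not reproduced.

```python
# Minimal sketch of reducing a formant track to the coarse three-point
# representation (onset, offset, and midpoint frequencies) described above.

def three_point_contour(formant_track):
    """Return (onset, midpoint, offset) frequencies from a formant track.

    formant_track: sequence of formant-frequency samples (Hz) spanning the
    duration of the vowel.
    """
    if not formant_track:
        raise ValueError("empty formant track")
    onset = formant_track[0]
    midpoint = formant_track[len(formant_track) // 2]
    offset = formant_track[-1]
    return onset, midpoint, offset

# Example: a hypothetical rising-then-falling F2 track.
print(three_point_contour([1800, 1900, 2000, 2050, 1950, 1850]))
```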
Journal of Speech Language and Hearing Research | 2015
Amy T. Neel; Phyllis M. Palmer; Gwyneth Sprouls; Leslie Morrison
PURPOSE We documented speech and voice characteristics associated with oculopharyngeal muscular dystrophy (OPMD). Although it is a rare disease, OPMD offers the opportunity to study the impact of myopathic weakness on speech production in the absence of neurologic deficits in a relatively homogeneous group of speakers. METHODS Twelve individuals with OPMD and 12 healthy age-matched controls underwent comprehensive assessment of the speech mechanism including spirometry (respiratory support), nasometry (resonance balance), phonatory measures (pitch, loudness, and quality), articulatory measures (diadochokinetic rates, segment duration measures, spectral moments, and vowel space), tongue-to-palate strength measures during maximal isometric and speechlike tasks, quality-of-life questionnaire, and perceptual speech ratings by listeners. RESULTS Individuals with OPMD had substantially reduced tongue strength compared to the controls. However, little impact on speech and voice measures or on speech intelligibility was observed except for slower diadochokinetic rates. CONCLUSIONS Despite having less than half the maximal tongue strength of healthy controls, the individuals with OPMD exhibited minimal speech deficits. The threshold of weakness required for noticeable speech impairment may not have been reached by this group of adults with OPMD.
Journal of the Acoustical Society of America | 1997
Amy T. Neel
Vowel identification generally poses little difficulty for listeners with mild to moderate hearing impairment despite evidence for poorer frequency resolution compared to normal‐hearing listeners. Normal‐hearing listeners use formant movement and duration cues as well as spectral target cues to identify vowels. This study examines use of formant movement and duration cues by hearing‐impaired listeners. Sets of very natural‐sounding vowel stimuli with and without formant movement cues (dynamic and static) and with and without duration cues (appropriate and fixed) were made with a new speech resynthesis technique [H. Kawahara, Proc. ICASSP, 1–4 (1997)] using vowels produced by male and female talkers in /dVd/ context. Two groups of young normal‐hearing listeners, one with simulated hearing loss typical of 70 year‐old males and one control group, displayed no difference in overall vowel identification scores. Both groups obtained significant benefit from tokens containing formant movement cues. Duration was ...
Journal of the Acoustical Society of America | 1996
Amy T. Neel; Diane Kewley-Port
The importance of dynamic formant information for vowel identification has been shown by several studies in recent years. Using sine‐wave vowel analogs, Neel and Kewley‐Port [J. Acoust. Soc. Am. 96, 3284(A) (1994)] demonstrated that vowel duration is an important cue for vowel identity and that dynamic formant information is more effective when duration cues are absent. However, because identification rates for sine‐wave stimuli were low, a training study was conducted to determine the impact of training on the effectiveness of dynamic formant and duration cues for vowel identification. Sine‐wave stimuli consisted of two tones representing F1 and F2 from 10 vowels produced by a female speaker. Four types of stimuli were constructed by varying two factors: (1) dynamic versus static formants and (2) appropriate versus fixed vowel duration. Listeners were trained to criterion on one set of stimuli and were tested on another. Training significantly improved identification accuracy. In comparison to performanc...
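The sine-wave analogs described here consist of two tones tracking F1 and F2. The following is a minimal sketch of how such a stimulus could be generated, with "dynamic" versus "static" versions created by supplying a time-varying or a constant formant track; the frequencies, durations, and amplitude weighting are placeholders, not the stimulus parameters used in the study.

```python
# Sketch of a two-tone sine-wave vowel analog: one sinusoid follows F1 and a
# second follows F2. Parameter values are illustrative only.
import numpy as np

def sine_wave_vowel(f1_track, f2_track, duration_s, sample_rate=16000):
    """Synthesize a sine-wave vowel analog from F1 and F2 frequency tracks."""
    n = int(duration_s * sample_rate)
    t = np.arange(n) / sample_rate
    # Interpolate the coarse formant tracks to one value per sample.
    track_times = np.linspace(0.0, duration_s, num=len(f1_track))
    f1 = np.interp(t, track_times, f1_track)
    f2 = np.interp(t, track_times, f2_track)
    # Integrate instantaneous frequency to obtain phase, then sum the tones.
    phase1 = 2 * np.pi * np.cumsum(f1) / sample_rate
    phase2 = 2 * np.pi * np.cumsum(f2) / sample_rate
    return np.sin(phase1) + 0.5 * np.sin(phase2)

dynamic = sine_wave_vowel([650, 600, 550], [1700, 1850, 2000], duration_s=0.25)
static = sine_wave_vowel([600, 600], [1850, 1850], duration_s=0.25)
```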
Journal of the Acoustical Society of America | 2007
Amy T. Neel; Jean E. Andruski
Acoustic characteristics of ten vowels produced by 45 men and 48 women from the Hillenbrand et al. (1995) study were correlated with identification accuracy. Global (mean f0, F1 and F2, duration, and amount of formant movement) and distinctive measures (vowel space area, mean distance among vowels, f0, F1 and F2 ranges, duration ratio between long and short vowels, and dynamic ratio between dynamic and static vowels) were used to predict identification scores. Global and distinctive measures accounted for less than one‐fourth of variance in identification scores: vowel space area alone accounted for 9% to 12% of variance. Differences in vowel identification across talkers were largely due to poor identification of two spectrally similar vowel pairs /ae/‐/eh/ and /uh/‐/ah/. Results of acoustic analysis and goodness ratings for well‐identified and poorly identified versions of these vowels will be presented. Preliminary analysis revealed that well‐identified vowels differed from poorly identified tokens not...
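One of the global measures named here, vowel space area, is commonly computed as the area of the polygon formed by mean (F1, F2) points for the corner vowels. The sketch below uses the shoelace formula with hypothetical formant values; it is not data from Hillenbrand et al. (1995).

```python
# Sketch of a vowel space area calculation: area of the polygon formed by
# mean (F1, F2) points for the corner vowels, via the shoelace formula.
# The formant values are hypothetical.

def polygon_area(points):
    """Shoelace formula for the area of a polygon given (x, y) vertices in order."""
    area = 0.0
    for (x1, y1), (x2, y2) in zip(points, points[1:] + points[:1]):
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# Hypothetical mean (F1, F2) values in Hz for /i/, /ae/, /a/, /u/.
corner_vowels = [(300, 2300), (700, 1800), (750, 1100), (350, 900)]
print(f"vowel space area: {polygon_area(corner_vowels):.0f} Hz^2")
```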
Journal of the Acoustical Society of America | 1998
Diane Kewley-Port; Amy T. Neel
Although resolution for formant frequency is considerably better than needed for vowel identification in English, adjacent vowels in the F1‐by‐F2 plane may not be equally discriminable. Kuhl has reported better discrimination for poor vowel exemplars than good exemplars (‘‘perceptual magnet effect’’) for a few vowels in high stimulus uncertainty tasks. The purpose of this experiment was to determine the relation between discrimination thresholds and the judgment of vowels as good, confusable, or non‐English. A large F1‐by‐F2 vowel space (190 tokens) encompassing five English front vowels was synthesized in equal bark steps. Based on an identification task with goodness ratings, two vowels representing each goodness category were selected for the discrimination task. Eight listeners participated in a minimal uncertainty discrimination task to estimate thresholds for F1 and F2. Analysis of variance of the 12 thresholds (converted to Δ Barks) found that thresholds for non‐English vowels were significantly po...
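The thresholds here are expressed in Δ Barks. As an illustration of that conversion, the sketch below uses the Traunmüller (1990) Hz-to-Bark approximation; the abstract does not state which conversion the authors used, so treat the formula choice as an assumption.

```python
# Converting a discrimination threshold in Hz to a difference in Barks,
# using the Traunmüller (1990) approximation of the Bark scale.

def hz_to_bark(freq_hz):
    """Traunmüller (1990) approximation of the Bark scale."""
    return 26.81 * freq_hz / (1960.0 + freq_hz) - 0.53

def delta_bark(formant_hz, threshold_hz):
    """Express a threshold in Hz as a difference in Barks at a given formant."""
    return hz_to_bark(formant_hz + threshold_hz) - hz_to_bark(formant_hz)

# Example: a 60 Hz threshold on a 2000 Hz F2 expressed in Barks.
print(round(delta_bark(2000.0, 60.0), 3))
```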
Journal of the Acoustical Society of America | 1995
Amy T. Neel
Errors made by listeners in sentence transcription were analyzed to determine their contributions to intelligibility. One hundred Harvard sentences produced by ten males and ten females were transcribed by ten listeners per talker. Two measures of intelligibility were obtained: a keyword score, in which a sentence was correct if, and only if, all five keywords were correct; and a total error count. Analysis of error types revealed that typing/spelling errors accounted for a third of total errors, and phonetic errors (consonant and vowel errors) accounted for another third. The remainder were semantic errors, added or deleted words, or unclassifiable errors. Further analysis of consonant errors did not reveal any particular type of consonant to be more susceptible to error than others. Male talkers had significantly worse keyword scores than female talkers but did not have significantly greater total error counts, indicating that errors for female talkers fell more often on function words. The difference between high- and low-intelligibility speakers (by total error count) was accounted for by increased typing/spelling and consonant errors. Acoustic analysis of incorrectly transcribed words suggested that phonetic errors originated in the mouths of speakers, while errors such as word substitutions arose in the ears (or brains) of listeners.
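The all-or-nothing keyword score described above can be sketched as follows. This is a simplified illustration: real scoring would also normalize spelling and handle morphological variants, which this sketch ignores, and the example sentence and keywords are hypothetical.

```python
# Sketch of an all-or-nothing keyword score: a sentence counts as correct only
# if every keyword appears in the transcription.

def keyword_correct(transcription, keywords):
    """Return True only if all keywords appear in the transcription."""
    words = transcription.lower().split()
    return all(kw.lower() in words for kw in keywords)

def keyword_score(transcriptions, keyword_lists):
    """Proportion of sentences with all keywords transcribed correctly."""
    hits = sum(keyword_correct(t, kws)
               for t, kws in zip(transcriptions, keyword_lists))
    return hits / len(keyword_lists)

# Hypothetical example with one Harvard-style sentence.
print(keyword_score(["the birch canoe slid on the smooth planks"],
                    [["birch", "canoe", "slid", "smooth", "planks"]]))
```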
Journal of the Acoustical Society of America | 2016
Amy T. Neel
In order to determine which aspects of speech contribute most to speech intelligibility, listeners rated several speech features for sentences produced by speakers with dysarthria and for passages read aloud by speakers of English as a second language. For the dysarthric speech, speech components included rate, stress, intonation, articulation, voice quality, and nasality in addition to speech intelligibility. For the accented English passages, the speech components were rate, stress, intonation, articulation, and other aspects of speech as well as overall intelligibility, accentedness, and language competence. For both types of speech, articulation was the best predictor of intelligibility ratings. Voice quality, rate, and nasality made minor contributions to intelligibility ratings for dysarthric speech. Rate also contributed slightly to intelligibility ratings for the accented English passages. Potential differences in the salience of speech components for listeners will be discussed. Use of the percep...
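The kind of analysis implied here can be sketched as correlating listener ratings of each speech component with overall intelligibility ratings to see which component predicts intelligibility best. All ratings in the sketch below are invented solely to make the example run; they are not data from the study, and the actual statistical approach used by the author is not specified in the abstract.

```python
# Sketch: correlate ratings of individual speech components with overall
# intelligibility ratings. The ratings are fabricated for illustration.
import numpy as np

intelligibility = np.array([2.5, 4.5, 1.5, 3.0, 5.0, 2.0])
component_ratings = {
    "rate":          np.array([3.0, 4.0, 2.5, 3.5, 4.0, 3.0]),
    "stress":        np.array([3.5, 4.0, 3.0, 3.5, 4.5, 3.5]),
    "intonation":    np.array([3.0, 4.5, 3.5, 3.0, 4.0, 3.5]),
    "articulation":  np.array([2.0, 4.5, 1.5, 3.0, 5.0, 2.5]),
    "voice quality": np.array([3.0, 4.0, 2.5, 3.5, 4.5, 3.0]),
    "nasality":      np.array([4.0, 4.5, 3.5, 4.0, 4.5, 4.0]),
}

for name, ratings in component_ratings.items():
    r = np.corrcoef(ratings, intelligibility)[0, 1]
    print(f"{name}: r = {r:.2f}")
```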