
Publication


Featured research published by Robert L. Sherbecoe.


Journal of the Acoustical Society of America | 1999

Monosyllabic word recognition at higher-than-normal speech and noise levels

Gerald A. Studebaker; Robert L. Sherbecoe; D. Michael McDaniel; Catherine Gwaltney

The effects of intensity on monosyllabic word recognition were studied in adults with normal hearing and mild-to-moderate sensorineural hearing loss. The stimuli were bandlimited NU#6 word lists presented in quiet and talker-spectrum-matched noise. Speech levels ranged from 64 to 99 dB SPL and S/N ratios from 28 to -4 dB. In quiet, the performance of normal-hearing subjects remained essentially constant; in noise, at a fixed S/N ratio, it decreased as a linear function of speech level. Hearing-impaired subjects performed like normal-hearing subjects tested in noise when the data were corrected for the effects of audibility loss. From these and other results, it was concluded that: (1) speech intelligibility in noise decreases when speech levels exceed 69 dB SPL and the S/N ratio remains constant; (2) the effects of speech and noise level are synergistic; (3) the deterioration in intelligibility can be modeled as a relative increase in the effective masking level; (4) normal-hearing and hearing-impaired subjects are affected similarly by increased signal level when differences in speech audibility are considered; (5) the negative effects of increasing speech and noise levels on speech recognition are similar for all adult subjects, at least up to 80 years; and (6) the effective dynamic range of speech may be larger than the commonly assumed value of 30 dB.


Journal of the Acoustical Society of America | 1987

A frequency importance function for continuous discourse

Gerald A. Studebaker; Chaslav V. Pavlovic; Robert L. Sherbecoe

Normal hearing subjects estimated the intelligibility of continuous discourse (CD) passages spoken by three talkers (two male and one female) under 135 conditions of filtering and signal-to-noise ratio. The relationship between the intelligibility of CD and the articulation index (the transfer function) was different from any found in ANSI S3.5-1969. Also, the lower frequencies were found to be relatively more important for the intelligibility of CD than for identification of nonsense syllables and other types of speech for which data are available except for synthetic sentences [Speaks, J. Speech Hear. Res. 10, 289-298 (1967)]. The frequency which divides the auditory spectrum into two equally important halves (the crossover frequency) was found to be about 0.5 oct lower for the CD used in this study than the crossover frequency for male talkers of nonsense syllables found in ANSI S3.5-1969 and about 0.7 oct lower than the one for combined male and female talkers of nonsense syllables reported by French and Steinberg [J. Acoust. Soc. Am. 19, 90-119 (1947)].
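An articulation index of this kind is, at its core, an importance-weighted sum of per-band audibility. The sketch below illustrates only that basic weighted-sum idea; the band weights and audibility values are made-up numbers for illustration, not the importance function or transfer function derived in this study.

```python
# Minimal sketch of an articulation-index-style calculation:
# AI = sum over frequency bands of (importance weight x audibility),
# where audibility is the audible fraction (0 to 1) of the speech
# cues in each band. All numbers below are hypothetical.

def articulation_index(importances, audibilities):
    """Importance-weighted sum of band audibilities.

    The importance weights are assumed to sum to 1.0, so the
    result lies between 0 (nothing audible) and 1 (fully audible).
    """
    if len(importances) != len(audibilities):
        raise ValueError("band counts must match")
    return sum(i * a for i, a in zip(importances, audibilities))

# Five hypothetical bands: importance weights (summing to 1.0) and
# the audible proportion of the speech signal in each band.
weights = [0.15, 0.25, 0.25, 0.20, 0.15]
audible = [1.0, 1.0, 0.8, 0.5, 0.2]
ai = articulation_index(weights, audible)
```

A transfer function such as the one derived in this paper would then map an AI value like this onto a predicted intelligibility score for the test material.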


Journal of the Acoustical Society of America | 2002

Intensity-importance functions for bandlimited monosyllabic words

Gerald A. Studebaker; Robert L. Sherbecoe

A study was carried out to determine the relative importance to speech intelligibility of different intensities within the speech dynamic range. The functions that were derived are analogous to previous descriptions of the relative importance of different frequencies and are referred to here as intensity-importance functions (IIFs). They were obtained as follows. Sharply filtered bands of speech (NU6 monosyllabic words) were mixed with filtered noise and presented alone or in pairs at 19 signal-to-noise ratios (-25 to 41 dB). When paired bands were tested, the level and signal-to-noise ratio (SNR) of one band were held constant while the level and SNR of the other band were varied. The listeners were 100 normal hearers, organized into five 20-person groups. Each group provided speech recognition data for one of five frequency regions (141-562, 562-1122, 1122-1778, 1778-2818, and 2818-8913 Hz). Comparisons of the results for each group indicated that IIFs vary with frequency and SNR. Current methods for predicting intelligibility from physical measurements of speech audibility would need to be revised in order to take such findings into consideration.


International Journal of Audiology | 2004

Supplementary formulas and tables for calculating and interconverting speech recognition scores in transformed arcsine units.

Robert L. Sherbecoe; Gerald A. Studebaker

Formulas that convert speech recognition scores, in percent or proportions, into units based on the arcsine transform have been described previously. This report reviews that work and presents various supplementary equations and tables for calculating and interconverting the proposed units. The relative merits of these data and their application to scores from closed-set tests are also discussed.
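The units in question are based on Studebaker's rationalized arcsine transform: a score of X correct out of N items is mapped through the two-term arcsine transform and linearly rescaled so that the result roughly tracks percent correct over the middle of the range. A minimal sketch, assuming the widely cited 1985 constants (146/pi and -23); it does not reproduce the supplementary tables described in this report:

```python
import math

def rau(correct, n_items):
    """Rationalized arcsine units for a score of `correct` out of `n_items`.

    theta = arcsin(sqrt(X / (N + 1))) + arcsin(sqrt((X + 1) / (N + 1)))
    RAU   = (146 / pi) * theta - 23
    """
    theta = (math.asin(math.sqrt(correct / (n_items + 1)))
             + math.asin(math.sqrt((correct + 1) / (n_items + 1))))
    return (146.0 / math.pi) * theta - 23.0

# A 50% score on a 50-item list lands near 50 RAU, while 0% and 100%
# map to values somewhat below 0 and above 100, respectively.
midpoint = rau(25, 50)
```

Unlike raw percentages, scores in these units have an approximately uniform variance across the range, which is what makes them convenient for the interconversions discussed above.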


Journal of the Acoustical Society of America | 1988

Frequency importance functions for a feature recognition test material

Gerald A. Studebaker; Chaslav V. Pavlovic; Robert L. Sherbecoe

The relative importance of different parts of the auditory spectrum to recognition of the Diagnostic Rhyme Test (DRT) and its six speech feature subtests was determined. Three normal hearing subjects were tested twice in each of 70 experimental conditions. The analytical procedures of French and Steinberg [J. Acoust. Soc. Am. 19, 90-119 (1947)] were applied to the data to derive frequency importance functions for each of the DRT subtests and the test as a whole over the frequency range 178-8912 Hz. For the DRT as a whole, the low frequencies were found to be more important than is the case for nonsense syllables. Importance functions for the feature subtests also differed from those for nonsense syllables and from each other as well. These results suggest that test materials loaded with different proportions of particular phonemes have different frequency importance functions. Comparison of the results with those from other studies suggests that importance functions depend to a degree on the available response options as well.


Ear and Hearing | 2003

Audibility-index predictions of normal-hearing and hearing-impaired listeners' performance on the connected speech test.

Robert L. Sherbecoe; Gerald A. Studebaker

Objective: In a previous study (Sherbecoe & Studebaker, 2002), we derived a frequency-importance function and a transfer function for the audio compact disc version of the Connected Speech Test (CST). The current investigation evaluated the validity of these audibility-index (AI) functions based on how well they predicted data from four published studies that presented the CST to normal-hearing and hearing-impaired subjects.

Design: AI values were calculated for the test conditions received by 78 normal-hearing and 72 hearing-impaired subjects from the selected studies. The observed CST scores and AI values for these conditions/subjects were then plotted and the dispersion of the data compared to the expected range based on critical differences. The AI values for the conditions/subjects were also converted into expected CST scores and subtracted from their corresponding observed scores to determine the distribution of the resulting difference scores and the relationship between the difference scores and subject age.

Results: Good predictions were obtained for normal-hearing subjects who had been tested under audio-only conditions but not those who had received audiovisual tests. The expected scores for the latter subjects were too low when the AI accounted only for audibility and too high when it included the correction for visual cues from ANSI S3.5-1997. All of the hearing-impaired subjects had been tested under audio-only conditions. In their case, the mean difference between the observed and the expected scores was comparable with the audio-only mean for the normal-hearing subjects when the AI included corrections for speech level distortion and hearing loss desensitization. However, the hearing-impaired subject data had greater variability. The predictions for these subjects also decreased in accuracy when subject age increased beyond 70 yr despite the application of an AI correction for age.

Conclusions: The results of this study suggest that the AI functions derived for the CST satisfactorily predict the scores of normal-hearing subjects when they listen in speech babble under audio-only conditions but not when they receive visual cues. To obtain accurate predictions for the audiovisual form of the CST, it will be necessary to develop new ANSI-style AI correction equations for visual cues or new AI functions based on audiovisual test scores. If the current AI functions are used to predict the scores of hearing-impaired listeners tested under audio-only conditions, the AI should include corrections for the effects of speech level and hearing loss. A correction for subject age also could be applied, if it seems appropriate to do so. In either case, however, the predictions are still likely to be less accurate than the predictions for normal-hearing subjects. This may be because speech recognition deficits in people with hearing loss are not due solely to diminished audibility. Hearing-impaired subjects, particularly if they are elderly, also may be more susceptible to masking effects or other factors not accounted for by the AI.


Journal of the Acoustical Society of America | 1990

Regression equations for the transfer functions of ANSI S3.5-1969.

Robert L. Sherbecoe; Gerald A. Studebaker

Regression equations for the transfer functions displayed in Fig. 15 of ANSI Standard S3.5-1969 are reported. Predicted values are within 2.15 percentage points of the expected values for most inputs.


Ear and Hearing | 1993

Speech spectra for six recorded monosyllabic word tests.

Robert L. Sherbecoe; Gerald A. Studebaker; Margie R. Crawford

Speech spectra (long-term RMS levels and 1% speech peaks) in third-octave bands were determined for six monosyllabic word test materials: digital recordings of the Central Institute for the Deaf W-22 word test and the Northwestern University NU-6 word test obtained from Qualitone; audio-tape recordings of the Central Institute for the Deaf W-22 word test, the Northwestern University NU-6 word test, and the Harvard Psycho-Acoustic Laboratory PB-50 word test obtained from Auditec of St. Louis; and an audio-tape recording of the Maryland CNC word test obtained from Olsen Distributors. The spectra were generally within 2 SD of previous results for continuous speech spoken by an average male talker [Cox, Matesich, & Moore, 1988; Cox & Moore, 1988], but differed sufficiently from those data and from one another to affect the accuracy of Articulation Index calculations. The relationship between the level of the calibration tone and the speech in the third-octave band centered at 1000 Hz was different for each recording.


Ear and Hearing | 1988

Magnitude Estimations of the Intelligibility and Quality of Speech in Noise

Gerald A. Studebaker; Robert L. Sherbecoe

Inexperienced normal hearing listeners judged the intelligibility and quality of hearing aid processed speech using magnitude estimation. Four trials were conducted for each judgment type at two S/N ratios, 0 and 7 dB. There were no significant effects due to judgment type, S/N ratio or trial; however, noticeable differences in the variability of these factors were apparent. Inter- and intrasubject standard deviations for quality estimations were lower than for intelligibility estimations while intersubject standard deviations were greater at 0 dB than at 7 dB S/N ratio and decreased over trial. Overall intrasubject variability was greater than would probably be acceptable for clinical applications. Across hearing aid conditions, magnitude estimations were positively correlated with word recognition scores but were less affected by changes in S/N ratio.


Journal of the Acoustical Society of America | 1993

Performance‐intensity functions at absolute and masked thresholds

Gerald A. Studebaker; Christine Gilmore; Robert L. Sherbecoe

In most applications of audibility and articulation theories, it is assumed that absolute thresholds and thermal noise maskers affect speech recognition performance-intensity (P-I) functions similarly. The purpose of this study was to evaluate that assumption. Performance-intensity functions for NU-6 monosyllabic words were obtained from eight normal-hearing subjects in quiet and in the presence of two levels of a noise that produced masked pure-tone thresholds parallel to, but higher than, those of each individual in quiet. The results support the practice of treating absolute threshold as a noise-masked threshold in predictions of speech recognition performance.
