Erwin L. J. George
VU University Medical Center
Publication
Featured research published by Erwin L. J. George.
Journal of the Acoustical Society of America | 2006
Erwin L. J. George; Joost M. Festen; Tammo Houtgast
The Speech Reception Threshold for sentences in stationary noise and in several amplitude-modulated noises was measured for 8 normal-hearing listeners, 29 sensorineural hearing-impaired listeners, and 16 normal-hearing listeners with simulated hearing loss. This approach makes it possible to determine whether the reduced benefit from masker modulations, as often observed for hearing-impaired listeners, is due to a loss of signal audibility, or due to suprathreshold deficits, such as reduced spectral and temporal resolution, which were measured in four separate psychophysical tasks. Results show that the reduced masking release can only partly be accounted for by reduced audibility, and that, when considering suprathreshold deficits, the normal effects associated with a raised presentation level should be taken into account. In this perspective, reduced spectral resolution does not appear to qualify as an actual suprathreshold deficit, while reduced temporal resolution does. Temporal resolution and age are shown to be the main factors governing masking release for speech in modulated noise, accounting for more than half of the intersubject variance. Their influence appears to be related to the processing of mainly the higher stimulus frequencies. Results based on calculations of the Speech Intelligibility Index in modulated noise confirm these conclusions.
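The closing reference to Speech Intelligibility Index (SII) calculations in modulated noise can be illustrated with a toy computation. The sketch below is a heavily simplified, single-band, frame-based variant of the SII (real implementations use critical bands, band-importance weights, and threshold and level corrections); the frame levels, the 30 dB audibility range, and the SRT criterion of 0.35 are illustrative assumptions, not values from the paper.

```python
import numpy as np

def short_time_sii(speech_db, noise_db_frames, dyn_range=30.0):
    """Frame-averaged, single-band approximation of the Speech Intelligibility Index.

    Each frame's SNR is mapped linearly onto [0, 1] over a 30 dB range
    (-15 to +15 dB), loosely following the SII audibility function; band-importance
    weighting, spread of masking, and hearing thresholds are omitted here.
    """
    snr = speech_db - np.asarray(noise_db_frames, dtype=float)
    return float(np.mean(np.clip((snr + dyn_range / 2.0) / dyn_range, 0.0, 1.0)))

def predicted_srt(noise_db_frames, criterion=0.35):
    """Speech level (dB) at which the frame-averaged index first reaches the criterion.

    The criterion of 0.35 is an assumed value in the range often quoted for
    sentence SRTs; it is not taken from the paper.
    """
    levels = np.arange(20.0, 100.0, 0.1)
    index = np.array([short_time_sii(level, noise_db_frames) for level in levels])
    return float(levels[np.argmax(index >= criterion)])

# Two maskers with the same long-term level (about 65 dB), purely illustrative:
stationary = [65.0] * 8            # steady masker, 65 dB in every frame
modulated = [68.0, 30.0] * 4       # alternating loud bursts and deep dips

print(predicted_srt(stationary))   # speech must be fairly intense to reach the criterion
print(predicted_srt(modulated))    # criterion reached at a much lower level: masking release
```

Even in this stripped-down form, frame-based averaging reproduces the qualitative effect the abstract describes: a masker with deep temporal dips lets the criterion index be reached at a much lower speech level, i.e., masking release.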
Journal of the Acoustical Society of America | 2007
Erwin L. J. George; Adriana A. Zekveld; Sophia E. Kramer; S. Theo Goverts; Joost M. Festen; Tammo Houtgast
Speech reception thresholds (SRTs) for sentences were determined in stationary and modulated background noise for two age-matched groups of normal-hearing (N = 13) and hearing-impaired listeners (N = 21). Correlations were studied between the SRT in noise and measures of auditory and nonauditory performance, after which stepwise regression analyses were performed within both groups separately. Auditory measures included the pure-tone audiogram and tests of spectral and temporal acuity. Nonauditory factors were assessed by measuring the text reception threshold (TRT), a visual analogue of the SRT, in which partially masked sentences were adaptively presented. Results indicate that, for the normal-hearing group, the variance in speech reception is mainly associated with nonauditory factors, both in stationary and in modulated noise. For the hearing-impaired group, speech reception in stationary noise is mainly related to the audiogram, even when audibility effects are accounted for. In modulated noise, both auditory (temporal acuity) and nonauditory factors (TRT) contribute to explaining interindividual differences in speech reception. Age was not a significant factor in the results. It is concluded that, under some conditions, nonauditory factors are relevant for the perception of speech in noise. Further evaluation of nonauditory factors might enable adapting the expectations from auditory rehabilitation in clinical settings.
Journal of the Acoustical Society of America | 2008
Erwin L. J. George; Joost M. Festen; Tammo Houtgast
Listening conditions in everyday life typically include a combination of reverberation and nonstationary background noise. It is well known that sentence intelligibility is adversely affected by these factors. To assess their combined effects, an approach is introduced which combines two methods of predicting speech intelligibility, the extended speech intelligibility index (ESII) and the speech transmission index. First, the effects of reverberation on nonstationary noise (i.e., reduction of masker modulations) and on speech modulations are evaluated separately. Subsequently, the ESII is applied to predict the speech reception threshold (SRT) in the masker with reduced modulations. To validate this approach, SRTs were measured for ten normal-hearing listeners, in various combinations of nonstationary noise and artificially created reverberation. After taking the characteristics of the speech corpus into account, results show that the approach accurately predicts SRTs in nonstationary noise and reverberation for normal-hearing listeners. Furthermore, it is shown that, when reverberation is present, the benefit from masker fluctuations may be substantially reduced.
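As a rough numerical illustration of the first step of this approach (reverberation reducing the masker's modulations), the sketch below smears a masker intensity envelope with an ideal exponential reverberation tail and compares the modulation depth before and after. The square-wave envelope, 10 ms frames, and RT60 of 1 s are illustrative assumptions rather than the paper's stimuli; in the approach described above, the smeared masker would subsequently be fed into an ESII-style frame-by-frame calculation to predict the SRT.

```python
import numpy as np

def reverberant_envelope(intensity_env, rt60_s, frame_s=0.01):
    """Smear an intensity envelope with an ideal exponential reverberation tail.

    Reverberation acts as a low-pass filter on the masker envelope: convolving
    the intensity envelope with the room's (normalized) energy decay reduces the
    depth of the masker modulations, which is the first step described above.
    """
    n = int(5 * rt60_s / frame_s)                # kernel long enough to decay fully
    t = np.arange(n) * frame_s
    decay = np.exp(-6.91 * t / rt60_s)           # energy falls by 60 dB over one RT60
    decay /= decay.sum()                         # preserve the long-term masker level
    return np.convolve(intensity_env, decay)[:len(intensity_env)]

def modulation_depth(intensity_env):
    """Modulation depth of an intensity envelope as an AC/DC ratio in [0, 1]."""
    env = np.asarray(intensity_env, dtype=float)
    return float((env.max() - env.min()) / (env.max() + env.min()))

# Illustrative 8 Hz square-wave masker envelope (linear intensity values).
frame_s = 0.01
t = np.arange(0.0, 2.0, frame_s)
masker = np.where((t * 8) % 1 < 0.5, 1.0, 0.05)

smeared = reverberant_envelope(masker, rt60_s=1.0)
print(modulation_depth(masker))                       # about 0.9: deep dips
print(modulation_depth(smeared[len(smeared) // 2:]))  # far smaller: dips largely filled in
```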
Audiology and Neuro-otology | 2015
Elke M.J. Devocht; Erwin L. J. George; A.M.L. Janssen; Robert J. Stokroos
The goal of this study was to investigate contralateral hearing aid (HA) use after unilateral cochlear implantation and to identify factors that influence whether a unilateral cochlear implant (CI) recipient becomes a bimodal user. A retrospective cross-sectional chart review was carried out among 77 adult unilateral CI recipients 1 year after implantation. A bimodal HA retention rate of 64% was observed. Associations with demographics, hearing history, residual hearing and speech recognition ability were investigated. Better pure-tone thresholds and unaided speech scores in the non-implanted ear, as well as a smaller difference in speech recognition scores between both ears, were significantly associated with HA retention. A combined model of HA retention was proposed, and cut-off points were determined to identify those CI recipients who were most likely to become bimodal users. These results can provide input to clinical guidelines concerning bimodal CI candidacy.
Trends in Hearing | 2017
Elke M.J. Devocht; A. Miranda L. Janssen; Josef Chalupper; Robert J. Stokroos; Erwin L. J. George
The benefits of combining a cochlear implant (CI) and a hearing aid (HA) in opposite ears on speech perception were examined in 15 adult unilateral CI recipients who regularly use a contralateral HA. A within-subject design was used to assess speech intelligibility, listening effort ratings, and a sound quality questionnaire in the conditions CI alone, CI and HA together (CIHA), and HA alone where applicable. The primary outcome of bimodal benefit, defined as the difference between CIHA and CI, was statistically significant for speech intelligibility in quiet as well as for intelligibility in noise across the tested spatial conditions. A reduction in listening effort beyond the intelligibility benefit was found at the highest signal-to-noise ratio tested. Moreover, the bimodal listening situation was rated as sounding more voluminous, less tinny, and less unpleasant than CI alone. Listening effort and sound quality emerged as feasible and relevant measures for demonstrating bimodal benefit across a clinically representative range of bimodal users. These extended dimensions of speech perception can shed more light on the array of benefits provided by complementing a CI with a contralateral HA.
Journal of the Acoustical Society of America | 2012
Erwin L. J. George; Joost M. Festen; S. Theo Goverts
In daily life, listeners use two ears to understand speech in situations which typically include reverberation and non-stationary noise. In headphone experiments, the binaural benefit for speech in noise is often expressed as the difference in speech reception threshold between diotic (N(0)S(0)) and dichotic (N(0)S(π)) conditions. This binaural advantage (BA), arising from the use of inter-aural phase differences, is about 5-6 dB in stationary noise, but may be lower in everyday conditions. In the current study, BA was measured in various combinations of noise and artificially created diotic reverberation, for normal-hearing and hearing-impaired listeners. Speech-intelligibility models were applied to quantify the combined effects. Results showed that in stationary noise, diotic reverberation did not affect BA. BA was reduced in conditions where the masker fluctuated. With additional reverberation, however, it was restored. Results for both normal-hearing and hearing-impaired listeners were accounted for by assuming that binaural unmasking is only effectively realized at low instantaneous speech-to-noise ratios (SNRs). The observed BA was related to the distribution of SNRs resulting from fluctuations, reverberation, and peripheral processing. It appears that masker fluctuations and reverberation, both relevant for everyday communication, interact in their effects on binaural unmasking and need to be considered together.
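The explanatory assumption in this abstract, that binaural unmasking is only effectively realized at low instantaneous speech-to-noise ratios, can be sketched with a toy rule that grants the full unmasking benefit to low-SNR frames and none to high-SNR frames. The 6 dB maximum, the -5/+5 dB taper points, and the example frame-wise SNR distributions are illustrative assumptions, not the model or data of the paper.

```python
import numpy as np

def binaural_gain_per_frame(snr_db, max_gain_db=6.0, lo=-5.0, hi=5.0):
    """Toy rule: full binaural unmasking at low instantaneous SNRs, none at high SNRs.

    The 6 dB maximum and the -5/+5 dB taper points are illustrative assumptions;
    the rule only captures the stated idea that unmasking is effective mainly
    where the instantaneous speech-to-noise ratio is low.
    """
    snr = np.asarray(snr_db, dtype=float)
    return max_gain_db * np.clip((hi - snr) / (hi - lo), 0.0, 1.0)

def mean_effective_gain(snr_frames_db):
    """Average unmasking benefit over a frame-wise SNR distribution."""
    return float(np.mean(binaural_gain_per_frame(snr_frames_db)))

# Frame-wise SNRs near threshold (dB), purely illustrative:
stationary  = [-5, -5, -5, -5, -5, -5]       # narrow distribution, all frames low
fluctuating = [-15, 10, -15, 10, -15, 10]    # listener relies on high-SNR dips
reverberant = [-8, -3, -8, -3, -8, -3]       # reverberation has filled in the dips

print(mean_effective_gain(stationary))    # the full ~6 dB advantage
print(mean_effective_gain(fluctuating))   # smaller: dip frames contribute little unmasking
print(mean_effective_gain(reverberant))   # larger again: SNRs pushed back down
```

Under this rule, a fluctuating masker yields a smaller average benefit because the listener's information comes largely from high-SNR dips, while reverberation that fills in those dips pushes the frame SNRs back down and restores most of the advantage, matching the qualitative pattern reported above.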
PLOS ONE | 2016
Elke M.J. Devocht; A. Miranda L. Janssen; Josef Chalupper; Robert J. Stokroos; Erwin L. J. George
Objective: To evaluate monaural beamforming in bimodally aided cochlear implant (CI) users.
Design: The study enrolled twelve adult bimodal listeners with at least six months of CI experience who used a contralateral hearing aid (HA) for most of the day. Participants were uniformly fitted with the same CI speech processor and HA, giving access to an identical monaural beamformer in both ears. A within-subject repeated-measures design evaluated three directional configurations (omnidirectional, asymmetric directivity in the CI alone, and symmetric directivity in both CI and HA) in two noise types (stationary and fluctuating). Bimodal speech reception thresholds (SRTs) as well as listening effort ratings were assessed in a diffuse noise field.
Results: Symmetric monaural beamforming provided a significant SRT improvement of 2.6 dB SNR, compared to 1.6 dB SNR for asymmetric monaural beamforming. Directional benefits were observed in stationary and fluctuating noise alike. Directivity did not reduce listening effort beyond its improvement of speech intelligibility. Bimodal performance was about 7 dB SNR worse in fluctuating than in stationary noise.
Conclusions: Monaural beamforming provided substantial benefit for speech intelligibility in noise for bimodal listeners. The greatest benefit occurred when monaural beamforming was activated symmetrically in both CI and HA. Monaural beamforming does not bridge the gap between bimodal and normal hearing performance, especially in fluctuating noise. The results advocate further bimodal cooperation.
Trial Registration: This trial was registered at www.trialregister.nl under number NTR4901.
Journal of the Acoustical Society of America | 2007
Erwin L. J. George; Joost M. Festen; Tammo Houtgast
Listening conditions in everyday life typically include a combination of reverberation and nonstationary background noise. It is well known that sentence intelligibility is adversely affected by these factors. To assess their combined effects, a model is introduced that combines two models of speech perception, the Extended Speech Intelligibility Index (ESII) and the Speech Transmission Index (STI). First, the effect of reverberation on nonstationary noise (reduction of modulations) is determined. The ESII is then used to evaluate the effect of this modified nonstationary noise, while the STI is applied to quantify the effects of reverberation and noise on speech quality. To validate this model, speech reception thresholds (SRTs) were measured for ten normal-hearing listeners, under various combinations of nonstationary noise and artificially created reverberation. After taking the characteristics of the speech corpus into account, results show that the model accurately predicts SRTs in fluctuating noise...
Journal of Speech Language and Hearing Research | 2007
Adriana A. Zekveld; Erwin L. J. George; Sophia E. Kramer; S. Theo Goverts; Tammo Houtgast
Journal of Speech Language and Hearing Research | 2010
Erwin L. J. George; S. Theo Goverts; Joost M. Festen; Tammo Houtgast
Collaboration
Dive into Erwin L. J. George's collaborations.
Netherlands Organisation for Applied Scientific Research