Joseph G. Desloge
Massachusetts Institute of Technology
Publications
Featured research published by Joseph G. Desloge.
IEEE Transactions on Speech and Audio Processing | 1997
Joseph G. Desloge; William M. Rabinowitz; Patrick M. Zurek
This work is aimed at developing a design for the use of a microphone array with binaural hearing aids. The goal of such a hearing aid is to provide both the spatial-filtering benefits of the array and the natural benefits to sound localization and speech intelligibility that accrue from binaural listening. The present study examines two types of designs for fixed-processing systems: one in which independent arrays provide outputs to the two ears, and another in which the binaural outputs are derived from a single array. For the latter, various methods are used to merge array processing with binaural listening. In one approach, filters are designed to satisfy a frequency-dependent trade between directionality and binaural cue fidelity. In another, the microphone signals are filtered into low- and high-frequency components with the lowpass signals providing binaural cues and the highpass signal being the single output of the array processor. Acoustic and behavioral measurements were made in an anechoic chamber and in a moderately reverberant room to evaluate example systems. Theoretical performance was calculated for model arrays mounted on an idealized spherical head. Results show that both single- and dual-array systems provided target-intelligibility enhancements (2-4 dB improvements in speech reception threshold) relative to binaural cardioid microphones. In addition, the binaural-output systems provided cues that assist in sound localization, with resulting performance depending directly upon the cue fidelity. Finally, the sphere-based calculations accurately reflected the major features of the actual head-mounted array results, both in terms of directional sensitivity and output binaural cues.
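The low-/high-frequency split described in this abstract can be illustrated with a brief sketch. The following is a minimal, hypothetical rendering (not the authors' implementation): lowpass-filtered ear-microphone signals carry the binaural cues, and a single highpass-filtered array output is added to both ears. The 1 kHz crossover, filter order, and signal names are assumptions for illustration.

```python
# Minimal sketch of a low/high split between binaural cues and array output.
# Assumptions: left/right ear-microphone signals and one array-processor output
# are NumPy arrays at a common sample rate; the 1 kHz crossover is illustrative.
import numpy as np
from scipy.signal import butter, sosfilt

def split_band_binaural(left_mic, right_mic, array_out, fs, crossover_hz=1000.0, order=4):
    """Combine binaural lowpass signals with a common highpass array output."""
    lp = butter(order, crossover_hz, btype="low", fs=fs, output="sos")
    hp = butter(order, crossover_hz, btype="high", fs=fs, output="sos")
    # Low frequencies keep the natural interaural cues from each ear's microphone.
    left_low = sosfilt(lp, left_mic)
    right_low = sosfilt(lp, right_mic)
    # High frequencies come from the directional array processor, identical at both ears.
    high = sosfilt(hp, array_out)
    return left_low + high, right_low + high

# Usage with placeholder signals:
fs = 16000
left = np.random.randn(fs)
right = np.random.randn(fs)
array_output = np.random.randn(fs)
out_left, out_right = split_band_binaural(left, right, array_output, fs)
```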
Journal of the Acoustical Society of America | 2010
Joseph G. Desloge; Charlotte M. Reed; Louis D. Braida; Zachary D. Perez; Lorraine A. Delhorne
The effects of audibility and age on masking for sentences in continuous and interrupted noise were examined in listeners with real and simulated hearing loss. The absolute thresholds of each of ten listeners with sensorineural hearing loss were simulated in normal-hearing listeners through a combination of spectrally-shaped threshold noise and multi-band expansion for octave bands with center frequencies from 0.25-8 kHz. Each individual hearing loss was simulated in two groups of three normal-hearing listeners (an age-matched and a non-age-matched group). The speech-to-noise ratio (S/N) for 50%-correct identification of hearing in noise test (HINT) sentences was measured in backgrounds of continuous and temporally-modulated (10 Hz square-wave) noise at two overall levels for unprocessed speech and for speech that was amplified with the NAL-RP prescription. The S/N in both continuous and interrupted noise of the hearing-impaired listeners was relatively well-simulated in both groups of normal-hearing listeners. Thus, release from masking (the difference in S/N obtained in continuous versus interrupted noise) appears to be determined primarily by audibility. Minimal age effects were observed in this small sample. Observed values of masking release were compared to predictions derived from intelligibility curves generated using the extended speech intelligibility index (ESII) [Rhebergen et al. (2006). J. Acoust. Soc. Am. 120, 3988-3997].
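As a rough illustration of the masker conditions and the masking-release measure used here, the sketch below gates noise with a 10-Hz square wave and computes masking release as the difference between the S/N at threshold in continuous and interrupted noise. The sample rate, duty cycle, and threshold values are invented for the example; this is not the study's stimulus code.

```python
# Hedged sketch: 10-Hz square-wave interrupted noise and the masking-release computation.
import numpy as np

def interrupted_noise(duration_s, fs=16000, rate_hz=10.0, duty=0.5, rng=None):
    """White noise stands in for speech-shaped noise; gated by a square wave."""
    rng = rng or np.random.default_rng(0)
    n = int(duration_s * fs)
    noise = rng.standard_normal(n)
    t = np.arange(n) / fs
    gate = (np.mod(t * rate_hz, 1.0) < duty).astype(float)  # 10-Hz square-wave modulation
    return noise * gate

masker = interrupted_noise(2.0)

# Masking release is the benefit (in dB) of listening in the gaps:
srt_continuous_db = -2.0    # hypothetical 50%-correct S/N in continuous noise
srt_interrupted_db = -10.0  # hypothetical 50%-correct S/N in interrupted noise
masking_release_db = srt_continuous_db - srt_interrupted_db
print(f"Masking release: {masking_release_db:.1f} dB")
```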
Journal of the Acoustical Society of America | 2003
Julie E. Greenberg; Joseph G. Desloge; Patrick M. Zurek
Several array-processing algorithms were implemented and evaluated with experienced hearing-aid users. The array consisted of four directional microphones mounted broadside on a headband worn on the top of the listener's head. The algorithms included two adaptive array-processing algorithms, one fixed array-processing algorithm, and a reference condition consisting of binaural directional microphones. The algorithms were evaluated under conditions with both one and three independent noise sources. Performance metrics included quantitative speech reception thresholds and qualitative subject preference ratings for ease of listening measured using a paired-comparison procedure. On average, the fixed algorithm improved speech reception thresholds by 2 dB, while the adaptive algorithms provided 7-9 dB improvements over the reference condition. In ease-of-listening judgments, subjects generally preferred all array-processing algorithms over the reference condition. The results suggest that these adaptive algorithms should be evaluated further in more realistic acoustic environments.
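For readers unfamiliar with fixed array processing, the sketch below shows a generic delay-and-sum beamformer for a four-microphone linear array. It is an illustrative stand-in for the class of fixed algorithm evaluated above, not the authors' code; the microphone spacing and steering angle are assumed values.

```python
# Minimal fixed delay-and-sum beamformer for a four-microphone uniform linear array.
import numpy as np

def delay_and_sum(mic_signals, fs, spacing_m=0.025, steer_deg=0.0, c=343.0):
    """mic_signals: (num_mics, num_samples) array from a uniform linear array."""
    num_mics, n = mic_signals.shape
    # Element positions centred on the array; a broadside target needs zero delay.
    positions = (np.arange(num_mics) - (num_mics - 1) / 2) * spacing_m
    delays = positions * np.sin(np.deg2rad(steer_deg)) / c  # seconds per microphone
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spectra = np.fft.rfft(mic_signals, axis=1)
    # Apply fractional delays as linear phase, then average across microphones.
    steered = spectra * np.exp(-2j * np.pi * freqs * delays[:, None])
    return np.fft.irfft(steered.mean(axis=0), n=n)

# Usage with four dummy microphone channels, looking straight ahead (broadside):
fs = 16000
mics = np.random.randn(4, fs)
output = delay_and_sum(mics, fs, steer_deg=0.0)
```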
The Hearing Journal | 2007
Patrick M. Zurek; Joseph G. Desloge
Hearing healthcare professionals frequently need to explain hearing loss to laypersons. For example, parents who have been told that their child has a hearing loss are understandably eager to learn as much as possible about the condition. But providing this information is no simple task, as few people have a good understanding of even the most basic concepts of sound, such as frequency and intensity, not to mention hearing, hearing loss, and acoustic cues for speech perception. In such situations, it would be helpful to be able to demonstrate both the handicap of a particular hearing loss and the benefits that a hearing aid or a cochlear implant can provide. Such demonstrations could be provided, in principle, by a hearing loss and prosthesis simulator. The ideal hearing loss simulator would be unlimited in the types and range of hearing losses it could simulate. It would be easily programmed, and would allow demonstrations with a variety of relevant test stimuli. This ideal hearing loss simulator would be supplemented with simulations of hearing aids and cochlear implants. Together, they would be capable of duplicating for normal-hearing listeners the communication deficits associated with actual hearing impairment and the benefits from prostheses. Although various hearing loss simulations have been developed over the years, none has become widely used. Even the notion of using hearing loss simulation as a tool may be foreign to many in the hearing healthcare field. We shall begin, therefore, by discussing some of the uses of hearing loss and prosthesis simulations. We will then review methods for simulating hearing loss, and will conclude with a description of a new simulation system that has been designed specifically for audiologic applications.
Journal of the Acoustical Society of America | 2015
Agnès C. Léger; Charlotte M. Reed; Joseph G. Desloge; Jayaganesh Swaminathan; Louis D. Braida
Consonant-identification ability was examined in normal-hearing (NH) and hearing-impaired (HI) listeners in the presence of steady-state and 10-Hz square-wave interrupted speech-shaped noise. The Hilbert transform was used to process speech stimuli (16 consonants in a-C-a syllables) to present envelope cues, temporal fine-structure (TFS) cues, or envelope cues recovered from TFS speech. The performance of the HI listeners was inferior to that of the NH listeners both in terms of lower levels of performance in the baseline condition and in the need for higher signal-to-noise ratio to yield a given level of performance. For NH listeners, scores were higher in interrupted noise than in steady-state noise for all speech types (indicating substantial masking release). For HI listeners, masking release was typically observed for TFS and recovered-envelope speech but not for unprocessed and envelope speech. For both groups of listeners, TFS and recovered-envelope speech yielded similar levels of performance and consonant confusion patterns. The masking release observed for TFS and recovered-envelope speech may be related to level effects associated with the manner in which the TFS processing interacts with the interrupted noise signal, rather than to the contributions of TFS cues per se.
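The Hilbert-based envelope/TFS decomposition named in this abstract can be sketched for a single analysis band as follows. The band edges and filter order are assumptions, and the syllable waveform is a placeholder; the actual stimuli used multiple bands.

```python
# Illustrative Hilbert envelope / temporal-fine-structure split for one analysis band.
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def envelope_and_tfs(x, fs, band=(1000.0, 2000.0), order=4):
    """Split one band of a signal into its Hilbert envelope and temporal fine structure."""
    sos = butter(order, band, btype="bandpass", fs=fs, output="sos")
    band_sig = sosfilt(sos, x)
    analytic = hilbert(band_sig)
    envelope = np.abs(analytic)                   # slow amplitude modulation (envelope)
    tfs = np.cos(np.unwrap(np.angle(analytic)))   # unit-amplitude fine structure (TFS)
    return envelope, tfs

fs = 16000
x = np.random.randn(fs)        # placeholder for an a-C-a syllable waveform
env, tfs = envelope_and_tfs(x, fs)
# "TFS speech" for this band keeps tfs and discards env; "envelope speech" uses env to
# modulate a carrier such as band-limited noise or a tone at the band centre frequency.
```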
Journal of the Acoustical Society of America | 2013
Gerald Kidd; Sylvain Favrot; Joseph G. Desloge; Timothy Streeter; Christine R. Mason
An approach to hearing aid design is described, and preliminary acoustical and perceptual measurements are reported, in which an acoustic beamforming microphone array is coupled to an eyeglasses-mounted eye tracker. This visually guided hearing aid (VGHA), currently a laboratory-based prototype, senses direction of gaze using the eye tracker, and an interface converts those values into control signals that steer the acoustic beam accordingly. Preliminary speech intelligibility measurements with noise and speech maskers revealed near-normal or better-than-normal spatial release from masking with the VGHA. Although the device is not yet a wearable prosthesis, these findings support the principle underlying it.
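The gaze-to-beam control step can be illustrated schematically: an eye-tracker azimuth estimate is smoothed and converted into per-microphone steering delays for a linear array. The array geometry and smoothing constant below are assumptions; the actual VGHA interface is not described in enough detail here to reproduce.

```python
# Hypothetical sketch: convert a gaze azimuth into steering delays for a linear array.
import numpy as np

def gaze_to_delays(gaze_az_deg, mic_positions_m, c=343.0):
    """Steering delays (s) that time-align a far-field source at the gazed azimuth."""
    return mic_positions_m * np.sin(np.deg2rad(gaze_az_deg)) / c

def smooth_gaze(az_samples_deg, alpha=0.2):
    """Simple exponential smoothing to keep the beam from jittering with saccades."""
    out, state = [], az_samples_deg[0]
    for az in az_samples_deg:
        state = alpha * az + (1 - alpha) * state
        out.append(state)
    return np.array(out)

positions = (np.arange(4) - 1.5) * 0.025            # four mics, 2.5 cm apart (assumed)
gaze_track = smooth_gaze(np.array([0.0, 5.0, 20.0, 22.0, 21.0]))
delays = gaze_to_delays(gaze_track[-1], positions)  # feed these to a delay-and-sum stage
```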
Journal of the Acoustical Society of America | 2014
Jayaganesh Swaminathan; Charlotte M. Reed; Joseph G. Desloge; Louis D. Braida; Lorraine A. Delhorne
The contribution of recovered envelopes (RENVs) to the utilization of temporal-fine structure (TFS) speech cues was examined in normal-hearing listeners. Consonant identification experiments used speech stimuli processed to present TFS or RENV cues. Experiment 1 examined the effects of exposure and presentation order using 16-band TFS speech and 40-band RENV speech recovered from 16-band TFS speech. Prior exposure to TFS speech aided in the reception of RENV speech. Performance on the two conditions was similar (∼50%-correct) for experienced listeners as was the pattern of consonant confusions. Experiment 2 examined the effect of varying the number of RENV bands recovered from 16-band TFS speech. Mean identification scores decreased as the number of RENV bands decreased from 40 to 8 and were only slightly above chance levels for 16 and 8 bands. Experiment 3 examined the effect of varying the number of bands in the TFS speech from which 40-band RENV speech was constructed. Performance fell from 85%- to 31%-correct as the number of TFS bands increased from 1 to 32. Overall, these results suggest that the interpretation of previous studies that have used TFS speech may have been confounded with the presence of RENVs.
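The envelope-recovery idea tested here can be sketched in a few lines: pass "TFS speech" through a narrow bandpass filter (standing in for a single cochlear filter) and read off the Hilbert envelope that the filter reintroduces. The filter shape and bandwidth are assumptions; a gammatone filterbank would be a more faithful model of cochlear filtering.

```python
# Sketch of envelope recovery (RENV) from TFS speech via a narrow bandpass filter.
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def recovered_envelope(tfs_speech, fs, center_hz=1500.0, bw_hz=200.0, order=2):
    band = (center_hz - bw_hz / 2, center_hz + bw_hz / 2)
    sos = butter(order, band, btype="bandpass", fs=fs, output="sos")
    narrowband = sosfilt(sos, tfs_speech)
    return np.abs(hilbert(narrowband))   # envelope reintroduced by the narrow filter

fs = 16000
# Placeholder "TFS speech": a frequency-modulated tone with no amplitude modulation.
phase_walk = np.cumsum(np.random.randn(fs)) * 0.01
tfs_speech = np.cos(2 * np.pi * 1500 * np.arange(fs) / fs + phase_walk)
renv = recovered_envelope(tfs_speech, fs)   # RENV cue for this band
```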
Journal of the Acoustical Society of America | 2015
Agnès C. Léger; Joseph G. Desloge; Louis D. Braida; Jayaganesh Swaminathan
Narrowband speech can be separated into fast temporal cues [temporal fine structure (TFS)] and slow amplitude modulations (envelope). Speech processed to contain only TFS leads to envelope recovery through cochlear filtering, which has been suggested to account for TFS-speech intelligibility for normal-hearing listeners. Hearing-impaired listeners have deficits in TFS-speech identification, but the contribution of recovered-envelope cues to these deficits is unknown. This was assessed for hearing-impaired listeners by measuring identification of disyllables processed to contain TFS or recovered-envelope cues. Hearing-impaired listeners performed worse than normal-hearing listeners, but TFS-speech intelligibility was accounted for by recovered-envelope cues for both groups.
Trends in Amplification | 2012
Joseph G. Desloge; Charlotte M. Reed; Louis D. Braida; Zachary D. Perez; Lorraine A. Delhorne
Functional simulation of sensorineural hearing impairment is an important research tool that can elucidate the nature of hearing impairments and suggest or eliminate compensatory signal-processing schemes. The objective of the current study was to evaluate the capability of an audibility-based functional simulation of hearing loss to reproduce the auditory-filter characteristics of listeners with sensorineural hearing loss. The hearing-loss simulation used either threshold-elevating noise alone or a combination of threshold-elevating noise and multiband expansion to reproduce the audibility-based characteristics of the loss (including detection thresholds, dynamic range, and loudness recruitment). The hearing losses of 10 listeners with bilateral, mild-to-severe hearing loss were simulated in 10 corresponding groups of 3 age-matched normal-hearing listeners. Frequency selectivity was measured using a notched-noise masking paradigm at five probe frequencies in the range of 250 to 4000 Hz with a fixed probe level of either 70 dB SPL or 8 dB SL (whichever was greater) and probe duration of 200 ms. The hearing-loss simulation reproduced the absolute thresholds of individual hearing-impaired listeners with an average root-mean-squared (RMS) difference of 2.2 dB and the notched-noise masked thresholds with an RMS difference of 5.6 dB. A rounded-exponential model of the notched-noise data was used to estimate equivalent rectangular bandwidths and slopes of the auditory filters. For some subjects and probe frequencies, the simulations were accurate in reproducing the auditory-filter characteristics of the hearing-impaired listeners. In other cases, however, the simulations underestimated the magnitude of the auditory bandwidths for the hearing-impaired listeners, which suggests the possibility of suprathreshold deficits.
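The rounded-exponential (roex) analysis named in this abstract can be sketched as follows: fit a symmetric roex(p) weighting to notched-noise thresholds and convert the slope p to an equivalent rectangular bandwidth, ERB = 4*f0/p. The threshold values below are invented for illustration; the fitting procedure and any asymmetric-filter refinements used in the study are not reproduced here.

```python
# Hedged sketch: fit a symmetric roex(p) filter to notched-noise masked thresholds.
import numpy as np
from scipy.optimize import curve_fit

def roex_threshold(notch_g, k_db, p):
    """Predicted masked threshold (dB) vs. normalized notch half-width g = deltaF/f0."""
    # Noise power passed by a symmetric roex(p) filter outside a notch of half-width g:
    # 2 * (1/p) * (2 + p*g) * exp(-p*g), up to a constant absorbed into k_db.
    passed = (2.0 / p) * (2.0 + p * notch_g) * np.exp(-p * notch_g)
    return k_db + 10 * np.log10(passed)

f0 = 1000.0
g = np.array([0.0, 0.1, 0.2, 0.3, 0.4])           # relative notch half-widths tested
thresholds_db = np.array([62, 55, 48, 43, 40.0])  # hypothetical masked thresholds
(k_db, p), _ = curve_fit(roex_threshold, g, thresholds_db, p0=(60.0, 20.0))
erb_hz = 4 * f0 / p
print(f"p = {p:.1f}, ERB = {erb_hz:.0f} Hz at {f0:.0f} Hz")
```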
Journal of the Acoustical Society of America | 2009
Joseph G. Desloge; Charlotte M. Reed; Louis D. Braida; Zachary D. Perez; Lorraine A. Delhorne
A functional simulation of hearing loss was evaluated in its ability to reproduce the temporal modulation transfer functions (TMTFs) for nine listeners with mild to profound sensorineural hearing loss. Each hearing loss was simulated in a group of three age-matched normal-hearing listeners through spectrally shaped masking noise or a combination of masking noise and multiband expansion. TMTFs were measured for both groups of listeners using a broadband noise carrier as a function of modulation rate in the range 2 to 1024 Hz. The TMTFs were fit with a lowpass filter function that provided estimates of overall modulation-depth sensitivity and modulation cutoff frequency. Although the simulations were capable of accurately reproducing the threshold elevations of the hearing-impaired listeners, they were not successful in reproducing the TMTFs. On average, the simulations resulted in lower sensitivity and higher cutoff frequency than were observed in the TMTFs of the hearing-impaired listeners. Discrepancies in performance between listeners with real and simulated hearing loss are possibly related to inaccuracies in the simulation of recruitment.
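The lowpass-filter summary of a TMTF mentioned above can be illustrated with a short fit: modulation-depth thresholds (20*log10 m, in dB) are described by an overall sensitivity and a cutoff frequency beyond which thresholds rise. The threshold values below are invented, and the first-order form is only one plausible choice of fitting function.

```python
# Hedged sketch: summarize a TMTF with a first-order lowpass fit (sensitivity + cutoff).
import numpy as np
from scipy.optimize import curve_fit

def tmtf_lowpass(rate_hz, sens_db, cutoff_hz):
    """Threshold in dB (20*log10 m): flat at sens_db, rising above cutoff_hz."""
    return sens_db + 10 * np.log10(1 + (rate_hz / cutoff_hz) ** 2)

rates = np.array([2, 4, 8, 16, 32, 64, 128, 256, 512, 1024], dtype=float)
thresholds_db = np.array([-25, -25, -24, -23, -20, -16, -12, -8, -5, -3.0])  # hypothetical
(sens_db, cutoff_hz), _ = curve_fit(tmtf_lowpass, rates, thresholds_db, p0=(-25.0, 50.0))
print(f"sensitivity = {sens_db:.1f} dB, cutoff = {cutoff_hz:.0f} Hz")
```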