Publication


Featured research published by Lorraine A. Delhorne.


Journal of the Acoustical Society of America | 1992

Relations among different measures of speech reception in subjects using a cochlear implant.

William M. Rabinowitz; Donald K. Eddington; Lorraine A. Delhorne; P. A. Cuneo

A comprehensive set of speech reception measures was obtained in a group of about 20 postlingually deafened adult users of the Ineraid multichannel cochlear implant. The measures included audio, visual, and audiovisual recognition of words embedded in two types of sentences (with differing degrees of difficulty) and audio-only recognition of isolated monosyllabic words, consonant identification (12 alternatives, /Ca/), and vowel identification (8 alternatives, /bVt/). For most implantees, the audiovisual gains in the sentence tests were very high. Quantitative relations among audio-only scores were assessed using power-law transformations suggested by Boothroyd and Nittrouer [J. Acoust. Soc. Am. 84, 101-114 (1988)] that can account for the benefit of sentence context (via a factor k) and the relation between word and phoneme recognition (via a factor j). Across the broad range of performance that existed among the subjects, substantial order was observed among measures of speech reception along the continuum from recognition of words in sentences, words in isolation, speech segments, and the retrieval of underlying phonetic features. Correlations exceeded 0.85 among direct and sentence-derived measures of isolated word recognition as well as among direct and word-derived measures of segmental recognition. Results from a variety of other studies involving presentation of limited auditory signals, single-channel and multichannel implants, and tactual systems revealed a similar pattern among word recognition, overall consonant identification performance, and consonantal feature recruitment. Finally, improving the reception of consonantal place cues was identified as key to producing the greatest potential gains in speech reception.
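
The two power-law transformations referenced above reduce to one-line formulas. A minimal sketch, assuming illustrative k, j, and score values rather than anything fitted in the paper:

```python
# Boothroyd-Nittrouer power-law transformations [J. Acoust. Soc. Am. 84,
# 101-114 (1988)]: k captures the benefit of sentence context, j the
# relation between word and phoneme recognition. Values are hypothetical.

def words_in_context(p_isolated: float, k: float) -> float:
    """Probability of recognizing a word in sentence context."""
    return 1.0 - (1.0 - p_isolated) ** k

def phonemes_from_words(p_word: float, j: float) -> float:
    """Per-phoneme probability implied by an isolated-word score."""
    return p_word ** (1.0 / j)

def isolated_from_context(p_context: float, k: float) -> float:
    """Sentence-derived estimate of isolated-word recognition."""
    return 1.0 - (1.0 - p_context) ** (1.0 / k)

# Hypothetical example: a 40%-correct isolated-word score with k = 2.5
# and j = 2.0 implies higher scores in sentences and for phonemes.
p_isolated = 0.40
print(words_in_context(p_isolated, k=2.5))     # ~0.72 in sentence context
print(phonemes_from_words(p_isolated, j=2.0))  # ~0.63 per phoneme
```

The "sentence-derived" and "word-derived" measures the abstract correlates with direct scores come from inverting these transforms, as in isolated_from_context above.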


Journal of the Acoustical Society of America | 2010

Speech reception by listeners with real and simulated hearing impairment: Effects of continuous and interrupted noise

Joseph G. Desloge; Charlotte M. Reed; Louis D. Braida; Zachary D. Perez; Lorraine A. Delhorne

The effects of audibility and age on masking for sentences in continuous and interrupted noise were examined in listeners with real and simulated hearing loss. The absolute thresholds of each of ten listeners with sensorineural hearing loss were simulated in normal-hearing listeners through a combination of spectrally-shaped threshold noise and multi-band expansion for octave bands with center frequencies from 0.25-8 kHz. Each individual hearing loss was simulated in two groups of three normal-hearing listeners (an age-matched and a non-age-matched group). The speech-to-noise ratio (S/N) for 50%-correct identification of hearing in noise test (HINT) sentences was measured in backgrounds of continuous and temporally-modulated (10 Hz square-wave) noise at two overall levels for unprocessed speech and for speech that was amplified with the NAL-RP prescription. The S/N in both continuous and interrupted noise of the hearing-impaired listeners was relatively well-simulated in both groups of normal-hearing listeners. Thus, release from masking (the difference in S/N obtained in continuous versus interrupted noise) appears to be determined primarily by audibility. Minimal age effects were observed in this small sample. Observed values of masking release were compared to predictions derived from intelligibility curves generated using the extended speech intelligibility index (ESII) [Rhebergen et al. (2006). J. Acoust. Soc. Am. 120, 3988-3997].
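
The release-from-masking measure defined in the abstract is a plain difference of two speech reception thresholds. A minimal sketch with hypothetical SRT values, not the study's data:

```python
# Release from masking: the difference between the speech-to-noise ratio
# needed for 50%-correct HINT sentences in continuous noise and the ratio
# needed in 10-Hz interrupted noise. SRT values below are hypothetical.

def masking_release_db(srt_continuous_db: float, srt_interrupted_db: float) -> float:
    return srt_continuous_db - srt_interrupted_db

# e.g., SRT of -4 dB in continuous noise, -14 dB in interrupted noise
print(masking_release_db(-4.0, -14.0))  # 10 dB of masking release
```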


Ear and Hearing | 2005

Reception of environmental sounds through cochlear implants.

Charlotte M. Reed; Lorraine A. Delhorne

Objective: The objective of this study was to measure the performance of persons with cochlear implants on a test of environmental-sound reception.

Design: The reception of environmental sounds was studied using a test employing closed sets of 10 sounds in each of four different settings (General Home, Kitchen, Office, and Outside). The participants in the study were 11 subjects with cochlear implants. Identification testing was conducted under each of the four closed sets of stimuli using a one-interval, 10-alternative, forced-choice procedure. The data were summarized in terms of overall percent correct identification scores and information transfer (IT) in bits. Confusion patterns were described using a hierarchical-clustering analysis. In addition, individual performance on the environmental-sound task was related to the ability to recognize isolated words through the cochlear implant alone.

Results: Levels of performance were similar across the four stimulus sets. Mean scores across subjects ranged from 45.3% correct (and IT of 1.5 bits) to 93.8% correct (and IT of 3.1 bits). Performance on the environmental-sound identification test was roughly related to NU-6 word recognition ability. Specifically, those subjects with word scores greater than 34% correct performed at levels of 80 to 94% on environmental-sound recognition, whereas subjects with word scores less than 34% had greater difficulty on the task. Results of the hierarchical clustering analysis, conducted on two groups of subjects (a high-performing [HP] group and a low-performing [LP] group), indicated that confusions were confined to three or four specific stimuli for the HP subjects and that larger clusters of confused stimuli were observed in the data of the LP group. Signals with distinct temporal-envelope characteristics were easily perceived by all subjects, and confused items tended to share similar overall durations and temporal envelopes.

Conclusions: Temporal-envelope cues appear to play a large role in the identification of environmental sounds through cochlear implants. The finer distinctions made by the HP group compared with the LP group may be related to a better ability both to resolve temporal differences and to use gross spectral cues. These findings are qualitatively consistent with patterns of confusions observed in the reception of speech segments through cochlear implants.
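
Information transfer (IT) in bits, used in this and the tactual-aid study below, is the mutual information of the stimulus-response confusion matrix. A minimal sketch using the standard maximum-likelihood estimate; the example matrix is hypothetical:

```python
import numpy as np

def information_transfer_bits(confusions: np.ndarray) -> float:
    """Mutual information (bits) estimated from a stimulus-by-response
    confusion matrix of raw counts, in the style of Miller and Nicely."""
    n = confusions.sum()
    p_ij = confusions / n                   # joint probabilities
    p_i = p_ij.sum(axis=1, keepdims=True)   # stimulus marginals
    p_j = p_ij.sum(axis=0, keepdims=True)   # response marginals
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = p_ij * np.log2(p_ij / (p_i * p_j))
    return float(np.nansum(terms))          # zero-count cells contribute 0

# Perfect identification of 10 equally likely sounds transmits
# log2(10) ~ 3.32 bits, the ceiling for the abstract's 1.5-3.1 bit range.
perfect = 20.0 * np.eye(10)
print(information_transfer_bits(perfect))
```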


Speech Communication | 2009

A non-linear efferent-inspired model of the auditory system; matching human confusions in stationary noise

David P. Messing; Lorraine A. Delhorne; Ed Bruckert; Louis D. Braida; Oded Ghitza

Current predictors of speech intelligibility are inadequate for understanding and predicting speech confusions caused by acoustic interference. We develop a model of auditory speech processing that includes a phenomenological representation of the action of the medial olivocochlear efferent pathway and that is capable of predicting consonant confusions made by normal-hearing listeners in speech-shaped Gaussian noise. We then use this model to predict human error patterns for initial consonants in consonant-vowel-consonant words in the context of a Diagnostic Rhyme Test. In the process we demonstrate its potential for speech discrimination in noise. Our results produced performance that was robust to varying levels of stationary additive speech-shaped noise and that mimicked human performance in discrimination of synthetic speech as measured by the Chi-squared test.
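
The Chi-squared comparison named at the end of the abstract is straightforward to set up; the error counts below are hypothetical stand-ins for matched human and model data:

```python
# Comparing a model's predicted error counts against observed human
# error counts with a Chi-squared test. Counts are hypothetical, and the
# two sets are matched in total, as the test requires.
from scipy.stats import chisquare

human_errors = [12, 30, 55, 78]   # errors at four noise levels (observed)
model_errors = [10, 28, 60, 77]   # model-predicted errors (expected)

stat, p_value = chisquare(f_obs=human_errors, f_exp=model_errors)
# A large p-value means the human error pattern is statistically
# consistent with the model's predictions.
print(stat, p_value)
```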


Journal of the Acoustical Society of America | 2003

Temporal masking of multidimensional tactual stimuli

Hong Z. Tan; Charlotte M. Reed; Lorraine A. Delhorne; Nathaniel I. Durlach; Natasha Wan

Experiments were performed to examine the temporal masking properties of multidimensional tactual stimulation patterns delivered to the left index finger. The stimuli consisted of fixed-frequency sinusoidal motions in the kinesthetic (2 or 4 Hz), midfrequency (30 Hz), and cutaneous (300 Hz) frequency ranges. Seven stimuli composed of one, two, or three spectral components were constructed at each of two signal durations (125 or 250 ms). Subjects identified target signals under three different masking paradigms: forward masking, backward masking, and sandwiched masking (in which the target is presented between two maskers). Target identification was studied as a function of interstimulus interval (ISI) in the range 0 to 640 ms. For both signal durations, percent-correct scores increased with ISI for each of the three masking paradigms. Scores with forward and backward masking were similar and significantly higher than scores obtained with sandwiched masking. Analyses of error trials revealed that subjects showed a tendency to respond, more often than chance, with the masker, the composite of the masker and target, or the combination of the target and a component of the masker. The current results are compared to those obtained in previous studies of tactual recognition masking with brief cutaneous spatial patterns. The results are also discussed in terms of estimates of information transfer (IT) and IT rate, are compared to previous studies with multidimensional tactual signals, and are related to research on the development of tactual aids for the deaf.


Journal of the Acoustical Society of America | 1987

Spectral‐shape discrimination. I. Results from normal‐hearing listeners for stationary broadband noises

Catherine L. Farrar; Charlotte M. Reed; Yoshiko Ito; Nathaniel I. Durlach; Lorraine A. Delhorne; Patrick M. Zurek; Louis D. Braida

This research is concerned with the ability of normal-hearing listeners to discriminate broadband signals on the basis of spectral shape. The signals were six broadband noises whose spectral shapes were modeled after the spectra of unvoiced fricative and plosive consonants. The difficulty of the discriminations was controlled by the addition of noise filtered to match the long-term speech spectrum. Two-interval discrimination measurements were made in which loudness cues were eliminated by randomizing (roving) the overall stimulus level between presentation intervals. Experimental results, examined as a function of intensity rove width, stimulus duration, and stimulus pair, were related to the predictions of a simple filter-bank model whose fitting parameter provides an estimate of internal noise. Most results, with the notable exception of duration effects, were predicted by the model. Estimates of internal noise in each frequency channel averaged roughly 7 dB for long-duration stimuli and 13 dB for short-duration stimuli. Results and predictions are compared to results of other studies concerned with the discrimination of spectral shape.
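
The "simple filter-bank model" can be sketched as an observer that judges spectral shape from band levels corrupted by per-channel internal noise; subtracting the across-band mean cancels the roving overall level. The band levels and rove width below are illustrative, and the 7-dB noise figure is borrowed loosely from the abstract's long-duration estimate:

```python
import numpy as np

rng = np.random.default_rng(0)

def profile(levels_db: np.ndarray) -> np.ndarray:
    """Spectral shape: band levels relative to their across-band mean,
    which removes any overall-level (loudness) cue."""
    return levels_db - levels_db.mean()

def observe(levels_db, rove_width_db=20.0, internal_noise_db=7.0):
    """One look at a stimulus: a random overall-level rove plus
    independent Gaussian internal noise in each frequency channel."""
    rove = rng.uniform(-rove_width_db / 2, rove_width_db / 2)
    noise = rng.normal(0.0, internal_noise_db, size=levels_db.shape)
    return levels_db + rove + noise

# Two hypothetical six-band spectra standing in for fricative-like shapes
shape_a = np.array([60.0, 62.0, 65.0, 70.0, 72.0, 68.0])
shape_b = np.array([60.0, 63.0, 67.0, 68.0, 70.0, 71.0])

# Classify a noisy look by its profile distance to each template:
look = observe(shape_a)
d_a = np.sum((profile(look) - profile(shape_a)) ** 2)
d_b = np.sum((profile(look) - profile(shape_b)) ** 2)
print("responds A" if d_a < d_b else "responds B")
```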


Ear and Hearing | 2003

The reception of environmental sounds through wearable tactual aids

Charlotte M. Reed; Lorraine A. Delhorne

Objective: The objective of this study was to investigate the ability to identify environmental sounds through a wearable tactual aid.

Design: A test of the ability to identify environmental sounds was developed, employing closed sets of ten sounds in each of four different settings (General Home, Kitchen, Office, and Outdoors). The participants in the study included a group of three laboratory-trained subjects with normal hearing and a group of three subjects with profound deafness who were experienced users of a tactual device (the Tactaid 7). Identification testing was conducted in each of the four environmental-sound settings using a one-interval, ten-alternative, forced-choice procedure. The laboratory-trained subjects received training with trial-by-trial correct-answer feedback, followed by testing in the absence of feedback using the Tactaid 7 device. The experienced tactual-aid users were tested initially without feedback to establish baseline levels of performance derived from their prior field experience with the Tactaid 7. These subjects then received additional trials in the presence of correct-answer feedback to determine the effects of training on their performance. The data were summarized in terms of overall percent-correct identification scores and information transfer (IT) in bits. Confusion patterns were described using a hierarchical clustering analysis.

Results: Post-training results with the laboratory-trained subjects on the Tactaid 7 indicated that performance was similar for the four test environments, with percent-correct scores averaging 65% (and IT of 2.0 bits). For the experienced tactual-aid users, performance was similar across the four environments, averaging 36% correct (and IT of 1.4 bits) for initial testing without feedback. Scores increased to 60% correct (and IT of 1.9 bits) in the presence of correct-answer feedback. Similar trends were observed in the hierarchical-clustering analysis across both groups of subjects. Within each stimulus set, certain items tended to cluster together, whereas other items tended to appear in single-item clusters. The highly identified stimuli tended to be characterized by unique temporal patterns, and confused stimuli seemed to be most similar in terms of their spectral characteristics.

Conclusions: Through the multi-channel spectral display of the Tactaid 7 device, subjects were able to identify roughly 2 bits of information in each of four 10-item sets of sounds representative of different environmental settings. Temporal cues appeared to play a larger role in identification of sounds than spectral or intensive cues.
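
The hierarchical-clustering analysis used in both environmental-sound studies can be sketched with standard tools; the sound names, counts, and symmetrized distance convention below are hypothetical choices for illustration:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

items = ["doorbell", "phone", "vacuum", "blender"]  # stand-in 4-item set
confusions = np.array([[18.0, 2.0, 0.0, 0.0],
                       [3.0, 16.0, 1.0, 0.0],
                       [0.0, 0.0, 12.0, 8.0],
                       [0.0, 1.0, 7.0, 12.0]])

# Convert row-normalized confusion rates into a symmetric distance matrix
rates = confusions / confusions.sum(axis=1, keepdims=True)
similarity = (rates + rates.T) / 2.0
distance = 1.0 - similarity
np.fill_diagonal(distance, 0.0)

# Average-linkage clustering; cutting the tree at two clusters groups
# the often-confused vacuum and blender together.
tree = linkage(squareform(distance), method="average")
print(dict(zip(items, fcluster(tree, t=2, criterion="maxclust"))))
```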


Journal of the Acoustical Society of America | 2014

Otoacoustic-emission-based medial-olivocochlear reflex assays for humans.

Lynne Marshall; Judi A. Lapsley Miller; John J. Guinan; Christopher A. Shera; Charlotte M. Reed; Zachary D. Perez; Lorraine A. Delhorne; Paul Boege

Otoacoustic emission (OAE) tests of the medial-olivocochlear reflex (MOCR) in humans were assessed for viability as clinical assays. Two reflection-source OAEs [TEOAEs: transient-evoked otoacoustic emissions evoked by a 47 dB sound pressure level (SPL) chirp; and discrete-tone SFOAEs: stimulus-frequency otoacoustic emissions evoked by 40 dB SPL tones, and assessed with a 60 dB SPL suppressor] were compared in 27 normal-hearing adults. The MOCR elicitor was a 60 dB SPL contralateral broadband noise. An estimate of MOCR strength, MOCR%, was defined as the vector difference between OAEs measured with and without the elicitor, normalized by OAE magnitude (without elicitor). An MOCR was reliably detected in most ears. Within subjects, MOCR strength was correlated across frequency bands and across OAE type. The ratio of across-subject variability to within-subject variability ranged from 2 to 15, with wideband TEOAEs and averaged SFOAEs giving the highest ratios. MOCR strength in individual ears was reliably classified into low, normal, and high groups. SFOAEs using 1.5 to 2 kHz tones and TEOAEs in the 0.5 to 2.5 kHz band gave the best statistical results. TEOAEs had more clinical advantages. Both assays could be made faster for clinical applications, such as screening for individual susceptibility to acoustic trauma in a hearing-conservation program.
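
The abstract's MOCR% definition (the vector difference between OAEs measured with and without the elicitor, normalized by the no-elicitor magnitude) translates directly to complex arithmetic. A minimal sketch with hypothetical OAE pressures:

```python
import numpy as np

def mocr_percent(oae_without: complex, oae_with: complex) -> float:
    """MOCR strength: magnitude of the complex (vector) OAE difference,
    normalized by the baseline OAE magnitude, in percent."""
    return 100.0 * abs(oae_with - oae_without) / abs(oae_without)

# Hypothetical complex OAE pressures at one frequency (magnitude * phase):
baseline = 1.00e-4 * np.exp(1j * 0.30)   # without contralateral elicitor
elicited = 0.85e-4 * np.exp(1j * 0.42)   # with 60 dB SPL contralateral noise

print(mocr_percent(baseline, elicited))  # ~18.6% for this hypothetical ear
```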


Archive | 2007

Towards Predicting Consonant Confusions of Degraded Speech

O. Ghitza; David P. Messing; Lorraine A. Delhorne; Louis D. Braida; E. Bruckert; M. M. Sondhi

The work described here arose from the need to understand and predict speech confusions caused by acoustic interference and by hearing impairment. Current predictors of speech intelligibility are inadequate for making such predictions (even for normal-hearing listeners). The Articulation Index and related measures, the STI and SII, are geared to predicting speech intelligibility. But such measures only predict average intelligibility, not error patterns, and they make predictions for a limited set of acoustic conditions (linear filtering, additive noise, reverberation). We aim to predict consonant confusions made by normal-hearing listeners listening to degraded speech. Our prediction engine comprises an efferent-inspired peripheral auditory model (PAM) connected to a template-match circuit (TMC) based upon basic concepts of neural processing. The extent to which this engine is an accurate model of auditory perception will be measured by its ability to predict consonant confusions in the presence of noise. The approach we have taken involves two separate steps. First, we tune the parameters of the PAM in isolation from the TMC. We then freeze the resulting PAM and use it to tune the parameters of the TMC. In Sect. 2 we describe a closed-loop model of the auditory periphery that comprises a nonlinear model of the cochlea (Goldstein 1990) with efferent-inspired feedback. To adjust the parameters of the PAM with minimal interference from the TMC, we use confusion patterns for speech segments generated in a paradigm with a minimal cognitive load (the Diagnostic Rhyme Test, DRT; Voiers 1983). To further reduce PAM-TMC interaction, we have synthesized DRT word pairs, restricting stimulus differences to the initial diphones. In Sect. 3 we describe initial steps in a study toward predicting confusions of naturally spoken diphones, i.e., tokens that inherently exhibit phonemic variability. We describe a TMC inspired by principles of cortical neural processing (Hopfield 2004). A desirable property of the circuit is insensitivity to time-scale variations of the input stimuli. We demonstrate the validity of this hypothesis in the context of the DRT consonant discrimination task.
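
The chapter's TMC builds on Hopfield-style cortical processing; as a loose, conventional stand-in for its key property, insensitivity to time-scale variation, the sketch below scores tokens against a template with dynamic time warping. DTW is not the circuit the chapter describes, and the 1-D feature sequences are hypothetical placeholders for PAM output frames:

```python
# Dynamic time warping: a standard technique that, like the chapter's
# TMC, tolerates time-scale differences when matching a token to a
# stored template. Sequences below are hypothetical.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(n*m) dynamic-time-warping distance for 1-D sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1],
                                 cost[i - 1, j - 1])
    return float(cost[n, m])

template = np.array([0.0, 1.0, 3.0, 2.0, 0.0])      # stored diphone template
token_fast = np.array([0.0, 1.0, 3.0, 0.0])         # same shape, compressed
token_other = np.array([3.0, 0.0, 0.0, 2.0, 1.0])   # a different pattern

# The time-compressed token still matches its template better than the foil:
print(dtw_distance(template, token_fast), dtw_distance(template, token_other))
```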


Journal of the Acoustical Society of America | 2014

Consonant identification using temporal fine structure and recovered envelope cues

Jayaganesh Swaminathan; Charlotte M. Reed; Joseph G. Desloge; Louis D. Braida; Lorraine A. Delhorne

The contribution of recovered envelopes (RENVs) to the utilization of temporal-fine structure (TFS) speech cues was examined in normal-hearing listeners. Consonant identification experiments used speech stimuli processed to present TFS or RENV cues. Experiment 1 examined the effects of exposure and presentation order using 16-band TFS speech and 40-band RENV speech recovered from 16-band TFS speech. Prior exposure to TFS speech aided in the reception of RENV speech. Performance on the two conditions was similar (∼50%-correct) for experienced listeners as was the pattern of consonant confusions. Experiment 2 examined the effect of varying the number of RENV bands recovered from 16-band TFS speech. Mean identification scores decreased as the number of RENV bands decreased from 40 to 8 and were only slightly above chance levels for 16 and 8 bands. Experiment 3 examined the effect of varying the number of bands in the TFS speech from which 40-band RENV speech was constructed. Performance fell from 85%- to 31%-correct as the number of TFS bands increased from 1 to 32. Overall, these results suggest that the interpretation of previous studies that have used TFS speech may have been confounded with the presence of RENVs.
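
The TFS-speech construction used in this literature follows a standard recipe: band-pass the signal, take the analytic phase via the Hilbert transform, and keep a unit-envelope carrier per band. A minimal sketch, assuming illustrative filter bands and a noise stand-in for speech rather than the paper's 16-band chain:

```python
import numpy as np
from scipy.signal import butter, hilbert, sosfiltfilt

fs = 16000  # Hz
rng = np.random.default_rng(1)
speechlike = rng.normal(size=fs // 2)  # 0.5 s noise stand-in for speech

def tfs_band(x: np.ndarray, lo_hz: float, hi_hz: float, fs: int) -> np.ndarray:
    """Band-pass one analysis band, then keep only its temporal fine
    structure: the cosine of the analytic phase, with the envelope
    (Hilbert magnitude) discarded."""
    sos = butter(4, [lo_hz, hi_hz], btype="bandpass", fs=fs, output="sos")
    band = sosfiltfilt(sos, x)
    return np.cos(np.angle(hilbert(band)))  # unit-envelope TFS carrier

# Summing the carriers across bands yields "TFS speech"; running the
# result through narrower filters afterward partially recovers envelopes,
# the RENV cues the experiments isolate.
bands = [(100, 400), (400, 1000), (1000, 2500), (2500, 6000)]
tfs_speech = sum(tfs_band(speechlike, lo, hi, fs) for lo, hi in bands)
print(tfs_speech.shape)  # same length as the input signal
```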

Collaboration


Dive into Lorraine A. Delhorne's collaborations.

Top Co-Authors

Charlotte M. Reed
Massachusetts Institute of Technology

Louis D. Braida
Massachusetts Institute of Technology

Joseph G. Desloge
Massachusetts Institute of Technology

Nathaniel I. Durlach
Massachusetts Institute of Technology

William M. Rabinowitz
Massachusetts Institute of Technology

David P. Messing
Massachusetts Institute of Technology

Donald K. Eddington
Massachusetts Eye and Ear Infirmary

Susan D. Fischer
National Technical Institute for the Deaf

Bonnie Gough
Salk Institute for Biological Studies

Donna Gustina
National Technical Institute for the Deaf