Publications


Featured research published by Louise Loiselle.


Ear and Hearing | 2012

Development and Validation of the AzBio Sentence Lists

Anthony J. Spahr; Michael F. Dorman; Leonid M. Litvak; Susan Van Wie; René H. Gifford; Philipos C. Loizou; Louise Loiselle; Tyler Oakes; Sarah Cook

Objectives: The goal of this study was to create and validate a new set of sentence lists that could be used to evaluate the speech perception abilities of hearing-impaired listeners and cochlear implant (CI) users. Our intention was to generate a large number of sentence lists with an equivalent level of difficulty for the evaluation of performance over time and across conditions. Design: The AzBio sentence corpus includes 1000 sentences recorded from two female and two male talkers. The mean intelligibility of each sentence was estimated by processing each sentence through a five-channel CI simulation and calculating the mean percent correct score achieved by 15 normal-hearing listeners. Sentences from each talker were sorted by percent correct score; 165 sentences were selected from each talker and then sequentially assigned to 33 lists, each containing 20 sentences (5 sentences from each talker). List equivalency was validated by presenting all lists, in random order, to 15 CI users. Results: Using sentence scores from the CI simulation study produced 33 lists of sentences with a mean score of 85% correct. The validation study with CI users revealed no significant differences in percent correct scores for 29 of the 33 sentence lists. However, individual listeners demonstrated considerable variability in performance on the 29 lists. The binomial distribution model was used to account for the inherent variability observed in the lists and to generate 95% confidence intervals for one- and two-list comparisons. A retrospective analysis of 172 instances where research subjects had been tested on two lists within a single condition revealed that 94% of results fell within these confidence intervals. Conclusions: The use of a five-channel CI simulation to estimate the intelligibility of individual sentences allowed for the creation of a large number of sentence lists with an equivalent level of difficulty. The validation procedure with CI users found that scores on 29 of the 33 lists were not statistically different, although individual listeners demonstrated considerable variability in performance across lists. The binomial distribution model accurately described this variability and was used to estimate the magnitude of change required to achieve statistical significance when comparing scores from one or two lists per condition. Fifteen sentence lists have been included in the AzBio Sentence Test for use in the clinical evaluation of hearing-impaired listeners and CI users. An additional eight sentence lists have been included in the Minimum Speech Test Battery distributed by the CI manufacturers for the evaluation of CI candidates.
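
The binomial model mentioned in the abstract lends itself to a short illustration. The Python sketch below treats a list score as a number of correct responses out of n independent trials and derives a 95% confidence interval for a single-list score; the 100-trial count and the scoring unit are illustrative assumptions, since the paper's fitted model parameters are not reproduced here.

```python
# A minimal sketch of a binomial confidence interval for a list score.
# Treating a score as n independent trials is an assumption made for
# illustration; the paper's exact model parameters are not given here.
from scipy.stats import binom

def score_ci(percent_correct, n_trials, level=0.95):
    """Return (low, high) bounds, in percent, of a binomial CI for a score."""
    p = percent_correct / 100.0
    k_lo, k_hi = binom.interval(level, n_trials, p)
    return 100.0 * k_lo / n_trials, 100.0 * k_hi / n_trials

# Example: a hypothetical 100-trial list scored at 70% words correct.
lo, hi = score_ci(70.0, n_trials=100)
print(f"a retest score outside {lo:.0f}-{hi:.0f}% suggests a true change")
```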


Otology & Neurotology | 2015

Sound Source Localization and Speech Understanding in Complex Listening Environments by Single-sided Deaf Listeners After Cochlear Implantation

Daniel M. Zeitler; Michael F. Dorman; Sarah Natale; Louise Loiselle; William A. Yost; René H. Gifford

Objective: To assess improvements in sound source localization and speech understanding in complex listening environments after unilateral cochlear implantation for single-sided deafness (SSD). Study Design: Nonrandomized, open, prospective case series. Setting: Tertiary referral center. Patients: Nine subjects with a unilateral cochlear implant (CI) for SSD (SSD-CI) were tested. Reference groups for the task of sound source localization included young (n = 45) and older (n = 12) normal-hearing (NH) subjects and 27 bilateral CI (BCI) subjects. Intervention: Unilateral cochlear implantation. Main Outcome Measures: Sound source localization was tested with 13 loudspeakers in a 180-degree arc in front of the subject. Speech understanding was tested with the subject seated in an 8-loudspeaker sound system arrayed in a 360-degree pattern. Directionally appropriate noise, originally recorded in a restaurant, was played from each loudspeaker. Speech understanding in noise was tested using the AzBio sentence test, and sound source localization was quantified using root mean square (RMS) error. Results: All CI subjects showed poorer-than-normal sound source localization. SSD-CI subjects showed a bimodal distribution of scores: six subjects had scores near the mean of those obtained by BCI subjects, whereas three had scores just outside the 95th percentile of NH listeners. Speech understanding improved significantly in the restaurant environment when the signal was presented to the side of the CI. Conclusion: Cochlear implantation for SSD can offer improved speech understanding in complex listening environments and improved sound source localization in both children and adults. On tasks of sound source localization, SSD-CI patients typically perform as well as BCI patients and, in some cases, achieve scores at the upper boundary of normal performance.
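
As a concrete illustration of the outcome measure, the sketch below computes RMS localization error in degrees for the 13-loudspeaker, 180-degree arc described above. The 15-degree target spacing follows from that geometry; the simulated responses are invented for the example.

```python
# A minimal sketch of the RMS localization-error metric named above.
# Thirteen loudspeakers over a 180-degree arc imply 15-degree spacing;
# the listener responses below are simulated purely for illustration.
import numpy as np

targets = np.arange(-90, 91, 15)                  # 13 loudspeaker azimuths, in degrees
rng = np.random.default_rng(1)
responses = targets + rng.normal(0.0, 6.0, targets.size)  # one simulated response per speaker

rms_error = np.sqrt(np.mean((responses - targets) ** 2))
print(f"RMS localization error: {rms_error:.1f} degrees")
```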


Ear and Hearing | 2014

Interaural level differences and sound source localization for bilateral cochlear implant patients

Michael F. Dorman; Louise Loiselle; Josh Stohl; William A. Yost; Anthony J. Spahr; Christopher A. Brown; Sarah Cook

Objectives: The aims of this study were (i) to determine the magnitude of the interaural level differences (ILDs) that remain after cochlear implant (CI) signal processing and (ii) to relate the ILDs to the pattern of errors for sound source localization on the horizontal plane. Design: The listeners were 16 bilateral CI patients fitted with MED-EL CIs and 34 normal-hearing listeners. The stimuli were wideband, high-pass, and low-pass noise signals. ILDs were calculated by passing signals, filtered by head-related transfer functions (HRTFs), through a Matlab simulation of MED-EL signal processing. Results: For the wideband and high-pass signals, maximum ILDs of 15 to 17 dB in the input signal were reduced to 3 to 4 dB after CI signal processing. For the low-pass signal, ILDs were reduced to 1 to 2 dB. For the wideband and high-pass signals, the largest ILDs were between 0.4 and 0.7 dB for the ±15 degree speaker locations; between 0.9 and 1.3 dB for the ±30 degree speaker locations; between 2.4 and 2.9 dB for the ±45 degree speaker locations; between 3.2 and 4.1 dB for the ±60 degree speaker locations; and between 2.7 and 3.4 dB for the ±75 degree speaker locations. All of the CI patients in all the stimulus conditions showed poorer localization than the normal-hearing listeners. Localization accuracy for the CI patients was best for the wideband and high-pass signals and poorest for the low-pass signal. Conclusions: Localization accuracy was related to the magnitude of the ILD cues available to the normal-hearing listeners and CI patients. The pattern of localization errors for the CI patients was related to the magnitude of the ILD differences among loudspeaker locations. The error patterns for the wideband and high-pass signals suggest that, for the conditions of this experiment, patients on average sorted signals on the horizontal plane into four sectors: on each side of the midline, one sector including the 0, 15, and possibly 30 degree speaker locations, and a second sector spanning the 45 to 75 degree speaker locations. Resolution within a sector was relatively poor.
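
At its core, the broadband ILD reported here is a level ratio between the two ear signals, expressed in dB. The sketch below shows only that comparison; the study's Matlab simulation of MED-EL signal processing, which sits between the HRTF filtering and the level measurement, is omitted, so treat this as the measurement stage alone.

```python
# A minimal sketch of a broadband interaural level difference (ILD):
# the RMS level of the left-ear signal relative to the right, in dB.
# The CI signal-processing stage used in the study is omitted here.
import numpy as np

def ild_db(left, right, eps=1e-12):
    """Broadband ILD in dB; positive values mean the left ear is more intense."""
    return 10.0 * np.log10((np.mean(left ** 2) + eps) / (np.mean(right ** 2) + eps))

# Example: a 2:1 amplitude difference between ears yields roughly 6 dB.
rng = np.random.default_rng(0)
noise = rng.standard_normal(16000)
print(f"{ild_db(noise, 0.5 * noise):.1f} dB")
```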


Ear and Hearing | 2014

Development and validation of the pediatric AzBio sentence lists

Anthony J. Spahr; Michael F. Dorman; Leonid M. Litvak; Sarah Cook; Louise Loiselle; Melissa D. DeJong; Andrea Hedley-Williams; Linsey S. Sunderhaus; Catherine A. Hayes; René H. Gifford

Objectives: The goal of this study was to create and validate a new set of sentence lists that could be used to evaluate the speech-perception abilities of listeners with hearing loss in cases where adult materials are inappropriate due to difficulty level or content. The authors aimed to generate a large number of sentence lists with an equivalent level of difficulty for the evaluation of performance over time and across conditions. Design: The original Pediatric AzBio sentence corpus included 450 sentences recorded from one female talker. All sentences included in the corpus were successfully repeated by kindergarten and first-grade students with normal hearing. The mean intelligibility of each sentence was estimated by processing each sentence through a cochlear implant simulation and calculating the mean percent correct score achieved by 15 normal-hearing listeners. After sorting sentences by mean percent correct scores, 320 sentences were assigned to 16 lists of equivalent difficulty. List equivalency was then validated by presenting all sentence lists, in a novel random order, to adults and children with hearing loss. A final validation stage examined single-list comparisons from adult and pediatric listeners tested in research or clinical settings. Results: The results of the simulation study allowed for the creation of 16 lists of 20 sentences. The average intelligibility of the lists ranged from 78.4 to 78.7%. List equivalency was validated when testing of 16 adult cochlear implant users and 9 pediatric hearing aid and cochlear implant users revealed no significant differences across lists. The binomial distribution model was used to account for the inherent variability observed in the lists and to generate 95% confidence intervals for one- and two-list comparisons. A retrospective analysis of 361 instances from 78 adult cochlear implant users and 48 instances from 36 pediatric cochlear implant users revealed that the 95% confidence intervals derived from the model captured 94% of all responses (385 of 409). Conclusions: The cochlear implant simulation was shown to be an effective method for estimating the intelligibility of individual sentences for use in the evaluation of cochlear implant users. Furthermore, the method used for constructing equivalent sentence lists and estimating the inherent variability of the materials was also validated. Thus, the AzBio Pediatric Sentence Lists are equivalent and appropriate for the assessment of the speech-understanding abilities of children with hearing loss, as well as adults for whom performance on AzBio sentences is near the floor.
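
Both AzBio papers describe the same list-construction step: sort sentences by estimated intelligibility, then assign them sequentially to lists so that every list receives a similar mix of easy and hard items. The sketch below implements one plausible reading of "sequentially assigned" as a round-robin deal; the papers do not spell out the exact scheme, so that detail is an assumption.

```python
# A minimal sketch of building equal-difficulty sentence lists: sort by
# estimated intelligibility, then deal round-robin so neighboring items
# (similar in difficulty) land on different lists. The round-robin deal
# is an assumed reading of "sequentially assigned".
def build_equivalent_lists(scores, n_lists):
    """scores: {sentence_id: mean percent correct}; returns n_lists lists of ids."""
    ranked = sorted(scores, key=scores.get)       # ids ordered from lowest to highest score
    lists = [[] for _ in range(n_lists)]
    for i, sentence_id in enumerate(ranked):
        lists[i % n_lists].append(sentence_id)
    return lists

# Example: 320 hypothetical sentences dealt into 16 lists of 20.
fake_scores = {f"s{i:03d}": 50.0 + i / 8.0 for i in range(320)}
lists = build_equivalent_lists(fake_scores, n_lists=16)
assert all(len(lst) == 20 for lst in lists)
```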


Journal of the Acoustical Society of America | 2013

Sound source localization of filtered noises by listeners with normal hearing: A statistical analysis

William A. Yost; Louise Loiselle; Michael F. Dorman; Jason Burns; Christopher A. Brown

Several measures of sound source localization performance were obtained from 45 listeners with normal hearing when loudspeakers were in the front hemifield. Localization performance was not statistically affected by the filtering of the 200-ms noise bursts into 2-octave or wider bands (125 to 500 Hz, 1500 to 6000 Hz, and 125 to 6000 Hz). This implies that sound source localization performance for noise stimuli is not differentially affected by which interaural cue (interaural time or level difference) a listener with normal hearing uses, at least for relatively broadband signals. On this sound source localization task, listeners with normal hearing performed with high reliability and repeatability, little response bias, and performance measures that were normally distributed, with a mean root-mean-square error of 6.2° and a standard deviation of 1.79°.
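
These normative values are what make comparisons like the SSD study's "just outside the 95th percentile of NH listeners" possible. The sketch below places an individual score against the reported normal-hearing distribution; the 12-degree example score is hypothetical.

```python
# A minimal sketch placing an individual RMS-error score against the
# normative distribution reported above (mean 6.2 deg, SD 1.79 deg).
# The 12.0-degree example score is hypothetical.
from scipy.stats import norm

nh = norm(loc=6.2, scale=1.79)        # normal-hearing RMS-error distribution
print(f"95th percentile cutoff: {nh.ppf(0.95):.1f} degrees")
print(f"a 12.0-degree score falls at the {100 * nh.cdf(12.0):.1f}th percentile")
```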


Audiology and Neuro-otology | 2009

Word Recognition following Implantation of Conventional and 10-mm Hybrid Electrodes

Michael F. Dorman; René H. Gifford; Kristen Lewis; Sharon A. McKarns; Jennifer Ratigan; Anthony J. Spahr; Jon K. Shallop; Colin L. W. Driscoll; Charles M. Luetje; Bradley S. Thedinger; Charles W. Beatty; Mark Syms; Mike Novak; David M. Barrs; Lisa Cowdrey; Jennifer M. Black; Louise Loiselle

We compared the effectiveness of 2 surgical interventions for improving word recognition ability in a quiet environment among patients who presented with: (1) bilateral, precipitously sloping, high-frequency hearing loss; (2) relatively good auditory thresholds at and below 500 Hz; and (3) poor speech recognition. In 1 intervention (n = 25), a conventional electrode array was inserted into 1 cochlea. As a consequence, hearing was lost in the implanted ear. In the other intervention (n = 22), a Nucleus Hybrid short-electrode array was inserted 10 mm into 1 cochlea with the aim of preserving hearing in that ear. Both groups of patients had similar low-frequency hearing and speech understanding in the ear contralateral to the implant. Following surgery, both groups had significantly higher word recognition scores than before surgery. Between-group comparisons indicated that the conventional electrode array group had higher word recognition scores than the 10-mm group when stimulation was presented to the operated ear and when stimulation was presented to both ears.


Ear and Hearing | 2013

Localization and speech understanding by a patient with bilateral cochlear implants and bilateral hearing preservation

Michael F. Dorman; Anthony J. Spahr; Louise Loiselle; Ting Zhang; Sarah Cook; Christopher A. Brown; William A. Yost

Objectives: The authors describe the localization and speech-understanding abilities of a patient fit with bilateral cochlear implants (CIs) for whom acoustic low-frequency hearing was preserved in both cochleae. Design: Three signals were used in the localization experiments: low-pass, high-pass, and wideband noise. Speech understanding was assessed with the AzBio sentences presented in noise. Results: Localization accuracy was best in the aided, bilateral acoustic hearing condition, and was poorer in both the bilateral CI condition and when the bilateral CIs were used in addition to bilateral low-frequency hearing. Speech understanding was best when low-frequency acoustic hearing was combined with at least one CI. Conclusions: The authors found that (1) for sound source localization in patients with bilateral CIs and bilateral hearing preservation, interaural level difference cues may dominate interaural time difference cues and (2) hearing-preservation surgery can be of benefit to patients fit with bilateral CIs.


Audiology and Neuro-otology | 2014

Bimodal Cochlear Implants: The Role of Acoustic Signal Level in Determining Speech Perception Benefit

Michael F. Dorman; Philip Loizou; Shuai Wang; Ting Zhang; Anthony J. Spahr; Louise Loiselle; Sarah Cook

The aim of this project was to determine, for bimodal cochlear implant (CI) patients, i.e., patients with low-frequency hearing in the ear contralateral to the implant, how speech understanding varies as a function of the difference in level between the CI signal and the acoustic signal. The data suggest that (1) acoustic signals perceived as significantly softer than a CI signal can contribute to speech understanding in the bimodal condition, (2) acoustic signals that are slightly softer than, or balanced with, a CI signal provide the largest benefit to speech understanding, and (3) acoustic signals presented at maximum comfortable loudness levels provide nearly as much benefit as signals that have been balanced with a CI signal.


Journal of the American Academy of Audiology | 2012

Current Research with Cochlear Implants at Arizona State University

Michael F. Dorman; Anthony J. Spahr; René H. Gifford; Sarah Cook; Ting Zhang; Louise Loiselle; William A. Yost; Lara Cardy; JoAnne Whittingham; David Schramm

In this article we review, and discuss the clinical implications of, five projects currently underway in the Cochlear Implant Laboratory at Arizona State University. The projects are (1) norming the AzBio sentence test, (2) comparing the performance of bilateral and bimodal cochlear implant (CI) patients in realistic listening environments, (3) accounting for the benefit provided to bimodal patients by low-frequency acoustic stimulation, (4) assessing localization by bilateral hearing aid patients and the implications of that work for hearing preservation patients, and (5) studying heart rate variability as a possible measure for quantifying the stress of listening via an implant. The long-term goals of the laboratory are to improve the performance of patients fit with cochlear implants and to understand the mechanisms, physiological or electronic, that underlie changes in performance. We began our work with cochlear implant patients in the mid-1980s and received our first grant from the National Institutes of Health (NIH) for work with implanted patients in 1989. Since that date our work with cochlear implant patients has been funded continuously by the NIH. In this report we describe some of the research currently being conducted in our laboratory.


Journal of the American Academy of Audiology | 2017

Speech Understanding and Sound Source Localization by Cochlear Implant Listeners Using a Pinna-Effect Imitating Microphone and an Adaptive Beamformer

Michael F. Dorman; Sarah Natale; Louise Loiselle

BACKGROUND Sentence understanding scores for patients with cochlear implants (CIs) when tested in quiet are relatively high. However, sentence understanding scores for patients with CIs plummet with the addition of noise. PURPOSE To assess, for patients with CIs (MED-EL), (1) the value to speech understanding of two new, noise-reducing microphone settings and (2) the effect of the microphone settings on sound source localization. RESEARCH DESIGN Single-subject, repeated measures design. For tests of speech understanding, repeated measures on (1) number of CIs (one, two), (2) microphone type (omni, natural, adaptive beamformer), and (3) type of noise (restaurant, cocktail party). For sound source localization, repeated measures on type of signal (low-pass [LP], high-pass [HP], broadband noise). STUDY SAMPLE Ten listeners, ranging in age from 48 to 83 yr (mean = 57 yr), participated in this prospective study. INTERVENTION Speech understanding was assessed in two noise environments using monaural and bilateral CIs fit with three microphone types. Sound source localization was assessed using three microphone types. DATA COLLECTION AND ANALYSIS In Experiment 1, sentence understanding scores (in terms of percent words correct) were obtained in quiet and in noise. For each patient, noise was first added to the signal to drive performance off of the ceiling in the bilateral CI-omni microphone condition. The other conditions were then administered at that signal-to-noise ratio in quasi-random order. In Experiment 2, sound source localization accuracy was assessed for three signal types using a 13-loudspeaker array over a 180° arc. The dependent measure was root-mean-square error. RESULTS Both the natural and adaptive microphone settings significantly improved speech understanding in the two noise environments. The magnitude of the improvement varied between 16 and 19 percentage points for tests conducted in the restaurant environment and between 19 and 36 percentage points for tests conducted in the cocktail party environment. In the restaurant and cocktail party environments, both the natural and adaptive settings, when implemented on a single CI, allowed scores that were as good as, or better than, scores in the bilateral omni test condition. Sound source localization accuracy was unaltered by either the natural or adaptive settings for LP, HP, or wideband noise stimuli. CONCLUSION The data support the use of the natural microphone setting as a default setting. The natural setting (1) provides better speech understanding in noise than the omni setting, (2) does not impair sound source localization, and (3) retains low-frequency sensitivity to signals from the rear. Moreover, bilateral CIs equipped with adaptive beamforming technology can engender speech understanding scores in noise that fall only a little short of scores for a single CI in quiet.
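
The adaptive beamformer tested here is proprietary, but the directional principle behind beamforming microphone modes can be sketched: a first-order, delay-and-subtract pair of omnidirectional microphones that cancels sound arriving from the rear. Everything below (the 1-cm spacing, 16-kHz sample rate, and FFT-based fractional delay) is an illustrative assumption, not MED-EL's implementation.

```python
# A minimal sketch of first-order delay-and-subtract directional
# processing. Spacing, sample rate, and the fractional-delay method are
# illustrative assumptions; this is not MED-EL's algorithm.
import numpy as np

def frac_delay(x, delay_samples):
    """Delay a signal by a possibly fractional number of samples (circular, via FFT)."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x))               # cycles per sample
    return np.fft.irfft(spectrum * np.exp(-2j * np.pi * freqs * delay_samples), n=len(x))

def rear_null_cardioid(front_mic, back_mic, fs=16000, spacing_m=0.01, c=343.0):
    """Subtract an acoustically delayed back mic so rear-arriving sound cancels."""
    delay = spacing_m / c * fs                    # inter-mic travel time, in samples
    return front_mic - frac_delay(back_mic, delay)
```

For sound arriving from the front, the same subtraction leaves the signal intact apart from a mild comb-filter tilt, which is the usual trade a first-order differential array makes for its rear null.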

Collaboration


Dive into Louise Loiselle's collaborations.

Top Co-Authors

Sarah Cook
Arizona State University

Ting Zhang
Arizona State University

JoAnne Whittingham
Children's Hospital of Eastern Ontario

Sarah Natale
Arizona State University