
Publication


Featured research published by Laura K. Holden.


Otology & Neurotology | 2008

Role of electrode placement as a contributor to variability in cochlear implant outcomes.

Charles C. Finley; Timothy A. Holden; Laura K. Holden; Bruce R. Whiting; Richard A. Chole; J. Gail Neely; Timothy E. Hullar; Margaret W. Skinner

Hypothesis: Suboptimal cochlear implant (CI) electrode array placement may reduce presentation of coded information to the central nervous system and, consequently, limit speech recognition. Background: Generally, mean speech reception scores for CI recipients are similar across different CI systems, yet large outcome variation is observed among recipients implanted with the same device. These observations suggest significant recipient-dependent factors influence speech reception performance. This study examines electrode array insertion depth and scalar placement as recipient-dependent factors affecting outcome. Methods: Scalar location and depth of insertion of intracochlear electrodes were measured in 14 patients implanted with Advanced Bionics electrode arrays and whose word recognition scores varied broadly. Electrode position was measured using computed tomographic images of the cochlea and correlated with stable monosyllabic word recognition scores. Results: Electrode placement, primarily in terms of depth of insertion and scala tympani versus scala vestibuli location, varies widely across subjects. Lower outcome scores are associated with greater insertion depth and a greater number of contacts being located in scala vestibuli. Three patterns of scalar placement are observed, suggesting variability in insertion dynamics arising from surgical technique. Conclusion: A significant portion of variability in word recognition scores across a broad range of performance levels of CI subjects is explained by variability in scalar location and insertion depth of the electrode array. We suggest that this variability in electrode placement can be reduced and average speech reception improved by better selection of cochleostomy sites, revised insertion approaches, and control of insertion depth during surgical placement of the array.
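The analysis described above comes down to correlating CT-derived electrode-placement measures with stable word-recognition scores. The sketch below shows that kind of correlation in generic form; the subject values, variable names, and the use of a Pearson correlation are illustrative assumptions, not the study's actual data or statistics.

```python
# Illustrative sketch only: correlating CT-derived electrode-placement measures
# with word-recognition scores, in the spirit of the analysis described above.
# All values and the choice of a Pearson correlation are hypothetical.
import numpy as np
from scipy.stats import pearsonr

# Hypothetical measures for 14 implant recipients
insertion_depth_deg = np.array([360, 390, 420, 450, 400, 480, 510, 370,
                                440, 465, 495, 430, 405, 520])   # angular insertion depth
contacts_in_sv = np.array([0, 0, 2, 5, 1, 7, 9, 0, 3, 6, 8, 2, 1, 10])  # electrodes in scala vestibuli
word_score_pct = np.array([78, 74, 65, 52, 70, 45, 38, 80, 60, 48, 40, 63, 72, 33])

for name, x in [("insertion depth", insertion_depth_deg),
                ("contacts in scala vestibuli", contacts_in_sv)]:
    r, p = pearsonr(x, word_score_pct)
    print(f"{name}: r = {r:+.2f}, p = {p:.4f}")
```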


Ear and Hearing | 2013

Factors affecting open-set word recognition in adults with cochlear implants.

Laura K. Holden; Charles C. Finley; Jill B. Firszt; Timothy A. Holden; Christine Brenner; Lisa G. Potts; Brenda D. Gotter; Sallie S. Vanderhoof; Karen M. Mispagel; Gitry Heydebrand; Margaret W. Skinner

Objective: A great deal of variability exists in the speech-recognition abilities of postlingually deaf adult cochlear implant (CI) recipients. A number of previous studies have shown that duration of deafness is a primary factor affecting CI outcomes; however, there is little agreement regarding other factors that may affect performance. The objective of the present study was to determine the source of variability in CI outcomes by examining three main factors: biographic/audiologic information, electrode position within the cochlea, and cognitive abilities, in a group of newly implanted CI recipients. Design: Participants were 114 postlingually deaf adults with either the Cochlear or Advanced Bionics CI systems. Biographic/audiologic information, aided sentence-recognition scores, a high-resolution temporal bone CT scan, and cognitive measures were obtained before implantation. Monosyllabic word recognition scores were obtained during numerous test intervals from 2 weeks to 2 years after initial activation of the CI. Electrode position within the cochlea was determined by three-dimensional reconstruction of pre- and postimplant CT scans. Participants' word scores over 2 years were fit with a logistic curve to predict word score as a function of time and to highlight four word-recognition metrics (CNC initial score, CNC final score, rise time to 90% of CNC final score, and CNC difference score). Results: Participants were divided into six outcome groups based on the percentile ranking of their CNC final score; that is, participants in the bottom 10% were in group 1, and those in the top 10% were in group 6. Across outcome groups, significant relationships from low to high performance were identified. Biographic/audiologic factors of age at implantation, duration of hearing loss, duration of hearing aid use, and duration of severe-to-profound hearing loss were significantly and inversely related to performance, as were frequency-modulated-tone sound-field threshold levels obtained with the CI. That is, the higher-performing outcome groups were younger in age at the time of implantation, had shorter durations of severe-to-profound hearing loss, and had lower CI sound-field threshold levels. Significant inverse relationships across outcome groups were also observed for electrode position, specifically the percentage of electrodes in scala vestibuli as opposed to scala tympani and the depth of insertion of the electrode array. In addition, positioning of electrode arrays closer to the modiolar wall was positively correlated with outcome. Cognitive ability was significantly and positively related to outcome; however, age at implantation and cognition were highly correlated. After controlling for age, cognition was no longer a factor affecting outcomes. Conclusion: A number of factors limit CI outcomes. They can act singly or collectively to restrict an individual's performance, and to varying degrees. The highest-performing CI recipients are those with the fewest limiting factors. Knowledge of when and how these factors affect performance can favorably influence counseling, device fitting, and rehabilitation for individual patients and can contribute to improved device design and application.
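The logistic-curve fit and the four word-recognition metrics named above lend themselves to a short illustration. The sketch below assumes a generic four-parameter logistic form and made-up CNC scores; the study's exact parameterization and data are not reproduced here.

```python
# A minimal sketch, assuming a generic four-parameter logistic form and made-up
# CNC word scores; the study's exact parameterization and data are not shown here.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, lower, upper, t50, slope):
    """Predicted word score (%) as a function of months of CI use."""
    return lower + (upper - lower) / (1.0 + np.exp(-(t - t50) / slope))

months = np.array([0.5, 1, 3, 6, 9, 12, 18, 24])   # time since initial activation
cnc = np.array([12, 20, 38, 52, 58, 60, 62, 63])    # hypothetical CNC word scores (%)

(lower, upper, t50, slope), _ = curve_fit(logistic, months, cnc,
                                          p0=[10, 60, 3, 2], maxfev=10000)

initial_score = logistic(0.5, lower, upper, t50, slope)   # score near the first test interval
final_score = upper                                       # asymptotic ("final") score
difference = final_score - initial_score                  # difference score

# Rise time: first time the fitted curve reaches 90% of the final score
t_grid = np.linspace(0, 24, 2401)
rise_time = t_grid[np.argmax(logistic(t_grid, lower, upper, t50, slope) >= 0.9 * final_score)]

print(f"initial={initial_score:.1f}%  final={final_score:.1f}%  "
      f"difference={difference:.1f} points  rise time={rise_time:.1f} months")
```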


Ear and Hearing | 1991

Performance of postlinguistically deaf adults with the Wearable Speech Processor (WSP III) and Mini Speech Processor (MSP) of the Nucleus Multi-Electrode Cochlear Implant.

Margaret W. Skinner; Laura K. Holden; Timothy A. Holden; Richard C. Dowell; Peter M. Seligman; Judith A. Brimacombe; Anne L. Beiter

Seven postlinguistically deaf adults implanted with the Nucleus Multi-Electrode Cochlear Implant participated in an evaluation of speech perception performance with three speech processors: the Wearable Speech Processor (WSP III), a prototype of the Mini Speech Processor, and the Mini Speech Processor. The first experiment was performed with the prototype and the Wearable Speech Processor, both programmed with the F0F1F2 speech coding strategy. The second experiment compared performance with the Mini Speech Processor programmed with the Multi-Peak speech coding strategy and the Wearable Speech Processor programmed with the F0F1F2 speech coding strategy. Performance was evaluated in the sound-only condition using recorded speech tests presented in quiet and in noise. Questionnaires and informal reports provided information about use in everyday life. In experiment I, there was no significant difference in performance between the Wearable Speech Processor and the prototype on any of the tests. Nevertheless, six of the seven subjects preferred the prototype for use in everyday life. In experiment II, performance on open-set tests in quiet and in noise was significantly higher with the Mini Speech Processor (Multi-Peak speech coding strategy) than with the Wearable Speech Processor. Subjects reported an increase in their ability to communicate with other people using the Mini Speech Processor (Multi-Peak speech coding strategy) compared with the Wearable Speech Processor in everyday life.


Ear and Hearing | 2002

Speech recognition with the Nucleus 24 SPEAK, ACE, and CIS speech coding strategies in newly implanted adults

Margaret W. Skinner; Laura K. Holden; Lesley A. Whitford; Kerrie Plant; Colleen Psarros; Timothy A. Holden

Objective: The objective of this study was to determine whether 1) the SPEAK, ACE, or CIS speech coding strategy was associated with significantly better speech recognition for individual subjects implanted with the Nucleus CI24M internal device who used the SPrint™ speech processor, and 2) whether a subject's preferred strategy for use in everyday life provided the best speech recognition. Design: Twelve postlinguistically deaf, newly implanted adults participated. Initial preference for the three strategies was obtained with paired-comparison testing on the first day of implant stimulation with seven of eight U.S. subjects. During the first 12 wk, all subjects used each strategy alone for 4 wk to give them experience with the strategy and to identify preferred speech processor program parameters and settings that would be used in subsequent testing. For the next 6 wk, subjects used one strategy at a time for 2-wk intervals in the same order as in the first 12 wk. At the end of each 2-wk interval, speech recognition testing was conducted with all three strategies. At the end of the 6 wk, all three strategies were placed on each subject's processor, and subjects were asked to compare listening with these three programs in as many situations as possible for the next 2 wk. When they returned, subjects responded to a questionnaire asking about their preferred strategy and responded to two lists of medial consonants using each of the three strategies. The U.S. subjects also responded to two lists of medial vowels with the three strategies. Results: Six of the 12 subjects in the present study had significantly higher CUNY sentence scores with the ACE strategy than with one or both of the other strategies; one of the 12 subjects had a significantly higher score with SPEAK than with ACE. In contrast, only two subjects had significantly higher CNC word and phoneme scores with one or two strategies than with the third strategy. One subject had a significantly higher vowel score with the SPEAK strategy than with the CIS strategy, and no subjects had significantly higher consonant scores with any strategy. Seven of 12 subjects preferred the ACE strategy, three preferred the SPEAK strategy, and two preferred the CIS strategy. Subjects' responses on the questionnaire agreed closely with strategy preference from comparisons made in everyday life. There was a strong relation between the preferred strategy and scores on CUNY sentences but not for the other speech tests. For all subjects except one, the preferred strategy was the one with the highest CUNY sentence score or was a strategy with a CUNY score not significantly lower than the highest score. Conclusions: Despite differences in research design, there was remarkably close agreement in the pattern of group mean scores for the three strategies for CNC words and CUNY sentences in noise between the present study and the Conversion study (Arndt, Staller, Arcaroli, Hines, & Ebinger, Reference Note 1). In addition, essentially the same percentage of subjects preferred each strategy. For both studies, the strategy with which subjects had the highest score on the CUNY sentences-in-noise evaluation was strongly related to the preferred strategy; this relation was not strong for CNC words, CNC phonemes, vowels, or consonants (Skinner, Arndt, & Staller, 2002). These results must be considered within the following context. For each strategy, programming parameters preferred for use in everyday life were determined before speech recognition was evaluated. In addition, implant recipients had experience listening with all three strategies in many situations in everyday life before choosing a preferred strategy. Finally, 11 of the 12 subjects strongly preferred one of the three strategies. Given the results and research design, it is recommended that clinicians fit each strategy sequentially, starting with the ACE strategy, so that the preferred programming parameters are determined for each strategy before recipients compare pairs of strategies. The goal is to provide the best opportunity for individuals to hear in everyday life within a clinically acceptable time period (e.g., 6 wk).
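For context on what "significantly higher" can mean for an individual's score, one common approach is to compare two percent-correct scores as proportions over the test items. The sketch below is a generic two-proportion comparison with made-up numbers; the abstract does not state the study's actual statistical method, so this is an illustration only.

```python
# Illustrative only: one common way to ask whether an individual's score with one
# coding strategy is "significantly higher" than with another is to compare the two
# percent-correct scores as proportions over the test items. This is a generic
# sketch with made-up numbers, not the statistical method reported in the study.
from math import sqrt
from scipy.stats import norm

def compare_scores(correct_a, correct_b, n_items):
    """Two-sided two-proportion z-test on items correct with strategies A and B."""
    p_a, p_b = correct_a / n_items, correct_b / n_items
    pooled = (correct_a + correct_b) / (2 * n_items)
    se = sqrt(2 * pooled * (1 - pooled) / n_items)
    z = (p_a - p_b) / se
    return z, 2 * norm.sf(abs(z))

# Hypothetical sentence scores: 74/102 words correct with ACE vs. 58/102 with SPEAK
z, p = compare_scores(74, 58, 102)
print(f"z = {z:.2f}, two-sided p = {p:.3f}")
```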


Otology & Neurotology | 2012

Auditory abilities after cochlear implantation in adults with unilateral deafness: a pilot study.

Jill B. Firszt; Laura K. Holden; Ruth M. Reeder; Susan B. Waltzman; Susan Arndt

Objective: This pilot study examined speech recognition, localization, temporal and spectral discrimination, and subjective reports of cochlear implant (CI) recipients with unilateral deafness. Study Design: Three adult male participants with short-term unilateral deafness (<5 yr) participated. All had sudden onset of severe-to-profound hearing loss in 1 ear, which then received a CI, and normal or near-normal hearing in the other ear. Speech recognition in quiet and noise, localization, discrimination of temporal and spectral cues, and a subjective questionnaire were obtained over several days. Listening conditions were the CI alone, the normal-hearing (NH) ear alone, and bilateral (CI and NH). Results: All participants had open-set speech recognition and excellent audibility (250–6,000 Hz) with the CI. Localization improved bilaterally compared with the NH ear alone. Word recognition in noise was significantly better bilaterally than with the NH ear for 2 participants. Sentence recognition in various noise conditions did not show significant bilateral improvement; however, the CI did not hinder performance in noise even when noise was directed toward the CI side. The addition of the CI improved temporal difference discrimination for 2 participants and spectral difference discrimination for all participants. Participants wore the CI full time, and subjective reports were positive. Conclusion: Overall, the CI recipients with unilateral deafness obtained open-set speech recognition, improved localization, improved word recognition in noise, and improved perception of their ability to hear in everyday life. A larger study is warranted to further quantify the benefits and limitations of cochlear implantation in individuals with unilateral deafness.


Ear and Hearing | 2012

Cochlear implantation in adults with asymmetric hearing loss.

Jill B. Firszt; Laura K. Holden; Ruth M. Reeder; Lisa Cowdrey; Sarah King

Objective: Bilateral severe to profound sensorineural hearing loss is a standard criterion for cochlear implantation. Increasingly, patients are implanted in one ear and continue to use a hearing aid in the nonimplanted ear to improve abilities such as sound localization and speech understanding in noise. Patients with severe to profound hearing loss in one ear and a more moderate hearing loss in the other ear (i.e., asymmetric hearing) are not typically considered candidates for cochlear implantation. Amplification in the poorer ear is often unsuccessful because of limited benefit, restricting the patient to unilateral listening from the better ear alone. The purpose of this study was to determine whether patients with asymmetric hearing loss could benefit from cochlear implantation in the poorer ear with continued use of a hearing aid in the better ear. Design: Ten adults with asymmetric hearing between ears participated. In the poorer ear, all participants met cochlear implant candidacy guidelines; seven had postlingual onset, and three had pre/perilingual onset of severe to profound hearing loss. All had open-set speech recognition in the better-hearing ear. Assessment measures included word and sentence recognition in quiet, sentence recognition in fixed noise (four-talker babble) and in diffuse restaurant noise using an adaptive procedure, localization of word stimuli, and a hearing handicap scale. Participants were evaluated preimplant with hearing aids and postimplant with the implant alone, the hearing aid alone in the better ear, and bimodally (the implant and hearing aid in combination). Postlingual participants were evaluated at 6 mo postimplant, and pre/perilingual participants were evaluated at 6 and 12 mo postimplant. Data analysis compared the following results: (1) the poorer-hearing ear preimplant (with hearing aid) and postimplant (with cochlear implant); (2) the device(s) used for everyday listening pre- and postimplant; and (3) the hearing aid-alone and bimodal listening conditions postimplant. Results: The postlingual participants showed significant improvements in speech recognition after 6 mo of cochlear implant use in the poorer ear. Five postlingual participants had a bimodal advantage over the hearing aid-alone condition on at least one test measure. On average, the postlingual participants had significantly improved localization with bimodal input compared with the hearing aid alone. Only one pre/perilingual participant had open-set speech recognition with the cochlear implant. This participant had better hearing than the other two pre/perilingual participants in both the poorer and better ear. Localization abilities were not significantly different between the bimodal and hearing aid-alone conditions for the pre/perilingual participants. Mean hearing handicap ratings improved postimplant for all participants, indicating perceived benefit in everyday life with the addition of the cochlear implant. Conclusions: Patients with asymmetric hearing loss who are not typical cochlear implant candidates can benefit from using a cochlear implant in the poorer ear with continued use of a hearing aid in the better ear. For this group of 10, the 7 postlingually deafened participants showed greater benefits with the cochlear implant than the pre/perilingual participants; however, further study is needed to determine maximum benefit for those with early onset of hearing loss.
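The adaptive noise testing mentioned above typically tracks the signal-to-noise ratio needed for a criterion level of sentence recognition. The sketch below is a generic one-up/one-down staircase, included only as an illustration; the abstract does not specify the study's actual adaptive rule, step size, or scoring.

```python
# A generic adaptive-SNR staircase, included only as an illustration; the abstract
# does not specify the study's actual adaptive rule, step size, or scoring.
# The SNR drops after a correct response and rises after an error, converging
# toward the level that yields roughly 50% correct.
def adaptive_snr_track(responses, start_snr_db=10.0, step_db=2.0):
    """responses: iterable of booleans (True = sentence repeated correctly)."""
    snr = start_snr_db
    track = [snr]
    for correct in responses:
        snr += -step_db if correct else step_db
        track.append(snr)
    # One common summary is the mean SNR over the last several trials
    return track, sum(track[-6:]) / 6

track, srt_estimate = adaptive_snr_track(
    [True, True, False, True, False, False, True, True, False, True])
print(f"SNR track (dB): {track}")
print(f"Estimated speech reception threshold: {srt_estimate:.1f} dB SNR")
```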


Otology & Neurotology | 2009

Speech recognition in cochlear implant recipients: comparison of standard HiRes and HiRes 120 sound processing.

Jill B. Firszt; Laura K. Holden; Ruth M. Reeder; Margaret W. Skinner

Objective: HiRes (HR) 120 is a sound-processing strategy purported to offer an increase in the precision of frequency-to-place mapping through the use of current steering. This within-subject study was designed to compare speech recognition as well as music and sound quality ratings for HR and HR 120 processing. Setting: Cochlear implant/tertiary referral center. Subjects: Eight postlinguistically deafened adults implanted with an Advanced Bionics CII or HR 90K cochlear implant. Study Design/Main Outcome Measures: Performance with HR and HR 120 was assessed during 4 test sessions with a battery of measures including monosyllabic words, sentences in quiet and in noise, and ratings of sound quality and musical passages. Results: Compared with HR, speech recognition results in adult cochlear implant recipients revealed small but significant improvements with HR 120 for single-syllable words and for 2 of 3 sentence recognition measures in noise. Scores for both easy and more difficult sentence material presented in quiet were not significantly different between strategies. Additionally, music quality ratings were significantly better for HR 120 than for HR, and 7 of 8 subjects preferred HR 120 over HR for listening in everyday life. Conclusion: HR 120 may offer equivalent or improved benefit to patients compared with HR. Differences in performance on test measures between strategies depend on the speech recognition materials and listening conditions.
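Current steering, named above, splits current between two adjacent physical electrodes so the effective place of stimulation falls between them, which is how HiRes 120 derives many more spectral bands than there are contacts. The sketch below is a simplified illustration of the weighting idea, not Advanced Bionics' actual signal processing; the current level and step count are arbitrary.

```python
# Simplified illustration of the current-steering idea behind HiRes 120, not
# Advanced Bionics' actual signal processing: current is divided between two
# adjacent electrodes so the effective place of stimulation falls between them.
# The current level and the number of steering steps are arbitrary here.
def steer_current(total_current_ua, alpha):
    """alpha in [0, 1]: 0 = all current on the apical contact of the pair,
    1 = all current on the basal contact; intermediate values create
    'virtual' stimulation sites between the two physical contacts."""
    apical = (1.0 - alpha) * total_current_ua
    basal = alpha * total_current_ua
    return apical, basal

# Example: stepping the steering coefficient across one electrode pair
for step in range(9):
    alpha = step / 8
    apical, basal = steer_current(300.0, alpha)
    print(f"alpha={alpha:.3f}  apical={apical:6.1f} uA  basal={basal:6.1f} uA")
```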


Ear and Hearing | 2003

An investigation of input level range for the nucleus 24 cochlear implant system: speech perception performance, program preference, and loudness comfort ratings.

Chris James; Margaret W. Skinner; Lois F. A. Martin; Laura K. Holden; Karyn L. Galvin; Timothy A. Holden; Lesley A. Whitford

Objective: Cochlear implant recipients often have limited access to lower level speech sounds. In this study, we evaluated the effects of varying the input range characteristics of the Nucleus 24 cochlear implant system on recognition of vowels, consonants, and sentences in noise and on listening in everyday life. Design: Twelve subjects participated in the study, which was divided into two parts. In Part 1, subjects used speech processor (Nucleus 24 SPrint™) programs adjusted for three input sensitivity settings: a standard or default microphone sensitivity setting (MS 8), a setting that increased the input sensitivity by 10.5 dB (MS 15), and the same setting that increased input sensitivity but also incorporated the automatic sensitivity control (ASC; i.e., MS 15A) that is designed to reduce the loudness of noise. The default instantaneous input dynamic range (IIDR) of 30 dB was used in these programs (i.e., base level of 4; BL 4). Subjects were tested using each sensitivity program with vowels and consonants presented at very low to casual conversational levels of 40 dB SPL and 55 dB SPL, respectively. They were also tested with sentences presented at a raised level of 65 dB SPL in multi-talker babble at individually determined signal-to-noise ratios. In addition, subjects were given experience outside of the laboratory for several weeks. They were asked to complete a questionnaire in which they compared the programs in different listening situations as well as the loudness of environmental sounds and stated the setting they preferred overall. In Part 2 of the study, subjects used two programs. The first program was their preferred sensitivity program from Part 1, which had an IIDR of 30 dB (BL 4). Seven subjects used MS 8, four used MS 15, and one used the noise-reduction program MS 15A. The second program used the same microphone sensitivity but had the IIDR extended by an additional 8 to 10 dB (BL 1/0). These two programs were evaluated similarly in the speech laboratory and with take-home experience as in Part 1. Results, Part 1: Increasing the microphone input sensitivity by 10.5 dB (from MS 8 to MS 15) significantly improved the perception of vowels and consonants at 40 and 55 dB SPL. The group mean improvement in vowel scores was 25 percentage points at 40 dB SPL and 4 percentage points at 55 dB SPL. The group mean improvement for consonants was 23 percentage points at 40 dB SPL and 11 percentage points at 55 dB SPL. Increased input sensitivity did not significantly reduce the perception of sentences presented at 65 dB SPL in babble, despite the fact that speech peaks were then within the compressed range above the SPrint processor's automatic gain control (AGC) knee-point. Although there was a demonstrable advantage for perception of low-level speech with the higher input sensitivity (MS 15 and 15A), seven of the 12 subjects preferred MS 8, four preferred MS 15 or 15A, and one had no preference overall. Approximately half the subjects preferred MS 8 across the 18 listening situations, whereas an average of two subjects preferred MS 15 or 15A. The increased microphone sensitivity of MS 15 substantially increased the loudness of environmental sounds. However, use of the ASC noise-reduction setting with MS 15 reduced the loudness of environmental sounds to equal or below that for MS 8. Results, Part 2: The increased instantaneous input range gave some improvement (8 to 9 percentage points for the 40 dB SPL presentation level) in the perception of consonants. There was no statistically significant increase in vowel scores. Mean scores for sentences presented at 65 dB SPL in babble were significantly lower (5 percentage points) for the increased IIDR setting. Subjects had no preference for the increased IIDR over the default. The IIDR setting had no effect on the loudness of environmental sounds. Conclusions: Given that individuals differ in threshold (T) and comfort (C) levels for electrical stimulation and in preferred microphone sensitivity, volume control, and noise-reduction settings, it is essential for the clinician and recipient to determine what combination is best for the individual over several sessions. The results of this study clearly show the advantage of using higher microphone sensitivity settings than the default MS 8 to provide better speech recognition for low-level stimuli. However, it was also necessary to adjust other parameters, such as MAP C levels, automatic sensitivity control, and base level, to optimize loudness comfort in the diversity of listening situations an individual encounters in everyday life.
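The sensitivity and IIDR manipulations above amount to moving and widening the window of acoustic input levels that the processor maps onto its electrical output range. The sketch below shows that arithmetic in simplified form; the 70 dB SPL reference for the top of the default window is an assumed, illustrative value, not the SPrint processor's documented calibration.

```python
# Simplified sketch of the input-window arithmetic discussed above. The exact
# mapping from SPrint sensitivity and base-level settings to dB SPL is device
# specific; the 70 dB SPL reference used here is an assumed, illustrative value.
def input_window(top_of_window_spl, iidr_db, sensitivity_boost_db=0.0):
    """Return (lowest, highest) input levels (dB SPL) captured by the processor.
    Raising microphone sensitivity shifts the whole window downward so that
    softer sounds fall inside the instantaneous input dynamic range (IIDR);
    lowering the base level widens the window at the bottom."""
    top = top_of_window_spl - sensitivity_boost_db
    bottom = top - iidr_db
    return bottom, top

settings = {
    "default (like MS 8, BL 4)": input_window(70.0, 30.0),
    "higher sensitivity (like MS 15, BL 4)": input_window(70.0, 30.0, 10.5),
    "higher sensitivity + wider IIDR (like MS 15, BL 1/0)": input_window(70.0, 38.0, 10.5),
}
for name, (lo, hi) in settings.items():
    print(f"{name}: {lo:.1f} to {hi:.1f} dB SPL")
```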


Ear and Hearing | 2007

Clinical evaluation of higher stimulation rates in the nucleus research platform 8 system.

Kerrie Plant; Laura K. Holden; Margo Skinner; Jennifer Arcaroli; Lesley A. Whitford; Mary-Ann Law; Esti Nel

Objective: The effect on speech perception of using higher stimulation rates than the 14.4 kHz available in the Nucleus 24 cochlear implant system was investigated. The study used the Nucleus Research Platform 8 (RP8) system, comprising the CI24RE receiver-stimulator with the Contour electrode array, the L34SP body-worn research speech processor, and the Nucleus Programming Environment (NPE) fitting and Neural Response Telemetry (NRT) software. This system enabled clinical investigation of higher stimulation rates before an implementation in the Freedom cochlear implant system commercially released by Cochlear Limited. Design: Use of higher stimulation rates in the ACE coding strategy was assessed in 15 adult subjects. An ABAB experimental design was used to control for order effects. Program A used a total stimulation rate of between 12 kHz and 14.4 kHz. This program was used for at least the first 3 mo after initial device activation. After evaluation with this program, each subject was provided with two different higher stimulation rate programs: one with a total stimulation rate of 24 kHz and the other with a total stimulation rate of 32 kHz. After a 6-week period of familiarization, each subject identified his/her preferred higher rate program (program B), and this was used for the evaluation. Subjects then repeated their use of program A for 3 wk, then program B for 3 wk, before the second evaluation with each. Speech perception was evaluated by using CNC open-set monosyllabic words presented in quiet and CUNY open-set sentences presented in noise. Preference for stimulation rate program was assessed via a subjective questionnaire. Threshold (T)- and Comfortable (C)-levels, as well as subjective reports of tinnitus, were monitored for each subject throughout the study to determine whether there were any changes that might be associated with the use of higher stimulation rates. Results: No significant mean differences in speech perception results were found for the group between the two programs for tests in either quiet or noise. Analysis of individual subject data showed that five subjects had significant benefit from use of program B for tests administered in quiet or for tests administered in noise; however, only two of these subjects showed benefit in both test conditions. One subject showed significant benefit from use of program A when tested in quiet, whereas another showed benefit with this program in noise. The preferred program varied across subjects. Five subjects reported a preference for program A, eight subjects reported a preference for program B, and two reported no overall preference. Preference between the different stimulation rates provided within program B also varied, with 10 subjects preferring the 24 kHz and five preferring the 32 kHz total stimulation rate. A significant increase in T-levels from baseline measures was observed after 3 wk of initial experience with program B; however, there was no difference between the baseline levels and those obtained after 5 wk of use. No significant change in C-levels was found over the monitoring period. No long-term changes in tinnitus that could be associated with the use of the higher stimulation rates were reported by any of the subjects. Conclusions: The use of higher stimulation rates may provide benefit to some but not all cochlear implant recipients. It is important to optimize the stimulation rate for an individual to ensure maximal benefit. The absence of any sustained changes in T- and C-levels or in tinnitus suggests that higher stimulation rates are safe for clinical use.
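The "total stimulation rate" figures above combine the per-channel pulse rate and the number of maxima stimulated per analysis frame in ACE. The sketch below shows that arithmetic; the choice of 8 maxima is an assumption for illustration, since the subjects' individual programs are not detailed in the abstract.

```python
# The "total stimulation rate" quoted above is roughly the per-channel pulse rate
# multiplied by the number of maxima stimulated per analysis frame in ACE. The
# choice of 8 maxima below is an assumption for illustration; individual subjects'
# programs may have used different values.
def per_channel_rate_pps(total_rate_hz, n_maxima):
    """Per-channel pulse rate implied by a given total stimulation rate."""
    return total_rate_hz / n_maxima

n_maxima = 8  # assumed number of maxima per analysis frame
for total_hz in (14_400, 24_000, 32_000):
    per_channel = per_channel_rate_pps(total_hz, n_maxima)
    print(f"total {total_hz / 1000:4.1f} kHz with {n_maxima} maxima "
          f"-> {per_channel:.0f} pulses/s per channel")
```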


Otolaryngology-Head and Neck Surgery | 1997

Parameter Selection to Optimize Speech Recognition with the Nucleus Implant

Margaret W. Skinner; Laura K. Holden; Timothy A. Holden

Speech coding strategy, frequency boundary assignment table, and speech processor program minimum and maximum stimulation levels are parameters of the Nucleus Cochlear Implant System whose selection affects speech recognition performance in adults and children. Research studies show that speech recognition is significantly better (1) with the Spectral Peak than with the Multipeak speech coding strategy and (2) with frequency boundary assignment Table 7 than with Table 9 in an individual's speech processor program (MAP). Minimum and maximum stimulation levels in this MAP are based on psychophysical measurements on each electrode but often need to be modified for optimum use in everyday life. Many children and adults have increases, decreases, or fluctuations in electrical hearing that require changes in the MAP minimum and maximum levels to maintain their ability to recognize speech and other sounds.
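The MAP concepts named above (per-electrode minimum and maximum stimulation levels plus a frequency-band assignment) can be pictured as a small data structure. The sketch below is a toy representation with illustrative field names and values, not Cochlear's actual MAP format.

```python
# A toy representation of the MAP concepts named above: per-electrode minimum (T)
# and maximum (C) stimulation levels plus a frequency-band assignment. Field
# names and values are illustrative, not Cochlear's actual MAP format.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class ElectrodeEntry:
    electrode: int            # intracochlear electrode number
    band_hz: tuple            # (low, high) analysis-band edges assigned to it
    t_level: int              # minimum stimulation level (clinical units)
    c_level: int              # maximum comfortable stimulation level

def shift_levels(entry: ElectrodeEntry, t_shift: int = 0, c_shift: int = 0) -> ElectrodeEntry:
    """Adjust T/C levels, e.g., when a recipient's electrical hearing fluctuates."""
    return replace(entry, t_level=entry.t_level + t_shift, c_level=entry.c_level + c_shift)

cochlear_map = [
    ElectrodeEntry(22, (188, 313), 105, 160),   # most apical band (illustrative values)
    ElectrodeEntry(21, (313, 438), 110, 168),
]
cochlear_map = [shift_levels(e, c_shift=-5) for e in cochlear_map]  # e.g., global C-level decrease
print(cochlear_map)
```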

Collaboration


Dive into Laura K. Holden's collaborations.

Top Co-Authors

Timothy A. Holden (Washington University in St. Louis)
Jill B. Firszt (Washington University in St. Louis)
Ruth M. Reeder (Washington University in St. Louis)
Marios Fourakis (University of Wisconsin-Madison)
Charles C. Finley (University of North Carolina at Chapel Hill)
J. Gail Neely (Washington University in St. Louis)