Dawn Burton Koch
Northwestern University
Publications
Featured research published by Dawn Burton Koch.
Science | 1996
Nina Kraus; Therese McGee; Thomas D. Carrell; Steven G. Zecker; Trent Nicol; Dawn Burton Koch
Children with learning problems often cannot discriminate rapid acoustic changes that occur in speech. In this study of normal children and children with learning problems, impaired behavioral discrimination of a rapid speech change (/dα/ versus /gα/) was correlated with diminished magnitude of an electrophysiologic measure that is not dependent on attention or a voluntary response. The ability of children with learning problems to discriminate another rapid speech change (/bα/ versus /wα/) also was reflected in the neurophysiology. These results indicate that some children's discrimination deficits originate in the auditory pathway before conscious perception and have implications for differential diagnosis and targeted therapeutic strategies for children with learning disabilities and attention disorders.
Jaro-journal of The Association for Research in Otolaryngology | 2000
Nina Kraus; Ann R. Bradlow; Mary Ann Cheatham; Jenna Cunningham; Cynthia King; Dawn Burton Koch; Trent Nicol; Therese McGee; Laszlo Stein; Beverly A. Wright
The neural representation of sensory events depends upon neural synchrony. Auditory neuropathy, a disorder of stimulus-timing-related neural synchrony, provides a model for studying the role of synchrony in auditory perception. This article presents electrophysiological and behavioral data from a rare case of auditory neuropathy in a woman with normal hearing thresholds, making it possible to separate audibility from neuropathy. The experimental results, which encompass a wide range of auditory perceptual abilities and neurophysiologic responses to sound, provide new information linking neural synchrony with auditory perception. Findings illustrate that optimal eighth nerve and auditory brainstem synchrony do not appear to be essential for understanding speech in quiet listening situations. However, synchrony is critical for understanding speech in the presence of noise.
Hearing Research | 1993
Nina Kraus; Alan G. Micco; Dawn Burton Koch; Therese McGee; Thomas D. Carrell; Anu Sharma; Richard J. Wiet; Charles Z. Weingarten
The mismatch negativity (MMN) event-related potential is a non-task related neurophysiologic index of auditory discrimination. The MMN was elicited in eight cochlear implant recipients by the synthesized speech stimulus pair /da/ and /ta/. The response was remarkably similar to the MMN measured in normal-hearing individuals to the same stimuli. The results suggest that the central auditory system can process certain aspects of speech consistently, independent of whether the stimuli are processed through a normal cochlea or mediated by a cochlear prosthesis. The MMN shows promise as a measure for the objective evaluation of cochlear-implant function, and for the study of central neurophysiological processes underlying speech perception.
Ear and Hearing | 2007
Dawn Burton Koch; Mark Downing; Mary Joe Osberger; Leonid M. Litvak
Objectives: The HiResolution Bionic Ear has the capability of creating virtual spectral channels using current steering. Through simultaneous delivery of current to pairs of adjacent electrodes, it is hypothesized that the effective locus of stimulation can be steered to sites between the contacts by varying the proportion of current delivered to each electrode of the pair. Thus, theoretically, many intermediate regions of stimulation can be created with fine control over the proportion and amplitude of current delivered to each electrode. This study investigated the number of spectral channels—or different pitches—that could be resolved by adult users of the CII and HiRes 90K cochlear implants when current steering was applied to three pairs of electrodes along the implanted array. Design: Subjects were postlinguistically deafened adults recruited from the general CII and HiRes 90K user populations at 11 participating study sites. After loudness balancing and pitch ranking electrode pairs (2 and 3, 8 and 9, 13 and 14), an adaptive paradigm was used to estimate the number of intermediate pitch percepts that could be heard for each pair when current steering was implemented. Those data were used to estimate the potential number of spectral channels for each electrode pair. Results: Data from 57 implanted ears indicated that the numbers of spectral channels per electrode pair ranged from one (subjects who could not tell the electrodes apart) to 52 (an individual who had 52 different pitch percepts for the midarray pair of electrodes). The average numbers of spectral channels that could be distinguished were 5.4 for the basal electrode pair, 8.7 for the midarray electrode pair, and 7.2 for the apical electrode pair. Assuming that the average numbers of spectral channels for these three electrode pairs were representative of the entire 16-contact array, the potential total numbers of spectral channels could be estimated. 
For the 57 ears, the number of potential channels ranged from 8 to 466, with an average of 93. Conclusions: The HiResolution Bionic Ear has the ability to steer current through simultaneous stimulation of adjacent electrode contacts. These data show that the majority of subjects perceive additional spectral channels other than those associated with stimulation of the fixed electrodes when current steering is implemented. The results suggest that the average cochlear implant user may have significantly more place-pitch capability than is exploited presently by cochlear implant systems. Current steering will be implemented in a wearable sound-processing strategy that can deliver up to 120 spectral channels to CII and HiRes 90K recipients. The new strategy takes advantage of untapped capabilities of the CII/HiRes 90K implanted electronics and will be implemented through software, with no additional surgery required. It is anticipated that the improved spectral resolution offered by current steering will lead to better speech perception in noise and improved music appreciation.
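The current-steering mechanism described in this abstract can be sketched in a few lines. This is a hypothetical illustration of the general idea, not Advanced Bionics' actual implementation; the function names, the linear current split, and the simple pairs-times-steps scaling are assumptions for illustration only (the study estimated per-ear totals from measured data).

```python
# Sketch of current steering: a fixed total current is divided between two
# adjacent electrodes so the effective locus of stimulation shifts between
# the physical contacts, producing intermediate place-pitch percepts.

def steer_currents(total_current_ua, alpha):
    """Split total current between an electrode pair.

    alpha = 0.0 delivers all current to the first electrode of the pair,
    alpha = 1.0 all to the second; intermediate values move the effective
    stimulation site between the two contacts.
    """
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return (1.0 - alpha) * total_current_ua, alpha * total_current_ua

def potential_channels(steps_per_pair, n_pairs=15):
    """Rough scaling: a 16-contact array has 15 adjacent pairs, so the
    potential channel count grows as resolvable steps per pair x pairs."""
    return steps_per_pair * n_pairs
```

For example, `steer_currents(100.0, 0.25)` places three-quarters of the current on the first contact of the pair, and a listener resolving about 7 steps per pair would, under this naive scaling, have on the order of 100 potential channels, broadly consistent with the study's reported average of 93.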
Audiology and Neuro-otology | 2004
Dawn Burton Koch; Mary Joe Osberger; Phil Segel; Dorcas Kessler
Objective: This study compared speech perception benefits in adults implanted with the HiResolution™ (HiRes) Bionic Ear who used both conventional and HiRes sound processing. A battery of speech tests was used to determine which formats were most appropriate for documenting the wide range of benefit experienced by cochlear-implant users. Study Design: A repeated-measures design was used to assess postimplantation speech perception in adults who received the HiResolution Bionic Ear in a recent clinical trial. Patients were fit first with conventional strategies and assessed after 3 months of use. Patients were then switched to HiRes sound processing and assessed again after 3 months of use. To assess the immediate effect of HiRes sound processing on speech perception performance, consonant recognition testing was performed in a subset of patients after 3 days of HiRes use and compared with their 3-month performance with conventional processing. Setting: Subjects were implanted and evaluated at 19 cochlear implant programs in the USA and Canada affiliated primarily with tertiary medical centers. Patients: Patients were 51 postlinguistically deafened adults. Main Outcome Measures: Speech perception was assessed using CNC monosyllabic words, CID sentences and HINT sentences in quiet and noise. Consonant recognition testing was also administered to a subset of patients (n = 30) using the Iowa Consonant Test presented in quiet and noise. All patients completed a strategy preference questionnaire after 6 months of device use. Results: Consonant identification in quiet and noise improved significantly after only 3 days of HiRes use. The mean improvement from conventional to HiRes processing was significant on all speech perception tests. The largest differences occurred for the HINT sentences in noise. Ninety-six percent of the patients preferred HiRes to conventional sound processing. Ceiling effects occurred for both sentence tests in quiet. 
Conclusions: Although most patients improved after switching to HiRes sound processing, the greatest differences were seen in the ‘poor’ performers because ‘good’ performers often reached ceiling performance, especially on tests in quiet. Future evaluations of cochlear-implant benefit should make use of more difficult measures, especially for ‘good’ users. Nonetheless, a range of difficulty must remain in test materials to document benefit in the entire population of implant recipients.
Audiology and Neuro-otology | 1998
Nina Kraus; Therese McGee; Dawn Burton Koch
Historically, auditory research has focused predominantly upon how relatively simple acoustic signals are represented in the neuronal responses of the auditory periphery. However, in order to understand the neurophysiology underlying speech perception, the ultimate objective is to discover how speech sounds are represented in the central auditory system and to relate that representation to the perception of speech as a meaningful acoustic signal. This paper reviews three areas that pertain to the central auditory representation of speech: (1) the differences in neural representation of speech sounds at different levels of the auditory system; (2) the relation between the representation of sound in the auditory pathway and the perception/misperception of speech, and (3) the training-related plasticity of speech sound neural representation and speech perception.
Cochlear Implants International | 2010
Dawn Burton Koch; Sigfrid D. Soli; Mark Downing; Mary Joe Osberger
Abstract Normal-hearing listeners gain important everyday benefits from having two ears, particularly for determining where sounds come from and for understanding speech in noisy environments. Users of two cochlear implants may have the opportunity to experience some of these bilateral advantages. The primary aim of this study was to document bilateral versus unilateral listening benefit in 15 postlinguistically deafened adults implanted simultaneously with two Harmony® (HiRes 90K®) cochlear implants. Speech perception (in quiet and in noise) and localization accuracy were assessed for each ear alone and both ears together. Subjects showed improved sound localization and better speech perception in quiet and in noise when using two implants compared with using one implant alone.
Scandinavian audiology. Supplementum | 1998
Nina Kraus; Therese McGee; Dawn Burton Koch
Historically, auditory research has focused predominantly on how relatively simple acoustic signals are represented in the neuronal responses of the auditory periphery. However, in order to understand the neurophysiology underlying speech perception, the ultimate objective is to discover how speech sounds are represented in the central auditory system and to relate that representation to the perception of speech as a meaningful acoustic signal. This paper reviews three areas pertaining to the central auditory representation of speech: (1) the differences in neural representation of speech sounds at different levels of the auditory system, (2) the relation between the representation of sound in the auditory pathway and the perception/misperception of speech, and (3) the plasticity of speech-sound neural representation and speech perception.
Journal of The American Academy of Audiology | 2015
Jace Wolfe; Mila Morais; Erin C. Schafer; Smita Agrawal; Dawn Burton Koch
BACKGROUND Cochlear implant recipients often experience difficulty with understanding speech in the presence of noise. Cochlear implant manufacturers have developed sound processing algorithms designed to improve speech recognition in noise, and research has shown these technologies to be effective. Remote microphone technology utilizing adaptive, digital wireless radio transmission has also been shown to provide significant improvement in speech recognition in noise. There are no studies examining the potential improvement in speech recognition in noise when these two technologies are used simultaneously. PURPOSE The goal of this study was to evaluate the potential benefits and limitations associated with the simultaneous use of a sound processing algorithm designed to improve performance in noise (Advanced Bionics ClearVoice) and a remote microphone system that incorporates adaptive, digital wireless radio transmission (Phonak Roger). RESEARCH DESIGN A two-by-two repeated-measures design was used to examine performance differences obtained without these technologies compared to the use of each technology separately as well as the simultaneous use of both technologies. STUDY SAMPLE Eleven Advanced Bionics (AB) cochlear implant recipients, ages 11 to 68 yr. DATA COLLECTION AND ANALYSIS AzBio sentence recognition was measured in quiet and in the presence of classroom noise ranging in level from 50 to 80 dBA in 5-dB steps. Performance was evaluated in four conditions: (1) no ClearVoice and no Roger, (2) ClearVoice enabled without the use of Roger, (3) ClearVoice disabled with Roger enabled, and (4) simultaneous use of ClearVoice and Roger. RESULTS Speech recognition in quiet was better than speech recognition in noise for all conditions. Use of ClearVoice and Roger each provided significant improvement in speech recognition in noise. The best performance in noise was obtained with the simultaneous use of ClearVoice and Roger. 
CONCLUSIONS ClearVoice and Roger technology each improves speech recognition in noise, particularly when used at the same time. Because ClearVoice does not degrade performance in quiet settings, clinicians should consider recommending ClearVoice for routine, full-time use for AB implant recipients. Roger should be used in all instances in which remote microphone technology may assist the user in understanding speech in the presence of noise.
Otology & Neurotology | 2014
Dawn Burton Koch; Andrew Quick; Mary Joe Osberger; Aniket Saoji; Leonid M. Litvak
Objective To demonstrate benefits for speech perception and everyday listening in quiet and in noise with a speech-enhancement strategy called ClearVoice, which was designed to improve listening in complex acoustic environments without compromising hearing in quiet. Study Design A 2-week randomized crossover design was used to evaluate ClearVoice in 46 adults unilaterally implanted with a CII/HiRes 90K cochlear implant who had at least 6 months of experience with HiRes Fidelity 120 sound processing. Speech perception was assessed using the AzBio sentences presented in quiet, in speech-spectrum noise, and in multitalker babble. Subjective listening benefit and strategy preference were assessed with a questionnaire. ClearVoice has 3 gain settings (low, medium, and high), each intended as a full-time listening option according to individual preference. Speech understanding after acute use of ClearVoice-low was compared with HiRes Fidelity 120 during an initial test session. Speech perception abilities were compared with HiRes Fidelity 120 after 2 weeks of exclusive use of ClearVoice-medium, and after 2 weeks of exclusive use of ClearVoice-high. During a fifth week, participants were fit with 3 programs for comparison (HiRes Fidelity 120, ClearVoice-medium, and ClearVoice-high), after which they reported preference and everyday listening benefits via a questionnaire. Results ClearVoice significantly improved speech understanding in speech-spectrum noise and multitalker babble, did not compromise listening in quiet, was preferred for everyday listening, and provided improved hearing in real-life situations. Conclusion ClearVoice improves hearing in noise for cochlear implant recipients who use HiRes Fidelity 120 sound processing.