
Publications


Featured research published by Margaret W. Skinner.


International Journal of Technology Assessment in Health Care | 2000

THE SOCIETAL COSTS OF SEVERE TO PROFOUND HEARING LOSS IN THE UNITED STATES

Penny E. Mohr; Jacob J. Feldman; Jennifer L. Dunbar; Amy McConkey-Robbins; John K. Niparko; Robert K. Rittenhouse; Margaret W. Skinner

Objective: Severe to profound hearing impairment affects one-half to three-quarters of a million Americans. To function in a hearing society, hearing-impaired persons require specialized educational and social services, and other resources. The primary purpose of this study is to provide a comprehensive, national, and recent estimate of the economic burden of hearing impairment. Methods: We constructed a cohort-survival model to estimate the lifetime costs of hearing impairment. Data for the model were derived principally from the analyses of secondary data sources, including the National Health Interview Survey Hearing Loss and Disability Supplements (1990–91 and 1994–95), the Department of Education's National Longitudinal Transition Study (1987), and Gallaudet University's Annual Survey of Deaf and Hard of Hearing Youth (1997–98). These analyses were supplemented by a review of the literature and consultation with a four-member expert panel. Monte Carlo analysis was used for sensitivity testing. Results: Severe to profound hearing loss is expected to cost society $297,000 over the lifetime of an individual. Most of these losses (67%) are due to reduced work productivity, although the use of special education resources among children contributes an additional 21%. Lifetime costs for those with prelingual onset exceed $1 million. Conclusions: Results indicate that an additional $4.6 billion will be spent over the lifetime of persons who acquired their impairment in 1998. The particularly high costs associated with prelingual onset of severe to profound hearing impairment suggest interventions aimed at children, such as early identification and/or aggressive medical intervention, may have a substantial payback.
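
As a rough illustration of the sensitivity testing described above, the following Python sketch runs a Monte Carlo simulation over a toy cohort-survival cost model. All cost components, distributions, and rates are hypothetical placeholders, not values from the study; the point is only to show how uncertainty in the inputs propagates to a distribution of expected lifetime cost.

import numpy as np

rng = np.random.default_rng(0)

def lifetime_cost_draw(rng, horizon_years=75, discount_rate=0.03):
    # One Monte Carlo draw of discounted lifetime cost for a single person.
    # All parameter values below are illustrative assumptions, not study estimates.
    special_education = rng.normal(9000, 2000)   # per year, ages 5-17
    lost_productivity = rng.normal(7000, 2500)   # per year, ages 18-64
    medical_services = rng.normal(1500, 500)     # per year, all ages
    annual_survival = 0.995                      # crude flat survival probability
    total, alive = 0.0, 1.0
    for age in range(horizon_years):
        cost = medical_services
        if 5 <= age <= 17:
            cost += special_education
        if 18 <= age <= 64:
            cost += lost_productivity
        total += alive * cost / (1.0 + discount_rate) ** age
        alive *= annual_survival
    return total

draws = np.array([lifetime_cost_draw(rng) for _ in range(10_000)])
print(f"mean lifetime cost: ${draws.mean():,.0f}")
print(f"5th-95th percentile: ${np.percentile(draws, 5):,.0f} to ${np.percentile(draws, 95):,.0f}")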


Otology & Neurotology | 2008

Role of electrode placement as a contributor to variability in cochlear implant outcomes.

Charles C. Finley; Timothy A. Holden; Laura K. Holden; Bruce R. Whiting; Richard A. Chole; J. Gail Neely; Timothy E. Hullar; Margaret W. Skinner

Suboptimal cochlear implant (CI) electrode array placement may reduce presentation of coded information to the central nervous system and, consequently, limit speech recognition. Background: Generally, mean speech reception scores for CI recipients are similar across different CI systems, yet large outcome variation is observed among recipients implanted with the same device. These observations suggest significant recipient-dependent factors influence speech reception performance. This study examines electrode array insertion depth and scalar placement as recipient-dependent factors affecting outcome. Methods: Scalar location and depth of insertion of intracochlear electrodes were measured in 14 patients implanted with Advanced Bionics electrode arrays and whose word recognition scores varied broadly. Electrode position was measured using computed tomographic images of the cochlea and correlated with stable monosyllabic word recognition scores. Results: Electrode placement, primarily in terms of depth of insertion and scala tympani versus scala vestibuli location, varies widely across subjects. Lower outcome scores are associated with greater insertion depth and greater number of contacts being located in scala vestibuli. Three patterns of scalar placement are observed suggesting variability in insertion dynamics arising from surgical technique. Conclusion: A significant portion of variability in word recognition scores across a broad range of performance levels of CI subjects is explained by variability in scalar location and insertion depth of the electrode array. We suggest that this variability in electrode placement can be reduced and average speech reception improved by better selection of cochleostomy sites, revised insertion approaches, and control of insertion depth during surgical placement of the array.
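
The statistical core of the analysis above is a correlation between CT-derived placement measures and word recognition. A minimal Python sketch of that computation is shown below; the numbers are fabricated placeholders standing in for the per-subject measurements, which are not reproduced here.

import numpy as np
from scipy import stats

# Placeholder per-subject values (14 subjects, to mirror the study's sample size):
# insertion depth, number of contacts in scala vestibuli, and word score (% correct).
insertion_depth = np.array([286, 350, 400, 430, 455, 470, 480, 505, 520, 545, 580, 610, 640, 655])
sv_contacts = np.array([0, 0, 1, 0, 2, 3, 1, 4, 5, 6, 7, 8, 9, 10])
word_score = np.array([78, 74, 70, 72, 60, 55, 63, 48, 45, 38, 30, 25, 18, 12])

for label, x in [("insertion depth", insertion_depth), ("scala vestibuli contacts", sv_contacts)]:
    r, p = stats.pearsonr(x, word_score)
    print(f"{label}: r = {r:.2f}, p = {p:.4f}")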


JARO: Journal of the Association for Research in Otolaryngology | 2002

CT-Derived Estimation of Cochlear Morphology and Electrode Array Position in Relation to Word Recognition in Nucleus-22 Recipients

Margaret W. Skinner; Darlene R. Ketten; Laura K. Holden; Gary W. Harding; Peter G. Smith; George A. Gates; J. Gail Neely; G. Robert Kletzker; Barry S. Brunsden; Barbara Blocker

This study extended the findings of Ketten et al. [Ann. Otol. Rhinol. Laryngol. Suppl. 175:1–16 (1998)] by estimating the three-dimensional (3D) cochlear lengths, electrode array intracochlear insertion depths, and characteristic frequency ranges for 13 more Nucleus-22 implant recipients based on in vivo computed tomography (CT) scans. Array insertion depths were correlated with NU-6 word scores (obtained one year after SPEAK strategy use) by these patients and the 13 who used the SPEAK strategy from the Ketten et al. study. For these 26 patients, the range of cochlear lengths was 29.1–37.4 mm. Array insertion depth range was 11.9–25.9 mm, and array insertion depth estimated from the surgeon's report was 1.14 mm longer than CT-based estimates. Given the assumption that the human hearing range is fixed (20–20,000 Hz) regardless of cochlear length, characteristic frequencies at the most apical electrode (estimated with Greenwood's equation [Greenwood DD (1990) A cochlear frequency–position function for several species – 29 years later. J. Acoust. Soc. Am. 87:2592–2605] and a patient-specific value of the constant a) ranged from 308 to 3674 Hz. Patients' NU-6 word scores were significantly correlated with insertion depth as a percentage of total cochlear length (R = 0.452; r² = 0.204; p = 0.020), suggesting that part of the variability in word recognition across implant recipients can be accounted for by the position of the electrode array in the cochlea. However, NU-6 scores ranged from 4% to 81% correct for patients with array insertion depths between 47% and 68% of total cochlear length. Lower scores appeared related to low spiral ganglion cell survival (e.g., lues), aberrant current paths that produced facial nerve stimulation by apical electrodes (i.e., otosclerosis), central auditory processing difficulty, below-average verbal abilities, and early Alzheimer's disease. Higher scores appeared related to patients' high-average to above-average verbal abilities. Because most patients' scores increased with SPEAK use, it is hypothesized that they accommodated to the shift in frequency of incoming sound to a higher pitch percept with the implant than would normally be perceived acoustically.
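
For reference, a small Python sketch of Greenwood's frequency-position function is given below, using the commonly cited human constants (A = 165.4, a = 2.1 for position expressed as a proportion of cochlear length, k = 0.88). The study adjusted the constant for each patient's cochlear length, which this simplified version does not attempt.

def greenwood_cf(frac_from_apex, A=165.4, a=2.1, k=0.88):
    # Characteristic frequency (Hz) at a relative position along the cochlea,
    # where frac_from_apex = 0 at the apex and 1 at the base (Greenwood, 1990).
    return A * (10 ** (a * frac_from_apex) - k)

def apical_electrode_cf(insertion_depth_mm, cochlear_length_mm):
    # Insertion depth is measured from the base, so the most apical electrode
    # sits at a fraction 1 - depth/length of the distance from the apex.
    return greenwood_cf(1.0 - insertion_depth_mm / cochlear_length_mm)

# Example using values inside the ranges reported above (cochlear lengths of
# 29.1-37.4 mm, insertion depths of 11.9-25.9 mm); per-patient CFs will differ.
print(f"{apical_electrode_cf(20.0, 33.0):.0f} Hz")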


Ear and Hearing | 2004

Recognition of speech presented at soft to loud levels by adult cochlear implant recipients of three cochlear implant systems

Jill B. Firszt; Laura K. Holden; Margaret W. Skinner; Emily A. Tobey; Ann Peterson; Wolfgang Gaggl; Christina L. Runge-Samuelson; P. Ashley Wackym

Objective: The purpose of this study was to conduct a large-scale investigation with adult recipients of the Clarion, Med-El, and Nucleus cochlear implant systems to (1) determine average scores and ranges of performance for word and sentence stimuli presented at three intensity levels (70, 60, and 50 dB SPL); (2) provide information on the variability of scores for each subject by obtaining test-retest measures for all test conditions; and (3) further evaluate the potential use of lower speech presentation levels (i.e., 60 and/or 50 dB SPL) in cochlear implant candidacy assessment. Design: Seventy-eight adult cochlear implant recipients, 26 with each of the three cochlear implant systems, participated in the study. To ensure that the data collected reflect the range of performance of adult recipients using recent technology for the three implant systems (Clarion HiFocus I or II, Med-El Combi 40+, Nucleus 24M or 24R), a composite range and distribution of consonant-nucleus-consonant (CNC) monosyllabic word scores was determined. Subjects using each device were selected to closely represent this range and distribution of CNC performance. During test sessions, subjects were administered the Hearing in Noise Test (HINT) sentence test and the CNC word test at three presentation levels (70, 60, and 50 dB SPL). HINT sentences also were administered at 60 dB SPL with a signal-to-noise ratio (SNR) of +8 dB. Warble tones were used to determine sound-field threshold levels from 250 to 4000 Hz. Test-retest measures were obtained for each of the speech recognition tests as well as for warble-tone sound-field thresholds. Results: Cochlear implant recipients using the Clarion, Med-El, or Nucleus devices performed on average equally as well at 60 compared with 70 dB SPL when listening for words and sentences. Additionally, subjects had substantial open-set speech perception performance at the softer level of 50 dB SPL for the same stimuli; however, subjects' ability to understand speech was poorer when listening in noise to signals of greater intensity (60 dB SPL + 8 SNR) than when listening to signals presented at a soft presentation level (50 dB SPL) in quiet. A significant correlation was found between sound-field thresholds and speech recognition scores for presentation levels below 70 dB SPL. The results demonstrated a high test-retest reliability with cochlear implant users for these presentation levels and stimuli. Average sound-field thresholds were between 24 and 29 dB HL for frequencies of 250 to 4000 Hz, and results across sessions were essentially the same. Conclusions: Speech perception measures used with cochlear implant candidates and recipients should reflect the listening challenges that individuals encounter in natural communication situations. These data provide the basis for recommending new candidacy criteria based on speech recognition tests presented at 60 and/or 50 dB SPL, intensity levels that reflect real-life listening, rather than 70 dB SPL.


Annals of Otology, Rhinology, and Laryngology | 2007

In vivo estimates of the position of Advanced Bionics electrode arrays in the human cochlea.

Margaret W. Skinner; Timothy A. Holden; Bruce R. Whiting; Arne H. Voie; Barry S. Brunsden; J. Gail Neely; Eugene A. Saxon; Timothy E. Hullar; Charles C. Finley

Objectives: A new technique for determining the position of each electrode in the cochlea is described and applied to spiral computed tomography data from 15 patients implanted with Advanced Bionics HiFocus I, Ij, or Helix arrays. Methods: ANALYZE imaging software was used to register 3-dimensional image volumes from patients' preoperative and postoperative scans and from a single body donor whose unimplanted ears were scanned clinically, with micro computed tomography and with orthogonal-plane fluorescence optical sectioning (OPFOS) microscopy. By use of this registration, we compared the atlas of OPFOS images of soft tissue within the body donor's cochlea with the bone and fluid/tissue boundary available in patient scan data to choose the midmodiolar axis position and judge the electrode position in the scala tympani or scala vestibuli, including the distance to the medial and lateral scalar walls. The angular rotation 0° start point is a line joining the midmodiolar axis and the middle of the cochlear canal entry from the vestibule. Results: The group mean array insertion depth was 477° (range, 286° to 655°). The word scores were negatively correlated (r = −0.59; p = .028) with the number of electrodes in the scala vestibuli. Conclusions: Although the individual variability in all measures was large, repeated patterns of suboptimal electrode placement were observed across subjects, underscoring the applicability of this technique.
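
Once the electrode contacts, the midmodiolar axis, and the 0 degree reference direction are expressed in a common plane, angular insertion depths of the kind reported above reduce to an angle-unwrapping computation. The Python sketch below illustrates this under simplifying assumptions (contacts already projected onto a plane perpendicular to the midmodiolar axis, hypothetical coordinates, counterclockwise winding); it is not the ANALYZE-based workflow used in the study.

import numpy as np

def angular_depths(contacts_xy, axis_xy, zero_ref_xy):
    # Rotation (degrees) of each contact about the midmodiolar axis, measured from
    # the 0-degree reference direction and unwrapped so that rotation keeps
    # accumulating past 360 degrees from base to apex.
    axis = np.asarray(axis_xy, dtype=float)
    ref_vec = np.asarray(zero_ref_xy, dtype=float) - axis
    v = np.asarray(contacts_xy, dtype=float) - axis
    angles = np.arctan2(v[:, 1], v[:, 0]) - np.arctan2(ref_vec[1], ref_vec[0])
    angles = np.mod(angles, 2.0 * np.pi)   # most basal contact lands in [0, 360)
    return np.degrees(np.unwrap(angles))

# Hypothetical, idealized spiral of 16 contacts around an axis at the origin.
axis = np.array([0.0, 0.0])
zero_ref = np.array([5.0, 0.0])            # toward the cochlear canal entry from the vestibule
theta = np.linspace(0.15, 8.3, 16)         # about 475 degrees of rotation at the apical end
radius = np.linspace(4.0, 1.5, 16)         # spiral tightens toward the apex
contacts = np.stack([radius * np.cos(theta), radius * np.sin(theta)], axis=1)

print(f"most apical contact: {angular_depths(contacts, axis, zero_ref)[-1]:.0f} degrees")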


Ear and Hearing | 2002

Speech recognition with the Nucleus 24 SPEAK, ACE, and CIS speech coding strategies in newly implanted adults

Margaret W. Skinner; Laura K. Holden; Lesley A. Whitford; Kerrie Plant; Colleen Psarros; Timothy A. Holden

Objective The objective of this study was to determine whether 1) the SPEAK, ACE or CIS speech coding strategy was associated with significantly better speech recognition for individual subjects implanted with the Nucleus CI24M internal device who used the SPrint™ speech processor, and 2) whether a subject's preferred strategy for use in everyday life provided the best speech recognition. Design Twelve postlinguistically deaf, newly implanted adults participated. Initial preference for the three strategies was obtained with paired-comparison testing on the first day of implant stimulation with seven of eight U.S. subjects. During the first 12 wk, all subjects used each strategy alone for 4 wk to give them experience with the strategy and to identify preferred speech processor program parameters and settings that would be used in subsequent testing. For the next 6 wk, subjects used one strategy at a time for 2-wk intervals in the same order they had for the first 12 wk. At the end of each 2-wk interval, speech recognition testing was conducted with all three strategies. At the end of the 6 wk, all three strategies were placed on each subject's processor, and subjects were asked to compare listening with these three programs in as many situations as possible for the next 2 wk. When they returned, subjects responded to a questionnaire asking about their preferred strategy and responded to two lists of medial consonants using each of the three strategies. The U.S. subjects also responded to two lists of medial vowels with the three strategies. Results Six of the 12 subjects in the present study had significantly higher CUNY sentence scores with the ACE strategy than with one or both of the other strategies; one of the 12 subjects had a significantly higher score with SPEAK than with ACE. In contrast, only two subjects had significantly higher CNC word and phoneme scores with one or two strategies than with the third strategy. One subject had a significantly higher vowel score with the SPEAK strategy than with the CIS strategy; and no subjects had significantly higher consonant scores with any strategy. Seven of 12 subjects preferred the ACE strategy, three preferred the SPEAK strategy, and two preferred the CIS strategy. Subjects' responses on a questionnaire agreed closely with strategy preference from comparisons made in everyday life. There was a strong relation between the preferred strategy and scores on CUNY sentences but not for the other speech tests. For all subjects, except one, the preferred strategy was the one with the highest CUNY sentence score or was a strategy with a CUNY score not significantly lower than the highest score. Conclusions Despite differences in research design, there was remarkably close agreement in the pattern of group mean scores for the three strategies for CNC words and CUNY sentences in noise between the present study and the Conversion study (Arndt, Staller, Arcaroli, Hines, & Ebinger, Reference Note 1). In addition, essentially the same percentage of subjects preferred each strategy. For both studies, the strategy with which subjects had the highest score on the CUNY sentences in noise evaluation was strongly related to the preferred strategy; this relation was not strong for CNC words, CNC phonemes, vowels or consonants (Skinner, Arndt, & Staller, 2002). These results must be considered within the following context. For each strategy, programming parameters preferred for use in everyday life were determined before speech recognition was evaluated. In addition, implant recipients had experience listening with all three strategies in many situations in everyday life before choosing a preferred strategy. Finally, 11 of the 12 subjects strongly preferred one of the three strategies. Given the results and research design, it is recommended that clinicians fit each strategy sequentially starting with the ACE strategy so that the preferred programming parameters are determined for each strategy before recipients compare pairs of strategies. The goal is to provide the best opportunity for individuals to hear in everyday life within a clinically acceptable time period (e.g., 6 wk).


Journal of the Acoustical Society of America | 1997

SPEECH RECOGNITION AT SIMULATED SOFT, CONVERSATIONAL, AND RAISED-TO-LOUD VOCAL EFFORTS BY ADULTS WITH COCHLEAR IMPLANTS

Margaret W. Skinner; Laura K. Holden; Timothy A. Holden; Marilyn E. Demorest; Marios Fourakis

Ten postlinguistically deaf adults who used the Nucleus Cochlear Implant System and SPEAK speech coding strategy responded to vowels, consonants, words, and sentences presented sound-only at 70, 60, and 50 dB sound-pressure level. Highest group mean scores were at a raised-to-loud level of 70 dB for consonants (73%), words (44%), and sentences (87%); the highest score for vowels (70%) was at a conversational level of 60 dB. Lowest group mean scores were at a soft level of 50 dB for vowels (56%), consonants (47%), words (10%), and sentences (29%); all except subject 7 had some open-set speech recognition at this level. For the conversational level (60 dB), group mean scores for sentences and words were 72% and 29%, respectively. With this performance and sound-pressure level, it was observed that these subjects communicated successfully in a variety of listening situations. Given these subjects' speech recognition scores at 60 dB and the fact that 70 dB does not simulate the vocal effort used in everyday speaking situations, it is suggested that cochlear implant candidates and implantees be evaluated with speech tests presented at 60 dB instead of the customary 70 dB sound-pressure level to simulate benefit provided by implants in everyday life. Analysis of individuals' scores at the three levels for the four speech materials revealed different patterns of speech recognition among subjects (e.g., subjects 1 and 5). Future research on the relation between stimuli, sound processing, and subjects' responses associated with these different patterns may provide guidelines to select parameter values with which to map incoming sound onto an individual's electrical dynamic range between threshold and maximum acceptable loudness level to improve speech recognition.
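
The final point above, about mapping sound onto the electrical dynamic range between threshold (T) and maximum acceptable loudness (C) levels, can be shown schematically. The sketch below is a linear toy mapping with made-up T, C, and input-range values; the actual SPEAK processor uses per-channel, compressive loudness growth functions set in clinical fitting software, which this does not reproduce.

def map_level_to_current(level_db_spl, t_level, c_level, idr_low=30.0, idr_high=70.0):
    # Schematic acoustic-to-electric mapping: inputs between idr_low and idr_high
    # dB SPL are mapped linearly onto the range between T and C levels (arbitrary
    # clinical current units); levels outside that window are clipped.
    frac = (level_db_spl - idr_low) / (idr_high - idr_low)
    frac = min(max(frac, 0.0), 1.0)
    return t_level + frac * (c_level - t_level)

# Example with hypothetical T = 100 and C = 180 current units for one channel:
# soft (50 dB SPL), conversational (60 dB SPL), and raised-to-loud (70 dB SPL) inputs.
for level in (50, 60, 70):
    print(f"{level} dB SPL -> {map_level_to_current(level, 100, 180):.0f} clinical units")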


Journal of the Acoustical Society of America | 1980

Speech intelligibility in noise‐induced hearing loss: Effects of high‐frequency compensation

Margaret W. Skinner



Journal of Rehabilitation Research and Development | 2008

Restoring Hearing Symmetry with Two Cochlear Implants or One Cochlear Implant and a Contralateral Hearing Aid

Jill B. Firszt; Ruth M. Reeder; Margaret W. Skinner



IEEE Transactions on Medical Imaging | 2003

Blind deblurring of spiral CT images

Ming Jiang; Ge Wang; Margaret W. Skinner; Jay T. Rubinstein; Michael W. Vannier


Collaboration


Dive into Margaret W. Skinner's collaborations.

Top Co-Authors

Timothy A. Holden, Washington University in St. Louis
Laura K. Holden, Washington University in St. Louis
Ge Wang, Rensselaer Polytechnic Institute
Charles C. Finley, University of North Carolina at Chapel Hill
Susan M. Binzer, Washington University in St. Louis
Barry S. Brunsden, Washington University in St. Louis