Publication


Featured research published by Marcia J. Hay-McCutcheon.


Acta Oto-laryngologica | 2008

Language skills of profoundly deaf children who received cochlear implants under 12 months of age: a preliminary study

Richard T. Miyamoto; Marcia J. Hay-McCutcheon; Karen Iler Kirk; Derek M. Houston; Tonya Bergeson-Dana

Conclusion. This study demonstrated that children who receive a cochlear implant before the age of 2 years obtain higher mean receptive and expressive language scores than children implanted after the age of 2 years. Objective. The purpose of this study was to compare the receptive and expressive language skills of children who received a cochlear implant before 1 year of age with those of children who received an implant between 1 and 3 years of age. Subjects and methods. Standardized language measures, the Reynell Developmental Language Scales (RDLS) and the Preschool Language Scale (PLS), were used to assess the receptive and expressive language skills of 91 children who received an implant before their third birthday. Results. The mean receptive and expressive language scores for the RDLS and the PLS were slightly higher for the children who were implanted before the age of 2 years than for the children who were implanted after that age. For the PLS, both the receptive and expressive mean standard scores decreased with increasing age at implantation.


Audiology and Neuro-otology | 2008

Using early language outcomes to predict later language ability in children with cochlear implants.

Marcia J. Hay-McCutcheon; Karen Iler Kirk; Shirley C. Henning; Sujuan Gao; Rong Qi

The increased access to sound that cochlear implants provide to profoundly deaf children has allowed them to develop English speech and language skills more successfully than with hearing aids alone. The purpose of this study was to determine how well early postimplant language skills predicted later language ability. Thirty children who received a cochlear implant between 1991 and 2000 were study participants. The Reynell Developmental Language Scales (RDLS) and the Clinical Evaluation of Language Fundamentals (CELF) were used as language measures. Results revealed that early receptive language skills, as measured using the RDLS, were good predictors of later core language ability assessed by the CELF. In contrast, early expressive language skills were not found to be good predictors of later language performance. The age at which a child received an implant had a significant impact on the early language measures, but not on the later language measure or on the ability of the RDLS to predict performance on the CELF.


Laryngoscope | 2005

Audiovisual Speech Perception in Elderly Cochlear Implant Recipients

Marcia J. Hay-McCutcheon; David B. Pisoni; Karen Iler Kirk

Objectives/Hypothesis: This study examined the speech perception skills of a younger and older group of cochlear implant recipients to determine the benefit that auditory and visual information provides for speech understanding.


The Annals of otology, rhinology & laryngology. Supplement | 2000

Speech perception in children with cochlear implants: Effects of lexical difficulty, talker variability, and word length

Karen Iler Kirk; Marcia J. Hay-McCutcheon; Susan Todd Sehgal; Richard T. Miyamoto

The present results demonstrated that all 3 factors (lexical difficulty, stimulus variability, and word length) significantly influenced spoken word recognition by children with multichannel cochlear implants. Lexically easy words were recognized significantly better than lexically hard words, regardless of talker condition or word length of the stimuli. These results support the earlier findings of Kirk et al (12) obtained with live-voice stimulus presentation and suggest that lexical effects are very robust. Despite the fact that listeners with cochlear implants receive a degraded speech signal, it appears that they organize and access words from memory relationally in the context of other words. The present results concerning talker variability contradict those previously reported in the literature for listeners with normal hearing (7,11) and for listeners with mild-to-moderate hearing loss who use hearing aids (14). The previous investigators used talkers and word lists different from those used in the current study and found that word recognition declined as talker variability increased. In the current study, word recognition was better in the multiple-talker condition than in the single-talker condition. Kirk (15) reported similar results for postlingually deafened adults with cochlear implants who were tested on the recorded word lists used in the present study. Although the talkers were equally intelligible to listeners with normal hearing in the pilot study, they were not equally intelligible to children or adults with cochlear implants. It appears that either the man in the single-talker condition was particularly difficult to understand or that some of the talkers in the multiple-talker condition were particularly easy to understand.
Despite the unexpected direction of the talker effects, the present results demonstrate that children with cochlear implants are sensitive to differences among talkers and that talker characteristics influence their spoken word recognition. We are conducting a study to assess the intelligibility of each of the 6 talkers to listeners with cochlear implants. Such studies should aid the development of equivalent testing conditions for listeners with cochlear implants.

There are 2 possible reasons the children in the present study identified multisyllabic words better than monosyllabic words. First, they may use the linguistic redundancy cues in multisyllabic words to aid in spoken word recognition. Second, multisyllabic words come from relatively sparse lexical neighborhoods compared with monosyllabic tokens. That is, multisyllabic words have fewer phonetically similar words, or neighbors, competing for selection than do monosyllabic stimuli. These lexical characteristics most likely contribute to the differences in identification noted as a function of word length. The significant lexical and word length effects noted here may yield important diagnostic information about spoken word recognition by children with sensory aids. For example, children who can make relatively fine phonetic distinctions should demonstrate only small differences in the recognition of lexically easy versus hard words or of monosyllabic versus multisyllabic stimuli. In contrast, children who process speech using broad phonetic categories should show much larger differences. That is, they may not be able to accurately encode words in general or lexically hard words specifically. Further study is warranted to determine the interaction between spoken word recognition and individual word encoding strategies.


International Journal of Audiology | 2009

Audiovisual asynchrony detection and speech perception in hearing-impaired listeners with cochlear implants: A preliminary analysis

Marcia J. Hay-McCutcheon; David B. Pisoni; Kristopher K. Hunt

This preliminary study examined the effects of hearing loss and aging on the detection of audiovisual (AV) asynchrony in hearing-impaired listeners with cochlear implants. Additionally, the relationship between AV asynchrony detection skills and speech perception was assessed. Individuals with normal hearing and cochlear implant recipients were asked to make judgments about the synchrony of AV speech. The cochlear implant recipients also completed three speech perception tests: the CUNY sentences, the HINT sentences, and the CNC test. No significant differences were observed in the detection of AV asynchronous speech between the normal-hearing listeners and the cochlear implant recipients. Older adults in both groups displayed wider timing windows, over which they identified AV asynchronous speech as being synchronous, than younger adults. For the cochlear implant recipients, no relationship between the size of the temporal asynchrony window and speech perception performance was observed. The findings from this preliminary experiment suggest that aging has a greater effect on the detection of AV asynchronous speech than the use of a cochlear implant. Additionally, the temporal width of the AV asynchrony function was not correlated with speech perception skills for hearing-impaired individuals who use cochlear implants.


Audiological Medicine | 2007

Audiovisual spoken word recognition by children with cochlear implants

Karen Iler Kirk; Marcia J. Hay-McCutcheon; Rachael Frush Holt; Sujuan Gao; Rong Qi; Bethany L. Gerlain

This study examined how prelingually deafened children with cochlear implants combine visual information from lip-reading with auditory cues in an open-set speech perception task. A secondary aim was to examine lexical effects on the recognition of words in isolation and in sentences. Fifteen children with cochlear implants served as participants in this study. Participants were administered two tests of spoken word recognition. The Lexical Neighborhood Test (LNT) assessed isolated word recognition in an auditory-only format. The AV-LNST assessed recognition of key words in sentences in a visual-only, auditory-only, and audiovisual presentation format. On each test, lexical characteristics of the stimulus items were controlled to assess the effects of lexical competition. The children were also administered a test of receptive vocabulary knowledge. The results revealed that recognition of key words was significantly influenced by presentation format. Audiovisual speech perception was best, followed by auditory-only and visual-only presentation, respectively. Lexical effects on spoken word recognition were evident for isolated words, but not when words were presented in sentences. Finally, there was a significant relationship between auditory-only and audiovisual word recognition and language knowledge. The results demonstrate that children with cochlear implants obtain significant benefit from audiovisual speech integration, and suggest that such tests should be included in test batteries intended to evaluate cochlear implant outcomes.


Journal of Speech Language and Hearing Research | 2017

An Exploration of the Associations among Hearing Loss, Physical Health, and Visual Memory in Adults from West Central Alabama.

Marcia J. Hay-McCutcheon; Adriana Hyams; Xin Yang; Jason M. Parton; Brianna Panasiuk; Sarah Ondocsin; Mary Margaret James; Forrest Scogin

Purpose The purpose of this preliminary study was to explore the associations among hearing loss, physical health, and visual memory in adults living in rural areas, urban clusters, and an urban city in West Central Alabama. Method Two hundred ninety-seven adults (182 women, 115 men) from rural areas, urban clusters, and an urban city of West Central Alabama completed a hearing assessment, a physical health questionnaire, a hearing handicap measure, and a visual memory test. Results A greater number of adults with hearing loss lived in rural areas and urban clusters than in an urban area. In addition, poorer physical health was significantly associated with hearing loss. A greater number of individuals with poor physical health who lived in rural towns and urban clusters had hearing loss compared with the adults with other physical health issues who lived in an urban city. Poorer hearing sensitivity resulted in poorer outcomes on the Emotional and Social subscales of the Hearing Handicap Inventory for Adults. Last, visual memory, a working-memory task, was not associated with hearing loss but was associated with educational level. Conclusions The outcomes suggest that hearing loss is associated with poor physical and emotional health but not with visual-memory skills. A greater number of adults living in rural areas experienced hearing loss compared with adults living in an urban city, and consequently, further research will be necessary to confirm this relationship and to explore the reasons behind it. Also, further exploration of the relationship between cognition and hearing loss in adults living in rural and urban areas will be needed.


Laryngoscope | 2010

An analysis of hearing aid fittings in adults using cochlear implants and contralateral hearing aids

Michael S. Harris; Marcia J. Hay-McCutcheon

The objective of this study was to assess the appropriateness of hearing aid fittings within a sample of adult cochlear implant recipients who use a hearing aid in the contralateral ear (i.e., bimodal stimulation).


The Annals of otology, rhinology & laryngology. Supplement | 2000

Structure of mental lexicons of children who use cochlear implants: Preliminary findings

Steven B. Chin; Ted A. Meyer; Marcia J. Hay-McCutcheon; Gary A. Wright; David B. Pisoni

Logan (1) has proposed that children older than 2 years with normal hearing organize and retrieve words from their mental lexicons using a phoneme-based strategy similar to that of adults. It is not clear whether the mental lexicons of children with profound deafness who use cochlear implants (CIs) are similarly organized, but the structures of such lexicons may indicate how these children acquire the ability to recognize and produce spoken words. To examine this, errors produced by such children on the Lexical Neighborhood Test (LNT) (2), an open-set, monosyllabic word recognition test, were analyzed within the framework of the Neighborhood Activation Model (NAM) (3,4). This model proposes that spoken word recognition occurs in the context of phonologically similar words, such that recognition of a spoken word is dependent on 1) the frequency of occurrence of the word and 2) words in the lexicon that are phonologically similar (by single-phoneme substitution, addition, or deletion) to the target word (“neighbors”), including both the number of neighbors (“neighborhood density”) and the neighbors’ mean frequency of occurrence in the language (“neighborhood frequency”).
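The NAM's neighbor definition described above is concrete enough to sketch in code: two words are neighbors if one phoneme substitution, addition, or deletion transforms one into the other, and a target word's neighborhood density and mean neighborhood frequency follow from scanning the lexicon. The sketch below is illustrative only; the toy phoneme-level lexicon and frequency counts are hypothetical and are not data from the study.

```python
def is_neighbor(a, b):
    """True if phoneme sequences a and b differ by exactly one
    substitution, insertion, or deletion (the NAM neighbor rule)."""
    if a == b:
        return False
    la, lb = len(a), len(b)
    if abs(la - lb) > 1:
        return False
    if la == lb:
        # Same length: neighbor iff exactly one substitution.
        return sum(x != y for x, y in zip(a, b)) == 1
    # Lengths differ by one: check single insertion/deletion.
    if la > lb:
        a, b = b, a  # make a the shorter sequence
    i = 0
    while i < len(a) and a[i] == b[i]:
        i += 1
    return a[i:] == b[i + 1:]

def neighborhood(target, lexicon):
    """Return (neighborhood density, mean neighbor frequency) for a
    target word. lexicon maps phoneme tuples to frequency counts."""
    neighbors = [w for w in lexicon if is_neighbor(target, w)]
    density = len(neighbors)
    mean_freq = sum(lexicon[w] for w in neighbors) / density if density else 0.0
    return density, mean_freq

# Hypothetical lexicon: phoneme tuples -> frequency of occurrence.
lexicon = {
    ("k", "ae", "t"): 120,      # cat
    ("b", "ae", "t"): 80,       # bat  (substitution neighbor)
    ("k", "ah", "t"): 40,       # cut  (substitution neighbor)
    ("k", "ae", "t", "s"): 30,  # cats (addition neighbor)
    ("ae", "t"): 200,           # at   (deletion neighbor)
    ("d", "ao", "g"): 90,       # dog  (not a neighbor)
}

density, mean_freq = neighborhood(("k", "ae", "t"), lexicon)
# "cat" has 4 neighbors here, with mean neighbor frequency 87.5.
```

Under the NAM, a target like "cat" in a dense, high-frequency neighborhood faces more competition during recognition than a multisyllabic word with few neighbors, which is the mechanism invoked in the word-length discussion above.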


Journal of Communication Disorders | 2018

Performance variability on perceptual discrimination tasks in profoundly deaf adults with cochlear implants

Marcia J. Hay-McCutcheon; Nathaniel R. Peterson; David B. Pisoni; Karen Iler Kirk; Xin Yang; Jason M. Parton

OBJECTIVES The purpose of this study was to evaluate performance on two challenging listening tasks, talker and regional accent discrimination, and to assess variables that could have affected the outcomes. STUDY DESIGN A prospective study using 35 adults with one cochlear implant (CI) or a CI and a contralateral hearing aid (bimodal hearing) was conducted. Adults completed talker and regional accent discrimination tasks. METHODS Two-alternative forced-choice tasks were used to assess talker and accent discrimination in a group of adults who ranged in age from 30 to 81 years old. RESULTS A large amount of performance variability was observed across listeners for both discrimination tasks. Three listeners successfully discriminated between talkers for both listening tasks, 14 participants successfully completed one discrimination task, and 18 participants were not able to discriminate between talkers for either listening task. Some adults who used bimodal hearing benefited from the additional acoustic cues provided through the hearing aid (HA), but for others the HA did not improve discrimination. Acoustic speech feature analysis of the test signals indicated that both the talker speaking rate and the fundamental frequency (F0) helped with talker discrimination. For accent discrimination, findings suggested that access to more salient spectral cues was important for better discrimination performance. CONCLUSIONS The ability to perform challenging discrimination tasks successfully likely involves a number of complex interactions between auditory and non-auditory pre- and post-implant factors. To understand why some adults with CIs perform similarly to adults with normal hearing and others experience difficulty discriminating between talkers, further research will be required with larger populations of adults who use unilateral CIs, bilateral CIs, and bimodal hearing.

Collaboration


Dive into Marcia J. Hay-McCutcheon's collaborations.

Top Co-Authors

David B. Pisoni

Indiana University Bloomington


Xin Yang

University of Alabama


Allen F. Ryan

University of California
