
Publications


Featured research published by Marc A. Brennan.


Ear and Hearing | 2013

Maximizing audibility and speech recognition with nonlinear frequency compression by estimating audible bandwidth

Ryan W. McCreery; Marc A. Brennan; Brenda Hoover; Judy G. Kopun; Patricia G. Stelmachowicz

Objective: Nonlinear frequency compression attempts to restore high-frequency audibility by lowering high-frequency input signals. Methods of determining the optimal parameters that maximize speech understanding have not been evaluated. The effect of maximizing the audible bandwidth on speech recognition for a group of listeners with normal hearing is described. Design: Nonword recognition was measured with 20 normal-hearing adults. Three audiograms with different high-frequency thresholds were used to create conditions with varying high-frequency audibility. Bandwidth was manipulated using three conditions for each audiogram: conventional processing, the manufacturer’s default compression parameters, and compression parameters that optimized bandwidth. Results: Nonlinear frequency compression optimized to provide the widest audible bandwidth improved nonword recognition compared with both conventional processing and the default parameters. Conclusions: These results showed that using the widest audible bandwidth maximized speech identification when using nonlinear frequency compression. Future studies should apply these methods to listeners with hearing loss to demonstrate efficacy in clinical populations.
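The abstract does not give the compression function used by the study hearing aids, so the following is only a generic sketch of how nonlinear frequency compression is commonly described: components above a start (cutoff) frequency are remapped toward the cutoff by a compression ratio so that high-frequency energy lands within the audible bandwidth. The function name and the cutoff/ratio values are hypothetical, not taken from the study.

```python
import numpy as np

def nfc_remap(freqs_hz, cutoff_hz=2000.0, ratio=2.0):
    """Generic frequency-compression map (hypothetical parameters):
    components at or below the cutoff are unchanged; components above it
    are pulled toward the cutoff on a log-frequency scale by `ratio`."""
    freqs_hz = np.asarray(freqs_hz, dtype=float)
    out = freqs_hz.copy()
    above = freqs_hz > cutoff_hz
    # Compress the log-frequency distance above the cutoff by the ratio.
    out[above] = cutoff_hz * (freqs_hz[above] / cutoff_hz) ** (1.0 / ratio)
    return out

# With a 2 kHz cutoff and a 2:1 ratio, energy at 8 kHz is relocated to 4 kHz,
# bringing it inside a narrower audible bandwidth.
print(nfc_remap([1000.0, 4000.0, 8000.0]))  # approx. [1000. 2828.4 4000.]
```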


Ear and Hearing | 2014

The influence of audibility on speech recognition with nonlinear frequency compression for children and adults with hearing loss.

Ryan W. McCreery; Joshua M. Alexander; Marc A. Brennan; Brenda Hoover; Judy G. Kopun; Patricia G. Stelmachowicz

Objective: The primary goal of nonlinear frequency compression (NFC) and other frequency-lowering strategies is to increase the audibility of high-frequency sounds that are not otherwise audible with conventional hearing aid (HA) processing due to the degree of hearing loss, limited HA bandwidth, or a combination of both factors. The aim of the present study was to compare estimates of speech audibility processed by NFC with improvements in speech recognition for a group of children and adults with high-frequency hearing loss. Design: Monosyllabic word recognition was measured in noise for 24 adults and 12 children with mild to severe sensorineural hearing loss. Stimuli were amplified based on each listener’s audiogram with conventional processing (CP) with amplitude compression or with NFC and presented under headphones using a software-based HA simulator. A modification of the speech intelligibility index (SII) was used to estimate audibility of information in frequency-lowered bands. The mean improvement in SII was compared with the mean improvement in speech recognition. Results: All but 2 listeners experienced improvements in speech recognition with NFC compared with CP, consistent with the small increase in audibility that was estimated using the modification of the SII. Children and adults had similar improvements in speech recognition with NFC. Conclusion: Word recognition with NFC was higher than CP for children and adults with mild to severe hearing loss. The average improvement in speech recognition with NFC (7%) was consistent with the modified SII, which indicated that listeners experienced an increase in audibility with NFC compared with CP. Further studies are necessary to determine whether changes in audibility with NFC are related to speech recognition with NFC for listeners with greater degrees of hearing loss, with a greater variety of compression settings, and using auditory training.
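The abstract refers to a modification of the speech intelligibility index (SII) for estimating audibility in frequency-lowered bands but does not spell out the modification. As a rough, hypothetical illustration of the underlying idea only, the sketch below computes a band-importance-weighted audibility value from per-band speech and threshold levels; the bands, weights, and levels are invented, and this is neither the ANSI SII nor the study's modified SII.

```python
import numpy as np

def weighted_audibility(speech_db, threshold_db, band_importance):
    """Toy SII-style index: per-band audibility (proportion of a 30 dB
    speech dynamic range above threshold) weighted by band importance.
    Illustrative only; not the ANSI S3.5 SII or the study's modified SII."""
    speech = np.asarray(speech_db, dtype=float)
    thresh = np.asarray(threshold_db, dtype=float)
    weight = np.asarray(band_importance, dtype=float)
    audibility = np.clip((speech - thresh) / 30.0, 0.0, 1.0)
    return float(np.sum(weight * audibility) / np.sum(weight))

# Hypothetical 4-band example: frequency lowering moves high-frequency speech
# cues into a band with a better threshold, nudging the weighted index upward.
thresholds = [20, 30, 45, 60]
weights = [0.2, 0.3, 0.3, 0.2]
cp_index  = weighted_audibility([55, 50, 40, 20], thresholds, weights)
nfc_index = weighted_audibility([55, 50, 55, 20], thresholds, weights)
print(round(cp_index, 2), round(nfc_index, 2))  # 0.4 0.5
```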


Ear and Hearing | 2009

Effects of audibility and multichannel wide dynamic range compression on consonant recognition for listeners with severe hearing loss.

Evelyn Davies-Venn; Pamela E. Souza; Marc A. Brennan; G. Christopher Stecker

Objective: This study examined the effects of multichannel wide-dynamic range compression (WDRC) amplification and stimulus audibility on consonant recognition and error patterns. Design: Listeners had either severe or mild to moderate sensorineural hearing loss. Each listener was monaurally fit with a wearable hearing aid using typical clinical procedures, frequency-gain parameters, and a hybrid of clinically prescribed compression ratios for desired sensation level (Scollie et al., 2005) and NAL-NL (Dillon, 1999). Consonant-vowel nonsense syllables were presented in soundfield at multiple input levels (50, 65, 80 dB SPL). Test conditions were four-channel fast-acting WDRC amplification and a control compression limiting (CL) amplification condition. Listeners identified the stimulus heard from choices presented on an on-screen display. A between-subject repeated measures design was used to evaluate consonant recognition and consonant confusion patterns. Results: Fast-acting WDRC provided a considerable audibility advantage at 50 dB SPL, especially for listeners with severe hearing loss. Listeners with mild to moderate hearing loss received less audibility improvement from the fast-acting WDRC amplification, for conversational and high level speech, when compared with listeners with severe hearing loss. Analysis of WDRC benefit scores revealed that listeners had slightly lower scores with fast-acting WDRC amplification (relative to CL) when WDRC provided minimal improvement in audibility. The negative effect was greater for listeners with mild to moderate hearing loss compared with their counterparts with severe hearing loss. Conclusions: All listeners, but particularly the severe loss group, benefited from fast-acting WDRC amplification for low-level speech. For conversational and higher speech levels (i.e., when WDRC does not confer a significant audibility advantage), fast-acting WDRC amplification seems to slightly degrade performance. Listeners’ consonant confusion patterns suggest that this negative effect may be partly due to fast-acting WDRC-induced distortions, which alter specific consonant features. In support of this view, audibility accounted for a greater percentage of the variance in listeners’ performance with CL amplification compared with fast-acting WDRC amplification.
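The two amplification conditions contrasted here can be illustrated with simple input/output rules. The sketch below is a hedged, generic example (the gains, compression threshold, ratio, and output limit are invented, not the study's fitted parameters): wide dynamic range compression delivers extra gain for low-level inputs and progressively less gain above a compression threshold, whereas compression limiting applies a fixed linear gain until an output ceiling is reached.

```python
def wdrc_output_db(input_db, gain_db=35.0, ct_db=45.0, ratio=2.0):
    """Toy fast-acting WDRC input/output rule (hypothetical parameters):
    linear gain up to the compression threshold, then 1 dB of output growth
    per `ratio` dB of input growth above it."""
    if input_db <= ct_db:
        return input_db + gain_db
    return ct_db + gain_db + (input_db - ct_db) / ratio

def cl_output_db(input_db, gain_db=25.0, max_output_db=105.0):
    """Toy compression limiting: linear gain capped at an output ceiling."""
    return min(input_db + gain_db, max_output_db)

# For a soft 50 dB SPL input the WDRC rule delivers more output (an audibility
# advantage); at conversational and high input levels the advantage disappears.
for level in (50, 65, 80):
    print(level, wdrc_output_db(level), cl_output_db(level))
# 50 82.5 75.0
# 65 90.0 90.0
# 80 97.5 105.0
```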


The Hearing Journal | 2002

Is functional gain really functional?

Patricia G. Stelmachowicz; Brenda Hoover; Dawna E. Lewis; Marc A. Brennan

For many years, functional gain was the only method available to quantify the in situ performance of hearing instruments. Technically, functional gain is defined as the difference in dB between aided and unaided sound-field thresholds as a function of frequency; typically, the goal has been to “shift” thresholds into the range of 20-25 dB HL. Today, probe-microphone technology can provide a more reliable and efficient method of quantifying the in situ performance of hearing instruments than a functional gain method. In current audiologic practice, however, aided sound-field thresholds alone often are used to determine the appropriateness of a hearing instrument fitting.
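The definition above is simple per-frequency arithmetic: functional gain is the unaided sound-field threshold minus the aided sound-field threshold, in dB. A minimal sketch with made-up example thresholds:

```python
# Functional gain (dB) = unaided sound-field threshold - aided sound-field
# threshold, per audiometric frequency. Threshold values below are
# hypothetical examples in dB HL, not data from the article.
unaided_db_hl = {500: 55, 1000: 60, 2000: 65, 4000: 70}
aided_db_hl   = {500: 25, 1000: 20, 2000: 25, 4000: 40}

functional_gain_db = {f: unaided_db_hl[f] - aided_db_hl[f] for f in unaided_db_hl}
print(functional_gain_db)  # {500: 30, 1000: 40, 2000: 40, 4000: 30}
```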


Journal of the American Academy of Audiology | 2014

Paired comparisons of nonlinear frequency compression, extended bandwidth, and restricted bandwidth hearing aid processing for children and adults with hearing loss.

Marc A. Brennan; Ryan W. McCreery; Judy G. Kopun; Brenda Hoover; Joshua M. Alexander; Dawna E. Lewis; Patricia G. Stelmachowicz

BACKGROUND Preference for speech and music processed with nonlinear frequency compression (NFC) and two controls (restricted bandwidth [RBW] and extended bandwidth [EBW] hearing aid processing) was examined in adults and children with hearing loss. PURPOSE The purpose of this study was to determine if stimulus type (music, sentences), age (children, adults), and degree of hearing loss influence listener preference for NFC, RBW, and EBW. RESEARCH DESIGN Design was a within-participant, quasi-experimental study. Using a round-robin procedure, participants listened to amplified stimuli that were (1) frequency lowered using NFC, (2) low-pass filtered at 5 kHz to simulate the RBW of conventional hearing aid processing, or (3) low-pass filtered at 11 kHz to simulate EBW amplification. The examiner and participants were blinded to the type of processing. Using a two-alternative forced-choice task, participants selected the preferred music or sentence passage. STUDY SAMPLE Participants included 16 children (ages 8-16 yr) and 16 adults (ages 19-65 yr) with mild to severe sensorineural hearing loss. INTERVENTION All participants listened to speech and music processed using a hearing aid simulator fit to the Desired Sensation Level algorithm v5.0a. RESULTS Children and adults did not differ in their preferences. For speech, participants preferred EBW to both NFC and RBW. Participants also preferred NFC to RBW. Preference was not related to the degree of hearing loss. For music, listeners did not show a preference. However, participants with greater hearing loss preferred NFC to RBW more than participants with less hearing loss. Conversely, participants with greater hearing loss were less likely to prefer EBW to RBW. CONCLUSIONS Both age groups preferred access to high-frequency sounds, as demonstrated by their preference for either the EBW or NFC conditions over the RBW condition. Preference for EBW can be limited for those with greater degrees of hearing loss, but participants with greater hearing loss may be more likely to prefer NFC. Further investigation using participants with more severe hearing loss may be warranted.


Ear and Hearing | 2017

Effect of Context and Hearing Loss on Time-Gated Word Recognition in Children

Dawna E. Lewis; Judy G. Kopun; Ryan W. McCreery; Marc A. Brennan; Kanae Nishi; Evan Cordrey; Patricia G. Stelmachowicz; Mary Pat Moeller

Objectives: The purpose of this study was to examine word recognition in children who are hard of hearing (CHH) and children with normal hearing (CNH) in response to time-gated words presented in high- versus low-predictability sentences (HP, LP), where semantic cues were manipulated. Findings inform our understanding of how CHH combine cognitive-linguistic and acoustic-phonetic cues to support spoken word recognition. It was hypothesized that both groups of children would be able to make use of linguistic cues provided by HP sentences to support word recognition. CHH were expected to require greater acoustic information (more gates) than CNH to correctly identify words in the LP condition. In addition, it was hypothesized that error patterns would differ across groups. Design: Sixteen CHH with mild to moderate hearing loss and 16 age-matched CNH participated (5 to 12 years). Test stimuli included 15 LP and 15 HP age-appropriate sentences. The final word of each sentence was divided into segments and recombined with the sentence frame to create series of sentences in which the final word was progressively longer by the gated increments. Stimuli were presented monaurally through headphones and children were asked to identify the target word at each successive gate. They also were asked to rate their confidence in their word choice using a five- or three-point scale. For CHH, the signals were processed through a hearing aid simulator. Standardized language measures were used to assess the contribution of linguistic skills. Results: Analysis of language measures revealed that the CNH and CHH performed within the average range on language abilities. Both groups correctly recognized a significantly higher percentage of words in the HP condition than in the LP condition. Although CHH performed comparably with CNH in terms of successfully recognizing the majority of words, differences were observed in the amount of acoustic-phonetic information needed to achieve accurate word recognition. CHH needed more gates than CNH to identify words in the LP condition. CNH were significantly lower in rating their confidence in the LP condition than in the HP condition. CHH, however, were not significantly different in confidence between the conditions. Error patterns for incorrect word responses across gates and predictability varied depending on hearing status. Conclusions: The results of this study suggest that CHH with age-appropriate language abilities took advantage of context cues in the HP sentences to guide word recognition in a manner similar to CNH. However, in the LP condition, they required more acoustic information (more gates) than CNH for word recognition. Differences in the structure of incorrect word responses and their nomination patterns across gates for CHH compared with their peers with NH suggest variations in how these groups use limited acoustic information to select word candidates.


Journal of the American Academy of Audiology | 2016

Stability of audiometric thresholds for children with hearing aids applying the American Academy of Audiology pediatric amplification guideline: Implications for safety

Ryan W. McCreery; Elizabeth A. Walker; Meredith Spratford; Benjamin Kirby; Jacob Oleson; Marc A. Brennan

BACKGROUND Children who wear hearing aids may be at risk for further damage to their hearing from overamplification. Previous research on amplification-induced hearing loss has included children using linear amplification or simulations of predicted threshold shifts based on nonlinear amplification formulae. A relationship between threshold shifts and the use of nonlinear hearing aids in children has not been empirically verified. PURPOSE The purpose of the study was to compare predicted threshold shifts from amplification to longitudinal behavioral thresholds in a large group of children who wear hearing aids to determine the likelihood of amplification-induced hearing loss. RESEARCH DESIGN An accelerated longitudinal design was used to collect behavioral threshold and amplification data prospectively. STUDY SAMPLE Two hundred thirteen children with mild-to-profound hearing loss who wore hearing aids were included in the analysis. DATA COLLECTION AND ANALYSIS Behavioral audiometric thresholds, hearing aid outputs, and hearing aid use data were collected for each participant across four study visits. Individual ear- and frequency-specific safety limits were derived based on the Modified Power Law to determine the level at which increased amplification could result in permanent threshold shifts. Behavioral thresholds were used to estimate which children would be above the safety limit at 500, 1000, 2000, and 4000 Hz using thresholds in dB HL and then in dB SPL in the ear canal. Changes in thresholds across visits were compared for children who were above and below the safety limits. RESULTS Behavioral thresholds decreased across study visits for all children, regardless of whether their amplification was above the safety limits. The magnitude of threshold change across time corresponded with changes in ear canal acoustics as measured by the real-ear-to-coupler difference. CONCLUSIONS Predictions of threshold changes due to amplification for children with hearing loss did not correspond with observed changes in threshold across 2-4 yr of monitoring amplification. Use of dB HL thresholds and predictions of hearing aid output to set the safety limit resulted in a larger number of children being classified as above the safety limit than when safety limits were based on dB SPL thresholds and measured hearing aid output. Children above the safety limit for the dB SPL criteria tended to be fit above prescriptive targets. Additional research should seek to explain how the Modified Power Law predictions of threshold shift overestimated risk for children who wear hearing aids.


Journal of the Acoustical Society of America | 2015

The influence of hearing-aid compression on forward-masked thresholds for adults with hearing loss

Marc A. Brennan; Ryan W. McCreery; Walt Jesteadt

This paper describes forward-masked thresholds for adults with hearing loss. Previous research has demonstrated that the loss of cochlear compression contributes to deficits in this measure of temporal resolution. Cochlear compression can be mimicked with fast-acting compression in which the normal dynamic range is mapped to the impaired dynamic range. To test the hypothesis that fast-acting compression will most closely approximate the normal ability to perceive forward-masked pure tones, forward-masked thresholds were measured for two groups of adults (normal hearing, hearing loss). Adults with normal hearing were tested without amplification. Adults with hearing loss were tested with three different compression speeds and two different prescriptive procedures using a hearing-aid simulator. The two prescriptive procedures differed in the extent to which the normal dynamic range was mapped onto the impaired dynamic range. When using a faster compression speed with the prescriptive procedure that best restored the lost dynamic range, forward-masked thresholds for the listeners with hearing loss approximated those observed for the listeners with normal hearing.
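The abstract describes prescriptions that differ in how much of the normal dynamic range is mapped onto the listener's reduced dynamic range. As a hedged illustration of that idea (not the study's actual prescriptive procedures), the compression ratio needed to map one range linearly onto another is simply the ratio of their widths in dB:

```python
def mapping_compression_ratio(normal_floor_db, normal_ceiling_db,
                              impaired_floor_db, impaired_ceiling_db):
    """Compression ratio that linearly maps the normal dynamic range (dB)
    onto an impaired (residual) dynamic range (dB). Example values only."""
    normal_range = normal_ceiling_db - normal_floor_db
    impaired_range = impaired_ceiling_db - impaired_floor_db
    return normal_range / impaired_range

# Hypothetical example: a 100 dB normal range squeezed into a 40 dB residual
# range (threshold at 60 dB HL, discomfort near 100 dB HL) implies a 2.5:1
# ratio; a prescription that maps less of the normal range uses a lower ratio.
print(mapping_compression_ratio(0, 100, 60, 100))  # 2.5
```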


Journal of the American Academy of Audiology | 2017

Listener performance with a novel hearing aid frequency lowering technique

Benjamin Kirby; Judy G. Kopun; Meredith Spratford; Clairissa M. Mollak; Marc A. Brennan; Ryan W. McCreery

Background: Sloping hearing loss imposes limits on audibility for high‐frequency sounds in many hearing aid users. Signal processing algorithms that shift high‐frequency sounds to lower frequencies have been introduced in hearing aids to address this challenge by improving audibility of high‐frequency sounds. Purpose: This study examined speech perception performance, listening effort, and subjective sound quality ratings with conventional hearing aid processing and a new frequency‐lowering signal processing strategy called frequency composition (FC) in adults and children. Research Design: Participants wore the study hearing aids in two signal processing conditions (conventional processing versus FC) at an initial laboratory visit and subsequently at home during two approximately six‐week long trials, with the order of conditions counterbalanced across individuals in a double‐blind paradigm. Study Sample: Children (N = 12, 7 females, mean age in years = 12.0, SD = 3.0) and adults (N = 12, 6 females, mean age in years = 56.2, SD = 17.6) with bilateral sensorineural hearing loss who were full‐time hearing aid users. Data Collection and Analyses: Individual performance with each type of processing was assessed using speech perception tasks, a measure of listening effort, and subjective sound quality surveys at an initial visit. At the conclusion of each subsequent at‐home trial, participants were retested in the laboratory. Linear mixed effects analyses were completed for each outcome measure with signal processing condition, age group, visit (prehome versus posthome trial), and measures of aided audibility as predictors. Results: Overall, there were few significant differences in speech perception, listening effort, or subjective sound quality between FC and conventional processing, effects of listener age, or longitudinal changes in performance. Listeners preferred FC to conventional processing on one of six subjective sound quality metrics. Better speech perception performance was consistently related to higher aided audibility. Conclusions: These results indicate that when high‐frequency speech sounds are made audible with conventional processing, speech recognition ability and listening effort are similar between conventional processing and FC. Despite the lack of benefit to speech perception, some listeners still preferred FC, suggesting that qualitative measures should be considered when evaluating candidacy for this signal processing strategy.


Journal of the Acoustical Society of America | 2015

Spectral resolution and speech recognition in noise for children with hearing loss

Ryan W. McCreery; Jenna M. Browning; Benjamin Kirby; Meredith Spratford; Marc A. Brennan

Better spectral resolution has been associated with higher speech recognition in noise in adults with hearing loss who use hearing aids and adults with cochlear implants. However, the role of signal audibility and age on this relationship has not been reported. The goal of this study was to evaluate the effect of aided audibility and spectral resolution on speech recognition in noise for a group of children with sensorineural hearing loss and a group of adults with hearing loss. Higher age, better aided audibility, and the ability to detect more ripples per octave in a spectral ripple discrimination task were associated with better sentence recognition in noise for children with hearing loss.
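Spectral ripple discrimination tasks of the kind mentioned here typically use broadband stimuli whose spectral envelope varies sinusoidally on a log-frequency axis, with density expressed in ripples per octave; listeners with better spectral resolution can discriminate higher ripple densities or phase-inverted ripples. The sketch below generates such an envelope as a generic illustration, not the study's specific stimuli.

```python
import numpy as np

def ripple_envelope_db(freqs_hz, ripples_per_octave=2.0, depth_db=20.0, phase_rad=0.0):
    """Spectral ripple envelope: sinusoidal level variation (dB) across
    log2-frequency at the given ripple density. Generic illustration only."""
    octaves = np.log2(np.asarray(freqs_hz, dtype=float))
    return (depth_db / 2.0) * np.sin(2.0 * np.pi * ripples_per_octave * octaves + phase_rad)

# A discrimination trial could compare this envelope with a phase-inverted
# version (phase_rad=np.pi); higher ripples_per_octave values demand finer
# spectral resolution to tell the two apart.
freqs = np.geomspace(250.0, 8000.0, num=8)
print(np.round(ripple_envelope_db(freqs), 1))
```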

Collaboration


Dive into Marc A. Brennan's collaboration.

Top Co-Authors

Benjamin Kirby

Illinois State University


Emily Buss

University of North Carolina at Chapel Hill
