Publication


Featured research published by Erin M. Picou.


Ear and Hearing | 2013

How hearing aids, background noise, and visual cues influence objective listening effort.

Erin M. Picou; Todd A. Ricketts; Benjamin W. Y. Hornsby

Objectives: The purpose of this article was to evaluate factors that influence the listening effort experienced when processing speech for people with hearing loss. Specifically, the change in listening effort resulting from introducing hearing aids, visual cues, and background noise was evaluated. An additional exploratory aim was to investigate the possible relationships between the magnitude of listening effort change and individual listeners’ working memory capacity, verbal processing speed, or lipreading skill. Design: Twenty-seven participants with bilateral sensorineural hearing loss were fitted with linear behind-the-ear hearing aids and tested using a dual-task paradigm designed to evaluate listening effort. The primary task was monosyllable word recognition and the secondary task was a visual reaction time task. The test conditions varied by hearing aids (unaided, aided), visual cues (auditory-only, auditory-visual), and background noise (present, absent). For all participants, the signal to noise ratio was set individually so that speech recognition performance in noise was approximately 60% in both the auditory-only and auditory-visual conditions. In addition to measures of listening effort, working memory capacity, verbal processing speed, and lipreading ability were measured using the Automated Operational Span Task, a Lexical Decision Task, and the Revised Shortened Utley Lipreading Test, respectively. Results: In general, the effects measured using the objective measure of listening effort were small (~10 msec). Results indicated that background noise increased listening effort, and hearing aids reduced listening effort, while visual cues did not influence listening effort. With regard to the individual variables, verbal processing speed was negatively correlated with hearing aid benefit for listening effort; faster processors were less likely to derive benefit. Working memory capacity, verbal processing speed, and lipreading ability were related to benefit from visual cues. No variables were related to changes in listening effort resulting from the addition of background noise. Conclusions: The results of this study suggest that, on the average, hearing aids can reduce objectively measured listening effort. Furthermore, people who are slow verbal processors are more likely to derive hearing aid benefit for listening effort, perhaps because hearing aids improve the auditory input. Although background noise increased objective listening effort, no listener characteristic predicted susceptibility to noise. With regard to visual cues, while there was no effect on average of providing visual cues, there were some listener characteristics that were related to changes in listening effort with vision. Although these relationships are exploratory, they do suggest that these inherent listener characteristics like working memory capacity, verbal processing speed, and lipreading ability may influence susceptibility to changes in listening effort and thus warrant further study.
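
The dual-task logic above treats slowing of the secondary (visual reaction-time) task as the objective index of listening effort. A minimal sketch of how such a measure might be computed is below; the condition labels, reaction times, and use of medians are illustrative assumptions, not the authors' exact analysis.

```python
# Minimal sketch of deriving an objective listening-effort measure from a
# dual-task paradigm: the primary task is word recognition, the secondary
# task is a visual reaction-time (RT) task, and effort is indexed by the
# slowing of the secondary task. All numbers are hypothetical.
import statistics

def listening_effort(rt_ms_baseline, rt_ms_dual):
    """Effort proxy: slowing of the secondary task under dual-task load (ms)."""
    return statistics.median(rt_ms_dual) - statistics.median(rt_ms_baseline)

# Hypothetical RTs (ms) for one listener in two conditions.
rt_single = [310, 325, 298, 340, 315]          # secondary task alone
rt_unaided_noise = [410, 455, 430, 470, 442]   # dual task, unaided in noise
rt_aided_noise = [395, 440, 418, 452, 430]     # dual task, aided in noise

effort_unaided = listening_effort(rt_single, rt_unaided_noise)
effort_aided = listening_effort(rt_single, rt_aided_noise)
print(f"Hearing-aid benefit for effort: {effort_unaided - effort_aided:.0f} ms")
```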


Ear and Hearing | 2011

Effects of degree and configuration of hearing loss on the contribution of high- and low-frequency speech information to bilateral speech understanding.

Benjamin W. Y. Hornsby; Earl E. Johnson; Erin M. Picou

Objectives: The purpose of this study was to examine the effects of degree and configuration of hearing loss on the use of, and benefit from, information in amplified high- and low-frequency speech presented in background noise. Design: Sixty-two adults with a wide range of high- and low-frequency sensorineural hearing loss (5 to 115+ dB HL) participated in the study. To examine the contribution of speech information in different frequency regions, speech understanding in noise was assessed in multiple low- and high-pass filter conditions, as well as in a band-pass (713 to 3534 Hz) and a wideband (143 to 8976 Hz) condition. To increase audibility over a wide frequency range, speech and noise were amplified based on each individual's hearing loss. A stepwise multiple linear regression approach was used to examine the contribution of several factors to (1) absolute performance in each filter condition and (2) the change in performance with the addition of amplified high- and low-frequency speech components. Results: Results from the regression analysis showed that degree of hearing loss was the strongest predictor of absolute performance for low- and high-pass filtered speech materials. In addition, configuration of hearing loss affected both absolute performance for severely low-pass filtered speech and benefit from extending high-frequency (3534 to 8976 Hz) bandwidth. Specifically, individuals with steeply sloping high-frequency losses made better use of low-pass filtered speech information than individuals with similar low-frequency thresholds but less high-frequency loss. In contrast, given similar high-frequency thresholds, individuals with flat hearing losses received more benefit from extending high-frequency bandwidth than individuals with more sloping losses. Conclusions: Consistent with previous work, benefit from speech information in a given frequency region generally decreases as degree of hearing loss in that frequency region increases. However, given a similar degree of loss, the configuration of hearing loss also affects the ability to use speech information in different frequency regions. Except for individuals with steeply sloping high-frequency losses, providing high-frequency amplification (3534 to 8976 Hz) had either a beneficial effect on, or did not significantly degrade, speech understanding. These findings highlight the importance of extended high-frequency amplification for listeners with a wide range of high-frequency hearing losses when seeking to maximize intelligibility.
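
The abstract describes a stepwise multiple linear regression relating audiometric factors to filtered-speech performance. A minimal forward-selection sketch under assumed variable names and an assumed p < 0.05 entry criterion is shown below; it is not the authors' exact procedure.

```python
# Illustrative forward-stepwise regression in the spirit of the analysis
# described above (predicting filter-condition performance from audiometric
# factors). Predictor names, simulated data, and the entry criterion are
# assumptions for illustration only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def forward_stepwise(y, X, p_enter=0.05):
    """Add predictors one at a time, keeping the one with the smallest
    p-value at each step, until no remaining predictor enters at p_enter."""
    selected = []
    remaining = list(X.columns)
    while remaining:
        pvals = {}
        for cand in remaining:
            model = sm.OLS(y, sm.add_constant(X[selected + [cand]])).fit()
            pvals[cand] = model.pvalues[cand]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= p_enter:
            break
        selected.append(best)
        remaining.remove(best)
    return sm.OLS(y, sm.add_constant(X[selected])).fit()

# Hypothetical data: 62 listeners, scores in one low-pass filter condition.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "lf_pta": rng.uniform(5, 80, 62),    # low-frequency pure-tone average (dB HL)
    "hf_pta": rng.uniform(20, 110, 62),  # high-frequency pure-tone average (dB HL)
    "slope": rng.uniform(0, 15, 62),     # audiogram slope (dB/octave)
})
y = 90 - 0.5 * X["lf_pta"] - 0.2 * X["hf_pta"] + rng.normal(0, 5, 62)
print(forward_stepwise(y, X).summary())
```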


Ear and Hearing | 2014

Potential benefits and limitations of three types of directional processing in hearing aids.

Erin M. Picou; Elizabeth Aspell; Todd A. Ricketts

Objectives: The purpose of this study was to evaluate hearing aid users’ performance on four tasks across three types of directional processing implemented by the same pair of commercially available behind-the-ear hearing aids. The three types of directional processing were mild, moderate, and strong. The mild processing aimed at emulating the directionality of an unoccluded ear. The moderate processing was a traditional adaptive directional type. The strong directional processing was a cue-preserving bilateral beamformer. The four tasks included gross localization, sentence recognition, listening effort, and subjective preference. Methods: Eighteen adults aged 48 to 83 years (mean = 69.1, σ = 10.9) with sensorineural hearing loss participated in this study. Each participant was fitted bilaterally and the three types of directional processing were matched for frequency response but varied by directionality (mild, moderate, and strong). Performance was always evaluated in background noise, which surrounded the listener. Sentence recognition was evaluated in low and moderate reverberation, while gross localization, listening effort, and subjective ratings were evaluated only in moderate reverberation. Sentence recognition and gross localization were evaluated using auditory-only and auditory–visual stimuli (talker’s face visible). The gross localization task included assessment of the ability to identify the origin of words, in addition to the ability to recall those words. Listening effort was evaluated using auditory–visual stimuli and a dual-task paradigm where the secondary task was a simple reaction time to a visual stimulus. Results: The results revealed similar gross localization abilities across moderate and strong directional processing when visual stimuli were present. Conversely, localization accuracy was significantly poorer with the strong directional processing than with moderate directional processing in auditory-only conditions, but only for signals presented at the greatest eccentricities (±60 degrees). Regardless of signal to noise ratio or degree of reverberation, the moderate and strong directional processing resulted in significantly better sentence recognition in noise than the mild directional processing. In addition, sentence recognition in moderate reverberation was significantly better with strong directional processing than with moderate directional processing (~4 to 12 rationalized arcsine units across conditions), regardless of signal to noise ratio. Although not statistically significant, the same trend was present in low reverberation. There were no significant differences in listening effort or subjective preference across directional processing. Conclusions: The strong directional processing, which was a cue-preserving bilateral beamformer, provided additional sentence recognition benefit in realistic listening situations. Furthermore, despite reducing the interaural differences, the authors measured no significant negative consequences on listening effort or subjective preference, although it is unknown whether differences might be found using more sensitive measures. In addition, gross localization was disrupted at large eccentricities if visual cues were not present. While further study is needed, these results support consideration of this cue-preserving, bilateral beamformer technology for patients who experience difficulty with speech recognition in noise, which is not adequately addressed by conventional directional hearing aid processing.


Ear and Hearing | 2014

The effect of changing the secondary task in dual-task paradigms for measuring listening effort.

Erin M. Picou; Todd A. Ricketts

Objectives: The purpose of this study was to evaluate the effect of changing the secondary task in dual-task paradigms that measure listening effort. Specifically, the effects of increasing the secondary task complexity or the depth of processing on a paradigm’s sensitivity to changes in listening effort were quantified in a series of two experiments. Specific factors investigated within each experiment were background noise and visual cues. Design: Participants in Experiment 1 were adults with normal hearing (mean age 23 years) and participants in Experiment 2 were adults with mild sloping to moderately severe sensorineural hearing loss (mean age 60.1 years). In both experiments, participants were tested using three dual-task paradigms. These paradigms had identical primary tasks, which were always monosyllable word recognition. The secondary tasks were all physical reaction time measures. The stimulus for the secondary task varied by paradigm and was (1) a simple visual probe, (2) a complex visual probe, or (3) the category of the word presented. In this way, the secondary tasks mainly varied from the simple paradigm by either complexity or depth of speech processing. Using all three paradigms, participants were tested in four conditions: (1) auditory-only stimuli in quiet, (2) auditory-only stimuli in noise, (3) auditory-visual stimuli in quiet, and (4) auditory-visual stimuli in noise. During auditory-visual conditions, the talker’s face was visible. Signal-to-noise ratios used during conditions with background noise were set individually so word recognition performance was matched in auditory-only and auditory-visual conditions. In noise, word recognition performance was approximately 80% and 65% for Experiments 1 and 2, respectively. Results: For both experiments, word recognition performance was stable across the three paradigms, confirming that none of the secondary tasks interfered with the primary task. In Experiment 1 (listeners with normal hearing), analysis of median reaction times revealed a significant main effect of background noise on listening effort only with the paradigm that required deep processing. Visual cues did not change listening effort as measured with any of the three dual-task paradigms. In Experiment 2 (listeners with hearing loss), analysis of median reaction times revealed expected significant effects of background noise using all three paradigms, but no significant effects of visual cues. Conclusions: None of the dual-task paradigms were sensitive to the effects of visual cues. Furthermore, changing the complexity of the secondary task did not change dual-task paradigm sensitivity to the effects of background noise on listening effort for either group of listeners. However, the paradigm whose secondary task involved deeper processing was more sensitive to the effects of background noise for both groups of listeners. While this paradigm differed from the others in several respects, depth of processing may be partially responsible for the increased sensitivity. Therefore, this paradigm may be a valuable tool for evaluating other factors that affect listening effort.
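
Both experiments set each listener's signal-to-noise ratio so that word recognition landed near a target level (about 80% and 65% correct). The abstract does not say how those SNRs were found; the sketch below shows one simple block-wise adaptive rule that could converge on such an SNR, with the step size, block length, and psychometric function all assumed for illustration.

```python
# One simple way to home in on an SNR that yields a target percent-correct
# score: run a short block of words, then raise the SNR if the score was
# below target and lower it otherwise, shrinking the step as you go.
# This rule is an assumption, not the authors' procedure.
import math
import random

def adapt_snr(run_block, target=0.80, snr_db=5.0, step_db=2.0, n_blocks=10):
    """run_block(snr_db) -> proportion correct for one block of words."""
    for _ in range(n_blocks):
        score = run_block(snr_db)
        snr_db += step_db if score < target else -step_db  # easier if below target
        step_db = max(step_db * 0.8, 0.5)                  # shrink the step over time
    return snr_db

def fake_listener(snr_db, midpoint=-2.0, slope=0.4):
    """Toy logistic psychometric function standing in for a real listener."""
    p = 1 / (1 + math.exp(-slope * (snr_db - midpoint)))
    return sum(random.random() < p for _ in range(25)) / 25

print(f"Estimated SNR for ~80% correct: {adapt_snr(fake_listener):.1f} dB")
```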


Journal of The American Academy of Audiology | 2013

Efficacy of hearing-aid based telephone strategies for listeners with moderate-to-severe hearing loss.

Erin M. Picou; Todd A. Ricketts

BACKGROUND Understanding speech over the telephone when listening in noisy environments may present a significant challenge for listeners with moderate-to-severe hearing loss. PURPOSE The purpose of this study was to compare speech recognition and subjective ratings across several hearing aid-based telephone listening strategies for individuals with moderate-to-severe sensorineural hearing loss. RESEARCH DESIGN Speech recognition and subjective ratings were evaluated for a simulated telephone signal. The strategies evaluated included acoustic telephone, unilateral telecoil, unilateral wireless streaming, and bilateral wireless streaming. Participants were seated in a noisy room for all evaluations. STUDY SAMPLE Eighteen adults, aged 49-88 yr, with moderate-to-severe sensorineural hearing loss participated. DATA COLLECTION AND ANALYSIS Speech recognition scores on the Connected Speech Test were converted to rationalized arcsine units and analyzed using analysis of variance testing and Tukey post hoc analyses. Subjective ratings of ease and comfort were also analyzed in this manner. RESULTS Speech recognition performance was poorest with acoustic coupling to the telephone and best with bilateral wireless routing. Telecoil coupling resulted in better speech recognition performance than acoustic coupling, but was significantly poorer than bilateral wireless routing. Furthermore, unilateral wireless routing and telecoil coupling generally led to similar speech recognition performance, except in lower-level background noise conditions, for which unilateral routing resulted in better performance than the telecoil. CONCLUSIONS For people with moderate-to-severe sensorineural hearing loss, acoustic telephone listening with a hearing aid may not lead to acceptable performance in noise. Although unilateral routing options (telecoil and wireless streaming) improved performance, speech recognition performance and subjective ratings of ease and comfort were best when bilateral wireless routing was used. These results suggest that wireless routing is a potentially beneficial telephone listening strategy for listeners with moderate-to-severe hearing loss who are fitted with limited venting if the telephone signal is routed to both ears. Unilateral wireless routing may provide similar benefits to traditional unilateral telecoil. However, the newer wireless systems may have the advantage for some listeners in that they do not include some of the positioning constraints associated with telecoil use.
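
Connected Speech Test scores here are analyzed in rationalized arcsine units (RAU). The sketch below implements the commonly cited Studebaker (1985) rationalized arcsine transform; the constants follow the usual form of that transform, but verify against the original paper before relying on them.

```python
# Rationalized arcsine transform (Studebaker, 1985): stabilizes the variance
# of percent-correct scores before ANOVA. Input is the number of items correct
# out of the number of items presented.
import math

def rau(num_correct, num_items):
    """Rationalized arcsine unit (RAU) for a proportion-correct score."""
    theta = (math.asin(math.sqrt(num_correct / (num_items + 1)))
             + math.asin(math.sqrt((num_correct + 1) / (num_items + 1))))
    return (146.0 / math.pi) * theta - 23.0

# Example: 40 of 50 key words correct (80%) maps to roughly 79 RAU.
print(f"{rau(40, 50):.1f} RAU")
```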


Ear and Hearing | 2011

Comparison of wireless and acoustic hearing aid-based telephone listening strategies.

Erin M. Picou; Todd A. Ricketts

Objective: The purpose of this study was to examine speech recognition through hearing aids for seven telephone listening conditions. Design: Speech recognition scores were measured for 20 participants in six wireless routing transmission conditions and one acoustic telephone condition. In the wireless conditions, the speech signal was delivered to both ears simultaneously (bilateral speech) or to one ear (unilateral speech). The effect of changing the noise level in the nontest ear during unilateral conditions was also examined. Participants were fitted with hearing aids using both nonoccluding and occluding dome ear tips. Participants were seated in a room with background noise present and speech was transmitted to the participants without additional noise. Results: There was no effect of changing the noise level in the nontest ear and no difference between unilateral wireless routing and acoustic telephone listening. For wireless transmission, bilateral presentation resulted in significantly better speech recognition than unilateral presentation. Bilateral wireless conditions allowed for significantly better recognition than the acoustic telephone condition for participants fitted with occluding ear tips only. Conclusion: Routing the signal to both hearing aids resulted in significantly better speech recognition than unilateral signal routing. Wireless signal routing was shown to be beneficial compared with acoustic telephone listening and in some conditions resulted in the best performance of all of the listening conditions evaluated. However, this advantage was only evident when the signal was routed to both ears and when hearing aid wearers were fitted with occluding domes. Therefore, it is expected that the benefits of this new wireless streaming technology over existing telephone coupling methods will be most evident clinically in hearing aid wearers who require more limited venting than is typically used in open canal fittings.


Ear and Hearing | 2016

The Effects of Noise and Reverberation on Listening Effort in Adults With Normal Hearing.

Erin M. Picou; Julia Gordon; Todd A. Ricketts

Objectives: The purpose of this study was to investigate the effects of background noise and reverberation on listening effort. Four specific research questions were addressed related to listening effort: (A) With comparable word recognition performance across levels of reverberation, what are the effects of noise and reverberation on listening effort? (B) What is the effect of background noise when reverberation time is constant? (C) What is the effect of increasing reverberation from low to moderate when signal to noise ratio is constant? (D) What is the effect of increasing reverberation from moderate to high when signal to noise ratio is constant? Design: Eighteen young adults (mean age 24.8 years) with normal hearing participated. A dual-task paradigm was used to simultaneously assess word recognition and listening effort. The primary task was monosyllable word recognition, and the secondary task was word categorization (press a button if the word heard was judged to be a noun). Participants were tested in quiet and in background noise in three levels of reverberation (T30 < 100 ms, T30 = 475 ms, and T30 = 834 ms). Signal to noise ratios used were chosen individually for each participant and varied by reverberation to address the specific research questions. Results: As expected, word recognition performance was negatively affected by both background noise and by increases in reverberation. Furthermore, analysis of mean response times revealed that background noise increased listening effort, regardless of degree of reverberation. Conversely, reverberation did not affect listening effort, regardless of whether word recognition performance was comparable or signal to noise ratio was constant. Conclusions: The finding that reverberation did not affect listening effort, even when word recognition performance was degraded, is inconsistent with current models of listening effort. The reasons for this surprising finding are unclear and warrant further investigation. However, the results of this study are limited in generalizability to young listeners with normal hearing and to the signal to noise ratios, loudspeaker to listener distance, and reverberation times evaluated. Other populations, like children, older listeners, and listeners with hearing loss, have been previously shown to be more sensitive to reverberation. Therefore, the effects of reverberation for these vulnerable populations also warrant further investigation.
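
Reverberation in this study is characterized by T30 (< 100 ms, 475 ms, and 834 ms). As background, the sketch below shows the conventional way T30 is estimated from a measured room impulse response, using Schroeder backward integration and a line fit over the -5 to -35 dB portion of the decay; it is illustrative and not tied to the study's own measurement procedure.

```python
# Estimate T30 from a room impulse response: build the Schroeder energy decay
# curve, fit a line to the -5 to -35 dB region, and extrapolate to a 60 dB decay.
import numpy as np

def t30_from_impulse_response(h, fs):
    """Estimate T30 (seconds) from impulse response h sampled at fs Hz."""
    energy = h.astype(float) ** 2
    edc = np.cumsum(energy[::-1])[::-1]          # Schroeder energy decay curve
    edc_db = 10 * np.log10(edc / edc[0])
    t = np.arange(len(h)) / fs
    mask = (edc_db <= -5) & (edc_db >= -35)      # fit the -5 to -35 dB range
    slope, intercept = np.polyfit(t[mask], edc_db[mask], 1)
    return -60.0 / slope                         # extrapolate to a 60 dB decay

# Toy exponentially decaying noise standing in for a measured impulse response.
fs = 16000
t = np.arange(int(0.8 * fs)) / fs
rng = np.random.default_rng(1)
h = rng.normal(size=t.size) * np.exp(-t / 0.1)   # ~0.1 s amplitude decay constant
print(f"Estimated T30: {t30_from_impulse_response(h, fs) * 1000:.0f} ms")
```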


International Journal of Audiology | 2015

Evaluation of the effects of nonlinear frequency compression on speech recognition and sound quality for adults with mild to moderate hearing loss

Erin M. Picou; Steven C. Marcrum; Todd A. Ricketts

Objective: Although nonlinear frequency compression (NFC) can potentially improve audibility for listeners with considerable high-frequency hearing loss, the effects of implementing it for listeners with moderate high-frequency hearing loss are unclear. The purpose of this study was to investigate the effects of activating NFC for listeners who are not traditionally considered candidates for this technology. Design: Participants wore study hearing aids with NFC activated for a 3–4 week trial period. After the trial period, they were tested with NFC and with conventional processing on measures of consonant discrimination threshold in quiet, consonant recognition in quiet, sentence recognition in noise, and acceptability of the sound quality of speech and music. Study sample: Seventeen adult listeners with symmetrical, mild to moderate sensorineural hearing loss participated. Better-ear high-frequency pure-tone averages (4, 6, and 8 kHz) were 60 dB HL or better. Results: Activating NFC resulted in lower (better) thresholds for discrimination of /s/, whose spectral center was 9 kHz. There were no other significant effects of NFC compared to conventional processing. Conclusion: These data suggest that the benefits, and detriments, of activating NFC may be limited for this population.
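
For readers unfamiliar with NFC, the toy mapping below illustrates one common log-domain formulation in which frequencies above a cut-off are compressed by a fixed ratio. Commercial implementations are proprietary and differ in detail, so the cut-off and compression ratio here are purely illustrative.

```python
# Toy nonlinear frequency compression mapping: input frequencies below the
# cut-off pass through unchanged; those above it are compressed on a log
# scale by a fixed ratio. The 4 kHz cut-off and 2:1 ratio are illustrative.
def nfc_map(f_in_hz, cutoff_hz=4000.0, ratio=2.0):
    """Map an input frequency to its NFC output frequency."""
    if f_in_hz <= cutoff_hz:
        return f_in_hz                                  # below cut-off: unchanged
    return cutoff_hz * (f_in_hz / cutoff_hz) ** (1.0 / ratio)

# With these illustrative settings, the 9 kHz spectral peak of /s/ mentioned
# above would land near 6 kHz, potentially inside an aided audible range.
for f in (2000, 4000, 6000, 9000):
    print(f"{f} Hz -> {nfc_map(f):.0f} Hz")
```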


International Journal of Audiology | 2014

Increasing motivation changes subjective reports of listening effort and choice of coping strategy

Erin M. Picou; Todd A. Ricketts

Objective: The purpose of this project was to examine the effect of changing motivation on subjective ratings of listening effort and on the likelihood that a listener chooses either a controlling or an avoidance coping strategy. Design: Two experiments were conducted, one with auditory-only (AO) and one with auditory-visual (AV) stimuli, both using the same speech recognition in noise materials. Four signal-to-noise ratios (SNRs) were used, two in each experiment. The two SNRs targeted 80% and 50% correct performance. Motivation was manipulated by either having participants listen carefully to the speech (low motivation), or listen carefully to the speech and then answer quiz questions about the speech (high motivation). Study sample: Sixteen participants with normal hearing participated in each experiment. Eight randomly selected participants participated in both. Results: Using AO and AV stimuli, motivation generally increased subjective ratings of listening effort and tiredness. In addition, using auditory-visual stimuli, motivation generally increased listeners’ willingness to do something to improve the situation, and decreased their willingness to avoid the situation. Conclusions: These results suggest a listener’s mental state may influence listening effort and choice of coping strategy.


Journal of Speech Language and Hearing Research | 2017

The Effects of Directional Processing on Objective and Subjective Listening Effort

Erin M. Picou; Travis M. Moore; Todd A. Ricketts

Purpose: The purposes of this investigation were (a) to evaluate the effects of hearing aid directional processing on subjective and objective listening effort and (b) to investigate the potential relationships between subjective and objective measures of effort. Method: Sixteen adults with mild to severe hearing loss were tested with study hearing aids programmed with 3 settings: omnidirectional, fixed directional, and bilateral beamformer. A dual-task paradigm and subjective ratings were used to assess objective and subjective listening effort, respectively, in 2 signal-to-noise ratios. Testing occurred in rooms with either low or moderate reverberation. Results: Directional processing improved subjective and objective listening effort, although benefit for objective effort was found only in moderate reverberation. Subjective reports of work and tiredness were more highly correlated with word recognition performance than with objective listening effort. However, subjective ratings about control were significantly correlated with objective listening effort. Conclusions: Directional microphone technology in hearing aids has the potential to improve listening effort in moderately reverberant environments. In addition, subjective questions that probe a listener’s desire to exercise control may be a viable method for eliciting ratings that are significantly related to objective listening effort.
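
A small sketch of the kind of across-listener correlation analysis described in the Results is given below; the simulated ratings for the 16 listeners and the use of Pearson correlations are assumptions for illustration.

```python
# Relate subjective effort ratings to the objective (dual-task reaction-time)
# measure across listeners. The data are simulated: the "control" rating is
# built to track the objective measure, the "tiredness" rating is not.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
objective_effort_ms = rng.normal(40, 15, 16)                 # RT cost for 16 listeners
rating_control = 0.05 * objective_effort_ms + rng.normal(0, 1, 16)
rating_tiredness = rng.normal(5, 2, 16)                      # unrelated by design

for name, rating in [("control", rating_control), ("tiredness", rating_tiredness)]:
    r, p = pearsonr(objective_effort_ms, rating)
    print(f"{name}: r = {r:.2f}, p = {p:.3f}")
```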

Collaboration


Top co-authors of Erin M. Picou:

Earl E. Johnson (East Tennessee State University)
Jill M. Gruenwald (Vanderbilt University Medical Center)
Lauren M. Charles (University of North Carolina at Chapel Hill)
Travis M. Moore (Vanderbilt University Medical Center)