Publications


Featured research published by Benjamin W. Y. Hornsby.


Ear and Hearing | 2013

The effects of hearing aid use on listening effort and mental fatigue associated with sustained speech processing demands.

Benjamin W. Y. Hornsby

Objectives: To maintain optimal understanding, persons with sensorineural hearing loss (SNHL) often report a need for increased attention, concentration, and “listening effort” compared with persons without hearing loss. It is generally assumed that this increased effort is related to subjective reports of mental fatigue in persons with hearing loss. Although the benefits of hearing aids for improving intelligibility are well documented, their impact on listening effort and mental fatigue is less clear. This study used subjective and objective measures to examine the effects of hearing aid use and advanced hearing aid features on listening effort and mental fatigue in adults with SNHL. Design: Sixteen adults (aged 47–69 years) with mild to severe sloping SNHL participated. A dual-task paradigm assessed word recognition, word recall, and visual reaction times (RTs) to objectively quantify listening effort and fatigue. Mental fatigue was operationally defined as a decrement in performance over the duration of the experiment (approximately 1 hr). Participants were fitted with study hearing aids and tested unaided and in two aided conditions (omnidirectional and with directional processing and digital noise reduction active). Subjective ratings of listening effort experienced during the day and ratings of fatigue and attentiveness immediately before and after the dual-task were also obtained. Results: Word recall was better and dual-task RTs were significantly faster in the aided compared with unaided conditions, suggesting a decrease in listening effort when listening aided. Word recognition and recall in unaided and aided conditions remained relatively stable over the duration of the dual-task, suggesting these processes were resistant to mental fatigue. In contrast, dual-task RTs systematically increased over the duration of the speech task when listening unaided, consistent with the development of mental fatigue.
However, dual-task RTs remained stable over time in both aided conditions, suggesting that hearing aid use reduced susceptibility to mental fatigue. Subjective ratings of fatigue and attentiveness also increased significantly after completion of the dual-task; however, no differences between unaided and aided subjective ratings were observed. Correlation analyses between subjective and objective measures of listening effort and mental fatigue showed no strong or consistent relationship. Likewise, subject variables such as age and degree of hearing loss showed no strong or consistent relationship to either subjective or objective measures of listening effort or mental fatigue. Conclusions: Results from subjective and select objective measures suggest sustained speech-processing demands can lead to mental fatigue in persons with hearing loss. It is important to note that the use of clinically fit hearing aids may reduce listening effort and susceptibility to mental fatigue associated with sustained speech-processing demands. The present study design did not reveal additional benefits, in terms of reduced listening effort or fatigue, from use of directional processing and digital noise-reduction algorithms. However, experimental design limitations suggest further work in this area is needed. Finally, subjective and objective measures of listening effort and mental fatigue due to sustained speech-processing demands were not strongly associated, suggesting that these measures may assess different aspects of listening effort and mental fatigue.
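The fatigue metric in this abstract, a performance decrement over the session, can be sketched as the least-squares slope of reaction times across trial blocks. This is an illustrative computation only, not the authors' analysis; the RT values below are invented:

```python
def rt_fatigue_slope(rts_ms):
    """Least-squares slope of reaction time (ms) across successive trial
    blocks; a positive slope (RTs lengthening over the session) is read
    as a behavioral marker of mental fatigue."""
    n = len(rts_ms)
    mean_x = (n - 1) / 2
    mean_y = sum(rts_ms) / n
    cov = sum((i - mean_x) * (y - mean_y) for i, y in enumerate(rts_ms))
    var = sum((i - mean_x) ** 2 for i in range(n))
    return cov / var

# Hypothetical block-averaged RTs over an hour-long session:
unaided = [520, 535, 548, 566, 580]  # RTs drift upward: positive slope
aided = [505, 507, 504, 508, 506]    # RTs stay flat: slope near zero
```

A stable (near-zero) slope in the aided conditions alongside a positive unaided slope mirrors the pattern the abstract reports.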


Ear and Hearing | 2016

Hearing impairment and cognitive energy: the Framework for Understanding Effortful Listening (FUEL)

M. Kathleen Pichora-Fuller; Sophia E. Kramer; Mark A. Eckert; Brent Edwards; Benjamin W. Y. Hornsby; Larry E. Humes; Ulrike Lemke; Thomas Lunner; Mohan Matthen; Carol L. Mackersie; Graham Naylor; Natalie A. Phillips; Michael Richter; Mary Rudner; Mitchell S. Sommers; Kelly L. Tremblay; Arthur Wingfield

The Fifth Eriksholm Workshop on “Hearing Impairment and Cognitive Energy” was convened to develop a consensus among interdisciplinary experts about what is known on the topic, gaps in knowledge, the use of terminology, priorities for future research, and implications for practice. The general term cognitive energy was chosen to facilitate the broadest possible discussion of the topic. It goes back to Titchener (1908) who described the effects of attention on perception; he used the term psychic energy for the notion that limited mental resources can be flexibly allocated among perceptual and mental activities. The workshop focused on three main areas: (1) theories, models, concepts, definitions, and frameworks; (2) methods and measures; and (3) knowledge translation. We defined effort as the deliberate allocation of mental resources to overcome obstacles in goal pursuit when carrying out a task, with listening effort applying more specifically when tasks involve listening. We adapted Kahneman’s seminal (1973) Capacity Model of Attention to listening and proposed a heuristically useful Framework for Understanding Effortful Listening (FUEL). Our FUEL incorporates the well-known relationship between cognitive demand and the supply of cognitive capacity that is the foundation of cognitive theories of attention. Our FUEL also incorporates a motivation dimension based on complementary theories of motivational intensity, adaptive gain control, and optimal performance, fatigue, and pleasure. Using a three-dimensional illustration, we highlight how listening effort depends not only on hearing difficulties and task demands but also on the listener’s motivation to expend mental effort in the challenging situations of everyday life.


Trends in Amplification | 2006

The Effects of Digital Noise Reduction on the Acceptance of Background Noise

H. Gustav Mueller; Jennifer Weber; Benjamin W. Y. Hornsby

Modern hearing aids commonly employ digital noise reduction (DNR) algorithms. The potential benefit of these algorithms is to provide improved speech understanding in noise or, at the least, to provide relaxed listening or increased ease of listening. In this study, 22 adults were fitted with 16-channel wide-dynamic-range compression hearing aids containing DNR processing. The DNR includes both modulation-based and Wiener-filter-type algorithms working simultaneously. Both speech intelligibility and acceptable noise level (ANL) were assessed using the Hearing in Noise Test (HINT) with DNR on and DNR off. The ANL was also assessed without hearing aids. The results showed a significant mean improvement for the ANL (4.2 dB) for the DNR-on condition when compared to the DNR-off condition. Moreover, there was a significant correlation between the magnitude of ANL improvement (relative to DNR on) and the DNR-off ANL. There was no significant mean improvement for the HINT for the DNR-on condition, and on an individual basis, the HINT score did not significantly correlate with either aided ANL (DNR on or DNR off). These findings suggest that, at least within the constraints of the DNR algorithms and test conditions employed in this study, DNR can significantly improve the clinically measured ANL, which may result in improved ease of listening for speech-in-noise situations.
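The ANL measure used here is conventionally defined as the listener's most comfortable listening level (MCL) minus the highest background noise level (BNL) the listener will accept, so a lower ANL means more noise tolerated. A minimal sketch; the dB values below are hypothetical, not data from the study:

```python
def acceptable_noise_level(mcl_db, bnl_db):
    """ANL in dB: most comfortable listening level minus the highest
    accepted background noise level. Lower ANL = more noise tolerated."""
    return mcl_db - bnl_db

# Hypothetical levels for one listener:
anl_dnr_off = acceptable_noise_level(63.0, 51.0)  # DNR off
anl_dnr_on = acceptable_noise_level(63.0, 55.0)   # DNR on: more noise accepted
improvement = anl_dnr_off - anl_dnr_on            # close to the 4.2 dB mean reported
```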


Journal of the Acoustical Society of America | 2003

Auditory spatial resolution in horizontal, vertical, and diagonal planes

D. Wesley Grantham; Benjamin W. Y. Hornsby; Eric A. Erpenbeck

Minimum audible angle (MAA) and minimum audible movement angle (MAMA) thresholds were measured for stimuli in horizontal, vertical, and diagonal (60 degrees) planes. A pseudovirtual technique was employed in which signals were recorded through KEMAR's ears and played back to subjects through insert earphones. Thresholds were obtained for wideband, high-pass, and low-pass noises. Only 6 of 20 subjects obtained wideband vertical-plane MAAs less than 10 degrees, and only these 6 subjects were retained for the complete study. For all three filter conditions, thresholds were lowest in the horizontal plane, slightly (but significantly) higher in the diagonal plane, and highest in the vertical plane. These results were similar in magnitude and pattern to those reported by Perrott and Saberi [J. Acoust. Soc. Am. 87, 1728-1731 (1990)] and Saberi and Perrott [J. Acoust. Soc. Am. 88, 2639-2644 (1990)], except that these investigators generally found that thresholds for diagonal planes were as good as those for the horizontal plane. The present results are consistent with the hypothesis that diagonal-plane performance is based on independent contributions from a horizontal-plane system (sensitive to interaural differences) and a vertical-plane system (sensitive to pinna-based spectral changes). Measurements of the stimuli recorded through KEMAR indicated that sources presented from diagonal planes can produce larger interaural level differences (ILDs) in certain frequency regions than would be expected based on the horizontal projection of the trajectory. Such frequency-specific ILD cues may underlie the very good performance reported in previous studies for diagonal spatial resolution. Subjects in the present study could apparently not take advantage of these cues in the diagonal-plane condition, possibly because they did not externalize the images to their appropriate positions in space or possibly because of the absence of a patterned visual field.


Ear and Hearing | 2003

Distance and reverberation effects on directional benefit.

Todd A. Ricketts; Benjamin W. Y. Hornsby

Objective: Understanding the potential benefits and limitations of directional hearing aids across a wide range of listening environments is important when counseling persons with hearing loss regarding realistic expectations for these devices. The purpose of this study was to examine the impact of speaker-to-listener distance on directional benefit in two reverberant environments, in which the dominant noise sources were placed close to the hearing aid wearer. In addition, speech transmission index (STI) measures made in the test environments were compared to measured sentence recognition to determine if performance was predictable across changes in distance, reverberation, and microphone mode. Design: The aided sentence recognition, in noise, of fourteen adult participants with symmetrical sensorineural hearing impairment was measured in six environmental conditions in both directional and omnidirectional modes. A single room containing four uncorrelated noise sources served as the test environment. The room was modified to exhibit either low (RT60 = 0.3 sec) or moderate (RT60 = 0.9 sec) levels of reverberation. Sentence recognition was measured in both reverberant environments at three different speech loudspeaker-to-listener distances (1.2 m, 2.4 m, and 4.8 m). STI measures also were made in each of the 12 listening conditions (2 microphone modes × 3 distances × 2 reverberation environments). Results: A decrease in directional benefit was measured with increasing distance in the moderate reverberation condition. Although reduced, directional benefit was still present in the moderately reverberant environment at the farthest speaker-to-listener distance tested in this experiment. A similar decrease with increasing speaker-to-listener distance was not measured in the low reverberation condition.
The pattern of average sentence recognition results across varying distances and two different reverberation times agreed with the pattern of STI values measured under the same conditions. Conclusions: Although these data support increased directional benefit in noise for reduced speaker-to-listener distance, some benefit was still obtained by listeners when listening beyond the “effective” critical distance under conditions of low (300 msec) to moderate (900 msec) reverberation. It is assumed that the directional benefit was due to the reduction of the direct sound energy from the noise sources near the listener. The use of aided STI values for the prediction of average word recognition across listening conditions that differ in reverberation, microphone directivity, and speaker-to-listener distance also was supported.


Ear and Hearing | 2013

How hearing aids, background noise, and visual cues influence objective listening effort.

Erin M. Picou; Todd A. Ricketts; Benjamin W. Y. Hornsby

Objectives: The purpose of this article was to evaluate factors that influence the listening effort experienced when processing speech for people with hearing loss. Specifically, the change in listening effort resulting from introducing hearing aids, visual cues, and background noise was evaluated. An additional exploratory aim was to investigate the possible relationships between the magnitude of listening effort change and individual listeners’ working memory capacity, verbal processing speed, or lipreading skill. Design: Twenty-seven participants with bilateral sensorineural hearing loss were fitted with linear behind-the-ear hearing aids and tested using a dual-task paradigm designed to evaluate listening effort. The primary task was monosyllable word recognition and the secondary task was a visual reaction time task. The test conditions varied by hearing aids (unaided, aided), visual cues (auditory-only, auditory-visual), and background noise (present, absent). For all participants, the signal to noise ratio was set individually so that speech recognition performance in noise was approximately 60% in both the auditory-only and auditory-visual conditions. In addition to measures of listening effort, working memory capacity, verbal processing speed, and lipreading ability were measured using the Automated Operational Span Task, a Lexical Decision Task, and the Revised Shortened Utley Lipreading Test, respectively. Results: In general, the effects measured using the objective measure of listening effort were small (~10 msec). Results indicated that background noise increased listening effort, and hearing aids reduced listening effort, while visual cues did not influence listening effort. With regard to the individual variables, verbal processing speed was negatively correlated with hearing aid benefit for listening effort; faster processors were less likely to derive benefit. 
Working memory capacity, verbal processing speed, and lipreading ability were related to benefit from visual cues. No variables were related to changes in listening effort resulting from the addition of background noise. Conclusions: The results of this study suggest that, on the average, hearing aids can reduce objectively measured listening effort. Furthermore, people who are slow verbal processors are more likely to derive hearing aid benefit for listening effort, perhaps because hearing aids improve the auditory input. Although background noise increased objective listening effort, no listener characteristic predicted susceptibility to noise. With regard to visual cues, while there was no effect on average of providing visual cues, there were some listener characteristics that were related to changes in listening effort with vision. Although these relationships are exploratory, they do suggest that these inherent listener characteristics like working memory capacity, verbal processing speed, and lipreading ability may influence susceptibility to changes in listening effort and thus warrant further study.


Ear and Hearing | 2014

Commentary: listening can be exhausting--fatigue in children and adults with hearing loss.

Fred H. Bess; Benjamin W. Y. Hornsby

Anecdotal reports of fatigue after sustained speech-processing demands are common among adults with hearing loss; however, systematic research examining hearing loss-related fatigue is limited, particularly with regard to fatigue among children with hearing loss (CHL). Many audiologists, educators, and parents have long suspected that CHL experience stress and fatigue as a result of the difficult listening demands they encounter throughout the day at school. Recent research in this area provides support for these intuitive suggestions. In this article, the authors provide a framework for understanding the construct of fatigue and its relation to hearing loss, particularly in children. Although empirical evidence is limited, preliminary data from recent studies suggest that some CHL experience significant fatigue, and such fatigue has the potential to compromise a child's performance in the classroom. In this commentary, the authors discuss several aspects of fatigue including its importance, definitions, prevalence, consequences, and potential linkage to increased listening effort in persons with hearing loss. The authors also provide a brief synopsis of subjective and objective methods to quantify listening effort and fatigue. Finally, the authors suggest a common-sense approach for identification of fatigue in CHL and briefly comment on the use of amplification as a management strategy for reducing hearing-related fatigue.


Ear and Hearing | 2011

Effects of degree and configuration of hearing loss on the contribution of high- and low-frequency speech information to bilateral speech understanding.

Benjamin W. Y. Hornsby; Earl E. Johnson; Erin M. Picou

Objectives: The purpose of this study was to examine the effects of degree and configuration of hearing loss on the use of, and benefit from, information in amplified high- and low-frequency speech presented in background noise. Design: Sixty-two adults with a wide range of high- and low-frequency sensorineural hearing loss (5 to 115+ dB HL) participated in the study. To examine the contribution of speech information in different frequency regions, speech understanding in noise was assessed in multiple low- and high-pass filter conditions, as well as a band-pass (713 to 3534 Hz) and wideband (143 to 8976 Hz) condition. To increase audibility over a wide frequency range, speech and noise were amplified based on each individual's hearing loss. A stepwise multiple linear regression approach was used to examine the contribution of several factors to (1) absolute performance in each filter condition and (2) the change in performance with the addition of amplified high- and low-frequency speech components. Results: Results from the regression analysis showed that degree of hearing loss was the strongest predictor of absolute performance for low- and high-pass filtered speech materials. In addition, configuration of hearing loss affected both absolute performance for severely low-pass filtered speech and benefit from extending high-frequency (3534 to 8976 Hz) bandwidth. Specifically, individuals with steeply sloping high-frequency losses made better use of low-pass filtered speech information than individuals with similar low-frequency thresholds but less high-frequency loss. In contrast, given similar high-frequency thresholds, individuals with flat hearing losses received more benefit from extending high-frequency bandwidth than individuals with more sloping losses. Conclusions: Consistent with previous work, benefit from speech information in a given frequency region generally decreases as degree of hearing loss in that frequency region increases.
However, given a similar degree of loss, the configuration of hearing loss also affects the ability to use speech information in different frequency regions. Except for individuals with steeply sloping high-frequency losses, providing high-frequency amplification (3534 to 8976 Hz) had either a beneficial effect on, or did not significantly degrade, speech understanding. These findings highlight the importance of extended high-frequency amplification for listeners with a wide range of high-frequency hearing losses, when seeking to maximize intelligibility.


Journal of the Acoustical Society of America | 2001

The effects of compression ratio, signal-to-noise ratio, and level on speech recognition in normal-hearing listeners.

Benjamin W. Y. Hornsby; Todd A. Ricketts

Previous research has demonstrated reduced speech recognition when speech is presented at higher-than-normal levels (e.g., above conversational speech levels), particularly in the presence of speech-shaped background noise. Persons with hearing loss frequently listen to speech-in-noise at these levels through hearing aids, which incorporate multiple-channel, wide dynamic range compression. This study examined the interactive effects of signal-to-noise ratio (SNR), speech presentation level, and compression ratio on consonant recognition in noise. Nine subjects with normal hearing identified CV and VC nonsense syllables in a speech-shaped noise at two SNRs (0 and +6 dB), three presentation levels (65, 80, and 95 dB SPL) and four compression ratios (1:1, 2:1, 4:1, and 6:1). Stimuli were processed through a simulated three-channel, fast-acting, wide dynamic range compression hearing aid. Consonant recognition performance decreased as compression ratio increased and presentation level increased. Interaction effects were noted between SNR and compression ratio, as well as between presentation level and compression ratio. Performance decrements due to increases in compression ratio were larger at the better (+6 dB) SNR and at the lowest (65 dB SPL) presentation level. At higher levels (95 dB SPL), such as those experienced by persons with hearing loss, increasing compression ratio did not significantly affect speech intelligibility.
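The compression ratios in this study describe the static input/output function of a WDRC channel: above the compression threshold, an input increase of CR dB yields only a 1 dB output increase. The study simulated a three-channel fast-acting aid; the single-channel static curve below, with an invented threshold, is only a sketch of the compression-ratio concept:

```python
def wdrc_output_db(input_db, threshold_db=45.0, ratio=2.0):
    """Static I/O curve of one WDRC channel: linear (1:1) below the
    compression threshold, compressed by `ratio` above it."""
    if input_db <= threshold_db:
        return input_db
    return threshold_db + (input_db - threshold_db) / ratio

# At 6:1, a 30 dB rise above threshold maps to only a 5 dB output rise,
# flattening the level contrasts that help cue consonant identity.
```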


Ear and Hearing | 2007

Effects of noise source configuration on directional benefit using symmetric and asymmetric directional hearing aid fittings.

Benjamin W. Y. Hornsby; Todd A. Ricketts

Objective: The benefits of directional processing in hearing aids are well documented in laboratory settings. Likewise, substantial research has shown that speech understanding is optimized in many settings when listening binaurally. Although these findings suggest that speech understanding would be optimized by using bilateral directional technology (e.g., a symmetric directional fitting), recent research suggests similar performance with an asymmetrical fitting (directional in one ear and omnidirectional in the other). The purpose of this study was to explore the benefits of using bilateral directional processing, as opposed to an asymmetric fitting, in environments where the primary speech and noise sources come from different directions. Design: Sixteen older adults with mild-to-severe sensorineural hearing loss (SNHL) were recruited for the study. Aided sentence recognition using the Hearing in Noise Test (HINT) was assessed in a moderately reverberant room, in three different speech and noise conditions in which the locations of the speech and noise sources were varied. In each speech and noise condition, speech understanding was assessed in four different microphone modes (bilateral omnidirectional mode; bilateral directional mode; directional mode left and omnidirectional mode right; omnidirectional mode left and directional mode right). The benefits and limitations of bilateral directional processing were assessed by comparing HINT thresholds across the various symmetric and asymmetric microphone processing conditions. Results: Study results revealed directional benefit varied based on microphone mode symmetry (i.e., symmetric versus asymmetric directional processing) and the specific speech and noise configuration. 
In noise configurations in which the speech was located in front of the listener and the noise was located to the side of or surrounded the listener, maximum directional benefit (approximately 3.3 dB) was observed with the symmetric directional fitting. HINT thresholds obtained when using bilateral directional processing were approximately 1.4 dB better than when an asymmetric fitting (directional processing in only one ear) was used. When speech was located to the side of the listener, the use of directional processing on the ear near the speech significantly reduced speech understanding. Conclusions: Although directional benefit is present in asymmetric fittings, the use of bilateral directional processing optimizes speech understanding in noise conditions in which the speech comes from in front of the listener and the noise sources are located to the side of or surround the listener. In situations in which the speech is located to the side of the listener, the use of directional processing on the ear adjacent to the speaker is likely to reduce speech audibility and thus degrade speech understanding.
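Directional benefit in studies like this one is simply the improvement (reduction) in the HINT speech reception threshold (SRT) when switching from omnidirectional to directional processing. A sketch of the arithmetic: the 3.3 dB and 1.4 dB figures come from the abstract, but the individual SRT values below are invented for illustration:

```python
def directional_benefit_db(omni_srt_db, dir_srt_db):
    """Benefit in dB SNR: a lower (better) HINT threshold in directional
    mode relative to omnidirectional mode gives a positive benefit."""
    return omni_srt_db - dir_srt_db

# Hypothetical thresholds for the speech-front condition (dB SNR):
omni = -1.0         # bilateral omnidirectional
symmetric = -4.3    # bilateral directional
asymmetric = -2.9   # directional on one ear only
benefit_sym = directional_benefit_db(omni, symmetric)  # ~3.3 dB
sym_vs_asym = asymmetric - symmetric                   # ~1.4 dB
```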

Collaboration


Dive into Benjamin W. Y. Hornsby's collaborations.

Top Co-Authors

Earl E. Johnson (East Tennessee State University)
Alexandra P. Key (Vanderbilt University Medical Center)
Erin M. Picou (Vanderbilt University Medical Center)