Publication


Featured research published by Benjamin Sheffield.


Journal of the Acoustical Society of America | 2012

The relative phonetic contributions of a cochlear implant and residual acoustic hearing to bimodal speech perception

Benjamin Sheffield; Fan-Gang Zeng

The addition of low-passed (LP) speech or even a tone following the fundamental frequency (F0) of speech has been shown to benefit speech recognition for cochlear implant (CI) users with residual acoustic hearing. The mechanisms underlying this benefit are still unclear. In this study, eight bimodal subjects (CI users with acoustic hearing in the non-implanted ear) and eight simulated bimodal subjects (using vocoded and LP speech) were tested on vowel and consonant recognition to determine the relative contributions of acoustic and phonetic cues, including F0, to the bimodal benefit. Several listening conditions were tested (CI/Vocoder, LP, T(F0-env), CI/Vocoder + LP, CI/Vocoder + T(F0-env)). Compared with CI/Vocoder performance, LP significantly enhanced both consonant and vowel perception, whereas a tone following the F0 contour of target speech and modulated with an amplitude envelope of the maximum frequency of the F0 contour (T(F0-env)) enhanced only consonant perception. Information transfer analysis revealed a dual mechanism in the bimodal benefit: The tone representing F0 provided voicing and manner information, whereas LP provided additional manner, place, and vowel formant information. The data in actual bimodal subjects also showed that the degree of the bimodal benefit depended on the cutoff and slope of residual acoustic hearing.


Journal of the Acoustical Society of America | 2014

Development of a test battery for evaluating speech perception in complex listening environments.

Douglas S. Brungart; Benjamin Sheffield; Lina R. Kubli

In the real world, spoken communication occurs in complex environments that involve audiovisual speech cues, spatially separated sound sources, reverberant listening spaces, and other complicating factors that influence speech understanding. However, most clinical tools for assessing speech perception are based on simplified listening environments that do not reflect the complexities of real-world listening. In this study, speech materials from the QuickSIN speech-in-noise test by Killion, Niquette, Gudmundsen, Revit, and Banerjee [J. Acoust. Soc. Am. 116, 2395-2405 (2004)] were modified to simulate eight listening conditions spanning the range of auditory environments listeners encounter in everyday life. The standard QuickSIN test method was used to estimate 50% speech reception thresholds (SRT50) in each condition. A method of adjustment procedure was also used to obtain subjective estimates of the lowest signal-to-noise ratio (SNR) where the listeners were able to understand 100% of the speech (SRT100) and the highest SNR where they could detect the speech but could not understand any of the words (SRT0). The results show that the modified materials maintained most of the efficiency of the QuickSIN test procedure while capturing performance differences across listening conditions comparable to those reported in previous studies that have examined the effects of audiovisual cues, binaural cues, room reverberation, and time compression on the intelligibility of speech.


Scientific Reports | 2017

Electro-Tactile Stimulation Enhances Cochlear Implant Speech Recognition in Noise

Juan Huang; Benjamin Sheffield; Payton Lin; Fan-Gang Zeng

For cochlear implant users, combined electro-acoustic stimulation (EAS) significantly improves performance. However, many more users do not have any functional residual acoustic hearing at low frequencies. Because tactile sensation operates in the same low-frequency range (<500 Hz) as the acoustic hearing in EAS, we propose electro-tactile stimulation (ETS) to improve cochlear implant performance. In ten cochlear implant users, a tactile aid was applied to the index finger that converted voice fundamental frequency into tactile vibrations. Speech recognition in noise was compared for cochlear implants alone and for the bimodal ETS condition. On average, ETS improved speech reception thresholds by 2.2 dB over cochlear implants alone. Nine of the ten subjects showed a positive ETS effect ranging from 0.3 to 7.0 dB, similar in magnitude to the previously reported EAS benefit. The comparable results indicate similar neural mechanisms underlying both the ETS and EAS effects. These positive results suggest that the complementary auditory and tactile modes may also be used to enhance performance for normal-hearing listeners and automatic speech recognition for machines.


Journal of The American Academy of Audiology | 2015

Benefits of Nonlinear Frequency Compression in Adult Hearing Aid Users.

Melissa Kokx-Ryan; Julie I. Cohen; Mary T. Cord; Therese C. Walden; Matthew J. Makashay; Benjamin Sheffield; Douglas S. Brungart

BACKGROUND Frequency-lowering (FL) algorithms are an alternative method of providing access to high-frequency speech cues. There is currently a lack of independent research addressing: (1) what functional, measurable benefits FL provides; (2) which, if any, FL algorithm provides the maximum benefit; (3) how to clinically program algorithms; and (4) how to verify algorithm settings. PURPOSE Two experiments were included in this study. The purpose of Experiment 1 was to (1) determine if a commercially available nonlinear frequency compression (NLFC) algorithm provides benefit as measured by improved speech recognition in noise when fit and verified using standard clinical procedures; and (2) evaluate the impact of acclimatization. The purpose of Experiment 2 was to (1) evaluate the benefit of using enhanced verification procedures to systematically determine the optimal application of a prototype NLFC algorithm; and (2) determine if the optimized prototype NLFC settings provide benefit as measured by improved speech recognition in quiet and in noise. RESEARCH DESIGN A single-blind, within-participant repeated measures design in which participants served as their own controls. STUDY SAMPLE Experiment 1 included 26 participants with a mean age of 68.3 yr and Experiment 2 included 37 participants with a mean age of 68.8 yr. Participants were recruited from the Audiology and Speech Pathology Center at Walter Reed National Military Medical Center in Bethesda, MD. INTERVENTION Participants in Experiment 1 wore bilateral commercially available hearing aids fit using standard clinical procedures and clinician expertise. Participants in Experiment 2 wore a single prototype hearing aid for which FL settings were systematically examined to determine the optimum application. In each experiment, FL-On versus FL-Off settings were examined in a variety of listening situations to determine benefit and possible implications.
DATA COLLECTION AND ANALYSIS In Experiment 1, speech recognition measures using the QuickSIN and Modified Rhyme Test stimuli were obtained at initial bilateral fitting and 3-5 weeks later during a follow-up visit. In Experiment 2, Modified Rhyme Test, /sə/-/∫ə/ consonant discrimination, and dual-task cognitive load speech recognition performance measures were conducted. Participants in Experiment 2 received four different systematic hearing aid programs during an initial visit, and speech recognition data were collected over 2-3 follow-up sessions. RESULTS Some adults with hearing loss obtained small-to-moderate benefits from implementation of FL, while others maintained performance without detriment in both experiments. There was no significant difference among the FL-On settings systematically obtained in Experiment 2. There was a modest but significant age effect in both experiments, indicating that older listeners (>65 yr) might benefit more on average from FL than younger listeners. In addition, FL produced reliable improvements in the intelligibility of the phonemes /ŋ/ and /b/ for both groups, and /ð/ for older listeners, in both experiments. CONCLUSIONS Although the optimum settings, application, and benefits of FL remain unclear at this time, there does not seem to be degradation in listener performance when FL is activated. The benefits of FL should be explored in older adult (>65 yr) listeners, as they tended to benefit more from FL applications.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2015

The Relationship Between Hearing Acuity and Operational Performance in Dismounted Combat

Benjamin Sheffield; Douglas S. Brungart; Jennifer B. Tufts; Col James Ness

The ability to detect, identify, and localize sounds is critical for successful execution of military operations. However, very little quantitative data are available to determine the minimum hearing levels needed to execute complex military tasks. In this experiment, wearable hearing loss simulation systems were used to evaluate the effect of audibility on combat effectiveness in a paintball-based simulated military exercise. The results indicate that impaired hearing has a greater impact on the offensive capabilities of dismounted personnel than it does on their survival in combat, likely due to the tendency for individuals with simulated impairment to adopt a more conservative behavioral strategy than those with normal hearing. These preliminary results provide valuable insights into the impact of impaired hearing on combat effectiveness, with implications for the development of improved auditory fitness-for-duty standards, the establishment of performance requirements for acquiring hearing protection technologies, and the refinement of strategies to train military personnel on how to use hearing protection in combat environments.


International Journal of Audiology | 2017

The effects of elevated hearing thresholds on performance in a paintball simulation of individual dismounted combat

Benjamin Sheffield; Douglas S. Brungart; Jennifer B. Tufts; James Ness

Objective: To examine the relationship between hearing acuity and operational performance in simulated dismounted combat. Design: Individuals wearing hearing loss simulation systems competed in a paintball-based exercise where the objective was to be the last player remaining. Four hearing loss profiles were tested in each round (no hearing loss, mild, moderate and severe) and four rounds were played to make up a match. This allowed counterbalancing of simulated hearing loss across participants. Study sample: Forty-three participants across two data collection sites (Fort Detrick, Maryland and the United States Military Academy, New York). All participants self-reported normal hearing except for two who reported mild hearing loss. Results: Impaired hearing had a greater impact on the offensive capabilities of participants than it did on their “survival”, likely due to the tendency for individuals with simulated impairment to adopt a more conservative behavioural strategy than those with normal hearing. Conclusions: These preliminary results provide valuable insights into the impact of impaired hearing on combat effectiveness, with implications for the development of improved auditory fitness-for-duty standards, the establishment of performance requirements for hearing protection technologies, and the refinement of strategies to train military personnel on how to use hearing protection in combat environments.


Hearing Research | 2017

Performance in noise: Impact of reduced speech intelligibility on Sailor performance in a Navy command and control environment

M. David Keller; John Ziriax; William Barns; Benjamin Sheffield; Douglas S. Brungart; Tony Thomas; Bobby Jaeger; Kurt Yankaskas

Noise, hearing loss, and electronic signal distortion, which are common problems in military environments, can impair speech intelligibility and thereby jeopardize mission success. The current study investigated the impact that impaired communication has on operational performance in a command and control environment by parametrically degrading speech intelligibility in a simulated shipborne Combat Information Center. Experienced U.S. Navy personnel served as the study participants and were required to monitor information from multiple sources and respond appropriately to communications initiated by investigators playing the roles of other personnel involved in a realistic Naval scenario. In each block of the scenario, an adaptive intelligibility modification system employing automatic gain control was used to adjust the signal-to-noise ratio to achieve one of four speech intelligibility levels on a Modified Rhyme Test: No Loss, 80%, 60%, or 40%. Objective and subjective measures of operational performance suggested that performance systematically degraded with decreasing speech intelligibility, with the largest drop occurring between 80% and 60%. These results confirm the importance of noise reduction, good communication design, and effective hearing conservation programs to maximize the operational effectiveness of military personnel.

Highlights: Noise and hearing loss are common problems in the military that can impair speech intelligibility. Understanding the effects of speech intelligibility levels is vital for determining mission success and risks. Performance on a variety of measures is impacted when communication is disrupted by decreased speech intelligibility. Compensation strategies are often not sufficient to prevent real-world performance declines when hearing is poor.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2016

The Effects of Hearing Impairment on Fire Team Performance in Dismounted Combat

Benjamin Sheffield; Douglas S. Brungart; Amy Blank

Although hearing is known to play an essential role in military operations, few studies have directly measured the impact of hearing loss on combat effectiveness. In this study, Soldiers from the 101st Airborne were equipped with hearing loss simulators allowing parametric adjustment of hearing between normal and profound deafness. They then participated in a combat exercise requiring multiple fire teams with different levels of hearing loss to progress through a series of waypoints in a wooded area as quickly as possible without being eliminated by enemy gunfire. A GPS-based tracking system made it possible to record the progress of each team throughout the exercise, including information on player eliminations and the players credited with these kills. Results show that hearing impairment has a substantial negative impact on the performance of experienced Soldiers in terms of survivability, lethality, and mission success.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2013

The Relationship between Speech Intelligibility and Operational Performance in a Simulated Naval Command Information Center

Karen Mentel; John Ziriax; Jon Dachos; Alexander Salunga; Hope Turner; Benjamin Sheffield; Douglas S. Brungart

Despite nearly universal agreement that hearing is critical to the success of military operations, very little quantitative data exists to support this assertion. Unfortunately, hearing-related issues abound across our military services. To design and implement effective measures to mitigate these issues, data is needed to determine the extent to which military effectiveness is impaired when speech communication falls below some specific measurable threshold. In this study, U.S. Navy personnel participated in two experiments to examine the impact of speech intelligibility on operational performance in an Aegis Combat System Command Information Center simulation. Subjects wore headsets with custom-designed software to control speech intelligibility in real time. In the first experiment, speech intelligibility was measured using the modified rhyme test (MRT). In the second experiment, subjects acted as either commanding officer or tactical action officer in a combat scenario divided into time segments, each conducted at a different level of speech intelligibility. Results indicate that mission success, as measured by the percentage of tasks accomplished, decreased dramatically for MRT scores below approximately 65 percent.


Journal of the Acoustical Society of America | 2018

Clear speech adaptations in spontaneous speech produced by young and older adults

Valerie Hazan; Outi Tuomainen; Jeesun Kim; Chris Davis; Benjamin Sheffield; Douglas S. Brungart

The study investigated the speech adaptations made by older adults (OA), with and without age-related hearing loss, to communicate effectively in challenging communicative conditions. Acoustic analyses were carried out on spontaneous speech produced during a problem-solving task (diapix) carried out by talker pairs in different listening conditions. There were 83 talkers of Southern British English. Fifty-seven talkers were OAs aged 65-84: 30 with normal hearing (OANH) and 27 with hearing loss (OAHL) [mean pure-tone average (PTA) at 0.25-4 kHz: 27.7 dB HL]. Twenty-six talkers were younger adults (YA) aged 18-26 with normal hearing. Participants were recorded while completing the diapix task with a conversational partner (a YA of the same sex) when (a) both talkers heard normally (NORM), (b) the partner had a simulated hearing loss, and (c) both talkers heard babble noise. Irrespective of hearing status, there were age-related differences in some acoustic characteristics of YA and OA speech produced in NORM, most likely linked to physiological factors. In challenging conditions, while OANH talkers typically patterned with YA talkers, OAHL talkers made adaptations more consistent with an increase in vocal effort. The study suggests that even mild presbycusis in healthy OAs can affect the speech adaptations made to maintain effective communication.

Collaboration


Dive into Benjamin Sheffield's collaborations.

Top Co-Authors

Douglas S. Brungart (Air Force Research Laboratory)

John Ziriax (Naval Surface Warfare Center)

Fan-Gang Zeng (University of California)

M. David Keller (Naval Surface Warfare Center)

Outi Tuomainen (University College London)

Valerie Hazan (University College London)

Jeesun Kim (University of Western Sydney)