Publication


Featured research published by Joshua G. Bernstein.


Journal of the Acoustical Society of America | 2003

Pitch discrimination of diotic and dichotic tone complexes: Harmonic resolvability or harmonic number?

Joshua G. Bernstein; Andrew J. Oxenham

Three experiments investigated the relationship between harmonic number, harmonic resolvability, and the perception of harmonic complexes. Complexes with successive equal-amplitude sine- or random-phase harmonic components of a 100- or 200-Hz fundamental frequency (f0) were presented dichotically, with even and odd components to opposite ears, or diotically, with all harmonics presented to both ears. Experiment 1 measured performance in discriminating a 3.5%-5% frequency difference between a component of a harmonic complex and a pure tone in isolation. Listeners achieved at least 75% correct for approximately the first 10 and 20 individual harmonics in the diotic and dichotic conditions, respectively, verifying that only processes before the binaural combination of information limit frequency selectivity. Experiment 2 measured fundamental frequency difference limens (f0 DLs) as a function of the average lowest harmonic number. Similar results at both f0s provide further evidence that harmonic number, not absolute frequency, underlies the order-of-magnitude increase observed in f0 DLs when only harmonics above about the 10th are presented. Similar results under diotic and dichotic conditions indicate that the auditory system, in performing f0 discrimination, is unable to utilize the additional peripherally resolved harmonics in the dichotic case. In experiment 3, dichotic complexes containing harmonics below the 12th, or only above the 15th, elicited pitches of the f0 and twice the f0, respectively. Together, experiments 2 and 3 suggest that harmonic number, regardless of peripheral resolvability, governs the transition between two different pitch percepts, one based on the frequencies of individual resolved harmonics and the other based on the periodicity of the temporal envelope.


Journal of the Acoustical Society of America | 2009

Auditory and auditory-visual intelligibility of speech in fluctuating maskers for normal-hearing and hearing-impaired listeners

Joshua G. Bernstein; Ken W. Grant

Speech intelligibility for audio-alone and audiovisual (AV) sentences was estimated as a function of signal-to-noise ratio (SNR) for a female target talker presented in a stationary noise, an interfering male talker, or a speech-modulated noise background, for eight hearing-impaired (HI) and five normal-hearing (NH) listeners. At the 50% keywords-correct performance level, HI listeners showed 7-12 dB less fluctuating-masker benefit (FMB) than NH listeners, consistent with previous results. Both groups showed significantly more FMB under AV than audio-alone conditions. When compared at the same stationary-noise SNR, FMB differences between listener groups and modalities were substantially smaller, suggesting that most of the FMB differences at the 50% performance level may reflect an SNR dependence of the FMB. Still, 1-5 dB of the FMB difference between listener groups remained, indicating a possible role for reduced audibility, limited spectral or temporal resolution, or an inability to use auditory source-segregation cues, in directly limiting the ability to listen in the dips of a fluctuating masker. A modified version of the extended speech-intelligibility index that predicts a larger FMB at less favorable SNRs accounted for most of the FMB differences between listener groups and modalities. Overall, these data suggest that HI listeners retain more of an ability to listen in the dips of a fluctuating masker than previously thought. Instead, the fluctuating-masker difficulties exhibited by HI listeners may derive from the reduced FMB associated with the more favorable SNRs they require to identify a reasonable proportion of the target speech.


Journal of the Acoustical Society of America | 2011

Effects of spectral smearing and temporal fine-structure distortion on the fluctuating-masker benefit for speech at a fixed signal-to-noise ratio

Joshua G. Bernstein; Douglas S. Brungart

Normal-hearing listeners receive less benefit from momentary dips in the level of a fluctuating masker for speech processed to degrade spectral detail or temporal fine structure (TFS) than for unprocessed speech. This has been interpreted as evidence that the magnitude of the fluctuating-masker benefit (FMB) reflects the ability to resolve spectral detail and TFS. However, the FMB for degraded speech is typically measured at a higher signal-to-noise ratio (SNR) to yield performance similar to normal speech for the baseline (stationary-noise) condition. Because the FMB decreases with increasing SNR, this SNR difference might account for the reduction in FMB for degraded speech. In this study, the FMB for unprocessed and processed (TFS-removed or spectrally smeared) speech was measured in a paradigm that adjusts word-set size, rather than SNR, to equate stationary-noise performance across processing conditions. Compared at the same SNR and percent-correct level (but with different set sizes), processed and unprocessed stimuli yielded a similar FMB for four different fluctuating maskers (speech-modulated noise, one opposite-gender interfering talker, two same-gender interfering talkers, and 16-Hz interrupted noise). These results suggest that, for these maskers, spectral or TFS distortions do not directly impair the ability to benefit from momentary dips in masker level.


Journal of the Acoustical Society of America | 2006

The relationship between frequency selectivity and pitch discrimination: effects of stimulus level.

Joshua G. Bernstein; Andrew J. Oxenham

Three experiments tested the hypothesis that fundamental frequency (f0) discrimination depends on the resolvability of harmonics within a tone complex. Fundamental frequency difference limens (f0 DLs) were measured for random-phase harmonic complexes with eight f0s between 75 and 400 Hz, bandpass filtered between 1.5 and 3.5 kHz, and presented at 12.5-dB/component average sensation level in threshold equalizing noise with levels of 10, 40, and 65 dB SPL per equivalent rectangular auditory filter bandwidth. With increasing level, the transition from large (poor) to small (good) f0 DLs shifted to a higher f0. This shift corresponded to a decrease in harmonic resolvability, as estimated in the same listeners with excitation patterns derived from measures of auditory filter shape and with a more direct measure that involved hearing out individual harmonics. The results are consistent with the idea that resolved harmonics are necessary for good f0 discrimination. Additionally, f0 DLs for high f0s increased with stimulus level in the same way as pure-tone frequency DLs, suggesting that for this frequency range, the frequencies of harmonics are more poorly encoded at higher levels, even when harmonics are well resolved.


Journal of the Acoustical Society of America | 2008

Harmonic segregation through mistuning can improve fundamental frequency discrimination

Joshua G. Bernstein; Andrew J. Oxenham

This study investigated the relationship between harmonic frequency resolution and fundamental frequency (f0) discrimination. Consistent with earlier studies, f0 discrimination of a diotic bandpass-filtered harmonic complex deteriorated sharply as the f0 decreased to the point where only harmonics above the tenth were presented. However, when the odd harmonics were mistuned by 3%, performance improved dramatically, such that performance nearly equaled that found with only even harmonics present. Mistuning also improved performance when alternating harmonics were presented to opposite ears (dichotic condition). In a task involving frequency discrimination of individual harmonics within the complexes, mistuning the odd harmonics yielded no significant improvement in the resolution of individual harmonics. Pitch matches to the mistuned complexes suggested that the even harmonics dominated the pitch for f0s at which a benefit of mistuning was observed. The results suggest that f0 discrimination performance can benefit from perceptual segregation based on inharmonicity, and that poor performance when only high-numbered harmonics are present is not due to limited peripheral harmonic resolvability. Taken together with earlier results, the findings suggest that f0 discrimination may depend on auditory filter bandwidths, but that spectral resolution of individual harmonics is neither necessary nor sufficient for accurate f0 discrimination.


Ear and Hearing | 2015

Trimodal speech perception: how residual acoustic hearing supplements cochlear-implant consonant recognition in the presence of visual cues.

Benjamin M. Sheffield; Gerald I. Schuchman; Joshua G. Bernstein

Objectives: As cochlear implant (CI) acceptance increases and candidacy criteria are expanded, these devices are increasingly recommended for individuals with less than profound hearing loss. As a result, many individuals who receive a CI also retain acoustic hearing, often in the low frequencies, in the nonimplanted ear (i.e., bimodal hearing) and in some cases in the implanted ear (i.e., hybrid hearing), which can enhance the performance achieved by the CI alone. However, guidelines for clinical decisions pertaining to cochlear implantation are largely based on expectations for postsurgical speech-reception performance with the CI alone in auditory-only conditions. A more comprehensive prediction of postimplant performance would include the expected effects of residual acoustic hearing and visual cues on speech understanding. An evaluation of auditory-visual performance might be particularly important because of the complementary interaction between the speech information relayed by visual cues and that contained in the low-frequency auditory signal. The goal of this study was to characterize the benefit provided by residual acoustic hearing to consonant identification under auditory-alone and auditory-visual conditions for CI users. Additional information regarding the expected role of residual hearing in overall communication performance by a CI listener could potentially lead to more informed decisions regarding cochlear implantation, particularly with respect to recommendations for or against bilateral implantation for an individual who is functioning bimodally. Design: Eleven adults, aged 23 to 75 years, with a unilateral CI and air-conduction thresholds in the nonimplanted ear equal to or better than 80 dB HL for at least one octave frequency between 250 and 1000 Hz participated in this study.
Consonant identification was measured for conditions involving combinations of electric hearing (via the CI), acoustic hearing (via the nonimplanted ear), and speechreading (visual cues). Results: The results suggest that the benefit to CI consonant-identification performance provided by the residual acoustic hearing is even greater when visual cues are also present. An analysis of consonant confusions suggests that this is because the voicing cues provided by the residual acoustic hearing are highly complementary with the mainly place-of-articulation cues provided by the visual stimulus. Conclusions: These findings highlight the need for a comprehensive prediction of trimodal (acoustic, electric, and visual) postimplant speech-reception performance to inform implantation decisions. The increased influence of residual acoustic hearing under auditory-visual conditions should be taken into account when considering surgical procedures or devices that are intended to preserve acoustic hearing in the implanted ear. This is particularly relevant when evaluating the candidacy of a current bimodal CI user for a second CI (i.e., bilateral implantation). Although recent developments in CI technology and surgical techniques have increased the likelihood of preserving residual acoustic hearing, preservation cannot be guaranteed in each individual case. Therefore, the potential gain to be derived from bilateral implantation needs to be weighed against the possible loss of the benefit provided by residual acoustic hearing.


Journal of the Acoustical Society of America | 2003

Effects of relative frequency, absolute frequency, and phase on fundamental frequency discrimination: Data and an autocorrelation model

Joshua G. Bernstein; Andrew J. Oxenham

Fundamental frequency (F0) difference limens (DLs) were measured versus F0 for sine- and random-phase harmonic complexes bandpass-filtered into low- or high-frequency regions, with 3-dB passbands of 2.5-3.5 and 5-7 kHz, respectively. In all cases, F0 DLs decreased dramatically with increasing F0 as approximately the tenth harmonic appeared in the passband. Generally, F0 DLs were similar in both frequency regions for complexes with similar harmonic numbers and phase relationships. However, F0 DLs were larger in the high-frequency than the low-frequency region for random-phase complexes containing only harmonics above the tenth, suggesting a possible role for additional fine-structure information in the low-frequency region. The dependence of F0 discrimination on relative frequency presents a significant challenge to autocorrelation (AC) models of pitch perception, in which predictions generally depend more on absolute frequency and phase locking. To represent this relative frequency effect, a "lag window" ...


Ear and Hearing | 2017

The Effect of Interaural Mismatches on Contralateral Unmasking With Single-Sided Vocoders

Jessica M. Wess; Douglas S. Brungart; Joshua G. Bernstein

Objectives: Cochlear-implant (CI) users with single-sided deafness (SSD)—that is, one normal-hearing (NH) ear and one CI ear—can obtain some unmasking benefits when a mixture of target and masking voices is presented to the NH ear and a copy of just the masking voices is presented to the CI ear. NH listeners show similar benefits in a simulation of SSD-CI listening, whereby a mixture of target and masking voices is presented to one ear and a vocoded copy of the masking voices is presented to the opposite ear. However, the magnitude of the benefit for SSD-CI listeners is highly variable across individuals and is on average less than for NH listeners presented with vocoded stimuli. One possible explanation for the limited benefit observed for some SSD-CI users is that temporal and spectral discrepancies between the acoustic and electric ears might interfere with contralateral unmasking. The present study presented vocoder simulations to NH participants to examine the effects of interaural temporal and spectral mismatches on contralateral unmasking. Design: Speech-reception performance was measured in a competing-talker paradigm for NH listeners presented with vocoder simulations of SSD-CI listening. In the monaural condition, listeners identified target speech masked by two same-gender interferers, presented to the left ear. In the bilateral condition, the same stimuli were presented to the left ear, but the right ear was presented with a noise-vocoded copy of the interfering voices. This paradigm tested whether listeners could integrate the interfering voices across the ears to better hear the monaural target. Three common distortions inherent in CI processing were introduced to the vocoder processing: spectral shifts, temporal delays, and reduced frequency selectivity. 
Results: In experiment 1, contralateral unmasking (i.e., the benefit from adding the vocoded maskers to the second ear) was impaired by spectral mismatches of four equivalent rectangular bandwidths or greater. This is equivalent to roughly a 3.6-mm mismatch between the cochlear places stimulated in the electric and acoustic ears, which is on the low end of the average expected mismatch for SSD-CI listeners. In experiment 2, performance was negatively affected by a temporal mismatch of 24 ms or greater, but not for mismatches in the 0 to 12 ms range expected for SSD-CI listeners. Experiment 3 showed an interaction between spectral shift and spectral resolution, with less effect of interaural spectral mismatches when the number of vocoder channels was reduced. Experiment 4 applied interaural spectral and temporal mismatches in combination. Performance was best when both frequency and timing were aligned, but in cases where a mismatch was present in one dimension (either frequency or latency), the addition of mismatch in the second dimension did not further disrupt performance. Conclusions: These results emphasize the need for interaural alignment—in timing and especially in frequency—to maximize contralateral unmasking for NH listeners presented with vocoder simulations of SSD-CI listening. Improved processing strategies that reduce mismatch between the electric and acoustic ears of SSD-CI listeners might improve their ability to obtain binaural benefits in multitalker environments.
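The conversion above between a 4-ERB spectral mismatch and a cochlear-place distance in millimeters follows from standard auditory maps. As a rough illustration (not the authors' code), the Glasberg and Moore ERB-number scale and the Greenwood human frequency-place map can be combined in a few lines; the 1-kHz reference frequency here is an arbitrary example, and the exact millimeter figure varies somewhat with frequency region, which is why this sketch lands near, but not exactly at, the 3.6 mm cited:

```python
import math

def erb_number(f_hz):
    """Glasberg & Moore (1990) ERB-number scale (in Cams)."""
    return 21.4 * math.log10(0.00437 * f_hz + 1.0)

def inv_erb_number(cam):
    """Inverse of erb_number: frequency in Hz at a given ERB-number."""
    return (10.0 ** (cam / 21.4) - 1.0) / 0.00437

def greenwood_place_mm(f_hz):
    """Greenwood (1990) human frequency-place map: mm from the cochlear apex."""
    return math.log10(f_hz / 165.4 + 1.0) / 0.06

# Example: shift a 1-kHz component upward by 4 ERBs and express
# the shift as a distance along the basilar membrane.
f_ref = 1000.0
f_shifted = inv_erb_number(erb_number(f_ref) + 4.0)
shift_mm = greenwood_place_mm(f_shifted) - greenwood_place_mm(f_ref)
print(f"{f_shifted:.0f} Hz, place shift = {shift_mm:.1f} mm")  # ~1661 Hz, ~3.3 mm
```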


Journal of the Acoustical Society of America | 2015

Release from informational masking in a monaural competing-speech task with vocoded copies of the maskers presented contralaterally

Joshua G. Bernstein; Nandini Iyer; Douglas S. Brungart

Single-sided deafness prevents access to the binaural cues that help normal-hearing listeners extract target speech from competing voices. Little is known about how listeners with one normal-hearing ear might benefit from access to severely degraded audio signals that preserve only envelope information in the second ear. This study investigated whether vocoded masker-envelope information presented to one ear could improve performance for normal-hearing listeners in a multi-talker speech-identification task presented to the other ear. Target speech and speech or non-speech maskers were presented unprocessed to the left ear. The right ear received no signal, or either an unprocessed or eight-channel noise-vocoded copy of the maskers. Presenting the vocoded maskers contralaterally yielded significant masking release from same-gender speech maskers, albeit less than in the unprocessed case, but not from opposite-gender speech, stationary-noise, or modulated-noise maskers. Unmasking also occurred with as few as two vocoder channels and when an attenuated copy of the target signal was added to the maskers before vocoding. These data show that delivering masker-envelope information contralaterally generates masking release in situations where target-masker similarity impedes monaural speech-identification performance. By delivering speech-envelope information to a deaf ear, cochlear implants for single-sided deafness have the potential to produce a similar effect.
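The noise vocoding described above, which preserves per-band envelope information while discarding temporal fine structure, can be sketched as follows. This is a generic illustration, not the processing used in the study: the log-spaced channel edges, filter orders, 160-Hz envelope cutoff, and frequency range are all assumed parameters.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(x, fs, n_channels=8, f_lo=100.0, f_hi=6000.0, env_cutoff=160.0):
    """Noise-vocoder sketch: keep per-band envelopes, discard fine structure."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)            # log-spaced band edges
    carrier = np.random.default_rng(0).standard_normal(len(x))  # broadband noise carrier
    sos_lp = butter(4, env_cutoff, btype="low", fs=fs, output="sos")
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos_bp = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos_bp, x)
        env = sosfiltfilt(sos_lp, np.abs(hilbert(band)))  # smoothed Hilbert envelope
        env = np.clip(env, 0.0, None)
        out += sosfiltfilt(sos_bp, carrier) * env         # band-limited noise x envelope
    # Scale the output to match the input's overall RMS
    out *= np.sqrt(np.mean(x ** 2) / (np.mean(out ** 2) + 1e-12))
    return out
```

Reducing `n_channels` to two, as in the abstract's two-channel condition, correspondingly coarsens the spectral detail while still conveying the masker envelopes.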


Journal of the Acoustical Society of America | 2012

Set-size procedures for controlling variations in speech-reception performance with a fluctuating masker

Joshua G. Bernstein; Van Summers; Nandini Iyer; Douglas S. Brungart

Adaptive signal-to-noise ratio (SNR) tracking is often used to measure speech reception in noise. Because SNR varies with performance using this method, data interpretation can be confounded when measuring an SNR-dependent effect such as the fluctuating-masker benefit (FMB) (the intelligibility improvement afforded by brief dips in the masker level). One way to overcome this confound, and allow FMB comparisons across listener groups with different stationary-noise performance, is to adjust the response set size to equalize performance across groups at a fixed SNR. However, this technique is only valid under the assumption that changes in set size have the same effect on percentage-correct performance for different masker types. This assumption was tested by measuring nonsense-syllable identification for normal-hearing listeners as a function of SNR, set size and masker (stationary noise, 4- and 32-Hz modulated noise and an interfering talker). Set-size adjustment had the same impact on performance scores for all maskers, confirming the independence of FMB (at matched SNRs) and set size. These results, along with those of a second experiment evaluating an adaptive set-size algorithm to adjust performance levels, establish set size as an efficient and effective tool to adjust baseline performance when comparing effects of masker fluctuations between listener groups.
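The adaptive SNR tracking that this paper contrasts with the set-size approach is typically a staircase procedure, which converges on a different SNR for each masker and listener group and thereby creates the confound described above. As a rough, generic sketch (not the authors' procedure; the simulated logistic listener and all parameter values are assumptions):

```python
import math
import random

def track_snr(threshold_db, slope=1.0, start_snr=10.0, step=2.0,
              n_reversals=8, seed=1):
    """Minimal 1-up/1-down adaptive SNR staircase (tracks ~50% correct)."""
    rng = random.Random(seed)
    snr, direction, reversals = start_snr, 0, []
    while len(reversals) < n_reversals:
        # Hypothetical logistic listener: p(correct) rises with SNR
        p_correct = 1.0 / (1.0 + math.exp(-slope * (snr - threshold_db)))
        correct = rng.random() < p_correct
        new_dir = -1 if correct else +1  # harder after a hit, easier after a miss
        if direction != 0 and new_dir != direction:
            reversals.append(snr)        # record direction changes
        direction = new_dir
        snr += new_dir * step
    late = reversals[2:]                 # discard early reversals
    return sum(late) / len(late)         # threshold estimate in dB SNR
```

Because the track settles near each condition's own 50% point, an SNR-dependent quantity such as the FMB ends up being measured at different SNRs across conditions; fixing the SNR and adapting the response set size instead avoids that problem.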

Collaboration


An overview of Joshua G. Bernstein's frequent co-authors and their affiliations.

Top Co-Authors

Douglas S. Brungart | Air Force Research Laboratory
Gerald I. Schuchman | Walter Reed National Military Medical Center
Kenneth K. Jensen | Walter Reed National Military Medical Center
Van Summers | Walter Reed Army Institute of Research
Arnaldo Rivera | Walter Reed National Military Medical Center
Ken W. Grant | Walter Reed Army Medical Center
Nandini Iyer | Air Force Research Laboratory