Publication


Featured research published by Richard L. Freyman.


Journal of the Acoustical Society of America | 2001

Spatial release from informational masking in speech recognition

Richard L. Freyman; Uma Balakrishnan; Karen S. Helfer

Three experiments were conducted to determine the extent to which perceived separation of speech and interference improves speech recognition in the free field. Target speech stimuli were 320 grammatically correct but nonmeaningful sentences spoken by a female talker. In the first experiment the interference was a recording of either one or two female talkers reciting a continuous stream of similar nonmeaningful sentences. The target talker was always presented from a loudspeaker directly in front (0 degrees). The interference was either presented from the front loudspeaker (the F-F condition) or from both a right loudspeaker (60 degrees) and the front loudspeaker, with the right leading the front by 4 ms (the F-RF condition). Due to the precedence effect, the interference in the F-RF condition was perceived to be well to the right, while the target talker was heard from the front. For both the single-talker and two-talker interference, there was a sizable improvement in speech recognition in the F-RF condition compared with the F-F condition. However, a second experiment showed that there was no F-RF advantage when the interference was noise modulated by the single- or multi-channel envelope of the two-talker masker. Results of the third experiment indicated that the advantage of perceived separation is not limited to conditions where the interfering speech is understandable.
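The F-RF manipulation above can be approximated in a single-channel simulation by summing the masker with a copy of itself delayed by 4 ms. The sketch below is a minimal illustration of that delay-and-add step, not the study's actual two-loudspeaker free-field setup; the sampling rate and the white-noise placeholder signal are assumptions.

```python
import numpy as np

FS = 44100                       # assumed sampling rate (Hz)
DELAY_MS = 4.0                   # front copy lags the right copy by 4 ms

def make_f_rf_masker(masker: np.ndarray) -> np.ndarray:
    """Single-channel simplification of the F-RF condition: the 'right'
    copy leads, and a 4-ms delayed copy plays at the 'front' location."""
    delay = int(round(FS * DELAY_MS / 1000.0))          # 4 ms -> 176 samples
    right = np.concatenate([masker, np.zeros(delay)])   # leading copy
    front = np.concatenate([np.zeros(delay), masker])   # lagging copy
    return right + front                                # acoustic sum

masker = np.random.randn(FS)     # 1 s of placeholder masker signal
f_rf = make_f_rf_masker(masker)
```

In the real experiment the two copies were presented from physically separate loudspeakers, so the precedence effect (rather than the summed waveform itself) produced the perceived rightward shift of the interference.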


Ear and Hearing | 2007

Aging and Speech-on-Speech Masking

Karen S. Helfer; Richard L. Freyman

Objectives: A common complaint of many older adults is difficulty communicating in situations where they must focus on one talker in the presence of other people speaking. In listening environments containing multiple talkers, age-related changes may be caused by increased sensitivity to energetic masking, increased susceptibility to informational masking (e.g., confusion between the target voice and masking voices), and/or cognitive deficits. The purpose of the present study was to tease out these contributions to the difficulties that older adults experience in speech-on-speech masking situations. Design: Groups of younger, normal-hearing individuals and older adults with varying degrees of hearing sensitivity (n = 12 per group) participated in a study of sentence recognition in the presence of four types of maskers: a two-talker masker consisting of voices of the same sex as the target voice, a two-talker masker of voices of the opposite sex as the target, a signal-envelope-modulated noise derived from the two-talker complex, and a speech-shaped steady noise. Subjects also completed a voice discrimination task to determine the extent to which they were able to incidentally learn to tell apart the target voice from the same-sex masking voices and to examine whether this ability influenced speech-on-speech masking. Results: Results showed that older adults had significantly poorer performance in the presence of all four types of maskers, with the largest absolute difference for the same-sex masking condition. When the data were analyzed in terms of relative group differences (i.e., adjusting for absolute performance) the greatest effect was found for the opposite-sex masker. Degree of hearing loss was significantly related to performance in several listening conditions. Some older subjects demonstrated a reduced ability to discriminate between the masking and target voices; performance on this task was not related to speech recognition ability. 
Conclusions: The overall pattern of results suggests that although amount of informational masking does not seem to differ between older and younger listeners, older adults (particularly those with hearing loss) evidence a deficit in the ability to selectively attend to a target voice, even when the masking voices are from talkers of the opposite sex. Possible explanations for these findings include problems understanding speech in the presence of a masker with temporal and spectral fluctuations and/or age-related changes in cognitive function.


Journal of the Acoustical Society of America | 2005

The role of visual speech cues in reducing energetic and informational masking

Karen S. Helfer; Richard L. Freyman

Two experiments compared the effect of supplying visual speech information (e.g., lipreading cues) on the ability to hear one female talker's voice in the presence of steady-state noise or a masking complex consisting of two other female voices. In the first experiment intelligibility of sentences was measured in the presence of the two types of maskers with and without perceived spatial separation of target and masker. The second study tested detection of sentences in the same experimental conditions. Results showed that visual cues provided more benefit for both recognition and detection of speech when the masker consisted of other voices (versus steady-state noise). Moreover, visual cues provided greater benefit when the target speech and masker were spatially coincident versus when they appeared to arise from different spatial locations. The data obtained here are consistent with the hypothesis that lipreading cues help to segregate a target voice from competing voices, in addition to the established benefit of supplementing masked phonetic information.


Journal of the Acoustical Society of America | 2007

Speech Intelligibility In Cochlear Implant Simulations: Effects Of Carrier Type, Interfering Noise, And Subject Experience

Nathaniel A. Whitmal; Sarah F. Poissant; Richard L. Freyman; Karen S. Helfer

Channel vocoders using either tone or band-limited noise carriers have been used in experiments to simulate cochlear implant processing in normal-hearing listeners. Previous results from these experiments have suggested that the two vocoder types produce speech of nearly equal intelligibility in quiet conditions. The purpose of this study was to further compare the performance of tone and noise-band vocoders in both quiet and noisy listening conditions. In each of four experiments, normal-hearing subjects were better able to identify tone-vocoded sentences and vowel-consonant-vowel syllables than noise-vocoded sentences and syllables, both in quiet and in the presence of either speech-spectrum noise or two-talker babble. An analysis of consonant confusions for listening in both quiet and speech-spectrum noise revealed significantly different error patterns that were related to each vocoder's ability to produce tone or noise output that accurately reflected the consonant's manner of articulation. Subject experience was also shown to influence intelligibility. Simulations using a computational model of modulation detection suggest that the noise vocoder's disadvantage is in part due to the intrinsic temporal fluctuations of its carriers, which can interfere with temporal fluctuations that convey speech recognition cues.
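A noise-band channel vocoder of the kind compared here can be sketched as follows: filter the speech into bands, extract each band's envelope, and use it to modulate a band-limited noise carrier. The sampling rate, filter orders, band edges, and envelope cutoff below are illustrative assumptions, not the study's exact parameters; a tone vocoder would replace each noise carrier with a sinusoid at the channel's center frequency.

```python
import numpy as np
from scipy.signal import butter, sosfilt

FS = 16000          # assumed sampling rate (Hz)
N_CHANNELS = 6      # channel count varied in the study (e.g., 6, 12, 24)

def bandpass(sig, lo, hi):
    sos = butter(4, [lo, hi], btype="band", fs=FS, output="sos")
    return sosfilt(sos, sig)

def envelope(sig, cutoff=160.0):
    """Half-wave rectify, then low-pass filter to extract the envelope."""
    sos = butter(2, cutoff, btype="low", fs=FS, output="sos")
    return sosfilt(sos, np.maximum(sig, 0.0))

def noise_vocoder(speech):
    # Log-spaced band edges spanning an assumed 100-7000 Hz analysis range.
    edges = np.logspace(np.log10(100.0), np.log10(7000.0), N_CHANNELS + 1)
    out = np.zeros_like(speech)
    for lo, hi in zip(edges[:-1], edges[1:]):
        env = envelope(bandpass(speech, lo, hi))
        carrier = bandpass(np.random.randn(len(speech)), lo, hi)
        out += env * carrier     # modulate the band-limited noise carrier
    return out

speech = np.random.randn(FS)     # placeholder for a 1-s speech waveform
vocoded = noise_vocoder(speech)
```

Because the noise carriers have intrinsic envelope fluctuations of their own, the channel envelopes are conveyed less faithfully than with tone carriers, which is the mechanism the modulation-detection simulations point to.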


Journal of the Acoustical Society of America | 1997

Onset dominance in lateralization

Richard L. Freyman; Patrick M. Zurek; Uma Balakrishnan; Yuan-Chuan Chiang

Saberi and Perrott [Acustica 81, 272-275 (1995)] found that the in-head lateralization of a relatively long-duration pulse train could be controlled by the interaural delay of the single pulse pair that occurs at onset. The present study examined this further, using an acoustic pointer measure of lateralization, with stimulus manipulations designed to determine conditions under which lateralization was consistent with the interaural onset delay. The present stimuli were wideband pulse trains, noise-burst trains, and inharmonic complexes, 250 ms in duration, chosen for the ease with which interaural delays and correlations of select temporal segments of the stimulus could be manipulated. The stimulus factors studied were the periodicity of the ongoing part of the signal as well as the multiplicity and ambiguity of interaural delays. The results, in general, showed that the interaural onset delay controlled lateralization when the steady state binaural cues were relatively weak, either because the spectral components were only sparsely distributed across frequency or because the interaural time delays were ambiguous. Onset dominance can be disrupted by sudden stimulus changes within the train, and several examples of such changes are described. Individual subjects showed strong left-right asymmetries in onset effectiveness. The results have implications for understanding how onset and ongoing interaural delay cues contribute to the location estimates formed by the binaural auditory system.


PLOS ONE | 2014

Influence of musical training on understanding voiced and whispered speech in noise

Dorea R. Ruggles; Richard L. Freyman; Andrew J. Oxenham

This study tested the hypothesis that the previously reported advantage of musicians over non-musicians in understanding speech in noise arises from more efficient or robust coding of periodic voiced speech, particularly in fluctuating backgrounds. Speech intelligibility was measured in listeners with extensive musical training, and in those with very little musical training or experience, using normal (voiced) or whispered (unvoiced) grammatically correct nonsense sentences in noise that was spectrally shaped to match the long-term spectrum of the speech, and was either continuous or gated with a 16-Hz square wave. Performance was also measured in clinical speech-in-noise tests and in pitch discrimination. Musicians exhibited enhanced pitch discrimination, as expected. However, no systematic or statistically significant advantage for musicians over non-musicians was found in understanding either voiced or whispered sentences in either continuous or gated noise. Musicians also showed no statistically significant advantage in the clinical speech-in-noise tests. Overall, the results provide no evidence for a significant difference between young adult musicians and non-musicians in their ability to understand speech in noise.
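The gated-noise condition can be sketched by multiplying the masker by a 16-Hz square wave. The sampling rate and the white-noise placeholder are assumptions (the study used noise spectrally shaped to the long-term speech spectrum); the 16-Hz rate is from the study.

```python
import numpy as np

FS = 44100                    # assumed sampling rate (Hz)
GATE_HZ = 16.0                # square-wave gating rate used in the study

def gate_noise(noise: np.ndarray) -> np.ndarray:
    """Multiply the noise by a 16-Hz, 50%-duty-cycle square wave,
    alternating between masker-on and masker-off intervals."""
    t = np.arange(len(noise)) / FS
    gate = (np.sin(2.0 * np.pi * GATE_HZ * t) >= 0.0).astype(float)
    return noise * gate

noise = np.random.randn(FS)   # 1 s of placeholder (white) noise
gated = gate_noise(noise)
```

The masker-off intervals give listeners brief glimpses of the target speech, which is why a musician advantage, if rooted in robust coding of voiced speech, was expected to be largest in this fluctuating background.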


Journal of the Acoustical Society of America | 2006

Effects Of Reverberation And Masking On Speech Intelligibility In Cochlear Implant Simulations

Sarah F. Poissant; Nathaniel A. Whitmal; Richard L. Freyman

Two experiments investigated the impact of reverberation and masking on speech understanding using cochlear implant (CI) simulations. Experiment 1 tested sentence recognition in quiet. Stimuli were processed with reverberation simulation (T=0.425, 0.266, 0.152, and 0.0 s) and then either processed with vocoding (6, 12, or 24 channels) or were subjected to no further processing. Reverberation alone had only a small impact on perception when as few as 12 channels of information were available. However, when the processing was limited to 6 channels, perception was extremely vulnerable to the effects of reverberation. In experiment 2, subjects listened to reverberated sentences, through 6- and 12-channel processors, in the presence of either speech-spectrum noise (SSN) or two-talker babble (TTB) at various target-to-masker ratios. The combined impact of reverberation and masking was profound, although there was no interaction between the two effects. This differs from results obtained in subjects listening to unprocessed speech where interactions between reverberation and masking have been shown to exist. A speech transmission index (STI) analysis indicated a reasonably good prediction of speech recognition performance. Unlike previous investigations, the SSN and TTB maskers produced equivalent results, raising questions about the role of informational masking in CI processed speech.


Journal of the Acoustical Society of America | 2005

Precedence-based speech segregation in a virtual auditory environment

Douglas S. Brungart; Brian D. Simpson; Richard L. Freyman

When a masking sound is spatially separated from a target speech signal, substantial releases from masking typically occur both for speech and noise maskers. However, when a delayed copy of the masker is also presented at the location of the target speech (a condition that has been referred to as the front target, right-front masker or F-RF configuration), the advantages of spatial separation vanish for noise maskers but remain substantial for speech maskers. This effect has been attributed to precedence, which introduces an apparent spatial separation between the target and masker in the F-RF configuration that helps the listener to segregate the target from a masking voice but not from a masking noise. In this study, virtual synthesis techniques were used to examine variations of the F-RF configuration in an attempt to more fully understand the stimulus parameters that influence the release from masking obtained in that condition. The results show that the release from speech-on-speech masking caused by the addition of the delayed copy of the masker is robust across a wide variety of source locations, masker locations, and masker delay values. This suggests that the speech unmasking that occurs in the F-RF configuration is not dependent on any single perceptual cue and may indicate that F-RF speech segregation is only partially based on the apparent left-right location of the RF masker.


Journal of the Acoustical Society of America | 1986

Frequency discrimination as a function of tonal duration and excitation‐pattern slopes in normal and hearing‐impaired listeners

Richard L. Freyman; David A. Nelson

Frequency difference limens were determined as a function of stimulus duration in five normal-hearing and seven hearing-impaired subjects. The frequency DL-duration functions obtained from normal-hearing subjects were similar to those reported by Liang and Chistovich [Sov. Phys. Acoust. 6, 75-80 (1961)]. As duration increased, the DLs improved rapidly over a range of short durations, improved more gradually over a middle range of durations, and reached an asymptote around 200 ms. The functions obtained from the hearing-impaired subjects were similar to those from normal subjects over the middle and longer durations, but did not display the rapid changes at short durations. The paper examines the ability of a variation of Zwicker's excitation-pattern model of frequency discrimination to explain these duration effects. Most, although not all, of the effects can be adequately explained by the model.


Journal of the Acoustical Society of America | 1984

Broadened forward‐masked tuning curves from intense masking tones: Delay‐time and probe‐level manipulations

David A. Nelson; Richard L. Freyman

Forward-masked psychophysical tuning curves were obtained from normal-hearing listeners under two conditions: lengthened delay time between masker and probe, and increased probe level. Both conditions required higher-level masking tones and both conditions resulted in broader tuning curves. Comparisons were made of tuning curves obtained with different probe-level and delay-time combinations that were chosen to require equivalent masker levels at the probe frequency. Nearly identical tuning-curve shapes were obtained when masker level at the probe frequency was the same. The results are predicted by a two-process model, consisting of a nonlinear filter followed by an exponential decay. Tuning-curve shapes in forward masking appear to be largely dependent upon the masker level (filter output level) at which one attempts to measure them.

Collaboration


Richard L. Freyman's frequent collaborators and their affiliations.

Top Co-Authors

Karen S. Helfer, University of Massachusetts Amherst
Patrick M. Zurek, Massachusetts Institute of Technology
Rachel K. Clifton, University of Massachusetts Amherst
Amanda M. Griffin, University of Massachusetts Amherst
Daniel D. McCall, University of Massachusetts Amherst
Charlotte Morse-Fortier, University of Massachusetts Amherst
Rachel Keen, University of Virginia
Ruth Y. Litovsky, University of Wisconsin-Madison