
Publications


Featured research published by Timothy Streeter.


Scientific Reports | 2015

Musical training, individual differences and the cocktail party problem

Jayaganesh Swaminathan; Christine R. Mason; Timothy Streeter; Virginia Best; Gerald Kidd; Aniruddh D. Patel

Are musicians better able to understand speech in noise than non-musicians? Recent findings have produced contradictory results. Here we addressed this question by asking musicians and non-musicians to understand target sentences masked by other sentences presented from different spatial locations, the classical ‘cocktail party problem’ in speech science. We found that musicians obtained a substantial benefit in this situation, with thresholds ~6 dB better than non-musicians. Large individual differences in performance were noted particularly for the non-musically trained group. Furthermore, in different conditions we manipulated the spatial location and intelligibility of the masking sentences, thus changing the amount of ‘informational masking’ (IM) while keeping the amount of ‘energetic masking’ (EM) relatively constant. When the maskers were unintelligible and spatially separated from the target (low in IM), musicians and non-musicians performed comparably. These results suggest that the characteristics of speech maskers and the amount of IM can influence the magnitude of the differences found between musicians and non-musicians in multiple-talker “cocktail party” environments. Furthermore, considering the task in terms of the EM-IM distinction provides a conceptual framework for future behavioral and neuroscientific studies which explore the underlying sensory and cognitive mechanisms contributing to enhanced “speech-in-noise” perception by musicians.


ACM Transactions on Applied Perception | 2005

Perceptual plasticity in spatial auditory displays

Barbara G. Shinn-Cunningham; Timothy Streeter; Jean-François Gyss

Often, virtual acoustic environments present cues that are inconsistent with an individual's normal experiences. Through training, however, an individual can at least partially adapt to such inconsistent cues through either short- [Kassem 1998; Shinn-Cunningham 2000; Shinn-Cunningham et al. 1998a, 1998b; Zahorik 2001] or long- [Hofman et al. 1998] term exposure. The type and degree of inconsistency, as well as the length of training, determine the final accuracy and consistency with which the subject can localize sounds [Shinn-Cunningham 2000]. The current experiments on short-term adaptation measure how localization bias (mean error) and resolution (precision) change when subjects are exposed to auditory cue rearrangements simpler than those previously investigated. These results, combined with those of earlier experiments, suggest that there is plasticity at many different levels of the spatial auditory processing pathway, with different time scales governing the plasticity at different levels of the system. This view of spatial auditory plasticity has important implications for the design of spatial auditory displays.


The Journal of Neuroscience | 2016

Role of Binaural Temporal Fine Structure and Envelope Cues in Cocktail-Party Listening.

Jayaganesh Swaminathan; Christine R. Mason; Timothy Streeter; Virginia Best; Elin Roverud; Gerald Kidd

While conversing in a crowded social setting, a listener is often required to follow a target speech signal amid multiple competing speech signals (the so-called “cocktail party” problem). In such situations, separation of the target speech signal in azimuth from the interfering masker signals can lead to an improvement in target intelligibility, an effect known as spatial release from masking (SRM). This study assessed the contributions of two stimulus properties that vary with separation of sound sources, binaural envelope (ENV) and temporal fine structure (TFS), to SRM in normal-hearing (NH) human listeners. Target speech was presented from the front and speech maskers were either colocated with or symmetrically separated from the target in azimuth. The target and maskers were presented either as natural speech or as “noise-vocoded” speech in which the intelligibility was conveyed only by the speech ENVs from several frequency bands; the speech TFS within each band was replaced with noise carriers. The experiments were designed to preserve the spatial cues in the speech ENVs while retaining/eliminating them from the TFS. This was achieved by using the same/different noise carriers in the two ears. A phenomenological auditory-nerve model was used to verify that the interaural correlations in TFS differed across conditions, whereas the ENVs retained a high degree of correlation, as intended. Overall, the results from this study revealed that binaural TFS cues, especially for frequency regions below 1500 Hz, are critical for achieving SRM in NH listeners. Potential implications for studying SRM in hearing-impaired listeners are discussed.

SIGNIFICANCE STATEMENT: Acoustic signals received by the auditory system pass first through an array of physiologically based band-pass filters. Conceptually, at the output of each filter, there are two principal forms of temporal information: slowly varying fluctuations in the envelope (ENV) and rapidly varying fluctuations in the temporal fine structure (TFS). The importance of these two types of information in everyday listening (e.g., conversing in a noisy social situation; the “cocktail-party” problem) has not been established. This study assessed the contributions of binaural ENV and TFS cues for understanding speech in multiple-talker situations. Results suggest that, whereas the ENV cues are important for speech intelligibility, binaural TFS cues are critical for perceptually segregating the different talkers and thus for solving the cocktail party problem.
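The noise-vocoding manipulation described in this abstract (keep each band's envelope, replace its fine structure with a noise carrier; using the same versus different carriers in the two ears retains versus removes binaural TFS) can be sketched as follows. This is a minimal illustration assuming a Butterworth filter bank and Hilbert-envelope extraction; `noise_vocode` and its parameters are hypothetical, not the authors' actual implementation.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, band_edges, rng=None):
    """Noise-vocode `signal`: keep per-band envelopes (ENV), replace the
    temporal fine structure (TFS) with band-limited noise carriers.
    `band_edges` is a list of (lo, hi) pairs in Hz. Illustrative sketch only.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    out = np.zeros(len(signal))
    for lo, hi in band_edges:
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)
        env = np.abs(hilbert(band))                 # slowly varying envelope
        carrier = sosfiltfilt(sos, rng.standard_normal(len(signal)))
        carrier /= np.sqrt(np.mean(carrier ** 2)) + 1e-12  # unit-RMS noise TFS
        out += env * carrier                        # ENV preserved, TFS replaced
    return out

# Example: vocode a 1-s, 300-Hz tone into two bands
fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 300 * t)
voc = noise_vocode(tone, fs, [(100, 800), (800, 3000)])
```

Passing the same `rng` (hence identical carriers) for the left- and right-ear signals would preserve interaural TFS correlation; independent `rng` seeds per ear would decorrelate it, mirroring the same/different-carrier manipulation in the study.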


Journal of the Acoustical Society of America | 2013

Design and preliminary testing of a visually guided hearing aid.

Gerald Kidd; Sylvain Favrot; Joseph G. Desloge; Timothy Streeter; Christine R. Mason

An approach to hearing aid design is described, and preliminary acoustical and perceptual measurements are reported, in which an acoustic beamforming microphone array is coupled to an eyeglasses-mounted eye tracker. This visually guided hearing aid (VGHA), currently a laboratory-based prototype, senses direction of gaze using the eye tracker, and an interface converts those values into control signals that steer the acoustic beam accordingly. Preliminary speech intelligibility measurements with noise and speech maskers revealed near-normal or better-than-normal spatial release from masking with the VGHA. Although not yet a wearable prosthesis, the principle underlying the device is supported by these findings.
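The acoustic beamforming at the heart of a device like the VGHA can be illustrated with a generic delay-and-sum beamformer for a linear array: each channel is time-shifted so that sound arriving from the steered direction adds coherently. This is a standard textbook sketch under stated assumptions (far-field plane waves, known mic positions), not the device's actual algorithm; `delay_and_sum` and its steering convention are illustrative.

```python
import numpy as np

def delay_and_sum(mic_signals, fs, mic_x, angle_deg, c=343.0):
    """Steer a linear array toward `angle_deg` (0 = broadside) by aligning
    and averaging channels. `mic_signals` is (n_mics, n_samples); `mic_x`
    holds mic positions in meters along the array axis. Sketch only."""
    angle = np.deg2rad(angle_deg)
    delays = mic_x * np.sin(angle) / c            # per-mic delay in seconds
    n = mic_signals.shape[1]
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    out = np.zeros(n)
    for sig, tau in zip(mic_signals, delays):
        # Fractional time-shift in the frequency domain to align this channel
        spec = np.fft.rfft(sig) * np.exp(2j * np.pi * freqs * tau)
        out += np.fft.irfft(spec, n)
    return out / len(mic_signals)

# Example: three identical channels, steered broadside (zero delays)
fs = 16000
t = np.arange(fs // 10) / fs
mics = np.stack([np.sin(2 * np.pi * 500 * t)] * 3)
steered = delay_and_sum(mics, fs, np.array([-0.05, 0.0, 0.05]), 0.0)
```

In the VGHA, the steering angle would be supplied continuously by the eye tracker rather than fixed, which is what makes the look direction gaze-controlled.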


Journal of the Acoustical Society of America | 2008

Effect of spatial uncertainty of masker on masked detection for nonspeech stimuli

Wei Li Fan; Timothy Streeter; Nathaniel I. Durlach

Research on informational masking for nonspeech stimuli has focused on the effects of spectral uncertainty in the masker. In this letter, results are presented from some preliminary probe experiments in which the spectrum of the masker is held fixed but the spatial properties of the masker are randomized. In addition, in some tests, the overall level of the stimulus is randomized. These experiments differ from previous experiments that have measured the effect of spatial uncertainty on masking in that the only attributes (aside from level) that distinguish the target from the masker are the spatial attributes; in all of the tests, the target and masker were statistically identical, statistically independent, narrowband noise signals. In general, the results indicate that detection performance is degraded by spatial uncertainty in the masker but that compared both to the effects of spectral uncertainty and to the effects of overall-level uncertainty, the effects of spatial uncertainty are relatively small.


Trends in Hearing | 2017

The Benefit of a Visually Guided Beamformer in a Dynamic Speech Task

Virginia Best; Elin Roverud; Timothy Streeter; Christine R. Mason; Gerald Kidd

The aim of this study was to evaluate the performance of a visually guided hearing aid (VGHA) under conditions designed to capture some aspects of “real-world” communication settings. The VGHA uses eye gaze to steer the acoustic look direction of a highly directional beamforming microphone array. Although the VGHA has been shown to enhance speech intelligibility for fixed-location, frontal targets, it is currently not known whether these benefits persist in the face of frequent changes in location of the target talker that are typical of conversational turn-taking. Participants were 14 young adults, 7 with normal hearing and 7 with bilateral sensorineural hearing impairment. Target stimuli were sequences of 12 question–answer pairs that were embedded in a mixture of competing conversations. The participant’s task was to respond via a key press after each answer indicating whether it was correct or not. Spatialization of the stimuli and microphone array processing were done offline using recorded impulse responses, before presentation over headphones. The look direction of the array was steered according to the eye movements of the participant as they followed a visual cue presented on a widescreen monitor. Performance was compared for a “dynamic” condition in which the target stimulus moved between three locations, and a “fixed” condition with a single target location. The benefits of the VGHA over natural binaural listening observed in the fixed condition were reduced in the dynamic condition, largely because visual fixation was less accurate.


Trends in Hearing | 2016

A Flexible Question-and-Answer Task for Measuring Speech Understanding

Virginia Best; Timothy Streeter; Elin Roverud; Christine R. Mason; Gerald Kidd

This report introduces a new speech task based on simple questions and answers. The task differs from a traditional sentence recall task in that it involves an element of comprehension and can be implemented in an ongoing fashion. It also contains two target items (the question and the answer) that may be associated with different voices and locations to create dynamic listening scenarios. A set of 227 questions was created, covering six broad categories (days of the week, months of the year, numbers, colors, opposites, and sizes). All questions and their one-word answers were spoken by 11 female and 11 male talkers. In this study, listeners were presented with question-answer pairs and asked to indicate whether the answer was true or false. Responses were given as simple button or key presses, which are quick to make and easy to score. Two preliminary experiments are presented that illustrate different ways of implementing the basic task. In the first experiment, question-answer pairs were presented in speech-shaped noise, and performance was compared across subjects, question categories, and time, to examine the different sources of variability. In the second experiment, sequences of question-answer pairs were presented amidst competing conversations in an ongoing, spatially dynamic listening scenario. Overall, the question-and-answer task appears to be feasible and could be implemented flexibly in a number of different ways.


Journal of the Acoustical Society of America | 2013

Spatial release from masking for noise-vocoded speech

Jayaganesh Swaminathan; Christine R. Mason; Timothy Streeter; Virginia Best; Gerald Kidd

Spatially separating a speech target from interfering masker(s) generally improves target intelligibility; an effect known as spatial release from masking (SRM). This study assessed the contribution of envelope cues to SRM. Target speech was presented from the front (0° azimuth) and speech maskers were either colocated or symmetrically separated from the target in azimuth (±15°, ±30°, ±45° and ±90°) using KEMAR head-related transfer functions. The target and maskers were presented either as natural speech or as noise-vocoded speech. For the vocoded speech, intelligibility was conveyed only by the envelopes from M frequency bands. Experiment 1 examined the effects of varying the number of frequency bands for the vocoder, and the degree of target-masker spatial separation, on SRM. Experiment 2 examined the effects of low-pass filtering the envelopes of the vocoded speech bands on SRM. Preliminary results for experiment 1 indicated that SRM improved as the number of spectral channels providing independent en...
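The SRM metric these abstracts rely on is simply the difference in speech reception threshold (SRT) between the colocated and spatially separated masker configurations. A minimal worked example, with the function name and the example values purely illustrative:

```python
def spatial_release_from_masking(colocated_srt_db, separated_srt_db):
    """SRM in dB: the intelligibility benefit of spatially separating the
    maskers from the target. Lower (better) separated SRTs give positive SRM.
    """
    return colocated_srt_db - separated_srt_db

# e.g. a colocated SRT of -2 dB and a separated SRT of -10 dB give 8 dB of SRM
srm = spatial_release_from_masking(-2.0, -10.0)
```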


ACM Transactions on Applied Perception | 2005

Spatial auditory display: Comments on Shinn-Cunningham et al., ICAD 2001

Barbara G. Shinn-Cunningham; Timothy Streeter

Spatial auditory displays have received a great deal of attention in the community investigating how to present information through sound. This short commentary discusses our 2001 ICAD paper (Shinn-Cunningham, Streeter, and Gyss), which explored whether it is possible to provide enhanced spatial auditory information in an auditory display. It provides some historical context and describes how work on representing information in spatial auditory displays has progressed over the last five years.


Scientific Reports | 2015

Erratum: Musical training, individual differences and the cocktail party problem.

Jayaganesh Swaminathan; Christine R. Mason; Timothy Streeter; Virginia Best; Gerald Kidd; Aniruddh D. Patel


Collaboration


Dive into Timothy Streeter's collaborations.

Top Co-Authors

Joseph G. Desloge
Massachusetts Institute of Technology

Sylvain Favrot
Technical University of Denmark