Network


Latest external collaborations at the country level.

Hotspot


Research topics in which Scott Bressler is active.

Publication


Featured research published by Scott Bressler.


IEEE Transactions on Audio, Speech, and Language Processing | 2010

Evaluating Source Separation Algorithms With Reverberant Speech

Michael I. Mandel; Scott Bressler; Barbara G. Shinn-Cunningham; Daniel P. W. Ellis

This paper examines the performance of several source separation systems on a speech separation task for which human intelligibility has previously been measured. For anechoic mixtures, automatic speech recognition (ASR) performance on the separated signals is quite similar to human performance. In reverberation, however, while signal separation has some benefit for ASR, the results are still far below those of human listeners facing the same task. Performing this same experiment with a number of oracle masks created with a priori knowledge of the separated sources motivates a new objective measure of separation performance, the Direct-path, Early echo, and Reverberation, of the Target and Masker (DERTM), which is closely related to the ASR results. This measure indicates that while the non-oracle algorithms successfully reject the direct-path signal from the masking source, they reject less of its reverberation, explaining the disappointing ASR performance.
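
The oracle masks mentioned above rely on a priori knowledge of the separated sources. As a point of reference (not the paper's DERTM measure itself), here is a minimal sketch of one common oracle mask, the ideal binary mask, assuming STFTs of the clean target and masker are available as NumPy arrays:

```python
import numpy as np

def ideal_binary_mask(target_stft, masker_stft, lc_db=0.0):
    """Oracle binary mask: keep time-frequency cells where the target
    exceeds the masker by at least lc_db decibels."""
    target_power = np.abs(target_stft) ** 2
    masker_power = np.abs(masker_stft) ** 2
    # Local SNR in dB at each time-frequency cell (epsilon avoids division by zero).
    snr_db = 10.0 * np.log10((target_power + 1e-12) / (masker_power + 1e-12))
    return (snr_db >= lc_db).astype(float)

# Applied to the mixture STFT, the mask yields an oracle target estimate:
# target_estimate = ideal_binary_mask(T, M) * mixture_stft
```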


Psychological Research / Psychologische Forschung | 2014

Bottom-up influences of voice continuity in focusing selective auditory attention

Scott Bressler; Salwa Masud; Hari Bharadwaj; Barbara G. Shinn-Cunningham

Selective auditory attention causes a relative enhancement of the neural representation of important information and suppression of the neural representation of distracting sound, which enables a listener to analyze and interpret information of interest. Some studies suggest that in both vision and audition, the “unit” on which attention operates is an object: an estimate of the information coming from a particular external source out in the world. In this view, which object ends up in the attentional foreground depends on the interplay of top-down, volitional attention and stimulus-driven, involuntary attention. Here, we test the idea that auditory attention is object based by exploring whether continuity of a non-spatial feature (talker identity, a feature that helps acoustic elements bind into one perceptual object) also influences selective attention performance. In Experiment 1, we show that perceptual continuity of target talker voice helps listeners report a sequence of spoken target digits embedded in competing reversed digits spoken by different talkers. In Experiment 2, we provide evidence that this benefit of voice continuity is obligatory and automatic, as if voice continuity biases listeners by making it easier to focus on a subsequent target digit when it is perceptually linked to what was already in the attentional foreground. Our results support the idea that feature continuity enhances streaming automatically, thereby influencing the dynamic processes that allow listeners to successfully attend to objects through time in the cacophony that assails our ears in many everyday settings.


Hearing Research | 2017

Sensory coding and cognitive processing of sound in Veterans with blast exposure

Scott Bressler; Hannah Goldberg; Barbara G. Shinn-Cunningham

Recent anecdotal reports from VA audiology clinics as well as a few published studies have identified a sub‐population of Service Members seeking treatment for problems communicating in everyday, noisy listening environments despite having normal to near‐normal hearing thresholds. Because of their increased risk of exposure to dangerous levels of prolonged noise and transient explosive blast events, communication problems in these soldiers could be due either to hearing loss (traditional or “hidden”) in the auditory sensory periphery or to blast‐induced injury to cortical networks associated with attention. We found that out of the 14 blast‐exposed Service Members recruited for this study, 12 had hearing thresholds in the normal to near‐normal range. A majority of these participants reported having problems specifically related to failures with selective attention. Envelope following responses (EFRs) measuring neural coding fidelity of the auditory brainstem to suprathreshold sounds were similar between blast‐exposed and non‐blast controls. Blast‐exposed subjects performed substantially worse than non‐blast controls in an auditory selective attention task in which listeners classified the melodic contour (rising, falling, or “zig‐zagging”) of one of three simultaneous, competing tone sequences. Salient pitch and spatial differences made for easy segregation of the three concurrent melodies. Poor performance in the blast‐exposed subjects was associated with weaker evoked response potentials (ERPs) in frontal EEG channels, as well as a failure of attention to enhance the neural responses evoked by a sequence when it was the target compared to when it was a distractor. These results suggest that communication problems in these listeners cannot be explained by compromised sensory representations in the auditory periphery, but rather point to lingering blast‐induced damage to cortical networks implicated in the control of attention. Because all study participants also suffered from post‐traumatic stress disorder (PTSD), follow‐up studies are required to tease apart the contributions of PTSD and blast‐induced injury on cognitive performance.

Highlights:
Blast‐exposed Service Members have communication difficulties in noisy settings.
Communication problems present despite normal to near‐normal hearing thresholds.
Brainstem representations of supra‐threshold coding are similar to non‐blast controls.
Cortical networks of attention are affected by exposure to blast.


Journal of the Acoustical Society of America | 2008

Effects of pitch and spatial separation on selective attention in anechoic and reverberant environments

Scott Bressler; Barbara G. Shinn-Cunningham

Subjects identified a random, spoken sequence of five monotonized digits (F0 = 100 Hz) presented from 0° azimuth. A monotonized masking sentence (F0 = 84, 89, 94, 100, 106, 112, or 119 Hz) was presented simultaneously from either 0° or +90° azimuth (chosen randomly on each trial). The same talker recorded both target and masker speech. KEMAR‐derived transfer functions simulated either a reverberant or anechoic environment. In contrast to previous studies, in the anechoic condition, differences in pitch provided little benefit and differences in location gave improvements explainable by improvements in the target‐to‐masker ratio at the acoustically better ear. In reverberant conditions, differences in target and masker location improved performance more than differences in pitch; however, performance was best when there were differences in both location and pitch. Results suggest that when a target utterance is easy to segregate and select (such as in anechoic space when the target is a digit sequence em...
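
The better-ear account invoked above can be made concrete with the target-to-masker ratio (TMR). A minimal sketch, assuming separate binaural recordings of the target and masker are available:

```python
import numpy as np

def better_ear_tmr_db(target_left, target_right, masker_left, masker_right):
    """Broadband target-to-masker ratio in dB at each ear; the 'better ear'
    is simply the ear with the higher ratio."""
    def tmr_db(t, m):
        # Ratio of signal energies, in decibels (epsilon avoids division by zero).
        return 10.0 * np.log10(np.sum(t ** 2) / (np.sum(m ** 2) + 1e-12))
    return max(tmr_db(target_left, masker_left),
               tmr_db(target_right, masker_right))
```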


Frontiers in Human Neuroscience | 2014

Automatic processing of abstract musical tonality

Inyong Choi; Hari Bharadwaj; Scott Bressler; Psyche Loui; Kyogu Lee; Barbara G. Shinn-Cunningham

Music perception builds on expectancy in harmony, melody, and rhythm. Neural responses to the violations of such expectations are observed in event-related potentials (ERPs) measured using electroencephalography. Most previous ERP studies demonstrating sensitivity to musical violations used stimuli that were temporally regular and musically structured, with less-frequent deviant events that differed from a specific expectation in some feature such as pitch, harmony, or rhythm. Here, we asked whether expectancies about Western musical scale are strong enough to elicit ERP deviance components. Specifically, we explored whether pitches inconsistent with an established scale context elicit deviant components even though equally rare pitches that fit into the established context do not, and even when their timing is unpredictable. We used Markov chains to create temporally irregular pseudo-random sequences of notes chosen from one of two diatonic scales. The Markov pitch-transition probabilities resulted in sequences that favored notes within the scale, but that lacked clear melodic, harmonic, or rhythmic structure. At the random positions, the sequence contained probe tones that were either within the established scale or were out of key. Our subjects ignored the note sequences, watching a self-selected silent movie with subtitles. Compared to the in-key probes, the out-of-key probes elicited a significantly larger P2 ERP component. Results show that random note sequences establish expectations of the “first-order” statistical property of musical key, even in listeners not actively monitoring the sequences.
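
To illustrate the stimulus-generation idea, here is a minimal sketch of drawing a pitch-class sequence from a Markov chain whose transition probabilities favor in-scale notes; the scale and weights below are illustrative stand-ins, not the study's actual values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: 12 chromatic pitch classes, C-major scale members.
SCALE = {0, 2, 4, 5, 7, 9, 11}  # C D E F G A B

# Transition weights favoring in-scale notes (illustrative values only).
weights = np.array([4.0 if pc in SCALE else 0.5 for pc in range(12)])
transition = np.tile(weights / weights.sum(), (12, 1))  # same row for every state

def generate_sequence(n_notes, start=0):
    """Draw a pseudo-random pitch-class sequence from the Markov chain."""
    seq = [start]
    for _ in range(n_notes - 1):
        seq.append(int(rng.choice(12, p=transition[seq[-1]])))
    return seq

print(generate_sequence(16))
```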


Journal of the Acoustical Society of America | 2013

Influences of perceptual continuity on everyday listening

Barbara G. Shinn-Cunningham; Golbarg Mehraei; Scott Bressler; Salwa Masud

In the natural environment, listeners face the challenge of parsing the sound mixture reaching their ears into individual sources, and maintaining attention on a source of interest through time long enough to extract meaning. A number of studies have shown that continuity of certain acoustic features (including pitch, location, timbre, etc.) allows the brain to group sound from one acoustic sound source together through time to form an auditory object or stream. This presentation reviews results demonstrating that auditory feature continuity has important consequences on how listeners maintain attention on a stream through time. For instance, continuity of a sound feature that a listener knows is irrelevant to the task at hand nonetheless impacts the ability to maintain auditory attention based on some other sound feature. Moreover, the influence of auditory feature continuity decreases as the time between events in a given sound stream increases. Taken together, these behavioral results support the idea ...


Journal of the Acoustical Society of America | 2016

How auditory scene understanding can fail in special populations

Barbara G. Shinn-Cunningham; Scott Bressler; Le Wang

One way to gain insight into how the brain normally processes auditory scenes is to explore the perceptual problems in special populations. In our lab, we are studying two types of listeners who, although very different, both have trouble making sense of complex scenes: listeners with autism who are minimally verbal (MVA listeners), and blast-exposed military Veterans. Neither population shows evidence of specific deficits in how well information is encoded subcortically. However, both show deficits when it comes to analyzing sound mixtures with multiple sound sources. Neural responses from MVA listeners suggest that their brains do not automatically organize sound mixtures that typically developing listeners hear as distinct objects, which likely impairs their ability to analyze the content of one sound source in a mixture. In contrast, Veterans exposed to blast appear to have difficulties focusing selective attention on a sound source in order to select it from a mixture, consistent with behavioral defi...


Journal of the Acoustical Society of America | 2014

Investigating stream segregation and spatial hearing using event-related brain responses

Le Wang; Samantha Messier; Scott Bressler; Elyse Sussman; Barbara G. Shinn-Cunningham

Several studies have used auditory mismatch negativity (MMN) to study auditory stream segregation. Few of these studies, however, focused on the stream segregation that involves spatial hearing. The present study used MMNs to examine the spatial aspect of stream segregation. Traditional oddball streams were presented in a passive listening paradigm, either in isolation or in the presence of an interfering stream. The interfering streams were engineered so that the deviants were not unexpected if the two streams were heard as perceptually integrated. Interfering streams were either spectrally distant from or close to the oddball stream, and were also spatially separated from the oddball stream. The deviant stimuli differed from the standards in perceived spatial location. For comparison, the MMN paradigm developed by Lepisto et al. (2009) using intensity deviants was repeated on the same group of subjects. For both paradigms, the MMN was strongest when the oddball stream was presented in isolation, less st...


Journal of the Acoustical Society of America | 2014

Behavioral and neural measures of auditory selective attention in blast-exposed veterans with traumatic brain injury

Scott Bressler; Inyong Choi; Hari Bharadwaj; Hannah Goldberg; Barbara G. Shinn-Cunningham

It is estimated that 15–20% of veterans returning from combat operations have experienced some sort of traumatic brain injury (TBI), a majority of which are the result of exposure to blast forces from military ordnance. Many of these veterans complain of complications when understanding speech in noisy environments, even when they have normal or near normal audiograms. Here, ten veterans diagnosed with mild TBI performed a selective auditory attention task in which they identified the shape of one of three simultaneously occurring ITD-spatialized melodies. TBI subject performance was significantly worse than that of 17 adult controls. Importantly, the veterans had hearing thresholds within 20 dB HL up to 8 kHz and brainstem responses indistinguishable from those of the controls. Cortical response potentials (measured on the scalp using electroencephalography) from correctly identified trials showed weaker attention-related modulation than in controls. These preliminary results suggest blast exposure dama...
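
ITD spatialization of the kind described here amounts to delaying one ear's copy of the signal. A minimal sketch, with hypothetical parameter values rather than the study's actual stimuli:

```python
import numpy as np

def apply_itd(mono, fs, itd_s):
    """Spatialize a mono signal with an interaural time difference.
    Positive itd_s delays the left ear, lateralizing the sound to the
    right; uses an integer-sample delay for simplicity."""
    delay = int(round(abs(itd_s) * fs))
    delayed = np.concatenate([np.zeros(delay), mono])  # lagging ear
    padded = np.concatenate([mono, np.zeros(delay)])   # leading ear
    left, right = (delayed, padded) if itd_s > 0 else (padded, delayed)
    return np.stack([left, right], axis=1)  # (n_samples, 2) stereo array

# e.g., apply_itd(melody, fs=44100, itd_s=500e-6) shifts a melody ~500 us rightward.
```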


Journal of the Acoustical Society of America | 2013

Subcortical and cortical neural correlates of individual differences in temporal auditory acuity

Inyong Choi; Scott Bressler; Hari Bharadwaj; Barbara G. Shinn-Cunningham

Parsing complex auditory scenes requires the activation and coordination of many neuronal centers, both in subcortical and cortical portions of the auditory pathway. Several studies have demonstrated that even normal-hearing listeners exhibit a range of abilities on various auditory tasks. Previous work in our lab suggests this variability may be due, in part, to degraded temporal encoding of supra-threshold stimuli at the level of the brainstem. A family of studies has shown that musical experience is correlated with differences in brainstem encoding as well as long-term plasticity in the cortex, results that provide the intriguing possibility that training may influence supra-threshold sound encoding. Here we explore methods for measuring subcortical and cortical neural activity in response to complex stimuli using electroencephalography (EEG). Subjects were tested in a passive mismatch negativity (MMN) paradigm using musical chords and tones. Brainstem frequency following responses (FFRs), a measure of...

Collaboration


Scott Bressler's collaborations.

Top Co-Authors

Elyse Sussman

Albert Einstein College of Medicine
