Publications

Featured research published by Trevor R. Agus.


Neuron | 2010

Rapid Formation of Robust Auditory Memories: Insights from Noise

Trevor R. Agus; Simon J. Thorpe; Daniel Pressnitzer

Before a natural sound can be recognized, an auditory signature of its source must be learned through experience. Here we used random waveforms to probe the formation of new memories for arbitrary complex sounds. A behavioral measure was designed, based on the detection of repetitions embedded in noises up to 4 s long. Unbeknownst to listeners, some noise samples reoccurred randomly throughout an experimental block. Results showed that repeated exposure induced learning for otherwise totally unpredictable and meaningless sounds. The learning was unsupervised and resilient to interference from other task-relevant noises. When memories were formed, they emerged rapidly, performance became abruptly near-perfect, and multiple noises were remembered for several weeks. The acoustic transformations to which recall was tolerant suggest that the learned features were local in time. We propose that rapid sensory plasticity could explain how the auditory brain creates useful memories from the ever-changing, but sometimes repeating, acoustical world.
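The paradigm described above can be sketched in code. The following is a minimal illustration (not the authors' code): a "repeated noise" (RN) trial concatenates one noise sample with itself so listeners can detect the within-trial repetition, while, unbeknownst to them, one reference sample ("RefRN") reoccurs across trials. Sampling rate, durations, and trial structure here are illustrative assumptions.

```python
import numpy as np

def rn_trial(rng, fs=44100, seg_dur=0.5, sample=None):
    """Build one repeated-noise trial: a noise segment played twice in a row.
    If `sample` is given, that segment is reused (the reoccurring RefRN)."""
    n = int(fs * seg_dur)
    if sample is None:
        sample = rng.standard_normal(n)          # fresh noise for this trial
    return np.concatenate([sample, sample]), sample

rng = np.random.default_rng(0)
ref = rng.standard_normal(int(44100 * 0.5))      # the reoccurring RefRN sample
trial_refrn, _ = rn_trial(rng, sample=ref)       # trial re-using the reference
trial_rn, _ = rn_trial(rng)                      # ordinary trial with fresh noise
```

Because the second half of each trial is an exact copy of the first, any above-chance repetition detection must rely on within-trial comparison; learning of RefRN shows up as better detection for the reoccurring sample.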


Journal of the Acoustical Society of America | 2012

Fast recognition of musical sounds based on timbre.

Trevor R. Agus; Clara Suied; Simon J. Thorpe; Daniel Pressnitzer

Human listeners seem to have an impressive ability to recognize a wide variety of natural sounds. However, there is surprisingly little quantitative evidence to characterize this fundamental ability. Here the speed and accuracy of musical-sound recognition were measured psychophysically with a rich but acoustically balanced stimulus set. The set comprised recordings of notes from musical instruments and sung vowels. In a first experiment, reaction times were collected for three target categories: voice, percussion, and strings. In a go/no-go task, listeners reacted as quickly as possible to members of a target category while withholding responses to distractors (a diverse set of musical instruments). Results showed near-perfect accuracy and fast reaction times, particularly for voices. In a second experiment, voices were recognized among strings and vice versa. Again, reaction times to voices were faster. In a third experiment, auditory chimeras were created to retain only spectral or temporal features of the voice. Chimeras were recognized accurately, but not as quickly as natural voices. Altogether, the data suggest rapid and accurate neural mechanisms for musical-sound recognition based on selectivity to complex spectro-temporal signatures of sound sources.


Proceedings of the Royal Society B: Biological Sciences, 281(1791) | 2014

Representations of specific acoustic patterns in the auditory cortex and hippocampus

Sukhbinder Kumar; Heidi M. Bonnici; Sundeep Teki; Trevor R. Agus; Daniel Pressnitzer; Eleanor A. Maguire; Timothy D. Griffiths

Previous behavioural studies have shown that repeated presentation of a randomly chosen acoustic pattern leads to the unsupervised learning of some of its specific acoustic features. The objective of our study was to determine the neural substrate for the representation of freshly learnt acoustic patterns. Subjects first performed a behavioural task that resulted in the incidental learning of three different noise-like acoustic patterns. During subsequent high-resolution functional magnetic resonance imaging scanning, subjects were then exposed again to these three learnt patterns and to others that had not been learned. Multi-voxel pattern analysis was used to test if the learnt acoustic patterns could be ‘decoded’ from the patterns of activity in the auditory cortex and medial temporal lobe. We found that activity in planum temporale and the hippocampus reliably distinguished between the learnt acoustic patterns. Our results demonstrate that these structures are involved in the neural representation of specific acoustic patterns after they have been learnt.


International Symposium on Circuits and Systems | 2010

Characteristics of human voice processing

Trevor R. Agus; Simon J. Thorpe; Clara Suied; Daniel Pressnitzer

As human listeners, it seems that we should be experts in processing vocal sounds. Here we present new behavioral data that confirm and quantify a voice-processing advantage in a range of natural sound recognition tasks. The experiments focus on time: the reaction-time for recognition, and the shortest sound segment required for recognition. Our behavioral results provide constraints on the features used by listeners to process voice sounds. Such features are likely to be jointly spectro-temporal, over multiple time scales.


Journal of the Acoustical Society of America | 2014

Auditory gist: Recognition of very short sounds from timbre cues

Clara Suied; Trevor R. Agus; Simon J. Thorpe; Nima Mesgarani; Daniel Pressnitzer

Sounds such as the voice or musical instruments can be recognized on the basis of timbre alone. Here, sound recognition was investigated with severely reduced timbre cues. Short snippets of naturally recorded sounds were extracted from a large corpus. Listeners were asked to report a target category (e.g., sung voices) among other sounds (e.g., musical instruments). All sound categories covered the same pitch range, so the task had to be solved on timbre cues alone. The minimum duration for which performance was above chance was found to be short, on the order of a few milliseconds, with the best performance for voice targets. Performance was independent of pitch and was maintained when stimuli contained less than a full waveform cycle. Recognition was not generally better when the sound snippets were time-aligned with the sound onset compared to when they were extracted with a random starting time. Finally, performance did not depend on feedback or training, suggesting that the cues used by listeners in the artificial gating task were similar to those relevant for longer, more familiar sounds. The results show that timbre cues for sound recognition are available at a variety of time scales, including very short ones.


Scientific Reports | 2017

Voice selectivity in the temporal voice area despite matched low-level acoustic cues.

Trevor R. Agus; Sébastien Paquette; Clara Suied; Daniel Pressnitzer; Pascal Belin

In human listeners, the temporal voice areas (TVAs) are regions of the superior temporal gyrus and sulcus that respond more to vocal sounds than a range of nonvocal control sounds, including scrambled voices, environmental noises, and animal cries. One interpretation of the TVA’s selectivity is based on low-level acoustic cues: compared to control sounds, vocal sounds may have stronger harmonic content or greater spectrotemporal complexity. Here, we show that the right TVA remains selective to the human voice even when accounting for a variety of acoustical cues. Using fMRI, single vowel stimuli were contrasted with single notes of musical instruments with balanced harmonic-to-noise ratios and pitches. We also used “auditory chimeras”, which preserved subsets of acoustical features of the vocal sounds. The right TVA was preferentially activated only for the natural human voice. In particular, the TVA did not respond more to artificial chimeras preserving the exact spectral profile of voices. Additional acoustic measures, including temporal modulations and spectral complexity, could not account for the increased activation. These observations rule out simple acoustical cues as a basis for voice selectivity in the TVAs.
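One common way to construct a "spectral" auditory chimera of the kind described above is to combine the magnitude spectrum of one sound with the phase spectrum of another. The sketch below is illustrative only, not the paper's exact stimulus-generation method: it preserves the voice's spectral profile while replacing its temporal fine structure with that of another sound.

```python
import numpy as np

def spectral_chimera(voice, other):
    """Keep the magnitude spectrum of `voice` but the phase spectrum of
    `other`: the result retains the voice's spectral profile while its
    temporal structure comes from the other sound."""
    assert len(voice) == len(other)
    mag = np.abs(np.fft.rfft(voice))         # spectral profile of the voice
    phase = np.angle(np.fft.rfft(other))     # temporal structure of the other sound
    return np.fft.irfft(mag * np.exp(1j * phase), n=len(voice))

rng = np.random.default_rng(0)
voice = rng.standard_normal(1024)            # stand-ins for real recordings
other = rng.standard_normal(1024)
chimera = spectral_chimera(voice, other)
```

By construction, the chimera's magnitude spectrum matches the voice's exactly, which is the property that lets such stimuli test whether spectral profile alone drives voice-selective responses.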


Scientific Reports | 2016

Fast response to human voices in autism

I-Fan Lin; Trevor R. Agus; Clara Suied; Daniel Pressnitzer; Takashi Yamada; Yoko Komine; Nobumasa Kato; Makio Kashino

Individuals with autism spectrum disorders (ASD) are reported to allocate less spontaneous attention to voices. Here, we investigated how vocal sounds are processed in ASD adults, when those sounds are attended. Participants were asked to react as fast as possible to target stimuli (either voices or strings) while ignoring distracting stimuli. Response times (RTs) were measured. Results showed that, similar to neurotypical (NT) adults, ASD adults were faster to recognize voices compared to strings. Surprisingly, ASD adults had even shorter RTs for voices than the NT adults, suggesting a faster voice recognition process. To investigate the acoustic underpinnings of this effect, we created auditory chimeras that retained only the temporal or the spectral features of voices. For the NT group, no RT advantage was found for the chimeras compared to strings: both sets of features had to be present to observe an RT advantage. However, for the ASD group, shorter RTs were observed for both chimeras. These observations indicate that the previously observed attentional deficit to voices in ASD individuals could be due to a failure to combine acoustic features, even though such features may be well represented at a sensory level.


International Conference on Acoustics, Speech, and Signal Processing | 2012

A model of attention-driven scene analysis

Malcolm Slaney; Trevor R. Agus; Shih-Chii Liu; Merve Kaya; Mounya Elhilali

Parsing complex acoustic scenes involves an intricate interplay between bottom-up, stimulus-driven salient elements in the scene with top-down, goal-directed, mechanisms that shift our attention to particular parts of the scene. Here, we present a framework for exploring the interaction between these two processes in a simulated cocktail party setting. The model shows improved digit recognition in a multi-talker environment with a goal of tracking the source uttering the highest value. This work highlights the relevance of both data-driven and goal-driven processes in tackling real multi-talker, multi-source sound analysis.


Journal of the Acoustical Society of America | 2011

Importance of temporal-envelope speech cues in different spectral regions

Marine Ardoint; Trevor R. Agus; Stanley Sheft; Christian Lorenzi

This study investigated the ability to use temporal-envelope (E) cues in a consonant identification task when presented within one or two frequency bands. Syllables were split into five bands spanning the range 70-7300 Hz with each band processed to preserve E cues and degrade temporal fine-structure cues. Identification scores were measured for normal-hearing listeners in quiet for individual processed bands and for pairs of bands. Consistent patterns of results were obtained in both the single- and dual-band conditions: identification scores increased systematically with band center frequency, showing that E cues in the higher bands (1.8-7.3 kHz) convey greater information.
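Processing a band to "preserve E cues and degrade temporal fine-structure cues" is classically done with an envelope vocoder. The sketch below illustrates that general technique for a single band (filter orders, band edges, and the noise carrier are assumptions, not the paper's exact processing): band-pass filter, extract the Hilbert envelope, and impose it on band-limited noise.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def vocode_band(x, fs, lo, hi, seed=0):
    """Envelope-vocode one frequency band: the temporal envelope (E) of the
    band is kept, while its temporal fine structure is replaced by noise."""
    b, a = butter(3, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    band = filtfilt(b, a, x)                                 # analysis band
    env = np.abs(hilbert(band))                              # temporal envelope (E)
    rng = np.random.default_rng(seed)
    carrier = filtfilt(b, a, rng.standard_normal(len(x)))    # band-limited noise carrier
    return env * carrier                                     # E cues on degraded fine structure

fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 1000 * t)                             # 1-s toy input signal
out = vocode_band(x, fs, 700.0, 1800.0)
```

Running this per band over the 70-7300 Hz range and summing the bands yields the kind of one-, two-, or five-band E-only stimuli the study describes.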


Journal of the Acoustical Society of America | 2017

Auditory memory for random time patterns

HiJee Kang; Trevor R. Agus; Daniel Pressnitzer

The acquisition of auditory memory for temporal patterns was investigated. The temporal patterns were random sequences of irregularly spaced clicks. Participants performed a task previously used to study auditory memory for noise [Agus, Thorpe, and Pressnitzer (2010). Neuron 66, 610-618]. The memory for temporal patterns displayed strong similarities with the memory for noise: temporal patterns were learnt rapidly, in an unsupervised manner, and could be distinguished from statistically matched patterns after learning. There was, however, a qualitative difference from the memory for noise. For temporal patterns, no memory transfer was observed after time reversals, showing that both the time intervals and their order were represented in memory. Remarkably, learning was observed over a broad range of time scales, which encompassed rhythm-like and buzz-like temporal patterns. Temporal patterns present specific challenges to the neural mechanisms of plasticity, because the information to be learnt is distributed over time. Nevertheless, the present data show that the acquisition of novel auditory memories can be as efficient for temporal patterns as for sounds containing additional spectral and spectro-temporal cues, such as noise. This suggests that the rapid formation of memory traces may be a general by-product of repeated auditory exposure.
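The stimuli described above can be sketched as random click trains; a time reversal preserves the set of inter-click intervals but reverses their order, which is the manipulation for which no memory transfer was observed. The parameters below (duration, rate, click count) are illustrative assumptions.

```python
import numpy as np

def click_train(rng, dur=1.0, fs=10000, n_clicks=12):
    """Random temporal pattern: unit clicks at irregularly spaced random
    times within `dur` seconds, on a grid of `fs` samples per second."""
    times = np.sort(rng.uniform(0, dur, n_clicks))
    x = np.zeros(int(dur * fs))
    x[(times * fs).astype(int)] = 1.0
    return x

rng = np.random.default_rng(1)
pattern = click_train(rng)
reversed_pattern = pattern[::-1]   # same intervals, reversed order
```

Because reversal leaves the interval statistics intact, any learning that fails to transfer to the reversed pattern must encode the order of the intervals, not just their distribution.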

Collaboration

Top co-authors of Trevor R. Agus:

Clara Suied, École Normale Supérieure
Axelle Calcus, Université libre de Bruxelles
Cécile Colin, Université libre de Bruxelles
Paul Deltenre, Université libre de Bruxelles
Régine Kolinsky, Université libre de Bruxelles
Anjali Bhatara, Paris Descartes University
Christian Lorenzi, École Normale Supérieure