Publication


Featured research published by Stephen McAdams.


Psychological Research / Psychologische Forschung | 1995

Perceptual scaling of synthesized musical timbres: common dimensions, specificities, and latent subject classes.

Stephen McAdams; Suzanne Winsberg; Sophie Donnadieu; Geert De Soete; Jochen Krimphoff

To study the perceptual structure of musical timbre and the effects of musical training, timbral dissimilarities of synthesized instrument sounds were rated by professional musicians, amateur musicians, and nonmusicians. The data were analyzed with an extended version of the multidimensional scaling algorithm CLASCAL (Winsberg & De Soete, 1993), which estimates the number of latent classes of subjects, the coordinates of each timbre on common Euclidean dimensions, a specificity value of unique attributes for each timbre, and a separate weight for each latent class on each of the common dimensions and the set of specificities. Five latent classes were found for a three-dimensional spatial model with specificities. Common dimensions were quantified psychophysically in terms of log-rise time, spectral centroid, and degree of spectral variation. The results further suggest that musical timbres possess specific attributes not accounted for by these shared perceptual dimensions. Weight patterns indicate that the perceptual salience of dimensions and specificities varied across classes. Class structure was not clearly related to biographical factors associated with degree of musical training and activity, though musicians gave more precise and coherent judgments than did nonmusicians or amateurs. The model with latent classes and specificities gave a better fit to the data and made the acoustic correlates of the common dimensions more interpretable.
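
For reference, the CLASCAL model represents the dissimilarity between timbres i and j for latent class t as a weighted Euclidean distance augmented by specificities. The rendering below is a standard statement of the model (Winsberg & De Soete, 1993), not quoted from this paper:

d_{ijt} = \left[ \sum_{r=1}^{R} w_{tr}\,(x_{ir} - x_{jr})^{2} + v_{t}\,(s_{i} + s_{j}) \right]^{1/2}

where x_{ir} is the coordinate of timbre i on common dimension r, s_i >= 0 is its specificity, w_{tr} is the weight of class t on dimension r, and v_t is the weight of class t on the set of specificities.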


Journal of the Acoustical Society of America | 2005

Acoustic correlates of timbre space dimensions: a confirmatory study using synthetic tones.

Anne Caclin; Stephen McAdams; Bennett K. Smith; Suzanne Winsberg

Timbre spaces represent the organization of perceptual distances, as measured with dissimilarity ratings, among tones equated for pitch, loudness, and perceived duration. A number of potential acoustic correlates of timbre-space dimensions have been proposed in the psychoacoustic literature, including attack time, spectral centroid, spectral flux, and spectrum fine structure. The experiments reported here were designed as direct tests of the perceptual relevance of these acoustical parameters for timbre dissimilarity judgments. Listeners presented with carefully controlled synthetic tones use attack time, spectral centroid, and spectrum fine structure in dissimilarity rating experiments. These parameters thus appear as major determinants of timbre. However, spectral flux appears as a less salient timbre parameter, its salience depending on the number of other dimensions varying concurrently in the stimulus set. Dissimilarity ratings were analyzed with two different multidimensional scaling models (CLASCAL and CONSCAL), the latter providing psychophysical functions constrained by the physical parameters. Their complementarity is discussed.
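
As a rough illustration of two of these acoustical parameters, the Python sketch below computes a spectral centroid and a log attack time for a mono signal. It assumes NumPy, uses a crude rectified envelope, and adopts a 10%-90% rise criterion as one common convention; none of these choices is taken from the study itself.

```python
import numpy as np

def spectral_centroid(x, sr):
    """Amplitude-weighted mean frequency of the magnitude spectrum (Hz)."""
    mag = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
    return np.sum(freqs * mag) / np.sum(mag)

def log_attack_time(x, sr, lo=0.1, hi=0.9):
    """log10 of the time (s) for the envelope to rise from `lo` to `hi`
    of its maximum; a crude rectified envelope stands in for a proper
    RMS or Hilbert envelope."""
    env = np.abs(x)
    peak = env.max()
    i_lo = np.argmax(env >= lo * peak)   # first sample above lo * peak
    i_hi = np.argmax(env >= hi * peak)   # first sample above hi * peak
    return np.log10(max(i_hi - i_lo, 1) / sr)

# Example: a 440 Hz tone with an exponential onset
sr = 44100
t = np.arange(sr) / sr
x = (1.0 - np.exp(-t / 0.05)) * np.sin(2 * np.pi * 440.0 * t)
print(spectral_centroid(x, sr), log_attack_time(x, sr))
```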


Journal of the Acoustical Society of America | 1990

Hearing a mistuned harmonic in an otherwise periodic complex tone

William M. Hartmann; Stephen McAdams; Bennett K. Smith

The ability of a listener to detect a mistuned harmonic in an otherwise periodic tone is representative of the capacity to segregate auditory entities on the basis of steady-state signal cues. Using a task in which listeners matched the pitch of a mistuned harmonic, this ability was studied to determine its dependence on mistuned harmonic number, fundamental frequency, signal level, and signal duration. The results considerably augment the data previously obtained from discrimination experiments and from experiments in which listeners counted apparent sources. Although previous work has emphasized the role of spectral resolution in the segregation process, the present work suggests that neural synchrony is an important consideration; our data show that listeners lose the ability to segregate mistuned harmonics at high frequencies, where synchronous neural firing vanishes. The functional form of this loss is insensitive to the spacing of the harmonics. The matching experiment also permits measurement of the pitches of mistuned harmonics. The data exhibit shifts of a form that argues against models of pitch shifts based entirely upon partial masking.
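
Stimuli of this kind are straightforward to synthesize. The Python sketch below (all parameter values are illustrative, not those of the experiment) builds an equal-amplitude periodic complex tone and mistunes a single harmonic by a given percentage:

```python
import numpy as np

def mistuned_complex_tone(f0, n_harmonics, mistuned_harmonic, mistune_pct,
                          dur=1.0, sr=44100):
    """Equal-amplitude harmonic complex; one harmonic (1-based index)
    is shifted by `mistune_pct` percent of its nominal frequency."""
    t = np.arange(int(dur * sr)) / sr
    x = np.zeros_like(t)
    for n in range(1, n_harmonics + 1):
        f = n * f0
        if n == mistuned_harmonic:
            f *= 1.0 + mistune_pct / 100.0
        x += np.sin(2.0 * np.pi * f * t)
    return x / n_harmonics  # normalize to avoid clipping

# e.g., 12 harmonics of 200 Hz with the 4th harmonic mistuned upward by 3%
x = mistuned_complex_tone(200.0, 12, mistuned_harmonic=4, mistune_pct=3.0)
```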


Journal of the Acoustical Society of America | 2001

Feature dependence in the automatic identification of musical woodwind instruments

Judith C. Brown; Olivier Houix; Stephen McAdams

The automatic identification of musical instruments is a relatively unexplored and potentially very important field for its promise to free humans from time-consuming searches on the Internet and indexing of audio material. Speaker identification techniques have been used in this paper to determine the properties (features) which are most effective in identifying a statistically significant number of sounds representing four classes of musical instruments (oboe, sax, clarinet, flute) excerpted from actual performances. Features examined include cepstral coefficients, constant-Q coefficients, spectral centroid, autocorrelation coefficients, and moments of the time wave. The number of these coefficients was varied, and in the case of cepstral coefficients, ten coefficients were sufficient for identification. Correct identifications of 79%-84% were obtained with cepstral coefficients, bin-to-bin differences of the constant-Q coefficients, and autocorrelation coefficients; the latter have not been used previously in either speaker or instrument identification work. These results depended on the training sounds chosen and the number of clusters used in the calculation. Comparison to a human perception experiment with sounds produced by the same instruments indicates that, under these conditions, computers do as well as humans in identifying woodwind instruments.
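
Cepstral coefficients were among the most effective features here, and a basic version can be sketched in a few lines of Python: a plain real cepstrum on a single windowed frame, a simplified stand-in for the full feature-extraction pipeline used in the paper. The default of ten coefficients echoes the abstract's finding that ten sufficed.

```python
import numpy as np

def real_cepstrum(frame, n_coeffs=10):
    """First `n_coeffs` real cepstral coefficients of one audio frame:
    the inverse FFT of the log magnitude spectrum of a windowed frame."""
    windowed = frame * np.hanning(len(frame))
    log_mag = np.log(np.abs(np.fft.rfft(windowed)) + 1e-12)  # avoid log(0)
    return np.fft.irfft(log_mag)[:n_coeffs]

# Stand-in for one 2048-sample frame excerpted from a recording
frame = np.random.randn(2048)
print(real_cepstrum(frame))
```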


Neuropsychologia | 2010

Enhanced Pure-Tone Pitch Discrimination among Persons with Autism but not Asperger Syndrome.

Anna Bonnel; Stephen McAdams; Bennett K. Smith; Claude Berthiaume; Armando Bertone; Valter Ciocca; Jacob A. Burack; Laurent Mottron

Persons with autism spectrum disorders (ASD) display atypical perceptual processing in visual and auditory tasks. In vision, Bertone, Mottron, Jelenic, and Faubert (2005) found that enhanced and diminished visual processing is linked to the level of neural complexity required to process stimuli, as proposed in the neural complexity hypothesis. Based on these findings, Samson, Mottron, Jemel, Belin, and Ciocca (2006) proposed to extend the neural complexity hypothesis to the auditory modality. They hypothesized that persons with ASD should display enhanced performance for simple tones that are processed in primary auditory cortical regions, but diminished performance for complex tones that require additional processing in associative auditory regions, in comparison to typically developing individuals. To assess this hypothesis, we designed four auditory discrimination experiments targeting pitch, non-vocal and vocal timbre, and loudness. Stimuli consisted of spectro-temporally simple and complex tones. The participants were adolescents and young adults with autism, Asperger syndrome, and typical developmental histories, all with IQs in the normal range. Consistent with the neural complexity hypothesis and the enhanced perceptual functioning model of ASD (Mottron, Dawson, Soulières, Hubert, & Burack, 2006), the participants with autism, but not those with Asperger syndrome, displayed enhanced pitch discrimination for simple tones. However, no discrimination-threshold differences were found between the participants with ASD and the typically developing persons across spectrally and temporally complex conditions. These findings indicate that enhanced pure-tone pitch discrimination may be a cognitive correlate of speech delay among persons with ASD. However, auditory discrimination in this group does not appear to be directly contingent on the spectro-temporal complexity of the stimuli.


Brain and Language | 1989

Dichotic perception and laterality in neonates

Josiane Bertoncini; Jose Morais; Ranka Bijeljac-Babic; Stephen McAdams; Isabelle Peretz; Jacques Mehler

Groups of 4-day-old neonates were tested for dichotic discrimination and ear differences with the High-Amplitude-Sucking procedure. In the first experiment, dichotic speech discrimination was demonstrated by comparison with a control group. Furthermore, among the subjects who showed substantial recovery of the sucking response after at least one of the two syllable changes, significantly more manifested a stronger reaction to a right-ear change than to a left-ear change. In the second experiment, 4-day-old neonates were tested on syllable and musical timbre discrimination. The significant Stimulus Type × Ear interaction suggests perceptual asymmetries indicative of very precocious brain specialization.


Journal of the Acoustical Society of America | 2011

The Timbre Toolbox: Extracting audio descriptors from musical signals

Geoffroy Peeters; Bruno L. Giordano; Patrick Susini; Nicolas Misdariis; Stephen McAdams

The analysis of musical signals to extract audio descriptors that can potentially characterize their timbre has been disparate and often too focused on a particular small set of sounds. The Timbre Toolbox provides a comprehensive set of descriptors that can be useful in perceptual research, as well as in music information retrieval and machine-learning approaches to content-based retrieval in large sound databases. Sound events are first analyzed in terms of various input representations (short-term Fourier transform, harmonic sinusoidal components, an auditory model based on the equivalent rectangular bandwidth concept, the energy envelope). A large number of audio descriptors are then derived from each of these representations to capture temporal, spectral, spectrotemporal, and energetic properties of the sound events. Some descriptors are global, providing a single value for the whole sound event, whereas others are time-varying. Robust descriptive statistics are used to characterize the time-varying descriptors. To examine the information redundancy across audio descriptors, correlational analysis followed by hierarchical clustering is performed. This analysis suggests ten classes of relatively independent audio descriptors, showing that the Timbre Toolbox is a multidimensional instrument for the measurement of the acoustical structure of complex sound signals.
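
The Timbre Toolbox itself is distributed as MATLAB code; the Python sketch below merely illustrates the robust-statistics step, summarizing a time-varying descriptor by its median and interquartile range, which are far less sensitive to outlying frames than the mean and variance. Variable names and values are illustrative:

```python
import numpy as np

def robust_summary(descriptor_track):
    """Summarize a time-varying descriptor with robust statistics:
    median for central tendency, interquartile range (IQR) for spread."""
    q25, median, q75 = np.percentile(descriptor_track, [25, 50, 75])
    return {"median": median, "iqr": q75 - q25}

# e.g., a spectral-centroid trajectory (Hz) across analysis frames,
# with one outlying frame that would badly distort a mean/variance summary
centroid_track = np.array([812.0, 845.3, 990.1, 860.7, 7500.0])
print(robust_summary(centroid_track))
```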


Journal of the Acoustical Society of America | 2003

The dependency of timbre on fundamental frequency

Jeremy Marozeau; Alain de Cheveigné; Stephen McAdams; Suzanne Winsberg

The dependency of the timbre of musical sounds on their fundamental frequency (F0) was examined in three experiments. In Experiment I, subjects compared the timbres of stimuli produced by a set of 12 musical instruments with equal F0, duration, and loudness. There were three sessions, each at a different F0. In Experiment II, the same stimuli were rearranged in pairs, each with the same difference in F0, and subjects had to ignore the constant difference in pitch. In Experiment III, instruments were paired both with and without an F0 difference within the same session, and subjects had to ignore the variable differences in pitch. Experiment I yielded dissimilarity matrices that were similar at different F0s, suggesting that instruments kept their relative positions within timbre space. Experiment II found that subjects were able to ignore the salient pitch difference while rating timbre dissimilarity. Dissimilarity matrices were symmetrical, further suggesting that the absolute displacement of the set of instruments within timbre space was small. Experiment III extended this result to the case where the pitch difference varied from trial to trial. Multidimensional scaling (MDS) of dissimilarity scores produced solutions (timbre spaces) that varied little across conditions and experiments. The MDS solutions were used to test the validity of signal-based predictors of timbre, and in particular their stability as a function of F0. Taken together, the results suggest that timbre differences are perceived independently of differences in pitch, at least for F0 differences smaller than an octave. Timbre differences can be measured between stimuli with different F0s.
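
The MDS step can be approximated with off-the-shelf tools. The sketch below uses scikit-learn's metric MDS on a hypothetical precomputed dissimilarity matrix; the studies described here used more elaborate models (e.g., CLASCAL with latent classes), so this is only a minimal analogue:

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical symmetric dissimilarity matrix for four instruments
D = np.array([[0.0, 2.1, 3.4, 1.2],
              [2.1, 0.0, 1.8, 2.9],
              [3.4, 1.8, 0.0, 3.0],
              [1.2, 2.9, 3.0, 0.0]])

# Embed the instruments in a 2-D "timbre space" whose inter-point
# distances approximate the rated dissimilarities
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)
print(coords)  # one (x, y) coordinate per instrument
```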


Attention Perception & Psychophysics | 1997

The representation of auditory source characteristics: Simple geometric form

Stephen Lakatos; Stephen McAdams; René Caussé

Two experiments examined listeners’ ability to discriminate the geometric shape of simple resonating bodies on the basis of their corresponding auditory attributes. In cross-modal matching tasks, subjects listened to recordings of pairs of metal bars (Experiment 1) or wooden bars (Experiment 2) struck in sequence and then selected a visual depiction of the bar cross sections that correctly represented their relative widths and heights from two opposing pairs presented on a computer screen. Multidimensional scaling solutions derived from matching scores for metal and wooden bars indicated that subjects’ performance varied directly with increasing differences in the width/height (W/H) ratios of both sets of bars. Subsequent acoustic analyses revealed that the frequency components from torsional vibrational modes and the ratios of frequencies of transverse bending modes in the bars correlated strongly with both the bars’ W/H ratios and bar coordinates in the multidimensional configurations. The results suggest that listeners can encode the auditory properties of sound sources by extracting certain invariant physical characteristics of their gross geometric properties from their acoustic behavior.
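
The link between geometry and the acoustic cues reported here follows from elementary bar acoustics (a standard Euler–Bernoulli result, not taken from the paper): bending-mode frequencies of a free–free rectangular bar scale with the cross-sectional dimension in the plane of vibration, so the two families of transverse bending modes have a frequency ratio set directly by the cross-section:

f_n \propto \frac{d}{L^{2}} \sqrt{\frac{E}{\rho}}, \qquad \frac{f_n^{\,\text{lateral}}}{f_n^{\,\text{vertical}}} \approx \frac{w}{h}

where d is the cross-sectional dimension in the plane of vibration (h for vertical bending, w for lateral bending), L is the bar length, E is Young's modulus, and \rho is the density. Torsional-mode frequencies likewise depend on the cross-section's aspect ratio, consistent with the acoustic correlates reported above.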


NeuroImage | 2002

Neural Correlates of Timbre Change in Harmonic Sounds

Vinod Menon; Daniel J. Levitin; Bennett K. Smith; Anna Lembke; Ben Krasnow; Daniel I. Glazer; Gary H. Glover; Stephen McAdams

Timbre is a major structuring force in music and one of the most important and ecologically relevant features of auditory events. We used sound stimuli selected on the basis of previous psychophysiological studies to investigate the neural correlates of timbre perception. Our results indicate that both the left and right hemispheres are involved in timbre processing, challenging the conventional notion that the elementary attributes of musical perception are predominantly lateralized to the right hemisphere. Significant timbre-related brain activation was found in well-defined regions of posterior Heschl's gyrus and superior temporal sulcus, extending into the circular insular sulcus. Although the extent of activation was not significantly different between left and right hemispheres, temporal lobe activations were significantly posterior in the left, compared to the right, hemisphere, suggesting a functional asymmetry in their respective contributions to timbre processing. The implications of our findings for music processing in particular and auditory processing in general are discussed.

Collaboration


Dive into Stephen McAdams's collaborations.

Top Co-Authors

Carolyn Drake

Centre national de la recherche scientifique


Marie-Claire Botte

Centre national de la recherche scientifique
