Nicolas Misdariis
IRCAM
Publication
Featured research published by Nicolas Misdariis.
Journal of the Acoustical Society of America | 2011
Geoffroy Peeters; Bruno L. Giordano; Patrick Susini; Nicolas Misdariis; Stephen McAdams
The analysis of musical signals to extract audio descriptors that can potentially characterize their timbre has been disparate and often too focused on a particular small set of sounds. The Timbre Toolbox provides a comprehensive set of descriptors that can be useful in perceptual research, as well as in music information retrieval and machine-learning approaches to content-based retrieval in large sound databases. Sound events are first analyzed in terms of various input representations (short-term Fourier transform, harmonic sinusoidal components, an auditory model based on the equivalent rectangular bandwidth concept, the energy envelope). A large number of audio descriptors are then derived from each of these representations to capture temporal, spectral, spectrotemporal, and energetic properties of the sound events. Some descriptors are global, providing a single value for the whole sound event, whereas others are time-varying. Robust descriptive statistics are used to characterize the time-varying descriptors. To examine the information redundancy across audio descriptors, correlational analysis followed by hierarchical clustering is performed. This analysis suggests ten classes of relatively independent audio descriptors, showing that the Timbre Toolbox is a multidimensional instrument for the measurement of the acoustical structure of complex sound signals.
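As an illustrative sketch of the kind of processing this abstract describes (the Timbre Toolbox itself is a MATLAB package; the descriptor choice, parameters, and ten-cluster cut below are assumptions for illustration only), one can derive a time-varying descriptor from an STFT, summarize it with robust statistics, and group a set of descriptors by correlation with hierarchical clustering:

```python
# Illustrative sketch, not the Timbre Toolbox itself.
import numpy as np
from scipy.signal import stft
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def spectral_centroid(signal, fs, nperseg=1024):
    """Time-varying spectral centroid (Hz), one value per STFT frame."""
    freqs, _, spec = stft(signal, fs=fs, nperseg=nperseg)
    mag = np.abs(spec)
    return (freqs[:, None] * mag).sum(axis=0) / (mag.sum(axis=0) + 1e-12)

def robust_summary(series):
    """Median and interquartile range, robust statistics for a time-varying descriptor."""
    q25, q50, q75 = np.percentile(series, [25, 50, 75])
    return {"median": q50, "iqr": q75 - q25}

def cluster_descriptors(descriptor_matrix, n_clusters=10):
    """Group descriptors (columns) by correlation using hierarchical clustering."""
    corr = np.corrcoef(descriptor_matrix, rowvar=False)
    dist = 1.0 - np.abs(corr)            # highly correlated -> small distance
    np.fill_diagonal(dist, 0.0)
    tree = linkage(squareform(dist, checks=False), method="average")
    return fcluster(tree, t=n_clusters, criterion="maxclust")
```

Absolute-correlation distance is only one simple choice here; the point is merely to show the overall pipeline of descriptor extraction, robust summarization, and redundancy analysis.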
Journal of Experimental Psychology: Applied | 2010
Guillaume Lemaitre; Olivier Houix; Nicolas Misdariis; Patrick Susini
The influence of listeners' expertise and sound identification on the categorization of environmental sounds is reported in three studies. In Study 1, the causal uncertainty of 96 sounds was measured by counting the different causes described by 29 participants. In Study 2, 15 experts and 15 nonexperts classified a selection of 60 sounds and indicated the similarities they used. In Study 3, 38 participants indicated their confidence in identifying the sounds. Participants reported using either acoustical similarities or similarities of the causes of the sounds. Experts used acoustical similarity more often than nonexperts, who used the similarity of the cause of the sounds. Sounds with a low causal uncertainty were more often grouped together because of the similarities of the cause, whereas sounds with a high causal uncertainty were grouped together more often because of the acoustical similarities. The same conclusions were reached for identification confidence. This measure allowed the sound classification to be predicted, and is a straightforward method to determine the appropriate description of a sound.
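A minimal sketch of the causal-uncertainty count described in Study 1, with made-up sound IDs and cause labels: for each sound, count how many distinct causes the participants reported.

```python
# Sketch only: sound IDs and cause labels are invented for illustration.
from collections import defaultdict

reports = [  # (sound_id, cause reported by one participant)
    ("s01", "door slam"), ("s01", "door slam"), ("s01", "hammer"),
    ("s02", "wind"), ("s02", "breath"), ("s02", "steam"), ("s02", "waves"),
]

causes = defaultdict(set)
for sound_id, cause in reports:
    causes[sound_id].add(cause)

causal_uncertainty = {s: len(c) for s, c in causes.items()}
print(causal_uncertainty)  # higher count = more causal uncertainty
```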
Journal of the Acoustical Society of America | 1998
Nicolas Misdariis; Bennett K. Smith; Daniel Pressnitzer; Patrick Susini; Stephen McAdams
Several studies dealing with the perception of musical timbre have found significant correlations between acoustical parameters of sounds and their subjective dimensions. Using the conclusions of some of these studies, a calculation method of the perceptual distance between two sounds has been developed. Initially, four parameters are considered: spectral centroid, irregularity of the spectral envelope, attack time, and degree of variation of the spectral envelope over time. For each of these, a transformation factor between the physical axis and the corresponding subjective dimension is obtained by linear regression. After a normalization of the data, the four coefficients then found are those of a linear combination that gives the final distance values. Since this model is based on numerical results derived from experiments that mostly used synthesized sounds, the application to a database of recorded musical instrument sounds needs a strong validation procedure. This procedure involves the adjustment o...
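A hedged sketch of the type of distance model described above: each acoustic parameter is mapped to a perceptual dimension by a regression slope, the dimensions are normalized, and the distance is a weighted combination of per-dimension differences. The parameter names, slopes, weights, and normalization constants below are placeholders, not the fitted values from the study.

```python
# Sketch of a linear-combination perceptual distance; all coefficients are placeholders.
PARAMS = ["spectral_centroid", "spectral_irregularity", "attack_time", "spectral_flux"]

def perceptual_distance(sound_a, sound_b, slopes, weights, scale):
    """Distance between two sounds described by the four parameters above.

    sound_a, sound_b : dicts mapping parameter name -> physical value
    slopes           : regression slopes (physical axis -> perceptual axis)
    weights          : coefficients of the final linear combination
    scale            : normalization constants (e.g. corpus-wide standard deviations)
    """
    total = 0.0
    for p in PARAMS:
        da = slopes[p] * sound_a[p] / scale[p]
        db = slopes[p] * sound_b[p] / scale[p]
        total += weights[p] * abs(da - db)
    return total
```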
Eurasip Journal on Audio, Speech, and Music Processing | 2010
Nicolas Misdariis; Antoine Minard; Patrick Susini; Guillaume Lemaitre; Stephen McAdams; Etienne Parizet
The aim of the study is to transpose and extend to a set of environmental sounds the notion of sound descriptors usually used for musical sounds. Four separate primary studies dealing with interior car sounds, air-conditioning units, car horns, and closing car doors are considered collectively. The corpus formed by these initial stimuli is submitted to new experimental studies and analyses, both for revealing metacategories and for defining more precisely the limits of each of the resulting categories. In a second step, the new structure is modeled: common and specific dimensions within each category are derived from the initial results and new investigations of audio features are performed. Furthermore, an automatic classifier based on two audio descriptors and a multinomial logistic regression procedure is implemented and validated with the corpus.
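A minimal sketch of the kind of classifier mentioned at the end of the abstract: two audio descriptors per sound fed to a multinomial logistic regression, here via scikit-learn with made-up feature values and category labels (the actual descriptors and data belong to the study).

```python
# Sketch only: feature values and labels are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Rows: sounds; columns: two descriptors (e.g. one spectral, one temporal).
X = np.array([[0.21, 1.3], [0.25, 1.1], [0.70, 0.4], [0.68, 0.5],
              [0.40, 2.2], [0.45, 2.0], [0.80, 1.8], [0.78, 1.9]])
y = np.array(["car_horn", "car_horn", "car_door", "car_door",
              "air_conditioning", "air_conditioning", "interior", "interior"])

# LogisticRegression with the default lbfgs solver fits a multinomial (softmax)
# model for multiclass problems.
clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, y, cv=2).mean())  # rough cross-validated accuracy
```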
Journal on Multimodal User Interfaces | 2012
Patrick Susini; Nicolas Misdariis; Guillaume Lemaitre; Olivier Houix
This study examined the influence of the naturalness of a sonic feedback on the perceived usability and pleasantness of the sounds used in a human-computer interface. The interface was the keyboard of an Automatic Teller Machine. The naturalness of the feedback was manipulated by using different kinds of relationship between a keystroke and its sonic feedback: causal, iconic, and arbitrary. Users were required to rate the naturalness, usability, and pleasantness of the sounds before and after manipulating the interface. Two kinds of interfaces were used: a normally functioning and a defective interface. The results indicated that the different relationships resulted in different levels of naturalness: causal mappings resulted in sounds perceived as natural, and arbitrary mappings in sounds perceived as non-natural, regardless of whether the sounds were recorded or synthesized. Before the subjects manipulated the interface, they rated the natural sounds as more pleasant and useful than the non-natural sounds. Manipulating the interface exaggerated these judgments for the causal and arbitrary mappings. The feedback sounds ruled by an iconic relationship between the user’s gesture and the resulting sounds were overall positively rated, but were sensitive to a potential contamination by the negative feelings created by a defective interface.
Journal of the Acoustical Society of America | 2013
Nicolas Misdariis; Anais Gruson; Patrick Susini
Electric vehicles are becoming a growing category of today's means of transport. But because these vehicles are quiet, or even silent, the question of a dedicated sound design arises almost inevitably in order to make them more perceptible, and therefore safer, both for the people around them (pedestrians) and for their users (drivers). Current issues for a sound design research framework are therefore to exploit and explore sound properties that, first, meet a functional goal (emergence, recognition, acceptance) and, second, define guidelines for the development of new aesthetics within a general design approach. Thus, a first study focusing on the detection of warning signals in urban environments was carried out. Based on the state of the art, a corpus of elementary signals was built and characterized in the time/frequency domain to represent basic temporal and spectral properties (continuous, impulsive, harmonic, etc.). A corpus of representative urban environments was also recorded, and realistic sequences were mixed using a dynamic approaching-source model. A reaction-time experiment was conducted and led to interesting observations, in particular specific properties promoting emergence. Moreover, a seemingly significant learning effect also emerges from the data and should be further investigated.
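A small sketch of what a dynamic approaching-source mix could look like: the warning signal is scaled sample by sample with a simple 1/r spherical-spreading law as the simulated vehicle approaches, then added to a recorded urban background. The linear trajectory and the bare 1/r law are assumptions for illustration, not the model used in the study.

```python
# Sketch only: the trajectory and gain law are illustrative assumptions.
import numpy as np

def mix_approaching_source(signal, background, start_dist_m=50.0,
                           end_dist_m=2.0, ref_dist_m=1.0):
    """Mix `signal` into `background` with a gain ramp simulating an approaching source."""
    n = min(len(signal), len(background))
    dist = np.linspace(start_dist_m, end_dist_m, n)  # linear approach trajectory
    gain = ref_dist_m / dist                         # simple 1/r spherical spreading
    return background[:n] + gain * signal[:n]
```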
Journal of the Acoustical Society of America | 2005
Denis Ricot; René Caussé; Nicolas Misdariis
The accordion reed is an example of a blown-closed free reed. Unlike most oscillating valves in wind musical instruments, self-sustained oscillations occur without acoustic coupling. Flow visualizations and measurements in water show that the flow can be supposed incompressible and potential. A model is developed and the solution is calculated in the time domain. The excitation force is found to be associated with the inertial load of the unsteady flow through the reed gaps. The inertial effect leads to velocity fluctuations in the reed opening and then to an unsteady Bernoulli force. A pressure component generated by the local reciprocal air movement around the reed is added to the modeled aerodynamic excitation pressure. Since the model is two-dimensional, only qualitative comparisons with air flow measurements are possible. The agreement between the simulated pressure waveforms and measured pressure in the very near field of the reed is reasonable. In addition, an aeroacoustic model using the permeable Ffowcs Williams-Hawkings integral method is presented. The integral expressions of the far-field acoustic pressure are also computed in the time domain. In agreement with experimental data, the sound is found to be dominated by the dipolar source associated with the strong momentum fluctuations of the flow through the reed gaps.
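For reference, the unsteady Bernoulli equation between two points 1 and 2 along a streamline, in its schematic textbook form (not the paper's specific two-dimensional formulation), separates the quasi-static pressure drop from the inertial load of the unsteady flow that the abstract identifies as the origin of the excitation force:

$$ p_1 - p_2 \;=\; \tfrac{1}{2}\,\rho\,\bigl(u_2^{2} - u_1^{2}\bigr) \;+\; \rho\,\frac{\mathrm{d}}{\mathrm{d}t}\int_{1}^{2} u \,\mathrm{d}s $$

The first term is the quasi-static Bernoulli pressure drop across the reed gap; the second is the inertial term associated with the unsteady flow.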
PLOS ONE | 2016
Guillaume Lemaitre; Olivier Houix; Frédéric Voisin; Nicolas Misdariis; Patrick Susini
Imitative behaviors are widespread in humans, in particular whenever two persons communicate and interact. Several tokens of spoken languages (onomatopoeias, ideophones, and phonesthemes) also display different degrees of iconicity between the sound of a word and what it refers to. Thus, it probably comes as no surprise that human speakers use a lot of imitative vocalizations and gestures when they communicate about sounds, as sounds are notably difficult to describe. What is more surprising is that vocal imitations of non-vocal everyday sounds (e.g. the sound of a car passing by) are in practice very effective: listeners identify sounds better with vocal imitations than with verbal descriptions, despite the fact that vocal imitations are inaccurate reproductions of a sound created by a particular mechanical system (e.g. a car driving by) through a different system (the voice apparatus). The present study investigated the semantic representations evoked by vocal imitations of sounds by experimentally quantifying how well listeners could match sounds to category labels. The experiment used three different types of sounds: recordings of easily identifiable sounds (sounds of human actions and manufactured products), human vocal imitations, and computational “auditory sketches” (created by algorithmic computations). The results show that performance with the best vocal imitations was similar to the best auditory sketches for most categories of sounds, and even to the referent sounds themselves in some cases. More detailed analyses showed that the acoustic distance between a vocal imitation and a referent sound is not sufficient to account for such performance. Analyses suggested that instead of trying to reproduce the referent sound as accurately as vocally possible, vocal imitations focus on a few important features, which depend on each particular sound category. These results offer perspectives for understanding how human listeners store and access long-term sound representations, and set the stage for the development of human-computer interfaces based on vocalizations.
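One common, generic way to quantify the acoustic distance between a vocal imitation and its referent sound is dynamic time warping over MFCC sequences, sketched below with librosa. This is an illustrative proxy, not necessarily the distance measure analysed in the paper.

```python
# Generic MFCC + DTW acoustic distance; an illustrative proxy only.
import librosa

def acoustic_distance(path_imitation, path_referent, n_mfcc=13):
    y_a, sr_a = librosa.load(path_imitation, sr=None)
    y_b, sr_b = librosa.load(path_referent, sr=None)
    mfcc_a = librosa.feature.mfcc(y=y_a, sr=sr_a, n_mfcc=n_mfcc)
    mfcc_b = librosa.feature.mfcc(y=y_b, sr=sr_b, n_mfcc=n_mfcc)
    D, wp = librosa.sequence.dtw(X=mfcc_a, Y=mfcc_b, metric="euclidean")
    return D[-1, -1] / len(wp)           # path-length-normalized DTW cost
```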
audio mostly conference | 2011
Daniel Hug; Nicolas Misdariis
Sound design for interactive products is rapidly evolving to become a relevant topic in industry. Scientific research from the domains of Auditory Display (AD) and Sonic Interaction Design (SID) can play a central role in this development, but in order to make its way to market oriented applications, several issues still need to be addressed. Building on the sound design process employed at the Sound Perception and Design (SPD) team at Ircam, and the information gathered from interviews with professional sound designers, this paper focuses on revealing typical issues encountered in the design process of both science and design oriented communities, in particular the development of a valid and revisable, yet innovative, design hypothesis. A second aim is to improve the communication between sound and interaction designers. In order to address these challenges, a conceptual framework, which has been developed using both scientific and designerly methods, was presented and evaluated using expert reviews.
computer music modeling and retrieval | 2013
Olivier Houix; Nicolas Misdariis; Patrick Susini; Frédéric Bevilacqua; Florestan Gutierrez
Participatory workshops have been organized within the framework of the ANR project Legos, which concerns gesture-sound interactive systems. These workshops addressed both theoretical issues and experimentation with prototypes. The first goal was to stimulate new ideas related to the control of everyday objects using sound feedback, and then to create and experiment with new sonically augmented objects. The second aim was educational: we investigated how sonic interaction design can be introduced to people without backgrounds in sound and music. We present in this article an overview of three workshops. The first workshop focused on the analysis and the possible sonification of everyday objects; new usage scenarios were obtained and tested. The second workshop focused on sound metaphors, questioning the relationship between sound and gesture. The last one was organized during a summer school for students. During these workshops, we experimented with a full design-process cycle: analysis, creation, and testing.