Luca Mion
University of Padua
Publications
Featured research published by Luca Mion.
IEEE Transactions on Audio, Speech, and Language Processing | 2008
Luca Mion; G. De Poli
During a music performance, the musician adds expressiveness to the musical message by changing timing, dynamics, and timbre of the musical events to communicate an expressive intention. Traditionally, the analysis of music expression is based on measurements of the deviations of the acoustic parameters with respect to the written score. In this paper, we employ machine learning techniques to understand expressive communication and to derive audio features at an intermediate level, between music intended as a structured language and notes intended as sound at a more physical level. We start by extracting audio features from expressive performances that were recorded by asking the musicians to perform in order to convey different expressive intentions. We use a sequential forward selection procedure to rank and select a set of features for a general description of the expressions, and a second one specific for each instrument. We show that higher recognition ratings are achieved by using a set of four features which can be specifically related to qualitative descriptions of the sound by physical metaphors. These audio features can be used to retrieve expressive content from audio data, and to design the next generation of search engines for music information retrieval.
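The sequential forward selection step described above can be illustrated with a short sketch. The Python snippet below is a minimal, generic implementation of greedy forward feature selection with a cross-validated classifier; the feature matrix, labels, classifier, and the figure of 60 performances are illustrative placeholders, not the paper's data or model.

```python
# Minimal sketch of sequential forward feature selection (generic, not the
# paper's exact pipeline): greedily add the candidate feature that most
# improves cross-validated classification of expressive intentions.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def forward_select(X, y, n_features=4):
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < n_features:
        # Score every remaining candidate when added to the current set.
        scores = [(cross_val_score(KNeighborsClassifier(),
                                   X[:, selected + [j]], y, cv=5).mean(), j)
                  for j in remaining]
        best_score, best_j = max(scores)
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

# Dummy data: 60 performances x 12 candidate audio features, 4 intentions.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 12))
y = rng.integers(0, 4, size=60)
print(forward_select(X, y, n_features=4))
```

scikit-learn also provides a ready-made `SequentialFeatureSelector` that wraps the same greedy idea.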
ACM Transactions on Applied Perception | 2010
Luca Mion; Giovanni De Poli; Ennio Rapanà
Expression communication is the added value of a musical performance. It is part of the reason why music is interesting to listen to and sounds alive. Previous work on the analysis of acoustical features yielded relevant features for the recognition of different expressive intentions, inspired by both emotional and sensorial adjectives. In this article, machine learning techniques are employed to understand how expressive performances represented by the selected features cluster in a low-dimensional space, and to define a measure of acoustical similarity. Since expressive intentions are similar according to the features used for recognition, and since recognition implies subjective evaluation, we hypothesized that performances are also similar from a perceptual point of view. We then compared and integrated the clustering of acoustical features with the results of two listening experiments. The first experiment verifies whether subjects can distinguish different categories of expressive intentions; the second investigates which expressions are perceptually clustered together, in order to derive the common evaluation criteria adopted by listeners and to obtain the perceptual organization of affective and sensorial expressive intentions. An interpretation of the resulting spatial representation based on action is proposed and discussed.
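As a rough illustration of the clustering step (with synthetic data, not the article's actual features or method), one can standardize the selected features, project the performances onto a low-dimensional space, and cluster them; cluster membership and pairwise distances can then be compared with listeners' perceptual groupings.

```python
# Illustrative sketch with dummy data: cluster performances described by
# acoustical features in a low-dimensional space, and use distances in that
# space as a crude measure of acoustical similarity.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
X = rng.normal(size=(48, 8))    # 48 performances x 8 selected audio features (dummy)

X_std = StandardScaler().fit_transform(X)          # put features on comparable scales
X_2d = PCA(n_components=2).fit_transform(X_std)    # low-dimensional acoustic space
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_2d)

# Acoustical dissimilarity between two performances: distance in the 2-D space.
distance = np.linalg.norm(X_2d[0] - X_2d[1])
print(clusters, round(float(distance), 3))
```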
Virtual Reality | 2006
Luca Mion; Gianluca D'Incà
Expression could play a key role in the audio rendering of virtual reality applications. Understanding expression is an ambitious research issue, and several studies have investigated analysis techniques for detecting expression in music performances. The knowledge coming from these analyses is widely applicable: embedding expression in audio interfaces can lead to attractive solutions for enriching interfaces in mixed-reality environments. Synthesized expressive sounds can be combined with real stimuli to experience augmented reality, and they can be used in multi-sensory stimulations to provide the sensation of first-person experience in virtual expressive environments. In this work we focus on the expression of violin and flute performances, with reference to the sensorial and affective domains. By means of selected audio features, we derive a set of parameters describing performers’ strategies which is suitable both for tuning expressive synthesis instruments and for enhancing audio in human–computer interfaces.
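To make the feature-extraction step concrete, here is a hedged sketch of computing a few descriptors of the kind mentioned (intensity, brightness, and note timing) from a recorded performance. The file name `performance.wav`, the librosa-based feature choices, and the summary statistics are assumptions for illustration, not the paper's parameter set.

```python
# Sketch: extract intensity, brightness, and timing descriptors from a recording.
import numpy as np
import librosa

y, sr = librosa.load("performance.wav", sr=22050, mono=True)

rms = librosa.feature.rms(y=y)[0]                             # frame-wise intensity
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]   # frame-wise brightness
onsets = librosa.onset.onset_detect(y=y, sr=sr, units="time")
ioi = np.diff(onsets)                                         # inter-onset intervals (timing)

features = {
    "mean_rms": float(rms.mean()),                            # overall intensity
    "mean_centroid_hz": float(centroid.mean()),               # overall brightness
    "mean_ioi_s": float(ioi.mean()) if ioi.size else None,    # average event rate
    "ioi_std_s": float(ioi.std()) if ioi.size else None,      # timing regularity
}
print(features)
```

Descriptors such as these can then be mapped to the control parameters of a synthesis instrument or an audio-enhanced interface.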
International Gesture Workshop | 2003
Damien Cirotteau; Giovanni De Poli; Luca Mion; Alvise Vidolin; Patrick Zanon
Understanding the content of musical gestures is an ambitious issue in the scientific community. Several studies have demonstrated how different expressive intentions can be conveyed by a musical performance and correctly recognized by listeners, and several models for synthesis can also be found in the literature. In this paper we give an overview of the studies on automatic recognition of musical gestures carried out at the Center of Computational Sonology (CSC) during the last year. These studies can be grouped into two main branches: analysis with score knowledge and analysis without it. A brief description of the implementations and validations is presented.
Journal of New Music Research | 2009
Giovanni De Poli; Luca Mion; Antonio Rodà
Technology-mediated music access is increasingly becoming an interactive process, involving non-linguistic communication and action-based modalities. A better understanding of the musical experience, and of how this experience can be described, is a crucial issue for making the interaction with musical content more effective and natural. This paper aims at verifying if and how non-linguistic descriptors can be related to musical stimuli and their expressive cues. We designed four experiments using two sets of musical stimuli (simple and complex) and two sets of non-linguistic descriptors (acoustic and haptic), which we called attractors. In particular, the haptic attractors simulate the mechanical concepts of friction, elasticity and inertia (FEI). The results showed that subjects are able to relate musical stimuli with both acoustic and haptic attractors, even if the FEI metaphor seems to be more suitable for describing expressive cues in simple musical excerpts, where the expressive content is mainly ...
Virtual Reality Software and Technology | 2007
Luca Mion; Federico Avanzini; Bruno Mantel; Benoît G. Bardy; Thomas A. Stoffregen
This paper reports on a study of the perception and rendering of distance in multimodal virtual environments. A model for binaural sound synthesis is discussed, and its integration into a real-time system with motion tracking and visual rendering is presented. Results from a validation experiment show that the model effectively simulates relevant auditory cues for distance perception in dynamic conditions. The model is then used in a subsequent experiment on the perception of egocentric distance. The design and preliminary results from this experiment are discussed.
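The paper's binaural model is not reproduced here, but the underlying idea of distance-dependent auditory cues can be sketched generically: intensity falls roughly with the inverse of distance, and high frequencies are attenuated as the source recedes. The snippet below is a simple mono approximation under those assumptions, not the authors' real-time model.

```python
# Generic sketch of two well-known auditory distance cues (inverse-distance
# intensity loss and high-frequency roll-off); not the paper's binaural model.
import numpy as np
from scipy.signal import butter, lfilter

def render_at_distance(signal, fs, distance_m, ref_distance_m=1.0):
    """Attenuate and low-pass a dry signal to suggest a source at distance_m."""
    gain = ref_distance_m / max(distance_m, ref_distance_m)     # inverse-distance law
    # Crude stand-in for air absorption: cutoff falls as the source recedes.
    cutoff_hz = np.clip(16000.0 / max(distance_m, 1.0), 500.0, fs / 2 - 1)
    b, a = butter(2, cutoff_hz / (fs / 2), btype="low")
    return gain * lfilter(b, a, signal)

fs = 44100
t = np.arange(fs) / fs
src = np.sin(2 * np.pi * 440 * t)            # 1 s test tone
near = render_at_distance(src, fs, distance_m=1.0)
far = render_at_distance(src, fs, distance_m=8.0)
print(float(near.max()), float(far.max()))   # the far source is quieter and duller
```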
Congress of the Italian Association for Artificial Intelligence | 2007
Luca Mion; Giovanni De Poli
A paradigm for music expression understanding based on a joint semantic space, described by both affective and sensorial adjectives, is presented. Machine learning techniques were employed to select and validate relevant low-level features, and an interpretation of the clustered organization based on action and physical analogy is proposed.
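A toy sketch (dummy data, hypothetical adjective labels) of how performances described by low-level features might be projected onto a joint semantic space spanned by affective and sensorial categories, here using a supervised projection rather than the paper's method:

```python
# Sketch: project labeled performances onto a 2-D "semantic space" with LDA.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
adjectives = ["happy", "sad", "hard", "soft"]   # affective + sensorial labels (illustrative)
X = rng.normal(size=(80, 6))                    # 80 performances x 6 low-level features (dummy)
y = rng.integers(0, len(adjectives), size=80)

lda = LinearDiscriminantAnalysis(n_components=2)
X_2d = lda.fit_transform(X, y)                  # joint 2-D semantic space
print(X_2d.shape)
```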
Computer Music Journal | 2010
Luca Mion; Gianluca D'Incà; Amalia De Götzen; Ennio Rapanà
“Natural interfaces” represent one of the fast-moving investigation topics in the design of modern electronic appliances for both domestic and professional use. These interfaces stress the idea that the interaction should mimic everyday life in many respects, from the input device used to the feedback received and to the embodiment of the interaction. Nonverbal communication plays an important role in our everyday life, and the auditory modality is used to convey many different kinds of information, including shorter time-span states such as moods and emotions. In this work, a model for the expressive control of unstructured sounds is proposed. Starting from the investigation of simple musical gestures played with various instruments (repeated notes, scales, and short excerpts), a set of relevant audio features for expression description is selected by statistical analysis. The selected features are not related to musical scores or structures, thus yielding an ecological approach to the representation of expression communication. In particular, perceptual features like roughness and spectral centroid provide additional descriptors related to texture and brightness, as opposed to the timing/intensity-based parameters, which lead to typical music-oriented characterizations. Afterwards, the control parameters of an expressive synthesis model are tuned according to the results of the analysis to add expressive content to simple synthetic sounds. Listening tests were conducted to validate the model, and the results confirm the impact that this model can have on affective communication in human–computer interaction (HCI).
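As a small illustration of one of the perceptual descriptors named above, the spectral centroid (brightness) can be computed directly from a frame's magnitude spectrum; the mapping to a [0, 1] brightness control at the end is an illustrative assumption, not the article's tuned synthesis model.

```python
# Sketch: spectral centroid as a brightness descriptor, mapped to a toy control value.
import numpy as np

def spectral_centroid(frame, fs):
    """Amplitude-weighted mean frequency of one analysis frame."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    return float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))

fs = 44100
t = np.arange(2048) / fs
bright = np.sin(2 * np.pi * 440 * t) + 0.8 * np.sin(2 * np.pi * 3520 * t)
dark = np.sin(2 * np.pi * 440 * t)

for name, frame in [("bright", bright), ("dark", dark)]:
    sc = spectral_centroid(frame, fs)
    control = np.clip(sc / 5000.0, 0.0, 1.0)   # map descriptor to a [0, 1] brightness knob
    print(name, round(sc, 1), round(float(control), 2))
```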
Archive | 2003
Luca Mion
Archive | 2005
Giovanni De Poli; Federico Avanzini; Luca Mion; Gianluca D'Incà; Cosmo Trestino; Davide Pirrò; Annie Luciani; Nicholas Castagné