Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Alfonso Pérez is active.

Publication


Featured research published by Alfonso Pérez.


Lecture Notes in Computer Science | 2006

Modelling expressive performance: a regression tree approach based on strongly typed genetic programming

Amaury Hazan; Rafael Ramirez; Esteban Maestre; Alfonso Pérez; Antonio Pertusa

This paper presents a novel Strongly-Typed Genetic Programming approach for building Regression Trees in order to model expressive music performance. The approach consists of inducing a Regression Tree model from training data (monophonic recordings of Jazz standards) for transforming an inexpressive melody into an expressive one. The work presented in this paper is an extension of [1], where we induced general expressive performance rules explaining part of the training examples. Here, the emphasis is on inducing a generative model (i.e. a model capable of generating expressive performances) which covers all the training examples. We present our evolutionary approach for a one-dimensional regression task: predicting the performed note duration ratio. We then show the encouraging results of experiments with Jazz musical material, and sketch the milestones that will enable the system to generate expressive music performance in a broader sense.
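
As a rough illustration of the approach, the Python sketch below searches over strongly typed regression trees whose internal nodes are boolean feature tests and whose leaves are duration-ratio predictions. The feature names and training pairs are hypothetical placeholders, and the evolutionary search is reduced to random restarts for brevity; the paper evolves trees with crossover and mutation under the same type constraints.

    import random

    FEATURES = ["pitch", "duration", "metrical_strength"]  # hypothetical descriptors

    def random_tree(depth=3):
        # Type constraints: internal nodes are boolean feature tests,
        # leaves are numeric duration-ratio predictions.
        if depth == 0 or random.random() < 0.3:
            return ("leaf", random.uniform(0.5, 2.0))
        feature = random.choice(FEATURES)
        threshold = random.uniform(0.0, 1.0)
        return ("split", feature, threshold,
                random_tree(depth - 1), random_tree(depth - 1))

    def predict(tree, note):
        if tree[0] == "leaf":
            return tree[1]
        _, feature, threshold, low, high = tree
        return predict(low if note[feature] < threshold else high, note)

    def fitness(tree, data):
        # Mean squared error between predicted and performed duration ratios.
        return sum((predict(tree, n) - r) ** 2 for n, r in data) / len(data)

    # Toy training pairs: (note descriptors, observed duration ratio).
    data = [({"pitch": 0.6, "duration": 0.3, "metrical_strength": 0.9}, 1.10),
            ({"pitch": 0.2, "duration": 0.8, "metrical_strength": 0.1}, 0.85)]

    best = min((random_tree() for _ in range(500)), key=lambda t: fitness(t, data))
    print("best MSE:", fitness(best, data))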


IEEE Transactions on Audio, Speech, and Language Processing | 2010

Statistical Modeling of Bowing Control Applied to Violin Sound Synthesis

Esteban Maestre; Merlijn Blaauw; Jordi Bonada; Enric Guaus; Alfonso Pérez

Excitation-continuous music instrument control patterns are often not explicitly represented in current sound synthesis techniques when applied to automatic performance. Both physical model-based and sample-based synthesis paradigms would benefit from a flexible and accurate instrument control model, enabling the improvement of naturalness and realism. We present a framework for modeling bowing control parameters in violin performance. Nearly non-intrusive sensing techniques allow for accurate acquisition of relevant timbre-related bowing control parameter signals. We model the temporal contour of bow velocity, bow pressing force, and bow-bridge distance as sequences of short Bézier cubic curve segments. Considering different articulations, dynamics, and performance contexts, a number of note classes are defined. Contours of bowing parameters in a performance database are analyzed at note-level by following a predefined grammar that dictates characteristics of curve segment sequences for each of the classes in consideration. As a result, contour analysis of bowing parameters of each note yields an optimal representation vector that is sufficient for reconstructing original contours with significant fidelity. From the resulting representation vectors, we construct a statistical model based on Gaussian mixtures suitable for both the analysis and synthesis of bowing parameter contours. By using the estimated models, synthetic contours can be generated through a bow planning algorithm able to reproduce possible constraints caused by the finite length of the bow. Rendered contours are successfully used in two preliminary synthesis frameworks: digital waveguide-based bowed string physical modeling and sample-based spectral-domain synthesis.
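
To make the contour representation concrete, here is a minimal Python sketch that least-squares fits a single cubic Bézier segment to a synthetic bow-velocity contour; the four fitted control-point values play the role of (part of) a note's compact representation vector. Real notes use sequences of such segments fitted to sensor data, so this is only an illustration of the building block.

    import numpy as np

    def bernstein_basis(t):
        # Columns: the four cubic Bernstein polynomials evaluated at t.
        return np.stack([(1 - t) ** 3,
                         3 * (1 - t) ** 2 * t,
                         3 * (1 - t) * t ** 2,
                         t ** 3], axis=1)

    # Synthetic bow-velocity contour over one note (placeholder for sensor data).
    t = np.linspace(0.0, 1.0, 50)
    velocity = 0.4 * np.sin(np.pi * t) + 0.05 * np.random.randn(50)

    # Least-squares fit of the four control-point values.
    B = bernstein_basis(t)
    control_points, *_ = np.linalg.lstsq(B, velocity, rcond=None)

    reconstructed = B @ control_points
    print("max reconstruction error:", np.max(np.abs(reconstructed - velocity)))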


Journal of New Music Research | 2011

Automatic Performer Identification in Celtic Violin Audio Recordings

Rafael Ramirez; Esteban Maestre; Alfonso Pérez; Xavier Serra

We present a machine learning approach to the problem of identifying performers from their interpretative styles. In particular, we investigate how violinists express their view of the musical content in audio recordings and feed this information to a number of machine learning techniques in order to induce classifiers capable of identifying the interpreters. We apply sound analysis techniques based on spectral models for extracting expressive features such as pitch, timing, and amplitude representing both note characteristics and the musical context in which they appear. Our results indicate that the features extracted contain sufficient information to distinguish the considered performers, and the explored machine learning methods are capable of learning the expressive patterns that characterize each of the interpreters.
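
A minimal sketch of the classification stage, assuming the note-level expressive features have already been extracted from audio. Random placeholder data stands in for real recordings, and the specific classifier is interchangeable (the paper compares several learning techniques).

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Hypothetical note-level descriptors (e.g. pitch deviation, onset
    # deviation, duration ratio, energy); values are placeholders.
    X = rng.normal(size=(400, 4))
    y = rng.integers(0, 3, size=400)  # placeholder labels for 3 violinists

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())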


IEEE MultiMedia | 2017

Enriched Multimodal Representations of Music Performances: Online Access and Visualization

Esteban Maestre; Panagiotis Papiotis; Marco Marchini; Quim Llimona; Oscar Mayor; Alfonso Pérez; Marcelo M. Wanderley

The authors provide a first-person outlook on the technical challenges involved in the recording, analysis, archiving, and cloud-based interchange of multimodal string quartet performance data as part of a collaborative research project on ensemble music making. To facilitate the sharing of their own collection of multimodal recordings and extracted descriptors and annotations, they developed a hosting platform through which multimodal data (audio, video, motion capture, and derived signals) can be stored, visualized, annotated, and selectively retrieved via a web interface and a dedicated API. This article offers a twofold contribution: the authors open their collection of enriched multimodal recordings, the Quartet dataset, to the community, and they introduce and enable access to their multimodal data exchange platform and web application, the Repovizz system. This article is part of a special issue on multimedia technologies for enriched music.
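
The sketch below illustrates the kind of selective retrieval such a platform enables over HTTP. The endpoint path, datapack ID, and response fields are hypothetical illustrations of a REST-style interface, not the documented Repovizz API.

    import requests

    BASE_URL = "https://repovizz.example.org/api"  # placeholder host

    def fetch_datapack(datapack_id):
        # One round trip per datapack; the JSON body is assumed to list
        # the available streams (audio, video, motion capture, ...).
        resp = requests.get(f"{BASE_URL}/datapacks/{datapack_id}", timeout=10)
        resp.raise_for_status()
        return resp.json()

    info = fetch_datapack("quartet-001")  # hypothetical datapack ID
    print(info.get("streams", []))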


Intelligent Technologies for Interactive Entertainment | 2011

Measuring Ensemble Synchrony through Violin Performance Parameters: A Preliminary Progress Report

Panagiotis Papiotis; Marco Marchini; Esteban Maestre; Alfonso Pérez

In this article we present our ongoing work on expressive performance analysis for violin and string ensembles, in terms of synchronization in intonation, timing, dynamics and articulation. Our current research objectives are outlined, along with an overview of the methods used to achieve them; finally, focusing on the case of intonation synchronization in violin duets, some preliminary results and conclusions based on experimental recordings are discussed.
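
One simple intonation-synchrony measure, sketched below in Python on synthetic f0 tracks, is the frame-wise interval in cents between the two violins. This is an illustration of the kind of descriptor involved, not the paper's exact measure, and real f0 tracks would come from pitch analysis of the recordings.

    import numpy as np

    def cents(f0_a, f0_b):
        # Frame-wise interval between two f0 tracks, in cents.
        return 1200.0 * np.log2(f0_a / f0_b)

    # Synthetic f0 tracks for two violins playing in unison (placeholders).
    frames = 200
    f0_violin1 = 440.0 * (1 + 0.002 * np.sin(np.linspace(0.0, 8.0, frames)))
    f0_violin2 = 440.0 * (1 + 0.002 * np.sin(np.linspace(0.3, 8.3, frames)))

    deviation = cents(f0_violin1, f0_violin2)
    print("mean absolute deviation (cents):", np.mean(np.abs(deviation)))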


International Conference on Machine Learning | 2010

Modeling emotions in violin audio recordings

Andreas Neocleous; Rafael Ramirez; Alfonso Pérez; Esteban Maestre

In this paper we present a machine learning approach to modeling emotions in music performances. In particular, we investigate how a professional musician encodes emotions, such as happiness, sadness, anger and fear, in violin audio performances. In order to apply machine learning techniques to our data we first extract a melodic description from the audio recordings. We then train a model for each emotion considered. Finally, we synthesize new expressive performances from inexpressive melody descriptions (i.e. music scores) using the induced models. We explore and compare several machine learning techniques for inducing the expressive models and present the results.
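
A minimal sketch of the per-emotion modeling step, with placeholder score features and performance targets: one regressor is trained per emotion and then applied to an inexpressive score to render it. The feature set and model family here are assumptions; the paper explores and compares several techniques.

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(1)
    EMOTIONS = ["happy", "sad", "angry", "fearful"]

    models = {}
    for emotion in EMOTIONS:
        X = rng.normal(size=(100, 3))          # score features (placeholder)
        y = 1.0 + 0.2 * rng.normal(size=100)   # e.g. duration ratios (placeholder)
        models[emotion] = DecisionTreeRegressor(max_depth=4).fit(X, y)

    # Rendering: apply the chosen emotion's model to an inexpressive score.
    score = rng.normal(size=(8, 3))            # 8 notes, placeholder features
    print(models["sad"].predict(score))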


Intelligent Data Analysis | 2010

Modeling violin performances using inductive logic programming

Rafael Ramirez; Alfonso Pérez; Stefan Kersten; David Rizo; Placido Roman; José M. Iñesta

Professional musicians intuitively manipulate sound properties such as pitch, timing, amplitude and timbre in order to produce expressive performances of particular pieces. However, there is little explicit information about how and in which musical contexts this manipulation occurs. In this paper we describe a machine learning approach to modeling the knowledge applied by a musician when performing a score in order to produce an expressive performance. In particular, we apply inductive logic programming techniques in order to automatically learn models for both understanding and generating expressive violin performances.
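
The induction step itself is beyond a short sketch, but the Python below illustrates the form of model ILP produces: first-order, Horn-clause-style rules over note contexts. The rule shown (lengthen a note that precedes a longer note and falls on a weak beat) is a hand-written hypothetical example, not a rule reported in the paper.

    def precedes_longer(note, next_note):
        return next_note is not None and next_note["duration"] > note["duration"]

    def weak_beat(note):
        return not note["on_strong_beat"]

    # Hypothetical learned clause:
    #   lengthen(Note) :- precedes_longer(Note), weak_beat(Note).
    def apply_rules(melody):
        performed = []
        for i, note in enumerate(melody):
            nxt = melody[i + 1] if i + 1 < len(melody) else None
            ratio = 1.1 if precedes_longer(note, nxt) and weak_beat(note) else 1.0
            performed.append({**note, "duration": note["duration"] * ratio})
        return performed

    melody = [{"duration": 0.50, "on_strong_beat": True},
              {"duration": 0.25, "on_strong_beat": False},
              {"duration": 1.00, "on_strong_beat": True}]
    print(apply_rules(melody))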


Proceedings of the 5th International Conference on Movement and Computing | 2018

Enhancing Music Learning with Smart Technologies

Rafael Ramirez; Corrado Canepa; Simone Ghisio; Ksenia Kolykhalova; Maurizio Mancini; Erica Volta; Gualtiero Volpe; Sergio Giraldo; Oscar Mayor; Alfonso Pérez; George Waddell; Aaron Williamon

Learning to play a musical instrument is a difficult task, requiring the development of sophisticated skills. Nowadays, such a learning process is mostly based on the master-apprentice model. Technologies are rarely employed and are usually restricted to audio and video recording and playback. The TELMI (Technology Enhanced Learning of Musical Instrument Performance) Project seeks to design and implement new interaction paradigms for music learning and training based on state-of-the-art multimodal (audio, image, video, and motion) technologies. The project focuses on the violin as a case study. This practice work is intended as a demo, showing MOCO attendees the results the project obtained over two years of work. The demo simulates a setup at a higher education music institution, where attendees with any level of previous violin experience (or none at all) are invited to try the technologies themselves, performing basic tests of violin skill and pre-defined exercises under the guidance of the researchers involved in the project.


International Conference on Acoustics, Speech, and Signal Processing | 2012

Non-impulsive signal deconvolution for computation of violin impulse responses

Andrés Bucci; Alfonso Pérez; Jordi Bonada

This work presents a method to compute violin body impulse responses (BIR) of acoustic violins that are adapted to the signal captured with an electric violin (a violin with a transducer measuring vibration). The computed BIRs correspond to the transfer function that maps the pickup signal of the electric violin to the radiated acoustic sound of the acoustic violins. By convolving the pickup signal with the corresponding BIR, we aim to simulate the radiated sound of the acoustic violin while playing the electric one. The method to obtain the BIRs is based on signal deconvolution between a recorded audio signal of an acoustic violin and the signal coming from an electric violin's pickup. The recorded signals consist of glissandi performed with a violin-playing machine enhanced with motion sensors that provide the bowing parameters (bowing position, velocity and force). By controlling the bowing parameters and analyzing the fundamental frequency of each frame, we perform the deconvolution on equivalent frames and obtain the desired impulse response between the two instruments. A user survey of violinists and non-violinists was conducted to evaluate the obtained results with respect to the original pickup signal.
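
A minimal sketch of the core deconvolution step, assuming matched pickup and acoustic frames are already available (synthetic signals stand in below): the transfer function is estimated by regularized spectral division, which keeps the estimate stable at frequencies where the non-impulsive excitation carries little energy.

    import numpy as np

    def estimate_bir(pickup, acoustic, eps=1e-6):
        n = len(pickup) + len(acoustic) - 1
        X = np.fft.rfft(pickup, n)
        Y = np.fft.rfft(acoustic, n)
        # Regularized spectral division: H = Y X* / (|X|^2 + eps).
        H = Y * np.conj(X) / (np.abs(X) ** 2 + eps)
        return np.fft.irfft(H, n)

    rng = np.random.default_rng(2)
    pickup = rng.normal(size=2048)                    # placeholder pickup frame
    true_bir = rng.normal(size=256) * np.exp(-np.arange(256) / 40.0)
    acoustic = np.convolve(pickup, true_bir)          # simulated acoustic frame

    bir = estimate_bir(pickup, acoustic)
    print("max BIR estimate error:", np.max(np.abs(bir[:256] - true_bir)))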


International Computer Music Conference | 2007

Acquisition of violin instrumental gestures using a commercial EMF device

Esteban Maestre; Jordi Bonada; Merlijn Blaauw; Alfonso Pérez; Enric Guaus

Collaboration


Dive into Alfonso Pérez's collaborations.

Top Co-Authors

Jordi Bonada, Pompeu Fabra University
Enric Guaus, Pompeu Fabra University
Amaury Hazan, Pompeu Fabra University