Enric Guaus
Pompeu Fabra University
Publication
Featured research published by Enric Guaus.
IEEE Transactions on Audio, Speech, and Language Processing | 2010
Esteban Maestre; Merlijn Blaauw; Jordi Bonada; Enric Guaus; Alfonso Pérez
Excitation-continuous music instrument control patterns are often not explicitly represented in current sound synthesis techniques when applied to automatic performance. Both physical model-based and sample-based synthesis paradigms would benefit from a flexible and accurate instrument control model, enabling the improvement of naturalness and realism. We present a framework for modeling bowing control parameters in violin performance. Nearly non-intrusive sensing techniques allow for accurate acquisition of relevant timbre-related bowing control parameter signals. We model the temporal contour of bow velocity, bow pressing force, and bow-bridge distance as sequences of short Bézier cubic curve segments. Considering different articulations, dynamics, and performance contexts, a number of note classes are defined. Contours of bowing parameters in a performance database are analyzed at the note level by following a predefined grammar that dictates characteristics of curve segment sequences for each of the classes in consideration. As a result, contour analysis of bowing parameters of each note yields an optimal representation vector that is sufficient for reconstructing original contours with significant fidelity. From the resulting representation vectors, we construct a statistical model based on Gaussian mixtures suitable for both the analysis and synthesis of bowing parameter contours. By using the estimated models, synthetic contours can be generated through a bow planning algorithm able to reproduce possible constraints caused by the finite length of the bow. Rendered contours are successfully used in two preliminary synthesis frameworks: digital waveguide-based bowed string physical modeling and sample-based spectral-domain synthesis.
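A minimal sketch of the contour representation the abstract describes: a note's bow-velocity contour rendered as a short sequence of cubic Bézier segments. The control-point values and the three-segment note shape are hypothetical, chosen only for illustration, not taken from the paper's grammar.

```python
# Sketch: representing a bow-velocity contour as a sequence of cubic
# Bézier curve segments, in the spirit of Maestre et al. (2010).
# Control-point values below are hypothetical.

def bezier_cubic(p0, p1, p2, p3, t):
    """Evaluate a cubic Bézier curve at parameter t in [0, 1]."""
    s = 1.0 - t
    return (s**3 * p0 + 3 * s**2 * t * p1
            + 3 * s * t**2 * p2 + t**3 * p3)

def render_contour(segments, samples_per_segment=50):
    """Concatenate cubic Bézier segments into one sampled contour.

    Each segment is a tuple of four control values (p0, p1, p2, p3);
    a note's contour is a short sequence of such segments.
    """
    contour = []
    for p0, p1, p2, p3 in segments:
        for i in range(samples_per_segment):
            t = i / (samples_per_segment - 1)
            contour.append(bezier_cubic(p0, p1, p2, p3, t))
    return contour

# Example: a détaché-like note — accelerate, sustain, decelerate.
segments = [(0.0, 0.2, 0.5, 0.6),    # attack: velocity rises
            (0.6, 0.65, 0.65, 0.6),  # sustain: roughly flat
            (0.6, 0.5, 0.1, 0.0)]    # release: velocity falls
velocity = render_contour(segments)
```

Each note then reduces to the handful of control values per segment, which is the kind of compact representation vector the abstract says can be modeled with Gaussian mixtures.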
international symposium on control, communications and signal processing | 2004
Eloi Batlle; Jaume Masip; Enric Guaus
The new transmission and storage technologies now available have brought together a vast amount of digital audio. All this audio is readily transferable, but it may be useless without clear knowledge of its content attached to it as metadata. This knowledge can be added manually, but that is not feasible for millions of online files. In this paper we present a method to automatically derive acoustic information about audio files and a technology to classify and retrieve audio examples.
international conference on multimedia and expo | 2004
Eloi Batlle; Jaume Masip; Enric Guaus; Pedro Cano
Audio fingerprinting technologies allow the identification of audio content without the need for external metadata or watermark embedding. They work by extracting a compact, content-based digest that summarizes a recording and comparing it against a database of previously extracted fingerprints. In this paper we present a fingerprinting scheme based on hidden Markov models. This approach achieves a high compaction of the audio signal by exploiting structural redundancies in music, and robustness to distortions thanks to the stochastic modeling. We present the basic functionality of the system as well as some results.
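A toy sketch of the identification stage such a system implies: once audio has been decoded into a sequence of model symbols, matching a query against the fingerprint database reduces to approximate string matching. The symbol alphabet, the sequences, and the use of plain Levenshtein distance are all illustrative assumptions, not the paper's actual decoder or matcher.

```python
# Sketch: matching a decoded symbol sequence against a fingerprint
# database by edit distance. Sequences below are hypothetical.

def edit_distance(a, b):
    """Levenshtein distance between two symbol sequences."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1,               # deletion
                            curr[j - 1] + 1,           # insertion
                            prev[j - 1] + (x != y)))   # substitution
        prev = curr
    return prev[-1]

def identify(query, database):
    """Return the entry whose fingerprint is closest to the query."""
    return min(database, key=lambda item: edit_distance(query, item[1]))

database = [("song_a", "ABBCADDA"), ("song_b", "CCADBBAC")]
query = "ABBCADCA"                 # distorted version of song_a
title, _ = identify(query, database)
```

The stochastic modeling in the paper makes the extracted sequences robust to distortion, so even a noisy broadcast query stays close to the stored fingerprint of the right recording.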
IEEE Transactions on Audio, Speech, and Language Processing | 2012
Alfonso Perez Carrillo; Jordi Bonada; Esteban Maestre; Enric Guaus; Merlijn Blaauw
The objective of this research is to model the relationship between actions performed by a violinist and the sound which these actions produce. Violinist actions and audio are captured during real performances by means of a newly developed sensing system from which bowing and audio descriptors are computed. A database is built with this data and used to train a generative model based on neural networks. The model is driven by a continuous sequence of bowing and fingering controls and is able to generate their corresponding sequence of spectral envelopes. The model is used for synthesis, either alone as a purely spectral model, by filling the predicted envelopes with harmonic and noisy components, or coupled with a concatenative synthesizer, where the predicted envelopes are used as time-varying filters to transform the concatenated samples. The combination of sample concatenation with the timbre model allows for the preservation of sound quality inherent in samples, while providing a high level of control. Additionally, we perform an analysis of the violinist control space and the influence of the controls on the timbre.
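One concrete step the abstract mentions is filling a predicted spectral envelope with harmonic components for the purely spectral synthesis path. The sketch below illustrates just that sampling step; the envelope function is a hypothetical stand-in for the neural network's output, not the model itself.

```python
# Sketch: sampling a predicted spectral envelope at harmonic
# frequencies, as in the "filling" step of a spectral model.
# The envelope here is a hypothetical stand-in for a network output.

def harmonic_amplitudes(envelope, f0, n_harmonics, sample_rate=44100):
    """Sample a spectral envelope at the harmonics of f0."""
    amps = []
    for k in range(1, n_harmonics + 1):
        freq = k * f0
        if freq >= sample_rate / 2:   # stop at the Nyquist frequency
            break
        amps.append(envelope(freq))
    return amps

# Hypothetical smooth envelope with a roll-off above 2 kHz.
envelope = lambda f: 1.0 / (1.0 + (f / 2000.0) ** 2)

amps = harmonic_amplitudes(envelope, f0=440.0, n_harmonics=40)
```

In the coupled configuration the same envelopes would instead act as time-varying filters over concatenated samples, as the abstract describes.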
computer music modeling and retrieval | 2010
Tan Hakan Özaslan; Enric Guaus; Eric Palacios; Josep Lluis Arcos
The study of musical expressivity is an active field in sound and music computing. The research interest comes from different motivations: to understand or model musical expressivity; to identify the expressive resources that characterize an instrument, musical genre, or performer; or to build synthesis systems able to play expressively. Our research focuses on the classical guitar and deals with modeling the use of its expressive resources. In this paper, we present a system that combines several state-of-the-art analysis algorithms to identify guitar left-hand articulations such as legatos and glissandos. After describing the components of our system, we report experiments with recordings containing single articulations and short melodies performed by a professional guitarist.
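A toy illustration of the labelling problem: given a note sequence annotated with whether an attack (a new pluck) was detected at each note start, transitions without a new attack are left-hand articulations. The semitone-step heuristic used to separate glissando-like slides from legato leaps is an invented simplification for illustration, not the paper's classifier.

```python
# Sketch: labelling note transitions as plucked, legato, or glissando.
# Input data and the semitone heuristic are hypothetical.

def label_transitions(notes):
    """Label each transition between consecutive notes.

    Each note is (midi_pitch, has_attack). A pitch change without a
    new attack is a left-hand articulation; one-semitone steps are
    taken here as glissando-like, larger slurred leaps as legato.
    """
    labels = []
    for (p0, _), (p1, attacked) in zip(notes, notes[1:]):
        if attacked:
            labels.append("plucked")
        elif abs(p1 - p0) == 1:
            labels.append("glissando")
        else:
            labels.append("legato")
    return labels

# Hypothetical short melody: pitch (MIDI number) and attack flag.
melody = [(64, True), (66, False), (67, False), (71, True)]
labels = label_transitions(melody)
```

In the real system, the pitch track and attack flags would come from the combined audio-analysis algorithms the paper describes.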
international symposium on signal processing and information technology | 2003
Enric Guaus; Eloi Batlle
We present a new rhythm, metre, and BPM visualization tool. It is based on a proposed rhythm transform that maps audio data from the time domain to a so-called rhythm domain. The advantage of this method is that data in the rhythm domain can be interpreted both as frequency-domain information (for BPM detection) and as time-domain information (for metre detection). Some musical information, such as the metre (simple or compound, duple or triple, swung or non-swung), can be extracted from the input audio. The method is based on the periodogram of the processed input data, and the different musical features are extracted using well-known techniques.
Fuzzy Sets and Systems | 2013
Josep Lluis Arcos; Enric Guaus; Tan Hakan Özaslan
In this paper we present our research on the design of a tool for analyzing musical expressivity. Musical expressivity is a human activity that is difficult to model computationally because of its nature: it is acquired implicitly by musicians through a long process of listening and imitation. We propose the use of soft computing techniques to deal with this problem. Specifically, from a collection of sound features obtained with state-of-the-art audio analysis algorithms, we apply a soft computing process to generate a compact and powerful representation. Moreover, we have designed a graphical user interface to provide a flexible analysis tool. We are using the analysis tool in the guitarLab project, focused on the study of musical expressivity in the classical guitar.
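A small sketch of the kind of soft-computing step such a representation implies: mapping a raw, continuous sound feature onto overlapping linguistic labels through fuzzy membership functions. The feature, labels, and breakpoints below are hypothetical, not the tool's actual design.

```python
# Sketch: fuzzifying a continuous sound feature into linguistic
# labels with triangular membership functions. Values are hypothetical.

def triangular(x, a, b, c):
    """Triangular membership function: peaks at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def fuzzify_attack(ms):
    """Describe an attack duration (in ms) by fuzzy label memberships."""
    return {
        "sharp":  triangular(ms, -1, 0, 40),
        "medium": triangular(ms, 20, 60, 100),
        "soft":   triangular(ms, 80, 150, 300),
    }

memberships = fuzzify_attack(30)   # partly "sharp", partly "medium"
```

Replacing exact feature values with graded label memberships is one way such a representation stays compact while tolerating the variability inherent in expressive performance.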
Signal and Image Processing | 2002
Eloi Batlle; Jaume Masip; Enric Guaus
international symposium/conference on music information retrieval | 2008
Mohamed Sordo; Òscar Celma; Martin Blech; Enric Guaus
Department of Information and Communication Technologies | 2009
Enric Guaus