Publications


Featured research published by Amaury Hazan.


Lecture Notes in Computer Science | 2006

Modelling expressive performance: a regression tree approach based on strongly typed genetic programming

Amaury Hazan; Rafael Ramirez; Esteban Maestre; Alfonso Pérez; Antonio Pertusa

This paper presents a novel Strongly-Typed Genetic Programming approach for building Regression Trees in order to model expressive music performance. The approach consists of inducing a Regression Tree model from training data (monophonic recordings of Jazz standards) for transforming an inexpressive melody into an expressive one. The work presented in this paper is an extension of [1], where we induced general expressive performance rules explaining part of the training examples. Here, the emphasis is on inducing a generative model (i.e. a model capable of generating expressive performances) which covers all the training examples. We present our evolutionary approach for a one-dimensional regression task: the performed note duration ratio prediction. We then show the encouraging results of experiments with Jazz musical material, and sketch the milestones which will enable the system to generate expressive music performance in a broader sense.
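
The induced model is, at its core, a tree of typed feature tests with numeric ratios at the leaves. As a rough illustration, here is a hand-written (not learned) tree of that shape; the feature names, thresholds, and ratio values are all hypothetical, not taken from the paper.

```python
# Sketch of the kind of model the paper induces: a regression tree mapping
# note-level features to a performed/score duration ratio. This toy tree is
# hand-written; feature names, thresholds, and ratios are hypothetical.

def predict_duration_ratio(note):
    """Internal nodes test note features; leaves return a duration ratio
    (>1 means the performer lengthens the note, <1 means shortening)."""
    if note["metrical_strength"] > 0.5:        # note falls on a strong beat
        if note["interval_to_prev"] > 2:       # reached by an upward leap
            return 1.15                        # lengthen noticeably
        return 1.05                            # lengthen slightly
    if note["duration_beats"] < 0.5:           # short note on a weak beat
        return 0.85                            # shorten it
    return 0.95

note = {"metrical_strength": 0.8, "interval_to_prev": 4, "duration_beats": 1.0}
print(predict_duration_ratio(note))  # 1.15
```

In the paper the tree structure itself is evolved by strongly typed genetic programming, which constrains crossover and mutation so that boolean tests and numeric leaves can only appear where their types allow.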


International Journal on Artificial Intelligence Tools | 2006

A TOOL FOR GENERATING AND EXPLAINING EXPRESSIVE MUSIC PERFORMANCES OF MONOPHONIC JAZZ MELODIES

Rafael Ramirez; Amaury Hazan

In this paper we present a machine learning approach to modeling the knowledge applied by a musician when performing a score in order to produce an expressive performance of a piece. We describe a tool for both generating and explaining expressive music performances of monophonic Jazz melodies. The tool consists of three components: (a) a melodic transcription component which extracts a set of acoustic features from monophonic recordings, (b) a machine learning component which induces both an expressive transformation model and a set of expressive performance rules from the extracted acoustic features, and (c) a melody synthesis component which generates expressive monophonic output (MIDI or audio) from inexpressive melody descriptions using the induced expressive transformation model. We compare several machine learning techniques we have explored for inducing the expressive transformation model.
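
The three-component architecture can be sketched end to end. Everything below is a simplified stand-in for illustration, not the authors' implementation: transcription is faked, and the "induced model" is a single global duration ratio rather than a learned transformation model.

```python
# (a) transcription: audio -> per-note features. Faked here: the input is
#     already a list of score note durations in beats.
def transcribe(score_durations):
    return [{"duration": d} for d in score_durations]

# (b) learning: induce a transformation from (score, performance) pairs.
#     Stand-in: fit one global performed/score duration ratio.
def induce_model(features, performed_durations):
    ratios = [p / f["duration"] for f, p in zip(features, performed_durations)]
    return sum(ratios) / len(ratios)

# (c) synthesis: apply the induced transformation to an inexpressive score.
def synthesize(features, ratio):
    return [round(f["duration"] * ratio, 3) for f in features]

training = transcribe([1.0, 0.5, 2.0])
ratio = induce_model(training, [1.1, 0.55, 2.2])  # performer plays ~10% longer
print(synthesize(transcribe([1.0, 1.0]), ratio))  # [1.1, 1.1]
```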


Archive | 2007

A Data Mining Approach to Expressive Music Performance Modeling

Rafael Ramirez; Amaury Hazan; Esteban Maestre; Xavier Serra

In this chapter we present a data mining approach to one of the most challenging aspects of computer music: modeling the knowledge applied by a musician when performing a score in order to produce an expressive performance of a piece. We apply data mining techniques to real performance data (i.e., audio recordings) in order to induce an expressive performance model. This leads to an expressive performance system consisting of three components: (1) a melodic transcription component that extracts a set of acoustic features from the audio recordings, (2) a data mining component that induces an expressive transformation model from the set of extracted acoustic features, and (3) a melody synthesis component that generates expressive monophonic output (MIDI or audio) from inexpressive melody descriptions using the induced expressive transformation model. We describe, explore, and compare different data mining techniques for inducing the expressive transformation model.


Connection Science | 2009

What/when causal expectation modelling applied to audio signals

Amaury Hazan; Ricard Marxer; Paul Brossier; Hendrik Purwins; Perfecto Herrera; Xavier Serra

A causal system that represents a stream of music as a sequence of musical events, and generates further expected events, is presented. Starting from an auditory front-end that extracts low-level (e.g. MFCC) and mid-level features such as onsets and beats, an unsupervised clustering process builds and maintains a set of symbols aimed at representing musical stream events using both timbre and time descriptions. The time events are represented using inter-onset intervals relative to the beats. These symbols are then processed by an expectation module using Predictive Partial Match, a multiscale technique based on N-grams. To characterise the ability of the system to generate an expectation that matches both ground truth and system transcription, we introduce several measures that take into account the uncertainty associated with the unsupervised encoding of the musical sequence. The system is evaluated using a subset of the ENST-drums database of annotated drum recordings. We compare three approaches to combine timing (when) and timbre (what) expectation. In our experiments, we show that the induced representation is useful for generating expectation patterns in a causal way.
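
PPM blends n-gram models of several orders with escape probabilities; the sketch below keeps only the core idea, a single-order bigram model over discrete event symbols that expects the most frequent successor. The symbol names are illustrative, not the system's actual cluster labels.

```python
from collections import defaultdict, Counter

# Toy next-event expectation: a bigram model over event symbols.
# (A much-simplified stand-in for the multiscale PPM module.)
class BigramExpect:
    def __init__(self):
        self.counts = defaultdict(Counter)  # symbol -> successor counts

    def observe(self, sequence):
        """Update successor counts from an observed symbol stream."""
        for a, b in zip(sequence, sequence[1:]):
            self.counts[a][b] += 1

    def expect(self, symbol):
        """Return the most frequently observed successor, or None."""
        if not self.counts[symbol]:
            return None
        return self.counts[symbol].most_common(1)[0][0]

model = BigramExpect()
model.observe(["kick", "snare", "kick", "snare", "kick", "hihat"])
print(model.expect("kick"))  # snare (seen twice, vs. hihat once)
```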


Journal of New Music Research | 2005

Discovering expressive transformation rules from saxophone jazz performances

Rafael Ramirez; Amaury Hazan; Emilia Gómez; Esteban Maestre; Xavier Serra

If-then rules are one of the most expressive and intuitive knowledge representations and their application to represent musical knowledge raises particularly interesting questions. In this paper, we describe an approach to learning expressive performance rules from monophonic recordings of jazz standards by a skilled saxophonist. We have first developed a melodic transcription system which extracts a set of acoustic features from the recordings producing a melodic representation of the expressive performance played by the musician. We apply machine learning techniques, namely inductive logic programming, to this representation in order to induce first order logic rules of expressive music performance.
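
Rules of this kind pair a condition on a note and its context with an expressive transformation. A hand-written Python rendering might look like this; the feature names, thresholds, and actions are hypothetical, and in the paper such rules are induced from data with inductive logic programming, not written by hand.

```python
# Two toy if-then performance rules as (condition, action) pairs;
# the first rule whose condition holds fires.
rules = [
    (lambda n: n["interval_to_next"] > 4 and n["duration_beats"] < 0.5,
     "lengthen"),   # short note before a large leap: stretch it
    (lambda n: n["metrical_strength"] < 0.3,
     "shorten"),    # note on a weak metrical position: clip it
]

def apply_rules(note):
    for condition, action in rules:
        if condition(note):
            return action
    return "no-change"

note = {"interval_to_next": 7, "duration_beats": 0.25, "metrical_strength": 0.9}
print(apply_rules(note))  # lengthen
```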


Connection Science | 2009

Model cortical responses for the detection of perceptual onsets and beat tracking in singing

Martin Coath; Susan L. Denham; Leigh M. Smith; Henkjan Honing; Amaury Hazan; Piotr Holonowicz; Hendrik Purwins

We describe a biophysically motivated model of auditory salience based on a model of cortical responses and present results that show that the derived measure of salience can be used to identify the position of perceptual onsets in a musical stimulus successfully. The salience measure is also shown to be useful to track beats and predict rhythmic structure in the stimulus on the basis of its periodicity patterns. We evaluate the method using a corpus of unaccompanied freely sung stimuli and show that the method performs well, in some cases better than state-of-the-art algorithms. These results deserve attention because they are derived from a general model of auditory processing, not from a model tuned specifically for best performance in onset detection or beat-tracking tasks.
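
The paper's salience measure comes from a cortical response model; as a crude illustrative contrast, a conventional baseline simply marks onsets wherever a frame-wise energy envelope rises sharply. The envelope values and threshold below are made up.

```python
# Baseline-style onset picking on an energy envelope (one value per frame):
# report frames where energy jumps by more than `rise` over the previous frame.
# This is a standard, much simpler approach than the cortical salience model.

def detect_onsets(envelope, rise=0.3):
    """Return frame indices where the envelope increases by more than `rise`."""
    return [i for i in range(1, len(envelope))
            if envelope[i] - envelope[i - 1] > rise]

env = [0.1, 0.1, 0.8, 0.7, 0.2, 0.9, 0.8]
print(detect_onsets(env))  # [2, 5]
```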


Lecture Notes in Computer Science | 2005

Understanding expressive music performance using genetic algorithms

Rafael Ramirez; Amaury Hazan

In this paper, we describe an approach to learning expressive performance rules from monophonic Jazz standards recordings by a skilled saxophonist. We use a melodic transcription system which extracts a set of acoustic features from the recordings producing a melodic representation of the expressive performance played by the musician. We apply genetic algorithms to this representation in order to induce rules of expressive music performance. The rules collected during different runs of our system are of musical interest and have a good prediction accuracy.


Intelligent User Interfaces | 2005

Towards automatic transcription of expressive oral percussive performances

Amaury Hazan

We describe a tool for transcribing voice-generated percussive rhythms. The system consists of: (a) a segmentation component which separates the monophonic input stream into percussive events, (b) a descriptors generation component that computes a set of acoustic features from each of the extracted segments, and (c) a machine learning component which assigns a symbolic class to each of the segmented sounds of the input stream. We describe each of these components and compare different machine learning strategies that can be used to obtain a symbolic representation of the oral percussive performance.
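
A deliberately tiny version of the three components, assuming the input is already an amplitude envelope rather than raw audio; the threshold, the single descriptor, and the class centroids are all illustrative, not the paper's.

```python
# (a) segmentation: split an amplitude envelope into events above a threshold.
def segment(envelope, thresh=0.5):
    events, current = [], []
    for x in envelope:
        if x > thresh:
            current.append(x)
        elif current:
            events.append(current)
            current = []
    if current:
        events.append(current)
    return events

# (b) descriptor generation: one crude feature per event (mean amplitude).
def describe(event):
    return sum(event) / len(event)

# (c) classification: nearest-centroid assignment to a symbolic class.
CENTROIDS = {"kick": 0.9, "hihat": 0.6}  # hypothetical class centroids

def classify(descriptor):
    return min(CENTROIDS, key=lambda c: abs(CENTROIDS[c] - descriptor))

env = [0.0, 0.9, 0.95, 0.0, 0.55, 0.6, 0.0]
print([classify(describe(e)) for e in segment(env)])  # ['kick', 'hihat']
```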


Genetic and Evolutionary Computation Conference | 2007

Inducing a generative expressive performance model using a sequential-covering genetic algorithm

Rafael Ramirez; Amaury Hazan

In this paper, we describe an evolutionary approach to inducing a generative model of expressive music performance for Jazz saxophone. We begin with a collection of audio recordings of real Jazz saxophone performances from which we extract a symbolic representation of the musician's expressive performance. We then apply an evolutionary algorithm to the symbolic representation in order to obtain computational models for different aspects of expressive performance. Finally, we use these models to automatically synthesize performances with the expressiveness that characterizes the music generated by a professional saxophonist.


Journal of the Acoustical Society of America | 2008

What/when causal expectation modelling applied to percussive audio

Amaury Hazan; Paul Brossier; Ricard Marxer; Hendrik Purwins

A causal system for representing a musical stream and generating further expected events is presented. Starting from an auditory front-end which extracts low-level (e.g., spectral shape, mel frequency cepstral coefficients) and mid-level features such as onsets and beats, an unsupervised categorisation process builds and maintains a set of symbols aimed at representing musical stream events using both timbre and time descriptions. The time events are represented using inter-onset intervals relative to the beats. These symbols are then processed by an expectation module based on predictive partial match, a multiscale technique derived from N-grams. The system's capacity to generate an expectation that matches its transcription is evaluated using drum recordings from the ENST-drums database. We show that the MFCC-based representation leads to a more compact set of symbols and a better match between transcription and expectation. Also, we suggest that the system is sensitive to exposure and illustrate some prop...

Collaboration


Dive into Amaury Hazan's collaborations.

Top Co-Authors

Xavier Serra

Pompeu Fabra University

Maarten Grachten

Johannes Kepler University of Linz
