Publication


Featured research published by Philippe Depalle.


IEEE-SP International Symposium on Time-Frequency and Time-Scale Analysis | 1996

Analysis of sound signals with high resolution matching pursuit

Rémi Gribonval; Emmanuel Bacry; Stéphane Mallat; Philippe Depalle; Xavier Rodet

Sound recordings include transients and sustained parts. A basis expansion is not rich enough to represent all such components efficiently. Pursuit algorithms choose the decomposition vectors depending upon the signal properties; the dictionary from which these vectors are selected is much larger than a basis. Matching pursuit is fast to compute but can provide coarse representations, while basis pursuit gives a better representation at a much higher computational cost. This paper develops a high-resolution matching pursuit: a fast, high time-resolution, time-frequency analysis algorithm, making it well suited to musical applications.
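As a generic illustration of the pursuit principle described above (a minimal sketch of plain matching pursuit, not the paper's high-resolution variant; the random dictionary and two-atom signal below are synthetic), the algorithm greedily selects the dictionary atom most correlated with the current residual:

```python
import numpy as np

def matching_pursuit(x, D, n_iter):
    """Greedy matching pursuit: approximate x as a sparse sum of columns of D.

    D is an overcomplete dictionary with unit-norm columns. At each step the
    atom most correlated with the residual is selected and its contribution
    is subtracted from the residual.
    """
    r = x.astype(float).copy()
    idxs, coefs = [], []
    for _ in range(n_iter):
        corr = D.T @ r                     # correlation with every atom
        k = int(np.argmax(np.abs(corr)))   # best-matching atom
        idxs.append(k)
        coefs.append(corr[k])
        r = r - corr[k] * D[:, k]          # remove its contribution
    return idxs, coefs, r

# Toy usage: a signal built from exactly two atoms of a random dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((128, 200))
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
x = 3.0 * D[:, 10] + 1.5 * D[:, 50]
idxs, coefs, r = matching_pursuit(x, D, n_iter=2)
```

Because the dictionary is larger than a basis, each greedy step can pick whichever atom best matches the residual, which is what lets pursuit methods represent transients and sustained parts with different atoms.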


Journal of New Music Research | 1999

Automatic Characterisation of Musical Signals: Feature Extraction and Temporal Segmentation

Stéphane Rossignol; Xavier Rodet; J. Soumagne; Jean-Luc Collette; Philippe Depalle

This paper presents some results on automatic characterisation of musical and acoustic signals in terms of features attributed to signal segments. These features describe some of the musical and acoustical content of the sound and can be used in applications such as intelligent sound processing, retrieval of music and sound in databases, or music editing and labeling. The paper describes research that is at an advanced stage but still ongoing. Applications and results on various examples are presented.
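To give a flavour of such segment-level features (a minimal sketch using two classic low-level descriptors, short-time energy and zero-crossing rate, not the paper's actual feature set), features can be computed frame by frame and then thresholded or compared across frames to locate segment boundaries:

```python
import numpy as np

def frame_features(x, frame_len=512, hop=256):
    """Short-time RMS energy and zero-crossing rate per frame.

    Two classic low-level features used for temporal segmentation,
    e.g. locating onsets or noisy-versus-tonal regions.
    """
    n_frames = 1 + (len(x) - frame_len) // hop
    rms = np.empty(n_frames)
    zcr = np.empty(n_frames)
    for i in range(n_frames):
        f = x[i * hop : i * hop + frame_len]
        rms[i] = np.sqrt(np.mean(f ** 2))
        # each sign change contributes 2 to |diff(sign)|, hence the / 2
        zcr[i] = np.mean(np.abs(np.diff(np.sign(f)))) / 2.0
    return rms, zcr

# Usage: a soft low-frequency tone followed by loud noise; both features
# jump at the boundary, which a segmenter could detect.
sr = 8000
t = np.arange(sr) / sr
tone = 0.1 * np.sin(2 * np.pi * 110 * t)
noise = 0.8 * np.random.default_rng(1).standard_normal(sr)
rms, zcr = frame_features(np.concatenate([tone, noise]))
```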


Journal of New Music Research | 2006

Mapping strategies for gestural and adaptive control of digital audio effects

Vincent Verfaille; Marcelo M. Wanderley; Philippe Depalle

This paper discusses explicit mapping strategies for gestural and adaptive control of digital audio effects. We address the problem of defining what the control is and what the effect is. We then propose a mapping strategy derived from mapping techniques used in sound synthesis. The explicit mapping strategy we developed has two levels, with two layers for each level: the first level is the adaptive control, with a feature-combination layer and a control-signal-conditioning layer; the second level is the gestural control layer. We give musical examples that illustrate the usefulness of this strategy.


International Conference on Acoustics, Speech, and Signal Processing | 1997

Analytical approximations of fractional delays: Lagrange interpolators and allpass filters

Stephan Tassart; Philippe Depalle

We propose in this paper a new point of view that unifies two well-known filter families for approximating ideal fractional-delay filters: Lagrange interpolator filters (LIF) and Thiran allpass filters. We achieve this unification by approximating the ideal Fourier transform of the fractional delay according to two different Padé approximations, series expansions and continued fraction expansions, and by proving that each approximation corresponds exactly to either the LIF family or the allpass delay filter family. This leads to an efficient modular implementation of LIFs.
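For reference, the order-N Lagrange interpolator has the closed-form FIR coefficients h[n] = Π_{k≠n} (d − k)/(n − k) for a delay of d samples; a minimal direct sketch (the modular implementation the paper derives is more efficient than this textbook form):

```python
import numpy as np

def lagrange_fd(order, delay):
    """FIR coefficients of an order-N Lagrange fractional-delay filter.

    `delay` is the total delay in samples; accuracy is best when it lies
    near the middle of the interval [0, order].
    """
    n = np.arange(order + 1)
    h = np.ones(order + 1)
    for k in range(order + 1):
        mask = n != k
        h[mask] *= (delay - k) / (n[mask] - k)   # product over k != n
    return h

# Delay a slow sinusoid by 3.5 samples with a 7th-order interpolator.
h = lagrange_fd(7, 3.5)
t = np.arange(200, dtype=float)
x = np.sin(2 * np.pi * 0.01 * t)
y = np.convolve(x, h)[: len(x)]                  # x delayed by 3.5 samples
```

At low frequencies the approximation is extremely accurate; like any Lagrange interpolator, the coefficients sum to one (exact interpolation of a constant).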


International Conference on Acoustics, Speech, and Signal Processing | 2014

Phase constrained complex NMF: Separating overlapping partials in mixtures of harmonic musical sources

James Bronson; Philippe Depalle

This paper examines complex non-negative matrix factorization (CMF) as a tool for separating overlapping partials in mixtures of harmonic musical sources. Unlike non-negative matrix factorization (NMF), CMF allows for the development of source separation procedures founded on a mixture model rooted in the complex-spectrum domain (in which the superposition of overlapping sources is preserved). This paper introduces a physically motivated phase constraint based on the assumption that the source's pitch is sufficient to specify the phase evolution of the harmonics over time, uniting sinusoidal modelling of acoustic sources with the CMF analysis of their spectral representations. The CMF-based separation procedure, armed with this novel phase constraint, is demonstrated to offer superior performance to NMF when employed as a tool for separating overlapping partials in the acoustic test cases considered.
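For context, the plain NMF baseline that CMF improves upon factorizes a non-negative magnitude spectrogram as V ≈ WH, discarding phase entirely; a minimal sketch using the standard Lee-Seung multiplicative updates for the Euclidean cost (not the paper's phase-constrained CMF; the toy spectrogram below is synthetic):

```python
import numpy as np

def nmf(V, rank, n_iter=500, seed=0):
    """Magnitude-spectrogram NMF, V ~= W @ H, with the classic Lee-Seung
    multiplicative updates for the Euclidean cost. W: spectra, H: activations."""
    rng = np.random.default_rng(seed)
    F, T = V.shape
    W = rng.random((F, rank)) + 1e-3
    H = rng.random((rank, T)) + 1e-3
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)   # update activations
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)   # update spectra
    return W, H

# Toy "spectrogram": two fixed spectra active at different times.
s1 = np.array([1.0, 0.0, 0.5, 0.0])
s2 = np.array([0.0, 1.0, 0.0, 0.5])
V = np.outer(s1, [1.0, 1.0, 0.0, 0.0]) + np.outer(s2, [0.0, 0.0, 1.0, 1.0])
W, H = nmf(V, rank=2)
err = np.linalg.norm(V - W @ H)
```

Because this model works on magnitudes only, partials from different sources that overlap in a time-frequency bin simply add their magnitudes, which is exactly the limitation that motivates working in the complex-spectrum domain.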


Philosophical Transactions of the Royal Society A | 2016

Adaptive multimode signal reconstruction from time-frequency representations

Sylvain Meignen; Thomas Oberlin; Philippe Depalle; Patrick Flandrin; Stephen McLaughlin

This paper discusses methods for the adaptive reconstruction of the modes of multicomponent AM–FM signals from their time–frequency (TF) representation derived from their short-time Fourier transform (STFT). The STFT of an AM–FM component or mode spreads the information relative to that mode in the TF plane around curves commonly called ridges. An alternative view is to consider a mode as a particular TF domain termed a basin of attraction. Here we discuss two new approaches to mode reconstruction. The first determines the ridge associated with a mode by considering the location where the direction of the reassignment vector sharply changes, the technique used to determine the basin of attraction being directly derived from that used for ridge extraction. The second uses the fact that the STFT of a signal is fully characterized by its zeros (and, in the case of Gaussian noise, by the particular distribution of these zeros) to deduce an algorithm to compute the mode domains. For both techniques, mode reconstruction is then carried out by simply integrating the information inside these basins of attraction or domains.
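As a much simpler baseline than either method above (a minimal sketch that just takes the per-frame magnitude maximum, with none of the reassignment-vector or zeros-based machinery), the dominant ridge of a single-component signal can be tracked by picking the strongest STFT bin in each frame:

```python
import numpy as np

def stft_ridge(x, win_len=256, hop=64):
    """Track the dominant time-frequency ridge: for every Hann-windowed
    STFT frame, return the index of the frequency bin with maximum magnitude."""
    win = np.hanning(win_len)
    n_frames = 1 + (len(x) - win_len) // hop
    ridge = np.empty(n_frames, dtype=int)
    for i in range(n_frames):
        frame = x[i * hop : i * hop + win_len] * win
        ridge[i] = np.argmax(np.abs(np.fft.rfft(frame)))
    return ridge

# A linear chirp sweeping from 0.05 to 0.10 cycles/sample: the extracted
# ridge should rise over time, following the instantaneous frequency.
n = 4096
t = np.arange(n)
phase = 2 * np.pi * (0.05 * t + 0.05 / (2 * n) * t ** 2)
ridge = stft_ridge(np.sin(phase))
```

This naive argmax breaks down as soon as two modes cross or noise dominates a frame, which is precisely why the adaptive basin-of-attraction approaches above are needed.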


Computer Music Journal | 2014

Mapping control structures for sound synthesis: Functional and topological perspectives

Doug Van Nort; Marcelo M. Wanderley; Philippe Depalle

This article contributes a holistic conceptual framework for the notion of “mapping” that extends the classical view of mapping as parameter association. In presenting this holistic approach to mapping techniques, we apply the framework to existing works from the literature as well as to new implementations that consider this approach in their construction. As any mapping control structure for a given digital instrument is determined by the musical context in which it is used, we present musical examples that relate the relatively abstract realm of mapping design to the physically and perceptually grounded notions of control and sonic gesture. Making this connection allows mapping to be more clearly seen as a linkage between a physical action and a sonic result. In this sense, the purpose of this work is to translate the discussion on mapping so that it links an abstract and formalized approach—intended for representation and conceptualization—with a viewpoint that considers mapping in its role as a perceived correspondence between physical materials (i.e., those that act on controllers and transducers) and sonic events. This correspondence is, at its heart, driven by our cognitive and embodied understanding of the acoustic world.


IEEE Transactions on Audio, Speech, and Language Processing | 2010

Analysis/Synthesis of Sounds Generated by Sustained Contact Between Rigid Objects

Mathieu Lagrange; Gary P. Scavone; Philippe Depalle

This paper introduces an analysis/synthesis scheme for the reproduction of sounds generated by sustained contact between rigid bodies. This scheme is rooted in a Source/Filter decomposition of the sound where the filter is described as a set of poles and the source is described as a set of impulses representing the energy transfer between the interacting objects. Compared to single impacts, sustained contact interactions like rolling and sliding make the estimation of the parameters of the Source/Filter model challenging because of two issues. First, the objects are almost continuously interacting. Second, the source is generally unknown and therefore has to be modeled in a generic way. In an attempt to tackle those issues, the proposed analysis/synthesis scheme combines advanced analysis techniques for the estimation of the filter parameters and a flexible model of the source. It allows the modeling of a wide range of sounds. Examples are presented for objects of various shapes and sizes, rolling or sliding over plates of different materials. In order to demonstrate the versatility of the approach, the system is also considered for the modeling of sounds produced by percussive musical instruments.
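The Source/Filter idea can be sketched in a few lines (a toy illustration, not the paper's estimation scheme: the impulse times, amplitudes, and the single resonant mode below are made up): impulses modeling contact events are fed through a resonant two-pole filter representing one mode of the struck object.

```python
import numpy as np

def two_pole(x, freq, decay, sr=16000):
    """Resonant two-pole filter (one pair of complex-conjugate poles):
    y[n] = x[n] + 2 r cos(w) y[n-1] - r^2 y[n-2]."""
    w = 2 * np.pi * freq / sr
    r = np.exp(-1.0 / (decay * sr))      # pole radius from decay time (seconds)
    a1, a2 = 2 * r * np.cos(w), -r * r
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = x[n]
        if n >= 1:
            y[n] += a1 * y[n - 1]
        if n >= 2:
            y[n] += a2 * y[n - 2]
    return y

# Source: a few impulses standing in for contact events (hypothetical times
# and amplitudes); filter: a single 440 Hz mode with a 50 ms decay.
sr = 16000
src = np.zeros(sr // 4)
src[[0, 1000, 2500, 2700]] = [1.0, 0.6, 0.8, 0.4]
y = two_pole(src, freq=440.0, decay=0.05, sr=sr)
```

A real implementation would use many pole pairs estimated from recordings, and for rolling or sliding the impulse train would become dense and nearly continuous, which is the estimation difficulty the abstract describes.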


International Conference on Acoustics, Speech, and Signal Processing | 2012

A unified view of non-stationary sinusoidal parameter estimation methods using signal derivatives

Brian Hamilton; Philippe Depalle

In this paper, we present a unified view of three non-stationary sinusoidal parameter estimation methods which are based on taking linear transforms of a signal and its derivatives. These methods, the Distribution Derivative Method (DDM), the Generalized Derivative Method (GDM), and the Generalized Reassignment Method (GRM), are shown to be subcases of a more general method which results in a system of linear equations from which we can solve for the parameter estimators. While the GDM and GRM are known to be theoretically equivalent, we show that they are also equivalent to the DDM in one special case. Matrix formulations are established for the GDM and GRM with a polynomial log-amplitude, polynomial phase sinusoidal signal model, and a bias in previous frequency slope estimators is explicitly demonstrated.
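The common ingredient of these derivative-based methods is the identity x'(t)/x(t) = A'(t)/A(t) + jφ'(t) for a signal x(t) = A(t)e^{jφ(t)}: the imaginary part of the ratio yields the instantaneous angular frequency. A minimal numerical sketch of that identity alone (not an implementation of the DDM, GDM, or GRM themselves):

```python
import numpy as np

# A decaying 20 Hz complex sinusoid x(t) = A(t) exp(j*phi(t)).
sr = 1000.0
t = np.arange(1000) / sr
amp = np.exp(-2.0 * t)                        # slowly decaying amplitude
x = amp * np.exp(2j * np.pi * 20.0 * t)

# Numerical derivative, then Im(x'/x) / (2*pi) gives frequency in Hz;
# the real part Re(x'/x) would give the log-amplitude slope A'/A.
dx = np.gradient(x, 1.0 / sr)
f_inst = np.imag(dx / x) / (2 * np.pi)
```

The finite-difference derivative biases the estimate slightly at higher frequencies (sin(ωh)/ωh < 1), which echoes the frequency-slope estimator bias the paper demonstrates for the exact transform-domain versions.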


International Conference on Acoustics, Speech, and Signal Processing | 2004

Timbral analogies between vowels and plucked string tones

Caroline Traube; Philippe Depalle

Classical guitarists vary plucking position to achieve different timbres, from nasal and metallic (closer to the bridge) to round and mellow (closer to the middle of the string). An interesting set of timbre descriptors commonly used by guitarists seems to refer to phonetic gestures: thin, nasal, round, open, etc. Since the magnitude spectrum of guitar tones is comb-filter shaped, we propose to treat the local maxima of that comb-filter structure as vocal formants. When guitarists describe a guitar sound as round, it would mean that it sounds like a round-shaped-mouth sound, such as the vowel /O/. Although the acoustic systems of the guitar and of the voice mechanism are structurally different, we highlight the fact that guitar tones and a particular set of vowels display similar formant regions. We also investigate the possibility of applying some distinctive features of speech sounds to guitar sounds.
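The comb-filter shape mentioned above can be illustrated with the idealized plucked-string model (a textbook sketch, not the paper's formant analysis): plucking at relative position p along the string weights harmonic k by |sin(πkp)|, so harmonics at multiples of 1/p vanish and the pluck position shapes the spectral envelope.

```python
import numpy as np

def pluck_spectrum(p, n_harmonics=20):
    """Idealized harmonic amplitudes of a string plucked at relative
    position p: a |sin(pi*k*p)| comb times the 1/k^2 rolloff of an
    ideal displacement pluck."""
    k = np.arange(1, n_harmonics + 1)
    return np.abs(np.sin(np.pi * k * p)) / k ** 2

a_bridge = pluck_spectrum(0.1)   # near the bridge: bright, "metallic"
a_middle = pluck_spectrum(0.5)   # at the middle: only odd harmonics, "round"
```

Plucking at the midpoint (p = 0.5) suppresses all even harmonics, while plucking near the bridge keeps relatively more high-harmonic energy; the local maxima of this comb are what the paper reads as formant-like regions.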

Collaboration


Dive into Philippe Depalle's collaborations.

Top Co-Authors

Marcelo M. Wanderley

Association for Computing Machinery
