Paul Magron
Institut Mines-Télécom
Publications
Featured research published by Paul Magron.
european signal processing conference | 2015
Paul Magron; Roland Badeau; Bertrand David
This paper introduces a novel technique for reconstructing the phase of modified spectrograms of audio signals. From the analysis of mixtures of sinusoids, we obtain relationships between the phases of successive time frames in the Time-Frequency (TF) domain. To obtain similar relationships over frequencies, in particular within onset frames, we study an impulse model. Instantaneous frequencies and attack times are estimated locally to encompass the class of non-stationary signals such as vibratos. These techniques ensure both the vertical coherence of partials (over frequencies) and the horizontal coherence (over time). The method is tested on a variety of data and demonstrates better performance than traditional consistency-based approaches. We also introduce an audio restoration framework and observe that our technique outperforms traditional methods.
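A minimal sketch of the horizontal (over-time) phase unwrapping idea described above, under the assumption that each frequency channel contains a slowly varying sinusoid: the phase then advances by roughly 2*pi times the instantaneous frequency times the hop duration from one frame to the next. The function name and the bin-centre instantaneous-frequency estimate are illustrative simplifications, not the paper's estimator.

    import numpy as np

    def unwrap_phase_over_time(mag, phase_init, hop, sr, n_fft):
        """Propagate phases frame to frame, assuming locally sinusoidal content.

        mag        : (n_bins, n_frames) modified magnitude spectrogram (used here
                     only for its shape; the paper uses it to locate partials)
        phase_init : (n_bins,) phases of the first frame
        Returns an (n_bins, n_frames) phase estimate.
        """
        n_bins, n_frames = mag.shape
        # Illustrative instantaneous-frequency estimate: the bin centre frequency in Hz.
        inst_freq = np.arange(n_bins) * sr / n_fft
        phase = np.zeros((n_bins, n_frames))
        phase[:, 0] = phase_init
        for t in range(1, n_frames):
            # Linear unwrapping: the phase advances by 2*pi*f*hop/sr between frames.
            phase[:, t] = phase[:, t - 1] + 2 * np.pi * inst_freq * hop / sr
        return np.angle(np.exp(1j * phase))  # wrap to (-pi, pi]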
international conference on acoustics, speech, and signal processing | 2015
Paul Magron; Roland Badeau; Bertrand David
Nonnegative Matrix Factorization (NMF) is a powerful tool for decomposing mixtures of audio signals in the Time-Frequency (TF) domain. In applications such as source separation, phase recovery for each extracted component is a major issue, since it often leads to audible artifacts. In this paper, we present a methodology for evaluating various NMF-based source separation techniques involving phase reconstruction. For each model considered, we compare two approaches (blind separation without prior information and oracle separation with supervised model learning) in order to assess the room for improvement left to the estimation methods. Experimental results show that the High Resolution NMF (HRNMF) model is particularly promising, because it is able to take phases and correlations over time into account with great expressive power.
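As a rough illustration of the kind of pipeline being evaluated (a generic NMF-plus-masking baseline, not the paper's exact protocol or the HRNMF model), the sketch below factorizes the magnitude spectrogram with multiplicative-update NMF and reconstructs each component with the mixture phase via soft masking; this phase-agnostic reconstruction is precisely what phase-recovery methods aim to improve on.

    import numpy as np

    def nmf(V, n_components, n_iter=200, eps=1e-12):
        """Euclidean-distance NMF with multiplicative updates: V ~= W @ H."""
        rng = np.random.default_rng(0)
        W = rng.random((V.shape[0], n_components)) + eps
        H = rng.random((n_components, V.shape[1])) + eps
        for _ in range(n_iter):
            H *= (W.T @ V) / (W.T @ W @ H + eps)
            W *= (V @ H.T) / (W @ H @ H.T + eps)
        return W, H

    def separate(X, n_components):
        """Split a complex STFT X into components via NMF and soft masks.

        Each component keeps the mixture phase, which is the limitation that
        phase-recovery methods address.
        """
        W, H = nmf(np.abs(X), n_components)
        V_hat = W @ H + 1e-12
        return [np.outer(W[:, k], H[k]) / V_hat * X for k in range(n_components)]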
international conference on acoustics, speech, and signal processing | 2016
Paul Magron; Roland Badeau; Bertrand David
Nonnegative Matrix Factorization (NMF) is a powerful tool for decomposing mixtures of audio signals in the Time-Frequency (TF) domain. In the source separation framework, phase recovery for each extracted component is necessary for synthesizing time-domain signals. The Complex NMF (CNMF) model aims to jointly estimate the spectrogram and the phase of the sources, but requires constraining the phase in order to produce satisfactory-sounding results. We propose to incorporate phase constraints based on signal models within the CNMF framework: a phase unwrapping constraint that enforces a form of temporal coherence, and a constraint based on the repetition of audio events, which models the phases of the sources within onset frames. We also provide an algorithm for estimating the model parameters. The experimental results highlight the benefit of including such constraints in the CNMF framework for separating overlapping components in complex audio mixtures.
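To make the model structure concrete, here is a hedged sketch (an illustrative formulation, not the paper's algorithm or its exact penalties) of a Complex NMF cost with a phase-unwrapping penalty that favours temporal coherence of each source's phase; the paper derives dedicated update rules for the full constrained model, which are not reproduced here.

    import numpy as np

    def cnmf_objective(X, W, H, phi, inst_freq, hop, sr, sigma=1.0):
        """Complex NMF cost with a phase-unwrapping penalty (illustrative only).

        X    : (F, T) complex mixture STFT
        W, H : nonnegative factors of shapes (F, K) and (K, T)
        phi  : (K, F, T) per-source phase fields
        """
        K = W.shape[1]
        # Model: source k has magnitude W[:, k] outer H[k, :] and its own phase field.
        X_hat = sum(np.outer(W[:, k], H[k]) * np.exp(1j * phi[k]) for k in range(K))
        fit = np.sum(np.abs(X - X_hat) ** 2)
        # Temporal-coherence penalty: each phase should advance by about
        # 2*pi*f*hop/sr from one frame to the next (linear unwrapping).
        expected_step = 2 * np.pi * inst_freq[None, :, None] * hop / sr
        dev = np.angle(np.exp(1j * (phi[:, :, 1:] - phi[:, :, :-1] - expected_step)))
        return fit + sigma * np.sum(dev ** 2)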
international conference on acoustics, speech, and signal processing | 2016
Fabian-Robert Stöter; Antoine Liutkus; Roland Badeau; Bernd Edler; Paul Magron
In this paper we present a novel source separation method aiming to overcome the difficulty of modelling non-stationary signals. The method can be applied to mixtures of musical instruments with frequency and/or amplitude modulation, such as that typically caused by vibrato. It is based on a signal representation that divides the complex spectrogram into a grid of patches of arbitrary size. These complex patches are then processed by a two-dimensional discrete Fourier transform, forming a tensor representation which reveals spectral and temporal modulation textures. Our representation can be seen as an alternative to modulation transforms computed on magnitude spectrograms. An adapted factorization model makes it possible to decompose different time-varying harmonic sources based on their particular common modulation profile: hence the name Common Fate Model. The method is evaluated on mixtures of musical instruments playing the same fundamental frequency (unison), showing improvement over other state-of-the-art methods.
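A hedged sketch of the representation step only (tiling the complex STFT into non-overlapping patches and taking a 2D DFT of each patch); the tensor factorization that gives the Common Fate Model its name is not shown, and the patch layout below is one plausible choice rather than the paper's exact implementation.

    import numpy as np

    def common_fate_transform(X, patch_freq, patch_time):
        """Tile a complex STFT into patches and 2D-DFT each patch.

        X : (F, T) complex STFT; F and T are cropped to multiples of the patch sizes.
        Returns a tensor of shape (patch_freq, patch_time, F//patch_freq, T//patch_time)
        holding the modulation spectrum of every patch.
        """
        F, T = X.shape
        nf, nt = F // patch_freq, T // patch_time
        patches = X[:nf * patch_freq, :nt * patch_time]
        # Split into a grid of patches, then move the within-patch axes to the front.
        patches = patches.reshape(nf, patch_freq, nt, patch_time).transpose(1, 3, 0, 2)
        # A 2D DFT over the (frequency, time) axes of each patch reveals
        # spectral and temporal modulation textures.
        return np.fft.fft2(patches, axes=(0, 1))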
international conference on acoustics, speech, and signal processing | 2017
Paul Magron; Roland Badeau; Bertrand David
Phase reconstruction of complex components in the time-frequency domain is a challenging but necessary task for audio source separation. While traditional approaches do not exploit phase constraints that originate from signal modeling, some prior information about the phase can be obtained from sinusoidal modeling. In this paper, we introduce a probabilistic mixture model which allows us to incorporate such phase priors within a source separation framework. While the magnitudes are estimated beforehand, the phases are modeled by Von Mises random variables whose location parameters are the phase priors. We then approximate this intractable model by an anisotropic Gaussian model in which the phase dependencies are preserved. This enables us to derive an MMSE estimator of the sources which optimally combines Wiener filtering and the prior phase estimates. Experimental results highlight the potential of incorporating phase priors into mixture models for separating overlapping components in complex audio mixtures.
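As a small worked piece of the modelling (standard circular-moment identities, not the paper's full MMSE estimator): if a source has magnitude r and a Von Mises phase with location mu and concentration kappa, then E[exp(j*p*phi)] = I_p(kappa)/I_0(kappa) * exp(j*p*mu), which yields the mean, variance and pseudo-variance needed to match an anisotropic (non-circular) Gaussian.

    import numpy as np
    from scipy.special import iv  # modified Bessel functions of the first kind

    def anisotropic_gaussian_moments(r, mu, kappa):
        """Moments of X = r * exp(1j * phi) with phi ~ VonMises(mu, kappa).

        Returns the mean, variance and pseudo-variance, i.e. the parameters of
        the anisotropic Gaussian matching these first- and second-order moments.
        """
        lam1 = iv(1, kappa) / iv(0, kappa)
        lam2 = iv(2, kappa) / iv(0, kappa)
        mean = r * lam1 * np.exp(1j * mu)
        variance = r**2 * (1.0 - lam1**2)                       # E|X|^2 - |E[X]|^2
        pseudo_var = r**2 * (lam2 - lam1**2) * np.exp(2j * mu)  # E[X^2] - E[X]^2
        return mean, variance, pseudo_var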
workshop on applications of signal processing to audio and acoustics | 2015
Paul Magron; Roland Badeau; Bertrand David
Phase recovery of modified spectrograms is a major issue in audio signal processing applications, such as source separation. This paper introduces a novel technique for estimating the phases of components in complex mixtures within onset frames in the Time-Frequency (TF) domain. We propose to exploit the phase repetitions from one onset frame to another. We introduce a reference phase which characterizes a component independently of its activation times. The onset phases of a component are then modeled as the sum of this reference and an offset which is linearly dependent on the frequency. We derive a complex mixture model within onset frames and we provide two algorithms for the estimation of the model phase parameters. The model is estimated on experimental data and this technique is integrated into an audio source separation framework. The results demonstrate that this model is a promising tool for exploiting phase repetitions, and point out its potential for separating overlapping components in complex mixtures.
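A hedged sketch of the phase-repetition model in the synthesis direction only (the paper's two estimation algorithms are not reproduced), assuming the frequency-linear offset plays the role of a per-onset time shift: each onset's phase is the component's reference phase plus an offset proportional to the frequency bin.

    import numpy as np

    def onset_phase_model(psi, slopes):
        """Phase-repetition model for onset frames.

        psi    : (F,) reference phase of the component, independent of its activation times
        slopes : (N,) one offset slope per onset frame (akin to a time shift)
        Returns an (F, N) matrix of modeled onset phases, wrapped to (-pi, pi].
        """
        bins = np.arange(psi.shape[0])
        # phase(f, n) = reference(f) + slope_n * f : offset linear in frequency.
        phases = psi[:, None] + bins[:, None] * slopes[None, :]
        return np.angle(np.exp(1j * phases))

    def model_fit_error(observed, psi, slopes):
        """Mean squared circular distance between observed and modeled onset phases."""
        diff = observed - onset_phase_model(psi, slopes)
        return np.mean(np.angle(np.exp(1j * diff)) ** 2)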
workshop on applications of signal processing to audio and acoustics | 2017
Paul Magron; Jonathan Le Roux; Tuomas Virtanen
international conference on acoustics, speech, and signal processing | 2018
Paul Magron; Tuomas Virtanen
Archive | 2016
Paul Magron; Roland Badeau; Bertrand David
Archive | 2016
Paul Magron; Roland Badeau; Antoine Liutkus