Publication


Featured research published by Samer A. Abdallah.


Cybernetics and Systems | 2002

Automatic music transcription and audio source separation

Mark D. Plumbley; Samer A. Abdallah; Juan Pablo Bello; Michael Davies; Giuliano Monti; Mark B. Sandler

In this article, we give an overview of a range of approaches to the analysis and separation of musical audio. In particular, we consider the problems of automatic music transcription and audio source separation, which are of particular interest to our group. Monophonic music transcription, where a single note is present at one time, can be tackled using an autocorrelation-based method. For polyphonic music transcription, with several notes at any time, other approaches can be used, such as a blackboard model or a multiple-cause/sparse coding method. The latter is based on ideas and methods related to independent component analysis (ICA), a method for sound source separation.
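
As a concrete illustration of the autocorrelation-based method for monophonic transcription mentioned above, the sketch below estimates a single pitch per frame. It is not the authors' implementation; the frame length, search range, and voicing threshold are assumptions.

```python
# Minimal sketch (not the authors' code): autocorrelation-based pitch
# estimation for monophonic transcription, one frame at a time.
import numpy as np

def estimate_f0(frame, sr, fmin=50.0, fmax=1000.0):
    """Return an f0 estimate in Hz for one frame, or None if judged unvoiced."""
    frame = frame - frame.mean()
    # Full autocorrelation; keep non-negative lags only.
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    if ac[0] <= 0:
        return None
    lag_min = int(sr / fmax)                    # smallest lag = highest pitch
    lag_max = min(int(sr / fmin), len(ac) - 1)  # largest lag = lowest pitch
    lag = lag_min + np.argmax(ac[lag_min:lag_max])
    # Crude voicing check: the peak must be a sizeable fraction of the energy.
    if ac[lag] / ac[0] < 0.3:
        return None
    return sr / lag

# Toy usage: a 220 Hz sine is recovered to within a few Hz.
sr = 16000
t = np.arange(1024) / sr
print(estimate_f0(np.sin(2 * np.pi * 220 * t), sr))
```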


Signal Processing | 2006

Sparse representations of polyphonic music

Mark D. Plumbley; Samer A. Abdallah; Thomas Blumensath; Michael Davies

We consider two approaches for sparse decomposition of polyphonic music: a time-domain approach based on a shift-invariant model, and a frequency-domain approach based on phase-invariant power spectra. When trained on an example of a MIDI-controlled acoustic piano recording, both methods produce dictionary vectors or sets of vectors which represent underlying notes, and produce component activations related to the original MIDI score. The time-domain method is more computationally expensive, but produces sample-accurate spike-like activations and can be used for a direct time-domain reconstruction. The spectral-domain method discards phase information, but is faster than the time-domain method and retains more of the higher-frequency harmonics. These results suggest that the two methods would provide powerful and complementary approaches to automatic music transcription or object-based coding of musical audio.
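
For orientation, the sketch below runs a generic non-negative sparse coding of a power spectrogram with an L1 penalty on the activations. It is a stand-in for the paper's phase-invariant frequency-domain model rather than a reproduction of it; the dictionary size, penalty weight, and multiplicative update rule are assumptions.

```python
# Generic sketch (not the paper's exact model): non-negative sparse coding of a
# power spectrogram V ~ W @ H with an L1 penalty on the activations H.
import numpy as np

rng = np.random.default_rng(0)
F, T, K = 257, 200, 12                          # frequency bins, frames, atoms
V = np.abs(rng.standard_normal((F, T))) ** 2    # placeholder power spectrogram

W = np.abs(rng.standard_normal((F, K)))         # spectral dictionary (note-like atoms)
H = np.abs(rng.standard_normal((K, T)))         # activations (score-like)
lam, eps = 0.1, 1e-12

for _ in range(200):
    # Multiplicative updates for ||V - WH||^2 + lam * sum(H), keeping W, H >= 0.
    H *= (W.T @ V) / (W.T @ W @ H + lam + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)
    W /= np.linalg.norm(W, axis=0, keepdims=True) + eps   # unit-norm atoms avoid
                                                          # the trivial scale degeneracy

print("relative residual:", np.linalg.norm(V - W @ H) / np.linalg.norm(V))
```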


Machine Learning | 2006

Using duration models to reduce fragmentation in audio segmentation

Samer A. Abdallah; Mark B. Sandler; Christophe Rhodes; Michael A. Casey

We investigate explicit segment duration models in addressing the problem of fragmentation in musical audio segmentation. The resulting probabilistic models are optimised using Markov Chain Monte Carlo methods; in particular, we introduce a modification to Wolff’s algorithm to make it applicable to a segment classification model with an arbitrary duration prior. We apply this to a collection of pop songs, and show experimentally that the generated segmentations suffer much less from fragmentation than those produced by segmentation algorithms based on clustering, and are closer to an expert listener’s annotations, as evaluated by two different performance measures.
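
The toy example below shows why an explicit duration prior discourages fragmentation: many short segments are jointly far less probable than a few segments of plausible length. The gamma prior and its parameters are assumptions for illustration, not the model used in the paper.

```python
# Toy illustration (not the paper's model): a duration prior assigns much lower
# probability to a fragmented segmentation of the same 150 s of audio.
import numpy as np
from scipy.stats import gamma

duration_prior = gamma(a=9.0, scale=3.5)        # hypothetical prior, mean ~31.5 s

def log_prior(durations):
    """Joint log-probability of a segmentation's durations under the prior."""
    return duration_prior.logpdf(np.asarray(durations)).sum()

coherent = [32.0, 28.0, 35.0, 30.0, 25.0]                     # 5 segments
fragmented = [8.0, 4.0, 20.0, 6.0, 18.0, 30.0, 12.0,
              9.0, 5.0, 14.0, 10.0, 14.0]                     # 12 segments

print("coherent  :", log_prior(coherent))
print("fragmented:", log_prior(fragmented))    # markedly lower
```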


International Conference on Independent Component Analysis and Signal Separation | 2006

Sparse coding for convolutive blind audio source separation

Maria G. Jafari; Samer A. Abdallah; Mark D. Plumbley; Michael Davies

In this paper, we address the convolutive blind source separation (BSS) problem with a sparse independent component analysis (ICA) method, which uses ICA to find a set of basis vectors from the observed data, followed by clustering to identify the original sources. We show that, thanks to the temporally localised basis vectors that result, phase information is easily exploited to determine the clusters, using an unsupervised clustering method. Experimental results show that good performance is obtained with the proposed approach, even for short basis vectors.


International Conference on Acoustics, Speech, and Signal Processing | 2006

A Markov-Chain Monte-Carlo Approach to Musical Audio Segmentation

Christophe Rhodes; Michael A. Casey; Samer A. Abdallah; Mark B. Sandler

This paper describes a method for automatically segmenting and labelling sections in recordings of musical audio. We incorporate the user's expectations for segment duration as an explicit prior probability distribution in a Bayesian framework, and demonstrate experimentally that this method can produce accurate labelled segmentations for popular music.


International Conference on Independent Component Analysis and Signal Separation | 2004

Application of geometric dependency analysis to the separation of convolved mixtures

Samer A. Abdallah; Mark D. Plumbley

We investigate a generalisation of the structure of frequency-domain ICA as applied to the separation of convolved mixtures, and show how a geometric representation of residual dependency can be used both as an aid to visualisation and intuition, and as a tool for clustering components into independent subspaces, thus providing a solution to the source separation problem.
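
The sketch below illustrates one way such a clustering step could look, using the correlation of component amplitude envelopes as a simple stand-in for the paper's geometric dependency representation; the dependency measure, linkage choice, and function names are assumptions.

```python
# Illustrative sketch (not the paper's dependency measure): group
# frequency-domain ICA components into independent subspaces by the residual
# dependency of their amplitude envelopes.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def cluster_components(components, n_subspaces):
    """components: array (n_components, n_frames) of (possibly complex) ICA outputs."""
    env = np.abs(components)                    # amplitude envelopes
    dep = np.abs(np.corrcoef(env))              # residual dependency ~ envelope correlation
    dist = 1.0 - dep                            # turn similarity into a distance
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist, checks=False), method="average")
    return fcluster(Z, t=n_subspaces, criterion="maxclust")

# Toy usage: two dependent pairs end up in separate subspaces, e.g. [1 1 2 2].
rng = np.random.default_rng(1)
a, b = rng.standard_normal((2, 1000))
comps = np.vstack([a, 0.9 * a + 0.1 * rng.standard_normal(1000),
                   b, 0.9 * b + 0.1 * rng.standard_normal(1000)])
print(cluster_components(comps, n_subspaces=2))
```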


Neurocomputing | 2008

An adaptive stereo basis method for convolutive blind audio source separation

Maria G. Jafari; Emmanuel Vincent; Samer A. Abdallah; Mark D. Plumbley; Mike E. Davies

We consider the problem of convolutive blind source separation of stereo mixtures, where a pair of microphones records mixtures of sound sources that are convolved with the impulse response between each source and sensor. We propose an adaptive stereo basis (ASB) source separation method for such convolutive mixtures, using an adaptive transform basis which is learned from the stereo mixture pair. The stereo basis vector pairs of the transform are grouped according to the estimated relative delay between the left and right channels for each basis, and the sources are then extracted by projecting the transformed signal onto the subspace corresponding to each group of basis vector pairs. The performance of the proposed algorithm is compared with FD-ICA and DUET under different reverberation and noise conditions, using both objective distortion measures and formal listening tests. The results indicate that the proposed stereo coding method is competitive with both these algorithms at short and intermediate reverberation times, and offers significantly improved performance at low noise and short reverberation times.
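
The sketch below covers only the grouping step described above, assuming the stereo basis vector pairs have already been learned and that there are two sources; the cross-correlation delay estimate and the tiny clustering routine are illustrative stand-ins rather than the ASB method itself.

```python
# Sketch of the grouping step only: estimate each learned basis pair's
# left/right relative delay by cross-correlation, then group pairs with
# similar delays (assumption: two sources).
import numpy as np

def relative_delay(left, right):
    """Estimate the relative delay (in samples) between `left` and `right`."""
    xc = np.correlate(left, right, mode="full")
    return np.argmax(xc) - (len(right) - 1)

def group_by_delay(basis_pairs, n_sources=2, n_iter=50):
    """basis_pairs: array (n_pairs, 2, length). Returns labels and delays."""
    delays = np.array([relative_delay(l, r) for l, r in basis_pairs], dtype=float)
    # Tiny 1-D k-means on the delays (a stand-in for any clustering step).
    centres = np.quantile(delays, np.linspace(0.25, 0.75, n_sources))
    for _ in range(n_iter):
        labels = np.argmin(np.abs(delays[:, None] - centres[None, :]), axis=1)
        for k in range(n_sources):
            if np.any(labels == k):
                centres[k] = delays[labels == k].mean()
    return labels, delays

# Toy usage: pairs whose channels are offset by +8 or -8 samples split into two
# groups; sources would then be extracted by projecting the transformed mixture
# onto each group of basis vector pairs.
rng = np.random.default_rng(0)
pairs = []
for d in (8, 8, -8, -8, 8, -8):
    v = rng.standard_normal(64)
    pairs.append(np.stack([v, np.roll(v, d)]))
labels, delays = group_by_delay(np.array(pairs))
print(labels, delays)
```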


International Conference on Acoustics, Speech, and Signal Processing | 2007

Flag Manifolds for Subspace ICA Problems

Yasunori Nishimori; Shotaro Akaho; Samer A. Abdallah; Mark D. Plumbley

We investigate the use of the Riemannian optimization method over the flag manifold in subspace ICA problems such as independent subspace analysis (ISA) and complex ICA. In the ISA experiment, we use the Riemannian approach over the flag manifold together with an MCMC method to overcome the problem of local minima of the ISA cost function. Experiments demonstrate the effectiveness of both Riemannian methods (simple geodesic gradient descent and hybrid geodesic gradient descent) compared with the ordinary gradient method.
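
For intuition only, the toy sketch below performs geodesic gradient ascent on the orthogonal group, the simplest member of the flag-manifold family. The cost function (diagonalising a symmetric matrix) and the step size are assumptions; neither the ISA objective nor the paper's MCMC component is included.

```python
# Toy sketch: geodesic gradient ascent on the orthogonal group O(n). The cost
# here (diagonalising a symmetric matrix C) is only an illustration, not the
# ISA objective used in the paper.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
C = A + A.T
C /= np.linalg.norm(C, 2)                           # normalise for a stable step size

W = np.linalg.qr(rng.standard_normal((n, n)))[0]    # random orthogonal start
eta = 0.1

for _ in range(2000):
    D = np.diag(np.diag(W.T @ C @ W))
    G = 4.0 * C @ W @ D                             # Euclidean gradient of
                                                    # f(W) = sum_i (w_i' C w_i)^2
    S = W.T @ G
    Omega = 0.5 * (S - S.T)                         # tangent (skew-symmetric) direction
    W = W @ expm(eta * Omega)                       # move along a geodesic of O(n)

M = W.T @ C @ W
print("off-diagonal energy:", np.linalg.norm(M - np.diag(np.diag(M))))  # near zero
```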


3rd International Workshop on Cognitive Information Processing (CIP) | 2012

Cognitive music modelling: An information dynamics approach

Samer A. Abdallah; Henrik Ekeus; Peter Foster; Andrew Robertson; Mark D. Plumbley

We describe an information-theoretic approach to the analysis of music and other sequential data, which emphasises the predictive aspects of perception, and the dynamic process of forming and modifying expectations about an unfolding stream of data, characterising these using the tools of information theory: entropies, mutual informations, and related quantities. After reviewing the theoretical foundations, we discuss a few emerging areas of application, including musicological analysis, real-time beat-tracking analysis, and the generation of musical materials as a cognitively-informed compositional aid.
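
As a small worked example of the quantities mentioned above, the sketch below computes the entropy rate and the mutual information between consecutive symbols for a first-order Markov chain; the transition matrix is arbitrary and is not a model of musical expectation from the paper.

```python
# Worked example: entropy rate and predictive information (mutual information
# between consecutive symbols) for a first-order Markov chain over 3 symbols.
import numpy as np

P = np.array([[0.8, 0.1, 0.1],      # P[i, j] = Pr(next = j | current = i)
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])

# Stationary distribution: the eigenvector of P transposed with eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi /= pi.sum()

def H(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

marginal_entropy = H(pi)                                        # H(X_t)
entropy_rate = sum(pi[i] * H(P[i]) for i in range(len(pi)))     # H(X_{t+1} | X_t)
predictive_info = marginal_entropy - entropy_rate               # I(X_t ; X_{t+1})

print(f"H(X) = {marginal_entropy:.3f} bits")
print(f"h    = {entropy_rate:.3f} bits/symbol")
print(f"I    = {predictive_info:.3f} bits")
```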


Archive | 2007

Blind Source Separation using Space–Time Independent Component Analysis

Mike E. Davies; Maria G. Jafari; Samer A. Abdallah; Emmanuel Vincent; Mark D. Plumbley

We consider the problem of convolutive blind source separation (BSS). This is usually tackled through either multichannel blind deconvolution (MCBD) or using frequency-domain independent component analysis (FD-ICA). Here, instead of using a fixed time or frequency basis to solve the convolutive blind source separation problem, we propose learning an adaptive spatial–temporal transform directly from the speech mixture. Most of the learnt space–time basis vectors exhibit properties suggesting that they represent the components of individual sources as they are observed at the microphones. Source separation can then be performed by projection onto the appropriate group of basis vectors. We go on to show that both MCBD and FD-ICA techniques can be considered as particular forms of this general separation method with certain constraints. While our space–time approach involves considerable additional computation, it is also enlightening as to the nature of the problem and has the potential for performance benefits in terms of separation and de-noising.
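
The sketch below shows how a space-time data matrix could be formed from a stereo mixture and decomposed with a generic ICA routine. The frame length, hop size, and the use of scikit-learn's FastICA are assumptions; the chapter's own learning rule may differ.

```python
# Sketch of the space-time idea: stack short left/right frames into joint
# space-time vectors and learn a basis directly from the stereo mixture.
# FastICA is used here as a generic ICA stand-in.
import numpy as np
from sklearn.decomposition import FastICA

def space_time_frames(left, right, frame_len=512, hop=256):
    """Stack corresponding left/right frames into 2*frame_len space-time vectors."""
    n_frames = 1 + (len(left) - frame_len) // hop
    X = np.empty((n_frames, 2 * frame_len))
    for i in range(n_frames):
        s = i * hop
        X[i, :frame_len] = left[s:s + frame_len]
        X[i, frame_len:] = right[s:s + frame_len]
    return X

# Placeholder stereo mixture (in practice, the two recorded microphone signals).
rng = np.random.default_rng(0)
left, right = rng.standard_normal((2, 5 * 16000))

X = space_time_frames(left, right)
ica = FastICA(n_components=32, random_state=0, max_iter=500)
activations = ica.fit_transform(X)      # component activations, one row per frame
basis = ica.mixing_                     # columns are learnt space-time basis vectors
print(basis.shape)                      # (2 * frame_len, n_components)
```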

Collaboration


Samer A. Abdallah's top co-authors:

Mark B. Sandler (Queen Mary University of London)
Maria G. Jafari (Queen Mary University of London)
Michael Davies (Queen Mary University of London)
Yves Raimond (Queen Mary University of London)
Mark D. Plumbley (Queen Mary University of London)