Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Mark B. Sandler is active.

Publication


Featured research published by Mark B. Sandler.


IEEE Signal Processing Letters | 2004

On the use of phase and energy for musical onset detection in the complex domain

Juan Pablo Bello; Chris Duxbury; Michael Davies; Mark B. Sandler

We present a study on the combined use of energy and phase information for the detection of onsets in musical signals. The resulting method improves upon both energy-based and phase-based approaches. The detection function, generated from the analysis of the signal in the complex frequency domain, is sharp at the position of onsets and smooth everywhere else. Results on a database of recordings show high detection rates with low error rates. The approach is more robust than its predecessors, both theoretically and in practice.
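
As a rough illustration of the idea, the sketch below computes a complex-domain onset detection function: each STFT frame is compared against a prediction that keeps the previous frame's magnitude and extrapolates its phase linearly. The signal `x`, sample rate, frame length, and hop size are hypothetical inputs, and this is a minimal approximation rather than the authors' implementation.

```python
# Minimal complex-domain onset detection sketch (hypothetical inputs, not the paper's code).
import numpy as np

def complex_domain_odf(x, frame_len=1024, hop=512):
    """Onset detection function: deviation of each STFT frame from a prediction
    that assumes constant magnitude and linearly evolving phase."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(x) - frame_len) // hop
    X = np.array([np.fft.rfft(window * x[i * hop : i * hop + frame_len])
                  for i in range(n_frames)])
    mag = np.abs(X)
    phase = np.angle(X)
    odf = np.zeros(n_frames)
    for n in range(2, n_frames):
        # Predicted frame: previous magnitude, phase advanced by the last phase increment.
        pred_phase = 2.0 * phase[n - 1] - phase[n - 2]
        pred = mag[n - 1] * np.exp(1j * pred_phase)
        odf[n] = np.sum(np.abs(X[n] - pred))
    return odf
```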


IEEE Transactions on Audio, Speech, and Language Processing | 2008

Structural Segmentation of Musical Audio by Constrained Clustering

Mark Levy; Mark B. Sandler

We describe a method of segmenting musical audio into structural sections based on a hierarchical labeling of spectral features. Frames of audio are first labeled as belonging to one of a number of discrete states using a hidden Markov model trained on the features. Histograms of neighboring frames are then clustered into segment-types representing distinct distributions of states, using a clustering algorithm in which temporal continuity is expressed as a set of constraints modeled by a hidden Markov random field. We give experimental results which show that in many cases the resulting segmentations correspond well to conventional notions of musical form. We show further how the constrained clustering approach can easily be extended to include prior musical knowledge, input from other machine approaches, or semi-supervision.
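
A highly simplified sketch of the histogram-of-states representation is given below. It assumes per-frame HMM state labels (`state_seq`, a hypothetical input) are already available and uses plain k-means in place of the paper's HMRF-constrained clustering, so the temporal-continuity constraints and prior musical knowledge described above are not modelled.

```python
# Simplified sketch: cluster sliding-window histograms of HMM states into segment types.
# k-means stands in for the paper's HMRF-constrained clustering (an acknowledged simplification).
import numpy as np
from sklearn.cluster import KMeans

def segment_by_state_histograms(state_seq, n_states, win=30, n_segment_types=4):
    """Return one segment-type label per frame from histograms of neighbouring state labels."""
    state_seq = np.asarray(state_seq)
    T = len(state_seq)
    hists = np.zeros((T, n_states))
    for t in range(T):
        lo, hi = max(0, t - win // 2), min(T, t + win // 2 + 1)
        counts = np.bincount(state_seq[lo:hi], minlength=n_states)
        hists[t] = counts / counts.sum()
    return KMeans(n_clusters=n_segment_types, n_init=10).fit_predict(hists)
```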


International Conference on Acoustics, Speech, and Signal Processing | 1998

Classification of audio signals using statistical features on time and wavelet transform domains

Tryphon Lambrou; Panos Kudumakis; Robert D. Speller; Mark B. Sandler; Alf D. Linney

This paper presents a study on musical signal classification, using wavelet transform analysis in conjunction with statistical pattern recognition techniques. A comparative evaluation is carried out between different wavelet analysis architectures in terms of their classification ability, as well as between different classifiers. We seek to establish which statistical measures clearly distinguish between the three musical styles of rock, piano, and jazz. Our preliminary results suggest that the features collected by the adaptive splitting wavelet transform technique performed better than the other wavelet-based techniques, achieving an overall classification accuracy of 91.67%, using either the minimum distance classifier or the least squares minimum distance classifier. Such a system can play a useful part in multimedia applications that require content-based search, classification, and retrieval of audio signals, as defined in MPEG-7.
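
The sketch below illustrates the general feature pipeline under stated simplifications: a standard dyadic wavelet decomposition (PyWavelets) stands in for the adaptive-splitting wavelet transform used in the paper, and only the simple minimum distance classifier is shown. The signal `x`, wavelet choice, and class means are hypothetical.

```python
# Sketch: per-subband statistical features + minimum distance classifier.
# A standard DWT replaces the paper's adaptive-splitting transform.
import numpy as np
import pywt
from scipy.stats import skew, kurtosis

def wavelet_stat_features(x, wavelet="db4", level=4):
    """Mean, standard deviation, skewness, and kurtosis of each wavelet subband."""
    feats = []
    for band in pywt.wavedec(x, wavelet, level=level):
        feats.extend([band.mean(), band.std(), skew(band), kurtosis(band)])
    return np.array(feats)

def min_distance_classify(feat, class_means):
    """Assign the class whose mean feature vector is nearest in Euclidean distance."""
    dists = {c: np.linalg.norm(feat - m) for c, m in class_means.items()}
    return min(dists, key=dists.get)
```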


IEEE Transactions on Multimedia | 2009

Music Information Retrieval Using Social Tags and Audio

Mark Levy; Mark B. Sandler

In this paper we describe a novel approach to applying text-based information retrieval techniques to music collections. We represent tracks with a joint vocabulary consisting of both conventional words, drawn from social tags, and audio muswords, representing characteristics of automatically-identified regions of interest within the signal. We build vector space and latent aspect models indexing words and muswords for a collection of tracks, and show experimentally that retrieval with these models is extremely well-behaved. We find in particular that retrieval performance remains good for tracks by artists unseen by our models in training, and even if tags for their tracks are extremely sparse.
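
A minimal vector space sketch of the joint-vocabulary idea is shown below. Each track is assumed to be represented as a bag of tokens mixing social tags with placeholder musword identifiers (the `mw_*` tokens and track data are invented for illustration); scikit-learn's TF-IDF model with cosine similarity stands in for the vector space and latent aspect models described in the paper.

```python
# Sketch: retrieval over a joint vocabulary of tags and placeholder "muswords".
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

tracks = {  # hypothetical track documents: tags mixed with musword tokens
    "track_a": "indie rock guitar mw_017 mw_233 mw_233",
    "track_b": "jazz piano mellow mw_101 mw_017",
}
vectorizer = TfidfVectorizer(token_pattern=r"\S+")
doc_matrix = vectorizer.fit_transform(tracks.values())

query_vec = vectorizer.transform(["mellow piano mw_101"])
scores = cosine_similarity(query_vec, doc_matrix).ravel()
for name, score in sorted(zip(tracks, scores), key=lambda p: -p[1]):
    print(name, round(score, 3))
```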


Proceedings of the 1st ACM Workshop on Audio and Music Computing Multimedia | 2006

Detecting harmonic change in musical audio

Christopher Harte; Mark B. Sandler; Martin Gasser

We propose a novel method for detecting changes in the harmonic content of musical audio signals. Our method uses a new model for Equal Tempered Pitch Class Space. This model maps 12-bin chroma vectors to the interior space of a 6-D polytope; pitch classes are mapped onto the vertices of this polytope. Close harmonic relations such as fifths and thirds appear as small Euclidean distances. We calculate the Euclidean distance between analysis frames n+1 and n-1 to develop a harmonic change measure for frame n. A peak in the detection function denotes a transition from one harmonically stable region to another. Initial experiments show that the algorithm can successfully detect harmonic changes such as chord boundaries in polyphonic audio recordings.
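
The sketch below reproduces the general shape of the method: 12-bin chroma vectors are projected into a 6-D tonal centroid space, and the harmonic change at frame n is the distance between the centroids at frames n-1 and n+1. The chroma input is assumed to be precomputed, and the projection uses the commonly cited circle-of-fifths / minor-third / major-third radii, which may differ in detail from the paper's exact pitch class space model.

```python
# Sketch of a harmonic change detection function over precomputed chroma (shape (12, n_frames)).
import numpy as np

def tonal_centroids(chroma, r=(1.0, 1.0, 0.5)):
    """Map 12-bin chroma vectors to a 6-D tonal centroid space."""
    l = np.arange(12)
    phi = np.vstack([
        r[0] * np.sin(l * 7 * np.pi / 6), r[0] * np.cos(l * 7 * np.pi / 6),  # fifths circle
        r[1] * np.sin(l * 3 * np.pi / 2), r[1] * np.cos(l * 3 * np.pi / 2),  # minor thirds
        r[2] * np.sin(l * 2 * np.pi / 3), r[2] * np.cos(l * 2 * np.pi / 3),  # major thirds
    ])
    norm = np.maximum(np.abs(chroma).sum(axis=0), 1e-9)
    return phi @ chroma / norm

def hcdf(chroma):
    """Harmonic change at frame n = distance between centroids at frames n-1 and n+1."""
    c = tonal_centroids(chroma)
    out = np.zeros(chroma.shape[1])
    out[1:-1] = np.linalg.norm(c[:, 2:] - c[:, :-2], axis=0)
    return out
```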


Pattern Recognition Letters | 1990

A combinatorial Hough transform

D. Ben-Tzvi; Mark B. Sandler

A new algorithm for computing the Hough transform is presented. It calculates the parameters associated with all possible combinations of two-point line segments among the feature points in the image, rather than calculating all possible values of one of the parameters searched. It uses information available in the distribution of image points, rather than depending solely on information extracted from the transform space. Using the algorithm, the Hough transform of sparse images is more efficiently calculated. Dense images may be segmented and similarly processed. The transform space obtained by this algorithm contains less extraneous data and more significant maxima, thus making it easier to extract the desired parameters from it.
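
The sketch below illustrates the two-point idea: every pair of feature points determines exactly one line, so each pair casts a single vote in (theta, rho) space instead of sweeping every angle for every point. The accumulator resolution, rho range, and input point list are hypothetical choices, not values from the paper.

```python
# Sketch: one (theta, rho) vote per pair of feature points.
import numpy as np
from itertools import combinations

def combinatorial_hough(points, n_theta=180, rho_max=200.0, n_rho=200):
    """Accumulate line votes from all two-point combinations of feature points."""
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for (x1, y1), (x2, y2) in combinations(points, 2):
        # Angle of the normal to the line through the two points, folded into [0, pi).
        theta = np.arctan2(x2 - x1, y1 - y2) % np.pi
        rho = x1 * np.cos(theta) + y1 * np.sin(theta)
        ti = int(theta / np.pi * n_theta) % n_theta
        ri = int((rho + rho_max) / (2 * rho_max) * n_rho)
        if 0 <= ri < n_rho:
            acc[ti, ri] += 1
    return acc  # peaks correspond to dominant lines in the image
```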


Cybernetics and Systems | 2002

Automatic music transcription and audio source separation

Mark D. Plumbley; Samer A. Abdallah; Juan Pablo Bello; Michael Davies; Giuliano Monti; Mark B. Sandler

In this article, we give an overview of a range of approaches to the analysis and separation of musical audio. In particular, we consider the problems of automatic music transcription and audio source separation, which are of particular interest to our group. Monophonic music transcription, where a single note is present at one time, can be tackled using an autocorrelation-based method. For polyphonic music transcription, with several notes at any time, other approaches can be used, such as a blackboard model or a multiple-cause/sparse coding method. The latter is based on ideas and methods related to independent component analysis (ICA), a method for sound source separation.
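
As a small illustration of the autocorrelation idea mentioned for the monophonic case, the sketch below estimates a fundamental frequency from one analysis frame. The frame, sample rate, and frequency limits are hypothetical inputs, and this is a generic textbook estimator rather than the system described in the article.

```python
# Sketch: autocorrelation-based pitch estimate for a single monophonic frame.
import numpy as np

def autocorr_pitch(frame, sr, fmin=50.0, fmax=2000.0):
    """Estimate the fundamental frequency from the peak of the autocorrelation."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_min = int(sr / fmax)
    lag_max = min(int(sr / fmin), len(ac) - 1)
    lag = lag_min + np.argmax(ac[lag_min:lag_max])
    return sr / lag
```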


International Conference on Acoustics, Speech, and Signal Processing | 2003

Phase-based note onset detection for music signals

Juan Pablo Bello; Mark B. Sandler

Note onsets mark the beginning of attack transients, short areas of a note containing rapid changes of the signal spectral content. Detecting onsets is not trivial, especially when analysing complex mixtures. Applications for note onset detection systems include time stretching, audio coding and synthesis. An alternative to standard energy-based onset detection is proposed by using phase information. It is suggested that by observing the frame-by-frame distribution of differential angles, the precise moment when onsets occur can be detected with accuracy. Statistical measures are used to build the detection function. The system is tested and tuned on a database of complex recordings.
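
The sketch below shows the core quantity: the per-bin second difference of STFT phase (the deviation from linear phase evolution), summarised per frame by its mean absolute value. The paper builds its detection function from statistical measures of this distribution across bins; only a simple mean is used here, and the signal and analysis parameters are hypothetical.

```python
# Sketch: phase-deviation onset detection function (mean absolute deviation per frame).
import numpy as np

def phase_deviation_odf(x, frame_len=1024, hop=512):
    """Mean absolute deviation from linear phase evolution, per analysis frame."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(x) - frame_len) // hop
    phase = np.angle([np.fft.rfft(window * x[i * hop : i * hop + frame_len])
                      for i in range(n_frames)])
    dev = np.diff(phase, n=2, axis=0)        # second difference of phase per bin
    dev = np.angle(np.exp(1j * dev))         # wrap back into (-pi, pi]
    odf = np.zeros(n_frames)
    odf[2:] = np.abs(dev).mean(axis=1)
    return odf
```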


EURASIP Journal on Advances in Signal Processing | 2011

Digital Audio Effects

Augusto Sarti; Udo Zoelzer; Xavier Serra; Mark B. Sandler; Simon J. Godsill

Author affiliations:
1. Dipartimento di Elettronica e Informazione (DEI), Politecnico di Milano, Milano, Italy
2. Department of Signal Processing and Communications, Helmut-Schmidt-University / University of Federal Armed Forces Hamburg, Germany
3. Music Technology Group, Department of Information and Communication Technologies & Audiovisual Institute, Universitat Pompeu Fabra, Barcelona, Spain
4. Centre for Digital Music (C4DM), School of Electronic Engineering and Computer Science, Queen Mary University of London, UK
5. Department of Engineering, University of Cambridge, UK


Journal of New Music Research | 2003

Polyphonic Score Retrieval Using Polyphonic Audio Queries: A Harmonic Modeling Approach

Jeremy Pickens; Juan Pablo Bello; Giuliano Monti; Mark B. Sandler; Tim Crawford; Matthew J. Dovey; Donald Byrd

This paper extends the familiar “query by humming” music retrieval framework into the polyphonic realm. As humming in multiple voices is quite difficult, the task is more accurately described as “query by audio example” against a collection of scores. To our knowledge, we are the first to use polyphonic audio queries to retrieve from polyphonic symbolic collections. Furthermore, as our results show, we not only use an audio query to retrieve a known-item symbolic piece, but also use it to retrieve an entire set of real-world composed variations on that piece, also in symbolic format. The harmonic modeling approach that forms the basis of this work is a new and valuable technique with both wide applicability and future potential.

Collaboration


Dive into Mark B. Sandler's collaborations.

Top Co-Authors

György Fazekas (Queen Mary University of London)
Mathieu Barthet (Queen Mary University of London)
Joshua D. Reiss (Queen Mary University of London)
Samer A. Abdallah (Queen Mary University of London)
Keunwoo Choi (Queen Mary University of London)
Thomas Wilmering (Queen Mary University of London)
Yves Raimond (Queen Mary University of London)