Matthew E. P. Davies
Queen Mary University of London
Publication
Featured research published by Matthew E. P. Davies.
IEEE Transactions on Audio, Speech, and Language Processing | 2007
Matthew E. P. Davies; Mark D. Plumbley
We present a simple and efficient method for beat tracking of musical audio. With the aim of replicating the human ability of tapping in time to music, we formulate our approach using a two-state model. The first state performs tempo induction and tracks tempo changes, while the second maintains contextual continuity within a single tempo hypothesis. Beat times are recovered by passing the output of an onset detection function through adaptively weighted comb filterbank matrices to separately identify the beat period and alignment. We evaluate our beat tracker both in terms of the accuracy of estimated beat locations and computational complexity. In a direct comparison with existing algorithms, we demonstrate equivalent performance at significantly reduced computational cost.
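To make the comb filterbank idea concrete, here is a minimal Python sketch of the beat period (tempo) stage, assuming an onset detection function `odf` sampled at a fixed frame rate; the Rayleigh weighting parameter and lag range are illustrative assumptions, not the paper's exact values.

```python
import numpy as np

def beat_period_from_odf(odf, min_lag=20, max_lag=160):
    """Estimate the beat period (in ODF frames) with a weighted comb filterbank."""
    odf = np.asarray(odf, dtype=float)
    odf = odf - odf.mean()
    # Autocorrelation of the onset detection function (non-negative lags only).
    acf = np.correlate(odf, odf, mode="full")[len(odf) - 1:]
    acf = np.clip(acf, 0.0, None)
    # Rayleigh-shaped tempo preference weighting; beta (in frames) is illustrative.
    lags = np.arange(len(acf), dtype=float)
    beta = 43.0
    weight = (lags / beta**2) * np.exp(-(lags**2) / (2 * beta**2))
    wacf = acf * weight
    # Comb templates: each candidate period collects energy at its integer multiples.
    scores = np.zeros(max_lag + 1)
    for lag in range(min_lag, max_lag + 1):
        harmonics = np.arange(1, 5) * lag
        harmonics = harmonics[harmonics < len(wacf)]
        scores[lag] = wacf[harmonics].mean() if len(harmonics) else 0.0
    return int(np.argmax(scores))
```

With a known ODF frame rate, the tempo in BPM is `60 * frame_rate / period`; the paper's second stage recovers the beat alignment (phase) in an analogous way.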
IEEE Transactions on Audio, Speech, and Language Processing | 2012
Andre Holzapfel; Matthew E. P. Davies; José R. Zapata; João Lobato Oliveira; Fabien Gouyon
In this paper, we propose a method that can identify challenging music samples for beat tracking without ground truth. Our method, motivated by the machine learning method “selective sampling,” is based on the measurement of mutual agreement between beat sequences. In calculating this mutual agreement we show the critical influence of different evaluation measures. Using our approach we demonstrate how to compile a new evaluation dataset comprised of difficult excerpts for beat tracking and examine this difficulty in the context of perceptual and musical properties. Based on tag analysis we indicate the musical properties where future advances in beat tracking research would be most profitable and where beat tracking is too difficult to be attempted. Finally, we demonstrate how our mutual agreement method can be used to improve beat tracking accuracy on large music collections.
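As a rough illustration of mutual agreement, the sketch below scores each pair of beat tracker outputs with a simple F-measure (the ±70 ms tolerance window is an assumed value) and averages over the committee; low mean agreement flags excerpts that are likely difficult for beat tracking.

```python
import numpy as np

def beat_f_measure(est, ref, tol=0.07):
    """F-measure between two beat sequences in seconds (matching within ±tol)."""
    est, ref = np.asarray(est), np.asarray(ref)
    if len(est) == 0 or len(ref) == 0:
        return 0.0
    used = np.zeros(len(ref), dtype=bool)
    hits = 0
    for b in est:
        d = np.abs(ref - b)
        i = int(np.argmin(d))
        if d[i] <= tol and not used[i]:   # greedy one-to-one matching
            hits, used[i] = hits + 1, True
    if hits == 0:
        return 0.0
    precision, recall = hits / len(est), hits / len(ref)
    return 2 * precision * recall / (precision + recall)

def mean_mutual_agreement(beat_sequences):
    """Average pairwise agreement across a committee of beat tracker outputs."""
    n = len(beat_sequences)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return float(np.mean([beat_f_measure(beat_sequences[i], beat_sequences[j])
                          for i, j in pairs]))
```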
IEEE Transactions on Audio, Speech, and Language Processing | 2012
Norberto Degara; Enrique Argones Rúa; Antonio Pena; Soledad Torres-Guijarro; Matthew E. P. Davies; Mark D. Plumbley
A new probabilistic framework for beat tracking of musical audio is presented. The method estimates the time between consecutive beat events and exploits both beat and non-beat information by explicitly modeling non-beat states. In addition to the beat times, a measure of the expected accuracy of the estimated beats is provided. The quality of the observations used for beat tracking is measured and the reliability of the beats is automatically calculated. A k-nearest neighbor regression algorithm is proposed to predict the accuracy of the beat estimates. The performance of the beat tracking system is statistically evaluated using a database of 222 musical signals of various genres. We show that modeling non-beat states leads to a significant increase in performance. In addition, a large experiment where the parameters of the model are automatically learned has been completed. Results show that simple approximations for the parameters of the model can be used. Furthermore, the performance of the system is compared with existing algorithms. Finally, a new perspective for beat tracking evaluation is presented. We show how reliability information can be successfully used to increase the mean performance of the proposed algorithm and discuss how far automatic beat tracking is from human tapping.
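The accuracy prediction step can be sketched with an off-the-shelf k-nearest neighbor regressor; the features and targets below are random placeholders standing in for observation-quality measures and accuracies measured on annotated data.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
# Placeholders: each row holds observation-quality features for one training
# excerpt; each target is that excerpt's measured beat tracking accuracy.
X_train = rng.random((200, 4))
y_train = rng.random(200)

knn = KNeighborsRegressor(n_neighbors=5, weights="distance")
knn.fit(X_train, y_train)

# Predicted reliability for new, unannotated excerpts.
X_new = rng.random((3, 4))
predicted_accuracy = knn.predict(X_new)
print(predicted_accuracy)
```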
Workshop on Image Analysis for Multimedia Interactive Services | 2003
Chris Duxbury; Juan Pablo Bello; Matthew E. P. Davies; Mark B. Sandler
In this paper, we present a new approach to solving the problem of sound onset detection for note-based segmentation of musical audio. The more traditional solution of looking at differences in energy and the more recently proposed approach of using deviations in expected phase characteristics have been combined to produce a more robust scheme, leading to overall improvements in detection accuracy.
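A minimal sketch of one way to combine energy and phase information in a single "complex domain" detection function, assuming a mono signal `x` at sample rate `fs`; this follows the general idea rather than the authors' exact formulation.

```python
import numpy as np
from scipy.signal import stft

def complex_domain_odf(x, fs, n_fft=1024, hop=512):
    """Onset detection function combining energy and phase deviations."""
    _, _, Z = stft(x, fs, nperseg=n_fft, noverlap=n_fft - hop)
    mag, phase = np.abs(Z), np.angle(Z)
    # Predicted spectrum: previous magnitude with linearly extrapolated phase.
    phase_pred = 2 * phase[:, 1:-1] - phase[:, :-2]
    target = mag[:, 1:-1] * np.exp(1j * phase_pred)
    # Deviation between prediction and observation responds both to energy
    # changes (percussive attacks) and phase changes (soft, pitched onsets).
    dev = np.abs(Z[:, 2:] - target)
    return dev.sum(axis=0)
```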
IEEE Transactions on Audio, Speech, and Language Processing | 2014
Matthew E. P. Davies; Philippe Hamel; Kazuyoshi Yoshii; Masataka Goto
In this paper we present a system, AutoMashUpper, for making multi-song music mashups. Central to our system is a measure of “mashability” calculated between phrase sections of an input song and songs in a music collection. We define mashability in terms of harmonic and rhythmic similarity and a measure of spectral balance. The principal novelty in our approach centres on the determination of how elements of songs can be made to fit together using key transposition and tempo modification, rather than based on their unaltered properties. In this way, the properties of two songs used to model their mashability can be altered with respect to transformations performed to maximize their perceptual compatibility. AutoMashUpper has a user interface to allow users to control the parameterization of the mashability estimation. It allows users to define ranges for key shifts and tempo as well as adding, changing or removing elements from the created mashups. We evaluate AutoMashUpper by its ability to reliably segment music signals into phrase sections, and also via a listening test to examine the relationship between estimated mashability and user enjoyment.
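A toy sketch of a mashability-style score covering only the harmonic and tempo terms (the paper also uses rhythmic similarity and spectral balance): key transpositions are tried as chroma rotations, and large tempo changes are penalised.

```python
import numpy as np

def mashability(chroma_a, chroma_b, tempo_a, tempo_b, max_shift=6):
    """Score how well two sections could fit after key transposition and
    tempo modification. chroma_*: (12, frames) chromagrams; tempo_*: BPM."""
    a = chroma_a.mean(axis=1)
    best, best_shift = -np.inf, 0
    for shift in range(-max_shift, max_shift + 1):
        # Rolling the chroma axis simulates a semitone transposition.
        b = np.roll(chroma_b, shift, axis=0).mean(axis=1)
        sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
        if sim > best:
            best, best_shift = sim, shift
    # Penalise the amount of tempo change needed to align the two sections.
    stretch = abs(np.log2(tempo_b / tempo_a))
    return best * max(0.0, 1.0 - stretch), best_shift
```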
Workshop on Applications of Signal Processing to Audio and Acoustics | 2009
Matthew E. P. Davies; Mark D. Plumbley; Douglas Eck
We present a new method for generating input features for musical audio beat tracking systems. To emphasise periodic structure we derive a weighted linear combination of sub-band onset detection functions driven by a measure of sub-band beat strength. Results demonstrate improved performance over existing state-of-the-art models, in particular for musical excerpts with a steady tempo.
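The sketch below illustrates the weighted-combination idea with an assumed "beat strength" measure (autocorrelation peakiness per band), which stands in for the paper's actual weighting.

```python
import numpy as np

def combine_subband_odfs(subband_odfs):
    """Weight each sub-band onset detection function by a simple beat
    strength proxy and sum into one beat tracking input feature.
    subband_odfs: array of shape (n_bands, n_frames)."""
    subband_odfs = np.asarray(subband_odfs, dtype=float)
    weights = []
    for odf in subband_odfs:
        odf = odf - odf.mean()
        acf = np.correlate(odf, odf, mode="full")[len(odf) - 1:]
        acf = acf / (acf[0] + 1e-9)
        # Strong periodicity -> pronounced ACF peaks away from lag zero.
        weights.append(acf[10:].max())
    weights = np.clip(np.asarray(weights), 0.0, None)
    weights = weights / (weights.sum() + 1e-9)
    return weights @ subband_odfs
```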
IEEE Transactions on Audio, Speech, and Language Processing | 2012
João Lobato Oliveira; Matthew E. P. Davies; Fabien Gouyon; Luís Paulo Reis
In this paper we propose an audio beat tracking system, IBT, for multiple applications. The proposed system integrates an automatic monitoring and state recovery mechanism that applies (re-)inductions of tempo and beats on a multi-agent-based beat tracking architecture. This system sequentially processes a continuous onset detection function while propagating parallel hypotheses of tempo and beats. Beats can be predicted in a causal or in a non-causal usage mode, which makes the system suitable for diverse applications. We evaluate the performance of the system in both modes on two application scenarios: standard (using a relatively large database of audio clips) and streaming (using long audio streams made up of concatenated clips). We show experimental evidence of the usefulness of the automatic monitoring and state recovery mechanism in the streaming scenario (i.e., improvements in beat tracking accuracy and reaction time). We also show that the system performs efficiently and at a level comparable to state-of-the-art algorithms in the standard scenario. IBT is multi-platform, open-source and freely available, and it includes plugins for different popular audio analysis, synthesis and visualization platforms.
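IBT itself is open-source, so the real implementation is available; the toy sketch below only illustrates the multi-agent idea of parallel tempo/phase hypotheses scored against an onset detection function (the monitoring and state recovery mechanism is not modelled here).

```python
import numpy as np

class Agent:
    """One concurrent tempo/beat-phase hypothesis."""
    def __init__(self, period, first_beat):
        self.period, self.next_beat, self.score = period, first_beat, 0.0

def multi_agent_track(odf, periods, frame_rate):
    """Causal toy tracker: agents predict beats at their own period (in
    frames) and are rewarded by the onset evidence found there; the
    current best agent's beats are emitted."""
    agents = [Agent(p, phase) for p in periods for phase in range(0, p, 4)]
    beats = []
    for n, value in enumerate(odf):
        for a in agents:
            if n == a.next_beat:
                a.score += value              # evidence supports this hypothesis
                a.next_beat += a.period
        best = max(agents, key=lambda a: a.score)
        if n == best.next_beat - best.period:  # the leading agent just beat
            beats.append(n / frame_rate)
    return beats
```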
IEEE Signal Processing Letters | 2011
Matthew E. P. Davies; Norberto Degara; Mark D. Plumbley
We present a new evaluation method for measuring the performance of musical audio beat tracking systems. Central to our method is a novel visualization, the beat error histogram, which illustrates the metrical relationship between two quasi-periodic sequences of time instants: the output of a beat tracking system and a set of ground truth annotations. To quantify beat tracking performance we derive an information theoretic statistic from the histogram. Results indicate that our method is able to measure performance with greater precision than existing evaluation methods and implicitly cater for metrical ambiguity in tapping sequences.
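A simplified sketch of the beat error histogram and its information gain statistic: beat errors are normalised by the local inter-annotation interval, wrapped to [-0.5, 0.5), histogrammed, and compared against a uniform (chance-level) distribution. The published measure includes refinements omitted here.

```python
import numpy as np

def information_gain(beats, anns, n_bins=40):
    """Information gain (bits) of the beat error histogram.
    beats, anns: beat times in seconds (assumes >= 1 beat, >= 2 annotations)."""
    beats, anns = np.asarray(beats), np.asarray(anns)
    errors = []
    for b in beats:
        i = int(np.argmin(np.abs(anns - b)))
        # Inter-annotation interval around the nearest annotation.
        interval = anns[1] - anns[0] if i == 0 else anns[i] - anns[i - 1]
        e = (b - anns[i]) / interval
        errors.append((e + 0.5) % 1.0 - 0.5)   # wrap into [-0.5, 0.5)
    hist, _ = np.histogram(errors, bins=n_bins, range=(-0.5, 0.5))
    p = hist / hist.sum()
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return np.log2(n_bins) - entropy           # 0 bits = no better than chance
```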
IEEE Journal of Selected Topics in Signal Processing | 2011
Norberto Degara; Matthew E. P. Davies; Antonio Pena; Mark D. Plumbley
In this paper, we propose a rhythmically informed method for onset detection in polyphonic music. Music is highly structured in terms of the temporal regularity underlying onset occurrences and this rhythmic structure can be used to locate sound events. Using a probabilistic formulation, the method integrates information extracted from the audio signal and rhythmic knowledge derived from tempo estimates in order to exploit the temporal expectations associated with rhythm and make musically meaningful event detections. To do so, the system explicitly models note events in terms of the elapsed time between consecutive events and decodes the most likely sequence of onsets that led to the observed audio signal. In this way, the proposed method is able to identify likely time instants for onsets and to successfully exploit the temporal regularity of music. The goal of this work is to define a general framework to be used in combination with any onset detection function and tempo estimator. The method is evaluated using a dataset of music that contains multiple instruments playing at the same time, including singing and different music genres. Results show that the use of rhythmic information improves the commonly used adaptive thresholding onset detection method which only considers local information. It is also shown that the proposed probabilistic framework successfully exploits rhythmic information using different detection functions and tempo estimation algorithms.
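A toy dynamic-programming sketch of the decoding idea: candidate onsets are scored by the detection function plus a Gaussian prior on the elapsed time between consecutive events, derived from a tempo estimate. The paper's probabilistic model is richer (e.g., it explicitly models non-onset states).

```python
import numpy as np

def decode_onsets(odf, expected_ioi, sigma=3.0):
    """Pick onset frames that have high detection-function values AND are
    spaced near the expected inter-onset interval (in frames)."""
    n = len(odf)
    score = np.array(odf, dtype=float)       # option: t is the first onset
    prev = np.full(n, -1)
    for t in range(n):
        for s in range(max(0, t - 3 * int(expected_ioi)), t):
            # Gaussian log-prior on the gap between consecutive onsets.
            log_prior = -((t - s) - expected_ioi) ** 2 / (2 * sigma**2)
            cand = score[s] + odf[t] + log_prior
            if cand > score[t]:
                score[t], prev[t] = cand, s
    # Backtrack from the best-scoring final onset.
    t, path = int(np.argmax(score)), []
    while t != -1:
        path.append(t)
        t = prev[t]
    return path[::-1]
```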
International Conference on Acoustics, Speech, and Signal Processing | 2010
Norberto Degara; Antonio Pena; Matthew E. P. Davies; Mark D. Plumbley
In this paper we explore the relationship between the temporal and rhythmic structure of musical audio signals. Using automatically extracted rhythmic structure we present a rhythmically-aware method to combine note onset detection techniques. Our method uses top-down knowledge of repetitions of musical events to improve detection performance by modelling the temporal distribution of onset locations. Results on a publicly available database demonstrate that using musical knowledge in this way can lead to significant improvements by reducing the number of missed and spurious detections.
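A small sketch of one way to make a combined detection function rhythmically aware: two detection functions are fused, and frames near the periodic grid implied by a tempo estimate are boosted. The anchor choice and weighting below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def rhythmic_fusion(odf_a, odf_b, period):
    """Fuse two onset detection functions, then boost frames that fall
    near the periodic grid implied by a tempo estimate (period in frames)."""
    fused = odf_a / (odf_a.max() + 1e-9) + odf_b / (odf_b.max() + 1e-9)
    anchor = int(np.argmax(fused))        # strongest event anchors the grid
    frames = np.arange(len(fused))
    # Distance of each frame to the nearest grid point, in frames.
    grid_dist = np.abs(((frames - anchor + period / 2) % period) - period / 2)
    expectation = np.exp(-(grid_dist**2) / (2 * (0.1 * period) ** 2))
    return fused * (0.5 + 0.5 * expectation)  # soften, don't gate, off-grid events
```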
Collaboration
National Institute of Advanced Industrial Science and Technology