Publication


Featured research published by José R. Zapata.


ACM Multimedia | 2013

ESSENTIA: an open-source library for sound and music analysis

Dmitry Bogdanov; Nicolas Wack; Emilia Gómez; Sankalp Gulati; Perfecto Herrera; Oscar Mayor; Gerard Roma; Justin Salamon; José R. Zapata; Xavier Serra

We present Essentia 2.0, an open-source C++ library for audio analysis and audio-based music information retrieval released under the Affero GPL license. It contains an extensive collection of reusable algorithms which implement audio input/output functionality, standard digital signal processing blocks, statistical characterization of data, and a large set of spectral, temporal, tonal and high-level music descriptors. The library is also wrapped in Python and includes a number of predefined executable extractors for the available music descriptors, which facilitates its use for fast prototyping and allows setting up research experiments very rapidly. Furthermore, it includes a Vamp plugin to be used with Sonic Visualiser for visualization purposes. The library is cross-platform and currently supports Linux, Mac OS X, and Windows systems. Essentia is designed with a focus on the robustness of the provided music descriptors and is optimized in terms of the computational cost of the algorithms. The provided functionality, specifically the music descriptors included out of the box and the signal processing algorithms, is easily expandable and allows for both research experiments and development of large-scale industrial applications.


IEEE Transactions on Audio, Speech, and Language Processing | 2012

Selective Sampling for Beat Tracking Evaluation

Andre Holzapfel; Matthew E. P. Davies; José R. Zapata; João Lobato Oliveira; Fabien Gouyon

In this paper, we propose a method that can identify challenging music samples for beat tracking without ground truth. Our method, motivated by the machine learning method “selective sampling,” is based on the measurement of mutual agreement between beat sequences. In calculating this mutual agreement we show the critical influence of different evaluation measures. Using our approach we demonstrate how to compile a new evaluation dataset comprised of difficult excerpts for beat tracking and examine this difficulty in the context of perceptual and musical properties. Based on tag analysis we indicate the musical properties where future advances in beat tracking research would be most profitable and where beat tracking is too difficult to be attempted. Finally, we demonstrate how our mutual agreement method can be used to improve beat tracking accuracy on large music collections.
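As a rough illustration of the mutual-agreement idea, the sketch below scores each excerpt by the mean pairwise agreement among a committee of beat-tracker outputs and flags excerpts where the trackers mostly disagree as difficult. The F-measure-style agreement function, the 70 ms tolerance window, and the difficulty threshold are illustrative assumptions, not the paper's exact evaluation measures (whose choice the authors show is critical).

```python
from itertools import combinations

def beat_agreement(a, b, tol=0.07):
    """F-measure-style agreement between two beat-time lists (seconds).

    A beat in one list counts as matched if some beat in the other
    lies within `tol` seconds of it (70 ms is illustrative here).
    """
    if not a or not b:
        return 0.0
    precision = sum(any(abs(t - u) <= tol for u in b) for t in a) / len(a)
    recall = sum(any(abs(t - u) <= tol for u in a) for t in b) / len(b)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def mean_mutual_agreement(committee):
    """Mean pairwise agreement over all beat-tracker outputs for one excerpt."""
    pairs = list(combinations(committee, 2))
    return sum(beat_agreement(a, b) for a, b in pairs) / len(pairs)

def is_difficult(committee, threshold=0.5):
    """Flag an excerpt as difficult when the trackers mostly disagree."""
    return mean_mutual_agreement(committee) < threshold
```

On an excerpt where the trackers produce nearly identical beat sequences the mean agreement is close to 1 and the excerpt is kept as easy; where their outputs diverge beyond the tolerance window, the agreement collapses and the excerpt is flagged, with no ground truth needed.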


IEEE Transactions on Audio, Speech, and Language Processing | 2014

Multi-feature beat tracking

José R. Zapata; Matthew E. P. Davies; Emilia Gómez

A recent trend in the field of beat tracking for musical audio signals has been to explore techniques for measuring the level of agreement and disagreement between a committee of beat tracking algorithms. By using beat tracking evaluation methods to compare all pairwise combinations of beat tracker outputs, it has been shown that selecting the beat tracker which most agrees with the remainder of the committee, on a song-by-song basis, leads to improved performance which surpasses the accuracy of any individual beat tracker used on its own. In this paper we extend this idea towards presenting a single, standalone beat tracking solution which can exploit the benefit of mutual agreement without the need to run multiple separate beat tracking algorithms. In contrast to existing work, we re-cast the problem as one of selecting between the beat outputs resulting from a single beat tracking model with multiple, diverse input features. Through extended evaluation on a large annotated database, we show that our multi-feature beat tracker can outperform the state of the art, and thereby demonstrate that there is sufficient diversity in input features for beat tracking, without the need for multiple tracking models.
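The selection step described above can be sketched in a few lines: given the beat outputs from one tracking model run over several input features, pick the output that best agrees, on average, with the rest of the committee. The simple agreement function and its 70 ms tolerance are illustrative assumptions, not the evaluation measures used in the paper.

```python
def agreement(a, b, tol=0.07):
    """Symmetric, F-measure-style agreement between two beat-time lists
    (tol is an illustrative 70 ms matching window)."""
    if not a or not b:
        return 0.0
    pa = sum(any(abs(t - u) <= tol for u in b) for t in a) / len(a)
    pb = sum(any(abs(t - u) <= tol for u in a) for t in b) / len(b)
    return 0.0 if pa + pb == 0 else 2 * pa * pb / (pa + pb)

def select_most_agreeing(outputs):
    """Return the beat sequence that best agrees, on average, with the
    remaining committee members -- the song-by-song selection step."""
    def mean_agreement_with_rest(i):
        others = [o for j, o in enumerate(outputs) if j != i]
        return sum(agreement(outputs[i], o) for o in others) / len(others)
    best = max(range(len(outputs)), key=mean_agreement_with_rest)
    return outputs[best]
```

Because all outputs come from a single model fed with diverse features, only one beat-tracking algorithm needs to run, yet an outlier estimate (say, one feature locking onto the wrong metrical level) is voted down by the rest of the committee.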


International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 2012

On the automatic identification of difficult examples for beat tracking: Towards building new evaluation datasets

Andre Holzapfel; Matthew E. P. Davies; José R. Zapata; João Lobato Oliveira; Fabien Gouyon

In this paper, an approach is presented that identifies music samples which are difficult for current state-of-the-art beat trackers. In order to estimate this difficulty even for examples without ground truth, a method motivated by selective sampling is applied. This method assigns a degree of difficulty to a sample based on the mutual disagreement between the output of various beat tracking systems. On a large beat annotated dataset we show that this mutual agreement is correlated with the mean performance of the beat trackers evaluated against the ground truth, and hence can be used to identify difficult examples by predicting poor beat tracking performance. Towards the aim of advancing future beat tracking systems, we demonstrate how our method can be used to form new datasets containing a high proportion of challenging music examples.


International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 2013

Using voice suppression algorithms to improve beat tracking in the presence of highly predominant vocals

José R. Zapata; Emilia Gómez

Beat tracking estimation from music signals becomes difficult in the presence of highly predominant vocals. We compare the performance of five state-of-the-art algorithms on two datasets, a generic annotated collection and a dataset comprised of song excerpts with highly predominant vocals. Then, we use seven state-of-the-art audio voice suppression techniques and a simple low pass filter to improve beat tracking estimations in the latter case. Finally, we evaluate all the pairwise combinations between beat tracking and voice suppression methods. We confirm our hypothesis that voice suppression improves the mean performance of beat trackers for the predominant vocal collection.
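The "simple low pass filter" baseline mentioned above can be sketched as a first-order IIR filter applied before beat tracking, attenuating the high-frequency content where vocal energy tends to dominate. The implementation, cutoff, and sample rate below are illustrative assumptions, not the paper's exact configuration.

```python
import math

def lowpass(signal, cutoff_hz, sample_rate):
    """First-order IIR low-pass filter (exponential smoothing).

    Attenuates content above roughly `cutoff_hz`, e.g. much of a
    predominant vocal, before the signal is passed to a beat tracker.
    """
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = dt / (rc + dt)           # smoothing coefficient in (0, 1)
    out, prev = [], 0.0
    for x in signal:
        prev = prev + alpha * (x - prev)
        out.append(prev)
    return out
```

Fed a low-frequency tone, the filter passes most of the energy; fed a tone well above the cutoff, it strongly attenuates it, which is the behavior this preprocessing step relies on.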


International Society for Music Information Retrieval Conference (ISMIR) | 2013

Essentia: an audio analysis library for music information retrieval

Dmitry Bogdanov; Nicolas Wack; Emilia Gómez; Sankalp Gulati; Perfecto Herrera; Oscar Mayor; Gerard Roma; Justin Salamon; José R. Zapata; Xavier Serra


Audio Engineering Society 42nd International Conference: Semantic Audio | 2011

Comparative Evaluation and Combination of Audio Tempo Estimation Approaches

José R. Zapata; Emilia Gómez


International Society for Music Information Retrieval Conference (ISMIR) | 2012

Assigning a confidence threshold on automatic beat annotation in large datasets

José R. Zapata; Andre Holzapfel; Matthew E. P. Davies; João Lobato Oliveira; Fabien Gouyon


ACM SIGMultimedia Records | 2014

ESSENTIA: an open source library for audio analysis

Dmitry Bogdanov; Nicolas Wack; Emilia Gómez; Sankalp Gulati; Perfecto Herrera; Oscar Mayor; Gerard Roma; Justin Salamon; José R. Zapata; Xavier Serra


Computer Music Modeling and Retrieval (CMMR) | 2012

Improving Beat Tracking in the presence of highly predominant vocals using source separation techniques: Preliminary study

José R. Zapata; Emilia Gómez

Collaboration


Explore José R. Zapata's collaborations.

Top Co-Authors

Matthew E. P. Davies (Queen Mary University of London)
Gerard Roma (Pompeu Fabra University)
Nicolas Wack (Pompeu Fabra University)
Oscar Mayor (Pompeu Fabra University)
Xavier Serra (Pompeu Fabra University)