
Publication

Featured research publications by Justin Salamon.


ACM Multimedia | 2013

ESSENTIA: an open-source library for sound and music analysis

Dmitry Bogdanov; Nicolas Wack; Emilia Gómez; Sankalp Gulati; Perfecto Herrera; Oscar Mayor; Gerard Roma; Justin Salamon; José R. Zapata; Xavier Serra

We present Essentia 2.0, an open-source C++ library for audio analysis and audio-based music information retrieval released under the Affero GPL license. It contains an extensive collection of reusable algorithms which implement audio input/output functionality, standard digital signal processing blocks, statistical characterization of data, and a large set of spectral, temporal, tonal and high-level music descriptors. The library is also wrapped in Python and includes a number of predefined executable extractors for the available music descriptors, which facilitates its use for fast prototyping and allows setting up research experiments very rapidly. Furthermore, it includes a Vamp plugin to be used with Sonic Visualiser for visualization purposes. The library is cross-platform and currently supports Linux, Mac OS X, and Windows systems. Essentia is designed with a focus on the robustness of the provided music descriptors and is optimized in terms of the computational cost of the algorithms. The provided functionality, specifically the music descriptors included in-the-box and signal processing algorithms, is easily expandable and allows for both research experiments and development of large-scale industrial applications.
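The abstract mentions standard spectral descriptors among Essentia's building blocks. As an illustration of what one such descriptor computes (this is a minimal pure-Python sketch, not Essentia's implementation), the spectral centroid of an analysis frame is the magnitude-weighted mean of the bin frequencies:

```python
def spectral_centroid(magnitudes, sample_rate=44100):
    """Magnitude-weighted mean of bin frequencies for one spectrum frame."""
    n_bins = len(magnitudes)
    # For a real FFT spectrum with n_bins bins, bin k corresponds to
    # frequency k * sr / (2 * (n_bins - 1)).
    freqs = [k * sample_rate / (2 * (n_bins - 1)) for k in range(n_bins)]
    total = sum(magnitudes)
    if total == 0:
        return 0.0  # silent frame: define the centroid as 0 Hz
    return sum(f * m for f, m in zip(freqs, magnitudes)) / total

# A spectrum with all its energy in the middle bin has its centroid at
# that bin's frequency (sr/4 here).
mags = [0.0] * 513
mags[256] = 1.0
print(spectral_centroid(mags))  # 11025.0
```

In a library like Essentia this kind of per-frame descriptor is computed over a windowed STFT and then statistically summarized over the whole track.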


International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 2012

Musical genre classification using melody features extracted from polyphonic music signals

Justin Salamon; Bruno Miguel Machado Rocha; Emilia Gómez

We present a new method for musical genre classification based on high-level melodic features that are extracted directly from the audio signal of polyphonic music. The features are obtained through the automatic characterisation of pitch contours describing the predominant melodic line, extracted using a state-of-the-art audio melody extraction algorithm. Using standard machine learning algorithms the melodic features are used to classify excerpts into five different musical genres. We obtain a classification accuracy above 90% for a collection of 500 excerpts, demonstrating that successful classification can be achieved using high-level melodic features that are more meaningful to humans compared to low-level features commonly used for this task. We also compare our method to a baseline approach using low-level timbre features, and study the effect of combining these low-level features with our high-level melodic features. The results demonstrate that complementing low-level features with high-level melodic features is a promising approach.
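The classification stage described above can be sketched as follows. The feature names and the nearest-centroid classifier below are illustrative stand-ins (the paper uses its own contour-derived feature set and standard machine learning algorithms), but they show the shape of the pipeline: reduce each excerpt to a small vector of high-level melodic features, then assign the nearest genre:

```python
def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(examples):
    """examples: dict genre -> list of feature vectors. Returns per-genre centroids."""
    return {genre: centroid(vecs) for genre, vecs in examples.items()}

def classify(model, features):
    """Assign the genre whose centroid is closest in squared Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda g: dist(model[g], features))

# Toy feature vectors: [mean pitch (MIDI note), pitch range (semitones),
# vibrato rate (Hz)] -- hypothetical features for illustration only.
examples = {
    "opera": [[72.0, 24.0, 6.0], [70.0, 22.0, 5.5]],
    "pop":   [[62.0, 12.0, 0.5], [64.0, 10.0, 1.0]],
}
model = train(examples)
print(classify(model, [71.0, 23.0, 5.8]))  # opera
```

The appeal of such high-level features, as the abstract notes, is that they remain interpretable to humans, unlike low-level timbre features.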


Multimedia Information Retrieval | 2013

Tonal representations for music retrieval: from version identification to query-by-humming

Justin Salamon; Joan Serrà; Emilia Gómez

In this study we compare the use of different music representations for retrieving alternative performances of the same musical piece, a task commonly referred to as version identification. Given the audio signal of a song, we compute descriptors representing its melody, bass line and harmonic progression using state-of-the-art algorithms. These descriptors are then employed to retrieve different versions of the same musical piece using a dynamic programming algorithm based on nonlinear time series analysis. First, we evaluate the accuracy obtained using individual descriptors, and then we examine whether performance can be improved by combining these music representations (i.e. descriptor fusion). Our results show that whilst harmony is the most reliable music representation for version identification, the melody and bass line representations also carry useful information for this task. Furthermore, we show that by combining these tonal representations we can increase version detection accuracy. Finally, we demonstrate how the proposed version identification method can be adapted for the task of query-by-humming. We propose a melody-based retrieval approach, and demonstrate how melody representations extracted from recordings of a cappella singing can be successfully used to retrieve the original song from a collection of polyphonic audio. The current limitations of the proposed approach are discussed in the context of version identification and query-by-humming, and possible solutions and future research directions are proposed.
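The retrieval step aligns two descriptor sequences with dynamic programming. The paper's alignment method is based on nonlinear time series analysis; as a simpler stand-in for the same idea, plain dynamic time warping is sketched below: a low alignment cost between two melodic contours suggests the recordings are versions of the same piece, even at different tempi.

```python
def dtw_cost(seq_a, seq_b):
    """Dynamic-time-warping cost between two 1-D descriptor sequences."""
    inf = float("inf")
    n, m = len(seq_a), len(seq_b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(seq_a[i - 1] - seq_b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # skip a frame of seq_a
                                 cost[i][j - 1],      # skip a frame of seq_b
                                 cost[i - 1][j - 1])  # match both frames
    return cost[n][m]

# The same melodic contour played at a slower tempo (repeated frames)
# aligns at zero cost; an unrelated contour does not.
original = [60, 62, 64, 65, 64, 62, 60]
cover    = [60, 60, 62, 62, 64, 64, 65, 65, 64, 62, 60]
other    = [60, 55, 67, 52, 70, 50, 60]
print(dtw_cost(original, cover) < dtw_cost(original, other))  # True
```

Real version-identification systems also need invariance to key transposition, which is why descriptors are typically transposed or key-normalized before alignment.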


Journal of New Music Research | 2014

Automatic Tonic Identification in Indian Art Music: Approaches and Evaluation

Sankalp Gulati; Ashwin Bellur; Justin Salamon; Hg Ranjani; Vignesh Ishwar; Hema A. Murthy; Xavier Serra

The tonic is a fundamental concept in Indian art music. It is the base pitch, which an artist chooses in order to construct the melodies during a rāg(a) rendition, and all accompanying instruments are tuned using the tonic pitch. Consequently, tonic identification is a fundamental task for most computational analyses of Indian art music, such as intonation analysis, melodic motif analysis and rāg recognition. In this paper we review existing approaches for tonic identification in Indian art music and evaluate them on six diverse datasets for a thorough comparison and analysis. We study the performance of each method in different contexts such as the presence/absence of additional metadata, the quality of audio data, the duration of audio data, music tradition (Hindustani/Carnatic) and the gender of the singer (male/female). We show that the approaches that combine multi-pitch analysis with machine learning provide the best performance in most cases (90% identification accuracy on average), and are robust across the aforementioned contexts compared to the approaches based on expert knowledge. In addition, we also show that the performance of the latter can be improved when additional metadata is available to further constrain the problem. Finally, we present a detailed error analysis of each method, providing further insights into the advantages and limitations of the methods.
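A minimal sketch of the expert-knowledge family of approaches the paper reviews: estimate pitch values over the recording, fold them onto a single octave, and pick the most salient pitch class as the tonic candidate. The best-performing methods instead combine multi-pitch analysis with a learned classifier over histogram peaks; this toy version (reference pitch and bin count are illustrative choices) just takes the strongest folded bin.

```python
import math
from collections import Counter

def tonic_pitch_class(pitches_hz, ref_hz=55.0, bins_per_octave=12):
    """Fold per-frame pitch estimates onto one octave; return the modal bin."""
    counts = Counter()
    for f in pitches_hz:
        if f <= 0:
            continue  # skip unvoiced frames, conventionally coded as 0 Hz
        semitones = 12 * math.log2(f / ref_hz)
        counts[round(semitones) % bins_per_octave] += 1
    bin_idx, _ = counts.most_common(1)[0]
    return bin_idx

# Frames hovering around 110 Hz and its octave 220 Hz dominate, so bin 0
# (the pitch class of the 55 Hz reference) is chosen as the tonic class.
frames = [110.0] * 50 + [220.0] * 30 + [165.0] * 10 + [0.0] * 5
print(tonic_pitch_class(frames))  # 0
```

Octave folding is why such methods return a pitch class rather than an absolute frequency; resolving the correct octave is one of the error sources the paper's analysis examines.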


International World Wide Web Conference (WWW) | 2012

Melody, bass line, and harmony representations for music version identification

Justin Salamon; Joan Serrà; Emilia Gómez

In this paper we compare the use of different musical representations for the task of version identification (i.e. retrieving alternative performances of the same musical piece). We automatically compute descriptors representing the melody and bass line using a state-of-the-art melody extraction algorithm, and compare them to a harmony-based descriptor. The similarity of descriptor sequences is computed using a dynamic programming algorithm based on nonlinear time series analysis which has been successfully used for version identification with harmony descriptors. After evaluating the accuracy of individual descriptors, we assess whether performance can be improved by descriptor fusion, for which we apply a classification approach, comparing different classification algorithms. We show that both melody and bass line descriptors carry useful information for version identification, and that combining them increases version detection accuracy. Whilst harmony remains the most reliable musical representation for version identification, we demonstrate how in some cases performance can be improved by combining it with melody and bass line descriptions. Finally, we identify some of the limitations of the proposed descriptor fusion approach, and discuss directions for future research.
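The descriptor-fusion idea above can be sketched as late fusion: each descriptor (melody, bass line, harmony) yields its own similarity score for a pair of recordings, and a decision rule over those scores judges whether the pair are versions of the same piece. The weights and threshold below are invented for illustration; the paper trains standard classifiers on the scores instead of hand-setting weights.

```python
def fuse(scores, weights=None, threshold=0.5):
    """Weighted late fusion of per-descriptor similarity scores in [0, 1]."""
    if weights is None:
        # Harmony was found the most reliable single descriptor, so it gets
        # the largest (made-up) weight in this sketch.
        weights = {"harmony": 0.5, "melody": 0.3, "bass": 0.2}
    combined = sum(weights[k] * scores[k] for k in weights)
    return combined >= threshold

print(fuse({"harmony": 0.8, "melody": 0.6, "bass": 0.3}))  # True
print(fuse({"harmony": 0.2, "melody": 0.4, "bass": 0.1}))  # False
```

The point of fusion is exactly what the abstract reports: melody and bass line scores can rescue pairs where the harmony score alone falls just below threshold.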


Proc. of the 14th Int. Conference on Digital Audio Effects (DAFx-11) | 2011

Sinusoid Extraction and Salience Function Design for Predominant Melody Estimation

Justin Salamon; Emilia Gómez; Jordi Bonada


International Society for Music Information Retrieval Conference (ISMIR) | 2012

Predominant Fundamental Frequency Estimation vs Singing Voice Separation for the Automatic Transcription of Accompanied Flamenco Singing

Emilia Gómez; Francisco J. Cañadas-Quesada; Justin Salamon; Jordi Bonada; Pedro Vera-Candeas; Pablo Cabañas Molero


International Society for Music Information Retrieval Conference (ISMIR) | 2012

Current Challenges in the Evaluation of Predominant Melody Extraction Algorithms

Justin Salamon; Julián Urbano


6th Music Information Retrieval Evaluation eXchange (MIREX 2011) extended abstract | 2011

Melody Extraction from Polyphonic Music: MIREX 2011

Justin Salamon; Emilia Gómez


5th Music Information Retrieval Evaluation eXchange (MIREX 2010) extended abstract | 2010

Melody Extraction from Polyphonic Music Audio

Justin Salamon; Emilia Gómez

Collaboration

Top co-authors of Justin Salamon:

Jordi Bonada (Pompeu Fabra University)
Xavier Serra (Pompeu Fabra University)
Gerard Roma (Pompeu Fabra University)