
Publication


Featured research published by Luis Gustavo Martins.


ACM Multimedia | 2009

Improving automatic music tag annotation using stacked generalization of probabilistic SVM outputs

Steven R. Ness; Anthony Theocharis; George Tzanetakis; Luis Gustavo Martins

Music listeners frequently use words to describe music. Personalized music recommendation systems such as Last.fm and Pandora rely on manual annotations (tags) as a mechanism for querying and navigating large music collections. A well-known issue in such recommendation systems is known as the cold-start problem: it is not possible to recommend new songs/tracks until those songs/tracks have been manually annotated. Automatic tag annotation based on content analysis is a potential solution to this problem and has recently been gaining attention. We describe how stacked generalization can be used to improve the performance of a state-of-the-art automatic tag annotation system for music based on audio content analysis and report results on two publicly available datasets.
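Stacked generalization combines the probabilistic outputs of several base classifiers by feeding them, as features, into a second-stage meta-learner. The sketch below is a minimal, hypothetical illustration of that idea in plain NumPy (simulated scores and a logistic meta-learner, not the paper's actual SVM system or datasets):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 200 clips, one binary tag. Two base models emit
# probabilistic scores, simulated here as noisy views of the true label.
y = rng.integers(0, 2, 200).astype(float)
p1 = np.clip(y + rng.normal(0, 0.4, 200), 0, 1)  # base classifier #1 scores
p2 = np.clip(y + rng.normal(0, 0.5, 200), 0, 1)  # base classifier #2 scores
X = np.column_stack([p1, p2])                    # stacked feature vectors

# Meta-learner: logistic regression trained by plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    z = 1 / (1 + np.exp(-(X @ w + b)))           # sigmoid predictions
    grad = z - y                                 # gradient of log-loss
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

stacked = (1 / (1 + np.exp(-(X @ w + b))) > 0.5).astype(float)
single = (p1 > 0.5).astype(float)
print("base model accuracy:   ", (single == y).mean())
print("stacked meta accuracy: ", (stacked == y).mean())
```

In a real tag-annotation system the second stage would be trained on cross-validated base-model outputs to avoid leaking training labels into the meta-features.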


IEEE Transactions on Audio, Speech, and Language Processing | 2008

Normalized Cuts for Predominant Melodic Source Separation

Mathieu Lagrange; Luis Gustavo Martins; Jennifer Murdoch; George Tzanetakis

The predominant melodic source, frequently the singing voice, is an important component of musical signals. In this paper, we describe a method for extracting the predominant source and corresponding melody from "real-world" polyphonic music. The proposed method is inspired by ideas from computational auditory scene analysis. We formulate predominant melodic source tracking and formation as a graph partitioning problem and solve it using the normalized cut, a global criterion for segmenting graphs that has been used in computer vision. Sinusoidal modeling is used as the underlying representation. A novel harmonicity cue which we term harmonically wrapped peak similarity is introduced. Experimental results supporting the use of this cue are presented. In addition, we show results for automatic melody extraction using the proposed approach.
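The normalized cut can be approximated spectrally: build a similarity graph over time-frequency components, form the normalized graph Laplacian, and bipartition by the sign of the Fiedler vector. Below is a small, self-contained sketch on a toy similarity matrix (hypothetical values standing in for the paper's sinusoidal-peak similarity cues):

```python
import numpy as np

# Toy similarity graph over 6 spectral peaks: peaks 0-2 belong to one source,
# peaks 3-5 to another (illustrative similarities, not the paper's actual cues).
W = np.array([
    [0.0, 0.9, 0.8, 0.1, 0.0, 0.1],
    [0.9, 0.0, 0.9, 0.0, 0.1, 0.0],
    [0.8, 0.9, 0.0, 0.1, 0.0, 0.1],
    [0.1, 0.0, 0.1, 0.0, 0.9, 0.8],
    [0.0, 0.1, 0.0, 0.9, 0.0, 0.9],
    [0.1, 0.0, 0.1, 0.8, 0.9, 0.0],
])

d = W.sum(axis=1)
D_inv_sqrt = np.diag(1 / np.sqrt(d))
# Symmetric normalized Laplacian: L = I - D^{-1/2} W D^{-1/2}
L = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt

# The eigenvector of the second-smallest eigenvalue (the Fiedler vector)
# approximates the minimum normalized cut; its sign gives the bipartition.
vals, vecs = np.linalg.eigh(L)
fiedler = D_inv_sqrt @ vecs[:, 1]      # undo the D^{1/2} change of variables
labels = (fiedler > 0).astype(int)
print(labels)
```

With a similarity matrix this clearly block-structured, the sign pattern separates peaks {0, 1, 2} from {3, 4, 5}; which group gets label 1 is arbitrary, since eigenvector sign is not determined.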


ACM Multimedia | 2008

MarsyasX: multimedia dataflow processing with implicit patching

Luís Filipe Teixeira; Luis Gustavo Martins; Mathieu Lagrange; George Tzanetakis

The design and implementation of multimedia signal processing systems is challenging, especially when efficiency and real-time performance are desired. In many modern applications, software systems must be able to handle multiple flows of various types of multimedia data such as audio and video. Researchers frequently have to rely on a combination of different software tools for each modality to assemble proof-of-concept systems that are inefficient, brittle and hard to maintain. Marsyas is a software framework originally developed to address these issues in the domain of audio processing. In this paper we describe MarsyasX, a new open-source cross-modal analysis framework that aims at a broader scope of applications. It follows a dataflow architecture where complex networks of processing objects can be assembled to form systems that can handle multiple and different types of multimedia flows with expressiveness and efficiency.
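The core idea of implicit patching is that composing processing objects inside a container automatically wires each child's output to the next child's input, with no explicit connection calls. A minimal, hypothetical sketch of that dataflow style (these class names are illustrative, not the MarsyasX API):

```python
# Hypothetical mini-dataflow in the spirit of implicit patching: putting
# processors in a Series chains them without any explicit wiring.
class Processor:
    def process(self, data):
        raise NotImplementedError

class Gain(Processor):
    """Scale every sample by a constant factor."""
    def __init__(self, g):
        self.g = g
    def process(self, data):
        return [x * self.g for x in data]

class Clip(Processor):
    """Limit samples to the range [lo, hi]."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def process(self, data):
        return [min(max(x, self.lo), self.hi) for x in data]

class Series(Processor):
    """Implicit patching: each child's output feeds the next child's input."""
    def __init__(self, *children):
        self.children = children
    def process(self, data):
        for child in self.children:
            data = child.process(data)
        return data

net = Series(Gain(2.0), Clip(-1.0, 1.0))
print(net.process([0.2, -0.7, 0.6]))   # [0.4, -1.0, 1.0]
```

Because containers are themselves processors, networks nest: a `Series` can hold other `Series` objects, which is what lets a single composite network carry several multimedia flows through one uniform interface.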


International Conference on Acoustics, Speech, and Signal Processing | 2008

A Computationally Efficient Scheme for Dominant Harmonic Source Separation

Mathieu Lagrange; Luis Gustavo Martins; George Tzanetakis

The leading voice is an important feature of musical pieces and can often be considered the dominant harmonic source. In this paper we propose a new scheme for efficient dominant harmonic source separation. This is achieved by considering a harmonicity cue, which is first compared with state-of-the-art cues using a generic evaluation methodology. The proposed separation scheme is then compared to a generic computational auditory scene analysis framework. Computational speed-up and separation performance are evaluated using source separation and music information retrieval tasks.


Content-Based Multimedia Indexing | 2007

Speaker Segmentation of Interviews Using Integrated Video and Audio Change Detectors

Mathieu Lagrange; Luis Gustavo Martins; Luís Filipe Teixeira; George Tzanetakis

In this paper, we study the use of audio and visual cues to perform speaker segmentation of audiovisual recordings of formal meetings such as interviews, lectures, or courtroom sessions. Using audio cues alone for such recordings can be ineffective due to low recording quality and a high level of background noise. We propose to use additional cues from the video stream by exploiting the relatively static locations of speakers within the scene. The experiments show that combining these multiple cues helps to identify transitions between speakers more robustly.


Journal of the Acoustical Society of America | 2007

A framework for sound source separation using spectral clustering

George Tzanetakis; Mathieu Lagrange; Luis Gustavo Martins; Jennifer Murdoch

Clustering based on the normalized cut criterion, and more generally, spectral clustering methods, are techniques originally proposed to model perceptual grouping tasks, such as image segmentation in computer vision. In this work, it is shown how such techniques can be applied to the problem of dominant melodic source separation in polyphonic music audio signals. One of the main advantages of this approach is the ability to incorporate multiple perceptually‐inspired grouping criteria into a single framework without requiring multiple processing stages, as many existing computational auditory scene analysis approaches do. Experimental results for several tasks, including dominant melody pitch detection, are presented. The system is based on a sinusoidal modeling analysis front‐end. A novel similarity cue based on harmonicity (harmonically‐wrapped peak similarity) is also introduced. The proposed system is data‐driven (i.e., requires no prior knowledge about the extracted source), causal, robust, practical,...


International Conference on Music Information Retrieval | 2007

Polyphonic Instrument Recognition Using Spectral Clustering

Luis Gustavo Martins; Juan José Burred; George Tzanetakis; Mathieu Lagrange


Journal of the Audio Engineering Society | 2010

Stereo Panning Information for Music Information Retrieval Tasks

George Tzanetakis; Luis Gustavo Martins; Kirk McNally; Randy Jones


International Computer Music Conference | 2008

Interoperability and the Marsyas 0.2 Runtime

George Tzanetakis; Randy Jones; Carlos Castillo; Luis Gustavo Martins; Luís Filipe Teixeira; Mathieu Lagrange


Journal of the Audio Engineering Society | 2007

Semi-Automatic Mono to Stereo Up-Mixing Using Sound Source Formation

Mathieu Lagrange; Luis Gustavo Martins; George Tzanetakis

Collaboration


An overview of Luis Gustavo Martins's collaborations.

Top Co-Authors
Randy Jones

University of Victoria
