Publications

Featured research published by Isabel Barbancho.


IEEE Transactions on Audio, Speech, and Language Processing | 2012

Automatic Transcription of Guitar Chords and Fingering From Audio

Ana M. Barbancho; Anssi Klapuri; Lorenzo J. Tardón; Isabel Barbancho

This paper proposes a method for automatically extracting fingering configurations from a recorded guitar performance. 330 different fingering configurations are considered, corresponding to different versions of the major, minor, major 7th, and minor 7th chords played on the guitar fretboard. The method is formulated as a hidden Markov model, where the hidden states correspond to the different fingering configurations and the observed acoustic features are obtained from a multiple fundamental frequency estimator that measures the salience of a range of candidate note pitches within individual time frames. Transitions between consecutive fingerings are constrained by a musical model trained on a database of chord sequences and by a heuristic cost function that measures the physical difficulty of moving from one configuration of finger positions to another. The method was evaluated on recordings of acoustic, electric, and Spanish guitars and clearly outperformed a non-guitar-specific reference chord transcription method, despite the significantly larger number of chords considered here.
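As a sketch of the HMM formulation described above, the following toy Viterbi decoder picks the most likely sequence of fingering states from frame-wise salience observations. The two states, transition probabilities, and salience-based emission model are hypothetical placeholders, not the paper's trained 330-state model.

```python
import math

def viterbi(states, log_init, log_trans, log_emit, observations):
    """Log-domain Viterbi: most likely hidden-state path for the observations."""
    best = {s: log_init[s] + log_emit[s](observations[0]) for s in states}
    back = []
    for obs in observations[1:]:
        prev, ptr, best = best, {}, {}
        for s in states:
            p, arg = max((prev[r] + log_trans[(r, s)], r) for r in states)
            best[s] = p + log_emit[s](obs)
            ptr[s] = arg
        back.append(ptr)
    path = [max(best, key=best.get)]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

# Toy example: two fingering states; the observation is the salience (0..1)
# of the pitch content matching a C chord in each frame.
states = ["C", "G"]
log_init = {"C": math.log(0.5), "G": math.log(0.5)}
log_trans = {("C", "C"): math.log(0.8), ("C", "G"): math.log(0.2),
             ("G", "G"): math.log(0.8), ("G", "C"): math.log(0.2)}
log_emit = {"C": lambda x: math.log(x), "G": lambda x: math.log(1.0 - x)}
path = viterbi(states, log_init, log_trans, log_emit, [0.9, 0.8, 0.2, 0.1])
```

The sticky self-transitions play the role of the musical and physical-difficulty constraints: a fingering change is only accepted when the salience evidence outweighs the transition penalty.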


EURASIP Journal on Image and Video Processing | 2009

Optical music recognition for scores written in white mensural notation

Lorenzo J. Tardón; Simone Sammartino; Isabel Barbancho; Verónica Gómez; Antonio Oliver

An Optical Music Recognition (OMR) system especially adapted for handwritten musical scores of the 17th and early 18th centuries written in white mensural notation is presented. The system performs a complete sequence of analysis stages: the input is the RGB image of the score to be analyzed; after a preprocessing step that returns a rotation-corrected black-and-white image, the staves are processed to obtain a score without staff lines; a music symbol processing stage then isolates the music symbols contained in the score; finally, a classification process produces the transcription in a suitable electronic format so that it can be stored or played. This work helps to preserve our cultural heritage by keeping the musical information of the scores in a digital format that also makes it possible to perform and distribute the original music contained in those scores.
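The staff-line stage can be illustrated with a common baseline heuristic (not necessarily the paper's method): in a binarized score image, staff lines show up as rows whose proportion of black pixels is far above that of other rows.

```python
def find_staff_rows(binary_image, threshold=0.8):
    """Return row indices whose fraction of black (1) pixels exceeds threshold.
    Horizontal-projection heuristic for locating staff lines in a binary score."""
    return [y for y, row in enumerate(binary_image)
            if sum(row) / len(row) >= threshold]

# Tiny binary image: row 1 is fully black and behaves like a staff line.
image = [[0, 0, 0, 0],
         [1, 1, 1, 1],
         [0, 1, 0, 0],
         [1, 1, 0, 0]]
staff_rows = find_staff_rows(image)
```

Once located, such rows can be erased (restoring symbol pixels that cross them) before symbol isolation begins.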


IEEE Transactions on Audio, Speech, and Language Processing | 2012

Inharmonicity-Based Method for the Automatic Generation of Guitar Tablature

Isabel Barbancho; Lorenzo J. Tardón; Simone Sammartino; Ana M. Barbancho

In this paper, a system for extracting the tablature of guitar musical pieces using only the audio waveform is presented. The analysis of the inharmonicity relations between the fundamentals and the partials of the notes played is the main process that makes it possible to estimate both the notes played and the string/fret combination used to produce each sound. A procedure to analyze chords is also described; it likewise uses the inharmonicity analysis to find the simultaneous string/fret combinations used to play each chord. The proposed method is suitable for any guitar type: classical, acoustic, and electric. The system performance has been evaluated on a series of guitar samples from the RWC instruments database and on our own recordings.
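The inharmonicity analysis builds on the standard stiff-string model, in which the k-th partial is sharpened according to an inharmonicity coefficient B that depends on the string, and hence on the string/fret combination. A minimal sketch of the model and of recovering B from one measured partial (the coefficient value below is illustrative, not taken from the paper):

```python
import math

def partial_freq(f0, k, B):
    """Frequency of the k-th partial of a stiff string: f_k = k*f0*sqrt(1 + B*k^2)."""
    return k * f0 * math.sqrt(1.0 + B * k * k)

def estimate_B(f0, k, fk):
    """Invert the stiff-string model to recover B from a measured partial fk."""
    ratio = fk / (k * f0)
    return (ratio * ratio - 1.0) / (k * k)

# Round trip with illustrative values (A2 string at 110 Hz, hypothetical B).
f4 = partial_freq(110.0, 4, 5e-4)
B_hat = estimate_B(110.0, 4, f4)
```

Because each physical string has its own B, two string/fret combinations producing the same nominal pitch still yield measurably different partial spacings, which is what makes the tablature recoverable from audio alone.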


IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 2009

Transcription and expressiveness detection system for violin music

Isabel Barbancho; Cristina de la Bandera; Ana M. Barbancho; Lorenzo J. Tardón

In this paper, a transcription system for violin music is presented. The system not only detects the pitch and duration of the notes but also successfully identifies the technique employed to play each note: détaché (with and without accent, with and without vibrato), pizzicato, tremolo, spiccato, and flageolet tones (harmonics). The transcription system is based on a combined analysis of the time-domain and frequency-domain properties of the music signal.
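One simple way to see how a vibrato/no-vibrato decision could be made is to measure how fast the f0 curve oscillates around its mean. This zero-crossing heuristic is only an illustration; the paper's detector combines time-domain and frequency-domain analysis.

```python
import math

def vibrato_rate(f0_curve, frame_rate):
    """Rough vibrato rate (Hz): count zero crossings of the pitch deviation
    around its mean; each oscillation cycle contributes two crossings."""
    mean = sum(f0_curve) / len(f0_curve)
    dev = [f - mean for f in f0_curve]
    crossings = sum(1 for a, b in zip(dev, dev[1:]) if a * b < 0)
    duration = len(f0_curve) / frame_rate
    return crossings / 2.0 / duration

# Synthetic f0 curve: 440 Hz note with 5 Hz vibrato, sampled at 100 frames/s.
curve = [440.0 + 5.0 * math.sin(2.0 * math.pi * 5.0 * (i + 0.5) / 100.0)
         for i in range(100)]
rate = vibrato_rate(curve, 100.0)
```

A note whose estimated rate falls in the typical 4-8 Hz range and whose pitch excursion is large enough would be labeled as vibrato.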


IEEE Transactions on Audio, Speech, and Language Processing | 2015

SiPTH: singing transcription based on hysteresis defined on the pitch-time curve

Emilio Molina; Lorenzo J. Tardón; Ana M. Barbancho; Isabel Barbancho

In this paper, we present a method for monophonic singing transcription based on hysteresis defined on the pitch-time curve. The method is designed to perform note segmentation even when the pitch evolution within a single note is unstable, as in the case of untrained singers. The selected approach estimates the regions in which the chroma is stable; these regions are classified as voiced or unvoiced by a decision tree classifier using two descriptors based on aperiodicity and power. Then, a note segmentation stage based on pitch intervals of the sung signal is carried out. To this end, a dynamic averaging of the pitch curve is performed after the beginning of a note is detected in order to roughly estimate its pitch. Deviations of the actual pitch curve with respect to this average are measured to determine the next note change according to a hysteresis process defined on the pitch-time curve. Finally, each note is labeled with three values: pitch (rounded to semitones), duration, and volume. A complete evaluation methodology is also presented, including the definition of different relevant types of errors, measures, and a method for computing the evaluation measures. The proposed system significantly improves the performance of the baseline approach and attains results similar to previous approaches.
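The deviation-from-running-average idea behind the note segmentation stage can be sketched as follows; the threshold, smoothing factor, and boundary rule are simplified stand-ins for the paper's actual hysteresis process.

```python
def segment_notes(pitch_curve, threshold=0.8, alpha=0.1):
    """Toy note segmentation on a pitch curve in semitones: track a running
    average of the current note's pitch and open a new note whenever the
    curve deviates from it by more than `threshold` semitones."""
    boundaries = [0]
    avg = pitch_curve[0]
    for i in range(1, len(pitch_curve)):
        p = pitch_curve[i]
        if abs(p - avg) > threshold:
            boundaries.append(i)   # note change detected
            avg = p                # restart the average on the new note
        else:
            avg += alpha * (p - avg)
    return boundaries

# Two steady notes (C4 then D4) produce one boundary at the note change.
boundaries = segment_notes([60.0] * 5 + [62.0] * 5)
```

Because the comparison is against a slowly updated average rather than the previous frame, small wobbles inside a note do not trigger spurious boundaries, which is the point of the hysteresis formulation.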


IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 2013

Fundamental frequency alignment vs. note-based melodic similarity for singing voice assessment

Emilio Molina; Isabel Barbancho; Emilia Gómez; Ana M. Barbancho; Lorenzo J. Tardón

This paper presents a generic approach to automatic singing assessment for basic singing levels. The system provides the user with a set of intonation, rhythm, and overall ratings obtained by measuring the similarity of the sung melody and a target performance. Two different similarity approaches are discussed: f0-curve alignment through Dynamic Time Warping (DTW), and singing transcription plus note-level similarity. From these two approaches, we extract different intonation and rhythm similarity measures, which are combined through quadratic polynomial regression analysis in order to fit the judgments of four trained musicians on 27 performances. The results show that the proposed system is suitable for automatic singing voice rating and that DTW-based measures are especially simple and effective for intonation and rhythm assessment.
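The f0-alignment approach rests on classic Dynamic Time Warping. A minimal implementation with absolute-difference cost follows; the actual feature and cost choices in the paper may differ.

```python
def dtw_distance(a, b):
    """Dynamic Time Warping distance between two sequences, allowing the
    usual step pattern (match, insertion, deletion) with |a_i - b_j| cost."""
    inf = float("inf")
    n, m = len(a), len(b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# A sung contour that lingers on one pitch still aligns perfectly with the
# target; a contour that misses a pitch accumulates cost. (Values in semitones.)
d_same = dtw_distance([60.0, 62.0, 64.0], [60.0, 62.0, 62.0, 64.0])
d_diff = dtw_distance([60.0, 62.0, 64.0], [60.0, 63.0, 64.0])
```

This timing-forgiving, pitch-sensitive behavior is why the accumulated DTW cost makes a natural intonation score for singers with imprecise rhythm.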


Knowledge-Based Systems | 2014

Automatic melody composition based on a probabilistic model of music style and harmonic rules

Carles Roig; Lorenzo J. Tardón; Isabel Barbancho; Ana M. Barbancho

The aim of the present work is to take a step towards the design of specific algorithms and methods for automatic music generation. A novel probabilistic model for the characterization of music, learned from music samples, is designed. The model uses automatically extracted music parameters, namely tempo, time signature, rhythmic patterns, and pitch contours, to characterize music; specifically, learned rhythmic patterns and pitch contours are employed to characterize music styles. Then, a novel autonomous music compositor that generates new melodies using the developed model is presented. The methods proposed in this paper take into consideration different aspects of the traditional way in which music is composed by humans, such as harmony evolution and structure repetitions, and apply them together with the probabilistic reuse of rhythm patterns and pitch contours learned beforehand to compose music pieces.
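A drastically simplified version of learning a probabilistic model from samples and reusing it for generation is a first-order Markov chain over pitches; the paper's model additionally covers tempo, time signature, rhythmic patterns, pitch contours, and harmonic rules.

```python
import random
from collections import defaultdict

def learn_transitions(melodies):
    """Collect observed pitch-to-pitch transitions (first-order Markov model)."""
    counts = defaultdict(list)
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            counts[a].append(b)
    return counts

def generate(counts, start, length, rng):
    """Sample a new melody by walking the learned transitions."""
    melody = [start]
    while len(melody) < length and counts[melody[-1]]:
        melody.append(rng.choice(counts[melody[-1]]))
    return melody

# Learn from one sample melody (MIDI pitches) and generate a new one.
counts = learn_transitions([[60, 62, 64, 62, 60, 62, 64, 65, 64]])
new_melody = generate(counts, 60, 8, random.Random(0))
```

Every transition in the generated melody was observed in the training data, which is the sense in which the output inherits the style of the samples.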


IEEE Vehicular Technology Conference (VTC) | 2003

Multirate weighted multistage PIC receiver for UMTS FDD uplink

A.M. Barbancho; Isabel Barbancho; Lorenzo J. Tardón

In this paper, a multirate implementation of a weighted PIC (parallel interference cancellation) receiver suitable for UMTS base stations is presented. Two key points of this receiver are the calculation of the set of weights for the system and the technique used to regenerate the channels at each stage. The calculation of the set of weights is based on the eigenvalues of the correlation matrix of the active users; to this end, two approaches for computing the correlation matrix of the UMTS signal are presented. Regarding the regeneration of the channels, three methods are tested: linear regeneration, hard tentative decision, and soft tentative decision. For the regeneration methods in which the amplitude of the channels is needed, we propose a technique to estimate the channel amplitude using all the data received during a scrambling code period. The performance of the proposed receiver is compared against other commonly employed receivers, such as the adaptive MMSE receiver and the correlation receiver.
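The core of a weighted PIC stage — regenerate every interferer, scale it by a weight, subtract it, and re-detect — can be sketched for a toy synchronous, real-valued two-user system. The fixed weight below is a free parameter of the sketch, whereas the paper derives its weights from the eigenvalues of the users' correlation matrix.

```python
def pic_stage(received, signatures, estimates, weight):
    """One weighted parallel interference cancellation stage: for each user,
    subtract the weighted regenerated signals of all other users from the
    received chips, then matched-filter and take a hard decision."""
    new_estimates = []
    for k, s_k in enumerate(signatures):
        cleaned = []
        for i in range(len(received)):
            interference = sum(weight * est * s[i]
                               for j, (est, s) in enumerate(zip(estimates, signatures))
                               if j != k)
            cleaned.append(received[i] - interference)
        corr = sum(c * chip for c, chip in zip(cleaned, s_k))
        new_estimates.append(1.0 if corr >= 0.0 else -1.0)
    return new_estimates

# Two users with orthogonal 4-chip signatures sending bits +1 and -1;
# the received vector is the noiseless chip-level sum.
signatures = [[1, 1, 1, 1], [1, -1, 1, -1]]
received = [0, 2, 0, 2]  # (+1)*s0 + (-1)*s1
bits = pic_stage(received, signatures, [1.0, -1.0], 1.0)
```

In a multistage receiver this function would be iterated, feeding each stage's decisions (and per-stage weights) into the next.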


Journal of the Acoustical Society of America | 2010

Design of an efficient music-speech discriminator

Lorenzo J. Tardón; Simone Sammartino; Isabel Barbancho

In this paper, we address the design of a simple and efficient music-speech discriminator for large audio data sets in which advanced music playing techniques are taught and voice and music are intrinsically interleaved. In the process, a number of features used in speech-music discrimination are defined and evaluated over the available data set. Specifically, the data set contains pieces of classical music played with different and unspecified instruments (or even with lyrics) and the voice of a teacher (a top music performer), or even the overlapped voices of the translator and other persons. After an initial test of the performance of the implemented features, a selection process is started that takes into account the type of classifier chosen beforehand, in order to achieve good discrimination performance and computational efficiency, as shown in the experiments. The discrimination application was defined and tested on a large data set supplied by Fundacion Albeniz, containing a large variety of classical music pieces played with different instruments and including comments and speeches by famous performers.
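One classic feature family for speech/music discrimination is the variability of the frame-level zero-crossing rate: speech alternates voiced and unvoiced segments and therefore shows more ZCR variation than steady music. This is only an illustrative feature; the paper evaluates and selects from a larger set.

```python
import math

def zcr(frame):
    """Zero-crossing rate of one frame of samples."""
    return sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / (len(frame) - 1)

def zcr_variability(signal, frame_len):
    """Standard deviation of the frame-level ZCR over non-overlapping frames."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, frame_len)]
    rates = [zcr(f) for f in frames]
    mean = sum(rates) / len(rates)
    return math.sqrt(sum((r - mean) ** 2 for r in rates) / len(rates))

# A steady tone keeps the same ZCR in every frame; a signal that alternates
# low- and high-frequency content (speech-like) does not.
steady = [math.sin(2 * math.pi * 5 * (i + 0.5) / 100) for i in range(400)]
mixed = [math.sin(2 * math.pi * (5 if (i // 100) % 2 == 0 else 20) * (i + 0.5) / 100)
         for i in range(400)]
```

A threshold on such a statistic, or its use as one input to the chosen classifier, yields a discriminator that is cheap enough for large data sets.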


IEEE International Conference on Multimedia and Expo (ICME) | 2009

Automatic edition of songs for Guitar Hero/Frets on Fire

Ana M. Barbancho; Isabel Barbancho; Lorenzo J. Tardón; Cristina Urdiales

In this contribution, an automatic song-edition system for Guitar Hero/Frets on Fire is presented. The system performs three fundamental stages: time analysis, frequency analysis, and button allocation. The temporal analysis of the musical signal is used to obtain the rhythmic pattern of the song. The frequency analysis is performed on the basis of a Mel filter bank, because these filters approximate the human perception of low- and high-frequency sounds. The allocation of buttons is done using the extracted rhythmic pattern and the Mel filter outputs, while also taking into account a selectable difficulty level. The button distributions obtained with the designed system are similar to those found in the games' predefined songs.
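The Mel filter bank mentioned above is built by spacing band edges uniformly on the Mel scale and mapping them back to hertz; the number of bands and frequency range below are illustrative, not the paper's settings.

```python
import math

def hz_to_mel(f):
    """Standard Mel mapping: m = 2595 * log10(1 + f/700)."""
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    """Inverse of hz_to_mel."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_band_edges(f_min, f_max, n_bands):
    """n_bands + 2 edge frequencies (Hz) equally spaced on the Mel scale;
    consecutive triples of edges define the triangular Mel filters."""
    lo, hi = hz_to_mel(f_min), hz_to_mel(f_max)
    step = (hi - lo) / (n_bands + 1)
    return [mel_to_hz(lo + k * step) for k in range(n_bands + 2)]

edges = mel_band_edges(0.0, 8000.0, 10)
```

The edge spacing grows with frequency, which is exactly the perceptual warping the system relies on: low-frequency bands are narrow, high-frequency bands broad.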
