Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Carlos Eduardo Cancino Chacón is active.

Publication


Featured research published by Carlos Eduardo Cancino Chacón.


Discovery Science | 2015

An Evaluation of Score Descriptors Combined with Non-linear Models of Expressive Dynamics in Music

Carlos Eduardo Cancino Chacón; Maarten Grachten

Expressive interpretation forms an important but complex aspect of music, in particular in certain forms of classical music. Modeling the relation between musical expression and structural aspects of the score being performed is an ongoing line of research. Prior work has shown that some simple numerical descriptors of the score (capturing dynamics annotations and pitch) are effective for predicting expressive dynamics in classical piano performances. Nevertheless, these features have so far only been tested in a very simple linear regression model. In this work, we explore the potential of a non-linear model for predicting expressive dynamics. Using a set of descriptors that capture different types of structure in the musical score, we compare the predictive accuracies of linear and non-linear models. We show that, in addition to being (slightly) more accurate, non-linear models can better describe certain interactions between numerical descriptors than linear models.
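
The comparison described in the abstract lends itself to a short sketch. The following hypothetical example (assuming scikit-learn, with synthetic stand-ins for the score descriptors and the performed dynamics) shows how the predictive accuracy of a linear regressor and a small non-linear (MLP) regressor might be compared on such a task; the features, data, and model sizes are placeholders, not the authors' setup.

```python
# Hypothetical sketch: comparing a linear and a non-linear model for
# predicting expressive dynamics from simple score descriptors.
# All feature names and data below are illustrative only.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_notes = 2000

# Toy score descriptors: normalized pitch and a crescendo/decrescendo ramp.
pitch = rng.uniform(0.0, 1.0, n_notes)
dyn_ramp = rng.uniform(-1.0, 1.0, n_notes)
X = np.column_stack([pitch, dyn_ramp])

# Synthetic "performed loudness" containing an interaction term, so the
# non-linear model has something a purely linear model cannot capture.
y = (0.4 * pitch + 0.3 * dyn_ramp + 0.5 * pitch * dyn_ramp
     + 0.05 * rng.normal(size=n_notes))

models = [
    ("linear", LinearRegression()),
    ("non-linear (MLP)", MLPRegressor(hidden_layer_sizes=(20,),
                                      max_iter=2000, random_state=0)),
]
for name, model in models:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean R^2 = {r2.mean():.3f}")
```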


International Conference on Mathematics and Computation in Music | 2015

Probabilistic Segmentation of Musical Sequences Using Restricted Boltzmann Machines

Stefan Lattner; Maarten Grachten; Kat Agres; Carlos Eduardo Cancino Chacón

A salient characteristic of human perception of music is that musical events are perceived as being grouped temporally into structural units such as phrases or motifs. Segmentation of musical sequences into structural units is a topic of ongoing research, both in cognitive psychology and music information retrieval. Computational models of music segmentation are typically based either on explicit knowledge of music theory or human perception, or on statistical and information-theoretic properties of musical data. The former, rule-based approach has been found to better account for (human-annotated) segment boundaries in music than probabilistic approaches [14], although the statistical model proposed in [14] performs almost as well as state-of-the-art rule-based approaches. In this paper, we propose a new probabilistic segmentation method, based on Restricted Boltzmann Machines (RBM). By sampling, we determine a probability distribution over a subset of visible units in the model, conditioned on a configuration of the remaining visible units. We apply this approach to an n-gram representation of melodies, where the RBM generates the conditional probability of a note given its n-1 predecessors. We use this quantity in combination with a threshold to determine the location of segment boundaries. A comparative evaluation shows that this model slightly improves segmentation performance over the model proposed in [14], and as such is closer to the state-of-the-art rule-based models.
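
As a rough, hypothetical illustration of the thresholding idea (not the paper's model, training procedure, or estimation scheme), the sketch below uses an untrained Bernoulli RBM over one-hot encoded n-grams: the first n-1 note slots are clamped, the last slot is Gibbs-sampled to approximate the conditional note distribution, and notes whose estimated probability falls below a threshold are flagged as possible segment boundaries. All weights, sizes, and thresholds are placeholders.

```python
# Hypothetical sketch: estimating P(last note | first n-1 notes) under a
# Bernoulli RBM by Gibbs sampling with the context slots clamped, then
# thresholding that probability to propose segment boundaries.
import numpy as np

rng = np.random.default_rng(1)
n_pitches, n_gram = 12, 5              # pitch classes, n-gram length
n_visible = n_pitches * n_gram
n_hidden = 50

W = rng.normal(0, 0.1, (n_visible, n_hidden))  # untrained, for illustration
b_v = np.zeros(n_visible)
b_h = np.zeros(n_hidden)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conditional_note_probs(context_notes, n_samples=200, n_gibbs=20):
    """Approximate the distribution over the last note slot, with the
    context slots clamped, by averaging Gibbs samples of the free slot."""
    clamp = np.zeros(n_visible)
    for pos, note in enumerate(context_notes):
        clamp[pos * n_pitches + note] = 1.0
    free = slice((n_gram - 1) * n_pitches, n_gram * n_pitches)

    counts = np.zeros(n_pitches)
    for _ in range(n_samples):
        v = clamp.copy()
        v[free] = rng.random(n_pitches) < 0.5
        for _ in range(n_gibbs):
            h = rng.random(n_hidden) < sigmoid(v @ W + b_h)
            p_v = sigmoid(h @ W.T + b_v)
            v = clamp.copy()
            v[free] = rng.random(n_pitches) < p_v[free]
        counts += v[free]
    total = counts.sum()
    return counts / total if total > 0 else np.full(n_pitches, 1 / n_pitches)

melody = [0, 4, 7, 4, 0, 2, 5, 9, 5, 2]   # toy pitch-class sequence
threshold = 0.05                          # illustrative boundary threshold
for i in range(n_gram - 1, len(melody)):
    context, note = melody[i - n_gram + 1:i], melody[i]
    p = conditional_note_probs(context)[note]
    if p < threshold:
        print(f"possible segment boundary before note index {i} (p={p:.3f})")
```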


Journal of New Music Research | 2018

Convolution-based classification of audio and symbolic representations of music

Gissel Velarde; Carlos Eduardo Cancino Chacón; David Meredith; Tillman Weyde; Maarten Grachten

We present a novel convolution-based method for classification of audio and symbolic representations of music, which we apply to classification of music by style. Pieces of music are first sampled to pitch–time representations (spectrograms or piano-rolls) and then convolved with a Gaussian filter, before being classified by a support vector machine or by k-nearest neighbours in an ensemble of classifiers. On the well-studied task of discriminating between string quartet movements by Haydn and Mozart, we obtain accuracies that equal the state of the art on two data-sets. However, in multi-class composer identification, methods specialised for classifying symbolic representations of music are more effective. We also performed experiments on symbolic representations, synthetic audio and two different recordings of The Well-Tempered Clavier by J. S. Bach to study the method’s capacity to distinguish preludes from fugues. Our experimental results show that our approach performs similarly on symbolic representations, synthetic audio and audio recordings, setting our method apart from most previous studies that have been designed for use with either audio or symbolic data, but not both.
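
The pipeline described above can be sketched in a few lines. The following hypothetical example (assuming SciPy and scikit-learn, with randomly generated piano rolls in place of real pieces) smooths a pitch–time representation with a Gaussian filter and classifies the flattened result with a support vector machine; the filter width, data, and labels are placeholders rather than the study's configuration.

```python
# Hypothetical sketch: Gaussian-smoothed piano rolls classified with an SVM.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_pieces, n_pitches, n_frames = 60, 88, 200

def toy_piano_roll(style):
    """Random binary piano roll whose note density depends on a fictitious
    style label, so the two classes are separable."""
    density = 0.02 if style == 0 else 0.05
    return (rng.random((n_pitches, n_frames)) < density).astype(float)

labels = np.array([i % 2 for i in range(n_pieces)])
rolls = [toy_piano_roll(y) for y in labels]

# Gaussian smoothing blurs note onsets in pitch and time, making the
# representation more tolerant to small deviations before classification.
features = np.array([gaussian_filter(r, sigma=2.0).ravel() for r in rolls])

clf = SVC(kernel="linear", C=1.0)
scores = cross_val_score(clf, features, labels, cv=5)
print(f"mean accuracy: {scores.mean():.2f}")
```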


Archive | 2017

Temporal Dependencies in the Expressive Timing of Classical Piano Performances

Maarten Grachten; Carlos Eduardo Cancino Chacón


International Society for Music Information Retrieval Conference (ISMIR) | 2014

Developing Tonal Perception Through Unsupervised Learning

Carlos Eduardo Cancino Chacón; Stefan Lattner; Maarten Grachten


arXiv: Sound | 2018

A Computational Study of the Role of Tonal Tension in Expressive Piano Performance.

Carlos Eduardo Cancino Chacón; Maarten Grachten


International Society for Music Information Retrieval Conference (ISMIR) | 2017

From Bach to the Beatles: The Simulation of Human Tonal Expectation Using Ecologically-Trained Predictive Models.

Carlos Eduardo Cancino Chacón; Maarten Grachten; Kat Agres


arXiv: Sound | 2017

The ACCompanion v0.1: An Expressive Accompaniment System.

Carlos Eduardo Cancino Chacón; Martin Bonev; Amaury Durand; Maarten Grachten; Andreas Arzt; Laura Bishop; Werner Goebl; Gerhard Widmer


arXiv: Learning | 2017

Strategies for Conceptual Change in Convolutional Neural Networks.

Maarten Grachten; Carlos Eduardo Cancino Chacón


Archive | 2017

What were you expecting? Using Expectancy Features to Predict Expressive Performances of Classical Piano Music.

Carlos Eduardo Cancino Chacón; Maarten Grachten; David R. W. Sears; Gerhard Widmer

Collaboration


Dive into Carlos Eduardo Cancino Chacón's collaborations.

Top Co-Authors

Maarten Grachten (Johannes Kepler University of Linz)

Stefan Lattner (Austrian Research Institute for Artificial Intelligence)

Gerhard Widmer (Johannes Kepler University of Linz)

Kat Agres (Queen Mary University of London)

Andreas Arzt (Johannes Kepler University of Linz)

Laura Bishop (Austrian Research Institute for Artificial Intelligence)

Thassilo Gadermaier (Austrian Research Institute for Artificial Intelligence)