
Publication


Featured research published by Eric J. Humphrey.


Journal of Intelligent Information Systems | 2013

Feature learning and deep architectures: new directions for music informatics

Eric J. Humphrey; Juan Pablo Bello; Yann LeCun

As we look to advance the state of the art in content-based music informatics, there is a general sense that progress is decelerating throughout the field. On closer inspection, performance trajectories across several applications reveal that this is indeed the case, raising some difficult questions for the discipline: why are we slowing down, and what can we do about it? Here, we strive to address both of these concerns. First, we critically review the standard approach to music signal analysis and identify three specific deficiencies in current methods: hand-crafted feature design is sub-optimal and unsustainable, the power of shallow architectures is fundamentally limited, and short-time analysis cannot encode musically meaningful structure. Acknowledging breakthroughs in other perceptual AI domains, we offer that deep learning holds the potential to overcome each of these obstacles. Through conceptual arguments for feature learning and deeper processing architectures, we demonstrate how deep processing models are more powerful extensions of current methods, and why now is the time for this paradigm shift. Finally, we conclude with a discussion of current challenges and the potential impact to further motivate an exploration of this promising research area.


International Conference on Machine Learning and Applications | 2012

Rethinking Automatic Chord Recognition with Convolutional Neural Networks

Eric J. Humphrey; Juan Pablo Bello

Despite early success in automatic chord recognition, recent efforts are yielding diminishing returns while basically iterating over the same fundamental approach. Here, we abandon typical conventions and adopt a different perspective of the problem, where several seconds of pitch spectra are classified directly by a convolutional neural network. Using labeled data to train the system in a supervised manner, we achieve state of the art performance through this initial effort in an otherwise unexplored area. Subsequent error analysis provides insight into potential areas of improvement, and this approach to chord recognition shows promise for future harmonic analysis systems.
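The key departure described above is that the network sees several seconds of pitch spectra at once, rather than frame-by-frame features. A minimal sketch of that framing step, with toy sizes and a majority-vote labeling rule that are assumptions of this example (not the paper's exact configuration), might look like:

```python
# Illustrative sketch (not the authors' code): slicing a time-frequency
# matrix into fixed-length, overlapping patches, the kind of input a
# convolutional network would classify directly into chord labels.
# patch_len, hop, and the majority-vote rule are assumptions.

def frame_patches(spectra, labels, patch_len=8, hop=4):
    """Slice per-frame pitch vectors into overlapping patches,
    labeling each patch by majority vote over its frames."""
    patches = []
    for start in range(0, len(spectra) - patch_len + 1, hop):
        window = spectra[start:start + patch_len]
        window_labels = labels[start:start + patch_len]
        label = max(set(window_labels), key=window_labels.count)
        patches.append((window, label))
    return patches

# Toy example: 20 frames of 12-bin pitch vectors over two chords.
spectra = [[float(t % 12 == p) for p in range(12)] for t in range(20)]
labels = ["C:maj"] * 10 + ["G:maj"] * 10
patches = frame_patches(spectra, labels)
print(len(patches), patches[0][1], patches[-1][1])
```

Each patch, paired with its chord label, is one supervised training example; the network then learns its own features from the patch instead of relying on hand-designed chroma.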


International Conference on Acoustics, Speech, and Signal Processing | 2012

Learning a robust Tonnetz-space transform for automatic chord recognition

Eric J. Humphrey; Taemin Cho; Juan Pablo Bello

Temporal pitch class profiles - commonly referred to as chromagrams - are the de facto standard signal representation for content-based methods of musical harmonic analysis, despite exhibiting a set of practical difficulties. Here, we present a novel, data-driven approach to learning a robust function that projects audio data into Tonnetz-space, a geometric representation of equal-tempered pitch intervals grounded in music theory. We apply this representation to automatic chord recognition and show that our approach outperforms the classification accuracy of previous chroma representations, while providing a mid-level feature space that circumvents challenges inherent to chroma.
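For context on the target space, the classical fixed (non-learned) Tonnetz mapping is the tonal-centroid transform of Harte et al., which projects a 12-bin chroma vector onto three interval circles. The sketch below implements that fixed baseline; the paper's contribution is learning such a projection from data rather than fixing it analytically, so this is an analogy, not the paper's method:

```python
import math

# Fixed tonal-centroid ("Tonnetz") projection in the spirit of Harte et
# al.: each pitch class is placed on circles of fifths, minor thirds,
# and major thirds, and the chroma vector is projected onto all three.
# The radii (1, 1, 0.5) follow the commonly cited formulation.

def tonnetz(chroma):
    """Project a 12-bin chroma vector into 6-D Tonnetz space."""
    assert len(chroma) == 12
    norm = sum(abs(c) for c in chroma) or 1.0
    circles = [
        (7 * math.pi / 6, 1.0),  # circle of fifths
        (3 * math.pi / 2, 1.0),  # minor thirds
        (2 * math.pi / 3, 0.5),  # major thirds
    ]
    out = []
    for angle, radius in circles:
        x = sum(c * radius * math.sin(pc * angle)
                for pc, c in enumerate(chroma)) / norm
        y = sum(c * radius * math.cos(pc * angle)
                for pc, c in enumerate(chroma)) / norm
        out += [x, y]
    return out

# A chroma vector with only pitch class C active.
c_only = [1.0] + [0.0] * 11
print(tonnetz(c_only))
```

Chords with shared interval content land near each other in this 6-D space, which is the geometric property the learned transform is trained to preserve more robustly than raw chroma.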


International Conference on Machine Learning and Applications | 2011

Non-Linear Semantic Embedding for Organizing Large Instrument Sample Libraries

Eric J. Humphrey; Aron P. Glennon; Juan Pablo Bello

Though tags and metadata may provide rich indicators of relationships between high-level concepts like songs, artists or even genres, verbal descriptors lack the fine-grained detail necessary to capture acoustic nuances necessary for efficient retrieval of sounds in extremely large sample libraries. To these ends, we present a flexible approach titled Non-linear Semantic Embedding (NLSE), capable of projecting high-dimensional time-frequency representations of musical instrument samples into a low-dimensional, semantically-organized metric space. As opposed to other dimensionality reduction techniques, NLSE incorporates extrinsic semantic information in learning a projection, automatically learns salient acoustic features, and generates an intuitively meaningful output space.


Journal of New Music Research | 2015

From Genre Classification to Rhythm Similarity: Computational and Musicological Insights

Tlacael Miguel Esparza; Juan Pablo Bello; Eric J. Humphrey

Traditionally, the development and validation of computational measures of rhythmic similarity in music relies on proxy classification tasks, often equating rhythm similarity to genre. In this paper, we perform a comprehensive, cross-disciplinary exploration of the classification performance of a state-of-the-art system for rhythm similarity. By synthesizing the methods of quantitative analysis with a musicological perspective, detailed insight is gained into the various facets that affect system behaviour, consisting of three main areas: rhythmic sensitivities of a given feature representation, idiosyncrasies of the data used for evaluation, and the tenuous relationship between rhythmic similarity and genre. Through this study, we provide perspective on gauging the abilities of a computational system beyond classification accuracy, as well as a deeper understanding of system design and evaluation methodology as a musically meaningful exercise.


International Conference on Acoustics, Speech, and Signal Processing | 2014

From music audio to chord tablature: Teaching deep convolutional networks to play guitar

Eric J. Humphrey; Juan Pablo Bello

Automatic chord recognition is conventionally tackled as a general music audition task, where the desired output is a time-aligned sequence of discrete chord symbols, e.g. CMaj7, Esus2, etc. In practice, however, this presents two related challenges: one, the act of decoding a given chord sequence requires that the musician knows both the notes in the chord and how to play them on some instrument; and two, chord labeling systems do not degrade gracefully for users without significant musical training. Alternatively, we address both challenges by modeling the physical constraints of a guitar to produce human-readable representations of music audio, i.e. guitar tablature, via a deep convolutional network. Through training and evaluation as a standard chord recognition system, the model is able to yield representations that require minimal prior knowledge to interpret, while maintaining respectable performance compared to the state of the art.
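To make the "human-readable output" concrete, the sketch below renders guitar fingerings as ASCII tablature. The chord-to-fingering table is a hand-picked illustration of the output space only; in the paper, fret positions are predicted by the network directly from audio, not looked up:

```python
# Illustration of the tablature output space, not the paper's model.
# The fingering table is a hypothetical hand-written example.

STRINGS = ["e", "B", "G", "D", "A", "E"]  # high to low

# Fret per string, high-e first; None means the string is not played.
FINGERINGS = {
    "C:maj": [0, 1, 0, 2, 3, None],
    "G:maj": [3, 0, 0, 0, 2, 3],
}

def render_tab(chord):
    """Render one chord's fingering as six lines of ASCII tab."""
    frets = FINGERINGS[chord]
    lines = []
    for name, fret in zip(STRINGS, frets):
        mark = "x" if fret is None else str(fret)
        lines.append(f"{name}|--{mark}--|")
    return "\n".join(lines)

print(render_tab("C:maj"))
```

The point of the representation is exactly what the abstract claims: a reader needs no theory training to place fingers on frets, whereas decoding "CMaj7" requires knowing its constituent notes.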


International Society for Music Information Retrieval Conference (ISMIR) | 2012

Moving Beyond Feature Design: Deep Architectures and Automatic Feature Learning in Music Informatics

Eric J. Humphrey; Juan Pablo Bello; Yann LeCun


International Society for Music Information Retrieval Conference (ISMIR) | 2015

A Software Framework for Musical Data Augmentation

Brian McFee; Eric J. Humphrey; Juan Pablo Bello
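The central idea of musical data augmentation is that audio deformations must be applied to the annotations as well, or the augmented labels become wrong. The sketch below shows that principle for pitch-shifting and chord labels; the function names and the `(start, end, label)` annotation tuples are assumptions of this example, not the framework's actual API (it also handles only sharp note names, for brevity):

```python
# Hedged sketch of label-aware augmentation: shifting audio by n
# semitones requires transposing chord-label roots by the same amount.

NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def transpose_chord(label, semitones):
    """Transpose a 'Root:quality' chord label, e.g. 'C:maj' -> 'D:maj'."""
    root, _, quality = label.partition(":")
    idx = (NOTES.index(root) + semitones) % 12
    return f"{NOTES[idx]}:{quality}"

def augment(annotations, semitones):
    # The corresponding audio deformation (not shown) would be applied
    # alongside this annotation transform.
    return [(start, end, transpose_chord(lab, semitones))
            for start, end, lab in annotations]

print(augment([(0.0, 2.0, "C:maj"), (2.0, 4.0, "A:min")], 2))
```

Time-stretching is the dual case: the labels stay the same but every `(start, end)` interval must be rescaled, which is why audio and annotations have to move through one shared pipeline.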


International Society for Music Information Retrieval Conference (ISMIR) | 2014

JAMS: A JSON Annotated Music Specification for Reproducible MIR Research

Eric J. Humphrey; Justin Salamon; Oriol Nieto; Jon Forsyth; Rachel M. Bittner; Juan Pablo Bello
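The idea behind JAMS is to carry multiple annotations of a track, plus their provenance metadata, in a single JSON object. The record below is a simplified schematic of that structure; the field names are illustrative and the real JAMS schema should be consulted for the exact layout:

```python
import json

# Schematic (not the exact JAMS schema): one JSON object bundling file
# metadata with a list of typed, attributed annotations.

jam = {
    "file_metadata": {"title": "example.wav", "duration": 4.0},
    "annotations": [
        {
            "namespace": "chord",
            "annotation_metadata": {"annotator": "human", "version": "1.0"},
            "data": [
                {"time": 0.0, "duration": 2.0, "value": "C:maj"},
                {"time": 2.0, "duration": 2.0, "value": "G:maj"},
            ],
        }
    ],
}

# Round-trip through JSON: the whole record is plain, portable data,
# which is what makes experiments reproducible across tools.
loaded = json.loads(json.dumps(jam))
print(loaded["annotations"][0]["data"][0]["value"])
```

Because everything, including who annotated what and with which tool version, travels inside one file, two labs exchanging a JAMS corpus can reconstruct exactly the same evaluation conditions.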


International Society for Music Information Retrieval Conference (ISMIR) | 2014

mir_eval: A Transparent Implementation of Common MIR Metrics

Colin Raffel; Brian McFee; Eric J. Humphrey; Justin Salamon; Oriol Nieto; Dawen Liang; Daniel P. W. Ellis
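mir_eval's contribution is standardizing metrics whose informal definitions vary across papers. As a flavor of what such a metric looks like when written out transparently, here is a pure-Python beat-tracking F-measure with a ±70 ms tolerance window; this is an independent sketch, not mir_eval's code or API:

```python
# Transparent reimplementation of one metric of the kind mir_eval
# standardizes: beat-tracking F-measure. An estimated beat counts as a
# hit if it lands within `tol` seconds of an unused reference beat.

def beat_f_measure(reference, estimated, tol=0.07):
    """Greedy one-to-one matching of estimated to reference beats."""
    if not reference or not estimated:
        return 0.0
    matched = 0
    used = set()
    for est in estimated:
        for i, ref in enumerate(reference):
            if i not in used and abs(est - ref) <= tol:
                used.add(i)
                matched += 1
                break
    precision = matched / len(estimated)
    recall = matched / len(reference)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

ref = [0.5, 1.0, 1.5, 2.0]
est = [0.52, 1.01, 1.6, 2.0]
print(beat_f_measure(ref, est))
```

Even in this tiny metric there are judgment calls (the tolerance value, greedy vs. optimal matching) that can shift reported scores, which is exactly why a shared, inspectable implementation matters.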

Collaboration


Dive into Eric J. Humphrey's collaborations.

Top Co-Authors

Brian McFee

University of California

Tristan Jehan

Massachusetts Institute of Technology
