
Publications


Featured research published by Gabriel Vigliensoni.


International World Wide Web Conference | 2012

Creating a large-scale searchable digital collection from printed music materials

Andrew Hankinson; John Ashley Burgoyne; Gabriel Vigliensoni; Ichiro Fujinaga

In this paper we present our work towards developing a large-scale web application for digitizing, recognizing (via optical music recognition), correcting, displaying, and searching printed music texts. We present the results of a recently completed prototype implementation of our workflow process, from document capture to presentation on the web. We discuss a number of lessons learned from this prototype. Finally, we present some open-source Web 2.0 tools developed to provide essential infrastructure components for making searchable printed music collections available online. Our hope is that these experiences and tools will help in creating next-generation globally accessible digital music libraries.


International Conference on Machine Vision | 2017

Pixel-wise binarization of musical documents with convolutional neural networks

Jorge Calvo-Zaragoza; Gabriel Vigliensoni; Ichiro Fujinaga

Binarization is an important process in document analysis systems. Yet it is difficult to devise a binarization method that performs well over a wide range of documents, especially digitized old musical manuscripts and scores with irregular lighting and source degradation. Our approach to the binarization of musical documents is based on training a Convolutional Neural Network that classifies each pixel of the image as either background or foreground. Our results demonstrate that the approach is competitive with other state-of-the-art algorithms, and they illustrate the advantage of being able to adapt to any type of score simply by modifying the training set.
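
The abstract does not specify the network architecture or window size, so the following is only a minimal sketch of the general idea in PyTorch: a small convolutional network classifies a grayscale patch centered on each pixel as background or foreground. The layer sizes, the 25-pixel window, and all function names are illustrative assumptions, not the authors' published configuration.

    # Minimal sketch of patch-based pixel binarization (assumed architecture,
    # not the published configuration).
    import torch
    import torch.nn as nn

    PATCH = 25  # assumed size of the square window centered on the pixel to classify

    class PixelBinarizer(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * (PATCH // 4) * (PATCH // 4), 2)

        def forward(self, x):
            # x: (N, 1, PATCH, PATCH) grayscale patches in [0, 1]
            return self.classifier(self.features(x).flatten(1))  # background/foreground logits

    def binarize(image, model):
        # image: (1, 1, H, W) grayscale tensor; returns an (H, W) map of 0 (background) / 1 (foreground).
        # In practice the H*W patches would be run through the model in batches.
        pad = PATCH // 2
        padded = nn.functional.pad(image, (pad, pad, pad, pad), mode="reflect")
        patches = padded.unfold(2, PATCH, 1).unfold(3, PATCH, 1).reshape(-1, 1, PATCH, PATCH)
        with torch.no_grad():
            return model(patches).argmax(dim=1).reshape(image.shape[2], image.shape[3])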


Iberian Conference on Pattern Recognition and Image Analysis | 2017

Staff-Line Detection on Grayscale Images with Pixel Classification

Jorge Calvo-Zaragoza; Gabriel Vigliensoni; Ichiro Fujinaga

Staff-line detection and removal are important processing steps in most Optical Music Recognition systems. Traditional methods rely on heuristic strategies based on image-processing techniques applied to binary images. However, binarization is a complex process for which it is difficult to achieve perfect results. In this paper we describe a novel staff-line detection and removal method that works directly on grayscale images. Our approach uses supervised learning to classify each pixel of the image as symbol, staff, or background, by means of Convolutional Neural Networks. The features of each pixel consist of a square window of the input image centered at the pixel to be classified. As a case study, we performed experiments on the CVC-MUSCIMA dataset. Our approach showed promising performance, outperforming state-of-the-art algorithms for staff-line removal.
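
The abstract gives the idea (three-way pixel classification with a CNN over a square window) but not the implementation details, so the sketch below only illustrates the step that follows classification: erasing staff-labeled pixels. The class indices and the background value are assumptions.

    # Hypothetical post-processing: once each pixel has been labeled as
    # symbol, staff, or background, staff removal amounts to painting the
    # staff-labeled pixels with the background value.
    import numpy as np

    BACKGROUND, STAFF, SYMBOL = 0, 1, 2  # assumed class encoding

    def remove_staff_lines(gray: np.ndarray, pixel_labels: np.ndarray) -> np.ndarray:
        """Erase staff pixels from an 8-bit grayscale page, given a per-pixel label map."""
        cleaned = gray.copy()
        cleaned[pixel_labels == STAFF] = 255  # assume a white background
        return cleaned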


Proceedings of the 4th International Workshop on Digital Libraries for Musicology | 2017

GRAIL: Database Linking Music Metadata Across Artist, Release, and Track

Michael D. Barone; Kurt Dacosta; Gabriel Vigliensoni; Matthew Woolhouse

Linking information from multiple music databases is important for MIR because it provides a means to determine the consistency of metadata between resources and services, which can facilitate innovative product development and research. However, no open-access tools yet exist that persistently link and validate metadata at the three main entities of music data: artist, release, and track. This paper introduces an open-access resource that addresses this issue. The General Recorded Audio Identity Linker (GRAIL - api.digitalmusiclab.org) is a music metadata ID-linking API that: i) connects International Standard Recording Codes (ISRCs) to music metadata IDs from services such as MusicBrainz, Spotify, and Last.FM; ii) provides these ID linkages as a publicly available resource; iii) confirms linkage accuracy through continuous metadata crawling of music-service APIs; and iv) derives consistency values (CV) for linkages from a set of quantifiable criteria. To date, more than 35M tracks, 8M releases, and 900K artists from 16 services have been ingested into GRAIL. We discuss the challenges faced in past attempts to link music metadata, and the methods and rationale we adopted to construct GRAIL and keep it updated with validated information.
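
GRAIL is exposed as an API at api.digitalmusiclab.org, but this abstract does not document its routes or response format. The snippet below is therefore only a hypothetical illustration of the kind of ISRC-to-service-ID lookup the paper describes; the URL path, parameter name, and response fields are invented for illustration and should not be taken as the real interface.

    # Hypothetical GRAIL-style lookup; route and field names are illustrative only.
    import requests

    def lookup_isrc(isrc: str) -> dict:
        """Resolve an ISRC to linked music-service IDs and a consistency value (CV)."""
        resp = requests.get(
            "https://api.digitalmusiclab.org/track",  # invented route
            params={"isrc": isrc},
            timeout=10,
        )
        resp.raise_for_status()
        # e.g. {"musicbrainz_id": "...", "spotify_id": "...", "lastfm_id": "...", "cv": 0.97}
        return resp.json()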


Proceedings of the 3rd International Workshop on Digital Libraries for Musicology | 2016

Document Analysis for Music Scores via Machine Learning

Jorge Calvo-Zaragoza; Gabriel Vigliensoni; Ichiro Fujinaga

Content within musical documents includes not only musical notation but also text, ornaments, annotations, and editorial data. Before any attempt at automatic recognition of the elements in these layers, a document analysis process is needed to detect and classify each of the document's constituent parts. The main obstacle to this analysis is the high heterogeneity among collections, which makes it difficult to propose methods that generalize to a broad range of sources. In this paper we propose a data-driven document analysis framework based on machine learning, which classifies regions of interest at the pixel level. The main advantage of this approach is that it can be applied to any type of document, as long as training data is available. Our preliminary experiments cover a set of tasks specific to music documents, such as staff-line detection, isolation of music symbols, and separation of the document into its elemental parts.
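
Assuming the pixel-level classification described above produces an integer label per pixel, separating a page into its layers reduces to building one mask per label. The layer names below follow the abstract; the integer encoding is an assumption.

    # Sketch: split a per-pixel label map into one boolean mask per layer.
    import numpy as np

    LAYERS = {"background": 0, "staff": 1, "symbol": 2, "text": 3}  # assumed encoding

    def split_layers(pixel_labels: np.ndarray) -> dict:
        """Return {layer name: boolean mask} from an (H, W) integer label map."""
        return {name: pixel_labels == value for name, value in LAYERS.items()}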


International Society for Music Information Retrieval Conference (ISMIR) | 2010

Evaluating the Genre Classification Performance of Lyrical Features Relative to Audio, Symbolic and Cultural Features.

Cory McKay; John Ashley Burgoyne; Jason Hockman; Jordan B. L. Smith; Gabriel Vigliensoni; Ichiro Fujinaga


New Interfaces for Musical Expression | 2012

A Quantitative Comparison of Position Trackers for the Development of a Touch-less Musical Interface.

Gabriel Vigliensoni; Marcelo M. Wanderley


International Society for Music Information Retrieval Conference (ISMIR) | 2012

Digital Document Image Retrieval Using Optical Music Recognition

Andrew Hankinson; John Ashley Burgoyne; Gabriel Vigliensoni; Alastair Porter; Jessica Thompson; Wendy Liu; Remi Chiu; Ichiro Fujinaga


International Computer Music Conference | 2010

Soundcatcher: Explorations In Audio-Looping And Time-Freezing Using An Open-Air Gestural Controller

Gabriel Vigliensoni; Marcelo M. Wanderley


International Society for Music Information Retrieval Conference (ISMIR) | 2016

Automatic Music Recommendation Systems: Do Demographic, Profiling, and Contextual Features Improve Their Performance?

Gabriel Vigliensoni; Ichiro Fujinaga

