
Publication


Featured research published by Daniel Wolff.


Information Retrieval | 2014

Learning music similarity from relative user ratings

Daniel Wolff; Tillman Weyde

Computational modelling of music similarity is an increasingly important part of personalisation and optimisation in music information retrieval and of research in music perception and cognition. The use of relative similarity ratings is a new and promising approach to modelling similarity that avoids well-known problems with absolute ratings. In this article, we use relative ratings from the MagnaTagATune dataset with new and existing variants of state-of-the-art algorithms and provide the first comprehensive and rigorous evaluation of this approach. We compare metric learning based on support vector machines (SVMs) and metric-learning-to-rank (MLR), including a diagonal variant (DMLR) and a novel weighted variant, and relative distance learning with neural networks (RDNN). We further evaluate the effectiveness of different high- and low-level audio features and genre data, as well as dimensionality reduction methods, weighting of similarity ratings, and different sampling methods. Our results show that music similarity measures learnt on relative ratings can be significantly better than a standard Euclidean metric, depending on the choice of learning algorithm, feature sets, and application scenario. MLR and SVM outperform DMLR and RDNN, while MLR with weighted ratings leads to no further performance gain. Timbral and music-structural features are most effective, and all features jointly are significantly better than any other combination of feature sets. Sharing audio clips (but not the similarity ratings) between test and training sets improves performance, in particular for the SVM-based methods, which is useful for some application scenarios. A testing framework has been implemented in Matlab and made publicly available at http://mi.soi.city.ac.uk/datasets/ir2012framework so that these results are reproducible.
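The relative-ratings idea above can be illustrated with a minimal sketch: each rating is a triplet (i, j, k) meaning "clip i is more similar to clip j than to clip k", and a diagonal metric (in the spirit of the DMLR variant, though not the paper's actual implementation, which is the Matlab framework linked in the abstract) can be fitted with a simple hinge-loss gradient step. The function names and the toy optimiser are illustrative assumptions only.

```python
import numpy as np

def learn_diagonal_metric(X, triplets, lr=0.01, margin=1.0, epochs=50):
    """Learn a diagonal Mahalanobis metric from relative similarity ratings.

    Each triplet (i, j, k) encodes the relative rating
    "clip i is more similar to clip j than to clip k".
    """
    n, d = X.shape
    w = np.ones(d)  # diagonal weights; all-ones is the Euclidean baseline
    for _ in range(epochs):
        for i, j, k in triplets:
            dij = (X[i] - X[j]) ** 2  # per-dimension squared differences
            dik = (X[i] - X[k]) ** 2
            # hinge constraint: want w.dik >= w.dij + margin
            if w @ dik < w @ dij + margin:
                w -= lr * (dij - dik)   # gradient step on the violated constraint
                w = np.maximum(w, 0.0)  # keep the metric valid (non-negative weights)
    return w

def distance(w, x, y):
    """Distance between feature vectors x and y under diagonal weights w."""
    return float(np.sqrt(max(w @ (x - y) ** 2, 0.0)))
```

The margin plays the same role as in the SVM-based formulations: constraints must be satisfied with some slack before they stop contributing gradient updates.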


Adaptive Multimedia Retrieval | 2011

Combining sources of description for approximating music similarity ratings

Daniel Wolff; Tillman Weyde

In this paper, we compare the effectiveness of basic acoustic features and genre annotations when adapting a music similarity model to user ratings. We use the Metric Learning to Rank algorithm to learn a Mahalanobis metric from comparative similarity ratings in the MagnaTagATune database. Using common formats for feature data, our approach can easily be transferred to other existing databases. Our results show that genre data allow more effective learning of a metric than simple audio features, but a combination of both feature sets clearly outperforms either individual set.
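As a small illustration of the combination step described above, the sketch below concatenates z-scored audio features with binary genre annotations and evaluates a Mahalanobis distance under a positive semi-definite matrix W, such as one learned by Metric Learning to Rank. The helper names and the randomly generated W are assumptions for illustration, not the paper's code.

```python
import numpy as np

def combine_features(audio, genres):
    """Concatenate z-scored audio features with binary genre annotations."""
    audio = (audio - audio.mean(axis=0)) / (audio.std(axis=0) + 1e-9)
    return np.hstack([audio, genres.astype(float)])

def mahalanobis(W, x, y):
    """Distance under a PSD matrix W; W = I recovers the Euclidean distance."""
    d = x - y
    return float(np.sqrt(max(d @ W @ d, 0.0)))
```

Keeping both feature sources in one vector lets a single learned W weight (and cross-weight) acoustic and genre dimensions jointly, which is what allows the combined set to outperform either one alone.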


ACM Journal on Computing and Cultural Heritage | 2017

The Digital Music Lab: A Big Data Infrastructure for Digital Musicology

Samer A. Abdallah; Emmanouil Benetos; Nicolas Gold; Steven Hargreaves; Tillman Weyde; Daniel Wolff

In musicology and music research generally, the increasing availability of digital music, storage capacity, and computing power enables and requires new and intelligent systems. In the transition from traditional to digital musicology, many techniques and tools have been developed for the analysis of individual pieces of music, but the large-scale music data that are increasingly becoming available require research methods and systems that work at the collection level and at scale. Although many relevant algorithms have been developed during the past 15 years of research in Music Information Retrieval, an integrated system that supports large-scale digital musicology research has so far been lacking. In the Digital Music Lab (DML) project, a collaboration among music librarians, musicologists, computer scientists, and human-computer interface specialists, the DML software system was developed to fill this gap by providing intelligent large-scale music analysis with a user-friendly interactive interface that supports musicologists in their exploration and enquiry. The DML system empowers musicologists by addressing several challenges: distributed processing of audio and other music data, management of the data analysis process and results, remote analysis of data under copyright, logical inference on the extracted information and metadata, and visual web-based interfaces for exploring and querying the music collections. The DML system is scalable, builds on Semantic Web technology, and integrates with Linked Data, with the vision of a distributed system that enables music research across archives, libraries, and other providers of music data. A first DML system prototype has been set up in collaboration with the British Library and I Like Music Ltd. This system has been used to analyse a diverse corpus of currently 250,000 music tracks.
In this article, we describe the DML system requirements, design, architecture, components, and available data sources, explaining their interaction. We report use cases and applications with initial evaluations of the proposed system.


Proceedings of the 1st International Workshop on Digital Libraries for Musicology | 2014

Big Data for Musicology

Tillman Weyde; Stephen Cottrell; Jason Dykes; Emmanouil Benetos; Daniel Wolff; Dan Tidhar; Alexander Kachkaev; Mark D. Plumbley; Simon Dixon; Mathieu Barthet; Nicolas Gold; Samer A. Abdallah; Aquiles Alancar-Brayner; Mahendra Mahey; Adam Tovell

Digital music libraries and collections are growing quickly and are increasingly made available for research. We argue that the use of large data collections will enable a better understanding of music performance and music in general, which will benefit areas such as music search and recommendation, music archiving and indexing, music production and education. However, to achieve these goals it is necessary to develop new musicological research methods, to create and adapt the necessary technological infrastructure, and to find ways of working with legal limitations. Most of the necessary basic technologies exist, but they need to be brought together and applied to musicology. We aim to address these challenges in the Digital Music Lab project, and we feel that with suitable methods and technology Big Music Data can provide new opportunities to musicology.


European Signal Processing Conference | 2016

Digital music lab: A framework for analysing big music data

Samer A. Abdallah; Emmanouil Benetos; Nicolas Gold; Steven Hargreaves; Tillman Weyde; Daniel Wolff

In the transition from traditional to digital musicology, large-scale music data are increasingly becoming available, which requires research methods that work at the collection level and at scale. In the Digital Music Lab (DML) project, a software system has been developed that provides large-scale analysis of music audio with an interactive interface. The DML system includes distributed processing of audio and other music data, remote analysis of copyright-restricted data, logical inference on the extracted information and metadata, and visual web-based interfaces for exploring and querying music collections. A system prototype has been set up in collaboration with the British Library and I Like Music Ltd and has been used to analyse a diverse corpus of over 250,000 music recordings. In this paper, we describe the system requirements, architecture, components, and data sources, explaining their interaction. Use cases and applications with initial evaluations of the proposed system are also reported.


International World Wide Web Conference | 2012

Adapting similarity on the MagnaTagATune database: effects of model and feature choices

Daniel Wolff; Tillman Weyde

Predicting users' tastes in music has become crucial for competitive music recommendation systems, and perceived similarity plays an influential role in this. MIR currently turns towards making recommendation systems adaptive to user preferences and context. Here, we consider the particular task of adapting music similarity measures to user voting data. This work builds on and responds to previous publications based on the MagnaTagATune dataset. We have reproduced the similarity dataset presented by Stober and Nürnberger at AMR 2011 to enable a comparison of approaches. On this dataset, we compare their two-level approach, which defines similarity measures on individual facets and combines them in a linear model, to the Metric Learning to Rank (MLR) algorithm. MLR adapts a similarity measure that operates directly on low-level features to the user data. We compare the different algorithms, features, and parameter spaces with regard to minimising constraint violations. Furthermore, the effectiveness of the MLR algorithm in generalising to unknown data is evaluated on this dataset. We also explore the effects of feature choice. Here, we find that the binary genre data shows little correlation with the similarity data, but combined with audio features it clearly improves generalisation.


Proceedings of the 1st International Workshop on Digital Libraries for Musicology | 2014

Incremental Dataset Definition for Large Scale Musicological Research

Daniel Wolff; Dan Tidhar; Emmanouil Benetos; Edouard Dumon; Srikanth Cherla; Tillman Weyde

Conducting experiments on large-scale musical datasets often requires the definition of a dataset as a first step in the analysis process. This is a classification task, but metadata providing the relevant information are not always available or reliable, and manual annotation can be prohibitively expensive. In this study, we aim to automate the annotation process using a machine learning approach to classification and evaluate its effectiveness and the trade-off between accuracy and the required number of annotated samples. We present an interactive incremental method based on active learning with uncertainty sampling. The music is represented by features extracted from audio and textual metadata, and we evaluate logistic regression, support vector machines, and Bayesian classification. Labelled training examples can be iteratively produced with a web-based interface, selecting the samples with the lowest classification confidence in each iteration. We apply our method to the problem of instrumentation identification, a particular case of dataset definition, which is a critical first step in a variety of experiments and potentially also plays a significant role in the curation of digital audio collections. We have used the CHARM dataset to evaluate the effectiveness of our method, focusing on a particular case of instrumentation recognition, namely the detection of piano solo pieces. We found that uncertainty sampling led to quick improvement of the classification, which converged after ca. 100 samples to values above 98%. In our test, the textual metadata yielded better results than our audio features, and results depend on the learning method. The results show that effective training of a classifier is possible with our method, which greatly reduces the labelling effort where a residual error rate is acceptable.
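The active-learning loop described above can be sketched as follows, assuming a plain logistic-regression learner and a synthetic label oracle standing in for the paper's web-based annotation interface and the CHARM data; all function names here are hypothetical.

```python
import numpy as np

def train_logreg(X, y, lr=0.1, epochs=200):
    """Batch-gradient logistic regression (a stand-in for the paper's classifiers)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def uncertainty_sampling(X, oracle, n_init=5, n_rounds=20, seed=0):
    """Iteratively query the label of the sample the model is least certain about."""
    rng = np.random.default_rng(seed)
    labelled = [int(i) for i in rng.choice(len(X), size=n_init, replace=False)]
    pool = [i for i in range(len(X)) if i not in labelled]
    for _ in range(n_rounds):
        w = train_logreg(X[labelled], np.array([oracle(i) for i in labelled]))
        p = 1.0 / (1.0 + np.exp(-X[pool] @ w))
        # least confident sample = predicted probability closest to 0.5
        pick = pool[int(np.argmin(np.abs(p - 0.5)))]
        labelled.append(pick)
        pool.remove(pick)
    return w, labelled
```

Each round retrains on the labelled set and asks the oracle (in the paper, a human annotator) only about the most ambiguous remaining sample, which is why far fewer labels are needed than with random annotation.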


International Symposium/Conference on Music Information Retrieval | 2011

Adapting Metrics for Music Similarity Using Comparative Ratings

Daniel Wolff; Tillman Weyde


Archive | 2014

Big Chord Data Extraction and Mining

Mathieu Barthet; Mark D. Plumbley; Alexander Kachkaev; Jason Dykes; Daniel Wolff; Tillman Weyde


International Symposium/Conference on Music Information Retrieval | 2012

A Systematic Comparison of Music Similarity Adaptation Approaches

Daniel Wolff; Sebastian Stober; Andreas Nürnberger; Tillman Weyde

Collaboration


Top co-authors of Daniel Wolff:

Emmanouil Benetos (Queen Mary University of London)
Dan Tidhar (University of Cambridge)
Jason Dykes (City University London)
Nicolas Gold (University College London)
Mathieu Barthet (Queen Mary University of London)