Publications


Featured research published by Tillman Weyde.


Journal of New Music Research | 2013

An approach to melodic segmentation and classification based on filtering with the Haar-wavelet

Gissel Velarde; Tillman Weyde; David Meredith

We present a novel method of classification and segmentation of melodies in symbolic representation. The method is based on filtering pitch as a signal over time with the Haar wavelet, and we evaluate it on two tasks. The filtered signal corresponds to a single-scale signal w_s from the continuous Haar wavelet transform. The melodies are first segmented using local maxima or zero-crossings of w_s. The segments of w_s are then classified using the k-nearest-neighbour algorithm with Euclidean and city-block distances. This method proves more effective than using unfiltered pitch signals and Gestalt-based segmentation when used to recognize the parent works of segments from Bach's Two-Part Inventions (BWV 772–786). When used to classify 360 Dutch folk tunes into 26 tune families, the performance of the method is comparable to the use of pitch signals, but not as good as that of string-matching methods based on multiple features.
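To make the filtering and segmentation steps concrete, here is a minimal Python sketch: it convolves a sampled pitch signal with a single-scale Haar kernel and takes zero-crossings of the result as segment boundaries. The function names, the normalisation, and the toy melody are illustrative assumptions, not taken from the paper.

    import numpy as np

    def haar_filter(pitch, scale):
        """Filter a sampled pitch signal with a Haar wavelet at one scale.

        The kernel is -1 over the first half of its window and +1 over the
        second half; convolving it with the pitch signal yields the
        single-scale coefficient signal (w_s in the abstract).
        """
        kernel = np.concatenate([-np.ones(scale), np.ones(scale)])
        kernel /= np.sqrt(2 * scale)  # keep coefficients comparable across scales
        return np.convolve(pitch, kernel, mode="same")

    def zero_crossing_boundaries(ws):
        """Indices where the filtered signal changes sign: candidate
        segment boundaries, as in the zero-crossing segmentation."""
        return np.where(np.diff(np.sign(ws)) != 0)[0] + 1

    # Toy melody: MIDI pitches sampled once per sixteenth note (invented).
    pitch = np.array([60, 60, 62, 64, 64, 62, 60, 59, 57, 57, 59, 60], float)
    print(zero_crossing_boundaries(haar_filter(pitch, scale=2)))

The segments found this way would then be compared with the k-nearest-neighbour classification and the distance measures named above.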


International Conference on Acoustics, Speech, and Signal Processing | 2015

A hybrid recurrent neural network for music transcription

Siddharth Sigtia; Emmanouil Benetos; Nicolas Boulanger-Lewandowski; Tillman Weyde; Artur S. d'Avila Garcez; Simon Dixon

We investigate the problem of incorporating higher-level symbolic score-like information into Automatic Music Transcription (AMT) systems to improve their performance. We use recurrent neural networks (RNNs) and their variants as music language models (MLMs) and present a generative architecture for combining these models with predictions from a frame-level acoustic classifier. We also compare different neural network architectures for acoustic modeling. The proposed model computes a distribution over possible output sequences given the acoustic input signal, and we present an algorithm for performing a global search for good candidate transcriptions. The performance of the proposed model is evaluated on piano music from the MAPS dataset, and we observe that the proposed model consistently outperforms existing transcription methods.
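The combination of a frame-level acoustic model with a music language model in a global search can be illustrated with a generic beam search over note states. Everything below (the scoring interface, the toy language model, the random acoustic scores) is a hypothetical stand-in for the paper's RNN-based components, not the actual algorithm.

    import numpy as np

    def beam_search(acoustic_logp, lm_score, beam_width=4):
        """Search for a high-scoring state sequence.

        acoustic_logp: (T, S) frame-level log-probabilities over S states.
        lm_score(prev, s): log-score contribution of a language model.
        """
        beams = [((), 0.0)]  # (state sequence, cumulative log-score)
        for t in range(acoustic_logp.shape[0]):
            candidates = []
            for seq, score in beams:
                prev = seq[-1] if seq else None
                for s in range(acoustic_logp.shape[1]):
                    candidates.append(
                        (seq + (s,), score + acoustic_logp[t, s] + lm_score(prev, s))
                    )
            candidates.sort(key=lambda c: c[1], reverse=True)
            beams = candidates[:beam_width]  # keep only the best hypotheses
        return beams[0]

    # Hypothetical inputs: random acoustic scores, an LM that favours repeats.
    rng = np.random.default_rng(0)
    logp = np.log(rng.dirichlet(np.ones(3), size=8))
    lm = lambda prev, s: 0.0 if prev is None else (0.5 if s == prev else -0.5)
    print(beam_search(logp, lm))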


International Conference on Acoustics, Speech, and Signal Processing | 2014

Automatic transcription of pitched and unpitched sounds from polyphonic music

Emmanouil Benetos; Sebastian Ewert; Tillman Weyde

Automatic transcription of polyphonic music has been an active research field for several years and is considered by many to be a key enabling technology in music signal processing. However, current transcription approaches either focus on detecting pitched sounds (from pitched musical instruments) or on detecting unpitched sounds (from drum kits). In this paper, we propose a method that jointly transcribes pitched and unpitched sounds from polyphonic music recordings. The proposed model extends the probabilistic latent component analysis algorithm and supports the detection of pitched sounds from multiple instruments as well as the detection of unpitched sounds from drum kit components, including bass drums, snare drums, cymbals, hi-hats, and toms. Our experiments based on polyphonic Western music containing both pitched and unpitched instruments led to very encouraging results in multi-pitch detection and drum transcription tasks.
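Probabilistic latent component analysis is closely related to non-negative matrix factorization with a KL-divergence objective, so the flavour of the decomposition can be sketched in a few lines. The code below is a simplified stand-in for the paper's extended model: it explains a magnitude spectrogram as fixed templates (pitched notes and drum components) times activations.

    import numpy as np

    def transcribe(V, W, n_iter=100):
        """Estimate activations H so that V is approximately W @ H.

        V: (F, T) magnitude spectrogram; W: (F, K) fixed templates whose
        columns stand for pitched notes and drum-kit components. H then
        serves as a joint pitched/unpitched transcript.
        """
        H = np.random.default_rng(0).random((W.shape[1], V.shape[1]))
        for _ in range(n_iter):
            # Multiplicative update minimising KL divergence, the same
            # objective family that PLCA's EM updates optimise.
            H *= (W.T @ (V / (W @ H + 1e-9))) / W.sum(axis=0, keepdims=True).T
        return H

    # Hypothetical data: random non-negative spectrogram and templates.
    rng = np.random.default_rng(1)
    print(transcribe(rng.random((6, 4)), rng.random((6, 3))).round(2))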


Information Retrieval | 2014

Learning music similarity from relative user ratings

Daniel Wolff; Tillman Weyde

Computational modelling of music similarity is an increasingly important part of personalisation and optimisation in music information retrieval and research in music perception and cognition. The use of relative similarity ratings is a new and promising approach to modelling similarity that avoids well-known problems with absolute ratings. In this article, we use relative ratings from the MagnaTagATune dataset with new and existing variants of state-of-the-art algorithms and provide the first comprehensive and rigorous evaluation of this approach. We compare metric learning based on support vector machines (SVMs) and metric-learning-to-rank (MLR), including a diagonal and a novel weighted variant, and relative distance learning with neural networks (RDNN). We further evaluate the effectiveness of different high- and low-level audio features and genre data, as well as dimensionality reduction methods, weighting of similarity ratings, and different sampling methods. Our results show that music similarity measures learnt on relative ratings can be significantly better than a standard Euclidean metric, depending on the choice of learning algorithm, feature sets, and application scenario. MLR and SVM outperform DMLR and RDNN, while MLR with weighted ratings leads to no further performance gain. Timbral and music-structural features are most effective, and all features jointly are significantly better than any other combination of feature sets. Sharing audio clips (but not the similarity ratings) between test and training sets improves performance, in particular for the SVM-based methods, which is useful for some application scenarios. A testing framework has been implemented in Matlab and made publicly available at http://mi.soi.city.ac.uk/datasets/ir2012framework so that these results are reproducible.
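As an illustration of learning from relative ratings, the sketch below fits a diagonal metric from similarity triplets with a simple hinge loss. It is a drastically simplified stand-in for the SVM- and MLR-based learners evaluated in the article, and the data are invented.

    import numpy as np

    def learn_diagonal_metric(X, triplets, lr=0.01, epochs=50, margin=1.0):
        """Learn per-feature weights w so that for each triplet (i, j, k),
        meaning clip i was rated more similar to j than to k, the weighted
        squared distance satisfies d(i, j) + margin < d(i, k)."""
        w = np.ones(X.shape[1])
        for _ in range(epochs):
            for i, j, k in triplets:
                dij = (X[i] - X[j]) ** 2
                dik = (X[i] - X[k]) ** 2
                if w @ dij + margin > w @ dik:  # constraint violated
                    w -= lr * (dij - dik)       # hinge-loss gradient step
                    w = np.maximum(w, 0.0)      # keep the metric valid
        return w

    # Hypothetical data: four clips with three features each.
    X = np.array([[0.1, 0.9, 0.5], [0.2, 0.8, 0.5],
                  [0.9, 0.1, 0.4], [0.5, 0.5, 0.5]])
    print(learn_diagonal_metric(X, [(0, 1, 2), (3, 0, 2)]))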


Adaptive Multimedia Retrieval | 2011

Combining sources of description for approximating music similarity ratings

Daniel Wolff; Tillman Weyde

In this paper, we compare the effectiveness of basic acoustic features and genre annotations when adapting a music similarity model to user ratings. We use the Metric Learning to Rank algorithm to learn a Mahalanobis metric from comparative similarity ratings in the MagnaTagATune database. Using common formats for feature data, our approach can easily be transferred to other existing databases. Our results show that genre data allow more effective learning of a metric than simple audio features, but a combination of both feature sets clearly outperforms either individual set.
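The learned metric has the Mahalanobis form, and combining sources of description amounts to concatenating the feature vectors before learning. The sketch below shows both; the feature values are invented, and an identity matrix stands in for a learned W.

    import numpy as np

    def mahalanobis_sq(x, y, W):
        """Squared distance (x - y)^T W (x - y) under a positive
        semidefinite matrix W, the form of metric that Metric Learning
        to Rank optimises."""
        d = x - y
        return float(d @ W @ d)

    # Combine audio features with binary genre annotations by concatenation.
    clip_a = np.concatenate([[0.31, 1.20], [1, 0, 0]])  # audio + genre (invented)
    clip_b = np.concatenate([[0.28, 0.95], [0, 1, 0]])
    W = np.eye(5)  # untrained: reduces to the squared Euclidean distance
    print(mahalanobis_sq(clip_a, clip_b, W))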


ACM Journal on Computing and Cultural Heritage | 2017

The Digital Music Lab: A Big Data Infrastructure for Digital Musicology

Samer A. Abdallah; Emmanouil Benetos; Nicolas Gold; Steven Hargreaves; Tillman Weyde; Daniel Wolff

In musicology and music research generally, the increasing availability of digital music, storage capacity, and computing power enables and requires new and intelligent systems. In the transition from traditional to digital musicology, many techniques and tools have been developed for the analysis of individual pieces of music, but the large-scale music data that are increasingly becoming available require research methods and systems that work at the collection level and at scale. Although many relevant algorithms have been developed during the past 15 years of research in Music Information Retrieval, an integrated system that supports large-scale digital musicology research has so far been lacking. In the Digital Music Lab (DML) project, a collaboration among music librarians, musicologists, computer scientists, and human-computer interface specialists, we have developed the DML software system, which fills this gap by providing intelligent large-scale music analysis with a user-friendly interactive interface that supports musicologists in their exploration and enquiry. The DML system empowers musicologists by addressing several challenges: distributed processing of audio and other music data, management of the data analysis process and results, remote analysis of data under copyright, logical inference on the extracted information and metadata, and visual web-based interfaces for exploring and querying the music collections. The DML system is scalable, is based on Semantic Web technology, and integrates into Linked Data, with the vision of a distributed system that enables music research across archives, libraries, and other providers of music data. A first DML system prototype has been set up in collaboration with the British Library and I Like Music Ltd. This system has been used to analyse a diverse corpus that currently comprises 250,000 music tracks. In this article, we describe the DML system requirements, design, architecture, components, and available data sources, explaining their interaction. We report use cases and applications with initial evaluations of the proposed system.


Communications in Computer and Information Science | 2009

Sequential Association Rules in Atonal Music

Aline Honingh; Tillman Weyde; Darrell Conklin

This paper describes a preliminary study on the structure of atonal music. In the same way as sequential association rules of chords can be found in tonal music, sequential association rules of pitch class set categories can be found in atonal music. It has been noted before that certain pitch class sets can be grouped into 6 different categories. In this paper, we calculate those categories in a different way and show that virtually all possible pitch class sets can be grouped into these categories. Each piece in a corpus of atonal music was segmented at the bar level, and each segment was assigned to the category to which it belongs. The percentages of occurrence of the different categories in the corpus were tabulated, and it turns out that these statistics may be useful for distinguishing tonal from atonal music. Furthermore, sequential association rules were sought within the sequence of categories. The category transition matrix shows how many times one specific category is followed by another. The statistical significance of each progression can be calculated, and we present the significant progressions as sequential association rules for atonal music.
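The transition-matrix counting and a simple significance test can be sketched as follows. The normal approximation used here is one common choice and may differ from the paper's test, and the toy category sequence is invented.

    from collections import Counter
    from math import sqrt

    def significant_progressions(sequences, z_threshold=2.0):
        """Count category bigrams and flag progressions whose observed
        count exceeds the expectation under independence, using a normal
        approximation as the significance criterion."""
        bigrams, prevs, nexts, total = Counter(), Counter(), Counter(), 0
        for seq in sequences:
            for a, b in zip(seq, seq[1:]):
                bigrams[(a, b)] += 1
                prevs[a] += 1
                nexts[b] += 1
                total += 1
        rules = []
        for (a, b), observed in bigrams.items():
            expected = prevs[a] * nexts[b] / total
            z = (observed - expected) / sqrt(expected)
            if z > z_threshold:
                rules.append((a, b, observed, round(z, 2)))
        return rules

    # Toy sequence of three category labels; low threshold for the tiny sample.
    print(significant_progressions([[0, 1, 0, 1, 2, 0, 1, 0, 1]], z_threshold=1.0))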


Second International Conference on Web Delivering of Music (WEDELMUSIC 2002) | 2002

Concepts of the MUSITECH infrastructure for Internet-based interactive musical applications

Martin Gieseking; Tillman Weyde

This paper gives a survey of the infrastructure currently being developed in the MUSITECH project. The aim of this project is to conceptualize and implement a computational environment for navigation and interaction in Internet-based musical applications. This comprises the development of data models, exchange formats, interface modules, and a software framework. Our approach is to integrate different information and media types, such as MIDI, audio, text-based codes, and metadata, and their relations, and especially to provide means to describe arbitrary musical structures. We attempt to connect different musical domains to support cooperation and synergies. To establish platform independence, Java, XML (Extensible Markup Language), and other open standards are used. The object model, a framework, various components for visualization, playback, and other common tasks, and the technical infrastructure are being developed and will be evaluated within the project.


International Journal of Smart Engineering System Design | 2003

Design and Optimization of Neuro-Fuzzy-Based Recognition of Musical Rhythm Patterns

Tillman Weyde; Klaus Dalinghaus

Since melody is based on rhythm, the task of recognizing patterns and assigning rhythmic structure to unquantized musical input is a fundamental one for interactive musical systems and for searching musical databases. We use a combination of combinatorial pattern matching and structural interpretation with a match quality rating by a neuro-fuzzy system that incorporates musical knowledge and operates on perceptually relevant features extracted from the input data. This system can learn from relatively few expert examples by using iterative training with relative samples. It shows good recognition results, and the pre-filtering and optimization methods used facilitate efficient computation. The system is modular, so feature extraction, rules, and perceptual constraints can be changed to adapt it to other areas of application.
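A minimal sketch of the fuzzy rating idea: membership functions map perceptually relevant features to degrees of truth, and a weighted sum (standing in for the trained neuro-fuzzy network) rates one candidate interpretation of the input. The features, rules, and weights below are illustrative assumptions only.

    import numpy as np

    def gaussian_membership(x, centre, width):
        """Degree to which a feature value belongs to a fuzzy concept,
        e.g. 'onset nearly on the beat'."""
        return np.exp(-((x - centre) / width) ** 2)

    def match_quality(features, rules, weights):
        """Rate a candidate rhythmic interpretation: each rule
        (feature index, centre, width) contributes its membership value,
        combined by weights that a neuro-fuzzy system would learn."""
        activations = np.array(
            [gaussian_membership(features[i], c, w) for i, c, w in rules]
        )
        return float(weights @ activations)

    # Hypothetical features: onset deviation from the grid, inter-onset ratio.
    rules = [(0, 0.0, 0.05), (1, 2.0, 0.3)]  # 'on the beat', 'ratio near 2:1'
    weights = np.array([0.7, 0.3])           # illustrative, not trained
    print(match_quality(np.array([0.02, 1.9]), rules, weights))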


International Symposium on Neural Networks | 2015

Discriminative learning and inference in the Recurrent Temporal RBM for melody modelling

Srikanth Cherla; Son N. Tran; Artur S. d'Avila Garcez; Tillman Weyde

We are interested in modelling musical pitch sequences in melodies in symbolic form. The task is to learn a model that predicts the probability distribution over the possible pitch values of the next note in a melody, given those leading up to it. For this task, we propose the Recurrent Temporal Discriminative Restricted Boltzmann Machine (RTDRBM). It is obtained by carrying out discriminative learning and inference, as put forward in the Discriminative RBM (DRBM), in a temporal setting, by incorporating the recurrent structure of the Recurrent Temporal RBM (RTRBM). The model is evaluated on the cross entropy of its predictions using a corpus containing 8 datasets of folk and chorale melodies, and compared with n-grams and other standard connectionist models. Results show that the RTDRBM has a better predictive performance than the rest of the models, and that the improvement is statistically significant.
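The evaluation measure, the cross entropy of next-pitch predictions, is easy to state in code. The prediction interface and the uniform baseline below are assumptions for illustration; any model (n-gram, RTDRBM, or another connectionist model) could be plugged in.

    import numpy as np

    def mean_cross_entropy(melodies, predict_proba):
        """Average negative log2-probability assigned to each next pitch,
        given the notes leading up to it."""
        losses = []
        for melody in melodies:
            for t in range(1, len(melody)):
                p = predict_proba(melody[:t]).get(melody[t], 0.0)
                losses.append(-np.log2(max(p, 1e-12)))
        return float(np.mean(losses))

    # Hypothetical baseline: uniform over a pitch alphabet of 25 values.
    uniform = lambda context: {p: 1 / 25 for p in range(25)}
    print(mean_cross_entropy([[12, 14, 16, 14, 12]], uniform))  # about 4.64 bits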

Collaboration


Dive into Tillman Weyde's collaborations.

Top co-authors:

Emmanouil Benetos (Queen Mary University of London)
Simon Dixon (Queen Mary University of London)
Son N. Tran (City University London)
Dan Tidhar (University of Cambridge)
Kerstin Neubarth (Canterbury Christ Church University)
Jason Dykes (City University London)
Kia Ng (University of Leeds)