Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Siddharth Sigtia is active.

Publication


Featured research published by Siddharth Sigtia.


IEEE Transactions on Audio, Speech, and Language Processing | 2016

An end-to-end neural network for polyphonic piano music transcription

Siddharth Sigtia; Emmanouil Benetos; Simon Dixon

We present a supervised neural network model for polyphonic piano music transcription. The architecture of the proposed model is analogous to speech recognition systems and comprises an acoustic model and a music language model. The acoustic model is a neural network used for estimating the probabilities of pitches in a frame of audio. The language model is a recurrent neural network that models the correlations between pitch combinations over time. The proposed model is general and can be used to transcribe polyphonic music without imposing any constraints on the polyphony. The acoustic and language model predictions are combined using a probabilistic graphical model. Inference over the output variables is performed using the beam search algorithm. We perform two sets of experiments: we investigate various neural network architectures for the acoustic models, and we investigate the effect of combining acoustic and music language model predictions using the proposed architecture. We compare the performance of the neural network-based acoustic models with two popular unsupervised acoustic models. Results show that convolutional neural network acoustic models yield the best performance across all evaluation metrics. We also observe improved performance with the application of the music language models. Finally, we present an efficient variant of beam search that improves performance and reduces run-times by an order of magnitude, making the model suitable for real-time applications.
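
A minimal sketch of the kind of decoding described above, assuming per-frame acoustic log-probabilities over a small set of enumerated output states and a toy stand-in for the RNN music language model; it only illustrates how the two scores are combined inside a beam search, not the paper's actual decoder.

import numpy as np

def beam_search(acoustic_logp, lm_score, beam_width=4):
    # acoustic_logp[t, s]: log-probability of output state s in frame t
    # lm_score(prev, s): language-model score for moving from state prev to s
    T, n_states = acoustic_logp.shape
    beams = [([], 0.0)]                      # (state sequence, cumulative score)
    for t in range(T):
        candidates = []
        for seq, score in beams:
            prev = seq[-1] if seq else None
            for s in range(n_states):
                total = score + acoustic_logp[t, s] + lm_score(prev, s)
                candidates.append((seq + [s], total))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]      # keep only the best hypotheses
    return beams[0][0]

# Toy usage: random "acoustic" probabilities and a smoothness-favouring LM score.
rng = np.random.default_rng(0)
acoustic_logp = np.log(rng.dirichlet(np.ones(8), size=20))
lm = lambda prev, s: 0.0 if prev is None else -0.1 * abs(prev - s)
print(beam_search(acoustic_logp, lm))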


International Conference on Acoustics, Speech, and Signal Processing | 2014

Improved music feature learning with deep neural networks

Siddharth Sigtia; Simon Dixon

Recent advances in neural network training provide a way to efficiently learn representations from raw data. Good representations are an important requirement for Music Information Retrieval (MIR) tasks to be performed successfully. However, a major problem with neural networks is that training time becomes prohibitive for very large datasets and the learning algorithm can get stuck in local minima for very deep and wide network architectures. In this paper we examine three ways to improve feature learning for audio data using neural networks: 1) using Rectified Linear Units (ReLUs) instead of standard sigmoid units; 2) using a powerful regularisation technique called Dropout; 3) using Hessian-Free (HF) optimisation to improve training of sigmoid nets. We show that these methods provide significant improvements in training time and that the features learnt are better than state-of-the-art handcrafted features, with a genre classification accuracy of 83 ± 1.1% on the Tzanetakis (GTZAN) dataset. We found that the rectifier networks learnt better features than the sigmoid networks. We also demonstrate the capacity of the features to capture relevant information from audio data by applying them to genre classification on the ISMIR 2004 dataset.
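
For illustration only, a minimal sketch of a rectifier network with Dropout of the general kind compared in the paper, written here in PyTorch with made-up layer sizes (the input dimension, hidden widths, and framework are assumptions, not taken from the paper).

import torch
import torch.nn as nn

# Illustrative sizes: 513-bin spectral frames in, 10 genre classes out.
model = nn.Sequential(
    nn.Linear(513, 500), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(500, 500), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(500, 10),
)

x = torch.randn(32, 513)   # a batch of 32 input frames
logits = model(x)
print(logits.shape)        # torch.Size([32, 10])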


International Conference on Acoustics, Speech, and Signal Processing | 2015

A hybrid recurrent neural network for music transcription

Siddharth Sigtia; Emmanouil Benetos; Nicolas Boulanger-Lewandowski; Tillman Weyde; Artur S. d'Avila Garcez; Simon Dixon

We investigate the problem of incorporating higher-level symbolic, score-like information into Automatic Music Transcription (AMT) systems to improve their performance. We use recurrent neural networks (RNNs) and their variants as music language models (MLMs) and present a generative architecture for combining these models with predictions from a frame-level acoustic classifier. We also compare different neural network architectures for acoustic modeling. The proposed model computes a distribution over possible output sequences given the acoustic input signal, and we present an algorithm for performing a global search for good candidate transcriptions. The performance of the proposed model is evaluated on piano music from the MAPS dataset, and we observe that the proposed model consistently outperforms existing transcription methods.
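
Schematically, and only as a sketch of the kind of combination described here (the paper's exact formulation may differ), the hybrid model scores an output sequence by weighting per-frame acoustic predictions with the music-language-model prior:

P(y_{1:T} \mid x_{1:T}) \;\propto\; \prod_{t=1}^{T} P_{\mathrm{acoustic}}(y_t \mid x_t)\, P_{\mathrm{MLM}}(y_t \mid y_{1:t-1})

The global search then looks for output sequences that score highly under this combined distribution.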


IEEE Transactions on Audio, Speech, and Language Processing | 2016

Automatic Environmental Sound Recognition: Performance Versus Computational Cost

Siddharth Sigtia; Adam M. Stark; Sacha Krstulovic; Mark D. Plumbley

In the context of the Internet of Things, sound sensing applications are required to run on embedded platforms where notions of product pricing and form factor impose hard constraints on the available computing power. Whereas Automatic Environmental Sound Recognition (AESR) algorithms are most often developed with limited consideration for computational cost, this paper seeks to determine which AESR algorithm can make the most of a limited amount of computing power, by comparing sound classification performance as a function of computational cost. Results suggest that Deep Neural Networks yield the best ratio of sound classification accuracy across a range of computational costs, while Gaussian Mixture Models offer a reasonable accuracy at a consistently small cost, and Support Vector Machines stand between both in terms of compromise between accuracy and computational cost.
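
One way to make such a comparison concrete is to count the arithmetic operations each classifier needs per input frame. The formulas below are rough, illustrative approximations assumed here for the sketch, not the paper's cost model.

def gmm_flops(n_components, dim):
    # Diagonal-covariance GMM: per component, a squared distance over
    # `dim` features plus a weighted exponential.
    return n_components * (3 * dim + 2)

def dnn_flops(layer_sizes):
    # Fully connected network: one multiply-add per weight.
    return sum(2 * a * b for a, b in zip(layer_sizes[:-1], layer_sizes[1:]))

def svm_flops(n_support_vectors, dim):
    # RBF-kernel SVM: one kernel evaluation per support vector.
    return n_support_vectors * (3 * dim + 2)

print(gmm_flops(32, 40), dnn_flops([40, 128, 128, 2]), svm_flops(500, 40))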


IEEE Transactions on Audio, Speech, and Language Processing | 2017

Unsupervised Feature Learning Based on Deep Models for Environmental Audio Tagging

Yong Xu; Qiang Huang; Wenwu Wang; Peter Foster; Siddharth Sigtia; Philip J. B. Jackson; Mark D. Plumbley

Environmental audio tagging aims to predict only the presence or absence of certain acoustic events in the acoustic scene of interest. In this paper, we make contributions to audio tagging in two parts: acoustic modeling and feature learning. We propose to use a shrinking deep neural network (DNN) framework incorporating unsupervised feature learning to handle the multilabel classification task. For the acoustic modeling, a large set of contextual frames of the chunk are fed into the DNN to perform a multilabel classification for the expected tags, considering that only chunk-level (or utterance-level) rather than frame-level labels are available. Dropout and background noise aware training are also adopted to improve the generalization capability of the DNNs. For the unsupervised feature learning, we propose to use a symmetric or asymmetric deep denoising auto-encoder (syDAE or asyDAE) to generate new data-driven features from the logarithmic Mel-filterbank features. The new features, which are smoothed against background noise and more compact with contextual information, can further improve the performance of the DNN baseline. Compared with the standard Gaussian mixture model baseline of the DCASE 2016 audio tagging challenge, our proposed method obtains a significant equal error rate (EER) reduction from 0.21 to 0.13 on the development set. The proposed asyDAE system obtains a relative 6.7% EER reduction compared with the strong DNN baseline on the development set. Finally, the results also show that our approach obtains state-of-the-art performance with 0.15 EER on the evaluation set of the DCASE 2016 audio tagging task, while the EER of the first prize of this challenge is 0.17.
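
As a rough sketch of the unsupervised feature learning step (assumed sizes and PyTorch code, not the authors' implementation): a symmetric denoising auto-encoder maps corrupted log-Mel frames back to clean ones, and its bottleneck activations would then feed the tagging DNN.

import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    # Symmetric denoising auto-encoder over log-Mel filterbank frames.
    def __init__(self, n_mels=40, hidden=500, bottleneck=50):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_mels, hidden), nn.ReLU(),
            nn.Linear(hidden, bottleneck), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.Linear(bottleneck, hidden), nn.ReLU(),
            nn.Linear(hidden, n_mels))

    def forward(self, x):
        noisy = x + 0.1 * torch.randn_like(x)   # corrupt the input
        return self.decoder(self.encoder(noisy))

x = torch.randn(64, 40)                          # a batch of log-Mel frames
recon = DenoisingAE()(x)
loss = nn.functional.mse_loss(recon, x)          # reconstruct the clean frames
print(loss.item())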


Workshop on Applications of Signal Processing to Audio and Acoustics | 2015

CHiME-Home: A dataset for sound source recognition in a domestic environment

Peter Foster; Siddharth Sigtia; Sacha Krstulovic; Jon Barker; Mark D. Plumbley


International Society for Music Information Retrieval Conference | 2015

Audio Chord Recognition with a Hybrid Recurrent Neural Network

Siddharth Sigtia; Nicolas Boulanger-Lewandowski; Simon Dixon


International Society for Music Information Retrieval Conference | 2014

An RNN-based Music Language Model for Improving Automatic Music Transcription

Siddharth Sigtia; Emmanouil Benetos; Srikanth Cherla; Tillman Weyde; Artur S. d'Avila Garcez; Simon Dixon


arXiv: Sound | 2016

Fully Deep Neural Networks Incorporating Unsupervised Feature Learning for Audio Tagging

Yong Xu; Qiang Huang; Wenwu Wang; Peter Foster; Siddharth Sigtia; Philip J. B. Jackson; Mark D. Plumbley


arXiv: Neural and Evolutionary Computing | 2016

Learning to Generate Genotypes with Neural Networks

Alexander W. Churchill; Siddharth Sigtia; Chrisantha Fernando

Collaboration


Dive into Siddharth Sigtia's collaborations.

Top Co-Authors

Simon Dixon, Queen Mary University of London

Emmanouil Benetos, Queen Mary University of London

Peter Foster, Queen Mary University of London

Alexander W. Churchill, Queen Mary University of London

Chrisantha Fernando, Queen Mary University of London