Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Srikanth Cherla is active.

Publication


Featured research published by Srikanth Cherla.


IEEE-ASME Transactions on Mechatronics | 2014

Regression Methods for Virtual Metrology of Layer Thickness in Chemical Vapor Deposition

Hendrik Purwins; Bernd Barak; Ahmed Nagi; Reiner Engel; Uwe Höckele; Andreas Kyek; Srikanth Cherla; Benjamin Lenz; Günter Pfeifer; Kurt Weinzierl

The quality of wafer production in semiconductor manufacturing cannot always be monitored by a costly physical measurement. Instead of measuring a quantity directly, it can be predicted by a regression method (virtual metrology). In this paper, a survey on regression methods is given to predict average silicon nitride cap layer thickness for the plasma-enhanced chemical vapor deposition dual-layer metal passivation stack process. Process and production equipment fault detection and classification data are used as predictor variables. Various variable sets are compared: the single most predictive variable alone, the three most predictive variables, an expert selection, and the full set. The following regression methods are compared: simple linear regression, multiple linear regression, partial least squares regression, ridge linear regression utilizing the partial least squares estimate algorithm, and support vector regression (SVR). On a test set, SVR outperforms the other methods by a large margin, being more robust toward changes in the production conditions. The method performs better on high-dimensional multivariate input data than on the most predictive variables alone. Process expert knowledge used for a priori variable selection further enhances the performance slightly. The results confirm earlier findings that virtual metrology can benefit from the robustness of SVR, an adaptive generic method that performs well even if no process knowledge is applied. However, integrating process expertise into the method improves performance further.
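The comparison described above can be sketched in a few lines. This is an illustrative sketch only, using synthetic data and placeholder names, not the paper's actual fault detection and classification measurements: it contrasts a linear regressor (ridge) with SVR for predicting a continuous target analogous to layer thickness from multivariate process variables, with scikit-learn assumed available.

```python
# Hedged sketch: linear vs. SVR regression on synthetic stand-in data.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))            # stand-in for process/FDC variables
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=200)  # stand-in thickness

X_train, y_train = X[:150], y[:150]
X_test, y_test = X[150:], y[150:]

errors = {}
for name, model in {
    "ridge": make_pipeline(StandardScaler(), Ridge(alpha=1.0)),
    "svr": make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)),
}.items():
    model.fit(X_train, y_train)
    errors[name] = mean_absolute_error(y_test, model.predict(X_test))
    print(f"{name}: test MAE = {errors[name]:.3f}")
```

On real process data, the paper's finding is that SVR's advantage shows up chiefly in robustness to changing production conditions, which a single synthetic split like this cannot demonstrate.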


Computer Music Journal | 2013

Automatic phrase continuation from guitar and bass guitar melodies

Srikanth Cherla; Hendrik Purwins; Marco Marchini

A framework is proposed for generating interesting, musically similar variations of a given monophonic melody. The focus is on pop/rock guitar and bass guitar melodies with the aim of eventual extensions to other instruments and musical styles. It is demonstrated here how learning musical style from segmented audio data can be formulated as an unsupervised learning problem to generate a symbolic representation. A melody is first segmented into a sequence of notes using onset detection and pitch estimation. A set of hierarchical, coarse-to-fine symbolic representations of the melody is generated by clustering pitch values at multiple similarity thresholds. The variance ratio criterion is then used to select the appropriate clustering levels in the hierarchy. Note onsets are aligned with beats, considering the estimated meter of the melody, to create a sequence of symbols that represent the rhythm in terms of onsets/rests and the metrical locations of their occurrence. A joint representation based on the cross-product of the pitch cluster indices and metrical locations is used to train the prediction model, a variable-length Markov chain. The melodies generated by the model were evaluated through a questionnaire by a group of experts, and received an overall positive response.
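The prediction model named above, a variable-length Markov chain, can be sketched minimally as follows. The symbols here are hypothetical note labels standing in for the paper's joint pitch-cluster/metrical-position representation; the back-off scheme is a generic one, not necessarily the exact variant used by the authors.

```python
# Hedged sketch of a variable-length (variable-order) Markov chain:
# count continuations for every context up to max_order, then back off
# from the longest matching context when predicting.
from collections import Counter, defaultdict

def train(sequence, max_order=3):
    """Count next-symbol occurrences for all contexts up to max_order."""
    counts = defaultdict(Counter)
    for i in range(len(sequence)):
        for order in range(0, max_order + 1):
            if i - order < 0:
                break
            context = tuple(sequence[i - order:i])
            counts[context][sequence[i]] += 1
    return counts

def predict(counts, context, max_order=3):
    """Return a next-symbol distribution, backing off to shorter contexts."""
    for order in range(min(max_order, len(context)), -1, -1):
        ctx = tuple(context[len(context) - order:])
        if ctx in counts and counts[ctx]:
            total = sum(counts[ctx].values())
            return {s: c / total for s, c in counts[ctx].items()}
    return {}

melody = ["C4", "E4", "G4", "C4", "E4", "G4", "C4", "E4"]
model = train(melody)
print(predict(model, ["C4", "E4"]))  # G4 is the only observed continuation
```

Sampling from the returned distribution, rather than taking the argmax, is what yields the "interesting, musically similar variations" the framework aims for.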


international symposium on neural networks | 2015

Discriminative learning and inference in the Recurrent Temporal RBM for melody modelling

Srikanth Cherla; Son N. Tran; Artur S. d'Avila Garcez; Tillman Weyde

We are interested in modelling musical pitch sequences of melodies in symbolic form. The task here is to learn a model to predict the probability distribution over the various possible values of pitch of the next note in a melody, given those leading up to it. For this task, we propose the Recurrent Temporal Discriminative Restricted Boltzmann Machine (RTDRBM). It is obtained by carrying out discriminative learning and inference as put forward in the Discriminative RBM (DRBM), in a temporal setting by incorporating the recurrent structure of the Recurrent Temporal RBM (RTRBM). The model is evaluated on the cross entropy of its predictions using a corpus containing 8 datasets of folk and chorale melodies, and compared with n-grams and other standard connectionist models. Results show that the RTDRBM has a better predictive performance than the rest of the models, and that the improvement is statistically significant.
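The evaluation metric mentioned above can be made concrete. This is a hedged sketch of mean cross entropy (in bits per prediction) of a sequence model's next-symbol predictions; the model here is a toy bigram predictor on placeholder MIDI pitch numbers, not the RTDRBM itself.

```python
# Hedged sketch: mean cross entropy of next-symbol predictions, the
# evaluation measure used to compare the melody models described above.
import math
from collections import Counter, defaultdict

def bigram_model(sequence):
    """Count next-symbol occurrences conditioned on the previous symbol."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(sequence, sequence[1:]):
        counts[prev][nxt] += 1
    return counts

def cross_entropy(counts, sequence, eps=1e-6):
    """Mean negative log2 probability the model assigns to each next symbol."""
    total_bits, n = 0.0, 0
    for prev, nxt in zip(sequence, sequence[1:]):
        ctx = counts[prev]
        total = sum(ctx.values())
        p = ctx[nxt] / total if total else 0.0
        total_bits += -math.log2(max(p, eps))
        n += 1
    return total_bits / n

train_seq = [60, 62, 64, 60, 62, 64, 60, 62]   # placeholder pitch sequence
model = bigram_model(train_seq)
print(cross_entropy(model, train_seq))          # 0.0: every bigram is certain
```

Lower cross entropy means the model assigns higher probability to the notes that actually occur, which is the sense in which the RTDRBM outperforms n-grams and the other connectionist baselines.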


international conference on artificial neural networks | 2016

Generalising the Discriminative Restricted Boltzmann Machines

Srikanth Cherla; Son N. Tran; Artur S. d'Avila Garcez; Tillman Weyde

We present a novel theoretical result that generalises the Discriminative Restricted Boltzmann Machine (DRBM). While originally the DRBM was defined assuming the \(\{0, 1\}\)-Bernoulli distribution in each of its hidden units, this result makes it possible to derive cost functions for variants of the DRBM that utilise other distributions, including some that are often encountered in the literature. In particular, the paper shows that the cost function can be extended to Binomial and \(\{-1,+1\}\)-Bernoulli hidden units.
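For context, the quantity being generalised can be written out. In the original DRBM with \(\{0,1\}\)-Bernoulli hidden units, the class posterior has a well-known closed form (the notation below is the standard one from the DRBM literature, not taken from this paper):

```latex
p(y \mid \mathbf{x}) =
  \frac{e^{d_y} \prod_j \left(1 + e^{\,c_j + U_{jy} + \sum_i W_{ji} x_i}\right)}
       {\sum_{y'} e^{d_{y'}} \prod_j \left(1 + e^{\,c_j + U_{jy'} + \sum_i W_{ji} x_i}\right)}
```

Each per-unit factor \(1 + e^{s}\) arises from summing a \(\{0,1\}\)-Bernoulli hidden variable out of the energy function; the generalisation replaces this factor with the corresponding sum for other hidden-unit distributions (for example \(e^{-s} + e^{s}\) for \(\{-1,+1\}\)-Bernoulli units).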


Proceedings of the 1st International Workshop on Digital Libraries for Musicology | 2014

Incremental Dataset Definition for Large Scale Musicological Research

Daniel Wolff; Dan Tidhar; Emmanouil Benetos; Edouard Dumon; Srikanth Cherla; Tillman Weyde

Conducting experiments on large scale musical datasets often requires the definition of a dataset as a first step in the analysis process. This is a classification task, but metadata providing the relevant information is not always available or reliable and manual annotation can be prohibitively expensive. In this study we aim to automate the annotation process using a machine learning approach for classification. We evaluate the effectiveness and the trade-off between accuracy and required number of annotated samples. We present an interactive incremental method based on active learning with uncertainty sampling. The music is represented by features extracted from audio and textual metadata and we evaluate logistic regression, support vector machines and Bayesian classification. Labelled training examples can be iteratively produced with a web-based interface, selecting the samples with lowest classification confidence in each iteration. We apply our method to address the problem of instrumentation identification, a particular case of dataset definition, which is a critical first step in a variety of experiments and potentially also plays a significant role in the curation of digital audio collections. We have used the CHARM dataset to evaluate the effectiveness of our method and focused on a particular case of instrumentation recognition, namely on the detection of piano solo pieces. We found that uncertainty sampling led to quick improvement of the classification, which converged after ca. 100 samples to values above 98%. In our test the textual metadata yield better results than our audio features and results depend on the learning methods. The results show that effective training of a classifier is possible with our method which greatly reduces the effort of labelling where a residual error rate is acceptable.
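The active-learning loop described above can be sketched schematically. This is an illustrative sketch with synthetic stand-in data (not the CHARM recordings or the paper's audio/metadata features): each round fits a classifier on the labelled set, scores the unlabelled pool, and "annotates" the sample with the lowest classification confidence, as in uncertainty sampling.

```python
# Hedged sketch: active learning with uncertainty sampling on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # stand-in for "piano solo or not"

# Seed set: a handful of annotated examples from each class.
labelled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
pool = [i for i in range(len(X)) if i not in labelled]

clf = LogisticRegression()
for _ in range(10):                        # each round = one annotation
    clf.fit(X[labelled], y[labelled])
    proba = clf.predict_proba(X[pool])
    # Uncertainty = how far the top-class probability is from certainty.
    uncertainty = 1.0 - proba.max(axis=1)
    pick = pool[int(np.argmax(uncertainty))]
    labelled.append(pick)                  # "annotate" the least certain item
    pool.remove(pick)

accuracy = clf.score(X, y)
print(f"accuracy after {len(labelled)} labels: {accuracy:.2f}")
```

In the paper this selection step is driven by a web-based annotation interface and, on the piano-solo detection task, classification converged above 98% after roughly 100 labelled samples.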


international symposium/conference on music information retrieval | 2014

An RNN-based Music Language Model for Improving Automatic Music Transcription

Siddharth Sigtia; Emmanouil Benetos; Srikanth Cherla; Tillman Weyde; Artur S. d'Avila Garcez; Simon Dixon


Archive | 2013

An efficient shift-invariant model for polyphonic music transcription

Emmanouil Benetos; Srikanth Cherla; Tillman Weyde


international symposium/conference on music information retrieval | 2013

A Distributed Model For Multiple-Viewpoint Melodic Prediction.

Srikanth Cherla; Tillman Weyde; Artur S. d'Avila Garcez; Marcus T. Pearce


Archive | 2013

A Neural Probabilistic Model for Predicting Melodic Sequences

Srikanth Cherla; Artur d’Avila Garcez; Tillman Weyde


international symposium/conference on music information retrieval | 2014

Multiple Viewpoint Melodic Prediction with Fixed-Context Neural Networks.

Srikanth Cherla; Tillman Weyde; Artur S. d'Avila Garcez

Collaboration


Dive into Srikanth Cherla's collaborations.

Top Co-Authors

Emmanouil Benetos (Queen Mary University of London)
Son N. Tran (City University London)
Marcus T. Pearce (Queen Mary University of London)
Siddharth Sigtia (Queen Mary University of London)
Simon Dixon (Queen Mary University of London)
Dan Tidhar (University of Cambridge)