Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Sankaran Panchapagesan is active.

Publication


Featured research published by Sankaran Panchapagesan.


Computer Speech & Language | 2009

Frequency warping for VTLN and speaker adaptation by linear transformation of standard MFCC

Sankaran Panchapagesan; Abeer Alwan

Vocal tract length normalization (VTLN) for standard filterbank-based Mel frequency cepstral coefficient (MFCC) features is usually implemented by warping the center frequencies of the Mel filterbank, and the warping factor is estimated using the maximum likelihood score (MLS) criterion. A linear transform (LT) equivalent for frequency warping (FW) would enable more efficient MLS estimation. We recently proposed a novel LT to perform FW for VTLN and model adaptation with standard MFCC features. In this paper, we present the mathematical derivation of the LT and give a compact formula to calculate it for any FW function. We also show that our LT is closely related to different LTs previously proposed for FW with cepstral features, and these LTs for FW are all shown to be numerically almost identical for the sine-log all-pass transform (SLAPT) warping functions. Our formula for the transformation matrix is, however, computationally simpler and, unlike other previous LT approaches to VTLN with MFCC features, no modification of the standard MFCC feature extraction scheme is required. In VTLN and speaker adaptive modeling (SAM) experiments with the DARPA resource management (RM1) database, the performance of the new LT was comparable to that of regular VTLN implemented by warping the Mel filterbank, when the MLS criterion was used for FW estimation. This demonstrates that the approximations involved do not lead to any performance degradation. Performance comparable to front end VTLN was also obtained with LT adaptation of HMM means in the back end, combined with mean bias and variance adaptation according to the maximum likelihood linear regression (MLLR) framework. The FW methods performed significantly better than standard MLLR for very limited adaptation data (1 utterance), and were equally effective with unsupervised parameter estimation. We also performed speaker adaptive training (SAT) with feature space LT denoted CLTFW. Global CLTFW SAT gave results comparable to SAM and VTLN. By estimating multiple CLTFW transforms using a regression tree, and including an additive bias, we obtained significantly improved results compared to VTLN, with increasing adaptation data.
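The conventional baseline that the paper compares against, warping the Mel filterbank center frequencies by a factor alpha, can be illustrated with a minimal numpy sketch. The breakpoint at 0.85 of the Nyquist frequency and all function names below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def mel(f):
    """Hz -> Mel (HTK-style formula)."""
    return 2595.0 * np.log10(1.0 + f / 700.0)

def inv_mel(m):
    """Mel -> Hz."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def warped_mel_centers(n_filters, fs, alpha):
    """Center frequencies of a Mel filterbank after piecewise-linear
    VTLN warping with factor alpha (alpha = 1.0 means no warping)."""
    nyquist = fs / 2.0
    # Filters equally spaced on the Mel scale (endpoints dropped)
    centers = inv_mel(np.linspace(mel(0.0), mel(nyquist), n_filters + 2))[1:-1]
    # Piecewise-linear warp: scale by alpha up to a breakpoint f0,
    # then continue linearly so that the Nyquist frequency is preserved
    f0 = 0.85 * nyquist
    return np.where(
        centers < f0,
        alpha * centers,
        alpha * f0 + (nyquist - alpha * f0) / (nyquist - f0) * (centers - f0),
    )

centers = warped_mel_centers(23, 16000, 1.0)   # unwarped baseline
shrunk = warped_mel_centers(23, 16000, 0.9)    # compressed frequency axis
```

MLS estimation of alpha then evaluates the HMM likelihood for each candidate on a grid (e.g. 0.88 to 1.12 in steps of 0.02), re-extracting features each time; the linear-transform formulation described in the abstract is what makes this repeated feature extraction unnecessary.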


Journal of the Acoustical Society of America | 2011

A study of acoustic-to-articulatory inversion of speech by analysis-by-synthesis using chain matrices and the Maeda articulatory model

Sankaran Panchapagesan; Abeer Alwan

In this paper, a quantitative study of acoustic-to-articulatory inversion for vowel speech sounds by analysis-by-synthesis using the Maeda articulatory model is performed. For chain matrix calculation of vocal tract (VT) acoustics, the chain matrix derivatives with respect to area function are calculated and used in a quasi-Newton method for optimizing articulatory trajectories. The cost function includes a distance measure between natural and synthesized first three formants, and parameter regularization and continuity terms. Calibration of the Maeda model to two speakers, one male and one female, from the University of Wisconsin x-ray microbeam (XRMB) database, using a cost function, is discussed. Model adaptation includes scaling the overall VT and the pharyngeal region and modifying the outer VT outline using measured palate and pharyngeal traces. The inversion optimization is initialized by a fast search of an articulatory codebook, which was pruned using XRMB data to improve inversion results. Good agreement between estimated midsagittal VT outlines and measured XRMB tongue pellet positions was achieved for several vowels and diphthongs for the male speaker, with average pellet-VT outline distances around 0.15 cm, smooth articulatory trajectories, and less than 1% average error in the first three formants.
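The chain-matrix evaluation of vocal tract acoustics that sits inside the analysis-by-synthesis loop can be illustrated for the simplest case: a lossless, hard-walled chain of tube sections with an ideal open end. This is a simplified stand-in for the paper's model, not its implementation; for a uniform 17.5 cm tract it recovers the textbook formants near 500, 1500, and 2500 Hz.

```python
import numpy as np

def chain_matrix(area, length, f, c=35000.0, rho=1.14e-3):
    """Lossless chain (ABCD) matrix of one uniform tube section
    (cgs units: cm, cm^2, g/cm^3)."""
    k = 2.0 * np.pi * f / c
    Z0 = rho * c / area                       # characteristic impedance
    return np.array([[np.cos(k * length), 1j * Z0 * np.sin(k * length)],
                     [1j * np.sin(k * length) / Z0, np.cos(k * length)]])

def transfer_function(areas, lengths, freqs):
    """|U_lips / U_glottis| for a chain of tube sections, with an
    ideal open end (zero radiation impedance) at the lips."""
    H = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        K = np.eye(2, dtype=complex)
        for a, l in zip(areas, lengths):
            K = K @ chain_matrix(a, l, f)
        H[i] = abs(1.0 / K[1, 1])             # U_out/U_in = 1/D when P_out = 0
    return H

freqs = np.arange(50.0, 3000.0, 5.0)
H = transfer_function([5.0] * 10, [1.75] * 10, freqs)  # uniform 17.5 cm tract
# Formants = local maxima of the transfer function
peaks = freqs[1:-1][(H[1:-1] > H[:-2]) & (H[1:-1] > H[2:])]
```

In the paper the derivatives of such chain matrices with respect to the area function drive a quasi-Newton optimization; here the chain product alone already shows how an area function maps to formants.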


international conference on acoustics, speech, and signal processing | 2006

Multi-Parameter Frequency Warping for VTLN by Gradient Search

Sankaran Panchapagesan; Abeer Alwan

The current method for estimating frequency warping (FW) functions for vocal tract length normalization (VTLN) is to maximize the ASR likelihood score by an exhaustive search over a grid of FW parameters. Exhaustive search is inefficient when estimating multi-parameter FWs, which have been shown to improve recognition accuracy over single-parameter FWs (J.W. McDonough, 2000). Here we develop a gradient search algorithm to obtain the optimal FW parameters for MFCC features, since previous work focused on PLP cepstral features (J.W. McDonough, 2000). The novel calculation involved was that of the gradient of the Mel filterbank with respect to the FW parameters. Even for a single parameter, the gradient search method was more efficient than grid search by a factor of around 1.6 on average for male child speakers tested on models trained from adult males. When used to estimate multi-parameter sine-log all-pass transform (SLAPT; J.W. McDonough, 2000) FWs for VTLN, more than 50% reduction in word error rate was obtained with the five-parameter SLAPT compared to a single-parameter piecewise-linear FW.
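The efficiency argument can be illustrated with a toy one-parameter example. The score below is a made-up smooth curve, not the paper's HMM likelihood, and a numerical central difference stands in for the analytic filterbank gradient: grid search pays one likelihood evaluation per grid point, while gradient search converges in fewer evaluations.

```python
import numpy as np

def score(alpha):
    """Toy stand-in for the ML score as a function of the warp factor;
    in practice this would be an HMM likelihood, not a closed form."""
    return -(alpha - 0.94) ** 2

def grid_search(lo=0.80, hi=1.20, step=0.02):
    """Exhaustive search: one score evaluation per grid point."""
    grid = np.arange(lo, hi + 1e-9, step)
    scores = [score(a) for a in grid]
    return float(grid[int(np.argmax(scores))]), len(grid)

def gradient_search(alpha=1.0, lr=0.4, eps=1e-4, max_iters=20):
    """Gradient ascent with a central-difference gradient:
    two score evaluations per iteration, stopping near a stationary point."""
    n_evals = 0
    for _ in range(max_iters):
        g = (score(alpha + eps) - score(alpha - eps)) / (2.0 * eps)
        n_evals += 2
        alpha += lr * g
        if abs(g) < 1e-6:
            break
    return alpha, n_evals

a_grid, n_grid = grid_search()   # best grid point, 21 evaluations
a_grad, n_grad = gradient_search()
```

The gradient search also returns a continuous estimate rather than the nearest grid point, which matters more as the number of warp parameters grows.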


conference of the international speech communication association | 2016

Multi-Task Learning and Weighted Cross-Entropy for DNN-Based Keyword Spotting.

Sankaran Panchapagesan; Ming Sun; Aparna Khare; Spyros Matsoukas; Arindam Mandal; Björn Hoffmeister; Shiv Vitaladevuni

We propose improved Deep Neural Network (DNN) training loss functions for more accurate single keyword spotting on resource-constrained embedded devices. The loss function modifications consist of a combination of multi-task training and weighted cross-entropy. In the multi-task architecture, the keyword DNN acoustic model is trained with two tasks in parallel: the main task of predicting the keyword-specific phone states, and an auxiliary task of predicting LVCSR senones. We show that multi-task learning leads to comparable accuracy over a previously proposed transfer learning approach where the keyword DNN training is initialized by an LVCSR DNN of the same input and hidden layer sizes. The combination of LVCSR-initialization and multi-task training gives improved keyword detection accuracy compared to either technique alone. We also propose modifying the loss function to give a higher weight to input frames corresponding to keyword phone targets, with the motivation of balancing the keyword and background training data. We show that weighted cross-entropy results in additional accuracy improvements. Finally, we show that the combination of the three techniques (LVCSR-initialization, multi-task training, and weighted cross-entropy) gives the best results, with a significantly lower False Alarm Rate than the LVCSR-initialization technique alone, across a wide range of Miss Rates.
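The weighted cross-entropy modification is straightforward to write down. Below is an illustrative numpy sketch (not the authors' code): frames whose target is a keyword phone state receive a larger weight than background frames, rebalancing the two classes in the loss.

```python
import numpy as np

def weighted_cross_entropy(log_probs, targets, keyword_mask, kw_weight=2.0):
    """Frame-weighted cross-entropy. log_probs: (T, S) log posteriors,
    targets: (T,) target state indices, keyword_mask: (T,) with 1 on
    frames whose target is a keyword phone state. Keyword frames get
    weight kw_weight, background frames get weight 1.0."""
    T = len(targets)
    frame_ce = -log_probs[np.arange(T), targets]
    weights = np.where(keyword_mask == 1, kw_weight, 1.0)
    return float(np.sum(weights * frame_ce) / np.sum(weights))

# Tiny synthetic example: 6 frames, 4 states, frames 2-3 are keyword frames.
rng = np.random.default_rng(0)
logits = rng.normal(size=(6, 4))
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
targets = np.array([0, 1, 2, 3, 0, 1])
keyword_mask = np.array([0, 0, 1, 1, 0, 0])
loss = weighted_cross_entropy(log_probs, targets, keyword_mask, kw_weight=2.0)
```

With kw_weight = 1.0 this reduces to the ordinary mean cross-entropy; as the weight grows, the loss is increasingly dominated by the keyword frames, which is the rebalancing effect the abstract describes.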


conference of the international speech communication association | 2016

Model Compression Applied to Small-Footprint Keyword Spotting.

George Tucker; Minhua Wu; Ming Sun; Sankaran Panchapagesan; Gengshen Fu; Shiv Vitaladevuni

Several consumer speech devices feature voice interfaces that perform on-device keyword spotting to initiate user interactions. Accurate on-device keyword spotting within a tight CPU budget is crucial for such devices. Motivated by this, we investigated two ways to improve deep neural network (DNN) acoustic models for keyword spotting without increasing CPU usage. First, we used low-rank weight matrices throughout the DNN. This allowed us to increase representational power by increasing the number of hidden nodes per layer without changing the total number of multiplications. Second, we used knowledge distilled from an ensemble of much larger DNNs used only during training. We systematically evaluated these two approaches on a massive corpus of far-field utterances. Alone, each technique improves performance; together, they give significant reductions in false alarms and misses without increasing CPU or memory usage.
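The low-rank idea can be sketched with a plain SVD factorization. This is an illustrative construction, not the authors' recipe: in the paper the low-rank structure is part of the trained network rather than a post-hoc factorization, and the second technique, distillation, additionally trains the small DNN against soft targets from a large teacher ensemble.

```python
import numpy as np

def low_rank_factorize(W, rank):
    """Replace a dense m x n weight matrix with factors U (m x r) and
    V (r x n) via truncated SVD, so a layer's multiply count drops
    from m*n to r*(m + n)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * s[:rank]   # fold singular values into U
    V_r = Vt[:rank, :]
    return U_r, V_r

m, n, r = 512, 512, 64
W = np.random.default_rng(1).normal(size=(m, n))
U_r, V_r = low_rank_factorize(W, r)
dense_mults = m * n            # 262,144 multiplications per frame
lowrank_mults = r * (m + n)    # 65,536: a 4x reduction at equal layer width
```

The multiply-count arithmetic is the point: holding r fixed, the hidden layer width m can grow while r*(m + n) stays within the same CPU budget, which is how the paper increases representational power for free.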


conference of the international speech communication association | 2006

Frequency warping by linear transformation of standard MFCC.

Sankaran Panchapagesan


asilomar conference on signals, systems and computers | 2006

A Study on the Best Wavelet for Audio Compression

Rodrigo Capobianco Guido; Carlos Dias Maciel; Mauricio Monteiro; Everthon Silva Fonseca; Sankaran Panchapagesan; José Carlos Pereira; Lucimar Sasso Vieira; Sylvio Barbon Junior; Marcio Alonso Borges Guilherme; Kim Inocencio Cesar Sergio; Thais Lorasqui Scarpa; Paulo Cesar Fantinato; Emerson Jesus Rodrigues de Moura


conference of the international speech communication association | 2017

Compressed Time Delay Neural Network for Small-Footprint Keyword Spotting.

Ming Sun; David Snyder; Yixin Gao; Varun Nagaraja; Mike Rodehorst; Sankaran Panchapagesan; Nikko Strom; Spyros Matsoukas; Shiv Vitaladevuni


conference of the international speech communication association | 2008

Vocal tract inversion by cepstral analysis-by-synthesis using chain matrices.

Sankaran Panchapagesan; Abeer Alwan


international conference on acoustics, speech, and signal processing | 2018

MONOPHONE-BASED BACKGROUND MODELING FOR TWO-STAGE ON-DEVICE WAKE WORD DETECTION

Minhua Wu; Sankaran Panchapagesan; Ming Sun; Jiacheng Gu; Ryan W. Thomas; Shiv Naga Prasad Vitaladevuni; Björn Hoffmeister; Arindam Mandal

Collaboration


Dive into Sankaran Panchapagesan's collaborations.

Top Co-Authors

Abeer Alwan
University of California

Nikko Strom
Massachusetts Institute of Technology

Anirudh Raju
University of California