
Publication


Featured research published by Sachin S. Kajarekar.


Speech Communication | 2005

Modeling prosodic feature sequences for speaker recognition

Elizabeth Shriberg; Luciana Ferrer; Sachin S. Kajarekar; Anand Venkataraman; Andreas Stolcke

We describe a novel approach to modeling idiosyncratic prosodic behavior for automatic speaker recognition. The approach computes various duration, pitch, and energy features for each estimated syllable in speech recognition output, quantizes the features, forms N-grams of the quantized values, and models normalized counts for each feature N-gram using support vector machines (SVMs). We refer to these features as “SNERF-grams” (N-grams of Syllable-based Nonuniform Extraction Region Features). Evaluation of SNERF-gram performance is conducted on two-party spontaneous English conversational telephone data from the Fisher corpus, using one conversation side in both training and testing. Results show that SNERF-grams provide significant performance gains when combined with a state-of-the-art baseline system, as well as with two highly successful long-range feature systems that capture word usage and lexically constrained duration patterns. Further experiments examine the relative contributions of features by quantization resolution, N-gram length, and feature type. Results show that the optimal number of bins depends on both feature type and N-gram length, but is roughly in the range of 5–10 bins. We find that longer N-grams are better than shorter ones, and that pitch features are most useful, followed by duration and energy features. The most important pitch features are those capturing pitch level, whereas the most important energy features reflect patterns of rising and falling. For duration features, nucleus duration is more important for speaker recognition than are durations from the onset or coda of a syllable. Overall, we find that SVM modeling of prosodic feature sequences yields valuable information for automatic speaker recognition. It also offers rich new opportunities for exploring how speakers differ from each other in voluntary but habitual ways.
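The pipeline the abstract describes — quantize per-syllable prosodic measurements into bins, form N-grams of the quantized symbols, and normalize the N-gram counts into a vector an SVM speaker model could consume — can be sketched as follows. The bin range, pitch values, and function names here are illustrative assumptions, not the actual SNERF implementation.

```python
# Minimal sketch of SNERF-gram feature construction (hypothetical values).
from collections import Counter

def quantize(values, n_bins, lo, hi):
    """Map each raw value to a bin index in [0, n_bins)."""
    width = (hi - lo) / n_bins
    return [min(int((v - lo) / width), n_bins - 1) for v in values]

def snerf_grams(symbols, n):
    """Normalized counts of length-n sequences of quantized symbols."""
    grams = Counter(tuple(symbols[i:i + n]) for i in range(len(symbols) - n + 1))
    total = sum(grams.values())
    return {g: c / total for g, c in grams.items()}

# Hypothetical per-syllable pitch values (Hz) for one conversation side.
pitch = [110.0, 132.0, 128.0, 95.0, 101.0, 140.0, 138.0]
bins = quantize(pitch, n_bins=5, lo=90.0, hi=150.0)
features = snerf_grams(bins, n=2)  # bigrams of quantized pitch
```

The resulting normalized-count vector plays the role of the SVM input; the paper's finding that 5–10 bins work best corresponds to the `n_bins` choice here.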


international conference on acoustics, speech, and signal processing | 2000

Feature extraction using non-linear transformation for robust speech recognition on the Aurora database

Sangita Sharma; Daniel P. W. Ellis; Sachin S. Kajarekar; Pratibha Jain; Hynek Hermansky

We evaluate the performance of several feature sets on the Aurora task as defined by ETSI. We show that after a non-linear transformation, a number of features can be effectively used in a HMM-based recognition system. The non-linear transformation is computed using a neural network which is discriminatively trained on the phonetically labeled (forcibly aligned) training data. A combination of the non-linearly transformed PLP (perceptual linear prediction coefficients), MSG (modulation filtered spectrogram) and TRAP (temporal pattern) features yields a 63% improvement in error rate as compared to baseline mel-frequency cepstral coefficient features. The use of the non-linearly transformed RASTA-like features, with system parameters scaled down to take into account the ETSI-imposed memory and latency constraints, still yields a 40% improvement in error rate.
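The core idea — a neural network trained on phone labels maps raw features through a non-linear transformation, and its outputs become the features fed to the HMM recognizer — can be sketched with a one-hidden-layer network. All weights, layer sizes, and the number of phone classes below are made-up toy values, not the trained network from the paper.

```python
# Sketch of the non-linear (neural-network) feature transformation idea.
import numpy as np

def mlp_transform(x, w1, b1, w2, b2):
    """One sigmoid hidden layer, softmax output; the log phone
    posteriors serve as the non-linearly transformed features."""
    h = 1.0 / (1.0 + np.exp(-(x @ w1 + b1)))  # hidden activations
    logits = h @ w2 + b2
    p = np.exp(logits - logits.max())          # numerically stable softmax
    p /= p.sum()
    return np.log(p)                           # log phone posteriors

rng = np.random.default_rng(0)
x = rng.normal(size=13)                        # e.g. 13 PLP coefficients
w1, b1 = rng.normal(size=(13, 32)), np.zeros(32)
w2, b2 = rng.normal(size=(32, 8)), np.zeros(8) # 8 hypothetical phone classes
features = mlp_transform(x, w1, b1, w2, b2)
```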


IEEE Transactions on Audio, Speech, and Language Processing | 2007

Speaker Recognition With Session Variability Normalization Based on MLLR Adaptation Transforms

Andreas Stolcke; Sachin S. Kajarekar; Luciana Ferrer; Elizabeth Shriberg

We present a new modeling approach for speaker recognition that uses the maximum-likelihood linear regression (MLLR) adaptation transforms employed by a speech recognition system as features for support vector machine (SVM) speaker models. This approach is attractive because, unlike standard frame-based cepstral speaker recognition models, it normalizes for the choice of spoken words in text-independent speaker verification without data fragmentation. We discuss the basics of the MLLR-SVM approach, and show how it can be enhanced by combining transforms relative to multiple reference models, with excellent results on recent English NIST evaluation sets. We then show how the approach can be applied even if no full word-level recognition system is available, which allows its use on non-English data even without matching speech recognizers. Finally, we examine how two recently proposed algorithms for intersession variability compensation perform in conjunction with MLLR-SVM.
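The feature-construction step described above — using the recognizer's MLLR adaptation transforms as SVM input — amounts to flattening the affine transform parameters, one (A, b) pair per regression class, into a single high-dimensional vector. The dimensions and the two-transform setup below are toy assumptions for illustration.

```python
# Sketch of turning MLLR adaptation transforms into an SVM feature vector.
import numpy as np

def mllr_feature_vector(transforms):
    """transforms: list of (A, b) with A an n x n matrix and b an
    n-vector; returns their concatenated, flattened parameters."""
    parts = []
    for A, b in transforms:
        parts.append(A.ravel())
        parts.append(b.ravel())
    return np.concatenate(parts)

n = 4  # toy dimension; real systems use the cepstral feature dimension
rng = np.random.default_rng(1)
transforms = [(rng.normal(size=(n, n)), rng.normal(size=n)) for _ in range(2)]
vec = mllr_feature_vector(transforms)  # length 2 * (n*n + n)
```

Because the transforms are estimated over a whole utterance, the resulting vector is text-independent, which is the property the paper contrasts with frame-based cepstral models.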


international conference on acoustics, speech, and signal processing | 2009

The SRI NIST 2008 speaker recognition evaluation system

Sachin S. Kajarekar; Nicolas Scheffer; Martin Graciarena; Elizabeth Shriberg; Andreas Stolcke; Luciana Ferrer; Tobias Bocklet

The SRI speaker recognition system for the 2008 NIST speaker recognition evaluation (SRE) incorporates a variety of models and features, both cepstral and stylistic. We highlight the improvements made to specific subsystems and analyze the performance of various subsystem combinations in different data conditions. We show the importance of language and nativeness conditioning, as well as the role of ASR for speaker verification.


international conference on acoustics, speech, and signal processing | 2006

Combining Prosodic, Lexical and Cepstral Systems for Deceptive Speech Detection

Martin Graciarena; Elizabeth Shriberg; Andreas Stolcke; Frank Enos; Julia Hirschberg; Sachin S. Kajarekar

We report on machine learning experiments to distinguish deceptive from nondeceptive speech in the Columbia-SRI-Colorado (CSC) corpus. Specifically, we propose a system combination approach using different models and features for deception detection. Scores from an SVM system based on prosodic/lexical features are combined with scores from a Gaussian mixture model system based on acoustic features, resulting in improved accuracy over the individual systems. Finally, we compare results from the prosodic-only SVM system using features derived either from recognized words or from human transcriptions.


international conference on acoustics, speech, and signal processing | 2005

SRI's 2004 NIST speaker recognition evaluation system

Sachin S. Kajarekar; Luciana Ferrer; Elizabeth Shriberg; M. Kemal Sönmez; Andreas Stolcke; Anand Venkataraman; Jing Zheng

The paper describes our recent efforts in exploring longer-range features and their statistical modeling techniques for speaker recognition. In particular, we describe a system that uses discriminant features from cepstral coefficients, and systems that use discriminant models from word n-grams and syllable-based NERF n-grams. These systems together with a cepstral baseline system are evaluated on the 2004 NIST speaker recognition evaluation dataset. The effect of the development set is measured using two different datasets, one from Switchboard databases and another from the FISHER database. Results show that the difference between the development and evaluation sets affects the performance of the systems only when more training data is available. Results also show that systems using longer-range features combined with the baseline result in about a 31% improvement with 1-side training over the baseline system and about a 61% improvement with 8-side training over the baseline system.


international conference on acoustics, speech, and signal processing | 2002

A new speaker change detection method for two-speaker segmentation

Andre G. Adam; Sachin S. Kajarekar; Hynek Hermansky

In the absence of prior information about the speakers, an important step in speaker segmentation is obtaining initial estimates for training the speaker models. In this paper, we present a new method for obtaining these estimates. The method assumes that a conversation must be initiated by one of the speakers. Thus, one speaker model is estimated from the small segment at the beginning of the conversation, and the segment that has the largest distance from this initial segment is used to train the second speaker model. We describe a system based on this method and evaluate it on two different tasks: a controlled task with variations in the duration of the initial speaker segment and the amount of overlapped speech, and the 2001 NIST Speaker Recognition Evaluation task, which contains natural conversations. The system shows significant improvements over the conventional system on the controlled task in the absence of overlapped speech.
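The initialization strategy described above can be sketched on synthetic data: the first segment seeds speaker 1, and the segment farthest from it seeds speaker 2. Euclidean distance between segment means stands in for a real model distance here, and all data and dimensions are toy assumptions.

```python
# Toy sketch of the two-speaker segmentation initialization.
import numpy as np

def init_speaker_segments(segments):
    """segments: list of feature arrays (frames x dim). Returns indices
    of the seed segments for speaker 1 and speaker 2."""
    means = [seg.mean(axis=0) for seg in segments]
    first = means[0]  # the conversation opener seeds speaker 1
    dists = [np.linalg.norm(m - first) for m in means]
    return 0, int(np.argmax(dists))  # farthest segment seeds speaker 2

# Synthetic conversation: two speakers with well-separated feature means.
rng = np.random.default_rng(2)
spk_a = [rng.normal(0.0, 1.0, size=(50, 12)) for _ in range(3)]
spk_b = [rng.normal(4.0, 1.0, size=(50, 12)) for _ in range(3)]
segments = [spk_a[0], spk_a[1], spk_b[0], spk_a[2], spk_b[1], spk_b[2]]
seed1, seed2 = init_speaker_segments(segments)
```

In a full system, the two seed segments would then train initial speaker models that are refined by iterative resegmentation.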


2006 IEEE Odyssey - The Speaker and Language Recognition Workshop | 2006

Improvements in MLLR-Transform-based Speaker Recognition

Andreas Stolcke; Luciana Ferrer; Sachin S. Kajarekar

We previously proposed the use of MLLR transforms derived from a speech recognition system as speaker features in a speaker verification system. In this paper we report recent improvements to this approach. First, we noticed a fundamental problem in our previous implementation that stemmed from a mismatch between male and female recognition models, and the model transforms they produce. Although it affects only a small percentage of verification trials (those in which the gender detector commits errors), this mismatch has a large effect on average system performance. We solve this problem by consistently using only one recognition model (either male or female) in computing speaker adaptation transforms regardless of estimated speaker gender. A further accuracy boost is obtained by combining feature vectors derived from the male and female models into one larger feature vector. Using 1-conversation-side training, the final system has about 27% lower decision cost than a state-of-the-art cepstral GMM speaker system, and 53% lower decision cost when trained on 8 conversation sides per speaker.


ieee automatic speech recognition and understanding workshop | 2005

Four weightings and a fusion: a cepstral-SVM system for speaker recognition

Sachin S. Kajarekar

A new speaker recognition system is described that uses Mel-frequency cepstral features. This system is a combination of four support vector machines (SVMs). All the SVM systems use polynomial features, and they are trained and tested independently using a linear inner-product kernel. Scores from each system are combined with equal weight to generate the final score. We evaluate the combined SVM system using extensive development sets with diverse recording conditions. These sets include the NIST 2003, 2004 and 2005 speaker recognition evaluation datasets, and FISHER data. The results show that for 1-side training, the combined SVM system gives comparable performance to a system using cepstral features with a Gaussian mixture model (baseline), and a combination of the two systems improves the baseline performance. For 8-side training, the combined SVM system is able to take advantage of more data and gives a 29% improvement over the baseline system.
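The equal-weight score fusion described above reduces to averaging the per-trial scores of the four subsystems. The scores below are hypothetical values, not results from the paper.

```python
# Minimal sketch of equal-weight score fusion across SVM subsystems.
def fuse_scores(subsystem_scores):
    """subsystem_scores: one score list per subsystem, one score per
    trial. Returns the equal-weight fused score for each trial."""
    n = len(subsystem_scores)
    return [sum(trial) / n for trial in zip(*subsystem_scores)]

# Four hypothetical SVM subsystems scoring three verification trials.
scores = [
    [1.2, -0.4, 0.8],
    [0.9, -0.1, 1.1],
    [1.5, -0.6, 0.5],
    [1.0, -0.3, 0.6],
]
fused = fuse_scores(scores)
```

A threshold on the fused score then yields the accept/reject decision; learned (unequal) fusion weights are a common refinement, but the paper's system uses equal weights.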


international conference on acoustics, speech, and signal processing | 2006

The Contribution of Cepstral and Stylistic Features to SRI's 2005 NIST Speaker Recognition Evaluation System

Luciana Ferrer; Elizabeth Shriberg; Sachin S. Kajarekar; Andreas Stolcke; M. Kemal Sönmez; Anand Venkataraman; Harry Bratt

Recent work in speaker recognition has demonstrated the advantage of modeling stylistic features in addition to traditional cepstral features, but to date there has been little study of the relative contributions of these different feature types to a state-of-the-art system. In this paper we provide such an analysis, based on SRI's submission to the NIST 2005 speaker recognition evaluation. The system consists of 7 subsystems (3 cepstral, 4 stylistic). By running independent N-way subsystem combinations for increasing values of N, we find that (1) a monotonic pattern in the choice of the best N systems allows for the inference of subsystem importance; (2) the ordering of subsystems alternates between cepstral and stylistic; (3) syllable-based prosodic features are the strongest stylistic features; and (4) overall subsystem ordering depends crucially on the amount of training data (1 versus 8 conversation sides). Improvements over the baseline cepstral system, when all systems are combined, range from 47% to 67%, with larger improvements for the 8-side condition. These results provide direct evidence of the complementary contributions of cepstral and stylistic features to speaker discrimination.

