John N. Gowdy
Clemson University
Publications
Featured research published by John N. Gowdy.
international conference on acoustics, speech, and signal processing | 2002
Eric Patterson; Sabri Gurbuz; Zekeriya Tufekci; John N. Gowdy
Multimodal signal processing has become an important topic of research for overcoming certain problems of audio-only speech processing. Audio-visual speech recognition is one area with great potential. Difficulties due to background noise and multiple speakers are significantly reduced by the additional information provided by extra visual features. Despite a few efforts to create databases in this area, none has emerged as a standard for comparison for several possible reasons. This paper seeks to introduce a new audiovisual database that is flexible and fairly comprehensive, yet easily available to researchers on one DVD. The CUAVE database is a speaker-independent corpus of over 7,000 utterances of both connected and isolated digits. It is designed to meet several goals that are discussed in this paper. The most notable are availability of the database, flexibility for use of the audio-visual data, and realistic considerations in the recordings (such as speaker movement). Another important focus of the database is the inclusion of pairs of simultaneous speakers, the first documented database of this kind. The overall goal of this project is to facilitate more widespread audio-visual research through an easily available database. For information on obtaining CUAVE, please visit our webpage (http://ece.clemson.edu/speech).
EURASIP Journal on Advances in Signal Processing | 2002
Eric Patterson; Sabri Gurbuz; Zekeriya Tufekci; John N. Gowdy
Strides in computer technology and the search for deeper, more powerful techniques in signal processing have brought multimodal research to the forefront in recent years. Audio-visual speech processing has become an important part of this research because it holds great potential for overcoming certain problems of traditional audio-only methods. Difficulties due to background noise and multiple speakers in an application environment are significantly reduced by the additional information provided by visual features. This paper presents information on a new audio-visual database, a feature study on moving speakers, and baseline results for the whole speaker group. Although a few databases have been collected in this area, none has emerged as a standard for comparison. Also, efforts to date have often been limited, focusing on cropped video or stationary speakers. This paper seeks to introduce a challenging audio-visual database that is flexible and fairly comprehensive, yet easily available to researchers on one DVD. The Clemson University Audio-Visual Experiments (CUAVE) database is a speaker-independent corpus of both connected and continuous digit strings totaling over 7000 utterances. It contains a wide variety of speakers and is designed to meet several goals discussed in this paper. One of these goals is to allow testing of adverse conditions such as moving talkers and speaker pairs. A feature study of connected digit strings is also discussed. It compares stationary and moving talkers in a speaker-independent grouping. An image-processing-based contour technique, an image transform method, and a deformable template scheme are used in this comparison to obtain visual features. This paper also presents methods and results in an attempt to make these techniques more robust to speaker movement. Finally, initial baseline speaker-independent results are included using all speakers, and conclusions as well as suggested areas of research are given.
international conference on acoustics, speech, and signal processing | 2000
John N. Gowdy; Zekeriya Tufekci
In this paper, we propose a new feature vector consisting of mel-frequency discrete wavelet coefficients (MFDWC). The MFDWC are obtained by applying the discrete wavelet transform (DWT) to the mel-scaled log filterbank energies of a speech frame. The purpose of using the DWT is to benefit from its localization property in the time and frequency domains. MFDWC are similar to subband-based (SUB) features and multi-resolution (MULT) features in that all three attempt to achieve good time and frequency localization. However, MFDWC have better time/frequency localization than SUB and MULT features. We evaluated the performance of the new features for clean and noisy speech and compared MFDWC with mel-frequency cepstral coefficients (MFCC), SUB features, and MULT features. Experimental results on a phoneme recognition task showed that an MFDWC-based recognizer gave better results than recognizers based on MFCC, SUB, and MULT features for the white Gaussian noise, band-limited white Gaussian noise, and clean speech cases.
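A minimal sketch of the MFDWC idea described above, assuming a single 16 kHz frame, a 24-band mel filterbank built with librosa, and a db2 wavelet from PyWavelets; the abstract does not specify the paper's actual filterbank size or wavelet, so these parameters are illustrative.

```python
# Sketch of MFDWC-style features: a DWT applied to the mel-scaled log
# filterbank energies of one speech frame. Filter count, FFT size, and
# wavelet choice are illustrative assumptions, not the paper's settings.
import numpy as np
import librosa   # used only for the mel filterbank matrix
import pywt      # PyWavelets, for the discrete wavelet transform

def mfdwc(frame, sr=16000, n_fft=512, n_mels=24, wavelet="db2", level=3):
    spectrum = np.abs(np.fft.rfft(frame, n=n_fft)) ** 2           # power spectrum
    mel_fb = librosa.filters.mel(sr=sr, n_fft=n_fft, n_mels=n_mels)
    log_energies = np.log(mel_fb @ spectrum + 1e-10)              # mel-scaled log energies
    coeffs = pywt.wavedec(log_energies, wavelet, level=level)     # DWT over the mel axis
    return np.concatenate(coeffs)                                 # MFDWC feature vector

# Example: one 25 ms Hamming-windowed frame of a 16 kHz signal (placeholder audio)
frame = np.hamming(400) * np.random.randn(400)
features = mfdwc(frame)
```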
international conference on acoustics, speech, and signal processing | 2004
John N. Gowdy; Amarnag Subramanya; Chris D. Bartels; Jeff A. Bilmes
In this paper, we propose a model based on dynamic Bayesian networks (DBN) to integrate information from multiple audio and visual streams. We compare the DBN-based system (implemented using the Graphical Models Toolkit (GMTK)) with a classical HMM (implemented with the Hidden Markov Model Toolkit (HTK)) for both the single-stream and two-stream integration problems. We also propose a new model (mixed integration) to integrate information from three or more streams derived from different modalities and compare the new model's performance with that of a synchronous integration scheme. A new technique to estimate stream confidence measures for the integration of three or more streams is also developed and implemented. Results from our implementation using the Clemson University Audio Visual Experiments (CUAVE) database indicate an absolute improvement of about 4% in word accuracy in the -4 to 10 dB average case when using two audio streams and one video stream for the mixed integration models over the synchronous models.
international conference on acoustics, speech, and signal processing | 2001
Sabri Gurbuz; Zekeriya Tufekci; Eric Patterson; John N. Gowdy
This paper focuses on an affine-invariant lipreading method and its optimal combination with an audio subsystem to implement an audio-visual automatic speech recognition (AV-ASR) system. The lipreading method is based on an outer lip contour description that is transformed to the Fourier domain and normalized there to eliminate dependence on the affine transformation (translation, rotation, scaling, and shear) and on the starting point. The optimal combination algorithm incorporates a signal-to-noise ratio (SNR) based weight-selection rule, which leads to a more accurate global likelihood ratio test. Experimental results are presented for an isolated word recognition task with eight different noise types from the NOISEX database at several SNR values.
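A simplified sketch of the Fourier-domain contour normalization described above; this version removes only translation, scale, rotation, and starting-point dependence, whereas the paper's normalization also handles the full affine case including shear. The synthetic contour and descriptor count are illustrative.

```python
# Simplified Fourier-descriptor normalization of a lip contour (similarity
# invariance only; the paper's full affine normalization is not reproduced).
import numpy as np

def lip_descriptors(contour_xy, n_keep=16):
    # contour_xy: (N, 2) array of outer-lip points, ordered around the contour
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]   # complex contour representation
    Z = np.fft.fft(z)
    Z[0] = 0.0                                     # drop DC term -> translation invariance
    Z = Z / (np.abs(Z[1]) + 1e-12)                 # scale by first harmonic -> scale invariance
    return np.abs(Z[1:n_keep + 1])                 # magnitudes -> rotation / start-point invariance

# Example with a synthetic elliptical "lip" contour
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
contour = np.stack([3.0 * np.cos(t) + 5.0, 1.0 * np.sin(t) - 2.0], axis=1)
desc = lip_descriptors(contour)
```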
international conference on acoustics, speech, and signal processing | 1989
Michael S. Scordilis; John N. Gowdy
Although a number of algorithms exist for the generation of the fundamental frequency contour in automatic text-to-speech conversion systems, the absence of a general theory of intonation still prevents the correct derivation of this important feature in unrestricted text applications. A parallel distributed approach is presented in which two neural networks were designed to learn the F0 values for each phoneme and the F0 fluctuations within each phoneme for words that correspond to a small training set. The neural networks used for this task have demonstrated the ability to generalize their properties on new text, and their level of success depends on the composition and size of the training corpus.
international conference on acoustics, speech, and signal processing | 2002
Sabri Gurbuz; Zekeriya Tufekci; Eric Patterson; John N. Gowdy
In this paper, we extend an existing audio-only automatic speech recognizer to implement a multi-stream audio-visual automatic speech recognition (AV-ASR) system. Our method forms a multi-stream feature vector from audio-visual speech data, computes the statistical model probabilities on the basis of the multi-stream audio-visual features, and performs dynamic programming jointly on the multi-stream product model Hidden Markov Models (MS-PM-HMMs) using a stream-weighting value based on noise type and signal-to-noise ratio (SNR). Experimental results are presented for an isolated word recognition task with eight different noise types from the NOISEX database at several SNR values. The proposed system reduces the word error rate (WER), averaged over several SNRs and noise types, from 55.9% with the audio-only recognizer and 7.9% with the late-integration audio-visual recognizer to 2.6% on the validation set.
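An illustrative sketch of SNR-based stream weighting at the score level; the weight mapping below is a made-up placeholder, and the paper applies the weighting inside the joint product-model HMM decoding rather than to final word scores as done here.

```python
# Toy SNR-based stream weighting: per-word log-likelihoods from an audio
# model and a visual model are combined with a weight chosen from the
# estimated SNR. The mapping audio_weight() is hypothetical.
import numpy as np

def audio_weight(snr_db):
    # Hypothetical rule: trust the audio stream more as SNR improves.
    return float(np.clip((snr_db + 5.0) / 25.0, 0.0, 1.0))

def combined_scores(audio_loglik, video_loglik, snr_db):
    # audio_loglik, video_loglik: dicts mapping word -> log-likelihood
    w = audio_weight(snr_db)
    return {word: w * audio_loglik[word] + (1.0 - w) * video_loglik[word]
            for word in audio_loglik}

# Example: pick the word with the best combined score at 0 dB SNR
audio = {"zero": -120.4, "one": -118.9}
video = {"zero": -63.1, "one": -66.7}
scores = combined_scores(audio, video, snr_db=0.0)
best = max(scores, key=scores.get)
```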
IEEE Journal of Biomedical and Health Informatics | 2015
Raul I. Ramos-Garcia; Eric R. Muth; John N. Gowdy; Adam W. Hoover
This paper considers the problem of recognizing eating gestures by tracking wrist motion. Eating gestures are activities commonly undertaken during the consumption of a meal, such as sipping a drink of liquid or using utensils to cut food. Each of these gestures causes a pattern of wrist motion that can be tracked to automatically identify the activity. Previous works have studied this problem at the level of a single gesture. In this paper, we demonstrate that individual gestures have sequential dependence. To study this, three types of classifiers were built: 1) a K-nearest neighbor classifier which uses no sequential context, 2) a hidden Markov model (HMM) which captures the sequential context of subgesture motions, and 3) HMMs that model intergesture sequential dependencies. We built first-order to sixth-order HMMs to evaluate the usefulness of increasing amounts of sequential dependence to aid recognition. On a dataset of 25 meals, we found that the baseline accuracies for the KNN and the subgesture HMM classifiers were 75.8% and 84.3%, respectively. Using HMMs that model intergesture sequential dependencies, we were able to increase accuracy to up to 96.5%. These results demonstrate that sequential dependencies exist between eating gestures and that they can be exploited to improve recognition accuracy.
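A rough sketch of the per-gesture ("subgesture") HMM classifier idea described above, assuming the hmmlearn package and generic wrist-motion feature sequences; the state count and features are placeholders, and the paper's higher-order HMMs over gesture sequences are not shown.

```python
# Train one Gaussian HMM per gesture class on wrist-motion feature sequences
# and label a new sequence by the highest-scoring model. Parameters are
# illustrative, not those used in the paper.
import numpy as np
from hmmlearn import hmm   # assumes the hmmlearn package is installed

def train_gesture_models(training_data, n_states=5):
    # training_data: dict mapping gesture label -> list of (T_i, D) feature arrays
    models = {}
    for label, sequences in training_data.items():
        X = np.vstack(sequences)
        lengths = [len(s) for s in sequences]
        m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=25)
        m.fit(X, lengths)
        models[label] = m
    return models

def classify(models, sequence):
    # sequence: (T, D) array of wrist-motion features for one gesture
    return max(models, key=lambda label: models[label].score(sequence))
```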
international conference on acoustics, speech, and signal processing | 2014
Sanjay P. Patil; John N. Gowdy
The performance of traditional speech enhancement techniques such as spectral subtraction and log-Minimum Mean Squared Error Short-Time Spectral Amplitude (log-MMSE STSA) estimation degrades in the presence of highly non-stationary noises such as babble noise. This is mainly due to inaccurate noise estimation during the voiced segments of the speech signal. In this paper, we propose to exploit the fine structure of the phase spectra of voiced speech in the baseband STFT domain. This phase structure is used to detect the noise-dominant frequency bins in voiced frames, which in turn yields a better estimate of the non-stationary noise power spectral density (PSD). Using this estimate, the performance of spectral subtraction and log-MMSE STSA improves overall by 0.3 and 0.2, respectively, in terms of the Perceptual Evaluation of Speech Quality (PESQ) measure over the original algorithms when noisy speech is used for pitch estimation. We also present a combination of the two algorithms (spectral subtraction and log-MMSE STSA) that achieves an overall PESQ improvement of 0.5 over standard log-MMSE STSA when accurate pitch estimation is available.
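For context, a baseline spectral-subtraction sketch assuming the noise PSD is estimated from a few leading speech-free frames; the paper's phase-based detection of noise-dominant bins in voiced frames, which updates the noise PSD during voiced segments, is not reproduced here.

```python
# Baseline spectral subtraction: estimate the noise PSD from the first few
# (assumed speech-free) frames, subtract it from each frame's power spectrum
# with a spectral floor, and resynthesize using the noisy phase.
import numpy as np
from scipy.signal import stft, istft

def spectral_subtraction(x, fs, n_noise_frames=10, floor=0.002):
    f, t, X = stft(x, fs=fs, nperseg=512)
    power = np.abs(X) ** 2
    noise_psd = power[:, :n_noise_frames].mean(axis=1, keepdims=True)  # initial noise estimate
    clean_power = np.maximum(power - noise_psd, floor * noise_psd)     # subtraction with floor
    X_hat = np.sqrt(clean_power) * np.exp(1j * np.angle(X))            # keep the noisy phase
    _, x_hat = istft(X_hat, fs=fs, nperseg=512)
    return x_hat

# Example on a synthetic noisy tone
fs = 16000
x = 0.01 * np.random.randn(fs) + np.sin(2 * np.pi * 220 * np.arange(fs) / fs)
enhanced = spectral_subtraction(x, fs)
```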
international conference on acoustics, speech, and signal processing | 1989
Veton Kepuska; John N. Gowdy
Some experiments with a neural-network model based on the self-organizing feature map algorithm are described. The main problem in phonemic recognition is the overlapping of feature vectors due to the variability of speech and the coarticulation effect. This property of speech is reflected in the self-organized neural-network model in that a network unit can respond to more than one phonemic class. The authors have shown for their database that the sequence of responding units is consistent and similar for isolated utterances of the same word and distinct for different words. Thus, recognition can be based on network sequence identification. However, it is desirable that this sequence be somewhat simplified. Toward this goal, they propose an algorithm for sequence smoothing. It is proposed that this network can be used as the feature extraction stage of another neural network that can learn the responding sequences as part of a speech recognition system.
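A minimal self-organizing feature map training loop in the spirit of the model described above, with illustrative map size, learning rate, and neighborhood schedules; it also shows how an utterance maps to a sequence of winning units, which the paper then smooths before word-level matching.

```python
# Minimal 1-D self-organizing feature map (SOM): each input frame pulls the
# winning unit and its neighbors toward it, with a shrinking neighborhood.
import numpy as np

def train_som(frames, n_units=30, n_epochs=20, lr0=0.5, radius0=5.0):
    dim = frames.shape[1]
    rng = np.random.default_rng(0)
    weights = rng.normal(size=(n_units, dim))
    for epoch in range(n_epochs):
        lr = lr0 * (1.0 - epoch / n_epochs)
        radius = max(radius0 * (1.0 - epoch / n_epochs), 1.0)
        for x in frames:
            winner = np.argmin(np.linalg.norm(weights - x, axis=1))  # best-matching unit
            dist = np.abs(np.arange(n_units) - winner)
            h = np.exp(-(dist ** 2) / (2.0 * radius ** 2))           # neighborhood function
            weights += lr * h[:, None] * (x - weights)
    return weights

def map_to_sequence(frames, weights):
    # An utterance becomes the sequence of winning units (to be smoothed
    # before word-level matching, as described in the abstract above).
    return [int(np.argmin(np.linalg.norm(weights - x, axis=1))) for x in frames]
```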