Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Sabri Gurbuz is active.

Publication


Featured research published by Sabri Gurbuz.


International Conference on Acoustics, Speech, and Signal Processing | 2002

CUAVE: A new audio-visual database for multimodal human-computer interface research

Eric Patterson; Sabri Gurbuz; Zekeriya Tufekci; John N. Gowdy

Multimodal signal processing has become an important topic of research for overcoming certain problems of audio-only speech processing. Audio-visual speech recognition is one area with great potential. Difficulties due to background noise and multiple speakers are significantly reduced by the additional information provided by extra visual features. Despite a few efforts to create databases in this area, none has emerged as a standard for comparison for several possible reasons. This paper seeks to introduce a new audiovisual database that is flexible and fairly comprehensive, yet easily available to researchers on one DVD. The CUAVE database is a speaker-independent corpus of over 7,000 utterances of both connected and isolated digits. It is designed to meet several goals that are discussed in this paper. The most notable are availability of the database, flexibility for use of the audio-visual data, and realistic considerations in the recordings (such as speaker movement). Another important focus of the database is the inclusion of pairs of simultaneous speakers, the first documented database of this kind. The overall goal of this project is to facilitate more widespread audio-visual research through an easily available database. For information on obtaining CUAVE, please visit our webpage (http://ece.clemson.edu/speech).


EURASIP Journal on Advances in Signal Processing | 2002

Moving-talker, speaker-independent feature study, and baseline results using the CUAVE multimodal speech corpus

Eric Patterson; Sabri Gurbuz; Zekeriya Tufekci; John N. Gowdy

Strides in computer technology and the search for deeper, more powerful techniques in signal processing have brought multimodal research to the forefront in recent years. Audio-visual speech processing has become an important part of this research because it holds great potential for overcoming certain problems of traditional audio-only methods. Difficulties due to background noise and multiple speakers in an application environment are significantly reduced by the additional information provided by visual features. This paper presents information on a new audio-visual database, a feature study on moving speakers, and baseline results for the whole speaker group. Although a few databases have been collected in this area, none has emerged as a standard for comparison. Also, efforts to date have often been limited, focusing on cropped video or stationary speakers. This paper seeks to introduce a challenging audio-visual database that is flexible and fairly comprehensive, yet easily available to researchers on one DVD. The Clemson University Audio-Visual Experiments (CUAVE) database is a speaker-independent corpus of both connected and continuous digit strings totaling over 7000 utterances. It contains a wide variety of speakers and is designed to meet several goals discussed in this paper. One of these goals is to allow testing of adverse conditions such as moving talkers and speaker pairs. A feature study of connected digit strings is also discussed. It compares stationary and moving talkers in a speaker-independent grouping. An image-processing-based contour technique, an image transform method, and a deformable template scheme are used in this comparison to obtain visual features. This paper also presents methods and results in an attempt to make these techniques more robust to speaker movement. Finally, initial baseline speaker-independent results are included using all speakers, and conclusions as well as suggested areas of research are given.


International Conference on Acoustics, Speech, and Signal Processing | 2001

Application of affine-invariant Fourier descriptors to lipreading for audio-visual speech recognition

Sabri Gurbuz; Zekeriya Tufekci; Eric Patterson; John N. Gowdy

This paper focuses on an affine-invariant lipreading method and its optimal combination with an audio subsystem to implement an audio-visual automatic speech recognition (AV-ASR) system. The lipreading method is based on an outer lip contour description which is transformed to the Fourier domain and normalized there to eliminate dependencies on the affine transformation (translation, rotation, scaling, and shear) and on the starting point. The optimal combination algorithm incorporates a signal-to-noise ratio (SNR) based weight selection rule which leads to a more accurate global likelihood ratio test. Experimental results are presented for an isolated word recognition task for eight different noise types from the NOISEX database for several SNR values.
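
The descriptor computation can be sketched as follows. This is illustrative rather than the authors' exact normalization: it removes translation, scale, rotation, and starting-point dependence from a closed lip contour's Fourier coefficients; full affine invariance (including shear) would require an additional normalization step, and the contour data and coefficient count here are hypothetical.

    import numpy as np

    def fourier_descriptors(contour_xy, num_coeffs=10):
        """Invariant Fourier descriptors of a closed contour (simplified:
        handles translation, scale, rotation, and starting point; shear,
        needed for full affine invariance, is not normalized here)."""
        # Represent the contour as a complex signal z[n] = x[n] + j*y[n].
        z = contour_xy[:, 0] + 1j * contour_xy[:, 1]
        Z = np.fft.fft(z)
        Z[0] = 0.0                    # drop the DC term: removes translation
        Z = Z / np.abs(Z[1])          # normalize by |Z[1]|: removes scale
        mags = np.abs(Z)              # keep magnitudes only: rotation and
                                      # starting point affect phase alone
        return mags[1:num_coeffs + 1]

    # Hypothetical outer-lip contour: an ellipse sampled at 64 points.
    theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    lip_contour = np.column_stack([np.cos(theta), 0.5 * np.sin(theta)])
    features = fourier_descriptors(lip_contour)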


International Conference on Acoustics, Speech, and Signal Processing | 2002

Multi-stream product modal audio-visual integration strategy for robust adaptive speech recognition

Sabri Gurbuz; Zekeriya Tufekci; Eric Patterson; John N. Gowdy

In this paper, we extend an existing audio-only automatic speech recognizer to implement a multi-stream audio-visual automatic speech recognition (AV-ASR) system. Our method forms a multi-stream feature vector from audio-visual speech data, computes the statistical model probabilities on the basis of the multi-stream audio-visual features, and performs dynamic programming jointly on the multi-stream product modal hidden Markov models (MS-PM-HMMs) by utilizing a noise-type and signal-to-noise ratio (SNR) based stream-weighting value. Experimental results are presented for an isolated word recognition task for eight different noise types from the NOISEX database for several SNR values. The proposed system reduces the word error rate (WER), averaged over several SNRs and noise types, from 55.9% with the audio-only recognizer and 7.9% with the late-integration audio-visual recognizer to 2.6% in the validation set.
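
At the heart of the multi-stream strategy is an exponent-weighted product of stream likelihoods, i.e. a weighted sum of log-likelihoods. The sketch below shows only that combination step; the SNR-to-weight mapping is an assumed placeholder, and the paper additionally conditions the weight on noise type and embeds the combination inside the MS-PM-HMM decoding.

    import numpy as np

    def snr_to_weight(snr_db, lo=-6.0, hi=30.0):
        # Assumed mapping: trust the audio stream more as SNR improves.
        return float(np.clip((snr_db - lo) / (hi - lo), 0.0, 1.0))

    def combined_log_likelihood(logp_audio, logp_video, snr_db):
        """Stream-weighted product of likelihoods in the log domain:
        log p = w * log p_audio + (1 - w) * log p_video."""
        w = snr_to_weight(snr_db)
        return w * logp_audio + (1.0 - w) * logp_video

    # Hypothetical per-state scores from the audio and visual streams:
    print(combined_log_likelihood(-42.3, -17.8, snr_db=5.0))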


International Conference on Acoustics, Speech, and Signal Processing | 2005

Noise robust speaker verification using mel-frequency discrete wavelet coefficients and parallel model compensation

Zekeriya Tufekci; Sabri Gurbuz

In this paper, we discuss using parallel model combination (PMC) along with mel-frequency discrete wavelet coefficient (MFDWC) features to take advantage of both noise compensation and local features to decrease the effect of noise on speaker verification performance. We evaluate the performance of MFDWCs using the NIST 1998 speaker recognition and NOISEX-92 databases for various noise types and noise levels. We also compare the performance of MFDWCs versus MFCCs, in both cases using PMC to deal with additive noise. The experimental results show significant performance improvements for MFDWCs versus MFCCs after compensating the Gaussian mixture models (GMMs) using the PMC technique. The MFDWCs gave 5.24- and 3.23-point performance improvements on average over MFCCs for -6 dB and 0 dB SNR values, respectively. These correspond to 26.44% and 23.73% relative reductions in equal error rate (EER), respectively.
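
A minimal sketch of the model-compensation idea follows. It implements mean-only PMC with the log-add approximation for MFCC-style (DCT-based) features; the paper compensates full GMMs and uses wavelet-based MFDWCs, so the inverse transform shown here is a simplifying assumption.

    import numpy as np
    from scipy.fftpack import dct, idct

    def pmc_compensate_mean(mu_speech_cep, mu_noise_cep, gain=1.0):
        """Mean-only parallel model combination (log-add approximation):
        mu_hat = C log(exp(C^-1 mu_speech) + g * exp(C^-1 mu_noise))."""
        # Map cepstral means back to the linear filter-bank domain...
        lin_speech = np.exp(idct(mu_speech_cep, norm='ortho'))
        lin_noise = np.exp(idct(mu_noise_cep, norm='ortho'))
        # ...where additive noise really is additive, then map back.
        return dct(np.log(lin_speech + gain * lin_noise), norm='ortho')

    # Hypothetical 13-dimensional cepstral means of one GMM component
    # and of the estimated noise model:
    mu_noisy = pmc_compensate_mean(np.random.randn(13),
                                   np.random.randn(13) - 2.0)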


SoutheastCon | 2000

Speech spectrogram based model adaptation for speaker identification

Sabri Gurbuz; John N. Gowdy; Zekeriya Tufekci

Speech signal feature extraction is a challenging research area with great significance to the speaker identification and speech recognition communities. We propose a novel speech-spectrogram-based spectral model adaptation algorithm. This system is based on dynamic thresholding of speech spectrograms for text-dependent speaker identification. For a given utterance from a target speaker, we aim to find the target speaker among a number of speakers who exist in the system. Conceptually, this algorithm attempts to increase the spectral similarity for the target speaker while increasing the spectral dissimilarity for non-target speakers in the enrollment set. Therefore, it removes aging and intersession-dependent spectral variation in the utterance while preserving the speaker's inherent spectral features. The hidden Markov model (HMM) parameters representing each listed speaker in the system are adapted for each identification event. The results obtained using speech signals from both the NOISEX database and recordings in the laboratory environment seem promising and demonstrate the robustness of the algorithm for aging and session-dependent utterances. Additionally, we have evaluated the adapted and the non-adapted models with data recorded two months after the initial enrollment. The adaptation seems to improve the performance of the system for the aged data from 84% to 91%.
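
A toy version of the dynamic-thresholding step is sketched below; the per-frame mean-plus-k-sigma rule and all parameters are assumptions for illustration, and the model-adaptation stage built on top of the thresholded spectrogram is not shown.

    import numpy as np
    from scipy.signal import spectrogram

    def dynamic_threshold_mask(x, fs, k=1.0):
        """Binarize a speech spectrogram with a per-frame threshold so the
        mask tracks level changes across the utterance (illustrative rule,
        not the paper's exact one)."""
        f, t, S = spectrogram(x, fs=fs, nperseg=256, noverlap=128)
        log_S = np.log(S + 1e-10)
        thresh = log_S.mean(axis=0) + k * log_S.std(axis=0)
        return log_S > thresh   # True where spectral energy is prominent

    # Hypothetical usage on one second of 8 kHz audio:
    mask = dynamic_threshold_mask(np.random.randn(8000), fs=8000)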


Presence: Teleoperators & Virtual Environments | 1997

Autonomous visualization of real environments for telepresence applications

Robert Geist; Todd Stinson; Robert J. Schalkoff; Sabri Gurbuz

The autonomous, noncontact creation of virtual environments from existing, real environments is described. The technique uses structured light to provide direct estimation of 3D surface patch parameters. Active (laser) cameras are used to determine 3D object models, and passive camera images are used to generate color and texture. This process, termed virtualization, has immediate application to providing telepresence in previously unmodeled or unstructured environments. An example of the process is shown and directions for future research are indicated.
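
Per pixel, the structured-light depth estimate reduces to intersecting the camera's viewing ray with the calibrated laser plane. A minimal sketch, with all calibration values hypothetical:

    import numpy as np

    def intersect_ray_with_plane(ray_dir, plane_normal, plane_d):
        """3D point where a camera ray (from the origin) meets the laser
        plane n . X = d; calibration of n and d is assumed done offline."""
        ray_dir = ray_dir / np.linalg.norm(ray_dir)
        t = plane_d / np.dot(plane_normal, ray_dir)
        return t * ray_dir

    # Hypothetical pinhole camera (focal length f in pixels) and a laser
    # plane at x = 0.5 m with normal along the x-axis:
    f, u, v = 800.0, 120.0, -40.0
    ray = np.array([u / f, v / f, 1.0])          # back-projected pixel ray
    point = intersect_ray_with_plane(ray, np.array([1.0, 0.0, 0.0]), 0.5)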


SoutheastCon | 2001

Independent information from visual features for multimodal speech recognition

Sabri Gurbuz; Zekeriya Tufekci; Eric Patterson; John N. Gowdy

The performance of audio-based speech recognition systems degrades severely when there is a mismatch between training and usage environments due to background noise. This degradation is due to a loss of ability to extract and distinguish important information from audio features. One of the emerging techniques for dealing with this problem is the addition of visual features in a multimodal recognition system. This paper presents an affine-invariant, multimodal speech recognition system and focuses on the additional information that is available from video features. Results are presented that demonstrate the distinct information available from a visual subsystem, allowing optimal joint decisions based on the SNR and type of noise to exceed the performance of either the audio or video subsystem alone in nearly all noisy environments.
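
A joint decision of this kind can be sketched as a lookup of the audio weight from the detected noise condition, followed by an argmax over the weighted word scores. The weight table and scores below are hypothetical placeholders, not trained values.

    import numpy as np

    # Assumed weights indexed by (noise type, SNR in dB); in practice
    # these would be tuned on held-out data per noise condition.
    WEIGHTS = {('white', 0): 0.2, ('white', 10): 0.5,
               ('babble', 0): 0.3, ('babble', 10): 0.6}

    def joint_decision(audio_scores, video_scores, noise_type, snr_db):
        """Choose the word with the best weighted audio+video log score."""
        key = min(WEIGHTS, key=lambda k: abs(k[1] - snr_db)
                  if k[0] == noise_type else np.inf)
        w = WEIGHTS[key]
        joint = {wd: w * audio_scores[wd] + (1 - w) * video_scores[wd]
                 for wd in audio_scores}
        return max(joint, key=joint.get)

    # Hypothetical per-word log-likelihoods from each subsystem:
    print(joint_decision({'one': -50.0, 'two': -48.5},
                         {'one': -20.1, 'two': -23.7}, 'babble', snr_db=3.0))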


Lecture Notes in Computer Science | 2001

Affine-Invariant Visual Features Contain Supplementary Information to Enhance Speech Recognition

Sabri Gurbuz; Eric Patterson; Zekeriya Tufekci; John N. Gowdy

The performance of audio-based speech recognition systems degrades severely when there is a mismatch between training and usage environments due to background noise. This degradation is due to a loss of ability to extract and distinguish important information from audio features. One of the emerging techniques for dealing with this problem is the addition of visual features in a multimodal recognition system. This paper presents an affine-invariant, multimodal speech recognition system and focuses on the supplementary information that is available from video features.


Visual Information Processing Conference | 1999

Image segmentation using trainable fuzzy set classifiers

Robert J. Schalkoff; Albrecht E. Carver; Sabri Gurbuz

A general image analysis and segmentation method using fuzzy set classification and learning is described. The method uses a learned fuzzy representation of pixel region characteristics, based upon the conjunction and disjunction of extracted and derived fuzzy color and texture features. Both positive and negative exemplars of some visually apparent characteristic which forms the basis of the inspection, input by a human operator, are used together with a clustering algorithm to construct positive similarity membership functions and negative similarity membership functions. From these membership functions, composite fuzzified images P and N are produced using fuzzy union. Classification is accomplished via image defuzzification, whereby linguistic meaning is assigned to each pixel in the fuzzy set using a fuzzy inference operation. The technique permits: (1) strict color and texture discrimination, (2) machine learning of color and texture characteristics of regions, and (3) judicious labeling of each pixel based upon the learned fuzzy representation and fuzzy classification. This approach appears ideal for applications involving visual inspection and allows the development of image-based inspection systems which may be trained and used by relatively unskilled workers. We show three different examples involving the visual inspection of mixed waste drums, lumber, and woven fabric.
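
The classification step can be illustrated with a small sketch: positive and negative memberships are combined by fuzzy union (a max over cluster prototypes) and a pixel is labeled by comparing the two composites. The Gaussian membership function, the prototypes, and the feature vector are all hypothetical stand-ins for the learned quantities.

    import numpy as np

    def membership(x, center, width=1.0):
        # Similarity of a pixel's feature vector to one cluster prototype.
        return np.exp(-np.sum((x - center) ** 2) / (2.0 * width ** 2))

    def classify_pixel(x, pos_prototypes, neg_prototypes):
        """Fuzzy union (max) over positive and negative prototypes, then
        defuzzify by comparing the composite memberships P and N."""
        P = max(membership(x, c) for c in pos_prototypes)
        N = max(membership(x, c) for c in neg_prototypes)
        return 'positive' if P >= N else 'negative'

    # Hypothetical color/texture features and learned prototypes:
    pixel = np.array([0.8, 0.1, 0.3])
    pos = [np.array([0.7, 0.2, 0.3])]   # exemplars of the sought trait
    neg = [np.array([0.1, 0.9, 0.5])]
    print(classify_pixel(pixel, pos, neg))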

Collaboration


Dive into Sabri Gurbuz's collaboration.

Top Co-Authors

Eric Patterson

University of North Carolina at Wilmington
