
Publication


Featured research published by Oldooz Hazrati.


Journal of the Acoustical Society of America | 2011

A channel-selection criterion for suppressing reverberation in cochlear implants

Kostas Kokkinakis; Oldooz Hazrati; Philipos C. Loizou

Little is known about the extent to which reverberation affects speech intelligibility by cochlear implant (CI) listeners. Experiment 1 assessed CI users' performance using Institute of Electrical and Electronics Engineers (IEEE) sentences corrupted with varying degrees of reverberation. Reverberation times of 0.30, 0.60, 0.80, and 1.0 s were used. Results indicated that for all subjects tested, speech intelligibility decreased exponentially with an increase in reverberation time. A decaying-exponential model provided an excellent fit to the data. Experiment 2 evaluated (offline) a speech coding strategy for reverberation suppression using a channel-selection criterion based on the signal-to-reverberant ratio (SRR) of individual frequency channels. The SRR reflects implicitly the ratio of the energies of the signal originating from the early (and direct) reflections and the signal originating from the late reflections. Channels with SRR larger than a preset threshold were selected, while channels with SRR smaller than the threshold were zeroed out. Results in a highly reverberant scenario indicated that the proposed strategy led to substantial gains (over 60 percentage points) in speech intelligibility over the subjects' daily strategy. Further analysis indicated that the proposed channel-selection criterion reduces the temporal envelope smearing effects introduced by reverberation and also diminishes the self-masking effects responsible for flattened formants.
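A minimal numerical sketch of the SRR-based channel selection described above (the function name, envelope inputs, and threshold value are illustrative assumptions, not the authors' implementation; in the offline study the early and late components would be derived from the known room impulse response):

```python
import numpy as np

def srr_channel_selection(early_env, late_env, threshold_db=-5.0):
    """Keep channels whose signal-to-reverberant ratio (SRR) exceeds a preset threshold.

    early_env, late_env: arrays of shape (n_channels, n_frames) holding the
    per-channel envelope energy of the early/direct and late reflections.
    Returns a binary mask of the same shape (1 = retain channel, 0 = zero out).
    """
    eps = 1e-12
    srr_db = 10.0 * np.log10((early_env + eps) / (late_env + eps))
    return (srr_db > threshold_db).astype(float)

# Toy usage: 22 channels, 100 frames of random envelope energies.
rng = np.random.default_rng(0)
early, late = rng.random((22, 100)), rng.random((22, 100))
mask = srr_channel_selection(early, late)
selected = (early + late) * mask  # masked reverberant envelopes drive the electrodes
```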


International Journal of Audiology | 2012

The combined effects of reverberation and noise on speech intelligibility by cochlear implant listeners

Oldooz Hazrati; Philipos C. Loizou

Objective: The purpose of this study is to assess the individual effects of reverberation and noise, as well as their combined effect, on speech intelligibility by cochlear implant (CI) users. Design: Sentence stimuli corrupted by reverberation, noise, and reverberation + noise are presented to 11 CI listeners for word identification. They are tested in two reverberation conditions (T60 = 0.6 s, 0.8 s), two noise conditions (SNR = 5 dB, 10 dB), and four reverberation + noise conditions. Study sample: Eleven CI users participated. Results: Results indicated that reverberation degrades speech intelligibility to a greater extent than additive noise (speech-shaped noise), at least for the SNR levels tested. The combined effects were greater than those introduced by either reverberation or noise alone. Conclusions: The effect of reverberation on speech intelligibility by CI users was found to be larger than that of noise. The results from the present study highlight the importance of testing CI users in reverberant conditions, since testing in noise-alone conditions might underestimate the difficulties they experience in their daily lives, where reverberation and noise often coexist.


Journal of the Acoustical Society of America | 2013

Blind binary masking for reverberation suppression in cochlear implants

Oldooz Hazrati; Jaewook Lee; Philipos C. Loizou

A monaural binary time-frequency (T-F) masking technique is proposed for suppressing reverberation. The mask is estimated for each T-F unit by extracting a variance-based feature from the reverberant signal and comparing it against an adaptive threshold. Performance of the estimated binary mask is evaluated in three moderate to relatively high reverberant conditions (T60 = 0.3, 0.6, and 0.8 s) using intelligibility listening tests with cochlear implant users. Results indicate that the proposed T-F masking technique yields significant improvements in intelligibility of reverberant speech even in relatively high reverberant conditions (T60 = 0.8 s). The improvement is hypothesized to result from the recovery of the vowel/consonant boundaries, which are severely smeared in reverberation.
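A simplified sketch of a variance-driven binary T-F mask with an adaptive threshold (the local-variance feature, context size, and threshold rule are stand-ins chosen for illustration, not the exact quantities used in the paper):

```python
import numpy as np

def blind_binary_mask(spectrogram, context=5, alpha=1.0):
    """spectrogram: magnitude spectrogram, shape (n_freq, n_frames).
    Returns a binary mask (1 = keep T-F unit, 0 = discard)."""
    log_env = np.log(spectrogram + 1e-12)
    n_freq, n_frames = log_env.shape
    feature = np.zeros_like(log_env)
    for t in range(n_frames):
        lo, hi = max(0, t - context), min(n_frames, t + context + 1)
        # Local variance of the log-envelope: reverberant tails are temporally
        # smeared, so low variance suggests a reverberation-dominated unit.
        feature[:, t] = np.var(log_env[:, lo:hi], axis=1)
    # Adaptive threshold stand-in: a scaled per-frequency mean of the feature.
    threshold = alpha * feature.mean(axis=1, keepdims=True)
    return (feature > threshold).astype(float)

# Toy usage on a random "spectrogram".
mask = blind_binary_mask(np.random.default_rng(0).random((64, 200)))
```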


Biomedical Signal Processing and Control | 2013

Predicting the intelligibility of reverberant speech for cochlear implant listeners with a non-intrusive intelligibility measure

Fei Chen; Oldooz Hazrati; Philipos C. Loizou

Reverberation is known to reduce the temporal envelope modulations present in the signal and to affect the shape of the modulation spectrum. A non-intrusive intelligibility measure for reverberant speech is proposed, motivated by the fact that the area of the modulation spectrum decreases with increasing reverberation. The proposed measure is based on the average modulation area computed across four acoustic frequency bands spanning the signal bandwidth. High correlations (r = 0.98) were observed with sentence intelligibility scores obtained by cochlear implant listeners. The proposed measure outperformed other measures, including an intrusive speech-transmission-index-based measure.
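A rough sketch of a modulation-area-style measure (the band edges, modulation-frequency range, and area computation below are illustrative assumptions, not the published definition):

```python
import numpy as np
from scipy.signal import butter, sosfilt, stft

def modulation_area_measure(x, fs, n_bands=4, mod_fmax=20.0):
    """Average low-frequency modulation-spectrum area over n_bands acoustic bands."""
    edges = np.linspace(100.0, fs / 2 - 100.0, n_bands + 1)
    areas = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        env = np.abs(sosfilt(sos, x))                        # band envelope
        f, _, E = stft(env, fs=fs, nperseg=int(0.256 * fs))  # modulation spectrum
        keep = f <= mod_fmax
        mod_mag = np.abs(E[keep, :]).mean(axis=1)            # average over frames
        areas.append(np.sum(mod_mag) * (f[1] - f[0]))        # crude spectral area
    return float(np.mean(areas))  # smaller area suggests stronger reverberation

# Toy usage: one second of white noise at 16 kHz.
score = modulation_area_measure(np.random.default_rng(0).standard_normal(16000), 16000)
```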


IEEE Signal Processing Magazine | 2015

Objective Quality and Intelligibility Prediction for Users of Assistive Listening Devices: Advantages and limitations of existing tools

Tiago H. Falk; Vijay Parsa; João Felipe Santos; Kathryn H. Arehart; Oldooz Hazrati; Rainer Huber; James M. Kates; Susan Scollie

This article presents an overview of 12 existing objective speech quality and intelligibility prediction tools. Two classes of algorithms are presented, intrusive and nonintrusive, with the former requiring the use of a reference signal while the latter does not. Investigated metrics include both those developed for normal hearing (NH) listeners and those tailored particularly for hearing impaired (HI) listeners who are users of assistive listening devices [i.e., hearing aids (HAs) and cochlear implants (CIs)]. Representative examples of those optimized for HI listeners include the speech-to-reverberation modulation energy ratio (SRMR), tailored to HAs (SRMR-HA) and to CIs (SRMR-CI); the modulation spectrum area (ModA); the HA speech quality (HASQI) and perception indices (HASPI); and the perception-model-based quality prediction method for hearing impairments (PEMO-Q-HI). The objective metrics are tested on three subjectively rated speech data sets covering reverberation-alone, noise-alone, and reverberation-plus-noise degradation conditions, as well as degradations resulting from nonlinear frequency compression and different speech enhancement strategies. The advantages and limitations of each measure are highlighted, and recommendations are given for suggested uses of the different tools under specific environmental and processing conditions.


Speech Communication | 2013

Objective speech intelligibility measurement for cochlear implant users in complex listening environments

João Felipe Santos; Stefano Cosentino; Oldooz Hazrati; Philipos C. Loizou; Tiago H. Falk

Objective intelligibility measurement allows for reliable, low-cost, and repeatable assessment of innovative speech processing technologies, thus dispensing with costly and time-consuming subjective tests. To date, existing objective measures have focused on normal-hearing models and have found limited use with restorative hearing instruments such as cochlear implants (CIs). In this paper, we evaluate the performance of five existing objective measures and propose two refinements to one particular measure to better emulate CI hearing, under complex listening conditions involving noise only, reverberation only, and noise plus reverberation. Performance is assessed against subjectively rated data. Experimental results show that the proposed CI-inspired objective measures outperformed all existing measures; gains of as much as 22% in rank correlation were achieved.


Journal of the Acoustical Society of America | 2013

Reverberation suppression in cochlear implants using a blind channel-selection strategy

Oldooz Hazrati; Philipos C. Loizou

Reverberation severely degrades speech intelligibility for cochlear implant (CI) users. The ideal reverberant mask (IRM), a binary mask for reverberation suppression computed using the signal-to-reverberant ratio, was found to yield substantial intelligibility gains for CI users even in highly reverberant environments (e.g., T60 = 1.0 s). Motivated by the intelligibility improvements obtained from the IRM, a monaural blind channel-selection criterion for reverberation suppression is proposed. The proposed channel-selection strategy is blind, meaning that prior knowledge of neither the room impulse response nor the anechoic signal is required. Using a residual signal obtained from linear prediction analysis of the reverberant signal, the residual-to-reverberant ratio (RRR) of individual frequency channels was employed as the channel-selection criterion. In each frame, the channels with RRR less than an adaptive threshold were retained while the rest were zeroed out. Performance of the proposed strategy was evaluated via intelligibility listening tests conducted with CI users in simulated rooms with two reverberation times of 0.6 and 0.8 s. The results indicate significant intelligibility improvements in both reverberant conditions (over 30 and 40 percentage points in T60 = 0.6 and 0.8 s, respectively). The improvement is comparable to that obtained with the IRM strategy.
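A simplified sketch of the RRR computation (the filterbank design, LP order, frame length, and adaptive threshold below are assumptions for illustration; the per-frame retention rule follows the description in the abstract above):

```python
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import butter, lfilter, sosfilt

def lp_residual(x, order=12):
    """LP residual via the autocorrelation method (toy implementation)."""
    r = np.correlate(x, x, mode="full")[len(x) - 1:]
    a = solve_toeplitz(r[:order], r[1:order + 1])           # predictor coefficients
    return lfilter(np.concatenate(([1.0], -a)), [1.0], x)   # prediction-error signal

def rrr_per_channel(reverberant, fs, n_channels=8, frame_len=0.02):
    """Per-channel, per-frame residual-to-reverberant ratio (dB) plus a keep mask."""
    residual = lp_residual(reverberant)
    edges = np.linspace(100.0, fs / 2 - 100.0, n_channels + 1)
    hop = int(frame_len * fs)
    n_frames = len(reverberant) // hop
    rrr = np.zeros((n_channels, n_frames))
    for c, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        res_band, rev_band = sosfilt(sos, residual), sosfilt(sos, reverberant)
        for i in range(n_frames):
            seg = slice(i * hop, (i + 1) * hop)
            rrr[c, i] = 10 * np.log10(np.sum(res_band[seg] ** 2) /
                                      (np.sum(rev_band[seg] ** 2) + 1e-12) + 1e-12)
    # Adaptive threshold stand-in: per-channel mean; retain units below it,
    # zero out the rest, as stated in the abstract above.
    keep = rrr < rrr.mean(axis=1, keepdims=True)
    return rrr, keep

# Toy usage: one second of noise at 16 kHz.
rrr, keep = rrr_per_channel(np.random.default_rng(0).standard_normal(16000), 16000)
```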


Journal of the Acoustical Society of America | 2013

Simultaneous suppression of noise and reverberation in cochlear implants using a ratio masking strategy

Oldooz Hazrati; Seyed Omid Sadjadi; Philipos C. Loizou; John H. L. Hansen

Cochlear implant (CI) recipients ability to identify words is reduced in noisy or reverberant environments. The speech identification task for CI users becomes even more challenging in conditions where both reverberation and noise co-exist as they mask the spectro-temporal cues of speech in a rather complementary fashion. Ideal channel selection (ICS) was found to result in significantly more intelligible speech when applied to the noisy, reverberant, as well as noisy reverberant speech. In this study, a blind single-channel ratio masking strategy is presented to simultaneously suppress the negative effects of reverberation and noise on speech identification performance for CI users. In this strategy, noise power spectrum is estimated from the non-speech segments of the utterance while reverberation spectral variance is computed as a delayed and scaled version of the reverberant speech spectrum. Based on the estimated noise and reverberation power spectra, a weight between 0 and 1 is assigned to each time-frequency unit to form the final mask. Listening experiments conducted with CI users in two reverberant conditions (T60u2009=u20090.6 and 0.8u2009s) at a signal-to-noise ratio of 15u2009dB indicate substantial improvements in speech intelligibility in both reverberant-alone and noisy reverberant conditions considered.
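A sketch of a ratio mask built from these two estimates (the STFT parameters, number of leading noise-only frames, delay, and scaling factor are illustrative assumptions, not the paper's settings):

```python
import numpy as np
from scipy.signal import istft, stft

def ratio_mask_enhance(x, fs, noise_frames=10, delay_frames=4, scale=0.3):
    """Suppress noise and late reverberation with a 0-1 gain per T-F unit."""
    f, t, X = stft(x, fs=fs, nperseg=int(0.025 * fs))
    power = np.abs(X) ** 2
    # Noise power: average over the assumed speech-free leading frames.
    noise_psd = power[:, :noise_frames].mean(axis=1, keepdims=True)
    # Late-reverberant power: delayed, scaled copy of the observed power spectrum.
    reverb_psd = np.zeros_like(power)
    reverb_psd[:, delay_frames:] = scale * power[:, :-delay_frames]
    # Wiener-like ratio mask in [0, 1].
    mask = power / (power + noise_psd + reverb_psd + 1e-12)
    _, x_hat = istft(X * mask, fs=fs, nperseg=int(0.025 * fs))
    return x_hat, mask

# Toy usage: one second of noise at 16 kHz.
y, m = ratio_mask_enhance(np.random.default_rng(0).standard_normal(16000), 16000)
```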


International Conference on Acoustics, Speech, and Signal Processing | 2014

Robust and efficient environment detection for adaptive speech enhancement in cochlear implants

Oldooz Hazrati; Seyed Omid Sadjadi; John H. L. Hansen

Cochlear implant (CI) recipients require alternative signal processing for speech enhancement, since the quantities needed for intelligibility and quality improvement differ significantly when direct stimulation of the basilar membrane is employed for CIs. Here, a robust feature vector is proposed for environment classification in CI devices. The feature vector is directly computed from the output of the advanced combination encoder (ACE), which is a sound coding strategy commonly used in CIs. Performance of the proposed feature vector is evaluated in the context of environment classification tasks under anechoic quiet, noisy, reverberant, and noisy reverberant conditions. Speech material taken from the IEEE corpus is used to simulate different environmental acoustic conditions with: 1) three measured room impulse responses (RIR) with distinct reverberation times (T60) for generating reverberant environments, and 2) car, train, white Gaussian, multi-talker babble, and speech-shaped noise (SSN) samples for creating noisy conditions at four different signal-to-noise ratio (SNR) levels. We investigate three different classifiers for environment detection, namely Gaussian mixture models (GMM), support vector machines (SVM), and neural networks (NN). Experimental results illustrate the effectiveness of the proposed features for environment classification.
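A toy sketch of the detection step with one GMM per acoustic class (the random feature vectors below are placeholders standing in for the ACE-derived features; the feature dimension, mixture sizes, and class labels are assumptions):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
classes = ["quiet", "noisy", "reverberant", "noisy_reverberant"]
# Placeholder training data: 200 frames of 22-dimensional features per class.
train = {c: rng.normal(loc=i, size=(200, 22)) for i, c in enumerate(classes)}

# One diagonal-covariance GMM per environment class.
models = {c: GaussianMixture(n_components=4, covariance_type="diag",
                             random_state=0).fit(X) for c, X in train.items()}

def detect_environment(features):
    """features: (n_frames, dim) matrix for one utterance; pick the max-likelihood class."""
    scores = {c: m.score(features) for c, m in models.items()}  # mean log-likelihood
    return max(scores, key=scores.get)

print(detect_environment(rng.normal(loc=2, size=(50, 22))))  # -> "reverberant"
```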


International Conference on Acoustics, Speech, and Signal Processing | 2015

Leveraging automatic speech recognition in cochlear implants for improved speech intelligibility under reverberation

Oldooz Hazrati; Shabnam Ghaffarzadegan; John H. L. Hansen

Despite recent advancements in digital signal processing technology for cochlear implant (CI) devices, there remains a significant gap between the speech identification performance of CI users in reverberation and their performance in anechoic quiet conditions. Meanwhile, automatic speech recognition (ASR) systems have seen significant improvements in recent years, resulting in robust speech recognition in a variety of adverse environments, including reverberation. In this study, we exploit these advancements in ASR technology to formulate alternative solutions that benefit CI users. Specifically, an ASR system is developed using multi-condition training on speech data with different reverberation characteristics (e.g., T60 values), resulting in low word error rates (WER) in reverberant conditions. A speech synthesizer is then utilized to generate speech waveforms from the output of the ASR system, and the synthesized speech is presented to CI listeners. The effectiveness of this hybrid recognition-synthesis CI strategy is evaluated under moderate to highly reverberant conditions (i.e., T60 = 0.3, 0.6, 0.8, and 1.0 s) using speech material extracted from the TIMIT corpus. Experimental results confirm the effectiveness of multi-condition training on the performance of the ASR system in reverberation, which in turn yields substantial speech intelligibility gains for CI users in reverberant environments.
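A conceptual sketch of the recognition-synthesis chain (all object and method names are hypothetical placeholders; the paper's actual ASR system and synthesizer are not reproduced here):

```python
def recognition_synthesis_strategy(reverberant_audio, asr_model, synthesizer):
    """Recognize reverberant speech, then re-synthesize it as clean speech for the CI."""
    # 1. A multi-condition-trained ASR model decodes the reverberant input into text.
    transcript = asr_model.transcribe(reverberant_audio)       # hypothetical API
    # 2. A speech synthesizer regenerates a clean waveform from the transcript.
    clean_audio = synthesizer.synthesize(transcript)           # hypothetical API
    # 3. The synthesized (reverberation-free) waveform is what the CI processes.
    return clean_audio
```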

Collaboration


Dive into Oldooz Hazrati's collaborations.

Top Co-Authors

Philipos C. Loizou, University of Texas at Dallas
John H. L. Hansen, University of Texas at Dallas
João Felipe Santos, Institut national de la recherche scientifique
Tiago H. Falk, Institut national de la recherche scientifique
Emily A. Tobey, University of Texas at Dallas
Hussnain Ali, University of Texas at Dallas
Seyed Omid Sadjadi, University of Texas at Dallas
Jaewook Lee, University of Texas at Dallas