Chandan K. A. Reddy
University of Texas at Dallas
Publications
Featured research published by Chandan K. A. Reddy.
International Conference of the IEEE Engineering in Medicine and Biology Society | 2016
Chandan K. A. Reddy; Yiya Hao; Issa M. S. Panahi
In this paper, we present a new Speech Enhancement (SE) technique capable of running on a smartphone as an assistive device for hearing aids (HAs). The developed method incorporates the coherence between the speech and noise signals to obtain an SE gain function, which is used in conjunction with the gain function obtained by Spectral Subtraction with adaptive gain averaging. SE using the coherence-based gain function is found to suppress the background noise well while inducing speech distortion. On the other hand, SE using Spectral Subtraction improves speech quality with tolerable speech distortion, but introduces background musical noise for certain noise types. The weighted fusion of the two gain functions strikes a balance between noise suppression and speech distortion. It also allows the user to control the weighting factor based on the noisy environment and their comfort level of hearing. The developed method is computationally fast and operates in real time. The proposed method was evaluated for machinery, babble, and car noise types, using both objective and subjective measures of the quality and intelligibility of the enhanced speech. The results show significant improvements in comparison with the stand-alone Spectral Subtraction with weighted gain averaging SE method.
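A minimal sketch of the weighted-fusion idea described above, assuming each method produces a per-frequency gain in [0, 1]; the names fused_gain and alpha, the frame length, and the placeholder gains are illustrative choices, not the paper's exact formulation.

import numpy as np

def fused_gain(g_coherence, g_specsub, alpha=0.5):
    # Convex blend of two spectral gains; alpha near 1 favors the
    # coherence-based gain, alpha near 0 favors spectral subtraction.
    return alpha * g_coherence + (1.0 - alpha) * g_specsub

# Toy usage on one STFT frame of noisy speech (placeholder data).
rng = np.random.default_rng(0)
noisy_frame = np.fft.rfft(rng.standard_normal(512))
g_coh = rng.uniform(0.0, 1.0, noisy_frame.shape)
g_ss = rng.uniform(0.0, 1.0, noisy_frame.shape)
enhanced_frame = fused_gain(g_coh, g_ss, alpha=0.6) * noisy_frame

In practice the weight would be exposed to the user as the comfort control mentioned in the abstract; here it is simply a keyword argument.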
IEEE Global Conference on Signal and Information Processing | 2015
Yu Rao; Chetan Vahanesa; Chandan K. A. Reddy; Issa M. S. Panahi
Most single-channel speech enhancement approaches introduce distortion in the speech and create musical noise. In this paper, we propose a novel data-driven two-stage speech enhancement approach based on a cepstral smoothing method. By utilizing cepstral smoothing and modified recursive a priori and a posteriori signal-to-noise ratio estimation, the main speech information is preserved while noise is attenuated at the first stage. At the second stage, the residual noise is suppressed. A lookup-table technique is also applied to achieve better noise suppression. A notable improvement of the proposed approach over the Voice Activity Detector (VAD) based decision-directed approach is that it introduces less speech distortion while suppressing the noise. Quantitative results show that up to a 20 percent improvement in speech quality is achieved under certain noisy conditions.
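The abstract refers to recursive a priori and a posteriori SNR estimation; the sketch below shows only the classic decision-directed form of that recursion, not the paper's modified version, and the smoothing factor beta and the floor value are assumed.

import numpy as np

def decision_directed_snr(noisy_power, noise_power, prev_clean_power, beta=0.98):
    # A posteriori SNR: noisy power over the estimated noise power.
    post_snr = noisy_power / np.maximum(noise_power, 1e-12)
    # A priori SNR: smooth the previous frame's clean-speech estimate with
    # the current (floored) a posteriori SNR excess.
    prio_snr = (beta * prev_clean_power / np.maximum(noise_power, 1e-12)
                + (1.0 - beta) * np.maximum(post_snr - 1.0, 0.0))
    return prio_snr, post_snr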
IEEE Global Conference on Signal and Information Processing | 2015
Chandan K. A. Reddy; Vahid Montazeri; Yu Rao; Issa M. S. Panahi
In this paper, we propose an efficient single-microphone (single-channel) speech enhancement (SE) method for quasi-stationary noise environments. The proposed method estimates the noise by exploiting its quasi-periodic nature, followed by a statistical model based method to enhance the speech. An efficient reduced-order linear predictive error filter is introduced to increase the signal-to-noise ratio (SNR) of the noisy speech. The proposed method is evaluated experimentally on recorded functional Magnetic Resonance Imaging (fMRI) machinery noise, which is quasi-periodic in nature, added to clean speech. Objective evaluation of our method shows improvement in both quality and intelligibility measures when tested with sentences chosen from the IEEE corpus mixed with broadband quasi-periodic fMRI noise. The proposed method outperforms the standard statistical model based SE technique.
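A minimal sketch of a reduced-order linear prediction error filter of the kind mentioned above, assuming a plain autocorrelation (Yule-Walker) fit; the order, the helper name, and the use of SciPy are illustrative choices rather than the paper's implementation.

import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def prediction_error_filter(x, order=4):
    # Autocorrelation of the frame up to the chosen order.
    r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(order + 1)])
    # Solve the Yule-Walker equations for the predictor coefficients.
    a = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])
    # Whitening FIR filter: e[n] = x[n] - a_1 x[n-1] - ... - a_p x[n-p].
    return lfilter(np.concatenate(([1.0], -a)), [1.0], x)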
Journal of the Acoustical Society of America | 2018
Nikhil Shankar; Gautam Shreedhar Bhat; Chandan K. A. Reddy; Issa M. S. Panahi
In this work, the coherence between speech and noise signals is used to obtain a Speech Enhancement (SE) gain function, in combination with a Super-Gaussian Joint Maximum a Posteriori (SGJMAP) single-microphone SE gain function. The proposed SE method is implemented on a smartphone that works as an assistive device to hearing aids in real time. Although the coherence SE gain function suppresses the background noise well, it distorts the speech. In contrast, SE using SGJMAP improves speech quality with the introduction of musical noise, which we contain by using a post filter. The weighted union of these two gain functions strikes a balance between noise suppression and speech distortion. A “weighting” parameter is introduced in the derived gain function that can be controlled by the smartphone user based on different background noises and their comfort level of hearing, demonstrating the practical usability of the developed SE method implemented on the smartphone. Objective and subjective measures of the proposed method show significant improvements in both quality and intelligibility compared to the state-of-the-art SE techniques considered in this work for several noisy conditions at signal-to-noise ratio levels of -5 dB, 0 dB, and 5 dB.
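As a generic illustration of turning a magnitude-squared coherence estimate into a per-frequency suppression gain (not the SGJMAP derivation or the exact gain used in this work), one could attenuate the bins that are strongly coherent with a noise reference; the signals, frame size, and gain floor below are placeholder assumptions.

import numpy as np
from scipy.signal import coherence

fs = 16000
rng = np.random.default_rng(1)
noise_ref = rng.standard_normal(fs)                 # placeholder noise reference
noisy = noise_ref + 0.5 * rng.standard_normal(fs)   # placeholder noisy observation
# Magnitude-squared coherence between the noisy signal and the noise reference.
f, msc = coherence(noisy, noise_ref, fs=fs, nperseg=512)
# Attenuate frequencies that are highly coherent with the noise, with a floor.
gain = np.clip(1.0 - msc, 0.05, 1.0)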
International Conference on Acoustics, Speech, and Signal Processing | 2017
Chandan K. A. Reddy; Anshuman Ganguly; Issa M. S. Panahi
In this paper, a Blind Speech Separation (BSS) technique based on Independent Component Analysis (ICA) is introduced for the underdetermined single-microphone case. In general, ICA uses noisy speech from at least two microphones to separate speech and noise, but it fails to separate them when only one stream of noisy speech is available. We use the Log Spectral Magnitude Estimator based on Minimum Mean Square Error (LogMMSE) as a non-linear estimation technique to estimate the speech spectrum, which is used, along with the noisy speech, as the second input to ICA. The proposed method was tested for machinery, babble, and traffic noise types mixed with speech at Signal-to-Noise Ratios (SNRs) of −5 dB, 0 dB, and 5 dB. Objective and subjective results show high quality and intelligibility in the separated speech using the developed method.
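A toy two-input ICA separation in the spirit of the description above, using scikit-learn's FastICA; the synthetic signals and the crude pre-enhanced estimate standing in for the LogMMSE output are assumptions for illustration only.

import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
n = 16000
speech = np.sin(2 * np.pi * 200 * np.arange(n) / 16000)  # placeholder "speech"
noise = rng.standard_normal(n)
noisy = speech + 0.5 * noise
pre_enhanced = 0.8 * speech + 0.1 * noise                 # stand-in for a LogMMSE estimate
# Stack the two observations and let ICA recover two independent sources.
X = np.column_stack([noisy, pre_enhanced])
sources = FastICA(n_components=2, random_state=0).fit_transform(X)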
IEEE Signal Processing Letters | 2017
Chandan K. A. Reddy; Nikhil Shankar; Gautam Shreedhar Bhat; Ram Charan; Issa M. S. Panahi
In this letter, we derive a new super-Gaussian joint maximum a posteriori (SGJMAP) based single-microphone speech enhancement gain function. The developed speech enhancement method is implemented on a smartphone, and this arrangement functions as an assistive device to hearing aids. We introduce a tradeoff parameter in the derived gain function that allows the smartphone user to customize their listening preference by controlling the amount of noise suppression and speech distortion in real time, based on their level of hearing comfort in noisy real-world acoustic environments. Objective quality and intelligibility measures show the effectiveness of the proposed method in comparison to the benchmark techniques considered in this letter. Subjective results reflect the usefulness of the developed speech enhancement application in real-world noisy conditions at signal-to-noise ratio levels of –5, 0, and 5 dB.
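A minimal sketch of a user-controlled tradeoff in a spectral gain, assuming a generic Wiener-style gain raised to an adjustable exponent; this is a stand-in for, not a derivation of, the SGJMAP gain in the letter, and the parameter values are arbitrary.

import numpy as np

def parametric_gain(prio_snr, trade_off=1.0):
    # Wiener-style gain; trade_off > 1 suppresses more noise at the cost of
    # more speech distortion, trade_off < 1 preserves more speech.
    wiener = prio_snr / (1.0 + prio_snr)
    return np.power(wiener, trade_off)

# Example: the same a priori SNR mapped through two user settings.
snr = np.array([0.5, 1.0, 4.0, 10.0])
g_comfort = parametric_gain(snr, trade_off=0.7)   # milder suppression
g_quiet = parametric_gain(snr, trade_off=1.5)     # stronger suppression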
International Conference of the IEEE Engineering in Medicine and Biology Society | 2016
Chetan Vahanesa; Chandan K. A. Reddy; Issa M. S. Panahi
Functional Magnetic Resonance Imaging (fMRI) is used in many diagnostic procedures for neurological disorders. The strong broadband acoustic noise generated during an fMRI scan interferes with speech communication between the physician and the patient. In this paper, we propose a single-microphone Speech Enhancement (SE) technique that combines a supervised machine learning technique with a statistical model based SE technique. The proposed algorithm is robust, computationally efficient, and capable of running in real time. Objective and subjective evaluations show that the proposed SE method outperforms existing state-of-the-art algorithms in terms of quality and intelligibility of the recovered speech at low Signal-to-Noise Ratios (SNRs).
International Conference on Acoustics, Speech, and Signal Processing | 2015
Anshuman Ganguly; Chandan K. A. Reddy; Issa M. S. Panahi
This paper presents an improved Parallel Feedback Active Noise Control (PFANC) method with adaptive noise-signal decomposition using NLMS-based linear prediction. The proposed method is tested on several stationary periodic signals, quasi-periodic signals, and non-stationary signals buried in Additive White Gaussian Noise (AWGN) at different Signal-to-Noise Ratios (SNRs). Performance improvements in terms of Noise Attenuation Level (NAL) exceeding 40 dB have been obtained and compared with previous methods. The proposed method has lower implementation cost, lower computational delay, better noise-tracking capability, and better NAL performance than previous methods.
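A compact NLMS one-step-ahead linear predictor of the kind referenced above, shown in isolation rather than inside the PFANC structure; the order, step size, and regularization constant are assumed values.

import numpy as np

def nlms_predictor(x, order=8, mu=0.5, eps=1e-6):
    # Predict x[n] from its previous 'order' samples and adapt the weights
    # with a normalized LMS update; returns the prediction and the error.
    w = np.zeros(order)
    pred = np.zeros(len(x))
    err = np.zeros(len(x))
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]                 # most recent samples first
        pred[n] = w @ u
        err[n] = x[n] - pred[n]
        w += mu * err[n] * u / (eps + u @ u)     # normalized step size
    return pred, err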
Signal Processing Systems | 2016
Anshuman Ganguly; Chandan K. A. Reddy; Yiya Hao; Issa M. S. Panahi
Asilomar Conference on Signals, Systems and Computers | 2017
Issa M. S. Panahi; Chandan K. A. Reddy; Linda M. Thibodeau