Hassan H. Namarvar
University of Southern California
Publications
Featured research published by Hassan H. Namarvar.
international symposium on neural networks | 2001
Hassan H. Namarvar; Jim-Shih Liaw
A new version of the dynamic synapse neural network (DSNN) has been applied to recognize noisy raw waveforms of words spoken by multiple speakers. The new architecture is based on the original DSNN and a wavelet filter bank, which decomposes speech signals into multiresolution frequency bands. In this study we applied a genetic algorithm (GA) learning method to optimize the neural network. The advantage of the GA method is that it facilitates finding a near-optimal parameter set in the search space. To speed up training, a new discrete-time implementation of the DSNN was introduced based on the impulse-invariant transformation. The network was tested under difficult discrimination conditions.
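The impulse-invariant transformation mentioned in the abstract maps a continuous-time system to discrete time by sampling its impulse response, so the continuous pole s = -a becomes the discrete pole z = exp(-aT). A minimal numpy sketch on a first-order system (the parameter values and recurrence are illustrative, not the paper's DSNN):

```python
import numpy as np

# Impulse-invariant discretization of a first-order system H(s) = 1/(s + a):
# sample the impulse response h(t) = exp(-a*t), so the continuous pole
# s = -a maps to the discrete pole z = exp(-a*T).
a, T = 2.0, 0.01              # pole and sampling period (illustrative values)
z_pole = np.exp(-a * T)       # discrete-time pole under impulse invariance

def filter_ii(x):
    # Recurrence y[n] = z_pole * y[n-1] + T * x[n], approximating the
    # continuous convolution with h(t).
    y = np.zeros(len(x))
    for n in range(1, len(x)):
        y[n] = z_pole * y[n - 1] + T * x[n]
    return y

# Feeding a discrete unit impulse (area 1) reproduces samples of h(t).
impulse = np.zeros(100)
impulse[1] = 1.0 / T
h_disc = filter_ii(impulse)
h_cont = np.exp(-a * T * np.arange(100))   # exp(-a*t) sampled at t = n*T
```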
international symposium on neural networks | 2002
Hassan H. Namarvar; Alireza A. Dibazar
A new architecture for dynamic synapse neural networks (DSNNs) has been introduced, based on incorporating a continuous nonlinear mechanism to simulate synaptic neurotransmitter release, adding a nonlinear output layer, and using a Gauss-Newton learning method to train the network. We applied this network to simulate two nonlinear dynamical systems and then identified those systems from noisy random observation data. The network's estimation error per sample was below approximately 2% on the training set and below approximately 3% on the test set.
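Gauss-Newton training, as named above, linearizes the residuals around the current parameters and solves the normal equations for an update. A toy sketch fitting one parameter of an exponential model (the model and values are illustrative placeholders, not the DSNN):

```python
import numpy as np

# Gauss-Newton on a one-parameter model f(t; b) = exp(-b*t): linearize the
# residuals r = y - f around the current b and solve the normal equations
# (J^T J) db = J^T r, where J = df/db.
t = np.linspace(0.0, 2.0, 50)
b_true = 1.5
y = np.exp(-b_true * t)       # noise-free targets for this illustration

b = 0.5                        # initial guess
for _ in range(20):
    f = np.exp(-b * t)
    r = y - f                                # residuals
    J = (-t * f)[:, None]                    # Jacobian df/db
    db = np.linalg.solve(J.T @ J, J.T @ r)   # Gauss-Newton step
    b += db[0]
```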
international symposium on neural networks | 2003
Hassan H. Namarvar
We formulate the dynamic synapse neural network in terms of the averaged activity of local populations of neurons. We applied a trust-region nonlinear optimization approach to train the network and show the effectiveness of the new learning method, compared with genetic algorithms, for optimizing large-scale networks.
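A trust-region method constrains each step to a radius within which the local quadratic model is trusted, then expands or shrinks that radius depending on how well the model predicted the actual decrease. A minimal scalar sketch (the test function and constants are illustrative, unrelated to the paper's DSNN training):

```python
import numpy as np

# Minimal trust-region Newton iteration on a scalar test function,
# illustrating the style of optimizer named above. Minimizes
# f(x) = (x^2 - 1)^2 starting from x = 3.
def f(x): return (x**2 - 1.0) ** 2
def g(x): return 4.0 * x * (x**2 - 1.0)      # gradient
def h(x): return 12.0 * x**2 - 4.0           # second derivative

x, radius = 3.0, 1.0
for _ in range(50):
    # Newton step, clipped to the trust radius
    step = -g(x) / h(x) if h(x) > 0 else -np.sign(g(x)) * radius
    step = float(np.clip(step, -radius, radius))
    pred = -(g(x) * step + 0.5 * h(x) * step**2)   # model-predicted decrease
    actual = f(x) - f(x + step)                    # actual decrease
    rho = actual / pred if pred > 0 else -1.0      # agreement ratio
    if rho > 0.75:
        radius = min(2.0 * radius, 10.0)           # good model: expand radius
    elif rho < 0.25:
        radius *= 0.5                              # poor model: shrink radius
    if rho > 0.0:
        x = x + step                               # accept the step
```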
Journal of the Acoustical Society of America | 2004
Hassan H. Namarvar
We propose a new noise-robust speech recognition system using time-frequency domain analysis and radial basis function (RBF) support vector machines (SVMs). Here we ignore the effects of correlated and nonstationary noise and focus only on continuous additive Gaussian white noise. We then develop an isolated digit/command recognizer and compare its performance to two other systems, in which the SVM classifier is replaced by multilayer perceptron (MLP) and RBF neural networks. All systems are trained under low signal-to-noise ratio (SNR) conditions. At SNR = 0 dB we obtained best correct classification rates of 83% and 52% for digit recognition on the TI-46 corpus for the SVM and MLP systems, respectively, while we could not train the RBF network on the same dataset. The newly developed speech recognition system appears to be noise robust for medium-size speech recognition problems under continuous, stationary background noise. However, it still remains to test the system under reali...
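The RBF kernel at the heart of the SVM classifier measures similarity as a Gaussian of squared Euclidean distance. A minimal numpy version (the feature vectors and gamma below are placeholders; the paper's actual inputs are time-frequency features):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    # Gaussian RBF kernel K(x, y) = exp(-gamma * ||x - y||^2), computed
    # for all pairs via the expansion ||x||^2 + ||y||^2 - 2 x.y.
    sq = (np.sum(X**2, axis=1)[:, None]
          + np.sum(Y**2, axis=1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-gamma * sq)

# Two toy 2-D feature vectors; K is the 2x2 kernel (Gram) matrix.
X = np.array([[0.0, 0.0],
              [1.0, 0.0]])
K = rbf_kernel(X, X)
```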
Journal of the Acoustical Society of America | 2004
Alireza A. Dibazar; Hassan H. Namarvar
The modified architecture of the dynamic synapse neural network (DSNN) is used to model windowed short-time speech signals. The quasi-linearization algorithm is applied to train the network. The parameters of the trained network, which represent the signal, are fed into a GMM/HMM-based classifier. The performance of the modified architecture with the GMM/HMM-based classifier is demonstrated by recognition of continuous speech from unprocessed, noisy raw waveforms spoken by multiple speakers. Our results indicate that the features obtained from the DSNN are more robust in the presence of additive white Gaussian noise than state-of-the-art Mel-frequency features. [Work supported in part by DARPA, NASA and ONR.]
Journal of the Acoustical Society of America | 2004
Hassan H. Namarvar
An isolated phoneme recognition system is proposed using time-frequency domain analysis and support vector machines (SVMs). The TIMIT corpus, which contains a total of 6300 sentences (ten sentences spoken by each of 630 speakers from eight major dialect regions of the United States), was used in this experiment. The provided time-aligned phonetic transcription was used to extract phonemes from speech samples. A 55-output classifier system was designed, corresponding to 55 classes of phonemes, and trained with kernel learning algorithms. The training dataset was extracted from clean training samples. A portion of the database, i.e., 65 338 samples of the training dataset, was used to train the system. The performance of the system on the training dataset was 76.4%. The whole test dataset of the TIMIT corpus was used to test the generalization of the system. All samples, i.e., 55 655 samples of the test dataset, were used to test the system. The performance of the system on the test dataset was 45.3%. This ap...
Journal of the Acoustical Society of America | 2003
Alireza A. Dibazar; Hassan H. Namarvar; Sageev George
In this paper, we propose probabilistic neurotransmitter release for dynamic synapse neural networks (DSNNs). The capabilities of DSNNs have already been investigated in the processing of spatio-temporal patterns of action potentials [J.-S. Liaw and T. W. Berger (1996)]. The deterministic model of the synapse is replaced by a probabilistic Markov model. The action potentials generated by the auditory system are the inputs of the model, and the probability of neurotransmitter release is then estimated from the model. In general, the aim of this study is to present a robust pure-tone recognition system based on the DSNN. To generate action potentials from musical tones we employed a pulse-code modulation method. The action potentials are fed into the DSNN for recognition. Our simulation results showed that the DSNN with probabilistic release has 4% better recognition performance than the deterministic model [A. A. Dibazar et al., SFN 2002]. [Work supported in part by DARPA-CVS, NASA, and ONR.]
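One way to picture probabilistic release: each presynaptic spike triggers a Bernoulli trial whose success probability facilitates with recent activity and decays back toward a baseline. The dynamics and parameter names below are a loose illustrative sketch, not the Markov model of the paper:

```python
import numpy as np

# Toy probabilistic neurotransmitter release: spikes cause Bernoulli
# release events; the release probability p facilitates after each spike
# and relaxes toward the baseline p0 between spikes. All parameters are
# illustrative assumptions.
rng = np.random.default_rng(0)

def release_train(spikes, p0=0.3, fac=0.2, tau=5.0):
    p, releases = p0, []
    for s in spikes:
        p = p0 + (p - p0) * np.exp(-1.0 / tau)    # decay toward baseline
        if s:
            releases.append(bool(rng.random() < p))  # Bernoulli release
            p = min(1.0, p + fac)                    # facilitation
        else:
            releases.append(False)                   # no spike, no release
    return releases

spikes = [1, 0, 1, 1, 0, 1]
out = release_train(spikes)
```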
Journal of the Acoustical Society of America | 2003
Alireza A. Dibazar; Hassan H. Namarvar
In this paper we introduce a hybrid system for robust automatic speech recognition. We first present a new architecture for dynamic synapse neural networks (DSNNs) for a speech recognition task, and then extend the quasi-linearization algorithm to estimate the DSNN parameters. This algorithm converges quadratically to the extremal solution of a given set of nonlinear differential equations. In our application, the DSNN parameters and/or the signal spectra estimated by the DSNN are classified using a hidden Markov model (HMM) based classifier. Our results indicate that the features obtained from the DSNN are more robust than state-of-the-art frequency features in the presence of a high level of noise. [Work supported in part by DARPA, NASA, and ONR.]
Journal of the Acoustical Society of America | 2003
Hassan H. Namarvar
A speech enhancement method using a dynamic synapse neural network (DSNN) with an extended Kalman filter (EKF) training method is described. The goal of this study is to introduce a new methodology for improved speech enhancement in the presence of continuous environmental background noise, such as fans and air-conditioning units. The efficiency of this method is shown by applying it to noisy speech signals to remove recorded laboratory noise at different signal-to-noise ratio levels. The preliminary results have been encouraging enough to justify the approach. To provide more noise robustness, this could be used as a pre-processing stage in automatic speech recognition (ASR) systems, where it could have a significant impact on performance. [Work supported by DARPA CBS, NASA, and ONR.]
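An EKF treats the quantities being estimated as states of a nonlinear system and alternates a predict step with a measurement update. A generic single step in numpy (the model functions f, h and their Jacobians F, H are placeholders supplied by the caller, not the paper's DSNN equations):

```python
import numpy as np

# One extended Kalman filter predict/update step. f and h are the state
# transition and measurement functions; F and H are their Jacobians at
# the current estimate; Q and R are process and measurement noise
# covariances.
def ekf_step(x, P, z, f, F, h, H, Q, R):
    # Predict
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    # Update with measurement z
    y = z - h(x_pred)                       # innovation
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Sanity check on a 1-D linear model (f = h = identity): one update of a
# zero prior toward measurement 1.0 with equal variances lands halfway.
I = np.eye(1)
x, P = np.array([0.0]), I.copy()
x, P = ekf_step(x, P, np.array([1.0]),
                f=lambda v: v, F=I, h=lambda v: v, H=I,
                Q=np.zeros((1, 1)), R=I)
```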
Journal of the Acoustical Society of America | 2002
Alireza A. Dibazar; Hassan H. Namarvar
In this paper we propose a new method for using dynamic synapse neural networks (DSNNs) to accomplish isolated word recognition. The DSNNs developed by Liaw and Berger (1996) provide explicit analytic computational frameworks for the solution of nonlinear differential equations. Our method employs quasilinearization of a nonlinear differential equation to train a DSNN: an iterative algorithm that converges monotonically to the extremal solutions of the nonlinear differential equation. The utility of the method was explored by training a simple DSNN to perform a speech recognition task on unprocessed, noisy raw waveforms of words spoken by multiple speakers. The simulation results showed that this training method converges very quickly compared with other existing methods. [Work supported by ONR and DARPA.]
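Quasilinearization (Newton-Kantorovich) replaces a nonlinear ODE by a sequence of linear ODEs, each obtained by linearizing about the previous iterate and solved in full. A toy sketch on y' = -y^2, y(0) = 1, whose exact solution 1/(1+t) makes the convergence easy to check (the ODE is an illustrative stand-in for the DSNN dynamics):

```python
import numpy as np

# Quasilinearization of y' = -y^2: iterate the linearized equation
#   y_{k+1}' = -y_k^2 - 2*y_k*(y_{k+1} - y_k)
# and integrate each linear ODE with forward Euler on a fixed grid.
t = np.linspace(0.0, 1.0, 1001)
dt = t[1] - t[0]
y = np.ones_like(t)                    # initial guess y_0(t) = 1

for _ in range(6):                     # quasilinearization iterations
    y_new = np.empty_like(t)
    y_new[0] = 1.0                     # initial condition y(0) = 1
    for n in range(len(t) - 1):
        rhs = -y[n] ** 2 - 2.0 * y[n] * (y_new[n] - y[n])
        y_new[n + 1] = y_new[n] + dt * rhs
    y = y_new

exact = 1.0 / (1.0 + t)                # closed-form solution for comparison
```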