Narasimhan Sundararajan
Nanyang Technological University
Publications
Featured research published by Narasimhan Sundararajan.
IEEE Transactions on Neural Networks | 2006
Nan-Ying Liang; Guang-Bin Huang; Paramasivan Saratchandran; Narasimhan Sundararajan
In this paper, we develop an online sequential learning algorithm for single hidden layer feedforward networks (SLFNs) with additive or radial basis function (RBF) hidden nodes in a unified framework. The algorithm is referred to as the online sequential extreme learning machine (OS-ELM) and can learn data one-by-one or chunk-by-chunk (a block of data) with fixed or varying chunk size. The activation functions for additive nodes in OS-ELM can be any bounded nonconstant piecewise continuous functions, and the activation functions for RBF nodes can be any integrable piecewise continuous functions. In OS-ELM, the parameters of the hidden nodes (the input weights and biases of additive nodes or the centers and impact factors of RBF nodes) are randomly selected, and the output weights are analytically determined from the sequentially arriving data. The algorithm builds on the ideas of the ELM of Huang et al., developed for batch learning, which has been shown to be extremely fast and to achieve better generalization performance than other batch training methods. Apart from selecting the number of hidden nodes, no other control parameters have to be manually chosen. A detailed performance comparison of OS-ELM with other popular sequential learning algorithms is presented on benchmark problems drawn from the regression, classification, and time-series prediction areas. The results show that OS-ELM is faster than the other sequential algorithms and produces better generalization performance.
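The sequential part of OS-ELM can be made concrete with a short recursive least-squares sketch. The NumPy code below is an illustrative reconstruction, not the authors' implementation: a sigmoid additive hidden layer is initialized on a small batch and then updated with one arriving chunk, and all names (`hidden_output`, `P`, `beta`) and the random stand-in data are placeholders.

```python
import numpy as np

def hidden_output(X, W, b):
    """Sigmoid additive hidden layer: H[i, j] = g(w_j . x_i + b_j)."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 4, 20, 1

# Randomly selected hidden-node parameters (fixed after initialization).
W = rng.uniform(-1, 1, size=(n_in, n_hidden))
b = rng.uniform(-1, 1, size=n_hidden)

# --- Initialization phase on a small batch (X0, T0) ---
X0, T0 = rng.normal(size=(30, n_in)), rng.normal(size=(30, n_out))
H0 = hidden_output(X0, W, b)
P = np.linalg.inv(H0.T @ H0)           # assumes H0.T @ H0 is invertible
beta = P @ H0.T @ T0                   # initial output weights

# --- Sequential phase: one chunk (X1, T1) of arbitrary size ---
X1, T1 = rng.normal(size=(10, n_in)), rng.normal(size=(10, n_out))
H1 = hidden_output(X1, W, b)
K = np.linalg.inv(np.eye(len(X1)) + H1 @ P @ H1.T)
P = P - P @ H1.T @ K @ H1 @ P          # recursive update of the covariance-like matrix
beta = beta + P @ H1.T @ (T1 - H1 @ beta)
```

The same update is applied to every subsequent chunk, so only the hidden-layer output of the newly arrived data is needed at each step.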
IEEE Transactions on Neural Networks | 2005
Guang-Bin Huang; Paramasivan Saratchandran; Narasimhan Sundararajan
This work presents a new sequential learning algorithm for radial basis function (RBF) networks referred to as the generalized growing and pruning algorithm for RBF (GGAP-RBF). The paper first introduces the concept of significance for the hidden neurons and then uses it in the learning algorithm to realize parsimonious networks. The growing and pruning strategy of GGAP-RBF is based on linking the required learning accuracy with the significance of the nearest or intentionally added new neuron. The significance of a neuron is a measure of the average information content of that neuron. The GGAP-RBF algorithm can be used for any arbitrary sampling density of the training samples and is derived from a rigorous statistical point of view. Simulation results for benchmark problems in the function approximation area show that GGAP-RBF outperforms several other sequential learning algorithms in terms of learning speed, network size, and generalization performance regardless of the sampling density function of the training data.
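As a rough illustration of the significance idea (the paper derives a closed-form expression that integrates each neuron's response against the input sampling density), the hedged sketch below instead estimates a neuron's significance by Monte-Carlo averaging its weighted Gaussian response over a sample drawn from the input distribution, and prunes neurons whose estimate falls below a required accuracy. All names and the threshold are illustrative, not the paper's.

```python
import numpy as np

def neuron_significance(center, width, weight, X_samples):
    """Monte-Carlo stand-in for significance: the neuron's average absolute
    contribution over inputs drawn from the sampling distribution."""
    resp = np.exp(-np.sum((X_samples - center) ** 2, axis=1) / width ** 2)
    return np.abs(weight) * resp.mean()

def prune(centers, widths, weights, X_samples, e_min=1e-2):
    """Keep only neurons whose estimated significance meets the required
    learning accuracy e_min (illustrative threshold)."""
    keep = [i for i in range(len(weights))
            if neuron_significance(centers[i], widths[i], weights[i], X_samples) >= e_min]
    return centers[keep], widths[keep], weights[keep]
```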
Neural Computation | 1997
Lu Yingwei; Narasimhan Sundararajan; P. Saratchandran
This article presents a sequential learning algorithm for function approximation and time-series prediction using a minimal radial basis function neural network (RBFNN). The algorithm combines the growth criterion of the resource-allocating network (RAN) of Platt (1991) with a pruning strategy based on the relative contribution of each hidden unit to the overall network output. The resulting network leads toward a minimal topology for the RBFNN. The performance of the algorithm is compared with RAN and the enhanced RAN algorithm of Kadirkamanathan and Niranjan (1993) for the following benchmark problems: (1) hearta from the benchmark problems database PROBEN1, (2) the Hermite polynomial, and (3) the Mackey-Glass chaotic time series. For these problems, the proposed algorithm is shown to realize RBFNNs with far fewer hidden neurons and the same or better accuracy.
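The pruning side of the algorithm can be sketched as follows. This is a deliberate simplification for illustration: the growth (novelty) criteria and the requirement that a neuron stay insignificant over a window of consecutive observations before removal are omitted, and all function names are invented for the example.

```python
import numpy as np

def hidden_contributions(x, centers, widths, weights):
    """Per-neuron contribution o_k = w_k * exp(-||x - mu_k||^2 / sigma_k^2)."""
    phi = np.exp(-np.sum((x - centers) ** 2, axis=1) / widths ** 2)
    return weights * phi

def pruning_candidates(x, centers, widths, weights, delta=0.01):
    """Flag neurons whose contribution, normalised by the largest one for the
    current input, falls below delta; such neurons become candidates for
    removal, keeping the network topology minimal."""
    o = np.abs(hidden_contributions(x, centers, widths, weights))
    r = o / (o.max() + 1e-12)
    return r < delta
```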
Neurocomputing | 2005
Ming-Bin Li; Guang-Bin Huang; Paramasivan Saratchandran; Narasimhan Sundararajan
Recently, Huang et al. proposed a new learning algorithm for feedforward neural networks, named the extreme learning machine (ELM), which can give better generalization performance and faster learning than traditional tuning-based methods. In this paper, we first extend the ELM algorithm from the real domain to the complex domain, and then apply the fully complex extreme learning machine (C-ELM) to nonlinear channel equalization applications. The simulation results show that the C-ELM equalizer significantly outperforms other neural network equalizers such as the complex minimal resource allocation network (CMRAN), complex radial basis function (CRBF) network, and complex backpropagation (CBP) equalizers. C-ELM achieves a much lower symbol error rate (SER) and has faster learning speed.
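A minimal sketch of the fully complex ELM idea, assuming a complex tanh activation and a 4-QAM constellation for concreteness (both are illustrative choices, not necessarily those of the paper; the data are random stand-ins):

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hidden = 1, 10

# Complex random input weights and biases (fixed after initialization).
W = rng.normal(size=(n_in, n_hidden)) + 1j * rng.normal(size=(n_in, n_hidden))
b = rng.normal(size=n_hidden) + 1j * rng.normal(size=n_hidden)

def c_hidden(X):
    # A fully complex activation; np.tanh accepts complex arguments.
    return np.tanh(X @ W + b)

# Training: complex channel observations X, desired QAM symbols T.
X = rng.normal(size=(200, n_in)) + 1j * rng.normal(size=(200, n_in))
symbols = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])
T = rng.choice(symbols, size=(200, 1))

H = c_hidden(X)
beta = np.linalg.pinv(H) @ T               # complex Moore-Penrose pseudoinverse

# Equalization: map each network output to the nearest constellation point.
Y = c_hidden(X) @ beta
decided = symbols[np.argmin(np.abs(Y - symbols), axis=1)]
```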
IEEE Transactions on Systems, Man, and Cybernetics | 2009
Hai-Jun Rong; Guang-Bin Huang; Narasimhan Sundararajan; Paramasivan Saratchandran
In this correspondence, an online sequential fuzzy extreme learning machine (OS-fuzzy-ELM) has been developed for function approximation and classification problems. The equivalence of a Takagi-Sugeno-Kang (TSK) fuzzy inference system (FIS) to a generalized single hidden-layer feedforward network is shown first and is then used to develop the OS-fuzzy-ELM algorithm. This results in an FIS that can handle any bounded nonconstant piecewise continuous membership function. Furthermore, the learning in OS-fuzzy-ELM can be done with the input data arriving in a one-by-one mode or in a chunk-by-chunk (a block of data) mode with fixed or varying chunk size. In OS-fuzzy-ELM, all the antecedent parameters of the membership functions are randomly assigned first, and then the corresponding consequent parameters are determined analytically. Performance comparisons of OS-fuzzy-ELM with other existing algorithms are presented using real-world benchmark problems in the areas of nonlinear system identification, regression, and classification. The results show that the proposed OS-fuzzy-ELM produces similar or better accuracies with at least an order-of-magnitude reduction in the training time.
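A hedged sketch of the equivalence exploited here: with Gaussian membership functions, each rule's firing strength plays the role of a hidden-node output, and the TSK consequent parameters enter linearly, so once the antecedent parameters are randomly assigned the consequents follow from one least-squares solve. The batch form is shown for brevity; the sequential version would reuse the recursive update sketched for OS-ELM above. Normalization of firing strengths is omitted, and all names and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_rules = 2, 5

# Randomly assigned antecedent parameters: Gaussian membership centres/widths.
centers = rng.uniform(-1, 1, size=(n_rules, n_in))
widths = rng.uniform(0.5, 1.5, size=n_rules)

def firing_strengths(X):
    """Rule firing strength = product of Gaussian memberships over the inputs."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / widths ** 2)

def design_matrix(X):
    """Each TSK consequent is affine in x (q_i0 + q_i . x), so every rule
    contributes firing * [1, x] columns; stacking them gives a system that is
    linear in the consequent parameters."""
    G = firing_strengths(X)                      # (N, n_rules)
    Xe = np.hstack([np.ones((len(X), 1)), X])    # (N, n_in + 1)
    return (G[:, :, None] * Xe[:, None, :]).reshape(len(X), -1)

# Consequent parameters from a single least-squares solve.
X, T = rng.normal(size=(100, n_in)), rng.normal(size=(100, 1))
Q = np.linalg.pinv(design_matrix(X)) @ T
y = design_matrix(X) @ Q
```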
IEEE Transactions on Circuits and Systems Ii-express Briefs | 2006
Guang-Bin Huang; Qin-Yu Zhu; Kezhi Mao; Chee Kheong Siew; Paramasivan Saratchandran; Narasimhan Sundararajan
Neural networks with threshold activation functions are highly desirable because of the ease of hardware implementation. However, the popular gradient-based learning algorithms cannot be directly used to train these networks as the threshold functions are nondifferentiable. Methods available in the literature mainly focus on approximating the threshold activation functions by using sigmoid functions. In this paper, we show theoretically that the recently developed extreme learning machine (ELM) algorithm can be used to train the neural networks with threshold functions directly instead of approximating them with sigmoid functions. Experimental results based on real-world benchmark regression problems demonstrate that the generalization performance obtained by ELM is better than other algorithms used in threshold networks. Also, the ELM method does not need control variables (manually tuned parameters) and is much faster.
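The point in miniature: because ELM never differentiates the activation, a hard-limit hidden layer can be trained with the same one-shot least-squares solve used for sigmoid or RBF nodes. A small illustrative sketch with placeholder names and random data:

```python
import numpy as np

rng = np.random.default_rng(3)
n_in, n_hidden = 3, 40

# Random hidden-node parameters, never tuned.
W = rng.uniform(-1, 1, size=(n_in, n_hidden))
b = rng.uniform(-1, 1, size=n_hidden)

def threshold_hidden(X):
    """Hard-limit activation: 1 if w.x + b >= 0, else 0. It is
    nondifferentiable, so gradient-based training cannot use it directly,
    but ELM only needs its output values."""
    return (X @ W + b >= 0).astype(float)

X, T = rng.normal(size=(150, n_in)), rng.normal(size=(150, 1))
beta = np.linalg.pinv(threshold_hidden(X)) @ T   # output weights in one solve
```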
IEEE/ACM Transactions on Computational Biology and Bioinformatics | 2007
Runxuan Zhang; Guang-Bin Huang; Narasimhan Sundararajan; Paramasivan Saratchandran
In this paper, the recently developed extreme learning machine (ELM) is used for directly performing multicategory classification in the cancer diagnosis area. ELM avoids problems like local minima, improper learning rate, and overfitting commonly faced by iterative learning methods and completes the training very fast. We have evaluated the multicategory classification performance of ELM on three benchmark microarray data sets for cancer diagnosis, namely, the GCM data set, the Lung data set, and the Lymphoma data set. The results indicate that ELM produces comparable or better classification accuracies with reduced training time and implementation complexity compared to artificial neural network methods like conventional back-propagation ANN and Linder's SANN, and support vector machine methods like SVM-OVO and Ramaswamy's SVM-OVA. ELM also achieves better accuracies for the classification of individual categories.
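For multicategory classification, the usual ELM recipe is one output node per class, with the predicted class taken as the node with the largest output. The sketch below is illustrative only: the feature dimension, network size, and data are random stand-ins, not the GCM, Lung, or Lymphoma sets.

```python
import numpy as np

rng = np.random.default_rng(4)
n_in, n_hidden, n_classes = 50, 200, 5   # e.g. a reduced gene-expression feature set

W = rng.uniform(-1, 1, size=(n_in, n_hidden))
b = rng.uniform(-1, 1, size=n_hidden)

def hidden(X):
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

# Multicategory targets: +1 for the true class, -1 for all others.
X = rng.normal(size=(120, n_in))
labels = rng.integers(0, n_classes, size=120)
T = -np.ones((120, n_classes))
T[np.arange(120), labels] = 1.0

beta = np.linalg.pinv(hidden(X)) @ T
pred = hidden(X) @ beta
predicted_class = pred.argmax(axis=1)    # class with the largest output wins
```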
Automatica | 2001
Y. Li; Narasimhan Sundararajan; Paramasivan Saratchandran
In this paper, an online learning neuro-control scheme that incorporates a growing radial basis function network (GRBFN) is proposed for a nonlinear aircraft controller design. The scheme is based on a feedback-error-learning strategy in which the neuro-flight-controller (NFC) augments a conventional controller in the loop. By using the Lyapunov synthesis approach, the tuning rule for updating all the parameters of the RBFN (weights, widths, and centers of the Gaussian functions) is derived, which ensures the stability of the overall system with improved tracking accuracy. The theoretical results are validated using simulation studies based on a nonlinear 6-DOF high-performance fighter aircraft undergoing a high-α stability-axis roll maneuver. Compared with a traditional RBFN where only the weights are tuned, a GRBFN with tuning of all the parameters can implement a more compact network structure with a smaller tracking error.
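The feedback-error-learning structure can be sketched as follows. This is a deliberately simplified illustration: the paper derives Lyapunov-based update laws for the weights, widths, and centers (and grows the network online), whereas this sketch adapts only the weights with a plain gradient-style rule; the proportional controller, learning rate, and choice of network input are placeholder assumptions.

```python
import numpy as np

def rbf(x, centers, widths):
    """Gaussian RBF responses for input vector x."""
    return np.exp(-np.sum((x - centers) ** 2, axis=1) / widths ** 2)

def control_step(ref, state, centers, widths, weights, kp=2.0, lr=0.05):
    """One feedback-error-learning step: the conventional controller output u_c
    serves both as part of the total command and as the error signal that
    adapts the network, so the network gradually takes over the control task."""
    x = np.array([ref, state])                 # network input (illustrative choice)
    u_c = kp * (ref - state)                   # conventional (here, proportional) controller
    phi = rbf(x, centers, widths)
    u_n = weights @ phi                        # neuro-controller augmentation
    weights = weights + lr * u_c * phi         # simplified weight-only update
    return u_c + u_n, weights
```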
IEEE Transactions on Neural Networks | 2002
Deng Jianping; Narasimhan Sundararajan; Paramasivan Saratchandran
A complex radial basis function neural network is proposed for the equalization of quadrature amplitude modulation (QAM) signals in communication channels. The network utilizes a sequential learning algorithm referred to as the complex minimal resource allocation network (CMRAN) and is an extension of the MRAN algorithm originally developed for online learning in real-valued radial basis function (RBF) networks. CMRAN has the ability to grow and prune the (complex) RBF network's hidden neurons to ensure a parsimonious network structure. The performance of the CMRAN equalizer for nonlinear channel equalization problems has been evaluated by comparing it with the functional link artificial neural network (FLANN) equalizer of J.C. Patra et al. (1999) and the Gaussian stochastic gradient (SG) RBF equalizer of I. Cha and S. Kassam (1995). The results clearly show that CMRAN's performance is superior in terms of symbol error rates and network complexity.
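A minimal sketch of the complex RBF forward pass used by such an equalizer, assuming a tap-delay input vector and Gaussian responses computed from complex distances. The growing and pruning logic of CMRAN is omitted, and all names, sizes, and data are illustrative.

```python
import numpy as np

def cmran_output(x, centers, widths, weights):
    """Output of a complex RBF network: the Gaussian responses are real-valued
    (built from complex distances), while the output weights are complex, so
    the equalizer can reach any point in the QAM constellation plane."""
    phi = np.exp(-np.sum(np.abs(x - centers) ** 2, axis=1) / widths ** 2)
    return np.sum(weights * phi)

# Example: a 3-tap channel observation fed through a 4-neuron network.
rng = np.random.default_rng(5)
x = rng.normal(size=3) + 1j * rng.normal(size=3)
centers = rng.normal(size=(4, 3)) + 1j * rng.normal(size=(4, 3))
widths = rng.uniform(0.5, 1.5, size=4)
weights = rng.normal(size=4) + 1j * rng.normal(size=4)
y = cmran_output(x, centers, widths, weights)
```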
Information Sciences | 2008
Sundaram Suresh; Narasimhan Sundararajan; Paramasivan Saratchandran
In this paper, we propose two risk-sensitive loss functions to solve multi-category classification problems where the number of training samples is small and/or there is a high imbalance in the number of samples per class. Such problems are common in the bioinformatics/medical diagnosis areas. The most commonly used loss functions in the literature do not perform well on these problems, as they minimize only the approximation error and neglect the estimation error due to imbalance in the training set. The proposed risk-sensitive loss functions minimize both the approximation and estimation errors. We present an error analysis for the risk-sensitive loss functions along with other well-known loss functions. Using a neural architecture, classifiers incorporating these risk-sensitive loss functions have been developed and their performance evaluated on two real-world multi-class classification problems, viz., a satellite image classification problem and a microarray gene expression based cancer classification problem. To study the effectiveness of the proposed loss functions, we have deliberately imbalanced the training samples in the satellite image problem and compared the performance of our neural classifiers with those developed using other well-known loss functions. The results indicate the superior performance of the neural classifier using the proposed loss functions in terms of both the overall and the per-class classification accuracy. Performance comparisons have also been carried out on a number of benchmark problems where the data are normal, i.e., neither sparse nor imbalanced. The results indicate similar or better performance of the proposed loss functions compared to the well-known loss functions.
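The exact form of the proposed risk-sensitive loss functions is not reproduced here; the sketch below only illustrates the underlying idea with a hypothetical per-class risk weight set to the inverse class frequency, so that errors on under-represented classes carry more weight than they would under an ordinary mean-squared error.

```python
import numpy as np

def class_weighted_squared_loss(pred, target_onehot, labels):
    """Illustrative risk-sensitive loss: squared error scaled per sample by a
    class-dependent risk factor (here, the inverse of the class frequency),
    so mistakes on rare classes dominate the training objective."""
    counts = np.bincount(labels, minlength=target_onehot.shape[1]).astype(float)
    risk = (counts.sum() / np.maximum(counts, 1))[labels]   # higher for rare classes
    per_sample = np.sum((pred - target_onehot) ** 2, axis=1)
    return np.mean(risk * per_sample)
```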