Publication


Featured research published by Paramasivan Saratchandran.


IEEE Transactions on Neural Networks | 2006

A Fast and Accurate Online Sequential Learning Algorithm for Feedforward Networks

Nan-Ying Liang; Guang-Bin Huang; Paramasivan Saratchandran; Narasimhan Sundararajan

In this paper, we develop an online sequential learning algorithm for single hidden layer feedforward networks (SLFNs) with additive or radial basis function (RBF) hidden nodes in a unified framework. The algorithm, referred to as the online sequential extreme learning machine (OS-ELM), can learn data one-by-one or chunk-by-chunk (a block of data at a time) with fixed or varying chunk size. The activation functions for additive nodes in OS-ELM can be any bounded nonconstant piecewise continuous functions, and the activation functions for RBF nodes can be any integrable piecewise continuous functions. In OS-ELM, the parameters of the hidden nodes (the input weights and biases of additive nodes, or the centers and impact factors of RBF nodes) are randomly selected, and the output weights are analytically determined from the sequentially arriving data. The algorithm builds on the extreme learning machine (ELM) of Huang et al., developed for batch learning, which has been shown to be extremely fast with better generalization performance than other batch training methods. Apart from selecting the number of hidden nodes, no other control parameters have to be chosen manually. A detailed performance comparison of OS-ELM with other popular sequential learning algorithms is carried out on benchmark problems drawn from the regression, classification, and time-series prediction areas. The results show that OS-ELM is faster than the other sequential algorithms and produces better generalization performance.
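
At its core, the OS-ELM update is a recursive least-squares step on the output weights while the hidden parameters stay fixed and random. Below is a minimal sketch with sigmoid additive nodes on a toy regression task; the network sizes, chunk size, regularizer, and data are our illustrative choices, not the paper's.

```python
import numpy as np

def hidden(X, W, b):
    # Sigmoid additive hidden layer: H[i, j] = g(w_j . x_i + b_j).
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

rng = np.random.default_rng(0)
n_hidden, n_in = 20, 3

# Random input weights and biases, fixed for the whole run.
W = rng.standard_normal((n_in, n_hidden))
b = rng.standard_normal(n_hidden)

# Initialization phase: solve a small batch analytically (least squares).
X0 = rng.standard_normal((50, n_in))
y0 = np.sin(X0).sum(axis=1, keepdims=True)      # toy regression target
H0 = hidden(X0, W, b)
P = np.linalg.inv(H0.T @ H0 + 1e-6 * np.eye(n_hidden))
beta = P @ H0.T @ y0

# Sequential phase: recursive least-squares update, one chunk at a time.
for _ in range(100):
    Xk = rng.standard_normal((5, n_in))          # chunk of 5 samples
    yk = np.sin(Xk).sum(axis=1, keepdims=True)
    Hk = hidden(Xk, W, b)
    K = P @ Hk.T @ np.linalg.inv(np.eye(len(Xk)) + Hk @ P @ Hk.T)
    P = P - K @ Hk @ P
    beta = beta + P @ Hk.T @ (yk - Hk @ beta)
```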


IEEE Transactions on Neural Networks | 2005

A generalized growing and pruning RBF (GGAP-RBF) neural network for function approximation

Guang-Bin Huang; Paramasivan Saratchandran; Narasimhan Sundararajan

This work presents a new sequential learning algorithm for radial basis function (RBF) networks, referred to as the generalized growing and pruning algorithm for RBF (GGAP-RBF). The paper first introduces the concept of significance for the hidden neurons and then uses it in the learning algorithm to realize parsimonious networks. The growing and pruning strategy of GGAP-RBF is based on linking the required learning accuracy with the significance of the nearest or intentionally added new neuron. The significance of a neuron is a measure of its average information content. The GGAP-RBF algorithm can be used with training samples drawn from any arbitrary sampling density and is derived from a rigorous statistical point of view. Simulation results for benchmark problems in the function approximation area show that GGAP-RBF outperforms several other sequential learning algorithms in terms of learning speed, network size, and generalization performance, regardless of the sampling density function of the training data.
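
The key quantity is a neuron's significance: its average contribution over the input distribution. The sketch below is a rough Monte Carlo reading of that idea applied to pruning, with a uniform input density and an illustrative threshold; the paper derives the underlying integral analytically for arbitrary sampling densities.

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy RBF network: Gaussian centers, widths, and output weights.
centers = rng.uniform(-1, 1, size=(8, 2))
widths = np.full(8, 0.5)
weights = rng.standard_normal(8)

def phi(X, c, s):
    # Gaussian response of one hidden neuron on a batch of inputs.
    return np.exp(-np.sum((X - c) ** 2, axis=1) / s ** 2)

# Monte Carlo estimate of each neuron's "significance": its average
# output contribution over the input distribution (uniform on [-1, 1]^2
# here, purely for illustration).
X_mc = rng.uniform(-1, 1, size=(5000, 2))
significance = np.array(
    [abs(w) * phi(X_mc, c, s).mean()
     for w, c, s in zip(weights, centers, widths)]
)

# Pruning step: drop neurons whose significance falls below the
# required learning accuracy e_min (value chosen for illustration).
e_min = 0.05
keep = significance > e_min
centers, widths, weights = centers[keep], widths[keep], weights[keep]
print(f"kept {keep.sum()} of {len(keep)} neurons")
```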


Neurocomputing | 2005

Fully complex extreme learning machine

Ming-Bin Li; Guang-Bin Huang; Paramasivan Saratchandran; Narasimhan Sundararajan

Recently, Huang et al. proposed a new learning algorithm for feedforward neural networks, the extreme learning machine (ELM), which can give better generalization performance and faster learning than traditional tuning-based methods. In this paper, we first extend the ELM algorithm from the real domain to the complex domain, and then apply the fully complex extreme learning machine (C-ELM) to nonlinear channel equalization. The simulation results show that the C-ELM equalizer significantly outperforms other neural network equalizers such as the complex minimal resource allocation network (CMRAN), complex radial basis function (CRBF) network, and complex backpropagation (CBP) equalizers. C-ELM achieves a much lower symbol error rate (SER) and has faster learning speed.
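
In spirit, C-ELM is ordinary ELM with complex-valued random weights, a fully complex activation, and a complex least-squares solve for the output weights. The sketch below runs it on a synthetic 4-QAM channel; the channel model, the tanh activation (one valid fully complex choice), and the decision rule are our assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n_hidden, n_in = 30, 1

# Random fully complex input weights and biases, fixed after drawing.
W = rng.standard_normal((n_in, n_hidden)) + 1j * rng.standard_normal((n_in, n_hidden))
b = rng.standard_normal(n_hidden) + 1j * rng.standard_normal(n_hidden)

# Toy stand-in for channel equalization: recover transmitted 4-QAM
# symbols x from distorted, noisy observations r.
x = (rng.integers(0, 2, 400) * 2 - 1) + 1j * (rng.integers(0, 2, 400) * 2 - 1)
r = 0.8 * x + 0.3 * np.roll(x, 1) \
    + 0.05 * (rng.standard_normal(400) + 1j * rng.standard_normal(400))

R = r.reshape(-1, 1)
H = np.tanh(R @ W + b)             # tanh extends to the complex plane
beta = np.linalg.pinv(H) @ x       # output weights by complex least squares

x_hat = H @ beta                   # equalized symbols
decided = np.sign(x_hat.real) + 1j * np.sign(x_hat.imag)
print("symbol error rate:", np.mean(decided != x))
```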


IEEE Transactions on Systems, Man, and Cybernetics, Part B | 2009

Online Sequential Fuzzy Extreme Learning Machine for Function Approximation and Classification Problems

Hai-Jun Rong; Guang-Bin Huang; Narasimhan Sundararajan; Paramasivan Saratchandran

In this correspondence, an online sequential fuzzy extreme learning machine (OS-fuzzy-ELM) is developed for function approximation and classification problems. The equivalence of a Takagi-Sugeno-Kang (TSK) fuzzy inference system (FIS) to a generalized single hidden-layer feedforward network is shown first and is then used to develop the OS-fuzzy-ELM algorithm. The resulting FIS can handle any bounded nonconstant piecewise continuous membership function. Furthermore, learning in OS-fuzzy-ELM can be done with the input data arriving one-by-one or chunk-by-chunk (a block of data at a time) with fixed or varying chunk size. In OS-fuzzy-ELM, all the antecedent parameters of the membership functions are first randomly assigned, and the corresponding consequent parameters are then determined analytically. Performance comparisons of OS-fuzzy-ELM with other existing algorithms are presented using real-world benchmark problems in the areas of nonlinear system identification, regression, and classification. The results show that the proposed OS-fuzzy-ELM produces similar or better accuracies with at least an order-of-magnitude reduction in training time.
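
The enabling observation is that, once the antecedent (membership) parameters are fixed at random, a first-order TSK system is linear in its consequent parameters, which can therefore be solved for analytically. A batch sketch of that step follows; in the sequential setting the same recursive least-squares update as in OS-ELM above would be applied. The Gaussian memberships and the toy target are our choices.

```python
import numpy as np

rng = np.random.default_rng(3)
n_rules, n_in = 10, 2

# Antecedent parameters (Gaussian membership centers and widths),
# randomly assigned as in OS-fuzzy-ELM.
centers = rng.uniform(-1, 1, size=(n_rules, n_in))
widths = rng.uniform(0.3, 1.0, size=n_rules)

def firing(X):
    # Rule firing strengths, normalized across rules (TSK inference).
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    G = np.exp(-d2 / widths ** 2)
    return G / G.sum(axis=1, keepdims=True)

# For first-order TSK rules the output is sum_k g_k(x) * (a_k . x + b_k);
# stacking [g_k * x, g_k] makes this a linear system in the consequents.
X = rng.uniform(-1, 1, size=(300, n_in))
y = np.sin(np.pi * X[:, 0]) * X[:, 1]
G = firing(X)
Phi = np.concatenate([G[:, :, None] * X[:, None, :], G[:, :, None]], axis=2)
Phi = Phi.reshape(len(X), -1)
theta = np.linalg.pinv(Phi) @ y        # consequent parameters, analytically

print("train RMSE:", np.sqrt(np.mean((Phi @ theta - y) ** 2)))
```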


IEEE Transactions on Circuits and Systems II: Express Briefs | 2006

Can threshold networks be trained directly?

Guang-Bin Huang; Qin-Yu Zhu; Kezhi Mao; Chee Kheong Siew; Paramasivan Saratchandran; Narasimhan Sundararajan

Neural networks with threshold activation functions are highly desirable because of the ease of hardware implementation. However, the popular gradient-based learning algorithms cannot be directly used to train these networks as the threshold functions are nondifferentiable. Methods available in the literature mainly focus on approximating the threshold activation functions by using sigmoid functions. In this paper, we show theoretically that the recently developed extreme learning machine (ELM) algorithm can be used to train the neural networks with threshold functions directly instead of approximating them with sigmoid functions. Experimental results based on real-world benchmark regression problems demonstrate that the generalization performance obtained by ELM is better than other algorithms used in threshold networks. Also, the ELM method does not need control variables (manually tuned parameters) and is much faster.
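The point is that ELM never differentiates the activation: the hidden parameters stay random and only the output weights are solved for, so a hard-limit layer poses no problem. A minimal sketch on a toy regression task (the sizes and data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
n_hidden, n_in = 40, 1

# Random weights and biases for the threshold (hard-limit) hidden layer.
W = rng.standard_normal((n_in, n_hidden))
b = rng.standard_normal(n_hidden)

X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sinc(X).ravel()

# The step function is nondifferentiable, but no gradient is needed:
# the hidden layer is evaluated as-is and only the output weights are
# determined, by ordinary least squares.
H = (X @ W + b > 0).astype(float)
beta = np.linalg.pinv(H) @ y

print("train RMSE:", np.sqrt(np.mean((H @ beta - y) ** 2)))
```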


IEEE/ACM Transactions on Computational Biology and Bioinformatics | 2007

Multicategory Classification Using an Extreme Learning Machine for Microarray Gene Expression Cancer Diagnosis

Runxuan Zhang; Guang-Bin Huang; Narasimhan Sundararajan; Paramasivan Saratchandran

In this paper, the recently developed extreme learning machine (ELM) is used for directly performing multicategory classification in the cancer diagnosis area. ELM avoids problems like local minima, improper learning rates, and overfitting commonly faced by iterative learning methods, and completes training very fast. We have evaluated the multicategory classification performance of ELM on three benchmark microarray data sets for cancer diagnosis, namely, the GCM, Lung, and Lymphoma data sets. The results indicate that ELM produces comparable or better classification accuracies with reduced training time and implementation complexity compared to artificial neural network methods like conventional back-propagation ANN and Linder's SANN, and support vector machine methods like SVM-OVO and Ramaswamy's SVM-OVA. ELM also achieves better accuracies for classification of individual categories.
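
For multicategory problems, ELM uses one output node per class with one-hot targets, so a single least-squares solve covers all classes at once and the predicted class is the node with the largest output. A sketch with synthetic stand-in data (real microarray sets have thousands of genes and carefully chosen splits):

```python
import numpy as np

rng = np.random.default_rng(5)
n_classes, n_hidden, n_genes = 3, 50, 100

# Toy stand-in for a microarray problem: many features, few samples.
X = rng.standard_normal((60, n_genes))
labels = rng.integers(0, n_classes, 60)
T = np.eye(n_classes)[labels]          # one-hot targets, one column per class

W = rng.standard_normal((n_genes, n_hidden))
b = rng.standard_normal(n_hidden)
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
beta = np.linalg.pinv(H) @ T           # one least-squares solve, all classes

pred = np.argmax(H @ beta, axis=1)     # class = output node with largest value
print("train accuracy:", np.mean(pred == labels))
```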


Automatica | 2001

Neuro-controller design for nonlinear fighter aircraft maneuver using fully tuned RBF networks

Y. Li; Narasimhan Sundararajan; Paramasivan Saratchandran

In this paper, an online learning neuro-control scheme that incorporates a growing radial basis function network (GRBFN) is proposed for nonlinear aircraft controller design. The scheme is based on a feedback-error-learning strategy in which the neuro-flight-controller (NFC) augments a conventional controller in the loop. Using the Lyapunov synthesis approach, a tuning rule is derived for updating all the parameters of the RBFN (weights, widths, and centers of the Gaussian functions), ensuring the stability of the overall system with improved tracking accuracy. The theoretical results are validated in simulation studies of a nonlinear 6-DOF high-performance fighter aircraft undergoing a high-α stability-axis roll maneuver. Compared with a traditional RBFN where only the weights are tuned, a GRBFN with tuning of all the parameters can implement a more compact network structure with a smaller tracking error.
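
The feedback-error-learning idea is that the conventional controller's output doubles as the network's training signal: as the network learns, the conventional controller has less left to correct. The sketch below is heavily simplified, using a scalar toy plant, a proportional controller, and a plain gradient step on the weights only; the paper's actual rule tunes weights, centers, and widths via a Lyapunov-derived update, and the growing mechanism is omitted here.

```python
import numpy as np

rng = np.random.default_rng(6)

def plant(x, u, dt=0.01):
    # Toy first-order nonlinear plant standing in for aircraft dynamics.
    return x + dt * (-x + np.tanh(u))

centers = np.linspace(-2, 2, 15)
width = 0.5
w = np.zeros(15)                        # RBF output weights

def rbf(e):
    return np.exp(-((e - centers) ** 2) / width ** 2)

x, lr = 0.0, 0.05
for k in range(5000):
    ref = np.sin(0.01 * k)              # reference trajectory
    e = ref - x
    u_conv = 4.0 * e                    # conventional (here: proportional) controller
    phi = rbf(e)
    u_nn = w @ phi                      # neuro-controller augmenting the loop
    # Feedback-error learning: the conventional controller's output is
    # treated as the network's training error (gradient step, for
    # illustration only).
    w += lr * u_conv * phi
    x = plant(x, u_conv + u_nn)
```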


IEEE Transactions on Neural Networks | 2002

Communication channel equalization using complex-valued minimal radial basis function neural networks

Deng Jianping; Narasimhan Sundararajan; Paramasivan Saratchandran

A complex radial basis function neural network is proposed for the equalization of quadrature amplitude modulation (QAM) signals in communication channels. The network utilizes a sequential learning algorithm referred to as the complex minimal resource allocation network (CMRAN), an extension of the MRAN algorithm originally developed for online learning in real-valued radial basis function (RBF) networks. CMRAN has the ability to grow and prune the complex RBF network's hidden neurons to ensure a parsimonious network structure. The performance of the CMRAN equalizer for nonlinear channel equalization problems has been evaluated by comparing it with the functional link artificial neural network (FLANN) equalizer of J.C. Patra et al. (1999) and the Gaussian stochastic gradient (SG) RBF equalizer of I. Cha and S. Kassam (1995). The results clearly show that CMRAN's performance is superior in terms of symbol error rates and network complexity.
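
CMRAN carries the MRAN growth criterion over to complex values: a neuron is added when both the prediction error and the distance to the nearest existing center are large. The stripped-down sketch below shows just that growth step; the EKF parameter updates and the pruning step are omitted, and the thresholds and toy channel are our choices.

```python
import numpy as np

rng = np.random.default_rng(7)

# Complex RBF state: centers in C, real widths, complex output weights.
centers, widths, weights = [], [], []

def predict(z):
    if not centers:
        return 0.0 + 0.0j
    c, s, w = np.array(centers), np.array(widths), np.array(weights)
    phi = np.exp(-np.abs(z - c) ** 2 / s ** 2)   # Gaussian in |z - c|
    return w @ phi

# MRAN-style novelty check, extended to complex values: grow only when
# the error and the distance to the nearest center are both large
# (thresholds chosen for illustration).
e_min, d_min, kappa = 0.2, 0.3, 0.7
for _ in range(500):
    z = rng.standard_normal() + 1j * rng.standard_normal()   # received sample
    d = 0.8 * z + 0.2 * z ** 2                               # toy channel target
    err = d - predict(z)
    dist = min((abs(z - c) for c in centers), default=np.inf)
    if abs(err) > e_min and dist > d_min:
        width = kappa * (dist if np.isfinite(dist) else 1.0)
        centers.append(z); widths.append(width); weights.append(err)

print("hidden neurons grown:", len(centers))
```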


Information Sciences | 2008

Risk-sensitive loss functions for sparse multi-category classification problems

Sundaram Suresh; Narasimhan Sundararajan; Paramasivan Saratchandran

In this paper, we propose two risk-sensitive loss functions to solve multi-category classification problems where the number of training samples is small and/or there is a high imbalance in the number of samples per class. Such problems are common in the bioinformatics and medical diagnosis areas. The loss functions most commonly used in the literature do not perform well on these problems, as they minimize only the approximation error and neglect the estimation error due to imbalance in the training set. The proposed risk-sensitive loss functions minimize both the approximation and estimation errors. We present an error analysis for the risk-sensitive loss functions along with other well-known loss functions. Using a neural architecture, classifiers incorporating these risk-sensitive loss functions have been developed and their performance evaluated on two real-world multi-class classification problems, viz., a satellite image classification problem and a microarray gene expression based cancer classification problem. To study the effectiveness of the proposed loss functions, we deliberately imbalanced the training samples in the satellite image problem and compared the performance of our neural classifiers with those developed using other well-known loss functions. The results indicate the superior performance of the neural classifier using the proposed loss functions, in terms of both overall and per-class classification accuracy. Performance comparisons have also been carried out on a number of benchmark problems where the data are normal, i.e., neither sparse nor imbalanced. The results indicate similar or better performance of the proposed loss functions compared to the well-known loss functions.
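
The paper's exact loss functions are not reproduced in this abstract, so the sketch below shows only one plausible reading of the idea: weighting each sample's error inversely to its class frequency so that errors on under-represented classes are penalized more. Treat this particular weighting as our assumption, not the paper's formulation.

```python
import numpy as np

def risk_sensitive_mse(Y_pred, Y_true, labels):
    # Per-class risk weights inversely proportional to class frequency,
    # so sparse classes contribute more to the loss. This weighting is
    # an illustration of "risk-sensitive", not the paper's exact form.
    counts = np.bincount(labels, minlength=Y_true.shape[1])
    risk = counts.sum() / (len(counts) * np.maximum(counts, 1))
    per_sample = ((Y_pred - Y_true) ** 2).sum(axis=1)
    return np.mean(risk[labels] * per_sample)

# Example: class 2 is heavily under-represented, so its errors weigh more.
rng = np.random.default_rng(8)
labels = np.array([0] * 50 + [1] * 45 + [2] * 5)
Y_true = np.eye(3)[labels]
Y_pred = rng.uniform(0, 1, Y_true.shape)
print("risk-sensitive loss:", risk_sensitive_mse(Y_pred, Y_true, labels))
```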


Neurocomputing | 2008

A sequential multi-category classifier using radial basis function networks

Sundaram Suresh; Narasimhan Sundararajan; Paramasivan Saratchandran

This paper presents a new sequential multi-category classifier using a radial basis function (SMC-RBF) network for real-world classification problems. The classification algorithm processes the training data one by one and builds the RBF network starting with zero hidden neurons. The growth criterion uses the misclassification error, the approximation error to the true decision boundary, and a distance measure between the current sample and the nearest neuron belonging to the same class. SMC-RBF uses the hinge loss function (instead of the mean-square loss function) for a more accurate estimate of the posterior probability. For network parameter updates, a decoupled extended Kalman filter is used to reduce the computational overhead. The performance of the proposed algorithm is evaluated on three benchmark problems, viz., image segmentation, vehicle, and glass, from the UCI machine learning repository. In addition, performance comparisons have also been made on two real-world problems in the areas of remote sensing and bioinformatics. The proposed SMC-RBF classifier is also compared with other RBF sequential learning algorithms like MRAN, GAP-RBFN, and OS-ELM, and with the well-known batch classification algorithm SVM. The results indicate that SMC-RBF produces a higher classification accuracy with a more compact network. The study also indicates that using a function approximation algorithm for classification problems may not work well when the classes are not well separated and the training data are not uniformly distributed among the classes.
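
The hinge loss mentioned above penalizes an output only until it clears its target margin, unlike squared error, which keeps pulling confident, correct outputs back toward the target value. A minimal illustration with the usual ±1-coded class targets (the specific numbers are made up):

```python
import numpy as np

def hinge_loss(y_hat, t):
    # t in {-1, +1} per output node (one node per class); the loss is
    # zero once an output crosses its target margin, so confident
    # correct outputs are not penalized further.
    return np.maximum(0.0, 1.0 - t * y_hat)

# A 3-class sample of class 0, coded as (+1, -1, -1).
t = np.array([1.0, -1.0, -1.0])
y_hat = np.array([0.4, -1.3, 0.2])
print(hinge_loss(y_hat, t))   # only the under-margin nodes contribute
```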

Collaboration


Dive into Paramasivan Saratchandran's collaborations.

Top Co-Authors

Narasimhan Sundararajan (Nanyang Technological University)
Guang-Bin Huang (Nanyang Technological University)
Deng Jianping (Nanyang Technological University)
Sundaram Suresh (Nanyang Technological University)
Hai-Jun Rong (Xi'an Jiaotong University)
Ming-Bin Li (Nanyang Technological University)
Nan-Ying Liang (Nanyang Technological University)
Shou King Foo (Nanyang Technological University)
Yan Li (Nanyang Technological University)