Ramaswamy Savitha
Nanyang Technological University
Publications
Featured research published by Ramaswamy Savitha.
Neural Computation | 2012
Ramaswamy Savitha; Sundaram Suresh; Narasimhan Sundararajan
Recent studies on human learning reveal that self-regulated learning in a metacognitive framework is the best strategy for efficient learning. As machine learning algorithms are inspired by the principles of human learning, one needs to incorporate the concept of metacognition to develop efficient machine learning algorithms. In this letter, we present a metacognitive learning framework that controls the learning process of a fully complex-valued radial basis function network and is referred to as a metacognitive fully complex-valued radial basis function (Mc-FCRBF) network. Mc-FCRBF has two components: a cognitive component containing the FC-RBF network and a metacognitive component, which regulates the learning process of FC-RBF. In every epoch, when a sample is presented to Mc-FCRBF, the metacognitive component decides what to learn, when to learn, and how to learn, based on the knowledge already acquired by the FC-RBF network and the new information contained in the sample. The Mc-FCRBF learning algorithm is described in detail, and both its approximation and classification abilities are evaluated using a set of benchmark and practical problems. Performance results indicate the superior approximation and classification performance of Mc-FCRBF compared to existing methods in the literature.
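The what-to-learn/when-to-learn/how-to-learn control described above can be pictured as a small gate in the training loop. Below is a minimal sketch of such a gate, assuming a generic predict/update interface; the thresholds and the error measure are illustrative stand-ins, not the paper's exact criteria.

```python
# Minimal sketch of a metacognitive gate over one training epoch.
# delete_thr, learn_thr, and the max-error novelty measure are assumptions.
import numpy as np

def metacognitive_epoch(samples, targets, predict, update,
                        delete_thr=0.01, learn_thr=0.1):
    """Decide, per sample, whether to discard it, learn from it, or reserve it."""
    reserved = []
    for x, y in zip(samples, targets):
        err = float(np.max(np.abs(y - predict(x))))  # novelty w.r.t. current knowledge
        if err < delete_thr:        # what-to-learn: nothing new here, discard
            continue
        if err > learn_thr:         # when/how-to-learn: update the parameters now
            update(x, y)
        else:                       # neither: reserve the sample for a later epoch
            reserved.append((x, y))
    return reserved
```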
International Journal of Neural Systems | 2009
Ramaswamy Savitha; Sundaram Suresh; Narasimhan Sundararajan
Radial basis function (RBF) networks are among the most popular neural network architectures, owing to their simple structure and the good approximation ability conferred by the localization property of the Gaussian function. In this chapter, we study complex-valued RBF networks and their learning algorithms. First, we present a complex-valued RBF (CRBF) network that is a direct extension of the real-valued RBF network. The CRBF network is a single hidden layer network that computes its output as a linear combination of the hidden neuron outputs.
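As a rough illustration of the CRBF structure described above, here is a sketch of a forward pass with Gaussian hidden units driven by the distance |x - c| and complex-valued output weights; the variable names, widths, and dimensions are all illustrative assumptions.

```python
# Sketch of a complex-valued RBF forward pass: real Gaussian hidden responses,
# complex linear combination at the output. Not the exact network of the chapter.
import numpy as np

def crbf_forward(x, centers, widths, weights):
    """x: (d,) complex input; centers: (K, d) complex; widths: (K,) real;
    weights: (m, K) complex output weights. Returns the (m,) complex output."""
    d2 = np.sum(np.abs(x - centers) ** 2, axis=1)   # squared distance to each centre
    phi = np.exp(-d2 / widths ** 2)                 # real-valued Gaussian responses
    return weights @ phi                            # linear combination of hidden outputs

rng = np.random.default_rng(0)
K, d, m = 5, 2, 1
y = crbf_forward(rng.normal(size=d) + 1j * rng.normal(size=d),
                 rng.normal(size=(K, d)) + 1j * rng.normal(size=(K, d)),
                 np.ones(K),
                 rng.normal(size=(m, K)) + 1j * rng.normal(size=(m, K)))
```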
IEEE Transactions on Neural Networks | 2011
Sundaram Suresh; Ramaswamy Savitha; Narasimhan Sundararajan
This paper presents a sequential learning algorithm for a complex-valued resource allocation network with a self-regulating scheme, referred to as the complex-valued self-regulating resource allocation network (CSRAN). The self-regulating scheme in CSRAN decides what to learn, when to learn, and how to learn based on the information present in the training samples. CSRAN is a complex-valued radial basis function network with a sech activation function in the hidden layer. The network parameters are updated using a complex-valued extended Kalman filter algorithm. CSRAN starts with no hidden neurons and builds up an appropriate number of hidden neurons, resulting in a compact structure. Performance of the CSRAN is evaluated using a synthetic complex-valued function approximation problem and two real-world applications: a complex quadrature amplitude modulation channel equalization problem and an adaptive beam-forming problem. Since complex-valued neural networks are good decision makers, the decision-making ability of the CSRAN is compared with other complex-valued classifiers and the best-performing real-valued classifier using two benchmark unbalanced classification problems from the UCI machine learning repository. The approximation and classification results show that the CSRAN outperforms other complex-valued learning algorithms available in the literature.
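The self-regulating growth idea can be sketched as follows: starting from zero hidden neurons, a neuron is added only when a sample is both poorly predicted and far from the existing centres. The thresholds and the residual-based initialisation below are assumptions for illustration; CSRAN's actual criteria and its complex-valued extended Kalman filter update are not reproduced here.

```python
# Sketch of CSRAN-style structure growth with a complex sech hidden unit.
import numpy as np

def sech(z):
    return 1.0 / np.cosh(z)          # fully complex sech activation

centers, widths, weights = [], [], []   # the network starts with no hidden neuron

def maybe_grow(x, y, y_hat, e_thr=0.1, d_thr=1.0):
    """Add a hidden neuron if the sample is novel enough (assumed criteria)."""
    err = np.abs(y - y_hat)
    dist = min((np.linalg.norm(x - c) for c in centers), default=np.inf)
    if err > e_thr and dist > d_thr:
        centers.append(np.array(x))
        widths.append(dist if np.isfinite(dist) else 1.0)
        weights.append(y - y_hat)    # initialise the new weight with the residual
        return True
    return False                     # otherwise an EKF step would refine the parameters
```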
Cognitive Computation | 2014
Ramaswamy Savitha; Sundaram Suresh; H. J. Kim
This paper presents an efficient fast learning classifier based on the Nelson and Narens model of human meta-cognition, namely the 'Meta-cognitive Extreme Learning Machine (McELM).' McELM has two components: a cognitive component and a meta-cognitive component. The cognitive component of McELM is a three-layered extreme learning machine (ELM) classifier. The neurons in the hidden layer of the cognitive component employ the q-Gaussian activation function, while the neurons in the input and output layers are linear. The meta-cognitive component of McELM has a self-regulatory learning mechanism that decides what-to-learn, when-to-learn, and how-to-learn in a meta-cognitive framework. As the samples in the training set are presented one by one, the meta-cognitive component receives monitoring signals from the cognitive component and chooses suitable learning strategies for each sample. Thus, it either deletes the sample, uses the sample to add a new neuron, updates the output weights based on the sample, or reserves the sample for future use. Therefore, unlike the conventional ELM, the architecture of McELM is not fixed a priori; instead, the network is built during the training process. While adding a neuron, McELM chooses the centers based on the sample, and the width of the Gaussian function is chosen randomly. The output weights are estimated using the least-squares estimate based on the hinge-loss error function. The hinge-loss error function facilitates better prediction of posterior probabilities than the mean-square error function and is hence preferred for developing the McELM classifier. While updating the network parameters, the output weights are updated using a recursive least-squares estimate. The performance of McELM is evaluated on a set of benchmark classification problems from the UCI machine learning repository. Performance study results highlight that meta-cognition in the ELM framework enhances the decision-making ability of ELM significantly.
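Two of the ingredients named above, the q-Gaussian hidden activation and an analytic least-squares output-weight estimate, can be sketched as follows. The hinge-loss refinement, the sample-wise self-regulation, and the exact parameterisation are simplified away, so treat this purely as an illustration under assumed settings.

```python
# Sketch: q-Gaussian hidden layer plus a one-shot least-squares output solve.
import numpy as np

def q_gaussian(d2, q=1.5):
    """q-Gaussian response to squared distance d2; reduces to exp(-d2) as q -> 1."""
    if np.isclose(q, 1.0):
        return np.exp(-d2)
    base = np.maximum(1.0 - (1.0 - q) * d2, 0.0)
    return base ** (1.0 / (1.0 - q))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))                    # 100 samples, 4 features (toy data)
T = np.eye(3)[rng.integers(0, 3, size=100)]      # one-hot coded class targets
C = X[rng.choice(100, 10, replace=False)]        # centres picked from the samples
H = q_gaussian(((X[:, None, :] - C[None]) ** 2).sum(-1))   # (100, 10) hidden matrix
W, *_ = np.linalg.lstsq(H, T, rcond=None)        # least-squares output weights
```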
Evolving Systems | 2013
Kartick Subramanian; Ramaswamy Savitha; Sundaram Suresh
In this paper, we present a Meta-cognitive Interval Type-2 neuro-Fuzzy Inference System (McIT2FIS) classifier and its projection-based learning algorithm. McIT2FIS consists of two components, namely, a cognitive component and a meta-cognitive component. The cognitive component is an Interval Type-2 neuro-Fuzzy Inference System (IT2FIS) represented as a six-layered adaptive network realizing a Takagi-Sugeno-Kang type inference mechanism. IT2FIS begins with zero rules, and rules are added and updated depending on the relative knowledge represented by the sample in comparison to that represented by the cognitive component. The knowledge representation ability of IT2FIS is controlled by a self-regulatory learning mechanism that forms the meta-cognitive component. As each sample is presented to the network, the meta-cognitive component monitors the hinge-loss error and the class-specific spherical potential of the current sample to decide what-to-learn, when-to-learn, and how-to-learn efficiently. When a new rule is added or an existing rule is updated, a Projection Based Learning (PBL) algorithm uses a class-specific criterion and a sample-overlap criterion to estimate the network parameters corresponding to the minimum-energy point of the error function. The performance of McIT2FIS is evaluated on a set of benchmark classification problems from the UCI machine learning repository. A statistical performance comparison with other algorithms available in the literature indicates the improved performance of McIT2FIS.
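As an illustration of the monitoring signals mentioned above, the sketch below computes a class-specific spherical potential (how well a sample is already covered by rules of its own class) and a truncated hinge-like error for +1/-1 coded targets; the Gaussian kernel form, its width, and the target coding are assumptions, not the paper's exact definitions.

```python
# Sketch of two monitoring signals for the meta-cognitive component.
import numpy as np

def spherical_potential(x, class_centers, width=1.0):
    """Mean Gaussian similarity of sample x to the rule centres of its own class."""
    if len(class_centers) == 0:
        return 0.0
    d2 = np.sum((np.asarray(class_centers) - x) ** 2, axis=1)
    return float(np.mean(np.exp(-d2 / width)))

def hinge_error(y_coded, y_hat):
    """Truncated (hinge) error for +1/-1 coded class targets."""
    e = y_coded - y_hat
    e[y_coded * y_hat > 1.0] = 0.0   # no penalty once the margin is satisfied
    return e
```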
Neurocomputing | 2009
Ramaswamy Savitha; Sundaram Suresh; Narasimhan Sundararajan; Paramasivan Saratchandran
In a fully complex-valued feed-forward network, the convergence of the Complex-valued Back Propagation (CBP) learning algorithm depends on the choice of the activation function, the learning-sample distribution, the minimization criterion, the initial weights, and the learning rate. The minimization criteria used in existing versions of the CBP learning algorithm do not approximate the phase of the complex-valued output well in function approximation problems. The phase of a complex-valued output is critical in telecommunication applications and in reconstruction and source-localization problems in medical imaging. In this paper, the issues related to the convergence of complex-valued neural networks are clearly enumerated using a systematic sensitivity study on existing complex-valued neural networks. In addition, we also compare the performance of different types of split complex-valued neural networks. From the observations in the sensitivity analysis, we propose a new CBP learning algorithm with a logarithmic performance index for a complex-valued neural network with an exponential activation function. The proposed CBP learning algorithm directly minimizes both the magnitude and phase errors and also provides better convergence characteristics. Performance of the proposed scheme is evaluated using two synthetic complex-valued function approximation problems, the complex XOR problem, and a non-minimum-phase equalization problem. Also, a comparative analysis of the convergence of the existing fully complex and split complex networks is presented.
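One way to see why a logarithmic index captures both error components: the complex logarithm of the ratio between the predicted and desired outputs separates into a log-magnitude part and a phase part, so its squared modulus penalises both at once. The sketch below shows this decomposition; the paper's exact index may differ in constants and branch handling.

```python
# Sketch of a logarithmic error measure that penalises magnitude and phase together.
import numpy as np

def log_error(y, y_hat, eps=1e-12):
    r = np.log((y_hat + eps) / (y + eps))   # complex log of the output ratio
    return 0.5 * np.sum(np.abs(r) ** 2)     # = 0.5 * sum((log-magnitude)^2 + phase^2)

y = np.exp(1j * np.array([0.1, 1.0]))
print(log_error(y, 1.1 * y))                # pure magnitude error
print(log_error(y, y * np.exp(1j * 0.2)))   # pure phase error
```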
Neurocomputing | 2012
Ramaswamy Savitha; Sundaram Suresh; Narasimhan Sundararajan; H. J. Kim
In this paper, we investigate the decision-making ability of a fully complex-valued radial basis function (FC-RBF) network in solving real-valued classification problems. The FC-RBF classifier is a single hidden layer fully complex-valued neural network with a nonlinear input layer, a nonlinear hidden layer, and a linear output layer. The neurons in the input layer of the classifier employ the phase-encoded transformation to map the input features from the Real domain to the Complex domain. The neurons in the hidden layer employ a fully complex-valued Gaussian-like activation function of the hyperbolic secant (sech) type. The classification ability of the classifier is first studied analytically, and it is shown that the decision boundaries of the FC-RBF classifier are orthogonal to each other. Then, the performance of the FC-RBF classifier is studied experimentally using a set of real-valued benchmark problems and also a real-world problem. The study clearly indicates the superior classification ability of the FC-RBF classifier.
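A minimal sketch of a phase-encoded transformation of the kind described above: each real feature is rescaled to [0, 1] and mapped onto the unit circle, so the feature value is carried by the phase. The [0, pi] phase range and the min-max scaling here are assumptions for illustration.

```python
# Sketch: map real features onto the unit circle via their phase.
import numpy as np

def phase_encode(x, lo, hi):
    """Map real features x in [lo, hi] to complex values exp(i * pi * t), t in [0, 1]."""
    t = (x - lo) / (hi - lo)
    return np.exp(1j * np.pi * t)

x = np.array([0.0, 2.5, 5.0])
print(phase_encode(x, 0.0, 5.0))    # phases 0, pi/2, pi on the unit circle
```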
IEEE Transactions on Neural Networks | 2013
Ramaswamy Savitha; Sundaram Suresh; Narasimhan Sundararajan
This paper presents a fully complex-valued relaxation network (FCRN) with its projection-based learning algorithm. The FCRN is a single hidden layer network with a Gaussian-like sech activation function in the hidden layer and an exponential activation function in the output layer. For a given number of hidden neurons, the input weights are assigned randomly and the output weights are estimated by minimizing a nonlinear logarithmic function (called an energy function) which explicitly contains both the magnitude and phase errors. A projection-based learning algorithm determines the optimal output weights corresponding to the minimum of the energy function by converting the nonlinear programming problem into that of solving a set of simultaneous linear algebraic equations. The resultant FCRN approximates the desired output more accurately and with lower computational effort. The classification ability of FCRN is evaluated using a set of real-valued benchmark classification problems from the University of California, Irvine machine learning repository. Here, a circular transformation is used to transform the real-valued input features to the Complex domain. Next, the FCRN is used to solve three practical problems: a quadrature amplitude modulation channel equalization, an adaptive beamforming, and a mammogram classification problem. Performance results clearly indicate the superior classification/approximation performance of the FCRN.
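The projection-based step can be pictured as follows: once the random hidden parameters fix the hidden-response matrix H, the output weights follow from one linear least-squares solve over the complex field. The sketch below uses a simplified sech hidden layer and a plain least-squares objective in place of the paper's magnitude-and-phase energy function, so treat every name and shape as an assumption.

```python
# Sketch: random complex hidden layer, then output weights from a linear solve.
import numpy as np

rng = np.random.default_rng(0)
N, d, K, m = 200, 3, 20, 2
Z = np.exp(1j * rng.uniform(0, np.pi, size=(N, d)))         # complex-coded inputs
C = Z[rng.choice(N, K, replace=False)]                      # random centres from data
V = rng.normal(size=(K, d)) + 1j * rng.normal(size=(K, d))  # random scaling vectors
H = 1.0 / np.cosh(np.einsum('kd,jkd->jk', V, Z[:, None, :] - C[None]))  # sech hidden layer
Y = rng.normal(size=(N, m)) + 1j * rng.normal(size=(N, m))  # desired complex outputs
W, *_ = np.linalg.lstsq(H, Y, rcond=None)                   # solve H W = Y in least squares
```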
International Symposium on Neural Networks | 2011
Ramaswamy Savitha; Sundaram Suresh; Narasimhan Sundararajan
This paper presents a fast learning fully complex-valued classifier to solve real-valued classification problems, called the 'Fast Learning Complex-valued Neural Classifier' (FLCNC). The FLCNC is a single hidden layer network with a nonlinear, real-to-complex transformed input layer, a hidden layer with a fully complex activation function, and a linear output layer. The neurons in the input layer convert the real-valued input features to the Complex domain using a unique nonlinear transformation. At the hidden layer, the complex-valued transformed input features are mapped onto a higher-dimensional Complex plane using a fully complex-valued activation function of the sech type. The parameters of the input and hidden neurons of the FLCNC are chosen randomly, and the output parameters are estimated analytically, which enables the FLCNC to perform fast classification. Moreover, the unique nonlinear input transformation and the orthogonal decision boundaries of the complex-valued neural network help the FLCNC perform accurate classification. Performance of the FLCNC is demonstrated using a set of multi-category and binary real-valued classification problems, with both balanced and unbalanced data sets, from the UCI machine learning repository. Performance comparison with existing complex-valued and real-valued classifiers shows the superior classification performance of the FLCNC.
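Putting the pieces together, the fast-learning recipe can be sketched end to end: draw random hidden parameters, solve once for the output weights, and assign each sample to the output neuron with the largest real part. The input transformation, target coding, and decision rule below are illustrative stand-ins, not the FLCNC's exact choices.

```python
# Sketch of a fast-trained complex-valued classifier on toy data.
import numpy as np

rng = np.random.default_rng(1)
N, d, n_class, K = 150, 4, 3, 25
X = rng.normal(size=(N, d))
labels = rng.integers(0, n_class, size=N)
Z = np.exp(1j * np.pi * (X - X.min(0)) / np.ptp(X, axis=0))  # real -> complex transform
T = np.where(np.eye(n_class)[labels] > 0, 1.0, -1.0)         # +1/-1 coded targets

C = Z[rng.choice(N, K, replace=False)]                       # random hidden centres
V = rng.normal(size=(K, d)) + 1j * rng.normal(size=(K, d))   # random scaling vectors
H = 1.0 / np.cosh(np.einsum('kd,jkd->jk', V, Z[:, None, :] - C[None]))
W, *_ = np.linalg.lstsq(H, T.astype(complex), rcond=None)    # analytic output weights
pred = np.argmax((H @ W).real, axis=1)                       # largest real part wins
print("training accuracy:", (pred == labels).mean())
```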
IEEE Region 10 Conference (TENCON) | 2009
Ramaswamy Savitha; S. Vigneswaran; Sundaram Suresh; Narasimhan Sundararajan
Beamforming is an array signal processing problem of shaping the beam pattern of an array of sensors so that beams are directed toward the desired direction (beam-pointing) and nulls are directed toward interference directions (null-steering). In this paper, the performance of beamforming using the Fully Complex-valued RBF network (FC-RBF) with the fully complex-valued activation function is compared with the performance of existing complex-valued RBF neural networks. It was observed that the FC-RBF network performed better than the other complex-valued RBF networks in steering nulls and beams as desired. The learning speed of the FC-RBF network was also faster than that of the Complex-valued Radial Basis Function network. Comparison of these performances with the optimum matrix method showed that the beam pattern of the FC-RBF beamformer was closer to that of the matrix method.
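To make the beam-pointing/null-steering objective concrete, the sketch below evaluates the beampattern |wᴴa(θ)| of a uniform linear array and forms simple projection-based weights that keep a beam at the desired direction while nulling an interferer; the array geometry and directions are illustrative examples, and this is not the FC-RBF beamformer itself.

```python
# Sketch: uniform linear array beampattern with one beam and one null.
import numpy as np

def steering(theta_deg, n_sensors=8, spacing=0.5):    # spacing in wavelengths
    k = np.arange(n_sensors)
    return np.exp(-2j * np.pi * spacing * k * np.sin(np.deg2rad(theta_deg)))

a_des, a_int = steering(0.0), steering(40.0)          # beam at 0 deg, null at 40 deg
# Null-steering weights: remove the interferer component from the desired vector.
w = a_des - (np.vdot(a_int, a_des) / np.vdot(a_int, a_int)) * a_int
for th in (0.0, 40.0):
    print(th, np.abs(np.vdot(w, steering(th))))       # large at 0 deg, ~0 at 40 deg
```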