Publications


Featured research published by S. Balasundaram.


Neural Computing and Applications | 2013

On Lagrangian twin support vector regression

S. Balasundaram; Muhammad Tanveer

In this paper, a simple and linearly convergent Lagrangian support vector machine algorithm is proposed for the dual of the twin support vector regression (TSVR). Although the algorithm requires matrix inverses at the outset, it is shown that these can be obtained by subtracting from the identity matrix a scalar multiple of the inverse of a positive semi-definite matrix that arises in the original TSVR formulation. The algorithm is easy to implement and needs no optimization packages. To demonstrate its effectiveness, experiments were performed on well-known synthetic and real-world datasets. Similar or better generalization performance in less training time than the standard and twin support vector regression methods clearly shows its suitability and applicability.
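The abstract does not reproduce the exact dual updates, but the family of Lagrangian methods it belongs to shares a common fixed-point pattern. Below is a minimal NumPy sketch of that pattern for a generic dual of the form min_{u >= 0} 0.5 u'Qu - r'u, in the style of Mangasarian and Musicant's Lagrangian SVM; Q, r and the step parameter alpha are placeholders, not the paper's TSVR quantities, and TSVR solves two such problems, one per bound regressor.

```python
import numpy as np

def lagrangian_iteration(Q, r, alpha, tol=1e-6, max_iter=1000):
    """Fixed-point iteration for the dual  min_{u >= 0}  0.5 u'Qu - r'u.

    Lagrangian-SVM-style update:
        u_{k+1} = Q^{-1} (r + max(0, (Q u_k - r) - alpha * u_k))
    Q is inverted only once, before the loop; each sweep is then a
    matrix-vector product plus a componentwise max.  Q, r and alpha are
    generic placeholders, not the paper's exact TSVR quantities.
    """
    Q_inv = np.linalg.inv(Q)    # the single matrix inverse of the method
    u = Q_inv @ r               # any starting point works
    for _ in range(max_iter):
        u_new = Q_inv @ (r + np.maximum(0.0, (Q @ u - r) - alpha * u))
        if np.linalg.norm(u_new - u) < tol:
            break
        u = u_new
    return u
```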


Neurocomputing | 2014

1-Norm extreme learning machine for regression and multiclass classification using Newton method

S. Balasundaram; Deepak Gupta; Kapil

In this paper, a novel 1-norm extreme learning machine (ELM) for regression and multiclass classification is proposed as a linear programming problem whose solution is obtained by solving its dual exterior penalty problem as an unconstrained minimization problem using a fast Newton method. The algorithm converges from any starting point and can be easily implemented in MATLAB. The main advantage of the proposed approach is that it leads to a sparse model representation: many components of the optimal solution vector become zero, so the decision function can be determined using far fewer hidden nodes than ELM. Numerical experiments were performed on a number of real-world benchmark datasets, and the results are compared with ELM using additive and radial basis function (RBF) hidden nodes, optimally pruned ELM (OP-ELM) and support vector machine (SVM) methods. Similar or better generalization performance of the proposed method on the test data over ELM, OP-ELM and SVM clearly illustrates its applicability and usefulness.
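As an illustration of the sparsity-inducing 1-norm objective only, here is a hedged sketch that fits an ELM output vector by solving the linear program min ||w||_1 + C * ||Hw - y||_1 with an off-the-shelf LP solver (SciPy's linprog) via the standard slack-variable reformulation. The paper instead solves the dual exterior penalty problem with a fast Newton method; the activation, constants and function names here are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def one_norm_elm_fit(X, y, n_hidden=50, C=10.0, rng=None):
    """Sparse ELM output weights via the LP  min ||w||_1 + C * ||Hw - y||_1.

    H is the usual ELM hidden-layer output matrix with random input
    weights and a sigmoid activation.  The LP is solved with linprog
    through the standard slack reformulation; the paper itself uses a
    fast Newton method on an exterior penalty of the dual instead.
    """
    rng = np.random.default_rng(rng)
    m, d = X.shape
    W = rng.standard_normal((d, n_hidden))          # random input weights
    b = rng.standard_normal(n_hidden)               # random biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))          # m x n_hidden

    n = n_hidden
    # variables: [w (free), p >= |w|, q >= |Hw - y|]
    c = np.concatenate([np.zeros(n), np.ones(n), C * np.ones(m)])
    I, Z = np.eye(n), np.zeros((m, n))
    A_ub = np.block([
        [ I, -I, np.zeros((n, m))],   #  w - p <= 0
        [-I, -I, np.zeros((n, m))],   # -w - p <= 0
        [ H,  Z, -np.eye(m)      ],   #  Hw - q <= y
        [-H,  Z, -np.eye(m)      ],   # -Hw - q <= -y
    ])
    b_ub = np.concatenate([np.zeros(2 * n), y, -y])
    bounds = [(None, None)] * n + [(0, None)] * (n + m)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    w = res.x[:n]                 # many entries (near) zero: sparse model
    return W, b, w
```

Prediction on new inputs then reuses the same random W and b to form the hidden layer and applies the sparse w; hidden nodes whose weight is zero can simply be dropped.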


Expert Systems With Applications | 2010

On Lagrangian support vector regression

S. Balasundaram; Kapil

Prediction by regression is an important method of solution for forecasting. In this paper, an iterative Lagrangian support vector machine algorithm for regression problems is proposed. The method has the advantage that its solution is obtained by taking the inverse of a matrix whose order equals the number of input samples, once at the beginning of the iteration, rather than by solving a quadratic optimization problem. The algorithm converges from any starting point and needs no optimization packages. Numerical experiments were performed on Bodyfat and a number of important time series datasets. The results obtained are in close agreement with the exact solutions of the problems considered, clearly demonstrating the effectiveness of the proposed method.
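The update behind this description, in the generic dual form $\min_{u \ge 0} \tfrac{1}{2} u^\top Q u - r^\top u$, is the classical Lagrangian SVM iteration (the same pattern sketched in code after the 2013 abstract above); the paper's exact $Q$ and $r$ are not given in the abstract:

$$u^{i+1} = Q^{-1}\left(r + \left((Q u^{i} - r) - \alpha u^{i}\right)_{+}\right), \qquad i = 0, 1, 2, \ldots$$

Only $Q^{-1}$ is required, computed once before the loop, which is how a single matrix inversion at the start replaces a quadratic programming solve.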


International Journal of Machine Learning and Cybernetics | 2016

On optimization based extreme learning machine in primal for regression and classification by functional iterative method

S. Balasundaram; Deepak Gupta

In this paper, the extreme learning machine in the optimization-based framework recently proposed by Huang et al. (Neurocomputing, 74:155–163, 2010) is considered in its primal form, whose solution is obtained by solving an absolute value equation problem with a simple, functional iterative algorithm. It is proved, under sufficient conditions, that the algorithm converges linearly. Pseudocodes of the algorithm for regression and classification are given, and they can be easily implemented in MATLAB. Experiments were performed on a number of real-world datasets using additive and radial basis function hidden nodes. Similar or better generalization performance than support vector machine (SVM), extreme learning machine (ELM), optimally pruned extreme learning machine (OP-ELM) and optimization based extreme learning machine (OB-ELM) methods, with faster learning than SVM and OB-ELM, demonstrates its effectiveness and usefulness.
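The abstract reduces the primal problem to an absolute value equation (AVE). As a hedged illustration of the functional (Picard-type) iteration idea only, here is a sketch for a generic AVE of the form Ax - |x| = b; the paper's actual operator, which involves the ELM hidden-layer matrix and a regularization parameter, and its linear-convergence conditions are not reproduced.

```python
import numpy as np

def ave_fixed_point(A, b, tol=1e-8, max_iter=500):
    """Picard-type iteration for the absolute value equation  A x - |x| = b.

    Rearranging gives the fixed point  x = A^{-1}(b + |x|), iterated as
        x_{k+1} = A^{-1}(b + |x_k|).
    A generic sufficient condition for a unique solution (and for this
    map to contract) is that all singular values of A exceed 1; the
    paper's operator and conditions are its own.
    """
    A_inv = np.linalg.inv(A)        # invert once, reuse every iteration
    x = A_inv @ b                   # starting point
    for _ in range(max_iter):
        x_new = A_inv @ (b + np.abs(x))
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

# Tiny usage example on a well-conditioned random instance (assumed data):
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) + 10 * np.eye(5)  # singular values > 1
x_true = rng.standard_normal(5)
b = A @ x_true - np.abs(x_true)
print(np.allclose(ave_fixed_point(A, b), x_true))  # True
```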


Applied Intelligence | 2016

Training primal twin support vector regression via unconstrained convex minimization

S. Balasundaram; Yogendra Meena

In this paper, we propose a new unconstrained twin support vector regression model in the primal space (UPTSVR). With the addition of a regularization term in the formulation of the problem, the structural risk is minimized. The proposed formulation solves two smaller-sized unconstrained minimization problems having continuous, piecewise quadratic objective functions by gradient-based iterative methods. However, since the objective functions contain the non-smooth 'plus' function, two approaches are taken: (i) replace the non-smooth 'plus' function with a smooth approximation; (ii) apply a generalized derivative of the non-smooth 'plus' function. These lead to five algorithms whose pseudocodes are also given. Experimental results obtained on a number of synthetic and real-world benchmark datasets using these algorithms, in comparison with the standard support vector regression (SVR) and twin SVR (TSVR), clearly demonstrate the effectiveness of the proposed method.
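The two routes (i) and (ii) around the non-smooth 'plus' function can be made concrete. The smoothing below is the classical one from the smooth-SVM literature; whether it is the paper's exact choice is an assumption.

```python
import numpy as np

def plus(x):
    """The non-smooth 'plus' function  (x)_+ = max(x, 0)."""
    return np.maximum(x, 0.0)

def smooth_plus(x, beta=5.0):
    """Smooth approximation of (x)_+ :  p(x, beta) = log(1 + e^{beta x}) / beta.

    Classical smoothing from the smooth-SVM literature; it approaches
    (x)_+ uniformly as beta grows.  Written with logaddexp for numerical
    stability at large |x|.
    """
    return np.logaddexp(0.0, beta * x) / beta

def plus_generalized_derivative(x):
    """A generalized derivative of (x)_+ : the step function 1{x > 0}.

    Route (ii) in the abstract plugs such a generalized derivative
    directly into the gradient-based iteration instead of smoothing.
    """
    return (x > 0).astype(float)

x = np.linspace(-2, 2, 5)
print(plus(x))                  # [0. 0. 0. 1. 2.]
print(smooth_plus(x, beta=10))  # close to the line above
```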


Neurocomputing | 2011

Application of error minimized extreme learning machine for simultaneous learning of a function and its derivatives

S. Balasundaram; Kapil

In this paper, a new learning algorithm is proposed for the problem of simultaneous learning of a function and its derivatives, as an extension of the error-minimized extreme learning machine for single hidden layer feedforward neural networks. Our formulation leads to solving a system of linear equations whose solution is obtained by the Moore–Penrose generalized pseudo-inverse. In this approach, the number of hidden nodes is determined automatically by repeatedly adding new hidden nodes to the network, either one by one or group by group, and updating the output weights incrementally in an efficient manner until the network output error falls below the given expected learning accuracy. To verify the efficiency of the proposed method, a number of examples are considered, and the results obtained with the proposed method are compared with those of two other popular methods. The proposed method is observed to be fast and to produce similar or better generalization performance on the test data.
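A hedged sketch of the growth loop described here, for plain function learning only (the paper's extension adds derivative equations to the same linear system). For clarity this sketch recomputes the Moore–Penrose pseudo-inverse after each group of new nodes, whereas error-minimized ELM updates the output weights incrementally; the activation, group size and stopping rule are illustrative assumptions.

```python
import numpy as np

def grow_elm(X, y, target_rmse=0.05, step=5, max_hidden=200, rng=None):
    """Grow hidden nodes until the training RMSE drops below target_rmse.

    Sketch of the error-minimized-ELM loop: add nodes in groups of
    `step`, re-fit output weights by the Moore-Penrose pseudo-inverse,
    stop at the expected accuracy.  (EM-ELM proper updates the weights
    incrementally instead of calling pinv anew each time.)
    """
    rng = np.random.default_rng(rng)
    m, d = X.shape
    H = np.empty((m, 0))                        # hidden-layer output matrix
    while H.shape[1] < max_hidden:
        W = rng.standard_normal((d, step))      # random weights, new nodes
        b = rng.standard_normal(step)           # random biases, new nodes
        H = np.hstack([H, np.tanh(X @ W + b)])  # append their outputs
        beta = np.linalg.pinv(H) @ y            # least-squares output weights
        rmse = np.sqrt(np.mean((H @ beta - y) ** 2))
        if rmse < target_rmse:
            break
    return H.shape[1], beta, rmse
```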


Neural Computing and Applications | 2013

On extreme learning machine for ε-insensitive regression in the primal by Newton method

S. Balasundaram; Kapil

In this paper, an extreme learning machine (ELM) for the ε-insensitive error loss function-based regression problem, formulated in 2-norm as an unconstrained optimization problem in primal variables, is proposed. Since the objective function of this unconstrained optimization problem is not twice differentiable, the popular generalized Hessian matrix and smoothing approaches are considered, which lead to optimization problems whose solutions are determined using a fast Newton–Armijo algorithm. The main advantage of the algorithm is that a system of linear equations is solved at each iteration. In numerical experiments on a number of synthetic and real-world datasets, the results of the proposed method are compared with those of ELM using additive and radial basis function hidden nodes and of support vector regression (SVR) using the Gaussian kernel. Similar or better generalization performance on the test data, in comparable computational time, over ELM and SVR clearly illustrates its efficiency and applicability.
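A hedged sketch of the generalized-Hessian route for a 2-norm ε-insensitive objective of the kind described, taken here as f(w) = 0.5 ||w||^2 + (C/2) Σ max(|h_i·w - y_i| - ε, 0)^2 over a hidden-layer matrix H; the regularization placement and constants are assumptions. One step solves a single linear system, as the abstract notes (the Armijo line search that completes the Newton–Armijo scheme is sketched after the next abstract).

```python
import numpy as np

def newton_step_eps_insensitive(H, y, w, C=10.0, eps=0.1):
    """One generalized-Newton step for
        f(w) = 0.5 ||w||^2 + (C/2) * sum_i max(|h_i.w - y_i| - eps, 0)^2.

    Sketch only; the paper's exact objective and constants are assumed.
    The max-term is differentiable except on a measure-zero set, where a
    generalized Hessian (Clarke sense) stands in.
    """
    r = H @ w - y                                  # residuals
    viol = np.maximum(np.abs(r) - eps, 0.0)        # epsilon-tube violations
    grad = w + C * H.T @ (np.sign(r) * viol)       # gradient of f
    active = (np.abs(r) > eps).astype(float)       # generalized-Hessian mask
    Hess = np.eye(H.shape[1]) + C * (H.T * active) @ H
    return w - np.linalg.solve(Hess, grad)         # one linear system
```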


Neural Computing and Applications | 2010

On finite Newton method for support vector regression

S. Balasundaram; Ram Pal Singh

In this paper, we propose an iterative Newton method for solving an ε-insensitive support vector regression formulated as an unconstrained optimization problem. The proposed method has the advantage that the solution is obtained by solving a system of linear equations a finite number of times rather than by solving a quadratic optimization problem. Finite termination of the Newton method is proved for both linear and kernel support vector regression. Experiments were performed on IBM, Google, Citigroup and Sunspot time series; the proposed method converges in at most six iterations. The results, compared with those of the standard, least squares and smooth support vector regression methods and with the exact solutions, clearly demonstrate the effectiveness of the proposed method.
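Finite Newton schemes of this kind pair each Newton direction with an Armijo backtracking line search. Below is a generic sketch of that rule with illustrative constants (the paper's own objective and parameters are not reproduced); it composes with a Newton step like the one sketched after the previous abstract.

```python
import numpy as np

def armijo_step(f, grad_w, w, direction, sigma=1e-4, halve=0.5, max_halvings=30):
    """Backtracking (Armijo) line search: largest t in {1, 1/2, 1/4, ...}
    with  f(w + t*d) <= f(w) + sigma * t * grad(w).d .

    Generic sketch over NumPy vectors, with illustrative constants sigma
    and halve; finite-Newton methods apply it to the Newton direction d.
    """
    fw = f(w)
    slope = grad_w @ direction       # directional derivative, should be < 0
    t = 1.0
    for _ in range(max_halvings):
        if f(w + t * direction) <= fw + sigma * t * slope:
            return t
        t *= halve
    return t

# Tiny usage on f(w) = ||w||^2, whose Newton direction at w is -w:
w = np.array([3.0, -4.0])
print(armijo_step(lambda v: v @ v, 2 * w, w, -w))  # 1.0, full step accepted
```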


International Journal for Numerical Methods in Engineering | 1996

On a mixed-hybrid finite element method for anisotropic plate bending problems

Neela Nataraj; P. K. Bhattacharyya; S. Balasundaram; S. Gopalsamy

The paper deals with a new mixed-hybrid finite element method for anisotropic elastic plate bending problems, which simultaneously approximates the bending and twisting moment tensor and the deflection field in the interior of each triangle, and the traces of the displacement field and its normal derivative on the sides of each triangle. Efficient computer implementation procedures for this mixed-hybrid scheme have been developed. Results of numerical experiments on different anisotropic plates of practical and research interest are given.


International Journal of Knowledge-based and Intelligent Engineering Systems | 2013

Smooth Newton method for implicit Lagrangian twin support vector regression

S. Balasundaram; Muhammad Tanveer

A new smoothing approach for the implicit Lagrangian twin support vector regression is proposed in this paper. Our formulation leads to solving a pair of unconstrained quadratic programming problems of smaller size than in classical support vector regression, and their solutions are obtained using a Newton–Armijo algorithm. This approach has the advantage that a system of linear equations is solved in each iteration of the algorithm. Numerical experiments on several synthetic and real-world datasets are performed, and the results and training times are compared with both support vector regression and twin support vector regression to verify the effectiveness of the proposed method.

Collaboration


Top co-authors of S. Balasundaram and their affiliations.

Deepak Gupta, Jawaharlal Nehru University
Kapil, Jawaharlal Nehru University
Yogendra Meena, Jawaharlal Nehru University
Muhammad Tanveer, Jawaharlal Nehru University
Gagandeep Benipal, Jawaharlal Nehru University
N. Kapil, Jawaharlal Nehru University
Neela Nataraj, Indian Institute of Technology Bombay