Muhammad Tanveer
Jawaharlal Nehru University
Publication
Featured research published by Muhammad Tanveer.
Cognitive Computation | 2015
Muhammad Tanveer
In this paper, we propose a new linear programming formulation of the exact 1-norm twin support vector machine (TWSVM) for classification, whose solution is obtained by solving a pair of dual exterior penalty problems as unconstrained minimization problems using the Newton-Armijo algorithm. The idea of our formulation is to reformulate TWSVM as a strongly convex problem by incorporating regularization techniques and then derive an exact 1-norm linear programming formulation for TWSVM to improve robustness and sparsity. Solving the two modified unconstrained minimization problems reduces to solving just two systems of linear equations, as opposed to the two quadratic programming problems in TWSVM and TBSVM, which leads to an extremely simple and fast algorithm. One significant advantage of our proposed method is that it implements the structural risk minimization principle, whereas the primal problems of TWSVM, owing to their complex structure, consider only empirical risk and may therefore suffer from overfitting and suboptimal performance in some cases. Our approach has the further advantage that a pair of matrix equations, each of order equal to the number of input examples, is solved at each iteration of the algorithm. The algorithm converges from any starting point and can be easily implemented in MATLAB without any optimization packages. Computational comparisons of our proposed method against the original TWSVM, GEPSVM and SVM have been made on both synthetic and benchmark datasets. Experimental results show that our method is better than or comparable to these methods in both computation time and classification accuracy.
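The Newton-Armijo scheme invoked above (a Newton step followed by Armijo backtracking) recurs throughout these papers. Below is a minimal Python/NumPy sketch of that scheme applied to a stand-in strongly convex quadratic, not the paper's actual dual exterior penalty problem; the names newton_armijo, A, b and c are illustrative assumptions.

```python
# Newton-Armijo sketch on a stand-in objective
#   f(u) = 0.5*||A u - b||^2 + 0.5*c*||u||^2   (NOT the paper's penalty problem).
import numpy as np

def newton_armijo(A, b, c=0.1, tol=1e-8, max_iter=50):
    f = lambda u: 0.5 * np.sum((A @ u - b) ** 2) + 0.5 * c * (u @ u)
    u = np.zeros(A.shape[1])              # any starting point works for strongly convex f
    H = A.T @ A + c * np.eye(A.shape[1])  # Hessian (constant for this quadratic stand-in)
    for _ in range(max_iter):
        g = A.T @ (A @ u - b) + c * u     # gradient
        if np.linalg.norm(g) < tol:
            break
        d = np.linalg.solve(H, -g)        # Newton direction: one linear system per iteration
        t = 1.0
        while f(u + t * d) > f(u) + 1e-4 * t * (g @ d):
            t *= 0.5                      # Armijo backtracking line search
        u = u + t * d
    return u

A = np.random.randn(100, 5); b = np.random.randn(100)
w = newton_armijo(A, b)                   # stand-in solution vector
```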
Neural Computing and Applications | 2013
S. Balasundaram; Muhammad Tanveer
In this paper, a simple and linearly convergent Lagrangian support vector machine algorithm for the dual of the twin support vector regression (TSVR) is proposed. Although at the outset the algorithm requires matrix inverses, it is shown that these can be obtained by subtracting from the identity matrix a scalar multiple of the inverse of a positive semi-definite matrix that arises in the original formulation of TSVR. The algorithm can be easily implemented and does not need any optimization packages. To demonstrate its effectiveness, experiments were performed on well-known synthetic and real-world datasets. The similar or better generalization performance of the proposed method, obtained in less training time than the standard and twin support vector regression methods, clearly exhibits its suitability and applicability.
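For context, the fixed-point update at the heart of Lagrangian SVM-style solvers can be sketched as follows. This is the generic LSVM iteration for a dual problem of the form min over u >= 0 of 0.5 u'Qu - e'u with Q positive definite; it is an assumed illustration and does not reproduce the paper's TSVR-specific matrices or its identity-minus-inverse computation.

```python
# Generic LSVM-style fixed-point iteration for min_{u >= 0} 0.5*u'Qu - e'u.
import numpy as np

def lagrangian_iteration(Q, e, alpha, tol=1e-6, max_iter=1000):
    Qinv = np.linalg.inv(Q)              # inverse needed once, up front
    u = Qinv @ e                         # starting point
    for _ in range(max_iter):
        # (x)_+ = max(x, 0) projects the shifted residual onto the nonnegative orthant
        u_new = Qinv @ (e + np.maximum((Q @ u - e) - alpha * u, 0.0))
        if np.linalg.norm(u_new - u) < tol:
            return u_new
        u = u_new
    return u

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
Q = M @ M.T + np.eye(5)                  # symmetric positive definite
u = lagrangian_iteration(Q, np.ones(5), alpha=1.0)
```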
Knowledge-Based Systems | 2016
Muhammad Tanveer; K. Shubham; Mujahed Al-Dhaifallah; Shen-Shyang Ho
- Our RKNNWTSVR implements the structural risk minimization principle by introducing extra regularization terms in each objective function.
- Our RKNNWTSVR not only helps to alleviate overfitting and improve generalization performance but also introduces invertibility in the dual formulation.
- The square of the 2-norm of the vector of slack variables is used in RKNNWTSVR to make the objective functions strongly convex.
- Four algorithms are designed to solve the proposed RKNNWTSVR.
- The solution reduces to solving just two systems of linear equations, which makes our RKNNWTSVR extremely simple and efficient.
- No external optimizer is necessary for solving the RKNNWTSVR formulation.

In general, pattern classification and regression tasks do not take into account variation in the importance of the training samples. For twin support vector regression (TSVR), this implies that all training samples play the same role in determining the bound functions. However, the number of samples in the close neighborhood of each training sample has an effect on the bound functions. In this paper, we formulate a regularized version of the KNN-based weighted twin support vector regression (KNNWTSVR), called RKNNWTSVR, which is both efficient and effective. By introducing a regularization term and replacing the 1-norm of the slack variables with the 2-norm, our RKNNWTSVR only needs to solve a simple system of linear equations with low computational cost and, at the same time, improves generalization performance. In particular, we compare four implementations of RKNNWTSVR with existing approaches. Experimental results on several synthetic and benchmark datasets indicate that, compared to SVR, WSVR, TSVR and KNNWTSVR, our proposed RKNNWTSVR has better generalization ability and requires less computational time.
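The KNN-based weighting that RKNNWTSVR inherits from KNNWTSVR can be illustrated roughly as below. The particular rule (counting how often each sample appears among the k nearest neighbors of the others) is an assumption made for illustration and need not match the paper's exact weight definition.

```python
# Assumed KNN-based sample weighting: samples in dense regions, i.e. samples
# that are often someone else's nearest neighbor, receive larger weights.
import numpy as np

def knn_weights(X, k=5):
    m = X.shape[0]
    # pairwise squared Euclidean distances
    D = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    np.fill_diagonal(D, np.inf)          # a sample is not its own neighbor
    counts = np.zeros(m)
    for i in range(m):
        nbrs = np.argsort(D[i])[:k]      # indices of the k nearest neighbors of x_i
        counts[nbrs] += 1                # x_j gains weight each time it is someone's neighbor
    return counts / k                    # normalized weights

X = np.random.randn(50, 3)
w = knn_weights(X)                       # larger w for samples with many close neighbors
```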
Fixed Point Theory and Applications | 2010
Mohammad Imdad; Muhammad Tanveer; M Hasan
Employing the common property (E.A), we prove some common fixed point theorems for weakly compatible mappings via an implicit relation in Menger PM spaces. Some results along similar lines, satisfying a quasicontraction condition as well as a -type contraction condition, are also proved in Menger PM spaces. Our results substantially improve the corresponding theorems contained in Branciari (2002), Rhoades (2003) and Vijayaraju et al. (2005), as well as some others in Menger and metric spaces. Some related results are also derived, besides furnishing illustrative examples.
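For readers unfamiliar with the hypothesis used above, the common property (E.A) can be recalled as follows, stated here in its metric-space form; the paper works with the analogous notion in Menger PM spaces, where the limits are taken in the probabilistic sense.

```latex
% Common property (E.A), metric-space form, recalled for context only.
\textbf{Definition.} Two pairs $(A,S)$ and $(B,T)$ of self-mappings of a
metric space $(X,d)$ are said to share the \emph{common property (E.A)} if
there exist sequences $\{x_n\},\{y_n\}$ in $X$ and a point $t \in X$ such that
\[
  \lim_{n\to\infty} Ax_n \;=\; \lim_{n\to\infty} Sx_n \;=\;
  \lim_{n\to\infty} By_n \;=\; \lim_{n\to\infty} Ty_n \;=\; t .
\]
```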
Applied Intelligence | 2016
Muhammad Tanveer; K. Shubham; Mujahed Al-Dhaifallah; Kottakkaran Sooppy Nisar
Twin support vector regression (TSVR) and Lagrangian TSVR (LTSVR) implement only the empirical risk minimization principle. Moreover, the matrices in their formulations are merely positive semi-definite. To overcome these problems, we propose an efficient implicit Lagrangian formulation for the dual regularized twin support vector regression, called IRLTSVR for short. By introducing a regularization term into each objective function, the matrices in the optimization problems of our IRLTSVR become positive definite, and the formulation implements the structural risk minimization principle. Moreover, the 1-norm of the vector of slack variables is replaced with the 2-norm to make the objective functions strongly convex. Our IRLTSVR solves two systems of linear equations instead of the two quadratic programming problems (QPPs) of TSVR or the one large QPP of SVR, which makes the learning speed of IRLTSVR faster than that of TSVR and SVR. In particular, we compare three implementations of IRLTSVR with existing approaches. Computational results on several synthetic and real-world benchmark datasets clearly indicate the effectiveness and applicability of IRLTSVR in comparison to SVR, TSVR and LTSVR.
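The invertibility point made above is essentially the ridge-style observation that adding cI to a positive semi-definite matrix yields a positive definite one, so each solve becomes a single linear system. A small Python/NumPy sketch with assumed, illustrative names (H, f, c):

```python
# Regularization turns the PSD matrix H'H into the positive definite
# H'H + c*I, so a single linear solve replaces a QPP.
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((100, 10))       # stand-in augmented data matrix, e.g. [A  e]
f = rng.standard_normal(100)             # stand-in right-hand side built from the targets

c = 0.1                                  # regularization parameter (a model choice)
G = H.T @ H + c * np.eye(H.shape[1])     # strictly positive definite, hence invertible
w = np.linalg.solve(G, H.T @ f)          # one linear system instead of a QPP
```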
Knowledge and Information Systems | 2015
Muhammad Tanveer
In this paper, a new unconstrained minimization problem formulation is proposed for linear programming twin support vector machine (TWSVM) classifiers. The proposed formulation leads to two smaller-sized unconstrained minimization problems whose objective functions are piecewise differentiable. Since these objective functions contain the non-smooth “plus” function, two new smoothing approaches are adopted to solve the proposed formulation, after which the Newton-Armijo algorithm is applied. The idea of our formulation is to reformulate TWSVM as a strongly convex problem by incorporating regularization techniques and then derive a smooth 1-norm linear programming formulation for TWSVM to improve robustness. One significant advantage of our proposed algorithm over TWSVM is that the structural risk minimization principle is implemented in the primal problems, which embodies the essence of statistical learning theory. In addition, solving the two modified unconstrained minimization problems reduces to solving just two systems of linear equations, as opposed to the two quadratic programming problems in TWSVM and TBSVM, which leads to an extremely simple and fast algorithm. Our approach has the further advantage that a pair of matrix equations, each of order equal to the number of input examples, is solved at each iteration of the algorithm. The algorithm converges from any starting point and can be easily implemented in MATLAB without any optimization packages. The performance of our proposed method is verified experimentally on several benchmark and synthetic datasets. Experimental results show the effectiveness of our methods in both training time and classification accuracy.
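A standard way to smooth the plus function (x)_+ = max(x, 0), due to Lee and Mangasarian, is sketched below. Whether this is one of the two smoothing approaches adopted in the paper is not specified here, so treat it purely as an illustration of the smoothing idea.

```python
# Smooth approximation of the plus function:
#   p(x, a) = x + (1/a) * log(1 + exp(-a*x)),
# which is twice differentiable and approaches (x)_+ as a grows.
import numpy as np

def smooth_plus(x, a=5.0):
    # np.logaddexp(0, -a*x) computes log(1 + exp(-a*x)) without overflow
    return x + np.logaddexp(0.0, -a * x) / a

x = np.linspace(-2, 2, 9)
print(np.maximum(x, 0.0))        # exact plus function
print(smooth_plus(x, a=5.0))     # smooth approximation
print(smooth_plus(x, a=50.0))    # tighter approximation for larger a
```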
International Journal of Knowledge-based and Intelligent Engineering Systems | 2013
S. Balasundaram; Muhammad Tanveer
A new smoothing approach for the implicit Lagrangian twin support vector regression is proposed in this paper. Our formulation leads to solving a pair of unconstrained quadratic programming problems of smaller size than in the classical support vector regression, and their solutions are obtained using the Newton-Armijo algorithm. This approach has the advantage that a system of linear equations is solved in each iteration of the algorithm. Numerical experiments on several synthetic and real-world datasets are performed, and the results and training times are compared with those of both support vector regression and twin support vector regression to verify the effectiveness of the proposed method.
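As a complement to the regression abstracts above, the TSVR prediction step itself is simple to state: two bound functions are fit and the final regressor is their mean. A sketch with assumed names (the weights w1, b1, w2, b2 would come from a solver such as the one described above):

```python
# TSVR prediction: average the down-bound and up-bound regressors.
import numpy as np

def tsvr_predict(X, w1, b1, w2, b2):
    f1 = X @ w1 + b1                 # down-bound regressor f1(x)
    f2 = X @ w2 + b2                 # up-bound regressor f2(x)
    return 0.5 * (f1 + f2)           # end regressor: the mean of the two bounds

X = np.random.randn(8, 3)
y_hat = tsvr_predict(X, np.ones(3), -0.5, np.ones(3), 0.5)
```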
International Journal of Machine Learning and Cybernetics | 2015
Muhammad Tanveer
In this paper, we propose an implicit Lagrangian twin support vector machine (TWSVM) classifier by formulating a pair of unconstrained minimization problems (UMPs) in dual variables, whose solutions are obtained using a finite Newton method. The advantage of the generalized Hessian approach is that solving our modified UMPs reduces to solving just two systems of linear equations, as opposed to the two quadratic programming problems in TWSVM and TBSVM, which leads to an extremely simple and fast algorithm. Unlike the classical TWSVM and least squares TWSVM (LSTWSVM), the structural risk minimization principle is implemented by adding a regularization term to the primal problems of our proposed algorithm. This embodies the essence of statistical learning theory. Computational comparisons of our proposed method against GEPSVM, TWSVM, STWSVM and LSTWSVM have been made on both synthetic and well-known real-world benchmark datasets. Experimental results show that our method performs significantly better in both computation time and classification accuracy.
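The generalized Hessian device mentioned above can be sketched on a stand-in piecewise-quadratic objective (not the paper's exact UMP): the plus function has no classical second derivative, so the Newton matrix is assembled from the 0/1 indicator of the currently active terms. All names below are illustrative.

```python
# Finite Newton sketch for f(u) = 0.5*||(A u - b)_+||^2 + 0.5*c*||u||^2,
# using the generalized Hessian built from diag(Au - b > 0).
import numpy as np

def finite_newton(A, b, c=0.1, tol=1e-8, max_iter=50):
    n = A.shape[1]
    u = np.zeros(n)
    for _ in range(max_iter):
        r = np.maximum(A @ u - b, 0.0)               # (Au - b)_+
        g = A.T @ r + c * u                          # gradient of f
        if np.linalg.norm(g) < tol:
            break
        D = np.diag((A @ u - b > 0).astype(float))   # generalized derivative of (.)_+
        H = A.T @ D @ A + c * np.eye(n)              # generalized Hessian, positive definite
        u = u + np.linalg.solve(H, -g)               # one linear system per iteration
    return u

A = np.random.randn(40, 6); b = np.random.randn(40)
u = finite_newton(A, b)
```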
International Journal of Advanced Intelligence Paradigms | 2012
S. Balasundaram; Muhammad Tanveer
A new approach for classification problems, called the proximal bilateral-weighted fuzzy support vector machine, is proposed, wherein each input example is treated as belonging to both the positive and negative classes with different fuzzy memberships. Treating every input example as belonging to both classes is well justified in real-world applications. For example, in credit risk assessment a customer cannot always be assumed to be absolutely good or bad, since he may sometimes default and sometimes pay his debt, and may therefore be treated as belonging to both classes. Our formulation leads to solving a system of linear equations whose size equals the number of input examples. Computational comparisons of the proposed method on publicly available datasets, including two credit risk analysis datasets, with the standard, proximal and bilateral-weighted fuzzy support vector machine methods clearly demonstrate its efficiency and usefulness.
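The bilateral-weighting idea can be illustrated as below: every example receives a fuzzy membership in each of the two classes. The particular membership rule (based on distances to the two class centroids) is an assumption for illustration, not the paper's prescription.

```python
# Assumed bilateral fuzzy memberships: each example belongs to both classes,
# with weights derived here from distances to the two class centroids.
import numpy as np

def bilateral_memberships(X, y):
    mu_pos = X[y == 1].mean(axis=0)
    mu_neg = X[y == -1].mean(axis=0)
    d_pos = np.linalg.norm(X - mu_pos, axis=1)
    d_neg = np.linalg.norm(X - mu_neg, axis=1)
    m_pos = d_neg / (d_pos + d_neg)      # closer to the positive centroid -> higher m_pos
    m_neg = d_pos / (d_pos + d_neg)      # the two memberships sum to 1
    return m_pos, m_neg

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(1, 1, (30, 2)), rng.normal(-1, 1, (30, 2))])
y = np.array([1] * 30 + [-1] * 30)
m_pos, m_neg = bilateral_memberships(X, y)   # every example belongs to both classes
```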
Journal of Intelligent and Fuzzy Systems | 2014
Muhammad Tanveer
In the recent articles, Khan and Sumitra (Appl. Math. Sci., 5(29) (2011), 1421-1430) and Singh et al. (Int. J. Math. Anal., 5(27) (2011), 1301-1308) claim a fuzzy version of a common fixed point theorem of Bouhadjera and Godet-Thobie [3]. The results of the aforementioned articles contain flaws and are not correct in their present form. We provide some examples to demonstrate that this claim is false unless some additional conditions are imposed. Our note is intended to complete the interesting results in the quoted papers.