Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Yuan-Hai Shao is active.

Publication


Featured research published by Yuan-Hai Shao.


IEEE Transactions on Neural Networks | 2011

Improvements on Twin Support Vector Machines

Yuan-Hai Shao; Chunhua Zhang; Xiao-Bo Wang; Nai-Yang Deng

For classification problems, the generalized eigenvalue proximal support vector machine (GEPSVM) and the twin support vector machine (TWSVM) are regarded as milestones in the development of SVMs, as they use nonparallel hyperplane classifiers. In this brief, we propose an improved version of TWSVM, named the twin bounded support vector machine (TBSVM). The significant advantage of our TBSVM over TWSVM is that the structural risk minimization principle is implemented by introducing a regularization term. This embodies the essence of statistical learning theory, so the modification can improve classification performance. In addition, the successive overrelaxation technique is used to solve the optimization problems, speeding up the training procedure. Experimental results show the effectiveness of our method in both computation time and classification accuracy, further confirming this conclusion.
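The successive overrelaxation step can be illustrated on a generic box-constrained dual QP of the form min 0.5*a'Qa - e'a subject to 0 <= a <= c, which is the general shape such duals take. The matrix Q and bound c below are placeholders rather than the exact TBSVM dual, so read this as a minimal sketch of the technique, not the paper's implementation.

```python
import numpy as np

def sor_box_qp(Q, c, omega=1.0, tol=1e-6, max_iter=1000):
    """Projected SOR for: min 0.5*a'Qa - e'a  s.t.  0 <= a <= c.

    Sweeps coordinates Gauss-Seidel style with relaxation factor omega,
    clipping each update back into the box so the iterate stays feasible.
    """
    n = Q.shape[0]
    a = np.zeros(n)
    for _ in range(max_iter):
        a_prev = a.copy()
        for i in range(n):
            grad_i = Q[i] @ a - 1.0          # partial derivative at a
            a[i] = np.clip(a[i] - omega * grad_i / Q[i, i], 0.0, c)
        if np.linalg.norm(a - a_prev) < tol:
            break
    return a

# Toy usage with a small positive definite Q; in TBSVM it is the added
# regularization term that makes the dual matrices positive definite.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
alpha = sor_box_qp(M @ M.T + np.eye(5), c=1.0)
```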


Pattern Recognition | 2012

Least squares recursive projection twin support vector machine for classification

Yuan-Hai Shao; Nai-Yang Deng; Zhi-Min Yang

In this paper we formulate a least squares version of the recently proposed projection twin support vector machine (PTSVM) for binary classification. This formulation leads to an extremely simple and fast algorithm, called the least squares projection twin support vector machine (LSPTSVM), for generating binary classifiers. Unlike PTSVM, we add a regularization term, ensuring that the optimization problems in our LSPTSVM are positive definite and yield better generalization ability. Instead of solving two dual problems, as is usual, we solve two modified primal problems via two systems of linear equations, whereas PTSVM needs to solve two quadratic programming problems along with two systems of linear equations. Our experiments on publicly available datasets indicate that LSPTSVM has classification accuracy comparable to that of PTSVM but with remarkably less computation time.
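To see why the least-squares variant reduces to linear algebra, note that replacing inequality constraints with equalities lets the slack variables be eliminated, so each plane comes from a single regularized least-squares solve. The objective below is a generic least-squares twin-plane construction, not the exact LSPTSVM formulation; the names and constants are illustrative.

```python
import numpy as np

def ls_twin_plane(A, B, c, reg=1e-6):
    """Fit one plane (w, b) of a generic least-squares twin classifier.

    Minimizes 0.5*||[A e]u||^2 + (c/2)*||[B e]u + e||^2 over u = (w, b):
    class A should lie close to the plane, class B far on one side.
    Setting the gradient to zero gives a single linear system.
    """
    H = np.hstack([A, np.ones((A.shape[0], 1))])
    G = np.hstack([B, np.ones((B.shape[0], 1))])
    n = H.shape[1]
    lhs = H.T @ H + c * (G.T @ G) + reg * np.eye(n)  # reg keeps it
    rhs = -c * (G.T @ np.ones(G.shape[0]))           # positive definite
    u = np.linalg.solve(lhs, rhs)
    return u[:-1], u[-1]

# Toy usage: one solve per class, no quadratic programming involved.
rng = np.random.default_rng(1)
A = rng.normal(loc=-2.0, size=(40, 2))
B = rng.normal(loc=+2.0, size=(40, 2))
w1, b1 = ls_twin_plane(A, B, c=1.0)
w2, b2 = ls_twin_plane(B, A, c=1.0)
```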


Knowledge-Based Systems | 2013

A regularization for the projection twin support vector machine

Yuan-Hai Shao; Zhen Wang; Wei-Jie Chen; Nai-Yang Deng

For the recently proposed projection twin support vector machine (PTSVM) [1], we propose a simpler and, from a theoretical point of view, more reasonable variant, called the projection twin support vector machine with a regularization term (RPTSVM for short). Note that only empirical risk minimization is implemented in the primal problems of PTSVM, which can incur singularity problems. Our RPTSVM reformulates the primal problems by adding a maximum-margin regularization term; the singularity is therefore avoided and the regularized risk principle is implemented. In addition, nonlinear classification, which PTSVM ignores, is also considered in RPTSVM. Further, a successive overrelaxation technique and a genetic algorithm are introduced to solve the optimization problems and to perform parameter selection, respectively. Computational comparisons of RPTSVM against the original PTSVM, TWSVM and MVSVM indicate that RPTSVM obtains better generalization than the others.
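The singularity issue the regularization term addresses is concrete: the normal-equation matrices that arise can be rank-deficient (for example, with more features than samples or collinear columns), and a small ridge term restores a unique solution. A minimal sketch, with a deliberately rank-deficient matrix:

```python
import numpy as np

# A deliberately rank-deficient design: the second column is twice the
# first, so H'H is singular and the plain normal equations break down.
H = np.array([[1.0, 2.0],
              [2.0, 4.0]])
rhs = np.array([1.0, 2.0])

M = H.T @ H
print(np.linalg.matrix_rank(M))                 # 1: singular

eps = 1e-3                                      # small ridge term
u = np.linalg.solve(M + eps * np.eye(2), rhs)   # now uniquely solvable
```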


Pattern Recognition | 2014

An efficient weighted Lagrangian twin support vector machine for imbalanced data classification

Yuan-Hai Shao; Wei-Jie Chen; Jing-Jing Zhang; Zhen Wang; Nai-Yang Deng

In this paper, we propose an efficient weighted Lagrangian twin support vector machine (WLTSVM) for imbalanced data classification, based on using different training points for constructing the two proximal hyperplanes. The main contributions of our WLTSVM are: (1) a graph-based under-sampling strategy is introduced to keep the proximity information, which is robust to outliers; (2) weight biases are embedded in the Lagrangian TWSVM formulations, which overcomes the bias phenomenon of the original TWSVM on imbalanced data; (3) the convergence of the training procedure of the Lagrangian functions is proven; and (4) it is tested and compared with other TWSVMs on synthetic and real datasets to show its feasibility and efficiency for imbalanced data classification.
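One plausible reading of the graph-based under-sampling step (an assumption on our part, not the paper's exact procedure): build a k-nearest-neighbor graph on the majority class and keep the points that other points select most often as neighbors, so isolated outliers, which are rarely selected, are discarded first. The values of k and the kept fraction below are illustrative.

```python
import numpy as np

def knn_undersample(X_major, k=5, keep_frac=0.5):
    """Keep majority points with high in-degree in the k-NN graph.

    In-degree counts how often a point appears among the k nearest
    neighbors of other points; isolated outliers score low and are
    dropped first, which preserves the class's proximity structure.
    """
    n = X_major.shape[0]
    dist = np.linalg.norm(X_major[:, None] - X_major[None, :], axis=2)
    np.fill_diagonal(dist, np.inf)              # exclude self-matches
    knn = np.argsort(dist, axis=1)[:, :k]       # row i: i's k neighbors
    in_degree = np.bincount(knn.ravel(), minlength=n)
    keep = np.argsort(in_degree)[::-1][: int(keep_frac * n)]
    return X_major[keep]

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(size=(95, 2)),             # dense cluster
               rng.normal(loc=8.0, size=(5, 2))])    # far-off points
X_kept = knn_undersample(X)
```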


Neural Computing and Applications | 2013

An ε-twin support vector machine for regression

Yuan-Hai Shao; Chunhua Zhang; Zhi-Min Yang; Ling Jing; Nai-Yang Deng

This study proposes a new regressor, ε-twin support vector regression (ε-TSVR), based on TSVR. ε-TSVR determines a pair of ε-insensitive proximal functions by solving two related SVM-type problems. Unlike TSVR, where only empirical risk minimization is implemented, the structural risk minimization principle is implemented by introducing a regularization term into the primal problems of our ε-TSVR, yielding dual problems that are stable positive definite quadratic programming problems, which can improve regression performance. In addition, the successive overrelaxation technique is used to solve the optimization problems, speeding up the training procedure. Experimental results on both artificial and real datasets show that, compared with the popular ε-SVR, LS-SVR and TSVR, our ε-TSVR achieves a remarkable improvement in generalization performance with short training time.
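For orientation, twin-style regressors of this family fit a pair of functions that bracket the data, commonly an ε-insensitive down-bound f1 and up-bound f2, and predict with their mean. The sketch below shows only that final combination step, with the two fitted linear functions assumed given; whether ε-TSVR uses exactly this combination is an assumption here.

```python
import numpy as np

def twin_regressor_predict(X, w1, b1, w2, b2):
    """Combine a fitted down-bound f1 and up-bound f2 into an estimate.

    The pair of epsilon-insensitive proximal functions brackets the
    training targets; the final regression value is their mean.
    """
    f1 = X @ w1 + b1
    f2 = X @ w2 + b2
    return 0.5 * (f1 + f2)
```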


International Journal of Machine Learning and Cybernetics | 2014

Laplacian smooth twin support vector machine for semi-supervised classification

Wei-Jie Chen; Yuan-Hai Shao; Ning Hong

The Laplacian twin support vector machine (Lap-TSVM) is a state-of-the-art nonparallel-planes semi-supervised classifier. It exploits the geometric information embedded in unlabeled data to boost its generalization ability. However, Lap-TSVM may incur a heavy computational burden during training, since it needs to solve two quadratic programming problems (QPPs) involving a matrix inversion operation. In order to enhance the performance of Lap-TSVM, this paper presents a new formulation, termed Lap-STSVM. Rather than solving two QPPs in the dual space, we first convert the primal constrained QPPs of Lap-TSVM into unconstrained minimization problems (UMPs). Afterwards, a smoothing technique is introduced to make these UMPs twice differentiable. Finally, a fast Newton–Armijo algorithm is designed to solve the UMPs in Lap-STSVM. Experimental evaluation on both artificial and real-world datasets demonstrates the benefits of the proposed approach.
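The smoothing step in this line of work typically replaces the non-differentiable plus function max(x, 0) with a twice-differentiable surrogate such as p(x, alpha) = x + log(1 + exp(-alpha*x)) / alpha, and the Armijo part is a backtracking line search along the Newton direction. Both pieces are sketched generically below; the objective they would be applied to is not shown, and the surrogate choice is a common convention rather than a detail taken from this abstract.

```python
import numpy as np

def smooth_plus(x, alpha=5.0):
    """Twice-differentiable surrogate for the plus function max(x, 0).

    p(x, alpha) = x + log(1 + exp(-alpha*x)) / alpha approaches
    max(x, 0) as alpha grows; logaddexp keeps it stable for large |x|.
    """
    return x + np.logaddexp(0.0, -alpha * x) / alpha

def armijo_step(f, grad, x, d, beta=0.5, sigma=1e-4):
    """Backtracking (Armijo) line search along a descent direction d."""
    t, fx, slope = 1.0, f(x), grad(x) @ d
    while f(x + t * d) > fx + sigma * t * slope and t > 1e-10:
        t *= beta                      # shrink until sufficient decrease
    return x + t * d
```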


Pattern Recognition | 2013

A GA-based model selection for smooth twin parametric-margin support vector machine

Zhen Wang; Yuan-Hai Shao; Tie-Ru Wu

The recently proposed twin parametric-margin support vector machine (TPMSVM) achieves good generalization and is suitable for many noise cases. However, TPMSVM requires solving two dual quadratic programming problems (QPPs). Moreover, compared with the support vector machine (SVM), TPMSVM has at least four regularization parameters that need tuning, which limits its practical application. In this paper, we improve the efficiency of TPMSVM in two respects. First, by introducing a quadratic function, we directly optimize the pair of QPPs of TPMSVM in the primal space, an approach called STPMSVM for short. Compared with solving the two dual QPPs of TPMSVM, STPMSVM clearly improves the training speed without loss of generalization. Second, a genetic algorithm (GA)-based model selection for STPMSVM in the primal space is suggested. The GA-based STPMSVM not only selects the parameters efficiently, but also provides discriminative feature selection. Computational results on several synthetic as well as benchmark datasets confirm the great improvement in the training process of our GA-based STPMSVM.
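A GA-based model selection loop can be sketched as: encode the hyperparameters as real vectors (here in log10 space), score each candidate with a user-supplied fitness such as cross-validated accuracy, and evolve the population by tournament selection, blend crossover, and Gaussian mutation. Everything below, including the population size, rates, and the fitness hook, is an illustrative placeholder rather than the paper's configuration.

```python
import numpy as np

def ga_select(fitness, dim, pop=20, gens=30, lo=-5.0, hi=5.0, seed=0):
    """Tiny real-coded GA for hyperparameter search.

    Candidates are log10-parameter vectors; `fitness` maps one to a
    score to maximize (e.g. cross-validated accuracy of the model
    trained with those parameters).
    """
    rng = np.random.default_rng(seed)
    P = rng.uniform(lo, hi, size=(pop, dim))
    for _ in range(gens):
        scores = np.array([fitness(p) for p in P])
        children = []
        for _ in range(pop):
            picks = []
            for _ in range(2):                       # two tournaments
                i, j = rng.integers(pop, size=2)
                picks.append(P[i] if scores[i] >= scores[j] else P[j])
            w = rng.uniform(size=dim)                # blend crossover
            child = w * picks[0] + (1 - w) * picks[1]
            child += rng.normal(scale=0.3, size=dim) # Gaussian mutation
            children.append(np.clip(child, lo, hi))
        P = np.array(children)
    scores = np.array([fitness(p) for p in P])
    return 10.0 ** P[np.argmax(scores)]              # best parameters
```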


Knowledge-Based Systems | 2012

Probabilistic outputs for twin support vector machines

Yuan-Hai Shao; Nai-Yang Deng; Zhi-Min Yang; Wei-Jie Chen; Zhen Wang

In many cases, the output of a classifier should be a calibrated posterior probability to enable post-processing. However, twin support vector machines (TWSVM) do not provide such probabilities. In this paper, we propose a TWSVM probability model, called PTWSVM, to estimate the posterior probability. Note that our model is quite different from the SVM probability model because of the different mechanisms of TWSVM and SVM. In fact, for TWSVM, we first define a new continuous ranking output by comparing the distances between a sample and the two nonparallel hyperplanes, and then map this continuous output to a probability by training the parameters of an additional sigmoid function. PTWSVM has been tested on both artificial datasets and several data-mining-style datasets, and the numerical experiments indicate that it yields good results.
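The sigmoid-fitting step is Platt-style calibration: fit P(y = +1 | s) = 1 / (1 + exp(A*s + B)) to a continuous score s by minimizing the negative log-likelihood over A and B. The sketch below assumes scores and labels in {-1, +1} are already available (the specific distance-based ranking output PTWSVM defines is not reproduced here) and uses plain gradient descent.

```python
import numpy as np

def fit_sigmoid(s, y, lr=0.1, steps=2000):
    """Platt-style calibration: fit P(y=+1|s) = 1/(1 + exp(A*s + B)).

    Minimizes the negative log-likelihood over (A, B) by gradient
    descent; s is the continuous classifier output, y in {-1, +1}.
    """
    A, B = 0.0, 0.0
    t = (y + 1) / 2.0                      # targets in {0, 1}
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(A * s + B))
        A -= lr * np.mean((t - p) * s)     # dNLL/dA averaged over data
        B -= lr * np.mean(t - p)           # dNLL/dB
    return A, B

# Usage: probabilities for new scores s_new are
# 1.0 / (1.0 + np.exp(A * s_new + B)).
```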


IEEE Signal Processing Letters | 2013

Improved Generalized Eigenvalue Proximal Support Vector Machine

Yuan-Hai Shao; Nai-Yang Deng; Wei-Jie Chen; Zhen Wang

In this letter, we propose an improved version of the generalized eigenvalue proximal support vector machine (GEPSVM), called IGEPSVM for short. The main improvements are: (1) the generalized eigenvalue decomposition is replaced by a standard eigenvalue decomposition, resulting in simpler optimization problems without possible singularity; (2) an extra meaningful parameter is introduced, resulting in stronger generalization ability. Experimental results on both artificial and benchmark datasets show that IGEPSVM is superior to GEPSVM in both computation time and classification accuracy.
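Structurally, the first improvement swaps a Rayleigh-quotient ratio problem (a generalized eigenproblem) for the standard eigenproblem obtained by minimizing w'(G - nu*H)w over unit vectors, whose minimizer is the eigenvector of G - nu*H for the smallest eigenvalue. The sketch below shows only that linear-algebra step; G, H, and nu are placeholders, and the exact IGEPSVM objective may differ in its details.

```python
import numpy as np

def min_eigvec(G, H, nu):
    """Minimize w'(G - nu*H)w over unit vectors w.

    For a symmetric matrix, the minimizer of the quadratic form over
    the unit sphere is the eigenvector of the smallest eigenvalue, so
    one standard eigen-decomposition replaces the generalized one.
    """
    vals, vecs = np.linalg.eigh(G - nu * H)   # eigenvalues ascending
    return vecs[:, 0]
```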


Applied Intelligence | 2013

Least squares twin parametric-margin support vector machine for classification

Yuan-Hai Shao; Zhen Wang; Wei-Jie Chen; Nai-Yang Deng

In this paper, we propose a novel least squares twin parametric-margin support vector machine for binary classification, called LSTPMSVM for short, based on the twin parametric-margin support vector machine (TPMSVM). LSTPMSVM solves two modified primal problems of TPMSVM, instead of the two dual problems usually solved. Solving the two modified primal problems reduces to solving just two systems of linear equations, as opposed to the two quadratic programming problems along with two systems of linear equations in TPMSVM, which leads to an extremely simple and fast algorithm. Classification using a nonlinear kernel with a reduced kernel technique also leads to systems of linear equations. Therefore, our LSTPMSVM can handle large datasets accurately without any external optimizers. Further, a particle swarm optimization (PSO) algorithm is introduced to perform parameter selection. Our experiments on synthetic as well as several benchmark datasets indicate that LSTPMSVM has classification accuracy comparable to that of TPMSVM but with remarkably less computation time.
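The PSO step can be sketched in the same spirit as the GA sketch earlier: particles move through (log-)parameter space with velocities pulled toward each particle's personal best and the swarm's global best. Again, the fitness hook and all constants below are placeholders, not values from the paper.

```python
import numpy as np

def pso_select(fitness, dim, n=20, iters=30, lo=-5.0, hi=5.0, seed=0):
    """Tiny PSO for hyperparameter search over log10-parameter vectors.

    Velocities blend inertia with pulls toward each particle's personal
    best and the swarm's global best; `fitness` is maximized.
    """
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, size=(n, dim))
    V = np.zeros((n, dim))
    pbest = X.copy()
    pscore = np.array([fitness(x) for x in X])
    gbest = pbest[np.argmax(pscore)]
    for _ in range(iters):
        r1, r2 = rng.uniform(size=(2, n, dim))
        V = 0.7 * V + 1.5 * r1 * (pbest - X) + 1.5 * r2 * (gbest - X)
        X = np.clip(X + V, lo, hi)
        score = np.array([fitness(x) for x in X])
        better = score > pscore
        pbest[better], pscore[better] = X[better], score[better]
        gbest = pbest[np.argmax(pscore)]
    return 10.0 ** gbest
```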

Collaboration


Dive into Yuan-Hai Shao's collaboration.

Top Co-Authors

Nai-Yang Deng, China Agricultural University
Wei-Jie Chen, Zhejiang University of Technology
Zhen Wang, Inner Mongolia University
Chun-Na Li, Zhejiang University of Technology
Zhi-Min Yang, Zhejiang University of Technology
Lan Bai, Inner Mongolia University
Xiang-Yu Hua, Zhejiang University of Technology
Li-Ming Liu, Capital University of Economics and Business
Ming-Zeng Liu, Dalian University of Technology