Chun-Na Li
Zhejiang University of Technology
Publications
Featured research published by Chun-Na Li.
Neural Networks | 2015
Chun-Na Li; Yuan-Hai Shao; Nai-Yang Deng
In this paper, we propose an L1-norm two-dimensional linear discriminant analysis (L1-2DLDA) with robust performance. Unlike the conventional two-dimensional linear discriminant analysis with L2-norm (L2-2DLDA), where the optimization problem is transformed into a generalized eigenvalue problem, the optimization problem in our L1-2DLDA is solved by a simple, justifiable iterative technique, and its convergence is guaranteed. Because the L1-norm is used, our L1-2DLDA is more robust to outliers and noise than L2-2DLDA. This is supported by our preliminary experiments on a toy example and on face datasets, which show the improvement of L1-2DLDA over L2-2DLDA.
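The abstract only says the L1-norm objective is handled by a simple iterative technique. As a rough illustration of how such L1-norm discriminant directions can be found, the sketch below runs subgradient ascent on the L1 between-class / within-class dispersion ratio for vector data and a single direction. It is a simplified stand-in, not the authors' two-dimensional (image-matrix) algorithm, and the function name, step size, and stopping rule are assumptions.

```python
import numpy as np

def l1_discriminant_direction(X, y, n_iter=100, lr=0.05, seed=0):
    """Subgradient ascent on the L1-norm between/within dispersion ratio.
    Simplified single-direction, vector-data illustration of the iterative idea."""
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    m = X.mean(axis=0)
    means = {c: X[y == c].mean(axis=0) for c in classes}
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        # L1 dispersions along the current direction
        between = sum(np.sum(y == c) * abs(w @ (means[c] - m)) for c in classes)
        within = sum(np.abs((X[y == c] - means[c]) @ w).sum() for c in classes)
        # subgradients of numerator and denominator (signs frozen at the current w)
        g_b = sum(np.sum(y == c) * np.sign(w @ (means[c] - m)) * (means[c] - m)
                  for c in classes)
        g_w = sum((np.sign((X[y == c] - means[c]) @ w)[:, None]
                   * (X[y == c] - means[c])).sum(axis=0) for c in classes)
        w += lr * (g_b * within - between * g_w) / (within ** 2 + 1e-12)
        w /= np.linalg.norm(w)
    return w
```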
Knowledge Based Systems | 2015
Yuan-Hai Shao; Wei-Jie Chen; Zhen Wang; Chun-Na Li; Nai-Yang Deng
In this paper, we formulate a twin-type support vector machine for large-scale classification problems, called the weighted linear loss twin support vector machine (WLTSVM). By introducing the weighted linear loss, WLTSVM only needs to solve simple linear equations, lowering the computational cost while maintaining generalization ability. It can therefore handle large-scale problems efficiently without any external optimizers. Experimental results on several benchmark datasets indicate that, compared with TWSVM, our WLTSVM achieves comparable classification accuracy with less computational time.
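To make the "linear equations instead of QPs" point concrete, here is a minimal sketch of the closely related least-squares twin SVM, whose two non-parallel planes come from two linear solves; WLTSVM's weighted linear loss changes the system being solved but follows the same recipe. The function names and the small ridge term are illustrative assumptions.

```python
import numpy as np

def ls_twin_svm_fit(A, B, c1=1.0, c2=1.0):
    """Two non-parallel planes from two linear solves (least-squares twin SVM,
    used here as a stand-in for the WLTSVM 'solve linear equations' idea).
    A holds the +1 class samples, B the -1 class samples (rows)."""
    eA, eB = np.ones((len(A), 1)), np.ones((len(B), 1))
    H, G = np.hstack([A, eA]), np.hstack([B, eB])   # augmented data matrices
    d = H.shape[1]
    reg = 1e-8 * np.eye(d)                          # ridge for numerical stability
    u1 = -c1 * np.linalg.solve(H.T @ H + c1 * G.T @ G + reg, G.T @ eB).ravel()
    u2 = c2 * np.linalg.solve(G.T @ G + c2 * H.T @ H + reg, H.T @ eA).ravel()
    return u1, u2                                   # each is [w; b] for one plane

def ls_twin_svm_predict(X, u1, u2):
    """Assign each point to the class whose plane it lies closer to."""
    Xe = np.hstack([X, np.ones((len(X), 1))])
    d1 = np.abs(Xe @ u1) / np.linalg.norm(u1[:-1])
    d2 = np.abs(Xe @ u2) / np.linalg.norm(u2[:-1])
    return np.where(d1 <= d2, 1, -1)
```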
Pattern Recognition | 2016
Wei-Jie Chen; Yuan-Hai Shao; Chun-Na Li; Nai-Yang Deng
The multi-label learning paradigm, which aims at dealing with data associated with multiple potential labels, has attracted a great deal of attention in the machine intelligence community. In this paper, we propose a novel multi-label twin support vector machine (MLTSVM) for multi-label classification. MLTSVM determines multiple nonparallel hyperplanes to capture the multi-label information embedded in the data, a useful extension of the twin support vector machine (TWSVM) to multi-label classification. To speed up training, an efficient successive overrelaxation (SOR) algorithm is developed for solving the quadratic programming problems (QPPs) involved in MLTSVM. Extensive experimental results on both synthetic and real-world multi-label datasets confirm the feasibility and effectiveness of the proposed MLTSVM.
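The SOR solver mentioned above applies to box-constrained duals of the form min 0.5 a^T Q a - c^T a with 0 <= a <= C, which is the shape TWSVM-style QPPs take. A minimal projected-SOR sweep (not the paper's exact multi-label formulation) might look like this; the tolerance and relaxation factor are illustrative.

```python
import numpy as np

def sor_box_qp(Q, c, C, omega=1.0, n_sweeps=200, tol=1e-6):
    """Projected successive overrelaxation for
        min 0.5 * a^T Q a - c^T a   s.t.  0 <= a_i <= C,
    with Q symmetric positive semidefinite and positive diagonal."""
    a = np.zeros(len(c))
    for _ in range(n_sweeps):
        a_old = a.copy()
        for i in range(len(c)):
            # coordinate-wise SOR step, projected back onto the box
            g_i = Q[i] @ a - c[i]
            a[i] = np.clip(a[i] - omega * g_i / Q[i, i], 0.0, C)
        if np.linalg.norm(a - a_old) < tol:
            break
    return a
```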
Optimization | 2016
Chun-Na Li; Yuan-Hai Shao; Nai-Yang Deng
In this paper, we propose a robust L1-norm non-parallel proximal support vector machine (L1-NPSVM), which aims at robust binary classification in contrast to GEPSVM, especially for problems with outliers. The proposed L1-NPSVM has three main properties. First, unlike the traditional GEPSVM, which solves two generalized eigenvalue problems, our L1-NPSVM solves a pair of L1-norm optimization problems by a simple, justifiable iterative technique. Second, by introducing the L1-norm, our L1-NPSVM is considerably more robust to outliers than GEPSVM. Third, compared with GEPSVM, no regularization parameter needs to be chosen in L1-NPSVM. The effectiveness of the proposed method is demonstrated on a simple artificial example as well as on several UCI datasets, showing its improvement over GEPSVM.
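As with the L1-2DLDA paper above, the computational core is an iterative scheme for an L1-norm ratio. A hedged sketch for one of the two planes: the plane should be close (in L1 distance) to its own class and far from the other, so one can ascend the ratio of the two L1 terms by subgradient steps. The update below is an illustration under that reading, not necessarily the paper's exact iteration.

```python
import numpy as np

def l1_proximal_plane(A, B, n_iter=200, lr=0.01, seed=0):
    """Subgradient ascent on  ||B w + e b||_1 / ||A w + e b||_1  so that the
    plane (w, b) lies near class A and away from class B."""
    rng = np.random.default_rng(seed)
    H = np.hstack([A, np.ones((len(A), 1))])   # own class, augmented with bias
    G = np.hstack([B, np.ones((len(B), 1))])   # other class, augmented with bias
    u = rng.standard_normal(H.shape[1])
    u /= np.linalg.norm(u)
    for _ in range(n_iter):
        num = np.abs(G @ u).sum()              # L1 distance of the other class
        den = np.abs(H @ u).sum() + 1e-12      # L1 distance of the own class
        g_num = G.T @ np.sign(G @ u)
        g_den = H.T @ np.sign(H @ u)
        u += lr * (g_num * den - num * g_den) / den ** 2
        u /= np.linalg.norm(u)
    return u[:-1], u[-1]                       # (w, b) of the proximal plane
```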
Applied Mathematics and Computation | 2017
Ya-Fen Ye; Yuan-Hai Shao; Nai-Yang Deng; Chun-Na Li; Xiang-Yu Hua
Highlights:
- Lp-norm least squares support vector regression (Lp-LSSVR) is proposed for feature selection in regression.
- Using the absolute constraint and the Lp-norm regularization term, Lp-LSSVR is robust against outliers.
- Lp-LSSVR ensures that useful features are selected, based on theoretical analysis.
- Lp-LSSVR only solves a series of linear equations, leading to fast training.

In this paper, we propose a novel algorithm, robust Lp-norm least squares support vector regression (Lp-LSSVR), that is more robust than the traditional least squares support vector regression (LS-SVR). Using the absolute constraint and the Lp-norm regularization term, our Lp-LSSVR is robust against outliers. Moreover, although the optimization problem is non-convex, the sparsity of the Lp-norm solution together with a lower-bound technique for the nonzero components ensures that useful features are selected by Lp-LSSVR and helps to find a local optimum. Experimental results show that Lp-LSSVR is more robust than LS-SVR; owing to its equality constraint it is much faster than Lp-norm support vector regression (Lp-SVR) and SVR, although slower than LS-SVR and L1-norm support vector regression (L1-SVR); and it is as effective as Lp-SVR, L1-SVR, LS-SVR and SVR in both feature selection and regression.
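The "series of linear equations" claim is the key computational point. One standard way to obtain it for an Lp penalty with 0 < p <= 1 is iteratively reweighted least squares, sketched below on a plain regression objective. This illustrates the reweighted-linear-solve idea under a simplified objective of my own, not the paper's exact Lp-LSSVR formulation (which also carries the absolute/equality constraint).

```python
import numpy as np

def lp_penalized_regression(X, y, lam=0.1, p=0.5, n_iter=30, eps=1e-6):
    """Iteratively reweighted least squares for
        min ||y - X w||^2 + lam * sum_j |w_j|^p,   0 < p <= 1.
    Each iteration is one linear solve; small |w_j| are driven toward zero,
    which is the feature-selection effect the Lp penalty is used for.
    Assumes X and y are centered so the bias term can be omitted."""
    n, d = X.shape
    w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)   # ridge start
    for _ in range(n_iter):
        # quadratic upper bound of |w_j|^p at the current iterate
        D = lam * (p / 2.0) * (w ** 2 + eps) ** (p / 2.0 - 1.0)
        w = np.linalg.solve(X.T @ X + np.diag(D), X.T @ y)
    return w
```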
Information Sciences | 2016
Zhen Wang; Yuan-Hai Shao; Lan Bai; Chun-Na Li; Li-Ming Liu; Nai-Yang Deng
Linear discriminant analysis (LDA) and its extensions form a group of classical methods for dimensionality reduction in supervised learning. However, when some classes are far away from the others, LDA may fail to find the optimal direction because of the averaged between-class scatter. Moreover, LDA-type methods are time-consuming for high-dimensional problems, since a generalized eigenvalue problem must be solved. In this paper, a multiple between-class linear discriminant analysis (MBLDA) is proposed for dimensionality reduction. MBLDA finds the transformation directions by approximating the solution of a min-max programming problem, leading to good separability in the reduced space and a fast learning speed on high-dimensional problems. It is proved theoretically that the proposed method can handle the special generalized eigenvalue problem by solving an underdetermined homogeneous system of linear equations. Experimental results on artificial and benchmark datasets show that MBLDA not only reduces the dimension while retaining strong discriminative power but also has a fast learning speed.
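For contrast with the classical route MBLDA is designed to avoid, here is a minimal sketch of standard multi-class LDA via the generalized eigenvalue problem Sb v = lambda Sw v. This is the baseline, not MBLDA's min-max formulation; the small ridge added to Sw is an assumption for numerical stability.

```python
import numpy as np
from scipy.linalg import eigh

def lda_directions(X, y, n_components):
    """Classical multi-class LDA: solve Sb v = lambda Sw v and keep the leading
    eigenvectors -- the generalized eigenvalue step MBLDA avoids."""
    classes, counts = np.unique(y, return_counts=True)
    m = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))   # within-class scatter
    Sb = np.zeros((d, d))   # between-class scatter
    for c, n_c in zip(classes, counts):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += n_c * np.outer(mc - m, mc - m)
    # ridge keeps Sw positive definite when d is large relative to n
    vals, vecs = eigh(Sb, Sw + 1e-6 * np.eye(d))
    order = np.argsort(vals)[::-1]
    return vecs[:, order[:n_components]]
```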
Pattern Recognition | 2018
Yuan-Hai Shao; Chun-Na Li; Ming-Zeng Liu; Zhen Wang; Nai-Yang Deng
Highlights:
- We propose an Lq-norm LS-SVM with feature selection for small-sample-size problems.
- Feature selection is achieved effectively by minimizing the Lq-norm of the weight vector.
- The number of selected features can be adjusted by choosing the parameters.
- An efficient, globally convergent iterative algorithm is introduced to solve the primal problem.
- Experimental results show its feasibility and efficiency.

Least squares support vector machine (LS-SVM) is a popular hyperplane-based classifier and has attracted much attention. However, it may suffer from singularity or ill-conditioning for the small sample size (SSS) problem, where the sample size is much smaller than the number of features of a data set. Feature selection is an effective way to address this problem. Motivated by this, we propose a sparse Lq-norm least squares support vector machine (Lq-norm LS-SVM) with 0 < q < 1, in which feature selection is achieved by minimizing the Lq-norm of the weight vector and the primal problem is solved by an efficient, globally convergent iterative algorithm.
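A hedged sketch of how an Lq penalty with 0 < q < 1 yields sparsity, and hence feature selection, through a sequence of linear solves, here attached to a least-squares classification loss. The reweighting scheme, the unpenalized bias, and the selection threshold are my assumptions, not the paper's exact algorithm.

```python
import numpy as np

def lq_ls_classifier(X, y, lam=1.0, q=0.5, n_iter=30, eps=1e-6, select_tol=1e-4):
    """Iteratively reweighted least squares for
        min  lam * sum_j |w_j|^q  +  ||y - (X w + b)||^2,   y in {-1, +1}.
    Each step reduces to one linear solve; near-zero weights mark dropped features."""
    n, d = X.shape
    Xe = np.hstack([X, np.ones((n, 1))])                  # append bias column
    beta = np.linalg.solve(Xe.T @ Xe + 1e-3 * np.eye(d + 1), Xe.T @ y)
    for _ in range(n_iter):
        w = beta[:-1]
        # quadratic upper bound of |w_j|^q at the current iterate (bias unpenalized)
        D = np.append(lam * (q / 2.0) * (w ** 2 + eps) ** (q / 2.0 - 1.0), 0.0)
        beta = np.linalg.solve(Xe.T @ Xe + np.diag(D), Xe.T @ y)
    selected = np.flatnonzero(np.abs(beta[:-1]) > select_tol)
    return beta[:-1], beta[-1], selected                  # weights, bias, kept features
```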
Knowledge Based Systems | 2018
Ming-Zeng Liu; Yuan-Hai Shao; Zhen Wang; Chun-Na Li; Wei-Jie Chen
In this paper, by introducing statistics of the training data into support vector regression (SVR), we propose a minimum deviation distribution regression (MDR). Rather than minimizing only the structural risk, MDR also minimizes both the regression deviation mean and the regression deviation variance, which makes it better able to cope with differently distributed boundary data and noise. Minimizing these first- and second-order statistics leads to a strongly convex quadratic programming problem (QPP). An efficient dual coordinate descent algorithm is adopted for small-sample problems, and an averaged stochastic gradient algorithm for large-scale ones. Both theoretical analysis and experimental results illustrate the efficiency and effectiveness of the proposed method.
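The abstract does not spell out the objective, so the sketch below is only a guess at its shape: a regularizer plus the mean and variance of the absolute regression deviations, minimized here by plain subgradient descent rather than the paper's dual coordinate descent or averaged SGD solvers. The symbols lam1, lam2 and the use of absolute deviations are assumptions.

```python
import numpy as np

def mdr_like_fit(X, y, lam1=1.0, lam2=1.0, n_iter=500, lr=1e-3):
    """Subgradient descent on  0.5*||w||^2 + lam1*mean(d) + lam2*var(d),
    where d_i = |y_i - w.x_i - b| is the (assumed) regression deviation."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(n_iter):
        r = y - X @ w - b                     # signed residuals
        dev = np.abs(r)
        s = -np.sign(r)                       # d(dev_i)/d(w.x_i + b)
        # gradients of the deviation mean and variance w.r.t. (w, b)
        g_mean_w = (s[:, None] * X).mean(axis=0)
        g_mean_b = s.mean()
        centred = dev - dev.mean()
        g_var_w = 2.0 * (centred[:, None] * s[:, None] * X).mean(axis=0)
        g_var_b = 2.0 * (centred * s).mean()
        w -= lr * (w + lam1 * g_mean_w + lam2 * g_var_w)
        b -= lr * (lam1 * g_mean_b + lam2 * g_var_b)
    return w, b
```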
Information Sciences | 2018
Zhen Wang; Yuan-Hai Shao; Lan Bai; Chun-Na Li; Li-Ming Liu; Nai-Yang Deng
For the large-scale classification problem, the stochastic gradient descent method PEGASOS has been successfully applied to support vector machines (SVMs). In this paper, we propose a stochastic gradient twin support vector machine (SGTSVM) based on the twin support vector machine (TWSVM). Compared to PEGASOS, our method is insensitive to stochastic sampling. Furthermore, we prove the convergence of SGTSVM and the approximation between TWSVM and SGTSVM under uniform sampling, whereas PEGASOS converges only almost surely and only has a probabilistic chance of approximating the SVM solution. In addition, we extend SGTSVM to nonlinear classification problems via the kernel trick. Experiments on artificial and publicly available datasets show that our method has stable performance and can handle large-scale problems easily.
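A hedged sketch of a stochastic-gradient update for one of the two twin planes (kept close to its own class and pushed a unit distance away from the other): at each step one sample is drawn from each class and a subgradient step is taken on a squared-proximity plus hinge objective. The sampling scheme, step size, and constants are illustrative assumptions, not the exact SGTSVM updates.

```python
import numpy as np

def sg_twin_plane(A, B, c1=1.0, n_iter=2000, seed=0):
    """Stochastic subgradient descent for a TWSVM-style plane objective
        0.5 * E_a[(w.a + b)^2] + c1 * E_b[max(0, 1 + w.x_b + b)],
    where A holds the plane's own class and B the other class."""
    rng = np.random.default_rng(seed)
    w, b = np.zeros(A.shape[1]), 0.0
    for t in range(1, n_iter + 1):
        xa = A[rng.integers(len(A))]       # one sample from the "own" class
        xb = B[rng.integers(len(B))]       # one sample from the "other" class
        eta = 1.0 / t                      # decaying step size
        g_w = (w @ xa + b) * xa            # gradient of the proximity term
        g_b = (w @ xa + b)
        if 1.0 + w @ xb + b > 0.0:         # hinge term is active
            g_w = g_w + c1 * xb
            g_b = g_b + c1
        w -= eta * g_w
        b -= eta * g_b
    return w, b
```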
Neurocomputing | 2018
Wei-Jie Chen; Chun-Na Li; Yuan-Hai Shao; Ju Zhang; Nai-Yang Deng
The recently proposed multi-weight vector projection support vector machine (MVSVM) is an effective multi-projection classifier. However, the formulation of MVSVM is based on the L2-norm criterion, which makes it sensitive to outliers. To alleviate this issue, in this paper we propose a robust L1-norm MVSVM, termed MVSVM-L1. Specifically, our MVSVM-L1 seeks a pair of multiple projections such that, for each class, it maximizes the ratio of the L1-norm between-class dispersion to the L1-norm within-class dispersion. To optimize this L1-norm ratio problem, a simple but efficient iterative algorithm is presented, and its convergence is analyzed theoretically. Extensive experimental results on both synthetic and real-world datasets confirm the feasibility and effectiveness of the proposed MVSVM-L1.
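The per-class iteration can be sketched in the same spirit as the L1-ratio procedures above: for each class, ascend the ratio of the L1-norm between-class dispersion to the L1-norm within-class dispersion of the projections. Below is a compact single-direction version; it is my simplification, whereas the paper seeks multiple weight vectors per class and proves convergence of its own iteration.

```python
import numpy as np

def l1_class_projection(X, y, target, n_iter=150, lr=0.02, seed=0):
    """One L1-norm projection for class `target`: push other-class samples away
    from the class mean while keeping own-class samples close, both in L1-norm."""
    rng = np.random.default_rng(seed)
    own, other = X[y == target], X[y != target]
    mc = own.mean(axis=0)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        between = np.abs((other - mc) @ w).sum()        # L1 between-class dispersion
        within = np.abs((own - mc) @ w).sum() + 1e-12   # L1 within-class dispersion
        g_b = (other - mc).T @ np.sign((other - mc) @ w)
        g_w = (own - mc).T @ np.sign((own - mc) @ w)
        w += lr * (g_b * within - between * g_w) / within ** 2
        w /= np.linalg.norm(w)
    return w
```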