Publication


Featured research published by Shuisheng Zhou.


Applied Mathematics and Computation | 2008

A smoothing trust-region Newton-CG method for minimax problem

Feng Ye; Hongwei Liu; Shuisheng Zhou; Sanyang Liu

This paper presents a smooth approximation method, based on a new smoothing technique and a standard unconstrained minimization algorithm, for solving finite minimax problems. The new smooth approximation replaces the original objective only in neighborhoods of the kink points with a twice continuously differentiable function; its gradient and Hessian matrix are combinations of the first- and second-order derivatives of the original functions, respectively. Compared with other smooth functions such as the exponential penalty function, the remarkable advantage of the new smooth function is that the combination coefficients of its gradient and Hessian matrix are sparse. Furthermore, the maximal possible gap between the optimal values of the smooth approximate problem and the original one is controlled by a fixed parameter selected in advance. An algorithm that solves the equivalent unconstrained problem with a trust-region Newton conjugate gradient method is proposed. Finally, numerical examples compare the proposed algorithm with the SQP algorithm implemented in the MATLAB toolbox and with the algorithm of [E. Polak, J.O. Royset, R.S. Womersley, Algorithms with adaptive smoothing for finite minimax problems, Journal of Optimization Theory and Applications 119 (3) (2003) 459-484] based on the exponential penalty function; the numerical results show that the proposed algorithm is efficient.
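As a hypothetical illustration of this smoothing idea (not the paper's exact construction, and only C^1 rather than twice continuously differentiable), the two-term maximum max(a, b) = (a + b + |a - b|)/2 can be smoothed by replacing |t| with a quadratic only on a neighborhood [-mu, mu] of its kink:

```python
def smooth_abs(t, mu):
    # Replace |t| only on the neighborhood [-mu, mu] of its kink at 0;
    # elsewhere the function is left untouched, so derivatives stay sparse.
    if abs(t) >= mu:
        return abs(t)
    return t * t / (2 * mu) + mu / 2

def smooth_max(a, b, mu):
    # max(a, b) = (a + b + |a - b|) / 2; only the |.| kink is smoothed.
    return 0.5 * (a + b + smooth_abs(a - b, mu))
```

Away from the kink the approximation is exact, and the worst-case gap to the true maximum is mu/4, mirroring the abstract's point that the approximation error is controlled by a parameter fixed in advance.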


Neurocomputing | 2009

Variant of Gaussian kernel and parameter setting method for nonlinear SVM

Shuisheng Zhou; Hongwei Liu; Feng Ye

This paper discusses the classification problem solved by the nonlinear support vector machine (SVM) with a kernel function. First, the stretching ratio is defined to analyze the performance of a kernel function, and a new type of kernel function is introduced by modifying the Gaussian kernel. The new kernel has several properties as good as or better than the Gaussian kernel's: its stretching ratio is always larger than 1, and its implicit kernel map magnifies the distance between vectors locally without enlarging the radius of the circumscribed hypersphere that encloses all mapped vectors in feature space, which may yield a larger margin. Second, two criteria are considered for choosing a good spread parameter for a given kernel function approximately and easily. One is a distance criterion, which minimizes the sum-square distance between each labeled training sample and its own class center while maximizing the sum-square distance to the other class center; this is equivalent to the well-known Fisher ratio. The other is an angle criterion, which minimizes the angle between the kernel matrix and the target matrix. A better criterion is then obtained by combining the two. Finally, experiments show that the proposed methods are efficient.
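The angle criterion can be sketched as kernel-target alignment: choose the spread parameter whose Gaussian kernel matrix forms the smallest angle with (largest cosine to) the target matrix T, T_ij = y_i y_j. A pure-Python illustration with hypothetical data (function names are ours, not the paper's):

```python
import math

def gaussian_kernel(X, sigma):
    # Gaussian kernel matrix K_ij = exp(-||x_i - x_j||^2 / (2 sigma^2)).
    n = len(X)
    K = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            d2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
            K[i][j] = math.exp(-d2 / (2 * sigma ** 2))
    return K

def alignment(K, y):
    # Cosine of the angle between K and the target matrix T_ij = y_i y_j.
    n = len(y)
    kt = sum(K[i][j] * y[i] * y[j] for i in range(n) for j in range(n))
    kk = math.sqrt(sum(K[i][j] ** 2 for i in range(n) for j in range(n)))
    tt = float(n)  # Frobenius norm of T is n, since (y_i y_j)^2 = 1
    return kt / (kk * tt)

def select_sigma(X, y, candidates):
    # Maximizing alignment is the same as minimizing the angle criterion.
    return max(candidates, key=lambda s: alignment(gaussian_kernel(X, s), y))
```

A tiny sigma drives K toward the identity and a huge sigma toward the all-ones matrix; the criterion favors the intermediate value whose kernel shows the class-block structure of T.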


Pattern Recognition Letters | 2010

Efficient nearest neighbor query based on extended B+-tree in high-dimensional space

Jiangtao Cui; Zhiyong An; Yong Guo; Shuisheng Zhou

Nearest neighbor queries in high-dimensional space are important in various applications. One-dimensional mapping is an efficient indexing method to speed up the k-nearest neighbor search: it transforms a high-dimensional point into a single-dimensional value indexed by a B^+-tree. In this paper, we present a new one-dimensional indexing scheme based on an extended B^+-tree for k-nearest neighbor search in high-dimensional space. We first partition the high-dimensional dataset and perform Principal Component Analysis on each partition. The distance of each point to the center of its partition is indexed using a B^+-tree, and the projection of each point on the first principal component is embedded into the leaf nodes of the B^+-tree. At query time, a new filtering strategy based on the spatial relationship between the query point and the axis determined by the first principal component is applied to improve query performance. We also present a novel k-nearest neighbor search algorithm that guarantees the accuracy of query results. Extensive experiments demonstrate the effectiveness of our approach.
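A distance-to-center index supports metric pruning via the triangle inequality: |d(q, c) - d(p, c)| <= d(q, p), so the stored one-dimensional value lower-bounds the true distance. A hypothetical pure-Python sketch of this filtering idea (a sorted list stands in for the B^+-tree; the paper's additional first-principal-component filter is omitted):

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nn_with_filter(query, points, center):
    # Precomputed index: distance of every point to the partition center
    # (this is the one-dimensional key the B+-tree would store, sorted).
    ring = sorted((dist(p, center), p) for p in points)
    dq = dist(query, center)
    best, best_d, pruned = None, float("inf"), 0
    for rp, p in ring:
        # Triangle inequality: d(query, p) >= |d(query, c) - d(p, c)|,
        # so the point cannot beat the current best if the bound exceeds it.
        if abs(dq - rp) >= best_d:
            pruned += 1
            continue
        d = dist(query, p)
        if d < best_d:
            best, best_d = p, d
    return best, best_d, pruned
```

The pruned candidates never incur a full high-dimensional distance computation, which is where the speedup comes from.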


Pattern Recognition Letters | 2007

Efficient high-dimensional indexing by sorting principal component

Jiangtao Cui; Shuisheng Zhou; Junding Sun

The vector approximation file (VA-file) approach is an efficient high-dimensional indexing method for image retrieval in large databases. Several extensions of the VA-file have been proposed to improve query performance; however, all of these methods rely on a sequential scan and must read the whole vector approximation file. In this paper, we present a new indexing structure based on the vector approximation method in which only a small part of the approximation file needs to be accessed. First, principal component analysis is used to map multidimensional points to a one-dimensional line. Then a B^+-tree is built to index the approximate vectors according to the principal component. When performing a k-nearest neighbor search, the partial distortion searching algorithm is used to reject improper approximate vectors, so only a small set of approximate vectors needs to be sequentially scanned, which reduces the CPU and I/O costs dramatically. Experimental results on large image databases show that the new approach provides faster search than other VA-file approaches.
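The partial distortion searching step can be sketched directly: while accumulating a candidate's squared distance dimension by dimension, the candidate is rejected as soon as the running sum exceeds the best distance found so far. A minimal illustration (not the paper's full VA-file pipeline):

```python
def pds_nearest(query, candidates):
    # Partial distortion search: abandon a candidate as soon as its
    # partial squared distance already exceeds the best seen so far.
    best_idx, best_d2 = -1, float("inf")
    for idx, c in enumerate(candidates):
        partial = 0.0
        for q, v in zip(query, c):
            partial += (q - v) ** 2
            if partial >= best_d2:
                break  # reject without finishing the sum
        else:
            # Loop completed: the full distance beats the current best.
            best_idx, best_d2 = idx, partial
    return best_idx, best_d2
```

In high dimensions most candidates are rejected after only a few terms, which is the CPU saving the abstract refers to.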


Pattern Recognition Letters | 2007

Semismooth Newton support vector machine

Shuisheng Zhou; Hongwei Liu; Lihua Zhou; Feng Ye

Support vector machines can be posed as quadratic programming problems in a variety of ways. This paper investigates the 2-norm soft-margin SVM with an additional quadratic penalty for the bias term, which leads to a positive definite quadratic program in feature space subject only to nonnegativity constraints. An unconstrained programming problem is proposed as the Lagrangian dual of this quadratic program for the linear classification problem. The resulting problem minimizes a differentiable convex piecewise-quadratic function of lower dimension in input space, and a semismooth Newton algorithm is introduced to solve it quickly, yielding the Semismooth Newton Support Vector Machine (SNSVM). After the kernel matrix is factorized by the Cholesky factorization or the incomplete Cholesky factorization, the nonlinear kernel classification problem can also be solved by SNSVM with no apparent increase in the complexity of the algorithm. Numerous experiments demonstrate that our algorithm is comparable with similar algorithms such as the Lagrangian Support Vector Machine (LSVM) and the Semismooth Support Vector Machine (SSVM).
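A minimal sketch of the generalized-Newton idea on the primal form of the 2-norm soft-margin SVM (hypothetical data and parameter names; the bias is folded in by augmenting samples with a constant feature, standing in for the quadratic bias penalty; this illustrates the semismooth Newton step, not the paper's exact dual formulation):

```python
import numpy as np

def snsvm(X, y, C=10.0, iters=50):
    # Minimize 0.5*||w||^2 + (C/2) * sum_i max(0, 1 - y_i * w.x_i)^2,
    # a differentiable convex piecewise-quadratic function. Its gradient is
    # piecewise linear, so a generalized (semismooth) Newton step applies.
    n, d = X.shape
    Xa = np.hstack([X, np.ones((n, 1))])     # constant feature carries the bias
    w = np.zeros(d + 1)
    for _ in range(iters):
        m = 1.0 - y * (Xa @ w)               # margins; m > 0 marks violators
        act = m > 0
        grad = w - C * Xa[act].T @ (y[act] * m[act])
        if np.linalg.norm(grad) < 1e-10:
            break
        # Generalized Hessian: identity plus C times the active samples' Gram.
        H = np.eye(d + 1) + C * Xa[act].T @ Xa[act]
        w = w - np.linalg.solve(H, grad)
    return w
```

Once the correct active set is identified, one Newton step lands on the exact minimizer of the corresponding quadratic piece, which is why such methods terminate quickly in practice.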


Optimization Methods & Software | 2009

A new iterative algorithm training SVM

Shuisheng Zhou; Hongwei Liu; Feng Ye; Lihua Zhou

Based on the geometric interpretation of the support vector machine (SVM), a new feasible direction (NFD) algorithm is first proposed as a generalization of Franc and Hlaváč's SK algorithm. In the new algorithm, two vertices of the training sets are selected to update the current solution at each iteration for the separable problem, whereas only one is used in the SK algorithm. Like the SK algorithm and Keerthi et al.'s nearest points algorithm (NPA), the proposed NFD can solve the inseparable problem with the L2 cost function. Furthermore, based on a geometric interpretation of ν-SVM and the definition of the reduced convex hull, the proposed algorithm is extended to train the ν-SVM with the common L1 cost function for the inseparable problem. The convergence of the algorithm is studied from a theoretical viewpoint. Computational experiments suggest that our algorithms are competitive with other SVM algorithms, including SK, NPA, Tao et al.'s generalized SK, and SMO; the number of iterations and the training time are reduced in many cases.
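The separable building block can be sketched with a classic SK/Gilbert-style iteration that finds the point of a convex hull nearest the origin by moving toward one vertex per step (the paper's NFD variant selects two vertices per iteration). A minimal illustrative version:

```python
def nearest_point_sk(points, iters=100):
    # w is kept inside the convex hull as a convex combination of vertices.
    w = list(points[0])
    for _ in range(iters):
        # Vertex minimizing <w, p>: the most promising descent direction.
        p = min(points, key=lambda q: sum(wi * qi for wi, qi in zip(w, q)))
        d = [wi - pi for wi, pi in zip(w, p)]
        dd = sum(di * di for di in d)
        if dd == 0.0:
            break
        # Exact line search on the segment [w, p], clipped to stay in the hull.
        t = max(0.0, min(1.0, sum(wi * di for wi, di in zip(w, d)) / dd))
        if t == 0.0:
            break  # no vertex offers improvement: w is optimal
        w = [(1 - t) * wi + t * pi for wi, pi in zip(w, p)]
    return w
```

In the SVM setting the "points" are (differences of) training vectors, and the norm of the returned point determines the margin.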


international conference on natural computation | 2007

The Variant of Gaussian Kernel and Its Model Selection Method

Shuisheng Zhou; Hongwei Liu; Feng Ye

The classification problem solved by the nonlinear support vector machine with a kernel function is discussed in this paper. The stretching ratio is defined in order to analyze the performance of a kernel function. A new type of kernel function is introduced by modifying the Gaussian kernel, and it has several properties as good as or better than the Gaussian kernel's. For example, the map of the new kernel function magnifies the distance between vectors locally, because the stretching ratio is always larger than one, without enlarging the radius of the circumscribed hypersphere that encloses all mapped vectors in feature space, which may yield a larger margin. Two criteria are proposed to choose a good spread parameter for a given kernel function approximately but easily. Experiments compare the classification performance of the proposed kernel function with that of the Gaussian kernel.


Neurocomputing | 2018

Sparse Algorithm for Robust LSSVM in Primal Space

Li Chen; Shuisheng Zhou

Thanks to its closed-form solution and its competitive performance compared with other types of SVMs, the least squares support vector machine (LSSVM) has been widely used for classification and regression problems. However, the LSSVM has two drawbacks: it is sensitive to outliers, and its solution lacks sparseness. The robust LSSVM (R-LSSVM) partially overcomes the first drawback via its nonconvex truncated loss function, but it does not address the second, because its current algorithms produce dense solutions that are inefficient for training large-scale problems. In this paper, we interpret the robustness of the R-LSSVM from a re-weighted viewpoint and derive a primal R-LSSVM using the representer theorem; the new model may admit a sparse solution. We then design a convergent sparse R-LSSVM (SR-LSSVM) algorithm that obtains a sparse solution of the primal R-LSSVM after computing a low-rank approximation of the kernel matrix. The new algorithm not only overcomes both drawbacks of the LSSVM simultaneously, it also has lower complexity than existing algorithms, making it very efficient for training large-scale problems. Numerous experimental results demonstrate that the SR-LSSVM achieves better or comparable performance to related algorithms in less training time, especially on large-scale problems.
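The re-weighted viewpoint can be illustrated with a toy kernel LSSVM regressor (no bias term, hypothetical parameters; this is not the paper's SR-LSSVM algorithm): a truncated quadratic loss behaves like 0/1 sample weights re-estimated from the residuals, so samples whose residual exceeds the truncation level are dropped from the linear system:

```python
import numpy as np

def reweighted_lssvm(x, y, gamma=1.0, sigma=0.1, trunc=5.0, iters=5):
    # RBF kernel matrix on 1-D inputs.
    K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * sigma ** 2))
    n = len(x)
    v = np.ones(n)                      # sample weights: 1 = inlier, 0 = outlier
    alpha = np.zeros(n)
    for _ in range(iters):
        keep = v > 0
        m = int(keep.sum())
        # LSSVM linear system restricted to currently trusted samples.
        a = np.linalg.solve(K[np.ix_(keep, keep)] + np.eye(m) / gamma, y[keep])
        alpha = np.zeros(n)
        alpha[keep] = a
        r = K @ alpha - y               # residuals of all samples
        v = (np.abs(r) <= trunc).astype(float)  # truncated loss -> reweighting
    return alpha, v
```

Because excluded samples get zero coefficients, the solution is supported only on the trusted samples, hinting at how robustness and sparsity can be obtained together.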


Computational Optimization and Applications | 2012

Rank-two update algorithms for the minimum volume enclosing ellipsoid problem

Wei-jie Cong; Hongwei Liu; Feng Ye; Shuisheng Zhou

We consider the problem of computing a (1+ε)-approximation to the minimum volume enclosing ellipsoid (MVEE) of a given set of m points in R^n. Based on the idea of the sequential minimal optimization (SMO) method, we develop a rank-two update algorithm. This algorithm computes an approximate solution to the dual optimization formulation of the MVEE problem, updating only two weights of the dual variable at each iteration. We establish that this algorithm computes a (1+ε)-approximation to the MVEE in O(mn^3/ε) operations and returns a core set of size O(n^2/ε) for ε ∈ (0,1). In addition, we give an extension of this rank-two update algorithm. Computational experiments show that the proposed algorithms are very efficient for solving large-scale problems with high accuracy.
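For context, here is a sketch of the classical Khachiyan/Frank-Wolfe rank-one weight update for the dual of the MVEE problem, which an SMO-style rank-two scheme refines by updating two weights at once (illustrative only; lifting each point to (x, 1) handles non-centered ellipsoids):

```python
import numpy as np

def mvee_weights(P, iters=1000):
    # P: m x n point set; work with lifted points q = (x, 1) in R^{n+1}.
    m, n = P.shape
    Q = np.hstack([P, np.ones((m, 1))])
    d = n + 1
    u = np.ones(m) / m                  # dual weights on the points
    for _ in range(iters):
        M = Q.T @ (u[:, None] * Q)      # sum_i u_i q_i q_i^T
        # kappa_i = q_i^T M^{-1} q_i; at the optimum max kappa equals d.
        kappa = np.einsum("ij,jk,ik->i", Q, np.linalg.inv(M), Q)
        j = int(np.argmax(kappa))
        if kappa[j] <= d + 1e-8:
            break                       # approximate optimality reached
        beta = (kappa[j] - d) / (d * (kappa[j] - 1))
        u = (1 - beta) * u              # shift weight toward the worst point
        u[j] += beta
    return u
```

Points whose weight decays to (near) zero are not needed to define the ellipsoid; the surviving support is the core set the abstract mentions.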


international conference on natural computation | 2016

Fast kernel fuzzy c-means algorithms based on difference of convex programming

Li Chen; Shuisheng Zhou; Xintao Gao

In this study, we propose three new algorithms based on difference-of-convex (DC) programming and the DC algorithm (DCA) for the kernel fuzzy c-means (KFCM) clustering model. First, the KFCM model is reformulated into two equivalent DC programs, for which different KFCM algorithms are designed. Then, to further accelerate the second DCA-based KFCM algorithm, we adopt an approximation strategy, whose effectiveness is demonstrated by experiments: cluster centers are constructed as linear combinations of a few randomly selected samples instead of all of them. To find a good initial point, we develop an alternating procedure combining KFCM and the proposed DCA-based algorithms. The proposed DCA-based KFCM algorithms are efficient because each iteration only requires computing the projection of points onto a sphere, which is inexpensive. Numerical results on several real-world datasets show that the proposed DCA-based algorithms are more efficient than the classical KFCM algorithm in terms of accuracy, running time, and iteration count.
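The DCA template behind these algorithms can be illustrated on a one-variable toy problem (not the KFCM model): split f = g - h with both parts convex, then repeatedly minimize g minus the linearization of h. For f(x) = (x^2 - 1)^2, take g(x) = x^4 + 1 and h(x) = 2x^2; the convex subproblem min_y y^4 - 4*x_k*y + 1 has the closed-form solution y = cbrt(x_k) (sign-preserving cube root):

```python
def dca_quartic(x0, iters=50):
    # f(x) = (x^2 - 1)^2 = g(x) - h(x) with g = x^4 + 1 and h = 2x^2,
    # both convex. DCA: linearize h at x_k (slope 4*x_k), minimize the rest.
    x = x0
    for _ in range(iters):
        # Stationarity of y^4 - 4*x*y + 1: 4y^3 = 4x, i.e. y = cbrt(x).
        x = abs(x) ** (1.0 / 3.0) * (1 if x >= 0 else -1)
    return x
```

Each update solves a convex subproblem exactly and monotonically decreases f, which is the same mechanism that makes the DCA-based KFCM iterations cheap and stable.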

Collaboration


Dive into Shuisheng Zhou's collaboration.

Top Co-Authors

Li Chen

Zhongyuan University of Technology
