
Publication


Featured research published by Zhiquan Qi.


Pattern Recognition | 2013

Robust twin support vector machine for pattern classification

Zhiquan Qi; Yingjie Tian; Yong Shi

In this paper, we propose a new robust twin support vector machine (called R-TWSVM) via second-order cone programming formulations for classification, which can deal efficiently with data containing measurement noise. Preliminary experiments confirm the robustness of the proposed method and its superiority to the traditional robust SVM in both computation time and classification accuracy. Remarkably, since our dual problems involve only inner products of the inputs, the kernel trick can be applied directly in nonlinear cases. At the same time, we do not need to compute any extra matrix inverses, which is fundamentally different from existing TWSVMs. In addition, we show that TWSVM is a special case of our robust model, and by degenerating R-TWSVM we derive a new dual form of TWSVM that overcomes the existing shortcomings of TWSVM.
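Twin-hyperplane classifiers such as R-TWSVM assign a new point to the class whose hyperplane lies nearer. A minimal sketch of that decision rule, assuming the two hyperplanes have already been fitted (the parameter values below are illustrative placeholders, not output of R-TWSVM):

```python
import numpy as np

def twin_svm_predict(X, w1, b1, w2, b2):
    """Assign each row of X to the class whose hyperplane is closer.

    Class +1 uses hyperplane (w1, b1), class -1 uses (w2, b2);
    distances are normalized by the hyperplane norms, as in
    TWSVM-style classifiers.
    """
    d1 = np.abs(X @ w1 + b1) / np.linalg.norm(w1)
    d2 = np.abs(X @ w2 + b2) / np.linalg.norm(w2)
    return np.where(d1 <= d2, 1, -1)

# Toy example: hyperplane 1 is the line x = y, hyperplane 2 is x = 3
X = np.array([[1.0, 1.0], [3.0, 0.0]])
w1, b1 = np.array([1.0, -1.0]), 0.0
w2, b2 = np.array([1.0, 0.0]), -3.0
pred = twin_svm_predict(X, w1, b1, w2, b2)   # [1, -1]
```

The point (1, 1) lies exactly on hyperplane 1 and so receives label +1; the point (3, 0) lies on hyperplane 2 and receives label -1.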


Neural Networks | 2012

Laplacian twin support vector machine for semi-supervised classification

Zhiquan Qi; Yingjie Tian; Yong Shi

Semi-supervised learning has attracted a great deal of attention in machine learning and data mining. In this paper, we propose a novel Laplacian Twin Support Vector Machine (called Lap-TSVM) for the semi-supervised classification problem, which can exploit the geometric information of the marginal distribution embedded in unlabeled data to construct a more reasonable classifier, and which is a useful extension of TSVM. Furthermore, by choosing appropriate parameters, Lap-TSVM degenerates to either TSVM or TBSVM. Experiments on synthetic and real data sets show that the Lap-TSVM classifier, built from two nonparallel hyperplanes, is superior to Lap-SVM and TSVM in both classification accuracy and computation time.
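Manifold-regularized methods such as Lap-TSVM build a graph Laplacian over the labeled and unlabeled points to capture the geometry of the marginal distribution. A minimal sketch of that construction, assuming a symmetrized k-nearest-neighbor graph with 0/1 weights (the paper's actual weighting scheme may differ):

```python
import numpy as np

def knn_laplacian(X, k=2):
    """Unnormalized graph Laplacian L = D - W over a symmetrized kNN graph."""
    n = len(X)
    # Pairwise squared Euclidean distances
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)          # exclude self-neighbors
    W = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(d2[i])[:k]:   # k nearest neighbors of point i
            W[i, j] = W[j, i] = 1.0       # symmetrize: undirected graph
    D = np.diag(W.sum(axis=1))            # degree matrix
    return D - W

# Two tight clusters: the Laplacian links points only within each cluster
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
L = knn_laplacian(X, k=1)
# Every row of L sums to zero, a defining property of graph Laplacians
```

The regularizer f' L f then penalizes classifiers whose outputs vary sharply between neighboring points, which is how unlabeled data influences the learned hyperplanes.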


IEEE Transactions on Systems, Man, and Cybernetics | 2014

Nonparallel Support Vector Machines for Pattern Classification

Yingjie Tian; Zhiquan Qi; Xuchan Ju; Yong Shi; Xiaohui Liu

We propose a novel nonparallel classifier, called the nonparallel support vector machine (NPSVM), for binary classification. Our NPSVM is fully different from existing nonparallel classifiers, such as the generalized eigenvalue proximal support vector machine (GEPSVM) and the twin support vector machine (TWSVM), and has several unmatched advantages: (1) the two primal problems implement the structural risk minimization principle; (2) the dual problems of these two primal problems enjoy the same advantages as those of standard SVMs, so the kernel trick can be applied directly, whereas existing TWSVMs must construct two additional primal problems for nonlinear cases based on approximate kernel-generated surfaces, and their nonlinear problems cannot degenerate to the linear case even when a linear kernel is used; (3) the dual problems have the same elegant formulation as those of standard SVMs and can be solved efficiently by the sequential minimal optimization (SMO) algorithm, whereas existing GEPSVM and TWSVMs are not suitable for large-scale problems; (4) NPSVM has the inherent sparseness of standard SVMs; (5) existing TWSVMs are special cases of the NPSVM when its parameters are appropriately chosen. Experimental results on many datasets show the effectiveness of our method in both sparseness and classification accuracy, further confirming the above conclusions. In some sense, our NPSVM is a new starting point for nonparallel classifiers.


Knowledge-Based Systems | 2013

Structural twin support vector machine for classification

Zhiquan Qi; Yingjie Tian; Yong Shi

It has been shown that the structural information of data may contain useful prior domain knowledge for training a classifier. How to apply the structural information of data to build a good classifier has recently become a research focus. All existing structural large-margin methods share the trait of folding all within-class structural information into one model. In consequence, these methods do not balance the structural relationships both within and between classes, so this prior information is not exploited sufficiently. In this paper, we design a new Structural Twin Support Vector Machine (called S-TWSVM). Unlike existing methods based on structural information, S-TWSVM uses two hyperplanes to decide the category of new data; each model considers only one class's structural information while staying close to that class and far from the other. This allows S-TWSVM to fully exploit the prior knowledge to directly improve the algorithm's generalization capacity. All experiments show that our proposed method is clearly superior to state-of-the-art algorithms based on structural information of data in both computation time and classification accuracy.


Neural Networks | 2012

Twin support vector machine with Universum data

Zhiquan Qi; Yingjie Tian; Yong Shi

The Universum, defined as samples belonging to neither class of the classification problem of interest, has been proved helpful in supervised learning. In this work, we design a new Twin Support Vector Machine with Universum (called U-TSVM), which can utilize Universum data to improve the classification performance of TSVM. Unlike U-SVM, U-TSVM places Universum data in a nonparallel insensitive loss tube using two hinge loss functions, which exploits the prior knowledge embedded in Universum data more flexibly. Empirical experiments demonstrate that U-TSVM directly improves the classification accuracy of standard TSVM, which uses the labeled data alone, and is superior to U-SVM in most cases.
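The core idea behind insensitive-tube losses on Universum data is that "neither-class" samples should score near zero and are only penalized once they leave a tube around the decision boundary. A toy sketch of an epsilon-insensitive hinge of that kind (illustrative only, not the exact loss formulation in U-TSVM):

```python
import numpy as np

def universum_hinge(scores, eps=0.1):
    """Epsilon-insensitive hinge: zero inside the tube |score| <= eps,
    growing linearly outside it. Losses of this kind encourage
    Universum samples to lie near the decision boundary.
    (Toy illustration, not the exact U-TSVM loss.)
    """
    return np.maximum(0.0, np.abs(scores) - eps)

s = np.array([0.05, -0.08, 0.5, -1.1])
loss = universum_hinge(s, eps=0.1)
# The first two samples sit inside the tube and incur zero loss
```

Samples scoring at 0.5 and -1.1 incur losses of 0.4 and 1.0 respectively, pulling the hyperplanes so that Universum points end up between the two classes.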


IEEE Transactions on Neural Networks | 2015

Successive Overrelaxation for Laplacian Support Vector Machine

Zhiquan Qi; Yingjie Tian; Yong Shi

The semisupervised learning (SSL) problem, which makes use of both a large amount of cheap unlabeled data and a few labeled data for training, has attracted a great deal of attention in machine learning and data mining in recent years. Exploiting manifold regularization (MR), Belkin et al. proposed a new semisupervised classification algorithm, Laplacian support vector machines (LapSVMs), which has shown state-of-the-art performance in the SSL field. To further improve LapSVMs, we propose a fast Laplacian SVM (FLapSVM) solver for classification. Compared with the standard LapSVM, our method has several advantages: 1) FLapSVM does not need to deal with the extra matrix or bear the computations related to variable switching, which makes it more suitable for large-scale problems; 2) FLapSVM's dual problem has the same elegant formulation as that of standard SVMs, meaning the kernel trick can be applied directly in the optimization model; and 3) FLapSVM can be solved effectively by successive overrelaxation (SOR) technology, which converges linearly to a solution and can process very large datasets that need not reside in memory. In practice, combining random scheduling of subproblems with two stopping conditions, FLapSVM's computing speed is considerably faster than that of LapSVM, and it is a valid alternative to PLapSVM.
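Successive overrelaxation sweeps through the variables one at a time, overshooting each Gauss-Seidel update by a relaxation factor omega in (0, 2). A minimal sketch of SOR on a small symmetric positive definite linear system (the FLapSVM solver applies the same idea to its dual problem with box constraints, which this sketch omits):

```python
import numpy as np

def sor_solve(A, b, omega=1.25, tol=1e-10, max_iter=500):
    """Successive overrelaxation for A x = b (A symmetric positive definite).

    Each sweep updates one coordinate at a time, overshooting the
    Gauss-Seidel step by the relaxation factor omega in (0, 2).
    """
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(len(b)):
            sigma = A[i] @ x - A[i, i] * x[i]    # sum over j != i
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old) < tol:      # stop when the sweep stalls
            break
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = sor_solve(A, b)
# x satisfies A @ x ≈ b
```

Because each update touches only one coordinate and one matrix row, the sweep can stream over data that does not fit in memory, which is the property the abstract highlights.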


IEEE Transactions on Intelligent Transportation Systems | 2016

Automatic Road Crack Detection Using Random Structured Forests

Yong Shi; Limeng Cui; Zhiquan Qi; Fan Meng; Zhensong Chen

Cracks are a growing threat to road conditions and have drawn much attention in the construction of intelligent transportation systems. However, as a key part of an intelligent transportation system, automatic road crack detection remains challenging because of the intense inhomogeneity along cracks, the topological complexity of cracks, the interference of noise with texture similar to the cracks, and so on. In this paper, we propose CrackForest, a novel road crack detection framework based on random structured forests, to address these issues. Our contributions are as follows: 1) we apply integral channel features to redefine the tokens that constitute a crack and obtain a better representation of cracks with intensity inhomogeneity; 2) we introduce random structured forests to generate a high-performance crack detector, which can identify arbitrarily complex cracks; and 3) we propose a new crack descriptor to characterize cracks and discern them from noise effectively. In addition, our method is faster and easier to parallelize. Experimental results demonstrate the state-of-the-art detection precision of CrackForest compared with competing methods.
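Integral channel features rest on the integral image trick: after one cumulative-sum pass, the sum over any rectangle can be read off in four lookups. A minimal sketch of that primitive (the channel design in CrackForest itself is more involved):

```python
import numpy as np

def integral_image(img):
    """ii[r, c] = sum of img[:r, :c]; padded with a zero row/column."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in four lookups, regardless of size."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

img = np.arange(16.0).reshape(4, 4)
ii = integral_image(img)
s = rect_sum(ii, 1, 1, 3, 3)   # equals img[1:3, 1:3].sum()
```

Once the integral image is built, rectangle-sum features cost O(1) each, which is what makes dense feature evaluation over every candidate patch affordable.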


Neural Computing and Applications | 2014

Efficient sparse nonparallel support vector machines for classification

Yingjie Tian; Xuchan Ju; Zhiquan Qi

In this paper, we propose a novel nonparallel classifier, named the sparse nonparallel support vector machine (SNSVM), for binary classification. Different from existing nonparallel classifiers, such as the twin support vector machines (TWSVMs), SNSVM has several advantages: it constructs two convex quadratic programming problems for both linear and nonlinear cases, which can be solved efficiently by the successive overrelaxation technique; it does not need to compute inverse matrices before training; it has sparseness similar to that of standard SVMs; and it degenerates to the TWSVMs when the parameters are appropriately chosen. Therefore, SNSVM is theoretically superior to them. Experimental results on many data sets show the effectiveness of our method in both sparseness and classification accuracy and, therefore, further confirm the above conclusions.


Neural Computing and Applications | 2013

Efficient railway tracks detection and turnouts recognition method using HOG features

Zhiquan Qi; Yingjie Tian; Yong Shi

Railway track detection and turnout recognition are basic tasks in driver assistance systems, as they determine the regions of interest for detecting obstacles and signals. In this paper, a novel railway track detection and turnout recognition method using HOG (Histogram of Oriented Gradients) features is presented. First, the approach computes HOG features and establishes integral images, and then extracts railway tracks using a region-growing algorithm. Then, by recognizing the open direction of a turnout, we find the path along which the train will travel. Experiments demonstrate that our method correctly extracts tracks and recognizes turnouts even under very poor illumination conditions, and runs fast enough for practical use. In addition, our approach needs only a computer and a cheap camera installed in the railroad vehicle, rather than specialized hardware and equipment.
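A HOG descriptor bins image gradients by orientation, with each pixel weighted by its gradient magnitude. A minimal sketch of that binning step for a single cell (real HOG additionally groups cells into overlapping, contrast-normalized blocks, which this sketch omits):

```python
import numpy as np

def cell_orientation_histogram(cell, n_bins=9):
    """Histogram of unsigned gradient orientations (0..180 degrees) for one
    image cell, each pixel weighted by its gradient magnitude."""
    gy, gx = np.gradient(cell.astype(float))          # axis 0 then axis 1
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0      # fold into [0, 180)
    hist = np.zeros(n_bins)
    bin_idx = np.minimum((ang / (180.0 / n_bins)).astype(int), n_bins - 1)
    np.add.at(hist, bin_idx.ravel(), mag.ravel())     # magnitude-weighted vote
    return hist

# A vertical edge: gradients point horizontally, so all mass lands in bin 0
cell = np.tile([0.0, 0.0, 1.0, 1.0], (4, 1))
h = cell_orientation_histogram(cell)
```

Because rail edges produce long runs of consistently oriented gradients, histograms like this give a compact, illumination-tolerant signature of track direction.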


Journal of Computational and Applied Mathematics | 2014

A nonparallel support vector machine for a classification problem with universum learning

Zhiquan Qi; Yingjie Tian; Yong Shi

Universum samples, defined as samples not belonging to any class of the classification problem of interest, have proved useful in supervised learning. Here we design a new nonparallel support vector machine (U-NSVM) that can exploit prior knowledge embedded in the universum to construct a more robust classifier. To this end, U-NSVM maximizes the two margins associated with the two closest neighboring classes, which are determined by two nonparallel hyperplanes. Therefore, U-NSVM has greater flexibility and can yield a more reasonable classifier in most cases. In addition, our method involves fewer parameters than U-SVM, so it is easier to implement. Experiments demonstrate that U-NSVM outperforms the traditional SVM and U-SVM.

Collaboration


Dive into Zhiquan Qi's collaborations.

Top Co-Authors

Yong Shi, Chinese Academy of Sciences
Yingjie Tian, Chinese Academy of Sciences
Fan Meng, Chinese Academy of Sciences
Limeng Cui, Chinese Academy of Sciences
Lingfeng Niu, Chinese Academy of Sciences
Zhensong Chen, Chinese Academy of Sciences
Bo Wang, Chinese Academy of Sciences
Xuchan Ju, Chinese Academy of Sciences
Xiaodan Yu, University of Nebraska Omaha
Vassil Alexandrov, Barcelona Supercomputing Center