Publication


Featured research published by Qing Tao.


IEEE Transactions on Neural Networks | 2005

Posterior probability support vector machines for unbalanced data

Qing Tao; Gao-wei Wu; Fei-Yue Wang; Jue Wang

This paper proposes a complete framework of posterior probability support vector machines (PPSVMs) for weighted training samples, using modified concepts of risks, linear separability, margin, and optimal hyperplane. Within this framework, a new optimization problem for unbalanced classification problems is formulated and a new concept of support vectors is established. Furthermore, a soft PPSVM with an interpretable parameter ν is obtained, which is similar to the ν-SVM developed by Schölkopf et al., and an empirical method for determining the posterior probability is proposed as a new approach to determine ν. The main advantage of a PPSVM classifier lies in the fact that it is closer to the Bayes optimal without knowing the distributions. To validate the proposed method, two synthetic classification examples are used to illustrate the logical correctness of PPSVMs and their relationship to regular SVMs and Bayesian methods. Several other classification experiments are conducted to demonstrate that the performance of PPSVMs is better than that of regular SVMs in some cases. Compared with fuzzy support vector machines (FSVMs), the proposed PPSVM is a natural and analytical extension of regular SVMs based on statistical learning theory.
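
The PPSVM formulation itself is not part of any standard library; as a rough sketch of the weighted-sample idea it builds on, the following assumes scikit-learn and NumPy and simply passes hand-picked per-sample weights (standing in for estimated posterior probabilities) to an ordinary SVM.

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    # Unbalanced synthetic data: 200 majority-class points, 20 minority-class points.
    X = np.vstack([rng.normal(-1.0, 1.0, size=(200, 2)),
                   rng.normal(1.5, 1.0, size=(20, 2))])
    y = np.array([0] * 200 + [1] * 20)

    # Hypothetical per-sample weights standing in for posterior probabilities;
    # the paper estimates them empirically, here we simply up-weight the minority class.
    w = np.where(y == 1, 10.0, 1.0)

    clf = SVC(kernel="linear", C=1.0)
    clf.fit(X, y, sample_weight=w)
    print("training accuracy:", clf.score(X, y))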


IEEE Transactions on Neural Networks | 2008

Recursive Support Vector Machines for Dimensionality Reduction

Qing Tao; Dejun Chu; Jue Wang

The usual dimensionality reduction technique in supervised learning is mainly based on linear discriminant analysis (LDA), but it suffers from singularity or undersampling problems. On the other hand, a regular support vector machine (SVM) separates the data only along a single direction of maximum margin, and the classification accuracy may not be good enough. In this letter, a recursive SVM (RSVM) is presented, in which several orthogonal directions that best separate the data with the maximum margin are obtained. Theoretical analysis shows that a completely orthogonal basis can be derived in the feature subspace spanned by the training samples, and that the margin decreases along the recursive components in linearly separable cases. As a result, a new dimensionality reduction technique based on multilevel maximum margin components, and then a classifier with high accuracy, are achieved. Experiments on synthetic and several real data sets show that RSVM using multilevel maximum margin features can perform efficient dimensionality reduction and outperform regular SVM in binary classification problems.
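
A minimal sketch of the recursive idea, under the assumption that it can be read as "fit a linear SVM, record its direction, deflate the data, repeat"; this is not the paper's exact RSVM, and it uses scikit-learn's LinearSVC for the per-round fits.

    import numpy as np
    from sklearn.svm import LinearSVC

    def recursive_svm_directions(X, y, n_components=2, C=1.0):
        X_work = X.copy()
        directions = []
        for _ in range(n_components):
            clf = LinearSVC(C=C, dual=False).fit(X_work, y)
            w = clf.coef_.ravel()
            w = w / np.linalg.norm(w)
            directions.append(w)
            # Deflate: remove each sample's component along the direction just found.
            X_work = X_work - np.outer(X_work @ w, w)
        return np.array(directions)              # shape (n_components, n_features)

    # Usage: project the original data onto the learned max-margin directions.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(100, 5))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
    W = recursive_svm_directions(X, y, n_components=2)
    X_reduced = X @ W.T                           # dimensionality-reduced features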


Pattern Recognition Letters | 2004

A generalized S-K algorithm for learning ν-SVM classifiers

Qing Tao; Gao-wei Wu; Jue Wang

The S-K algorithm (Schlesinger-Kozinec algorithm) and the modified kernel technique due to Friess et al. have recently been combined to solve SVMs with the L2 cost function. In this paper, we generalize the S-K algorithm so that it applies to soft convex hulls. As a result, our algorithm can solve ν-SVM based on the L1 cost function. Simple in nature, our soft algorithm is essentially an algorithm for finding the ε-optimal nearest points between two soft convex hulls. As only the vertices of the hard convex hulls are used, the obvious advantage of our algorithm is that it has almost the same computational cost as the hard S-K algorithm. Theoretical analysis and some experiments demonstrate the performance of our algorithm.
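
For context only, the ν-SVM that the generalized S-K algorithm solves is available in scikit-learn as NuSVC; the sketch below shows the model being trained, not the soft-convex-hull nearest-point procedure proposed in the paper.

    from sklearn.datasets import make_classification
    from sklearn.svm import NuSVC

    X, y = make_classification(n_samples=200, n_features=4, random_state=0)
    clf = NuSVC(nu=0.2, kernel="rbf")   # nu upper-bounds the fraction of margin errors
    clf.fit(X, y)
    print("number of support vectors:", clf.support_.shape[0])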


Pattern Recognition | 2005

A new maximum margin algorithm for one-class problems and its boosting implementation

Qing Tao; Gao-wei Wu; Jue Wang

In this paper, each one-class problem is regarded as trying to estimate a function that is positive on a desired slab and negative on the complement. The main advantage of this viewpoint is that the loss function and the expected risk can be defined to ensure that the slab can contain as many samples as possible. Inspired by the nature of SVMs, the intuitive margin is also defined. As a result, a new linear optimization problem to maximize the margin and some theoretically motivated learning algorithms are obtained. Moreover, the proposed algorithms can be implemented by boosting techniques to solve nonlinear one-class classifications.
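
The slab-based linear program and its boosting implementation are specific to the paper; as a point of comparison only, the sketch below runs the standard one-class SVM from scikit-learn on the same kind of task (learning a region that contains most of the samples).

    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(2)
    X_train = rng.normal(loc=0.0, scale=1.0, size=(300, 2))       # "normal" samples
    X_test = np.vstack([rng.normal(size=(10, 2)),
                        rng.normal(loc=5.0, size=(10, 2))])       # mixed inliers/outliers

    oc = OneClassSVM(nu=0.05, kernel="rbf", gamma=0.5).fit(X_train)
    print(oc.predict(X_test))   # +1 for inliers, -1 for outliers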


European Conference on Machine Learning | 2012

Stochastic coordinate descent methods for regularized smooth and nonsmooth losses

Qing Tao; Kang Kong; Dejun Chu; Gaowei Wu

Stochastic coordinate descent (SCD) methods are among the first optimization schemes suggested for efficiently solving large-scale problems. However, until now, there has been a gap between the convergence rate analysis and practical SCD algorithms for general smooth losses, and there is no primal SCD algorithm for nonsmooth losses. In this paper, we discuss these issues using recently developed structural optimization techniques. In particular, we first present a principled and practical SCD algorithm for regularized smooth losses, in which the one-variable subproblem is solved using the proximal gradient method and the adaptive componentwise Lipschitz constant is obtained via a line search strategy. When the loss is nonsmooth, we present a novel SCD algorithm in which the one-variable subproblem is solved using the dual averaging method. We show that our algorithms exploit the regularization structure and achieve several optimal convergence rates that are standard in the literature. The experiments demonstrate the expected efficiency of our SCD algorithms in both the smooth and nonsmooth cases.
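
As an illustrative sketch of coordinate-wise proximal updates (not the paper's algorithm, which handles general regularized losses and finds the componentwise constants by line search), the following runs randomized proximal coordinate descent on an l1-regularized least-squares problem with fixed componentwise Lipschitz constants, using only NumPy.

    import numpy as np

    def soft_threshold(z, t):
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def scd_lasso(A, b, lam, n_iters=5000, seed=0):
        rng = np.random.default_rng(seed)
        n, d = A.shape
        x = np.zeros(d)
        residual = A @ x - b
        col_lipschitz = (A ** 2).sum(axis=0)         # componentwise Lipschitz constants
        for _ in range(n_iters):
            j = rng.integers(d)                       # pick a random coordinate
            grad_j = A[:, j] @ residual               # partial derivative of the smooth loss
            # Proximal gradient step on the single coordinate j.
            x_new_j = soft_threshold(x[j] - grad_j / col_lipschitz[j],
                                     lam / col_lipschitz[j])
            residual += A[:, j] * (x_new_j - x[j])    # keep the residual up to date
            x[j] = x_new_j
        return x

    # Usage on a small synthetic sparse-recovery problem.
    rng = np.random.default_rng(3)
    A = rng.normal(size=(50, 20))
    x_true = np.zeros(20)
    x_true[:3] = [2.0, -1.0, 0.5]
    b = A @ x_true + 0.01 * rng.normal(size=50)
    print(np.round(scd_lasso(A, b, lam=0.5), 2))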


Pattern Recognition | 2007

Learning linear PCA with convex semi-definite programming

Qing Tao; Gao-wei Wu; Jue Wang

The aim of this paper is to learn a linear principal component using the nature of support vector machines (SVMs). To this end, a complete SVM-like framework of linear PCA (SVPCA) for deciding the projection direction is constructed, where a new expected risk and margin are introduced. Within this framework, a new semi-definite programming problem for maximizing the margin is formulated and a new definition of support vectors is established. As a weighted case of regular PCA, our SVPCA coincides with regular PCA if all the samples play the same part in data compression. Theoretical explanation indicates that SVPCA is based on a margin-based generalization bound, and thus good prediction ability is ensured. Furthermore, a robust form of SVPCA with an interpretable parameter is achieved using the soft idea in SVMs. The great advantage lies in the fact that SVPCA is a learning algorithm without local minima because of the convexity of the semi-definite optimization problems. To validate the performance of SVPCA, several experiments are conducted, and the numerical results demonstrate that its generalization ability is better than that of regular PCA. Finally, some existing problems are also discussed.
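
The margin-based SVPCA program is specific to the paper; the sketch below instead shows the textbook semidefinite-programming view of the leading principal component (maximize trace(CX) over PSD X with unit trace), assuming the third-party cvxpy modeling library, to illustrate why such PCA-style problems are convex and free of local minima.

    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(4)
    data = rng.normal(size=(200, 3)) @ np.diag([3.0, 1.0, 0.3])
    C = np.cov(data, rowvar=False)                 # sample covariance matrix

    X = cp.Variable((3, 3), PSD=True)
    prob = cp.Problem(cp.Maximize(cp.trace(C @ X)), [cp.trace(X) == 1])
    prob.solve()

    # The optimizer is (numerically) the rank-one matrix v v^T built from the
    # leading eigenvector v of C, i.e. the first principal direction.
    eigvals, eigvecs = np.linalg.eigh(X.value)
    print("leading direction:", np.round(eigvecs[:, -1], 3))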


IEEE Transactions on Neural Networks | 2014

Stochastic Learning via Optimizing the Variational Inequalities

Qing Tao; Qian-Kun Gao; Dejun Chu; Gaowei Wu

A wide variety of learning problems can be posed in the framework of convex optimization, and many efficient algorithms have been developed based on solving the induced optimization problems. However, there exists a gap between the theoretically unbeatable convergence rate and the practically efficient learning speed. In this paper, we use variational inequality (VI) convergence to describe the learning speed. To this end, we avoid the hard concept of regret in online learning and directly discuss stochastic learning algorithms. We first cast the regularized learning problem as a VI. Then, we present a stochastic version of the alternating direction method of multipliers (ADMM) to solve the induced VI. We define a new VI criterion to measure the convergence of stochastic algorithms. While the rate of convergence of any iterative algorithm for solving nonsmooth convex optimization problems cannot be better than O(1/√t), the proposed stochastic ADMM (SADMM) is proved to have an O(1/t) VI-convergence rate for l1-regularized hinge loss problems without strong convexity and smoothness. The derived VI-convergence results also support the viewpoint that the standard online analysis is too loose to analyze the stochastic setting properly. The experiments demonstrate that SADMM has almost the same performance as state-of-the-art stochastic learning algorithms, but its O(1/t) VI-convergence rate is capable of tightly characterizing the real learning speed.
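
A generic sketch of stochastic, linearized ADMM applied to the l1-regularized hinge loss with the usual splitting w = z; the exact update rule and step sizes of the paper's SADMM may differ, so the NumPy code below is only an assumption-laden illustration of this family of methods.

    import numpy as np

    def sadmm_hinge_l1(X, y, lam=0.01, rho=1.0, n_iters=2000, seed=0):
        rng = np.random.default_rng(seed)
        n, d = X.shape
        w = np.zeros(d); z = np.zeros(d); u = np.zeros(d)
        for t in range(1, n_iters + 1):
            i = rng.integers(n)
            # Stochastic subgradient of the hinge loss at the current w.
            g = -y[i] * X[i] if y[i] * (X[i] @ w) < 1 else np.zeros(d)
            eta = 1.0 / np.sqrt(t)                     # decaying step size (an assumption)
            # Linearized w-update, then l1 proximal z-update, then dual update.
            w = (rho * (z - u) + w / eta - g) / (rho + 1.0 / eta)
            z = np.sign(w + u) * np.maximum(np.abs(w + u) - lam / rho, 0.0)
            u = u + w - z
        return z                                        # sparse iterate

    # Usage on a toy linearly separable problem with labels in {-1, +1}.
    rng = np.random.default_rng(5)
    X = rng.normal(size=(500, 10))
    y = np.sign(X[:, 0] - X[:, 1])
    print(np.round(sadmm_hinge_l1(X, y), 2))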


International Symposium on Neural Networks | 2002

The theoretical analysis of kernel technique and its applications

Qing Tao; Jiaqi Wang; Gao-wei Wu; Jue Wang

This paper investigates linear separability in feature space using the Tietze extension theorem and function approximation theory, and the kernel technique is thereby motivated theoretically. Two kernel-based algorithms are presented; one of them is a general learning algorithm for feedforward neural networks, and both can solve large-scale classification problems.
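
A tiny illustration of the separability point: XOR-style labels are not linearly separable in the input space, but an RBF-kernel SVM, which works implicitly in a richer feature space, fits them exactly (scikit-learn is assumed here).

    import numpy as np
    from sklearn.svm import SVC

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 1, 1, 0])                               # XOR labels

    linear_acc = SVC(kernel="linear").fit(X, y).score(X, y)  # cannot reach 1.0
    rbf_acc = SVC(kernel="rbf", gamma=2.0).fit(X, y).score(X, y)
    print(linear_acc, rbf_acc)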


Intelligence and Security Informatics | 2006

One-Class strategies for security information detection

Qing Tao; Gao-wei Wu; Jue Wang

Detecting security-related information is a critical component of ISI research, which involves studying a wide range of technical and systems challenges related to the acquisition, collection, storage, retrieval, synthesis, and analysis of security-related information.


Intelligence and Security Informatics | 2005

Some marginal learning algorithms for unsupervised problems

Qing Tao; Gao-wei Wu; Fei-Yue Wang; Jue Wang

In this paper, we investigate one-class and clustering problems using statistical learning theory. To establish a universal framework, an unsupervised learning problem with a predefined threshold η is formally described and the intuitive margin is introduced. Then, one-class and clustering problems are formulated as two specific η-unsupervised problems. By defining a specific hypothesis space for η-one-class problems, the crucial minimal sphere algorithm for regular one-class problems is proved to be a maximum margin algorithm. Furthermore, new one-class and clustering marginal algorithms can be obtained in terms of different hypothesis spaces. Since the nature of SVMs is employed successfully, the proposed algorithms are robust, flexible and high-performing, and since the parameters in SVMs are interpretable, our unsupervised learning framework is clear and natural. To verify the reasonableness of our formulation, some synthetic and real experiments are conducted. They demonstrate that the proposed framework is not only of theoretical interest but also has a legitimate place in the family of practical unsupervised learning techniques.
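
The minimal sphere algorithm mentioned above can be illustrated with the simple Badoiu-Clarkson style iteration below (move the center a shrinking step toward the current farthest point); the paper's kernelized and soft-margin variants are more general, so this NumPy sketch is only the plain geometric version.

    import numpy as np

    def minimal_enclosing_sphere(X, n_iters=1000):
        center = X[0].copy()                               # start at an arbitrary data point
        for t in range(1, n_iters + 1):
            dists = np.linalg.norm(X - center, axis=1)
            farthest = X[np.argmax(dists)]
            center = center + (farthest - center) / (t + 1)  # shrinking step toward the farthest point
        radius = np.linalg.norm(X - center, axis=1).max()
        return center, radius

    rng = np.random.default_rng(6)
    pts = rng.normal(size=(200, 2))
    c, r = minimal_enclosing_sphere(pts)
    print("center:", np.round(c, 3), "radius:", round(float(r), 3))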

Collaboration


Dive into Qing Tao's collaborations.

Top Co-Authors

Gao-wei Wu
Chinese Academy of Sciences

Jue Wang
Chinese Academy of Sciences

Gaowei Wu
Chinese Academy of Sciences

Fei-Yue Wang
Chinese Academy of Sciences