Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Bin Zou is active.

Publication


Featured research published by Bin Zou.


Machine Learning | 2009

The generalization performance of ERM algorithm with strongly mixing observations

Bin Zou; Luoqing Li; Zongben Xu

Generalization performance is the main concern of machine learning theory. Most previous bounds describing the generalization ability of the Empirical Risk Minimization (ERM) algorithm are based on independent and identically distributed (i.i.d.) samples. In order to study the generalization performance of the ERM algorithm with dependent observations, we first establish an exponential bound on the rate of relative uniform convergence of the ERM algorithm with exponentially strongly mixing observations, and then obtain generalization bounds and prove that the ERM algorithm with exponentially strongly mixing observations is consistent. The main results obtained in this paper not only extend the previously known results for i.i.d. observations to the case of exponentially strongly mixing observations, but also improve the previous results for strongly mixing samples. Because the ERM algorithm is usually very time-consuming and overfitting may occur when the hypothesis space is highly complex, as an application of our main results we also explore a new strategy for implementing the ERM algorithm in a high-complexity hypothesis space.
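
The abstract is theoretical, but the ERM principle it analyzes is simple to state in code. Below is a minimal, illustrative sketch of ERM over a small finite hypothesis space of threshold classifiers; the function names and toy data are our own illustration, not from the paper.

```python
import numpy as np

def empirical_risk(predict, X, y):
    """Average 0-1 loss of a hypothesis on the sample (X, y)."""
    return np.mean(predict(X) != y)

def erm(hypotheses, X, y):
    """Empirical Risk Minimization: return the hypothesis with the
    smallest empirical risk on the observed sample."""
    risks = [empirical_risk(h, X, y) for h in hypotheses]
    return hypotheses[int(np.argmin(risks))]

# Toy finite hypothesis space: threshold classifiers on the first feature.
hypotheses = [lambda X, t=t: (X[:, 0] > t).astype(int)
              for t in np.linspace(-2, 2, 41)]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1))
y = (X[:, 0] > 0.3).astype(int)
best = erm(hypotheses, X, y)
print("empirical risk of ERM hypothesis:", empirical_risk(best, X, y))
```

The paper's contribution is not this minimization step itself but the guarantee that the minimizer generalizes when the 200 samples above are drawn from an exponentially strongly mixing process rather than i.i.d.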


IEEE Transactions on Neural Networks | 2013

Generalization Performance of Fisher Linear Discriminant Based on Markov Sampling

Bin Zou; Luoqing Li; Zongben Xu; Tao Luo; Yuan Yan Tang

Fisher linear discriminant (FLD) is a well-known method for dimensionality reduction and classification that projects high-dimensional data onto a low-dimensional space where the data achieve maximum class separability. Previous work describing the generalization ability of FLD has usually been based on the assumption of independent and identically distributed (i.i.d.) samples. In this paper, we go far beyond this classical framework by studying the generalization ability of FLD based on Markov sampling. We first establish bounds on the generalization performance of FLD based on uniformly ergodic Markov chain (u.e.M.c.) samples, and prove that FLD based on u.e.M.c. samples is consistent. Following the enlightening idea of Markov chain Monte Carlo methods, we also introduce a Markov sampling algorithm for FLD that generates u.e.M.c. samples from a given dataset of finite size. Through simulation studies and numerical studies on benchmark repositories using FLD, we find that FLD based on u.e.M.c. samples generated by Markov sampling can provide smaller misclassification rates than FLD based on i.i.d. samples.
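
For readers unfamiliar with FLD itself, here is a minimal two-class sketch of the classical projection step the paper analyzes: the direction w = Sw^{-1}(mu1 - mu0) maximizes between-class separation relative to within-class scatter. The toy data and names are illustrative, not from the paper.

```python
import numpy as np

def fld_direction(X0, X1):
    """Two-class Fisher linear discriminant: the projection direction
    w = Sw^{-1} (mu1 - mu0), where Sw is the within-class scatter matrix."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    S0 = (X0 - mu0).T @ (X0 - mu0)
    S1 = (X1 - mu1).T @ (X1 - mu1)
    return np.linalg.solve(S0 + S1, mu1 - mu0)

rng = np.random.default_rng(1)
X0 = rng.normal(loc=[0, 0], size=(100, 2))
X1 = rng.normal(loc=[2, 1], size=(100, 2))
w = fld_direction(X0, X1)
# Classify by thresholding the 1-D projection at the midpoint of the
# projected class means.
threshold = ((X0 @ w).mean() + (X1 @ w).mean()) / 2
print("direction:", w, "threshold:", threshold)
```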


Journal of Complexity | 2009

Learning from uniformly ergodic Markov chains

Bin Zou; Hai Zhang; Zongben Xu

Evaluating the generalization performance of learning algorithms has been the main thread of machine learning theory. Previous bounds describing the generalization performance of the empirical risk minimization (ERM) algorithm are usually established for independent and identically distributed (i.i.d.) samples. In this paper we go far beyond this classical framework by establishing generalization bounds for the ERM algorithm with uniformly ergodic Markov chain (u.e.M.c.) samples. We prove bounds on the rate of uniform convergence and relative uniform convergence of the ERM algorithm with u.e.M.c. samples, and show that the ERM algorithm with u.e.M.c. samples is consistent. The established theory underlies the application of ERM-type learning algorithms.
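
For context (this bound is not from the paper), the classical i.i.d. baseline that this line of work generalizes is the finite-class uniform convergence bound obtained from Hoeffding's inequality and a union bound: for a finite hypothesis space $\mathcal{F}$ and a loss bounded in $[0, M]$,

$$\Pr\left( \sup_{f \in \mathcal{F}} \left| \mathcal{E}(f) - \mathcal{E}_{\mathbf{z}}(f) \right| > \varepsilon \right) \le 2\,|\mathcal{F}|\,\exp\left( -\frac{2 n \varepsilon^2}{M^2} \right),$$

where $\mathcal{E}(f)$ is the expected risk and $\mathcal{E}_{\mathbf{z}}(f)$ the empirical risk over $n$ i.i.d. samples. The papers listed here replace the i.i.d. assumption with mixing or ergodicity conditions, which typically shrink the effective sample size appearing in the exponent.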


IEEE Transactions on Systems, Man, and Cybernetics | 2014

The generalization performance of regularized regression algorithms based on Markov sampling

Bin Zou; Yuan Yan Tang; Zongben Xu; Luoqing Li; Jie Xu; Yang Lu

This paper considers the generalization ability of two regularized regression algorithms, least squares regularized regression (LSRR) and support vector machine regression (SVMR), based on non-independent and identically distributed (non-i.i.d.) samples. Unlike previous work on non-i.i.d. samples, we study the generalization bounds of these two regularized regression algorithms based on uniformly ergodic Markov chain (u.e.M.c.) samples. Inspired by Markov chain Monte Carlo (MCMC) methods, we also introduce a new Markov sampling algorithm for regression that generates u.e.M.c. samples from a given dataset, and we then present numerical studies on the learning performance of LSRR and SVMR based on Markov sampling. The experimental results show that LSRR and SVMR based on Markov sampling yield markedly smaller mean square errors and smaller variances than random sampling.
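
The paper's exact Markov sampling algorithm is not reproduced here. The following is a minimal Metropolis-style sketch of the general idea the abstract describes: generating a dependent (Markov) training sequence from a finite dataset by accepting candidate points based on a preliminary model's loss. The acceptance rule and all names are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def markov_sample(X, y, loss, n_steps, rng):
    """Metropolis-style sketch: walk over the indices of a finite dataset,
    moving from the current index i to a uniformly drawn candidate j with
    probability min(1, exp(loss(i) - loss(j))), so that low-loss points
    are visited more often. The visited indices form a Markov chain."""
    i = rng.integers(len(X))
    path = [i]
    for _ in range(n_steps - 1):
        j = rng.integers(len(X))  # candidate index, drawn uniformly
        if rng.random() < min(1.0, np.exp(loss(X[i], y[i]) - loss(X[j], y[j]))):
            i = j
        path.append(i)
    return np.array(path)

# Illustrative usage, with a preliminary least-squares fit supplying the loss:
rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=500)
w, *_ = np.linalg.lstsq(X, y, rcond=None)
sq_loss = lambda x, t: (x @ w - t) ** 2
idx = markov_sample(X, y, sq_loss, n_steps=200, rng=rng)
X_chain, y_chain = X[idx], y[idx]  # dependent (Markov) training sequence
```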


IEEE Transactions on Systems, Man, and Cybernetics | 2015

The Generalization Ability of SVM Classification Based on Markov Sampling

Jie Xu; Yuan Yan Tang; Bin Zou; Zongben Xu; Luoqing Li; Yang Lu; Baochang Zhang

Previous work studying the generalization ability of the support vector machine classification (SVMC) algorithm is usually based on the assumption of independent and identically distributed samples. In this paper, we go far beyond this classical framework by studying the generalization ability of SVMC based on uniformly ergodic Markov chain (u.e.M.c.) samples. We analyze the excess misclassification error of SVMC based on u.e.M.c. samples and obtain the optimal learning rate of SVMC for u.e.M.c. samples. We also introduce a new Markov sampling algorithm for SVMC that generates u.e.M.c. samples from a given dataset, and present numerical studies on the learning performance of SVMC based on Markov sampling for benchmark datasets. The numerical studies show that SVMC based on Markov sampling not only has better generalization ability as the number of training samples grows, but also produces sparser classifiers when the dataset is large relative to the input dimension.
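
As a hedged illustration of the kind of experiment the abstract reports, one might measure the misclassification rate of an SVM trained on a randomly drawn subsample and compare it against training on a Markov-sampled subsequence of the same size (as in the sketch above). scikit-learn's SVC is used here only as a stand-in; the dataset and sizes are arbitrary.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Schematic baseline: misclassification rate of an SVM classifier trained
# on a random subsample of the training pool.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

rng = np.random.default_rng(0)
sub = rng.choice(len(X_tr), size=300, replace=False)  # random subsample
clf = SVC(kernel="rbf").fit(X_tr[sub], y_tr[sub])
print("misclassification rate:", 1.0 - clf.score(X_te, y_te))
```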


IEEE Transactions on Neural Networks | 2015

The Generalization Ability of Online SVM Classification Based on Markov Sampling

Jie Xu; Yuan Yan Tang; Bin Zou; Zongben Xu; Luoqing Li; Yang Lu

In this paper, we consider online support vector machine (SVM) classification learning algorithms with uniformly ergodic Markov chain (u.e.M.c.) samples. We establish a bound on the misclassification error of an online SVM classification algorithm with u.e.M.c. samples based on reproducing kernel Hilbert spaces, and obtain a satisfactory convergence rate. We also introduce a novel online SVM classification algorithm based on Markov sampling, and present numerical studies on the learning ability of online SVM classification based on Markov sampling for benchmark repositories. The numerical studies show that the learning performance of the online SVM classification algorithm based on Markov sampling is better than that of classical online SVM classification based on random sampling as the training set grows.
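
The paper's own algorithm is not given here. As a minimal sketch of online SVM-style classification in an RKHS, the following implements stochastic gradient descent on the regularized hinge loss with a Gaussian kernel (a NORMA-style update); the parameter values and names are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(x, z, sigma=1.0):
    return np.exp(-np.sum((x - z) ** 2) / (2.0 * sigma ** 2))

def online_kernel_svm(X, y, lam=0.01, eta=0.1, sigma=1.0):
    """Process samples one at a time: evaluate the current RKHS expansion,
    shrink all coefficients (the regularization step), and add a new kernel
    term whenever the margin is violated (y * f(x) < 1)."""
    support, alphas = [], []
    for x, t in zip(X, y):
        f_x = sum(a * gaussian_kernel(s, x, sigma)
                  for s, a in zip(support, alphas))
        alphas = [a * (1.0 - eta * lam) for a in alphas]  # shrink old terms
        if t * f_x < 1:  # hinge loss positive: store (x, eta * y) as a term
            support.append(x)
            alphas.append(eta * t)
    return support, alphas

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
support, alphas = online_kernel_svm(X, y)
print("kernel expansion size:", len(support))
```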


Neural Networks | 2014

Generalization performance of Gaussian kernels SVMC based on Markov sampling

Jie Xu; Yuan Yan Tang; Bin Zou; Zongben Xu; Luoqing Li; Yang Lu

In this paper we consider the Gaussian RBF kernel support vector machine classification (SVMC) algorithm with uniformly ergodic Markov chain (u.e.M.c.) samples in reproducing kernel Hilbert spaces (RKHS). We analyze the learning rates of Gaussian RBF kernel SVMC based on u.e.M.c. samples, and obtain a fast learning rate by using the strongly mixing property of u.e.M.c. samples. We also present numerical studies on the learning performance of Gaussian RBF kernel SVMC based on Markov sampling for real-world datasets. These experimental results show that Gaussian RBF kernel SVMC based on Markov sampling has better learning performance than that based on random independent sampling.
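
For reference (standard background, not specific to the paper), the Gaussian RBF kernel used throughout this line of work is

$$K(x, x') = \exp\left( -\frac{\|x - x'\|^2}{2\sigma^2} \right), \qquad \sigma > 0,$$

and the hypothesis space is the reproducing kernel Hilbert space it induces; the kernel width $\sigma$ trades approximation power against estimation error, which is why it enters the learning-rate analysis.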


Advances in Computational Mathematics | 2012

Generalization bounds of ERM algorithm with V-geometrically ergodic Markov chains

Bin Zou; Zongben Xu; Xiangyu Chang

Previous results describing the generalization ability of the Empirical Risk Minimization (ERM) algorithm are usually based on the assumption of independent and identically distributed (i.i.d.) samples. In this paper we go far beyond this classical framework by establishing the first exponential bound on the rate of uniform convergence of the ERM algorithm with V-geometrically ergodic Markov chain samples. As an application of this bound, we also obtain generalization bounds for the ERM algorithm with V-geometrically ergodic Markov chain samples and prove that the ERM algorithm with such samples is consistent. The main results obtained in this paper extend the previously known results for i.i.d. observations to the case of V-geometrically ergodic Markov chain samples.
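
For readers unfamiliar with the condition in the title, the standard definition (from Meyn and Tweedie, stated here for context) is: a Markov chain with transition kernel $P$ and stationary distribution $\pi$ is V-geometrically ergodic, for a function $V \ge 1$, if there exist $R < \infty$ and $\rho \in (0, 1)$ such that

$$\sup_{|g| \le V} \left| \int g(y)\, P^n(x, dy) - \int g \, d\pi \right| \le R\, V(x)\, \rho^n \quad \text{for all } x \text{ and all } n \ge 1,$$

i.e. the chain converges to its stationary distribution geometrically fast in the V-weighted norm.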


International Journal of Wavelets, Multiresolution and Information Processing | 2011

Generalization bounds of regularization algorithms derived simultaneously through hypothesis space complexity, algorithmic stability and data quality

Xiangyu Chang; Zongben Xu; Bin Zou; Hai Zhang

A main issue in machine learning research is to analyze the generalization performance of a learning machine. Most classical results on the generalization performance of regularization algorithms are derived using only the complexity of the hypothesis space or the stability of the learning algorithm. In practical applications, however, the performance of a learning algorithm is not determined by a single factor alone, such as the complexity of the hypothesis space, the stability of the algorithm, or the quality of the data. Therefore, in this paper, we develop a framework for evaluating the generalization performance of regularization algorithms jointly in terms of hypothesis space complexity, algorithmic stability and data quality. We establish new bounds on the learning rate of regularization algorithms based on the measure of uniform stability and the empirical covering number for general loss functions. As applications of the generic results, we evaluate the learning rates of support vector machine...
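
For context, the uniform stability measure mentioned in the abstract is standard (Bousquet and Elisseeff); in one common formulation, a learning algorithm $A$ is $\beta$-uniformly stable if, for every training set $S$, every set $S^i$ obtained by replacing one sample, and every point $z$,

$$\sup_{z} \left| \ell(A_S, z) - \ell(A_{S^i}, z) \right| \le \beta,$$

so the learned predictor's loss changes by at most $\beta$ when a single training example changes.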


International Symposium on Neural Networks | 2005

The bounds on the rate of uniform convergence for learning machine

Bin Zou; Luoqing Li; Jie Xu

Generalization performance is an important property of learning machines, and desirable learning machines should be stable with respect to the training samples. We consider empirical risk minimization over function sets from which noisy functions have been eliminated. By applying Kutin's inequality, we establish bounds on the rate of uniform convergence of the empirical risks to their expected risks for learning machines, and compare these bounds with known results.
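
For background (not from the paper): Kutin's inequality extends McDiarmid's bounded differences inequality, which states that if $f(z_1, \ldots, z_n)$ changes by at most $c_i$ when its $i$-th argument changes, then for independent $z_1, \ldots, z_n$

$$\Pr\left( f - \mathbb{E} f \ge \varepsilon \right) \le \exp\left( -\frac{2\varepsilon^2}{\sum_{i=1}^n c_i^2} \right);$$

roughly, Kutin's extension allows the bounded-differences condition to hold only with high probability rather than everywhere, which is what makes it useful for stability-based arguments.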

Collaboration


Dive into Bin Zou's collaborations.

Top Co-Authors

Zongben Xu, Xi'an Jiaotong University
Chen Xu, University of Ottawa
Xinge You, Huazhong University of Science and Technology
Hai Zhang, Xi'an Jiaotong University
Hua Han, Chinese Academy of Sciences
Rong Chen, Xi'an Jiaotong University
Tao Luo, Xi'an Jiaotong University