Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Yiming Ying is active.

Publication


Featured research published by Yiming Ying.


Foundations of Computational Mathematics | 2006

Learning Rates of Least-Square Regularized Regression

Qiang Wu; Yiming Ying; Ding-Xuan Zhou

This paper considers the regularized learning algorithm associated with the least-square loss and reproducing kernel Hilbert spaces. The target is the error analysis for the regression problem in learning theory. A novel regularization approach is presented, which yields satisfactory learning rates. The rates depend on the approximation property and on the capacity of the reproducing kernel Hilbert space measured by covering numbers. When the kernel is C^∞ and the regression function lies in the corresponding reproducing kernel Hilbert space, the rate is m^{-ζ} with ζ arbitrarily close to 1, regardless of the variance of the bounded probability distribution.
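The estimator analyzed above is the Tikhonov-regularized least-square scheme over a reproducing kernel Hilbert space, f_z = argmin_{f in H_K} (1/m) Σ_i (f(x_i) - y_i)^2 + λ ||f||_K^2. Below is a minimal NumPy sketch of this estimator with a Gaussian kernel; the kernel width, regularization parameter, and toy data are illustrative assumptions, not values from the paper.

```python
import numpy as np

def gaussian_gram(A, B, sigma=0.3):
    """Gram matrix K[i, j] = exp(-||A[i] - B[j]||^2 / (2 * sigma^2))."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def krr_fit(X, y, lam=1e-3, sigma=0.3):
    """Least-square regularized regression in the RKHS of a Gaussian kernel.

    By the representer theorem, the minimizer of
        (1/m) * sum_i (f(x_i) - y_i)^2 + lam * ||f||_K^2
    is f_z(x) = sum_i alpha_i K(x_i, x) with (K + lam * m * I) alpha = y.
    """
    m = len(X)
    K = gaussian_gram(X, X, sigma)
    return np.linalg.solve(K + lam * m * np.eye(m), y)

def krr_predict(X_train, alpha, X_new, sigma=0.3):
    return gaussian_gram(X_new, X_train, sigma) @ alpha

# Toy usage: recover a smooth regression function from noisy samples.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.standard_normal(200)
alpha = krr_fit(X, y)
X_new = np.linspace(-1.0, 1.0, 5).reshape(-1, 1)
print(krr_predict(X, alpha, X_new))
```

Because the minimizer is a kernel expansion over the sample, a single linear solve in the m x m Gram matrix suffices.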


Journal of Complexity | 2007

Multi-kernel regularized classifiers

Qiang Wu; Yiming Ying; Ding-Xuan Zhou

A family of classification algorithms generated from Tikhonov regularization schemes is considered. They involve multi-kernel spaces and general convex loss functions. Our main purpose is to provide satisfactory estimates for the excess misclassification error of these multi-kernel regularized classifiers when the loss functions achieve the zero value. The error analysis consists of two parts: regularization error and sample error. Allowing multi-kernels in the algorithm improves the regularization error and approximation error, which is one advantage of the multi-kernel setting. For a general loss function, we show how to bound the regularization error by the approximation in some weighted L^q spaces. For the sample error, we use a projection operator. The projection in connection with the decay of the regularization error enables us to improve convergence rates in the literature even for the one-kernel schemes and special loss functions: the least-square loss and the hinge loss for support vector machine soft margin classifiers. Existence of a solution to the optimization problem for the regularization scheme associated with multi-kernels is verified when the kernel functions are continuous with respect to the index set. Concrete examples, including Gaussian kernels with flexible variances and probability distributions with some noise conditions, are used to illustrate the general theory.
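To make the multi-kernel setting concrete, here is a hedged sketch that forms a convex combination of Gaussian kernels with flexible variances (one of the example families mentioned above) and trains a soft-margin SVM, i.e. the hinge-loss instance of a Tikhonov scheme, on the combined Gram matrix. The kernel weights are chosen by a coarse validation grid rather than by the paper's scheme, and all parameter values and data are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def gaussian_gram(X, Z, sigma):
    sq = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

# Candidate kernels: Gaussians with several widths (flexible variances).
sigmas = [0.1, 0.5, 1.0, 2.0]

def combined_gram(X, Z, weights):
    """Convex combination of the candidate Gaussian Gram matrices."""
    return sum(w * gaussian_gram(X, Z, s) for w, s in zip(weights, sigmas))

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 2))
y = np.sign(X[:, 0] ** 2 + X[:, 1] ** 2 - 1.0)   # nonlinear toy labels
X_tr, y_tr, X_va, y_va = X[:200], y[:200], X[200:], y[200:]

best = None
# Coarse grid over kernel weights on the simplex (for illustration only).
for w0 in np.linspace(0.0, 1.0, 5):
    for w1 in np.linspace(0.0, 1.0 - w0, 5):
        rest = (1.0 - w0 - w1) / 2.0
        w = np.array([w0, w1, rest, rest])
        clf = SVC(kernel="precomputed", C=1.0)
        clf.fit(combined_gram(X_tr, X_tr, w), y_tr)
        acc = clf.score(combined_gram(X_va, X_tr, w), y_va)
        if best is None or acc > best[0]:
            best = (acc, w)

print("validation accuracy %.3f with kernel weights %s" % best)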


International Conference on Computer Vision | 2013

Similarity Metric Learning for Face Recognition

Qiong Cao; Yiming Ying; Peng Li

Recently, a considerable amount of effort has been devoted to the problem of unconstrained face verification, where the task is to predict whether pairs of images are of the same person or not. This problem is challenging due to the large variations in face images. In this paper, we develop a novel regularization framework to learn similarity metrics for unconstrained face verification. We formulate its objective function by incorporating robustness to the large intra-personal variations and the discriminative power of novel similarity metrics. In addition, our formulation is a convex optimization problem, which guarantees the existence of a global solution. Experiments show that our proposed method achieves state-of-the-art results on the challenging Labeled Faces in the Wild (LFW) database [10].
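As a rough, simplified illustration of the similarity-metric idea (not the paper's exact formulation, which combines a similarity and a distance and builds in robustness to intra-personal variation), the sketch below learns a bilinear similarity s_M(x, y) = x^T M y on labeled pairs with a convex hinge objective and a regularizer keeping M near the identity. The loss, regularizer, step size, and toy data are assumptions for illustration.

```python
import numpy as np

def similarity(M, x, y):
    """Bilinear similarity score s_M(x, y) = x^T M y."""
    return x @ M @ y

def learn_similarity(pairs, labels, dim, lam=0.1, lr=0.05, epochs=100):
    """Subgradient descent on a convex pairwise hinge objective.

    labels are +1 for "same person" pairs and -1 for "different person" pairs;
    violations of label * s_M(x, y) >= 1 are penalized, plus a Frobenius
    regularizer (lam / 2) * ||M - I||_F^2 keeping M near the identity.
    """
    M = np.eye(dim)
    n = len(pairs)
    for _ in range(epochs):
        grad = lam * (M - np.eye(dim))
        for (x, y), t in zip(pairs, labels):
            if t * similarity(M, x, y) < 1.0:        # margin violated
                grad -= t * np.outer(x, y) / n
        M -= lr * grad
    return M

# Toy usage: "same" pairs are small perturbations, "different" pairs are random.
rng = np.random.default_rng(0)
dim = 5
pairs, labels = [], []
for _ in range(50):
    a = rng.standard_normal(dim)
    pairs.append((a, a + 0.1 * rng.standard_normal(dim)))
    labels.append(+1)
    pairs.append((a, rng.standard_normal(dim)))
    labels.append(-1)
M = learn_similarity(pairs, labels, dim)
print("same-person score:", similarity(M, *pairs[0]))
print("different-person score:", similarity(M, *pairs[1]))
```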


IEEE Transactions on Information Theory | 2006

Online Regularized Classification Algorithms

Yiming Ying; Ding-Xuan Zhou

This paper considers online classification learning algorithms based on regularization schemes in reproducing kernel Hilbert spaces associated with general convex loss functions. A novel capacity-independent approach is presented. It verifies the strong convergence of the algorithm under a very weak assumption on the step sizes and yields satisfactory convergence rates for polynomially decaying step sizes. Explicit learning rates with respect to the misclassification error are given in terms of the choice of step sizes and the regularization parameter (depending on the sample size). Error bounds associated with the hinge loss, the least-square loss, and the support vector machine q-norm loss are presented to illustrate our method.
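A hedged sketch of one instance of such an online regularized scheme is below, using the hinge loss, a fixed Gaussian kernel, and a polynomially decaying step size (all illustrative assumptions). The hypothesis is stored as a kernel expansion over the examples seen so far: each step shrinks the coefficients by the regularization term and appends a new term when the loss is active.

```python
import numpy as np

def gaussian_k(x, z, sigma=0.5):
    return np.exp(-np.sum((x - z) ** 2) / (2.0 * sigma ** 2))

def online_regularized_hinge(stream, lam=0.01, sigma=0.5):
    """Online regularized classification in an RKHS with the hinge loss.

    The hypothesis is kept as f_t = sum_i coefs[i] * K(centers[i], .).
    At step t we take a stochastic gradient step on
        hinge(y_t * f_t(x_t)) + (lam / 2) * ||f_t||_K^2
    with a decaying step size eta_t = 1 / sqrt(t).
    """
    centers, coefs = [], []
    for t, (x, y) in enumerate(stream, start=1):
        f_x = sum(c * gaussian_k(z, x, sigma) for z, c in zip(centers, coefs))
        eta = 1.0 / np.sqrt(t)                           # decaying step size
        coefs = [(1.0 - eta * lam) * c for c in coefs]   # shrink: regularization part
        if y * f_x < 1.0:                                # hinge loss is active
            centers.append(x)
            coefs.append(eta * y)
    return centers, coefs

# Toy stream: two Gaussian blobs labeled +1 / -1.
rng = np.random.default_rng(0)
stream = [(rng.standard_normal(2) + (2 * y, 0), y)
          for y in rng.choice([-1, 1], size=300)]
centers, coefs = online_regularized_hinge(stream)
x_new = np.array([2.0, 0.0])
score = sum(c * gaussian_k(z, x_new) for z, c in zip(centers, coefs))
print("predicted label:", np.sign(score))
```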


Foundations of Computational Mathematics | 2008

Online Gradient Descent Learning Algorithms

Yiming Ying; Massimiliano Pontil

This paper considers the least-square online gradient descent algorithm in a reproducing kernel Hilbert space (RKHS) without an explicit regularization term. We present a novel capacity-independent approach to derive error bounds and convergence results for this algorithm. The essential element in our analysis is the interplay between the generalization error and a weighted cumulative error which we define in the paper. We show that, although the algorithm does not involve an explicit RKHS regularization term, choosing the step sizes appropriately yields error rates competitive with those in the literature.
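A hedged sketch of the unregularized least-square online gradient descent described above, f_{t+1} = f_t - η_t (f_t(x_t) - y_t) K(x_t, ·), stored as a growing kernel expansion; the kernel, step-size exponent, and toy data are illustrative assumptions.

```python
import numpy as np

def gaussian_k(x, z, sigma=0.3):
    return np.exp(-np.sum((x - z) ** 2) / (2.0 * sigma ** 2))

def online_least_squares(stream, sigma=0.3):
    """Unregularized online gradient descent in an RKHS with least-square loss.

    Update: f_{t+1} = f_t - eta_t * (f_t(x_t) - y_t) * K(x_t, .),
    so each new example contributes one kernel expansion term.
    """
    centers, coefs = [], []
    for t, (x, y) in enumerate(stream, start=1):
        f_x = sum(c * gaussian_k(z, x, sigma) for z, c in zip(centers, coefs))
        eta = 1.0 / t ** 0.6            # polynomially decaying step size (illustrative)
        centers.append(x)
        coefs.append(-eta * (f_x - y))
    return centers, coefs

# Toy usage: learn y = sin(3x) from a stream of noisy one-dimensional samples.
rng = np.random.default_rng(1)
xs = rng.uniform(-1.0, 1.0, size=300)
stream = [(np.array([x]), np.sin(3.0 * x) + 0.1 * rng.standard_normal()) for x in xs]
centers, coefs = online_least_squares(stream)
x_new = np.array([0.5])
f_new = sum(c * gaussian_k(z, x_new) for z, c in zip(centers, coefs))
print("f(0.5) ~", f_new, " true value:", np.sin(1.5))
```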


BMC Bioinformatics | 2009

Enhanced protein fold recognition through a novel data integration approach

Yiming Ying; Kaizhu Huang; Colin Campbell

Background: Protein fold recognition is a key step in protein three-dimensional (3D) structure discovery. There are multiple fold discriminatory data sources which use physicochemical and structural properties as well as further data sources derived from local sequence alignments. This raises the issue of finding the most efficient method for combining these different informative data sources and exploring their relative significance for protein fold classification. Kernel methods have been extensively used for biological data analysis. They can incorporate separate fold discriminatory features into kernel matrices which encode the similarity between samples in their respective data sources.

Results: In this paper we consider the problem of integrating multiple data sources using a kernel-based approach. We propose a novel information-theoretic approach based on a Kullback-Leibler (KL) divergence between the output kernel matrix and the input kernel matrix so as to integrate heterogeneous data sources. One of the most appealing properties of this approach is that it can easily cope with multi-class classification and multi-task learning by an appropriate choice of the output kernel matrix. Based on the position of the output and input kernel matrices in the KL-divergence objective, there are two formulations which we respectively refer to as MKLdiv-dc and MKLdiv-conv. We propose to efficiently solve MKLdiv-dc by a difference of convex (DC) programming method and MKLdiv-conv by a projected gradient descent algorithm. The effectiveness of the proposed approaches is evaluated on a benchmark dataset for protein fold recognition and a yeast protein function prediction problem.

Conclusion: Our proposed methods MKLdiv-dc and MKLdiv-conv are able to achieve state-of-the-art performance on the SCOP PDB-40D benchmark dataset for protein fold prediction and provide useful insights into the relative significance of informative data sources. In particular, MKLdiv-dc further improves the fold discrimination accuracy to 75.19%, which is a more than 5% improvement over competitive Bayesian probabilistic and SVM margin-based kernel learning methods. Furthermore, we report a competitive performance on the yeast protein function prediction problem.
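The following is a rough sketch consistent with the abstract's description of the projected-gradient (MKLdiv-conv) direction, not the paper's exact objective or solver: the combined input kernel K(w) = Σ_j w_j K_j enters a KL divergence between zero-mean Gaussians against the output kernel, and the weights w are optimized over the probability simplex by projected gradient descent. The ridge term, step size, and toy data are illustrative assumptions.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the simplex {w : w >= 0, sum(w) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - 1.0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def mkldiv_conv_sketch(input_kernels, K_out, steps=200, lr=0.01, eps=0.1):
    """Projected gradient descent on kernel weights for a KL-divergence objective.

    Minimizes KL(N(0, K(w) + eps*I) || N(0, K_out + eps*I)) over the simplex,
    where K(w) = sum_j w_j * K_j; this direction of the divergence is convex in w.
    """
    n = K_out.shape[0]
    Ko_inv = np.linalg.inv(K_out + eps * np.eye(n))
    w = np.full(len(input_kernels), 1.0 / len(input_kernels))
    for _ in range(steps):
        Kw = sum(wj * Kj for wj, Kj in zip(w, input_kernels)) + eps * np.eye(n)
        Kw_inv = np.linalg.inv(Kw)
        grad = np.array([0.5 * (np.trace(Ko_inv @ Kj) - np.trace(Kw_inv @ Kj))
                         for Kj in input_kernels])
        w = project_simplex(w - lr * grad)
    return w

# Toy usage: three linear kernels from random feature blocks; the "output"
# kernel is the ideal label kernel y y^T, and only the first block is informative.
rng = np.random.default_rng(0)
n = 40
y = rng.choice([-1.0, 1.0], size=n)
blocks = [rng.standard_normal((n, 5)) for _ in range(3)]
blocks[0] += 0.5 * y[:, None]
input_kernels = [n * (B @ B.T) / np.trace(B @ B.T) for B in blocks]
K_out = np.outer(y, y)
print("learned kernel weights:", np.round(mkldiv_conv_sketch(input_kernels, K_out), 3))
```

The learned weights indicate the relative significance of each data source, which is the kind of insight the paper reports for the fold discriminatory data sources.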


Machine Learning | 2016

Generalization bounds for metric and similarity learning

Qiong Cao; Zheng-Chu Guo; Yiming Ying

Recently, metric learning and similarity learning have attracted a large amount of interest. Many models and optimization algorithms have been proposed. However, there is relatively little work on the generalization analysis of such methods. In this paper, we derive novel generalization bounds for metric and similarity learning. In particular, we first show that the generalization analysis reduces to the estimation of the Rademacher average over "sums-of-i.i.d." sample-blocks related to the specific matrix norm. Then, we derive generalization bounds for metric/similarity learning with different matrix-norm regularizers by estimating their specific Rademacher complexities. Our analysis indicates that sparse metric/similarity learning with L^1-norm regularization could lead to significantly better bounds than those with Frobenius-norm regularization.
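The objects these bounds control are regularized empirical risks over pairs of examples. Below is a hedged sketch of one such pairwise objective (the hinge loss, distance threshold, norms, and toy data are illustrative assumptions, not the paper's exact formulation), comparing a Frobenius-norm and an elementwise L1-norm regularizer.

```python
import numpy as np

def mahalanobis_sq(M, x, y):
    """d_M(x, y) = (x - y)^T M (x - y)."""
    d = x - y
    return d @ M @ d

def empirical_metric_risk(M, X, labels, reg="frobenius", lam=0.1):
    """Pairwise regularized empirical risk of the kind analyzed in the paper.

    Over all pairs (i, j): a hinge loss asking same-label pairs to have small
    d_M and different-label pairs to have large d_M (threshold 2 is arbitrary),
    plus a matrix-norm regularizer (Frobenius or elementwise L1).
    """
    n = len(X)
    loss = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            tau = 1.0 if labels[i] == labels[j] else -1.0
            loss += max(0.0, 1.0 + tau * (mahalanobis_sq(M, X[i], X[j]) - 2.0))
    loss *= 2.0 / (n * (n - 1))
    penalty = np.linalg.norm(M) if reg == "frobenius" else np.abs(M).sum()
    return loss + lam * penalty

# Toy usage: evaluate the identity metric on two labeled clusters.
rng = np.random.default_rng(0)
X = np.vstack([rng.standard_normal((20, 3)), rng.standard_normal((20, 3)) + 3.0])
labels = np.array([0] * 20 + [1] * 20)
M = np.eye(3)
print("Frobenius-regularized risk:", empirical_metric_risk(M, X, labels))
print("L1-regularized risk:", empirical_metric_risk(M, X, labels, reg="l1"))
```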


International Conference on Data Mining | 2009

GSML: A Unified Framework for Sparse Metric Learning

Kaizhu Huang; Yiming Ying; Colin Campbell


Journal of Mathematical Analysis and Applications | 2002

Singular integral operators on function spaces

Jiecheng Chen; Dashan Fan; Yiming Ying



Canadian Journal of Mathematics | 2003

Certain Operators with Rough Singular Kernels

Jiecheng Chen; Dashan Fan; Yiming Ying

Collaboration


Dive into Yiming Ying's collaboration.

Top Co-Authors

Ding-Xuan Zhou (City University of Hong Kong)

Peng Li (University of Bristol)

Qiang Wu (City University of Hong Kong)

Kaizhu Huang (Xi'an Jiaotong-Liverpool University)

Siwei Lyu (State University of New York System)