Network

Latest external collaborations at the country level.

Hotspot

Research topics in which Yunlong Feng is active.

Publication


Featured research published by Yunlong Feng.


Computers & Mathematics with Applications | 2011

Unified approach to coefficient-based regularized regression

Yunlong Feng; Shao-Gao Lv

In this paper, we consider the coefficient-based regularized least-squares regression problem with the $\ell^q$-regularizer ($1 \le q \le 2$) and data-dependent hypothesis spaces. Algorithms in data-dependent hypothesis spaces perform well owing to their flexibility. We conduct a unified error analysis via a stepping-stone technique, and an empirical covering number technique is also employed to improve the sample error estimate. Compared with existing results, we make several improvements: first, we obtain a significantly sharper learning rate that can be arbitrarily close to $O(m^{-1})$ under reasonable conditions, which is regarded as the best learning rate in learning theory; second, our results cover the case $q = 1$, which is novel; finally, our results hold under very general conditions.
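
As a rough illustration of the kind of estimator analyzed here (and not the paper's code), the sketch below fits a coefficient-based least-squares model in the data-dependent hypothesis space spanned by kernel sections, with an $\ell^q$ penalty on the coefficients. The Gaussian kernel, the L-BFGS solver, and the restriction to $1 < q \le 2$ (where the penalty is differentiable) are assumptions made for this example; the $q = 1$ case would instead call for a proximal, soft-thresholding solver.

```python
# Illustrative sketch (not the paper's code): coefficient-based regularized
# least-squares regression with an l^q penalty on the coefficients, learned
# in the data-dependent hypothesis space span{K(., x_i)}.
import numpy as np
from scipy.optimize import minimize

def gaussian_kernel(X, Y, sigma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def fit_lq_coefficient_regression(X, y, lam=0.1, q=1.5, sigma=1.0):
    """Minimize (1/m)||K a - y||^2 + lam * sum_i |a_i|^q for 1 < q <= 2.
    (For q = 1 one would switch to a soft-thresholding solver instead.)"""
    m = X.shape[0]
    K = gaussian_kernel(X, X, sigma)

    def objective(a):
        r = K @ a - y
        return r @ r / m + lam * np.sum(np.abs(a) ** q)

    def grad(a):
        r = K @ a - y
        return 2.0 * (K.T @ r) / m + lam * q * np.sign(a) * np.abs(a) ** (q - 1)

    res = minimize(objective, np.zeros(m), jac=grad, method="L-BFGS-B")
    return res.x, K

# Toy usage: regress y = sin(x) from noisy samples.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)
alpha, K = fit_lq_coefficient_regression(X, y)
print("training MSE:", np.mean((K @ alpha - y) ** 2))
```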


Neural Computation | 2016

Kernelized elastic net regularization: Generalization bounds, and sparse recovery

Yunlong Feng; Shaogao Lv; Hanyuan Hang; Johan A. K. Suykens

Kernelized elastic net regularization (KENReg) is a kernelization of the well-known elastic net regularization (Zou & Hastie, 2005). The kernel in KENReg is not required to be a Mercer kernel since it learns from a kernelized dictionary in the coefficient space. Feng, Yang, Zhao, Lv, and Suykens (2014) showed that KENReg has some nice properties including stability, sparseness, and generalization. In this letter, we continue our study of KENReg by conducting a refined learning theory analysis. This letter makes three main contributions. First, we present a refined error analysis of the generalization performance of KENReg. The main difficulty of analyzing the generalization error of KENReg lies in characterizing the population version of its empirical target function. We overcome this by introducing a weighted Banach space associated with the elastic net regularization. We are then able to conduct an elaborate learning theory analysis and obtain fast convergence rates under proper complexity and regularity assumptions. Second, we study the sparse recovery problem in KENReg with fixed design and show that the kernelization may improve the sparse recovery ability compared to the classical elastic net regularization. Finally, we discuss the interplay among different properties of KENReg, including sparseness, stability, and generalization. We show that the stability of KENReg leads to generalization, and its sparseness confidence can be derived from generalization. Moreover, KENReg is stable and can simultaneously be sparse, which makes it attractive both theoretically and practically.
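
For concreteness, here is a minimal sketch, not the authors' implementation, of the coefficient-space kernelized elastic net objective solved by proximal gradient (ISTA). The Laplacian-type similarity matrix, regularization parameters, and iteration count are illustrative choices; the only point carried over from the abstract is that the kernel matrix need not be positive semidefinite.

```python
# Illustrative sketch (not the authors' implementation): kernelized elastic net
# in the coefficient space, f(x) = sum_i a_i k(x, x_i), solved by proximal
# gradient (ISTA). The kernel matrix need not be positive definite here.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def kenreg_ista(K, y, lam1=0.05, lam2=0.05, n_iter=500):
    """Minimize (1/m)||K a - y||^2 + lam1*||a||_1 + lam2*||a||_2^2."""
    m = K.shape[0]
    # Lipschitz constant of the gradient of the smooth part.
    L = 2.0 * (np.linalg.norm(K, 2) ** 2) / m + 2.0 * lam2
    a = np.zeros(m)
    for _ in range(n_iter):
        grad = 2.0 * K.T @ (K @ a - y) / m + 2.0 * lam2 * a
        a = soft_threshold(a - grad / L, lam1 / L)
    return a

# Toy usage with a (possibly non-Mercer) similarity matrix.
rng = np.random.default_rng(0)
X = rng.standard_normal((40, 1))
y = np.cos(X[:, 0]) + 0.1 * rng.standard_normal(40)
K = np.exp(-np.abs(X - X.T))            # Laplacian-type kernel matrix
a = kenreg_ista(K, y)
print("nonzero coefficients:", np.count_nonzero(np.abs(a) > 1e-8))
```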


IEEE Transactions on Neural Networks | 2016

Robust Low-Rank Tensor Recovery With Regularized Redescending M-Estimator

Yuning Yang; Yunlong Feng; Johan A. K. Suykens

This paper addresses robust low-rank tensor recovery problems. Tensor recovery aims at reconstructing a low-rank tensor from linear measurements, which finds applications in image processing, pattern recognition, multitask learning, and so on. In real-world applications, data might be contaminated by sparse gross errors, yet existing approaches may not be very robust to such outliers. To resolve this problem, this paper proposes approaches based on regularized redescending M-estimators, which have been introduced in robust statistics. The robustness of the proposed approaches is achieved by the regularized redescending M-estimators. However, the nonconvexity of these estimators also leads to computational difficulty. To handle this problem, we develop algorithms based on proximal and linearized block coordinate descent methods. By explicitly deriving the Lipschitz constant of the gradient of the data-fitting risk, the descent property of the algorithms is established. Moreover, we verify that the objective functions of the proposed approaches satisfy the Kurdyka-Łojasiewicz property, which establishes the global convergence of the algorithms. Numerical experiments on synthetic as well as real data verify that our approaches are robust in the presence of outliers and remain effective in their absence.
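
A minimal sketch of the ingredients named above, shown for the matrix rather than the tensor case and not reproducing the paper's algorithm: the Welsch loss stands in as an example of a redescending M-estimator for the data fit, and the low-rank penalty is handled by a singular-value-thresholding proximal step. All parameter values are illustrative.

```python
# Illustrative sketch, not the paper's algorithm: a redescending (Welsch)
# M-estimator combined with a nuclear-norm proximal step, shown here for the
# matrix case as a simplified stand-in for low-rank tensor recovery.
import numpy as np

def welsch_grad(r, sigma=1.0):
    # Derivative of the Welsch loss (sigma^2/2)*(1 - exp(-r^2/sigma^2)):
    # the influence function redescends to 0 for large residuals.
    return r * np.exp(-(r ** 2) / sigma ** 2)

def svt(M, tau):
    # Proximal operator of tau*||.||_* (singular value thresholding).
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def robust_lowrank(Y, lam=0.5, sigma=1.0, step=1.0, n_iter=200):
    X = np.zeros_like(Y)
    for _ in range(n_iter):
        G = welsch_grad(X - Y, sigma)      # gradient of the robust data fit
        X = svt(X - step * G, step * lam)  # proximal (SVT) step
    return X

# Toy usage: a low-rank matrix corrupted by sparse gross errors.
rng = np.random.default_rng(0)
L0 = rng.standard_normal((30, 3)) @ rng.standard_normal((3, 30))
Y = L0.copy()
outliers = rng.random(Y.shape) < 0.05
Y[outliers] += 10 * rng.standard_normal(outliers.sum())
X = robust_lowrank(Y, lam=0.5, sigma=3.0)
print("relative recovery error:", np.linalg.norm(X - L0) / np.linalg.norm(L0))
```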


IEEE Signal Processing Letters | 2015

A Rank-One Tensor Updating Algorithm for Tensor Completion

Yuning Yang; Yunlong Feng; Johan A. K. Suykens

In this letter, we propose a rank-one tensor updating algorithm for solving tensor completion problems. Unlike existing methods, which penalize the tensor with the sum of nuclear norms of its unfolding matrices, our optimization model directly employs the recently studied tensor nuclear norm. Under the framework of the conditional gradient method, we show that at each iteration, solving the proposed model amounts to computing the tensor spectral norm and the related rank-one tensor. Because finding this rank-one tensor is NP-hard, we propose a subroutine that solves it approximately with low computational complexity. Experimental results on real datasets show that our algorithm is efficient and effective.
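
The following sketch mimics the conditional-gradient structure described above for third-order tensors; it is an assumption-laden illustration, not the letter's exact method. Each iteration approximates the NP-hard rank-one subproblem with a few alternating (higher-order power) iterations and then takes a standard Frank-Wolfe step; the ball radius, step-size rule, and toy data are made up for the example.

```python
# Illustrative sketch, not the letter's exact method: conditional-gradient
# (Frank-Wolfe) updates for tensor completion, where the linear subproblem is
# approximated by a rank-one tensor found with alternating power iterations.
import numpy as np

def rank1_atom(G, n_iter=30):
    """Approximately maximize <G, u o v o w> over unit vectors u, v, w
    (the exact problem is NP-hard) via alternating power iterations."""
    u = np.ones(G.shape[0]); v = np.ones(G.shape[1]); w = np.ones(G.shape[2])
    for _ in range(n_iter):
        u = np.einsum('ijk,j,k->i', G, v, w); u /= np.linalg.norm(u) + 1e-12
        v = np.einsum('ijk,i,k->j', G, u, w); v /= np.linalg.norm(v) + 1e-12
        w = np.einsum('ijk,i,j->k', G, u, v); w /= np.linalg.norm(w) + 1e-12
    return np.einsum('i,j,k->ijk', u, v, w)

def cg_completion(Y, mask, radius, n_iter=100):
    """Minimize 0.5*||mask*(X - Y)||_F^2 over a tensor-nuclear-norm-style
    ball of the given radius, using Frank-Wolfe updates with rank-one atoms."""
    X = np.zeros_like(Y)
    for k in range(n_iter):
        G = mask * (X - Y)                  # gradient of the data-fit term
        S = -radius * rank1_atom(G)         # atom minimizing <G, S> over the ball
        gamma = 2.0 / (k + 2.0)             # standard Frank-Wolfe step size
        X = (1.0 - gamma) * X + gamma * S
    return X

# Toy usage: complete a rank-one 3-way tensor from 40% of its entries.
# (For a rank-one tensor the nuclear norm equals the Frobenius norm.)
rng = np.random.default_rng(0)
T = np.einsum('i,j,k->ijk', rng.standard_normal(8),
              rng.standard_normal(9), rng.standard_normal(10))
mask = (rng.random(T.shape) < 0.4).astype(float)
X = cg_completion(mask * T, mask, radius=np.linalg.norm(T))
print("relative error on observed entries:",
      np.linalg.norm(mask * (X - T)) / np.linalg.norm(mask * T))
```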


SIAM Journal on Optimization | 2016

Rank-1 Tensor Properties with Applications to a Class of Tensor Optimization Problems

Yuning Yang; Yunlong Feng; Xiaolin Huang; Johan A. K. Suykens

This paper studies models and algorithms for a class of tensor optimization problems, based on a rank-1 equivalence property between a tensor and certain unfoldings. It is first shown that in the $d$th-order tensor space, the set of rank-1 tensors is the same as the intersection of $\lceil \log_2(d) \rceil$ tensor sets, of which tensors have a specific rank-1 balanced unfolding matrix. Moreover, the number $\lceil \log_2(d) \rceil$ …
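
As a concrete, hypothetical illustration of the balanced-unfolding idea (the paper's specific choice of $\lceil \log_2(d) \rceil$ groupings is not reproduced here), the snippet below forms two balanced unfoldings of a rank-1 fourth-order tensor and checks that each is a rank-1 matrix; the helper `balanced_unfolding` is invented for this example.

```python
# Illustrative sketch only: compute a "balanced" unfolding of a tensor (half
# of the modes mapped to rows, the rest to columns) and check whether that
# matrix has rank 1. The particular collection of ceil(log2(d)) groupings
# used in the paper is not reproduced here.
import numpy as np
from math import ceil, log2

def balanced_unfolding(T, row_modes):
    """Unfold tensor T into a matrix whose rows are indexed by `row_modes`."""
    col_modes = [m for m in range(T.ndim) if m not in row_modes]
    P = np.transpose(T, axes=list(row_modes) + col_modes)
    rows = int(np.prod([T.shape[m] for m in row_modes]))
    return P.reshape(rows, -1)

# A rank-1 fourth-order tensor: every balanced unfolding is a rank-1 matrix,
# and ceil(log2(4)) = 2 well-chosen groupings suffice to certify rank-1-ness.
rng = np.random.default_rng(0)
a, b, c, e = (rng.standard_normal(n) for n in (3, 4, 5, 6))
T = np.einsum('i,j,k,l->ijkl', a, b, c, e)

print("ceil(log2(d)) =", ceil(log2(T.ndim)))
for row_modes in ([0, 1], [0, 2]):
    M = balanced_unfolding(T, row_modes)
    print("row modes", row_modes, "-> matrix rank", np.linalg.matrix_rank(M))
```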


Applicable Analysis | 2012

Least-squares regularized regression with dependent samples and q-penalty

Yunlong Feng



Neural Computation | 2016

Robust support vector machines for classification with nonconvex and smooth losses

Yunlong Feng; Yuning Yang; Xiaolin Huang; Siamak Mehrkanoon; Johan A. K. Suykens



Neural Computation | 2016

Learning theory estimates with observations from general stationary stochastic processes

Hanyuan Hang; Yunlong Feng; Ingo Steinwart; Johan A. K. Suykens



IEEE Transactions on Neural Networks | 2016

Robust Gradient Learning With Applications

Yunlong Feng; Yuning Yang; Johan A. K. Suykens



Mathematical and Computer Modelling | 2013

Learning performance of elastic-net regularization

Yu-long Zhao; Yunlong Feng


Collaboration


Dive into Yunlong Feng's collaborations.

Top Co-Authors

Johan A. K. Suykens (Katholieke Universiteit Leuven)
Yuning Yang (Katholieke Universiteit Leuven)
Shaogao Lv (Southwestern University of Finance and Economics)
Xiaolin Huang (Shanghai Jiao Tong University)
Hanyuan Hang (University of Stuttgart)
Shao-Gao Lv (University of Science and Technology of China)
Siamak Mehrkanoon (Katholieke Universiteit Leuven)
Zahra Karevan (Katholieke Universiteit Leuven)