
Publication


Featured research published by Zhirong Yang.


International Journal of Pattern Recognition and Artificial Intelligence | 2007

Projective non-negative matrix factorization with applications to facial image processing

Zhirong Yang; Zhijian Yuan; Jorma Laaksonen

We propose a new variant of Non-negative Matrix Factorization (NMF), including its model and two optimization rules. Our method is based on positively constrained projections and is related to the conventional SVD or PCA decomposition. The new model can potentially be applied to image compression and feature extraction problems. Of the latter, we consider processing of facial images, where each image consists of several parts and for each part the observations with different lighting mainly distribute along a straight line through the origin. No regularization terms are required in the objective functions and both suggested optimization rules can easily be implemented by matrix manipulations. The experiments show that the derived base vectors are spatially more localized than those of NMF. In turn, the better part-based representations improve the recognition rate of semantic classes such as the gender or existence of mustache in the facial images.
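As a rough sketch of how a multiplicative update for this projective model can look (our own illustration, not the authors' reference implementation): for the Frobenius-norm objective ||X - W W^T X||^2 with W >= 0, splitting the gradient into its positive and negative parts gives the update below. The function name and default settings are illustrative.

    import numpy as np

    def pnmf_frobenius(X, r, n_iter=200, eps=1e-9):
        """Sketch: minimize ||X - W W^T X||_F^2 subject to W >= 0 by a
        multiplicative update obtained from the positive/negative split of
        the gradient; X is a nonnegative data matrix (columns = samples)."""
        m, _ = X.shape
        W = np.abs(np.random.rand(m, r))
        XXt = X @ X.T                       # reused in every iteration
        for _ in range(n_iter):
            XXtW = XXt @ W                  # negative part of the gradient
            numer = 2.0 * XXtW
            denom = W @ (W.T @ XXtW) + XXt @ (W @ (W.T @ W)) + eps
            W *= numer / denom              # the ratio keeps W nonnegative
        return W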


Neurocomputing | 2007

Multiplicative updates for non-negative projections

Zhirong Yang; Jorma Laaksonen

We present here how to construct multiplicative update rules for non-negative projections based on Oja's iterative learning rule. Our method integrates the multiplicative normalization factor into the original additive update rule as an additional term which generally has a roughly opposite direction. As a consequence, the modified additive learning rule can easily be converted to its multiplicative version, which maintains the non-negativity after each iteration. The derivation of our approach provides a sound interpretation of learning non-negative projection matrices based on iterative multiplicative updates, a kind of Hebbian learning with normalization. A convergence analysis is sketched by interpreting the multiplicative updates as a special case of natural gradient learning. We also demonstrate two application examples of the proposed technique, a non-negative variant of the linear Hebbian networks and a non-negative Fisher discriminant analysis, including its kernel extension. The resulting example algorithms demonstrate interesting properties for data analysis tasks in experiments performed on facial images.
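The additive-to-multiplicative recipe described above can be illustrated on a simpler problem than the paper's (this is our own minimal example, not the paper's algorithm): for nonnegative least squares, the gradient of ||x - A w||^2 splits into a positive part A^T A w and a negative part A^T x, and their ratio gives a multiplicative descent step that preserves nonnegativity.

    import numpy as np

    def multiplicative_nnls(A, x, n_iter=500, eps=1e-12):
        """Illustrative sketch: minimize ||x - A w||^2 with A, x, w >= 0
        by the multiplicative update w <- w * (A^T x) / (A^T A w)."""
        w = np.abs(np.random.rand(A.shape[1]))
        AtA, Atx = A.T @ A, A.T @ x
        for _ in range(n_iter):
            w *= Atx / (AtA @ w + eps)
        return w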


Neural Networks | 2008

2008 Special Issue: Principal whitened gradient for information geometry

Zhirong Yang; Jorma Laaksonen

We propose two strategies to improve the optimization in information geometry. First, a local Euclidean embedding is identified by whitening the tangent space, which leads to an additive parameter update sequence that approximates the geodesic flow to the optimal density model. Second, removal of the minor components of gradients enhances the estimation of the Fisher information matrix and reduces the computational cost. We also prove that dimensionality reduction is necessary for learning multidimensional linear transformations. The optimization based on the principal whitened gradients demonstrates faster and more robust convergence in simulations on unsupervised learning with synthetic data and on discriminant analysis of breast cancer data.
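Read as a formula (our own paraphrase; the paper's exact scaling and notation may differ): if \hat{G} \approx V_k \Lambda_k V_k^{\top} is a rank-k principal eigendecomposition of the estimated Fisher information at the current parameters \theta_t, then a principal whitened gradient step has the form

    \tilde{\nabla} L(\theta_t) = V_k \Lambda_k^{-1/2} V_k^{\top} \, \nabla_{\theta} L(\theta_t),
    \qquad
    \theta_{t+1} = \theta_t - \eta \, \tilde{\nabla} L(\theta_t),

so the ordinary gradient is rescaled only along the principal directions of the metric and the minor components are discarded.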


Scandinavian Conference on Image Analysis | 2007

Regularized neighborhood component analysis

Zhirong Yang; Jorma Laaksonen

Discriminative feature extraction is one of the fundamental problems in pattern recognition and signal processing. It was recently proposed that maximizing the class prediction by neighboring samples in the transformed space is an effective objective for learning a low-dimensional linear embedding of labeled data. The associated methods, Neighborhood Component Analysis (NCA) and Relevant Component Analysis (RCA), have been proven to be useful preprocessing techniques for discriminative information visualization and classification. We point out here that NCA and RCA are prone to overfitting and therefore regularization is required. NCA and RCA's failure for high-dimensional data is demonstrated in this paper by experiments in facial image processing. We also propose to incorporate a Gaussian prior into the NCA objective and obtain the Regularized Neighborhood Component Analysis (RNCA). The empirical results show that the generalization can be significantly enhanced by using the proposed regularization method.
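In code, the idea amounts to the usual NCA objective, the expected probability of picking a same-class neighbor under soft neighbor assignments, plus an L2 penalty that corresponds to the Gaussian prior on the linear map. The sketch below is our own paraphrase; the function name and the value of lam are illustrative, not taken from the paper.

    import numpy as np

    def rnca_objective(A, X, y, lam=1e-2):
        """Sketch of the regularized NCA objective to be maximized:
        A is the (d_low x D) linear map, X is (n x D), y holds class labels."""
        Z = X @ A.T                                    # project the samples
        d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
        np.fill_diagonal(d2, np.inf)                   # a point never picks itself
        P = np.exp(-d2)
        P /= P.sum(axis=1, keepdims=True)              # soft neighbor probabilities p_ij
        same = (y[:, None] == y[None, :]).astype(float)
        p_correct = (P * same).sum(axis=1)             # p_i: prob. of a same-class neighbor
        return p_correct.sum() - lam * np.sum(A ** 2)  # NCA term minus Gaussian-prior penalty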


International Conference on Pattern Recognition | 2005

Partial relevance in interactive facial image retrieval

Zhirong Yang; Jorma Laaksonen

For databases of facial images, where each subject has only a few images, the query precision of interactive retrieval suffers from the problem of extremely small class sizes. A novel method is proposed to relieve this problem by applying partial relevance to the interactive retrieval. This work extends an existing content-based image retrieval system, PicSOM, by relaxing the relevance criterion in the early rounds of the retrieval. Moreover, we apply linear discriminant analysis as a preprocessing step before training the Self-Organizing Maps (SOMs) so that the resulting SOMs have stronger discriminative power. The results of simulated retrieval experiments suggest that for semantic classes such as “black persons” or “bearded persons” the first image which depicts the target subject can be obtained three to six times faster than by retrieval without the partial relevance.
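The LDA preprocessing step mentioned above can be sketched generically as follows (our own illustration with scikit-learn; the PicSOM indexing and the SOM training themselves are not reproduced here).

    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def discriminative_features(X, y, n_components=10):
        """Project raw image features with LDA before building the retrieval
        index (e.g. training the SOMs); n_components may not exceed the
        number of classes minus one."""
        lda = LinearDiscriminantAnalysis(n_components=n_components)
        return lda.fit_transform(X, y)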


International Conference on Artificial Neural Networks | 2009

Projective Nonnegative Matrix Factorization with α-Divergence

Zhirong Yang; Erkki Oja

A new matrix factorization algorithm which combines two recently proposed nonnegative learning techniques is presented. Our new algorithm, α-PNMF, inherits the advantages of Projective Nonnegative Matrix Factorization (PNMF) for learning a highly orthogonal factor matrix. When the Kullback-Leibler (KL) divergence is generalized to α-divergence, it gives our method more flexibility in approximation. We provide multiplicative update rules for α-PNMF and present their convergence proof. The resulting algorithm is empirically verified to give a good solution by using a variety of real-world datasets. For feature extraction, α-PNMF is able to learn highly sparse and localized part-based representations of facial images. For clustering, the new method is also advantageous over Nonnegative Matrix Factorization with α-divergence and ordinary PNMF in terms of higher purity and smaller entropy.
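For reference, the α-divergence between the data matrix X and its projective approximation \hat{X} = W W^{\top} X is commonly written in the Amari/Cichocki form (the KL divergence is recovered in the limit α → 1); we quote the standard expression here, not a formula checked against the paper:

    D_{\alpha}(X \,\|\, \hat{X}) = \frac{1}{\alpha(\alpha - 1)} \sum_{ij}
        \Big( X_{ij}^{\alpha} \hat{X}_{ij}^{\,1-\alpha} - \alpha X_{ij} + (\alpha - 1) \hat{X}_{ij} \Big).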


International Symposium on Neural Networks | 2007

Approximated Geodesic Updates with Principal Natural Gradients

Zhirong Yang; Jorma Laaksonen

We propose a novel optimization algorithm which overcomes two drawbacks of Amari's natural gradient updates for information geometry. First, prewhitening the tangent vectors locally converts a Riemannian manifold to a Euclidean space so that the additive parameter update sequence approximates geodesics. Second, we prove that dimensionality reduction of natural gradients is necessary for learning multidimensional linear transformations. Removal of minor components also leads to noise reduction and better computational efficiency. The proposed method demonstrates faster and more robust convergence in the simulations on recovering a Gaussian mixture of artificial data and on discriminative learning of ionosphere data.
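A generic sketch of such a principal natural/whitened gradient step (our own code, assuming the Fisher information is estimated from per-sample gradients; names and defaults are illustrative):

    import numpy as np

    def principal_whitened_step(theta, grad, grad_samples, k, eta=0.1, eps=1e-8):
        """Estimate the Fisher information from per-sample gradients, keep only
        its k principal eigendirections, and rescale the gradient there; the
        minor components are discarded, which denoises the metric estimate and
        avoids inverting a full matrix."""
        G = grad_samples.T @ grad_samples / grad_samples.shape[0]   # empirical Fisher
        eigval, eigvec = np.linalg.eigh(G)
        idx = np.argsort(eigval)[::-1][:k]                          # top-k components
        V, lam = eigvec[:, idx], eigval[idx]
        whitened = V @ ((V.T @ grad) / np.sqrt(lam + eps))          # G^(-1/2) restricted to top-k
        return theta - eta * whitened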


Scandinavian Conference on Image Analysis | 2009

Informative Laplacian Projection

Zhirong Yang; Jorma Laaksonen

A new approach to constructing the similarity matrix for eigendecomposition on graph Laplacians is proposed. We first connect the Locality Preserving Projection method to probability density derivatives, which are then replaced by informative score vectors. This change yields a normalization factor and increases the contribution of the data pairs in low-density regions. The proposed method can be applied to both unsupervised and supervised learning. An empirical study on facial images is provided. The experimental results demonstrate that our method is advantageous for discovering statistical patterns in sparse data areas.
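For context, the Locality Preserving Projection baseline that this work modifies reduces to a generalized eigenproblem; the sketch below uses a plain Gaussian similarity matrix S, whereas the paper's contribution is precisely a more informative construction of S. This is our own baseline illustration, not the proposed method.

    import numpy as np
    from scipy.linalg import eigh

    def lpp(X, n_components=2, sigma=1.0):
        """Baseline LPP sketch (columns of X are samples): solve
        X L X^T a = lambda X D X^T a and keep the smallest eigenvectors."""
        d2 = ((X.T[:, None, :] - X.T[None, :, :]) ** 2).sum(-1)
        S = np.exp(-d2 / (2.0 * sigma ** 2))        # similarity matrix (Gaussian here)
        D = np.diag(S.sum(axis=1))
        L = D - S                                   # graph Laplacian
        A_mat = X @ L @ X.T
        B_mat = X @ D @ X.T + 1e-8 * np.eye(X.shape[0])
        _, V = eigh(A_mat, B_mat)                   # generalized eigendecomposition (ascending)
        return V[:, :n_components]                  # projection directions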


International Conference on Artificial Neural Networks | 2007

Face recognition using Parzenfaces

Zhirong Yang; Jorma Laaksonen

A novel discriminant analysis method is presented for the face recognition problem. It has been recently shown that predictive objectives based on Parzen estimation are advantageous for learning discriminative projections if the class distributions are complicated in the projected space. However, the existing algorithms based on Parzen estimators require expensive computation to obtain the gradient for optimization. We propose here an accelerating technique that reformulates the gradient and implements its computation by matrix products. Furthermore, we point out that regularization is necessary for high-dimensional face recognition problems. The discriminative objective is therefore extended by a smoothness constraint on facial images. Our Parzen Discriminant Analysis method can be trained much faster and achieves higher recognition accuracies than the compared algorithms in experiments on two widely used face databases.
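The kind of matrix-product reformulation alluded to above, evaluating all pairwise Gaussian kernel values without explicit loops, can be sketched as follows (a generic vectorization trick, not the paper's specific gradient expression):

    import numpy as np

    def gaussian_kernel_matrix(Z, sigma=1.0):
        """All pairwise squared distances via ||a-b||^2 = ||a||^2 + ||b||^2 - 2 a.b,
        so the whole Parzen kernel matrix comes from a single matrix product."""
        sq = (Z ** 2).sum(axis=1)
        d2 = sq[:, None] + sq[None, :] - 2.0 * (Z @ Z.T)
        return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma ** 2))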


International Conference on Artificial Neural Networks | 2006

A fast fixed-point algorithm for two-class discriminative feature extraction

Zhirong Yang; Jorma Laaksonen

We propose a fast fixed-point algorithm to improve the Relevant Component Analysis (RCA) in two-class cases. Using an objective function that maximizes the predictive information, our method is able to extract more than one discriminative component of data for two-class problems, which cannot be accomplished by classical Fisher's discriminant analysis. After prewhitening the data, we apply Newton's optimization method, which automatically chooses the learning rate in the iterative training of each component. The convergence of the iterative learning is quadratic, i.e. much faster than the linear convergence of gradient methods. Empirical tests presented in the paper show that feature extraction using the new method resembles RCA for low-dimensional ionosphere data and significantly outperforms the latter in efficiency for high-dimensional facial image data.
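The prewhitening step mentioned above is the standard one; a minimal sketch of it (our own, with a small regularizer added for numerical safety):

    import numpy as np

    def prewhiten(X, eps=1e-8):
        """Center the data (rows = samples) and rotate/scale it so the covariance
        becomes the identity, after which each discriminative component can be
        trained with simple Newton steps."""
        Xc = X - X.mean(axis=0, keepdims=True)
        C = Xc.T @ Xc / (Xc.shape[0] - 1)
        eigval, eigvec = np.linalg.eigh(C)
        W = eigvec @ np.diag(1.0 / np.sqrt(eigval + eps)) @ eigvec.T   # whitening matrix
        return Xc @ W, W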

Collaboration


Dive into Zhirong Yang's collaborations.

Top Co-Authors

Erkki Oja, Helsinki University of Technology
Jorma Laaksonen, Helsinki University of Technology
Zhijian Yuan, Helsinki University of Technology