Publication


Featured research published by Liansheng Zhuang.


Computer Vision and Pattern Recognition | 2012

Non-negative low rank and sparse graph for semi-supervised learning

Liansheng Zhuang; Haoyuan Gao; Zhouchen Lin; Yi Ma; Xin Zhang; Nenghai Yu

Constructing a good graph to represent data structures is critical for many important machine learning tasks such as clustering and classification. This paper proposes a novel non-negative low-rank and sparse (NNLRS) graph for semi-supervised learning. The weights of edges in the graph are obtained by seeking a nonnegative low-rank and sparse matrix that represents each data sample as a linear combination of others. The so-obtained NNLRS-graph can capture both the global mixture-of-subspaces structure (by the low-rankness) and the locally linear structure (by the sparseness) of the data, and hence is both generative and discriminative. We demonstrate the effectiveness of the NNLRS-graph in semi-supervised classification and discriminative analysis. Extensive experiments testify to the significant advantages of the NNLRS-graph over graphs obtained through conventional means.
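The core step the abstract describes is a self-representation: each sample is written as a nonnegative, sparse combination of the others, and the coefficients become edge weights. The sketch below (function and parameter names are illustrative) solves only the sparse, nonnegative part with projected ISTA; the full NNLRS model also carries a nuclear-norm (low-rank) term and is solved in the paper with an ADMM-style method.

```python
import numpy as np

def nn_sparse_graph(X, lam=0.1, n_iter=500):
    """Simplified sketch of the self-representation step behind NNLRS.

    Approximately solves  min_Z ||X - X Z||_F^2 + lam * ||Z||_1
    subject to Z >= 0 and diag(Z) = 0, via projected ISTA.
    X holds one data sample per column.
    """
    d, n = X.shape
    Z = np.zeros((n, n))
    # Lipschitz constant of the gradient 2 X^T (X Z - X) is 2 * ||X^T X||_2.
    L = 2.0 * np.linalg.norm(X.T @ X, 2)
    t = 1.0 / L
    for _ in range(n_iter):
        grad = 2.0 * X.T @ (X @ Z - X)
        # Nonnegative soft-thresholding: prox of lam*||.||_1 under Z >= 0.
        Z = np.maximum(Z - t * grad - t * lam, 0.0)
        np.fill_diagonal(Z, 0.0)  # forbid the trivial self-representation Z = I
    # Symmetrize the coefficients to obtain undirected edge weights.
    W = 0.5 * (Z + Z.T)
    return W
```

The resulting W can be fed to any graph-based semi-supervised learner; adding the low-rank term is what lets NNLRS also capture the global subspace mixture.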


Computer Vision and Pattern Recognition | 2013

Single-Sample Face Recognition with Image Corruption and Misalignment via Sparse Illumination Transfer

Liansheng Zhuang; Allen Y. Yang; Zihan Zhou; Shankar Sastry; Yi Ma

Single-sample face recognition is one of the most challenging problems in face recognition. We propose a novel face recognition algorithm to address this problem based on a sparse representation based classification (SRC) framework. The new algorithm is robust to image misalignment and pixel corruption, and is able to reduce the required training images to one sample per class. To compensate for the missing illumination information typically provided by multiple training images, a sparse illumination transfer (SIT) technique is introduced. The SIT algorithm seeks additional illumination examples of face images from one or more additional subject classes, and forms an illumination dictionary. By enforcing a sparse representation of the query image, the method can recover and transfer the pose and illumination information from the alignment stage to the recognition stage. Our extensive experiments have demonstrated that the new algorithms significantly outperform the existing algorithms in the single-sample regime and under fewer restrictions. In particular, the face alignment accuracy is comparable to that of the well-known Deformable SRC algorithm using multiple training images, and the face recognition accuracy exceeds those of the SRC and Extended SRC algorithms using hand-labeled alignment initialization.
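The SRC framework the abstract builds on has a simple core: code the query sparsely over the gallery, then assign the class whose atoms reconstruct it best. A minimal sketch of that core follows (names such as `src_classify` are illustrative; the paper's full pipeline additionally handles misalignment and the illumination dictionary, which are omitted here):

```python
import numpy as np

def src_classify(query, D, labels, lam=0.05, n_iter=500):
    """Sketch of sparse representation-based classification (SRC).

    Codes the query over the gallery dictionary D (one image per column,
    columns assumed unit-norm) with ISTA for the lasso problem
        min_a ||query - D a||_2^2 + lam * ||a||_1,
    then assigns the class whose coefficients best reconstruct the query.
    """
    n = D.shape[1]
    a = np.zeros(n)
    L = 2.0 * np.linalg.norm(D.T @ D, 2)
    t = 1.0 / L
    for _ in range(n_iter):
        grad = 2.0 * D.T @ (D @ a - query)
        u = a - t * grad
        a = np.sign(u) * np.maximum(np.abs(u) - t * lam, 0.0)  # soft threshold
    # Class-wise residuals: keep only the coefficients of one class at a time.
    classes = np.unique(labels)
    residuals = []
    for c in classes:
        a_c = np.where(labels == c, a, 0.0)
        residuals.append(np.linalg.norm(query - D @ a_c))
    return classes[int(np.argmin(residuals))]
```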


International Journal of Computer Vision | 2015

Neither Global Nor Local: Regularized Patch-Based Representation for Single Sample Per Person Face Recognition

Shenghua Gao; Kui Jia; Liansheng Zhuang; Yi Ma

This paper presents a regularized patch-based representation for single sample per person face recognition. We represent each image by a collection of patches and seek their sparse representations under the gallery image patches and the intra-class variance dictionaries simultaneously. By imposing a group sparsity constraint on the reconstruction coefficients corresponding to the gallery patches, and a sparsity constraint on those corresponding to the intra-class variance dictionaries, our formulation harvests the advantages of both patch-based and global image representations: it overcomes the side effect of patches that are severely corrupted by facial variances, while enforcing the less discriminative patches to be reconstructed from the gallery patches of the correct person. Moreover, instead of using manually designed intra-class variance dictionaries, we propose to learn them, which not only greatly accelerates prediction on probe images but also improves face recognition accuracy in the single sample per person scenario. Experimental results on the AR, Extended Yale B, CMU-PIE, and LFW datasets show that our method outperforms sparse-coding-based face recognition methods as well as other methods specially designed for single sample per person face representation, achieving the best performance. These encouraging results demonstrate the effectiveness of regularized patch-based face representation for single sample per person face recognition.
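The intuition behind a patch-based representation is that each patch can be matched against the gallery independently, so a few corrupted patches cannot dominate the decision. The toy sketch below shows only that patch-voting intuition with nearest-neighbor matching; the paper instead codes all patches jointly with group sparsity and an intra-class variance dictionary (helper names like `patchify` are our own):

```python
import numpy as np

def patchify(img, p):
    """Split an (H, W) image into non-overlapping p x p patches, flattened."""
    H, W = img.shape
    patches = img[:H - H % p, :W - W % p].reshape(H // p, p, W // p, p)
    return patches.transpose(0, 2, 1, 3).reshape(-1, p * p)

def patch_vote_classify(query, gallery, labels, p=4):
    """Each query patch votes for the class of its nearest gallery patch at
    the same spatial location; the image takes the majority vote."""
    q = patchify(query, p)
    g = np.stack([patchify(im, p) for im in gallery])  # (n_gallery, n_patch, p*p)
    votes = []
    for i in range(q.shape[0]):
        dists = np.linalg.norm(g[:, i, :] - q[i], axis=1)
        votes.append(labels[int(np.argmin(dists))])
    vals, counts = np.unique(votes, return_counts=True)
    return vals[int(np.argmax(counts))]
```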


IEEE Transactions on Image Processing | 2015

Constructing a Nonnegative Low-Rank and Sparse Graph With Data-Adaptive Features

Liansheng Zhuang; Shenghua Gao; Jinhui Tang; Jingjing Wang; Zhouchen Lin; Yi Ma; Nenghai Yu

This paper aims at constructing a good graph to discover the intrinsic data structures under a semisupervised learning setting. First, we propose to build a nonnegative low-rank and sparse (referred to as NNLRS) graph for the given data representation. In particular, the weights of edges in the graph are obtained by seeking a nonnegative low-rank and sparse reconstruction coefficient matrix that represents each data sample as a linear combination of others. The so-obtained NNLRS-graph captures both the global mixture-of-subspaces structure (by the low-rankness) and the locally linear structure (by the sparseness) of the data, hence it is both generative and discriminative. Second, as good features are extremely important for constructing a good graph, we propose to learn the data embedding matrix and construct the graph simultaneously within one framework, termed NNLRS with embedded features (NNLRS-EF). Extensive experiments on three publicly available data sets demonstrate that the proposed method outperforms the state-of-the-art graph construction methods by a large margin for both semisupervised classification and discriminative analysis, which verifies the effectiveness of our proposed method.


International Journal of Computer Vision | 2015

Sparse Illumination Learning and Transfer for Single-Sample Face Recognition with Image Corruption and Misalignment

Liansheng Zhuang; Tsung-Han Chan; Allen Y. Yang; Shankar Sastry; Yi Ma

Single-sample face recognition is one of the most challenging problems in face recognition. We propose a novel algorithm to address this problem based on a sparse representation based classification (SRC) framework. The new algorithm is robust to image misalignment and pixel corruption, and is able to reduce the required gallery images to one sample per class. To compensate for the missing illumination information traditionally provided by multiple gallery images, a sparse illumination learning and transfer (SILT) technique is introduced. The illumination in SILT is learned by fitting illumination examples of auxiliary face images from one or more additional subjects with a sparsely-used illumination dictionary. By enforcing a sparse representation of the query image in the illumination dictionary, SILT can effectively recover and transfer the illumination and pose information from the alignment stage to the recognition stage. Our extensive experiments have demonstrated that the new algorithms significantly outperform the state of the art in the single-sample regime and under fewer restrictions. In particular, the single-sample face alignment accuracy is comparable to that of the well-known Deformable SRC algorithm using multiple gallery images per class. Furthermore, the face recognition accuracy exceeds those of the SRC and Extended SRC algorithms using hand-labeled alignment initialization.
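The transfer idea can be illustrated very simply: illumination variation observed on auxiliary subjects is captured as deviations from their mean appearance, and those deviations form atoms that can relight a new subject's single gallery image. This is only the intuition; the paper learns the dictionary with a sparsity prior rather than taking raw deviations, and the function names below are ours:

```python
import numpy as np

def illumination_dictionary(aux_images):
    """Toy sketch of building an illumination dictionary from auxiliary images
    of one subject under different lightings: each atom is the deviation of
    one image from the subject's mean face."""
    A = np.stack([im.ravel() for im in aux_images], axis=1)  # pixels x images
    mean_face = A.mean(axis=1, keepdims=True)
    B = A - mean_face  # one illumination atom per auxiliary image
    return B

def relight(gallery_face, B, s):
    """Transfer illumination: add a (typically sparse) combination of atoms."""
    return gallery_face.ravel() + B @ s
```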


Asian Conference on Computer Vision | 2014

Unsupervised Feature Learning for RGB-D Image Classification

I-Hong Jhuo; Shenghua Gao; Liansheng Zhuang; D. T. Lee; Yi Ma

Motivated by the success of Deep Neural Networks in computer vision, we propose a deep Regularized Reconstruction Independent Component Analysis network (R²ICA) for RGB-D image classification. In each layer of this network, we use R²ICA as the basic building block to determine the relationship between the gray-scale and depth images corresponding to the same object or scene. By applying the commonly used local contrast normalization and spatial pooling, we gradually make the network resilient to local variance, resulting in a robust image representation for RGB-D image classification. Moreover, compared with conventional handcrafted feature-based RGB-D image representations, the proposed deep R²ICA is a feedforward network, and hence is more efficient for image representation. Experimental results on three publicly available RGB-D datasets demonstrate that the proposed method consistently outperforms the state-of-the-art conventional, manually designed RGB-D image representations, confirming its effectiveness for RGB-D image classification.
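The feedforward pipeline the abstract describes, filtering plus local contrast normalization plus spatial pooling over both modalities, can be sketched as below. The filter bank W stands in for the RICA-learned filters (filter learning itself is omitted), and `rgbd_features` is an illustrative name:

```python
import numpy as np

def rgbd_features(gray, depth, W, p=4):
    """Feedforward sketch of one R2ICA-style layer for a gray/depth pair:
    patchify, contrast-normalize, filter, then sum-pool over the image."""
    def layer(img):
        H, Wd = img.shape
        patches = img[:H - H % p, :Wd - Wd % p].reshape(H // p, p, Wd // p, p)
        patches = patches.transpose(0, 2, 1, 3).reshape(-1, p * p)
        # Local contrast normalization: zero mean, unit norm per patch.
        patches = patches - patches.mean(axis=1, keepdims=True)
        norms = np.linalg.norm(patches, axis=1, keepdims=True)
        patches = patches / np.maximum(norms, 1e-8)
        responses = np.abs(patches @ W.T)  # rectified filter responses
        return responses.sum(axis=0)       # sum-pool over spatial locations
    # Concatenate the two modalities into one image descriptor.
    return np.concatenate([layer(gray), layer(depth)])
```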


Neurocomputing | 2013

Regularized Semi-Supervised Latent Dirichlet Allocation for visual concept learning

Liansheng Zhuang; Haoyuan Gao; Jiebo Luo; Zhouchen Lin

Topic models are popular tools for visual concept learning. Most topic models are either unsupervised or fully supervised. In this paper, to take advantage of both limited labeled training images and rich unlabeled images, we propose a novel regularized Semi-Supervised Latent Dirichlet Allocation (r-SSLDA) for learning visual concept classifiers. Instead of introducing a new, complex topic model, we attempt to find an efficient way to learn topic models in a semi-supervised manner. Our r-SSLDA considers both semi-supervised properties and the supervised topic model simultaneously in a regularization framework. Furthermore, to improve the performance of r-SSLDA, we introduce the low-rank graph into the framework. Experiments on Caltech 101 and Caltech 256 have shown that r-SSLDA outperforms unsupervised LDA and achieves performance competitive with fully supervised LDA while using far fewer labeled images.


Neurocomputing | 2016

Locality-preserving low-rank representation for graph construction from nonlinear manifolds

Liansheng Zhuang; Jingjing Wang; Zhouchen Lin; Allen Y. Yang; Yi Ma; Nenghai Yu

Building a good graph to represent data structure is important in many computer vision and machine learning tasks such as recognition and clustering. This paper proposes a novel method to learn an undirected graph from a mixture of nonlinear manifolds via Locality-Preserving Low-Rank Representation (L²R²), which extends the original LRR model from linear subspaces to nonlinear manifolds. By enforcing a locality-preserving sparsity constraint on the LRR model, L²R² guarantees that each sample's linear representation is nonzero only in a local neighborhood of the data point, and thus preserves the intrinsic geometric structure of the manifolds. Its numerical solution results in a constrained convex optimization problem with linear constraints. We further apply a linearized alternating direction method to solve the problem. We have conducted extensive experiments to benchmark its performance against six state-of-the-art algorithms. Using nonlinear manifold clustering and semi-supervised classification on images as examples, the proposed method significantly outperforms the existing methods, and is also robust to moderate data noise and outliers.
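The locality-preserving constraint can be made concrete: each sample is represented using only its k nearest neighbors, so its coefficients vanish outside a local neighborhood and the manifold's local geometry is respected. The sketch below solves independent local least-squares problems to show just that idea; the full L²R² model additionally imposes low-rankness on the whole coefficient matrix and is solved with a linearized alternating direction method (function name is ours):

```python
import numpy as np

def local_representation_graph(X, k=3):
    """Locality-only sketch: represent each sample (one per column of X)
    with its k nearest neighbors via least squares, then symmetrize the
    absolute coefficients into undirected edge weights."""
    d, n = X.shape
    Z = np.zeros((n, n))
    for i in range(n):
        dist = np.linalg.norm(X - X[:, [i]], axis=0)
        dist[i] = np.inf  # exclude the point itself
        nbrs = np.argsort(dist)[:k]
        coeffs, *_ = np.linalg.lstsq(X[:, nbrs], X[:, i], rcond=None)
        Z[nbrs, i] = coeffs  # zero outside the local neighborhood
    W = 0.5 * (np.abs(Z) + np.abs(Z.T))
    return W
```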


International Conference on Image and Graphics | 2011

Semi-supervised Classification via Low Rank Graph

Liansheng Zhuang; Haoyuan Gao; Jingjing Huang; Nenghai Yu

The graph plays a very important role in graph-based semi-supervised learning (SSL) methods. However, most current graph construction methods emphasize local properties of the graph. In this paper, inspired by advances in compressive sensing, we present a novel method to construct a so-called low-rank graph (LR-graph) for graph-based SSL methods. Assuming that the graph is sparse and low-rank, our proposed method uses both the local property and the global property of the graph, and thus is better at capturing the global structure of all data. Compared with current graphs, the LR-graph is more informative and discriminative, and robust to outliers. Experiments on generic object recognition show that the LR-graph achieves state-of-the-art performance for graph-based SSL methods.
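Once any such graph is built, the SSL step itself is standard label propagation: the few known labels diffuse over the edges until every node is assigned. A minimal sketch using the common closed form F = (I - αS)⁻¹Y with the symmetrically normalized affinity S (any graph, including the LR-graph, can be plugged in as W; the function name is ours):

```python
import numpy as np

def propagate_labels(W, Y, alpha=0.9):
    """Graph-based SSL sketch: propagate known labels over graph W.

    W is an n x n symmetric affinity matrix; Y is n x c with one-hot rows
    for labeled samples and all-zero rows for unlabeled ones. Returns the
    predicted class index for every node.
    """
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # D^{-1/2} W D^{-1/2}
    n = W.shape[0]
    F = np.linalg.solve(np.eye(n) - alpha * S, Y)  # closed-form diffusion
    return F.argmax(axis=1)
```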


IEEE Transactions on Image Processing | 2017

Label Information Guided Graph Construction for Semi-Supervised Learning

Liansheng Zhuang; Zihan Zhou; Shenghua Gao; Jingwen Yin; Zhouchen Lin; Yi Ma

In the literature, most existing graph-based semi-supervised learning methods only use the label information of observed samples in the label propagation stage, while ignoring such valuable information when learning the graph. In this paper, we argue that it is beneficial to consider the label information in the graph learning stage. Specifically, by enforcing the weight of edges between labeled samples of different classes to be zero, we explicitly incorporate the label information into state-of-the-art graph learning methods, such as the low-rank representation (LRR), and propose a novel semi-supervised graph learning method called semi-supervised low-rank representation. This results in a convex optimization problem with linear constraints, which can be solved by the linearized alternating direction method. Though we take LRR as an example, our proposed method is in fact very general and can be applied to any self-representation graph learning method. Experimental results on both synthetic and real data sets demonstrate that the proposed graph learning method can better capture the global geometric structure of the data, and therefore is more effective for semi-supervised learning tasks.
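The constraint itself is easy to picture: any edge connecting two labeled samples from different classes is forced to zero. The sketch below applies that cannot-link mask to an already-built graph; in the paper the constraint is imposed inside the graph learning optimization rather than as a post-hoc mask (function name is ours):

```python
import numpy as np

def apply_label_constraints(W, labels):
    """Zero out edges between labeled samples of different classes.

    labels[i] >= 0 gives the class of a labeled sample; labels[i] = -1
    marks an unlabeled sample, whose edges are left untouched.
    """
    W = W.copy()
    labeled = np.where(labels >= 0)[0]
    for i in labeled:
        for j in labeled:
            if labels[i] != labels[j]:
                W[i, j] = 0.0  # cannot-link: different known classes
    return W
```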

Collaboration


Dive into Liansheng Zhuang's collaboration.

Top Co-Authors

Nenghai Yu, University of Science and Technology of China
Yi Ma, ShanghaiTech University
Wei Zhou, University of Science and Technology of China
Jingjing Wang, University of Science and Technology of China
Shenghua Gao, ShanghaiTech University
Haoyuan Gao, University of Science and Technology of China
Yangchun Qian, University of Science and Technology of China
Ketan Tang, University of Science and Technology of China
Allen Y. Yang, University of California