Publication


Featured research published by Guangcan Liu.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2013

Robust Recovery of Subspace Structures by Low-Rank Representation

Guangcan Liu; Zhouchen Lin; Shuicheng Yan; Ju Sun; Yong Yu; Yi Ma

In this paper, we address the subspace clustering problem. Given a set of data samples (vectors) approximately drawn from a union of multiple subspaces, our goal is to cluster the samples into their respective subspaces and remove possible outliers as well. To this end, we propose a novel objective function named Low-Rank Representation (LRR), which seeks the lowest-rank representation among all the candidates that can represent the data samples as linear combinations of the bases in a given dictionary. It is shown that the convex program associated with LRR solves the subspace clustering problem in the following sense: when the data are clean, we prove that LRR exactly recovers the true subspace structures; when the data are contaminated by outliers, we prove that under certain conditions LRR can exactly recover the row space of the original data and detect the outliers as well; for data corrupted by arbitrary sparse errors, LRR can also approximately recover the row space with theoretical guarantees. Since the subspace membership is provably determined by the row space, these results further imply that LRR can perform robust subspace clustering and error correction in an efficient and effective way.
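For reference, the convex program the abstract describes is commonly written as follows; the notation (X for the data matrix, A for the dictionary, often A = X, and λ for the trade-off weight) is our paraphrase of standard LRR usage rather than a quotation from the paper:

```latex
\min_{Z,\,E} \; \|Z\|_{*} + \lambda \|E\|_{2,1}
\quad \text{s.t.} \quad X = AZ + E
```

The minimizer Z* yields an affinity matrix (e.g., |Z*| + |Z*|^T) on which spectral clustering recovers the subspace memberships.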


International Conference on Computer Vision | 2011

Latent Low-Rank Representation for subspace segmentation and feature extraction

Guangcan Liu; Shuicheng Yan

Low-Rank Representation (LRR) [16, 17] is an effective method for exploring the multiple subspace structures of data. Usually, the observed data matrix itself is chosen as the dictionary, which is a key aspect of LRR. However, such a strategy may degrade performance, especially when the observations are insufficient and/or grossly corrupted. In this paper, we therefore propose to construct the dictionary using both observed and unobserved, hidden data. We show that the effects of the hidden data can be approximately recovered by solving a nuclear-norm minimization problem, which is convex and can be solved efficiently. The resulting formulation, called Latent Low-Rank Representation (LatLRR), seamlessly integrates subspace segmentation and feature extraction into a unified framework, and thus provides a solution to both problems. As a subspace segmentation algorithm, LatLRR is an enhanced version of LRR and outperforms the state-of-the-art algorithms. As an unsupervised feature extraction algorithm, LatLRR robustly extracts salient features from corrupted data, and thus performs much better than the benchmark that uses the original data vectors as features for classification. Compared to dimensionality-reduction-based methods, LatLRR is also more robust to noise.
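The program solved by LatLRR is usually stated as below; again, the notation is our paraphrase (Z captures the subspace affinities, L the feature-extraction projection, E the sparse corruptions):

```latex
\min_{Z,\,L,\,E} \; \|Z\|_{*} + \|L\|_{*} + \lambda \|E\|_{1}
\quad \text{s.t.} \quad X = XZ + LX + E
```

The term XZ drives subspace segmentation, while LX recovers salient features from each data vector.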


IEEE Transactions on Image Processing | 2012

Saliency Detection by Multitask Sparsity Pursuit

Congyan Lang; Guangcan Liu; Jian Yu; Shuicheng Yan

This paper addresses the problem of detecting salient areas within natural images. We mainly study the problem under the unsupervised setting, i.e., saliency detection without learning from labeled images. A multitask sparsity pursuit solution is proposed to integrate multiple types of features for detecting saliency collaboratively. Given an image described by multiple features, its saliency map is inferred by seeking the consistently sparse elements from the joint decompositions of multiple feature matrices into pairs of low-rank and sparse matrices. The inference process is formulated as a constrained nuclear-norm and ℓ2,1-norm minimization problem, which is convex and can be solved efficiently with an augmented Lagrange multiplier method. Compared with previous methods, which usually make use of multiple features by combining the saliency maps obtained from individual features, the proposed method seamlessly integrates multiple features to jointly produce the saliency map within a single inference step, and thus produces more accurate and reliable results. Beyond the unsupervised setting, the proposed method can also be generalized to incorporate top-down priors obtained from a supervised environment. Extensive experiments validate its superiority over other state-of-the-art methods.
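A sketch of the joint decomposition described above, in our own notation (X_i is the matrix of the i-th feature type, S stacks the per-feature sparse parts, and the ℓ2,1 norm enforces the consistent sparsity pattern); the paper's exact constraints may differ in detail:

```latex
\min_{\{Z_i\},\,\{S_i\}} \; \sum_{i=1}^{K} \|Z_i\|_{*} + \lambda \|S\|_{2,1}
\quad \text{s.t.} \quad X_i = X_i Z_i + S_i,\; i = 1,\dots,K,
\qquad S = [S_1;\,\dots;\,S_K]
```

Columns of S with large ℓ2 norm across all feature types are the consistently sparse elements and mark the salient regions.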


International Conference on Computer Vision | 2011

Multi-task low-rank affinity pursuit for image segmentation

Bin Cheng; Guangcan Liu; Jingdong Wang; Zhongyang Huang; Shuicheng Yan

This paper investigates how to boost region-based image segmentation by pursuing a new solution for fusing multiple types of image features. A collaborative image segmentation framework, called multi-task low-rank affinity pursuit, is presented for this purpose. Given an image described by multiple types of features, we aim to infer a unified affinity matrix that implicitly encodes the segmentation of the image. This is achieved by seeking the sparsity-consistent low-rank affinities from the joint decompositions of multiple feature matrices into pairs of sparse and low-rank matrices, the latter of which is expressed as the product of the image feature matrix and its corresponding affinity matrix. The inference process is formulated as a constrained nuclear-norm and ℓ2,1-norm minimization problem, which is convex and can be solved efficiently with the augmented Lagrange multiplier method. Compared to previous methods, which are usually based on a single type of feature, the proposed method seamlessly integrates multiple types of features to jointly produce the affinity matrix within a single inference step, and produces more accurate and reliable segmentation results. Experiments on the MSRC dataset and the Berkeley segmentation dataset validate the superiority of using multiple features over a single feature, as well as the superiority of our method over conventional methods for feature fusion. Moreover, our method is very competitive compared with other state-of-the-art methods.
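The augmented Lagrange multiplier solvers mentioned in this and the previous abstracts reduce to repeated applications of two proximal operators: singular value thresholding for the nuclear norm and column-wise shrinkage for the ℓ2,1 norm. A minimal NumPy sketch of those two building blocks (function names are ours):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of
    tau * ||.||_* evaluated at M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def prox_l21(M, tau):
    """Column-wise shrinkage: the proximal operator of
    tau * ||.||_{2,1} (sum of column l2 norms) evaluated at M."""
    norms = np.linalg.norm(M, axis=0)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return M * scale
```

Each ALM iteration alternates these prox steps with a Lagrange multiplier update until the constraint residual is small.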


IEEE Transactions on Image Processing | 2012

Inductive Robust Principal Component Analysis

Bing-Kun Bao; Guangcan Liu; Changsheng Xu; Shuicheng Yan

In this paper, we address the error correction problem, that is, to uncover the low-dimensional subspace structure from high-dimensional observations that are possibly corrupted by errors. When the errors follow a Gaussian distribution, principal component analysis (PCA) can find the optimal (in terms of least-squares error) low-rank approximation to high-dimensional data. However, the canonical PCA method is known to be extremely fragile to the presence of gross corruptions. Recently, Wright et al. established the robust principal component analysis (RPCA) method, which handles grossly corrupted data well. However, RPCA is a transductive method and does not handle well new samples that are not involved in the training procedure. Given a new datum, RPCA essentially needs to recompute over all the data, resulting in high computational cost. RPCA is therefore inappropriate for applications that require fast online computation. To overcome this limitation, we propose an inductive robust principal component analysis (IRPCA) method. Given a set of training data, unlike RPCA, which targets recovering the original data matrix, IRPCA aims at learning the underlying projection matrix, which can then be used to efficiently remove the possible corruptions in any datum. The learning is done by solving a nuclear-norm regularized minimization problem, which is convex and can be solved in polynomial time. Extensive experiments on a benchmark human face dataset and two video surveillance datasets show that IRPCA is not only robust to gross corruptions but also handles new data well and efficiently.
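The nuclear-norm regularized program IRPCA learns is usually stated as below; the notation (P for the projection matrix, E for the sparse corruptions) is our paraphrase:

```latex
\min_{P,\,E} \; \|P\|_{*} + \lambda \|E\|_{1}
\quad \text{s.t.} \quad X = PX + E
```

Once P is learned, a new datum x is cleaned inductively by the single matrix-vector product Px, which is what makes the method suitable for fast online use.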


Computer Vision and Pattern Recognition | 2012

Street-to-shop: Cross-scenario clothing retrieval via parts alignment and auxiliary set

Si Liu; Zheng Song; Guangcan Liu; Changsheng Xu; Hanqing Lu; Shuicheng Yan

In this paper, we address a practical problem of cross-scenario clothing retrieval: given a daily human photo captured in a general environment, e.g., on the street, find similar clothing in online shops, where the photos are captured more professionally and with clean backgrounds. There are large discrepancies between the daily photo scenario and the online shopping scenario. We first propose to alleviate the human pose discrepancy by locating 30 human parts with a well-trained human detector. Then, based on part features, we propose a two-step calculation to obtain more reliable one-to-many similarities between the query daily photo and online shopping photos: 1) the within-scenario one-to-many similarities between a query daily photo and the auxiliary set are derived by direct sparse reconstruction; and 2) through a cross-scenario many-to-many similarity transfer matrix inferred offline from an extra auxiliary set and the online shopping set, the reliable cross-scenario one-to-many similarities between the query daily photo and all online shopping photos are obtained. We collect a large online shopping dataset and a daily photo dataset, both of which are thoroughly labeled with 15 clothing attributes via Mechanical Turk. Extensive experimental evaluations on the collected datasets demonstrate the effectiveness of the proposed framework for cross-scenario clothing retrieval.
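A hypothetical sketch of the two-step calculation, to make the data flow concrete; the function and variable names are ours, and the sparse reconstruction is illustrated with an off-the-shelf lasso solver rather than the paper's own solver:

```python
import numpy as np
from sklearn.linear_model import Lasso

def cross_scenario_similarities(q, A, T, alpha=0.01):
    """Two-step one-to-many similarity sketch (names hypothetical).

    q: part-feature vector of the query daily photo, shape (d,)
    A: auxiliary-set daily photos as columns, shape (d, m)
    T: offline many-to-many transfer matrix, shape (m, n_shop)
    """
    # Step 1: within-scenario similarities via sparse reconstruction of
    # the query over the auxiliary set.
    lasso = Lasso(alpha=alpha, positive=True, fit_intercept=False)
    lasso.fit(A, q)
    a = lasso.coef_  # one-to-many similarities to the auxiliary set
    # Step 2: transfer the similarities into the shopping scenario.
    return a @ T     # one-to-many similarities to all shop photos
```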


IEEE Transactions on Image Processing | 2009

Radon Representation-Based Feature Descriptor for Texture Classification

Guangcan Liu; Zhouchen Lin; Yong Yu

In this paper, we aim to handle the intraclass variation resulting from geometric transformations and illumination changes, for more robust texture classification. To this end, we propose a novel feature descriptor called the Radon representation-based feature descriptor (RRFD). RRFD converts the original pixel-represented images into Radon-pixel images by using the Radon transform. The new Radon-pixel representation is more informative in geometry and has a much lower dimension. Subsequently, RRFD efficiently achieves affine invariance by projecting an image (or an image patch) from the space of Radon-pixel pairs onto an invariant feature space by using a ratiogram, i.e., the histogram of ratios between the areas of triangle pairs. Illumination invariance is also achieved by defining an illumination-invariant distance metric on the invariant feature space. Compared with existing Radon-transform-based texture features, which only achieve rotation and/or scaling invariance, RRFD achieves affine invariance. The experimental results on CUReT show that RRFD is a powerful feature descriptor that is well suited to texture classification.
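The first stage of the pipeline, mapping pixels into the Radon domain, can be reproduced with a standard Radon transform; a minimal sketch using scikit-image (the ratiogram and invariant-projection stages are omitted, and the helper name is ours):

```python
import numpy as np
from skimage.transform import radon

def radon_pixels(image, n_angles=64):
    """Sinogram of a grayscale image: rows index projection positions,
    columns index projection angles (in degrees)."""
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    return radon(image, theta=theta, circle=False)
```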


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2010

Unsupervised Object Segmentation with a Hybrid Graph Model (HGM)

Guangcan Liu; Zhouchen Lin; Yong Yu; Xiaoou Tang

In this work, we address the problem of performing class-specific unsupervised object segmentation, i.e., automatic segmentation without annotated training images. Object segmentation can be regarded as a special data clustering problem where both class-specific information and local texture/color similarities have to be considered. To this end, we propose a hybrid graph model (HGM) that can make effective use of both the symmetric and the asymmetric relationships among samples. The vertices of a hybrid graph represent the samples and are connected by directed edges and/or undirected ones, which represent the asymmetric and symmetric relationships between them, respectively. When applied to object segmentation, the vertices are superpixels, the asymmetric relationship is the conditional dependence of occurrence, and the symmetric relationship is the color/texture similarity. By combining the Markov chain formed by the directed subgraph with the minimal cut of the undirected subgraph, the object boundaries can be determined for each image. Using the HGM, we can conveniently achieve simultaneous segmentation and recognition by integrating both top-down and bottom-up information into a unified process. Experiments on 42 object classes (9,415 images in total) show promising results.
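The directed-subgraph half of the model is an ordinary Markov chain over superpixels; its stationary distribution scores how strongly each superpixel is tied to the class. A minimal sketch of that step, assuming a nonnegative weight matrix W with positive row sums (helper name is ours):

```python
import numpy as np

def stationary_distribution(W, tol=1e-10, max_iter=1000):
    """Stationary distribution of the Markov chain induced by the
    directed subgraph; W[i, j] >= 0 with positive row sums."""
    P = W / W.sum(axis=1, keepdims=True)        # row-stochastic transitions
    pi = np.full(P.shape[0], 1.0 / P.shape[0])  # uniform start
    for _ in range(max_iter):
        new = pi @ P                            # one step of power iteration
        if np.abs(new - pi).sum() < tol:
            break
        pi = new
    return pi
```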


International Conference on Computer Vision | 2007

A Hybrid Graph Model for Unsupervised Object Segmentation

Guangcan Liu; Zhouchen Lin; Xiaoou Tang; Yong Yu

In this work, we address the problem of performing class-specific unsupervised object segmentation, i.e., automatic segmentation without annotated training images. We propose a hybrid graph model (HGM) to integrate recognition and segmentation into a unified process. The vertices of a hybrid graph represent the entities associated with the object class or local image features. The vertices are connected by directed edges and/or undirected ones, which represent the dependence between the shape priors of the class (for recognition) and the similarity between the color/texture priors within an image (for segmentation), respectively. By simultaneously considering the Markov chain formed by the directed subgraph and the minimal cut of the undirected subgraph, the likelihood that the vertices belong to the underlying class can be computed. Given a set of images, each containing objects of the same class, our HGM-based method automatically identifies in each image the area that the objects occupy. Experiments on 14 sets of images show promising results.
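Complementing the Markov-chain sketch above, the undirected half of the HGM is resolved by an s-t minimum cut over the similarity edges. A sketch with networkx, assuming the class likelihoods have already nominated foreground and background seeds (all names are ours):

```python
import networkx as nx

def min_cut_labels(n, sim_edges, fg_seeds, bg_seeds, seed_w=1e9):
    """Binary superpixel labeling by an s-t minimum cut.

    sim_edges: (i, j, similarity) triples from the undirected subgraph.
    fg_seeds/bg_seeds: superpixels tied to the object / the background.
    """
    G = nx.DiGraph()
    for i, j, w in sim_edges:            # similarity edges, both directions
        G.add_edge(i, j, capacity=w)
        G.add_edge(j, i, capacity=w)
    for i in fg_seeds:                   # hard links to the source terminal
        G.add_edge('s', i, capacity=seed_w)
    for i in bg_seeds:                   # hard links to the sink terminal
        G.add_edge(i, 't', capacity=seed_w)
    _, (src_side, _) = nx.minimum_cut(G, 's', 't')
    return [1 if i in src_side else 0 for i in range(n)]
```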


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2016

A Deterministic Analysis for LRR

Guangcan Liu; Huan Xu; Jinhui Tang; Qingshan Liu; Shuicheng Yan

The recently proposed low-rank representation (LRR) method has been empirically shown to be useful in various tasks such as motion segmentation, image segmentation, saliency detection, and face recognition. While potentially powerful, LRR depends heavily on the configuration of its key parameter, λ. In realistic environments where prior knowledge about the data is lacking, however, it remains unknown how to choose λ suitably. Moreover, there has been no rigorous analysis of the conditions under which the method succeeds, leaving the significance of LRR somewhat unclear. In this paper, we therefore establish a theoretical analysis for LRR, aiming to determine under which conditions LRR can be successful and to derive a moderately good estimate of the key parameter λ. Simulations on synthetic data points and experiments on real motion sequences verify our claims.
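To make the synthetic setting concrete: the simulations referred to above draw points from a union of low-dimensional subspaces. A minimal generator for such data, with our own parameter names and defaults:

```python
import numpy as np

def union_of_subspaces(d=100, k=5, dim=4, per=50, noise=0.0, seed=None):
    """Points from a union of k random dim-dimensional subspaces of R^d,
    'per' samples each, optionally perturbed by Gaussian noise."""
    rng = np.random.default_rng(seed)
    blocks, labels = [], []
    for c in range(k):
        basis, _ = np.linalg.qr(rng.standard_normal((d, dim)))  # orthonormal basis
        blocks.append(basis @ rng.standard_normal((dim, per)))
        labels += [c] * per
    X = np.hstack(blocks) + noise * rng.standard_normal((d, k * per))
    return X, np.array(labels)
```

Sweeping λ on such data and scoring the recovered clustering against the ground-truth labels is the natural way to probe the parameter sensitivity the paper analyzes.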

Collaboration


Dive into Guangcan Liu's collaborations.

Top Co-Authors

Shuicheng Yan (National University of Singapore)
Yong Yu (Shanghai Jiao Tong University)
Changsheng Xu (Chinese Academy of Sciences)
Yi Ma (ShanghaiTech University)
Bing-Kun Bao (Chinese Academy of Sciences)
Congyan Lang (Beijing Jiaotong University)
Jinhui Tang (Nanjing University of Science and Technology)
Qingshan Liu (Nanjing University of Information Science and Technology)