Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Jianzhuang Liu is active.

Publication


Featured research published by Jianzhuang Liu.


IEEE Transactions on Image Processing | 2010

Local Derivative Pattern Versus Local Binary Pattern: Face Recognition With High-Order Local Pattern Descriptor

Baochang Zhang; Yongsheng Gao; Sanqiang Zhao; Jianzhuang Liu

This paper proposes a novel high-order local pattern descriptor, local derivative pattern (LDP), for face recognition. LDP is a general framework to encode directional pattern features based on local derivative variations. The nth-order LDP is proposed to encode the (n-1)th-order local derivative direction variations, which can capture more detailed information than the first-order local pattern used in local binary pattern (LBP). Different from LBP encoding the relationship between the central point and its neighbors, the LDP templates extract high-order local information by encoding various distinctive spatial relationships contained in a given local region. Both gray-level images and Gabor feature images are used to evaluate the comparative performances of LDP and LBP. Extensive experimental results on FERET, CAS-PEAL, CMU-PIE, Extended Yale B, and FRGC databases show that the high-order LDP consistently performs much better than LBP for both face identification and face verification under various conditions.
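As a toy illustration of the encoding difference the abstract describes, the sketch below computes a plain 8-bit LBP code and a second-order LDP code (along the 0-degree direction only) at the centre of a small grayscale patch. The patch values, sampling radius, and bit ordering are illustrative assumptions, not the paper's exact configuration.

```python
# LBP vs. second-order LDP (0-degree direction) on one pixel of a toy patch.

OFFS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp_code(img, r, c):
    """8-bit LBP: threshold the 8 neighbours against the centre pixel."""
    code = 0
    for bit, (dr, dc) in enumerate(OFFS):
        if img[r + dr][c + dc] >= img[r][c]:
            code |= 1 << bit
    return code

def deriv0(img, r, c):
    """First-order derivative along the 0-degree (horizontal) direction."""
    return img[r][c] - img[r][c + 1]

def ldp2_code(img, r, c):
    """Second-order LDP: set a bit where the first-order derivative at a
    neighbour changes sign relative to the derivative at the centre."""
    d0 = deriv0(img, r, c)
    code = 0
    for bit, (dr, dc) in enumerate(OFFS):
        if d0 * deriv0(img, r + dr, c + dc) <= 0:  # derivative directions differ
            code |= 1 << bit
    return code

patch = [[52, 55, 61, 59, 49],
         [62, 59, 55, 104, 94],
         [63, 65, 66, 113, 144],
         [68, 71, 99, 81, 91],
         [69, 68, 65, 70, 63]]

lbp = lbp_code(patch, 2, 2)    # intensity comparisons only
ldp = ldp2_code(patch, 2, 2)   # comparisons of derivative directions
```

Note how LBP compares raw intensities while LDP compares derivative directions, which is what lets it capture higher-order local structure.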


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2010

Robust 3D Face Recognition by Local Shape Difference Boosting

Yueming Wang; Jianzhuang Liu; Xiaoou Tang

This paper proposes a new 3D face recognition approach, Collective Shape Difference Classifier (CSDC), to meet practical application requirements, i.e., high recognition performance, high computational efficiency, and easy implementation. We first present a fast posture alignment method which is self-dependent and avoids registering an input face against every face in the gallery. Then, a Signed Shape Difference Map (SSDM) is computed between two aligned 3D faces as an intermediate representation for the shape comparison. Based on the SSDMs, three kinds of features are used to encode both the local similarity and the change characteristics between facial shapes. The most discriminative local features are selected optimally by boosting and trained as weak classifiers for assembling three collective strong classifiers, namely, CSDCs with respect to the three kinds of features. Different schemes are designed for verification and identification to pursue high performance in both recognition and computation. The experiments, carried out on FRGC v2 with the standard protocol, yield three verification rates all better than 97.9 percent with the FAR of 0.1 percent and rank-1 recognition rates above 98 percent. Each recognition against a gallery with 1,000 faces only takes about 3.6 seconds. These experimental results demonstrate that our algorithm is not only effective but also time efficient.
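A minimal sketch of the SSDM idea described above, assuming the two faces are already aligned: the map is the per-pixel signed depth difference, and a boosted stump could then threshold simple region statistics of it. The depth values and region coordinates below are toy assumptions, not data from the paper.

```python
# Signed Shape Difference Map (SSDM) sketch on two tiny aligned depth maps.

def ssdm(depth_a, depth_b):
    """Per-pixel signed depth difference between two aligned range images."""
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(depth_a, depth_b)]

def rect_mean(m, r0, c0, r1, c1):
    """Mean of a map over the rectangle [r0:r1) x [c0:c1) -- one example of
    a simple feature a boosted weak classifier could threshold."""
    vals = [m[r][c] for r in range(r0, r1) for c in range(c0, c1)]
    return sum(vals) / len(vals)

a = [[1.0, 2.0], [3.0, 4.0]]
b = [[1.5, 1.5], [2.0, 5.0]]
d = ssdm(a, b)                    # signed differences keep direction of change
feat = rect_mean(d, 0, 0, 2, 2)   # one candidate weak-classifier feature
```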


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2009

2D Shape Matching by Contour Flexibility

Chunjing Xu; Jianzhuang Liu; Xiaoou Tang

In computer vision, shape matching is a challenging problem, especially when articulation and deformation of parts occur. These variations may be insignificant in terms of human recognition, but often cause a matching algorithm to give results that are inconsistent with our perception. In this paper, we propose a novel shape descriptor of planar contours, called contour flexibility, which represents the deformable potential at each point along a contour. With this descriptor, both local and global features can be obtained from the contour. We then present a shape matching scheme based on the features obtained. Experiments with comparisons to recently published algorithms show that our algorithm performs best.


IEEE Transactions on Neural Networks | 2002

A spatial-temporal approach for video caption detection and recognition

Xiaoou Tang; Xinbo Gao; Jianzhuang Liu; Hong-Jiang Zhang

We present a video caption detection and recognition system based on a fuzzy-clustering neural network (FCNN) classifier. Using a novel caption-transition detection scheme, we locate both spatial and temporal positions of video captions with high precision and efficiency. Then, employing several new character segmentation and binarization techniques, we improve the Chinese video-caption recognition accuracy from 13% to 86% on a set of news video captions. As the first attempt at Chinese video-caption recognition, our system yields very encouraging experimental results.


international conference on machine learning | 2008

Pairwise constraint propagation by semidefinite programming for semi-supervised classification

Zhenguo Li; Jianzhuang Liu; Xiaoou Tang

We consider the general problem of learning from both pairwise constraints and unlabeled data. The pairwise constraints specify whether two objects belong to the same class or not, known as the must-link constraints and the cannot-link constraints. We propose to learn a mapping that is smooth over the data graph and maps the data onto a unit hypersphere, where two must-link objects are mapped to the same point while two cannot-link objects are mapped to be orthogonal. We show that such a mapping can be achieved by formulating a semidefinite programming problem, which is convex and can be solved globally. Our approach can effectively propagate pairwise constraints to the whole data set. It can be directly applied to multi-class classification and can handle data labels, pairwise constraints, or a mixture of them in a unified framework. Promising experimental results are presented for classification tasks on a variety of synthetic and real data sets.
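The geometry of the mapping described above can be illustrated directly: on the unit hypersphere, a must-link pair maps to the same point (inner product 1) and a cannot-link pair maps to orthogonal points (inner product 0). The embedding below is hand-picked to satisfy the constraints, not the output of the paper's semidefinite program.

```python
# Constraint semantics of the hypersphere mapping, with a hand-picked embedding.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

embed = {
    "a": (1.0, 0.0),   # must-link with "b"
    "b": (1.0, 0.0),
    "c": (0.0, 1.0),   # cannot-link with "a"
}

must_link_sim = dot(embed["a"], embed["b"])     # same point on the sphere
cannot_link_sim = dot(embed["a"], embed["c"])   # orthogonal directions
```

The SDP in the paper searches for an embedding with exactly these inner-product properties while staying smooth over the data graph.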


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2001

Graph-based method for face identification from a single 2D line drawing

Jianzhuang Liu; Yong Tsui Lee

The faces in a 2D line drawing of an object provide important information for the reconstruction of its 3D geometry. In this paper, a graph-based optimization method is proposed for identifying the faces in a line drawing. The face identification is formulated as a maximum weight clique problem. This formulation is proven to be equivalent to the formulation proposed by Shpitalni and Lipson (1996). The advantage of our formulation is that it enables one to develop a much faster algorithm to find the faces in a drawing. The significant improvement in speed derives from the two algorithms provided: a depth-first graph search for quickly generating possible faces from a drawing, and a maximum weight clique finding algorithm for obtaining the optimal face configurations of the drawing. The experimental results show that our algorithm generates the same face identification results as Shpitalni and Lipson's method, but is much faster when dealing with objects of more than 20 faces.
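The maximum weight clique formulation can be sketched on a toy graph: vertices are candidate face circuits, weights score how plausible each candidate is, and edges join mutually compatible candidates. The exhaustive search below is a simple stand-in for the paper's faster clique-finding algorithm, and the graph is illustrative only.

```python
# Maximum weight clique by exhaustive search on a toy compatibility graph.
from itertools import combinations

def is_clique(vertices, edges):
    """True if every pair of the given vertices is joined by an edge."""
    return all((u, v) in edges or (v, u) in edges
               for u, v in combinations(vertices, 2))

def max_weight_clique(weights, edges):
    """Brute force over all vertex subsets; fine for tiny graphs only."""
    best, best_w = (), 0.0
    verts = list(weights)
    for k in range(1, len(verts) + 1):
        for cand in combinations(verts, k):
            w = sum(weights[v] for v in cand)
            if w > best_w and is_clique(cand, edges):
                best, best_w = cand, w
    return set(best), best_w

# Hypothetical candidate faces f1..f4 with plausibility weights.
weights = {"f1": 3.0, "f2": 2.0, "f3": 2.5, "f4": 1.0}
edges = {("f1", "f2"), ("f2", "f3"), ("f1", "f3"), ("f3", "f4")}
clique, w = max_weight_clique(weights, edges)
```

The selected clique is the face configuration whose members are pairwise compatible and whose total weight is maximal.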


computer vision and pattern recognition | 2009

Constrained clustering via spectral regularization

Zhenguo Li; Jianzhuang Liu; Xiaoou Tang

We propose a novel framework for constrained spectral clustering with pairwise constraints which specify whether two objects belong to the same cluster or not. Unlike previous methods that modify the similarity matrix with pairwise constraints, we adapt the spectral embedding towards an ideal embedding as consistent with the pairwise constraints as possible. Our formulation leads to a small semidefinite program whose complexity is independent of the number of objects in the data set and the number of pairwise constraints, making it scalable to large-scale problems. The proposed approach is applicable directly to multi-class problems, handles both must-link and cannot-link constraints, and can effectively propagate pairwise constraints. Extensive experiments on real image data and UCI data have demonstrated the efficacy of our algorithm.


Pattern Recognition | 2011

Visual object tracking via sample-based Adaptive Sparse Representation (AdaSR)

Zhenjun Han; Jianbin Jiao; Baochang Zhang; Qixiang Ye; Jianzhuang Liu

When appearance variations of the object and its background, partial occlusion, or deterioration of the object image occur, most existing visual tracking methods tend to fail in tracking the target. To address this problem, this paper proposes a new approach for visual object tracking based on Sample-Based Adaptive Sparse Representation (AdaSR), which ensures that the tracked object is adaptively and compactly expressed with predefined samples. First, the Sample-Based Sparse Representation, which selects a subset of samples as a basis for object representation by exploiting L1-norm minimization, improves the representation's adaptation to partial occlusion for tracking. Second, to keep the temporal consistency and adaptation to appearance variation and deterioration in object images during the tracking process, the object's Sample-Based Sparse Representation is adaptively evaluated based on a Kalman filter, obtaining the AdaSR. Finally, the candidate holding the most similar Sample-Based Sparse Representation to the AdaSR of the tracked object will be regarded as the instantaneous tracking result. In addition, we can easily extend the AdaSR for multi-object tracking by integrating the sample set of each tracked object (named Common Sample-Based Adaptive Sparse Representation Analysis (AdaSRA)). AdaSRA fully analyses Adaptive Sparse Representation similarity for object classification. Our experiments on public datasets show state-of-the-art results, which are better than those of several representative tracking methods.
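The sample-based sparse coding step can be sketched with a greedy, matching-pursuit-style selection, used here only as a simple stand-in for the paper's L1-norm minimization: at each step, pick the sample whose direction best explains the remaining residual of the target appearance. The sample vectors below are toy assumptions.

```python
# Greedy sparse coding of a target appearance over a small sample set
# (a stand-in for L1-norm minimization, for illustration only).

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def greedy_sparse_code(target, samples, k):
    """Select up to k samples with coefficients approximating `target`."""
    residual = list(target)
    chosen = {}
    for _ in range(k):
        # sample most correlated with the current residual
        best = max(samples, key=lambda s: abs(dot(samples[s], residual)))
        coef = dot(samples[best], residual) / dot(samples[best], samples[best])
        chosen[best] = chosen.get(best, 0.0) + coef
        residual = [r - coef * x for r, x in zip(residual, samples[best])]
    return chosen, residual

samples = {"s1": (1.0, 0.0, 0.0), "s2": (0.0, 1.0, 0.0), "s3": (0.0, 0.0, 1.0)}
code, res = greedy_sparse_code((2.0, 3.0, 0.0), samples, k=2)
```

A tracking candidate would then be scored by how similar its code is to the AdaSR of the tracked object.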


acm multimedia | 2009

Automatic facial expression recognition on a single 3D face by exploring shape deformation

Boqing Gong; Yueming Wang; Jianzhuang Liu; Xiaoou Tang

Facial expression recognition has many applications in multimedia processing and the development of 3D data acquisition techniques makes it possible to identify expressions using 3D shape information. In this paper, we propose an automatic facial expression recognition approach based on a single 3D face. The shape of an expressional 3D face is approximated as the sum of two parts, a basic facial shape component (BFSC) and an expressional shape component (ESC). The BFSC represents the basic face structure and neutral-style shape and the ESC contains shape changes caused by facial expressions. To separate the BFSC and ESC, our method firstly builds a reference face for each input 3D non-neutral face by a learning method, which well represents the basic facial shape. Then, based on the BFSC and the original expressional face, a facial expression descriptor is designed. The surface depth changes are considered in the descriptor. Finally, the descriptor is input into an SVM to recognize the expression. Unlike previous methods which recognize a facial expression with the help of manually labeled key points and/or a neutral face, our method works on a single 3D face without any manual assistance. Extensive experiments are carried out on the BU-3DFE database and comparisons with existing methods are conducted. The experimental results show the effectiveness of our method.


international conference on computer vision | 2013

Hidden Factor Analysis for Age Invariant Face Recognition

Dihong Gong; Zhifeng Li; Dahua Lin; Jianzhuang Liu; Xiaoou Tang

Age invariant face recognition has received increasing attention due to its great potential in real world applications. In spite of the great progress in face recognition techniques, reliably recognizing faces across ages remains a difficult task. The facial appearance of a person changes substantially over time, resulting in significant intra-class variations. Hence, the key to tackle this problem is to separate the variation caused by aging from the person-specific features that are stable. Specifically, we propose a new method, called Hidden Factor Analysis (HFA). This method captures the intuition above through a probabilistic model with two latent factors: an identity factor that is age-invariant and an age factor affected by the aging process. Then, the observed appearance can be modeled as a combination of the components generated based on these factors. We also develop a learning algorithm that jointly estimates the latent factors and the model parameters using an EM procedure. Extensive experiments on two well-known public domain face aging datasets: MORPH (the largest public face aging database) and FGNET, clearly show that the proposed method achieves notable improvement over state-of-the-art algorithms.
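The generative intuition above can be sketched as an appearance feature composed of a mean, an age-invariant identity component, an age component, and noise. The subspace matrices and factor values below are toy assumptions, not learned HFA parameters.

```python
# Toy HFA-style generative model: t = mean + U*identity + V*age + noise.

def matvec(M, x):
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

def hfa_generate(mean, U, identity, V, age, noise):
    """Compose an observed feature from identity and age latent factors."""
    t = list(mean)
    for part in (matvec(U, identity), matvec(V, age), noise):
        t = [a + b for a, b in zip(t, part)]
    return t

mean = [0.0, 0.0]
U = [[1.0], [0.0]]   # toy identity subspace
V = [[0.0], [1.0]]   # toy age subspace
young = hfa_generate(mean, U, [2.0], V, [0.5], [0.0, 0.0])
old   = hfa_generate(mean, U, [2.0], V, [3.0], [0.0, 0.0])
# Same identity factor, different age factor: the identity coordinate
# stays fixed while the age coordinate drifts.
```

Matching on the identity factor alone is what makes the recognition age-invariant; the EM procedure in the paper estimates U, V, and the factors from data.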

Collaboration


Dive into Jianzhuang Liu's collaboration.

Top Co-Authors

Xiaoou Tang
The Chinese University of Hong Kong

Shifeng Chen
Chinese Academy of Sciences

Changqing Zou
Hengyang Normal University

Chunjing Xu
The Chinese University of Hong Kong

Shuicheng Yan
Nanyang Technological University

Wei Zhang
Chinese Academy of Sciences