Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Jian-Huang Lai is active.

Publication


Featured research published by Jian-Huang Lai.


Pattern Recognition | 2002

Face representation using independent component analysis

Pong Chi Yuen; Jian-Huang Lai

This paper addresses the problem of face recognition using independent component analysis (ICA). More specifically, we address two issues in face representation using ICA. First, as the independent components (ICs) are independent but not orthogonal, images outside a training set cannot be projected onto these basis functions directly. In this paper, we propose a least-squares solution method using the Householder transformation to find a new representation. Second, we demonstrate that not all ICs are useful for recognition. Along this direction, we design and develop an IC selection algorithm to find a subset of ICs for recognition. Three publicly available databases, namely, MIT AI Laboratory, Yale University, and Olivetti Research Laboratory, are selected to evaluate the performance, and the results are encouraging.
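To make the projection step concrete, here is a minimal numpy sketch of solving the least-squares problem min_a ||Ua - x|| via QR factorization (which numpy computes with Householder reflections). The matrix sizes, variable names, and random data are illustrative assumptions, not values from the paper.

```python
# A minimal sketch: because independent components (columns of U) are not
# orthogonal, a new image x cannot be projected by a simple inner product.
# Instead we solve min_a ||U a - x|| via QR factorization, which numpy
# computes with Householder reflections. Shapes and data are placeholders.
import numpy as np

rng = np.random.default_rng(0)
d, k = 1024, 20                 # pixels per image, number of ICs (assumed)
U = rng.standard_normal((d, k)) # non-orthogonal IC basis (stand-in for ICA output)
x = rng.standard_normal(d)      # a probe image outside the training set

Q, R = np.linalg.qr(U)          # Householder-based QR: U = Q R
a = np.linalg.solve(R, Q.T @ x) # coefficients of x in the IC basis
recon_err = np.linalg.norm(U @ a - x)
print(a.shape, recon_err)
```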


Computer Vision and Pattern Recognition | 2015

Jointly learning heterogeneous features for RGB-D activity recognition

Jian-Fang Hu; Wei-Shi Zheng; Jian-Huang Lai; Jianguo Zhang

In this paper, we focus on heterogeneous feature learning for RGB-D activity recognition. We find that features from different channels (RGB, depth) can share some similar hidden structures, and we propose a joint learning model to simultaneously explore the shared and feature-specific components as an instance of heterogeneous multi-task learning. The proposed model, formulated in a unified framework, is capable of: 1) jointly mining a set of subspaces with the same dimensionality to exploit latent shared features across different feature channels; 2) meanwhile, quantifying the shared and feature-specific components of features in the subspaces; and 3) transferring feature-specific intermediate transforms (i-transforms) for learning the fusion of heterogeneous features across datasets. To efficiently train the joint model, a three-step iterative optimization algorithm is proposed, followed by a simple inference model. Extensive experimental results on four activity datasets demonstrate the efficacy of the proposed method. A new RGB-D activity dataset focusing on human-object interaction is further contributed, which presents more challenges for RGB-D activity benchmarking.
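As a rough illustration of the shared-plus-specific decomposition described above, the following toy numpy sketch models each channel's transform as the sum of a shared and a channel-specific component and maps both channels into subspaces of the same dimensionality before fusion. All shapes, matrices, and data are hypothetical, and this is not the paper's three-step optimization.

```python
# A toy sketch of the shared/specific decomposition idea: each channel's
# transform is modeled as a shared part plus a channel-specific part, and
# both channels land in subspaces of equal dimensionality before fusion.
import numpy as np

rng = np.random.default_rng(0)
d_rgb, d_depth, k = 200, 120, 32          # assumed feature dims / subspace size
W_shared_rgb = rng.standard_normal((d_rgb, k)) * 0.1    # shared component
W_spec_rgb   = rng.standard_normal((d_rgb, k)) * 0.1    # RGB-specific component
W_shared_dep = rng.standard_normal((d_depth, k)) * 0.1  # shared component
W_spec_dep   = rng.standard_normal((d_depth, k)) * 0.1  # depth-specific component

x_rgb   = rng.standard_normal(d_rgb)      # one sample's RGB-channel features
x_depth = rng.standard_normal(d_depth)    # the same sample's depth features

# Each channel's transform is the sum of a shared and a specific part.
z_rgb   = x_rgb   @ (W_shared_rgb + W_spec_rgb)
z_depth = x_depth @ (W_shared_dep + W_spec_dep)

fused = np.concatenate([z_rgb, z_depth])  # fused heterogeneous features
print(fused.shape)                        # (64,)
```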


IEEE Transactions on Image Processing | 2011

Normalization of Face Illumination Based on Large- and Small-Scale Features

Xiaohua Xie; Wei-Shi Zheng; Jian-Huang Lai; Pong Chi Yuen; Ching Y. Suen

A face image can be represented by a combination of large- and small-scale features. It is well known that variations in illumination mainly affect the large-scale features (low-frequency components), and not so much the small-scale features. Therefore, relevant existing methods extract only the small-scale features as illumination-invariant features for face recognition, while the large-scale intrinsic features are always ignored. In this paper, we argue that both large- and small-scale features of a face image are important for face restoration and recognition. Moreover, we suggest that illumination normalization should be performed mainly on the large-scale features of a face image rather than on the original face image. A novel method of normalizing both the small- and large-scale (S&L) features of a face image is proposed. In this method, a single face image is first decomposed into large- and small-scale features. After that, illumination normalization is performed mainly on the large-scale features, and only a minor correction is made to the small-scale features. Finally, a normalized face image is generated by combining the processed large- and small-scale features. In addition, an optional visual compensation step is suggested to improve the visual quality of the normalized image. Experiments on the CMU-PIE, Extended Yale B, and FRGC 2.0 face databases show that the proposed method obtains significantly better recognition performance and visual results than related state-of-the-art methods.
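The following minimal Python sketch illustrates the decompose-normalize-recombine pipeline, using Gaussian smoothing as a stand-in for the paper's actual decomposition operator and a simple rescaling as a stand-in for its illumination normalization; the image and parameters are placeholders.

```python
# A minimal sketch of decompose -> normalize the large-scale layer ->
# lightly correct the small-scale layer -> recombine. Gaussian smoothing
# here only stands in for the paper's decomposition method.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
face = rng.random((64, 64))                    # placeholder face image in [0, 1]

large = gaussian_filter(face, sigma=4.0)       # large-scale (low-frequency) layer
small = face - large                           # small-scale residual

# Normalize illumination mainly on the large-scale layer ...
large_norm = (large - large.min()) / (np.ptp(large) + 1e-8)
# ... and make only a minor correction to the small-scale layer.
small_corr = 0.9 * small

normalized = np.clip(large_norm + small_corr, 0.0, 1.0)
print(normalized.shape)
```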


Systems, Man and Cybernetics | 2005

GA-Fisher: a new LDA-based face recognition algorithm with selection of principal components

Wei-Shi Zheng; Jian-Huang Lai; Pong Chi Yuen

This paper addresses the dimension reduction problem in Fisherface for face recognition. When the number of training samples is less than the image dimension (the total number of pixels), the within-class scatter matrix (Sw) in linear discriminant analysis (LDA) is singular, and principal component analysis (PCA) is employed in Fisherface to reduce the dimension of Sw so that it becomes nonsingular. The popular method is to select the largest nonzero eigenvalues and the corresponding eigenvectors for LDA. To attenuate the illumination effect, some researchers suggested removing the three eigenvectors with the largest eigenvalues, which improves performance. However, as far as we know, there is no systematic way to determine which eigenvalues should be used. Along this line, this paper proposes a theorem to interpret why PCA can be used in LDA, together with an automatic and systematic method to select the eigenvectors to be used in LDA by means of a genetic algorithm (GA). A GA-PCA method is then developed. It is found that some eigenvectors with small eigenvalues should also be used as part of the basis for dimension reduction. Using GA-PCA to reduce the dimension, a GA-Fisher method is designed and developed. Compared with the traditional Fisherface method, the proposed GA-Fisher offers two additional advantages. First, optimal bases for dimensionality reduction are derived from GA-PCA. Second, the computational efficiency of LDA is improved by adding a whitening procedure after dimension reduction. The Face Recognition Technology (FERET) and Carnegie Mellon University Pose, Illumination, and Expression (CMU PIE) databases are used for evaluation. Experimental results show an improvement of almost 5% over Fisherface, and the results are encouraging.
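A toy genetic-algorithm sketch of the eigenvector-selection idea follows: each chromosome is a bit mask over PCA eigenvectors, and fitness is a simple Fisher-style separability score in the reduced space. The population size, mutation rate, and synthetic two-class data are assumptions, not the paper's GA-PCA configuration.

```python
# A toy GA over PCA eigenvector subsets: chromosomes are bit masks,
# fitness is between-class scatter over within-class scatter in the
# selected subspace. All GA settings and data are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 30)), rng.normal(1.5, 1, (50, 30))])
y = np.array([0] * 50 + [1] * 50)

Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)   # PCA eigenvectors (rows of Vt)

def fitness(mask):
    if mask.sum() == 0:
        return -np.inf
    Z = Xc @ Vt[mask.astype(bool)].T                # project onto selected axes
    m0, m1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
    between = np.sum((m0 - m1) ** 2)
    within = Z[y == 0].var(axis=0).sum() + Z[y == 1].var(axis=0).sum()
    return between / (within + 1e-8)

pop = rng.integers(0, 2, size=(20, Vt.shape[0]))    # 20 random bit masks
for _ in range(50):                                  # 50 GA generations
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]          # keep the fittest half
    children = parents[rng.integers(0, 10, 10)].copy()  # clone random parents
    flip = rng.random(children.shape) < 0.05         # 5% bit-flip mutation
    children[flip] ^= 1
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected eigenvectors:", np.flatnonzero(best))
```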


Pattern Recognition | 2008

1D-LDA vs. 2D-LDA: When is vector-based linear discriminant analysis better than matrix-based?

Wei-Shi Zheng; Jian-Huang Lai; Stan Z. Li

Recent advances have shown that algorithms with a (2D) matrix-based representation perform better than traditional (1D) vector-based ones. In particular, 2D-LDA has been widely reported to outperform 1D-LDA. However, is matrix-based linear discriminant analysis always superior, and when is 1D-LDA better? In this paper, we investigate these questions and present a comprehensive comparison between 1D-LDA and 2D-LDA in theory and in experiments. We analyze the heteroscedastic problem in 2D-LDA and formulate mathematical equalities to explore the relationship between 1D-LDA and 2D-LDA; we then point out potential problems in 2D-LDA. It is shown that 2D-LDA eliminates the covariance information between different local geometric structures, such as the rows or the columns, which is useful for discriminant feature extraction, whereas 1D-LDA preserves such information. Interestingly, this new finding indicates that 1D-LDA is able to attain a higher Fisher score than 2D-LDA in some extreme cases. Furthermore, sufficient conditions under which 2D-LDA is Bayes optimal for the two-class classification problem are derived, and a comparison with 1D-LDA in this respect is also given. This helps explain when 2D-LDA is expected to perform at its best, further clarifies its relationship with 1D-LDA, and supports the other findings. After the theoretical analysis, comprehensive experimental results are reported from a fair and extensive comparison of 1D-LDA with 2D-LDA. In contrast to the existing view that some 2D-LDA-based algorithms perform better than 1D-LDA when the number of training samples per class is small or when the number of discriminant features used is small, we show that this is not always true, and that some standard 1D-LDA-based algorithms can perform better in those cases on some challenging data sets.
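The size difference between the two scatter representations can be seen in a few lines of numpy: 1D-LDA builds scatter matrices over vectorized images (capturing covariance between all pixel pairs), while a common 2D-LDA variant builds much smaller scatter matrices over image columns, discarding covariance across rows. The image sizes and random data below are illustrative only.

```python
# Contrast of the two representations: 1D-LDA works on vectorized images,
# a common 2D-LDA (right-projection) variant works on image matrices and
# only accumulates c x c column scatter, losing cross-row covariance.
import numpy as np

rng = np.random.default_rng(0)
r, c, n = 8, 6, 40
images = rng.standard_normal((n, r, c))       # n toy images of size r x c
labels = rng.integers(0, 2, n)                # two toy classes

# 1D-LDA view: vectorize, within-class scatter is (r*c) x (r*c).
vecs = images.reshape(n, r * c)
Sw_1d = sum(np.cov(vecs[labels == k].T, bias=True) for k in (0, 1))
print("1D within-class scatter:", Sw_1d.shape)   # (48, 48)

# 2D-LDA view: scatter is only c x c, built from image matrices directly,
# so covariance between pixels in different rows is discarded.
Sw_2d = np.zeros((c, c))
for k in (0, 1):
    Mk = images[labels == k].mean(axis=0)
    for A in images[labels == k]:
        Sw_2d += (A - Mk).T @ (A - Mk)
print("2D within-class scatter:", Sw_2d.shape)   # (6, 6)
```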


IEEE Transactions on Image Processing | 2016

Deep Ranking for Person Re-Identification via Joint Representation Learning

Shi-Zhe Chen; Chun-Chao Guo; Jian-Huang Lai

This paper proposes a novel approach to person re-identification, a fundamental task in distributed multi-camera surveillance systems. Although a variety of powerful algorithms have been presented in the past few years, most of them focus on designing hand-crafted features and learning metrics either individually or sequentially. Different from previous works, we formulate a unified deep ranking framework that jointly tackles both of these key components to maximize their strengths. We start from the principle that the correct match of the probe image should be positioned in the top rank within the whole gallery set. An effective learning-to-rank algorithm is proposed to minimize the cost corresponding to the ranking disorders of the gallery. The ranking model is solved with a deep convolutional neural network (CNN) that builds the relation between input image pairs and their similarity scores through joint representation learning directly from raw image pixels. The proposed framework allows us to dispense with feature engineering and does not rely on any assumption. An extensive comparative evaluation is given, demonstrating that our approach significantly outperforms the state-of-the-art approaches, including both traditional and CNN-based methods, on the challenging VIPeR, CUHK-01, and CAVIAR4REID datasets. In addition, our approach generalizes better across datasets without fine-tuning.
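For illustration, here is a minimal PyTorch sketch of the joint-representation ranking idea: a small CNN maps raw pixels to an embedding, image pairs are scored by embedding similarity, and a margin ranking loss pushes the correct gallery match above an incorrect one. The architecture, margin, and random tensors are placeholders, not the paper's network.

```python
# A tiny embedding CNN plus a margin ranking loss over pair similarities.
# Everything here (layers, margin, input sizes) is an assumed stand-in.
import torch
import torch.nn as nn

class TinyEmbed(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim))

    def forward(self, x):
        return self.net(x)

model = TinyEmbed()
probe = torch.randn(4, 3, 128, 48)     # probe images (random placeholders)
pos = torch.randn(4, 3, 128, 48)       # true gallery matches
neg = torch.randn(4, 3, 128, 48)       # wrong gallery entries

cos = nn.functional.cosine_similarity
s_pos = cos(model(probe), model(pos))  # similarity to the correct match
s_neg = cos(model(probe), model(neg))  # similarity to an incorrect one
loss = nn.functional.margin_ranking_loss(
    s_pos, s_neg, torch.ones(4), margin=0.3)  # rank the correct match higher
loss.backward()
print(float(loss))
```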


Systems, Man and Cybernetics | 2007

Choosing Parameters of Kernel Subspace LDA for Recognition of Face Images Under Pose and Illumination Variations

Jian Huang; Pong Chi Yuen; Wen-Sheng Chen; Jian-Huang Lai

This paper addresses the problem of automatically tuning multiple kernel parameters for the kernel-based linear discriminant analysis (LDA) method. The kernel approach has been proposed to solve face recognition problems under complex distributions by mapping the input space to a high-dimensional feature space. Recognition algorithms such as kernel principal component analysis, kernel Fisher discriminant, generalized discriminant analysis, and kernel direct LDA have been developed in the last five years. Experimental results show that the kernel-based method is a good and feasible approach to tackling pose and illumination variations. One of the crucial factors in the kernel approach is the selection of kernel parameters, which highly affects the generalization capability and stability of kernel-based learning methods. In view of this, we propose an eigenvalue-stability-bounded margin maximization (ESBMM) algorithm to automatically tune the multiple parameters of the Gaussian radial basis function kernel for the kernel subspace LDA (KSLDA) method, which is developed based on our previously developed subspace LDA method. The ESBMM algorithm improves the generalization capability of the kernel-based LDA method by maximizing the margin criterion while maintaining the eigenvalue stability of the kernel-based LDA method. An in-depth investigation of the generalization performance along the pose and illumination dimensions is performed using the YaleB and CMU PIE databases. The FERET database is also used for benchmark evaluation. Compared with existing PCA-based and LDA-based methods, our proposed KSLDA method, with the ESBMM kernel parameter estimation algorithm, gives superior performance.
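As a generic illustration of kernel-parameter selection (not the ESBMM algorithm itself), the following numpy sketch sweeps the Gaussian RBF width and keeps the value whose kernel matrix best matches the ideal class-label kernel, using kernel-target alignment as a simple stand-in criterion; the data and candidate widths are synthetic.

```python
# Kernel-target alignment as a simple selection criterion for the RBF
# width: a stand-in for ESBMM, only meant to show the tuning loop shape.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 10)), rng.normal(2, 1, (30, 10))])
y = np.array([1] * 30 + [-1] * 30)
Y = np.outer(y, y)                                  # ideal target kernel

def rbf(X, sigma):
    sq = ((X[:, None] - X[None]) ** 2).sum(-1)      # pairwise squared dists
    return np.exp(-sq / (2 * sigma ** 2))

def alignment(K, Y):
    return (K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y))

sigmas = [0.1, 0.5, 1.0, 2.0, 5.0, 10.0]            # candidate widths (assumed)
best = max(sigmas, key=lambda s: alignment(rbf(X, s), Y))
print("selected RBF width:", best)
```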


Computer Vision and Pattern Recognition | 2008

Face illumination normalization on large and small scale features

Xiaohua Xie; Wei-Shi Zheng; Jian-Huang Lai; Pong Chi Yuen

It is well known that the effect of illumination falls mainly on the large-scale features (low-frequency components) of a face image. In solving the illumination problem for face recognition, most (if not all) existing methods either use only the extracted small-scale features while discarding the large-scale features, or perform normalization on the whole image. In the latter case, small-scale features may be distorted when the large-scale features are modified. In this paper, we argue that the large-scale features of a face image are important and contain useful information for face recognition as well as for the visual quality of the normalized image. Moreover, this paper suggests that illumination normalization should be performed mainly on the large-scale features of the face image rather than on the whole face image. Along this line, a novel framework for face illumination normalization is proposed. In this framework, a single face image is first decomposed into large- and small-scale feature images using the logarithmic total variation (LTV) model. After that, illumination normalization is performed on the large-scale feature image, while the small-scale feature image is smoothed. Finally, a normalized face image is generated by combining the normalized large-scale feature image and the smoothed small-scale feature image. The CMU PIE and (Extended) YaleB face databases, with different illumination variations, are used for evaluation, and the experimental results show that the proposed method outperforms existing methods.
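A brief Python sketch of the LTV-style split follows, using scikit-image's TV denoiser as a stand-in for the paper's LTV solver: TV-smooth the log image to obtain the large-scale layer and take the residual as the small-scale layer. All parameters and the placeholder image are illustrative.

```python
# TV-smooth the log image -> large-scale layer u; residual -> small-scale
# layer v. The denoiser and its weight only stand in for the LTV model.
import numpy as np
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(0)
face = rng.random((64, 64)) + 0.1          # placeholder image, kept positive

log_f = np.log(face)
large = denoise_tv_chambolle(log_f, weight=0.2)  # large-scale layer u
small = log_f - large                            # small-scale layer v

# Recombine after normalizing the large-scale layer (a simple mean shift
# here) while leaving the small-scale layer essentially intact.
large_norm = large - large.mean()
normalized = np.exp(large_norm + small)
print(normalized.shape)
```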


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2013

Multi-Exemplar Affinity Propagation

Chang-Dong Wang; Jian-Huang Lai; Ching Y. Suen; Jun-Yong Zhu

The affinity propagation (AP) clustering algorithm has received much attention in the past few years. AP is appealing because it is efficient, insensitive to initialization, and produces clusters at a lower error rate than other exemplar-based methods. However, its single-exemplar model becomes inadequate for modeling multiple subclasses in situations such as scene analysis and character recognition. To remedy this deficiency, we have extended the single-exemplar model to a multi-exemplar one to create a new multi-exemplar affinity propagation (MEAP) algorithm. This new model automatically determines the number of exemplars in each cluster, associated with a super-exemplar, to approximate the subclasses in the category. Solving the model is NP-hard, and we tackle it with max-sum belief propagation to produce neighborhood-maximum clusters, with no need to specify beforehand the numbers of clusters, multi-exemplars, and super-exemplars. Also, by exploiting the sparsity in the data, we are able to substantially reduce the computational time and storage. Experimental studies have shown MEAP's significant improvements over other algorithms on unsupervised image categorization and the clustering of handwritten digits.
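MEAP itself is not available in common libraries, but the single-exemplar AP model it generalizes can be run in a few lines with scikit-learn, which also shows the no-preset-cluster-count property mentioned above; the data here are synthetic and this is only the baseline MEAP extends.

```python
# Baseline single-exemplar affinity propagation (the model MEAP extends
# with super-exemplars); note no cluster count is specified in advance.
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.3, (30, 2)) for m in (0, 3, 6)])  # 3 toy blobs

ap = AffinityPropagation(random_state=0).fit(X)
print("exemplar indices:", ap.cluster_centers_indices_)
print("number of clusters:", len(np.unique(ap.labels_)))
```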


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2015

Discriminatively Trained And-Or Graph Models for Object Shape Detection

Liang Lin; Xiaolong Wang; Wei Yang; Jian-Huang Lai

In this paper, we investigate a novel reconfigurable part-based model, namely the And-Or graph model, for recognizing object shapes in images. Our proposed model consists of four layers: leaf-nodes at the bottom are local classifiers for detecting contour fragments; or-nodes above the leaf-nodes function as switches that activate their child leaf-nodes, making the model reconfigurable during inference; and-nodes in a higher layer capture holistic shape deformations; and one root-node on top, which is also an or-node, activates one of its child and-nodes to deal with large global variations (e.g., different poses and views). We propose a novel structural optimization algorithm to discriminatively train the And-Or model from weakly annotated data. This algorithm iteratively determines the model structure (e.g., the nodes and their layouts) along with the parameter learning. On several challenging datasets, our model demonstrates its effectiveness in performing robust shape-based object detection against background clutter and outperforms other state-of-the-art approaches. We also release a new annotated shape database, which includes more than 1500 challenging shape instances, for recognition and detection.
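A toy Python sketch of And-Or graph scoring follows: or-nodes switch to their best-scoring child, and-nodes compose all of their children, and leaves return local detector scores. The four-layer structure mirrors the description above, while the leaf scores are random placeholders rather than real contour detectors.

```python
# Toy And-Or graph inference: "or" = max over children (a switch),
# "and" = sum over children (composition), "leaf" = local detector score.
import numpy as np

rng = np.random.default_rng(0)

def score(node):
    kind, children = node["kind"], node.get("children", [])
    if kind == "leaf":                      # local contour-fragment classifier
        return node["score"]
    if kind == "or":                        # switch: activate the best child
        return max(score(c) for c in children)
    return sum(score(c) for c in children)  # "and": compose all children

leaf = lambda: {"kind": "leaf", "score": float(rng.random())}
or_node = lambda n: {"kind": "or", "children": [leaf() for _ in range(n)]}
and_node = {"kind": "and", "children": [or_node(3) for _ in range(4)]}
root = {"kind": "or", "children": [and_node]}    # root or-node over poses/views

print("best configuration score:", score(root))
```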

Collaboration


Dive into Jian-Huang Lai's collaborations.

Top Co-Authors

Pong Chi Yuen

Hong Kong Baptist University

Xiaohua Xie

Sun Yat-sen University

Dong Huang

South China Agricultural University

Jun Tan

Sun Yat-sen University

Jian Huang

Hong Kong Baptist University
