
Publication


Featured research publications by Le An.


Advanced Video and Signal Based Surveillance | 2013

Reference-based person re-identification

Le An; Mehran Kafai; Songfan Yang; Bir Bhanu

Person re-identification refers to recognizing people across non-overlapping cameras at different times and locations. Due to variations in pose, illumination, background, and occlusion, person re-identification is inherently difficult. In this paper, we propose a reference-based method for across-camera person re-identification. During training, we learn a subspace in which the correlations of the reference data from different cameras are maximized using Regularized Canonical Correlation Analysis (RCCA). For re-identification, the gallery data and the probe data are projected into the RCCA subspace, and the reference descriptors (RDs) of the gallery and probe are constructed by measuring their similarity to the reference data. The identity of the probe is determined by comparing the RD of the probe with the RDs of the gallery. Experiments on a benchmark dataset show that the proposed method outperforms state-of-the-art approaches.


IEEE Transactions on Circuits and Systems for Video Technology | 2016

Person Reidentification With Reference Descriptor

Le An; Mehran Kafai; Songfan Yang; Bir Bhanu

Person identification across nonoverlapping cameras, also known as person reidentification, aims to match people at different times and locations. Reidentifying people is important in applications such as wide-area surveillance and visual tracking. Due to appearance variations in pose, illumination, and occlusion across camera views, person reidentification is inherently difficult. To address these challenges, a reference-based method is proposed for person reidentification across different cameras. Instead of directly matching people by their appearance, the matching is conducted in a reference space, where the descriptor for a person is translated from the original color or texture descriptors to similarity measures between this person and the exemplars in the reference set. A subspace is first learned in which the correlations of the reference data from different cameras are maximized using regularized canonical correlation analysis (RCCA). For reidentification, the gallery data and the probe data are projected onto this RCCA subspace, and the reference descriptors (RDs) of the gallery and probe are generated by computing their similarity to the reference data. The identity of a probe is determined by comparing the RD of the probe with the RDs of the gallery. A reranking step is added to further improve the results using a saliency-based matching scheme. Experiments on publicly available datasets show that the proposed method outperforms most of the state-of-the-art approaches.
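The reference-descriptor construction described above can be sketched in a few lines of NumPy. This is an illustrative sketch, not the paper's implementation: `W_a`/`W_b` stand for hypothetical pre-learned RCCA projections for the two camera views, and cosine similarity is used as the similarity measure.

```python
import numpy as np

def reference_descriptor(x, reference_set, W):
    """Project a sample into the (assumed pre-learned) RCCA subspace and
    describe it by its similarity to each projected reference exemplar."""
    z = W.T @ x                      # sample projected into the common subspace
    R = reference_set @ W            # each row: one projected reference exemplar
    # cosine similarity between the projected sample and every exemplar
    return R @ z / (np.linalg.norm(R, axis=1) * np.linalg.norm(z) + 1e-12)

def match(probe, gallery, refs_a, refs_b, W_a, W_b):
    """Rank gallery entries by comparing reference descriptors (RDs)."""
    rd_p = reference_descriptor(probe, refs_a, W_a)
    rd_g = [reference_descriptor(g, refs_b, W_b) for g in gallery]
    # smaller distance between RDs -> better match
    dists = [np.linalg.norm(rd_p - rd) for rd in rd_g]
    return int(np.argmin(dists))
```

Note that matching happens entirely between the descriptors, so probe and gallery never need to be compared by raw appearance.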


IEEE Signal Processing Letters | 2015

Person Re-Identification by Robust Canonical Correlation Analysis

Le An; Songfan Yang; Bir Bhanu

Person re-identification is the task of matching people across surveillance cameras at different times and locations. Due to significant view and pose changes across non-overlapping cameras, directly matching data from different views is a challenging problem. In this letter, we propose a robust canonical correlation analysis (ROCCA) to match people from different views in a coherent subspace. Given a small training set, as in most re-identification problems, direct application of canonical correlation analysis (CCA) may lead to poor performance due to inaccuracy in estimating the data covariance matrices. The proposed ROCCA, with shrinkage estimation and a smoothing technique, is simple to implement and can robustly estimate the data covariance matrices from limited training samples. Experimental results on two publicly available datasets show that the proposed ROCCA outperforms regularized CCA (RCCA) and achieves state-of-the-art matching results for person re-identification compared to the most recent methods.
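The shrinkage idea can be illustrated with a small sketch. This is an assumption-laden illustration rather than the paper's exact ROCCA: the scaled-identity shrinkage target, the weight `alpha`, and the plain eigen-solver below are illustrative choices.

```python
import numpy as np

def shrinkage_cov(X, alpha=0.1):
    """Shrink the sample covariance toward a scaled identity, keeping it
    well conditioned when training samples are scarce (alpha is a
    hypothetical shrinkage weight, not the paper's setting)."""
    S = np.cov(X, rowvar=False)
    d = S.shape[0]
    target = (np.trace(S) / d) * np.eye(d)
    return (1.0 - alpha) * S + alpha * target

def cca_directions(X, Y, alpha=0.1):
    """First pair of canonical directions using shrunk covariances."""
    n = X.shape[0]
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    Cxx = shrinkage_cov(X, alpha)
    Cyy = shrinkage_cov(Y, alpha)
    Cxy = Xc.T @ Yc / (n - 1)
    # solve Cxx^-1 Cxy Cyy^-1 Cyx wx = rho^2 wx for the leading wx
    M = np.linalg.solve(Cxx, Cxy) @ np.linalg.solve(Cyy, Cxy.T)
    vals, vecs = np.linalg.eig(M)
    wx = np.real(vecs[:, np.argmax(np.real(vals))])
    wy = np.linalg.solve(Cyy, Cxy.T @ wx)
    return wx / np.linalg.norm(wx), wy / np.linalg.norm(wy)
```

Without the shrinkage step, `Cxx` and `Cyy` can be singular or badly conditioned when the feature dimension exceeds the number of training samples, which is the failure mode the letter targets.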


IEEE Journal on Emerging and Selected Topics in Circuits and Systems | 2013

Dynamic Bayesian Network for Unconstrained Face Recognition in Surveillance Camera Networks

Le An; Mehran Kafai; Bir Bhanu

The demand for robust face recognition in real-world surveillance cameras is increasing due to the needs of practical applications such as security and surveillance. Although face recognition has been studied extensively in the literature, achieving good performance on unconstrained faces in surveillance videos is inherently difficult. During image acquisition, noncooperative subjects appear in arbitrary poses and resolutions under different lighting conditions, with image noise and blur. In addition, multiple cameras are usually distributed in a camera network, and different cameras often capture a subject from different views. In this paper, we aim at tackling this unconstrained face recognition problem and utilizing multiple cameras to improve recognition accuracy using a probabilistic approach. We propose a dynamic Bayesian network to incorporate the information from different cameras as well as the temporal clues from frames in a video sequence. The proposed method is tested on a public surveillance video dataset with a three-camera setup. We compare our method to different benchmark classifiers with various feature descriptors. The results demonstrate that by modeling the face in a dynamic manner, the recognition performance in a multi-camera network is improved over the other classifiers with various feature descriptors, and the recognition result is better than using any single camera alone.


Neurocomputing | 2015

Efficient smile detection by Extreme Learning Machine

Le An; Songfan Yang; Bir Bhanu

Smile detection is a specialized task in facial expression analysis with applications such as photo selection, user experience analysis, and patient monitoring. As one of the most important and informative expressions, a smile conveys underlying emotional states such as joy, happiness, and satisfaction. In this paper, an efficient smile detection approach based on the Extreme Learning Machine (ELM) is proposed. Faces are first detected, and a holistic flow-based face registration is applied that does not need any manual labeling or key point detection. ELM is then used to train the classifier. The proposed smile detector is tested with different feature descriptors on publicly available databases including real-world face images. Comparisons against benchmark classifiers, including Support Vector Machine (SVM) and Linear Discriminant Analysis (LDA), suggest that the proposed ELM-based smile detector generally performs better and is very efficient. Compared to state-of-the-art smile detectors, the proposed method achieves competitive results without preprocessing and manual registration.
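The ELM training recipe, random fixed hidden weights plus a closed-form least-squares solve for the output weights, can be sketched as follows. The layer size, the tanh activation, and the toy labels are illustrative choices, not the paper's configuration.

```python
import numpy as np

def elm_train(X, y, n_hidden=40, seed=0):
    """Extreme Learning Machine: hidden-layer weights are random and never
    trained; only the output weights are solved for, in closed form."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (fixed)
    b = rng.normal(size=n_hidden)                 # random hidden biases (fixed)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                  # least-squares output weights
    return W, b, beta

def elm_predict(X, model):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta
```

Because training reduces to one pseudo-inverse, an ELM classifier trains orders of magnitude faster than iterative solvers, which is the efficiency the paper leans on; for smile detection the labels would be, e.g., +1 (smile) and -1 (non-smile), with the sign of the output as the decision.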


International Conference on Image Processing | 2012

Image super-resolution by extreme learning machine

Le An; Bir Bhanu

Image super-resolution is the process of generating high-resolution images from low-resolution inputs. In this paper, an efficient image super-resolution approach based on the recent development of the extreme learning machine (ELM) is proposed. We aim at reconstructing the high-frequency components, containing details and fine structures, that are missing from the low-resolution images. In the training step, high-frequency components extracted from the original high-resolution images serve as target values, and image features from the low-resolution images are fed to the ELM to learn a model. Given a low-resolution image, the high-frequency components are generated via the learned model and added to the initially interpolated low-resolution image. Experiments show that, with simple image features, our algorithm performs better in terms of accuracy and efficiency at different magnification factors compared to state-of-the-art methods.
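The training-pair construction described above can be sketched as follows. Nearest-neighbour upscaling stands in for the initial interpolation (the paper would use a standard interpolation such as bicubic), and the downsampling scheme is an illustrative assumption.

```python
import numpy as np

def upscale_nn(img, factor):
    """Nearest-neighbour upscaling as a stand-in for the initial interpolation."""
    return np.kron(img, np.ones((factor, factor)))

def make_training_pair(hr, factor=2):
    """Build one (feature, target) pair: the coarse interpolated image is the
    input side, and the target is the missing high-frequency residual."""
    lr = hr[::factor, ::factor]            # simulated low-resolution input
    coarse = upscale_nn(lr, factor)        # initial interpolation to full size
    target = hr - coarse                   # high-frequency components to learn
    return coarse, target

def reconstruct(coarse, predicted_hf):
    """At test time, the predicted high-frequency layer is added back."""
    return coarse + predicted_hf
```

The model (an ELM in the paper) is trained to map features of `coarse` to `target`; by construction, adding a perfect prediction back onto the interpolated image recovers the original exactly.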


Signal Processing | 2014

Face image super-resolution using 2D CCA

Le An; Bir Bhanu

In this paper a face super-resolution method using two-dimensional canonical correlation analysis (2D CCA) is presented. A detail compensation step follows to add high-frequency components to the reconstructed high-resolution face. Unlike most previous face super-resolution algorithms, which first transform the images into vectors, our approach maintains the relationship between the high-resolution and low-resolution face images in their original 2D representation. In addition, rather than approximating the entire face, different parts of a face image are super-resolved separately to better preserve the local structure. The proposed method is compared with various state-of-the-art super-resolution algorithms using multiple evaluation criteria, including face recognition performance. Results on publicly available datasets show that the proposed method super-resolves high-quality face images that are very close to the ground truth, and the performance gain is not dataset dependent. The method is very efficient in both the training and testing phases compared to the other approaches.

Highlights:
A new face super-resolution (SR) method using 2D CCA is presented.
The method works directly on the 2D image without reshaping the image into a vector.
A detail compensation step further enhances the super-resolved face images.
Experimental results show that our method outperforms current SR methods.
The proposed method is computationally efficient due to the small matrices involved.


International Conference on Distributed Smart Cameras | 2013

Improving person re-identification by soft biometrics based reranking

Le An; Xiaojing Chen; Mehran Kafai; Songfan Yang; Bir Bhanu

The problem of person re-identification is to recognize a target subject across non-overlapping distributed cameras at different times and locations. Applications of person re-identification include security, surveillance, and multi-camera tracking. In a real-world scenario, person re-identification is challenging due to dramatic changes in a subject's appearance in terms of pose, illumination, background, and occlusion. Existing approaches either design robust features to identify a subject across different views or learn distance metrics that maximize the similarity between different views of the same person and minimize the similarity between different views of different persons. In this paper, we aim at improving re-identification performance by reranking the returned results based on soft biometric attributes, such as gender, which can describe probe and gallery subjects at a higher level. During reranking, the soft biometric attributes are detected, and attribute-based distance scores are calculated between pairs of images using a regression model. These distance scores are used to rerank the initially returned matches. Experiments on a benchmark database with different baseline re-identification methods show that reranking improves recognition accuracy by moving up the returned gallery matches that share the same soft biometric attributes as the probe subject.
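The reranking step can be sketched as a simple score fusion. The fusion weight `w` and the linear combination are hypothetical illustrations; in the paper the attribute distances come from a learned regression model.

```python
def rerank(initial_ranking, appearance_dist, attribute_dist, w=0.3):
    """Fuse the appearance distance with a soft-biometric attribute distance
    and re-sort the gallery. `w` is a hypothetical fusion weight: w=0 keeps
    the baseline ranking, larger w trusts the attributes more."""
    fused = {g: (1 - w) * appearance_dist[g] + w * attribute_dist[g]
             for g in initial_ranking}
    return sorted(initial_ranking, key=lambda g: fused[g])
```

A gallery entry that is a near-tie on appearance but shares the probe's attributes (small attribute distance) moves up the list, which is exactly the effect the experiments measure.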


Computer Vision and Pattern Recognition | 2014

An Online Learned Elementary Grouping Model for Multi-target Tracking

Xiaojing Chen; Zhen Qin; Le An; Bir Bhanu

We introduce an online approach to learning possible elementary groups (groups that contain only two targets) for inferring high-level context that can be used to improve multi-target tracking in a data-association-based framework. Unlike most existing association-based tracking approaches, which use only low-level information (e.g., time, appearance, and motion) to build the affinity model and consider each target as an independent agent, we learn social grouping behavior online to provide additional information for producing more robust tracklet affinities. Social grouping behavior of pairwise targets is first learned from confident tracklets and encoded in a disjoint grouping graph. The grouping graph is further completed with the help of group tracking. The proposed method is efficient, handles group merges and splits, and can be easily integrated into any basic affinity model. We evaluate our approach on two public datasets and show significant improvements compared with state-of-the-art methods.


Information Sciences | 2016

Sparse representation matching for person re-identification

Le An; Xiaojing Chen; Songfan Yang; Bir Bhanu

The need to recognize people across distributed surveillance cameras has led to growing research interest in person re-identification. Person re-identification aims at matching people in non-overlapping cameras at different times and locations. It is a difficult pattern matching task due to significant appearance variations in pose, illumination, or occlusion across camera views. To address this multi-view matching problem, we first learn a subspace using canonical correlation analysis (CCA) in which the goal is to maximize the correlation between data from different cameras that correspond to the same people. Given a probe from one camera view, we represent it using a sparse representation from a jointly learned coupled dictionary in the CCA subspace. The ℓ1-induced sparse representation is regularized by an ℓ2 regularization term. The introduction of ℓ2 regularization allows learning a sparse representation while maintaining the stability of the sparse coefficients. To compute the matching scores between probe and gallery, their ℓ2-regularized sparse representations are matched using a modified cosine similarity measure. Experimental results with extensive comparisons on challenging datasets demonstrate that the proposed method outperforms the state-of-the-art methods, and that using the ℓ2-regularized sparse representation (ℓ1 + ℓ2) is more accurate than using a single ℓ1 or ℓ2 regularization term.
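The ℓ1 + ℓ2 (elastic-net) coding and the cosine-based matching can be sketched with a plain ISTA-style proximal-gradient solver. The regularization weights, the iteration budget, and the plain (unmodified) cosine below are illustrative assumptions; the paper uses a modified cosine similarity and a jointly learned coupled dictionary.

```python
import numpy as np

def elastic_net_code(D, x, lam1=0.1, lam2=0.1, n_iter=200):
    """Sparse code of x over dictionary D with l1 + l2 (elastic-net)
    regularization, via proximal gradient descent (ISTA)."""
    alpha = np.zeros(D.shape[1])
    L = np.linalg.norm(D, 2) ** 2 + lam2   # Lipschitz constant of smooth part
    step = 1.0 / L
    for _ in range(n_iter):
        # gradient of 0.5*||D a - x||^2 + 0.5*lam2*||a||^2
        grad = D.T @ (D @ alpha - x) + lam2 * alpha
        alpha = alpha - step * grad
        # soft thresholding enforces the l1 sparsity
        alpha = np.sign(alpha) * np.maximum(np.abs(alpha) - step * lam1, 0.0)
    return alpha

def cosine_match(a, b):
    """Plain cosine similarity between two regularized sparse codes."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
```

The ℓ2 term keeps the coefficients stable when dictionary atoms are correlated, while the ℓ1 threshold keeps the code sparse; a probe is then matched against each gallery code by similarity of these coefficient vectors.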

Collaboration


Dive into Le An's collaborations.

Top Co-Authors

Bir Bhanu, University of California
Dinggang Shen, University of North Carolina at Chapel Hill
Xiaojing Chen, University of California
Feng Shi, University of North Carolina at Chapel Hill
Ehsan Adeli, University of North Carolina at Chapel Hill
David S. Lalush, University of North Carolina at Chapel Hill
Guangkai Ma, University of North Carolina at Chapel Hill