Publication


Featured research published by Junzhou Huang.


Computer Vision and Pattern Recognition | 2011

Robust tracking using local sparse appearance model and K-selection

Junzhou Huang; Lin Yang; Casimir Kulikowski

Online learned tracking is widely used for its adaptive ability to handle appearance changes. However, it introduces potential drifting problems due to the accumulation of errors during self-updating, especially in occluded scenarios. The recent literature demonstrates that appropriate combinations of trackers can help balance stability and flexibility requirements. We have developed a robust tracking algorithm using a local sparse appearance model (SPT). The target appearance is modeled by a static sparse dictionary and a dynamically updated online basis distribution. A novel sparse-representation-based voting map and a sparse-constraint-regularized mean-shift support robust object tracking. Besides these contributions, we also introduce a new dictionary learning algorithm with a locally constrained sparse representation, called K-Selection. Based on a set of comprehensive experiments, our algorithm has demonstrated better performance than alternatives reported in the recent literature.
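
The core operation the tracker relies on is sparse-coding a local patch against a static dictionary. As a rough illustration only (not the authors' implementation, and without the voting map or K-Selection itself), the sketch below codes a patch with plain ISTA soft-thresholding; the dictionary, patch size and regularization weight are all illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    """Element-wise soft-thresholding operator."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_code_patch(D, y, lam=0.1, n_iter=200):
    """Sparse-code patch y against dictionary D (columns are atoms) with ISTA.

    Minimizes 0.5*||y - D a||^2 + lam*||a||_1 over the coefficient vector a.
    """
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - y)           # gradient of the data-fitting term
        a = soft_threshold(a - grad / L, lam / L)
    return a

# Toy usage: a 64-dim patch coded against 128 random unit-norm atoms.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)
y = D[:, 5] * 0.8 + 0.05 * rng.standard_normal(64)
a = sparse_code_patch(D, y)
confidence = 1.0 - np.linalg.norm(y - D @ a) / np.linalg.norm(y)  # reconstruction-based vote
```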


Annals of Statistics | 2010

The benefit of group sparsity

Junzhou Huang; Tong Zhang

This paper develops a theory for group Lasso using a concept called strong group sparsity. Our result shows that group Lasso is superior to standard Lasso for strongly group-sparse signals. This provides a convincing theoretical justification for using group sparse regularization when the underlying group structure is consistent with the data. Moreover, the theory predicts some limitations of the group Lasso formulation that are confirmed by simulation studies.
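
To make the object of study concrete, here is a minimal proximal-gradient sketch of the group-Lasso estimator the paper analyzes: a squared-error fit plus the sum of per-group L2 norms, solved with group soft-thresholding. This is a generic textbook solver, not the paper's theoretical machinery; the group layout and the value of lam are illustrative.

```python
import numpy as np

def group_soft_threshold(beta, groups, t):
    """Proximal operator of t * sum_g ||beta_g||_2 (the group-Lasso penalty)."""
    out = beta.copy()
    for g in groups:
        norm_g = np.linalg.norm(beta[g])
        out[g] = 0.0 if norm_g <= t else (1.0 - t / norm_g) * beta[g]
    return out

def group_lasso(X, y, groups, lam=0.1, n_iter=500):
    """Proximal-gradient solver for 0.5*||y - X beta||^2 + lam * sum_g ||beta_g||_2."""
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the smooth part
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y)
        beta = group_soft_threshold(beta - grad / L, groups, lam / L)
    return beta

# Toy usage: 4 groups of 5 features, only group 0 is truly active.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
groups = [np.arange(5 * g, 5 * g + 5) for g in range(4)]
beta_true = np.zeros(20)
beta_true[:5] = rng.standard_normal(5)
y = X @ beta_true + 0.1 * rng.standard_normal(100)
beta_hat = group_lasso(X, y, groups, lam=5.0)
```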


European Conference on Computer Vision | 2010

Robust and fast collaborative tracking with two stage sparse optimization

Lin Yang; Junzhou Huang; Peter Meer; Leiguang Gong; Casimir A. Kulikowski

Sparse representation has been widely used in many areas, including visual tracking. Tracking with sparse representation is formulated as searching for the samples with minimal reconstruction errors from a learned template subspace. However, the computational cost makes it unsuitable for high-dimensional advanced features, which are often important for robust tracking in dynamic environments. Based on the observations that a target can be reconstructed from several templates, and that only some of the features with discriminative power are significant for separating the target from the background, we propose a novel online tracking algorithm with two-stage sparse optimization to jointly minimize the target reconstruction error and maximize the discriminative power. As the target templates and discriminative features usually have temporal and spatial relationships, dynamic group sparsity (DGS) is utilized in our algorithm. The proposed method is compared with three state-of-the-art trackers on five public challenging sequences, which exhibit appearance changes, heavy occlusions, and pose variations. Our algorithm is shown to outperform these methods.
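
Before the sparse, two-stage version, the basic formulation in the abstract is "pick the candidate with minimal reconstruction error from a learned template subspace". The sketch below illustrates only that baseline idea, with plain least squares standing in for sparse coding; the two-stage optimization and dynamic group sparsity of the paper are not reproduced, and all sizes are toy assumptions.

```python
import numpy as np

def select_candidate(templates, candidates):
    """Pick the candidate with minimal reconstruction error from the template subspace.

    templates:  (d, k) matrix whose columns are learned target templates
    candidates: (d, n) matrix whose columns are candidate image patches
    """
    errors = []
    for i in range(candidates.shape[1]):
        y = candidates[:, i]
        coef, *_ = np.linalg.lstsq(templates, y, rcond=None)   # subspace projection
        errors.append(np.linalg.norm(y - templates @ coef))
    return int(np.argmin(errors)), np.asarray(errors)

# Toy usage: 3 templates, 10 candidates, candidate 7 lies in the template span.
rng = np.random.default_rng(1)
T = rng.standard_normal((32, 3))
C = rng.standard_normal((32, 10))
C[:, 7] = T @ np.array([0.5, -1.0, 0.2])
best, errs = select_candidate(T, C)   # best == 7
```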


Medical Image Analysis | 2011

Efficient MR image reconstruction for compressed MR imaging

Junzhou Huang; Shaoting Zhang; Dimitris N. Metaxas

In this paper, we propose an efficient algorithm for MR image reconstruction. The algorithm minimizes a linear combination of three terms corresponding to a least-squares data fitting, total variation (TV) and L1-norm regularization. This combination has been shown to be very powerful for MR image reconstruction. First, we decompose the original problem into L1-norm and TV-norm regularization subproblems. Then, these two subproblems are efficiently solved by existing techniques. Finally, the reconstructed image is obtained from the weighted average of the solutions to the two subproblems in an iterative framework. We compare the proposed algorithm with previous methods in terms of reconstruction accuracy and computational complexity. Numerous experiments demonstrate the superior performance of the proposed algorithm for compressed MR image reconstruction.
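
As a hedged, self-contained toy of the splitting idea described above (not the authors' algorithm), the sketch below alternates a gradient step on the least-squares data-fitting term with the two regularization subproblems and averages their solutions. For simplicity the L1 term is applied directly to image pixels rather than to transform coefficients, the TV proximal step is only approximated by a few smoothed-gradient iterations, and the sampling mask, weights and iteration counts are illustrative.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_tv_smoothed(v, alpha, n_steps=20, step=0.2, eps=1e-6):
    """Approximate TV proximal operator via gradient descent on a smoothed TV term."""
    x = v.copy()
    for _ in range(n_steps):
        dx = np.diff(x, axis=0, append=x[-1:, :])
        dy = np.diff(x, axis=1, append=x[:, -1:])
        mag = np.sqrt(dx**2 + dy**2 + eps)
        px, py = dx / mag, dy / mag
        # divergence of the normalized gradient field (negative TV subgradient)
        div = (np.diff(px, axis=0, prepend=px[:1, :])
               + np.diff(py, axis=1, prepend=py[:, :1]))
        x -= step * ((x - v) - alpha * div)
    return x

def reconstruct(kspace, mask, lam_l1=0.01, lam_tv=0.01, n_iter=50):
    """Toy compressed-sensing MR reconstruction with an L1/TV splitting scheme."""
    x = np.real(np.fft.ifft2(kspace * mask))
    for _ in range(n_iter):
        # gradient step on the least-squares data-fitting term
        resid = mask * (np.fft.fft2(x) - kspace)
        x = x - np.real(np.fft.ifft2(resid))
        # solve the two regularization subproblems and average their solutions
        x_l1 = soft_threshold(x, lam_l1)
        x_tv = prox_tv_smoothed(x, lam_tv)
        x = 0.5 * (x_l1 + x_tv)
    return x

# Toy usage: 30% random k-space sampling of a piecewise-constant phantom.
rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
mask = rng.random((64, 64)) < 0.3
kspace = mask * np.fft.fft2(img)
rec = reconstruct(kspace, mask)
```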


Computer Vision and Pattern Recognition | 2010

Automatic image annotation using group sparsity

Shaoting Zhang; Junzhou Huang; Yuchi Huang; Yang Yu; Hongsheng Li; Dimitris N. Metaxas

Automatically assigning relevant text keywords to images is an important problem. Many algorithms have been proposed in the past decade and have achieved good performance. Efforts have focused on modeling representations of keywords, but the properties of features have not been well investigated. In most cases, a group of features is preselected, yet important feature properties are not exploited to select features. In this paper, we introduce a regularization-based feature selection algorithm that leverages both the sparsity and clustering properties of features, and incorporate it into the image annotation task. A novel approach is also proposed to iteratively obtain similar and dissimilar pairs from both the keyword similarity and relevance feedback, so that keyword similarity is modeled within the annotation framework. Numerous experiments are designed to compare the performance of features, feature combinations and regularization-based feature selection methods on the image annotation task, giving insight into the properties of these features. The experimental results demonstrate that the group-sparsity-based method is more accurate and stable than the others.
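
One plausible reading of the similar/dissimilar pair construction (without the relevance-feedback loop, which is not reproduced here) is sketched below: induce an image-to-image similarity from the keyword-similarity matrix and threshold it. The normalization and both thresholds are assumptions made for illustration.

```python
import numpy as np

def build_pairs(labels, keyword_sim, sim_thresh=0.6, dis_thresh=0.1):
    """Form similar / dissimilar image pairs from binary keyword annotations.

    labels:      (n_images, n_keywords) binary annotation matrix
    keyword_sim: (n_keywords, n_keywords) keyword-similarity matrix in [0, 1]
    """
    # soft image-to-image similarity induced by the keyword similarity
    sim = labels @ keyword_sim @ labels.T
    norm = np.sqrt(np.outer(labels.sum(1), labels.sum(1))) + 1e-12
    sim = sim / norm
    similar, dissimilar = [], []
    n = labels.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            if sim[i, j] >= sim_thresh:
                similar.append((i, j))
            elif sim[i, j] <= dis_thresh:
                dissimilar.append((i, j))
    return similar, dissimilar

# Toy usage: 6 images, 4 keywords, keywords 0 and 1 are similar.
rng = np.random.default_rng(0)
labels = (rng.random((6, 4)) > 0.5).astype(float)
K = np.eye(4)
K[0, 1] = K[1, 0] = 0.8
similar_pairs, dissimilar_pairs = build_pairs(labels, K)
```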


Computer Vision and Pattern Recognition | 2012

Learning active facial patches for expression analysis

Lin Zhong; Qingshan Liu; Peng Yang; Bo Liu; Junzhou Huang; Dimitris N. Metaxas

In this paper, we present a new idea for analyzing facial expressions by exploring common and specific information among different expressions. Inspired by the observation that only a few facial parts are active in expression disclosure (e.g., around the mouth and eyes), we try to discover the common patches, which are important for discriminating all expressions, and the specific patches, which are important only for a particular expression. A two-stage multi-task sparse learning (MTSL) framework is proposed to efficiently locate these discriminative patches. In the first stage of MTSL, expression recognition tasks, each of which aims to find dominant patches for one expression, are combined to locate common patches. In the second stage, two related tasks, facial expression recognition and face verification, are coupled to learn the specific facial patches for each individual expression. Extensive experiments validate the existence and significance of common and specific patches. Utilizing these learned patches, we achieve superior performance on expression recognition compared to state-of-the-art methods.
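
The following sketch conveys the common-versus-specific patch idea in a much simpler form than the paper's two-stage multi-task sparse learning: it fits an independent L1-regularized one-vs-rest classifier per expression and intersects the resulting supports. Treating each patch as a single feature, and the regularization strength C, are assumptions made for brevity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def patch_support(X, y, n_classes, C=0.5):
    """For each expression, fit a sparse one-vs-rest classifier over patch features
    and return which patches are common to all expressions vs. specific to one."""
    supports = []
    for c in range(n_classes):
        clf = LogisticRegression(penalty="l1", solver="liblinear", C=C)
        clf.fit(X, (y == c).astype(int))
        supports.append(np.abs(clf.coef_).ravel() > 1e-8)
    S = np.vstack(supports)                    # (n_classes, n_patches)
    common = np.where(S.all(axis=0))[0]        # patches selected by every expression
    specific = [np.where(S[c] & ~np.delete(S, c, 0).any(axis=0))[0]
                for c in range(n_classes)]     # patches selected by only one expression
    return common, specific

# Toy usage with random "patch feature" data (one feature per patch here).
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 40))
y = rng.integers(0, 6, size=300)
common, specific = patch_support(X, y, n_classes=6)
```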


International Conference on Computer Vision | 2009

Learning with dynamic group sparsity

Junzhou Huang; Xiaolei Huang; Dimitris N. Metaxas

This paper investigates a new learning formulation called dynamic group sparsity. It is a natural extension of the standard sparsity concept in compressive sensing, and is motivated by the observation that in some practical sparse data the nonzero coefficients are often not random but tend to be clustered. Intuitively, better results can be achieved in these cases by reasonably utilizing both clustering and sparsity priors. Motivated by this idea, we have developed a new greedy sparse recovery algorithm, which prunes data residues in the iterative process according to both sparsity and group clustering priors rather than only sparsity as in previous methods. The proposed algorithm can stably recover sparse data with clustering trends using far fewer measurements and computations than current state-of-the-art algorithms, with provable guarantees. Moreover, our algorithm can adaptively learn the dynamic group structure and the sparsity number if they are not available in practical applications. We have applied the algorithm to sparse recovery and background subtraction in videos. Numerous experiments with improved performance over previous methods further validate our theoretical proofs and the effectiveness of the proposed algorithm.
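
A minimal sketch of the pruning idea, assuming a 1-D neighborhood and an IHT-style outer loop rather than the authors' exact greedy algorithm: each candidate coefficient is scored by its own magnitude plus a weighted sum of its neighbors' magnitudes, so clustered supports survive pruning. The neighbor weight and problem sizes are illustrative.

```python
import numpy as np

def dgs_prune(coeffs, k, neighbor_weight=0.5):
    """Keep the k entries with the largest group-aware score: own magnitude plus
    a weighted sum of the magnitudes of their immediate (1-D) neighbors."""
    mag = np.abs(coeffs)
    padded = np.pad(mag, 1)
    score = mag + neighbor_weight * (padded[:-2] + padded[2:])
    keep = np.argsort(score)[-k:]
    pruned = np.zeros_like(coeffs)
    pruned[keep] = coeffs[keep]
    return pruned

def dgs_recover(A, y, k, n_iter=30, neighbor_weight=0.5):
    """Toy recovery of a clustered k-sparse signal from y = A x: an IHT-style loop
    with the group-aware pruning above in place of plain hard thresholding."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(n_iter):
        x = x + step * A.T @ (y - A @ x)      # gradient step on the residual
        x = dgs_prune(x, k, neighbor_weight)  # prune using sparsity + clustering priors
    return x

# Toy usage: a clustered 8-sparse signal of length 100 from 40 random measurements.
rng = np.random.default_rng(0)
x_true = np.zeros(100)
x_true[30:38] = rng.standard_normal(8)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
y = A @ x_true
x_hat = dgs_recover(A, y, k=8)
```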


International Conference on Computer Vision | 2013

Pose-Free Facial Landmark Fitting via Optimized Part Mixtures and Cascaded Deformable Shape Model

Xiang Yu; Junzhou Huang; Shaoting Zhang; Wang Yan; Dimitris N. Metaxas

This paper addresses the problem of facial landmark localization and tracking from a single camera. We present a two-stage cascaded deformable shape model to effectively and efficiently localize facial landmarks under large head pose variations. For face detection, we propose a group sparse learning method to automatically select the most salient facial landmarks. By introducing a 3D face shape model, we use Procrustes analysis to achieve pose-free facial landmark initialization. For deformation, the first step uses mean-shift local search with a constrained local model to rapidly approach the global optimum. The second step uses component-wise active contours to discriminatively refine the subtle shape variations. Our framework can simultaneously handle face detection, pose-free landmark localization and tracking in real time. Extensive experiments are conducted on both laboratory-environment face databases and face-in-the-wild databases. All results demonstrate that our approach has certain advantages over state-of-the-art methods in handling pose variations.
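
The abstract's pose-free initialization rests on Procrustes analysis; the generic 2-D similarity-Procrustes alignment below shows what that step amounts to (the 3D shape model and the cascaded deformation stages are not reproduced, and the toy shapes are illustrative).

```python
import numpy as np

def procrustes_align(shape, reference):
    """Align a 2-D landmark shape (n, 2) to a reference shape with a similarity
    transform (translation, uniform scale, rotation) via Procrustes analysis."""
    mu_s, mu_r = shape.mean(0), reference.mean(0)
    S, R = shape - mu_s, reference - mu_r
    # optimal rotation from the SVD of the cross-covariance matrix
    U, _, Vt = np.linalg.svd(S.T @ R)
    rot = U @ Vt
    if np.linalg.det(rot) < 0:                 # avoid reflections
        U[:, -1] *= -1
        rot = U @ Vt
    scale = np.trace((S @ rot).T @ R) / np.trace(S.T @ S)
    return scale * (S @ rot) + mu_r

# Toy usage: recover a rotated, scaled and translated copy of a 5-point shape.
rng = np.random.default_rng(0)
ref = rng.standard_normal((5, 2))
theta = 0.3
Rm = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
shape = 1.7 * ref @ Rm.T + np.array([2.0, -1.0])
aligned = procrustes_align(shape, ref)         # approximately equals ref
```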


International Conference on Pattern Recognition | 2004

A new iris segmentation method for recognition

Junzhou Huang; Yunhong Wang; Tieniu Tan; Jiali Cui

As the first stage of an iris recognition system, iris segmentation is very important. If the iris region is not correctly segmented, four kinds of noise may remain in the segmented region: eyelashes, eyelids, reflections and the pupil, all of which degrade recognition performance. This paper proposes a new noise-removal approach based on the fusion of edge and region information. The whole procedure includes three steps: 1) rough localization and normalization, 2) edge information extraction based on phase congruency, and 3) the fusion of edge and region information. Experimental results on a set of 2,096 images show that the proposed method delivers encouraging improvements in recognition accuracy.
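
Phase congruency and the edge/region fusion are beyond a short sketch, but step 1 ("rough localization and normalization") typically reduces to rubber-sheet unwrapping of the annulus between the pupil and iris boundaries. The sketch below assumes those circle parameters are already known and uses nearest-neighbour sampling; the output size is an illustrative choice.

```python
import numpy as np

def normalize_iris(image, pupil_center, pupil_radius, iris_radius,
                   n_radial=64, n_angular=256):
    """Unwrap the annular iris region into a fixed-size rectangular (polar) image,
    the usual normalization step after rough localization."""
    cx, cy = pupil_center
    thetas = np.linspace(0.0, 2.0 * np.pi, n_angular, endpoint=False)
    radii = np.linspace(0.0, 1.0, n_radial)
    out = np.zeros((n_radial, n_angular), dtype=image.dtype)
    for i, r in enumerate(radii):
        rho = pupil_radius + r * (iris_radius - pupil_radius)
        xs = np.clip((cx + rho * np.cos(thetas)).astype(int), 0, image.shape[1] - 1)
        ys = np.clip((cy + rho * np.sin(thetas)).astype(int), 0, image.shape[0] - 1)
        out[i, :] = image[ys, xs]              # nearest-neighbour sampling
    return out

# Toy usage on a synthetic eye image.
img = np.random.default_rng(0).random((280, 320))
polar = normalize_iris(img, pupil_center=(160, 140), pupil_radius=40, iris_radius=110)
```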


International Conference on Pattern Recognition | 2004

An iris image synthesis method based on PCA and super-resolution

Jiali Cui; Yunhong Wang; Junzhou Huang; Tieniu Tan; Zhenan Sun

Constructing very large iris databases is important for the performance evaluation of iris recognition algorithms. However, due to practical constraints, no very large common iris databases currently exist. In this paper, an iris image synthesis method based on principal component analysis (PCA) and super-resolution is proposed. A PCA-based iris recognition algorithm is first introduced, and then the iris image synthesis method is presented. The synthesis method first constructs coarse iris images from given coefficients. Then, the synthesized iris images are enhanced using super-resolution. By controlling the coefficients, we can create many iris images of specified classes. Extensive experiments show that the synthesized iris images cluster satisfactorily and that the synthesized iris databases can be made very large.
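
A hedged sketch of the coarse synthesis stage: fit PCA to flattened iris images, draw new coefficient vectors, and map them back to image space (the super-resolution enhancement is not reproduced). Sampling the coefficients from a Gaussian matched to the training coefficients is an assumption about how "controlling the coefficients" might be done.

```python
import numpy as np
from sklearn.decomposition import PCA

def synthesize_iris(train_images, n_components=20, n_samples=5, seed=0):
    """Fit PCA on flattened iris images, then draw new coefficient vectors from a
    Gaussian matched to the training coefficients and map them back to image space."""
    h, w = train_images.shape[1:]
    X = train_images.reshape(len(train_images), -1)
    pca = PCA(n_components=n_components).fit(X)
    coeffs = pca.transform(X)
    rng = np.random.default_rng(seed)
    # sample per-component coefficients around the training mean (this is what
    # "controlling the coefficients" is taken to mean in this sketch)
    new_coeffs = coeffs.mean(0) + rng.standard_normal((n_samples, n_components)) * coeffs.std(0)
    coarse = pca.inverse_transform(new_coeffs).reshape(n_samples, h, w)
    return coarse   # the paper then sharpens these coarse images with super-resolution

# Toy usage on random "iris" images of size 32x128.
train = np.random.default_rng(1).random((50, 32, 128))
fake = synthesize_iris(train)
```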

Collaboration


Dive into Junzhou Huang's collaborations.

Top Co-Authors

Shaoting Zhang, University of North Carolina at Charlotte
Chen Chen, University of Texas at Arlington
Yeqing Li, University of Texas at Arlington
Zheng Xu, University of Texas at Arlington
Jiawen Yao, University of Texas at Arlington
Feiyun Zhu, University of Texas at Arlington
Sheng Wang, University of Texas at Arlington