Kaiqi Huang
Chinese Academy of Sciences
Publication
Featured research published by Kaiqi Huang.
IEEE Transactions on Image Processing | 2009
Shiqi Yu; Tieniu Tan; Kaiqi Huang; Kui Jia; Xinyu Wu
Gender is an important cue in social activities. In this correspondence, we present a study and analysis of gender classification based on human gait. Psychological experiments were carried out, showing that humans can recognize gender from gait information and that the contributions of different body components vary. The prior knowledge extracted from these psychological experiments can be combined with an automatic method to further improve classification accuracy. The proposed method, which incorporates this human knowledge, achieves higher performance than a number of other methods and is even more accurate than human observers. We also present a numerical analysis of the contributions of different body components, which shows that the head and hair, back, chest, and thigh are more discriminative than other components. In addition, we carried out challenging cross-race experiments that used Asian gait data to classify the gender of Europeans, and vice versa, obtaining encouraging results. All of the above shows that gait-based gender classification is feasible in controlled environments. In real applications, it still faces many difficulties, such as view variation, changes of clothing and shoes, and carried objects. We analyze these difficulties and suggest possible solutions.
International Conference on Pattern Recognition | 2006
Zhang Zhang; Kaiqi Huang; Tieniu Tan
This paper compares different similarity measures used for trajectory clustering in outdoor surveillance scenes. Six similarity measures are presented, and their performance is evaluated by correct clustering rate (CCR) and time cost (TC). The experimental results demonstrate that, in outdoor surveillance scenes, the simpler PCA+Euclidean distance is sufficient for the clustering task even in the presence of noise, whereas more complex similarity measures such as DTW and LCSS are inefficient due to their high computational cost.
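To illustrate why time cost separates these measures, here is a minimal sketch (not the paper's implementation; all names are illustrative) of the two extremes compared above: plain Euclidean distance, which is linear in trajectory length but requires equal-length inputs, and DTW, which tolerates temporal misalignment at quadratic cost.

```python
import numpy as np

def euclidean_distance(a, b):
    """Summed point-wise distance between two equal-length trajectories."""
    return float(np.linalg.norm(a - b))

def dtw_distance(a, b):
    """Dynamic time warping distance between trajectories a (n x 2)
    and b (m x 2), filled in with an O(n*m) table."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping steps
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])
```

DTW aligns a trajectory with a time-warped copy of itself at zero cost, which Euclidean distance cannot do; the O(nm) table is exactly the source of the high time cost reported in the experiments.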
International Conference on Pattern Recognition | 2008
Min Li; Zhaoxiang Zhang; Kaiqi Huang; Tieniu Tan
This paper proposes a novel method to address the problem of estimating the number of people in surveillance scenes with people gathering and waiting. The proposed method combines a MID (mosaic image difference) based foreground segmentation algorithm and a HOG (histograms of oriented gradients) based head-shoulder detection algorithm to provide an accurate estimation of people counts in the observed area. In our framework, the MID-based foreground segmentation module provides active areas for the head-shoulder detection module to detect heads and count the number of people. Numerous experiments are conducted and convincing results demonstrate the effectiveness of our method.
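The two-stage structure (foreground segmentation feeding a detection module) can be sketched with simplified stand-ins: plain background differencing in place of MID, and connected-component counting in place of HOG head-shoulder detection. This is a toy illustration of the pipeline shape only, not the authors' method.

```python
import numpy as np
from collections import deque

def foreground_mask(frame, background, thresh=30):
    """Stand-in for the MID step: simple background differencing
    (the paper's mosaic-image-difference is more robust)."""
    return np.abs(frame.astype(int) - background.astype(int)) > thresh

def count_blobs(mask, min_size=1):
    """Count connected foreground regions (4-connectivity) as a crude
    proxy for running a per-region head-shoulder detector."""
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    count = 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                size, q = 0, deque([(sy, sx)])
                seen[sy, sx] = True
                while q:                      # BFS flood fill of one blob
                    y, x = q.popleft()
                    size += 1
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if size >= min_size:
                    count += 1
    return count
```

In the paper, the foreground mask restricts where the head-shoulder detector runs, which is what makes the count accurate in gathering-and-waiting scenes where whole-body detection fails.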
Computer Vision and Pattern Recognition | 2007
Ying Wang; Kaiqi Huang; Tieniu Tan
This paper addresses human activity recognition based on a new feature descriptor. For a binary human silhouette, an extended Radon transform, the R transform, is employed to represent low-level features. The advantage of the R transform lies in its low computational complexity and geometric invariance. A set of HMMs is then trained on the extracted features to recognize activities. Compared with other commonly used feature descriptors, the R transform is robust to frame loss in video, disjoint silhouettes, and holes in the shape, and thus achieves better performance in recognizing similar activities. Extensive experiments have demonstrated the effectiveness of the proposed method.
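A minimal numpy sketch of the R transform idea (the sum of squared Radon projections at each angle), assuming a non-empty, roughly centred binary silhouette. The projection here is a crude coordinate-binning Radon, not the paper's implementation:

```python
import numpy as np

def r_transform(silhouette, n_angles=36):
    """R transform of a binary silhouette:
    R(theta) = sum over rho of T_f(rho, theta)^2,
    where the Radon projection T_f is approximated by binning pixel
    coordinates onto the rotated axis rho = x*cos(theta) + y*sin(theta).
    Assumes the silhouette contains at least one foreground pixel."""
    ys, xs = np.nonzero(silhouette)
    ys = ys - ys.mean()                      # centre for translation invariance
    xs = xs - xs.mean()
    thetas = np.linspace(0, np.pi, n_angles, endpoint=False)
    out = np.empty(n_angles)
    for i, t in enumerate(thetas):
        rho = xs * np.cos(t) + ys * np.sin(t)
        proj = np.bincount(np.round(rho - rho.min()).astype(int))
        out[i] = np.sum(proj.astype(float) ** 2)
    return out / out.sum()                   # normalise for scale invariance
```

The resulting 1-D profile over angle is the frame-level feature that would be fed to the HMMs; because it pools over all pixels, a missing limb or a hole in the silhouette perturbs it only mildly, which is the robustness the abstract refers to.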
International Conference on Image Processing | 2011
Shuai Zheng; Junge Zhang; Kaiqi Huang; Ran He; Tieniu Tan
Recent gait recognition systems often suffer from challenges such as viewing-angle variation and large intra-class variations. To address these challenges, this paper presents a robust View Transformation Model for gait recognition. Based on the gait energy image, the proposed method establishes a robust view transformation model via robust principal component analysis, with partial least squares used as the feature selection method. Compared with existing methods, the proposed method finds a shared, linearly correlated low-rank subspace, which makes the view transformation model robust to viewing-angle variation as well as clothing and carrying-condition changes. Experimental results on the CASIA gait dataset show that the proposed method outperforms existing methods.
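The view-transformation idea can be sketched in simplified form with a plain (non-robust) SVD in place of robust PCA, and without the PLS feature-selection step; the matrix layout and names below are assumptions, not the paper's code.

```python
import numpy as np

def fit_vtm(M, rank):
    """Factor the stacked feature matrix M (views*dim x subjects) into
    view-dependent projections P and shared subject coordinates Q via
    truncated SVD, so that M ~ P @ Q."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    P = U[:, :rank] * s[:rank]   # rows of P are grouped by view
    Q = Vt[:rank, :]             # shared low-rank subject representation
    return P, Q

def transform_view(feature, P_src, P_dst):
    """Map a feature observed under the source view into the target view:
    recover the shared coordinates by least squares, re-project with P_dst."""
    coords, *_ = np.linalg.lstsq(P_src, feature, rcond=None)
    return P_dst @ coords
```

Because the subject coordinates live in one shared low-rank subspace, a gait feature captured from one viewing angle can be synthesised into another angle before matching, which is what makes cross-view recognition possible.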
Computer Vision and Pattern Recognition | 2011
Junge Zhang; Kaiqi Huang; Yinan Yu; Tieniu Tan
Object localization is a challenging problem due to variations in object structure and illumination. Although existing part-based models have achieved impressive progress in the past several years, their improvement is still limited by low-level feature representation. This paper therefore studies the description of object structure at both the feature level and the topology level. Following the bottom-up paradigm, we propose a boosted Local Structured HOG-LBP based object detector. First, at the feature level, we propose the Local Structured Descriptor to capture an object's local structure, and develop descriptors from shape and texture information, respectively. Second, at the topology level, we present a boosted feature selection and fusion scheme for the part-based object detector. All experiments are conducted on the challenging PASCAL VOC 2007 dataset. Experimental results show that our method achieves state-of-the-art performance.
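The texture half of the HOG-LBP descriptor builds on local binary patterns. A minimal 8-neighbour LBP histogram (the basic operator, not the paper's structured variant) looks like this:

```python
import numpy as np

def lbp_8(image):
    """Basic 8-neighbour local binary pattern codes for the interior
    pixels of a 2-D grayscale image: each neighbour >= centre sets one bit."""
    c = image[1:-1, 1:-1]
    neighbours = [image[0:-2, 0:-2], image[0:-2, 1:-1], image[0:-2, 2:],
                  image[1:-1, 2:],   image[2:,   2:],   image[2:,   1:-1],
                  image[2:,   0:-2], image[1:-1, 0:-2]]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, nb in enumerate(neighbours):
        codes |= (nb >= c).astype(np.uint8) << bit
    return codes

def lbp_histogram(image, bins=256):
    """Normalised histogram of LBP codes, used as a texture descriptor."""
    codes = lbp_8(image)
    hist = np.bincount(codes.ravel(), minlength=bins).astype(float)
    return hist / hist.sum()
```

In a HOG-LBP detector, such histograms are computed per cell and concatenated with the HOG shape channels, so that boosting can select the most discriminative shape and texture dimensions jointly.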
Pattern Recognition | 2008
Kaiqi Huang; Liangsheng Wang; Tieniu Tan; Stephen J. Maybank
Autonomous video surveillance and monitoring has a rich history. Many deployed systems are able to reliably track human motion in indoor and controlled outdoor environments. However, object detection and tracking at night remain very important problems for visual surveillance. The objects are often distant and small, and their signatures have low contrast against the background. Traditional methods based on analyzing the difference between successive frames, or between a frame and a background frame, do not work well under these conditions. In this paper, a novel real-time object detection algorithm based on contrast analysis is proposed for night-time visual surveillance. In the first stage, the change of local contrast over time is used to detect potential moving objects. Then motion prediction and spatial nearest-neighbor data association are used to suppress false alarms. Experiments on real scenes show that the algorithm is effective for night-time object detection and tracking.
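The contrast-analysis stage can be sketched as follows, assuming a (max − min)/(max + min) local contrast over 3×3 neighbourhoods; the definition and threshold are illustrative, not the paper's exact formulation.

```python
import numpy as np

def local_contrast(frame, eps=1e-6):
    """Michelson-style contrast (max - min) / (max + min + eps) over each
    interior pixel's 3x3 neighbourhood, built from nine shifted views."""
    h, w = frame.shape
    stack = np.stack([frame[i:h - 2 + i, j:w - 2 + j]
                      for i in range(3) for j in range(3)])
    mx, mn = stack.max(axis=0), stack.min(axis=0)
    return (mx - mn) / (mx + mn + eps)

def detect_moving(prev_frame, cur_frame, thresh=0.2):
    """Flag interior pixels whose local contrast changed notably between
    frames; these are the potential moving objects before the false-alarm
    suppression step."""
    delta = np.abs(local_contrast(cur_frame) - local_contrast(prev_frame))
    return delta > thresh
```

The point of thresholding contrast change rather than raw intensity difference is that a small, dim object still produces a sharp local contrast edge at night, while uniform illumination drift does not.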
European Conference on Computer Vision | 2014
Chong Wang; Weiqiang Ren; Kaiqi Huang; Tieniu Tan
Localizing objects in cluttered backgrounds is a challenging task in weakly supervised localization. Due to large object variations in cluttered images, objects have large ambiguity with backgrounds. However, backgrounds contain useful latent information, e.g., the sky for aeroplanes. If we can learn this latent information, object-background ambiguity can be reduced to suppress the background. In this paper, we propose latent category learning (LCL), an unsupervised learning problem given only image-level class labels. First, inspired by latent semantic discovery, we use probabilistic Latent Semantic Analysis (pLSA) to learn the latent categories, which can represent objects, object parts, or backgrounds. Second, to determine which category contains the target object, we propose a category selection method that evaluates each category's discrimination. We evaluate the method on the PASCAL VOC 2007 database and the ILSVRC 2013 detection challenge. On VOC 2007, the proposed method yields an annotation accuracy of 48%, outperforming previous results by 10%. More importantly, we achieve a detection average precision of 30.9%, which improves on previous results by 8% and is competitive with the supervised deformable part model (DPM) 5.0 baseline of 33.7%. On ILSVRC 2013 detection, the method yields a precision of 6.0%, which is also competitive with DPM 5.0.
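The pLSA step fits P(w|z) and P(z|d) by EM on a co-occurrence matrix. A minimal sketch on a word-document count matrix (visual features would play the role of words); all implementation details here are assumptions:

```python
import numpy as np

def plsa(counts, n_topics, n_iter=50, seed=0):
    """EM for pLSA on a (n_words x n_docs) count matrix.
    Returns P(w|z) with columns summing to 1 and P(z|d) likewise."""
    rng = np.random.default_rng(seed)
    n_words, n_docs = counts.shape
    p_w_z = rng.random((n_words, n_topics)); p_w_z /= p_w_z.sum(axis=0)
    p_z_d = rng.random((n_topics, n_docs)); p_z_d /= p_z_d.sum(axis=0)
    for _ in range(n_iter):
        # E-step: posterior P(z|w,d) proportional to P(w|z) * P(z|d)
        joint = p_w_z[:, :, None] * p_z_d[None, :, :]        # (w, z, d)
        joint /= joint.sum(axis=1, keepdims=True) + 1e-12
        # M-step: reweight the posterior by the observed counts
        nz = joint * counts[:, None, :]                      # expected counts
        p_w_z = nz.sum(axis=2)
        p_w_z /= p_w_z.sum(axis=0, keepdims=True)
        p_z_d = nz.sum(axis=0)
        p_z_d /= p_z_d.sum(axis=0, keepdims=True)
    return p_w_z, p_z_d
```

In LCL, each learned topic z is a candidate latent category (object, part, or background such as sky), and the subsequent selection step scores each topic's discrimination to pick the one carrying the target object.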
Systems, Man, and Cybernetics | 2010
Tianhao Zhang; Kaiqi Huang; Xuelong Li; Jie Yang; Dacheng Tao
Orthogonal neighborhood-preserving projection (ONPP) is a recently developed orthogonal linear algorithm for overcoming the out-of-sample problem in the well-known manifold learning algorithm locally linear embedding. It has been shown that ONPP is a strong analyzer of high-dimensional data. However, when applied to classification problems in a supervised setting, ONPP focuses only on intraclass geometrical information while ignoring the interaction of samples from different classes. To enhance the performance of ONPP in classification, a new algorithm termed discriminative ONPP (DONPP) is proposed in this paper. DONPP 1) takes into account both intraclass and interclass geometries; 2) considers the neighborhood information of interclass relationships; and 3) follows the orthogonality property of ONPP. Furthermore, DONPP is extended to the semisupervised case, i.e., semisupervised DONPP (SDONPP), which uses unlabeled samples to improve the classification accuracy of the original DONPP. Empirical studies demonstrate the effectiveness of both DONPP and SDONPP.
Systems, Man, and Cybernetics | 2011
Kaiqi Huang; Dacheng Tao; Yuan Yuan; Xuelong Li; Tieniu Tan
Inspired by the human visual cognition mechanism, this paper first presents a scene classification method based on an improved standard model feature. Compared with state-of-the-art efforts in scene classification, the newly proposed method is more robust, more selective, and of lower complexity. These advantages are demonstrated by two sets of experiments on both our own database and standard public ones. Furthermore, occlusion and disorder problems in scene classification for video surveillance are also studied for the first time in this paper.