Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Junbiao Pang is active.

Publication


Featured research published by Junbiao Pang.


IEEE Transactions on Image Processing | 2011

Transferring Boosted Detectors Towards Viewpoint and Scene Adaptiveness

Junbiao Pang; Qingming Huang; Shuicheng Yan; Shuqiang Jiang; Lei Qin

In object detection, disparities between the distributions of the training samples and the test samples are often inevitable, resulting in degraded performance in application scenarios. In this paper, we focus on the disparities caused by viewpoint and scene changes and propose an efficient solution to these cases by adapting generic boosting-style detectors. A pretrained boosting-style detector encodes a priori knowledge in the form of selected features and weak-classifier weights. Towards adaptiveness, the selected features are shifted to the most discriminative locations and scales to compensate for possible appearance variations. Moreover, the weighting coefficients are further adapted with covariate boost, which maximally utilizes the related training data to enrich the limited new examples. Extensive experiments validate the proposed adaptation mechanism towards viewpoint and scene adaptiveness and show encouraging improvements in detection accuracy over state-of-the-art methods.


Computer Vision and Image Understanding | 2014

Image classification by non-negative sparse coding, correlation constrained low-rank and sparse decomposition

Chunjie Zhang; Jing Liu; Chao Liang; Zhe Xue; Junbiao Pang; Qingming Huang

We propose an image classification framework that leverages non-negative sparse coding and a correlation-constrained low-rank and sparse matrix decomposition technique (CCLR-Sc+SPM). First, we propose a new non-negative sparse coding method combined with max pooling and spatial pyramid matching (Sc+SPM) to extract local feature information for image representation: non-negative sparse coding encodes the local features, and max pooling with spatial pyramid matching (SPM) then produces the feature vectors that represent images. Second, we leverage the correlation-constrained low-rank and sparse matrix recovery technique to decompose the feature vectors of images into a low-rank matrix and a sparse error matrix by considering the correlations between images. To incorporate both common and specific attributes into the image representation, we again adopt the idea of sparse coding to re-encode the Sc+SPM representation of each image. In particular, we collect the columns of both matrices as the bases and use the coding parameters, learned through locality-constrained linear coding (LLC), as the updated image representation. Finally, a linear SVM classifier is trained for final classification. Experimental results show that the proposed method matches or outperforms state-of-the-art results on several benchmarks.
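
The pooling step can be illustrated with a short sketch. Below is a minimal, hypothetical Python sketch of max pooling over a two-level spatial pyramid, assuming each local descriptor has already been encoded as a non-negative sparse code; the function name, grid levels, and data layout are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of max pooling over a spatial pyramid (SPM), assuming each
# local descriptor is already encoded as a non-negative sparse code vector.
# The 1x1 + 2x2 pyramid and all names are illustrative, not the paper's setup.

def spm_max_pool(codes, positions, width, height, levels=(1, 2)):
    """codes: list of K-dim code vectors; positions: (x, y) per descriptor.
    Returns the concatenated max-pooled vector over all pyramid cells."""
    K = len(codes[0])
    pooled = []
    for n in levels:                      # an n x n grid at each pyramid level
        cells = [[0.0] * K for _ in range(n * n)]
        for code, (x, y) in zip(codes, positions):
            cx = min(int(x * n / width), n - 1)
            cy = min(int(y * n / height), n - 1)
            cell = cells[cy * n + cx]
            for k in range(K):            # max pooling within the cell
                if code[k] > cell[k]:
                    cell[k] = code[k]
        for cell in cells:
            pooled.extend(cell)
    return pooled
```

Concatenating cells across levels keeps coarse global statistics alongside finer spatial layout, which is what lets a linear SVM work well on the pooled vectors.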


European Conference on Computer Vision | 2008

Multiple Instance Boost Using Graph Embedding Based Decision Stump for Pedestrian Detection

Junbiao Pang; Qingming Huang; Shuqiang Jiang

Pedestrian detection in still images must handle the large appearance and stance variations arising from the articulated structure and varied clothing of humans, as well as from viewpoint changes. In this paper, we address this problem by using multiple instances to represent these variations within a multiple instance learning (MIL) framework. Specifically, logistic multiple instance boost (LMIBoost) is advocated to learn the pedestrian appearance model. To use the histogram feature efficiently, we propose a graph-embedding-based decision stump for data with non-Gaussian distributions. First, the topology structure of the examples is carefully designed to keep between-class examples far apart and within-class examples close. Second, the K-means algorithm is adopted to quickly locate the multiple decision planes for the weak classifier. Experiments show the improved accuracy of the proposed approach in comparison with existing pedestrian detection methods on two public test sets: INRIA and the VOC2006 person detection subtask [1].


International Conference on Computer Vision | 2011

Treat samples differently: Object tracking with semi-supervised online CovBoost

Guorong Li; Lei Qin; Qingming Huang; Junbiao Pang; Shuqiang Jiang

Most feature selection methods for object tracking assume that labeled samples obtained in subsequent frames follow a distribution similar to that of the samples in previous frames. However, this assumption does not hold in some scenarios; as a result, the selected features are unsuitable for tracking and the "drift" problem occurs. In this paper, we consider the data distribution in tracking from a new perspective. We classify samples into three categories: auxiliary samples (from previous frames), target samples (collected in the current frame), and unlabeled samples (obtained in the next frame). To make the best use of them for tracking, we propose a novel semi-supervised transfer learning approach. Specifically, we assume that only the target samples follow the same distribution as the unlabeled samples, and we develop a novel semi-supervised CovBoost method that utilizes auxiliary and unlabeled samples effectively when training the best strong classifier for tracking. Furthermore, we develop a new online updating algorithm for semi-supervised CovBoost, enabling our tracker to handle significant variations of the tracked target and background successfully. We demonstrate the excellent performance of the proposed tracker on several challenging test videos.


Neurocomputing | 2015

Set-label modeling and deep metric learning on person re-identification

Hao Liu; Bingpeng Ma; Lei Qin; Junbiao Pang; Chunjie Zhang; Qingming Huang

Person re-identification aims at matching individuals across multiple non-overlapping adjacent cameras. By condensing multiple gallery images of a person into a whole, we propose a novel method named the Set-Label Model (SLM) to improve person re-identification performance under the multi-shot setting. Moreover, we utilize mutual information to measure the relevance between a query image and gallery sets. To decrease the computational complexity, we apply a Naive-Bayes Nearest-Neighbor algorithm to approximate the mutual information value. To overcome the limitations of traditional linear metric learning, we further develop a deep non-linear metric learning (DeepML) approach based on Neighborhood Component Analysis and a Deep Belief Network. To evaluate the effectiveness of our proposed approaches, SLM and DeepML, we carried out extensive experiments on two challenging datasets, i-LIDS and ETHZ. The experimental results demonstrate that the proposed methods obtain better performance than state-of-the-art methods.


Neurocomputing | 2014

Object categorization in sub-semantic space

Chunjie Zhang; Jian Cheng; Jing Liu; Junbiao Pang; Chao Liang; Qingming Huang; Qi Tian

Due to the semantic gap, low-level features are unsatisfactory for object categorization. Moreover, semantically related image representations may be unable to cope with large inter-class variations and are not very robust to noise. To solve these problems, we propose in this paper a novel object categorization method using a sub-semantic-space-based image representation. First, exemplar classifiers are trained by separating each training image from the others and serve as a weak semantic similarity measurement. Then a graph is constructed by combining the visual similarity and weak semantic similarity of the training images. We partition this graph into visually and semantically similar subsets, and each subset of images is used to train classifiers that separate it from the others. The learned subset classifiers are then used to construct a sub-semantic-space-based representation of images. This sub-semantic space is not only more semantically meaningful than exemplar-based representations but also more reliable and resistant to noise than traditional semantic-space-based image representations. Finally, we categorize objects in this sub-semantic space with a structure-regularized SVM classifier and conduct experiments on several public datasets to demonstrate the effectiveness of the proposed method.


IEEE Transactions on Multimedia | 2015

Unsupervised Web Topic Detection Using A Ranked Clustering-like Pattern Across Similarity Cascades

Junbiao Pang; Fei Jia; Chunjie Zhang; Weigang Zhang; Qingming Huang; Baocai Yin

With the massive growth of social media on the Internet, organizing, understanding, and monitoring user-generated content (UGC) has become one of the most pressing problems in today's society. Discovering topics on the web from a huge volume of UGC is one of the most promising approaches to this goal. Compared with classical topic detection and tracking in news articles, identifying topics on the web is by no means easy due to the noisy, sparse, and less-constrained data on the Internet. In this paper, we investigate methods from the perspective of similarity diffusion and propose a clustering-like pattern across similarity cascades (SCs). SCs are a series of subgraphs generated by truncating a similarity graph with a set of thresholds; maximal cliques are then used to capture topics. Finally, a topic-restricted similarity diffusion process is proposed to efficiently identify real topics from a large number of candidates. Experiments demonstrate that our approach outperforms state-of-the-art methods on three public data sets.
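
As a rough illustration of the cascade idea, the sketch below thresholds a toy similarity matrix at several levels and collects maximal cliques from each resulting subgraph as topic candidates. This is a simplified, assumed reading of the pipeline (pure-Python Bron-Kerbosch, hypothetical thresholds), not the paper's implementation, and it omits the topic-restricted diffusion step that ranks the candidates.

```python
# Sketch: similarity cascades = subgraphs from truncating one similarity
# graph at several thresholds; maximal cliques serve as topic candidates.
from itertools import combinations

def threshold_graph(sim, t):
    """Adjacency sets of the subgraph keeping edges with similarity >= t."""
    n = len(sim)
    adj = {i: set() for i in range(n)}
    for i, j in combinations(range(n), 2):
        if sim[i][j] >= t:
            adj[i].add(j)
            adj[j].add(i)
    return adj

def maximal_cliques(adj):
    """Bron-Kerbosch enumeration of maximal cliques (no pivoting)."""
    cliques = []
    def expand(r, p, x):
        if not p and not x:
            cliques.append(frozenset(r))
            return
        for v in list(p):
            expand(r | {v}, p & adj[v], x & adj[v])
            p.discard(v)
            x.add(v)
    expand(set(), set(adj), set())
    return cliques

def candidate_topics(sim, thresholds):
    """Union of maximal cliques across the similarity cascade."""
    cands = set()
    for t in thresholds:
        cands.update(maximal_cliques(threshold_graph(sim, t)))
    return cands
```

Tighter thresholds yield smaller, purer cliques; looser ones merge related items, so the union over the cascade covers topics at several granularities.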


Information Sciences | 2015

Image classification using boosted local features with random orientation and location selection

Chunjie Zhang; Jian Cheng; Yifan Zhang; Jing Liu; Chao Liang; Junbiao Pang; Qingming Huang; Qi Tian

Highlights: we propose an image classification method using boosted, randomly selected local features; we jointly consider local feature extraction, codebook generation, and classifier training; the proposed method trains a series of classifiers using a boosting strategy; experimental results demonstrate the effectiveness and efficiency of the proposed method.

The combination of local features with sparse coding techniques has improved image classification performance dramatically in recent years. Although very effective, this strategy still has two shortcomings. First, local features are often extracted in a pre-defined way (e.g., SIFT with dense sampling) without considering the classification task. Second, the codebook is generated by sparse coding or its variants by minimizing the reconstruction error, which has no direct relationship with the classification process. To alleviate these two problems, we propose a novel boosted local features method with random orientation and location selection. We first extract local features with random orientation and location using a weighting strategy. This randomization enables us to extract more types of information for image representation than pre-defined methods. The extracted local features are then encoded by sparse representation. Instead of generating the codebook in a single process, we construct a series of codebooks and the corresponding encoding parameters of local features using a boosting strategy, with the weights of local features determined by the classification performance of the learned classifiers. In this way, we combine local feature extraction and encoding with classifier training in a unified framework and gradually improve image classification performance. Experiments on several public image datasets prove the effectiveness and efficiency of the proposed method.


Journal of Visual Communication and Image Representation | 2012

Online selection of the best k-feature subset for object tracking

Guorong Li; Qingming Huang; Junbiao Pang; Shuqiang Jiang; Lei Qin

In this paper, we propose a new feature subset evaluation method for feature selection in object tracking. Since a feature that is useless by itself can become a good one when used together with other features, we propose to evaluate feature subsets as a whole for object tracking, instead of scoring each feature individually, and to find the most discriminative subset for tracking. We use a special tree to formalize the feature subset space. Conditional entropy is then used to evaluate feature subsets, and a simple but efficient greedy search algorithm is developed to search this tree and quickly obtain the optimal k-feature subset. Furthermore, our online k-feature subset selection method is integrated into a particle filter for robust tracking. Extensive experiments demonstrate that the k-feature subset selected by our method is more discriminative and thus improves tracking performance considerably.
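
A minimal sketch of the subset-as-a-whole idea, assuming discretized feature values: score a candidate subset by the conditional entropy of the label given the subset's joint values, and grow the subset greedily. The function names and the plain forward-greedy loop are illustrative assumptions, not the paper's exact tree-based search.

```python
# Sketch: evaluate feature SUBSETS jointly via conditional entropy
# H(label | subset values), then grow a k-feature subset greedily.
from collections import Counter
from math import log2

def conditional_entropy(samples, labels, subset):
    """H(label | feature values restricted to `subset`); lower is better."""
    groups = {}
    for feats, lab in zip(samples, labels):
        key = tuple(feats[i] for i in subset)
        groups.setdefault(key, []).append(lab)
    n = len(labels)
    h = 0.0
    for labs in groups.values():
        p_group = len(labs) / n
        counts = Counter(labs)
        h_group = -sum(c / len(labs) * log2(c / len(labs))
                       for c in counts.values())
        h += p_group * h_group
    return h

def greedy_k_subset(samples, labels, k):
    """Greedily add the feature that most reduces conditional entropy."""
    chosen, remaining = [], set(range(len(samples[0])))
    for _ in range(k):
        best = min(remaining,
                   key=lambda f: conditional_entropy(samples, labels,
                                                     chosen + [f]))
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

Because the score is computed on the joint values of the whole subset, a feature that is uninformative alone can still be selected once its combination with already-chosen features separates the classes.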


IEEE Transactions on Image Processing | 2015

Beyond Explicit Codebook Generation: Visual Representation Using Implicitly Transferred Codebooks

Chunjie Zhang; Jian Cheng; Jing Liu; Junbiao Pang; Qingming Huang; Qi Tian

The bag-of-visual-words model plays a very important role in visual applications. Local features are first extracted and then encoded to obtain the histogram-based image representation. Encoding local features requires a proper codebook. Usually, the codebook has to be generated for each data set, which makes it data-set dependent; it may also be biased when only a limited number of training images is available. Moreover, the codebook has to be pre-learned and cannot be updated quickly, especially in online visual applications. To solve these problems, in this paper we propose a novel implicit codebook transfer method for visual representation. Instead of explicitly generating a codebook for the new data set, we make use of pre-learned codebooks via non-linear transfer: the pre-learned codebooks are transformed non-linearly and used to reconstruct local features under sparsity constraints. The codebook need not be explicitly generated but can be implicitly transferred. In this way, we are able to reuse pre-learned codebooks for new visual applications by implicitly learning the codebook and the corresponding encoding parameters for image representation. We apply the proposed method to image classification and evaluate its performance on several public image data sets. Experimental results demonstrate the effectiveness and efficiency of the proposed method.

Collaboration


Dive into Junbiao Pang's collaborations.

Top Co-Authors

Qingming Huang (Chinese Academy of Sciences)
Chunjie Zhang (Chinese Academy of Sciences)
Weigang Zhang (Harbin Institute of Technology)
Baocai Yin (Dalian University of Technology)
Shuqiang Jiang (Chinese Academy of Sciences)
Guorong Li (Chinese Academy of Sciences)
Qi Tian (University of Texas at San Antonio)
Jing Liu (Chinese Academy of Sciences)