Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Xianbiao Qi is active.

Publication


Featured research published by Xianbiao Qi.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2014

Pairwise Rotation Invariant Co-Occurrence Local Binary Pattern

Xianbiao Qi; Rong Xiao; Chun-Guang Li; Yu Qiao; Jun Guo; Xiaoou Tang

Designing effective features is a fundamental problem in computer vision. However, it is usually difficult to achieve a good tradeoff between discriminative power and robustness. Previous work has shown that spatial co-occurrence can boost the discriminative power of features. However, existing co-occurrence features pay little attention to robustness and hence suffer from sensitivity to geometric and photometric variations. In this work, we study the Transform Invariance (TI) of co-occurrence features. Concretely, we formally introduce a Pairwise Transform Invariance (PTI) principle, then propose a novel Pairwise Rotation Invariant Co-occurrence Local Binary Pattern (PRICoLBP) feature, and further extend it to incorporate multi-scale, multi-orientation, and multi-channel information. Unlike other LBP variants, PRICoLBP not only captures spatial context co-occurrence information effectively, but also possesses rotation invariance. We evaluate PRICoLBP comprehensively on nine benchmark data sets from five different perspectives: encoding strategy, rotation invariance, the number of templates, speed, and discriminative power compared to other LBP variants. Furthermore, we apply PRICoLBP to six different but related applications: texture, material, flower, leaf, food, and scene classification, and demonstrate that PRICoLBP is efficient and effective, with a well-balanced tradeoff between discriminative power and robustness.
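
The building block beneath PRICoLBP is the classic 8-neighbour LBP code and a rotation-invariant mapping over its circular bit rotations. A minimal illustrative sketch (not the authors' implementation; the pairwise co-occurrence sampling along gradient directions is omitted):

```python
import numpy as np

def lbp8(img, r=1):
    """8-neighbour LBP codes for the interior pixels of a grayscale image."""
    h, w = img.shape
    center = img[r:h - r, r:w - r]
    offsets = [(-r, -r), (-r, 0), (-r, r), (0, r),
               (r, r), (r, 0), (r, -r), (0, -r)]
    code = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[r + dy:h - r + dy, r + dx:w - r + dx]
        code |= ((nb >= center).astype(np.uint8) << bit)
    return code

def rotation_invariant_table():
    """Map each 8-bit code to the minimum over its 8 circular bit rotations."""
    return np.array([min(((c << k) | (c >> (8 - k))) & 0xFF for k in range(8))
                     for c in range(256)], dtype=np.uint8)
```

Histogramming `rotation_invariant_table()[lbp8(img)]` gives a locally rotation invariant texture descriptor; PRICoLBP builds its pairwise co-occurrence statistics on top of codes like these.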


British Machine Vision Conference | 2013

Exploring Motion Boundary based Sampling and Spatial-Temporal Context Descriptors for Action Recognition

Xiaojiang Peng; Yu Qiao; Qiang Peng; Xianbiao Qi

Feature representation is important for human action recognition. Recently, Wang et al. [25] proposed dense trajectory (DT) based features for action video representation and achieved state-of-the-art performance on several action datasets. In this paper, we improve the DT method in two ways. First, we introduce a motion boundary based dense sampling strategy, which greatly reduces the number of valid trajectories while preserving their discriminative power. Second, we develop a set of new descriptors which describe the spatial-temporal context of motion trajectories. To evaluate the proposed methods, we conduct extensive experiments on three benchmarks: KTH, YouTube, and HMDB51. The results show that our sampling strategy significantly reduces the computational cost of point tracking without degrading performance. Meanwhile, our spatial-temporal context descriptors achieve superior performance to state-of-the-art methods.
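
The motion boundary based sampling idea can be sketched as a mask over the flow field: points where the flow is locally homogeneous (static background, uniform camera motion) are dropped before trajectories are tracked. This is an illustrative simplification with a hypothetical function name and threshold, not the paper's code:

```python
import numpy as np

def motion_boundary_mask(flow, thresh=1.0):
    """Boolean mask of points to keep for dense sampling.
    flow: (H, W, 2) optical-flow field. Motion-boundary strength is the
    gradient magnitude of the flow components; points with near-homogeneous
    motion fall below `thresh` and are discarded."""
    gy_u, gx_u = np.gradient(flow[..., 0])   # gradients of the horizontal flow
    gy_v, gx_v = np.gradient(flow[..., 1])   # gradients of the vertical flow
    strength = np.sqrt(gx_u**2 + gy_u**2 + gx_v**2 + gy_v**2)
    return strength > thresh
```

A uniform translation produces zero motion-boundary strength everywhere, so the whole frame would be skipped; only points near motion discontinuities survive.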


European Conference on Computer Vision | 2012

Pairwise rotation invariant co-occurrence local binary pattern

Xianbiao Qi; Rong Xiao; Jun Guo; Lei Zhang

Designing effective features is a fundamental problem in computer vision. However, it is usually difficult to achieve a good tradeoff between discriminative power and robustness. Previous work has shown that spatial co-occurrence can boost the discriminative power of features. However, existing co-occurrence features pay little attention to robustness and hence suffer from sensitivity to geometric and photometric variations. In this work, we study the Transform Invariance (TI) of co-occurrence features. Concretely, we formally introduce a Pairwise Transform Invariance (PTI) principle, then propose a novel Pairwise Rotation Invariant Co-occurrence Local Binary Pattern (PRICoLBP) feature, and further extend it to incorporate multi-scale, multi-orientation, and multi-channel information. Unlike other LBP variants, PRICoLBP not only captures spatial context co-occurrence information effectively, but also possesses rotation invariance. We evaluate PRICoLBP comprehensively on nine benchmark data sets from five different perspectives: encoding strategy, rotation invariance, the number of templates, speed, and discriminative power compared to other LBP variants. Furthermore, we apply PRICoLBP to six different but related applications: texture, material, flower, leaf, food, and scene classification, and demonstrate that PRICoLBP is efficient and effective, with a well-balanced tradeoff between discriminative power and robustness.


Neurocomputing | 2016

Dynamic texture and scene classification by transferring deep image features

Xianbiao Qi; Chun-Guang Li; Guoying Zhao; Xiaopeng Hong; Matti Pietikäinen

Dynamic texture and scene classification are two fundamental problems in understanding natural video content. Extracting robust and effective features is a crucial step towards solving these problems. However, existing approaches suffer from sensitivity to varying illumination, viewpoint changes, or camera motion, and/or from a lack of spatial information. Inspired by the success of deep structures in image classification, we attempt to leverage a deep structure to extract features for dynamic texture and scene classification. To tackle the challenges in training a deep structure, we propose to transfer prior knowledge from the image domain to the video domain. More specifically, we apply a well-trained Convolutional Neural Network (ConvNet) as a feature extractor to extract mid-level features from each frame, and then form the video-level representation by concatenating the first- and second-order statistics over the mid-level features. We term this two-level feature extraction scheme the Transferred ConvNet Feature (TCoF). Moreover, we explore two different implementations of the TCoF scheme, i.e., the spatial TCoF and the temporal TCoF. In the spatial TCoF, the mean-removed frames are used as the inputs of the ConvNet, whereas in the temporal TCoF, the differences between adjacent frames are used as the inputs. We systematically evaluate the proposed spatial and temporal TCoF schemes on three benchmark data sets, DynTex, YUPENN, and Maryland, and demonstrate that the proposed approach yields superior performance.
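
The pooling step that turns per-frame mid-level features into one video-level vector can be sketched as follows, assuming the ConvNet activations are already extracted (the network itself, and the function name, are not from the paper):

```python
import numpy as np

def tcof_pool(frame_feats):
    """Video-level representation from per-frame mid-level features.
    frame_feats: (num_frames, dim) array, one row of ConvNet activations
    per frame. Returns the concatenation of the first-order statistic
    (mean) and second-order statistics (upper triangle of the covariance)."""
    mu = frame_feats.mean(axis=0)                          # first-order
    centered = frame_feats - mu
    cov = centered.T @ centered / max(len(frame_feats) - 1, 1)  # second-order
    iu = np.triu_indices(cov.shape[0])     # covariance is symmetric,
    return np.concatenate([mu, cov[iu]])   # so keep only the upper triangle
```

For the spatial TCoF the rows would come from mean-removed frames; for the temporal TCoF, from differences of adjacent frames.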


British Machine Vision Conference | 2013

Multi-scale Joint Encoding of Local Binary Patterns for Texture and Material Classification

Xianbiao Qi; Yu Qiao; Chun-Guang Li; Jun Guo

In current multi-scale LBP (MS-LBP) approaches to texture and material classification, each scale is encoded into a histogram individually. This strategy ignores the correlation between different scales and loses much discriminative information. In this paper, we propose a novel and effective multi-scale joint encoding of local binary patterns (MSJ-LBP) for texture and material classification. In MSJ-LBP, the joint encoding strategy captures the correlation between different scales and hence depicts richer local structures. In addition, the proposed MSJ-LBP is computationally simple and rotation invariant. Extensive experiments on four challenging databases (Outex_TC_00012, Brodatz, KTH-TIPS, KTH-TIPS2a) show that the proposed MSJ-LBP significantly outperforms the classical MS-LBP and achieves state-of-the-art performance.
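
Joint encoding across scales amounts to building one histogram indexed by the pair of codes at two radii, rather than two independent histograms. The sketch below coarsens each 8-bit code to its popcount to keep the joint histogram small; this simplification, and the function names, are illustrative rather than the paper's exact encoding:

```python
import numpy as np

# popcount lookup table: number of set bits in each 8-bit code
POPCOUNT = np.array([bin(c).count("1") for c in range(256)], dtype=np.uint8)

def lbp_codes(img, r):
    """8-neighbour LBP codes at radius r for the interior of a grayscale image."""
    h, w = img.shape
    center = img[r:h - r, r:w - r]
    offsets = [(-r, -r), (-r, 0), (-r, r), (0, r),
               (r, r), (r, 0), (r, -r), (0, -r)]
    code = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[r + dy:h - r + dy, r + dx:w - r + dx]
        code |= ((nb >= center).astype(np.uint8) << bit)
    return code

def msj_hist(img, r1=1, r2=2):
    """Joint histogram over (coarse code at r1, coarse code at r2), r2 > r1.
    Coarsening each code to its popcount (0..8) keeps the joint histogram
    at 9 * 9 bins while still tying the two scales together."""
    d = r2 - r1
    c1 = lbp_codes(img, r1)[d:-d, d:-d]    # crop to the common interior
    c2 = lbp_codes(img, r2)
    joint = POPCOUNT[c1].astype(int) * 9 + POPCOUNT[c2]
    return np.bincount(joint.ravel(), minlength=81)
```

Encoding each scale separately would give two 9-bin histograms; the joint version keeps which coarse code at r1 co-occurs with which at r2, which is the correlation the separate histograms throw away.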


Image and Vision Computing | 2015

Globally rotation invariant multi-scale co-occurrence local binary pattern

Xianbiao Qi; Linlin Shen; Guoying Zhao; Qingquan Li; Matti Pietikäinen

This paper proposes a globally rotation invariant multi-scale co-occurrence local binary pattern (MCLBP) feature for texture-relevant tasks. In MCLBP, we arrange all co-occurrence patterns into groups according to properties of the co-patterns, and design three encoding functions (Sum, Moment, and Fourier Pooling) to extract features from each group. MCLBP can effectively capture the correlation information between different scales and is also globally rotation invariant (GRI). This makes it substantially different from most existing LBP variants, including the LBP, the CLBP, and the MSJ-LBP, which achieve rotation invariance by locally rotation invariant (LRI) encoding. We fully evaluate the properties of MCLBP and compare it with several powerful features on five challenging databases covering texture, material, and medical cell classification. Extensive experiments demonstrate the effectiveness of MCLBP compared to state-of-the-art LBP variants including the CLBP and the LBPHF. Meanwhile, the dimension and computational cost of MCLBP are also lower than those of CLBP_S/M/C and LBPHF_S_M.


Neurocomputing | 2016

LOAD: Local orientation adaptive descriptor for texture and material classification

Xianbiao Qi; Guoying Zhao; Linlin Shen; Qingquan Li; Matti Pietikäinen

In this paper, we propose a novel local feature, called the Local Orientation Adaptive Descriptor (LOAD), to capture regional texture in an image. In LOAD, we define the point description on an Adaptive Coordinate System (ACS), adopt a binary sequence descriptor to capture the relationships between a point and its neighbors, and use a multi-scale strategy to enhance the discriminative power of the descriptor. The proposed LOAD enjoys not only strong discriminative power to capture texture information, but also strong robustness to illumination variation and image rotation. Extensive experiments on benchmark data sets for texture classification and real-world material recognition show that LOAD yields state-of-the-art performance. It is worth mentioning that we achieve superior classification accuracy on the Flickr Material Database using a single feature. Moreover, by combining LOAD with Convolutional Neural Networks (CNN), we obtain significantly better performance than either LOAD or the CNN alone. This result confirms that LOAD is complementary to learning-based features.


British Machine Vision Conference | 2013

Exploring Cross-Channel Texture Correlation for Color Texture Classification

Xianbiao Qi; Yu Qiao; Chun-Guang Li; Jun Guo

This paper proposes a novel approach to encoding cross-channel texture correlation for the color texture classification task. First, we quantitatively study the correlation between different color channels, using the Local Binary Pattern (LBP) as the texture descriptor and Shannon's information theory to measure the correlation. We find that the (R, G) channel pair exhibits stronger correlation than the (R, B) and (G, B) channel pairs. Second, we propose a novel descriptor to encode the cross-channel texture correlation. The proposed descriptor captures well the relative variance of texture patterns between different channels. Meanwhile, it is computationally efficient and robust to image rotation. We conduct extensive experiments on four challenging color texture databases to validate the effectiveness of the proposed approach. The experimental results show that the proposed approach significantly outperforms its most relevant counterpart (multichannel color LBP) and achieves state-of-the-art performance.
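
Measuring cross-channel correlation with Shannon's information theory amounts to computing the mutual information between two aligned code maps (e.g. LBP codes of the R and G channels of the same image). A generic sketch of that measurement, not the authors' implementation:

```python
import numpy as np

def mutual_information(a, b, bins=256):
    """Shannon mutual information (in bits) between two aligned code maps,
    e.g. LBP codes computed on the R and G channels of the same image."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins,
                                 range=[[0, bins], [0, bins]])
    pxy = joint / joint.sum()                    # joint distribution
    px = pxy.sum(axis=1, keepdims=True)          # marginals
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                 # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())
```

To reproduce the paper's observation one would compute LBP codes on each channel and compare MI(R, G) against MI(R, B) and MI(G, B).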


Sino-Foreign Interchange Conference on Intelligent Science and Intelligent Data Engineering | 2011

An evaluation on different graphs for semi-supervised learning

Chun-Guang Li; Xianbiao Qi; Jun Guo; Bo Xiao

Graph-based Semi-Supervised Learning (SSL) has been an active topic in machine learning for about a decade. How to construct the graph is the central concern in recent work, since an effective graph structure can significantly boost the final performance. In this paper, we first review several different graphs for graph-based SSL, and then conduct a series of experiments on benchmark data sets to give a comprehensive evaluation of the advantages and shortcomings of each. Experimental results show that: a) when data lie on independent subspaces and enough labeled data are available, the low-rank representation based method performs best; and b) in the majority of cases, the local sparse representation based method performs best, especially when labeled data are few.
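
A common baseline behind such comparisons is label propagation over an affinity graph: once a graph is built (by any of the constructions compared in the paper), labels diffuse from labeled to unlabeled nodes. The sketch below (Zhu and Ghahramani style iterative propagation, with hypothetical names) shows how the graph drives the final labels, independent of how the graph was constructed:

```python
import numpy as np

def label_propagation(W, y, n_iter=200):
    """Graph-based SSL by iterative propagation with clamping.
    W: (n, n) symmetric affinity matrix of the graph; y: length-n integer
    label array with -1 marking unlabeled nodes. Returns predicted labels."""
    classes = sorted({int(c) for c in y if c >= 0})
    F = np.zeros((len(y), len(classes)))
    clamp = np.zeros_like(F)
    for i, c in enumerate(y):
        if c >= 0:
            clamp[i, classes.index(int(c))] = 1.0
    labeled = y >= 0
    P = W / W.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
    F[labeled] = clamp[labeled]
    for _ in range(n_iter):
        F = P @ F                          # diffuse label mass over edges
        F[labeled] = clamp[labeled]        # clamp the labeled nodes
    return np.array([classes[j] for j in F.argmax(axis=1)])
```

The choice of W (kNN, sparse representation, low-rank representation, ...) is exactly what the paper's evaluation varies.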


Sino-Foreign Interchange Conference on Intelligent Science and Intelligent Data Engineering | 2012

Dimensionality reduction by low-rank embedding

Chun-Guang Li; Xianbiao Qi; Jun Guo

We consider the dimensionality reduction task under the scenario that data vectors lie on (or near) multiple independent linear subspaces. We propose a robust dimensionality reduction algorithm, named Low-Rank Embedding (LRE). In LRE, the affinity weights are calculated via low-rank representation, and the embedding is obtained by a spectral method. Owing to the affinity weights induced from the low-rank model, LRE can robustly reveal the subtle multiple-subspace structure. By virtue of the spectral method, LRE transforms this structure into multiple clusters in a low-dimensional Euclidean space, in which most ordinary algorithms can perform well. To demonstrate the advantages of the proposed LRE, we conducted comparative experiments on toy data sets and benchmark data sets. Experimental results confirm that LRE is superior to other algorithms.
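
The spectral step of LRE can be sketched as an ordinary Laplacian-eigenmap embedding; the low-rank representation that produces the affinity weights is the paper's contribution and is omitted here, so a plain affinity matrix stands in for it:

```python
import numpy as np

def spectral_embedding(W, dim):
    """Map graph nodes to R^dim via eigenvectors of the normalized graph
    Laplacian. W: (n, n) symmetric affinity matrix with positive degrees
    (in LRE it would come from a low-rank representation)."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt   # normalized Laplacian
    _, vecs = np.linalg.eigh(L)            # eigenvalues in ascending order
    return vecs[:, 1:dim + 1]              # skip the trivial eigenvector
```

Downstream algorithms (e.g. k-means) then operate on the embedded coordinates, where the multiple-subspace structure appears as separated clusters.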

Collaboration


Dive into Xianbiao Qi's collaborations.

Top Co-Authors

Chun-Guang Li

Beijing University of Posts and Telecommunications

Jun Guo

Beijing University of Posts and Telecommunications

Yu Qiao

Chinese Academy of Sciences
