Network


Latest external collaboration at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Xiaobin Zhu is active.

Publication


Featured research published by Xiaobin Zhu.


Pattern Recognition | 2014

Sparse representation for robust abnormality detection in crowded scenes

Xiaobin Zhu; Jing Liu; Jinqiao Wang; Changsheng Li; Hanqing Lu

In crowded scenes, extracted low-level features, such as optical flow or spatio-temporal interest points, are inevitably noisy and uncertain. In this paper, we propose a fully unsupervised non-negative sparse coding based approach for abnormal event detection in crowded scenes, which is specifically tailored to cope with feature noise and uncertainty. The abnormality of a query sample is decided by its sparse reconstruction cost over an automatically learned event dictionary, which forms a set of sparse coding bases. In our algorithm, we formulate the task of dictionary learning as a non-negative matrix factorization (NMF) problem with a sparsity constraint. We take the robust Earth Mover's Distance (EMD), instead of the traditional Euclidean distance, as the distance metric in the reconstruction cost function. To reduce the computational complexity of EMD, an approximate EMD, namely wavelet EMD, is introduced and incorporated into our approach without losing performance. In addition, combining wavelet EMD with our approach guarantees the convexity of the optimization in dictionary learning. To handle both local abnormality detection (LAD) and global abnormality detection, we adopt two different types of spatio-temporal bases. Experiments conducted on four publicly available datasets demonstrate the promising performance of our approach against state-of-the-art methods.
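As a rough illustration of the reconstruction-cost idea above, the sketch below scores a query feature by its non-negative least-squares residual over a pre-learned dictionary; the Euclidean residual stands in for the paper's wavelet-EMD metric, and the dictionary, dimensions, and function name are assumptions for the example, not the authors' implementation.

```python
# Minimal sketch (not the authors' implementation): non-negative sparse
# reconstruction cost as an abnormality score. The Euclidean residual
# stands in for the wavelet-EMD metric used in the paper; D is assumed to
# be a pre-learned non-negative event dictionary (features x atoms).
import numpy as np
from scipy.optimize import nnls

def abnormality_score(D, x):
    """Reconstruction cost of query feature x under dictionary D."""
    code, residual = nnls(D, x)        # non-negative least-squares code
    return residual                    # large residual => likely abnormal

# toy usage with a random dictionary and two query descriptors
rng = np.random.default_rng(0)
D = np.abs(rng.normal(size=(64, 16)))           # 64-dim features, 16 atoms
x_normal = D @ np.abs(rng.normal(size=16))      # lies in the dictionary's span
x_odd = np.abs(rng.normal(size=64)) * 5.0       # does not
print(abnormality_score(D, x_normal), abnormality_score(D, x_odd))
```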


Neurocomputing | 2015

Update vs. upgrade

Xiaoyu Zhang; Shupeng Wang; Xiaobin Zhu; Xiaochun Yun; Guangjun Wu; Yipeng Wang

This paper raises an important practical issue for active learning. The traditional active learning mechanism is based on the assumption that the number of classes is known in advance, so selective sampling is confined to a fixed model. However, as is the case in many applications, the model class is usually indeterminate and there is every chance that the hypothesis itself is inappropriate. To address this problem, we propose a novel indeterminate multi-class active learning algorithm, which comprehensively evaluates each instance based on both its value in refining the existing model and its potential to trigger model rectification. In this way, a balance is effectively achieved between model update and model upgrade. The advantage of the proposed algorithm is demonstrated by experiments on classification tasks with both synthetic and real-world datasets.
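The sketch below is one hedged way to read the update-vs-upgrade trade-off: it scores unlabeled instances by combining margin uncertainty under the current model with a low top-class probability taken as a hint of an unmodeled class. The weighting and both terms are assumptions for illustration, not the paper's actual criterion.

```python
# Minimal sketch (assumed criterion, not the paper's): score each unlabeled
# instance by (a) margin uncertainty under the current model ("update"
# value) and (b) a low maximum class probability, read as a hint that a
# new, unmodeled class may be present ("upgrade" potential).
import numpy as np

def query_scores(proba, alpha=0.5):
    """proba: (n_samples, n_known_classes) predicted probabilities."""
    part = np.sort(proba, axis=1)
    margin = part[:, -1] - part[:, -2]           # small margin => uncertain
    update_value = 1.0 - margin
    upgrade_potential = 1.0 - part[:, -1]        # low top prob => maybe new class
    return alpha * update_value + (1 - alpha) * upgrade_potential

proba = np.array([[0.60, 0.30, 0.10],
                  [0.34, 0.33, 0.33],
                  [0.90, 0.05, 0.05]])
print(np.argsort(-query_scores(proba)))          # indices to query first
```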


IEEE Transactions on Systems, Man, and Cybernetics | 2015

Human Age Estimation Based on Locality and Ordinal Information

Changsheng Li; Qingshan Liu; Weishan Dong; Xiaobin Zhu; Jing Liu; Hanqing Lu

In this paper, we propose a novel feature selection-based method for facial age estimation. Face aging is a typical temporal process, so facial images should exhibit certain ordinal patterns in the aging feature space. From a geometrical perspective, a facial image can usually be seen as sampled from a low-dimensional manifold embedded in the original high-dimensional feature space. Thus, we first measure the energy of each feature in preserving the underlying local structure information and the ordinal information of the facial images, respectively, and then learn a low-dimensional aging representation that maximally preserves both kinds of information. To further improve performance, we eliminate redundant local and ordinal information as much as possible by minimizing the nonlinear correlation and rank correlation among features. Finally, we formulate all these issues into a unified optimization problem, which is similar in form to linear discriminant analysis. Since it is expensive to collect labeled facial aging images in practice, we extend the proposed supervised method to a semi-supervised setting, including a semi-supervised feature selection method and a semi-supervised age prediction algorithm. Extensive experiments are conducted on the FACES dataset, the Images of Groups dataset, and the FG-NET aging dataset to show the power of the proposed algorithms compared to state-of-the-art methods.
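As a loose approximation of the locality-plus-ordinal idea (not the paper's unified optimization), the sketch below scores each feature by a Laplacian-score-style locality term on a k-NN graph and by Spearman rank correlation with age; the particular scoring formula, the k value, and the function name are assumptions.

```python
# Minimal sketch (an approximation, not the paper's optimization): rank
# each feature by how well it preserves local neighborhood structure
# (small weighted squared differences on a k-NN graph) and how well it
# carries ordinal age information (Spearman correlation with age labels).
import numpy as np
from scipy.stats import spearmanr

def feature_scores(X, ages, k=5):
    """X: (n_samples, n_features); ages: (n_samples,). Higher score = keep."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)                 # exclude self-neighbors
    knn = np.argsort(d2, axis=1)[:, :k]
    W = np.zeros((n, n))
    W[np.repeat(np.arange(n), k), knn.ravel()] = 1.0
    W = np.maximum(W, W.T)                       # symmetric k-NN adjacency
    scores = []
    for j in range(X.shape[1]):
        f = X[:, j]
        local = (W * (f[:, None] - f[None, :]) ** 2).sum() / (W.sum() * f.var() + 1e-12)
        rho, _ = spearmanr(f, ages)              # ordinal agreement with age
        scores.append(abs(rho) - local)
    return np.array(scores)
```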


Information Sciences | 2016

Boosted random contextual semantic space based representation for visual recognition

Chunjie Zhang; Zhe Xue; Xiaobin Zhu; Huanian Wang; Qingming Huang; Qi Tian

Highlights: We propose an image representation method using boosted random contextual semantic spaces (BRCSS). The proposed BRCSS alleviates the semantic gap by using semantic space based image representation. BRCSS uses the visual representation of images in an iterative way with random sampling and re-weighting. Experimental results demonstrate the effectiveness and efficiency of the proposed method.

Visual information has been widely used for image representation. Although proven very effective, visual representations lack explicit semantics, and how to generate a proper semantic space for image representation is still an open problem. To jointly model the visual and semantic representations of images, we propose a boosted random contextual semantic space based image representation method. Images are initially represented using local feature distribution histograms. The semantic space is generated by randomly selecting training images, and images are then mapped into this semantic space accordingly. Semantic context is explored to model the correlations between different semantics, which are then used for classification. The classification results are used to re-weight training images in a boosted way, and the re-weighted images are used to construct a new semantic space for classification. In this way, we are able to jointly consider the visual and semantic information of images. Image classification experiments on several public datasets show the effectiveness of the proposed method.
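A minimal sketch of the boosted random-semantic-space loop described above, with many assumptions: logistic regression stands in for whatever base classifiers the paper uses, the re-weighting rule is a simple doubling of misclassified images, and the function name brcss_like is hypothetical.

```python
# Minimal sketch (assumptions throughout, not the BRCSS implementation):
# build a semantic space from a randomly sampled, weighted subset of
# training images by fitting a classifier, represent every image by its
# vector of class probabilities, then re-weight misclassified training
# images and repeat. Assumes each sampled subset still contains all classes.
import numpy as np
from sklearn.linear_model import LogisticRegression

def brcss_like(X_tr, y_tr, X_te, rounds=3, sample_frac=0.5, seed=0):
    rng = np.random.default_rng(seed)
    w = np.full(len(y_tr), 1.0 / len(y_tr))
    sem_tr, sem_te = [], []
    for _ in range(rounds):
        idx = rng.choice(len(y_tr), int(sample_frac * len(y_tr)),
                         replace=False, p=w / w.sum())
        clf = LogisticRegression(max_iter=1000).fit(X_tr[idx], y_tr[idx])
        sem_tr.append(clf.predict_proba(X_tr))   # semantic-space coordinates
        sem_te.append(clf.predict_proba(X_te))
        wrong = clf.predict(X_tr) != y_tr
        w = w * np.where(wrong, 2.0, 1.0)        # boost hard training images
    return np.hstack(sem_tr), np.hstack(sem_te)  # final representations
```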


Science in China Series F: Information Sciences | 2015

Context-aware local abnormality detection in crowded scene

Xiaobin Zhu; Xin Jin; Xiaoyu Zhang; Changsheng Li; FuGang He; Lei Wang

In this paper, we propose a novel algorithm that jointly models motion and context information for detecting abnormal events in crowded scenes. In our algorithm, context pattern information, extracted by computing volume local binary patterns on three orthogonal planes (LBP-TOP) between local target areas and their surrounding areas, is explicitly taken into consideration for localizing abnormality. To capture motion information, a novel feature descriptor named the Multi-scale Histogram of Frequency Coefficients is explored by applying the Fourier transform to the extracted dense trajectories. For the detection of abnormality, the sparse reconstruction cost from a learned event dictionary is adopted to classify local events as normal or abnormal. Experiments conducted on three benchmark datasets show superior performance over many related state-of-the-art methods.
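The descriptor below is one possible reading of a multi-scale histogram of frequency coefficients over dense trajectories: FFT magnitudes of the trajectory's speed signal at several temporal subsampling scales, binned into normalized histograms. The scales, bin count, and use of speed rather than raw displacements are assumptions, not the authors' exact definition.

```python
# Minimal sketch (an interpretation, not the authors' exact descriptor):
# turn a dense trajectory's displacement sequence into a multi-scale
# histogram of Fourier magnitude coefficients.
import numpy as np

def multiscale_freq_hist(traj, scales=(1, 2, 4), bins=8):
    """traj: (T, 2) array of (x, y) points along one trajectory."""
    disp = np.diff(traj, axis=0)                  # frame-to-frame displacement
    speed = np.linalg.norm(disp, axis=1)
    feats = []
    for s in scales:
        sub = speed[::s]                          # temporal subsampling = one scale
        mag = np.abs(np.fft.rfft(sub))            # frequency coefficients
        hist, _ = np.histogram(mag, bins=bins)
        feats.append(hist / (hist.sum() + 1e-12)) # normalized histogram per scale
    return np.concatenate(feats)

traj = np.cumsum(np.random.default_rng(0).normal(size=(30, 2)), axis=0)
print(multiscale_freq_hist(traj).shape)           # (len(scales) * bins,)
```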


Neurocomputing | 2017

ListNet-based object proposals ranking

Yaqi Liu; Xiaoyu Zhang; Xiaobin Zhu; Qingxiao Guan; Xianfeng Zhao

In object detection, object proposal methods have been widely used to generate candidate regions that may contain objects. Object proposal based on superpixel merging is one class of such methods, and the merging strategies of superpixels have been extensively explored. However, the ranking of the generated candidate proposals still remains to be studied further. In this paper, we formulate the ranking of object proposals as a learning-to-rank problem and propose a novel object proposal ranking method based on ListNet. In the proposed method, Selective Search, one of the state-of-the-art object proposal methods based on superpixel merging, is adopted to generate the candidate proposals. During the superpixel merging process, five discriminative objectness features are extracted from the superpixel sets and the corresponding bounding boxes. Then, a linear neural network is learned based on ListNet to weight each feature. Consequently, objectness scores can be computed for the final ranking of candidate proposals. Extensive experiments demonstrate the effectiveness and robustness of the proposed method.
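The ListNet part can be made concrete with the standard top-one-probability cross-entropy loss over a linear scorer, sketched below; the per-image proposal lists, the choice of relevance signal (for example IoU with ground truth), and the plain gradient-descent training loop are assumptions for illustration rather than the paper's training code.

```python
# Minimal sketch (assumed setup, not the paper's training code): a linear
# scorer trained with the ListNet top-one cross-entropy loss, where each
# training "list" is the set of proposals from one image and the relevance
# could be, e.g., IoU with the ground-truth box.
import numpy as np

def listnet_grad(w, feats, relevance):
    """feats: (n_proposals, n_feats); relevance: (n_proposals,)."""
    scores = feats @ w
    p_model = np.exp(scores - scores.max()); p_model /= p_model.sum()
    p_true = np.exp(relevance - relevance.max()); p_true /= p_true.sum()
    loss = -(p_true * np.log(p_model + 1e-12)).sum()
    grad = feats.T @ (p_model - p_true)          # d(cross-entropy)/dw
    return loss, grad

def train(lists, n_feats, lr=0.1, epochs=50):
    w = np.zeros(n_feats)
    for _ in range(epochs):
        for feats, rel in lists:                 # one (feats, relevance) list per image
            _, g = listnet_grad(w, feats, rel)
            w -= lr * g
    return w                                     # proposals ranked by feats @ w
```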


Multimedia Tools and Applications | 2016

Image classification using label constrained sparse coding

Ruijun Liu; Yi Chen; Xiaobin Zhu; Kun Hou

Sparse coding has been widely used for feature encoding in recent years. However, sparse coding ignores the similarity between encoded parameters, as well as the label information indicating which class a local feature was extracted from. To solve these problems, in this paper we propose a novel feature encoding method called label constrained sparse coding (LCSC) for visual representation. The visual similarities between local features are considered jointly with the corresponding label information of the local features. This is achieved by combining label constraints with the encoding of local features. In this way, we can ensure that similar local features with the same label are encoded with similar parameters, while local features with different labels are encoded with dissimilar parameters to increase the discriminative power of the encoded parameters. Besides, instead of optimizing the coding parameters of each local feature separately, we jointly encode the local features within each sub-region in a spatial pyramid manner to combine the spatial and contextual information of local features. We apply this label constrained sparse coding technique to classification tasks on several public image datasets to evaluate its effectiveness. The experimental results show the effectiveness of the proposed method.
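One simplified stand-in for a label constraint (not the exact LCSC objective) is to add a quadratic term that pulls a feature's sparse code toward the mean code of its class, solved here with a basic ISTA loop; the penalty weights, iteration count, and mean-code input are assumptions.

```python
# Minimal sketch (a simplified stand-in, not the LCSC formulation in the
# paper): ISTA-style sparse coding with an extra quadratic term that pulls
# the code of a local feature toward the mean code of its class, so that
# same-label features receive similar codes.
import numpy as np

def label_constrained_code(x, D, mean_code, lam=0.1, beta=0.5, iters=200):
    """x: (d,) feature; D: (d, k) dictionary; mean_code: (k,) class-mean code."""
    L = np.linalg.norm(D, 2) ** 2 + beta         # Lipschitz constant of the smooth part
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        grad = D.T @ (D @ a - x) + beta * (a - mean_code)
        z = a - grad / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a
```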


Neurocomputing | 2015

Joint image representation and classification in random semantic spaces

Chunjie Zhang; Xiaobin Zhu; Liang Li; Yifan Zhang; Jing Liu; Qingming Huang; Qi Tian

Local feature based image representation has been widely used for image classification in recent years. Although this strategy has proven very effective, the image representation and classification processes are relatively independent, which means that classification performance may be hindered by the efficiency of the representation. To jointly consider image representation and classification in a unified framework, in this paper we propose a novel algorithm that combines image representation and classification in random semantic spaces. First, we encode local features with the sparse coding technique and use the encoding parameters as the raw image representation. These image representations are then randomly selected to generate the random semantic spaces, and images are mapped into these random semantic spaces by classifier training. The mapped semantic representation is then used as the final image representation. In this way, we are able to jointly consider image representation and classification in order to achieve better performance. We evaluate the proposed method on several public image datasets, and the experimental results prove its effectiveness.

Highlights: We jointly consider image representation and classification in a unified framework. Images are randomly selected for semantic space construction by training classifiers. We use random semantic spaces for image representation and class prediction. We achieve state-of-the-art performance on several public datasets.
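A small sketch of the random-semantic-space construction, under stated assumptions: classifiers trained on random subsets of training images provide probability outputs that serve as semantic coordinates, and a final classifier is trained on the concatenated coordinates. Logistic regression and the sampling fraction are illustrative choices, not the paper's.

```python
# Minimal sketch (illustrative only): map images into random semantic
# spaces by training classifiers on randomly chosen training images and
# using their probability outputs as the new representation; the final
# classifier is then trained on that representation.
import numpy as np
from sklearn.linear_model import LogisticRegression

def random_semantic_features(X_tr, y_tr, X, n_spaces=5, frac=0.6, seed=0):
    rng = np.random.default_rng(seed)
    parts = []
    for _ in range(n_spaces):
        idx = rng.choice(len(y_tr), int(frac * len(y_tr)), replace=False)
        clf = LogisticRegression(max_iter=1000).fit(X_tr[idx], y_tr[idx])
        parts.append(clf.predict_proba(X))       # coordinates in one semantic space
    return np.hstack(parts)

# usage sketch: train the final classifier on the semantic representation
# sem_tr = random_semantic_features(X_tr, y_tr, X_tr)
# sem_te = random_semantic_features(X_tr, y_tr, X_te)
# final = LogisticRegression(max_iter=1000).fit(sem_tr, y_tr)
```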


Machine Vision and Applications | 2014

Key observation selection-based effective video synopsis for camera network

Xiaobin Zhu; Jing Liu; Jinqiao Wang; Hanqing Lu

Nowadays, a tremendous amount of video is captured continuously by a growing number of video cameras distributed around the world. Since raw videos contain abundant needless information, browsing and retrieving them is inefficient and time consuming. Video synopsis is an effective way to browse and index such video by producing a short video representation while keeping the essential activities of the original video. However, video synopsis for a single camera is limited in its view scope, while understanding and monitoring the overall activity of large scenarios is valuable and demanding. To solve these issues, we propose a novel video synopsis algorithm for partially overlapping camera networks. Our main contributions reside in three aspects. First, our algorithm can generate video synopses for large scenarios, which facilitates understanding overall activities. Second, to generate the overall activity, we adopt a novel unsupervised graph matching algorithm to associate trajectories across cameras. Third, a novel multiple kernel similarity is adopted to select key observations and eliminate content redundancy in the video synopsis. We demonstrate the effectiveness of our approach on real surveillance videos captured by our camera network.
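The key-observation selection step might look roughly like the sketch below: a multiple kernel similarity built as a weighted sum of per-cue RBF kernels, with a greedy pass that keeps an observation only if it is not too similar to those already kept. The RBF form, the cue list, and the redundancy threshold are assumptions, not the paper's formulation.

```python
# Minimal sketch (a generic stand-in, not the paper's algorithm): combine
# several per-cue kernels into a multiple-kernel similarity and greedily
# keep only observations that are not too similar to ones already kept,
# removing redundant content before building the synopsis.
import numpy as np

def rbf(A, B, gamma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def select_key_observations(cues, kernel_weights, gammas, thresh=0.8):
    """cues: list of (n_obs, d_i) arrays, one per cue (appearance, motion, ...)."""
    n = cues[0].shape[0]
    # weighted sum of per-cue kernels; weights summing to 1 keep S in [0, 1]
    S = sum(w * rbf(C, C, g) for w, C, g in zip(kernel_weights, cues, gammas))
    keep = []
    for i in range(n):
        if not keep or S[i, keep].max() < thresh:  # redundant if too similar
            keep.append(i)
    return keep
```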


Neurocomputing | 2016

Weighted hierarchical geographic information description model for social relation estimation

Kai Zhang; Xiaochun Yun; Xiaoyu Zhang; Xiaobin Zhu; Chao Li; Shupeng Wang

Social relation estimation has been attracting researchers' attention worldwide, and the rapid development of LBSNs (Location-Based Social Networks) provides researchers with an additional resource for estimating users' social relations. Previous works have performed social relation estimation with spatial information extracted from LBSNs, while ignoring or paying little attention to the properties of locations. In this paper, a hierarchical grid based method is proposed to define location IDs, and location properties are fully exploited when extracting features, so that users' spatial information is used more thoroughly. Besides, in order to train a robust estimation model, we design the model based on semi-supervised learning. Careful consideration of the above issues ultimately leads to a general framework that outperforms competitors, and experiments confirm its effectiveness.
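A hierarchical grid location ID could be assigned quadtree-style, as in the sketch below, where each additional level halves the bounding box in both latitude and longitude so that coarser prefixes describe a location at coarser granularity; the number of levels and the exact encoding are assumptions rather than the paper's scheme.

```python
# Minimal sketch (an assumed scheme, not necessarily the paper's): encode a
# (lat, lon) point as a hierarchical grid ID, quadtree-style; each extra
# level halves the cell in both directions, and a coarser prefix of the ID
# identifies the enclosing coarser cell.
def hierarchical_location_id(lat, lon, levels=8):
    lat_lo, lat_hi, lon_lo, lon_hi = -90.0, 90.0, -180.0, 180.0
    digits = []
    for _ in range(levels):
        lat_mid, lon_mid = (lat_lo + lat_hi) / 2, (lon_lo + lon_hi) / 2
        q = (2 if lat >= lat_mid else 0) + (1 if lon >= lon_mid else 0)
        digits.append(str(q))                     # one quadrant digit per level
        lat_lo, lat_hi = (lat_mid, lat_hi) if lat >= lat_mid else (lat_lo, lat_mid)
        lon_lo, lon_hi = (lon_mid, lon_hi) if lon >= lon_mid else (lon_lo, lon_mid)
    return "".join(digits)

print(hierarchical_location_id(39.9042, 116.4074))  # 8-level ID for a point in Beijing
```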

Collaboration


Dive into Xiaobin Zhu's collaboration.

Top Co-Authors

Xiaoyu Zhang
Chinese Academy of Sciences

Haisheng Li
Beijing Technology and Business University

Jing Liu
Chinese Academy of Sciences

Peng Li
China University of Petroleum

Shupeng Wang
Chinese Academy of Sciences

Chunjie Zhang
Chinese Academy of Sciences

Hanqing Lu
Chinese Academy of Sciences

Qian Wang
Beijing Technology and Business University

Qiang Cai
Beijing Technology and Business University