
Publication


Featured research published by Xiuqing Wu.


IEEE Transactions on Image Processing | 2010

Active Reranking for Web Image Search

Xinmei Tian; Dacheng Tao; Xian-Sheng Hua; Xiuqing Wu

Image search reranking methods usually fail to capture the user's intention when the query term is ambiguous. Reranking with user interactions, or active reranking, is therefore needed to effectively improve search performance. The essential problem in active reranking is how to target the user's intention. To this end, this paper presents a structural-information-based sample selection strategy to reduce the user's labeling effort. Furthermore, to localize the user's intention in the visual feature space, a novel local-global discriminative dimension reduction algorithm is proposed. In this algorithm, a submanifold is learned by transferring the local geometry and the discriminative information from the labeled images to the whole (global) image database. Experiments on both synthetic datasets and a real Web image search dataset demonstrate the effectiveness of the proposed active reranking scheme, including both the structural-information-based active sample selection strategy and the local-global discriminative dimension reduction algorithm.
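The paper's sample selection is structural-information-based; as a generic illustration of the active-selection idea only, a plain uncertainty-sampling stand-in (not the authors' method; the scores below are made up) can be sketched as:

```python
import numpy as np

def uncertainty_sample(scores, k=2):
    """Generic active selection: return the k samples whose relevance
    scores lie closest to the decision boundary at 0, i.e. the images
    whose labels would be most informative to ask the user about."""
    return np.argsort(np.abs(scores))[:k]

# Hypothetical relevance scores for five images; indices 2 and 3 sit
# nearest the boundary and are therefore the most ambiguous.
scores = np.array([0.9, -0.8, 0.05, -0.1, 0.7])
picked = uncertainty_sample(scores)
print(picked)
```

A structural strategy would additionally weigh how each candidate relates to the geometry of the unlabeled pool, rather than scoring samples in isolation.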


IEEE International Conference on Image Processing (ICIP) | 2006

Automatic Video Genre Categorization using Hierarchical SVM

Xun Yuan; Wei Lai; Tao Mei; Xian-Sheng Hua; Xiuqing Wu; Shipeng Li

This paper presents an automatic video genre categorization scheme based on a hierarchical ontology of video genres. Ten computable spatio-temporal features are extracted to distinguish the genres using a hierarchical support vector machine (SVM) classifier built by cross-validation, which consists of a series of SVM classifiers united in a binary-tree form. As the ordering and genre-partition strategy of the SVM classifier series affect the overall performance of the united classifier, two optimal SVM binary trees, local and global, are constructed to find the best categorization orders, i.e., the best tree structure, for the genre ontology. Experimental results show that the proposed scheme outperforms a C4.5 decision tree, the typical 1-vs-1 SVM scheme, and a hierarchical SVM built by K-means.


IEEE Transactions on Systems, Man, and Cybernetics, Part B | 2009

Correlative Linear Neighborhood Propagation for Video Annotation

Jinhui Tang; Xian-Sheng Hua; Meng Wang; Zhiwei Gu; Guo-Jun Qi; Xiuqing Wu

Recently, graph-based semisupervised learning methods have been widely applied in multimedia research. However, for video semantic annotation in a multilabel setting, these methods neglect an important characteristic of video data: semantic concepts appear correlatively and interact naturally with each other rather than existing in isolation. In this paper, we incorporate this semantic correlation into graph-based semisupervised learning and propose a novel method named correlative linear neighborhood propagation to improve annotation performance. Experiments conducted on the Text REtrieval Conference VIDeo retrieval evaluation (TRECVID) data set demonstrate its effectiveness and efficiency.
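As background for the graph-based propagation that CLNP extends, a minimal per-concept label propagation baseline can be sketched as follows (a generic sketch, not the paper's algorithm; the toy graph, iteration count, and clamping scheme are assumptions, and CLNP's concept-correlation modeling is deliberately omitted):

```python
import numpy as np

def label_propagation(W, Y, labeled, iters=100):
    """Per-concept label propagation: repeatedly average each sample's
    concept scores over its graph neighbors, clamping labeled samples.
    W: (n, n) affinity matrix; Y: (n, c) initial label matrix, one
    column per concept; labeled: boolean mask of labeled samples."""
    P = W / W.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
    F = Y.astype(float).copy()
    for _ in range(iters):
        F = P @ F                          # propagate scores to neighbors
        F[labeled] = Y[labeled]            # clamp the known labels
    return F

# Toy graph: two chains of three nodes joined by a weak (0.1) bridge,
# with one labeled node per chain and one concept per chain.
W = np.array([
    [0, 1, 0, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [0, 1, 0, .1, 0, 0],
    [0, 0, .1, 0, 1, 0],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 0, 1, 0]], float)
Y = np.zeros((6, 2))
Y[0, 0] = 1.0   # node 0 labeled with concept 0
Y[5, 1] = 1.0   # node 5 labeled with concept 1
labeled = np.array([True, False, False, False, False, True])
F = label_propagation(W, Y, labeled)
```

Each concept is propagated independently here; CLNP's contribution is precisely to couple the columns of F through the correlations between concepts.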


IEEE International Conference on Multimedia and Expo (ICME) | 2008

Learning to video search rerank via pseudo preference feedback

Yuan Liu; Tao Mei; Xian-Sheng Hua; Jinhui Tang; Xiuqing Wu; Shipeng Li

Conventional approaches to video search reranking care only whether search results are relevant or irrelevant to the given query, while the ranking order of these results, which indicates the level of relevance or typicality, is usually neglected. This paper presents a novel learning-based approach to video search reranking that exploits this ranking order information. The proposed approach, called pseudo preference feedback (PPF), automatically discovers an optimal set of pseudo preference pairs from the initial ranked list and learns a reranking model with ranking support vector machines (ranking SVM) based on the selected pairs. PPF can be applied to any reranking task, such as video search and concept detection. We conducted comprehensive experiments on both automatic search and concept detection tasks over the TRECVID 2006-2007 benchmarks, and show that PPF gains significant improvements over the baselines.
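The PPF idea can be sketched under simplifying assumptions: the pair selection below is a naive top-vs-bottom heuristic rather than the paper's optimal pair discovery, and the ranking SVM is replaced by a plain subgradient trainer on the pairwise hinge loss; all data and parameters are made up for illustration.

```python
import numpy as np

def select_pseudo_pairs(scores, top_k=3, bottom_k=3):
    """Simplified PPF-style pair selection: treat the top-k of the
    initial ranked list as pseudo-relevant and the bottom-k as
    pseudo-irrelevant, and form preference pairs (top, bottom)."""
    order = np.argsort(-scores)
    top, bottom = order[:top_k], order[-bottom_k:]
    return [(i, j) for i in top for j in bottom]

def rank_svm(X, pairs, lr=0.1, lam=0.01, epochs=200):
    """Linear ranking SVM trained by subgradient descent on the
    pairwise hinge loss max(0, 1 - w.(x_i - x_j)), i preferred to j."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i, j in pairs:
            d = X[i] - X[j]
            if w @ d < 1.0:            # margin violated: step toward d
                w += lr * (d - lam * w)
            else:                       # only shrink via regularization
                w -= lr * lam * w
    return w

rng = np.random.default_rng(0)
# Toy data: true relevance grows with the first feature; the initial
# (text-based) scores observe it with noise.
X = rng.normal(size=(20, 2))
init_scores = X[:, 0] + 0.1 * rng.normal(size=20)
w = rank_svm(X, select_pseudo_pairs(init_scores))
reranked = X @ w   # new scores; higher should mean more relevant
```

Because every selected pair prefers a high-first-feature sample over a low one, the learned weight vector ends up aligned with the relevant feature direction.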


ACM Multimedia | 2007

Structure-sensitive manifold ranking for video concept detection

Jinhui Tang; Xian-Sheng Hua; Guo-Jun Qi; Meng Wang; Tao Mei; Xiuqing Wu

Pairwise similarity of samples is an essential factor in graph-propagation-based semi-supervised learning methods. Usually it is estimated from Euclidean distance. However, the structural assumption, a basic assumption in these methods, is not taken into consideration in the usual pairwise similarity measure. In this paper, we propose a novel graph-based learning approach, named Structure-Sensitive Manifold Ranking (SSMR), based on a structure-sensitive similarity measure. Instead of using distance alone, SSMR takes local distribution differences into account to measure pairwise similarity more accurately. Furthermore, we show that SSMR can also be derived from a partial-differential-equation-based anisotropic diffusion. Experiments conducted on the TRECVID dataset show that this approach significantly outperforms existing graph-based semi-supervised learning methods for video semantic concept detection.


ACM Multimedia | 2006

Manifold-ranking based video concept detection on large database and feature pool

Xun Yuan; Xian-Sheng Hua; Meng Wang; Xiuqing Wu

In this paper we discuss a typical case in video concept detection: learning a target concept from only a small number of positive samples. A novel manifold-ranking-based scheme is proposed, which consists of three major components: feature pool construction, pre-filtering, and manifold ranking. First, as the effective features vary greatly across concepts, a large feature pool is constructed, from which the most effective features can be selected automatically or semi-automatically. Second, to tackle the large computational cost of the subsequent manifold-ranking process on a large video database, a pre-filtering step filters out the majority of irrelevant samples while retaining the most relevant ones. Last, the manifold-ranking algorithm explores the relationships among the remaining samples based on the selected features. This scheme is extensible and flexible in terms of adding new features to the feature pool, introducing human interaction in feature selection, and defining new concepts.
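The manifold-ranking component follows the well-known iteration f ← αSf + (1 − α)y over a symmetrically normalized similarity graph. A self-contained NumPy sketch on toy data (illustrative only; the kernel width, α, and data are assumptions, and the paper's feature selection and pre-filtering stages are omitted):

```python
import numpy as np

def manifold_ranking(X, y, sigma=1.0, alpha=0.9, iters=100):
    """Classic manifold ranking: propagate query scores y over a
    similarity graph built from feature vectors X (rows = samples)."""
    # Affinity matrix from a Gaussian kernel, zero diagonal.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Symmetric normalization S = D^{-1/2} W D^{-1/2}.
    d = W.sum(1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    # Iterate f <- alpha * S f + (1 - alpha) * y.
    f = y.astype(float).copy()
    for _ in range(iters):
        f = alpha * S @ f + (1 - alpha) * y
    return f

# Toy example: two clusters, one labeled positive sample in the first.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
y = np.array([1.0, 0, 0, 0, 0, 0])
ranking_scores = manifold_ranking(X, y)
```

The single positive sample lifts the scores of its whole cluster, which is exactly why the scheme suits concepts with few positive examples; the pre-filtering step exists because the affinity matrix above is quadratic in the database size.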


IEEE Transactions on Multimedia | 2008

Video Annotation Based on Kernel Linear Neighborhood Propagation

Jinhui Tang; Xian-Sheng Hua; Guo-Jun Qi; Yan Song; Xiuqing Wu

The insufficiency of labeled training data for representing the distribution of the entire dataset is a major obstacle in automatic semantic annotation of large-scale video databases. Semi-supervised learning algorithms, which attempt to learn from both labeled and unlabeled data, are promising for solving this problem. In this paper, a novel graph-based semi-supervised learning method named kernel linear neighborhood propagation (KLNP) is proposed and applied to video annotation. This approach combines the consistency assumption, the basic assumption in semi-supervised learning, with the locally linear embedding (LLE) method in a nonlinear kernel-mapped space. KLNP improves the recently proposed linear neighborhood propagation (LNP) method by tackling the limitation of its local linear assumption on the distribution of semantics. Experiments conducted on the TRECVID data set demonstrate that this approach outperforms other popular graph-based semi-supervised learning methods for video semantic annotation.


IEEE Transactions on Multimedia | 2011

Bayesian Visual Reranking

Xinmei Tian; Linjun Yang; Jingdong Wang; Xiuqing Wu; Xian-Sheng Hua

Visual reranking has been proven effective for refining text-based video and image search results. It uses visual information to recover the "true" ranking list from the noisy one generated by text-based search, incorporating both textual and visual information. In this paper, we model the textual and visual information from a probabilistic perspective and formulate visual reranking as an optimization problem in the Bayesian framework, termed Bayesian visual reranking. In this method, the textual information is modeled as a likelihood that reflects the disagreement between reranked results and text-based search results, called the ranking distance. The visual information is modeled as a conditional prior that indicates the consistency of ranking scores among visually similar samples, called visual consistency. Bayesian visual reranking derives the best reranking results by maximizing visual consistency while minimizing ranking distance. To model the ranking distance more precisely, we propose a novel pair-wise method that measures the ranking distance based on disagreement in pair-wise orders. For visual consistency, we study three different regularizers to find the best way to model it. We conduct extensive experiments on both video and image search datasets. Experimental results demonstrate the effectiveness of the proposed Bayesian visual reranking.
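As a simplified illustration of this trade-off, a point-wise variant (the paper argues for a pair-wise ranking distance, which this sketch does not implement) seeks new scores r that are smooth over the visual-similarity graph while staying close to the text-based scores, and admits a closed-form solution; the toy affinity matrix and the weight c are assumptions.

```python
import numpy as np

def pointwise_visual_rerank(W, r_bar, c=1.0):
    """Point-wise visual reranking sketch: minimize
    r^T L r + c * ||r - r_bar||^2, trading visual consistency (graph
    smoothness, L = graph Laplacian of affinity W) against ranking
    distance to the text-based scores r_bar.  Closed-form solution."""
    L = np.diag(W.sum(axis=1)) - W
    return np.linalg.solve(L + c * np.eye(len(r_bar)), c * r_bar)

# Toy case: samples 0-2 are mutually visually similar, 3-5 form a
# second cluster; no cross-cluster similarity.
W = np.zeros((6, 6))
W[:3, :3] = 1.0
W[3:, 3:] = 1.0
np.fill_diagonal(W, 0.0)
# The text-based scores wrongly demote sample 2, which is visually in
# the high-scoring cluster.
r_bar = np.array([1.0, 0.9, 0.1, 0.2, 0.1, 0.0])
r = pointwise_visual_rerank(W, r_bar)
```

Visual consistency pulls sample 2 back up toward its high-scoring cluster while the other scores move only slightly, which is the qualitative behavior the Bayesian formulation is designed to produce.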


Pattern Recognition | 2011

A transductive multi-label learning approach for video concept detection

Jingdong Wang; Yinghai Zhao; Xiuqing Wu; Xian-Sheng Hua

In this paper, we address two important issues in video concept detection: the insufficiency of labeled videos and the multi-label issue. Most existing solutions handle the two issues separately. We propose an integrated approach that handles them together, presenting an effective transductive multi-label classification approach that simultaneously models the labeling consistency between visually similar videos and the multi-label interdependence of each video. We compare the proposed approach with several representative transductive and supervised multi-label classification approaches on the video concept detection task over the widely used TRECVID data set. The comparative results demonstrate the superiority of the proposed approach.


ACM International Conference on Multimedia Information Retrieval (MIR) | 2008

Transductive multi-label learning for video concept detection

Jingdong Wang; Yinghai Zhao; Xiuqing Wu; Xian-Sheng Hua

Transductive video concept detection is an effective way to handle the lack of sufficient labeled videos. However, another issue, multi-label interdependence, is not adequately addressed by existing transductive methods. Most solutions simply apply a transductive single-label approach to each concept separately, ignoring the relations between concepts, or merely impose a smoothness assumption over the multiple labels of each video, without truly exploring the interdependence between concepts. On the other hand, semi-supervised extensions of supervised multi-label classifiers, such as correlative multi-label support vector machines, are usually intractable and hence impractical due to their prohibitive computational cost. In this paper, we propose an effective transductive multi-label classification approach that simultaneously models the labeling consistency between visually similar videos and the multi-label interdependence of each video in an integrated framework. We compare the proposed approach with several representative transductive single-label and supervised multi-label classification approaches on the video concept detection task over the widely used TRECVID data set. The comparative results demonstrate the superiority of the proposed approach.

Collaboration


Dive into Xiuqing Wu's collaborations.

Top Co-Authors

Jinhui Tang
Nanjing University of Science and Technology

Guo-Jun Qi
University of Central Florida

Yuan Liu
University of Science and Technology of China

Xinmei Tian
University of Science and Technology of China

Meng Wang
University of Science and Technology of China

Yan Song
University of Science and Technology of China