Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Xinmei Tian is active.

Publication


Featured research published by Xinmei Tian.


ACM Transactions on Multimedia Computing, Communications, and Applications | 2012

Sparse transfer learning for interactive video search reranking

Xinmei Tian; Dacheng Tao; Yong Rui

Visual reranking is effective in improving the performance of text-based video search. However, existing reranking algorithms can achieve only limited improvement because of the well-known semantic gap between low-level visual features and high-level semantic concepts. In this article, we adopt interactive video search reranking to bridge the semantic gap by introducing users' labeling effort. We propose a novel dimension reduction tool, termed sparse transfer learning (STL), to effectively and efficiently encode users' labeling information. STL is particularly designed for interactive video search reranking. Technically, it (a) considers the pair-wise discriminative information to maximally separate labeled query-relevant samples from labeled query-irrelevant ones, (b) achieves a sparse representation for the subspace that encodes users' intention by applying the elastic net penalty, and (c) propagates users' labeling information from labeled samples to unlabeled samples by using knowledge of the data distribution. We conducted extensive experiments on the TRECVID 2005, 2006, and 2007 benchmark datasets and compared STL with popular dimension reduction algorithms. We report superior performance using the proposed STL-based interactive video search reranking.
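A minimal sketch of the elastic-net idea behind STL: learn a sparse direction that separates labeled query-relevant samples from labeled query-irrelevant ones. This is an illustrative simplification using scikit-learn, not the paper's exact STL formulation (which also propagates labels to unlabeled samples via the data distribution); all data here is synthetic.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
X_rel = rng.normal(loc=1.0, size=(20, 50))    # labeled query-relevant features (synthetic)
X_irr = rng.normal(loc=-1.0, size=(20, 50))   # labeled query-irrelevant features (synthetic)

X = np.vstack([X_rel, X_irr])
y = np.hstack([np.ones(20), -np.ones(20)])

# The elastic net penalty (l1 + l2) yields a sparse yet stable direction.
enet = ElasticNet(alpha=0.1, l1_ratio=0.5)
enet.fit(X, y)

w = enet.coef_                       # sparse projection direction
print("non-zero dims:", np.count_nonzero(w), "of", w.size)
scores = X @ w                       # rerank by projecting onto w
```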


IEEE Transactions on Image Processing | 2010

Active Reranking for Web Image Search

Xinmei Tian; Dacheng Tao; Xian-Sheng Hua; Xiuqing Wu

Image search reranking methods usually fail to capture the user's intention when the query term is ambiguous. Therefore, reranking with user interactions, or active reranking, is highly desirable for effectively improving search performance. The essential problem in active reranking is how to target the user's intention. To achieve this goal, this paper presents a structural-information-based sample selection strategy to reduce the user's labeling effort. Furthermore, to localize the user's intention in the visual feature space, a novel local-global discriminative dimension reduction algorithm is proposed. In this algorithm, a submanifold is learned by transferring the local geometry and the discriminative information from the labeled images to the whole (global) image database. Experiments on both synthetic datasets and a real Web image search dataset demonstrate the effectiveness of the proposed active reranking scheme, including both the structural-information-based active sample selection strategy and the local-global discriminative dimension reduction algorithm.
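A hedged sketch of the active reranking loop. The paper's selection strategy is based on structural information; the simpler uncertainty criterion below (scores nearest the decision boundary) is a stand-in used purely to illustrate the select-then-label interaction.

```python
import numpy as np

def select_for_labeling(scores: np.ndarray, k: int = 5) -> np.ndarray:
    """Pick the k most ambiguous images (scores closest to 0) to ask the user about."""
    return np.argsort(np.abs(scores))[:k]

# Synthetic current relevance scores for 100 retrieved images.
scores = np.random.default_rng(1).uniform(-1, 1, size=100)
to_label = select_for_labeling(scores, k=5)
print("ask the user to label images:", to_label)
```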


IEEE Transactions on Multimedia | 2015

Query-Dependent Aesthetic Model With Deep Learning for Photo Quality Assessment

Xinmei Tian; Zhe Dong; Kuiyuan Yang; Tao Mei

The automatic assessment of photo quality from an aesthetic perspective is a very challenging problem. Most existing research has predominantly focused on learning a universal aesthetic model based on hand-crafted visual descriptors. However, this research paradigm can achieve only limited success because (1) such hand-crafted descriptors cannot preserve abstract aesthetic properties well, and (2) such a universal model cannot always capture the full diversity of visual content. To address these challenges, we propose in this paper a novel query-dependent aesthetic model with deep learning for photo quality assessment. In our method, deep aesthetic abstractions are discovered from massive images, whereas the aesthetic assessment model is learned in a query-dependent manner. Our work addresses the first problem by learning mid-level aesthetic feature abstractions via powerful deep convolutional neural networks to automatically capture the underlying aesthetic characteristics of the massive training images. Regarding the second problem, because photographers tend to employ different rules of photography when capturing different images, the aesthetic model should also be query-dependent. Specifically, given an image to be assessed, we first identify which aesthetic model should be applied for this particular image. Then, we build a unique aesthetic model of this type to assess its aesthetic quality. We conducted extensive experiments on two large-scale datasets and demonstrated that the proposed query-dependent model equipped with learned deep aesthetic abstractions significantly and consistently outperforms state-of-the-art hand-crafted-feature-based and universal-model-based methods.
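A minimal sketch of the query-dependent idea: partition training images into groups (standing in for query types), fit one aesthetic model per group, and route a test image to its group's model. The clustering, classifier choice, and random features below are placeholders, not the paper's exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
feats = rng.normal(size=(500, 128))          # stand-in for deep aesthetic features
labels = rng.integers(0, 2, size=500)        # 1 = high quality, 0 = low quality (synthetic)

# Group images into 5 "query types" by feature similarity.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(feats)
models = {}
for c in range(5):
    mask = kmeans.labels_ == c
    models[c] = LogisticRegression(max_iter=1000).fit(feats[mask], labels[mask])

def assess(x: np.ndarray) -> float:
    c = int(kmeans.predict(x[None])[0])      # identify which aesthetic model applies
    return float(models[c].predict_proba(x[None])[0, 1])

print(assess(feats[0]))
```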


Conference on Multimedia Modeling | 2015

Photo Quality Assessment with DCNN that Understands Image Well

Zhe Dong; Xu Shen; Houqiang Li; Xinmei Tian

Photo quality assessment from the view of human aesthetics, which tries to classify images into the categories of good and bad, has drawn a lot of attention in the computer vision field. Many methods have been proposed to deal with this problem, most of them based on hand-crafted features. However, due to the complexity and subjectivity of human aesthetic activities, it is difficult to describe and model all the factors that affect photo aesthetic quality, so those methods achieve only limited success. On the other hand, deep convolutional neural networks have proven effective in many computer vision problems and do not require human effort in the design of features. In this paper, we adopt a deep convolutional neural network that “understands” images well to conduct photo aesthetic quality assessment. First, we implement a deep convolutional neural network with eight layers and millions of parameters. Then, to “teach” this network enough knowledge about images, we train it on ImageNet, one of the largest available image databases. Next, for each given image, we take the activations of the last layer of the network as its aesthetic feature. Experimental results on two large and reliable image aesthetic quality assessment datasets demonstrate the effectiveness of our method.
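A hedged sketch of the feature-extraction step: take a network pretrained on ImageNet (torchvision's AlexNet here, as a convenient stand-in for the paper's own eight-layer network) and use its last-layer activations as a descriptor for a downstream aesthetic classifier. The input filename is hypothetical.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Eight-layer AlexNet pretrained on ImageNet (requires torchvision >= 0.13 for the weights API).
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
model.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = Image.open("photo.jpg").convert("RGB")   # hypothetical input image
x = preprocess(img).unsqueeze(0)

with torch.no_grad():
    feat = model(x)                            # last-layer activations as the aesthetic feature
print(feat.shape)                              # torch.Size([1, 1000])
```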


ACM Multimedia | 2009

Query aware visual similarity propagation for image search reranking

Li Wang; Linjun Yang; Xinmei Tian

Image search reranking is an effective approach to refining text-based image search results. In the reranking process, the estimation of visual similarity is critical to performance. However, existing measures, based on global or local features, cannot adapt to different queries. In this paper, we propose to estimate a query-aware image similarity by incorporating global visual similarity, local visual similarity, and visual word co-occurrence into an iterative propagation framework. After propagation, a query-aware image similarity combining the advantages of both global and local similarities is obtained and applied to image search reranking. Experiments on a real-world Web image dataset demonstrate that the proposed query-aware similarity outperforms the global similarity, the local similarity, and their linear combination for image search reranking.
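An illustrative sketch of iterative similarity propagation: start from a fused base similarity and repeatedly diffuse it through a row-normalized affinity matrix, so two images become more similar when their neighbors are similar. The update rule below is a common diffusion form assumed for illustration, not necessarily the paper's exact one, and the inputs are synthetic.

```python
import numpy as np

def propagate(S_global, S_local, alpha=0.8, lam=0.5, iters=20):
    S0 = lam * S_global + (1 - lam) * S_local          # fused base similarity
    P = S0 / S0.sum(axis=1, keepdims=True)             # row-normalized transition matrix
    S = S0.copy()
    for _ in range(iters):
        S = alpha * P @ S @ P.T + (1 - alpha) * S0     # diffuse, anchored to the base
    return S

n = 10
rng = np.random.default_rng(3)
A = rng.random((n, n)); S_global = (A + A.T) / 2       # synthetic symmetric similarities
B = rng.random((n, n)); S_local = (B + B.T) / 2
S = propagate(S_global, S_local)
```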


IEEE MultiMedia | 2011

Real-Time Video Copy-Location Detection in Large-Scale Repositories

Bo Liu; Zhu Li; Linjun Yang; Meng Wang; Xinmei Tian

By exploring the temporal relationships inherent in video, a probabilistic framework can help identify and locate copies of query videos.


IEEE Transactions on Multimedia | 2012

Query Difficulty Prediction for Web Image Search

Xinmei Tian; Yijuan Lu; Linjun Yang

Image search plays an important role in our daily lives. Given a query, an image search engine retrieves images related to it. However, different queries have different levels of search difficulty: some queries are easy (the search engine returns very good results), while others are difficult (the results are very unsatisfactory). Thus, it is desirable to identify those “difficult” queries in order to handle them properly. Query difficulty prediction (QDP) attempts to predict the quality of the search result for a query over a given collection. The QDP problem has been investigated for many years in text document retrieval, and its importance has been recognized in the information retrieval (IR) community. However, little effort has been devoted to query difficulty prediction for image search. Compared with QDP in document retrieval, QDP in image search is more challenging due to the noise of textual features and the well-known semantic gap of visual features. This paper investigates the QDP problem in Web image search. A novel method is proposed to automatically predict the quality of image search results for an arbitrary query. The model is built on a set of valuable features designed by exploring the visual characteristics of images in the search results. Experiments on two real image search datasets demonstrate the effectiveness of the proposed query difficulty prediction method. Two applications, optimal image search engine selection and search result merging, are presented to show the promising applicability of QDP.
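A hedged sketch of one plausible QDP feature: the visual coherence of the top-ranked results (intuitively, difficult queries tend to return visually incoherent result sets), fed to a regressor that predicts result quality. The paper designs a richer feature set; this shows only the general predict-quality-from-result-statistics recipe on synthetic data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def coherence(feats: np.ndarray) -> float:
    """Mean pairwise cosine similarity of the top results' visual features."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = f @ f.T
    n = len(f)
    return float((sim.sum() - n) / (n * (n - 1)))

rng = np.random.default_rng(4)
# One feature vector per query: just [coherence] here; a real system would use many features.
X = np.array([[coherence(rng.normal(size=(20, 64)))] for _ in range(100)])
y = rng.uniform(0, 1, size=100)     # stand-in for measured result quality (e.g., average precision)

qdp = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print(qdp.predict(X[:3]))
```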


ACM Multimedia | 2011

Learning to judge image search results

Xinmei Tian; Yijuan Lu; Linjun Yang; Qi Tian

Given the explosive growth of the Web and the popularity of image sharing Web sites, image retrieval plays an increasingly important role in our daily lives. Search engines aim to provide beneficial image search results to users in response to queries. The quality of image search results depends on many factors: the chosen search algorithms, ranking functions, indexing features, the base image database, etc. Applying different settings for these factors generates search result lists of varying quality. Previous research has shown that no single setting performs optimally for all queries. Therefore, given a set of search result lists generated by different settings, it is crucial to automatically determine which result list is the best in order to present it to users. This paper proposes a novel method to automatically identify the best search result list from a number of candidates. There are three main innovations in this paper. First, we propose a preference learning model to quantitatively study the best image search result identification problem. Second, we propose a set of valuable preference-learning-related features by exploring the visual characteristics of returned images. Third, our method shows promising potential in applications such as reranking ability assessment and optimal search engine selection. Experiments on two image search datasets show that our method achieves about 80% prediction accuracy for reranking ability assessment and selects the optimal search engine correctly for about 70% of queries.
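A sketch of the pairwise preference-learning idea: represent each candidate result list by a feature vector, train a linear model on feature differences between pairs where one list is known to be better, and at test time pick the list with the highest learned score. The random features below are placeholders for the paper's visual-characteristic features.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(5)
better = rng.normal(0.5, 1.0, size=(200, 16))   # features of the preferred list (synthetic)
worse = rng.normal(-0.5, 1.0, size=(200, 16))   # features of the other list (synthetic)

# Each pair contributes two examples: (better - worse) -> +1 and the reverse -> -1.
X = np.vstack([better - worse, worse - better])
y = np.hstack([np.ones(200), -np.ones(200)])
rank = LinearSVC(C=1.0).fit(X, y)

def best_list(candidates: np.ndarray) -> int:
    """Return the index of the candidate list with the highest learned score."""
    return int(np.argmax(candidates @ rank.coef_.ravel()))
```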


Conference on Multimedia Modeling | 2010

Visual reranking with local learning consistency

Xinmei Tian; Linjun Yang; Xiuqing Wu; Xian-Sheng Hua

Graph-based reranking methods have proven effective in image and video search. The basic assumption behind them is ranking score consistency, i.e., neighboring nodes (visually similar images or video shots) in a graph should have close ranking scores, which is modeled through a regularizer term. Existing reranking methods utilize pair-wise regularizers, e.g., the Laplacian regularizer and the normalized Laplacian regularizer, to estimate consistency over the graph from the pair-wise perspective by requiring the scores of pairs of samples to be close. However, since consistency is defined over a whole set of neighboring samples, it is characterized by the local structure of those samples, i.e., the multiple-wise relations among the neighbors. Pair-wise regularizers fail to capture this desired property because they treat neighboring samples independently. To tackle this problem, we propose to use a local learning regularizer to model multiple-wise consistency, by formulating consistent score estimation over a local area as a learning problem. Experiments on the TRECVID benchmark dataset and a real Web image dataset demonstrate the superiority of the local learning regularizer in visual reranking.
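A hedged numpy sketch of reranking with a local learning regularizer: for every sample, a local ridge model over its neighbors should reproduce its score, giving a regularizer of the form ||f - Pf||^2, where row i of P holds the local model's weights on i's neighbors. Refined scores then solve a ridge-style linear system against the initial scores. The kernel choice and closed form are one plausible instantiation, not necessarily the paper's exact formulation.

```python
import numpy as np

def local_learning_rerank(X, y, k=5, lam=1e-2, c=1.0):
    n = X.shape[0]
    D = ((X[:, None] - X[None]) ** 2).sum(-1)        # squared Euclidean distances
    P = np.zeros((n, n))
    for i in range(n):
        nb = np.argsort(D[i])[1:k + 1]               # k nearest neighbors (excluding i)
        K = X[nb] @ X[nb].T                          # local linear kernel over neighbors
        k_i = X[nb] @ X[i]
        w = np.linalg.solve(K + lam * np.eye(k), k_i)
        P[i, nb] = w                                 # local prediction of f_i from its neighbors
    A = np.eye(n) - P
    # min_f ||f - y||^2 + c ||f - P f||^2  =>  (I + c A^T A) f = y
    return np.linalg.solve(np.eye(n) + c * A.T @ A, y)

rng = np.random.default_rng(6)
X = rng.normal(size=(50, 8))                         # synthetic visual features
y = rng.uniform(0, 1, size=50)                       # initial (e.g., text-based) scores
refined = local_learning_rerank(X, y)
```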


Pattern Recognition | 2015

Multi-task proximal support vector machine

Ya Li; Xinmei Tian; Mingli Song; Dacheng Tao

With the explosive growth of the use of imagery, visual recognition plays an important role in many applications and attracts increasing research attention. Given several related tasks, single-task learning learns each task separately and ignores the relationships among them. In contrast, multi-task learning can exploit these relationships to learn all tasks jointly and thereby explore more information. In this paper, we propose a novel multi-task learning model based on the proximal support vector machine. The proximal support vector machine uses the large-margin idea of the standard support vector machine but with looser constraints and much lower computational cost. Our multi-task proximal support vector machine inherits the merits of the proximal support vector machine and achieves better performance than other popular multi-task learning models. Experiments are conducted on several multi-task learning datasets, including two classification datasets and one regression dataset. All results demonstrate the effectiveness and efficiency of our proposed multi-task proximal support vector machine.

Highlights:
- Propose a highly efficient multi-task proximal support vector machine (MTPSVM).
- Develop a method to optimize the learning procedure of MTPSVM.
- Propose an unbalanced MTPSVM to deal with the unbalanced-sample problem.
- Propose proximal support vector regression (SVR) and multi-task proximal SVR.
- Extensive experiments demonstrate the effectiveness and efficiency of our MTPSVM.
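A sketch of the proximal SVM building block the paper extends: instead of the standard SVM's inequality constraints, the proximal SVM fits the two classes to the parallel planes x·w - b = ±1, which reduces training to a single linear solve. This is the single-task case only; the multi-task extension (not shown) additionally decomposes each task's weights into a shared part plus a task-specific part.

```python
import numpy as np

def proximal_svm(X, y, nu=1.0):
    """Closed-form proximal SVM: returns (w, b) with decision rule sign(X @ w - b)."""
    E = np.hstack([X, -np.ones((X.shape[0], 1))])    # augment features with a bias column
    # Solve (I/nu + E^T E) z = E^T y, the normal equations of the regularized least squares.
    z = np.linalg.solve(np.eye(E.shape[1]) / nu + E.T @ E, E.T @ y)
    return z[:-1], z[-1]

rng = np.random.default_rng(7)
Xp = rng.normal(1.0, 1.0, size=(50, 4))              # synthetic positive class
Xn = rng.normal(-1.0, 1.0, size=(50, 4))             # synthetic negative class
X = np.vstack([Xp, Xn]); y = np.hstack([np.ones(50), -np.ones(50)])
w, b = proximal_svm(X, y)
acc = np.mean(np.sign(X @ w - b) == y)
print(f"training accuracy: {acc:.2f}")
```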

Collaboration


Dive into Xinmei Tian's collaboration.

Top Co-Authors

Xu Shen

University of Science and Technology of China

Ya Li

University of Science and Technology of China

Yijuan Lu

Texas State University

Cong Guo

University of Science and Technology of China

Xiuqing Wu

University of Science and Technology of China
