Network


Latest external collaborations at the country level. Click on a dot to dive into the details.

Hotspot


Dive into the research topics where Chaoran Cui is active.

Publication


Featured research published by Chaoran Cui.


IEEE Transactions on Multimedia | 2017

Comprehensive Feature-Based Robust Video Fingerprinting Using Tensor Model

Xiushan Nie; Yilong Yin; Jiande Sun; Ju Liu; Chaoran Cui

Content-based near-duplicate video detection (NDVD) is essential for effective search and retrieval, and robust video fingerprinting is a good solution for NDVD. Most existing video fingerprinting methods use a single feature or concatenate different features to generate video fingerprints, and they show good performance under single-mode modifications such as noise addition and blurring. However, under combined modifications their performance degrades because such features cannot characterize the video content completely. By contrast, the assistance and consensus among different features can improve the performance of video fingerprinting. Therefore, in the present study, we mine the assistance and consensus among different features based on a tensor model, and we present a new comprehensive feature that fully exploits them in the proposed video fingerprinting framework. We also analyze what the comprehensive feature actually captures of the original video. In this framework, the video is first represented as a high-order tensor composed of different features, and the video tensor is decomposed via the Tucker model with a solution that determines the number of components. Subsequently, the comprehensive feature is generated from the low-order tensor obtained by the decomposition. Finally, the video fingerprint is computed from this feature. A matching strategy for narrowing the search, based on the core tensor, is also proposed. The resulting robust video fingerprinting framework is resistant not only to single-mode modifications but also to their combinations.
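As a rough illustration of the idea (not the authors' pipeline), the sketch below stacks several per-frame feature types into a third-order tensor, applies a Tucker decomposition with the tensorly library, and binarizes the core tensor into a compact fingerprint; the paper's feature set, rank-selection rule, and core-tensor matching strategy are not reproduced here.

```python
# Minimal sketch of a Tucker-based video fingerprint (illustrative only).
# Assumes the tensorly library; feature extraction is mocked with random data.
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

def video_fingerprint(feature_tensor, ranks=(8, 8, 2)):
    """feature_tensor: (num_frames, feature_dim, num_feature_types) array combining
    several per-frame features (e.g., color and gradient histograms)."""
    core, factors = tucker(tl.tensor(feature_tensor), rank=list(ranks))
    # The low-order core tensor plays the role of the "comprehensive feature";
    # binarizing it by sign gives a compact, comparable fingerprint.
    return (tl.to_numpy(core) > 0).astype(np.uint8).ravel()

# Toy usage: two videos, one a slightly perturbed copy of the other.
rng = np.random.default_rng(0)
video = rng.random((60, 64, 2))                       # 60 frames, 64-dim features, 2 feature types
near_duplicate = video + 0.01 * rng.random(video.shape)
fp1, fp2 = video_fingerprint(video), video_fingerprint(near_duplicate)
print("Hamming distance:", np.count_nonzero(fp1 != fp2))
```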


International ACM SIGIR Conference on Research and Development in Information Retrieval | 2010

Web page publication time detection and its application for page rank

Zhumin Chen; Jun Ma; Chaoran Cui; Hongxing Rui; Shaomang Huang

Publication time (P-time for short) of Web pages is required in many application areas. In this paper, we address the problem of P-time detection and its application to page ranking. We first propose an approach to extract the P-time of a page that displays an explicit P-time in its body. We then present a method to infer the P-time of a page without one. We further introduce a temporally sensitive PageRank model that uses P-time. Experiments demonstrate that our methods significantly outperform the baseline methods.
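One plausible reading of the temporally sensitive ranking idea is a PageRank variant whose teleportation vector favors recently published pages. The plain-NumPy sketch below implements that variant; the exponential decay, half-life, and damping factor are illustrative choices and not the paper's exact model.

```python
# Temporally sensitive PageRank sketch: teleportation is biased toward recent pages
# via an exponential decay of page age derived from P-time (illustrative only).
import numpy as np

def temporal_pagerank(adj, ages_in_days, damping=0.85, half_life=180.0, iters=100):
    """adj[i, j] = 1 if page i links to page j; ages_in_days[i] comes from the page's P-time."""
    n = adj.shape[0]
    prior = np.exp(-np.log(2) * np.asarray(ages_in_days, dtype=float) / half_life)
    prior /= prior.sum()                                  # recency-biased teleportation vector
    out_deg = adj.sum(axis=1)
    trans = np.zeros_like(adj, dtype=float)
    has_links = out_deg > 0
    trans[has_links] = adj[has_links] / out_deg[has_links, None]
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        dangling = rank[~has_links].sum()                 # redistribute mass of pages with no out-links
        rank = damping * (rank @ trans + dangling * prior) + (1 - damping) * prior
    return rank

links = np.array([[0, 1, 1],
                  [1, 0, 0],
                  [1, 0, 0]], dtype=float)
print(temporal_pagerank(links, ages_in_days=[30, 400, 5]))
```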


Journal of the Association for Information Science and Technology | 2015

Improving Image Annotation via Ranking-Oriented Neighbor Search and Learning-Based Keyword Propagation

Chaoran Cui; Jun Ma; Tao Lian; Zhumin Chen; Shuaiqiang Wang

Automatic image annotation plays a critical role in modern keyword-based image retrieval systems. For this task, the nearest-neighbor-based scheme works in two phases: first, it finds the most similar neighbors of a new image from the set of labeled images; then, it propagates the keywords associated with the neighbors to the new image. In this article, we propose a novel approach for image annotation that simultaneously improves both phases of the nearest-neighbor-based scheme. In the neighbor-search phase, unlike existing work that discovers the nearest neighbors using predicted distances, we introduce a ranking-oriented neighbor search mechanism (RNSM), in which the ordering of labeled images is optimized directly, without going through the intermediate step of distance prediction. In the keyword-propagation phase, unlike existing work that uses simple heuristic rules to select the propagated keywords, we present a learning-based keyword propagation strategy (LKPS), in which a scoring function is learned to evaluate the relevance of keywords based on their multiple relations with the nearest neighbors. Extensive experiments on the Corel 5K data set and the MIR Flickr data set demonstrate the effectiveness of our approach.
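To make the propagation phase concrete, the toy sketch below scores candidate keywords from the ranked neighbors with a linear function over two simple relational features (a raw vote count and a rank-discounted vote). In LKPS these weights are learned; here they are fixed by hand for illustration.

```python
# Illustrative keyword propagation from ranked nearest neighbors (weights are
# hypothetical stand-ins for learned parameters; the paper's LKPS model is richer).
import numpy as np

def propagate_keywords(neighbor_tags, weights=np.array([1.0, 0.5]), top_k=5):
    """neighbor_tags: list of tag sets for the ranked nearest neighbors (best first)."""
    vocab = sorted(set().union(*neighbor_tags))
    scores = {}
    for tag in vocab:
        vote = sum(tag in tags for tags in neighbor_tags)              # raw vote count
        rank_discounted = sum(1.0 / np.log2(i + 2)                     # earlier neighbors count more
                              for i, tags in enumerate(neighbor_tags) if tag in tags)
        scores[tag] = weights @ np.array([vote, rank_discounted])
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

neighbors = [{"sky", "beach", "sea"}, {"beach", "sand"}, {"sky", "cloud"}, {"sea", "boat"}]
print(propagate_keywords(neighbors, top_k=3))
```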


International ACM SIGIR Conference on Research and Development in Information Retrieval | 2013

Ranking-oriented nearest-neighbor based method for automatic image annotation

Chaoran Cui; Jun Ma; Tao Lian; Xiaofang Wang; Zhaochun Ren

Automatic image annotation plays a critical role in keyword-based image retrieval systems. Recently, the nearest-neighbor based scheme has been proposed and has achieved good performance for image annotation. Given a new image, the scheme first finds its most similar neighbors among the labeled images and then propagates the keywords associated with those neighbors to it. Many studies have focused on designing a suitable distance metric between images so that all labeled images can be ranked by their distance to the given image. However, higher accuracy in distance prediction does not necessarily lead to a better ordering of the labeled images. In this paper, we propose a ranking-oriented neighbor search mechanism that ranks labeled images directly, without going through the intermediate step of distance prediction. In particular, a new learning to rank algorithm is developed that exploits the implicit preference information of labeled images and emphasizes the accuracy of the top-ranked results. Experiments on two benchmark datasets demonstrate the effectiveness of our approach for image annotation.
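The sketch below illustrates the general flavor of a pairwise learning-to-rank step for neighbor search: a linear scoring function is trained with a RankSVM-style hinge loss on preference pairs, where a "preferred" item should outrank the other. The synthetic features and pairs are stand-ins, not the algorithm proposed in the paper.

```python
# Pairwise learning-to-rank sketch (RankSVM-style hinge loss, SGD), illustrative only.
import numpy as np

def train_pairwise_ranker(pairs, dim, lr=0.1, reg=1e-3, epochs=20, seed=0):
    """pairs: list of (preferred_features, other_features) tuples."""
    rng = np.random.default_rng(seed)
    w = np.zeros(dim)
    for _ in range(epochs):
        for idx in rng.permutation(len(pairs)):
            xp, xn = pairs[idx]
            if w @ (xp - xn) < 1.0:                     # hinge-loss margin violated
                w += lr * ((xp - xn) - reg * w)
            else:
                w -= lr * reg * w
    return w

# Toy usage: recover a hidden ranking direction from implicit preference pairs.
rng = np.random.default_rng(1)
hidden_w = np.array([2.0, -1.0, 0.5])
feats = rng.random((40, 3))
scores = feats @ hidden_w
pairs = [(feats[i], feats[j]) for i in range(40) for j in range(40)
         if scores[i] > scores[j] + 0.3]
w = train_pairwise_ranker(pairs, dim=3)
print("learned direction:", np.round(w / np.linalg.norm(w), 3))
```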


International Conference on Multimedia Retrieval | 2011

Active learning through notes data in Flickr: an effortless training data acquisition approach for object localization

Lei Zhang; Jun Ma; Chaoran Cui; Piji Li

Most state-of-the-art systems for object localization rely on supervised machine learning techniques and are thus limited by the lack of labeled training data. In this paper, our goal is to provide training data for object localization effectively and efficiently. We argue that the notes data in Flickr can be exploited as a novel source for object modeling. First, we apply a text mining method to gather semantically related images for a specific class. Then a handful of images are selected manually as seed images, forming the initial training set. Finally, the training set is expanded through an incremental active learning framework. Our approach requires significantly less manual supervision than standard methods. Experimental results on the PASCAL VOC 2007 and NUS-WIDE datasets show that the training data acquired by our approach can complement or even substitute for conventional training data for object localization.
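The loop below sketches incremental active learning with uncertainty sampling, using a plain logistic-regression scorer and synthetic features as stand-ins for the object-localization model and image data; pool construction from Flickr notes and the manual seed selection are outside its scope.

```python
# Incremental active-learning sketch with uncertainty sampling (illustrative only).
import numpy as np

def train_logreg(X, y, lr=0.5, epochs=200):
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def active_learning(seed_X, seed_y, pool_X, oracle, rounds=5, batch=10):
    X, y = seed_X.copy(), seed_y.copy()
    pool = list(range(len(pool_X)))
    for _ in range(rounds):
        w = train_logreg(X, y)
        # Query the samples the current model is least certain about
        # (decision value closest to zero).
        uncertainty = np.abs(pool_X[pool] @ w)
        picked = [pool[i] for i in np.argsort(uncertainty)[:batch]]
        X = np.vstack([X, pool_X[picked]])
        y = np.concatenate([y, [oracle(i) for i in picked]])
        pool = [i for i in pool if i not in picked]
    return train_logreg(X, y)

rng = np.random.default_rng(0)
true_w = np.array([1.5, -2.0])
pool_X = rng.normal(size=(500, 2))
labels = (pool_X @ true_w > 0).astype(float)
seed_idx = rng.choice(500, size=10, replace=False)
model = active_learning(pool_X[seed_idx], labels[seed_idx], pool_X, oracle=lambda i: labels[i])
print("learned weights:", model)
```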


Conference on Information and Knowledge Management | 2012

Semantically coherent image annotation with a learning-based keyword propagation strategy

Chaoran Cui; Jun Ma; Shuaiqiang Wang; Shuai Gao; Tao Lian

Automatic image annotation plays an important role in modern keyword-based image retrieval systems. Recently, many neighbor-based methods have been proposed and achieved good performance for image annotation. However, existing work mainly focused on exploring a distance metric learning algorithm to determine the neighbors of an image, and neglected the subsequent keyword propagation process. They usually used some simple heuristic propagation rules, and propagated each keyword independently without considering the inherent semantic coherence among keywords. In this paper, we propose a novel learning-based keyword propagation strategy and incorporate it into the neighbor-based method framework. In particular, we employ the structural SVM to learn a scoring function which can evaluate different candidate keyword sets for a test image. Moreover, we explicitly enforce the semantic coherence constraint for the propagated keywords in our approach. The annotation of the test image is propagated as a whole rather than separate keywords. Experiments on two benchmark data sets demonstrate the effectiveness of our approach for image annotation and ranked retrieval.
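The sketch below shows only the inference side of the idea: a candidate keyword set is built greedily to maximize a score that combines neighbor votes with pairwise keyword co-occurrence, so keywords are selected as a coherent whole rather than independently. The structural-SVM training of the scoring weights is omitted, and the weights and co-occurrence values are fixed by hand for illustration.

```python
# Greedy inference sketch for semantically coherent keyword-set selection
# (training of the scoring function is not shown; all parameters are illustrative).
import itertools

def score_set(keywords, votes, cooc, w_vote=1.0, w_coh=0.5):
    vote_term = sum(votes.get(k, 0.0) for k in keywords)
    coh_term = sum(cooc.get(frozenset(pair), 0.0)
                   for pair in itertools.combinations(keywords, 2))
    return w_vote * vote_term + w_coh * coh_term

def annotate(votes, cooc, size=3):
    chosen, candidates = [], set(votes)
    while len(chosen) < size and candidates:
        best = max(candidates, key=lambda k: score_set(chosen + [k], votes, cooc))
        chosen.append(best)
        candidates.remove(best)
    return chosen

votes = {"sky": 3, "cloud": 2, "car": 2, "beach": 1}
cooc = {frozenset({"sky", "cloud"}): 0.9, frozenset({"sky", "beach"}): 0.4,
        frozenset({"car", "road"}): 0.8}
print(annotate(votes, cooc))   # picks a coherent set such as ['sky', 'cloud', 'car']
```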


Neurocomputing | 2018

Learning to Rank Images for Complex Queries in Concept-based Search

Chaoran Cui; Jialie Shen; Zhumin Chen; Shuaiqiang Wang; Jun Ma

Concept-based image search is an emerging search paradigm that uses a set of concepts as intermediate semantic descriptors of images to bridge the semantic gap. Typically, a user query is rather complex and cannot be well described by a single concept. However, it is less effective to tackle such complex queries by simply aggregating the individual search results of the constituent concepts. In this paper, we propose to introduce learning to rank techniques into concept-based image search for complex queries. With freely available social tagged images, we first build concept detectors by jointly leveraging heterogeneous visual features. Then, to formulate image relevance, we explicitly model the individual weight of each constituent concept in a complex query. The dependence among constituent concepts, as well as the relatedness between query and non-query concepts, is also considered by modeling the pairwise concept correlations in a factorized manner. Finally, we train our model to directly optimize image ranking performance for complex queries under a pairwise learning to rank framework. Extensive experiments on two benchmark datasets verify the promise of our approach.
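As a rough illustration, the snippet below scores an image for a complex query as a weighted sum of concept-detector outputs plus a factorized pairwise term (a latent vector per concept, with correlations as inner products), in the spirit of factorization machines. The detector outputs and parameters are random stand-ins rather than values learned with the paper's pairwise ranking objective.

```python
# Illustrative relevance scoring for a complex query with factorized concept correlations.
import numpy as np

def relevance(detector_scores, query_concepts, concept_weights, latent):
    """detector_scores: dict concept -> score in [0, 1] for one image.
    latent: dict concept -> latent vector used to factorize concept correlations."""
    rel = sum(concept_weights[c] * detector_scores.get(c, 0.0) for c in query_concepts)
    for c in query_concepts:                          # query/non-query concept interactions
        for c2, s2 in detector_scores.items():
            if c2 != c:
                rel += (latent[c] @ latent[c2]) * detector_scores.get(c, 0.0) * s2
    return rel

rng = np.random.default_rng(0)
concepts = ["beach", "sunset", "people", "city"]
latent = {c: 0.1 * rng.normal(size=4) for c in concepts}
weights = {c: 1.0 for c in concepts}
image_scores = {"beach": 0.9, "sunset": 0.7, "people": 0.2, "city": 0.1}
print(relevance(image_scores, query_concepts=["beach", "sunset"],
                concept_weights=weights, latent=latent))
```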


International ACM SIGIR Conference on Research and Development in Information Retrieval | 2017

Distribution-oriented Aesthetics Assessment for Image Search

Chaoran Cui; Huidi Fang; Xiang Deng; Xiushan Nie; Hongshuai Dai; Yilong Yin

Aesthetics has become increasingly important in image search as a means of enhancing user satisfaction, and image aesthetics assessment has therefore emerged as a promising research topic in recent years. In this paper, unlike existing studies that rely on a single label, we propose to quantify image aesthetics with a distribution over quality levels. The distribution representation can effectively characterize the disagreement among users' aesthetic perceptions of the same image. Our framework is built on label distribution learning, in which the reliability of training examples and the correlations between quality levels are fully taken into account. Extensive experiments on two benchmark datasets verify the potential of our approach for aesthetics assessment. The role of aesthetics in image search is also rigorously investigated.
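A minimal sketch of the label-distribution-learning core, under the assumption of synthetic features and rating histograms: a linear-softmax model is fit to per-image distributions over five quality levels by minimizing cross-entropy (equivalent here to minimizing KL divergence). The paper's reliability weighting and level-correlation modeling are not included.

```python
# Label distribution learning sketch for aesthetics: fit a linear-softmax model
# to per-image rating distributions over quality levels (illustrative only).
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_label_distribution(X, target_dists, levels=5, lr=0.5, epochs=300):
    W = np.zeros((X.shape[1], levels))
    for _ in range(epochs):
        P = softmax(X @ W)
        W -= lr * X.T @ (P - target_dists) / len(X)    # gradient of cross-entropy loss
    return W

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                          # synthetic image features
raw = rng.random((200, 5))
targets = raw / raw.sum(axis=1, keepdims=True)         # per-image distributions over 5 quality levels
W = fit_label_distribution(X, targets)
print("predicted aesthetic distribution:", np.round(softmax(X[:1] @ W), 3))
```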


Multimedia Tools and Applications | 2017

Social tag relevance learning via ranking-oriented neighbor voting

Chaoran Cui; Jialie Shen; Jun Ma; Tao Lian

High-quality tags play a critical role in applications involving online multimedia search, such as social image annotation, sharing, and browsing. However, user-generated tags in the real world are often imprecise and incomplete descriptions of the image contents, which severely degrades the performance of current search systems. To improve the descriptive power of social tags, a fundamental issue is tag relevance learning, which concerns how to effectively interpret the relevance of a tag with respect to the contents of an image. In this paper, we investigate the problem from the new perspective of learning to rank, and develop a novel approach that facilitates tag relevance learning by directly optimizing the ranking performance of tag-based image search. Specifically, a supervision step is introduced into the neighbor voting scheme, in which tag relevance is estimated by accumulating votes from visual neighbors. By explicitly modeling the neighbor weights and tag correlations, the risk of making heuristic assumptions is effectively avoided. Moreover, our approach does not suffer from scalability problems, since a single generic model is learned and applied to all tags. Extensive experiments on two benchmark datasets, in comparison with state-of-the-art methods, demonstrate the promise of our approach.


ACM Transactions on Intelligent Systems and Technology | 2017

Augmented Collaborative Filtering for Sparseness Reduction in Personalized POI Recommendation

Chaoran Cui; Jialie Shen; Liqiang Nie; Jun Ma

As mobile device penetration increases, it has become pervasive for images to be associated with locations in the form of geotags. Geotags bridge the gap between the physical world and cyberspace, giving rise to new opportunities to extract insights into user preferences and behaviors. In this article, we aim to exploit geotagged photos from online photo-sharing sites for personalized point-of-interest (POI) recommendation. Because most users have only very limited travel experience, data sparseness poses a formidable challenge to personalized POI recommendation. To alleviate data sparseness, we propose to augment current collaborative filtering algorithms from multiple perspectives. Specifically, hybrid preference cues comprising user-uploaded and user-favored photos are harvested to study users' tastes. Moreover, heterogeneous high-order relationship information is jointly captured from user social networks and POI multimodal contents with hypergraph models. We also build upon the matrix factorization algorithm to integrate the disparate sources of preference and relationship information, and apply our approach to directly optimize user preference rankings. Extensive experiments on a large, publicly accessible dataset verify the potential of our approach for addressing data sparseness and offering quality recommendations to users, especially those with limited travel experience.
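The sketch below captures only the ranking-oriented matrix-factorization ingredient, in the spirit of BPR: for each user, visited POIs are pushed to score above sampled unvisited ones. The hypergraph relations and photo-based preference cues from the article are not modeled, and the interaction data are synthetic.

```python
# BPR-style matrix factorization sketch for POI recommendation (illustrative only).
import numpy as np

def bpr_mf(visits, n_users, n_pois, dim=8, lr=0.05, reg=0.01, steps=20000, seed=0):
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.normal(size=(n_users, dim))          # user latent factors
    V = 0.1 * rng.normal(size=(n_pois, dim))           # POI latent factors
    visited = {u: set() for u in range(n_users)}
    for u, p in visits:
        visited[u].add(p)
    for _ in range(steps):
        u, p = visits[rng.integers(len(visits))]       # positive (visited) POI
        q = rng.integers(n_pois)                       # sample a negative (unvisited) POI
        while q in visited[u]:
            q = rng.integers(n_pois)
        x = U[u] @ (V[p] - V[q])                       # preference margin
        g = 1.0 / (1.0 + np.exp(x))                    # sigmoid(-x), gradient scale
        U[u] += lr * (g * (V[p] - V[q]) - reg * U[u])
        V[p] += lr * (g * U[u] - reg * V[p])
        V[q] += lr * (-g * U[u] - reg * V[q])
    return U, V

visits = [(0, 0), (0, 1), (1, 1), (1, 2), (2, 3)]
U, V = bpr_mf(visits, n_users=3, n_pois=5)
print("scores for user 0:", np.round(U[0] @ V.T, 2))
```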

Collaboration


Dive into Chaoran Cui's collaborations.

Top Co-Authors

Xiushan Nie, Shandong University of Finance and Economics
Jun Ma, Shandong University
Xiaoming Xi, Shandong University of Finance and Economics
Lei Zhu, Shandong Normal University
Muwei Jian, Shandong University of Finance and Economics