Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Changhu Wang is active.

Publication


Featured research published by Changhu Wang.


ACM Multimedia | 2006

Image annotation refinement using random walk with restarts

Changhu Wang; Feng Jing; Lei Zhang; Hong-Jiang Zhang

Image annotation plays an important role in image retrieval and management. However, the results of state-of-the-art image annotation methods are often unsatisfactory. Therefore, it is necessary to refine the imprecise annotations obtained by existing annotation methods. In this paper, a novel approach to automatically refine the original annotations of images is proposed. On the one hand, for Web images, textual information, e.g., file name and surrounding text, is used to retrieve a set of candidate annotations. On the other hand, for non-Web images that lack textual information, a relevance model-based algorithm using visual information is used to decide the candidate annotations. Then, the candidate annotations are re-ranked and only the top ones are retained as the final annotations. To re-rank the annotations, an algorithm using Random Walk with Restarts (RWR) is proposed to leverage both the corpus information and the original confidence information of the annotations. Experimental results on both non-Web images from the Corel dataset and Web images from photo forum sites demonstrate the effectiveness of the proposed method.
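The RWR re-ranking step can be sketched in a few lines: candidate annotations are nodes in a word-similarity graph, the restart vector carries the original confidence scores, and the walk's steady-state probabilities give the refined ranking. A minimal illustration in Python (the similarity values, confidences, and annotation words below are hypothetical, not taken from the paper):

```python
def rwr_rerank(sim, restart, alpha=0.15, iters=100):
    """Random Walk with Restarts over a word-similarity graph.

    sim     -- square matrix (list of lists) of word-to-word similarities
    restart -- original confidence score of each candidate annotation
    alpha   -- restart probability
    Returns the steady-state score of each candidate.
    """
    n = len(sim)
    # Column-normalize the similarity matrix into a transition matrix.
    col_sums = [sum(sim[i][j] for i in range(n)) for j in range(n)]
    trans = [[sim[i][j] / col_sums[j] if col_sums[j] else 0.0
              for j in range(n)] for i in range(n)]
    # Normalize the confidences into a restart distribution.
    total = sum(restart)
    r = [c / total for c in restart]
    p = r[:]
    for _ in range(iters):
        p = [(1 - alpha) * sum(trans[i][j] * p[j] for j in range(n))
             + alpha * r[i] for i in range(n)]
    return p

# Hypothetical candidates with original confidences; "sky" and "cloud"
# reinforce each other through the similarity graph, so "cloud" rises
# above "car" despite a lower original confidence.
words = ["sky", "cloud", "car"]
sim = [[1.0, 0.8, 0.1],
       [0.8, 1.0, 0.1],
       [0.1, 0.1, 1.0]]
conf = [0.5, 0.3, 0.4]
scores = rwr_rerank(sim, conf)
ranking = sorted(words, key=lambda w: -scores[words.index(w)])
```

The restart probability alpha balances the graph structure (word-to-word similarity) against the annotator's original confidences.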


Computer Vision and Pattern Recognition | 2011

Edgel index for large-scale sketch-based image search

Yang Cao; Changhu Wang; Liqing Zhang; Lei Zhang

Retrieving images that match a hand-drawn sketch query is a highly desired feature, especially with the popularity of devices with touch screens. Although query-by-sketch has been extensively studied since the 1990s, it is still very challenging to build a real-time sketch-based image search engine on a large-scale database due to the lack of effective and efficient matching/indexing solutions. The explosive growth of web images and the phenomenal success of search techniques have encouraged us to revisit this problem and aim to solve web-scale sketch-based image retrieval. In this work, a novel index structure and the corresponding raw contour-based matching algorithm are proposed to calculate the similarity between a sketch query and natural images, and to make sketch-based image retrieval scalable to millions of images. The proposed solution simultaneously considers storage cost, retrieval accuracy, and efficiency, based on which we have developed a real-time sketch-based image search engine indexing more than 2 million images. Extensive experiments on various retrieval tasks (basic shape search, specific image search, and similar image search) show better accuracy and efficiency than state-of-the-art methods.
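The core idea of an edgel index can be sketched as a plain inverted index: every database image contributes its quantized edge pixels (position plus orientation, the "edgels") as index keys, and a sketch query is scored by how many of its own edgels hit each image. The toy version below shows only the mechanics; the edgel tuples are hypothetical, and the real system tolerates small position and orientation offsets, which this sketch omits:

```python
from collections import defaultdict

def build_edgel_index(images):
    """Inverted index from quantized edgels to image ids.

    images -- dict: image_id -> set of (x, y, orientation) edgels
    """
    index = defaultdict(set)
    for image_id, edgels in images.items():
        for edgel in edgels:
            index[edgel].add(image_id)
    return index

def query(index, sketch_edgels):
    """Rank images by the fraction of sketch edgels that hit each image."""
    votes = defaultdict(int)
    for edgel in sketch_edgels:
        for image_id in index.get(edgel, ()):
            votes[image_id] += 1
    return sorted(((v / len(sketch_edgels), i) for i, v in votes.items()),
                  reverse=True)

# Hypothetical toy database: each image is a set of quantized edgels.
images = {
    "circle.jpg":   {(1, 1, 0), (2, 1, 1), (2, 2, 2), (1, 2, 3)},
    "triangle.jpg": {(1, 1, 0), (3, 3, 1), (5, 1, 2)},
}
index = build_edgel_index(images)
results = query(index, [(1, 1, 0), (2, 1, 1), (2, 2, 2)])
```

Because lookups touch only the buckets named by the query's own edgels, scoring cost grows with the sketch size rather than with the database size, which is what makes million-scale search feasible.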


Computer Vision and Pattern Recognition | 2007

Content-Based Image Annotation Refinement

Changhu Wang; Feng Jing; Lei Zhang; Hong-Jiang Zhang

Automatic image annotation has been an active research topic due to its great importance in image retrieval and management. However, results of the state-of-the-art image annotation methods are often unsatisfactory. Despite continuous efforts in inventing new annotation algorithms, it would be advantageous to develop a dedicated approach that could refine imprecise annotations. In this paper, a novel approach to automatically refining the original annotations of images is proposed. For a query image, an existing image annotation method is first employed to obtain a set of candidate annotations. Then, the candidate annotations are re-ranked and only the top ones are reserved as the final annotations. By formulating the annotation refinement process as a Markov process and defining the candidate annotations as the states of a Markov chain, a content-based image annotation refinement (CIAR) algorithm is proposed to re-rank the candidate annotations. It leverages both corpus information and the content feature of a query image. Experimental results on a typical Corel dataset show not only the validity of the refinement, but also the superiority of the proposed algorithm over existing ones.


International ACM SIGIR Conference on Research and Development in Information Retrieval | 2008

Learning to reduce the semantic gap in web image retrieval and annotation

Changhu Wang; Lei Zhang; Hong-Jiang Zhang

In this paper, we study the problem of bridging the semantic gap between low-level image features and high-level semantic concepts, which is the key hindrance in content-based image retrieval. Piloted by the rich textual information of Web images, the proposed framework tries to learn a new distance measure in the visual space, which can be used to retrieve more semantically relevant images for any unseen query image. The framework differs from traditional distance metric learning methods in the following ways. 1) A ranking-based distance metric learning method is proposed for the image retrieval problem, by optimizing the leave-one-out retrieval performance on the training data. 2) To be scalable, millions of images together with rich textual information have been crawled from the Web to learn the similarity measure, and the learning framework particularly considers the indexing problem to ensure retrieval efficiency. 3) To alleviate the noise in the unbalanced labels of images and fully utilize the textual information, a Latent Dirichlet Allocation based topic-level text model is introduced to define pairwise semantic similarity between any two images. The learnt distance measure can be directly applied to applications such as content-based image retrieval and search-based image annotation. Experimental results on the two applications over a two-million Web image database show both the effectiveness and efficiency of the proposed framework.
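The topic-level similarity in point 3 can be pictured as comparing images by their inferred topic distributions instead of their raw tags. A plausible sketch using cosine similarity (the topic vectors below are made up, and the paper's exact similarity definition may differ):

```python
import math

def topic_similarity(theta_a, theta_b):
    """Cosine similarity between two images' topic distributions,
    e.g. as inferred by LDA from each image's surrounding text."""
    dot = sum(a * b for a, b in zip(theta_a, theta_b))
    norm_a = math.sqrt(sum(a * a for a in theta_a))
    norm_b = math.sqrt(sum(b * b for b in theta_b))
    return dot / (norm_a * norm_b)

# Two beach photos share topic mass even if their raw tags differ;
# the three-topic distributions below are hypothetical.
beach_1 = [0.70, 0.20, 0.10]   # mostly a "seaside" topic
beach_2 = [0.60, 0.30, 0.10]
office  = [0.05, 0.10, 0.85]
```

Comparing at the topic level makes two images with disjoint but related tags (say "shore" vs. "coast") look similar, which is the point of using a topic model rather than exact tag overlap to define the pairwise labels.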


ACM Multimedia | 2006

IGroup: web image search results clustering

Feng Jing; Changhu Wang; Yuhuan Yao; Kefeng Deng; Lei Zhang; Wei-Ying Ma

In this paper, we propose IGroup, an efficient and effective algorithm that organizes Web image search results into clusters. IGroup differs from all existing Web image search results clustering algorithms, which cluster only the top few images using visual or textual features. Our proposed algorithm first identifies several query-related semantic clusters based on a key-phrase extraction algorithm originally proposed for clustering general Web search results. Then, all the result images are assigned to the corresponding clusters, so that the full result set is organized into a semantically structured clustering. To make the best use of the clustering results, a new user interface (UI) is proposed. Different from existing Web image search interfaces, which show only a limited number of suggested query terms or representative image thumbnails for some clusters, the proposed interface displays both representative thumbnails and appropriate titles of semantically coherent image clusters. Comprehensive user studies have been conducted to evaluate both the clustering algorithm and the new UI.
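The two-stage procedure, first extracting query-related key phrases and then assigning every result image to the matching clusters, can be roughly sketched as follows. The query, phrases, and surrounding texts are hypothetical, and the real key-phrase extraction step is far more involved than the literal substring match used here:

```python
def assign_to_clusters(phrases, images):
    """Assign every result image to the semantic clusters whose key
    phrase occurs in the image's surrounding text (case-insensitive).

    phrases -- list of cluster key phrases from the extraction stage
    images  -- dict: image_id -> surrounding text of the image
    """
    clusters = {p: [] for p in phrases}
    for image_id, text in images.items():
        text = text.lower()
        for phrase in phrases:
            if phrase.lower() in text:
                clusters[phrase].append(image_id)
    return clusters

# Hypothetical query "jaguar": phrase extraction might yield two
# semantic clusters, which then absorb all matching result images.
phrases = ["jaguar car", "jaguar animal"]
images = {
    "img1": "New Jaguar car unveiled at the auto show",
    "img2": "A jaguar animal resting in the rainforest",
    "img3": "Jaguar car dealership opens downtown",
}
clusters = assign_to_clusters(phrases, images)
```

The key design point is that cluster labels come from text, so every result image can be placed, not just the top few for which visual clustering is affordable.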


Multimedia Information Retrieval | 2006

Scalable search-based image annotation of personal images

Changhu Wang; Feng Jing; Lei Zhang; Hong-Jiang Zhang

With the prevalence of digital cameras, more and more people have considerable digital images on their personal devices. As a result, there are increasing needs to effectively search these personal images. Automatic image annotation may serve this goal, for the annotated keywords could facilitate the search process. Although many image annotation methods have been proposed in recent years, their effectiveness on arbitrary personal images is constrained by their limited scalability, i.e., the limited lexicon of a small-scale training set. To be scalable, we propose a search-based image annotation (SBIA) algorithm that is analogous to Web page search. First, content-based image retrieval (CBIR) technology is used to retrieve a set of visually similar images from a large-scale Web image set. Then, a text-based keyword search (TBKS) technique is used to obtain a ranked list of candidate annotations for each retrieved image. Finally, a fusion algorithm is used to combine the ranked lists into the final annotation list. The application of both efficient search technologies and a Web-scale image set guarantees the scalability of the proposed algorithm. Experimental results on the U. Washington dataset show not only the effectiveness and efficiency of the proposed algorithm but also the advantage of image retrieval using annotation results over that using visual features.
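The three-step SBIA pipeline ends in a rank-list fusion. One plausible fusion rule (an illustrative assumption, not necessarily the paper's) weights each candidate keyword by the visual similarity of the neighbour it came from, discounted by its rank in that neighbour's keyword list:

```python
from collections import defaultdict

def fuse_annotations(retrieved, top_k=3):
    """Fuse per-image candidate keyword lists into one annotation list.

    retrieved -- list of (visual_similarity, [keywords ranked by
                 text-search relevance]) for each visually similar image.
    Each keyword votes with the similarity of the image it came from,
    discounted by its rank in that image's list.
    """
    score = defaultdict(float)
    for sim, keywords in retrieved:
        for rank, word in enumerate(keywords):
            score[word] += sim / (rank + 1)
    return sorted(score, key=score.get, reverse=True)[:top_k]

# Hypothetical visually similar neighbours for a beach photo: the two
# close matches agree on "beach" and "sea", outvoting the weak match.
retrieved = [
    (0.9, ["beach", "sea", "sand"]),
    (0.8, ["sea", "beach", "sky"]),
    (0.2, ["car", "road"]),
]
annotations = fuse_annotations(retrieved)
```

Because both the CBIR stage and the keyword-search stage use standard index structures, the lexicon is effectively the vocabulary of the whole Web image set rather than that of a hand-labeled training corpus.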


Conference on Information and Knowledge Management | 2007

Learning query-biased web page summarization

Changhu Wang; Feng Jing; Lei Zhang; Hong-Jiang Zhang

Query-biased Web page summarization is the summarization of a Web page reflecting its relevance to a specific query. It plays an important role in the presentation of search results by Web search engines. In this paper, we propose a learning-based query-biased Web page summarization method. The summarization problem is solved within the typical sentence selection framework. Different from existing Web page summarization methods that use page content or link context alone, both are considered as sources of sentences in this work. Most existing learning-based summarization methods treat summarization as a sentence classification problem and train a classifier to discriminate between extracted and non-extracted sentences across all training documents. The basic assumption of these methods is that sentences from different documents are comparable with respect to the class information. In contrast to the classification scheme, a ranking scheme is introduced to rank extracted sentences higher than non-extracted sentences within each training document. The underlying assumption, that sentences within a document are comparable, is weaker and more reasonable than that of the classification-based scheme. Extensive results using intrinsic evaluation metrics gauge many aspects of the proposed method.
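The ranking scheme's weaker assumption shows up directly in how training examples are built: preference pairs are formed only between sentences of the same document, never across documents. A rough sketch (the data layout is an illustrative assumption, not the paper's actual representation):

```python
def pairwise_examples(doc):
    """Within-document preference pairs for a ranking-based summarizer.

    doc -- (list of sentence ids in document order,
            set of ids the human summarizer extracted)
    Each extracted sentence should outrank each non-extracted sentence
    of the SAME document; sentences across documents are never paired,
    unlike in a classification-based scheme.
    """
    sentences, extracted = doc
    return [(s, t) for s in sentences if s in extracted
                   for t in sentences if t not in extracted]

# Hypothetical document with three sentences, one of which was extracted.
doc = (["s1", "s2", "s3"], {"s2"})
pairs = pairwise_examples(doc)
```

A pairwise learner (e.g. a ranking SVM) trained on such pairs only ever compares sentences that share a document, so it needs no assumption that extraction labels mean the same thing across different pages.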


Multimedia Systems | 2008

Scalable search-based image annotation

Changhu Wang; Feng Jing; Lei Zhang; Hong-Jiang Zhang

With the popularity of digital cameras, more and more people have accumulated considerable digital images on their personal devices. As a result, there are increasing needs to effectively search these personal images. Automatic image annotation may serve this goal, for the annotated keywords could facilitate the search process. Although many image annotation methods have been proposed in recent years, their effectiveness on arbitrary personal images is constrained by their limited scalability, i.e., the limited lexicon of a small-scale training set. To be scalable, we propose a search-based image annotation algorithm that is analogous to information retrieval. First, content-based image retrieval technology is used to retrieve a set of visually similar images from a large-scale Web image set. Second, a text-based keyword search technique is used to obtain a ranked list of candidate annotations for each retrieved image. Third, a fusion algorithm is used to combine the ranked lists into a final candidate annotation list. Finally, the candidate annotations are re-ranked using Random Walk with Restarts and only the top ones are retained as the final annotations. The application of both efficient search techniques and a Web-scale image set guarantees the scalability of the proposed algorithm. Moreover, we provide an annotation rejection scheme to identify the images that our annotation system cannot handle well. Experimental results on the U. Washington dataset show not only the effectiveness and efficiency of the proposed algorithm but also the advantage of image retrieval using annotation results over that using visual features.


ACM Multimedia | 2013

Indexing billions of images for sketch-based retrieval

Xinghai Sun; Changhu Wang; Chao Xu; Lei Zhang

Because of the popularity of touch-screen devices, it has become a highly desirable feature to retrieve images from a huge repository by matching against a hand-drawn sketch. Although searching images via keywords or an example image has been successfully launched in some commercial search engines over billions of images, it is still very challenging for both academia and industry to develop a sketch-based image retrieval system on a billion-level database. In this work, we systematically study this problem and build a system to support query-by-sketch over two billion images. The raw edge pixel and Chamfer matching are selected as the basic representation and matching scheme in this system, owing to their superior performance compared with other methods in extensive experiments. To get a more compact feature and faster matching, a vector-like Chamfer feature pair is introduced, based on which the complex matching is reformulated as the crossover dot-product of feature pairs. Based on this new formulation, a compact shape code is developed to represent each image/sketch by projecting the Chamfer features to a linear subspace followed by non-linear source coding. Finally, multi-probe Kmedoids-LSH is leveraged to index the database images, and the compact shape codes are further used for fast reranking. Extensive experiments show the effectiveness of the proposed features and algorithms in building such a sketch-based image search system.
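Chamfer matching, the basic matcher chosen here, can be illustrated with a brute-force toy version. A production system like the one described precomputes distance transforms and, per the paper, reformulates the matching as a dot-product of compact feature pairs, neither of which this sketch attempts:

```python
def chamfer_distance(query_points, target_points):
    """Directed Chamfer distance: for every edge pixel of the query,
    the distance to the nearest edge pixel of the target, averaged.
    Brute-force nearest-point search with L1 distance; a real system
    precomputes a distance transform per image instead.
    """
    total = 0.0
    for qx, qy in query_points:
        total += min(abs(qx - tx) + abs(qy - ty)
                     for tx, ty in target_points)
    return total / len(query_points)

# Hypothetical edge maps: a sketch and two database images.
sketch  = [(0, 0), (1, 0), (2, 0)]
image_a = [(0, 0), (1, 0), (2, 0)]   # identical contour -> distance 0
image_b = [(0, 5), (1, 5), (2, 5)]   # same shape, shifted far away
```

The brute-force version costs O(|query| x |target|) per image pair, which is exactly why the paper's compact shape codes and LSH indexing are needed at billion scale.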


ACM Multimedia | 2012

Query-adaptive shape topic mining for hand-drawn sketch recognition

Zhenbang Sun; Changhu Wang; Liqing Zhang; Lei Zhang

In this work, we study the problem of hand-drawn sketch recognition. Due to the large intra-class variations present in hand-drawn sketches, most existing work was limited to a particular domain or a limited set of pre-defined classes. Different from existing work, we aim to develop a general sketch recognition system that can recognize any semantically meaningful object a child can recognize. To increase the recognition coverage, a web-scale clipart image collection is leveraged as the knowledge base of the recognition system. To alleviate the problems of intra-class shape variation and inter-class shape ambiguity in this unconstrained situation, a query-adaptive shape topic model is proposed to mine object topics and shape topics related to the sketch, in which multiple layers of information, such as sketch, object, shape, image, and semantic labels, are modeled in a generative process. Besides sketch recognition, the proposed topic model can also be used for related applications such as sketch tagging, image tagging, and sketch-based image search. Extensive experiments on different applications show the effectiveness of the proposed topic model and the recognition system.

Collaboration


Dive into Changhu Wang's collaborations.

Top Co-Authors

Liqing Zhang (Shanghai Jiao Tong University)

Yang Cao (Shanghai Jiao Tong University)

Changcheng Xiao (Shanghai Jiao Tong University)

Jie Wu (Shanghai Jiao Tong University)