Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Yuli Gao is active.

Publication


Featured research published by Yuli Gao.


IEEE Transactions on Image Processing | 2008

Integrating Concept Ontology and Multitask Learning to Achieve More Effective Classifier Training for Multilevel Image Annotation

Jianping Fan; Yuli Gao; Hangzai Luo

In this paper, we have developed a new scheme for achieving multilevel annotation of large-scale images automatically. To achieve a more complete representation of the various visual properties of the images, both global and local visual features are extracted for image content representation. To tackle the problem of huge intraconcept visual diversity, multiple types of kernels are integrated to characterize the diverse visual similarity relationships between the images more precisely, and a multiple kernel learning algorithm is developed for SVM image classifier training. To address the problem of huge interconcept visual similarity, a novel multitask learning algorithm is developed to learn the correlated classifiers for the sibling image concepts under the same parent concept and to enhance their discrimination and adaptation power significantly. To tackle the intraconcept visual diversity of the image concepts at the higher levels of the concept ontology, a novel hierarchical boosting algorithm is developed to learn their ensemble classifiers hierarchically. To assist users in selecting more effective hypotheses for image classifier training, we have developed a novel hyperbolic framework for large-scale image visualization and interactive hypothesis assessment. Our experiments on large-scale image collections have obtained very positive results.
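
As a rough illustration of the kernel-combination step described above, the sketch below sums two base kernels into a single precomputed similarity matrix and trains an SVM on it, using scikit-learn. The choice of RBF and chi-square base kernels, the fixed uniform weights, and the random toy features are illustrative assumptions; the paper learns the combination through its own multiple kernel learning algorithm.

```python
# Minimal mixture-of-kernels sketch: combine two base kernels and train an
# SVM on the precomputed combination (illustrative assumptions throughout).
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel, chi2_kernel
from sklearn.svm import SVC

def mixture_kernel(X, Y, weights=(0.5, 0.5)):
    """Weighted sum of an RBF kernel and a chi-square kernel.
    The chi-square kernel expects non-negative, histogram-like features."""
    return weights[0] * rbf_kernel(X, Y, gamma=0.5) + weights[1] * chi2_kernel(X, Y, gamma=1.0)

# Toy non-negative feature vectors standing in for image features and concept labels.
rng = np.random.default_rng(0)
X_train, y_train = rng.random((40, 16)), rng.integers(0, 2, 40)
X_test = rng.random((5, 16))

clf = SVC(kernel="precomputed", C=1.0)
clf.fit(mixture_kernel(X_train, X_train), y_train)     # train on the combined kernel
pred = clf.predict(mixture_kernel(X_test, X_train))    # test rows vs. training columns
```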


ACM Multimedia | 2004

Multi-level annotation of natural scenes using dominant image components and semantic concepts

Jianping Fan; Yuli Gao; Hangzai Luo

Automatic image annotation is a promising solution to enable semantic image retrieval via keywords. In this paper, we propose a multi-level approach to annotate the semantics of natural scenes by using both the dominant image components (salient objects) and the relevant semantic concepts. To achieve automatic image annotation at the content level, we use salient objects as the dominant image components for image content representation and feature extraction. To support automatic image annotation at the concept level, a novel image classification technique is developed to map the images into the most relevant semantic image concepts. In addition, Support Vector Machine (SVM) classifiers are used to learn the detection functions for the pre-defined salient objects, and finite mixture models are used for semantic concept interpretation and modeling. An adaptive EM algorithm is proposed to determine the optimal model structure and model parameters simultaneously. We have also demonstrated that our algorithms are very effective in enabling multi-level annotation of natural scenes in a large-scale image dataset.
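
The adaptive EM idea can be approximated with a simple stand-in: fit finite Gaussian mixtures with standard EM for several candidate component counts and keep the model with the lowest BIC. This scikit-learn sketch is only a simplification under that assumption; the paper's adaptive EM adjusts the model structure within a single run rather than by exhaustive BIC comparison.

```python
# Stand-in for adaptive EM: EM over several component counts, pick lowest BIC.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Toy features for one semantic concept (e.g. pooled salient-object features).
features = np.vstack([rng.normal(0, 1, (60, 8)), rng.normal(3, 1, (40, 8))])

best_model, best_bic = None, np.inf
for n_components in range(1, 6):
    gmm = GaussianMixture(n_components=n_components, covariance_type="full",
                          random_state=0).fit(features)
    bic = gmm.bic(features)
    if bic < best_bic:                      # keep the structure with the best BIC
        best_model, best_bic = gmm, bic

print(best_model.n_components, best_bic)
```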


ACM Multimedia | 2006

Automatic image annotation by incorporating feature hierarchy and boosting to scale up SVM classifiers

Yuli Gao; Jianping Fan; Xiangyang Xue; Ramesh Jain

The performance of image classifiers largely depends on two inter-related issues: (1) suitable frameworks for image content representation and automatic feature extraction; (2) effective algorithms for image classifier training and feature subset selection. To address the first issue, a multiresolution grid-based framework is proposed for image content representation and feature extraction to bypass the time-consuming and error-prone process of image segmentation. To address the second issue, a hierarchical boosting algorithm is proposed by incorporating feature hierarchy and boosting to scale up SVM image classifier training in high-dimensional feature space. The high-dimensional, multi-modal, heterogeneous visual features are partitioned into multiple low-dimensional, single-modal, homogeneous feature subsets, each of which characterizes a certain visual property of the images. For each homogeneous feature subset, principal component analysis (PCA) is performed to exploit the feature correlations, and a weak classifier is learned simultaneously. After the weak classifiers for the different feature subsets and grid sizes are available, they are combined to boost an optimal classifier for the given object class or image concept, and the most representative feature subsets and grid sizes are selected. Our experiments on a specific domain of natural images have obtained very positive results.
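
A much-simplified sketch of the feature-hierarchy idea: the heterogeneous feature vector is split into homogeneous subsets (two hypothetical slices standing in for colour and texture channels), PCA is applied to each subset, one weak SVM is trained per subset, and the weak classifiers are combined with accuracy-based weights. The single-round weighted vote, the subset layout, and the toy data are assumptions; the paper's hierarchical boosting performs proper boosting across feature subsets and grid sizes.

```python
# Feature subsets -> PCA -> weak SVMs -> weighted combination (simplified sketch).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X, y = rng.random((80, 24)), rng.integers(0, 2, 80)
subsets = {"colour": slice(0, 12), "texture": slice(12, 24)}   # assumed feature layout

weak_learners, weights = {}, {}
for name, cols in subsets.items():
    pca = PCA(n_components=4).fit(X[:, cols])                  # exploit within-subset correlations
    clf = SVC(kernel="rbf", probability=True).fit(pca.transform(X[:, cols]), y)
    weak_learners[name] = (pca, clf)
    weights[name] = clf.score(pca.transform(X[:, cols]), y)    # crude stand-in for a boosting weight

def predict(X_new):
    """Weighted vote of the per-subset weak classifiers."""
    votes = np.zeros(len(X_new))
    for name, (pca, clf) in weak_learners.items():
        proba = clf.predict_proba(pca.transform(X_new[:, subsets[name]]))[:, 1]
        votes += weights[name] * proba
    return (votes / sum(weights.values()) > 0.5).astype(int)

print(predict(rng.random((5, 24))))
```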


IEEE Transactions on Multimedia | 2008

Mining Multilevel Image Semantics via Hierarchical Classification

Jianping Fan; Yuli Gao; Hangzai Luo; Ramesh Jain

In this paper, we propose a novel framework for mining multilevel image semantics via hierarchical classification. To bridge the semantic gap more successfully, salient objects are used to characterize the intermediate image semantics effectively. The salient objects are defined as the connected image regions that capture the dominant visual properties linked to the corresponding physical objects in an image. To achieve more reliable and tractable concept learning in high-dimensional feature space, a novel algorithm called product of mixture-experts (PoM) is proposed to reduce the number of training images and speed up concept learning. A novel hierarchical concept learning algorithm is proposed by incorporating concept ontology and multitask learning to enhance the discrimination power of the concept models and to reduce the computational complexity of learning the concept models for a large number of image concepts, which may have huge intra-concept variations and inter-concept similarities in their visual properties. A hyperbolic image visualization algorithm has been developed to allow users to specify their queries easily and assess the query results interactively. Our experiments on large-scale image collections have obtained very positive results.
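
The hierarchical part of the concept learning can be pictured with a small sketch: one classifier per internal node of a concept ontology chooses among its child concepts, and an image is labelled by descending from the root. The toy two-level ontology, the logistic-regression node classifiers, and the random features are illustrative assumptions and do not reproduce the paper's multitask formulation or the PoM algorithm.

```python
# Hierarchical classification over a toy concept ontology (illustrative sketch).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)

# Hypothetical ontology: root -> {nature, urban}, nature -> {beach, forest}.
ontology = {"root": ["nature", "urban"], "nature": ["beach", "forest"]}

def descendants(node):
    """All concepts below `node`, including its direct children."""
    kids = ontology.get(node, [])
    found = set(kids)
    for k in kids:
        found |= descendants(k)
    return found

X = rng.random((120, 10))
leaves = rng.choice(["beach", "forest", "urban"], size=120)    # toy leaf-level labels

node_clf = {}
for node, children in ontology.items():
    # Keep the samples under this node and relabel them by the child subtree they fall in.
    idx = [i for i, lf in enumerate(leaves) if lf in descendants(node)]
    y = [next(c for c in children if leaves[i] == c or leaves[i] in descendants(c))
         for i in idx]
    node_clf[node] = LogisticRegression(max_iter=500).fit(X[idx], y)

def classify(x):
    node = "root"
    while node in ontology:                                    # descend until a leaf concept
        node = node_clf[node].predict(x.reshape(1, -1))[0]
    return node

print(classify(rng.random(10)))
```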


Pattern Recognition | 2005

Statistical modeling and conceptualization of natural images

Jianping Fan; Yuli Gao; Hangzai Luo; Guangyou Xu

Multi-level annotation of images is a promising solution to enable semantic image retrieval by using various keywords at different semantic levels. In this paper, we propose a multi-level approach to interpret and annotate the semantics of natural images by using both the dominant image components and the relevant semantic image concepts. In contrast to the well-known image-based and region-based approaches, we use the concept-sensitive salient objects as the dominant image components to achieve automatic image annotation at the content level. By using the concept-sensitive salient objects for image content representation and feature extraction, a novel image classification technique is developed to achieve automatic image annotation at the concept level. To detect the concept-sensitive salient objects automatically, a set of detection functions is learned from the labeled image regions by using support vector machine (SVM) classifiers with an automatic scheme for searching the optimal model parameters. To generate the semantic image concepts, finite mixture models are used to approximate the class distributions of the relevant concept-sensitive salient objects. An adaptive EM algorithm is proposed to determine the optimal model structure and model parameters simultaneously. In addition, a large number of unlabeled samples are integrated with a limited number of labeled samples to achieve more effective classifier training and knowledge discovery. We have also demonstrated that our algorithms are very effective in enabling multi-level interpretation and annotation of natural images.
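
The "automatic scheme for searching the optimal model parameters" can be illustrated with a standard cross-validated grid search over the SVM's C and gamma for a single salient-object detector; the parameter grid, the toy region features, and the use of scikit-learn's GridSearchCV are assumptions made for the sketch.

```python
# Cross-validated parameter search for one salient-object detection function.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(7)
region_features = rng.random((100, 12))          # toy labeled image-region features
is_salient_object = rng.integers(0, 2, 100)      # e.g. "sky" region vs. not

search = GridSearchCV(SVC(kernel="rbf"),
                      param_grid={"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]},
                      cv=5)
search.fit(region_features, is_salient_object)
detector = search.best_estimator_                # detection function for this salient object
print(search.best_params_)
```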


International ACM SIGIR Conference on Research and Development in Information Retrieval | 2004

Automatic image annotation by using concept-sensitive salient objects for image content representation

Jianping Fan; Yuli Gao; Hangzai Luo; Guangyou Xu

Multi-level annotation of images is a promising solution to enable more effective semantic image retrieval by using various keywords at different semantic levels. In this paper, we propose a multi-level approach to annotate the semantics of natural scenes by using both the dominant image components and the relevant semantic concepts. In contrast to the well-known image-based and region-based approaches, we use the salient objects as the dominant image components to achieve automatic image annotation at the content level. By using the salient objects for image content representation, a novel image classification technique is developed to achieve automatic image annotation at the concept level. To detect the salient objects automatically, a set of detection functions is learned from the labeled image regions by using Support Vector Machine (SVM) classifiers with an automatic scheme for searching the optimal model parameters. To generate the semantic concepts, finite mixture models are used to approximate the class distributions of the relevant salient objects. An adaptive EM algorithm is proposed to determine the optimal model structure and model parameters simultaneously. We have also demonstrated that our algorithms are very effective in enabling multi-level annotation of natural scenes in a large-scale dataset.


Computer Vision and Pattern Recognition | 2010

Harvesting large-scale weakly-tagged image databases from the web

Jianping Fan; Yi Shen; Ning Zhou; Yuli Gao

To leverage large-scale weakly-tagged images for computer vision tasks such as object detection and scene recognition, a novel cross-modal tag cleansing and junk image filtering algorithm is developed for cleansing the weakly-tagged images and their social tags (i.e., removing irrelevant images and finding the most relevant tags for each image) by integrating both the visual similarity contexts between the images and the semantic similarity contexts between their tags. Our algorithm can address the issues of spam, polysemous tags, and synonymous tags more effectively and can determine the relevance between the images and their social tags more precisely. It therefore allows us to harvest large amounts of training images with more reliable labels from large-scale weakly-tagged image collections, which can in turn be used to achieve more effective classifier training for many computer vision tasks.
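
A toy sketch of the cross-modal idea: a tag attached to an image is scored higher both when the image's visual nearest neighbours also carry that tag and when the tag co-occurs with the image's other tags across the collection. The additive fusion, the tiny hand-made tag sets, and all thresholds are illustrative assumptions rather than the paper's cleansing algorithm.

```python
# Toy cross-modal tag scoring: visual-neighbour agreement + tag co-occurrence.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(3)
features = rng.random((6, 8))                       # toy visual features, one row per image
tags = [{"beach", "sea"}, {"sea", "sky"}, {"beach"},
        {"car"}, {"car", "road"}, {"road"}]

nn = NearestNeighbors(n_neighbors=3).fit(features)
_, neigh = nn.kneighbors(features)                  # first neighbour is the image itself

def cooccurrence(tag, other_tags):
    """Fraction of images carrying `tag` that also carry any of `other_tags`."""
    carriers = [t for t in tags if tag in t]
    if not carriers or not other_tags:
        return 0.0
    return sum(bool(t & other_tags) for t in carriers) / len(carriers)

for i, image_tags in enumerate(tags):
    for tag in image_tags:
        visual = sum(tag in tags[j] for j in neigh[i][1:]) / (len(neigh[i]) - 1)
        semantic = cooccurrence(tag, image_tags - {tag})
        score = 0.5 * visual + 0.5 * semantic       # high score = keep the tag for this image
        print(i, tag, round(score, 2))
```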


IEEE Transactions on Circuits and Systems for Video Technology | 2009

JustClick: Personalized Image Recommendation via Exploratory Search from Large-Scale Flickr Images

Jianping Fan; Daniel A. Keim; Yuli Gao; Hangzai Luo; Zongmin Li

In this paper, we have developed a novel framework called JustClick to enable personalized image recommendation via exploratory search from large-scale collections of Flickr images. First, a topic network is automatically generated to summarize large-scale collections of Flickr images at a semantic level. Hyperbolic visualization is further used to enable interactive navigation and exploration of the topic network, so that users can gain insight into large-scale image collections at first glance, build up their mental query models interactively, and specify their queries (i.e., image needs) more precisely by selecting image topics on the topic network directly. Thus, our personalized query recommendation framework can effectively address both the problem of query formulation and the problems of vocabulary discrepancy and null returns. Second, a small set of the most representative images is recommended for a given image topic according to their representativeness scores. Kernel principal component analysis and hyperbolic visualization are seamlessly integrated to organize and lay out the recommended images (i.e., the most representative images) according to their nonlinear visual similarity contexts, so that users can interactively assess the relevance between the recommended images and their real query intentions. An interactive interface is implemented to allow users to express their time-varying query intentions precisely and to direct our JustClick system to more relevant images according to their personal preferences. Our experiments on large-scale collections of Flickr images show very positive results.
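
A minimal sketch of the layout step: kernel PCA projects the recommended images' features to two dimensions so that visually similar images land near each other. Mapping those coordinates onto the hyperbolic (Poincare disk) display that JustClick uses is omitted, and the RBF kernel, its gamma value, and the toy features are assumptions.

```python
# Kernel PCA layout of recommended images by nonlinear visual similarity.
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(4)
recommended = rng.random((20, 32))     # toy features of the recommended images

layout = KernelPCA(n_components=2, kernel="rbf", gamma=0.1).fit_transform(recommended)
print(layout[:3])                      # 2-D coordinates used for the display
```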


Multimedia Information Retrieval | 2008

A novel approach to enable semantic and visual image summarization for exploratory image search

Jianping Fan; Yuli Gao; Hangzai Luo; Daniel A. Keim; Zongmin Li

In this paper, we have developed a novel scheme that incorporates a topic network and representativeness-based sampling to achieve semantic and visual summarization and visualization of large-scale collections of Flickr images. First, a topic network is automatically generated to summarize and visualize large-scale collections of Flickr images at a semantic level, so that users can select more suitable keywords for more precise query formulation. Second, the diverse visual similarities between semantically-similar images are characterized more precisely by using a mixture-of-kernels, and a representativeness-based image sampling algorithm is developed to achieve similarity-based summarization and visualization of large amounts of images under the same topic, so that users can find particular images of interest more effectively. Our experiments on large-scale image collections with diverse semantics have provided very positive results.
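
A small sketch of representativeness-based sampling under one topic: each image is scored by its average mixture-of-kernels similarity to the other images of the topic, and the top-scoring images form the summary. The two fixed kernel weights, the particular base kernels, and the toy features are assumptions standing in for the learned mixture-of-kernels.

```python
# Representativeness scoring by average mixture-of-kernels similarity.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel, laplacian_kernel

rng = np.random.default_rng(5)
topic_images = rng.random((30, 16))    # toy features of the images under one topic

K = (0.5 * rbf_kernel(topic_images, gamma=0.5)
     + 0.5 * laplacian_kernel(topic_images, gamma=0.5))
np.fill_diagonal(K, 0.0)               # ignore self-similarity
representativeness = K.mean(axis=1)    # average similarity to the rest of the topic

summary = np.argsort(representativeness)[::-1][:5]   # indices of the 5 most representative images
print(summary)
```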


Computer Vision and Pattern Recognition | 2009

Efficient multi-label classification with hypergraph regularization

Gang Chen; Jianwen Zhang; Fei Wang; Changshui Zhang; Yuli Gao

Many computer vision applications, such as image classification and video indexing, are multi-label classification problems in which an instance can be assigned to more than one category. In this paper, we present a novel multi-label classification approach with hypergraph regularization that addresses the correlations among different categories. First, a hypergraph is constructed to capture the correlations among the categories, in which each vertex represents one training instance and the hyperedge for each category contains all the instances belonging to that category. Then, an improved SVM-like learning system incorporating the hypergraph regularization, called Rank-HLapSVM, is proposed to handle multi-label classification problems. We find that the corresponding optimization problem can be efficiently solved by the dual coordinate descent method. Promising experimental results on real datasets, including ImageCLEF and MediaMill, demonstrate the effectiveness and efficiency of the proposed algorithm.
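
The hypergraph construction described above can be written out compactly: vertices are training instances, one hyperedge per category collects all instances carrying that label, and the normalized hypergraph Laplacian (in the standard Zhou-style form) serves as the regularizer. The unit hyperedge weights and the toy label matrix are assumptions, and the Rank-HLapSVM optimization with its dual coordinate descent solver is not reproduced here.

```python
# Hypergraph incidence matrix and normalized hypergraph Laplacian (sketch).
import numpy as np

# Toy multi-label matrix: rows = training instances, columns = categories.
Y = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 0],
              [0, 1, 1]], dtype=float)

H = Y                                   # incidence matrix: vertex x hyperedge
w = np.ones(H.shape[1])                 # unit hyperedge weights (assumption)
d_v = H @ w                             # vertex degrees
d_e = H.sum(axis=0)                     # hyperedge degrees (category sizes)

Dv_inv_sqrt = np.diag(1.0 / np.sqrt(d_v))
De_inv = np.diag(1.0 / d_e)
W = np.diag(w)

# Normalized hypergraph Laplacian used as the smoothness regularizer.
L = np.eye(H.shape[0]) - Dv_inv_sqrt @ H @ W @ De_inv @ H.T @ Dv_inv_sqrt
print(np.round(L, 2))
```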

Collaboration


Dive into Yuli Gao's collaborations.

Top Co-Authors

Jianping Fan (University of North Carolina at Charlotte)
Hangzai Luo (East China Normal University)
Chunlei Yang (University of North Carolina at Charlotte)
Yi Shen (University of North Carolina at Charlotte)
Ramesh Jain (University of California)