Quannan Li
University of California, Los Angeles
Publication
Featured research published by Quannan Li.
computer vision and pattern recognition | 2013
Quannan Li; Jiajun Wu; Zhuowen Tu
Obtaining effective mid-level representations has become an increasingly important task in computer vision. In this paper, we propose a fully automatic algorithm that harvests visual concepts from a large number of Internet images (more than a quarter of a million) using text-based queries. Existing approaches to visual concept learning from Internet images either rely on strong supervision with detailed manual annotations or learn image-level classifiers only. Here, we take advantage of massive, well-organized Google and Bing image data: around 14,000 visual concepts are automatically harvested from images retrieved with word-based queries. Using the learned visual concepts, we show state-of-the-art performance on a variety of benchmark datasets, demonstrating the effectiveness of the learned mid-level representations and their ability to generalize well to general natural images. Our method shows significant improvement over competing systems in image classification, including those with strong supervision.
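A minimal sketch of how such harvested concepts could serve as a mid-level representation: each concept is a classifier trained on images returned by one text query, and a new image is encoded by the vector of classifier responses. The linear-SVM choice, the feature inputs, and all names here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.svm import LinearSVC

def train_concepts(query_features, background):
    """Hypothetical: one linear classifier per text query ('concept').
    query_features maps query -> array of low-level features of images
    returned for that query; background holds negative examples."""
    concepts = []
    for feats in query_features.values():
        X = np.vstack([feats, background])
        y = np.r_[np.ones(len(feats)), np.zeros(len(background))]
        concepts.append(LinearSVC(C=1.0).fit(X, y))
    return concepts

def encode(image_feature, concepts):
    """Mid-level representation: the vector of concept classifier responses."""
    return np.array([c.decision_function(image_feature[None])[0]
                     for c in concepts])
```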
computer vision and pattern recognition | 2009
Xiang Bai; Quannan Li; Longin Jan Latecki; Wenyu Liu; Zhuowen Tu
In this paper, we focus on the problem of detecting/matching a query object in a given image. We propose a new algorithm, shape band, which models an object within a bandwidth of its sketch/contour. The features associated with each point on the sketch are the gradients within the bandwidth. In the detection stage, the algorithm simply scans the input image at various locations and scales for good candidates. We then perform fine-scale shape matching to locate the precise object boundaries, again taking advantage of the information from the shape band. The overall algorithm is very easy to implement, and our experimental results show that it can outperform state-of-the-art contour-based object detection algorithms.
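The detection stage can be read as scanning candidate placements and scoring how much image gradient falls inside a band around the template contour. A rough single-scale sketch under that reading (the scoring rule, band radius, and scan step are assumptions; the paper also scans over scales):

```python
import numpy as np

def shape_band_score(grad_mag, contour_pts, offset, band=5):
    """Score one candidate placement: accumulate gradient magnitude inside
    a band of radius `band` pixels around the shifted template contour."""
    h, w = grad_mag.shape
    score = 0.0
    for y, x in contour_pts + offset:
        y0, y1 = max(0, y - band), min(h, y + band + 1)
        x0, x1 = max(0, x - band), min(w, x + band + 1)
        score += grad_mag[y0:y1, x0:x1].sum()
    return score

def detect(image, contour_pts, step=8):
    """Scan the image on a coarse grid and keep the best-scoring placement."""
    gy, gx = np.gradient(image.astype(float))
    grad_mag = np.hypot(gx, gy)
    best, best_offset = -np.inf, None
    for y in range(0, image.shape[0], step):
        for x in range(0, image.shape[1], step):
            s = shape_band_score(grad_mag, contour_pts, np.array([y, x]))
            if s > best:
                best, best_offset = s, (y, x)
    return best_offset, best
```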
international conference on image processing | 2007
Longin Jan Latecki; Quannan Li; Xiang Bai; Wenyu Liu
This paper proposes a new approach to skeletonization based on a skeleton strength map (SSM) calculated from the Euclidean distance transform of a binary image. After the distance transform and its gradient are computed, isotropic diffusion is performed on the gradient vector field, and the skeleton strength map is computed from the diffused vector field. A set of critical points is then selected from the local maxima of the SSM; these critical points lie on significant visual parts of the object. The skeleton is obtained by connecting the critical points with geodesic paths. This approach overcomes intrinsic drawbacks of distance-transform-based skeletons, since it yields stable and connected skeletons without losing significant visual parts.
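A compact sketch of the pipeline as described: distance transform, gradient, isotropic diffusion, then a strength map whose local maxima give the critical points. Approximating the diffusion with Gaussian smoothing and using the negative divergence of the diffused field as the strength measure are my assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy import ndimage

def skeleton_strength_map(binary, diffusion_sigma=2.0):
    """EDT -> gradient -> isotropic diffusion -> skeleton strength (sketch)."""
    dist = ndimage.distance_transform_edt(binary)
    gy, gx = np.gradient(dist)
    # Isotropic diffusion approximated by Gaussian smoothing of each component.
    gy = ndimage.gaussian_filter(gy, diffusion_sigma)
    gx = ndimage.gaussian_filter(gx, diffusion_sigma)
    # Skeleton pixels behave like sinks of the field: negative divergence is large.
    div = np.gradient(gy, axis=0) + np.gradient(gx, axis=1)
    return np.clip(-div, 0, None)

def critical_points(ssm, threshold=0.3):
    """Local maxima of the SSM above a relative threshold."""
    maxed = ndimage.maximum_filter(ssm, size=3)
    return np.argwhere((ssm == maxed) & (ssm > threshold * ssm.max()))
```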
international conference on computer vision | 2011
Yi Hong; Quannan Li; Jiayan Jiang; Zhuowen Tu
This paper extends the neighborhood components analysis (NCA) method to learning a mixture of sparse distance metrics for classification and dimensionality reduction. We emphasize two properties prominent in the recent learning literature, locality and sparsity, and (1) pursue a set of local distance metrics by maximizing a conditional likelihood of the observed data; and (2) add the l1-norm of the eigenvalues of each distance metric to favor low-rank matrices with fewer parameters. Experimental results on standard UCI machine learning datasets, face recognition datasets, and image categorization datasets demonstrate the feasibility of our approach for both distance metric learning and dimensionality reduction.
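Read literally, the per-metric objective is the standard NCA conditional likelihood with an l1 penalty on the eigenvalues of the metric (which equals the trace for a positive semidefinite matrix). The following is a hedged reconstruction; the mixture weighting over the local metrics is omitted.

```latex
% NCA neighbor probabilities under a p.s.d. metric M, plus an l1 (trace)
% penalty on M's eigenvalues; C_i is the set of points sharing i's label.
\[
p_{ij} \;=\; \frac{\exp\!\big(-(x_i - x_j)^\top M (x_i - x_j)\big)}
                  {\sum_{k \neq i} \exp\!\big(-(x_i - x_k)^\top M (x_i - x_k)\big)},
\qquad
\max_{M \succeq 0}\;\; \sum_i \log \sum_{j \in \mathcal{C}_i} p_{ij}
\;-\; \lambda \sum_r \big|\sigma_r(M)\big|
\]
```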
international conference on image processing | 2008
Quannan Li; Xiang Bai; Wenyu Liu
Skeletonization of gray-scale images is a challenging problem in computer vision because segmenting a gray-scale image to obtain a complete contour is difficult. In contrast to previous skeletonization algorithms, which use computational devices to avoid segmentation, this paper shows that gray-scale images can be skeletonized directly from their boundaries. We start from the boundaries of the gray-scale image and perform a Euclidean distance transform on them. We then compute the gradient of the distance transform and perform isotropic vector diffusion. After diffusion, the Skeleton Strength Map (SSM) is computed, and the skeleton is extracted from the SSM. Experiments show that this method performs well as long as the major boundary segments are preserved.
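The key difference from the binary case is that the distance transform is seeded from detected boundary pixels rather than a segmented mask. A rough sketch, assuming a Canny detector supplies the boundaries and reusing the same diffusion/SSM approximations as above:

```python
import numpy as np
from scipy import ndimage
from skimage import feature

def ssm_from_gray(image, edge_sigma=2.0, diffusion_sigma=2.0):
    """Distance transform measured from detected boundary pixels, followed by
    the same gradient -> diffusion -> strength-map steps (sketch)."""
    edges = feature.canny(image, sigma=edge_sigma)
    # Distance from every pixel to the nearest boundary (edge) pixel.
    dist = ndimage.distance_transform_edt(~edges)
    gy, gx = np.gradient(dist)
    gy = ndimage.gaussian_filter(gy, diffusion_sigma)
    gx = ndimage.gaussian_filter(gx, diffusion_sigma)
    div = np.gradient(gy, axis=0) + np.gradient(gx, axis=1)
    return np.clip(-div, 0, None)
```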
pacific-rim symposium on image and video technology | 2009
Chengqian Wu; Xiang Bai; Quannan Li; Xingwei Yang; Wenyu Liu
In this paper, a novel algorithm is introduced for grouping contours in cluttered images by integrating high-level information (priors on part segments) and low-level information (paths along the edges of segmentations of the cluttered image). The partial shape similarity between these two levels of information is embedded into the particle filter framework, an effective recursive estimation model. The particles in the framework are modeled as paths on the edges of the segmentation result (Normalized Cuts in this paper). At the prediction step, the paths extend along the edges of the Normalized Cuts; at the update step, the particle weights are updated according to their partial shape similarity with the priors of the trained contour segments. Successful results are achieved despite noise in the test image, inaccuracy of the segmentation result, and inexactness of the similarity between the contour segments and segmentation edges. The experimental results also demonstrate robust contour grouping in the presence of occlusion and large texture variation within the segmented objects.
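The recursive structure maps onto a standard sequential importance resampling loop. A schematic sketch, in which `extend_path` (prediction along segmentation edges) and `shape_similarity` (partial matching against trained part priors) are hypothetical stand-ins for the paper's components:

```python
import numpy as np

def particle_filter_grouping(init_paths, extend_path, shape_similarity,
                             priors, n_steps=10):
    """Schematic SIR loop: extend each path particle along segmentation
    edges, re-weight by partial shape similarity to the part priors,
    then resample proportionally to the weights."""
    paths = list(init_paths)
    weights = np.full(len(paths), 1.0 / len(paths))
    for _ in range(n_steps):
        # Prediction: grow each path along the edges of the segmentation.
        paths = [extend_path(p) for p in paths]
        # Update: weight by partial shape similarity to the trained priors.
        weights *= np.array([shape_similarity(p, priors) for p in paths])
        weights /= weights.sum()
        # Resample to avoid weight degeneracy.
        idx = np.random.choice(len(paths), size=len(paths), p=weights)
        paths = [paths[i] for i in idx]
        weights = np.full(len(paths), 1.0 / len(paths))
    return paths
```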
computer vision and pattern recognition | 2012
Quannan Li; Cong Yao; Liwei Wang; Zhuowen Tu
Codebook learning is one of the central research topics in computer vision and machine learning. In this paper, we propose a new codebook learning algorithm, Randomized Forest Sparse Coding (RFSC), by harvesting the following three concepts: (1) ensemble learning, (2) divide-and-conquer, and (3) sparse coding. Given a set of training data, a randomized tree can be used to perform data partition (divide-and-conquer); after a tree is built, a number of bases are learned from the data within each leaf node for a sparse representation (subspace learning via sparse coding); multiple trees with diversity are trained (ensemble), and the collection of bases from these trees constitutes the codebook. These three concepts in our codebook learning algorithm serve the same goal but with different emphases: subspace learning via sparse coding yields a compact representation and reduces information loss; the divide-and-conquer process efficiently obtains local data clusters; and an ensemble of diverse trees provides additional robustness. We have conducted classification experiments on cancer images as well as a variety of natural image datasets, and the experimental results demonstrate the efficiency and effectiveness of the proposed method.
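A minimal sketch of the three ingredients as described: randomized trees partition the data, a small dictionary is learned in each leaf, and the union of leaf dictionaries across trees forms the codebook. Using scikit-learn's ExtraTrees and MiniBatchDictionaryLearning here is an assumption for illustration, not the paper's exact components.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.decomposition import MiniBatchDictionaryLearning

def rfsc_codebook(X, y, n_trees=5, n_atoms=8):
    """Sketch: divide-and-conquer via randomized trees, sparse coding per
    leaf, ensemble of all leaf dictionaries as the final codebook."""
    forest = ExtraTreesClassifier(n_estimators=n_trees, max_depth=4).fit(X, y)
    leaf_ids = forest.apply(X)                      # (n_samples, n_trees)
    codebook = []
    for t in range(n_trees):
        for leaf in np.unique(leaf_ids[:, t]):
            Xl = X[leaf_ids[:, t] == leaf]
            if len(Xl) < n_atoms:
                continue
            dl = MiniBatchDictionaryLearning(n_components=n_atoms).fit(Xl)
            codebook.append(dl.components_)         # bases from this leaf
    return np.vstack(codebook)
```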
Archive | 2012
Xiang Bai; Quannan Li; Tianyang Ma; Wenyu Liu; Longin Jan Latecki
In this chapter, we introduce a new binarization method for gray-level images. We first extract skeleton curves from the Canny edge image. Second, a Skeleton Strength Map (SSM) is calculated from the Euclidean distance transform: starting from the boundary edges, the distance transform is computed and its gradient vector field is calculated; isotropic diffusion is then performed on the gradient vector field, and the SSM is computed from the diffused field. The SSM has two properties that make it useful for skeletonization: 1) it acts as a likelihood of a pixel being a skeleton point, taking large values on the skeleton and decaying quickly at pixels away from the center of the object; 2) by computing the SSM from the distance transform, a parameterized degree of noise smoothing is obtained. Skeleton curves are then classified into foreground and background classes by comparing the mean intensity of their local edge pixels with the lowest intensity among their neighbors. Finally, the binarization result is obtained by taking the foreground skeleton curve pixels as seed points and applying a region-growing algorithm on the gray-scale image with suitable growing criteria. Images with different types of document components and degradations are used to test the effectiveness of the proposed algorithm. The results demonstrate that the method performs well on images with low contrast, noise, and non-uniform illumination.
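The final step is an ordinary region growing from the foreground skeleton seeds. A sketch of that step alone; the intensity-tolerance criterion used here is an assumption, and the chapter's actual growing rule may differ:

```python
import numpy as np
from collections import deque

def region_grow(gray, seeds, tol=15):
    """Grow the foreground from skeleton-curve seed pixels, admitting
    4-neighbors whose intensity is within `tol` of the seeds' mean."""
    h, w = gray.shape
    fg = np.zeros((h, w), dtype=bool)
    queue = deque(map(tuple, seeds))
    for y, x in queue:
        fg[y, x] = True
    mean = gray[fg].mean()
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not fg[ny, nx] \
                    and abs(float(gray[ny, nx]) - mean) <= tol:
                fg[ny, nx] = True
                queue.append((ny, nx))
    return fg
```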
international conference on machine learning | 2013
Quannan Li; Jingdong Wang; David P. Wipf; Zhuowen Tu
Archive | 2010
Xiang Bai; Hairong Liu; Wenyu Liu; Quannan Li