Liman Liu
South Central University for Nationalities
Publications
Featured research published by Liman Liu.
Information Sciences | 2014
Wenbing Tao; Yicong Zhou; Liman Liu; Kunqian Li; Kun Sun; Zhiguo Zhang
In this paper we present a new Spatial Adjacent Bag of Features (SABOF) model, in which spatial information is effectively integrated into the traditional BOF model to enhance scene and object recognition performance. The SABOF model chooses the frequency of each keyword and the largest frequency of its neighboring pairs to construct the feature histogram. Using a feature histogram whose dimension is only twice that of the original BOF model, the SABOF model drastically enhances discrimination performance. Combining the Superpixel Adjacent Histogram (SAH) (Fulkerson et al., 2009 [12]) with multiple segmentations (Pantofaru et al., 2008 [33]; Russell et al., 2006 [36]), the SABOF method effectively deals with the segmentation and classification of objects of different sizes. By changing the segmentation scale parameter to obtain multiple superpixel segmentations and correspondingly adjusting the neighbor parameters of the SAH method, multiple classifiers are trained, so that the SABOF method can fuse the results of these classifiers to obtain better classification performance than any single classifier. A superpixel-based conditional random field (CRF) is used to further improve the classification performance. Experimental results on scene classification and on object recognition and localization on classical data sets demonstrate the performance of the proposed model and algorithm.
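The histogram construction described in the abstract can be sketched as follows. This is a minimal illustration with a hypothetical interface (`words`, `neighbors`, and `K` are assumed inputs, not the paper's notation): the first K bins hold ordinary BOF word frequencies, and the second K bins hold, per word, the largest frequency among the adjacent word pairs it participates in.

```python
import numpy as np

def sabof_histogram(words, neighbors, K):
    """Sketch of a SABOF-style 2K feature histogram (hypothetical interface).

    words     : sequence of visual-word ids, one per local feature
    neighbors : list of (i, j) index pairs of spatially adjacent features
    K         : vocabulary size
    """
    hist = np.zeros(2 * K)
    # First half: the usual BOF word frequencies.
    for w in words:
        hist[w] += 1
    # Count frequencies of adjacent word pairs (order-insensitive).
    pair_freq = {}
    for i, j in neighbors:
        pair = (min(words[i], words[j]), max(words[i], words[j]))
        pair_freq[pair] = pair_freq.get(pair, 0) + 1
    # Second half: per word, the largest neighboring-pair frequency.
    for (a, b), f in pair_freq.items():
        hist[K + a] = max(hist[K + a], f)
        hist[K + b] = max(hist[K + b], f)
    return hist
```

Since the histogram is only 2K-dimensional, it plugs into the same classifiers as a plain BOF histogram.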
Signal Processing | 2014
Xiangli Liao; Hongbo Xu; Yicong Zhou; Kunqian Li; Wenbing Tao; Qiuju Guo; Liman Liu
In this paper, a new unsupervised segmentation method is proposed. The method integrates the star shape prior of the image object with a salient point detection algorithm. Harris salient point detection is first applied to the color image to obtain the initial salient points. A regional-contrast-based saliency extraction method is then used to select rough object regions in the image. To restrict the distribution of salient points, adaptive threshold segmentation is applied to the saliency map to obtain a saliency mask, and the salient region points are obtained by placing the saliency mask over the initial Harris salient points. To ensure that the selected salient points lie inside the image object, so that the star shape constraint can be applied to the graph cuts segmentation, Affinity Propagation (AP) clustering is employed to find the salient key points among the salient region points. Finally, these salient key points are regarded as foreground seeds and the star shape prior is introduced into the graph cuts segmentation framework to extract the foreground object. Extensive experiments and comparisons on public databases demonstrate the good performance of the proposed method.
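The masking step in the pipeline above can be sketched in a few lines. This is an assumption-laden illustration, not the paper's implementation: `ratio` is a hypothetical adaptive-threshold parameter, and the Harris corners and saliency map are taken as given.

```python
import numpy as np

def salient_region_points(points, saliency, ratio=1.5):
    """Filter initial Harris points with an adaptive-threshold saliency mask.

    points   : (N, 2) integer (row, col) coordinates of Harris corners
    saliency : (H, W) saliency map with values in [0, 1]
    A point is kept when its saliency exceeds ratio * mean saliency,
    i.e. when it falls inside the binary saliency mask.
    """
    thresh = ratio * saliency.mean()          # adaptive threshold
    mask = saliency > thresh                  # binary saliency mask
    keep = mask[points[:, 0], points[:, 1]]   # look up each corner
    return points[keep]
```

The surviving points would then be clustered (e.g. with AP) to pick foreground seeds for the star-shape-constrained graph cut.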
Information Sciences | 2016
Liman Liu; Wenbing Tao; Haihua Liu
Foreground histogram consistency is a commonly used global constraint in co-segmentation. However, changes in scale, target posture, and viewpoint usually make it hard to guarantee absolute histogram consistency. In this paper, we formulate a unified framework by incorporating a local region searching strategy and a hierarchical constraint into a global scale-invariant framework. Additionally, complementary saliency is adopted to reduce the unreliable predictions produced by any single saliency estimation method. This automatic foreground inference strategy also shows good compatibility and interaction with the new unified co-segmentation framework. Comparison experiments conducted on public datasets demonstrate the good performance of the proposed method.
Pattern Recognition | 2018
Kunqian Li; Wenbing Tao; Xiaobai Liu; Liman Liu
A heuristic four-color labeling method is proposed to give a robust initial four-phase partition for the Multiphase Multiple Piecewise Constant (MMPC) model. A regional adjacency cracking method is proposed to remove unnecessary adjacency constraints that impede the four-color labeling. Compared with random four-color labeling, the color map produced by heuristic coloring shows better consistency over homogeneous regions, and the heuristic approach reaches comparable or better segmentation with fewer iterations. Multilabel segmentation is an important branch of image segmentation. In our previous work, the Multiphase Multiple Piecewise Constant and Geodesic Active Contour (MMPC-GAC) model was proposed, which can effectively describe multiple objects and background with intensity inhomogeneity. It can be approximately solved with Multiple Layer Graph (MLG) methods. To make the optimization more efficient and limit the approximation error, the four-color theorem was further introduced, which limits the MLG to three layers (representing four phases). However, the random four-color labeling method adopted there usually produces chaotic color maps with obvious inhomogeneity over semantically consistent regions. To address this, a new method named heuristic four-color labeling is proposed in this paper, which aims to generate more reasonable color maps with a global view of the whole image. Compared with the random four-color labeling strategy, the iterative algorithm based on our method usually produces better segmentations with faster convergence, particularly for images with clutter and complicated structures. This strategy is a good substitute for the random coloring method when the latter produces unsatisfactory, messy segmentations. Experiments conducted on a public dataset demonstrate the effectiveness of the proposed method.
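The core mechanism of four-color labeling on a region adjacency graph can be sketched with a greedy coloring pass. This is a generic sketch, not the paper's heuristic: the paper's contribution lies in how the coloring order and region similarity are chosen, which here is abstracted into the `order` argument.

```python
def greedy_four_coloring(adjacency, order):
    """Greedy coloring of a planar region adjacency graph with 4 colors.

    adjacency : dict mapping region id -> set of adjacent region ids
    order     : sequence giving the order in which regions are colored
                (the heuristic lives here, e.g. most-constrained first)
    Returns a dict region id -> color in {0, 1, 2, 3}.  Note that a
    greedy pass is not guaranteed to realize the four-color theorem;
    it raises if a fifth color would be needed.
    """
    colors = {}
    for r in order:
        used = {colors[n] for n in adjacency[r] if n in colors}
        free = [c for c in range(4) if c not in used]
        if not free:
            raise ValueError("greedy pass failed at region %s" % r)
        colors[r] = free[0]
    return colors
```

Adjacency "cracking", as described above, would remove edges from `adjacency` before coloring so that fewer constraints impede a valid 4-coloring.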
Information Sciences | 2016
Kun Sun; Liman Liu; Wenbing Tao
In this paper a quasi-dense matching algorithm called Progressive match Expansion via Coherent Subspace (PECS) is proposed. Our algorithm starts from a set of sparse seeds and iteratively expands potential matches in the region around the seeds until no more matches can be found. The main difference from previous methods is that our method computes a coherent non-rigid transformation to relate the matching pixels, instead of using a local planar affine model to approximate the real scene surface within the expanding region. The points in the expanding region are treated as a whole, and all matches satisfying the transformation are found at once. First, dense SIFT descriptors are extracted and a subspace encoding the feature similarity between the two point sets is computed. This embedding pulls two points closer if they have similar features and makes the non-rigid transformation learning more robust. Second, a coherent non-rigid transformation in the subspace is computed to move one point set towards the other, and two spatially aligned points denote a match. The proposed model is robust to scenes with non-smooth surfaces, and experimental results show good performance in both 2D image matching and 3D reconstruction.
Energy Minimization Methods in Computer Vision and Pattern Recognition | 2015
Kun Sun; Peiran Li; Wenbing Tao; Liman Liu
In this article we propose a new method to find matches between two images, based on a framework similar to the Mixture Point Matching (MPM) algorithm. The main contribution is that both feature and spatial information are considered. We treat one point set as the centroids of a Gaussian Mixture Model (GMM) and the other point set as the data. Different from traditional methods, we propose to assign each GMM component a different weight according to its feature matching score. In this way the feature information is introduced as a reasonable prior to guide the matching, and the spatial transformation offers a global constraint so that local ambiguity can be alleviated. Experiments on real data show that the proposed method is not only robust to outliers, deformation and rotation, but also acquires the most matches while preserving high precision.
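The feature-weighted prior can be illustrated with one E-step of such a GMM point matcher. This is a sketch of the idea under simplifying assumptions (isotropic Gaussians, feature scores used directly as unnormalized priors), not the paper's exact formulation.

```python
import numpy as np

def weighted_gmm_responsibilities(X, Y, scores, sigma2):
    """E-step of a GMM point matcher with feature-driven component weights.

    X      : (N, D) data points (one image's feature points)
    Y      : (M, D) GMM centroids (the other image's feature points)
    scores : (M,) feature matching scores, used as unnormalized priors
    sigma2 : isotropic Gaussian variance
    Returns the (N, M) posterior responsibilities P(component m | point n).
    """
    w = scores / scores.sum()                            # feature-driven priors
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)  # squared distances
    lik = w[None, :] * np.exp(-d2 / (2.0 * sigma2))      # weighted likelihoods
    return lik / lik.sum(axis=1, keepdims=True)
```

A high feature score raises a component's prior, so spatially plausible candidates with similar descriptors dominate the soft assignment; the M-step would then update the spatial transformation from these responsibilities.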
Ninth International Conference on Graphic and Image Processing (ICGIP 2017) | 2018
Kun Sun; Wenbing Tao; Liman Liu; Zijian Liu
With the improvement of 3D reconstruction theory and the rapid development of computer hardware, reconstructed 3D models are growing in scale and complexity. Models with tens of thousands of 3D points or triangular meshes are common in practical applications. Due to storage and computing power limitations, it is difficult for common 3D display software such as MeshLab to achieve real-time display of and interaction with large-scale 3D models. In this paper, we propose a display system for large-scale 3D scene models. We construct the LOD (Levels of Detail) model of the reconstructed 3D scene in advance, and then use an out-of-core, view-dependent multi-resolution rendering scheme to realize real-time display of the large-scale 3D model. With the proposed method, our display system is able to render in real time while roaming through the reconstructed scene, and 3D camera poses can also be displayed. Furthermore, memory consumption is significantly decreased via an internal and external memory exchange mechanism, so that it is possible to display a large-scale reconstructed scene with millions of 3D points or triangular meshes on a regular PC with only 4 GB of RAM.
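View-dependent LOD selection of the kind described above typically refines a node until its projected geometric error drops below a screen-space tolerance. The following is a generic sketch of that rule, not the system's actual scheme; the halving of error per level and all parameter names are assumptions.

```python
def select_lod(distance, base_error, screen_tolerance, max_level):
    """Pick a level of detail from viewer distance (generic sketch).

    distance         : distance from the camera to the node
    base_error       : geometric error of the coarsest level (level 0)
    screen_tolerance : acceptable projected error (error / distance)
    max_level        : finest level available
    Each finer level is assumed to halve the geometric error; we refine
    until the projected error falls under the tolerance.
    """
    level = 0
    error = base_error
    while level < max_level and error / distance > screen_tolerance:
        error /= 2.0
        level += 1
    return level
```

Distant nodes stop at coarse levels, so only geometry near the camera is paged into RAM at full resolution, which is what makes out-of-core rendering of million-point scenes feasible.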
IEEE Geoscience and Remote Sensing Letters | 2017
Kun Sun; Liman Liu; Wenbing Tao
Gaussian mixture model (GMM)-based methods have achieved great success in point set registration. However, they cannot be directly applied to image matching, because the features extracted from two images usually contain a large portion of outliers. In this letter, we propose a new method to extend the powerful GMM framework to image feature point matching. The algorithm consists of two main steps. In the first step, points extracted from the images are mapped into a new subspace, in which feature similarity information is fused to obtain a new representation of the points. The second step performs an improved progressive process with the GMM to find correspondences satisfying a coherence constraint. In this way, finding correspondences among a large portion of outliers becomes feasible and the iteration converges faster. Experimental results on benchmark data sets show that the proposed method finds more correct matches with high accuracy.
Circuits Systems and Signal Processing | 2017
Liman Liu; Kunqian Li; Xiangli Liao
In this paper, a co-segmentation algorithm based on 3D heat diffusion, named co-diffusion, is proposed. The image set is considered as a metal cuboid, and K heat sources with constant temperature, which maximize the sum of the temperatures of the system under anisotropic heat diffusion, are found to cluster the image set. Co-diffusion co-segmentation is an intuitive extension of the diffusion segmentation in Kim et al. (Proceedings of ICCV, 2011), while the performance is greatly improved. The proposed algorithm advances in the following three aspects: (1) it can obtain a better optimum because the heat diffusion is solved directly in the 3D image set space, while the algorithm in Kim et al. (2011) deals with many independent 2D heat diffusions and solves the optimization by approximate belief propagation; (2) the marginal gain of every candidate heat source is determined globally over the image set, which can effectively compensate for the wrong segmentations caused by the locality of 2D image diffusion (Kim et al. 2011); (3) the K heat sources are chosen over the whole image set, while the algorithm of Kim et al. (2011) assigns a mandatory K heat sources to each image in the set, which inevitably causes wrong segmentations for some images. The superiority of the proposed co-diffusion segmentation method is demonstrated through extensive experiments on typical datasets.
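The global marginal-gain selection of heat sources can be sketched as a greedy pass over candidates. This sketches only the selection step under an assumption: `sim[i, j]` stands in for the steady-state temperature pixel i would reach under a source at candidate j (computing it would require solving the diffusion itself).

```python
import numpy as np

def greedy_heat_sources(sim, K):
    """Greedy selection of K heat sources by marginal gain (sketch).

    sim : (P, C) nonnegative matrix, pixels x candidate sources
    K   : number of sources to pick
    Returns the list of chosen candidate indices.  The objective
    sum_i max_{j in S} sim[i, j] is submodular, so greedy selection
    carries the usual (1 - 1/e) approximation guarantee.
    """
    P, C = sim.shape
    best = np.zeros(P)            # current best temperature per pixel
    chosen = []
    for _ in range(K):
        # Marginal gain of adding each remaining candidate.
        gains = np.maximum(sim, best[:, None]).sum(axis=0) - best.sum()
        gains[chosen] = -np.inf   # never re-pick a chosen source
        j = int(np.argmax(gains))
        chosen.append(j)
        best = np.maximum(best, sim[:, j])
    return chosen
```

Because the gain is computed over all pixels of all images at once, a source is never forced into an image where it would only explain background, which is the advantage over the per-image assignment of Kim et al. (2011).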
CCF Chinese Conference on Computer Vision | 2017
Qingshan Xu; Kun Sun; Wenbing Tao; Liman Liu
Although the vocabulary tree based algorithm is highly efficient for image retrieval, it still faces a dilemma when dealing with large data. In this paper, we show that image indexing is the main bottleneck of vocabulary tree based image retrieval, and we show how to exploit GPU hardware and the CUDA parallel programming model to efficiently solve the image indexing phase and subsequently accelerate the remaining retrieval stages. Our main contributions include tree structure transformation, image package processing and task parallelism. Our GPU-based image indexing is up to around thirty times faster than the original method, and the whole GPU-based vocabulary tree algorithm is improved by twenty percent in speed.
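A tree structure transformation of the kind named above usually means converting a pointer-based tree into contiguous arrays that a GPU kernel can traverse. The following is a host-side sketch of that idea under assumptions: the `Node` class and array layout are hypothetical, not the paper's data structure.

```python
import numpy as np

class Node:
    """Hypothetical host-side vocabulary-tree node."""
    def __init__(self, center, children=()):
        self.center = center          # cluster center descriptor
        self.children = list(children)

def flatten_tree(root):
    """Flatten a pointer-based vocabulary tree into contiguous arrays.

    Returns (centers, child_start, child_count): node i's children are
    nodes child_start[i] .. child_start[i] + child_count[i] - 1.
    Breadth-first order guarantees each node's children are contiguous,
    so a GPU kernel can descend the tree with plain index arithmetic.
    """
    nodes, queue = [], [root]
    while queue:                      # breadth-first traversal
        node = queue.pop(0)
        nodes.append(node)
        queue.extend(node.children)
    index = {id(n): i for i, n in enumerate(nodes)}
    centers = np.stack([n.center for n in nodes])
    child_start = np.zeros(len(nodes), dtype=np.int64)
    child_count = np.zeros(len(nodes), dtype=np.int64)
    for i, n in enumerate(nodes):
        if n.children:
            child_start[i] = index[id(n.children[0])]
            child_count[i] = len(n.children)
    return centers, child_start, child_count
```

With this layout, quantizing a batch ("package") of image descriptors becomes a data-parallel kernel: each thread walks the flat arrays for one descriptor, with no pointer chasing.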