Wenbin Zou
Shenzhen University
Publication
Featured research published by Wenbin Zou.
IEEE Transactions on Image Processing | 2014
Zhi Liu; Wenbin Zou; Olivier Le Meur
This paper proposes a novel saliency detection framework termed the saliency tree. For effective saliency measurement, the original image is first simplified using adaptive color quantization and region segmentation to partition the image into a set of primitive regions. Then, three measures, i.e., global contrast, spatial sparsity, and object prior, are integrated with regional similarities to generate the initial regional saliency of each primitive region. Next, a saliency-directed region merging approach with a dynamic scale control scheme is proposed to generate the saliency tree, in which each leaf node represents a primitive region and each non-leaf node represents a non-primitive region generated during the merging process. Finally, by exploiting a node selection criterion based on a regional center-surround scheme, a systematic saliency tree analysis, including salient node selection and regional saliency adjustment and selection, is performed to obtain the final regional saliency measures and to derive a high-quality pixel-wise saliency map. Extensive experimental results on five datasets with pixel-wise ground truths demonstrate that the proposed saliency tree model consistently outperforms state-of-the-art saliency models.
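A rough sense of how the three regional measures could be fused is sketched below in numpy. This is a minimal illustration, not the paper's implementation: the inputs (per-region mean color features, normalized centroids, region sizes, border-contact fractions) and the multiplicative fusion are assumptions of the sketch.

```python
import numpy as np

def initial_regional_saliency(feats, centers, sizes, border_touch):
    """Hypothetical fusion of the three regional measures described above.

    feats        : (N, D) mean color feature per primitive region
    centers      : (N, 2) normalized region centroids in [0, 1]
    sizes        : (N,)   region areas, normalized to sum to 1
    border_touch : (N,)   fraction of each region's boundary on the image border
    """
    # Regional similarity: Gaussian of pairwise color distance.
    d = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    sim = np.exp(-d ** 2 / (2 * d.std() ** 2 + 1e-8))

    # Global contrast: color distance to all other regions, weighted by their size.
    contrast = (d * sizes[None, :]).sum(axis=1)

    # Spatial sparsity: regions whose similar regions are spatially compact score higher.
    w = sim * sizes[None, :]
    mean_pos = (w[:, :, None] * centers[None, :, :]).sum(1) / w.sum(1, keepdims=True)
    spread = (w * np.linalg.norm(centers[None, :, :] - mean_pos[:, None, :], axis=-1)).sum(1) / w.sum(1)
    sparsity = 1.0 - spread / (spread.max() + 1e-8)

    # Object prior: regions touching the image border are unlikely to be the object.
    obj_prior = 1.0 - border_touch

    def norm(x):
        return (x - x.min()) / (x.max() - x.min() + 1e-8)

    raw = norm(contrast) * norm(sparsity) * obj_prior
    # Similarity-weighted smoothing yields the initial regional saliency.
    return sim @ (raw * sizes) / (sim @ sizes)
```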
British Machine Vision Conference | 2013
Wenbin Zou; Zhi Liu; Joseph Ronsin
The low-rank matrix recovery (LRMR) model, which aims to decompose a matrix into a low-rank matrix and a sparse one, has shown potential for saliency detection: the decomposed low-rank matrix naturally corresponds to the background, while the sparse one captures salient objects. This holds under the assumption that the background is consistent and the objects are clearly distinctive. Unfortunately, in real images the background may be cluttered and may have low contrast with the objects, so directly applying the LRMR model to saliency detection has limited robustness. This paper proposes a novel approach that exploits bottom-up segmentation as a guidance cue for the matrix recovery. The method is fully unsupervised, yet achieves higher performance than the supervised LRMR model. A new challenging dataset, PASCAL-1500, is also introduced to validate saliency detection performance. Extensive evaluations on the widely used MSRA-1000 dataset and on the new PASCAL-1500 dataset demonstrate that the proposed saliency model outperforms state-of-the-art models.
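The base decomposition the paper builds on can be illustrated with a standard robust PCA solver. The sketch below is an inexact-ALM implementation of the plain LRMR step only; the paper's segmentation guidance is not modeled here, and the feature matrix F (one column per region) is a hypothetical input.

```python
import numpy as np

def rpca_ialm(F, lam=None, tol=1e-6, max_iter=500):
    """Low-rank + sparse decomposition F ~ L + S via inexact ALM (robust PCA)."""
    m, n = F.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    norm_F = np.linalg.norm(F)
    Y = F / max(np.linalg.norm(F, 2), np.abs(F).max() / lam)
    mu = 1.25 / np.linalg.norm(F, 2)
    S = np.zeros_like(F)
    for _ in range(max_iter):
        # Singular-value thresholding for the low-rank (background) part.
        U, sig, Vt = np.linalg.svd(F - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0)) @ Vt
        # Soft thresholding for the sparse (salient) part.
        T = F - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0)
        Y = Y + mu * (F - L - S)
        mu *= 1.5
        if np.linalg.norm(F - L - S) / norm_F < tol:
            break
    return L, S

# Given F with one feature column per region, a region's saliency can be read
# off as the energy of its column in S, e.g.  saliency = np.linalg.norm(S, axis=0).
```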
IEEE Signal Processing Letters | 2014
Zhi Liu; Wenbin Zou; Lina Li; Liquan Shen; Olivier Le Meur
Co-saliency detection, an emerging and interesting problem in saliency detection, aims to discover the common salient objects in a set of images. This letter proposes a hierarchical segmentation based co-saliency model. On the basis of fine segmentation, regional histograms are used to measure similarities between region pairs across the image set, and regional contrasts within each image are exploited to evaluate the intra-saliency of each region. On the basis of coarse segmentation, an object prior for each region is measured based on its connectivity with the image borders. Finally, the global similarity of each region is derived from the regional similarity measures and then effectively integrated with the intra-saliency map and the object prior map to generate the co-saliency map for each image. Experimental results on two benchmark datasets demonstrate that the proposed model achieves better co-saliency detection performance than state-of-the-art co-saliency models.
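A minimal sketch of the region-level fusion, assuming each image is already segmented and its intra-saliency and border-connectivity prior are precomputed; the histogram-intersection matching and the top-5 averaging are choices of the sketch, not values from the letter.

```python
import numpy as np

def cosaliency_maps(region_hists, intra_sal, obj_prior):
    """Hypothetical fusion of intra-saliency, global similarity and object prior.

    region_hists : list over images, each (N_i, B) L1-normalized color histograms
    intra_sal    : list of (N_i,) intra-image regional saliency values
    obj_prior    : list of (N_i,) border-connectivity object priors
    """
    co_sal = []
    for i, H in enumerate(region_hists):
        others = np.vstack([h for j, h in enumerate(region_hists) if j != i])
        # Global similarity: how well a region is matched in the other images
        # (histogram intersection with its best matches).
        inter = np.minimum(H[:, None, :], others[None, :, :]).sum(-1)   # (N_i, M)
        global_sim = np.sort(inter, axis=1)[:, -5:].mean(axis=1)        # top-5 matches
        s = intra_sal[i] * global_sim * obj_prior[i]
        co_sal.append((s - s.min()) / (s.max() - s.min() + 1e-8))
    return co_sal
```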
International Conference on Computer Vision | 2015
Wenbin Zou; Nikos Komodakis
State-of-the-art salient object detection models perform well on relatively simple scenes, yet for more complex ones they still have difficulty in completely separating salient objects from the background, largely due to the lack of sufficiently robust features for saliency prediction. To address this issue, this paper proposes a novel hierarchy-associated feature construction framework for salient object detection, based on integrating elementary features from multi-level regions of a hierarchy. Furthermore, multi-layered deep learning features are introduced and incorporated as elementary features into this framework through a compact integration scheme. This leads to a rich feature representation that captures the context of the whole object/background and is considerably more discriminative and robust for salient object detection. Extensive experiments on the most widely used and challenging benchmark datasets demonstrate that the proposed approach substantially outperforms the state of the art in salient object detection.
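The core idea of hierarchy-associated features, stacking a region's elementary features with those of its ancestors, can be sketched as below. The parent map, the number of levels, and the plain concatenation are assumptions for illustration; in the paper the elementary features would include the multi-layered deep features mentioned above.

```python
import numpy as np

def hierarchy_associated_feature(region_id, parent, elem_feats, levels=3):
    """Concatenate a region's elementary features with those of its ancestors.

    region_id  : leaf region index in the segmentation hierarchy
    parent     : dict mapping a region id to its parent id (None at the root)
    elem_feats : dict mapping a region id to its elementary feature vector
    levels     : number of hierarchy levels to stack (an assumption of this sketch)
    """
    feats, node = [], region_id
    for _ in range(levels):
        feats.append(elem_feats[node])
        node = parent.get(node) or node   # stay at the root once ancestors run out
    return np.concatenate(feats)
```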
IEEE Transactions on Image Processing | 2015
Wenbin Zou; Zhi Liu; Joseph Ronsin; Yong Zhao; Nikos Komodakis
This paper presents a novel unsupervised algorithm to detect salient regions and to segment foreground objects out of the background. In contrast to previous unidirectional saliency-based object segmentation methods, in which only the detected saliency map is used to guide object segmentation, our algorithm lets detection and segmentation mutually exploit cues from each other. To achieve this goal, an initial saliency map is generated by the proposed segmentation-driven low-rank matrix recovery model. This saliency map is exploited to initialize the object segmentation model, which is formulated as energy minimization over a Markov random field. In turn, the quality of the saliency map is improved by the segmentation result, which then serves as new guidance for the object segmentation. The optimal saliency map and the final segmentation are obtained by jointly optimizing the defined objective functions. Extensive evaluations on the MSRA-B and PASCAL-1500 datasets demonstrate that the proposed algorithm achieves state-of-the-art performance for both salient region detection and object segmentation.
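A sketch of the alternating detection/segmentation loop is given below. GrabCut is used here purely as a stand-in for the paper's MRF energy minimization, and the thresholds and the simple averaging update are assumptions of the sketch.

```python
import cv2
import numpy as np

def mutual_saliency_segmentation(img_bgr, init_sal, n_rounds=3):
    """Alternate between saliency and segmentation (GrabCut as an MRF stand-in)."""
    sal = init_sal.astype(np.float32)
    sal = (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)
    for _ in range(n_rounds):
        # 1. Saliency -> segmentation: seed a binary labelling from the map.
        mask = np.full(sal.shape, cv2.GC_PR_BGD, np.uint8)
        mask[sal > 0.6] = cv2.GC_PR_FGD
        mask[sal > 0.9] = cv2.GC_FGD
        mask[sal < 0.1] = cv2.GC_BGD
        bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
        cv2.grabCut(img_bgr, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)
        seg = np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD)).astype(np.float32)
        # 2. Segmentation -> saliency: pull the saliency map toward the segmentation.
        sal = 0.5 * sal + 0.5 * seg
    return sal, seg
```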
International Conference on Multimedia and Expo | 2014
Lina Li; Zhi Liu; Wenbin Zou; Xiang Zhang; Olivier Le Meur
This paper addresses the problem of co-saliency detection, which aims to identify the common salient objects in a set of images and is important for many applications such as object co-segmentation and co-recognition. First, the segmentation-driven low-rank matrix recovery model is used for intra-saliency detection in each image of the set, highlighting the regions whose features are sparse within that image. Then, a region-level fusion method, which exploits inter-region dissimilarities of color histograms and the global consistency of regions over the image set, adjusts the intra-saliency maps to obtain region-level co-saliency maps that highlight co-salient object regions and suppress irrelevant ones. Finally, a pixel-level refinement method, which integrates the color-spatial similarity between pixels and regions with an object prior based on image-border connectivity, generates pixel-level co-saliency maps of better quality. Extensive experiments on two benchmark datasets demonstrate that the proposed co-saliency model consistently outperforms state-of-the-art co-saliency models in both subjective and objective evaluations.
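The pixel-level refinement step can be sketched as a similarity-weighted average of region-level co-saliency, as below; the Gaussian weights and bandwidths are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def pixel_level_refinement(pix_color, pix_pos, reg_color, reg_pos, reg_cosal,
                           sigma_c=20.0, sigma_s=0.25):
    """Hypothetical refinement: each pixel averages region-level co-saliency,
    weighted by its color-spatial similarity to each region.

    pix_color : (P, 3) pixel colors,      pix_pos : (P, 2) normalized coordinates
    reg_color : (R, 3) mean region colors, reg_pos : (R, 2) region centroids
    reg_cosal : (R,)   region-level co-saliency values
    """
    dc = np.linalg.norm(pix_color[:, None, :] - reg_color[None, :, :], axis=-1)
    ds = np.linalg.norm(pix_pos[:, None, :] - reg_pos[None, :, :], axis=-1)
    w = np.exp(-dc ** 2 / (2 * sigma_c ** 2)) * np.exp(-ds ** 2 / (2 * sigma_s ** 2))
    sal = (w * reg_cosal[None, :]).sum(1) / (w.sum(1) + 1e-8)
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)
```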
IEEE Transactions on Image Processing | 2017
Yule Yuan; Wenbin Zou; Yong Zhao; Xinan Wang; Xuefeng Hu; Nikos Komodakis
This paper presents a robust and efficient method for license plate detection with the purpose of accurately localizing vehicle license plates from complex scenes in real time. A simple yet effective image downscaling method is first proposed to substantially accelerate license plate localization without sacrificing detection performance compared with that achieved using the original image. Furthermore, a novel line density filter approach is proposed to extract candidate regions, thereby significantly reducing the area to be analyzed for license plate localization. Moreover, a cascaded license plate classifier based on linear support vector machines using color saliency features is introduced to identify the true license plate from among the candidate regions. For performance evaluation, a data set consisting of 3977 images captured from diverse scenes under different conditions is also presented. Extensive experiments on the widely used Caltech license plate data set and our newly introduced data set demonstrate that the proposed approach substantially outperforms state-of-the-art methods in terms of both detection accuracy and run-time efficiency, increasing the detection ratio from 91.09% to 96.62% while decreasing the run time from 672 ms to 42 ms for processing an image with a resolution of 1082 × 728. The executable code and our collected data set are publicly available.
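The line-density filtering stage can be approximated as below: plates contain dense vertical character strokes, so windows with high vertical-edge density are kept as candidates for the SVM cascade. All thresholds and the window size are assumptions of this sketch, not the paper's tuned values.

```python
import cv2
import numpy as np

def candidate_plate_regions(img_bgr, density_thresh=0.25, win=(51, 15)):
    """Rough stand-in for line-density filtering: keep regions with a high
    vertical-edge density as license plate candidates."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    # Vertical edges (character strokes) via a horizontal Sobel derivative.
    edges = (np.abs(cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)) > 80).astype(np.float32)
    # Local edge density with a box filter roughly shaped like a plate (wide, short).
    density = cv2.blur(edges, win)
    mask = (density > density_thresh).astype(np.uint8)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    boxes = [tuple(stats[i, :4]) for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] > 300]
    return boxes   # (x, y, w, h) candidate boxes to pass to the SVM cascade
```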
IEEE Transactions on Image Processing | 2014
Wenbin Zou; Cong Bai; Joseph Ronsin
International Conference on Image Processing | 2012
Wenbin Zou; Joseph Ronsin
Journal of the Optical Society of America A: Optics, Image Science, and Vision | 2018
Shuming Jiao; Zhi Jin; Changyuan Zhou; Wenbin Zou; Xia Li
This paper addresses the problem of automatic figure-ground segmentation, which aims to automatically segment all foreground objects out of the background. The underlying idea of the approach is to transfer the segmentation masks of globally and locally (glocally) similar exemplars onto the query image. For this purpose, we propose a novel high-level image representation named the object-oriented descriptor. Using this descriptor, a set of exemplar images glocally similar to the query image is retrieved. Then, using the over-segmented regions of these retrieved exemplars, a discriminative classifier is learned on the fly and subsequently used to predict the foreground probability for the query image. Finally, the optimal segmentation is obtained by combining this online prediction with a typical energy optimization over a Markov random field. The proposed approach has been extensively evaluated on three datasets: the Pascal VOC 2010 and VOC 2011 segmentation challenges and the iCoseg dataset. Experiments show that the proposed approach outperforms state-of-the-art methods and has the potential to segment, at large scale, images containing unknown objects that never appear in the exemplar images.
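A minimal sketch of the exemplar-transfer step, assuming image-level descriptors and region features are precomputed; the nearest-neighbour retrieval and the logistic-regression classifier stand in for the paper's object-oriented descriptor matching and on-the-fly discriminative learning.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def predict_foreground(query_desc, query_region_feats,
                       exemplar_descs, exemplar_region_feats, exemplar_region_labels,
                       k=10):
    """Hypothetical exemplar-transfer step for figure-ground segmentation.

    query_desc             : (D,) image-level descriptor of the query
    query_region_feats     : (N, F) features of the query's over-segmented regions
    exemplar_descs         : (E, D) descriptors of the exemplar images
    exemplar_region_feats  : list of (M_e, F) region features per exemplar
    exemplar_region_labels : list of (M_e,) binary foreground labels per exemplar region
    """
    # 1. Retrieve the k most similar exemplars (plain nearest neighbours here).
    idx = np.argsort(np.linalg.norm(exemplar_descs - query_desc, axis=1))[:k]
    X = np.vstack([exemplar_region_feats[i] for i in idx])
    y = np.concatenate([exemplar_region_labels[i] for i in idx])
    # 2. Learn a discriminative classifier on the fly from the retrieved regions.
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    # 3. Foreground probability per query region (e.g., to feed an MRF unary term).
    return clf.predict_proba(query_region_feats)[:, 1]
```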