Publication


Featured research published by Cong Bai.


IEEE Transactions on Image Processing | 2014

Online Glocal Transfer for Automatic Figure-Ground Segmentation

Wenbin Zou; Cong Bai; Joseph Ronsin

This paper addresses the problem of automatic figure-ground segmentation, which aims at automatically segmenting out all foreground objects from the background. The underlying idea of this approach is to transfer segmentation masks of globally and locally (glocally) similar exemplars into the query image. For this purpose, we propose a novel high-level image representation method named the object-oriented descriptor. Using this descriptor, a set of exemplar images glocally similar to the query image is retrieved. Then, using over-segmented regions of these retrieved exemplars, a discriminative classifier is learned on-the-fly and subsequently used to predict the foreground probability for the query image. Finally, the optimal segmentation is obtained by combining the online prediction with a typical Markov random field energy optimization. The proposed approach has been extensively evaluated on three datasets: the Pascal VOC 2010 and VOC 2011 segmentation challenges and the iCoseg dataset. Experiments show that the proposed approach outperforms state-of-the-art methods and has the potential to segment, at large scale, images containing unknown objects that never appear in the exemplar images.
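
As a rough illustration of the on-the-fly prediction step, the sketch below trains a simple classifier on superpixel features taken from retrieved exemplars (labelled by their known masks) and predicts foreground probabilities for query superpixels. The feature layout, the logistic-regression classifier, and the function names are assumptions for illustration; the paper's object-oriented descriptor, exemplar retrieval, and MRF refinement are not reproduced here.

```python
# Hypothetical sketch: learn a foreground/background classifier from retrieved
# exemplars and predict a foreground probability for each query region.
import numpy as np
from sklearn.linear_model import LogisticRegression

def predict_foreground(exemplar_feats, exemplar_labels, query_feats):
    """exemplar_feats: (N, D) region features from glocally similar exemplars.
    exemplar_labels: (N,) 1 = foreground region, 0 = background region.
    query_feats:     (M, D) region features of the query image.
    Returns (M,) foreground probabilities."""
    clf = LogisticRegression(max_iter=1000)        # learned on-the-fly for each query
    clf.fit(exemplar_feats, exemplar_labels)
    return clf.predict_proba(query_feats)[:, 1]

# Toy usage with random vectors standing in for real region descriptors.
rng = np.random.default_rng(0)
ex_f = rng.normal(size=(200, 16))
ex_y = (ex_f[:, 0] > 0).astype(int)
probs = predict_foreground(ex_f, ex_y, rng.normal(size=(50, 16)))
```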


International Conference on Acoustics, Speech, and Signal Processing | 2013

Multi-object tracking using sparse representation

Weizhi Lu; Cong Bai; Joseph Ronsin

Recently, sparse representation has been successfully applied to single-object tracking by observing the reconstruction error of candidate objects under sparse representation. In practice, sparse representation also shows competitive performance on multi-class classification and is thus promising for multi-object tracking. In this paper, we explore this technique for online multi-object tracking through a simple tracking-by-detection scheme, with background subtraction for object detection and sparse representation for object recognition. Experiments demonstrate that the proposed approach, combining only color histograms and 2-dimensional coordinates as features, achieves favorable performance over state-of-the-art work in persistent identity tracking.
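
For intuition, here is a minimal sparse-representation-classification sketch of the identity-assignment idea: a detection's feature vector is coded over a dictionary of per-object templates and assigned to the identity with the smallest class-wise reconstruction error. The Lasso coder, the feature layout, and the function names are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch of sparse-representation identity assignment: a detection's
# feature (e.g., a color histogram plus 2-D position) is coded over a template
# dictionary and assigned the identity with the lowest reconstruction error.
import numpy as np
from sklearn.linear_model import Lasso

def assign_identity(feature, templates, template_ids, alpha=0.01):
    """templates: (D, T) columns are template features of all tracked objects.
    template_ids: (T,) object identity of each template column."""
    coder = Lasso(alpha=alpha, positive=True, max_iter=5000)
    coder.fit(templates, feature)                  # sparse code of the feature over all templates
    code = coder.coef_
    errors = {}
    for oid in np.unique(template_ids):
        part = np.where(template_ids == oid, code, 0.0)   # keep only one identity's coefficients
        errors[oid] = np.linalg.norm(feature - templates @ part)
    return min(errors, key=errors.get)
```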


Multimedia Tools and Applications | 2015

K-means based histogram using multiresolution feature vectors for color texture database retrieval

Cong Bai; Jinglin Zhang; Zhi Liu; Wan-Lei Zhao

Color and texture are two important features in content-based image retrieval. It has been shown that using the combination of both can provide better performance. In this paper, a K-means based histogram (KBH) using the combination of color and texture features for image retrieval is proposed. Multiresolution feature vectors representing color and texture features are directly generated from the coefficients of the Discrete Wavelet Transform (DWT), and K-means is exploited to partition the vector space with the objective of reducing the number of histogram bins. Thereafter, a fusion of z-score normalized Chi-Square distances between KBHs is employed as the similarity measure. Experiments have been conducted on four natural color texture datasets to examine the sensitivity of KBH to its parameters. The performance of the proposed approach has been compared with state-of-the-art approaches. Results evaluated in terms of Precision-Recall and Average Retrieval Rate (ARR) show that our approach outperforms the referred approaches.
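
A hedged sketch of the KBH construction under simplifying assumptions: a single-level Haar decomposition stands in for the paper's multiresolution DWT, the per-location feature vector is just the three detail coefficients, and the cluster count is arbitrary.

```python
# Sketch of a K-means based histogram (KBH): wavelet coefficients form feature
# vectors, K-means partitions the vector space, and the histogram counts how
# many vectors fall into each cluster.
import numpy as np
from sklearn.cluster import KMeans

def haar_level1(img):
    """One-level Haar decomposition of a 2-D array with even sides."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0
    d = (img[0::2, :] - img[1::2, :]) / 2.0
    ll, hl = (a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0
    lh, hh = (d[:, 0::2] + d[:, 1::2]) / 2.0, (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, hl, lh, hh

def kbh(img, kmeans):
    ll, hl, lh, hh = haar_level1(img)
    feats = np.stack([hl.ravel(), lh.ravel(), hh.ravel()], axis=1)   # per-location feature vectors
    bins = kmeans.predict(feats)
    hist = np.bincount(bins, minlength=kmeans.n_clusters).astype(float)
    return hist / hist.sum()

def chi_square(h1, h2, eps=1e-10):
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

# Toy usage: fit the partition on one image's coefficients, then compare two KBHs.
rng = np.random.default_rng(0)
pool = rng.random((64, 64))
km = KMeans(n_clusters=32, n_init=10).fit(
    np.stack([c.ravel() for c in haar_level1(pool)[1:]], axis=1))
d = chi_square(kbh(pool, km), kbh(rng.random((64, 64)), km))
```

In the paper, several KBHs built from different feature vectors are compared with z-score normalized Chi-Square distances and fused; the single-histogram distance above is just the building block.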


International Conference on Internet Multimedia Computing and Service | 2015

Saliency detection for RGBD images

Hangke Song; Zhi Liu; Huan Du; Guangling Sun; Cong Bai

The additional depth information in RGBD images is one characteristic that distinguishes them from conventional 2D images. In this paper, we propose an effective saliency model to detect salient regions in RGBD images. Color contrast and depth contrast are first enhanced with the weighting of a depth-based object probability. Then a region-merging-based saliency refinement is exploited to obtain the color saliency map and the depth saliency map, respectively. Finally, a location prior of salient objects is integrated with color saliency and depth saliency to obtain the regional saliency map. Both subjective and objective evaluations on a public RGBD image dataset demonstrate that the proposed saliency model outperforms the state-of-the-art saliency models.
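
A generic sketch of the final fusion idea, combining a color saliency map, a depth saliency map, and a centre-biased location prior into one map. The Gaussian prior and the multiplicative combination here are assumptions; the paper's depth-based object probability weighting and region-merging refinement are not reproduced.

```python
# Hedged sketch: fuse color saliency, depth saliency, and a location prior.
import numpy as np

def normalize(m, eps=1e-10):
    return (m - m.min()) / (m.max() - m.min() + eps)

def fuse_rgbd_saliency(color_sal, depth_sal, sigma=0.3):
    h, w = color_sal.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    location_prior = np.exp(-(((ys - cy) / (sigma * h)) ** 2
                              + ((xs - cx) / (sigma * w)) ** 2) / 2.0)  # centre-biased prior
    return normalize(normalize(color_sal) * normalize(depth_sal) * location_prior)
```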


4th International Conference on Intelligent Interactive Multimedia Systems and Services (KES-IMSS2011) | 2011

Analysis of histogram descriptor for image retrieval in DCT domain

Cong Bai; Kidiyo Kpalma; Joseph Ronsin

Much research on content-based image retrieval is carried out in the transform domain. We analyze and enhance a histogram method for image retrieval in the DCT domain. This approach is based on the 4×4 block DCT. After pre-processing, AC and DC Patterns are extracted from the DCT coefficients. Based on various experiments, we propose to use a zig-zag scan with fewer DCT coefficients to construct the AC-Pattern. Moreover, adjacent patterns are defined by observing the distances between them and are merged in the AC-Pattern histogram. The descriptors are then constructed from the AC-Pattern and DC-Pattern histograms, and the combination of these descriptors is used for image retrieval. Performance analysis on two common face image databases shows that better performance is obtained with our proposals.
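
The sketch below illustrates the AC-Pattern histogram idea under simplifying assumptions: each 4×4 block is DCT-transformed, a few leading AC coefficients along the zig-zag scan are coarsely quantized into a pattern index, and the pattern indices are histogrammed. The quantization step, the omission of the DC-Pattern histogram, and the absence of the pattern-merging rule are simplifications, not the paper's exact design.

```python
# Hedged sketch of an AC-Pattern histogram from 4x4 block DCT coefficients.
# The DC-Pattern histogram (built from DC terms of neighbouring blocks) and
# the adjacent-pattern merging are omitted here.
import numpy as np
from scipy.fft import dctn

ZIGZAG_4x4 = [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2), (0, 3), (1, 2),
              (2, 1), (3, 0), (3, 1), (2, 2), (1, 3), (2, 3), (3, 2), (3, 3)]

def ac_pattern_histogram(gray, n_ac=5, levels=3):
    """gray: 2-D array with sides divisible by 4. Returns a normalized AC-Pattern histogram."""
    h, w = gray.shape
    patterns = []
    for i in range(0, h, 4):
        for j in range(0, w, 4):
            block = dctn(gray[i:i + 4, j:j + 4], norm='ortho')
            ac = np.array([block[r, c] for r, c in ZIGZAG_4x4[1:1 + n_ac]])  # skip the DC term
            q = np.clip(np.round(ac / 8.0), -(levels // 2), levels // 2) + levels // 2
            patterns.append(int(np.dot(q, levels ** np.arange(n_ac))))       # one index per pattern
    hist = np.bincount(patterns, minlength=levels ** n_ac).astype(float)
    return hist / hist.sum()
```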


IEEE Transactions on Multimedia | 2017

Salient Object Segmentation via Effective Integration of Saliency and Objectness

Linwei Ye; Zhi Liu; Lina Li; Liquan Shen; Cong Bai; Yang Wang

This paper proposes an effective salient object segmentation method via the graph-based integration of saliency and objectness. Based on the superpixel segmentation result of the input image, a graph is built to represent superpixels using regular vertices and background seed vertices, with the addition of a terminal vertex. The edge weights on the graph are defined by integrating the differences of appearance, saliency, and objectness between superpixels. Then, the object probability of each superpixel is measured by finding the shortest path from the corresponding vertex to the terminal vertex on the graph, and the resultant object probability map generally highlights salient objects and suppresses background regions better than both the saliency map and the objectness map. Finally, the object probability map is used to initialize salient object and background, and is effectively incorporated into the graph cut framework to obtain the final salient object segmentation result. Extensive experimental results on three public benchmark datasets show that the proposed method consistently improves the salient object segmentation performance and outperforms the state-of-the-art salient object segmentation methods. Furthermore, experimental results also demonstrate that the proposed graph-based integration method is more effective than other fusion schemes and robust to saliency maps generated using various saliency models.
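
A simplified sketch of the shortest-path object probability: superpixels plus one terminal vertex form a graph, background-seed superpixels connect to the terminal, and each superpixel's probability is derived from its shortest-path distance to the terminal. The specific edge-weight mix and the seed-to-terminal weight below are illustrative assumptions, and the final graph-cut stage is omitted.

```python
# Hedged sketch: object probability from shortest-path distances to a terminal
# vertex attached to background seeds. Longer path => more object-like.
import numpy as np
from scipy.sparse.csgraph import dijkstra

def object_probability(adjacency, appearance, saliency, objectness, bg_seeds):
    """adjacency: (N, N) boolean superpixel adjacency; appearance: (N, D) features;
    saliency, objectness: (N,) per-superpixel scores; bg_seeds: seed indices."""
    n = len(saliency)
    w = np.full((n + 1, n + 1), np.inf)
    for i in range(n):
        for j in range(n):
            if adjacency[i, j]:
                w[i, j] = (np.linalg.norm(appearance[i] - appearance[j])
                           + abs(saliency[i] - saliency[j])
                           + abs(objectness[i] - objectness[j]))
    for i in bg_seeds:                               # background seeds attach to the terminal (index n)
        w[i, n] = w[n, i] = 1.0 - saliency[i]
    dense = np.where(np.isinf(w), 0.0, w)            # zeros are treated as missing edges by dijkstra
    dist = dijkstra(dense, directed=False, indices=n)
    d = dist[:n]
    return (d - d.min()) / (d.max() - d.min() + 1e-10)
```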


Journal of Visual Communication and Image Representation | 2018

Saliency-based multi-feature modeling for semantic image retrieval

Cong Bai; Jia-nan Chen; Ling Huang; Shengyong Chen

The semantic gap remains an important and challenging problem in content-based image retrieval (CBIR). The bag-of-words (BOW) framework is a popular approach that tries to reduce the semantic gap in CBIR. In this paper, an approach integrating a visual saliency model with BOW is proposed for semantic image retrieval. Images are first segmented into background regions and foreground objects by a visual saliency-based segmentation method. Multiple features, including Scale Invariant Feature Transform (SIFT) features packed in BOW, are then extracted from regions and objects respectively and fused in a way that accounts for the different characteristics of background regions and foreground objects. Finally, a fusion of z-score normalized Chi-Square distances is adopted as the similarity measure. This proposal has been implemented on two widely used benchmark databases, and the results evaluated in terms of mean Average Precision (mAP) show that our proposal outperforms the referred state-of-the-art approaches.
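
To make the retrieval side concrete, here is a minimal sketch (under assumed data layouts) of bag-of-words quantization plus the fusion of z-score normalized Chi-Square distances over several feature channels. Local descriptor extraction and the saliency-based segmentation itself are taken as given, and the function names are illustrative.

```python
# Hedged sketch: BOW histograms per channel and a fused, z-score normalized
# Chi-Square distance for ranking database images against a query.
import numpy as np

def bow_histogram(descriptors, vocabulary):
    """descriptors: (N, D) local descriptors; vocabulary: (K, D) visual words."""
    d2 = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)                                   # hard assignment to nearest word
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / (hist.sum() + 1e-10)

def chi_square(h1, h2, eps=1e-10):
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def fused_distance(query_hists, db_hists_per_channel):
    """query_hists: list of per-channel histograms; db_hists_per_channel: list
    (one entry per channel) of (M, K) database histograms. Returns (M,) distances."""
    fused = 0.0
    for q, db in zip(query_hists, db_hists_per_channel):
        d = np.array([chi_square(q, h) for h in db])
        fused = fused + (d - d.mean()) / (d.std() + 1e-10)      # z-score normalization per channel
    return fused
```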


Journal of Visual Communication and Image Representation | 2018

Saliency integration driven by similar images

Jingru Ren; Zhi Liu; Xiaofei Zhou; Guangling Sun; Cong Bai

This paper proposes a saliency integration approach that uses similar images to elevate saliency detection performance. Given the input image, a group of similar images is first retrieved, and meanwhile, the corresponding multiple saliency maps of the input image are generated using existing saliency models. Then, the saliency fusion map is generated by an adaptive fusion method that integrates these saliency maps, with fusion weights measured by the similarity between each similar image and the input image. Next, an inter-image graph is constructed for each pair of input image and similar image to propagate confident saliency values from the similar image to the input image, yielding the saliency propagation map. Finally, the saliency fusion map and the saliency propagation map are integrated to obtain the final saliency map. Experimental results on two public datasets demonstrate that the proposed approach achieves better saliency detection performance than the existing saliency models and other saliency integration approaches.
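
A minimal sketch of the similarity-weighted fusion step, assuming the per-model saliency maps and the image-level similarity scores are already available; the inter-image propagation graph from the paper is not reproduced, and the normalization choices are assumptions.

```python
# Hedged sketch: weighted fusion of several saliency maps, with weights
# derived from similarity scores to the retrieved similar images.
import numpy as np

def fuse_saliency(saliency_maps, similarities):
    """saliency_maps: (S, H, W) maps from different models;
    similarities: (S,) similarity scores used as fusion weights."""
    w = np.asarray(similarities, dtype=float)
    w = w / (w.sum() + 1e-10)
    fused = np.tensordot(w, np.asarray(saliency_maps), axes=1)   # weighted sum over the S maps
    return (fused - fused.min()) / (fused.max() - fused.min() + 1e-10)
```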


Advances in Multimedia | 2016

Sparse Representation Based Histogram in Color Texture Retrieval

Cong Bai; Jia-nan Chen; Jinglin Zhang; Joseph Ronsin

Sparse representation is proposed to generate the histogram of feature vectors, namely the sparse representation based histogram (SRBH), in which a feature vector is represented by a number of basis vectors instead of by a single basis vector as in the classical histogram. This amelioration makes SRBH a more accurate representation of feature vectors, which is confirmed by an analysis of reconstruction errors and by its application to color texture retrieval. In color texture retrieval, feature vectors are constructed directly from the coefficients of the Discrete Wavelet Transform (DWT). Dictionaries for sparse representation are generated by K-means. A set of sparse representation based histograms from different feature vectors is used for image retrieval, and the chi-squared distance is adopted as the similarity measure. Experimental results, assessed by Precision-Recall and Average Retrieval Rate (ARR) on four widely used natural color texture databases, show that this approach is robust to the number of wavelet decomposition levels and outperforms the classical histogram and state-of-the-art approaches.
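
As a hedged sketch of the SRBH idea, each feature vector is coded over a K-means dictionary with a small number of non-zero coefficients (orthogonal matching pursuit is used here as one possible coder), and the absolute coefficients are accumulated into the bins of the selected atoms rather than making a single hard assignment. The coder, parameters, and toy data are assumptions.

```python
# Hedged sketch of a sparse representation based histogram (SRBH).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import orthogonal_mp

def srbh(features, dictionary, n_nonzero=3):
    """features: (N, D) feature vectors; dictionary: (K, D) atoms, e.g. K-means centroids."""
    atoms = dictionary.T
    atoms = atoms / (np.linalg.norm(atoms, axis=0, keepdims=True) + 1e-10)  # OMP expects unit-norm atoms
    codes = orthogonal_mp(atoms, features.T, n_nonzero_coefs=n_nonzero)     # (K, N) sparse codes
    hist = np.abs(codes).sum(axis=1)                                        # soft counts per atom
    return hist / (hist.sum() + 1e-10)

# Toy usage: dictionary learned by K-means on a pool of feature vectors.
rng = np.random.default_rng(0)
pool = rng.normal(size=(1000, 8))
centroids = KMeans(n_clusters=64, n_init=10).fit(pool).cluster_centers_
h = srbh(rng.normal(size=(200, 8)), centroids)
```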


Decision Support Systems | 2013

A New Histogram-Based Descriptor for Images Retrieval from Databases

Cong Bai; Miloud Chikr El Mezouar; Kamel Belloulata; Nasreddine Taleb; Lakhdar Belhallouche; Djamal Boukerroui

In this paper, we propose a new approach for designing histogram-based descriptors. For demonstration purposes, we generate a descriptor based on the histogram of the differential turning-angle scale space (d-TASS) function and its derived data. We then compare the proposed histogram-based descriptor with traditional histogram descriptors in terms of retrieval performance from image databases. Experiments on three shape databases demonstrate the efficiency and effectiveness of the new technique: the proposed histogram-based descriptor outperforms the traditional one. These experiments also showed that the proposed histogram-based descriptor using the d-TASS function and the derived features performs well compared with the state of the art. When applied to texture image retrieval, the proposed approach yields higher performance than the traditional histogram-based descriptors. From these results, we believe that the proposed histogram-based descriptor should perform efficiently for medical image retrieval, so we will focus on this aspect in future work.
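
As a very rough, generic illustration of a turning-angle-based shape histogram (not the d-TASS formulation itself), the sketch below smooths a closed contour at several scales, computes turning angles along it, and histograms the angle differences relative to the unsmoothed contour; the scales, bin count, and function names are all assumptions.

```python
# Generic, hedged sketch of a multi-scale turning-angle histogram for a
# closed contour given as point coordinates (x, y).
import numpy as np
from scipy.ndimage import gaussian_filter1d

def turning_angles(x, y):
    dx = np.diff(x, append=x[:1])                  # wrap around: the contour is closed
    dy = np.diff(y, append=y[:1])
    return np.arctan2(dy, dx)

def turning_angle_histogram(x, y, sigmas=(1.0, 2.0, 4.0), bins=32):
    base = turning_angles(x, y)
    hists = []
    for s in sigmas:
        xs = gaussian_filter1d(x, s, mode='wrap')
        ys = gaussian_filter1d(y, s, mode='wrap')
        diff = np.angle(np.exp(1j * (turning_angles(xs, ys) - base)))   # wrap to (-pi, pi]
        h, _ = np.histogram(diff, bins=bins, range=(-np.pi, np.pi))
        hists.append(h / (h.sum() + 1e-10))
    return np.concatenate(hists)
```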

Collaboration


Dive into Cong Bai's collaboration.

Top Co-Authors

Jia-nan Chen, Zhejiang University of Technology
Jinglin Zhang, Nanjing University of Information Science and Technology
Shengyong Chen, Zhejiang University of Technology
Ling Huang, Zhejiang University of Technology
Qing Ma, Zhejiang University of Technology