
Publication


Featured research published by Fanman Meng.


IEEE Transactions on Multimedia | 2014

A Fast HEVC Inter CU Selection Method Based on Pyramid Motion Divergence

Jian Xiong; Hongliang Li; Qingbo Wu; Fanman Meng

The newly developed HEVC video coding standard achieves higher compression performance than previous video coding standards such as MPEG-4, H.263, and H.264/AVC. However, HEVC's high computational complexity raises concerns about the computational burden of real-time applications. In this paper, a fast pyramid motion divergence (PMD) based CU selection algorithm is presented for HEVC inter prediction. The PMD features are calculated from the estimated optical flow of downsampled frames. Theoretical analysis shows that PMD can help select the CU size. A k-nearest-neighbor-like method is then used to determine CU splitting. Experimental results show that the fast inter prediction method speeds up inter coding significantly with negligible loss in peak signal-to-noise ratio.
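The k-nearest-neighbor-style split decision described in the abstract can be sketched as below. The PMD values, training pairs, and majority-vote rule here are illustrative assumptions, not the paper's actual classifier or features.

```python
def knn_split_decision(pmd_feature, training_samples, k=3):
    """Decide whether to split a CU by majority vote among the k
    training samples whose PMD values are closest to the query's."""
    # training_samples: list of (pmd_value, should_split) pairs
    neighbors = sorted(training_samples, key=lambda s: abs(s[0] - pmd_feature))[:k]
    votes = sum(1 for _, split in neighbors if split)
    return votes > k // 2

# Hypothetical PMD values: larger motion divergence tends to favor splitting.
samples = [(0.1, False), (0.2, False), (0.3, False),
           (1.5, True), (2.0, True), (2.4, True)]
print(knn_split_decision(0.15, samples))  # → False (homogeneous motion, keep CU whole)
print(knn_split_decision(1.9, samples))   # → True  (divergent motion, split further)
```

In the paper the feature is a pyramid of motion divergences rather than a single scalar, but the nearest-neighbor vote over offline-collected examples works the same way.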


IEEE Transactions on Multimedia | 2012

Object Co-Segmentation Based on Shortest Path Algorithm and Saliency Model

Fanman Meng; Hongliang Li; Guanghui Liu; King Ngi Ngan

Segmenting common objects that have variations in color, texture and shape is a challenging problem. In this paper, we propose a new model that efficiently segments common objects from multiple images. We first segment each original image into a number of local regions. Then, we construct a digraph based on local region similarities and saliency maps. Finally, we formulate the co-segmentation problem as the shortest path problem, and we use the dynamic programming method to solve the problem. The experimental results demonstrate that the proposed model can efficiently segment the common objects from a group of images with generally lower error rate than many existing and conventional co-segmentation methods.
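The shortest-path-by-dynamic-programming formulation can be sketched over a layered graph with one layer per image and one node per candidate region. The unary costs and transition function below are toy assumptions standing in for the saliency and region-similarity terms.

```python
def shortest_path_dag(layers, unary, trans):
    """DP shortest path through layered candidates.
    layers[i]   : number of candidate regions in image i
    unary[i][j] : cost of picking candidate j in image i (e.g. low saliency)
    trans(p, j) : cost of linking candidate p to candidate j in the next image
                  (e.g. region dissimilarity)."""
    best = [unary[0][j] for j in range(layers[0])]
    back = []
    for i in range(1, len(layers)):
        cur, ptr = [], []
        for j in range(layers[i]):
            prev = [best[p] + trans(p, j) for p in range(layers[i - 1])]
            p = min(range(layers[i - 1]), key=lambda x: prev[x])
            cur.append(prev[p] + unary[i][j])
            ptr.append(p)
        best = cur
        back.append(ptr)
    # backtrack the optimal candidate per image
    j = min(range(layers[-1]), key=lambda x: best[x])
    path = [j]
    for ptr in reversed(back):
        j = ptr[j]
        path.append(j)
    return path[::-1], min(best)

layers = [2, 2, 2]                           # three images, two candidates each
unary = [[1.0, 5.0], [2.0, 0.5], [1.0, 4.0]]
trans = lambda a, b: 0.0 if a == b else 1.0  # toy dissimilarity
path, total = shortest_path_dag(layers, unary, trans)
print(path, total)  # → [0, 0, 0] 4.0
```

The path picks one region per image, so the minimum-cost path selects a mutually consistent, salient region in every image, which is the co-segmentation output.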


IEEE Transactions on Multimedia | 2013

Co-Salient Object Detection From Multiple Images

Hongliang Li; Fanman Meng; King Ngi Ngan

In this paper, we propose a novel method to discover co-salient objects from a group of images, which is modeled as a linear fusion of an intra-image saliency (IaIS) map and an inter-image saliency (IrIS) map. The first term is to measure the salient objects from each image using multiscale segmentation voting. The second term is designed to detect the co-salient objects from a group of images. To compute the IrIS map, we perform the pairwise similarity ranking based on an image pyramid representation. A minimum spanning tree is then constructed to determine the image matching order. For each region in an image, we design three types of visual descriptors, which are extracted from the local appearance, e.g., color, color co-occurrence and shape properties. The final region matching problem between the images is formulated as an assignment problem that can be optimized by linear programming. Experimental evaluation on a number of images demonstrates the good performance of the proposed method on co-salient object detection.
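The linear fusion at the core of the model can be sketched as below; `alpha` is a hypothetical mixing weight, and the maps are toy values (the paper's actual weighting is not reproduced here).

```python
def fuse_saliency(intra, inter, alpha=0.5):
    """Fuse an intra-image saliency map (IaIS) with an inter-image
    saliency map (IrIS) pixel-wise: S = alpha*IaIS + (1-alpha)*IrIS."""
    return [[alpha * a + (1.0 - alpha) * b for a, b in zip(ra, rb)]
            for ra, rb in zip(intra, inter)]

iais = [[0.8, 0.2], [0.4, 0.0]]  # salient within one image
iris = [[0.6, 0.0], [0.8, 0.2]]  # recurs across the image group
fused = fuse_saliency(iais, iris)
```

Only pixels scoring high in both terms stay strong after fusion, which is what restricts the result to objects that are both salient and common to the group.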


Journal of Visual Communication and Image Representation | 2016

Beyond pixels: A comprehensive survey from bottom-up to semantic image segmentation and cosegmentation

Hongyuan Zhu; Fanman Meng; Jianfei Cai; Shijian Lu

Image segmentation refers to the process of dividing an image into meaningful non-overlapping regions according to human perception, and it has been a classic topic since the early days of computer vision. A great deal of research has been conducted, resulting in many applications. While many segmentation algorithms exist, only a few sparse and outdated surveys are available. In this paper, we therefore aim to provide a comprehensive review of recent progress in the field. Covering 190 publications, we give an overview of broad segmentation topics, including not only classic unsupervised methods but also recent weakly-/semi-supervised methods and fully-supervised methods. In addition, we review the influential existing datasets and evaluation metrics, and we suggest design choices and research directions for future work in image segmentation.


IEEE Transactions on Circuits and Systems for Video Technology | 2016

Blind Image Quality Assessment Based on Multichannel Feature Fusion and Label Transfer

Qingbo Wu; Hongliang Li; Fanman Meng; King Ngi Ngan; Bing Luo; Chao Huang; Bing Zeng

In this paper, we propose an efficient blind image quality assessment (BIQA) algorithm, which is characterized by a new feature fusion scheme and a k-nearest-neighbor (KNN)-based quality prediction model. Our goal is to predict the perceptual quality of an image without any prior information about its reference image or distortion type. Since the reference image is inaccessible in many applications, BIQA is quite desirable in this context. In our method, a new feature fusion scheme is first introduced by combining an image's statistical information from multiple domains (i.e., discrete cosine transform, wavelet, and spatial domains) and multiple color channels (i.e., Y, Cb, and Cr). Then, the predicted image quality is generated from a nonparametric model, which is referred to as label transfer (LT). Based on the assumption that similar images share similar perceptual qualities, we implement the LT with an image retrieval procedure, where a query image's KNNs are searched for among annotated images. The weighted average of the KNN labels (e.g., difference mean opinion score or mean opinion score) is used as the predicted quality score. The proposed method is straightforward and computationally appealing. Experimental results on three publicly available databases (i.e., LIVE II, TID2008, and CSIQ) show that the proposed method is highly consistent with human perception and outperforms many representative BIQA metrics.
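The label-transfer step, a distance-weighted average of the nearest annotated images' scores, can be sketched as follows. The 2-D features and MOS labels are toy placeholders; the paper's real features span several transform domains and color channels.

```python
import math

def label_transfer(query_feat, annotated, k=3, eps=1e-6):
    """Predict a quality score as the distance-weighted average of the
    k nearest annotated images' scores (e.g. MOS/DMOS labels)."""
    # annotated: list of (feature_vector, score) pairs
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    nn = sorted(annotated, key=lambda s: dist(s[0], query_feat))[:k]
    weights = [1.0 / (dist(f, query_feat) + eps) for f, _ in nn]
    return sum(w * s for w, (_, s) in zip(weights, nn)) / sum(weights)

# Toy 2-D features with hypothetical MOS labels (higher = better quality).
db = [([0.0, 0.0], 9.0), ([0.1, 0.0], 8.5),
      ([1.0, 1.0], 3.0), ([1.1, 0.9], 2.5)]
score = label_transfer([0.05, 0.0], db, k=2)
```

Being nonparametric, the predictor needs no training beyond storing the annotated set, which is why the paper calls the method computationally appealing.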


IEEE Transactions on Multimedia | 2014

MRF-Based Fast HEVC Inter CU Decision With the Variance of Absolute Differences

Jian Xiong; Hongliang Li; Fanman Meng; Shuyuan Zhu; Qingbo Wu; Bing Zeng

The newly developed High Efficiency Video Coding (HEVC) standard has improved video coding performance significantly in comparison with its predecessors. However, substantially higher computational complexity is introduced by a number of new coding tools. In this paper, a fast coding unit (CU) decision based on a Markov random field (MRF) is proposed for HEVC inter frames. First, it is observed that the variance of the absolute differences (VAD) is proportional to the rate-distortion (R-D) cost, and a VAD-based feature is designed for CU selection. Second, the decision on CU splitting is modeled as an MRF inference problem, which can be optimized by the graph cut algorithm. Third, a maximum a posteriori (MAP) approach based on the R-D cost is used to evaluate whether unsplit CUs should be split further. Experimental results show that the proposed algorithm achieves about a 53% reduction in coding time with negligible coding performance degradation, significantly outperforming state-of-the-art algorithms.


IEEE Signal Processing Letters | 2014

Noise-Robust Texture Description Using Local Contrast Patterns via Global Measures

Tiecheng Song; Hongliang Li; Fanman Meng; Qingbo Wu; Bing Luo; Bing Zeng; Moncef Gabbouj

This letter presents a noise-robust descriptor that explores a set of local contrast patterns (LCPs) via global measures for texture classification. To handle image noise, directed and undirected difference masks are designed to calculate three types of local intensity contrast: directed, undirected, and maximum difference responses. To describe pixel-wise features, these responses are separately quantized and encoded into specific patterns based on different global measures. The resulting patterns (i.e., LCPs) are jointly encoded to form the final texture representation. Experiments are conducted on the well-known Outex and CUReT databases in the presence of high levels of noise. Compared to many state-of-the-art methods, the proposed descriptor achieves superior texture classification performance while enjoying a compact feature representation.
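The idea of encoding local contrasts against a global threshold can be sketched with a single 3x3 patch. The neighbor ordering, the use of the mean absolute difference as the global measure, and the single-code output are simplifying assumptions; the paper combines several mask types and measures.

```python
def local_contrast_pattern(patch, global_thresh):
    """Encode a 3x3 patch as an 8-bit code: set bit i when neighbor i's
    absolute contrast with the center exceeds a globally derived
    threshold (e.g. the image-wide mean absolute difference)."""
    center = patch[1][1]
    # clockwise from the top-left neighbor
    neighbors = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                 patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for bit, n in enumerate(neighbors):
        # thresholding |contrast| globally, rather than per-pixel signs,
        # suppresses small noise-induced fluctuations
        if abs(n - center) >= global_thresh:
            code |= 1 << bit
    return code

patch = [[40, 40, 40],
         [40, 50, 90],
         [90, 90, 90]]
print(local_contrast_pattern(patch, 30))  # → 120 (only the strong edge survives)
```

A histogram of such codes over all pixels would then serve as the texture feature; because the threshold is global, weak noise perturbations rarely flip bits.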


IEEE Transactions on Systems, Man, and Cybernetics | 2013

Image Cosegmentation by Incorporating Color Reward Strategy and Active Contour Model

Fanman Meng; Hongliang Li; Guanghui Liu; King Ngi Ngan

The design of robust and efficient cosegmentation algorithms is challenging because of the variety and complexity of objects and images. In this paper, we propose a new cosegmentation model that incorporates a color reward strategy and an active contour model. A new energy function for the curve is first formulated with two considerations: the foreground similarity between the image pair and the background consistency within each image of the pair. Furthermore, a new foreground similarity measurement based on the reward strategy is proposed. We then minimize the energy function via a mutual procedure that uses dynamic priors to evolve the curves jointly. The proposed method is evaluated on many images from commonly used databases. The experimental results demonstrate that the proposed model can efficiently segment the common objects from image pairs with a generally lower error rate than many existing and conventional cosegmentation methods.


IEEE Transactions on Multimedia | 2015

Fast HEVC Inter CU Decision Based on Latent SAD Estimation

Jian Xiong; Hongliang Li; Fanman Meng; Qingbo Wu; King Ngi Ngan

The emerging High Efficiency Video Coding (HEVC) standard has improved compression performance significantly in comparison with H.264/AVC. However, substantially higher computational complexity has been introduced by a number of new coding tools. In this paper, a fast inter CU decision is proposed based on latent sum of absolute differences (SAD) estimation. First, a two-layer motion estimation (ME) method is designed to take advantage of the latent SAD cost; the new ME method obtains the SAD costs for both an upper CU and its sub-CUs. Second, the concept of motion compensation rate-distortion (R-D) cost is defined, and an exponential model is proposed to express the relationship between the motion compensation R-D cost and the SAD cost. A fast CU decision approach is then designed based on this exponential model: the decision is made by comparing a derived threshold with the difference between the upper and sub-CU SAD costs. Experimental results show that the proposed algorithm achieves average reductions in coding time of 52% and 58.4% at the cost of 1.61% and 2% bit-rate increases under the low delay and random access conditions, respectively.
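The threshold test at the heart of the scheme can be sketched as below. The exponential model's parameters `a` and `b` and the threshold value are illustrative placeholders; in the paper the threshold is derived from the fitted model rather than fixed by hand.

```python
import math

def rd_cost_from_sad(sad, a=1.0, b=0.002):
    """Hypothetical exponential mapping from a SAD cost to a motion
    compensation R-D cost, J ≈ a * exp(b * SAD); a and b would be
    fitted offline from encoded training sequences."""
    return a * math.exp(b * sad)

def decide_split(sad_upper, sad_subs, threshold):
    """Split the CU only when splitting reduces the SAD cost by more
    than the derived threshold; otherwise keep the upper CU and skip
    the sub-CU rate-distortion search entirely."""
    return (sad_upper - sum(sad_subs)) > threshold

print(decide_split(1000, [200, 200, 200, 200], 150))  # → True  (big SAD saving)
print(decide_split(1000, [240, 240, 240, 240], 150))  # → False (saving too small)
```

Because both SAD costs come from the same two-layer ME pass, the test costs almost nothing compared with the full recursive R-D search it replaces.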


IEEE Transactions on Image Processing | 2013

Feature Adaptive Co-Segmentation by Complexity Awareness

Fanman Meng; Hongliang Li; King Ngi Ngan; Liaoyuan Zeng; Qingbo Wu

In this paper, we propose a novel feature adaptive co-segmentation method that can learn adaptive features of different image groups for accurate segmentation of common objects. We also propose image complexity awareness for adaptive feature learning. In the proposed method, the original images are first ranked according to image complexities measured by a superpixel changing cue and an object detection cue. Then, the unsupervised segments of the simple images are used to learn the adaptive features, which are obtained using an expectation-minimization algorithm that combines l1-regularized least squares optimization with consideration of the confidence in the simple images' segmentation accuracies and the fitness of the learned model. Experiments on different image groups verify that the error rate of the final co-segmentation is lower than that of existing state-of-the-art co-segmentation methods.

Collaboration


Dive into Fanman Meng's collaborations.

Top Co-Authors

Qingbo Wu, University of Electronic Science and Technology of China
Hongliang Li, University of Electronic Science and Technology of China
King Ngi Ngan, The Chinese University of Hong Kong
Bing Luo, University of Electronic Science and Technology of China
Chao Huang, University of Electronic Science and Technology of China
King N. Ngan, University of Electronic Science and Technology of China
Bing Zeng, University of Electronic Science and Technology of China
Tiecheng Song, Chongqing University of Posts and Telecommunications
Jianfei Cai, Nanyang Technological University
Shuyuan Zhu, University of Electronic Science and Technology of China