Publication


Featured research published by Wenmin Wang.


IEEE MultiMedia | 2014

Local Stereo Matching with Improved Matching Cost and Disparity Refinement

Jianbo Jiao; Ronggang Wang; Wenmin Wang; Shengfu Dong; Zhenyu Wang; Wen Gao

Recent local stereo matching methods have achieved performance comparable to global methods. However, the final disparity map still contains significant outliers. In this article, the authors propose a local stereo matching method that employs a new combined cost approach and a secondary disparity refinement mechanism. They formulate the combined cost using a modified color census transform and truncated absolute differences of color and gradients. They also use a symmetric guided filter for cost aggregation. Unlike traditional stereo matching, they propose a novel secondary disparity refinement to further remove the remaining outliers. Experimental results on the Middlebury benchmark show that their method ranks fifth out of 153 submitted methods and is the best cost-volume filtering-based local method. Experiments on real-world sequences and depth-based applications also validate the proposed method's effectiveness.
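
As a rough illustration of the combined-cost idea described above, the sketch below (not the authors' implementation) computes a per-pixel matching cost from a census transform plus truncated absolute differences of color and gradient. The window size, weights, and truncation thresholds are assumed values, and the guided-filter aggregation and secondary disparity refinement are omitted.

```python
import numpy as np

def census_transform(gray, win=7):
    """Binary census descriptor: compare each pixel with its window neighbours."""
    h, w = gray.shape
    r = win // 2
    pad = np.pad(gray, r, mode='edge')
    bits = []
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = pad[r + dy:r + dy + h, r + dx:r + dx + w]
            bits.append((shifted < gray).astype(np.uint8))
    return np.stack(bits, axis=-1)                    # (h, w, win*win - 1)

def combined_cost(left, right, d, alpha=0.3, tau_color=21.0, tau_grad=7.0, lam=1.0):
    """Cost at candidate disparity d: census Hamming distance combined with truncated
    absolute differences (TAD) of color and horizontal gradient. Weights and truncation
    thresholds here are illustrative, not the paper's values."""
    left = left.astype(np.float32)
    right_shift = np.roll(right, d, axis=1).astype(np.float32)   # shift right view by d
    gl, gr = left.mean(axis=2), right_shift.mean(axis=2)

    hamming = (census_transform(gl) != census_transform(gr)).sum(axis=-1).astype(np.float32)
    tad_color = np.minimum(np.abs(left - right_shift).mean(axis=2), tau_color)
    tad_grad = np.minimum(np.abs(np.gradient(gl, axis=1) - np.gradient(gr, axis=1)), tau_grad)

    return lam * hamming + alpha * tad_color + (1.0 - alpha) * tad_grad
```

In a full pipeline, this cost volume would be aggregated with a guided filter over all candidate disparities before the winner-take-all and refinement steps.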


North American Chapter of the Association for Computational Linguistics | 2015

Clustering sentences with density peaks for multi-document summarization

Yang Zhang; Yunqing Xia; Yi Liu; Wenmin Wang

Multi-document summarization (MDS) is of great value to many real-world applications. Many scoring models have been proposed to select appropriate sentences from documents to form the summary, among which clustering-based methods are popular. In this work, we propose a unified sentence scoring model that measures representativeness and diversity at the same time. Experimental results on DUC04 demonstrate that our MDS method outperforms the DUC04 best method and the existing clustering-based methods, and it yields results close to the state-of-the-art generic MDS methods. The advantages of the proposed MDS method are two-fold: (1) the density peaks clustering algorithm is adopted for the first time, and it is effective and fast; (2) no external resources such as WordNet and Wikipedia, and no complex language parsing algorithms, are used, making reproduction and deployment very easy in real environments.
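
The abstract does not give the unified scoring formula, but the density peaks clustering idea it builds on can be sketched as follows. The sentence vectors, the cutoff distance d_c, and the rho * delta combination are assumptions for illustration only.

```python
import numpy as np

def density_peaks_scores(X, d_c=0.5):
    """Density-peaks sentence scoring sketch.
    X: (n, d) sentence vectors; d_c: cutoff distance (an assumed hyper-parameter).
    Returns one score per sentence combining representativeness (local density rho)
    with diversity (distance delta to the nearest denser sentence)."""
    n = X.shape[0]
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)   # pairwise distances

    rho = (dist < d_c).sum(axis=1) - 1            # local density, excluding the sentence itself

    delta = np.empty(n)
    for i in range(n):
        denser = np.where(rho > rho[i])[0]
        delta[i] = dist[i, denser].min() if denser.size else dist[i].max()

    # Unified score: sentences that are both dense (representative) and far from any
    # denser sentence (diverse) rank highest.
    return rho * delta

# Usage: rank sentences and take the top-k as the summary, e.g.
# scores = density_peaks_scores(sentence_vectors); summary_idx = scores.argsort()[::-1][:k]
```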


International Conference on Image Processing | 2015

A low-light image enhancement method for both denoising and contrast enlarging

Lin Li; Ronggang Wang; Wenmin Wang; Wen Gao

In this paper, a novel unified low-light image enhancement framework for both contrast enhancement and denoising is proposed. First, the low-light image is segmented into superpixels, and the ratio between the local standard deviation and the local gradients is utilized to estimate the noise-texture level of each superpixel. The image is then inverted for processing in the following steps. Based on the noise-texture level, a smooth base layer is adaptively extracted by the BM3D filter, and a detail layer is extracted from the first-order differential of the inverted image and smoothed with a structural filter. These two layers are adaptively combined to obtain a noise-free, detail-preserving image. Finally, an adaptive enhancement parameter is incorporated into the dark channel prior dehazing process to enlarge contrast and prevent over- or under-enhancement. Experimental results demonstrate that our proposed method outperforms traditional methods in both subjective and objective assessments.
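
A minimal sketch of the first step, estimating a per-superpixel noise-texture level as the ratio of local standard deviation to local gradient magnitude. The choice of SLIC superpixels, the segment count, and the window size are assumptions; the BM3D layering and dehazing stages are only indicated in comments.

```python
import numpy as np
from scipy import ndimage
from skimage.color import rgb2gray
from skimage.segmentation import slic

def noise_texture_levels(img, n_segments=300, win=5):
    """Per-superpixel noise-texture level: ratio of the local standard deviation to the
    local gradient magnitude (higher values suggest noise rather than texture).
    Superpixel count and window size are assumed values."""
    gray = rgb2gray(img)
    labels = slic(img, n_segments=n_segments)

    local_mean = ndimage.uniform_filter(gray, size=win)
    local_sq = ndimage.uniform_filter(gray ** 2, size=win)
    local_std = np.sqrt(np.maximum(local_sq - local_mean ** 2, 0.0))

    gy, gx = np.gradient(gray)
    grad_mag = np.hypot(gx, gy) + 1e-6

    levels = {lab: local_std[labels == lab].mean() / grad_mag[labels == lab].mean()
              for lab in np.unique(labels)}
    return labels, levels

# The inverted image (1 - img) would then be split into a BM3D base layer and a
# gradient-based detail layer guided by these levels, recombined, and passed through
# dark channel prior dehazing with an adaptive enhancement parameter.
```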


Journal of Visual Communication and Image Representation | 2016

Spatially variant defocus blur map estimation and deblurring from a single image

Xinxin Zhang; Ronggang Wang; Xiubao Jiang; Wenmin Wang; Wen Gao

Highlights: A blur map estimation method using edge information is proposed. The blur map is segmented into multiple superpixels according to the image contours. Ringing artifacts and noise are detected and removed after deconvolution.

In this paper, we propose a single image deblurring algorithm to remove spatially variant defocus blur based on the estimated blur map. Firstly, we estimate the blur map from a single image by utilizing edge information and K nearest neighbors (KNN) matting interpolation. Secondly, the local kernels are derived by segmenting the blur map according to the blur amount of local regions and the image contours. Thirdly, we adopt a BM3D-based non-blind deconvolution algorithm to restore the latent image. Finally, ringing artifacts and noise are detected and removed to obtain a high-quality in-focus image. Experimental results on real defocus-blurred images demonstrate that our proposed algorithm outperforms some state-of-the-art approaches.
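
The abstract does not spell out the edge-based estimator, so the sketch below uses a common re-blur formulation (gradient-magnitude ratio at edges) as a stand-in for the first step; the edge threshold and re-blur sigma are assumptions, and the KNN matting propagation and BM3D deconvolution stages are omitted.

```python
import numpy as np
from scipy import ndimage

def sparse_blur_map(gray, sigma0=1.0, edge_thresh=0.05):
    """Edge-based defocus blur estimate (a common formulation; the paper's exact variant
    may differ). Re-blur with a known Gaussian and use the gradient-magnitude ratio at
    edges to recover the local blur sigma."""
    reblurred = ndimage.gaussian_filter(gray, sigma0)

    def grad_mag(im):
        gy, gx = np.gradient(im)
        return np.hypot(gx, gy)

    m, m_re = grad_mag(gray), grad_mag(reblurred)
    edges = m > edge_thresh                        # crude edge mask; threshold is assumed

    ratio = np.where(edges, m / (m_re + 1e-8), 0.0)
    ratio = np.clip(ratio, 1.0 + 1e-3, None)
    sigma = np.where(edges, sigma0 / np.sqrt(ratio ** 2 - 1.0), 0.0)
    return sigma                                   # sparse map, defined only at edge pixels

# The sparse estimates would then be propagated to the full image with KNN matting,
# segmented into regions for local kernels, and used for BM3D-based non-blind
# deconvolution, as described in the abstract.
```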


International Symposium on Circuits and Systems | 2015

Fast intra mode decision algorithm based on refinement in HEVC

Longfei Gao; Shengfu Dong; Wenmin Wang; Ronggang Wang; Wen Gao

High Efficiency Video Coding (HEVC) is the next-generation video compression standard, providing significantly improved coding performance. It adopts 35 intra prediction modes with larger CU sizes to improve intra encoding efficiency, which causes high computational complexity. In this paper, two fast intra-prediction algorithms are proposed to reduce the number of candidate modes for rate-distortion (RD) optimization. We obtain an optimal adjacent modes (OAM) list consisting of dominant directions through an analysis of the costs of several general direction modes. Furthermore, we improve the most probable mode (MPM) algorithm to make full use of the spatial correlation between neighbouring prediction blocks instead of simply merging the prediction modes of neighbouring prediction blocks into the candidate list. Experimental results show that the proposed algorithms can reduce encoding time by about 27.3% compared to HEVC test model 14.0, while the decrease in coding quality is negligible.
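
A sketch of how such a refinement-based candidate list could be assembled. Mode indices follow HEVC's numbering (0 planar, 1 DC, 2 to 34 angular); the coarse mode subset, refinement step, and list size are illustrative assumptions rather than the paper's exact algorithm.

```python
def build_candidate_modes(rough_cost, left_mode, above_mode, step=4):
    """Sketch of a refinement-based intra candidate list.
    rough_cost: dict mapping a coarse subset of the 35 HEVC intra modes to their
    SATD-based costs; left_mode / above_mode: modes of neighbouring prediction blocks."""
    # 1. Coarse stage: pick the best of the sparsely sampled angular directions.
    best = min(rough_cost, key=rough_cost.get)

    # 2. Refinement: adjacent angular modes around the dominant direction (the OAM idea).
    candidates = {best}
    if best >= 2:                                  # modes 0/1 are planar/DC, not angular
        for d in (-step // 2, step // 2, -1, 1):
            m = best + d
            if 2 <= m <= 34:
                candidates.add(m)

    # 3. Always keep planar and DC, plus most-probable modes from spatial neighbours.
    candidates.update({0, 1})
    for m in (left_mode, above_mode):
        if m is not None:
            candidates.add(m)
    return sorted(candidates)                      # modes forwarded to RD optimization

# Example with assumed coarse costs:
# build_candidate_modes({0: 900, 1: 950, 10: 700, 26: 640, 34: 810},
#                       left_mode=27, above_mode=26)
```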


Neurocomputing | 2016

Local Quantization Code histogram for texture classification

Yang Zhao; Ronggang Wang; Wenmin Wang; Wen Gao

In this paper, an efficient local operator, namely the Local Quantization Code (LQC), is proposed for texture classification. The conventional local binary pattern can be regarded as a special local quantization method with two levels, 0 and 1. Some variants of the LBP demonstrate that increasing the local quantization level can enhance the local discriminative capability. Hence, we present a simple and unified framework to validate the performance of different local quantization levels. In the proposed LQC, pixels located in different quantization levels are counted separately, and the average local gray value difference is adopted to set a series of quantization thresholds. Extensive experiments are carried out on several challenging texture databases. The experimental results demonstrate that the LQC with an appropriate local quantization level can effectively characterize the local gray-level distribution.
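
A minimal sketch of the quantization idea as described above: neighbour-centre differences are quantized into several levels using thresholds derived from the average local gray-value difference, and the per-level counts form the descriptor. The 8-neighbour sampling and the exact threshold spacing are assumptions.

```python
import numpy as np

def lqc_histogram(gray, levels=4, radius=1):
    """Local Quantization Code sketch: quantize neighbour-centre differences into
    `levels` bins and histogram how many neighbours fall into each bin."""
    h, w = gray.shape
    r = radius
    pad = np.pad(gray.astype(np.float32), r, mode='edge')
    offsets = [(-r, -r), (-r, 0), (-r, r), (0, r), (r, r), (r, 0), (r, -r), (0, -r)]

    diffs = np.stack([pad[r + dy:r + dy + h, r + dx:r + dx + w] - gray
                      for dy, dx in offsets], axis=-1)               # (h, w, 8)

    # Thresholds scaled by the average local gray-value difference (assumed spacing).
    mu = np.abs(diffs).mean()
    thresholds = [(k - levels / 2) * mu for k in range(1, levels)]   # levels-1 cut points

    codes = np.digitize(diffs, thresholds)                           # values in [0, levels-1]

    # Count, per quantization level, how many of the 8 neighbours fall in that level,
    # then accumulate over the whole image to form the LQC histogram.
    hist = np.stack([(codes == k).sum(axis=-1) for k in range(levels)], axis=-1)
    return hist.reshape(-1, levels).sum(axis=0)
```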


Computer Analysis of Images and Patterns | 2017

A Multilayer Backpropagation Saliency Detection Algorithm Based on Depth Mining

Chunbiao Zhu; Ge Li; Xiaoqiang Guo; Wenmin Wang; Ronggang Wang

Saliency detection is an active topic in the multimedia field, and several algorithms have been proposed. Most previous work on saliency detection focuses on 2D images. However, in complex situations containing multiple objects or cluttered backgrounds, these methods are not robust and their performance is unsatisfactory. Recently, 3D visual information has supplied a powerful cue for saliency detection. In this paper, we propose a multilayer backpropagation saliency detection algorithm based on depth mining, by which we exploit the depth cue from four different layers of images. The evaluation of the proposed algorithm on two challenging datasets shows that our algorithm outperforms the state of the art.


ACM Multimedia | 2017

Video Imagination from a Single Image with Transformation Generation

Baoyang Chen; Wenmin Wang; Jinzhuo Wang

In this work, we focus on a challenging task: synthesizing multiple imaginary videos given a single image. The major problems come from the high dimensionality of pixel space and the ambiguity of potential motions. To overcome these problems, we propose a new framework that produces imaginary videos by transformation generation. The generated transformations are applied to the original image in a novel volumetric merge network to reconstruct the frames of the imaginary video. By sampling different latent variables, our method can output different imaginary video samples. The framework is trained in an adversarial way with unsupervised learning. For evaluation, we propose a new assessment metric, RIQA. In experiments, we test on three datasets varying from synthetic data to natural scenes. Our framework achieves promising performance in image quality assessment, and visual inspection indicates that it can successfully generate diverse five-frame videos of acceptable perceptual quality.
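
To make the transformation-generation idea concrete, here is a minimal PyTorch sketch in which a latent code is mapped to per-frame warp parameters that are applied to the input image. The paper's volumetric merge network and adversarial training are omitted, and all layer sizes and class names are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TransformationGenerator(nn.Module):
    """Sketch: map a latent code z to one affine transformation per future frame and
    warp the input image accordingly (volumetric merging and the discriminator omitted)."""
    def __init__(self, z_dim=64, n_frames=5):
        super().__init__()
        self.n_frames = n_frames
        self.mlp = nn.Sequential(
            nn.Linear(z_dim, 128), nn.ReLU(),
            nn.Linear(128, n_frames * 6),          # a 2x3 affine matrix per frame
        )

    def forward(self, image, z):
        b, c, h, w = image.shape
        theta = self.mlp(z).view(b * self.n_frames, 2, 3)
        # Bias towards the identity transform so samples stay close to the input early on.
        identity = torch.tensor([[1., 0., 0.], [0., 1., 0.]], device=image.device)
        theta = identity + 0.1 * theta

        grid = F.affine_grid(theta, (b * self.n_frames, c, h, w), align_corners=False)
        frames = F.grid_sample(
            image.repeat_interleave(self.n_frames, dim=0), grid, align_corners=False)
        return frames.view(b, self.n_frames, c, h, w)   # one imagined clip per latent sample

# Sampling different z vectors yields different imagined videos for the same input image.
```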


IEEE Transactions on Circuits and Systems for Video Technology | 2016

Multilevel Modified Finite Radon Transform Network for Image Upsampling

Yang Zhao; Ronggang Wang; Wenmin Wang; Wen Gao

Local line-like features are the most important discriminative information in the image upsampling scenario. Recent example-based upsampling methods often adopt grayscale and gradient features to describe local patches, but these simple features cannot accurately characterize complex patches. In this paper, we present a feature representation of local edges by means of a multilevel filtering network, namely the multilevel modified finite Radon transform network (MMFRTN). In the proposed MMFRTN, the MFRT is utilized in the filtering layer to extract local line-like features; the nonlinear layer is set to be a simple local binary process; and for the feature-pooling layer, we concatenate the mapped patches as the feature of the local patch. We then propose a new example-based upsampling method based on the MMFRTN feature. Experimental results demonstrate the effectiveness of the proposed method over some state-of-the-art methods.
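
A rough sketch of one filtering-plus-nonlinearity stage in the spirit described above: line-sum (finite-Radon-like) responses along a few directions, followed by a simple binary comparison. The kernel size, the set of directions, and the binarization rule are assumptions and are not taken from the paper.

```python
import numpy as np
from scipy import ndimage

def mfrt_layer(gray, size=5):
    """One filtering layer sketch: directional line-sum responses followed by a simple
    binary nonlinearity, approximating the MFRT filtering + binary layers."""
    def line_kernel(direction):
        k = np.zeros((size, size), dtype=np.float32)
        c = size // 2
        for t in range(-c, c + 1):
            if direction == 'h':   k[c, c + t] = 1.0          # horizontal line
            elif direction == 'v': k[c + t, c] = 1.0          # vertical line
            elif direction == 'd': k[c + t, c + t] = 1.0      # main diagonal
            else:                  k[c + t, c - t] = 1.0      # anti-diagonal
        return k / size

    responses = [ndimage.convolve(gray.astype(np.float32), line_kernel(d), mode='nearest')
                 for d in ('h', 'v', 'd', 'a')]
    stack = np.stack(responses, axis=-1)                       # (h, w, 4) line responses

    # Binary nonlinear layer: compare each directional response against the local mean.
    bits = (stack > stack.mean(axis=-1, keepdims=True)).astype(np.uint8)
    return bits  # the pooling layer would concatenate these maps over local patches
```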


International Symposium on Circuits and Systems | 2015

Context-adaptive fast motion estimation of HEVC

Xufeng Li; Ronggang Wang; Xiaole Cui; Wenmin Wang

High Efficiency Video Coding (HEVC) is the latest coding standard with superior compression efficiency, but its encoding complexity is much higher than that of H.264/AVC. Motion estimation is one of the most time-consuming parts of video coding. In the reference software of HEVC, the TZ (Test Zone) search method is adopted as the fast motion estimation method; however, its complexity is still high. There are many other fast motion estimation methods, for example the hexagon search method, but their performance loss is larger than that of TZ search. In order to balance coding speed and performance, a new context-adaptive fast motion estimation algorithm is proposed in this paper. In the proposed algorithm, motion intensity is defined at the block level, and the motion vectors and motion vector differences of neighboring blocks are utilized to measure it. When the motion intensity is large, the TZ search method is used; otherwise, the hexagon search method is used. Experimental results show that the proposed method can save 39%-60% of motion estimation time with an average BD-rate loss of 0.5%.
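
The context-adaptive decision itself can be sketched in a few lines; the intensity measure (mean absolute MV plus MVD magnitude of neighbouring blocks) and the threshold are illustrative assumptions, not the paper's exact definition.

```python
def choose_search_method(neighbor_mvs, neighbor_mvds, threshold=8):
    """Sketch of the context-adaptive choice between TZ search and hexagon search,
    driven by a block-level motion intensity estimated from neighbouring blocks'
    motion vectors (MVs) and motion vector differences (MVDs)."""
    def mean_magnitude(vectors):
        return sum(abs(x) + abs(y) for x, y in vectors) / max(len(vectors), 1)

    intensity = mean_magnitude(neighbor_mvs) + mean_magnitude(neighbor_mvds)
    # Intense motion -> the slower but stronger TZ search; calm motion -> hexagon search.
    return 'TZ_search' if intensity > threshold else 'hexagon_search'

# Example: a calm background block falls back to hexagon search.
# choose_search_method([(1, 0), (0, 1)], [(0, 0), (1, 0)])  -> 'hexagon_search'
```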
