Wenqi Ren
Tianjin University
Publication
Featured research published by Wenqi Ren.
european conference on computer vision | 2016
Wenqi Ren; Si Liu; Hua Zhang; Jinshan Pan; Xiaochun Cao; Ming-Hsuan Yang
The performance of existing image dehazing methods is limited by hand-designed features, such as the dark channel, color disparity and maximum contrast, combined through complex fusion schemes. In this paper, we propose a multi-scale deep neural network for single-image dehazing by learning the mapping between hazy images and their corresponding transmission maps. The proposed algorithm consists of a coarse-scale net which predicts a holistic transmission map based on the entire image, and a fine-scale net which refines results locally. To train the multi-scale deep network, we synthesize a dataset composed of hazy images and corresponding transmission maps based on the NYU Depth dataset. Extensive experiments demonstrate that the proposed algorithm performs favorably against the state-of-the-art methods on both synthetic and real-world images in terms of quality and speed.
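A minimal sketch of how an estimated transmission map is used for dehazing, assuming the standard atmospheric scattering model I(x) = J(x)t(x) + A(1 - t(x)); the network that predicts the transmission is not reproduced here, and the airlight estimator below is a simple placeholder, not the paper's method.

```python
# Recover scene radiance J from a hazy image I and an estimated transmission t,
# by inverting the atmospheric scattering model I = J*t + A*(1 - t).
import numpy as np

def recover_radiance(hazy, t, airlight, t_min=0.1):
    """J = (I - A) / max(t, t_min) + A; t_min avoids division by near-zero."""
    t = np.clip(t, t_min, 1.0)[..., None]            # H x W -> H x W x 1
    return np.clip((hazy - airlight) / t + airlight, 0.0, 1.0)

def estimate_airlight(hazy, top_fraction=0.001):
    """Crude airlight estimate: mean color of the brightest pixels (illustrative)."""
    gray = hazy.mean(axis=2)
    k = max(1, int(top_fraction * gray.size))
    idx = np.argpartition(gray.ravel(), -k)[-k:]
    return hazy.reshape(-1, 3)[idx].mean(axis=0)

# hazy: H x W x 3 float image in [0, 1]; t: H x W transmission map in (0, 1]
# J = recover_radiance(hazy, t, estimate_airlight(hazy))
```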
IEEE Transactions on Image Processing | 2016
Wenqi Ren; Xiaochun Cao; Jinshan Pan; Xiaojie Guo; Wangmeng Zuo; Ming-Hsuan Yang
Low-rank matrix approximation has been successfully applied to numerous vision problems in recent years. In this paper, we propose a novel low-rank prior for blind image deblurring. Our key observation is that directly applying a simple low-rank model to a blurry input image significantly reduces the blur even without using any kernel information, while preserving important edge information. The same model can be used to reduce blur in the gradient map of a blurry input. Based on these properties, we introduce an enhanced prior for image deblurring by combining the low-rank prior of similar patches from both the blurry image and its gradient map. We employ a weighted nuclear norm minimization method to further enhance the effectiveness of the low-rank prior for image deblurring, by retaining the dominant edges and eliminating fine texture and slight edges in intermediate images, allowing for better kernel estimation. In addition, we evaluate the proposed enhanced low-rank prior for both uniform and non-uniform deblurring. Quantitative and qualitative experimental evaluations demonstrate that the proposed algorithm performs favorably against the state-of-the-art deblurring methods.
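A hedged sketch of the weighted nuclear norm minimization step on a matrix of similar patches: weighted singular-value thresholding keeps the dominant structure while suppressing small singular values (fine texture and slight edges). The weighting rule and the constant c below are illustrative choices, not the paper's exact settings.

```python
# Weighted nuclear norm minimization (WNNM) on a patch matrix via
# weighted singular-value soft-thresholding.
import numpy as np

def wnnm(patch_matrix, c=2.8, eps=1e-8):
    """Shrink the singular values of a matrix of vectorized similar patches."""
    u, s, vt = np.linalg.svd(patch_matrix, full_matrices=False)
    n = patch_matrix.shape[1]                 # number of similar patches (columns)
    weights = c * np.sqrt(n) / (s + eps)      # larger singular values get smaller penalties
    s_shrunk = np.maximum(s - weights, 0.0)   # soft-threshold the spectrum
    return (u * s_shrunk) @ vt

# Stacking vectorized similar patches from the blurry image (and from its
# gradient map) column-wise and applying `wnnm` gives the low-rank
# intermediate image used for kernel estimation.
```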
IEEE Transactions on Image Processing | 2015
Xiaochun Cao; Wenqi Ren; Wangmeng Zuo; Xiaojie Guo; Hassan Foroosh
Texts in natural scenes carry critical semantic clues for understanding images. When capturing natural scene images, especially with handheld cameras, a common artifact, i.e., blur, frequently occurs. To improve the visual quality of such images, deblurring techniques are desired, which also play an important role in character recognition and image understanding. In this paper, we study the problem of recovering clear scene text by exploiting the characteristics of text fields. A series of text-specific multiscale dictionaries (TMD) and a natural scene dictionary are learned to separately model the priors on the text and nontext fields. The TMD-based text field reconstruction helps to deal effectively with the different scales of strings in a blurry image. Furthermore, an adaptive version of the nonuniform deblurring method is proposed to efficiently solve the real-world spatially varying problem. Dictionary learning allows more flexible modeling with respect to the text field property, and the combination with the nonuniform method is more appropriate in real situations where blur kernel sizes are depth dependent. Experimental results show that the proposed method achieves deblurring results with better visual quality than the state-of-the-art methods.
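A hedged sketch of the dictionary-learning ingredient only: learning a patch dictionary at one scale and sparsely reconstructing text patches with it. The multiscale text dictionaries (TMD) and the nonuniform deblurring machinery of the paper are not reproduced; the patch dimensionality, atom count, and sparsity level are illustrative.

```python
# Learn a patch dictionary and sparse-code patches over it (scikit-learn).
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

def learn_dictionary(patches, n_atoms=256, alpha=1.0):
    """patches: (n_patches, patch_dim) array of vectorized training patches."""
    learner = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=alpha)
    learner.fit(patches)
    return learner.components_                # (n_atoms, patch_dim) dictionary

def reconstruct(patches, dictionary, n_nonzero=5):
    """Sparse-code patches over the dictionary and rebuild them."""
    codes = sparse_encode(patches, dictionary,
                          algorithm='omp', n_nonzero_coefs=n_nonzero)
    return codes @ dictionary                 # sparse reconstruction of each patch
```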
computer vision and pattern recognition | 2017
Yanyang Yan; Wenqi Ren; Yuanfang Guo; Rui Wang; Xiaochun Cao
Camera motion introduces motion blur, affecting many computer vision tasks. The Dark Channel Prior (DCP) helps blind deblurring on scenes including natural, face, text, and low-illumination images. However, it has limitations and is less effective for kernel estimation when bright pixels dominate the input image. We observe that the bright pixels in clear images are not likely to remain bright after the blur process. Based on this observation, we first illustrate this phenomenon mathematically and define it as the Bright Channel Prior (BCP). Then, we propose a technique for deblurring such images which elevates the performance of existing motion deblurring algorithms. The proposed method takes advantage of both the Bright and Dark Channel Priors. This joint prior, named the Extreme Channels Prior, is crucial for achieving efficient restorations by leveraging both the bright and dark information. Extensive experimental results demonstrate that the proposed method is more robust and performs favorably against the state-of-the-art image deblurring methods on both synthesized and natural images.
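A minimal sketch of the two channel statistics the priors are built on: the dark channel (patch-wise minimum over pixels and color channels) and the bright channel (patch-wise maximum). The patch size is an illustrative choice, and the deblurring optimization itself is not shown.

```python
# Compute dark and bright channels of an RGB image with a sliding window.
import numpy as np
from scipy.ndimage import minimum_filter, maximum_filter

def dark_channel(img, patch=15):
    """D(x) = min over a patch around x of the per-pixel channel minimum."""
    return minimum_filter(img.min(axis=2), size=patch)

def bright_channel(img, patch=15):
    """B(x) = max over a patch around x of the per-pixel channel maximum."""
    return maximum_filter(img.max(axis=2), size=patch)

# For a clear natural image the dark channel is mostly near 0 and the bright
# channel mostly near 1; blur pushes both toward mid values, which is the
# property the extreme channels prior penalizes during kernel estimation.
```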
computer vision and pattern recognition | 2016
Hua Zhang; Si Liu; Changqing Zhang; Wenqi Ren; Rui Wang; Xiaochun Cao
In this study, we present a weakly supervised approach that discovers the discriminative structures of sketch images, given pairs of sketch images and web images. In contrast to traditional approaches that use global appearance features or rely on keypoint features, our aim is to automatically learn the shared latent structures that exist between sketch images and real images, even when there are significant appearance differences between a sketch and its relevant real images. To accomplish this, we propose a deep convolutional neural network, named SketchNet. We first form a triplet composed of a sketch, a positive real image, and a negative real image as the input to our neural network. To discover the coherent visual structures between the sketch and its positive pairs, we introduce the softmax as the loss function. A ranking mechanism is then introduced so that positive pairs obtain higher scores than negative ones, yielding a robust representation. Finally, we formalize the above constraints into a unified objective function, and create an ensemble feature representation to describe the sketch images. Experiments on the TU-Berlin sketch benchmark demonstrate the effectiveness of our model and show that deep feature representation brings substantial improvements over other state-of-the-art methods on sketch classification.
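A hedged sketch of the ranking idea only: given embeddings of a sketch, a positive real image, and a negative real image, the positive pair should score higher than the negative pair by a margin. The actual SketchNet architecture, its softmax term, and the ensemble representation are not reproduced; the margin and embedding size are illustrative.

```python
# Margin-based triplet ranking loss on precomputed embeddings.
import numpy as np

def triplet_ranking_loss(sketch, positive, negative, margin=0.2):
    """Hinge on the similarity gap: want sim(s, p) >= sim(s, n) + margin."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return max(0.0, margin - (cosine(sketch, positive) - cosine(sketch, negative)))

# Example with random stand-ins for the two network branches' embeddings:
# s, p, n = np.random.randn(3, 128)
# loss = triplet_ranking_loss(s, p, n)
```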
international conference on computer vision | 2017
Wenqi Ren; Jinshan Pan; Xiaochun Cao; Ming-Hsuan Yang
computer vision and pattern recognition | 2018
Wenqi Ren; Lin Ma; Jiawei Zhang; Jinshan Pan; Xiaochun Cao; Wei Liu; Ming-Hsuan Yang
IEEE Transactions on Information Forensics and Security | 2019
Yanyang Yan; Wenqi Ren; Xiaochun Cao
neural information processing systems | 2018
Wenqi Ren; Jiawei Zhang; Lin Ma; Jinshan Pan; Xiaochun Cao; Wangmeng Zuo; Wei Liu; Ming-Hsuan Yang
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2018
Jinshan Pan; Wenqi Ren; Zhe Hu; Ming-Hsuan Yang