Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Zhe Hu is active.

Publication


Featured research published by Zhe Hu.


Computer Vision and Pattern Recognition | 2014

Deblurring Text Images via L0-Regularized Intensity and Gradient Prior

Jinshan Pan; Zhe Hu; Zhixun Su; Ming-Hsuan Yang

We propose a simple yet effective L0-regularized prior based on intensity and gradient for text image deblurring. The proposed image prior is motivated by observing distinct properties of text images. Based on this prior, we develop an efficient optimization method to generate reliable intermediate results for kernel estimation. The proposed method does not require any complex filtering strategies to select salient edges which are critical to the state-of-the-art deblurring algorithms. We discuss the relationship with other deblurring algorithms based on edge selection and provide insight on how to select salient edges in a more principled way. In the final latent image restoration step, we develop a simple method to remove artifacts and render better deblurred images. Experimental results demonstrate that the proposed algorithm performs favorably against the state-of-the-art text image deblurring methods. In addition, we show that the proposed method can be effectively applied to deblur low-illumination images.
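
For readers who want the flavor of the formulation, a typical L0-regularized blind deconvolution energy of this kind looks as follows (the exact weighting and definitions are those given in the paper and may differ in detail):

    \min_{x,k} \ \|x \otimes k - y\|_2^2 + \gamma \|k\|_2^2 + \lambda \left( \sigma \|x\|_0 + \|\nabla x\|_0 \right)

Here y is the blurred input, x the latent image, k the blur kernel, and the L0 terms favor the nearly piecewise-constant intensities and sparse gradients that distinguish text images from natural scenes.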


European Conference on Computer Vision | 2012

Good regions to deblur

Zhe Hu; Ming-Hsuan Yang

The goal of single image deblurring is to recover both a latent clear image and an underlying blur kernel from one input blurred image. Recent works focus on exploiting natural image priors or additional image observations for deblurring, but pay less attention to the influence of image structures on estimating blur kernels. What is the useful image structure and how can one select good regions for deblurring? We formulate the problem of learning good regions for deblurring within the Conditional Random Field framework. To better compare blur kernels, we develop an effective similarity metric for labeling training samples. The learned model is able to predict good regions from an input blurred image for deblurring without user guidance. Qualitative and quantitative evaluations demonstrate that good regions can be selected by the proposed algorithms for effective image deblurring.
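
The paper's learned similarity metric for blur kernels is not reproduced here, but the underlying idea can be illustrated with a common baseline: the maximum normalized cross-correlation over all relative shifts, which rates two kernels as similar even when they differ by a translation. A minimal Python sketch (the function name and normalization are illustrative, not the paper's definition):

    import numpy as np
    from scipy.signal import correlate2d

    def kernel_similarity(k1, k2):
        """Maximum normalized cross-correlation between two blur kernels.

        Correlation is evaluated over all relative shifts so that kernels
        differing only by a translation still score close to 1.
        """
        corr = correlate2d(k1, k2, mode="full")
        denom = np.linalg.norm(k1) * np.linalg.norm(k2)
        return corr.max() / denom if denom > 0 else 0.0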


International Conference on Image Processing | 2010

Single image deblurring with adaptive dictionary learning

Zhe Hu; Jia-Bin Huang; Ming-Hsuan Yang

We propose a motion deblurring algorithm that exploits sparsity constraints of image patches using one single frame. In our formulation, each image patch is encoded with sparse coefficients using an over-complete dictionary. The sparsity constraints facilitate recovering the latent image without solving an ill-posed deconvolution problem. In addition, the dictionary is learned and updated directly from one single frame without using additional images. The proposed method iteratively utilizes sparsity constraints to recover the latent image, estimates the blur kernel, and updates the dictionary directly from one single image. The final deblurred image is then recovered once the blur kernel is estimated using our method. Experiments show that the proposed algorithm achieves favorable results against the state-of-the-art methods.
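
As a rough illustration of the patch-based sparse coding ingredient (not the paper's adaptive, iteratively updated dictionary), an over-complete dictionary can be learned from patches of the single input frame and each patch encoded with sparse coefficients, for example with scikit-learn:

    import numpy as np
    from sklearn.feature_extraction.image import extract_patches_2d
    from sklearn.decomposition import MiniBatchDictionaryLearning

    def sparse_code_patches(image, patch_size=(8, 8), n_atoms=256):
        """Learn an over-complete dictionary from one frame and encode its patches."""
        patches = extract_patches_2d(image, patch_size, max_patches=5000, random_state=0)
        X = patches.reshape(len(patches), -1).astype(np.float64)
        X -= X.mean(axis=1, keepdims=True)  # remove the DC component of each patch
        dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0,
                                           batch_size=64, random_state=0)
        dico.fit(X)                    # dictionary learned from this image alone
        codes = dico.transform(X)      # sparse coefficients for every patch
        return dico.components_, codes

In the paper, the dictionary is further updated within the iterative procedure as the latent image and kernel estimates improve, which is what makes it adaptive.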


British Machine Vision Conference | 2012

Fast Non-uniform Deblurring using Constrained Camera Pose Subspace

Zhe Hu; Ming-Hsuan Yang

Recent non-uniform deblurring algorithms model the blurred image B as a weighted sum of transformed copies of the latent image, B = Σ_{θ∈S} w_θ K_θ L + N, where K_θ is the matrix that warps the latent image L to its transformed copy at a sampled pose θ, S denotes the set of sampled camera poses, w_θ are the pose weights, and N is noise. While these algorithms show promising results, they entail high computational cost, as both the high-dimensional camera motion space and the latent image have to be estimated during the iterative optimization procedure. In this paper, we propose a fast single-image deblurring algorithm to remove non-uniform blur. We first introduce an initialization method that facilitates convergence and avoids local minima of the formulated optimization problem. We then propose a new camera motion estimation method that optimizes over a small set of pose weights in a constrained camera pose subspace at a time, rather than over the entire space. We develop an iterative method to refine the camera motion estimate and introduce perturbation at each iteration to obtain robust solutions. Fig. 1 of the paper summarizes the main steps of the method.
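
To make the model concrete, the following sketch synthesizes a non-uniformly blurred image as a weighted sum of homography-warped copies of a latent image; the pose sampling and weights here are purely illustrative and are not the constrained pose subspace used in the paper:

    import numpy as np
    import cv2

    def nonuniform_blur(latent, poses, weights):
        """B = sum over theta of w_theta * (K_theta L): weighted sum of warped copies.

        poses   : list of 3x3 homographies, one per sampled camera pose
        weights : matching nonnegative pose weights
        """
        h, w = latent.shape[:2]
        blurred = np.zeros(latent.shape, dtype=np.float64)
        for H, wt in zip(poses, weights):
            warped = cv2.warpPerspective(latent.astype(np.float64), H, (w, h))
            blurred += wt * warped
        return blurred / max(float(sum(weights)), 1e-8)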


European Conference on Computer Vision | 2014

Deblurring Face Images with Exemplars

Jinshan Pan; Zhe Hu; Zhixun Su; Ming-Hsuan Yang

The human face is one of the most interesting subjects involved in numerous applications. Significant progress has been made on the image deblurring problem; however, existing generic deblurring methods are not able to achieve satisfactory results on blurry face images. The success of the state-of-the-art image deblurring methods stems mainly from implicit or explicit restoration of salient edges for kernel estimation. When there is not much texture in the blurry image (e.g., face images), existing methods are less effective as only a few edges can be used for kernel estimation. Moreover, recent methods are usually jeopardized by selecting ambiguous edges, which are imaged from the same edge of the object after blur, for kernel estimation due to local edge selection strategies. In this paper, we address these problems of deblurring face images by exploiting facial structures. We propose a maximum a posteriori (MAP) deblurring algorithm based on an exemplar dataset, without using the coarse-to-fine strategy or ad-hoc edge selection. Extensive evaluations against state-of-the-art methods demonstrate the effectiveness of the proposed algorithm for deblurring face images. We also show that the proposed method can be extended to other specific deblurring tasks.


Computer Vision and Pattern Recognition | 2014

Joint Depth Estimation and Camera Shake Removal from Single Blurry Image

Zhe Hu; Li Xu; Ming-Hsuan Yang

Camera shake during exposure often results in a spatially variant blur effect in the image. This non-uniform blur is caused not only by the camera motion but also by the depth variation of the scene: objects close to the camera are likely to appear more blurry than those at a distance. However, recent non-uniform deblurring methods do not explicitly consider the depth factor, or assume fronto-parallel scenes with constant depth for simplicity. While single-image non-uniform deblurring is a challenging problem, the blurry image in fact contains depth information that can be exploited. We propose to jointly estimate scene depth and remove non-uniform blur caused by camera motion by exploiting their underlying geometric relationship, with only a single blurry image as input. To this end, we present a unified layer-based model for depth-involved deblurring. We provide a novel layer-based solution that uses matting to partition the layers and an expectation-maximization scheme to solve the resulting problem. This approach largely reduces the number of unknowns and makes the problem tractable. Experiments on challenging examples demonstrate that both depth estimation and camera shake removal can be well addressed within the unified framework.
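
The geometric relationship the method exploits can be stated in one line: for a camera translation t during exposure, a scene point at depth Z shifts by roughly

    \Delta u \approx \frac{f\,t}{Z}

pixels in the image (f is the focal length), so nearby objects trace longer blur streaks than distant ones. This standard parallax relation is given here only as intuition; the paper's layer-based model couples depth and camera motion more completely.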


Computer Vision and Pattern Recognition | 2016

Soft-Segmentation Guided Object Motion Deblurring

Jinshan Pan; Zhe Hu; Zhixun Su; Hsin-Ying Lee; Ming-Hsuan Yang

Object motion blur is a challenging problem, as the foreground and the background in a scene undergo different types of image degradation due to movements in various directions and at various speeds. Most object motion deblurring methods address this problem by segmenting the blurred image into regions where different kernels are estimated and applied for restoration. Segmentation on blurred images is difficult due to ambiguous pixels between regions, but it plays an important role in object motion deblurring. To address these problems, we propose a novel model for object motion deblurring. The proposed model is developed based on a maximum a posteriori formulation in which soft-segmentation is incorporated for object layer estimation. We propose an efficient algorithm to jointly estimate object segmentation and camera motion, where each layer can be deblurred well under the guidance of the soft-segmentation. Experimental results demonstrate that the proposed algorithm performs favorably against the state-of-the-art object motion deblurring methods on challenging scenarios.
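
Schematically, and with notation that is illustrative rather than the paper's exact formulation, the observed image can be viewed as a soft composition of per-layer blurs:

    B = \sum_{l} w_l \odot (k_l \otimes L_l) + n, \qquad \sum_{l} w_l(\mathbf{p}) = 1 \ \text{at every pixel } \mathbf{p}

where w_l is the soft-segmentation weight map of layer l, k_l its motion blur kernel, L_l its latent appearance, and n noise; letting the w_l take fractional values is what allows ambiguous boundary pixels to contribute partially to both foreground and background.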


Computer Vision and Pattern Recognition | 2016

Image Deblurring Using Smartphone Inertial Sensors

Zhe Hu; Lu Yuan; Stephen Lin; Ming-Hsuan Yang

Removing image blur caused by camera shake is an ill-posed problem, as both the latent image and the point spread function (PSF) are unknown. A recent approach to address this problem is to record camera motion through inertial sensors, i.e., gyroscopes and accelerometers, and then reconstruct spatially-variant PSFs from these readings. While this approach has been effective for high-quality inertial sensors, it has been infeasible for the inertial sensors in smartphones, which are of relatively low quality and present a number of challenging issues, including varying sensor parameters, high sensor noise, and calibration error. In this paper, we identify the issues that plague smartphone inertial sensors and propose a solution that successfully utilizes the sensor readings for image deblurring. With both the sensor data and the image itself, the proposed method is able to accurately estimate the sensor parameters online and also the spatially-variant PSFs for enhanced deblurring performance. The effectiveness of this technique is demonstrated in experiments on a popular mobile phone. With this approach, the quality of image deblurring can be appreciably raised on the most common of imaging devices.
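
A minimal sketch of the sensor-to-PSF idea, assuming a rotation-only camera model driven by gyroscope readings (the paper additionally uses accelerometer data, estimates sensor parameters online, and handles noise and calibration error; the function below is illustrative, not the paper's method):

    import numpy as np

    def camera_homographies(gyro_rates, timestamps, K):
        """Integrate gyroscope rates into per-sample image warps H(t) = K R(t) K^-1.

        gyro_rates : (N, 3) angular velocities in rad/s
        timestamps : (N,) sample times in seconds
        K          : 3x3 camera intrinsic matrix
        """
        R = np.eye(3)
        K_inv = np.linalg.inv(K)
        homographies = []
        for i in range(1, len(timestamps)):
            dt = timestamps[i] - timestamps[i - 1]
            wx, wy, wz = gyro_rates[i] * dt          # small-angle rotation increment
            dR = np.array([[1.0, -wz,  wy],
                           [ wz, 1.0, -wx],
                           [-wy,  wx, 1.0]])         # I + [w]_x, first-order rotation
            R = dR @ R
            homographies.append(K @ R @ K_inv)       # image-plane warp at this sample
        return homographies

The spatially-variant PSF at a pixel is then obtained by mapping that pixel through each warp over the exposure and accumulating the traced path.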


International Journal of Computer Vision | 2015

Learning Good Regions to Deblur Images

Zhe Hu; Ming-Hsuan Yang

The goal of single image deblurring is to recover both a latent clear image and an underlying blur kernel from one input blurred image. Recent methods focus on exploiting natural image priors or additional image observations for deblurring, but pay less attention to the influence of image structure on estimating blur kernels. What is the useful image structure and how can one select good regions for deblurring? We formulate the problem of learning good regions for deblurring within the conditional random field framework. To better compare blur kernels, we develop an effective similarity metric for labeling training samples. The learned model is able to predict good regions from an input blurred image for deblurring without user guidance. Qualitative and quantitative evaluations demonstrate that good regions can be selected by the proposed algorithms for effective single image deblurring.


Asian Conference on Computer Vision | 2016

Deblurring Low-Resolution Images

Jinshan Pan; Zhe Hu; Zhixun Su; Ming-Hsuan Yang

Recent years have witnessed significant advances in image deblurring. In general, the success of deblurring methods depends heavily on extracting salient structures from a blurry image for kernel estimation. Most deblurring methods operate on high-resolution images where contours or edges can be extracted for kernel estimation; however, recovering reliable structures from low-resolution images is extremely challenging. In this paper, we propose a spatially variant deblurring algorithm for low-resolution images based on exemplars. To exploit the exemplar information, we develop a super-resolution (SR) guided method that helps restore reliable image structures which can then be used for kernel estimation. Experimental evaluations against the state-of-the-art methods show that the proposed algorithm performs favorably in deblurring low-resolution images. Furthermore, we show that the SR results obtained as byproducts of our method are comparable to those of other blind SR methods.

Collaboration


Dive into Zhe Hu's collaborations.

Top Co-Authors

Jinshan Pan (Dalian University of Technology)
Zhixun Su (Dalian University of Technology)
Wei-Sheng Lai (University of California)