2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) | 2019

Dense Scene Information Estimation Network for Dehazing


Abstract


Image dehazing continues to be one of the most challenging inverse problems. Deep learning methods have emerged to complement traditional model-based methods and have helped define a new state of the art in achievable dehazed image quality. Yet, practical challenges remain in dehazing real-world images where the scene is heavily covered with dense haze, even to the extent that no scene information can be observed visually. Many recent dehazing methods have addressed this challenge by designing deep networks that estimate the physical parameters of the haze model, i.e., the ambient light (A) and the transmission map (t). The inverse of the haze model may then be used to estimate the dehazed image. In this work, we develop two novel network architectures to further this line of investigation. Our first model, denoted At-DH, uses a shared DenseNet-based encoder and two distinct DenseNet-based decoders to jointly estimate the scene information, viz. A and t respectively. This is in contrast to most recent efforts (including those published in CVPR 18) that estimate these physical parameters separately. As a natural extension of At-DH, we develop the AtJ-DH network, which adds one more DenseNet-based decoder to jointly recreate the haze-free image along with A and t. The knowledge of (ground-truth) training dehazed/clean images can be exploited by a custom regularization term that further enhances the estimates of the model parameters A and t in AtJ-DH. Experiments performed on the challenging benchmark image datasets of NTIRE 19 and NTIRE 18 demonstrate that At-DH and AtJ-DH outperform state-of-the-art alternatives, especially when recovering images corrupted by dense haze.
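The abstract refers to inverting the standard atmospheric scattering model, I = J·t + A·(1 − t), once A and t have been estimated. As a minimal sketch of that inversion step only (not the paper's networks; the function name, array shapes, and the transmission floor are illustrative assumptions), the recovery of the haze-free image J can be written as:

```python
import numpy as np

def dehaze_from_At(I, A, t, t_min=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t)
    to recover the haze-free image J from estimates of the ambient
    light A (scalar or per-channel) and transmission map t (H x W).
    A small floor t_min avoids division by near-zero transmission
    in densely hazed regions (an illustrative choice, not the paper's)."""
    t = np.clip(t, t_min, 1.0)
    # Broadcast the H x W transmission over the color channels of I (H x W x 3).
    J = (I - A) / t[..., None] + A
    return np.clip(J, 0.0, 1.0)
```

Note that in dense haze t approaches zero, so the division amplifies any estimation error in A and t, which is why accurate joint estimates of the two parameters (as At-DH and AtJ-DH aim to produce) matter most in exactly that regime.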

Pages 2122-2130
DOI 10.1109/CVPRW.2019.00265
Language English
