Analysis of Probabilistic multi-scale fractional order fusion-based de-hazing algorithm
U. A. Nnolim
Department of Electronic Engineering University of Nigeria, Nsukka, Enugu
Abstract
In this report, a de-hazing algorithm based on probability and multi-scale fractional order-based fusion is proposed. The proposed scheme improves on a previously implemented multi-scale fractional order-based fusion algorithm by augmenting its local contrast and edge sharpening features. It also brightens de-hazed images while avoiding sky region over-enhancement. The results of the proposed algorithm are analyzed and compared with existing methods from the literature, and indicate better performance in most cases.
Introduction
The end products of image de-hazing include contrast maximization and improved visibility of details. Single image de-hazing has become an attractive option over multi-image methods, primarily due to feasibility and convenience, since single image-based methods do not require multiple instances of the same image scene [1]. There are four categories of image de-hazing, namely enhancement-, restoration-, fusion- and deep learning-based approaches [2]. These categories range from simple to complex architectures based on works from the literature [2]. The fusion- and deep learning-based methods are relatively recent and incorporate various techniques. However, most algorithms work best for either single image de-hazing or underwater image enhancement, notwithstanding the similarities between hazy and underwater images, which both suffer from poor visibility due to inadequate contrast. The deep learning-based approaches are highly involved, requiring a large image database for training. Furthermore, graphics processing units (GPUs) are needed to reduce the execution time of such methods, and other authors utilize additional techniques to accelerate and optimize their algorithms [3]. Remaining problems include colour distortion, minimal contrast enhancement, poor colour rendition and minimal colour correction for several images. Furthermore, a number of restoration- and fusion-based methods require the manual setting of parameters, which may not yield the best result for all images.
2. Background and motivation
In previous work, we developed a versatile multi-scale fractional order fusion-based de-hazing algorithm, which could effectively process both underwater and hazy images without modification. Furthermore, the algorithm was completely automated, with all vital parameters computed from the input image. Results were promising, with fast operation and minimal runtime, surpassing several of the existing algorithms. The algorithm utilized fractional-based filters at varying scales to process the image, while fusing the results together to generate a much more detailed image. However, this algorithm yielded minimal local contrast enhancement in addition to edge over-enhancement for certain images with a high amount of detail. Furthermore, the algorithm yielded dark regions for some processed images. Thus, in this work, we combine tonal correction and localized contrast operators with a revised formulation of the fractional multi-scale-based algorithm. This leads to the improvement and amplification of local contrast enhancement, while brightening de-hazed images and avoiding colour distortion and over-enhancement of sky regions and edges.
3. Proposed algorithm
The proposed scheme utilizes the probabilistic and simultaneous estimation of illumination and reflectance components developed by Fu et al [4]. This is combined with the bilateral filter by Tomasi and Manduchi [5] for multi-scale illumination estimation, rather than the Gaussian filter, in addition to the revised fractional multi-scale fusion-based algorithm used to improve the fine detail at each stage. This enables localized contrast enhancement and avoidance of edge over-sharpening, due to the non-linear (bilateral) filter-based estimation. The system block diagram of the proposed scheme is shown in Fig. 1. The full details of the proposed scheme and its variants can be found in [6].
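The pipeline in Fig. 1 can be illustrated with a minimal single-channel sketch. This is not the author's implementation: the probabilistic illumination/reflectance estimation of Fu et al [4] is replaced here by a simple element-wise division, the fractional-order fusion stage is omitted, and all scale parameters are illustrative. The sketch only shows how bilateral-filtered illumination estimates at several scales can be fused and recombined with the reflectance:

```python
import numpy as np

def bilateral(img, sigma_s, sigma_r, radius):
    """Edge-preserving smoothing: each output pixel is a weighted mean of its
    neighbourhood, weighted by both spatial and intensity (range) Gaussians."""
    h, w = img.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2.0 * sigma_s ** 2))
    padded = np.pad(img, radius, mode="edge")
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            range_w = np.exp(-(patch - img[i, j]) ** 2 / (2.0 * sigma_r ** 2))
            weights = spatial * range_w
            out[i, j] = (weights * patch).sum() / weights.sum()
    return out

def enhance(gray, scales=((1.0, 0.1, 2), (3.0, 0.2, 4))):
    """Fuse bilateral illumination estimates at several (sigma_s, sigma_r,
    radius) scales, gamma-brighten the illumination, and recombine it with
    the reflectance. Input is a single channel with values in [0, 1]."""
    estimates = [bilateral(gray, s, r, rad) for s, r, rad in scales]
    illum = np.clip(np.mean(estimates, axis=0), 1e-3, 1.0)  # fused illumination
    refl = gray / illum                                     # reflectance estimate
    return np.clip(refl * illum ** (1.0 / 2.2), 0.0, 1.0)
```

Because the gamma-corrected illumination never falls below the original estimate, the recombined image is brightened without losing the edge structure preserved by the bilateral filter.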
Fig. 1. Probabilistic illumination correction + fractional-derivative-based multi-scale de-hazing system
4. Experiments and results
This section presents the results and comparisons of the proposed scheme with both hazy and underwater image enhancement algorithms from the literature using subjective evaluation, objective assessment using various metrics, image statistics and histogram information.
The proposed scheme was tested using numerous underwater image scenes and datasets commonly found in the literature. Furthermore, we compared results with popular and recent state-of-the-art underwater image enhancement algorithms. The result is shown in Fig. 2(a), which consists of a figure from [7] (amended with PA-1 results) and includes the following algorithms: Contrast Limited Adaptive Histogram Equalization (CLAHE), Retinex, White balance, the combined methods of He et al, Zhu et al and Non-local de-hazing, DehazeNet, cycle-consistent adversarial networks (CycleGAN), Li's method and the adjusted CycleGAN with structural similarity index metric (SSIM) loss (CycleGAN + SSIM loss) [7]. For all tables, bolded black indicates the best values while bolded red depicts the second best results. In Fig. 2(b)(1) and (2), we compare PA with image results obtained with other algorithms from the literature, such as Ancuti et al [8], Bazeille et al [9], Carlevaris-Bianco et al [10], Chiang et al [11], Galdran et al [12] and Serikawa and Lu [13]. The images produced by Ancuti et al, Galdran et al and PA are the best compared with the other methods. However, observation of the RGB histogram plots shows that PA yields the most stretched colour histograms, indicating the highest contrast enhancement. This is also observed in Table 1, which presents the mean and standard deviation values for the hue, saturation and value components of the original and processed underwater images.
Fig. 2(a) Figure from [7] amended with visual result of FMIRCES and PA-1 for comparison
(o) FMIRCES (enhanced with the fractional derivative-based RGB-IV-IRCES algorithm) (p) PA-1 (enhanced with the fractional derivative-based IRCES + bilateral filtering algorithm)
Overall, PA yields the highest standard deviation of the saturation and value components, which indicates improved colour rendition and contrast enhancement respectively. The colour cast is considerably eliminated in the image obtained with PA compared with the other algorithms. This is also observed in Fig. 2(c), which once more shows that PA surpasses several of the other algorithms by improving contrast and colour rendition while eliminating colour cast. It yields results similar to Galdran et al but with sharper focus, and the histograms are similar to those of Fig. 2(b)(1) and (2), with highly stretched red, green and blue colour channels and highly enhanced saturation and value components. We also utilize the mean (mu), central moment of order 2 (mu_2) [14], standard deviation (sigma), skewness (gamma) [14], momental skewness (alpha) [14] and kurtosis (kappa) [14] to measure the various properties of the processed images. The mean indicates the centredness of the histogram of the processed image. The standard deviation is linked to contrast, the skewness determines the directional bias of the image histogram, and the kurtosis defines the narrowness of the peak of the histogram. Thus, a more stretched-out histogram would have a kurtosis value less than 3, while an unmodified histogram with a sharp peak would have a value greater than or equal to 3. Looking at the results in Table 2, PA yields the lowest kurtosis values for the red, green and blue channels. This is expected, since PA stretches out the histogram, and we also expect the standard deviation to increase as the histogram is stretched, indicating more spread [15].
This also shows in the standard deviation values of the green and blue channels in Table 2, whose histograms have been considerably stretched compared to the red channel, improving the contrast. These numerical results are in line with the image histogram plots in Fig. 2(b)(3). We also observe that PA indicates high localized contrast for certain images, based on the histogram plots.
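These statistical measures can be computed directly from a colour channel's pixel values. A minimal sketch (the function name and dictionary layout are our own, and momental skewness is taken as half the moment coefficient of skewness, one common definition consistent with [14]):

```python
import numpy as np

def channel_stats(channel):
    """mu, mu_2, sigma, gamma, alpha and kappa for one colour channel."""
    x = np.asarray(channel, dtype=float).ravel()
    mu = x.mean()
    mu_2 = ((x - mu) ** 2).mean()                  # second central moment (variance)
    sigma = np.sqrt(mu_2)                          # standard deviation ~ contrast
    gamma = ((x - mu) ** 3).mean() / sigma ** 3    # skewness (directional bias)
    alpha = gamma / 2.0                            # momental skewness
    kappa = ((x - mu) ** 4).mean() / mu_2 ** 2     # kurtosis (sharp peak >= 3)
    return {"mu": mu, "mu_2": mu_2, "sigma": sigma,
            "gamma": gamma, "alpha": alpha, "kappa": kappa}
```

For a fully stretched (uniform) 8-bit channel the kurtosis is near 1.8, below the Gaussian value of 3, matching the interpretation above.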
KEY
Original image Ancuti Bazeille Carlevaris-Bianco et al Chiang & Chen Galdran et al Serikawa & Lu PA
Fig. 2 (b) processed images using various algorithms and corresponding histograms with key to figures
Table 1. Statistical values for hue, saturation and value channels of processed images from Fig. 2(b)(1) and Fig. 2(b)(2)
Image: Diver and statue (Fig. 2(b)(1))
Algorithms: Original image, PA, Ancuti et al, Bazeille et al, Carlevaris-Bianco et al, Chiang & Chen, Galdran et al, Serikawa & Lu
Statistical measures: H_mean, S_mean, V_mean, H_std, S_std, V_std

Image: Fishes (Fig. 2(b)(2))
Algorithms: Original image, PA, Ancuti et al, Bazeille et al, Carlevaris-Bianco et al, Chiang & Chen, Galdran et al, Serikawa & Lu
Statistical measures: H_mean, S_mean, V_mean, H_std, S_std, V_std

Table 2. Statistical values for red, green and blue channels of processed images from Fig. 2(b)(3)
Components: r, g, b
Statistical measures: mu, mu_2, sigma, gamma, alpha, kappa
Algorithms: Original image, Ancuti et al, Bazeille et al, Carlevaris-Bianco et al, Chiang and Chen, Galdran et al, Serikawa & Lu, PA

The remaining images in Fig. 2(c) to 2(e) compare the results of PA with the methods by Ancuti et al [8], Bazeille et al [9], Carlevaris-Bianco et al [10], Chiang & Chen [11], Galdran et al [12] and Serikawa & Lu [13], with their corresponding image histograms. Results indicate consistent trends for PA in terms of histogram stretching effects, once more showing the most spread-out histograms. Since underwater and hazy images share similar degradation effects, we expect the results for hazy images to be similar to the underwater image results. Also note that PA removes colour cast without modification of the algorithm and incorporates local contrast enhancement without histogram equalization-based methods. The histograms in general show highly stretched colour channels compared with the original images. These observations are supported by the computation of the statistical values enumerated earlier in this section. This utilization of image colour histograms to profile various underwater images has been used in previous work [16] [17] [18] [19] with interesting outcomes, and once more helps in visualizing the properties of the PA.
(c) KEY
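The H/S/V means and standard deviations of the kind reported in Table 1 follow from the standard RGB-to-HSV conversion. A sketch using Python's stdlib colorsys (the function name and dictionary keys are illustrative, not the author's code):

```python
import colorsys
import numpy as np

def hsv_channel_stats(rgb):
    """Mean and standard deviation of the H, S and V components
    of an RGB image with values in [0, 1], shape (H, W, 3)."""
    flat = rgb.reshape(-1, 3)
    hsv = np.array([colorsys.rgb_to_hsv(r, g, b) for r, g, b in flat])
    stats = {}
    for i, name in enumerate(("H", "S", "V")):
        stats[name + "_mean"] = float(hsv[:, i].mean())
        stats[name + "_std"] = float(hsv[:, i].std())
    return stats
```

A pure red image, for instance, gives S_mean = V_mean = 1 with zero spread; a higher V_std on a processed image is what Table 1 reads as improved contrast.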
Original image, PA, Ancuti et al, Bazeille et al, Carlevaris-Bianco et al, Chiang & Chen, Fattal, Galdran et al, Serikawa & Lu
Fig. 2 (c) processed images using various algorithms and key to figures
(d) KEY
Original image Galdran et al Ancuti et al PA
Fig. 2 (d) processed images using various algorithms and key to figures
(e) KEY: Original image, Galdran et al, PA
Fig. 2 (e) processed images using various algorithms and key to figures
We present results for sample hazy images processed with PA-2 in this section and compare the proposed approach to several existing algorithms from the literature. The algorithms compared include those by Dong et al [20], Ancuti et al [8], Kratz and Nishino [21], Zhang et al [22], Oakley and Bu [23], Kim et al [24], Hsieh et al [25], Meng et al [26], Gibson and Nguyen [27], Yang et al [28], Guo et al [29], Anwar and Khosla [30], Liu et al [31], Ju et al [32], Kopf et al [33], Tan [34], Fattal [35], Tarel and Hautiere [36], Zhu et al [37], He et al [38], Ren et al [3], Dai & Tarel [39], Nishino et al [40], Galdran et al [41], Wang and He [42], AMEF [43], PDE-GOC-SSR-CLAHE/PDE-Retinex [44], PDE-IRCES [45] and FMIRCES [46] [47], against PA-2 (with k = 1 in this case to yield balanced results).
In Fig. 2, PA-1 shows considerable colour correction and local contrast enhancement compared to most of the listed algorithms, in spite of its relatively lower complexity compared to the deep neural network-based approaches. This is remarkable since several of these algorithms are highly computationally and structurally complex. With no available runtime information for this particular set of algorithms, it is not possible to compare their execution times with PA-1.
For the Tiananmen image in Fig. 3, the best results are observed with Zhu et al, He et al, Ren et al, PA-2 and PDE-GOC-SSR-CLAHE/PDE-Retinex, which exhibit improved contrast and detail enhancement with no (or minimal) over-enhancement/discolouration of the sky region. The method by He et al (darker, with more halos) and PDE-GOC-SSR-CLAHE both depict visual halos. The PDE-IRCES clearly shows less distortion of the sky region but also less contrast in the formerly hazy regions of the image. Tarel and Hautiere show more enhanced details but with discolouration of sky regions. The AMEF once more yields an image with dull and distorted colours.
For the Toys image in Fig. 4, the best results are observed with PA-2, PDE-GOC-SSR-CLAHE/PDE-Retinex and the algorithms by He et al (visible halos), Wang et al, Ren et al, Dai et al and Zhu et al. The rest depict faded image results and colour distortion with marginal detail enhancement, while AMEF yields a pale image with distorted colours.
In Fig. 5 (Canyon image), the best results are those of Ren et al, Zhu et al, PDE-Retinex, FMIRCES and PDE-IRCES in terms of colour and contrast improvement. Others such as Fattal, Tan and Nishino et al result in over-enhancement. For the pumpkins image in Fig. 6, most methods yield reasonable results; the best are those by Ren et al, Dong et al, Fattal, He et al, PDE-Retinex, Zhu et al, Yeh et al, Ancuti et al, Kratz and Nishino, PDE-IRCES, FMIRCES and PA-2.
In Fig. 7 (Canon image), the best results include He et al, Nishino et al, Wang et al, PDE-Retinex, FMIRCES, PA-2, Dong et al and PDE-IRCES (though too dark). The results of Anwar and Khosla (colour distortion and over-brightness), Tarel and Hautiere, Galdran et al and Zhu et al (under-enhancement with residual haze) are poorer. In Fig. 8 (brickhouse image), generally acceptable results are observed. The best include Zhu et al, He et al, Tarel and Hautiere, Zhang et al, Dong et al, PDE-Retinex, PDE-IRCES, FMIRCES, PA-2 and Hsieh et al. Poor results are observed with Kim et al, Ancuti et al, Yeh et al, Fattal and Ren et al (which are under-enhanced), Guo et al (over-enhanced) and Liu et al (darkened image details).
Fig. 3. (a) Original hazy image (b) Tarel et al (c) He et al (d) Nishino et al (e) Ren et al (f) Galdran et al (EVID) (g) Wang & He (h) Dong et al (i) PDE-Retinex (j) Zhu et al (k) PDE-IRCES (l) AMEF (m) FMIRCES (n) PA-2 (o) PA-2 (without GOCS)
Fig. 4. (a) Original hazy image (b) Fattal (c) Tan (d) Yang et al (e) PDE-IRCES (f) Nishino et al (g) Zhu et al (h) He et al (i) PDE-Retinex (j) Ren et al (k) AMEF (l) FMIRCES (m) PA-2
Fig. 5. (a) Original hazy Tiananmen image (b) Tarel & Hautiere (c) Zhu et al (d) PA-2 (e) PDE-IRCES-2 (f) He et al (g) PDE-GOC-SSR-CLAHE (h) Ren et al (i) Galdran (AMEF) (j) Ju et al [32] (k) FMIRCES [46] [47]
Fig. 6. (a) Original hazy image (b) Hsieh et al (c) Zhu et al (d) Kim et al (e) Ancuti et al (f) He et al (g) Tarel et al (h) Zhang et al (i) Yeh et al (j) Fattal (k) Dong et al (l) Guo et al (m) PDE-Retinex (n) Ren et al (o) PDE-IRCES (p) AMEF (q) FMIRCES (r) PA-2 (without GOCS)
Fig. 7. (a) Original hazy Toys image (b) Tarel & Hautiere [36] (c) Dai et al [39] (d) PA-2 (e) He et al [38] (f) Nishino et al [40] (g) PDE-GOC-SSR-CLAHE [44] (h) PDE-IRCES [45] (i) Galdran et al (EVID) [41] (j) Wang & He [42] (k) Zhu et al [37] (l) Ren et al [3] (m) Galdran (AMEF) [43] (n) FMIRCES [46] [47]
Fig. 8. (a) Original hazy image (b) Fattal (c) Dong et al (d) He et al (e) Yeh et al (f) PDE-Retinex (g) Kratz and Nishino (h) Ancuti et al (i) Ren et al (j) Zhu et al (k) PDE-IRCES (l) FMIRCES (m) AMEF (n) PA-2
In Fig. 9, we show the colour histograms for 99 hazy images processed with FMIRCES and PA, and the same trend observed for enhanced underwater images is also seen here. The histograms of images processed with PA are highly stretched compared to images processed with FMIRCES. Furthermore, values for images processed with PA are much more spread out, indicating increased standard deviation, variance and contrast. Thus, the improvements of PA over FMIRCES are also observed in the histogram plot comparison.
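The link between histogram stretching and standard deviation noted above can be checked numerically. A small sketch (the linear stretch below is a generic contrast stretch for illustration, not the PA itself):

```python
import numpy as np

def stretch_channel(c):
    """Linearly map a channel's values onto the full [0, 255] range."""
    lo, hi = float(c.min()), float(c.max())
    return (c - lo) * 255.0 / (hi - lo)

# A low-contrast channel occupying only the narrow range [100, 156]:
narrow = np.linspace(100.0, 156.0, 1000)
wide = stretch_channel(narrow)
# The stretched histogram spans [0, 255], so its standard deviation
# grows by the same factor (255/56) as the value range.
```

This is exactly the behaviour visible in Fig. 9: the more spread-out PA histograms correspond to larger standard deviation, variance and contrast.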
Fig. 9. Image colour histograms of hazy images processed with (a) FMIRCES and (b) PA
In Table 3, we compare the average runtimes of the various available de-hazing algorithms with PA. Results indicate that PA has the second fastest runtime compared to the other algorithms. It also yields better results than the fastest algorithm from previous work (FMIRCES) [46], sacrificing some speed for improved localized contrast and better colour rendition.
Table 3. Average runtimes (seconds) for images processed with available algorithm implementations
Algorithms: He et al, Zhu et al, Ren et al, AMEF, PDE-GOC-SSR-CLAHE, PDE-IRCES, FMIRCES, PA; Mean times (s)
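Average runtimes of the kind reported in Table 3 can be obtained by averaging wall-clock times over the test images. A simple harness that could produce such numbers (the function name and the repeats parameter are our own, not from the paper):

```python
import time

def mean_runtime(dehaze, images, repeats=3):
    """Average per-image wall-clock runtime (seconds) of a de-hazing function."""
    total = 0.0
    for img in images:
        t0 = time.perf_counter()
        for _ in range(repeats):
            dehaze(img)          # run several times to smooth out timer jitter
        total += (time.perf_counter() - t0) / repeats
    return total / len(images)
```

Repeating each call and dividing by the repeat count reduces the influence of one-off cache and scheduling effects on the reported mean.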
5. Conclusion
Based on the results, we can see that the proposed algorithm improves on the initial formulation by addressing local contrast and edge enhancement in addition to image brightening. The image histograms and statistics indicate improvements, in addition to the objective metrics used to corroborate the results. The proposed approach further validates the use of tonal correction and mapping operators as viable alternatives for the image de-hazing problem. Moreover, the proposed approaches surpass a majority of contemporary and much more complex underwater and hazy image enhancement algorithms from the literature. The proposed schemes are versatile since they process both hazy and underwater images adequately. These outcomes are verified in terms of contrast and colour enhancement/correction, with good visual and objective results and minimized runtime.
References
[1] S. Lee, S. Yun, J.-H. Nam, C. S. Won and S.-W. Jung, "A review on dark channel prior based image dehazing algorithms,"
EURASIP Journal on Image and Video Processing, vol. 2016, no. 4, pp. 1-23, 2016. [2] D. Singh and V. Kumar, "A Comprehensive Review of Computational Dehazing Techniques,"
Archives of Computational Methods in Engineering, pp. 1-13, September 2018. [3] W. Ren, S. Liu, H. Zhang, J. Pan, X. Cao and M.-H. Yang, "Single Image Dehazing via Multi-Scale Convolutional Neural Networks," in
European Conference on Computer Vision , Amsterdam, The Netherlands, Springer International Publishing, 8 October 2016. [4] X. Fu, Y. Liao, D. Zeng, Y. Huang, X.-P. Zhang and X. Ding, "A Probabilistic Method for Image Enhancement With Simultaneous Illumination and Reflectance Estimation,"
IEEE Transactions on Image Processing, vol. 24, no. 12, pp. 4965 - 4977, December 2015. [5] C. Tomasi and R. Manduchi, "Bilateral Filtering for Gray and Color Images," in
Proceedings of the 1998 IEEE International Conference on Computer Vision , Bombay, India, 7-7 January 1998. [6] U. A. Nnolim, "Probabilistic Multiscale Fractional Tonal Correction Bilateral Filter based Hazy Image Enhancement,"
International Journal of Image and Graphics (IJIG), p. (accepted), 2019. [7] J. Lu, N. Li, S. Zhang, Z. Yu, H. Zheng and B. Zheng, "Multi-scale adversarial network for underwater image restoration,"
Optics and Laser Technology, no. (article in press), 2018. [8] C. Ancuti, C. Ancuti, T. Haber and P. Bekaert, "Enhancing underwater images and videos by fusion," in
IEEE Conference on Computer Vision and Pattern Recognition , Providence, RI, USA, 16-21 June 2012. [9] S. Bazeille, I. Quidu, L. Jaulin and J. P. Malkasse, "Automatic underwater image pre-processing," in
Proceedings of the Characterisation du Milieu Marin (CMM '06), Brest, France, 16-19 October 2006. [10] N. Carlevaris-Bianco, A. Mohan and R. M. Eustice, "Initial results in underwater single image dehazing," in
Proceedings of IEEE International Conference on Oceans , Seattle, Washington, USA, 20-23 September 2010. [11] J. Chiang and Y. Chen, "Underwater image enhancement by wavelength compensation and dehazing,"
IEEE Transactions on Image Processing, vol. 21, no. 4, pp. 1756-1769, 2012. [12] A. Galdran, D. Pardo, A. Picón and A. Alvarez-Gila, "Automatic red-channel underwater image restoration,"
Journal of Visual Communication and Image Representation, vol. 26, pp. 132-145, Jan 31 2015. [13] S. Serikawa and H. Lu, "Underwater image dehazing using joint trilateral filter,"
Computers and Electrical Engineering, vol. 40, no. 1, pp. 41-50, 2014. [14] V. Patrascu, "Image enhancement method using piecewise linear transforms," in
European Signal Processing Conference (EUSIPCO-2004) , Vienna, Austria, 2004. [15] U. A. Nnolim and P. Lee, "A Review and Evaluation of Image Contrast Enhancement algorithms based on statistical measures," in
IASTED Signal and Image Processing Conference Proceeding , Kailua Kona, HI, USA, August 18-20, 2008. [16] U. A. Nnolim, "Smoothing and enhancement algorithms for underwater images based on partial differential equations,"
SPIE Journal of Electronic Imaging, vol. 26, no. 2, pp. 1-21, March 22 2017. [17] U. A. Nnolim, "Analysis of proposed PDE-based underwater image enhancement algorithms," 10 Dec 2016. [Online]. Available: http://arxiv.org/pdf/. [18] U. A. Nnolim, "Improved partial differential equation (PDE)-based enhancement for underwater images using local-global contrast operators and fuzzy homomorphic processes,"
IET Image Processing, vol. 11, no. 11, pp. 1059-1067, November 2017. [19] U. A. Nnolim, "Improved underwater image enhancement algorithms based on partial differential equations (PDEs)," 2017. [Online]. Available: http://arxiv.org/pdf/. [20] X.-M. Dong, X.-Y. Hu, S.-L. Peng and D.-C. Wang, "Single color image dehazing using sparse priors," in , Hong Kong, China, 26–29 September 2010. [21] L. Kratz and K. Nishino, "Factorizing scene albedo and depth from a single foggy image," in
IEEE International Conference on Computer Vision (ICCV) , Kyoto, Japan, September 2009. [22] X.-S. Zhang, S.-B. Gao, C.-Y. Li and Y.-J. Li, "A Retina Inspired Model for Enhancing Visibility of Hazy Images,"
Frontiers in Computer Science, vol. 9, no. 151, pp. 1-13, 22nd December 2015. [23] J. B. Oakley and H. Bu, "Correction of simple contrast loss in color images,"
IEEE Transactions on Image Processing, vol. 16, no. 2, pp. 511-522, 2007. [24] J.-H. Kim, J.-Y. Sim and C.-S. Kim, "Single image dehazing based on contrast enhancement," in
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) , May 22-27 2011. [25] C.-H. Hsieh, Z.-M. Weng and Y.-S. Lin, "Single Image haze removal with pixel-based transmission map estimation," in
WSEAS Recent Advances in Information Science , 2016. [26] G. Meng, Y. Wang, J. Duan, S. Xiang and C. Pan, "Efficient Image Dehazing with Boundary Constraint and Contextual Regularization," in
IEEE International Conference on Computer Vision (ICCV-2013) , Sydney, Australia, 1-8 December 2013. [27] K. Gibson and T. Nguyen, "Fast single image fog removal using the adaptive Wiener Filter," in , September 2013. [28] S. Yang, Q. Zhu, J. Wang, D. Wu and Y. Xie, "An Improved Single Image Haze Removal Algorithm Based on Dark Channel Prior and Histogram Specification," in , Brno, Czech Republic, 22 November 2013. [29] F. Guo, Z. Cai, B. Xie and J. Tang, "Automatic Image Haze Removal Based on Luminance Component," in , Chengdu, China, 23-25 September 2010. [30] M. I. Anwar and A. Khosla, "Vision enhancement through single image fog removal,"
Engineering Science and Technology, an International Journal, vol. 20, p. 1075–1083, March 2017. [31] Y. Liu, H. Li and M. Wang, "Single Image Dehazing via Large Sky Region Segmentation and Multiscale Opening Dark Channel Model,"
IEEE Access, May 31 2017. [32] M. Ju, D. Zhang and X. Wang, "Single image dehazing via an improved atmospheric scattering model,"
The Visual Computer, vol. 33, no. 12, pp. 1613-1625, Dec 1 2017 . [33] J. Kopf, B. Neubert, B. Chen, M. Cohen, D. Cohen-Or, O. Deussen, M. Uyttendaele and D. Lischinski, "Deep photo: Model-based photograph enhancement and viewing," in
ACM SIGGRAPH Asia 2008 Papers , New York, NY, USA , 2008. [34] R. T. Tan, "Visibility in bad weather from a single image," in
IEEE Conference on Computer Vision and Pattern Recognition , 2008. [35] R. Fattal, "Single Image Dehazing,"
ACM Transactions on Graphics, vol. 27, no. 3, pp. 1-9, 2008. [36] J. Tarel and N. Hautiere, "Fast visibility restoration from a single color or gray level image," in
IEEE 12th International Conference on Computer Vision , Kyoto, Japan, 29 September – 2 October 2009. [37] Q. Zhu, J. Mai and L. Shao, "A Fast Single Image Haze Removal Algorithm Using Color Attenuation Prior,"
IEEE Transactions on Image Processing, vol. 24, no. 11, pp. 3522-3533, November 2015. [38] K. He, J. Sun and X. Tang, "Single Image Haze Removal Using Dark Channel Prior,"
IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), vol. 33, no. 12, pp. 2341-2353, 2010. [39] S.-k. Dai and J.-P. Tarel, "Adaptive Sky Detection and Preservation in Dehazing Algorithm," in
IEEE International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS) , Nusa Dua, Bali, Indonesia, 9-12 November 2015. [40] K. Nishino, L. Kratz and S. Lombardi, "Bayesian Defogging,"
International Journal of Computer Vision, vol. 98, no. 3, p. 263–278, July 2012. [41] A. Galdran, J. Vazquez-Corral, D. Pardo and M. Bertalmio, "Enhanced Variational Image Dehazing,"
SIAM Journal on Imaging Sciences, pp. 1-26, September 2015. [42] W. Wang and C. He, "Depth and Reflection Total Variation for Single Image Dehazing," 22 January 2016. [Online]. Available: https://arxiv.org/abs/1601.05994.pdf. [Accessed 13 October 2016]. [43] A. Galdran, "Artificial Multiple Exposure Image Dehazing,"
Signal Processing, vol. 149, pp. 135-147, August 2018. [44] U. A. Nnolim, "Partial differential equation-based hazy image contrast enhancement,"
Computers and Electrical Engineering, vol. 72, pp. 670-681, November 2018. [45] U. A. Nnolim, "Image de-hazing via gradient optimized adaptive forward-reverse flow-based partial differential equation,"
Journal of Circuits Systems and Computers, vol. 28, no. 6, pp. 1-30, July 5 2019. [46] U. A. Nnolim, "Adaptive Multi-Scale Entropy Fusion De-Hazing Based on Fractional Order,"
Journal of Imaging, vol. 4, no. 108, pp. 1-22, September 6 2018. [47] U. A. Nnolim, "Adaptive multi-scale entropy fusion de-hazing based on fractional order," 2018.