Single Image Dehazing Based on Generic Regularity
Kushal Borkar, Snehasis Mukherjee
IIIT Sri City
Abstract
This paper proposes a novel technique for single image dehazing. Most of the state-of-the-art methods for single image dehazing rely either on the Dark Channel Prior (DCP) or on color lines. The proposed method combines the two approaches. We initially compute the dark channel prior and then apply a Nearest-Neighbor (NN) based regularization technique to obtain a smooth transmission map of the hazy image. We account for the effect of airlight on the image by using the color line model to assess the contribution of airlight in each patch of the image, and interpolate in the local neighborhood where the estimate is unreliable. The NN based regularization of the DCP can remove the haze, whereas the color line based interpolation of the airlight effect makes the proposed system robust against the variation of haze within an image due to multiple light sources. The proposed method is tested on benchmark datasets and shows promising results compared to the state-of-the-art, including the deep learning based methods, which require a huge computational setup. Moreover, the proposed method outperforms the recent deep learning based methods when applied on images with sky regions.
Keywords: image dehazing, transmission estimation, naive Bayes classification, nearest neighbor regularization
1. Introduction
Images captured by a camera are often affected by haze due to several atmospheric conditions such as fog, smoke, multiple light sources, the scattering of light, etc. The presence of haze in an image obscures the clarity of the scene, and hence the image needs to be restored. The flux of light per unit area received by the camera from the scene is attenuated along the line of sight. This redirection of light lessens the direct scene transmission and replaces it with a layer of scattered light known as airlight. Such scattering of light due to rigid particles floating in the air reduces the visibility of the scene.
Email addresses: [email protected] (Kushal Borkar), [email protected] (Snehasis Mukherjee)
Preprint submitted to Elsevier, August 28, 2018

Figure 1: Sample result of the proposed approach. (a) Input hazy image. (b) Dehazed image using our approach.

Image dehazing methods endeavor to recover the original scene radiance by expelling the impact of haze from the picture (as shown as an example in Figure 1). Since the effect of scattering of light at any pixel depends on the depth of the pixel, the degradation of the image due to haze is spatially variant. However, recovering the scene radiance of a pixel on an object is perplexing, as the amount of fog and haze relies upon the distance between the object and the camera. In this way, global enhancement strategies do not work well for a scene image, because of the presence of objects at different depths and the background. The conventional methods exploited multiple images to overcome the problem. For example, the method of Narasimhan et al. [1] requires multiple pictures of a similar scene taken under various climate conditions. The method of Shwartz et al. [2] requires images with various levels of polarization.
However, images taken in various climatic conditions may not always be available. Recently, research has been going on for single image dehazing, where reference images for the given hazy images are not required. After the introduction of the concept of the Dark Channel Prior (DCP), people started dehazing an image without the help of a reference image [3]. Tan et al.
Figure 2: Result of dehazing an image. (a) input hazy image. (b) refined transmission map after NN regularization (c) final haze-free image (d) depth map of the output haze-free image.

2. Related Work
Dehazing is a task of image reconstruction; the corruption of a hazy image is a direct result of the suspended particles in the turbid air. Single image dehazing is a challenging task which draws significant attention from researchers in the fields of Computer Graphics and Image Processing. The physical model often used to characterize the haze formation that causes degradation of an image is known as Koschmieder's atmospheric scattering model [12]:

I(x) = t(x) J(x) + (1 − t(x)) A,   (1)

where I(x) is the observed intensity value of pixel x in image I; J(x) is the scene radiance of the haze-free image at x; and t(x) is the medium transmission describing the portion of the light which reaches the camera without scattering. A is the airlight, a global vector quantity describing the ambient light. The goal of the haze removal task is to recover J, A and t from I.

In (1), the transmittance t(x) does not change much for a given wavelength in a hazy image. The term J(x)t(x) of (1) is termed direct attenuation [3], which provides information about the quantity of radiance received by the observer. The second part, called airlight, provides an approximation of the measure of the atmospheric light added along the line of sight of the viewer due to scattering.

Several attempts have been made for fog and haze removal in outdoor scene pictures [13]. Notwithstanding, the most significant advance in the research on haze removal methods was the idea of DCP. It is observed that, in the greater part of the local neighborhood regions which do not contain the sky, a few pixels in many instances have an extremely low intensity value in at least one color (RGB) channel (called "dark pixels"). In the case of a hazy image, the intensity of these dark pixels in the corresponding channel is contributed by the airlight. Along these lines, these dark pixels can explicitly give a precise estimation of the haze's transmission. In any case, the step of refining the transmission approximation utilizing the soft matting technique often leads to underestimating the transmission.

When the density of haze varies smoothly in space, the effect of the homogeneous atmosphere transmission t can be shown as:

t(x) = e^{−β d(x)},   (2)

where d(x) is the depth at pixel x and β is the scattering coefficient (in three dimensional space) of the atmosphere. So, (2) specifies the amount of radiance received by the observer, which is attenuated exponentially with the depth.

Several algorithms for image dehazing follow the image formation model (1) to dehaze images by recovering J. However, to regain the haze-free image, both (1) and (2) are followed. First, J is obtained by subtracting a constant value corresponding to the dark pixels, and then the value of the transmission t is estimated precisely by assuming A is known. Haze lessens the contrast in the picture, and different techniques depend on this observation for restoration. He et al. [14] amplifies the contrast in each patch, while keeping up a globally coherent image. This method enhanced the contrast for image dehazing and, as a result, objects distant from the viewer seemed to be smooth and over-saturated, making the dehazed image look artificial. In [15], Park et al. evaluated the amount of haze in the image from the contrast between the RGB channels, which diminishes as haze increases. This postulation of the estimation of the amount of haze is erroneous in the gray areas.
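To make the scattering model (1)-(2) concrete before continuing the survey, the following minimal sketch synthesizes a hazy image from a haze-free image and a depth map. The file names, the depth map, and the parameter values (β and A) are illustrative assumptions of ours, not values taken from the paper.

```python
import cv2
import numpy as np

# Hypothetical inputs: a haze-free image J in [0, 1] and a per-pixel depth map d.
J = cv2.imread("scene.png").astype(np.float64) / 255.0  # assumed file name
d = np.load("depth.npy")                                # assumed depth map, shape (H, W)

beta = 1.0                      # assumed scattering coefficient
A = np.array([0.8, 0.8, 0.8])   # assumed global airlight, one value per channel

# Eq. (2): transmission decays exponentially with depth.
t = np.exp(-beta * d)           # shape (H, W)

# Eq. (1): direct attenuation plus the additive airlight layer.
I = t[..., None] * J + (1.0 - t[..., None]) * A

cv2.imwrite("hazy.png", (np.clip(I, 0, 1) * 255).astype(np.uint8))
```

Dehazing inverts this process: given only I, the methods below estimate t and A in order to recover J.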
In [16], Zhu et al. assessed the haze based on the observation that hazy regions are characterized by high brightness and low saturation.

Some efforts have been made for haze removal by utilizing a prior on the depth of the picture. A smoothness prior is utilized in [17], expecting the image to be smooth except at pixels with depth discontinuities. Nishino et al. [18] assumed the albedo and depth to be statistically independent and jointly approximated them using priors on both. The albedo-prior assumed the distribution of gradients in images of natural scenes to exhibit a heavy-tailed distribution; it is modeled as a generalized normal distribution and further approximated as a Gaussian distribution. The depth-prior is scene-dependent and is picked manually, either as piece-wise constant for urban scenes or gradually changing for non-urban scenes.

The DCP based approaches [14] assume that within a small image patch there will be at least one pixel with a dark color channel, and utilize this minimal value as an approximation of the haze. This prior works extremely well, with the exception of bright regions of the scene where the intensity values of all the channels are high. Hence, DCP cannot manage the sky area of the images, because dark channel pixels are potentially unavailable in those bright picture areas. Also, DCP based techniques are time-consuming because of the soft matting procedure involved in the method.

Recently, several techniques have been proposed to address the shortcomings of DCP [14]. In [19], color ellipsoids are fitted in the RGB color space. These ellipsoids are utilized to arrange a unified approach to deal with the constraints of the DCP based techniques, and another strategy was proposed to assess the transmission in every ellipsoid. In [5, 20], color lines were fit in RGB space, searching for small patches with a constant transmission. The traditional color line based methods depend on the assumption that pixels in a haze-free image constitute color lines in RGB space [12]. These lines go through the origin and come from color variations within objects. As shown in [5], the color lines in foggy scenes no longer go through the origin, because of the additive airlight component.

Some significant attempts have been made using priors other than the DCP and color line [21, 22, 23, 24]. In [23], a globally guided regularization technique is applied for dehazing. In [24], the contrast of the image is enhanced using boundary conditions. Tang et al. proposed a learning based technique for image dehazing [22]. However, the DCP and color line based methods continue to dominate the other methods due to better performances, except for images with high depth areas (such as sky areas). Riaz et al. proposed a DCP based approach [25] where the inefficacy of DCP in the sky area is handled by limiting the contrast at the sky region. However, in most cases limiting the contrast causes loss of minute color information of the image.

The proposed method follows the concepts of DCP [14] and color line [5]. In addition to applying the above two priors, the proposed method applies a Nearest Neighbor (NN) regularization technique which helps reduce the execution time. The domain-adaptive nature of the patches helps in maintaining the original color and contrast of the image.
Finally, the adaptive nature of the size of the mask helps in maintaining the original color in high depth areas of the image (areas far from the camera, such as the sky area).

Recently, deep learning based techniques are being successfully applied in all areas of Computer Vision. A few efforts have been made to apply deep learning techniques for haze removal [9, 10, 8]. Unlike the traditional image dehazing techniques, AOD-Net [10] proposes an end-to-end system for image dehazing, where a simple CNN is introduced to directly estimate the haze-free image, without obtaining the transmission map of the image. DehazeNet is proposed in [9], where a transmission map of the input image is obtained from a CNN, by a trainable model. The CNN based methods usually give better results for image dehazing, compared to the traditional prior based models. However, for scene images where objects are far from the camera, the CNN based methods do not work well. Also, the deep network based methods need a huge computational setup, which may not be affordable in many cases, e.g., hand-held devices. Next we discuss the mathematical background of the proposed method.
3. Background of the Proposed Approach
We assess the transmission by considering A to be non-constant over the scene. We observe that (1) expects the pixel intensity I(x) to be radiometrically linear. Hence, as with other methods that rely upon this formation model, our system requires the inversion of the acquisition nonlinearities. In this section we explain the mathematical background of the proposed approach and the significance of the approach in the image dehazing task.

The DCP is based on an observation on haze-free images: in most of the cases, the patches taken from a haze-free image must have at least one color channel having a low intensity value at a few pixels [14]. According to this observation, for any image J, we define:

J_dark(x) = min_{c ∈ {r,g,b}} ( min_{y ∈ Ω(x)} J^c(y) ),   (3)

where Ω(x) is the neighborhood patch centered at x, J^c is a color channel of the picture J, and J_dark is the dark channel of J. The intensity of J_dark is low and tends to zero if J is a haze-free image.

Figure 3: (a), (c) Input hazy images. (b), (d) Depth maps of the input hazy images.

The low intensities in the haze-free regions are due to the following three elements:

• Shadows of objects: For example, the shadows of leaves, trees and rocks in landscape images, or the shadows of buildings in cityscape images;

• Dark objects or surfaces: For example, a dark tree trunk or a stone;

• Colorful objects or surfaces: The color of an object (for example, green grass/tree/plant, blue water surface or red flower/leaf/wall) mostly tends to be very close to one of the color planes, such as Red, Green, Blue, Cyan, Magenta, Yellow, Black and White.

However, the DCP based techniques work well for images with high contrast and low depth regions. For images consisting of regions with high depth values, DCP based methods fail to maintain the original color of the region. For example, the sky region is usually bright and hence does not have dark intensity values. We can deal with the sky regions by utilizing the haze atmospheric scattering model (1), which is applied in the proposed model and discussed in the next subsection.
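As a concrete illustration of (3), the following sketch computes the dark channel with a grayscale minimum filter, which implements the patch-wise minimum over Ω(x); the patch size is an assumed value of ours, not one prescribed by the paper.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img: np.ndarray, patch: int = 15) -> np.ndarray:
    """Eq. (3): per-pixel minimum over color channels, then over a patch Omega(x).

    img: H x W x 3 array in [0, 1]. patch: side of the square neighborhood
    (15 is an assumed default, not a value fixed by the paper).
    """
    per_pixel_min = img.min(axis=2)              # min over the {r, g, b} channels
    # minimum_filter takes the minimum over a patch centered at each pixel.
    return minimum_filter(per_pixel_min, size=patch)
```

For a haze-free image the result is close to zero almost everywhere except in bright regions (e.g., the sky), which is exactly the failure case discussed above.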
In Figure 3, we can observe that the transmission map is of an even consistency but has unanticipated depth jumps in the hazy image. In the image-dehazing context, we assume that comparable pixels have an almost identical transmission value, according to [5, 26].

In order to proficiently quantify the similarity between two image pixels, we have to consider spatial variation and smoothness. Given an image pixel x, we define its feature vector for similarity estimation, in light of NN regularization, as given below:

f(x) = (R, G, B, λX, λY)^T,   (4)

where the intensities of x are depicted as R, G and B in the RGB color space, respectively; X and Y are the spatial coordinates of x, and λ is the balancing factor. Despite the fact that we utilize features identical to those of [27], our task is not the same as matting, so we adapt our technique to the relevant and broadly utilized conditions of matting while utilizing (4).

Colors in the small patches of a haze-free image generally lie on a line going through the origin, as shown by Omer et al. [12], if we represent colors as coordinates in the RGB space. But in the case of hazy images, the line is shifted from the origin by A, due to the additive airlight component. We assume that A is constant. As we expect a piecewise smooth depth d, which gives us a smooth scattering coefficient β, it follows that t(x) is piecewise smooth, i.e., smooth at pixels that correspond to the same object. Also, t(x) varies smoothly and slowly in the scene, as known from equation (2), except at depth discontinuities. This assumption of piecewise smooth geometry was used by Carr et al. [28] and Nishino et al. [18]. So, equation (1) for a small patch can be rewritten as:

I(x) = J(x) t + (1 − t) A,   x ∈ Ω,   (5)

where t is a fixed transmission value in the patch Ω, and the airlight component is known to us. If we estimate the color line directly, then the measurement of the color line direction will be erroneous if some dark pixels are present. In the proposed model, as the image is first subjected to DCP, the estimate is less prone to error.

Naive Bayes is a conditional likelihood model. Given a problem instance to be classified, represented by a vector x = (x_1, …, x_n) of n features (independent variables), it assigns to this instance probabilities p(C_k | x_1, …, x_n) for each one of the K possible outcomes or classes C_k. The predicament with the above expression is that, if the number of features n is large, or if a feature can take a considerable number of values, then applying such a model with likelihood tables is troublesome. We therefore reformulate the model to make it more tractable. Applying Bayes' theorem, the conditional likelihood can be calculated as:

p(C_k | x) = p(C_k) · p(x | C_k) / p(x).   (6)

The denominator of (6) can be considered as a constant. The numerator is proportional to the joint likelihood p(C_k, x_1, …, x_n), which can be rewritten as given below, utilizing the chain rule for repeated applications of the definition of conditional likelihood:

p(C_k, x_1, …, x_n) = p(x_1, …, x_n, C_k)
= p(x_1 | x_2, …, x_n, C_k) · p(x_2, …, x_n, C_k)
= p(x_1 | x_2, …, x_n, C_k) · p(x_2 | x_3, …, x_n, C_k) · p(x_3, …, x_n, C_k)
= …
= p(x_1 | x_2, …, x_n, C_k) · p(x_2 | x_3, …, x_n, C_k) ⋯ p(x_{n−1} | x_n, C_k) · p(x_n | C_k) · p(C_k).   (7)

The "naive" conditional independence assumptions are applied here for the proposed dehazing approach. Let us assume that each feature x_i is conditionally independent of every other feature x_j for j ≠ i, given the class C. This implies p(x_i | x_{i+1}, …, x_n, C_k) = p(x_i | C_k). Along these lines, the proposed conditional likelihood model can be expressed as

p(C_k | x_1, …, x_n) ∝ p(C_k) p(x_1 | C_k) p(x_2 | C_k) ⋯ = p(C_k) ∏_{i=1}^{n} p(x_i | C_k),   (8)

where ∝ denotes proportionality.

For evaluating the color line direction, we develop a model that assigns the value based on the feature vector for similarity estimation in equation (4), where the feature vectors are linearly independent. In practice, we vary the number of pixels given to the Naive Bayes Classification Model. By changing the number of pixels, we consider the likelihood of a greater number of pixels, computing the posterior probability for every pixel from the feature vector in (4) using the Naive Bayes equation. The pixel with the highest posterior probability becomes the output of the proposed prediction model.
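A minimal sketch of the posterior computation in (8) over the five features of (4), assuming Gaussian class-conditional likelihoods; the Gaussian model and all parameter shapes are our own illustrative choices, not details fixed by the paper.

```python
import numpy as np

def naive_bayes_posterior(feats, means, stds, priors):
    """Eq. (8): posterior(k) proportional to p(C_k) * prod_i p(x_i | C_k).

    feats:  (N, 5) feature vectors f(x) = (R, G, B, lam*X, lam*Y) from Eq. (4).
    means, stds: (K, 5) per-class Gaussian parameters (our modeling assumption).
    priors: (K,) class priors p(C_k).  Returns (N, K) normalized posteriors.
    """
    # Log-likelihoods of the 5 conditionally independent features, summed per class.
    ll = -0.5 * ((feats[:, None, :] - means[None]) / stds[None]) ** 2 \
         - np.log(stds[None]) - 0.5 * np.log(2 * np.pi)
    log_post = np.log(priors)[None, :] + ll.sum(axis=2)     # (N, K)
    log_post -= log_post.max(axis=1, keepdims=True)         # numerical stability
    post = np.exp(log_post)
    return post / post.sum(axis=1, keepdims=True)
```

Working in log space avoids the underflow that a literal product of many small likelihoods in (8) would cause; the pixel with the highest posterior for the chosen class is then taken as the prediction.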
4. Proposed Approach
In this section, we explain the steps to dehaze an image utilizing the local patch model in (5) and its related transmission estimation scheme in (2). We started with a brief review of the mathematical background in the previous section. Now we demonstrate the way the mathematical models can be used to scan the input image and consider small patches of pixels as candidate patches that satisfy (3). As articulated in the previous section, pixels that relate to an almost planar surface lie on a color line in RGB space, described by (4) and (5). In this way, in each patch we run a Naive Bayes Classification Model explicitly built using (8) that searches for a line supported by a significant number of pixels. We then check whether the line found is consistent with our formation model by testing it against a list of conditions posed by the model. A line that passes all of these tests is then utilized for assessing the transmission as per (3) and (4). The resulting values are then assigned to every one of the pixels that support the color line. In patches where we fail to find a line that meets all of the conditions, we evaluate the transmission by utilizing the NN Regularization and reapplying the above steps on the corresponding patch. Next we discuss the process of estimating the transmission map of an image using DCP.
We believe that the transmission t within a neighborhood patch Ω(x) is constant. Performing the minimum operation in the neighborhood patch of the hazy picture in (5), we have the following:

min_{y ∈ Ω(x)} (I^c(y)) = min_{y ∈ Ω(x)} (J^c(y)) t + (1 − t) A^c.   (9)

The above equation can be further rewritten as:

min_{y ∈ Ω(x)} (I^c(y) / A^c) = min_{y ∈ Ω(x)} (J^c(y) / A^c) t + (1 − t).   (10)

Applying the min operation again on the above equation, in order to get the minimum among the three RGB color channels, we have:

min_c ( min_{y ∈ Ω(x)} (I^c(y) / A^c) ) = min_c ( min_{y ∈ Ω(x)} (J^c(y) / A^c) ) t + (1 − t).   (11)

As indicated by (3), the dark channel J_dark of the haze-free pixel intensity J should tend to zero. Also, as A^c is a positive constant, thus:

min_c ( min_{y ∈ Ω(x)} (J^c(y) / A^c) ) = 0.   (12)

From (11) and (12), we get:

t = 1 − min_c ( min_{y ∈ Ω(x)} (I^c(y) / A^c) ).   (13)

The above mathematical formulation gives us an estimation of the transmission. As stated earlier, the DCP is not an appropriate prior for the sky regions. Since the sky is at infinite depth and practically has zero transmission, (13) smoothly tackles both sky and non-sky regions in the proposed approach. We dehaze the image from the assessed approximation of the transmission obtained from (13) and utilize the local patch model.

Reasonably, even in a clear environment, the atmosphere around us is not completely free of particles. Along these lines, haze still exists when we look at distant objects. In addition, the presence of haze is a major cue for a human to comprehend depth.

4.2. Dehazing Algorithm

Here we illustrate the steps for dehazing an image using our method. First, we dehaze the image by removing the dark pixels from the image without applying a soft matting algorithm [11] to refine the transmission. We denote the transmission map by t(x) and get the following cost function:

E(t) = t^T L t + λ (t − t̃)^T (t − t̃),   (14)

where L is the Matting Laplacian matrix introduced by Levin [29], t̃ is the coarse transmission estimate, and λ is a regularization parameter.

We analyze the transmission estimates of a few points with low depth values, and propose recovering the transmission estimates of the rest of the points by finding their closest match from the set of exact points, based on a k-d tree built with the 5-dimensional feature vectors defined in (4). The transmission map corresponding to the input image is shown in Figure 3, from which we can perceive that the transmission map is significantly smooth aside from unanticipated depth jumps. As specified earlier, the dehazing problem is extremely under-constrained. Hence, we have to derive a few assumptions from (8), (11) and (12) on the normal transmission map. The transmission map refined using the above strategy manages to capture the sharp edge discontinuities and outline the shapes of the objects. With the transmission map, we can recover the scene radiance as per (1). Yet, the direct attenuation may be near zero when the transmission is near zero. In this way, we limit the transmission t(x) to a lower bound t_0, so that a certain measure of haze is retained. The scene radiance J(x) is eventually recovered by:

J(x) = (I(x) − Â) / max(t(x), t_0) + Â,   (15)

where Â denotes the estimated airlight of the image. A typical range of t_0 falls in the interval [0.09, 0.1], which is fixed experimentally.
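A compact sketch of this pipeline under our own simplifying assumptions: the transmission is estimated per pixel via (13), unreliable pixels (here flagged by a hypothetical boolean mask) are filled from their nearest reliable neighbor in the 5-D feature space of (4) using a k-d tree, and the radiance is recovered with (15). The matting Laplacian refinement of (14) is omitted for brevity.

```python
import numpy as np
from scipy.ndimage import minimum_filter
from scipy.spatial import cKDTree

def dehaze(I, A, reliable, lam=0.5, patch=15, t0=0.1):
    """I: H x W x 3 hazy image in [0, 1]; A: length-3 airlight estimate.
    reliable: H x W boolean mask of pixels whose transmission we trust
    (how this mask is produced is outside this sketch; lam, patch, t0 are
    assumed values)."""
    H, W, _ = I.shape
    # Eq. (13): t = 1 - min_c min_{y in patch} I^c(y) / A^c.
    t = 1.0 - minimum_filter((I / A).min(axis=2), size=patch)

    # Eq. (4): 5-D features (R, G, B, lam*X, lam*Y) for NN regularization.
    ys, xs = np.mgrid[0:H, 0:W]
    f = np.concatenate([I.reshape(-1, 3),
                        lam * xs.reshape(-1, 1).astype(float),
                        lam * ys.reshape(-1, 1).astype(float)], axis=1)
    rel = reliable.ravel()
    tree = cKDTree(f[rel])               # k-d tree over the reliable pixels
    _, idx = tree.query(f[~rel])         # nearest reliable neighbor in feature space
    t_flat = t.ravel().copy()
    t_flat[~rel] = t_flat[rel][idx]      # copy its transmission value
    t = t_flat.reshape(H, W)

    # Eq. (15): J = (I - A) / max(t, t0) + A, with t0 retaining a bit of haze.
    return (I - A) / np.maximum(t, t0)[..., None] + A
```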
Since the scene radiance is generally not as bright as the atmospheric light, the image after haze removal looks dim. Hence, we increase the exposure of J(x) for display. We examine the information from the input image and consider small windows of pixels as candidate patches following (4). Pixels relating to an almost-planar monochromatic surface lie on a color line in RGB space, as indicated by (4). We evaluate the color line by applying the Naive Bayes Classifier on the intensity values in RGB space and get two points p_1 and p_2 lying on the straight line, together with a set of inliers. The equation of the line is P = P_0 + ρD, where ρ represents the parameter of the line, P_0 = p_1, and the direction ratio is D = (p_2 − p_1) / ||p_2 − p_1||. We examine the estimated line against conditions, namely a noteworthy number of inliers, a positive slope of D and unimodality, analogously to [5], and eliminate the questionable ones. Figure 4 demonstrates the effect of applying the Naive Bayes Classifier and the proposed NN regularization technique on a sample hazy image. The second row of Figure 4 illustrates the effect of applying different patch sizes on the same image.

Figure 4: Result of the proposed method with varying patch size. (a) input hazy image. (b) Output image without Nearest Neighbour Regularization (c) Output image without applying the Naive Bayes Classification Model (d) pixels given to the Naive Bayes Classification Model with patch size 7 × 7, (e) 15 × 15 and (f) 30 × 30.
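For illustration, a color line of the form P = P_0 + ρD can be fit to the pixels of one patch as sketched below; PCA is used here as a simple stand-in for the paper's Naive Bayes based selection of the two supporting points.

```python
import numpy as np

def fit_color_line(patch_pixels: np.ndarray):
    """Fit a color line P = P0 + rho * D to the RGB pixels of one patch.

    patch_pixels: (M, 3) RGB values. The principal direction of the point
    cloud serves as the line direction (an illustrative choice, not the
    paper's Naive Bayes procedure).
    """
    P0 = patch_pixels.mean(axis=0)                 # a point on the line
    _, _, Vt = np.linalg.svd(patch_pixels - P0)    # principal directions
    D = Vt[0] / np.linalg.norm(Vt[0])              # unit direction ratio
    # Two representative points p1, p2 at the extremes of the fitted line:
    rho = (patch_pixels - P0) @ D
    p1, p2 = P0 + rho.min() * D, P0 + rho.max() * D
    return P0, D, p1, p2
```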
4.2.1. Computing Â

We can make our estimate of Â using the Dark Channel [3] of the patch. The Dark Channel of a specific patch Ω is determined as follows (following [30]):

Dark(Ω) = min_{x ∈ Ω} ( min_{c ∈ {R,G,B}} I^c(x) ),   (16)

where I^c is the c-th color channel, for c ∈ {R, G, B}.

We have already defined the direction ratio D in the previous section, considering the standard conditions, namely significant line support, positive slope of D, unimodality and valid transmission according to (13) and (16), as discussed in [5]. We check whether the pixels found are consistent with our formation model by testing them following (4), (15) and (16). With respect to D, we conventionally consider the normal to the inclined plane through the origin, determined as N = (p_1 × p_2) / ||p_1 × p_2||.

We compute Â by minimizing the following error over the corresponding normals from the above definition:

E(Â) = (1/|Ω|) · Σ_{i ∈ Ω} (N_i · Â)^2,   (17)

where N_i · Â denotes the projection of Â onto the line parameters of the patch pixels, and · denotes the dot-product in RGB space. This reliability measure consists of projecting the line parameters onto the error function E(Â); (17) vanishes over uniformly distributed pixels and is positive otherwise. Therefore, we have

∂E/∂Â = Â · ( Σ_i N_i N_i^T ) = 0.   (18)

From (18), we need a non-trivial solution, as Â is a non-null vector; it can be acquired by computing a covariance matrix from the normals and afterwards taking the eigenvector of this covariance matrix corresponding to the smallest eigenvalue.

We can retrieve the magnitude a(x) of the airlight component from the evaluated Â, and acquire the color line correlating with the patch by minimizing the following error:

E_line(ρ, s) = min_{ρ,s} ||P_0 + ρD − sÂ||.   (19)

Here, the magnitude of the airlight component is denoted by s. The solution of (19) is obtained following [7]. The computed airlight portion is then validated with the following conditions: large intersection and convergence angles, close intersection, substantial range and shading variability. Among them, the large intersection angle and close convergence checks are done in the same way as detailed in [7].

We estimate J from the refined transmission map, the output image obtained from the proposed method, and a black image E. From the intermediate outcomes, it can be observed that the initial J has noticeable artefacts in the sky region, which are progressively eliminated during the refinement. The outcome is given in Figure 5, exhibiting the gradual decline in the graph alongside the images as output.

Figure 5: The convergence of the proposed approach. The objective function in Eq. (19) is monotonically declining. The transitional consequences of J and 10 × E at the given iteration.

For assessing the airlight part a(x), we dispose of a couple of patches and interpolate the estimate of the airlight at each pixel. This is done by minimizing the following function:

ψ(a(x)) = Σ_Ω Σ_{x ∈ Ω} (a(x) − ã(x))^2 / σ_a(Ω)^2 + β · Σ_x a(x)/||I(x)|| + α · Σ_x Σ_{y ∈ L(x)} (a(x) − a(y))^2 / ||I(x) − I(y)||.   (20)
Here, ã(x) is the assessed magnitude of the airlight component, L(x) is the local neighborhood of x, a(x) represents the airlight component to be computed, and σ_a(Ω) is the error variance of the estimate of ã(x) inside the patch Ω. The initial two terms constitute the function utilized in [5]. To obtain a better result, we include the last term, which guarantees that the component is a small portion of I(x). The latter term of (20) is utilized to adjust the component with the intensity of I(x). Presently, to minimize the energy function given by equation (20), we convert it to the following form:

Ψ(a) = (a − ã)^T · Σ (a − ã) + α a^T L a + β b^T a,   (21)

where a and ã denote the vector forms of a(x) and ã(x) respectively, Σ denotes the covariance matrix of the pixels where the approximation is made, and L is the Laplacian matrix of the graph built by considering each pixel as a vertex and connecting the neighboring vertices. The weight of the edge between a pair of vertices x and y is ||I(x) − I(y)||. Here α and β are scalars controlling the significance of each term.

The airlight at every pixel is now obtained by computing a(x)Â. Consequently, the direct transmission can be recovered from

J(x) t(x) = I(x) − a(x) Â.   (22)

As we do not have t(x) explicitly, we elevate the contrast utilizing the airlight and attempt to recover J(x). For instance, if the recovered image is R_im(x), at pixel x:

R_im(x) = J(x) t(x) / (1 − Y(a(x) Â)),   (23)

where Y is given by the following equation:

Y(I(x)) = 0.299 I_R(x) + 0.587 I_G(x) + 0.114 I_B(x),   (24)

where Y(I(x)) computes the luma at the pixel x. The idea is to upgrade the pixel x depending upon how much intensity (brightness) is expelled from it. In many cases, the image remains dull even after the above operation. So we utilize gamma correction to restore the natural brightness of the image.
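The following sketch illustrates the eigen-solution of (18): the airlight direction is taken as the eigenvector of Σ_i N_i N_i^T with the smallest eigenvalue, since that direction minimizes the summed squared projections in (17). The normals array is a hypothetical input (one unit normal per accepted patch), not a structure defined by the paper.

```python
import numpy as np

def estimate_airlight_direction(normals: np.ndarray) -> np.ndarray:
    """normals: (M, 3) unit normals N_i = (p1 x p2) / ||p1 x p2||, one per patch.

    Eqs. (17)-(18): minimizing sum_i (N_i . A)^2 over unit vectors A amounts to
    taking the eigenvector of C = sum_i N_i N_i^T with the smallest eigenvalue.
    """
    C = normals.T @ normals               # 3 x 3 covariance matrix of the normals
    eigvals, eigvecs = np.linalg.eigh(C)  # eigh returns ascending eigenvalues
    A = eigvecs[:, 0]                     # eigenvector of the smallest eigenvalue
    # Airlight should have non-negative RGB components; flip the sign if needed.
    return A if A.sum() >= 0 else -A
```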
5. Results
We assess the performance of the proposed method on the Berkeley Segmentation Dataset (BSDS300), containing natural images [31]. This is a diverse dataset of clear outdoor natural images and consequently represents the moderate scenes that may have been affected by haze. We evaluated the airlight part using equation (17). We utilize the same parameters for all images: in equation (14) we set λ = 0.1, and we scale σ(x) to the interval [0, 1] to avoid numerical issues. For experimenting on synthetic data, we utilize the dataset proposed in [5]. The dataset contains eleven dehazed pictures, synthetic distance maps and corresponding simulated fog pictures. Identically distributed zero-mean Gaussian noise with three distinct noise levels, n = 0.01, 0.025, 0.05, was added to these images (with image intensity scaled to [0, 1]).

Table 1: L1 errors (scaled to the interval [0, 1]), after applying the proposed method over synthetic hazy images with different measures of noise and for different standard deviations σ, compared to the competing methods [5], [14] and [27] (per-image entries, including the Lawn1 and Church images).

Table 2: The average results of MSE, SSIM, PSNR and WSNR on the hazy images, for the hazy input image and the methods of He [14], Berman [34], Liao [24], Tang [22], MSCNN [8], DehazeNet [9], AODNet [10], and our result.
Table 3: The average results of MSE, SSIM, PSNR and WSNR on the images with the sky region, comparing the hazy input, the methods of [14], [22], [8], [9], [10], and our result.
Table 1 condenses the L1 errors on the non-sky pixels of the transmission maps and the fog-free images of the synthetic dataset. The results of four sample images are given in the table. We utilize a series of assessment criteria to measure the difference between each coordinate of the haze-free image and the result. In addition to the extensively used mean square error (MSE) and structural similarity (SSIM) [32] measures, we utilized some additional assessment frameworks, namely the weighted peak signal-to-noise ratio (WSNR) and the peak signal-to-noise ratio (PSNR) [33]. Our method is compared with seven state-of-the-art methods (including deep learning based methods) when applied on the hazy images of the BSDS dataset, the results of which are shown in Table 2.

In Table 2, the last three lines demonstrate the results of applying deep learning based techniques. As we can observe, the effect of applying the proposed method on hazy images is much better than the best-in-class handcrafted approaches. Additionally, the proposed method is comparable to the deep learning based procedures as well.

In Table 3, the results of implementing the proposed method on images containing sky and other high depth regions, compared to the state-of-the-art methods, are shown. Figure 6 shows how the accuracy of the proposed method varies with respect to depth. We can observe that, as the depth increases, the proposed method gives a much better result compared to the state-of-the-art, including the deep learning based methods.
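As a reference for how these metrics are typically computed, here is a minimal sketch using scikit-image; WSNR (which weights the error by a contrast sensitivity function) is not available there, so the sketch covers only MSE, PSNR and SSIM.

```python
import numpy as np
from skimage.metrics import (mean_squared_error,
                             peak_signal_noise_ratio,
                             structural_similarity)

def evaluate(dehazed: np.ndarray, reference: np.ndarray) -> dict:
    """Both images: H x W x 3 float arrays in [0, 1]."""
    return {
        "MSE":  mean_squared_error(reference, dehazed),
        "PSNR": peak_signal_noise_ratio(reference, dehazed, data_range=1.0),
        "SSIM": structural_similarity(reference, dehazed,
                                      channel_axis=2, data_range=1.0),
    }
```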
Figure 6: Graphs showing the performance of the proposed approach compared to the state-of-the-art, with increasing depth value. (a) Images without sky area, (b) Images with sky area.

Figure 7: Comparison on natural images: [Left] Input hazy images, [Right] Our result. Middle columns display results by several competing methods. First row: (a) Hazy Image (b) He et al. [14] (c) Fattal [5] (d) Bahat [35] (e) Berman [34] (f) our result. Second row: (g) Hazy Image (h) He et al. [14] (i) Fattal [5] (j) Berman [34] (k) Lu [36] (l) our result. Third row: (m) Hazy Image (n) He et al. [14] (o) Fattal [5] (p) Tang [22] (q) DehazeNet [9] (r) our result.

5.2. Qualitative results

In Figure 7, we compare the results of the proposed approach with the state-of-the-art single image dehazing techniques [5, 14, 34, 35, 36]. As noted by [5], the picture after fog removal may look dim, since the scene radiance is typically not as bright as the airlight. The techniques in [5, 14] provide good results, yet lack some micro-contrast when compared with [34] and with our method. In the result of [36] there are artefacts at the boundaries between segments.

Figure 8: Comparison on natural images: [Left] Input hazy images, [Right] Our result. Middle columns display results by several competing methods. First row: (a) Hazy Image (b) He et al. [14] (c) MSCNN [8] (d) DehazeNet [9] (e) AODNet [10] (f) our result. Second row: (g) Hazy Image (h) He et al. [14] (i) Tang [22] (j) MSCNN [8] (k) DehazeNet [9] (l) our result. Third row: (m) Hazy Image (n) He et al. [14] (o) MSCNN [8] (p) DehazeNet [9] (q) AODNet [10] (r) Our result.
Figure 9 demonstrates a comparison between the outcomes obtained by the proposed technique and the state-of-the-art (specifically, deep learning based techniques) when applied on images containing sky regions. The proposed approach outperforms the competing techniques on this kind of image.

Our assumption regarding having a fog-free pixel in each haze line does not hold, as evident from a few cloudy pixels that set a maximum radius, e.g., the red structures. In spite of that, the transmission in those territories is evaluated accurately because of the regularization that propagates the depth data spatially from the other fog lines.

Dehazing the sky area in a hazy image is really challenging, since clouds and mist are similar natural phenomena governed by a similar atmospheric scattering model. This issue is alleviated, but nevertheless persists, in the DCP [14], Non-Local Dehazing [34], DehazeNet [9] and MSCNN [8] results. MSCNN produces the opposite artifact of over-enhancement: see the sky region of Yosemite for an example (Figure 9). AOD-Net [10] can remove the haze without introducing false color tones or distorted object shapes. However, AOD-Net does not explicitly consider the treatment of white scenes (as can be seen in the sky area of Yosemite and Building in Figure 9). Our method appears to be capable of finding the sky region to keep its color, and ensures a good dehazing effect in the other regions.
6. Conclusion