Rain Streak Removal for Single Image via Kernel Guided CNN
Ye-Tao Wang, Xi-Le Zhao∗, Tai-Xiang Jiang∗, Liang-Jian Deng, Yi Chang, and Ting-Zhu Huang
Abstract—Rain streak removal is an important issue that has recently been investigated extensively. Existing methods, especially the newly emerged deep learning methods, can remove rain streaks well in many cases. However, the essential factor in the generative procedure of rain streaks, i.e., the motion blur, which leads to their line pattern appearance, has been neglected by the deep learning rain streak removal approaches, resulting in over-derained or under-derained results. In this paper, we propose a novel rain streak removal approach using a kernel guided convolutional neural network (KGCNN), achieving state-of-the-art performance with simple network architectures. We first model the rain streak interference with its motion blur mechanism. Then, our framework starts by learning the motion blur kernel, which is determined by two factors, the angle and the length, with a plain neural network, denoted as the parameter net, from a patch of the texture component. After a dimensionality stretching operation, the learned motion blur kernel is stretched into a degradation map with the same spatial size as the rainy patch. The stretched degradation map together with the texture patch is subsequently input into a derain convolutional network, which is a typical ResNet architecture trained to output the rain streaks under the guidance of the learned motion blur kernel. Experiments conducted on extensive synthetic and real data demonstrate the effectiveness of the proposed method, which preserves the texture and the contrast while removing the rain streaks.
Index Terms—rain streak removal, guided kernel, convolutional neural network (CNN).
I. INTRODUCTION
Outdoor vision systems are frequently affected by bad weather conditions, one of which is rain. Because of their high motion velocities and light scattering, raindrops usually introduce bright streaks into the images or videos acquired by cameras. This undesirable interference degrades the performance of various computer vision algorithms [1], such as object detection [2], event detection [3], action recognition [4], and scene analysis [5]. Therefore, alleviating the effects of rain is an essential task and has been investigated extensively. Fig. 1 exhibits one example of single image rain streak removal.

∗ Corresponding authors. Y.-T. Wang, X.-L. Zhao, T.-X. Jiang, L.-J. Deng, and T.-Z. Huang are with the School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu, 611731, P. R. China. Y. Chang is with the Institute of Control and Information Technology, Huazhong University of Science and Technology, Wuhan, 430074, P. R. China. E-mails: [email protected], [email protected], [email protected], [email protected], [email protected].

Fig. 1. An example of rain streak removal for a real-world rainy image. Top-left: the rainy image. Top-right: the derain result by DID [6]. Bottom-left: the derain result by DDN [7]. Bottom-right: the derain result by the proposed KGCNN.

Most of the traditional methods focus on the discriminative characteristics of the rain streaks and the clean background, for instance, the high-frequency property [8], [9], the directionality [10]–[14], and the repeatability [15] of the rain streaks, and the piecewise smoothness [13]–[17] of the background (without loss of generality, we denote the rain-free content as "background" throughout this paper). It is common for the model based methods to elaborately tailor an optimization model with hand-crafted regularizers expressing the prior knowledge. Although these model based methods go into much depth on the distinct characteristics of the rain streaks, they are often insufficient to cover the majority of the important factors in a real rainy scene, since the degradation caused by rain streaks can be very complex. The traditional learning based methods attempt to overcome this shortage by inferring discriminative dictionaries [18], GMMs [16], [17], stochastic distributions [19], or convolutional filters [20]–[22] from the data. Benefiting from the strong representation ability of the convolutional neural network, the deep learning techniques [6], [20], [23]–[28] further leverage the data to the greatest extent and obtain promising results.

As can be seen in Fig. 1, state-of-the-art deraining algorithms [6], [7] tend to produce under-derained results (bottom-left) or over-derained results (top-right). We attribute these phenomena to the fact that it is challenging for deraining methods, even the deep learning based ones, to distinguish the rain streaks from line pattern textures (e.g., the grass in Fig. 1). The rain streak removal performance of a neural network can be heightened by adopting a deeper architecture, as in [7], or by elaborately designing the architecture so that the contextual information is taken into account, as in [6], [27].
Fig. 2. The flowchart of the proposed KGCNN single image rain streak removal framework. 1) The rainy image is decomposed into the texture component and the structure component. 2) A rainy patch is fed to the parameter net to obtain the angle and the length of the motion blur kernel. 3) The motion blur kernel is stretched into the degradation map. 4) The rainy patch and the degradation map are then passed to the derain net, whose output is the rain streak patch. 5) Finally, the derain image is obtained by subtracting the rain streak image from the rainy image.

However, it is still difficult to purposefully enhance the capacity of the neural network to face the aforementioned challenge. Meanwhile, another question arises: can we address this challenge and achieve promising performance with common and simple neural network structures?

In this paper, referring to [1], which pointed out that the appearance of the rain streaks is mainly related to the motion blur mechanism, we propose a novel degradation model of the rain streak interference that takes the motion blur procedure into account. By modeling the degradation of rain streaks as motion blur, we are able to utilize two important distinct characteristics of the rain streaks, i.e., the repeatability and the directionality. The line pattern textures do not share the same generation mechanism as the rain streaks. Therefore, this modeling strategy contributes to distinguishing the rain streaks from the line pattern textures. After modeling the rain streaks with a motion blur kernel, the questions become 1) how to infer the motion blur kernel from the data, and 2) how to utilize the information provided by the motion blur kernel when deraining.

In our approach, we assume that the rain streaks in a small patch approximately share the same motion blur kernel. At the beginning, a rainy patch is fed to the parameter net, a plain 6-layer network, to infer the angle and the length of the motion blur kernel. To enable the learned motion blur kernels to participate in the subsequent deraining process, we adopt the dimensionality stretching strategy [29], which stretches the motion blur kernels into degradation maps with the same spatial resolution as the detail patches. Then, the detail patch together with the degradation map is input into a common 26-layer ResNet, whose output is a patch of rain streaks. Finally, we obtain the derain results by subtracting the rain streaks from the rainy image. The core idea of our framework is exploiting the generation mechanism of the rain streaks to guide rain streak removal, and the flowchart of our framework is illustrated in Fig. 2.
Contributions: The contributions of this paper mainly include four aspects.
• We build a novel rain streak generation model which takes the motion blur kernel into account. This modeling strategy enables us to utilize the repeatability and the directionality of the rain streaks.
• A sub-net, i.e., the parameter net, is built to learn the parameters (length and angle) of the motion blur kernel. Unlike existing methods that use a sub-net to embed the contextual information, our sub-net is designed to exploit the generation information of the rain streaks.
• We propose an effective kernel guided CNN (KGCNN) framework, in which the network structures are common and simple, for rain streak removal. Within this framework, the automatically learned motion blur kernel thoroughly guides the process of rain streak removal.
• Extensive experiments are conducted on publicly available real and synthetic data. Qualitative and quantitative comparisons with existing state-of-the-art methods are presented. The results show that KGCNN removes rain streaks well while keeping the texture and the contrast of the background.

The organization of this paper is as follows. We provide an overview of the existing deraining methods in Section II. Section III gives the detailed architecture of the proposed KGCNN. In Section IV, experimental results on the synthetic data and the real-world data are reported. Finally, we draw conclusions in Section V.
II. RELATED WORKS
In the past decades, numerous methods have been proposed to improve the visibility of images/videos captured with rain streak interference [6], [8]–[28], [30]–[43]. Traditionally, these methods can be divided into two categories: single image based methods and multiple-image/video based methods. Nevertheless, the explosive development of deep learning has brought in a novel branch, i.e., the deep learning methods.
A. Single Image Based Methods
For the single image derain task, Kang et al. [8] decomposed a rainy image into low-frequency (LF) and high-frequency (HF) components using a bilateral filter and then performed morphological component analysis (MCA)-based dictionary learning and sparse coding to separate the rain streaks in the HF component. To alleviate the loss of details when learning HF image bases, Sun et al. [9] tactfully exploited the structural similarity of the derived HF image bases. Kim et al. [44] took advantage of the nonlocal similarity. Chen et al. [15] considered the similar and repeated patterns of the rain streaks and the smoothness of the rain-free content. Sparse coding and dictionary learning were adopted by Luo et al. [18] and Son et al. [45]. In their results, the details of the backgrounds were well preserved. The recent work by Li et al. [17] utilized Gaussian mixture model (GMM) patch priors for rain streak removal, with the ability to account for rain streaks of different orientations and scales. Meanwhile, the directional property of rain streaks received a lot of attention in [10]–[12], [30], and these methods achieved promising performance. Ren et al. [33] removed the rain streaks from the image recovery perspective. Wang et al. [32] took advantage of image decomposition and dictionary learning.
B. Video Based Methods
Garg et al. [36] first raised a video rain streak removal method with a comprehensive analysis of the visual effects of rain on an imaging system. Since then, many approaches have been proposed for the video rain streak removal task and obtained good performance under different rain circumstances. Tripathi et al. [37] took the spatiotemporal properties into consideration. In [15], the similarity and repeatability of rain streaks were considered, and a generalized low-rank appearance model was proposed. Chen et al. [46] considered highly dynamic scenes. Thereafter, Kim et al. [39] considered the temporal correlation of rain streaks and the low-rank nature of clean videos. Santhaseelan et al. [40] detected and removed the rain streaks based on phase congruency features. Additionally, early video based methods were comprehensively reviewed in [38]. You et al. [41] took into account the raindrops adhered to a windscreen or window glass. In [13] and [14], a novel tensor-based video rain streak removal approach was proposed by considering the directional property. The rain streaks and the clean background were stochastically modeled as a mixture of Gaussians by Wei et al. [19]. Convolutional sparse coding (CSC), which has shown its ability in image cartoon-texture decomposition [22], was also adopted by Li et al. [20] for video rain streak removal. Ren et al. [42] addressed the video desnow and derain task based on matrix decomposition.
C. Deep Learning Based Methods
The deep learning based method was first applied to deraining in [47], in which a 3-layer convolutional neural network (CNN) was designed to remove static raindrops and dirt spots from pictures taken through window glass. Fu et al. [34] were the first to successfully tailor a deep CNN for the rain streak removal task. Moreover, in [7], Fu et al. designed the deep detail network (DDN) to further improve the performance by adopting the well-known deep residual network (ResNet) [48] structure. Pan et al. [49] simultaneously operated on the texture component and the structure component. Yang et al. [27] added a binary map, which reflects the contextual information, to the rain streak observation model and constructed a deep network that jointly detected and removed rain streaks. Meanwhile, the increasingly popular Generative Adversarial Networks (GAN) were first used in [35] for rain streak removal and recently applied to the task of dealing with adherent raindrops [23]. In [25], Chen et al. proposed a CNN framework for the video rain streak removal task, while a recurrent neural network was adopted by Liu et al. [26]. For joint rain-density estimation and deraining, Zhang et al. [6] raised a density-aware multi-stream densely connected convolutional neural network (DID). In [24], both the rain component and the background component are considered to remove rain streaks. Fan et al. [50] developed a residual-guide feature fusion network, which was detachable to meet different rainy conditions. A lightweight pyramid of networks was proposed in [51], using domain-specific knowledge to simplify the learning process.

III. THE FRAMEWORK OF THE KERNEL GUIDED CONVOLUTIONAL NEURAL NETWORK
In this section, we give our rain streak observation model and subsequently clarify the detailed architecture of the proposed derain framework. As exhibited in Fig. 2, there are mainly three parts, i.e., the parameter net, the dimensionality stretching, and the derain net. The main pipeline is: 1) decomposing the rainy image into the texture component and the structure component; 2) processing the patches of the texture component using the parameter net, the PCA operation, and the derain net; and 3) subtracting the obtained rain streaks to obtain the derain result.
A. Observation Model
As mentioned previously, the rain streaks in a small patch can be approximately viewed as sharing the same motion blur kernel. Hence, the basic unit of our observation model is the patch. Similar to many existing methods, the rainy image is modeled as a linear superposition:

$O = B + R$,  (1)

where $O$, $B$, and $R \in \mathbb{R}^{m \times n}$ are patches of the observed rainy image, the underlying background, and the rain streaks, respectively. After taking the motion blur into consideration, the observation model in Eq. (1) becomes:

$O = B + K(\theta, l) \otimes M$,  (2)

where $\theta$ and $l$ are respectively the angle and the length of the motion blur kernel $K \in \mathbb{R}^{p \times p}$, $M$ is the raindrop mask, and $\otimes$ denotes the convolution operation. Because of the high velocity of the raindrops, the appearances of the rain streaks are mostly linear. Hence, using the angle $\theta$ and the length $l$ to characterize the motion blur kernel of the rain streaks is reasonable, and its advantage is illustrated in the next subsection.

Meanwhile, many existing methods conduct the rain streak removal procedure on the detail component [7], [34], [49] or the high-frequency (HF) component [8], [9]. Following this research line, we adopt the guided filter in [52] as the low-pass filter because it is simple and fast to implement (as discussed in [34], the choice of the low-pass filter is not limited to guided filtering). The rainy patch is decomposed into two parts, the texture component $O_T$ (denoted as the "detail component" in [7], [34], [49]) and the structure component $O_S$, which satisfy $O = O_S + O_T$.

The advantages of processing on the texture component have been fully discussed in [7], [34]. For the convenience of the readers, we briefly summarize them here. It can be found in Fig. 2 that the texture component contains all the rain streaks, i.e., $O_T = B_T + R$, so that training and testing on the texture component $O_T$ is sufficient and compact. Meanwhile, the texture component is sparser, and the range of its values is significantly smaller than that of the pixels in the original image domain. This also decreases the mapping range of the neural network, making the network focus on the important information. After this decomposition, the observation model becomes

$O_T = B_T + K(\theta, l) \otimes M$,  (3)

where $B_T \in \mathbb{R}^{m \times n \times c}$ is the rain-free content of the texture component, and the goal turns to estimating the clean texture part and separating the rain streaks from the rainy texture component. In this work, considering the benefits of processing on the texture component, we design and train a CNN derainer $\mathcal{F}_D(\cdot; \Theta_D)$, which maps the texture patch $O_T$ into the rain streak patch $R = K(\theta, l) \otimes M$.

Modeling the rain streaks with the motion blur kernel $K$ has two advantages. One is that two important factors of the rain streak appearance, i.e., the length $l$ and the angle $\theta$, are uniformly encoded by the motion blur kernel. The other is that the repeatability of the rain streaks allows us to easily infer the two parameters from an input texture patch. In the next subsection, we present the details of how to estimate the parameters and how to embed the learned motion blur kernel into the deraining procedure.
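To make the observation model concrete, the following minimal NumPy sketch builds a linear motion blur kernel from $(\theta, l)$ and synthesizes a rainy texture patch according to Eq. (3). The kernel constructor, its support size, and all parameter values are illustrative assumptions rather than the authors' exact implementation.

```python
import numpy as np
from scipy.ndimage import rotate
from scipy.signal import convolve2d

def motion_blur_kernel(theta_deg, length, size=25):
    """K(theta, l): a normalized line segment of the given length,
    rotated to the given angle (support size and interpolation
    order are assumptions)."""
    k = np.zeros((size, size))
    c = size // 2
    half = int(round(length / 2))
    k[c, max(c - half, 0):min(c + half + 1, size)] = 1.0  # horizontal line of length ~l
    k = rotate(k, theta_deg, reshape=False, order=1)      # rotate to angle theta
    return k / k.sum()

# O_T = B_T + K(theta, l) conv M, cf. Eq. (3)
rng = np.random.default_rng(0)
B_T = rng.normal(0.0, 0.01, (64, 64))              # toy rain-free texture patch
M = (rng.random((64, 64)) > 0.98).astype(float)    # sparse raindrop mask
K = motion_blur_kernel(theta_deg=60.0, length=21)
R = convolve2d(M, K, mode="same")                  # rain streaks R = K conv M
O_T = B_T + R
```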
B. The Parameter Sub-Network

Since the CNN has shown its overwhelming superiority in feature extraction, we use a CNN to learn the motion blur kernel. Initially, given a CNN $\mathcal{F}_K(\cdot; \Theta_K): \mathbb{R}^{m \times n} \to \mathbb{R}^{p \times p}$, which maps the input texture patch to the motion blur kernel, with network parameters $\Theta_K$, the loss function for training this CNN architecture is

$\mathcal{L}_K(\Theta_K) = \frac{1}{n} \sum_{i=1}^{n} \| \mathcal{F}_K(O_T^i; \Theta_K) - K^i \|_F$,  (4)

where $\|\cdot\|_F$ denotes the Frobenius norm and $i$ indexes the patches and motion blur kernels. However, the performance of $\mathcal{F}_K(\cdot)$ is not satisfactory. Without the fully connected layer, it does not converge. As we pointed out above, the motion blur kernel within the generation of rain streaks is conclusively decided by two parameters, i.e., the angle $\theta$ and the length $l$. This indicates that the intrinsic information lies in a parameter space with a much lower dimension than the convolution filter space. Working directly on the low-dimensional information not only facilitates the task of motion blur kernel estimation, but also prevents possible overfitting, which is verified in the experimental part. Therefore, we adopt a CNN $\mathcal{F}_P(\cdot; \Theta_P): \mathbb{R}^{m \times n} \to \mathbb{R}^2$, which maps the input texture patch to the parameter vector, with network parameters $\Theta_P$, and the loss function for training turns to:

$\mathcal{L}_P(\Theta_P) = \frac{1}{n} \sum_{i=1}^{n} \| \mathcal{F}_P(O_T^i; \Theta_P) - p^i \|_F$,  (5)

where $p = [\theta, l]^\top$ is the parameter vector. The architecture of $\mathcal{F}_P(\cdot)$ (denoted as the "parameter net") is exhibited in Fig. 2. Once the parameters $\theta$ and $l$ are determined, the motion blur kernel $K$ is unique.
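A minimal Keras-style sketch of such a parameter net is given below. Fig. 2 only fixes the layer types (four Conv+ReLU layers followed by two fully connected layers), so the filter counts, strides, and patch size here are assumptions, and the MSE loss is an L2 surrogate of Eq. (5).

```python
import tensorflow as tf

def build_parameter_net(patch_size=64):
    """Parameter net F_P: texture patch -> p = [theta, l].
    Only the 4 Conv + 2 FC layout follows Fig. 2; widths and
    strides are illustrative."""
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, strides=2, activation="relu",
                               input_shape=(patch_size, patch_size, 1)),
        tf.keras.layers.Conv2D(32, 3, strides=2, activation="relu"),
        tf.keras.layers.Conv2D(64, 3, strides=2, activation="relu"),
        tf.keras.layers.Conv2D(64, 3, strides=2, activation="relu"),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),  # first FC layer
        tf.keras.layers.Dense(2),                       # outputs [theta, l]
    ])

param_net = build_parameter_net()
param_net.compile(optimizer="adam", loss="mse")  # L2 surrogate of Eq. (5)
```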
C. Dimensionality Stretching

After obtaining the motion blur kernel $K$, the question becomes how to utilize it when deraining. In general, the input of the derain net $\mathcal{F}_D(\cdot; \Theta_D)$, which is detailed in the next subsection, is supposed to be the texture patch together with the motion blur kernel learned from this patch, since the motion blur kernel carries the prior knowledge of the rain streaks. If we simply spliced the texture patch $O_T \in \mathbb{R}^{m \times n}$ and the motion blur kernel $K \in \mathbb{R}^{p \times p}$, the weight sharing in the CNN would prevent the texture patch from accessing the entire information of the motion blur kernel. Hence, the dimensionality stretching operation in [29] is necessary.

The dimensionality stretching strategy is schematically illustrated in Fig. 2. At the beginning, the motion blur kernel $K$ is vectorized into a vector $k \in \mathbb{R}^{p^2}$. After the vectorization, $k$ is projected onto a $t$-dimensional linear space by the principal component analysis (PCA) technique. Then, the projected vector $k_t \in \mathbb{R}^t$ is stretched into degradation maps $\mathcal{M} \in \mathbb{R}^{m \times n \times t}$: all values in the $j$-th horizontal slice of size $m \times n$ of the 3-dimensional $\mathcal{M}$ equal the $j$-th element of $k_t$. By doing so, the degradation maps can be concatenated with the texture patch, making it possible for the CNN to handle the two inputs.

However, different from [29], in the case of a plain motion blur kernel with only two parameters, we found that $t$ is still too large compared with the number of channels of the texture image. Meanwhile, since working on the texture component leads to pixel values close to zero, the information of the texture image may be drowned by the information of the motion blur kernel when $t$ is relatively large. To tackle this issue, the degradation maps are concatenated with the texture image after the first convolutional layer of the derain net, as shown in Fig. 2.
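The stretching step itself is a couple of lines of NumPy. In this sketch the PCA basis is a random orthonormal stand-in for the one that would be learned offline from training kernels, and the toy $t$ is far smaller than the 162 dimensions reported in Sec. IV-B.

```python
import numpy as np

def stretch_kernel(K, pca_basis, m, n):
    """Dimensionality stretching: vectorize K, project it onto a
    t-dimensional PCA basis (rows of pca_basis), then tile each
    coefficient into an m x n degradation map."""
    k = K.reshape(-1)                      # vectorization: k in R^{p^2}
    k_t = pca_basis @ k                    # PCA projection: k_t in R^t
    maps = np.broadcast_to(k_t[None, None, :], (m, n, k_t.size))
    return maps.copy()                     # degradation maps, shape (m, n, t)

# toy usage with a random orthonormal basis standing in for the learned PCA
p, t, m, n = 25, 8, 64, 64
basis = np.linalg.qr(np.random.randn(p * p, t))[0].T   # (t, p*p), orthonormal rows
maps = stretch_kernel(np.random.rand(p, p), basis, m, n)
print(maps.shape)  # (64, 64, 8)
```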
D. Derain Net

As previously mentioned in Sec. I, instead of elaborately designing the architecture, we resort to the typical ResNet structure. A cascade of convolutional layers is applied to perform the deraining. Each layer is composed of three types of operations: convolution (denoted as "Conv"), rectified linear units [53] (denoted as "ReLU"), and batch normalization [54] (denoted as "BN"). We still use the Frobenius norm in the loss function, which is

$\mathcal{L}_D(\Theta_D) = \frac{1}{n} \sum_{i=1}^{n} \| \mathcal{F}_D(O_T^i, K^i; \Theta_D) - R^i \|_F$,  (6)

where $R$ denotes the rain streaks. After subtracting the rain streaks $R$ from the rainy image $O$, we obtain the background.
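A sketch of such a derain net in Keras is shown below. The depth of 26 and width of 36 follow the choice reported in Sec. IV-F, and the degradation maps enter after the first convolutional layer as described in Sec. III-C; the exact residual-block layout and the value of $t$ are assumptions, and the MSE loss again stands in for the Frobenius loss of Eq. (6).

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_derain_net(depth=26, filters=36, t=8):
    """Plain ResNet derainer F_D: (texture patch, degradation maps) ->
    rain streak patch. Roughly `depth` conv layers in total; the block
    layout is illustrative."""
    patch = layers.Input((None, None, 1))
    maps = layers.Input((None, None, t))
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(patch)
    x = layers.Concatenate()([x, maps])             # inject kernel information
    x = layers.Conv2D(filters, 3, padding="same")(x)
    for _ in range((depth - 4) // 2):               # Conv+ReLU+BN residual blocks
        skip = x
        for _ in range(2):
            x = layers.Conv2D(filters, 3, padding="same")(x)
            x = layers.BatchNormalization()(layers.ReLU()(x))
        x = layers.Add()([x, skip])                 # skip connection
    rain = layers.Conv2D(1, 3, padding="same")(x)   # estimated rain streaks R
    return tf.keras.Model([patch, maps], rain)

derain_net = build_derain_net()
derain_net.compile(optimizer="adam", loss="mse")    # L2 surrogate of Eq. (6)
```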
Discussion: As we mentioned above, distinguishing the rain streaks from the line pattern textures is important but challenging. In this work, we face this challenge by exploiting the generation mechanism of the rain streaks to guide the rain streak removal. Within our framework, the generation mechanism of the rain streaks is taken into consideration, and the prior knowledge of the rain streaks, i.e., the angle and the length of the motion blur kernel, is automatically learned. The embedding of the motion blur kernel into the derain net, which maintains a plain ResNet structure, greatly enhances the performance (see the comparisons in Sec. IV-E). To some extent, the utilization of the motion blur kernel in our method can be viewed as analogous to a traditional optimization model utilizing a regularizer to express the prior knowledge.
IV. EXPERIMENTAL RESULTS
To evaluate the performance of the proposed KGCNN framework, we test it on both synthetic and real-world rainy images. The networks are trained on synthesized rainy images. We compare our KGCNN with six state-of-the-art methods, including three traditional methods: the unidirectional global sparse model (UGSM) [11], the discriminative sparse coding method (DSC) [18], and the method using layer priors (LP) [16], as well as three deep learning based methods: the density-aware multi-stream deraining dense network (DID) [6], a plain convolutional neural network deraining method (CNN) [34], and the deep detail network (DDN) [7]. Code is available at http://yu-li.github.io/ (LP), https://github.com/hezhangsprinter/DID-MDN (DID), https://xueyangfu.github.io/projects/tip2017.html (CNN), and https://xueyangfu.github.io/projects/cvpr2017.html (DDN).
A. Rainy Images Simulation

With the observation model in Eq. (2), the synthetic rainy images are generated by the following steps. (1) Transform the background from the RGB color space to the YUV color space (https://en.wikipedia.org/wiki/YUV). (2) Generate the raindrop mask M by adding salt-and-pepper noise with a signal-to-noise ratio from 0.9 to 1.0 to a zero matrix of the same size as the Y channel of the background, and then applying a Gaussian blur with standard deviation from 0.2 to 0.5. (3) Generate the motion blur kernel K with the angle θ sampled from an interval starting at 45° and the length l varying from 15 to 30. (4) Directly add the generated rain streaks R = K ⊗ M to the Y channel of the background, and set intensity values greater than 1 to 1. (5) Finally, transform the image back to the RGB color space.
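The five steps translate almost directly into OpenCV calls; the sketch below reuses the motion_blur_kernel helper from the Sec. III-A sketch, and the concrete θ, l, SNR, and blur values are single samples from the ranges above.

```python
import numpy as np
import cv2

def synthesize_rainy(bgr, theta=70.0, length=20, snr=0.95, sigma=0.3):
    """Steps (1)-(5): YUV transform, raindrop mask, motion blur kernel,
    rain added on the Y channel with clipping, inverse transform."""
    yuv = cv2.cvtColor(bgr, cv2.COLOR_BGR2YUV).astype(np.float32) / 255.0
    h, w = yuv.shape[:2]
    mask = (np.random.rand(h, w) > snr).astype(np.float32)  # salt noise on zeros
    mask = cv2.GaussianBlur(mask, (5, 5), sigma)            # soften the raindrops
    kernel = motion_blur_kernel(theta, length)              # see Sec. III-A sketch
    rain = cv2.filter2D(mask, -1, kernel)                   # R = K conv M
    yuv[..., 0] = np.clip(yuv[..., 0] + rain, 0.0, 1.0)     # add on Y, clip at 1
    return cv2.cvtColor((yuv * 255.0).astype(np.uint8), cv2.COLOR_YUV2BGR)
```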
B. Experiments Setting

For fair comparisons, we use the default parameters in the codes of the traditional methods and the default trained models of the deep learning methods. Since existing rainy datasets do not contain the information of the motion blur kernel, we train our networks only on our synthetic data. The patch size is set to a fixed m × n × c value. A guided filter with radius 15 and regularization parameter 1 is used to decompose the rainy images. By preserving most of the energy, the kernel is projected onto a space of dimension 162. Because of the fully connected layers, the input image should be split into several patches for the experiments. We use the Adam [55] optimizer with learning rate 0.01. Our model is trained and tested in Python 3.5.2 with the TensorFlow 1.0.1 framework on a desktop with an NVIDIA GeForce GTX 1060 GPU with 6 GB memory. The other compared methods, which are based on Matlab, are run in Matlab 2017a.
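For reference, the self-guided variant of the guided filter [52] used for the texture/structure decomposition can be written in a few lines of box filtering; the radius of 15 follows the setting above, while the eps value here is an assumption.

```python
import numpy as np
import cv2

def guided_filter_decompose(img, radius=15, eps=1.0):
    """Self-guided filter (He et al. [52]): O_S = GF(O, O), O_T = O - O_S.
    Box-filter implementation; eps is an assumed regularization value."""
    I = img.astype(np.float32)
    ksize = (2 * radius + 1, 2 * radius + 1)
    mean = lambda x: cv2.blur(x, ksize)       # box filter = local mean
    mu = mean(I)
    var = mean(I * I) - mu * mu               # local variance
    a = var / (var + eps)                     # per-window linear coefficient
    b = mu - a * mu
    structure = mean(a) * I + mean(b)         # structure component O_S
    return structure, I - structure           # (O_S, O_T)
```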
C. Synthesized Data

In this subsection, we evaluate the performance of different state-of-the-art methods on synthetic rainy images. Three datasets are selected: 1) the benchmark dataset provided by Dr. Yu Li using the rain streak rendering technique in [56] (denoted as Rain12), 2) 3 synthetic rainy images generated by our simulating method, and 3) several synthetic rainy images from [11]. Due to the limit of space, we only show partial results in this section; please see more results in the supplementary materials.

For quantitative comparisons, we adopt the peak signal-to-noise ratio (PSNR), the structural similarity index (SSIM) [57], the feature similarity index (FSIM) [58], the universal image quality index (UIQI) [59], and the gradient magnitude similarity deviation (GMSD, smaller is better) [60] as the quality metrics of the deraining results. In particular, since the compared methods are implemented with different programming languages (or platforms), e.g., UGSM in Matlab and CNN in Python, we save all output images of the different methods in PNG format, then reload them in Matlab and compute the corresponding quantitative results in the RGB color space.

To show that KGCNN removes rain streaks while keeping the texture and the contrast of the background, we also show the rain streak images (residual images between the rainy images and the resulting images). Normalization is performed on the rain streak images so that we can distinguish whether a method changes texture and contrast significantly or removes the rain streaks completely. For instance, if the rain streak images are too bright, the method significantly changes the intensity contrast.
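As a convenience for reproducing the quantitative protocol above, the sketch below computes two of the five metrics with scikit-image on reloaded PNG outputs; FSIM, UIQI, and GMSD are omitted since the paper evaluates them in Matlab.

```python
import cv2
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(derained_png, ground_truth_png):
    """PSNR/SSIM on RGB images in [0, 255], computed on reloaded PNGs
    as in the paper's protocol. Older scikit-image versions use
    multichannel=True instead of channel_axis."""
    out = cv2.imread(derained_png)[..., ::-1]      # BGR -> RGB
    gt = cv2.imread(ground_truth_png)[..., ::-1]
    psnr = peak_signal_noise_ratio(gt, out, data_range=255)
    ssim = structural_similarity(gt, out, channel_axis=-1, data_range=255)
    return psnr, ssim
```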
Fig. 3. Rain streak removal results by different methods on one rainy image of the Rain12 dataset: the background, the rainy image, and the derain results by DID [6], DSC [18], LP [16], UGSM [11], CNN [34], DDN [7], and the proposed KGCNN.

For the first dataset, Fig. 3 shows the visual results, local close-up images, and rain streak images on one synthetic rainy image of the Rain12 dataset. We can see that the proposed KGCNN method removes the rain streaks completely, while other approaches fail to do so (see the local close-up images in Fig. 3 for better comparisons). In particular, the rain streaks obtained by the proposed approach do not contain the structures of the background, which indicates that KGCNN has a very good ability for rain streak removal. From the perspective of quantitative results, the KGCNN method performs best on the 12 synthesized images, compared with the other six state-of-the-art methods (see Table I for more details).
TABLE I
QUANTITATIVE COMPARISONS OF RAIN STREAK REMOVAL RESULTS BY DID [6], DSC [18], LP [61], UGSM [62], CNN [34], DDN [7], AND KGCNN ON RAIN12 (AVERAGE VALUES).

Method | PSNR | SSIM | FSIM | UIQI | GMSD | Time (s)
rainy | 28.822 | 0.910 | 0.910 | 0.968 | 0.134 | -
DID | 27.485 | 0.919 | 0.918 | 0.941 | 0.086 |
DSC | 28.584 | 0.915 | 0.917 | 0.965 | 0.097 | 153.792
LP | 30.825 | 0.947 | 0.935 | 0.966 | 0.070 | 321.328
UGSM | 32.185 | 0.958 | 0.947 | | |
CNN | | | | | |
DDN | | | | | |
KGCNN | | | | | |
For the second dataset, we generate another 3 synthesized rainy images (road, night, and street) for testing. Some of the resulting derain images by the different methods are shown in Fig. 4 and Fig. 5. The visual results also demonstrate that the KGCNN method not only removes rain streaks completely, but also preserves the background information well. We report the quantitative performance of the derain results obtained by the different approaches in Table II, which shows the superiority of the KGCNN method.

TABLE II
QUANTITATIVE COMPARISONS OF RAIN STREAK REMOVAL RESULTS BY DID [6], DSC [18], LP [61], UGSM [62], CNN [34], DDN [7], AND KGCNN ON SYNTHETIC RAINY IMAGES BY OUR SIMULATING METHOD.

Images | Method | PSNR | SSIM | FSIM | UIQI | GMSD | Time (s)
road | rainy | 18.171 | 0.809 | 0.909 | 0.824 | 0.150 | -
road | DID | 22.409 | 0.889 | 0.927 | 1.005 | 0.104 |
road | DSC | 22.302 | 0.863 | 0.926 | | |
night | rainy | 18.506 | 0.539 | 0.787 | 0.753 | 0.261 | -
night | DID | 25.294 | 0.851 | 0.924 | 0.813 | 0.099 |
night | DSC | 22.219 | 0.598 | 0.835 | 0.904 | 0.208 | 200.172
night | LP | 23.034 | 0.780 | 0.872 | 0.806 | 0.175 | 297.194
night | UGSM | 20.626 | 0.641 | 0.824 | 0.784 | 0.225 | 5.568
night | CNN | 19.215 | 0.648 | 0.851 | 0.757 | 0.193 | 9.185
night | DDN | 23.781 | 0.674 | 0.882 | 0.846 | 0.172 | 0.375
night | KGCNN | | | | | |
street | rainy | 18.798 | 0.837 | 0.882 | 0.968 | 0.162 | -
street | DID | 22.359 | 0.865 | 0.902 | 0.975 | 0.128 |
street | DSC | 21.519 | 0.846 | 0.898 | 0.992 | 0.141 | 307.007
street | LP | 22.393 | 0.894 | 0.907 | 0.987 | 0.145 | 276.631
street | UGSM | 22.382 | 0.913 | 0.921 | 0.985 | 0.111 | 8.236
street | CNN | 18.793 | 0.872 | 0.903 | 0.969 | 0.126 | 14.075
street | DDN | 24.013 | 0.900 | 0.908 | 0.994 | 0.121 | 0.845
street | KGCNN | | | | | |

UGSM performs quite competitively on Rain12. Therefore, it is necessary to take more test images from UGSM (tree, panda, and bamboo) to compare the rain removal ability of the proposed method against the UGSM method. In addition, based on the code provided by the authors in [62], we also change the rain streak angles of these images from UGSM to generate three new synthetic images (tree2, panda2, and bamboo2) for testing. Fig. 6 and Fig. 7 respectively present the visual results and rain streak results on these images from [62], which indicate the superiority of the proposed method.

In Fig. 8 and Fig. 9, rain streaks with very large angles are added to the backgrounds to form rainy images. Note that UGSM is based on directional priors, which are quite effective in the case of vertical rain streaks but less effective in the case of oblique rain streaks. Therefore, from Fig. 8 and Fig. 9, we can see that the proposed KGCNN method performs significantly better than the UGSM method, both visually and quantitatively. Moreover, the KGCNN method also exhibits a better rain streak removal ability than the other state-of-the-art methods. Table III also demonstrates the effectiveness of our method from the perspective of quantitative results.
Fig. 4. Rain streak removal results by different methods on 3 synthetic rainy images (road, night, and street) generated by our simulating method. From left to right: (a) the background, (b) the rainy images, the derain results by (c) DID [6], (d) DSC [18], (e) LP [61], (f) UGSM [62], (g) CNN [34], (h) DDN [7], and (i) KGCNN.

Fig. 5. The rain streak images of the rain streak removal results by different methods on 3 synthetic rainy images (road, night, and street) generated by our simulating method. From left to right: (a) the background, (b) the rainy images, the derain results by (c) DID [6], (d) DSC [18], (e) LP [61], (f) UGSM [62], (g) CNN [34], (h) DDN [7], and (i) KGCNN.

Fig. 6. Rain streak removal results by different methods on 3 synthetic rainy images (tree, panda, and bamboo) selected from [62]. From left to right: (a) the background, (b) the rainy images, the derain results by (c) DID [6], (d) DSC [18], (e) LP [61], (f) UGSM [62], (g) CNN [34], (h) DDN [7], and (i) KGCNN.

Fig. 7. The rain streak images of the rain streak removal results by different methods on 3 synthetic rainy images (tree, panda, and bamboo) selected from [62]. From left to right: (a) the background, (b) the rainy images, the derain results by (c) DID [6], (d) DSC [18], (e) LP [61], (f) UGSM [62], (g) CNN [34], (h) DDN [7], and (i) KGCNN.

Fig. 8. Rain streak removal results by different methods on 3 new synthetic rainy images (tree2, panda2, and bamboo2). From left to right: (a) the background, (b) the rainy images, the derain results by (c) DID [6], (d) DSC [18], (e) LP [61], (f) UGSM [62], (g) CNN [34], (h) DDN [7], and (i) KGCNN.

Fig. 9. The rain streak images of the rain streak removal results by different methods on 3 new synthetic rainy images (tree2, panda2, and bamboo2). From left to right: (a) the background, (b) the rainy images, the derain results by (c) DID [6], (d) DSC [18], (e) LP [61], (f) UGSM [62], (g) CNN [34], (h) DDN [7], and (i) KGCNN.

D. Real-World Data

For the real-world data, since the ground truth images are unknown, we do not give quantitative comparisons and only evaluate the performance of the different methods visually, including the derain images and the rain streak images. From Fig. 10, the methods DID, DDN, and KGCNN exhibit similar visual results and remove the rain streaks completely, while the other approaches fail to remove all rain streaks. In addition, from the rain streak images in Fig. 10, DID and DDN fail to separate the rain streaks from the background texture well and leave some background texture in the rain streaks, which demonstrates that the networks of DID and DDN cannot distinguish the background texture and the rain streaks well. Moreover, the DID method also changes the image contrast significantly.
TABLE III
QUANTITATIVE COMPARISONS OF RAIN STREAK REMOVAL RESULTS BY DID [6], DSC [18], LP [61], UGSM [62], CNN [34], DDN [7], AND KGCNN ON SYNTHETIC RAINY IMAGES SELECTED FROM [62].

Original images:
Images | Method | PSNR | SSIM | FSIM | UIQI | GMSD | Time (s)
tree | rainy | 27.375 | 0.934 | 0.942 | 0.989 | 0.076 | -
tree | DID | 27.826 | 0.926 | 0.937 | 0.982 | 0.060 |
tree | DSC | 29.646 | 0.940 | 0.944 | 0.996 | 0.064 | 87.726
tree | LP | 29.435 | 0.927 | 0.912 | 0.997 | 0.076 | 226.091
tree | UGSM | 30.659 | 0.954 | 0.948 | 0.998 | 0.056 | 0.979
tree | CNN | 26.532 | 0.942 | 0.941 | 0.989 | 0.065 | 6.139
tree | DDN | 26.944 | 0.944 | 0.945 | 0.977 | 0.066 | 0.812
panda | rainy | 27.102 | 0.920 | 0.946 | 0.978 | 0.083 | -
panda | DID | 27.120 | 0.923 | 0.946 | 0.952 | 0.066 | 0.625
panda | DSC | 26.568 | 0.914 | 0.940 | 0.968 | 0.077 | 70.239
panda | LP | 29.250 | 0.938 | 0.943 | 0.994 | 0.073 | 133.709
panda | UGSM | 27.823 | 0.925 | 0.926 | 0.994 | 0.082 | 2.505
panda | CNN | 24.838 | 0.927 | 0.937 | 0.976 | 0.070 | 3.421
panda | DDN | 25.693 | 0.921 | 0.947 | 0.934 | 0.077 |
bamboo | rainy | 26.091 | 0.923 | 0.930 | 0.966 | 0.102 | -
bamboo | DID | 27.355 | 0.939 | 0.930 | 0.964 | 0.071 | 0.625
bamboo | DSC | 26.426 | 0.923 | 0.925 | 0.964 | 0.086 | 120.959
bamboo | LP | 29.337 | 0.954 | 0.933 | 0.953 | 0.071 | 218.103
bamboo | UGSM | 27.647 | 0.936 | 0.919 | | |

Images with large angle:
Images | Method | PSNR | SSIM | FSIM | UIQI | GMSD | Time (s)
tree2 | rainy | 21.475 | 0.895 | 0.926 | 0.952 | 0.078 | -
tree2 | DSC | 24.466 | 0.908 | 0.924 | 0.982 | 0.073 | 89.726
tree2 | LP | 25.497 | 0.909 | 0.899 | 0.983 | 0.081 | 236.091
tree2 | UGSM | 22.842 | 0.906 | 0.927 | 0.964 | 0.074 | 0.979
tree2 | CNN | 21.201 | 0.915 | 0.934 | 0.952 | 0.068 | 6.339
tree2 | DDN | 28.640 | 0.936 | 0.935 | | |
panda2 | rainy | 17.569 | 0.750 | 0.871 | 0.836 | 0.140 | -
panda2 | DID | 24.077 | 0.891 | 0.914 | 0.960 | 0.094 | 0.665
panda2 | DSC | 21.688 | 0.790 | 0.867 | 0.959 | 0.124 | 70.639
panda2 | LP | 21.197 | 0.857 | 0.904 | 0.908 | 0.105 | 143.709
panda2 | UGSM | 19.621 | 0.798 | 0.876 | 0.885 | 0.118 | 2.750
panda2 | CNN | 17.145 | 0.791 | 0.883 | 0.832 | 0.105 | 4.442
bamboo2 | rainy | 26.997 | 0.944 | 0.938 | 0.967 | 0.090 | -
bamboo2 | DID | 27.987 | 0.949 | 0.938 | 0.990 | 0.062 | 0.635
bamboo2 | DSC | 27.374 | 0.942 | 0.933 | 0.959 | 0.080 | 124.459
bamboo2 | LP | 29.594 | 0.960 | 0.936 | 1.032 | 0.069 | 213.103
In Fig. 11, the KGCNN method removes the rain streaks completely, while the results of the other approaches still contain obvious rain streaks. From the rain streak images, our method separates the rain streaks excellently, while the other methods leave some background texture in the separated rain streaks. In particular, the visual result of the DID method shows a slight blur effect due to the loss of texture information. Readers can find more results in Fig. 12, which also verifies the superiority of the proposed KGCNN method.

Fig. 12. Rain streak removal results and rain streak images by different methods on real rainy images. From left to right: (a) the rainy images, the results by (b) DID [6], (c) DSC [18], (d) LP [61], (e) UGSM [62], (f) CNN [34], (g) DDN [7], and (h) KGCNN.
E. Influence of the Kernel in the KGCNN Method
In this paper, we propose a kernel guided CNN method for the image rain streak removal task, in which the kernel plays a very important role. Two questions remain. (1) Does the derain net output the rain streaks using only the information of the rainy image while ignoring the kernel information? (2) Does the derain net work better if we retrain it without the kernel input? To show the influence of the kernel in our method, we discard the kernel guided assumption and observe the results on Rain12. We use KGCNN to denote the proposed method, KGCNN_a to denote the proposed method with the kernel information set to zero, and KGCNN_b to denote the derain net trained individually with our training data. Fig. 13 shows the visual results of these three methods. It is easy to see that the kernel plays an important role in KGCNN: the derain net does use the kernel to output the rain streaks (see the result of KGCNN_a), and even when we train the derain net individually, the result is still good enough (see the result of KGCNN_b). The quantitative results in Table IV support the same conclusion. In summary, the kernel guided assumption is quite important to the framework of the KGCNN method.

Fig. 13. Rain streak removal results by different methods on 3 selected images from the Rain12 dataset. From left to right: (a) the background, (b) the rainy images, the derain results by (c) KGCNN_a, (d) KGCNN_b, and (e) KGCNN.

TABLE IV
QUANTITATIVE COMPARISONS OF RAIN STREAK REMOVAL RESULTS BY KGCNN_a, KGCNN_b, AND KGCNN ON RAIN12 (AVERAGE VALUES).

Method | PSNR | SSIM | FSIM | UIQI | GMSD
rainy | 28.822 | 0.910 | 0.910 | 0.968 | 0.134
KGCNN_a | | | | |
KGCNN_b | | | | |
KGCNN | | | | |

F. Discussions on the Depth and Breadth
Increasing the depth of a network or increasing its filter number can improve the network's capacity. We therefore investigate the network design that achieves the best derain results. In this section, we test the impact of the depth and width of KGCNN on Rain12. Specifically, we test depth ∈ {18, 26, 34} and filter number ∈ {24, 36, 48}. Table V shows the average values of the quantitative results. As is clear, adding more hidden layers achieves better results than increasing the number of filters per layer, at the cost of increased computational time. However, we observe over-fitting when the depth is 34 and the filter number is 48. To balance avoiding over-fitting against reducing the computation, we choose a depth of 26 and a filter number of 36 for the experiments above.

Fig. 10. Rain streak removal results and rain streak images by different methods on real rainy images. From left to right: (a) the rainy images, the results by (b) DID [6], (c) DSC [18], (d) LP [61], (e) UGSM [62], (f) CNN [34], (g) DDN [7], and (h) KGCNN.

Fig. 11. Rain streak removal results and rain streak images by different methods on real rainy images. From left to right: (a) the rainy images, the results by (b) DID [6], (c) DSC [18], (d) LP [61], (e) UGSM [62], (f) CNN [34], (g) DDN [7], and (h) KGCNN.
TABLE V
QUANTITATIVE COMPARISONS OF RAIN STREAK REMOVAL RESULTS WITH DIFFERENT DEPTHS AND FILTER NUMBERS ON RAIN12 (AVERAGE VALUES).

depth | filter number | PSNR | SSIM | FSIM | UIQI | GMSD
18 | 24 | 34.049 | 0.969 | 0.963 | |
18 | 36 | 34.853 | 0.972 | 0.966 | 0.986 | 0.051
18 | 48 | 34.414 | 0.969 | 0.963 | 0.983 | 0.059
V. CONCLUSION
We have presented a deep learning architecture called KGCNN for removing rain streaks from single images. Using the guided kernel on the texture component, our approach learns the mapping function between the detail component of a rainy image and the rain streaks. We show that convolutional neural networks, a technology widely used for high-level vision tasks, can, with a guided kernel, also be exploited to successfully deal with natural images under bad weather conditions. We also show that KGCNN noticeably outperforms other state-of-the-art methods with respect to image quality. In addition, by using the guided kernel, we show that a very complex network is not needed to perform rain streak removal.

ACKNOWLEDGMENT
The research is supported by NSFC (61876203, 61772003, 61702083) and the Fundamental Research Funds for the Central Universities (ZYGX2016J132, ZYGX2016KYQD142, ZYGX2016J129). We thank the authors of DID [6], DSC [18], LP [61], UGSM [62], CNN [34], and DDN [7] for providing their code.
REFERENCES

[1] K. Garg and S.-K. Nayar. Vision and rain. International Journal of Computer Vision, 75(1):3–27, 2007.
[2] X. Zhang, C. Zhu, S. Wang, Y.-P. Liu, and M. Ye. A Bayesian approach to camouflaged moving object detection. IEEE Transactions on Circuits and Systems for Video Technology, 27(9):2001–2013, 2017.
[3] M.-S. Shehata, J. Cai, W.-M. Badawy, T.-W. Burr, M.-S. Pervez, R.-J. Johannesson, and A. Radmanesh. Video-based automatic incident detection for smart roads: the outdoor environmental challenges regarding false alarms. IEEE Transactions on Intelligent Transportation Systems, 9(2):349–360, 2008.
[4] S.-J. Song, C.-L. Lan, J.-L. Xing, W.-K. Zeng, and J.-Y. Liu. Spatio-temporal attention-based LSTM networks for 3D action recognition and detection. IEEE Transactions on Image Processing, 27(7):3459–3471, 2018.
[5] L. Itti, C. Koch, E. Niebur, et al. A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(11):1254–1259, 1998.
[6] H. Zhang and V.-M. Patel. Density-aware single image de-raining using a multi-stream dense network. arXiv preprint arXiv:1802.07412, 2018.
[7] X.-Y. Fu, J.-B. Huang, D.-L. Zeng, Y. Huang, X.-H. Ding, and J. Paisley. Removing rain from single images via a deep detail network. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1715–1723, 2017.
[8] L.-W. Kang, C.-W. Lin, and Y.-H. Fu. Automatic single-image-based rain streaks removal via image decomposition. IEEE Transactions on Image Processing, 21(4):1742–1755, 2012.
[9] S.-H. Sun, S.-P. Fan, and Y.-C. Wang. Exploiting image structural similarity for single image rain removal. In the IEEE International Conference on Image Processing (ICIP), pages 4482–4486, 2014.
[10] Y. Chang, L.-X. Yan, and S. Zhong. Transformed low-rank model for line pattern noise removal. In the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1726–1734, 2017.
[11] L.-J. Deng, T.-Z. Huang, X.-L. Zhao, and T.-X. Jiang. A directional global sparse model for single image rain removal. Applied Mathematical Modelling, 59:662–679, 2018.
[12] S.-L. Du, Y.-G. Liu, M. Ye, Z.-Y. Xu, J. Li, and J.-G. Liu. Single image deraining via decorrelating the rain streaks and background scene in gradient domain. Pattern Recognition, 79:303–317, 2018.
[13] T.-X. Jiang, T.-Z. Huang, X.-L. Zhao, L.-J. Deng, and Y. Wang. A novel tensor-based video rain streaks removal approach via utilizing discriminatively intrinsic priors. In the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4057–4066, July 2017.
[14] T.-X. Jiang, T.-Z. Huang, X.-L. Zhao, L.-J. Deng, and Y. Wang. FastDeRain: A novel video rain streak removal method using directional gradient priors. arXiv preprint arXiv:1803.07487, 2018.
[15] Y.-L. Chen and C.-T. Hsu. A generalized low-rank appearance model for spatio-temporally correlated rain streaks. In the IEEE International Conference on Computer Vision (ICCV), pages 1968–1975, 2013.
[16] Y. Li, R.-T. Tan, X.-J. Guo, J.-B. Lu, and M.-S. Brown. Rain streak removal using layer priors. In the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2736–2744, 2016.
[17] Y. Li, R.-T. Tan, X.-J. Guo, J.-B. Lu, and M.-S. Brown. Single image rain streak decomposition using layer priors. IEEE Transactions on Image Processing, 26(8):3874–3885, 2017.
[18] Y. Luo, Y. Xu, and H. Ji. Removing rain from a single image via discriminative sparse coding. In the IEEE International Conference on Computer Vision (ICCV), pages 3397–3405, 2015.
[19] W. Wei, L.-X. Yi, Q. Xie, Q. Zhao, D.-Y. Meng, and Z.-B. Xu. Should we encode rain streaks in video as deterministic or stochastic? In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2516–2525, 2017.
[20] M.-H. Li, Q. Xie, Q. Zhao, W. Wei, S.-H. Gu, J. Tao, and D.-Y. Meng. Video rain streak removal by multiscale convolutional sparse coding. In the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
[21] S.-H. Gu, D.-Y. Meng, W.-M. Zuo, and L. Zhang. Joint convolutional analysis and synthesis sparse representation for single image layer separation. In the IEEE International Conference on Computer Vision (ICCV), pages 1717–1725. IEEE, 2017.
[22] H. Zhang and V.-M. Patel. Convolutional sparse and low-rank coding-based rain streak removal. In the IEEE Winter Conference on Applications of Computer Vision (WACV), pages 1259–1267. IEEE, 2017.
[23] R. Qian, R. T. Tan, W.-H. Yang, J.-J. Su, and J.-Y. Liu. Attentive generative adversarial network for raindrop removal from a single image. In the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
[24] S.-Y. Li, W.-Q. Ren, J.-W. Zhang, J. Yu, and X.-J. Guo. Fast single image rain removal via a deep decomposition-composition network. arXiv preprint arXiv:1804.02688, 2018.
[25] J. Chen, C.-H. Tan, J.-H. Hou, L.-P. Chau, and H. Li. Robust video content alignment and compensation for rain removal in a CNN framework. arXiv preprint arXiv:1803.10433, 2018.
[26] J.-Y. Liu, W.-H. Yang, S. Yang, and Z.-M. Guo. Erase or fill? Deep joint recurrent rain removal and reconstruction in videos. In the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
[27] W.-H. Yang, R.-T. Tan, J.-S. Feng, J.-Y. Liu, Z.-M. Guo, and S.-C. Yan. Deep joint rain detection and removal from a single image. In the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
[28] W. Wei, D.-Y. Meng, Q. Zhao, and Z.-B. Xu. Semi-supervised CNN for single image rain removal. arXiv preprint arXiv:1807.11078, 2018.
[29] K. Zhang, W.-M. Zuo, and L. Zhang. Learning a single convolutional super-resolution network for multiple degradations. arXiv preprint arXiv:1712.06116, 2017.
[30] L. Zhu, C.-W. Fu, D. Lischinski, and P.-A. Heng. Joint bi-layer optimization for single-image rain streak removal. In the IEEE International Conference on Computer Vision (ICCV), October 2017.
[31] B.-H. Chen, S.-C. Huang, and S.-Y. Kuo. Error-optimized sparse representation for single image rain removal. IEEE Transactions on Industrial Electronics, 64(8):6573–6581, 2017.
[32] Y.-L. Wang, S.-C. Liu, C. Chen, and B. Zeng. A hierarchical approach for rain or snow removing in a single color image. IEEE Transactions on Image Processing, 26(8):3936–3950, 2017.
[33] D.-W. Ren, W.-M. Zuo, D. Zhang, L. Zhang, and M.-H. Yang. Simultaneous fidelity and regularization learning for image restoration. arXiv preprint arXiv:1804.04522, 2018.
[34] X.-Y. Fu, J.-B. Huang, X.-H. Ding, Y.-H. Liao, and J. Paisley. Clearing the skies: A deep network architecture for single-image rain removal. IEEE Transactions on Image Processing, 26(6):2944–2956, 2017.
[35] H. Zhang, V. Sindagi, and V.-M. Patel. Image de-raining using a conditional generative adversarial network. arXiv preprint arXiv:1701.05957, 2017.
[36] K. Garg and S.-K. Nayar. Detection and removal of rain from videos. In the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 1, 2004.
[37] A. Tripathi and S. Mukhopadhyay. Video post processing: low-latency spatiotemporal approach for detection and removal of rain. IET Image Processing, 6(2):181–196, 2012.
[38] A.-K. Tripathi and S. Mukhopadhyay. Removal of rain from videos: a review. Signal, Image and Video Processing, 8(8):1421–1430, 2014.
[39] J.-H. Kim, J.-Y. Sim, and C.-S. Kim. Video deraining and desnowing using temporal correlation and low-rank matrix completion. IEEE Transactions on Image Processing, 24(9):2658–2670, 2015.
[40] V. Santhaseelan and V.-K. Asari. Utilizing local phase information to remove rain from video. International Journal of Computer Vision, 112(1):71–89, 2015.
[41] S.-D. You, R.-T. Tan, R. Kawakami, Y. Mukaigawa, and K. Ikeuchi. Adherent raindrop modeling, detection and removal in video. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(9):1721–1733, 2016.
[42] W.-H. Ren, J.-D. Tian, Z. Han, A. Chan, and Y.-D. Tang. Video desnowing and deraining based on matrix decomposition. In the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4210–4219, 2017.
[43] L. Shen, Z.-H. Yue, Q. Chen, F. Feng, and J. Ma. Deep joint rain and haze removal from single images. arXiv preprint arXiv:1801.06769, 2018.
[44] J.-H. Kim, C. Lee, J.-Y. Sim, and C.-S. Kim. Single-image deraining using an adaptive nonlocal means filter. In the IEEE International Conference on Image Processing (ICIP), pages 914–917, 2013.
[45] C.-H. Son and X.-P. Zhang. Rain removal via shrinkage-based sparse coding and learned rain dictionary. arXiv preprint arXiv:1610.00386, 2016.
[46] J. Chen and L.-P. Chau. A rain pixel recovery algorithm for videos with highly dynamic scenes. IEEE Transactions on Image Processing, 23(3):1097–1104, 2014.
[47] D. Eigen, D. Krishnan, and R. Fergus. Restoring an image taken through a window covered with dirt or rain. In the IEEE International Conference on Computer Vision (ICCV), pages 633–640, 2013.
[48] K.-M. He, X.-Y. Zhang, S.-Q. Ren, and J. Sun. Deep residual learning for image recognition. In the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
[49] J.-S. Pan, S.-F. Liu, D.-Q. Sun, J.-W. Zhang, Y. Liu, J. Ren, Z.-C. Li, J.-H. Tang, H.-C. Lu, Y.-W. Tai, et al. Learning dual convolutional neural networks for low-level vision. In the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
[50] Z.-W. Fan, H.-F. Wu, X.-Y. Fu, Y. Huang, and X.-H. Ding. Residual-guide feature fusion network for single image deraining. arXiv preprint arXiv:1804.07493, 2018.
[51] X.-Y. Fu, B. Liang, Y. Huang, X.-H. Ding, and J. Paisley. Lightweight pyramid networks for image deraining. arXiv preprint arXiv:1805.06173, 2018.
[52] K.-M. He, J. Sun, and X.-O. Tang. Guided image filtering. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35:1397–1409, 2012.
[53] A. Krizhevsky, I. Sutskever, and G.-E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (NIPS), pages 1097–1105, 2012.
[54] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning (ICML), pages 448–456, 2015.
[55] D.-P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[56] K. Garg and S.-K. Nayar. Photorealistic rendering of rain streaks. In ACM Transactions on Graphics (TOG), volume 25, pages 996–1002. ACM, 2006.
[57] Z. Wang, A.-C. Bovik, H.-R. Sheikh, and E.-P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600–612, 2004.
[58] L. Zhang, L. Zhang, X.-Q. Mou, and D. Zhang. FSIM: A feature similarity index for image quality assessment. IEEE Transactions on Image Processing, 20(8):2378–2386, 2011.
[59] Z. Wang and A.-C. Bovik. A universal image quality index. IEEE Signal Processing Letters, 9(3):81–84, 2002.
[60] W.-F. Xue, L. Zhang, X.-Q. Mou, and A.-C. Bovik. Gradient magnitude similarity deviation: A highly efficient perceptual image quality index. IEEE Transactions on Image Processing, 23(2):684–695, 2014.
[61] Y. Li and M.-S. Brown. Single image layer separation using relative smoothness. In IEEE Conference on Computer Vision and Pattern Recognition, pages 2752–2759, 2014.
[62] L.-J. Deng, T.-Z. Huang, X.-L. Zhao, and T.-X. Jiang. A directional global sparse model for single image rain removal. Applied Mathematical Modelling, 59:662–679, 2018.