Sparse Norm Filtering
Chengxi Ye†, Dacheng Tao‡, Mingli Song§, David W. Jacobs†, Min Wu†
†Department of Computer Science, University of Maryland
‡Centre for Quantum Computation and Intelligent Systems, University of Technology Sydney
§College of Computer Science, Zhejiang University
Figure 1:
Examples of filtering results using different norms. (a) Left: original image. Middle: smoothed image via minimizing the l0 energy. Right: sharpened image. (b) Top: image with salt and pepper noise. Bottom: smoothed result by minimizing a sparse norm. (c) Left: drag-and-drop editing. Right: seamless editing using l2 norm filtering.
Abstract
Optimization-based filtering smoothes an image by minimizing a fidelity function and simultaneously preserves edges by exploiting a sparse norm penalty over gradients. It has obtained promising performance in practical problems, such as detail manipulation, HDR compression and deblurring, and thus has received increasing attention in graphics, computer vision and image processing. This paper derives a new type of image filter called the sparse norm filter (SNF) from optimization-based filtering. SNF has a very simple form, introduces a general class of filtering techniques, and explains several classic filters as special implementations of SNF, e.g. the averaging filter and the median filter. It has the advantages of being halo free, easy to implement, and low in time and memory costs (comparable to those of the bilateral filter). Thus, it is more generic than a smoothing operator and can better adapt to different tasks. We validate the proposed SNF on a wide variety of applications including edge-preserving smoothing, outlier-tolerant filtering, detail manipulation, HDR compression, non-blind deconvolution, image segmentation, and colorization.
CR Categories:
I.3.3 [Computer Graphics]: Picture/Image Generation—Display Algorithms; I.4.3 [Image Processing and Computer Vision]: Enhancement—Grayscale Manipulation; I.4.9 [Image Processing and Computer Vision]: Applications;
Keywords: sparse norm, image filtering, optimization
Image filtering plays a fundamental role in image processing, computer graphics and computer vision, and has been widely used to reduce noise and extract useful image structures. In particular, edge-preserving smoothing operations have been studied for decades and have proven to be critical for a wide variety of applications including blurring, sharpening, stylization and edge detection.

In general, existing edge-preserving filtering techniques can be classified into two groups: weighted average filtering and optimization-based filtering.

Well-known weighted average filtering techniques include anisotropic diffusion [Perona and Malik 1990; Black and Sapiro 1998] and bilateral filtering [Tomasi and Manduchi 1998]. Anisotropic diffusion uses the gradients at each pixel to guide a diffusion process and avoids blurring across edges. The bilateral filter can be regarded as a non-local diffusion process that uses pixel intensities within a neighborhood to guide the diffusion. Both approaches can be implemented using explicit weighted averaging. Acceleration of weighted average filtering has been a research hotspot in recent years [Paris and Durand 2007; Porikli 2008; Yang et al. 2009; He et al. 2010b; Gastal and Oliveira 2012].

Optimization-based filtering formulates edge-preserving filtering as an optimization problem that consists of a fidelity term and a penalty term [Rudin et al. 1992; Farbman et al. 2008; Xu and Lu 2011]. Edge preservation is enforced by introducing a sparse norm penalty on the gradients; the cost function is therefore usually non-quadratic, and solving the system is more time-consuming [Wang et al. 2008] than weighted average filtering. Nevertheless, this framework often produces high quality results.

In this paper, we present a novel type of edge-preserving filter, called the sparse norm filter (SNF), derived from a sparse optimization problem.
For each pixel, the filtering output minimizes its difference with its neighboring pixels; the penalty is defined by a sparse norm. Although SNF is closely related to and produces results as excellent as optimization-based filters, it is conceptually and computationally simpler. SNF naturally preserves edges through the use of the sparse norm, and is capable of producing halo-free filtering effects, a desirable property that current weighted average filtering techniques lack. We demonstrate many of the other favorable properties of this simple and versatile approach to filtering via a wide variety of applications. Fig. 1 shows some applications of our filtering technique. Fig. 1(a) demonstrates smoothing and sharpening results that approximate the l0 energy; note that the filtering result preserves edges and does not introduce halos. Fig. 1(b) shows how sparse norm filtering removes salt and pepper noise. Fig. 1(c) shows a new way of seamless editing enabled by l2 norm filtering. More detailed discussions are presented in the Applications section.

One simple and classic way to smooth an image is to minimize the difference of each pixel with nearby ones:

\min_{I_i^{new}} \sum_{j \in N_i} (I_i^{new} - I_j)^2.  (1)

The solution of this optimization is found by averaging the nearby pixels, and is known as the box filter when we consider a square neighborhood:

I_i^{new} = \frac{1}{|N_i|} \sum_{j \in N_i} I_j.  (2)

The box filter can be calculated in linear time with the integral image technique [Porikli 2008]. However, it does not preserve the salient structures or edges in an image. Modern filtering techniques solve this problem by taking a weighted average of nearby pixels [Perona and Malik 1990; Black and Sapiro 1998; Tomasi and Manduchi 1998].
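As a concrete illustration of (1)-(2), a minimal box filter with integral-image (summed-area table) acceleration can be sketched as follows. This is our own numpy sketch, not the authors' implementation:

```python
import numpy as np

def box_filter(img, r):
    """Mean over the (2r+1)x(2r+1) window around each pixel, clipped at
    the borders. The integral image makes each window sum O(1), so the
    whole pass is O(N) regardless of the radius r."""
    h, w = img.shape
    # Integral image with a zero row/column prepended.
    s = np.zeros((h + 1, w + 1))
    s[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            y0, y1 = max(i - r, 0), min(i + r + 1, h)
            x0, x1 = max(j - r, 0), min(j + r + 1, w)
            win = s[y1, x1] - s[y0, x1] - s[y1, x0] + s[y0, x0]
            out[i, j] = win / ((y1 - y0) * (x1 - x0))
    return out
```

In practice the two inner loops would also be vectorized; the point is only that the cost per pixel does not grow with the radius.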
In the anisotropic diffusion framework, the neighborhood consists of the adjacent pixels, and the system has to be iterated tens of times to produce a good smoothing result. Most recent filtering techniques consider a larger neighborhood consisting of tens or hundreds of neighboring pixels, and the filtering is solved in one or a few rounds. Edges are preserved by constructing the weight matrix with the criterion that similar nearby pixels shall be given higher weights. As an example, the bilateral filter [Tomasi and Manduchi 1998] uses intensity to measure similarity and assigns weights by

I_i^{new} = \frac{\sum_{j \in N_i} w_{ij} I_j}{\sum_{j \in N_i} w_{ij}}, \quad w_{ij} = \exp\left(-\frac{(j-i)^2}{\sigma_s^2}\right) \exp\left(-\frac{(I_j - I_i)^2}{\sigma_r^2}\right).  (3)

Edge-preserving image smoothing can also be achieved by solving the following optimization problem:

\min_B \|B - I\|_q^q + \lambda \|\nabla B\|_p^p.  (4)

The penalty term \lambda \|\nabla B\|_p^p controls the amount of smoothness of the output B, and the fidelity term \|B - I\|_q^q controls the similarity with the input I. When p = q = 2, the optimization problem is the well-known Tikhonov regularization [Tikhonov et al. 1995], whose explicit solution is

B = (\mathrm{Id} + \lambda \nabla^T \nabla)^{-1} I.  (5)

Since sparse norms have better tolerance for outliers than the l2 norm, the optimization was later extended to total variation regularization with p = 1 [Rudin et al. 1992] and even sparser versions [Farbman et al. 2008; Xu and Lu 2011] for edge-preserving purposes. Solving these non-quadratic optimizations is more time-consuming; thus, variable splitting [Wang et al. 2008] is usually exploited to cast the original large optimization problem into several small sub-problems and alternately minimize each of them.

We propose SNF by generalizing (1), allowing the original l2 norm to be a fractional norm.
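For reference, the quadratic baseline (5) that SNF generalizes admits a direct 1-D sketch (our own illustration; D is a forward-difference matrix standing in for the gradient):

```python
import numpy as np

def tikhonov_smooth_1d(signal, lam):
    """Closed-form solution (5) for p = q = 2 in 1-D:
    b = (Id + lam * D^T D)^(-1) signal, with D the forward-difference
    operator, so D.T @ D is the path-graph Laplacian."""
    n = len(signal)
    D = np.diff(np.eye(n), axis=0)      # (n-1) x n difference matrix
    A = np.eye(n) + lam * (D.T @ D)     # Id + lam * grad^T grad
    return np.linalg.solve(A, signal)
```

Because the rows of D^T D sum to zero, the solve preserves the mean of the signal; raising lam smooths more strongly but blurs edges, which is exactly the behavior sparse penalties are meant to avoid.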
To preserve strong edges, we need to smooth the image while tolerating outlier pixels by assigning lower weights to them. This type of adaptive weighting has been well explored in robust statistics [Black and Sapiro 1998], and we achieve it by exploiting sparse norms. SNF is then defined by

\min_{I_i^{new}} \sum_{j \in N_i} |I_i^{new} - I_j|^p, \quad 0 < p \le 2.  (6)

Minimizing this non-quadratic cost function when p < 2 is difficult. Especially when p < 1, the cost function is non-convex and conventional gradient descent-based algorithms are easily trapped in local minima.

In this paper, we consider two approximation strategies. The first strategy iteratively exploits the weighted least squares technique:

\sum_{j \in N_i} |I_i^{new} - I_j|^p \approx \sum_{j \in N_i} |I_i - I_j|^{p-2} (I_i^{new} - I_j)^2 = \sum_{j \in N_i} w_{ij} (I_i^{new} - I_j)^2.  (7)

By taking the derivative, we find that the solution can be approximated by weighted average filtering, I_i^{new} = \frac{\sum_{j \in N_i} w_{ij} I_j}{\sum_{j \in N_i} w_{ij}}, with w_{ij} = |I_i - I_j|^{p-2}. This solution can be understood as one iteration of the anisotropic diffusion process, with the diffusivity w_{ij} calculated at the current pixel intensity. This way of weighting naturally enforces fidelity with the input image. Similar to anisotropic diffusion, we can iteratively update the diffusivity once we update the image with this weighted average filtering result. In practice, like the bilateral filter, one iteration is usually sufficient because the diffusion is non-local. It is noteworthy that when I_i = I_j, the weight goes to infinity; in practice, we avoid this by setting a threshold and raising the pixel differences to the threshold. We can also modify the optimization by weighting pixels according to distance using a Gaussian-like weight; however, we observe that treating all neighboring pixels equally is good enough in practice.

The second strategy quantizes the solution into a set of discrete values.
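A direct, unaccelerated sketch of the first, weighted-average strategy (7) follows; this is our own code, with the clamp eps playing the role of the threshold mentioned above:

```python
import numpy as np

def snf_weighted_average(img, r, p, eps=1e-2):
    """One-pass weighted-average approximation (7) of the sparse norm
    filter: w_ij = |I_i - I_j|^(p-2), with differences clamped to eps
    so the weight stays finite when pixels are identical."""
    h, w = img.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            y0, y1 = max(i - r, 0), min(i + r + 1, h)
            x0, x1 = max(j - r, 0), min(j + r + 1, w)
            patch = img[y0:y1, x0:x1]
            wts = np.maximum(np.abs(patch - img[i, j]), eps) ** (p - 2)
            out[i, j] = (wts * patch).sum() / wts.sum()
    return out
```

With p = 2 the weights are constant and the filter degenerates to the box filter of (1)-(2); with p < 1, pixels across a strong edge receive tiny weights and the edge survives.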
In the second strategy, for each discrete value Q_b we calculate \sum_{j \in N_i} |Q_b - I_j|^p for each pixel i, which can be done efficiently using the box filter. We compare the energies at these discrete values and select the minimum. A similar technique is used to approximate the median filter [Yang et al. 2009]. In this strategy, only discrete solutions at certain quantization levels are allowed, because the approximation is based on brute-force searching in the solution space. In practice, this strategy is preferable when images are contaminated by outliers, e.g., salt and pepper noise, where the first strategy (if it works at all) would need a large number of iterations to reach a suitable solution. For example, if the center pixel is noisy and we conduct one iteration of filtering, we will assign high weights to similar pixels that are themselves potentially noisy; the obtained solution can thus be far from suitable.

Both strategies are valuable. The first strategy makes the results look natural to the eye and its effect is similar to the bilateral filter, while the second strategy can filter out outliers and its effect is similar to the median filter. In all experiments except outlier-tolerant filtering, we choose the first strategy.

The sparse norm filter benefits from off-the-shelf acceleration methods [Yang et al. 2009; Gastal and Oliveira 2012], and can be calculated in linear time O(BN), where B is the number of bins for quantization and N the number of pixels. For a grayscale image, the brute-force solution can be calculated with B box filters if we quantize the intensities into B bins. For the weighted average solution, we can similarly quantize the center pixel intensities (in the weight term) into B bins [Yang et al. 2009]; the weighted sum (numerator) and the sum of weights (denominator) can then each be calculated using B box filters. In comparison, an excellent state-of-the-art filtering technique [He et al. 2010b] uses 7 box filters.
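The brute-force strategy can be sketched with one box-filter pass per quantization level; this is illustrative code with our own helper names:

```python
import numpy as np

def _box_sum(a, r):
    """Sum over the clipped (2r+1)x(2r+1) window via an integral image."""
    h, w = a.shape
    s = np.zeros((h + 1, w + 1))
    s[1:, 1:] = np.cumsum(np.cumsum(a, axis=0), axis=1)
    y, x = np.arange(h), np.arange(w)
    y0, y1 = np.maximum(y - r, 0), np.minimum(y + r + 1, h)
    x0, x1 = np.maximum(x - r, 0), np.minimum(x + r + 1, w)
    return (s[np.ix_(y1, x1)] - s[np.ix_(y0, x1)]
            - s[np.ix_(y1, x0)] + s[np.ix_(y0, x0)])

def snf_brute_force(img, r, p, n_bins=32):
    """Quantized minimizer of (6): for each candidate level Q_b, the
    per-pixel energy sum_{j in N_i} |Q_b - I_j|^p is a box filter of
    |Q_b - img|^p; each pixel keeps the level with the lowest energy."""
    levels = np.linspace(img.min(), img.max(), n_bins)
    best = np.full(img.shape, np.inf)
    out = np.zeros(img.shape)
    for q in levels:
        energy = _box_sum(np.abs(q - img) ** p, r)
        mask = energy < best
        best[mask] = energy[mask]
        out[mask] = q
    return out
```

The cost is B box filters for B levels, matching the O(BN) complexity discussed above; impulse outliers contribute little energy at the correct level, so they are rejected rather than averaged in.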
In our Matlab implementation, the box filter takes 0.04 seconds per mega-pixel. The weighted average implementation of the sparse norm filter takes 0.5 seconds per mega-pixel when B = , and 1 second when B = . Experiments are carried out on an Intel i7 3610QM CPU with 8GB memory. The pixel-level operations experience significant speedups in C++ implementations; for example, our direct single-thread implementation of the box filter in C++ took 0.01 seconds per mega-pixel. Filtering-based methods are faster than optimization-based methods [Farbman et al. 2008; Xu and Lu 2011], as the latter take 2-4 seconds per mega-pixel in the same environment.

Optimization-based filters [Rudin et al. 1992; Farbman et al. 2008; Xu and Lu 2011; Elad 2002] have been widely used in image enhancement tasks, e.g. denoising, edge-preserving smoothing and deconvolution [Levin et al. 2007; Krishnan and Fergus 2009], and share the form of (4), or in pixel-level notation

\min_{I_i^{new}} \|I_i^{new} - I_i\|^q + \lambda \|\nabla I_i^{new}\|^p, \quad 0 \le p, q \le 2.  (8)

The norm in the fidelity term is usually an l2 norm in existing works. SNF simplifies (8) by integrating the fidelity term and the sparse norm penalty. By setting λ = 1, changing the l_q norm in the first term to the l_p norm, and defining the neighborhood to contain the current pixel, we can absorb the first term of (8) into the second and reduce (8) to the form of (6). Thus we establish the connection with optimization-based filtering.

It is noteworthy that in most optimization-based filters, the neighborhood is small and contains only the adjacent pixels. By contrast, SNF extends the concept of neighborhood in a non-local way to potentially include more pixels: we consider the difference of a pixel with all the pixels in the neighborhood, not only the horizontal and vertical neighbors.
SNF has advantages over optimization-based filters: a one-pass approximation exists and is less likely to be trapped in poor local minima, thanks to the non-local diffusion.

In addition, SNF has close relationships with several well-known filters. By setting p = 2, SNF reduces to the averaging filter, or the box filter if we consider square neighborhoods. By setting p = 1, SNF is equivalent to the median filter; this can be proved by taking the derivative of the original cost function. As p approaches 0, the sparse norm filter becomes the dominant mode filter [Kass and Solomon 2007].

Explicit filtering techniques are known to create faint light rims along strong edges, known as halo artifacts. This unrealistic effect has been widely discussed [He et al. 2010b; Farbman et al. 2008; Xu and Lu 2011]. This section shows that SNF can produce halo-free results. Fig 2 compares representative edge-preserving smoothing techniques. Although all the methods can produce high quality results, we find some small differences. Optimization-based smoothing algorithms [Farbman et al. 2008; Xu and Lu 2011] are more capable of producing halo-free looks, but the obtained results can occasionally be unexpected if the optimization is non-convex. In Fig. 2(c) the edges look overly smoothed; the l0-smoothing preserves edges perfectly but also retains speckles. Traditional weighted average filtering techniques produce smoother looks, but tend to produce halos near strong edges; these halos also lead to unnatural transitions in sharpening.

By using SNF with p < 1, similar pixels are assigned larger weights than dissimilar pixels, so the filter is edge-preserving. As p approaches zero, the sparse norm approximates the l0 energy, and the filtered result exhibits no visible halo effects (Fig. 3), since pixels with different intensities are assigned much lower weights than pixels with similar intensities (Fig. 4(b)).
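The reductions noted earlier (p = 2 gives the mean/box filter, p = 1 the median) are easy to verify numerically with a toy grid search over candidate values; this is our own check, not code from the paper:

```python
import numpy as np

def snf_point(values, p):
    """Minimize sum_j |x - values_j|^p over a dense 1-D candidate grid."""
    grid = np.linspace(values.min(), values.max(), 2001)
    energy = (np.abs(grid[:, None] - values[None, :]) ** p).sum(axis=1)
    return grid[np.argmin(energy)]

vals = np.array([1.0, 2.0, 3.0, 10.0])
m2 = snf_point(vals, 2.0)   # ~4.0, the mean
m1 = snf_point(vals, 1.0)   # any value in [2, 3] (a median) minimizes
```

The outlier 10.0 drags the p = 2 minimizer to the mean, while the p = 1 minimizer stays at the median, which is exactly the robustness argument behind the sparse norm.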
This weighting idea is also similar to the edge-stopping diffusion in the anisotropic diffusion framework [Black and Sapiro 1998].

We use SNF to decompose the image into a base layer and a detail layer, I = B + D. Here the base layer B is the cartoon-like filtering result using the sparse norm filter. Detail enhancement can be achieved by boosting the detail layer: I_boosted = B + k × D for a boost factor k > 1. We demonstrate the results on a flower photo (Fig. 4(a)) by trying different combinations of the filtering radius r and the norm p (Fig. 5).

Figure 2:
Filtering results. For each result, left: original image; middle: smoothing result; right: sharpening result. (a) Original image. (b) WLS [10] result, λ = , p = . (c) WLS result, λ = , p = . (d) Guided image filter [7], r = , ε = . (e) Guided image filter, r = , σr = . (f) Bilateral filter [3], r = , σr = . (g) Bilateral filter, r = , σr = . (h) l0-smoothing [11], λ = . (i) l0-smoothing, λ = . (j) SNF, r = , p = . (k) SNF, r = , p = . (l) SNF, r = , p = .
Figure 3:
Halo effects. (a) Original image. (b) Bilateral filter result using σs = , σr = . (c) Guided image filter result using r = , ε = . (d) Our result using p = , r = . (e) Our result using p = , r = .
Figure 4: (a) The original flower image. (b) Weights assigned to different gradients under different norms (p = 0.1, 0.5, 1.0, 1.5, 2.0).
Grid of results: radius r ∈ {5, 20, 50} (rows) by norm p ∈ {0.01, 0.8, 1.8} (columns).
Figure 5:
Smoothing/sharpening using various radius/norm settings. Left half of each image: smoothing result. Right half: sharpening result by adding the detail layer.

Outlier Tolerant Filtering
Standard edge-preserving filters [Perona and Malik 1990; Tomasi and Manduchi 1998] are very effective for Gaussian-like noise reduction, but in the presence of extreme noise, none of them is as robust as the classic median filter. The culprit is that the weighting can be misled by noise. In comparison, the sparse norm filter is a whole class of filters that can perform similarly to the median filter. We take an example image from [Kass and Solomon 2007]. We avoid the outliers by first using brute-force search to approximate the global solution of (6) at a few discrete values [Yang et al. 2009]; this intermediate result has a quantized look (Fig 6, row 1, columns 2 and 3). We then calculate the diffusivity at this approximate solution (i.e., use it as the guidance image) and apply the one-pass weighted average filtering (7) to output a smoothed image (Fig 6, row 2).
HDR tone mapping is a popular application which can be achieved by compressing the base layer B while keeping the detail layer D. In the following comparison (Fig 7) we can see that weighted least squares (WLS) [Farbman et al. 2008], Fattal02 [Fattal et al. 2002] and Durand02 [Durand and Dorsey 2002] produce visible halos near strong edges. For the sparse norm filter, we set p = and the radius to 1/6 of the image height, and conduct one pass of non-local diffusion to extract the base layer. We observe that, under the same p, the WLS method seems to be trapped in a local minimum because the cost function is non-convex. Although Drago03 [Drago et al. 2003], Pattanaik00 [Pattanaik et al. 2000], Mantiuk06 [Mantiuk et al. 2002] and Reinhard05 [Reinhard and Devlin 2005] try to reduce the halo, they fail to make some details visible in the results.

Ringing artifacts are common in deconvolution when the kernel estimation is not accurate or when frequency nulls occur. The artifacts can be significantly reduced by putting a sparse norm prior on the gradient term [Levin et al. 2007; Krishnan and Fergus 2009]. Similarly, we put a non-local sparse norm on the gradients,

\min_{I_i^{new}} \|I_i^{new} - (k \otimes I)(i)\|^2 + \frac{\lambda}{|N_i|} \sum_{j \in N_i, j \ne i} \|I_i^{new} - I_j\|^p, \quad 0 < p < 2,  (9)

and use an alternating minimization technique [Wang et al. 2008] to deconvolve the blurry image (Fig 8). We compare our result with the standard Tikhonov regularization, which uses an l2 penalty on the gradient term; SNF produces crisper results with fewer ringing artifacts.

The sparse norm filter naturally incorporates a guidance or joint image [Petschnigg et al. 2004] to provide the filtering weight or diffusivity. Below we show the result of flash/no-flash denoising, taking the image captured with flash as the joint image to remove the noise in the no-flash image, using p = , r = (Fig 9).
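The base/detail manipulations used in this and the preceding section (detail boosting for Fig. 5, base-layer compression for tone mapping) share one pattern, sketched here with placeholder scale factors that are not the paper's settings:

```python
import numpy as np

def manipulate_layers(img, smooth_fn, base_scale=1.0, detail_scale=1.0):
    """Generic base/detail manipulation: decompose I = B + D with an
    edge-preserving smoother `smooth_fn` (e.g. an SNF pass), then
    recombine. detail_scale > 1 boosts detail; base_scale < 1
    compresses the base layer as in HDR tone mapping."""
    base = smooth_fn(img)
    detail = img - base
    return base_scale * base + detail_scale * detail
```

Because the smoother is edge-preserving, strong edges stay in the base layer and are not amplified by the detail boost, which is why the result avoids the halo-driven overshoot described above.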
The sparse norm is often used to model the gradient profile of natural images in computer vision models. In the following experiment (Fig 10) we show it can be used to accelerate the normalized cut [Shi and Malik 2004] using joint filtering techniques. Since normalized cut finds the eigenvectors of a diffusion/affinity matrix, we replace the slow matrix multiplication (which is quadratic in the neighborhood radius) in the eigensolver with our joint SNF, which takes constant time for any neighborhood size [Ye et al. 2012]. We use the original image as the guidance image, which easily provides 10x-100x acceleration depending on the filtering radius. Moreover, the technique can be extended to explain and accelerate other normalized cut related algorithms [He et al. 2010b; Levin et al. 2004; Levin et al. 2008b; Levin et al. 2008a; He et al. 2011; He et al. 2010a].
We demonstrate an application to colorization [Levin et al. 2004] as an example of joint filtering. Colorization can be achieved by finding the stable distribution of an edge-preserving filter [Ye et al. 2012]. The guiding weight of this filter is calculated from the grayscale image, and similar nearby pixels are assigned higher weights, which is naturally enforced using the sparse norm. To ensure that pixels with similar grayscale intensities are assigned similar colors, we use this guiding weight/diffusivity to spread the color cues obtained from the input color strokes. We use a straightforward gradient descent algorithm to update the diffusion system; with fewer than 10 iterations, we obtain high quality results (Fig 11(d)). This algorithm can also be used to re-color the flash image, as shown in Fig 9(d).
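A toy 1-D version of this colorization-by-diffusion scheme can be sketched as follows; this is entirely our own construction (names and parameters are illustrative): sparse-norm weights computed from the gray signal spread clamped color seeds across flat regions but not across edges.

```python
import numpy as np

def colorize_1d(gray, seeds, p=0.2, r=3, iters=50, eps=1e-2):
    """Toy 1-D colorization by edge-preserving diffusion.
    `seeds` maps index -> color value; seeded entries stay clamped.
    Weights w = max(|gray_j - gray_i|, eps)^(p-2) come from the
    grayscale guidance, so color leaks only within similar-gray runs."""
    color = np.zeros(len(gray))
    for idx, c in seeds.items():
        color[idx] = c
    n = len(gray)
    for _ in range(iters):
        new = color.copy()
        for i in range(n):
            if i in seeds:
                continue                     # keep stroke colors fixed
            j0, j1 = max(i - r, 0), min(i + r + 1, n)
            w = np.maximum(np.abs(gray[j0:j1] - gray[i]), eps) ** (p - 2)
            new[i] = (w * color[j0:j1]).sum() / w.sum()
        color = new
    return color
```

On a two-region gray signal with one seed per region, each region converges to its own seed color, with negligible leakage across the gray edge.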
This acceleration technique enabled by the sparse norm filter can be extended to the non-sparse norm. Seamless editing is a popular feature in image processing. Due to inconsistent colors between the source and target, simple drag-and-drop editing is known to create artificial boundaries. The Poisson equation is widely used to seamlessly fill in a target region using a source region: guided interpolation is conducted by solving ΔI = div(∇J) in the fill-in area Ω, subject to the Dirichlet boundary condition [Pérez et al. 2004]. If we extend this equation by taking non-local gradients, the resulting high-dimensional Poisson equation hints at a new system:

|N_i| I_i^{new} - \sum_{j \in N_i} I_j^{new} = |N_i| J_i - \sum_{j \in N_i} J_j.  (10)

We solve the above system with a gradient descent algorithm. Since the diffusion is non-local, the algorithm converges within 10 iterations. Only \sum_{j \in N_i} I_j needs to be updated in each iteration, which can be calculated using the box filter. We compare our algorithm with the original Poisson solver. In our environment, we use the backslash operation in Matlab to solve the Poisson equation, which takes 3 seconds per mega-pixel, excluding the time required to construct the sparse linear system. As reported above, the box filter takes only 0.04 seconds per mega-pixel in Matlab, or 0.01 seconds in C++. The results are comparable in quality (Fig 12).
Figure 6:
Left: original image. Middle: p = , r = . Right: p = , r = . First row: approximate solution of (6) using brute-force search. Second row: followed by a guided diffusion.
(a) Original (b) Farbman08 (c) Durand02 (d) Drago03 (e) Fattal02 (f) Pattanaik00 (g) Mantiuk06 (h) Reinhard05 (i) SNF
Figure 7:
HDR tone compression comparison.
Figure 8:
Deconvolution comparison. (a) Input. (b) Estimated kernel. (c) Tikhonov regularization result. (d) Sparse norm deconvolution using p = , r = .
Figure 9: (a) Noisy image. (b) Flash image. (c) Joint filtered image. (d) Recolored image.
Figure 10:
Column 1: input image from [Shi and Malik 2009]. Columns 2-6: segments using p = , r = / of the image.
Figure 11:
Colorization. (a) Input grayscale image. (b) Input color strokes. (c) Result by [Levin et al. 2004]. (d) Our result using p = , r = / of the image height.
Figure 12:
Seamless editing. (a) Source image. (b) Target image. (c) Drag-and-drop result. (d) First iteration of our algorithm. (e) Third iteration of our algorithm. (f) Our algorithm's output. (g) [Pérez et al. 2003] output.
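The non-local Poisson system (10) behind Fig 12 can be solved with a Jacobi-style iteration built on the box filter. The following is our own sketch (naming and parameters are illustrative): pixels outside the mask stay clamped to the target, providing the Dirichlet boundary condition.

```python
import numpy as np

def _box_mean(a, r):
    """Mean over the clipped (2r+1)x(2r+1) window via an integral image."""
    h, w = a.shape
    s = np.zeros((h + 1, w + 1))
    s[1:, 1:] = np.cumsum(np.cumsum(a, axis=0), axis=1)
    y, x = np.arange(h), np.arange(w)
    y0, y1 = np.maximum(y - r, 0), np.minimum(y + r + 1, h)
    x0, x1 = np.maximum(x - r, 0), np.minimum(x + r + 1, w)
    area = (y1 - y0)[:, None] * (x1 - x0)[None, :]
    return (s[np.ix_(y1, x1)] - s[np.ix_(y0, x1)]
            - s[np.ix_(y1, x0)] + s[np.ix_(y0, x0)]) / area

def nonlocal_poisson_blend(target, source, mask, r=4, iters=40):
    """Jacobi-style iteration for (10): inside the mask,
    I_i = mean(I, N_i) + J_i - mean(J, N_i); outside, I stays equal
    to the target (Dirichlet condition)."""
    out = target.astype(float).copy()
    out[mask] = source[mask]          # drag-and-drop initialization
    for _ in range(iters):
        upd = _box_mean(out, r) + source - _box_mean(source, r)
        out[mask] = upd[mask]
    return out
```

Each iteration costs two box filters (and the source one can be precomputed), which is where the reported speedup over a sparse linear solve comes from.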
In this work we present a simple but fundamental filter that builds connections with various classic smoothing techniques. The sparse norm filter can be regarded as a non-local extension of optimization-based smoothing methods that allows a one-pass approximate solution via filtering. Through a variety of applications in image processing and computer vision, we demonstrate that the sparse norm filter gives new insights into popular applications and provides high quality acceleration.
References
Black, M., and Sapiro, G. IEEE Transactions on Image Processing 7, 3, 421–432.
Drago, F., Myszkowski, K., Annen, T., and Chiba, N. Computer Graphics Forum 22, 3, 419–426.
Durand, F., and Dorsey, J. ACM Transactions on Graphics 21, 3, 257–266.
Elad, M. IEEE Transactions on Image Processing 11, 10, 1141–1151.
Farbman, Z., Fattal, R., Lischinski, D., and Szeliski, R. ACM Transactions on Graphics 27, 3, 1.1–1.10.
Fattal, R., Lischinski, D., and Werman, M. ACM Transactions on Graphics 21, 3, 249–256.
Gastal, E. S. L., and Oliveira, M. M. ACM Transactions on Graphics 31, 4, 1–13.
He, K., Sun, J., and Tang, X. Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, IEEE, 2165–2172.
He, K., Sun, J., and Tang, X. Proceedings of European Conference on Computer Vision, Springer, 1–14.
He, K., Sun, J., and Tang, X. Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, IEEE, 2341–2353.
Kass, M., and Solomon, J. ACM Transactions on Graphics 29, 4, 1–10.
Krishnan, D., and Fergus, R. Advances in Neural Information Processing Systems.
Levin, A., Lischinski, D., and Weiss, Y. ACM Transactions on Graphics 23, 3, 689.
Levin, A., Fergus, R., Durand, F., and Freeman, W. T. ACM Transactions on Graphics 26, 10, 1141–1151.
Levin, A., Lischinski, D., and Weiss, Y. IEEE Transactions on Pattern Analysis and Machine Intelligence 30, 2, 228–242.
Levin, A., Rav-Acha, A., and Lischinski, D. IEEE Transactions on Pattern Analysis and Machine Intelligence 30, 10, 1699–1712.
Mantiuk, R., Myszkowski, K., and Seidel, H.-P. ACM Transactions on Applied Perception 3, 3, 87–94.
Paris, S., and Durand, F. International Journal of Computer Vision 81, 1, 24–52.
Pattanaik, S., Tumblin, J., Yee, H., and Greenberg, D. Proceedings of ACM SIGGRAPH, ACM, 47–54.
Pérez, P., Gangnet, M., and Blake, A. ACM Transactions on Graphics 22, 3, 313.
Perona, P., and Malik, J. IEEE Transactions on Pattern Analysis and Machine Intelligence 12, 7, 629–639.
Petschnigg, G., Szeliski, R., Agrawala, M., Cohen, M., Hoppe, H., and Toyama, K. ACM Transactions on Graphics 23, 3, 664.
Porikli, F. Proceedings of International Conference on Computer Vision and Pattern Recognition, IEEE Computer Society, 1–8.
Reinhard, E., and Devlin, K. IEEE Transactions on Visualization and Computer Graphics 11, 1, 13–24.
Rudin, L., Osher, S., and Fatemi, E. Physica D: Nonlinear Phenomena 60, 1–4, 259–268.
Shi, J., and Malik, J. IEEE Transactions on Pattern Analysis and Machine Intelligence 22, 8, 888–905.
Tikhonov, A. N., Goncharsky, A. V., Stepanov, V. V., and Yagola, A. G. Numerical Methods for the Solution of Ill-Posed Problems. Springer.
Tomasi, C., and Manduchi, R. Proceedings of International Conference on Computer Vision, IEEE, 839–846.
Wang, Y., Yang, J., Yin, W., and Zhang, Y. SIAM Journal on Imaging Sciences 1, 3, 248.
Xu, L., and Lu, C. Image Rochester NY 30, 6, 1–12.
Yang, Q., Tan, K., and Ahuja, N. Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, IEEE Computer Society, 557–564.
Ye, C., Lin, Y., Song, M., Chen, C., and Jacobs, D. W. arXiv preprint.