Restoration of Images Corrupted by Impulse Noise and Mixed Gaussian Impulse Noise using Blind Inpainting
MING YAN*
Abstract. This article studies the problem of restoring observed images corrupted by impulse noise and mixed Gaussian impulse noise. Since the pixels damaged by impulse noise contain no information about the true image, finding this set correctly is a very important problem. We propose two methods based on blind inpainting and $\ell_0$ minimization that can simultaneously find the damaged pixels and restore the image. By iteratively restoring the image and updating the set of damaged pixels, these methods have better performance than other methods, as shown in the experiments. In addition, we provide convergence analysis for these methods: the algorithms converge to coordinatewise minimum points. Moreover, with some modifications in the algorithms, they converge to local minimum points (or do so with probability one).

Key words. impulse noise, mixed Gaussian impulse noise, total variation, blind inpainting, image restoration, $\ell_0$ minimization
1. Introduction.
Observed images are often corrupted by impulse noise during image acquisition and transmission, caused by malfunctioning pixels in camera sensors, faulty memory locations in hardware, or bit errors in transmission [1]. There are two common types of impulse noise: salt-and-pepper impulse noise and random-valued impulse noise. Assume that the dynamic range of an image is $[d_{\min}, d_{\max}]$. For images corrupted by salt-and-pepper impulse noise, the noisy pixels can take only the two values $d_{\min}$ and $d_{\max}$, while for images corrupted by random-valued impulse noise, the noisy pixels can take any random value between $d_{\min}$ and $d_{\max}$.

In this work, the original unknown $M \times N$ image $u$ is defined on a domain $\Omega = \{(i,j): i = 1, \dots, M,\ j = 1, \dots, N\}$, and the observed $M \times N$ image $f$ is modeled as
$$ f_{i,j} = \begin{cases} (Hu)_{i,j} + (n_2)_{i,j}, & (i,j) \in \Omega_1, \\ (n_1)_{i,j}, & (i,j) \in \Omega_1^c := \Omega \setminus \Omega_1. \end{cases} \tag{1.1} $$
Here, $n_1$ is the impulse noise, and $n_2$ is the additive zero-mean Gaussian white noise. $H$ is the identity or a blurring operator, which is assumed to be continuous. The subset $\Omega_1^c$ of $\Omega$ denotes the region where the information of $Hu$ is missing. The problem is to find the true image $u$ from the observed image $f$, given the operator $H$.

If $\Omega_1^c$ is empty, there is no impulse noise, and we have $f = Hu + n_2$, which is an image denoising (and deblurring) problem; it has been extensively studied by both signal processing researchers and mathematicians. If $\Omega_1^c$ is not empty and known, this can be considered as an image inpainting (and deblurring) problem. Here, we consider the last and most difficult case, where $\Omega_1^c$ is not empty and unknown. The challenge of this problem is to restore the lost details and remove the impulse noise simultaneously. If $n_2 = 0$, this is an impulse noise removal (and deblurring) problem, and if $n_2 \neq 0$ it becomes a mixed Gaussian impulse noise removal (and deblurring) problem.
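To make the degradation model (1.1) concrete, here is a minimal Python sketch simulating the random-valued impulse case with $H$ the identity; the function name and parameter values are our own illustration, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def degrade(u, sigma=10.0, s=0.25, d_min=0.0, d_max=255.0):
    """Toy simulation of model (1.1) with H = identity: Gaussian noise n2 is
    added on the clean set Omega_1, and pixels in Omega_1^c (chosen with
    probability s) are replaced by random impulse values in [d_min, d_max]."""
    f = u + sigma * rng.standard_normal(u.shape)        # (Hu) + n2
    impulse = rng.random(u.shape) < s                   # indicator of Omega_1^c
    f[impulse] = rng.uniform(d_min, d_max, size=int(impulse.sum()))  # n1
    return f, impulse
```

For salt-and-pepper noise, the `rng.uniform` line would instead pick each damaged pixel as `d_min` or `d_max` with equal probability.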
There are already several types of approaches for solving these problems. The first type treats $n_1$ as outliers and uses the $\ell_1$ norm in the fidelity term to increase the robustness of inpainting to outliers [2, 3, 4, 5]; the problem is to solve
$$ \operatorname*{minimize}_{u} \ \sum_{i,j} |(Hu)_{i,j} - f_{i,j}| + \lambda J(u), \tag{1.2} $$
where $J(u)$ is a regularization on the true image $u$. There are many candidates for the regularization $J(u)$; some examples are Tikhonov regularization [6], Geman and Reynolds' half quadratic variational models [7], Rudin, Osher and Fatemi's total variation models [8, 9], and framelet based models [10, 11]. This approach does not need to find the damaged pixels and performs well in impulse noise removal. However, for images corrupted by mixed Gaussian impulse noise, the Gaussian noise is not treated properly.

The second type is the two-stage approach [12, 13, 14, 15, 11, 16, 17], which estimates the inpainting region $\Omega_1^c$ before estimating $u$. In these approaches, the second stage becomes a regular image inpainting (and deblurring) problem [18, 19, 20, 21]:
$$ \operatorname*{minimize}_{u} \ \sum_{(i,j) \in \Omega_1} ((Hu)_{i,j} - f_{i,j})^2 + \lambda J(u). \tag{1.3} $$
The success of these two-stage approaches relies on the accurate detection of $\Omega_1^c$; e.g., the adaptive median filter (AMF) [22] is used to detect salt-and-pepper impulse noise, while the adaptive center-weighted median filter (ACWMF) [23] and rank-ordered logarithmic difference (ROLD) [24] are utilized to detect random-valued impulse noise. Though the adaptive median filter can detect most pixels damaged by salt-and-pepper impulse noise, pixels corrupted by random-valued impulse noise are more difficult to detect. Recently, by considering two different types of noise, Dong et al.

*Department of Mathematics, University of California, Los Angeles, CA, 90095 USA. E-mail: [email protected].
[25] proposed a new method using framelets to remove random-valued impulse noise plus Gaussian noise by solving
$$ \operatorname*{minimize}_{u,v} \ \frac{1}{2}\sum_{i,j} ((Hu)_{i,j} + v_{i,j} - f_{i,j})^2 + \lambda_1 \|Wu\|_1 + \lambda_2 \sum_{i,j} |v_{i,j}|, \tag{1.4} $$
where $W$ is a transformation from the image to the framelet coefficients. Two unknowns $u$ (restored image) and $v$ (noise) are introduced into this variational model, and their method can simultaneously find $u$ and $v$ using split Bregman iterations [26].

Dong et al.'s method uses the $\ell_1$ norm as a convex approximation of the $\ell_0$ term to make the result $v$ sparse while keeping the problem convex. However, using non-convex optimization ($\ell_p$ when $p <$
$1$) has better performance than convex optimization in dealing with sparsity, as shown in compressive sensing [27]. Even $\ell_0$ minimization and smoothed $\ell_0$ minimization are used in many algorithms [28, 29, 30, 31, 32, 33]. In this paper, we use $\ell_0$ minimization instead of $\ell_1$ minimization in the problem; with $\ell_0$ minimization, the problem of finding $u$ can be solved by considering a problem of finding $u$ and $\Omega_1$ jointly. In addition, using an alternating minimization algorithm, it can be solved easily by alternately solving the image inpainting problem and finding the damaged pixels.

The work is organized as follows. In sections 2 and 3, we introduce our general methods for removing impulse noise using two different treatments of the $\ell_0$ term: I) the $\ell_0$ term is put in the objective function; II) the $\ell_0$ term is in the constraint. The algorithms for these two models are similar. The convergence analysis of these two models is provided in section 4, and numerical experiments are presented in section 5.
2. Blind Inpainting Models using the $\ell_0$ Term.

2.1. Formulation.
For an $M \times N$ image, $\Lambda \in \{0,1\}^{M \times N}$ is a binary matrix representing a subset $\Omega_s$ of the pixels as follows:
$$ \Lambda_{i,j} = \begin{cases} 1, & \text{if pixel } (i,j) \in \Omega_s, \\ 0, & \text{otherwise}. \end{cases} \tag{2.1} $$
The connection between the binary matrix $\Lambda$ and the subset $\Omega_s$ will be used many times in what follows.

Given a degraded image $f$, our objective is to estimate the damaged (or missing) pixels and restore them. We propose the following model using $\ell_0$ minimization to solve this problem:
$$ \operatorname*{minimize}_{u,v} \ F_P(u,v) \equiv \frac{1}{2}\sum_{i,j} ((Hu)_{i,j} + v_{i,j} - f_{i,j})^2 + \lambda_1 J(u) + \lambda_2 \|v\|_0, \tag{2.2} $$
where $J(u)$ is the regularization term on the image, and $\lambda_1$ and $\lambda_2$ are two positive parameters. Here $P$ means that the $\ell_0$ term is used as a penalty term in the objective function. The parameter $\lambda_1$ depends on the noise level of $n_2$: the higher the noise level, the larger the parameter should be. The parameter $\lambda_2$ depends on the noise level of the impulse noise. The difference from Dong et al.'s method is that the $\ell_1$ norm is replaced by the $\ell_0$ term. This problem is difficult to solve because of the $\ell_0$ term: it makes the problem non-convex, and the objective function is discontinuous. Because what we need to find is just $u$, we can eliminate $v$ from problem (2.2) by defining $E_P(u)$ as $\min_v F_P(u,v)$, and the problem becomes
$$ \operatorname*{minimize}_{u} \ E_P(u) = \min_v F_P(u,v). \tag{2.3} $$
However, $E_P(u)$ is still non-convex and difficult to solve. In order to solve this problem, we will transform it into a continuous and multi-convex problem of $u$ and $\Lambda$ by introducing a new variable $\Lambda$; by solving the new problem of $u$ and $\Lambda$, we can obtain a local optimal solution of the original problem (2.3) in $u$ only.

First of all, we provide the intuition behind choosing $\ell_0$ minimization instead of $\ell_1$ minimization for $v$.
Because of the special structure of $\ell_0$ minimization, $v$ can be easily eliminated from $\min_v F_P(u,v)$, and we obtain a function of $u$ only:
$$ E_P(u) = \frac{1}{2}\sum_{i,j} R_0((Hu)_{i,j} - f_{i,j}) + \lambda_1 J(u), \tag{2.4} $$
where $R_0(x) = \min(|x|^2, 2\lambda_2)$. Similarly, we can obtain the function of $u$ only when the $\ell_1$ term is used instead of the $\ell_0$ term:
$$ E_{P_1}(u) \equiv \sum_{i,j} R_1((Hu)_{i,j} - f_{i,j}) + \lambda_1 J(u), \tag{2.5} $$
where
$$ R_1(x) = \begin{cases} \frac{1}{2}|x|^2, & \text{if } |x| \leq \lambda_2, \\ \lambda_2|x| - \frac{1}{2}\lambda_2^2, & \text{otherwise}. \end{cases} $$
The data fidelity terms $\sum_{i,j} R_0((Hu)_{i,j} - f_{i,j})$ and $\sum_{i,j} R_1((Hu)_{i,j} - f_{i,j})$ in problems (2.4) and (2.5) approximate the negative log-likelihood resulting from the mixed Gaussian impulse noise model, with $R_0$ or $R_1$ describing the negative log-likelihood for each pixel, because the noise is independently distributed over all pixels. What we are trying to do is find a better and simpler model for mixed Gaussian impulse noise. We can simulate the probability distribution of a pixel value when it is corrupted by both additive Gaussian and random-valued impulse noise. For a fixed pixel value (128 in Fig. 2.1), a Gaussian noise value is added, and the pixel is replaced by a random value between $d_{\min}$ and $d_{\max}$ (0 and 255 in Fig. 2.1) with some probability related to the impulse noise level. This simulation is repeated many times, and the approximated negative log-likelihood function is shown in Fig. 2.1. We can see that it is constant when the value is far from the true pixel value. In this figure we also show $R_0$ and $R_1$ for comparison.

Fig. 2.1: Negative log-likelihood value for the mixed Gaussian impulse noise, and comparison with $R_0$ and $R_1$.

From Fig. 2.1, we can see that it is more reasonable to use the $\ell_0$ term instead of the $\ell_1$ term.
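The two per-pixel costs $R_0$ and $R_1$ from (2.4) and (2.5) are easy to compare numerically; a small NumPy sketch (function names ours):

```python
import numpy as np

def R0(x, lam2):
    """Per-pixel cost after eliminating v from the l0 model: it saturates at
    the constant 2*lam2, so gross outliers stop influencing u."""
    return np.minimum(np.abs(x) ** 2, 2.0 * lam2)

def R1(x, lam2):
    """Per-pixel cost after eliminating v from the l1 model: the Huber
    function, which still grows linearly for large residuals."""
    ax = np.abs(x)
    return np.where(ax <= lam2, 0.5 * ax ** 2, lam2 * ax - 0.5 * lam2 ** 2)
```

For a residual of 10 with $\lambda_2 = 2$, `R0` returns the constant 4 while `R1` returns 18 and keeps growing, which is why the $\ell_1$ model still lets large outliers pull on $u$.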
However, $\ell_1$ minimization makes the problem convex, and it is easier to find the solution, while the problem is non-convex and difficult to solve if $\ell_0$ minimization is used. Next, we will introduce an auxiliary variable $\Lambda$ (or $\Omega_s$), and we can find a local minimizer of $E_P(u)$ by solving the new problem of $u$ and $\Lambda$.

For any fixed $\bar{u}$, we can find the optimal $v$ by solving the following optimization problem:
$$ \operatorname*{minimize}_{v} \ \frac{1}{2}\sum_{i,j} ((H\bar{u})_{i,j} + v_{i,j} - f_{i,j})^2 + \lambda_2 \|v\|_0. $$
The solution is
$$ v_{i,j} = \begin{cases} 0, & \text{if } |f_{i,j} - (H\bar{u})_{i,j}| < \sqrt{2\lambda_2}, \\ f_{i,j} - (H\bar{u})_{i,j}, & \text{if } |f_{i,j} - (H\bar{u})_{i,j}| > \sqrt{2\lambda_2}, \\ 0 \text{ or } f_{i,j} - (H\bar{u})_{i,j}, & \text{if } |f_{i,j} - (H\bar{u})_{i,j}| = \sqrt{2\lambda_2}. \end{cases} $$
Whenever $v_{i,j} \neq 0$, we have $v_{i,j} = f_{i,j} - (H\bar{u})_{i,j}$. Therefore, if we denote
$$ \Lambda_{i,j} = \begin{cases} 0, & \text{if } v_{i,j} \neq 0, \\ 1, & \text{if } v_{i,j} = 0, \end{cases} \tag{2.6} $$
then we have a new problem in $u$ and $\Lambda$ as follows:
$$ \operatorname*{minimize}_{u,\ \Lambda \in \{0,1\}^{M \times N}} \ F_P(u,\Lambda) \equiv \frac{1}{2}\sum_{i,j} \Lambda_{i,j}((Hu)_{i,j} - f_{i,j})^2 + \lambda_1 J(u) + \lambda_2 \sum_{i,j}(1 - \Lambda_{i,j}). \tag{2.7} $$
Problem (2.7) can be solved easily by an alternating minimization method, and the algorithm for solving (2.7) is described in section 2.2. For a general alternating minimization procedure for convex and non-convex problems, please see [34].
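The closed-form $v$-update above is a hard-thresholding of the residual; a minimal sketch (assuming arrays for $H\bar{u}$ and $f$, with our own function name, and breaking the measure-zero tie toward $v = 0$):

```python
import numpy as np

def v_update(Hu_bar, f, lam2):
    """Closed-form minimizer of 0.5*sum((Hu + v - f)^2) + lam2*||v||_0:
    per pixel, keep v = 0 when the residual is below sqrt(2*lam2),
    otherwise absorb the whole residual into v."""
    r = f - Hu_bar
    return np.where(np.abs(r) > np.sqrt(2.0 * lam2), r, 0.0)
```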
Remark: In fact, the constraint $\Lambda \in \{0,1\}^{M \times N}$ can be further relaxed to $\Lambda \in [0,1]^{M \times N}$, and we have the following multi-convex problem:
$$ \operatorname*{minimize}_{u,\ \Lambda \in [0,1]^{M \times N}} \ F_P(u,\Lambda) = \frac{1}{2}\sum_{i,j} \Lambda_{i,j}((Hu)_{i,j} - f_{i,j})^2 + \lambda_1 J(u) + \lambda_2 \sum_{i,j}(1 - \Lambda_{i,j}). \tag{2.8} $$
If $u$ is fixed, $F_P(u,\Lambda)$ is a function of $\Lambda$ only, and it is separable. The optimal solution for $\Lambda$ with fixed $u$ is
$$ \Lambda_{i,j} = \begin{cases} 0, & \text{if } |f_{i,j} - (Hu)_{i,j}| > \sqrt{2\lambda_2}, \\ 1, & \text{if } |f_{i,j} - (Hu)_{i,j}| < \sqrt{2\lambda_2}, \\ t, & \text{if } |f_{i,j} - (Hu)_{i,j}| = \sqrt{2\lambda_2}, \end{cases} $$
where $t$ is 0 or 1 for the unrelaxed problem (2.7), and $t$ is any number in $[0,$
$1]$ for the relaxed problem (2.8). If we eliminate $\Lambda$ as before, $\min_{\Lambda \in \{0,1\}^{M \times N}} F_P(u,\Lambda)$ and $\min_{\Lambda \in [0,1]^{M \times N}} F_P(u,\Lambda)$ are functions of $u$ only and the same as $E_P(u)$. Because all the problems are non-convex, there may exist many local optimal solutions that are not global optimal solutions. We will show in section 4 that by solving problem (2.8), we can obtain a local minimizer of the function $E_P(u)$.

2.2. Algorithm. The objective function defined in (2.8) is non-convex. It is still difficult to minimize over the pair $(u, \Lambda)$ jointly, but we can use the alternating minimization method, which separates the energy minimization over $u$ and $\Lambda$ into two steps. Solving the problem in $u$ with $\Lambda$ fixed is a convex optimization problem for image inpainting, and the problem of finding $\Lambda$ with $u$ fixed can be solved in one step. These two subproblems are:

1) Finding $u$: Given an estimate of the support matrix $\Lambda$, the minimization over $u$ is just an image inpainting (and deblurring) problem [35]:
$$ \operatorname*{minimize}_{u} \ \frac{1}{2}\sum_{(i,j) \in \Omega_1} ((Hu)_{i,j} - f_{i,j})^2 + \lambda_1 J(u), \tag{2.9} $$
where $\Omega_1 = \{(i,j): \Lambda_{i,j} = 1\}$. There are many existing methods for solving this problem.

2) Finding $\Lambda$: Given an estimate of the image $u$, the minimization over $\Lambda$ becomes:
$$ \operatorname*{minimize}_{\Lambda \in [0,1]^{M \times N}} \ \frac{1}{2}\sum_{i,j} \Lambda_{i,j}((Hu)_{i,j} - f_{i,j})^2 - \lambda_2 \sum_{i,j} \Lambda_{i,j}. \tag{2.10} $$
Since this minimization problem in $\Lambda$ is separable, it can be solved exactly in only one step:
$$ \Lambda_{i,j} = \begin{cases} 0, & \text{if } ((Hu)_{i,j} - f_{i,j})^2/2 > \lambda_2, \\ 0 \text{ or } 1, & \text{if } ((Hu)_{i,j} - f_{i,j})^2/2 = \lambda_2, \\ 1, & \text{if } ((Hu)_{i,j} - f_{i,j})^2/2 < \lambda_2. \end{cases} \tag{2.11} $$
Therefore, the proposed algorithm for blind inpainting with $\ell_0$ minimization iteratively finds $u$ and $\Lambda$. As mentioned in section 2.1, $\min_{\Lambda \in \{0,1\}^{M \times N}} F_P(u,\Lambda) = \min_{\Lambda \in [0,1]^{M \times N}} F_P(u,\Lambda)$ for all fixed $u$; thus we can force $\Lambda_{i,j} \in \{0,1\}$ during the algorithm. When $((Hu)_{i,j} - f_{i,j})^2/2 = \lambda_2$, we can randomly choose $\Lambda_{i,j}$ to be 0 or 1. The detailed algorithm for blind inpainting is described below; the initial $\Lambda^0$ is chosen by the methods for detecting the impulse noise (AMF for salt-and-pepper impulse noise and ACWMF for random-valued impulse noise). Usually three iterations are sufficient, as shown in the experiments.

Algorithm 1
Proposed blind inpainting algorithm.
Input: $f$, $\lambda_1$, $\lambda_2$, $\Lambda^0$, $\epsilon$.
Initialization: $k = 1$.
while $k < \mathrm{Max\_Iter}$ and $F_P(u^{k-1}, \Lambda^{k-1}) - F_P(u^k, \Lambda^k) > \epsilon$ do
    Obtain $u^k$ by solving (2.9).
    Obtain $\Lambda^k$ by (2.11).
    $k = k + 1$.
end while

Here $\epsilon$ is chosen to be small and serves as a stopping criterion to stop the algorithm when the difference in function values between two iterations is too small. $\lambda_1$ and $\lambda_2$ are two parameters depending on the noise levels of $n_2$ and $n_1$.
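Algorithm 1 can be sketched end to end in a few lines. The code below is our own toy version: it assumes $H$ is the identity and swaps the regularization $J(u)$ for a simple quadratic smoothness penalty so the $u$-subproblem (2.9) can be handled by plain gradient descent (the paper solves a TV inpainting problem instead); the $\Lambda$-update is exactly (2.11):

```python
import numpy as np

def grad_smooth(u):
    # gradient of the quadratic smoothness term: half the sum of squared
    # differences between horizontally/vertically neighboring pixels
    g = np.zeros_like(u)
    g[:-1, :] += u[:-1, :] - u[1:, :]
    g[1:, :] += u[1:, :] - u[:-1, :]
    g[:, :-1] += u[:, :-1] - u[:, 1:]
    g[:, 1:] += u[:, 1:] - u[:, :-1]
    return g

def inpaint(f, Lam, lam1, iters=200, step=0.2):
    # u-subproblem (2.9) with H = I and TV swapped for the quadratic
    # penalty above, minimized by plain gradient descent
    u = f.copy()
    for _ in range(iters):
        u -= step * (Lam * (u - f) + lam1 * grad_smooth(u))
    return u

def blind_inpaint(f, lam1, lam2, Lam0, outer=3):
    # Algorithm 1: alternate the u-solve (2.9) and the Lambda-update (2.11)
    Lam = Lam0.copy()
    for _ in range(outer):
        u = inpaint(f, Lam, lam1)
        Lam = (0.5 * (u - f) ** 2 < lam2).astype(float)
    return u, Lam
```

With a good initial $\Lambda^0$ (e.g., from AMF), pixels whose residuals stay above the threshold remain excluded from the fidelity term and are filled in by the smoothness penalty.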
Remark: This algorithm and the algorithm in the next section depend on the initial $\Lambda^0$, and choosing a better $\Lambda^0$ will reduce the total number of iterations and improve the restoration result (because of the non-convexity of the problem). Therefore, we can choose the result of AMF and ACWMF as the initial $\Lambda^0$ for salt-and-pepper and random-valued impulse noise, respectively. For salt-and-pepper impulse noise, AMF provides a very accurate initial guess for $\Lambda^0$ in most cases, and the improvement from more iterations is small, while for random-valued impulse noise, the output of ACWMF is not accurate, and more iterations are needed to improve the detection of corrupted pixels and the recovery result.
3. Blind Inpainting Using Adaptive Outlier Pursuit.
In the previous section, we proposed a method for blind inpainting by putting an $\ell_0$ term in the objective function, which can be solved by iteratively updating the set of pixels damaged by impulse noise (or the binary matrix $\Lambda \in \{0,1\}^{M \times N}$) and restoring the image. Instead of putting the $\ell_0$ term in the objective function, we can also put a constraint on $\|v\|_0$, which is equivalent to a constraint on $\sum_{i,j} \Lambda_{i,j}$. This technique has been applied to robust 1-bit compressive sensing, where there are sign flips in the binary measurements belonging to $\{-1,1\}$ [36], and to robust matrix completion [37]. We proposed an algorithm, named adaptive outlier pursuit (AOP), which can adaptively find the sign flips (outliers) and reconstruct the signal using the other measurements, assumed to be correct. Since images corrupted by impulse noise can also be considered as sparsely corrupted data, AOP can be applied here. Assume that the number of damaged pixels is no more than $L$; this can be obtained from the noise level of the impulse noise. Therefore, the new problem is
$$ \begin{aligned} &\operatorname*{minimize}_{u,v} \ \frac{1}{2}\sum_{i,j} ((Hu)_{i,j} + v_{i,j} - f_{i,j})^2 + \lambda_1 J(u), \\ &\text{subject to} \ \|v\|_0 \leq L, \end{aligned} \tag{3.1} $$
which can be written as
$$ \operatorname*{minimize}_{u,v} \ F_C(u,v) \equiv \frac{1}{2}\sum_{i,j} ((Hu)_{i,j} + v_{i,j} - f_{i,j})^2 + \lambda_1 J(u) + \iota_{\{v:\, \|v\|_0 \leq L\}}(v), \tag{3.2} $$
where $\iota_{\{v:\, \|v\|_0 \leq L\}}$ is the indicator function that equals zero when $\|v\|_0 \leq L$ and $+\infty$ otherwise. Here $C$ means that the $\ell_0$ term is put in the constraint. We can further eliminate $v$, and the problem in $u$ only is
$$ \operatorname*{minimize}_{u} \ E_C(u) \equiv \min_v F_C(u,v). \tag{3.3} $$
Similarly, we can introduce the new variable $\Lambda$, and the corresponding problem in $u$ and $\Lambda$ is
$$ \begin{aligned} &\operatorname*{minimize}_{u,\Lambda} \ \frac{1}{2}\sum_{i,j} \Lambda_{i,j}((Hu)_{i,j} - f_{i,j})^2 + \lambda_1 J(u), \\ &\text{subject to} \ \sum_{i,j}(1 - \Lambda_{i,j}) \leq L, \ \Lambda_{i,j} \in \{0,1\}, \end{aligned} \tag{3.4} $$
which can be described in another way as
$$ \operatorname*{minimize}_{u,\ \Lambda \in \{0,1\}^{M \times N}} \ F_C(u,\Lambda) \equiv \frac{1}{2}\sum_{i,j} \Lambda_{i,j}((Hu)_{i,j} - f_{i,j})^2 + \lambda_1 J(u) + \iota_{\{\Lambda:\, \sum_{i,j}(1-\Lambda_{i,j}) \leq L\}}(\Lambda), \tag{3.5} $$
where $\iota_{\{\Lambda:\, \sum_{i,j}(1-\Lambda_{i,j}) \leq L\}}$ is the indicator function that equals zero when $\sum_{i,j}(1-\Lambda_{i,j}) \leq L$ and $+\infty$ otherwise. Note that we can also relax the constraint $\Lambda \in \{0,1\}^{M \times N}$ to $\Lambda \in [0,1]^{M \times N}$, and the problem becomes multi-convex, as in the previous section. In addition, we have $\min_{\Lambda \in \{0,1\}^{M \times N}} F_C(u,\Lambda) = \min_{\Lambda \in [0,1]^{M \times N}} F_C(u,\Lambda) = E_C(u)$.

This problem can also be solved iteratively, as in the previous section. The $u$-subproblem is the same as before, and the $\Lambda$-subproblem is slightly different. In order to update $\Lambda$, we have to solve
$$ \begin{aligned} &\operatorname*{minimize}_{\Lambda} \ \sum_{i,j} \Lambda_{i,j}((Hu)_{i,j} - f_{i,j})^2/2, \\ &\text{subject to} \ \sum_{i,j}(1 - \Lambda_{i,j}) \leq L, \ \Lambda_{i,j} \in \{0,1\}. \end{aligned} \tag{3.6} $$
This problem amounts to choosing the $M \times N - L$ elements with the least sum from the $M \times N$ elements $\{((Hu)_{i,j} - f_{i,j})^2/2\}_{i=1,j=1}^{M,N}$. Given a $u$ estimated from (2.9), we can update $\Lambda$ in one step:
$$ \Lambda_{i,j} = \begin{cases} 0, & \text{if } ((Hu)_{i,j} - f_{i,j})^2/2 \geq \lambda_2, \\ 1, & \text{if } ((Hu)_{i,j} - f_{i,j})^2/2 < \lambda_2, \end{cases} \tag{3.7} $$
where $\lambda_2$ is the $L$th largest term of $\{((Hu)_{i,j} - f_{i,j})^2/2\}_{i=1,j=1}^{M,N}$. If the $L$th and $(L+1)$th largest terms are equal, then we can choose any binary matrix $\Lambda$ such that $\sum_{i,j} \Lambda_{i,j} = M \times N - L$ and
$$ \min_{i,j:\, \Lambda_{i,j} = 0} ((Hu)_{i,j} - f_{i,j})^2/2 \ \geq \ \max_{i,j:\, \Lambda_{i,j} = 1} ((Hu)_{i,j} - f_{i,j})^2/2. \tag{3.8} $$
The algorithm for blind inpainting using AOP is described below.

Algorithm 2
Proposed blind inpainting using AOP.
Input: $f$, $\lambda_1$, $L$, $\Lambda^0$, $\epsilon$.
Initialization: $k = 1$.
while $k < \mathrm{Max\_Iter}$ and $F_C(u^{k-1}, \Lambda^{k-1}) - F_C(u^k, \Lambda^k) > \epsilon$ do
    Obtain $u^k$ by solving (2.9).
    Obtain $\Lambda^k$ by (3.7).
    $k = k + 1$.
end while

Here $\epsilon$, $\Lambda^0$ and $\lambda_1$ are chosen in the same way as in algorithm 1. The integer $L$ is an estimate of the number of corrupted pixels, which is easy to obtain from the noise level of the impulse noise.

The difference between these two algorithms lies in the $\Lambda$-subproblem: the threshold $\lambda_2$ is fixed for algorithm 1, while it changes for AOP. AOP can be considered as a special case of algorithm 1 with changing $\lambda_2$. However, the performance of these two algorithms is similar, and the parameter $L$ is easier to obtain than $\lambda_2$, so we will only use algorithm 2 for the numerical experiments.
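The $\Lambda$-subproblem (3.6)-(3.7) of Algorithm 2 amounts to flagging the $L$ pixels with the largest squared residuals; a sketch using NumPy's `argpartition` (function name ours):

```python
import numpy as np

def aop_lambda_update(Hu, f, L):
    """One-step solution of (3.6): mark the L pixels with the largest squared
    residuals as damaged (Lambda = 0) and keep the remaining M*N - L."""
    r2 = 0.5 * (Hu - f) ** 2
    Lam = np.ones_like(r2)
    if L > 0:
        worst = np.argpartition(r2, -L, axis=None)[-L:]  # flat indices of L largest
        Lam.ravel()[worst] = 0.0
    return Lam
```

`argpartition` runs in linear time, so this update costs no more than a pass over the image plus a partial sort.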
4. Convergence Analysis.
In this section, we establish some convergence results for these two algorithms. We will show that both algorithms stop in finitely many steps, and that the output is a coordinatewise minimum point of $F_P(u,\Lambda)$ (or $F_C(u,\Lambda)$) with the relaxed constraint $\Lambda_{i,j} \in [0,1]$. That $(\tilde{u}, \tilde{\Lambda})$ is a coordinatewise minimum point of $F_P(u,\Lambda)$ (or $F_C(u,\Lambda)$) means that $\tilde{u}$ is a minimizer of $F_P(u, \tilde{\Lambda})$ (or $F_C(u, \tilde{\Lambda})$) and $\tilde{\Lambda}$ is a minimizer of $F_P(\tilde{u}, \Lambda)$ (or $F_C(\tilde{u}, \Lambda)$). In addition, with a small modification of the algorithm, the output will be a local minimizer of $E_P(u)$ (or $E_C(u)$) (or will be with probability one).

Since the convergence analysis is similar for both algorithms, let $F(u,\Lambda)$ stand for $F_P(u,\Lambda)$ in the penalty problem (2.7) and for $F_C(u,\Lambda)$ in the constraint problem (3.5), and let $E(u)$ stand for $E_P(u)$ and $E_C(u)$, i.e., $\min_{\Lambda \in \{0,1\}^{M \times N}} F(u,\Lambda)$.

Before deriving the convergence results of the algorithms, two theorems are introduced to show that we can solve problems (2.7) and (3.5) in $u$ and $\Lambda$ to find a local minimizer of $E(u)$.

Theorem 4.1. If $u^*$ is a local minimizer of $E(u)$, then for any $\Lambda^* \in \{0,1\}^{M \times N}$ minimizing $F(u^*, \Lambda)$, $(u^*, \Lambda^*)$ is a local minimizer of $F(u,\Lambda)$.

Proof. Since $u^*$ is a local minimizer of $E(u)$, we can find $\epsilon > 0$ such that for all $u$ satisfying $\|u - u^*\| < \epsilon$, we have $E(u) \geq E(u^*)$. Therefore, for all $(u,\Lambda)$ satisfying $\|(u,\Lambda) - (u^*, \Lambda^*)\| < \epsilon$, we have $\|u - u^*\| < \epsilon$ and
$$ F(u,\Lambda) \geq E(u) \geq E(u^*) = F(u^*, \Lambda^*). $$
Thus $(u^*, \Lambda^*)$ is a local minimizer of $F(u,\Lambda)$.

For the other direction, toward $u^*$ being a local minimizer of $E(u)$, we have the following theorem.

Theorem 4.2.
Given fixed $u^*$, if for every $\bar{\Lambda} \in \{0,1\}^{M \times N}$ minimizing $F(u^*, \Lambda)$ we also have that $u^*$ minimizes $F(u, \bar{\Lambda})$, then $u^*$ is a local minimizer of $E(u)$.

Proof. We first prove it for the penalty problem (2.7) with $F(u,\Lambda) = F_P(u,\Lambda)$. Let $\Omega^+ = \{(i,j): |f_{i,j} - (Hu^*)_{i,j}| > \sqrt{2\lambda_2}\}$ and $\Omega^- = \{(i,j): |f_{i,j} - (Hu^*)_{i,j}| < \sqrt{2\lambda_2}\}$; then from the continuity of $H$, we can find $\epsilon > 0$ such that for all $u$ satisfying $\|u - u^*\| < \epsilon$, we have $|f_{i,j} - (Hu)_{i,j}| > \sqrt{2\lambda_2}$ for all $(i,j) \in \Omega^+$ and $|f_{i,j} - (Hu)_{i,j}| < \sqrt{2\lambda_2}$ for all $(i,j) \in \Omega^-$. Notice that $E(u) = \min_{\Lambda \in \{0,1\}^{M \times N}} F(u,\Lambda)$, so there exists $\bar{\Lambda} \in \{0,1\}^{M \times N}$ such that $E(u) = F(u, \bar{\Lambda})$. We have $\bar{\Lambda}_{i,j} = 0$ when $(i,j) \in \Omega^+$ and $\bar{\Lambda}_{i,j} = 1$ when $(i,j) \in \Omega^-$. Thus $\bar{\Lambda}$ is also a minimizer of $F(u^*, \Lambda)$. In addition, we have $E(u^*) = F(u^*, \bar{\Lambda}) \leq F(u, \bar{\Lambda}) = E(u)$. Therefore, $u^*$ is a local minimizer of $E(u)$.

For the constraint problem (3.5) with $F(u,\Lambda) = F_C(u,\Lambda)$, we just have to replace $2\lambda_2$ with the $L$th largest term of $\{((Hu^*)_{i,j} - f_{i,j})^2\}_{i=1,j=1}^{M,N}$, and the result follows. Remark:
We can replace $\{0,1\}^{M \times N}$ with the relaxed version $[0,1]^{M \times N}$, and both theorems are still valid.

With these theorems, we are ready to show the convergence results of the two algorithms. Theorem 4.3.
Both algorithms converge in finitely many steps, and the output $(u^*, \Lambda^*)$ is a coordinatewise minimum point of $F(u,\Lambda)$.

Proof. As explained before, though we have $\Lambda_{i,j} \in [0,$
$1]$ for the relaxed problem, we can always force $\Lambda_{i,j} \in \{0,1\}$ during the iterations, because for every fixed $\bar{u}$ we can find a minimizer of $F(\bar{u}, \Lambda)$ satisfying $\Lambda_{i,j} \in \{0,1\}$. Since $\Lambda_{i,j} \in \{0,1\}$, there are only finitely many $\Lambda$'s (the total number of different $\Lambda$'s with constraint $\Lambda_{i,j} \in \{0,1\}$ is $2^{M \times N}$), and the algorithm stops in finitely many steps if the $u$-subproblem and $\Lambda$-subproblem are solved exactly. Assume that at step $k$ the function $F(u,\Lambda)$ stops decreasing, which means
$$ F(u^k, \Lambda^k) = F(u^{k+1}, \Lambda^{k+1}). \tag{4.1} $$
Together with the nonincreasing property of the algorithm,
$$ F(u^k, \Lambda^k) \geq F(u^{k+1}, \Lambda^k) \geq F(u^{k+1}, \Lambda^{k+1}), \tag{4.2} $$
we have
$$ F(u^k, \Lambda^k) = F(u^{k+1}, \Lambda^k) = F(u^{k+1}, \Lambda^{k+1}). \tag{4.3} $$
Thus
$$ F(u^k, \Lambda^k) = F(u^{k+1}, \Lambda^k) = \min_u F(u, \Lambda^k), \tag{4.4} $$
$$ F(u^k, \Lambda^k) = \min_\Lambda F(u^k, \Lambda). \tag{4.5} $$
Then $(u^*, \Lambda^*) = (u^k, \Lambda^k)$ is a coordinatewise minimum point of $F(u,\Lambda)$.

However, a coordinatewise minimum point $(\tilde{u}, \tilde{\Lambda})$ may not be a local minimum point of $F(u,\Lambda)$. As shown in the next theorem, $\tilde{\Lambda}$ being the unique minimum point of $F(\tilde{u}, \Lambda)$ is a sufficient condition for $(\tilde{u}, \tilde{\Lambda})$ to be a local minimum point. Theorem 4.4.
For a coordinatewise minimum point $(u^*, \Lambda^*)$ of $F(u,\Lambda)$, if $\Lambda^*$ is the unique minimum point of $F(u^*, \Lambda)$, then $(u^*, \Lambda^*)$ is a local minimum point of $F(u,\Lambda)$. Furthermore, $u^*$ is a local minimum point of $E(u)$.
Proof. Since $\Lambda^*$ is the unique minimum point of $F(u^*, \Lambda)$ and $u^*$ minimizes $F(u, \Lambda^*)$, $u^*$ is a local minimum point of $E(u)$ by theorem 4.2. Then $(u^*, \Lambda^*)$ being a local minimum point of $F(u,\Lambda)$ follows from theorem 4.1.

Let $(u^*, \Lambda^*)$ be a coordinatewise minimum point of $F(u,\Lambda)$. From theorem 4.4, if $u^*$ is not a local minimum point of $E(u)$, then there are multiple minimum points of $F(u^*, \Lambda)$. In addition, from theorem 4.2, there exists another minimum point $\bar{\Lambda}$ of $F(u^*, \Lambda)$ such that $F(u^*, \bar{\Lambda}) > \min_u F(u, \bar{\Lambda})$.

Based on theorems 4.2 and 4.4, we have the following two corollaries. Corollary 4.5.
When solving the subproblem of finding $\Lambda^k$, if there are multiple minimum points of $F(u^k, \Lambda)$, we can choose the best $\Lambda$, i.e., the one with the lowest $\min_u F(u,\Lambda)$; then the algorithm will stop at a local minimum point of $E(u)$.

Proof. When the algorithm stops at $(u^*, \Lambda^*)$, we have $F(u^*, \bar{\Lambda}) = \min_u F(u, \bar{\Lambda})$ for all $\bar{\Lambda} \in \{0,1\}^{M \times N}$ minimizing $F(u^*, \Lambda)$; otherwise, we could find a $\Lambda$ giving a lower $\min_u F(u,\Lambda)$ and the algorithm would continue. Then from theorem 4.2, we know that $u^*$ is a local minimum point of $E(u)$. Remark:
This strategy for choosing $\Lambda$ may not be very useful in practical computation because there could be a large number of candidates for such $\Lambda$. However, in the numerical experiments this does not happen: because of rounding error in the calculation, the probability for two values to be equal is 0.

Instead of having to choose the best among many candidates, we can modify the function $F(u,\Lambda)$ to avoid this case, as Wang et al. did in [38].
Corollary 4.6.
If we modify the objective function $F(u,\Lambda)$ by adding $\tau \sum_{i,j} \Lambda_{i,j} r_{i,j}$, where $\{r_{i,j}\}$ are random values uniformly distributed in $[0,1]$ and $\tau$ is a small number, then the algorithm will stop at a local minimum of $\tilde{E}(u) \equiv \min_{\Lambda \in \{0,1\}^{M \times N}} (F(u,\Lambda) + \tau \sum_{i,j} \Lambda_{i,j} r_{i,j})$ with probability one.

Proof. In this case, the subproblem for updating $\Lambda$ is
$$ \Lambda_{i,j} = \begin{cases} 0, & \text{if } ((Hu)_{i,j} - f_{i,j})^2/2 + \tau r_{i,j} > \lambda_2, \\ 0 \text{ or } 1, & \text{if } ((Hu)_{i,j} - f_{i,j})^2/2 + \tau r_{i,j} = \lambda_2, \\ 1, & \text{if } ((Hu)_{i,j} - f_{i,j})^2/2 + \tau r_{i,j} < \lambda_2. \end{cases} \tag{4.6} $$
The probability of getting $((Hu)_{i,j} - f_{i,j})^2/2 + \tau r_{i,j} = \lambda_2$ is 0 because of the randomness of $r_{i,j}$. Then the algorithm will converge to a local minimum of $\tilde{E}(u)$ with probability one. Similarly, for AOP, the probability of the $L$th and $(L+1)$th largest terms being equal is 0.
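The perturbed update (4.6) is a one-line change to the $\Lambda$-update; a sketch (names ours, with the random matrix $r$ passed in explicitly so the tie-breaking is reproducible):

```python
import numpy as np

def perturbed_lambda_update(Hu, f, lam2, r, tau=1e-8):
    """Lambda-update (4.6) for the perturbed objective F + tau*sum(Lam*r):
    with r drawn uniformly from [0, 1), ties with lam2 occur with
    probability zero, so the rule below is single-valued almost surely."""
    return (0.5 * (Hu - f) ** 2 + tau * r < lam2).astype(float)
```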
Besides adding an additional term to $F(u,\Lambda)$, we can also add a small random variable $\tau r_{i,j}$ onto $f_{i,j}$.
5. Numerical Experiments.
Because the performance of these two algorithms is similar and it is easier to obtain an approximate value for the number of damaged pixels, in this section we apply the blind inpainting algorithm using AOP to remove impulse noise and mixed Gaussian impulse noise.

As mentioned in the introduction, there are many different choices for $J(u)$, and the performance of this algorithm depends on which $J(u)$ is chosen. In order to make the comparison with other methods fair, we choose $J(u)$ to be the total variation for all methods used in the numerical experiments. Therefore the optimization problems differ from the ones proposed in the literature; for example, we implement the algorithm of [25] by replacing the wavelet frame with total variation [39]. If the regularization for the image $u$ is the total variation and $H$ is the identity operator, the subproblem of $u$ is
$$ \operatorname*{minimize}_{u} \ \frac{1}{2}\sum_{i,j} \Lambda_{i,j}(u_{i,j} - f_{i,j})^2 + \lambda_1 \mathrm{TV}(u), \tag{5.1} $$
which is the famous TV inpainting model [40, 41]. Numerous algorithms proposed for solving the TV denoising problem can be adapted to solve this TV inpainting problem with some necessary modifications. Some examples are algorithms based on duality [42, 43], augmented Lagrangian methods [44, 45], and split Bregman iterations [26, 46]. In the numerical experiments, we choose the split Bregman iteration to solve the subproblem.

To evaluate the quality of the restoration results, the peak signal to noise ratio (PSNR) is employed. Given an image $u \in [0,1]^{M \times N}$, the PSNR of the restoration result $\hat{u}$ is defined as follows:
$$ \mathrm{PSNR}(\hat{u}, u) = 10 \log_{10} \frac{MN}{\sum_{i,j} (\hat{u}_{i,j} - u_{i,j})^2}. \tag{5.2} $$
There are two important types of impulse noise: salt-and-pepper impulse noise and random-valued impulse noise. The pixels damaged by salt-and-pepper impulse noise are much easier to find, since their values are either $d_{\min}$ or $d_{\max}$. The adaptive median filter (AMF) has been widely used to accurately identify most pixels damaged by salt-and-pepper impulse noise (see, e.g., [47, 22]). The detection of pixels corrupted by random-valued impulse noise is much harder, because the value of a damaged pixel can be any number between $d_{\min}$ and $d_{\max}$. ACWMF was proposed to detect pixels damaged by random-valued impulse noise.

For the first experiment, salt-and-pepper impulse noise is considered. Because the pixels corrupted by this kind of impulse noise can take only two values, the detection of damaged pixels is easy. As an efficient method for detecting the damaged pixels, AMF is widely used in salt-and-pepper impulse noise removal. We compare total variation blind inpainting using AOP with AMF and TVL1, where TVL1 is the result of solving the following problem,
$$ \operatorname*{minimize}_{u} \ \sum_{i,j} |u_{i,j} - f_{i,j}| + \lambda \mathrm{TV}(u), \tag{5.3} $$
using split Bregman [46].
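The PSNR measure (5.2) for images with values in $[0,1]$ can be written directly:

```python
import numpy as np

def psnr(u_hat, u):
    """PSNR (5.2) for images scaled to [0, 1]:
    10 * log10(M*N / sum of squared errors)."""
    return 10.0 * np.log10(u_hat.size / np.sum((u_hat - u) ** 2))
```

For example, a uniform error of 0.1 on every pixel gives a PSNR of exactly 20 dB, independent of the image size.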
The parameter $\lambda$ is tuned to achieve the best quality of the restored images.

Four test images are corrupted by Gaussian noise of zero mean and standard deviations $\sigma = 5, 10,$
$15$; then we add salt-and-pepper impulse noise with different levels ($s = 30\%, \ldots$).
Salt-and-Pepper Impulse Noise

                 "Lena"                          "House"
sigma+s      Noisy    AMF      TVL1     AOP      Noisy    AMF    TVL1   AOP
0+30%        10.68    33.80    30.97

                 "Cameraman"                     "Boat"
sigma+s      Noisy    AMF      TVL1     AOP      Noisy    AMF    TVL1   AOP
0+30%        10.32    33.62    30.43

Table 5.1: PSNR (dB) for the denoising results of different algorithms on noisy images corrupted by salt-and-pepper impulse noise and mixed Gaussian impulse noise. $\sigma$ is the standard deviation of the Gaussian noise and $s$ is the level of salt-and-pepper impulse noise.

We do not compare AOP with two-stage approaches, because the detection of pixels damaged by salt-and-pepper impulse noise using AMF is very accurate, and $\Lambda$ will not change much during the iterations; thus the performance of AOP is similar to the two-stage approach that first detects the damaged pixels by AMF and then solves the total variation image inpainting problem, which is just the first iteration of AOP. Therefore, for the cases with salt-and-pepper impulse noise, blind inpainting does not improve much by adaptively updating $\Lambda$, and our focus will be on random-valued impulse noise.

For random-valued impulse noise removal, it is more difficult to detect the corrupted pixels, because they can take any value between $d_{\min}$ and $d_{\max}$. ACWMF and ROLD are used in two-stage approaches for detecting the damaged pixels [23, 24]. In this experiment, we compare AOP with ACWMF, TVL1, and two-stage approaches for random-valued impulse noise removal. The two-stage approach used here is just one step of AOP, and the parameter for the second stage (total variation image inpainting) is also tuned to achieve the best quality of the restored images. In addition, we compare with two other methods: one is Dong et al.'s method [25]
Top row: noisy images; Secondrow: the results restored by AMF; Third row: the results restored by TVL1; Bottomrow: the results restored by total variation blind inpainting using AOP.for solving problemminimize u,v (cid:88) i,j (( Hu ) i,j + v i,j − f i,j ) + λ T V ( u ) + λ (cid:88) i,j | v i,j | . (5.4)The parameters λ and λ are chosen to achieve the best quality. The other is thepenalty decomposition method (PD) by Lu and Zhang [33] for the problemminimize u,v (cid:80) i,j (( Hu ) i,j + v i,j − f i,j ) + λ T V ( u ) , subject to (cid:107) v (cid:107) ≤ L. (5.5) λ is tuned to achieve the best quality of the restoration images.Again four test images are corrupted by Gaussian noise of zero mean and standarddeviation ( σ = 10 , MING YAN ( s = 25% , Random-Valued Impulse Noise σ + s Noisy ACWMF TVL1 Two-Stage [25] PD AOP“Lena” 0+25% 15.25 30.53 31.75 32.65 31.09 33.42 “House” 0+25% 14.71 31.50 36.88 35.55 35.04 41.38 “Cameraman” 0+25% 14.48 28.93 31.24 31.75 30.54 32.72 “Boat” 0+25% 15.29 28.18 28.63 29.37 28.26 29.48
Table 5.2: PSNR (dB) of the denoising results of different algorithms for noisy images corrupted by random-valued impulse noise and mixed Gaussian impulse noise. σ is the standard deviation of the Gaussian noise and s is the level of random-valued impulse noise.

From Table 5.2, we can see that for random-valued impulse noise, the results from total variation blind inpainting using AOP are better than those of the other methods for all noise levels. The comparison of ACWMF and TVL1 shows that TVL1 outperforms ACWMF for all noise levels tested, because ACWMF misses quite a lot of real noise and falsely flags some noise-free pixels. TVL1 performs better than the two-stage approach when the noise level is high (s = 40% in the numerical experiments), because the accuracy of detecting pixels corrupted by random-valued impulse noise using ACWMF is very low when the noise level is high. The detection accuracy can be improved by our method via iteratively updating the binary matrix Λ, as shown in the comparison. PD outperforms the other methods except AOP because an ℓ₀ term is used in the problem. However, the problem PD solves is a discontinuous problem, and PD stops at a local minimum (u*, v*); u* may not be a local minimum of the problem in u only, whereas AOP converges to a local minimum of the problem in u only.

The visual comparison of some results is shown in Fig. 5.2. We can see noisy artifacts in the background of the images obtained by ACWMF, and the images obtained by TVL1 are blurred with some lost details. Images restored by total variation blind inpainting are smooth in the flat regions of the background, and the details are kept.

Fig. 5.2: Restoration results for images corrupted by Gaussian noise and random-valued impulse noise with σ = 10 and s = 25%. Top row: noisy images; second row: results restored by ACWMF; third row: results restored by TVL1; bottom row: results restored by total variation blind inpainting using AOP.

Both experiments show that our method, which iteratively updates the inpainting region and performs image inpainting, provides better results in identifying the outliers and recovering the damaged pixels. For salt-and-pepper impulse noise, because there are very accurate methods such as AMF for detecting the corrupted pixels, our method has performance similar to that of two-stage approaches. However, for random-valued impulse noise, no method can detect the corrupted pixels accurately, especially when the noise level is high, and our method of iteratively updating the set of corrupted pixels is a better choice.

At the end of this section, we compare the damaged pixels detected by ACWMF with those obtained from AOP in Fig. 5.3 for the cameraman image. The damaged pixels are chosen randomly (s = 40%). The pixels shown in black are detected as damaged. The set obtained from AOP is also random and does not contain any information from the image, while the set detected by ACWMF still retains some features of the cameraman image.

Fig. 5.3: The damaged pixels detected by ACWMF (left column) and obtained from AOP (right column).
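The alternation described above, restoring the image on the currently trusted pixels and then re-estimating the corrupted set from the residual, can be sketched as follows. This is a schematic with our own function names, not the paper's exact solver: the diffusion-based restoration step is only a stand-in for a total variation solver, H is taken to be the identity, and L (the assumed number of corrupted pixels) is at least 1:

```python
import numpy as np

def restore_masked(f, lam, n_iter=200, step=0.2):
    """Stand-in for the TV inpainting step: diffuse the image while keeping
    it close to f on the trusted pixels (lam == 1); untrusted pixels (lam == 0)
    are filled in from their neighbors."""
    u = f.astype(float).copy()
    for _ in range(n_iter):
        # 5-point Laplacian with replicated borders
        up = np.roll(u, 1, axis=0);    up[0, :] = u[0, :]
        down = np.roll(u, -1, axis=0); down[-1, :] = u[-1, :]
        left = np.roll(u, 1, axis=1);  left[:, 0] = u[:, 0]
        right = np.roll(u, -1, axis=1); right[:, -1] = u[:, -1]
        lap = up + down + left + right - 4.0 * u
        u = u + step * (lap + lam * (f - u))  # smoothing + data fidelity
    return u

def aop_sketch(f, L, n_outer=5):
    """Alternate between restoring on the trusted set and re-estimating the
    L corrupted pixels as those with the largest residual (ties may flag
    slightly more than L pixels)."""
    lam = np.ones_like(f, dtype=float)        # start by trusting every pixel
    for _ in range(n_outer):
        u = restore_masked(f, lam)            # restore with current mask
        r = np.abs(u - f)                     # residual (H = identity here)
        thresh = np.partition(r.ravel(), -L)[-L]
        lam = (r < thresh).astype(float)      # zero out the L largest residuals
    u = restore_masked(f, lam)                # final restoration with final mask
    return u, lam
```

The key point, matching the discussion above, is that the mask Λ is re-derived from the current restoration at every outer iteration rather than being fixed by a one-shot detector.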
6. Conclusion.
This paper presents two general algorithms based on blind inpainting and ℓ₀ minimization for removing impulse noise. The difference between them lies in the treatment of the ℓ₀ term: (I) the ℓ₀ term is placed in the objective function; (II) the ℓ₀ term is placed in the constraint. Both problems can be solved by iteratively restoring the image and identifying the damaged pixels. The performance of the two methods is similar, and the connection between them is shown. The experiments also show that the proposed methods perform better than other methods. This simple idea can also be applied to other cases where the noise model is not Gaussian. Acknowledgment.
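The two treatments of the ℓ₀ term differ only in how the outlier variable v is updated from the current residual r = f − Hu: the penalty form leads to hard thresholding, while the constraint form leads to projection onto the set of L-sparse vectors. A minimal sketch of the two updates (our naming; the quadratic term is written without the usual 1/2 factor, so the threshold scaling is illustrative):

```python
import numpy as np

def v_update_penalty(r, lam2):
    """l0 penalty in the objective: argmin_v (v - r)^2 + lam2 * ||v||_0.
    Hard thresholding: keep an entry of r only where it is large enough."""
    v = r.copy()
    v[r ** 2 <= lam2] = 0.0
    return v

def v_update_constraint(r, L):
    """l0 term as a constraint ||v||_0 <= L: project r by keeping its L
    largest-magnitude entries and zeroing the rest."""
    v = np.zeros_like(r, dtype=float)
    if L > 0:
        idx = np.argpartition(np.abs(r).ravel(), -L)[-L:]
        v.ravel()[idx] = r.ravel()[idx]
    return v
```

The penalty update needs a threshold λ₂ to be chosen, while the constraint update needs an estimate L of the number of corrupted pixels; this mirrors the parameter choice that distinguishes the two algorithms.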
This work was supported by NSF Grant DMS-0714945 and by the Center for Domain-Specific Computing (CDSC) under the NSF Expeditions in Computing Award CCF-0926127. We would like to thank the anonymous referees for making several very helpful suggestions.
REFERENCES

[1] A. C. Bovik, Handbook of Image and Video Processing (Communications, Networking and Multimedia). Orlando, FL, USA: Academic Press, Inc., 2005.
[2] M. Nikolova, "A variational approach to remove outliers and impulse noise," Journal of Mathematical Imaging and Vision, vol. 20, pp. 99–120, 2004.
[3] L. Bar, N. A. Sochen, and N. Kiryati, "Image deblurring in the presence of salt-and-pepper noise," in Scale-Space, ser. Lecture Notes in Computer Science, R. Kimmel, N. A. Sochen, and J. Weickert, Eds., vol. 3459. Springer, 2005, pp. 107–118.
[4] L. Bar, N. Kiryati, and N. Sochen, "Image deblurring in the presence of impulsive noise," International Journal of Computer Vision, vol. 70, pp. 279–298, 2006.
[5] G. Gilboa and S. Osher, "Nonlocal operators with applications to image processing," Multiscale Modeling & Simulation, vol. 7, no. 3, pp. 1005–1028, 2008.
[6] A. N. Tikhonov and V. Y. Arsenin, Solutions of Ill-Posed Problems. Washington, D.C.: V. H. Winston & Sons; New York: John Wiley & Sons, 1977.
[7] D. Geman and G. Reynolds, "Constrained restoration and the recovery of discontinuities," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 3, pp. 367–383, 1992.
[8] L. Rudin, S. Osher, and E. Fatemi, "Nonlinear total variation based noise removal algorithms," Physica D, vol. 60, pp. 259–268, 1992.
[9] L. I. Rudin and S. Osher, "Total variation based image restoration with free local constraints," in Proceedings of 1st International Conference on Image Processing, vol. 1. IEEE Computer Society Press, 1994, pp. 31–35.
[10] J. Cai, R. H. Chan, and Z. Shen, "A framelet-based image inpainting algorithm," Applied and Computational Harmonic Analysis, vol. 24, pp. 131–149, 2008.
[11] Y. Li, L. Shen, D. Dai, and B. Suter, "Framelet algorithms for de-blurring images corrupted by impulse plus Gaussian noise," IEEE Transactions on Image Processing, vol. 20, no. 7, pp. 1822–1837, 2011.
[12] R. Chan, C. Hu, and M. Nikolova, "An iterative procedure for removing random-valued impulse noise," IEEE Signal Processing Letters, vol. 11, no. 12, pp. 921–924, 2004.
[13] R. H. Chan, C.-W. Ho, and M. Nikolova, "Salt-and-pepper noise removal by median-type noise detectors and detail-preserving regularization," IEEE Transactions on Image Processing, vol. 14, no. 10, pp. 1479–1485, 2005.
[14] J. Cai, R. H. Chan, and M. Nikolova, "Two-phase approach for deblurring images corrupted by impulse plus Gaussian noise," Inverse Problems and Imaging, vol. 2, pp. 187–204, 2008.
[15] ——, "Fast two-phase image deblurring under impulse noise," Journal of Mathematical Imaging and Vision, vol. 36, pp. 46–53, 2010.
[16] Y. Xiao, T. Zeng, J. Yu, and M. Ng, "Restoration of images corrupted by mixed Gaussian-impulse noise via l1–l0 minimization," Pattern Recognition, vol. 44, pp. 1708–1720, 2011.
[17] P. Rodríguez, R. Rojas, and B. Wohlberg, "Mixed Gaussian-impulse noise image restoration via total variation," in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Kyoto, Japan, Mar. 2012, pp. 1077–1080.
[18] M. Bertalmio, G. Sapiro, V. Caselles, and C. Ballester, "Image inpainting," in Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, ser. SIGGRAPH '00, 2000, pp. 417–424.
[19] M. Bertalmio, L. Vese, G. Sapiro, and S. Osher, "Simultaneous structure and texture image inpainting," in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, 2003, p. 707.
[20] M. Bertalmío, L. A. Vese, G. Sapiro, and S. Osher, "Simultaneous structure and texture image inpainting," IEEE Transactions on Image Processing, vol. 12, no. 8, pp. 882–889, 2003.
[21] T. F. Chan, J. Shen, and H. Zhou, "Total variation wavelet inpainting," Journal of Mathematical Imaging and Vision, vol. 25, pp. 107–125, 2006.
[22] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd ed. Boston, MA, USA: Addison-Wesley Longman Publishing Co., Inc., 2001.
[23] T. Chen and H. R. Wu, "Adaptive impulse detection using center-weighted median filters," IEEE Signal Processing Letters, vol. 8, no. 1, pp. 1–3, 2001.
[24] Y. Dong, R. H. Chan, and S. Xu, "A detection statistic for random-valued impulse noise," IEEE Transactions on Image Processing, vol. 16, no. 4, pp. 1112–1120, 2007.
[25] B. Dong, H. Ji, J. Li, Z. Shen, and Y. Xu, "Wavelet frame based blind image inpainting," Applied and Computational Harmonic Analysis, vol. 32, pp. 268–279, 2012.
[26] T. Goldstein and S. Osher, "The split Bregman method for L1-regularized problems," SIAM Journal on Imaging Sciences, vol. 2, pp. 323–343, 2009.
[27] R. Chartrand and V. Staneva, "Restricted isometry properties and nonconvex compressive sensing," Inverse Problems, vol. 24, no. 3, p. 035020, 2008.
[28] H. Mohimani, M. Babaie-Zadeh, I. Gorodnitsky, and C. Jutten, "Sparse recovery using smoothed ℓ0 (SL0): convergence analysis," arXiv preprint arXiv:1001.5073, 2010.
[29] Z. Lu and Y. Zhang, "Penalty decomposition methods for l0-norm minimization," arXiv preprint arXiv:1008.5372, 2010.
[30] T. Blumensath and M. Davies, "Iterative thresholding for sparse approximations," Journal of Fourier Analysis and Applications, vol. 14, no. 5, pp. 629–654, 2008.
[31] Y. Zhang, B. Dong, and Z. Lu, "ℓ0 minimization of wavelet frame based image restoration," Mathematics of Computation, accepted, 2011.
[32] B. Dong and Y. Zhang, "An efficient algorithm for ℓ0 minimization in wavelet frame based image restoration," Journal of Scientific Computing, vol. 54, no. 2–3, pp. 350–368, 2013.
[33] Z. Lu and Y. Zhang, "Sparse approximation via penalty decomposition methods," arXiv preprint arXiv:1205.2334, 2012.
[34] P. Tseng, "Convergence of a block coordinate descent method for nondifferentiable minimization," Journal of Optimization Theory and Applications, pp. 475–494, 2001.
[35] T. Chan and J. Shen, "Variational image inpainting," Communications on Pure and Applied Mathematics, vol. 58, pp. 579–619, 2005.
[36] M. Yan, Y. Yang, and S. Osher, "Robust 1-bit compressive sensing using adaptive outlier pursuit," IEEE Transactions on Signal Processing, pp. 3868–3875, 2012.
[37] ——, "Exact low-rank matrix completion from sparsely corrupted entries via adaptive outlier pursuit," Journal of Scientific Computing, to appear, 2013.
[38] Y. Wang, A. Szlam, and G. Lerman, "Robust locally linear analysis with applications to image denoising and blind inpainting," SIAM Journal on Imaging Sciences, to appear.
[39] J. Yang, "A TV-based approach to blind image inpainting," in 2011 4th International Congress on Image and Signal Processing (CISP), vol. 2, 2011, pp. 779–781.
[40] T. F. Chan and J. Shen, "Mathematical models for local nontexture inpaintings," SIAM Journal on Applied Mathematics, vol. 62, no. 3, pp. 1019–1043, 2002.
[41] T. Chan and J. Shen, Image Processing and Analysis: Variational, PDE, Wavelet, and Stochastic Methods. Philadelphia, PA, USA: Society for Industrial and Applied Mathematics, 2005.
[42] A. Chambolle, "An algorithm for total variation minimization and applications," Journal of Mathematical Imaging and Vision, vol. 20, pp. 89–97, 2004.
[43] M. Zhu, S. J. Wright, and T. F. Chan, "Duality-based algorithms for total-variation-regularized image restoration," Computational Optimization and Applications, vol. 47, pp. 377–400, 2010.
[44] X.-C. Tai and C. Wu, "Augmented Lagrangian method, dual methods and split Bregman iteration for ROF model," in Proceedings of the Second International Conference on Scale Space and Variational Methods in Computer Vision, 2009, pp. 502–513.
[45] C. Wu and X.-C. Tai, "Augmented Lagrangian method, dual methods, and split Bregman iteration for ROF, vectorial TV, and high order models," SIAM Journal on Imaging Sciences, vol. 3, pp. 300–339, 2010.
[46] P. Getreuer, "tvreg v2: Variational imaging methods for denoising, deconvolution, inpainting, and segmentation," 2010.
[47] H. Hwang and R. A. Haddad, "Adaptive median filters: new algorithms and results," IEEE Transactions on Image Processing, vol. 4, no. 4, pp. 499–502, 1995.