"Plug-and-Play" Edge-Preserving Regularization
DONGHUI CHEN (School of Securities and Futures, Southwestern University of Finance and Economics, [email protected]), MISHA E. KILMER (Department of Mathematics, Tufts University, [email protected]), AND PER CHRISTIAN HANSEN (Department of Applied Mathematics and Computer Science, Technical University of Denmark, [email protected]). The work of P. C. Hansen is supported by grant 274-07-0065 from the Danish Research Council for Technology and Production Sciences.

Abstract.
In many inverse problems it is essential to use regularization methods that preserve edges in the reconstructions, and many reconstruction models have been developed for this task, such as the Total Variation (TV) approach. The associated algorithms are complex and require a good knowledge of large-scale optimization algorithms, and they involve certain tolerances that the user must choose. We present a simpler approach that relies only on standard computational building blocks in matrix computations, such as orthogonal transformations, preconditioned iterative solvers, Kronecker products, and the discrete cosine transform; hence the term "plug-and-play." We do not attempt to improve on TV reconstructions, but rather provide an easy-to-use approach to computing reconstructions with similar properties.
Key words. Image deblurring, inverse problems, p-norm regularization, projection algorithm

AMS subject classifications.
1. Introduction.
This paper is concerned with discretizations of linear ill-posed problems, which arise in many technical and scientific applications such as astronomical and medical imaging, geoscience, and non-destructive testing [7, 15]. The underlying model is $b = A\bar{x} + \eta$, where $b$ is the noisy data, the matrix $A$ (which is often structured or sparse) represents the forward operator, $\bar{x}$ is the exact solution, and $\eta$ denotes the unknown noise. We present a new large-scale regularization algorithm which is able to reproduce sharp gradients and edges in the solution. Our algorithm uses only standard linear-algebra building blocks and is therefore easy to implement and to tune to specific applications.

For ease of exposition, we focus on image deblurring problems involving $m \times m$ images $B$ (the blurred and noisy image) and $X$ (the reconstruction). With $b = \mathrm{vec}(B)$ and $x = \mathrm{vec}(X)$, both of length $n = m^2$, the $n \times n$ matrix $A$ is determined by the point-spread function (PSF) and corresponding boundary conditions [11]. This matrix is very ill-conditioned (or rank deficient), and computing the "naive solution" $A^{-1}b = \bar{x} + A^{-1}\eta$ (or, in the rank-deficient case, the minimum-norm solution) results in a reconstruction that is completely dominated by the inverted noise $A^{-1}\eta$.

Classical regularization methods, such as Tikhonov regularization or truncated SVD, damp the noise component in the solution by suppressing high-frequency components at the expense of smoothing the edges in the reconstruction. The same is true for regularizing iterations (such as CGLS or GMRES) based on computing solutions in a low-dimensional Krylov subspace. The underlying characteristic in these methods is that regularization is achieved by projecting the solution onto a low-dimensional signal subspace $\mathcal{S}_k$ spanned by $k$ low-frequency basis vectors, with the result that the high-frequency components are missing, hindering the reconstruction of sharp edges.

The projection approach is a powerful paradigm that can often be tailored to the particular problem. While these projected solutions may not always have satisfactory accuracy or details, they still contain a large component of the desired solution, namely, the low-frequency component, which can be reliably determined from the noisy data. What is missing is the high-frequency component, spanned by high-frequency basis vectors, and this component must be determined via our prior information about the desired solution.

This work describes an easy-to-use large-scale method for computing the needed high-frequency component, related to the prior information that the image must have smooth regions while the gradient of the reconstructed image is allowed to have some (but not too many) large values. This idea is similar in spirit to Total Variation regularization, where the gradient is required to be sparse; but by relaxing this constraint we arrive at problems that are simpler to solve.
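The effect of the inverted noise is easy to demonstrate numerically. The following sketch (Python/NumPy; the blurring matrix, test signal, and noise level are illustrative choices, not the paper's setup) builds a small one-dimensional analogue of $b = A\bar{x} + \eta$ and evaluates the naive solution.

```python
import numpy as np

# Small 1D analogue of b = A*xbar + eta with a Gaussian blurring matrix.
m = 64
t = np.arange(m)
A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 2.0) ** 2)
A /= A.sum(axis=1, keepdims=True)   # row-normalized PSF matrix (very ill-conditioned)

xbar = np.zeros(m)
xbar[20:40] = 1.0                   # piecewise constant solution with sharp edges
eta = 1e-3 * np.random.default_rng(0).standard_normal(m)
b = A @ xbar + eta                  # blurred, noisy data

x_naive = np.linalg.solve(A, b)     # the "naive solution" A^{-1} b
rel_err = np.linalg.norm(x_naive - xbar) / np.linalg.norm(xbar)
print(f"relative error of naive solution: {rel_err:.1e}")   # typically >> 1
```

Even at this tiny noise level the naive solution is useless, because the inverted noise $A^{-1}\eta$ dominates the reconstruction.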
The work can be considered as a continuation of earlier work [8, 10, 12] by one of us; it is also related to the decomposition approach in [1].

The remainder of this paper is organized as follows. In Section 2 we present the new edge-preserving algorithm and the convergence analysis. Section 3 discusses efficient numerical implementation issues. Section 4 presents numerical experiments with the new deblurring algorithm and comparisons with other state-of-the-art deblurring algorithms. The conclusions are presented in Section 5.
2. The Projection-Based Edge-Preserving Algorithm.
This section presents the main ideas of the algorithm, while the implementation details for large-scale problems are discussed in the next section.
Throughout, the matrix $L$ defines a discrete derivative or gradient of the solution (to be precisely defined later), and $\|\cdot\|_p$ denotes the vector $p$-norm. The underlying prior information is then that the solution's seminorm $\|Lx\|_p$, with $1 < p < 2$, is not large (which allows some amount of large gradients or edges in the reconstruction). The choice of the combination of $L$ and $p$ is important and, of course, somewhat problem dependent; the matrix $L$ used here is the $2m(m-1) \times n$ matrix given by

$$L = \begin{pmatrix} L_1 \otimes I \\ I \otimes L_1 \end{pmatrix}, \qquad L_1 = \begin{pmatrix} 1 & -1 & & \\ & \ddots & \ddots & \\ & & 1 & -1 \end{pmatrix} \in \mathbb{R}^{(m-1)\times m},$$

where $\otimes$ is the Kronecker product [8]. The one-dimensional null space $\mathcal{N}(L)$ of this matrix is spanned by the $n$-vector $e$ of all ones.

Assume $W_k \in \mathbb{R}^{n \times k}$ is a matrix with orthonormal columns that span the signal subspace $\mathcal{S}_k$, and let $W_0$ be the matrix containing the orthonormal basis vectors for the complementary space $\mathcal{S}_k^\perp$. The fundamental assumption is that the columns of $W_k$ represent "smooth" modes in which it is possible to distinguish a substantial component of the signal from the noise. In other words, with the model from Section 1, we assume that

$$\|W_k^T \bar{x}\| \gg \|W_k^T (A^{-1}\eta)\|. \tag{2.1}$$

This ensures that we can compute a good, but smooth, approximation to $\bar{x}$ as

$$x_k = W_k y_k, \qquad y_k = \operatorname*{argmin}_y \|(AW_k)y - b\|_2,$$

and we refer to the minimization problem for $y_k$ as the projected problem, which we assume is easy to solve. To obtain a reconstruction with the desired features, our strategy is then to compute the solution of the following modified projection problem

$$\min_{x \in \mathcal{B}} \|Lx\|_p \quad \text{with} \quad \mathcal{B} = \Bigl\{x : x = \operatorname*{argmin}_z \|(AW_kW_k^T)z - b\|_2\Bigr\}, \tag{2.2}$$

with $L$ defined above. As we shall see, we can express the solution to (2.2) as the low-frequency solution $x_k$ plus a high-frequency correction.

Eldén [5] provides an explicit solution of (2.2) for the case $p = 2$ and proves the uniqueness condition for the minimizer. The MTSVD algorithm [12] corresponds to the case where $p = 2$ and $W_k$ consists of the first $k$ right singular vectors, while the PP-TSVD algorithm [10] and its 2D extension [8] correspond to the same $W_k$ and $p = 1$. In this work, we extend these results by solving (2.2) for $1 < p < 2$ and for different choices of $W_k$.
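For concreteness, the operator $L$ can be assembled from sparse Kronecker products in a few lines. This is a minimal sketch (the function name is ours), not the structure-exploiting representation one would use at scale.

```python
import numpy as np
import scipy.sparse as sp

def first_derivative_operator(m):
    """Assemble L = [L1 kron I; I kron L1] for m-by-m images (a sketch)."""
    # L1 is the (m-1) x m bidiagonal first-derivative matrix with rows (1, -1).
    L1 = sp.eye(m - 1, m, k=0) - sp.eye(m - 1, m, k=1)
    I = sp.identity(m, format="csr")
    return sp.vstack([sp.kron(L1, I), sp.kron(I, L1)]).tocsr()

m = 4
L = first_derivative_operator(m)
print(L.shape)                        # (2*m*(m-1), m**2) = (24, 16)
print(abs(L @ np.ones(m * m)).max())  # 0.0: the all-ones vector e spans N(L)
```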
Below we present results that give conditions for the existence and uniqueness of the solution to (2.2).

LEMMA 2.1. The linear $p$-norm problem

$$\operatorname*{argmin}_x \|Ax - b\|_p^p, \qquad p > 1, \tag{2.3}$$

has a unique minimizer if and only if $A$ has full column rank.

Proof. The function $\|x\|_p^p$ is strictly convex for $p > 1$, or equivalently, the Hessian $H(x)$ of $\|x\|_p^p$ is positive definite for all $x$. This implies that $\|Ax - b\|_p^p$ is strictly convex (or equivalently, its Hessian $A^*H(x)A$ is positive definite for all $x$) if and only if $A$ has full column rank. It follows from strict convexity that the minimizer is unique.
THEOREM 2.2. The modified projection problem (2.2) has a unique minimizer if and only if $\mathcal{N}(AW_kW_k^T) \cap \mathcal{N}(L) = \{0\}$.

Proof. From [5], the constraint set $\mathcal{B}$ in (2.2) can be written as

$$\mathcal{B} = \{x : x = (AW_kW_k^T)^\dagger b + Px', \ x' \text{ arbitrary}\},$$

where $\dagger$ denotes the Moore-Penrose pseudoinverse [2] and $P = I - (AW_kW_k^T)^\dagger(AW_kW_k^T)$ is the orthogonal projector onto $\mathcal{N}(AW_kW_k^T)$. Let $\tilde{b} = (AW_kW_k^T)^\dagger b$. Solving the constrained minimization (2.2) is equivalent to solving the following unconstrained problem

$$\min_{x'} \|LPx' - (-L\tilde{b})\|_p.$$

By Lemma 2.1, the above minimization problem has a unique solution if and only if $\mathcal{N}(LP) = \{0\}$. This is true for $P = I - (AW_kW_k^T)^\dagger(AW_kW_k^T)$, the projector onto $\mathcal{N}(AW_kW_k^T)$, if and only if $\mathcal{N}(AW_kW_k^T) \cap \mathcal{N}(L) = \{0\}$. (We thank Martin S. Andersen for help with this proof.)

It follows from the proof of Theorem 2.2 that we can solve the modified projection problem (2.2) in two steps. We first compute an approximate solution $x_k \in \mathcal{S}_k$ that contains the smooth components, and then we compute the edge-correction component $x_0$ in the orthogonal complement $\mathcal{S}_k^\perp$. As a result,

$$x = x_k + x_0 = W_k y_k + W_0 y_0,$$

where $y_k$ is the solution to the projected problem, and $y_0$ is the solution to an associated $p$-norm problem. These two solutions are computed sequentially, as shown in the EPP Algorithm 1. From Lemma 2.1, a sufficient condition for the uniqueness of $x$ is that both $AW_k$ and $LW_0$ have full column rank, for then (2.4) and (2.5) in the EPP Algorithm have unique solutions $y_k$ and $y_0$, respectively. In principle, we can choose any subspace $\mathcal{S}_k$ and its orthogonal complement $\mathcal{S}_k^\perp$ with corresponding $W_k$ and $W_0$. In practice, however, in order to have a useful and efficient numerical implementation, we must choose suitable basis vectors for $\mathcal{S}_k$ with the following requirements:
• The matrix $W_k$ must separate signal from noise according to (2.1).
• The matrices $AW_k$ and $LW_0$ must have full column rank.
• There are efficient algorithms to compute multiplications with $W_k$ and $W_0$ and their transposes.
Algorithm 1
Edge-Preserving Projection (EPP) Algorithm

1. Compute the smooth component $x_k = W_k y_k$ using the projected problem
$$y_k = \operatorname*{argmin}_y \|(AW_k)y - b\|_2. \tag{2.4}$$
2. Compute the correction component $x_0 = W_0 y_0$ using the $p$-norm problem
$$y_0 = \operatorname*{argmin}_y \|(LW_0)y - (-LW_k y_k)\|_p. \tag{2.5}$$
3. The regularized solution is then
$$x = W_k y_k + W_0 y_0. \tag{2.6}$$

The MTSVD and PP-TSVD algorithms proposed in [8, 10, 12] use the first $k$ right singular vectors as the basis vectors for $\mathcal{S}_k$. In this case we have the following result.
THEOREM 2.3. Assume that $W_k = [v_1, v_2, \cdots, v_k]$, where the $v_i$ are right singular vectors of $A$ corresponding to nonzero singular values. Then the modified projection problem (2.2) has a unique solution if and only if $e \notin \mathcal{N}(AW_kW_k^T) = \operatorname{range}(W_0)$.

Proof. Since $W_kW_k^T$ is the orthogonal projector onto $\operatorname{range}(W_k)$ it follows that

$$\mathcal{N}(AW_kW_k^T) = \operatorname{range}(W_0) = \operatorname{span}\{v_{k+1}, \ldots, v_n\},$$

and the requirement from Theorem 2.2 becomes $\operatorname{range}(W_0) \cap \operatorname{span}\{e\} = \{0\}$, which is clearly satisfied if $e \notin \operatorname{range}(W_0)$.

For blurring operators, the SVD-based subspace $\mathcal{S}_k$ contains low-frequency components, while $\mathcal{S}_k^\perp$ contains relatively high-frequency components. It is therefore very likely that the projection of $e$ onto $\mathcal{S}_k$ is not zero, and in fact this is easy to check.

Another suitable set of basis vectors for $\mathcal{S}_k$ are those associated with spectral transforms such as the discrete sine or cosine transforms (DST or DCT) and their multidimensional extensions [9, 11]. Recall that for 1D signals of length $m$, the orthogonal DCT matrix $C$ has elements

$$c_{ij} = \begin{cases} \sqrt{1/m} & \text{if } i = 0 \\[4pt] \sqrt{2/m}\,\cos\!\left(\dfrac{(2j+1)\,i\,\pi}{2m}\right) & \text{if } i > 0 \end{cases} \qquad \text{for } i, j = 0, 1, 2, \cdots, m-1.$$

The two-dimensional DCT matrix is the Kronecker product $C \otimes C$ of the above matrix [21]. The DCT basis vectors, which are the rows of the DCT matrix, have the desired spectral properties. Multiplications with $W_k$ and $W_0$ and their transposes are equivalent to computing either a DCT transform or its inverse, which is done by fast algorithms similar to the FFT.
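The following sketch illustrates this correspondence, using SciPy's fast DCT as a stand-in for whichever fast transform is available: the explicit matrix $C$ built from the element formula above matches `scipy.fft.dct`, and multiplication with $W_k^T$ reduces to a transform followed by truncation.

```python
import numpy as np
from scipy.fft import dct, idct

m = 8
# Explicit orthonormal DCT matrix C from the element formula above.
i, j = np.meshgrid(np.arange(m), np.arange(m), indexing="ij")
C = np.sqrt(2.0 / m) * np.cos((2 * j + 1) * i * np.pi / (2 * m))
C[0, :] = np.sqrt(1.0 / m)

x = np.random.default_rng(1).standard_normal(m)
assert np.allclose(C @ x, dct(x, norm="ortho"))    # C*x is a fast DCT
assert np.allclose(C.T @ dct(x, norm="ortho"), x)  # C^T undoes it (inverse DCT)

# W_k^T x for the first k DCT basis vectors: transform, then keep k coefficients.
k = 3
coeffs = dct(x, norm="ortho")[:k]
# W_k y: zero-pad the k coefficients, then apply the inverse transform.
xk = idct(np.concatenate([coeffs, np.zeros(m - k)]), norm="ortho")
```

For 2D images the same idea applies with the transform taken along both axes (e.g., `scipy.fft.dctn`), so $W_k$ never needs to be formed explicitly.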
For this basis we have the following result.

THEOREM 2.4. Let the columns of $W_k$ be the first $k$ 2D DCT basis vectors. Then the modified projection problem (2.2) has a unique solution if and only if $e \notin \mathcal{N}(A)$.

Proof. From the definition of the DCT it follows that $w_1 = e/\|e\|$ and hence $AW_kW_k^T e = Ae$, and therefore

$$\mathcal{N}(AW_kW_k^T) \cap \operatorname{span}\{e\} = \{0\} \iff Ae \neq 0 \iff e \notin \mathcal{N}(A).$$

We illustrate the use of the EPP algorithm with a one-dimensional test problem, which uses the coefficient matrix $A$ from the phillips test problem in [6] with dimension $n = 64$. The exact solution $\bar{x}$ is constructed to be piecewise constant, and the right-hand side is $b = A\bar{x}$ (no noise).

FIG. 2.1. Thin red lines: the piecewise constant exact solution. Thick blue lines: TSVD and EPP reconstructions; $L$ is a discrete approximation to the first derivative operator, and $p$ is fixed between 1 and 2.

FIG. 2.2. DCT-EPP solutions for four values of $p$ and the same $L$ as above.

Figure 2.1 shows regularized solutions for four values of $k$, computed with the TSVD algorithm and the EPP algorithm with the SVD and DCT bases. The matrix $L$ is an approximation to the first derivative operator, and we use a fixed value of $p$ between 1 and 2. The TSVD solutions are clearly too "smooth" to approximate the exact solution. On the other hand, once $k$ is large enough that the projected component $x_k$ in the EPP solution captures the overall structure of the solution, the EPP algorithm is capable of producing good approximations to $\bar{x}$ (we note that $x_k$ is identical to the TSVD solution for the SVD basis). Figure 2.2 shows DCT-EPP solutions using the same $L$ as above for four different values of $p$, thus illustrating how $p$ controls the smoothness of the EPP solution.
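The two-step structure of the EPP algorithm is straightforward to prototype for small dense problems. The sketch below uses the SVD basis and a plain fixed-point IRLS iteration for the $p$-norm step (a simplification of the line-search variant described in Section 3); the function names, iteration counts, and tolerances are our own illustrative choices.

```python
import numpy as np

def irls(A, b, p, iters=50, eps=1e-10):
    """Fixed-point IRLS sketch for min_y ||A y - b||_p, 1 < p < 2."""
    y = np.zeros(A.shape[1])
    for _ in range(iters):
        w = np.maximum(np.abs(b - A @ y), eps) ** ((p - 2) / 2)  # IRLS weights
        y, *_ = np.linalg.lstsq(w[:, None] * A, w * b, rcond=None)
    return y

def epp_svd(A, b, L, k, p):
    """Algorithm 1 with the SVD basis: smooth part plus p-norm edge correction."""
    _, _, Vt = np.linalg.svd(A)
    Wk, W0 = Vt[:k].T, Vt[k:].T                      # bases for S_k and S_k^perp
    yk, *_ = np.linalg.lstsq(A @ Wk, b, rcond=None)  # projected problem (2.4)
    y0 = irls(L @ W0, -L @ (Wk @ yk), p)             # correction problem (2.5)
    return Wk @ yk + W0 @ y0                         # regularized solution (2.6)
```

Computing a full SVD is of course only feasible for small problems; the next section describes how structure in $A$ is exploited at scale.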
3. Computational Issues and Numerical Implementations.
While the above analysis guarantees the existence and uniqueness of the solution to (2.2), it is critical to develop an efficient numerical implementation for large-scale problems, which must take the following three issues into account:
• efficiently construct, or perform operations with, the basis vectors for the subspace $\mathcal{S}_k$,
• robustly choose the optimal dimension $k$ of $\mathcal{S}_k$, and
• efficiently solve the $p$-norm minimization problem (2.5).
Algorithm 2
Iterative Reweighted Least Squares (IRLS) Algorithm for minimizing $f(\hat{x}) = \|\hat{A}\hat{x} - \hat{b}\|_p$

$\hat{x}_0 = 0$ (starting vector)
for $j = 0, 1, 2, \ldots$ do
    $\hat{r}_j = \hat{b} - \hat{A}\hat{x}_j$
    $D_j = \operatorname{diag}(|\hat{r}_j|^{(p-2)/2})$
    $z_j = \operatorname*{argmin}_z \|D_j(\hat{A}z - \hat{r}_j)\|_2$ (solved iteratively)
    $\alpha_j = \operatorname*{argmin}_\alpha f(\hat{x}_j + \alpha z_j)$ (line search)
    $\hat{x}_{j+1} = \hat{x}_j + \alpha_j z_j$
end for

The optimal subspace dimension $k$ can be computed by standard parameter-choice algorithms [7]; here we use the GCV method as explained in Section 4.
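A direct transcription of Algorithm 2 might look as follows. The dense least-squares solve and the bounded line search are stand-ins for the iterative solver and the (unspecified) line-search procedure, and the safeguard `eps` is our own addition to avoid infinite weights at zero residuals.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def irls_newton(Ahat, bhat, p, iters=30, eps=1e-10):
    """IRLS with line search for min_x ||Ahat x - bhat||_p (sketch of Algorithm 2)."""
    f = lambda x: np.sum(np.abs(bhat - Ahat @ x) ** p)
    x = np.zeros(Ahat.shape[1])
    for _ in range(iters):
        r = bhat - Ahat @ x                              # residual r_j
        d = np.maximum(np.abs(r), eps) ** ((p - 2) / 2)  # diagonal of D_j
        # Newton direction: the weighted least squares problem (3.1),
        # solved densely here, iteratively in the large-scale setting.
        z, *_ = np.linalg.lstsq(d[:, None] * Ahat, d * r, rcond=None)
        # Line search over a bounded interval (an illustrative choice).
        alpha = minimize_scalar(lambda a: f(x + a * z),
                                bounds=(0.0, 2.0), method="bounded").x
        x = x + alpha * z
    return x
```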
As discussed above, the singular vectors and the 2D DCT matrix can be used as the basis vectors for $\mathcal{S}_k$ and $\mathcal{S}_k^\perp$. Here we address numerical implementation issues with these choices.

For large-scale deblurring problems it is impossible to obtain $W_k = [v_1, \cdots, v_k]$ by computing the SVD of the blurring matrix $A$ without utilizing its structure. Fortunately, in many problems the underlying point-spread function is separable or can be approximated by a separable one [11, 14, 16, 21]. Hence, the blurring matrix $A$ can be represented as a Kronecker product $A_1 \otimes A_2$. Given the SVDs of the two matrices, $A_1 = U_1\Sigma_1V_1^T$ and $A_2 = U_2\Sigma_2V_2^T$, the right singular matrix of $A$ is (or can be approximated by) $V = \Pi(V_1 \otimes V_2)$, where the permutation matrix $\Pi$ ensures that the ordering of the singular vectors is in accordance with decreasing singular values (i.e., the diagonal elements of $\Pi(\Sigma_1 \otimes \Sigma_2)$).

For the DCT basis, multiplication with the $m \times m$ DCT matrix $C$ is implemented in a very efficient way using the FFT algorithm, requiring only $O(m \log m)$ operations, and a similar fast algorithm is available for the 2D DCT. The multiplications with $W_k$ and its transpose are equivalent to applying either the DCT or its inverse. Therefore, it is unnecessary to form the matrix $W_k$ explicitly.

The key to the success of the EPP Algorithm is an efficient solver for the $p$-norm minimization problem (2.5). A standard and robust approach is to use the iteratively reweighted least squares (IRLS) method [2, 17, 23], which is identical to Newton's method with line search. This approach reduces the $p$-norm problem to the solution of a sequence of weighted least squares problems, which can be solved using iterative solvers. Osborne [17] shows that the IRLS method is convergent for $1 < p < 2$.

For convenience, we briefly summarize the IRLS algorithm for solving the linear $p$-norm problem $\min_{\hat{x}} \|\hat{A}\hat{x} - \hat{b}\|_p$. We denote the $j$th iteration vector by $\hat{x}_j$, and we introduce the diagonal matrix $D_j$ determined by the $j$th residual vector $\hat{r}_j = \hat{b} - \hat{A}\hat{x}_j$ as

$$D_j = \operatorname{diag}\left(|\hat{A}\hat{x}_j - \hat{b}|^{(p-2)/2}\right).$$

The Newton search direction $z_j$ is identical to the solution of the weighted least squares problem

$$\min_z \|D_j(\hat{A}z - \hat{r}_j)\|_2. \tag{3.1}$$

For $1 < p < 2$, as the iteration vector $\hat{x}_j$ gets close to the solution, the diagonal elements in $D_j$ increase to infinity, and this tendency becomes stronger as $p$ approaches 1. Hence, the matrix $D_j\hat{A}$ in (3.1) becomes increasingly ill-conditioned as the iterations converge. It is therefore difficult to find a suitable preconditioner for the least squares problem (3.1).

Consider the corresponding normal equations

$$\hat{A}^T D_j^2 \hat{A} z_j = \hat{A}^T D_j^2 \hat{r}_j = \hat{A}^T D_j^2 (\hat{b} - \hat{A}\hat{x}_j),$$

and define the new variable $q_j = z_j + \hat{x}_j$. The normal equations can then be rewritten as

$$\hat{A}^T D_j^2 \hat{A} q_j = \hat{A}^T D_j^2 \hat{b}. \tag{3.2}$$

The benefit of the above transformation is that the right-hand side in the new system (3.2) depends on iteration $j$ only through $D_j$, which is known in the $j$th iteration.

For our algorithm, it follows from (2.5) that $\hat{A} = LW_0$ and $\hat{b} = -LW_k y_k$, so (3.2) can be rewritten as

$$W_0^T(L^T D_j^2 L)W_0\, q_j = -W_0^T(L^T D_j^2 L)W_k y_k. \tag{3.3}$$
Since the condition number increases as the IRLS algorithm converges to the solution, preconditioning is necessary in solving (3.3). Recall that $L$ is a gradient operator, and hence $L^T D_j^2 L$ represents a diffusion operator with large discontinuities in the diffusion coefficients. Algebraic multigrid (AMG) methods are robust when the diffusion coefficients are discontinuous and vary widely [18, 20]. Therefore, we employ an AMG method to develop a right preconditioner $M$ for (3.3). The right-preconditioned problem is

$$\bigl[W_0^T(L^T D_j^2 L)W_0 M\bigr]\tilde{q}_j = -W_0^T(L^T D_j^2 L)W_k y_k, \tag{3.4}$$

where $q_j = M\tilde{q}_j$. In our implementation, given a vector $v$, the matrix-vector multiplication $w = Mv$ is implemented in three steps:
1. Compute $\tilde{v} = W_0 v$.
2. Use the AMG method to solve $(L^T D_j^2 L)u = \tilde{v}$ for $u$.
3. Compute the result $w = W_0^T u$.

The matrix $W_0^T(L^T D_j^2 L)W_0$ is symmetric positive definite if $D_j$ is positive definite. If not, positive definiteness of $D_j$ can be guaranteed by adding a small positive number to the diagonal elements. A first thought may be to solve (3.4) with the conjugate gradient (CG) method; but this requires that the preconditioner $M$ is also symmetric and positive definite. In our implementation we use the Gauss-Seidel method in the pre- and post-relaxations of the AMG method, and hence the AMG residual reduction operator is not symmetric [18], and consequently the preconditioner is not symmetric. Instead we solve (3.3) with the GMRES algorithm with right AMG preconditioning [19].
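The sketch below mimics one inner solve of (3.3) with preconditioned GMRES in SciPy. A true AMG cycle is beyond a few lines, so a sparse factorization of a slightly shifted $L^TD_j^2L$ stands in for the AMG solve of step 2 (the shift makes the stand-in nonsingular, since $Le = 0$); note also that SciPy's `gmres` applies its preconditioner on the left, whereas the paper uses right preconditioning, so this only illustrates the structure of the three-step preconditioner.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def correction_solve(L, W0, Wk_yk, d):
    """One IRLS step for (3.3): solve W0^T (L^T D^2 L) W0 q = rhs (a sketch)."""
    K = (L.T @ sp.diags(d ** 2) @ L).tocsc()   # diffusion-like operator L^T D^2 L
    n0 = W0.shape[1]
    A_op = spla.LinearOperator((n0, n0),
                               matvec=lambda v: W0.T @ (K @ (W0 @ v)))
    rhs = -(W0.T @ (K @ Wk_yk))
    # Stand-in for the AMG solve: factorize K + delta*I (K itself is singular).
    Ksolve = spla.factorized((K + 1e-8 * sp.identity(K.shape[0])).tocsc())
    M = spla.LinearOperator((n0, n0),          # steps 1-3 above: w = M v
                            matvec=lambda v: W0.T @ Ksolve(W0 @ v))
    q, info = spla.gmres(A_op, rhs, M=M)
    return q
```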
4. Numerical Results.
We present numerical experiments using the EPP algorithm, and we perform a brief comparison with Total Variation deblurring. To better visualize the impact of the high-frequency correction we use Matlab's colormap hot for the first example, which varies smoothly from black through shades of red, orange, and yellow, to white, as the intensity increases. Throughout we use the "cameraman" test image. All the numerical simulations are performed using Matlab R2009b on a Windows 7 x86 32-bit system. The C compiler used to build the AMG preconditioner MEX-files is Microsoft Visual Studio 2008.
FIG. 4.1. Gaussian PSF with $\sigma = 5$ (left) and out-of-focus PSF with $r = 5$ (right).

The "noise level" of a test image is defined as $\|\eta\|_2/\|b\|_2$. The quality of the restored images is measured by the relative error $\|x_{\mathrm{restored}} - \bar{x}\|_2/\|\bar{x}\|_2$ and by the MSSIM [22] (for which a larger value is better). In our experiments the test images are generated with two common types of PSFs, Gaussian blur and out-of-focus blur, and we use reflexive boundary conditions in the restorations. The elements of the Gaussian PSF are

$$p_{ij} = \exp\left(-\frac{(i-k)^2 + (j-\ell)^2}{2\sigma^2}\right),$$

and the elements of the out-of-focus PSF are

$$p_{ij} = \begin{cases} 1/(\pi r^2) & \text{if } (i-k)^2 + (j-\ell)^2 \le r^2 \\ 0 & \text{otherwise}, \end{cases}$$

where $(k, \ell)$ is the center of the PSF, and $\sigma$ and $r$ are parameters that determine the amount of blurring; see Fig. 4.1. Both are doubly symmetric, but the latter is not separable, and therefore it is not possible to efficiently compute the exact SVD of the corresponding matrix $A$.
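Both PSFs are simple to generate; here is a sketch (function names are ours, and we normalize each PSF to unit sum, a common convention not spelled out above).

```python
import numpy as np

def gaussian_psf(m, sigma, center=None):
    """m-by-m Gaussian PSF from the formula above."""
    k, l = center if center is not None else (m // 2, m // 2)
    i, j = np.meshgrid(np.arange(m), np.arange(m), indexing="ij")
    P = np.exp(-((i - k) ** 2 + (j - l) ** 2) / (2.0 * sigma ** 2))
    return P / P.sum()

def out_of_focus_psf(m, r, center=None):
    """m-by-m out-of-focus PSF: constant 1/(pi r^2) on a disk of radius r."""
    k, l = center if center is not None else (m // 2, m // 2)
    i, j = np.meshgrid(np.arange(m), np.arange(m), indexing="ij")
    disk = (i - k) ** 2 + (j - l) ** 2 <= r ** 2
    P = np.where(disk, 1.0 / (np.pi * r ** 2), 0.0)
    return P / P.sum()
```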
To compute the subspace dimension $k$ we use the GCV method [7], which can be implemented very efficiently when the singular vectors or the DCT basis are used. The GCV function can be expressed as

$$G(k) = \frac{\sum_{i=k+1}^{n} \beta_i^2}{(n-k)^2}, \qquad k = 1, 2, \cdots, n-1,$$

where $\beta_i = w_i^T b$ (with $w_i$ being either the left singular vectors $u_i$ or the DCT basis vectors). As noted in [4], the GCV method very often provides a parameter that is too large. Also, in some of our experiments we assume that the singular vectors are approximated by a Kronecker product, which might not be accurate. Hence, we choose $k$ to be a fixed fraction of the output of the GCV algorithm; the fraction was chosen on the basis of numerous experiments.

The stopping criteria used in the iterative methods were chosen based on exhaustive experiments (see [3] for details) to balance computational time against the quality of the reconstruction. Results computed with smaller tolerances than those used here are qualitatively similar to those computed with the chosen tolerances, but the computational time is much longer. (We thank Eric de Sturler for pointing this out.)

FIG. 4.2. DCT-EPP algorithm, out-of-focus blur with $r = 5$. Left to right: blurred noisy image, low-frequency component $x_k$, high-frequency component $x_0$, and reconstruction.

FIG. 4.3. DCT-EPP algorithm, Gaussian blur with $\sigma = 5$.

FIG. 4.4. SVD-EPP algorithm, Gaussian blur with $\sigma = 5$.

In the EPP algorithm the norm parameter $p$ can be any number between 1 and 2. For smaller $p$, the solution tends to have sharper edges, but as $p$ gets closer to 1 the $p$-norm minimization in (2.5) becomes more ill-conditioned, requiring much more computational work, while there is very little improvement of the restored images. Hence, we show computed results for four values of $p$ between 1 and 2.

Table 4.1 shows the results of the restored out-of-focus blurred images using the DCT-EPP algorithm. The blur radius $r$ varies from 5 to 15 pixels, and the noise level varies from 1% to 10%. The table reports the computed truncation parameter $k$, the relative errors, and the MSSIM for both $x_k$ and the final restored image. Compared to the restored quality of $x_k$, the latter image has larger MSSIM and smaller relative error, demonstrating that the correction step (2.5) improves the image quality. This is illustrated by the example in Figure 4.2. The restored images computed using smaller $p$ are generally better than the results using larger $p$. The corresponding results for Gaussian blur with $\sigma = 5$, 10, 15, still using the DCT-EPP algorithm, are also shown in Table 4.1; see Fig. 4.3 for an example.

Table 4.2 summarizes the results for the SVD-EPP algorithm, again for out-of-focus and Gaussian blur; see also Fig. 4.4. For the Gaussian blur, the performance is similar to the DCT case. The out-of-focus blur, however, is not separable. Therefore, we feed the SVD-EPP algorithm the approximate singular vectors obtained from a Kronecker-product approximation of $A$ (with Toeplitz blocks). As this particular Kronecker approximation to the true SVD is not sufficiently good, the algorithm performs poorly, whereas use of the DCT basis gives better quality reconstructions.
TABLE 4.1. Comparison of the quality of images restored by the DCT-EPP algorithm for four values of $p$. As expected, the smallest $p$ gives the best results. (Columns: $r$ or $\sigma$, noise level in %, $k$, relative error and MSSIM for $x_k$, and relative error and MSSIM of the restored image for each value of $p$; rows cover the out-of-focus and Gaussian PSFs.)

TABLE 4.2. Comparison of the quality of images restored by the SVD-EPP algorithm; similar to Table 4.1.
TABLE 4.3. Comparison of the restored images by the TV and DCT-EPP algorithms. There is no dramatic difference between the performance of the two algorithms. (Columns: $r$ or $\sigma$, noise level in %, relative error and MSSIM for EPP and TV, for the out-of-focus and Gaussian PSFs.)
We conclude by briefly comparing the performance of the EPP algorithm with the TV deblurring algorithm, using the algorithm proposed in [13]. In order to avoid giving our algorithm an advantage, the parameters of the TV algorithm were chosen to optimize the MSSIM (which obviously requires the true image). As shown in Table 4.3, the images restored by the TV method have quality comparable to those computed by the EPP algorithm, as measured by both the relative error and the MSSIM. This demonstrates that the EPP algorithm can be a computationally attractive alternative to TV.

FIG. 4.5. Left to right: original rice grain image, blurred noisy image (Gaussian blur with $\sigma = 3$ and noise level 0.03), DCT-EPP reconstruction (PSNR = 24.3, MSSIM = 0.72), and TV reconstruction (PSNR = 24.9, MSSIM = 0.74).

To illustrate that the EPP and TV reconstructions have different features (due to the different reconstruction models) we consider MATLAB's "rice grain" image shown in Fig. 4.5 together with a Gaussian-blurred version and the DCT-EPP and TV reconstructions. The TV reconstruction has sharper edges, which comes at the expense of a more complicated algorithm with much larger computing time.
5. Conclusions.
We developed a new computational framework for projection-based edge-preserving regularization, and proved the existence and uniqueness of the solution. Our algorithm uses standard computational building blocks and is therefore easy to implement and tune to specific applications. Our experimental results for image deblurring show that the reconstructions are better than those from standard projection algorithms, and they are competitive with those from other edge-preserving restoration techniques.
REFERENCES

[1] J. Baglama and L. Reichel, Decomposition methods for large linear discrete ill-posed problems, J. Comput. Appl. Math., 198 (2005), pp. 332–343.
[2] Å. Björck, Numerical Methods for Least Squares Problems, SIAM, Philadelphia, 1996.
[3] D. Chen, Numerical Methods for Edge-Preserving Image Restoration, PhD thesis, Tufts University, 2012.
[4] J. Chung, J. Nagy, and D. P. O'Leary, A weighted-GCV method for Lanczos-hybrid regularization, Electron. Trans. Numer. Anal., 28 (2008), pp. 149–167.
[5] L. Eldén, A weighted pseudoinverse, generalized singular values, and constrained least squares problems, BIT Numerical Mathematics, 22 (1982), pp. 487–502.
[6] P. C. Hansen, Regularization Tools version 4.0 for Matlab 7.3, Numer. Algo., 46 (2007), pp. 189–194.
[7] P. C. Hansen, Discrete Inverse Problems: Insight and Algorithms, SIAM, Philadelphia, 2010.
[8] P. C. Hansen, M. Jacobsen, J. M. Rasmussen, and H. Sørensen, The PP-TSVD algorithm for image reconstruction problems, in Methods and Applications of Inversion, Lecture Notes in Earth Science, Vol. 92, Springer, Berlin, 2000, pp. 171–186.
[9] P. C. Hansen and T. K. Jensen, Noise propagation in regularizing iterations for image deblurring, Electron. Trans. Numer. Anal., 31 (2008), pp. 204–220.
[10] P. C. Hansen and K. Mosegaard, Piecewise polynomial solutions without a priori break points, Numer. Linear Algebra Appl., 3 (1996), pp. 513–524.
[11] P. C. Hansen, J. Nagy, and D. P. O'Leary, Deblurring Images: Matrices, Spectra, and Filtering, SIAM, Philadelphia, 2006.
[12] P. C. Hansen, T. Sekii, and H. Shibahashi, The modified truncated SVD method for regularization in general form, SIAM J. Sci. Stat. Comput., 13 (1992), pp. 1142–1150.
[13] Y. Huang, M. Ng, and Y. Wen, A fast total variation minimization method for image restoration, Multiscale Model. Simul., 7 (2008), pp. 774–795.
[14] J. Kamm and J. Nagy, Optimal Kronecker product approximation of block Toeplitz matrices, SIAM J. Matrix Anal. Appl., 22 (2000).
[15] J. Mueller and S. Siltanen, Linear and Nonlinear Inverse Problems with Practical Applications, SIAM, Philadelphia, 2012.
[16] J. Nagy, M. Ng, and L. Perrone, Kronecker product approximations for image restoration with reflexive boundary conditions, SIAM J. Matrix Anal. Appl., 25 (2003), pp. 829–841.
[17] M. Osborne, Finite Algorithms in Optimization and Data Analysis, Wiley, Chichester, 1985.
[18] J. Ruge and K. Stüben, Algebraic multigrid, in Multigrid Methods, S. McCormick, ed., SIAM, Philadelphia, 1987, pp. 73–130.
[19] Y. Saad and M. Schultz, GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems, SIAM J. Sci. Stat. Comput., 7 (1986), pp. 856–869.
[20] U. Trottenberg, C. Oosterlee, and A. Schüller, Multigrid, Academic Press, San Diego, 2001.
[21] C. Van Loan and N. Pitsianis, Approximation with Kronecker products, in Linear Algebra for Large Scale and Real Time Applications, Kluwer, 1993, pp. 293–314.
[22] Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli, Image quality assessment: from error visibility to structural similarity, IEEE Trans. Image Process., 13 (2004), pp. 600–612.
[23] R. Wolke and H. Schwetlick, Iteratively reweighted least squares: algorithms, convergence analysis, and numerical comparisons, SIAM J. Sci. Stat. Comput., 9 (1988), pp. 907–921.