Graph Laplacian for image deblurring
Davide Bianchi, Alessandro Buccini, Marco Donatelli, Emma Randazzo
arXiv preprint [math.NA]
Abstract.
Image deblurring is relevant in many fields of science and engineering. Many different approaches have been proposed to solve this problem and, among them, variational methods are extremely popular. These approaches replace the original problem with a minimization problem whose functional is composed of two terms, a data-fidelity term and a regularization term. In this paper we propose, within the classical ℓ²-ℓ¹ minimization with a non-negativity constraint on the solution, the use of the graph Laplacian as regularization operator. Firstly, we describe how to construct the graph Laplacian from the observed noisy and blurred image. Once the graph Laplacian has been built, we solve the proposed minimization problem efficiently, splitting the convolution operator and the graph Laplacian by the alternating direction method of multipliers (ADMM). Some selected numerical examples show the good performance of the proposed algorithm.

1. Introduction
We are concerned with the solution of the image deblurring problem; see, e.g., [31] for more details on image deblurring. We will assume that the blurring is space-invariant, obtaining a linear system of equations

A x = b,

where x ∈ R^N and b ∈ R^M are samplings of the unknown image to recover and of the blurred image, respectively, while A ∈ R^{M×N} is a structured matrix (see below) whose singular values decay rapidly and with no significant gap. The discretization process, along with measurement errors, introduces some perturbations in the data, namely η ∈ R^M with ‖η‖ = δ in the Euclidean norm, leading to the system

A x = b + η = b^δ.

The perturbations η are often referred to as noise. Since, in general, η ∉ R(A), we would like to solve the least-squares problem

(1.1)   argmin_x ‖A x − b^δ‖²,

where ‖·‖ is the Euclidean norm. Let A† denote the Moore–Penrose pseudo-inverse of A. The naive solution of (1.1), A† b^δ, is usually a poor approximation of the desired solution x† = A† b; see, e.g., [24, 30] for more details. This is due to the fact that A is severely ill-conditioned and the solution of (1.1) is extremely sensitive to the presence of noise in the data. To compute an approximation of x† we need to use regularization methods, which aim at reducing this sensitivity. One of the most popular regularization methods is Tikhonov regularization, where the original minimization problem is replaced by one of the form

(1.2)   argmin_x ‖A x − b^δ‖² + µ‖L x‖²,

where µ > 0 and L ∈ R^{p×N} is an operator such that N(A) ∩ N(L) = {0}. The matrix L is the so-called regularization operator and its role is to enforce some a-priori knowledge on the reconstruction. If A is the discretization of an integral operator, then usually L is chosen to be a discretization of either the first or the second derivative; see, e.g., [23].
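As a small illustration of this sensitivity, the following hedged Python sketch compares the naive solution A†b^δ with the Tikhonov solution (1.2) (here with L = I) on a one-dimensional periodic deconvolution problem; all sizes and parameter values are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
# circulant (periodic) Gaussian blur: singular values decay fast, with no gap
t = np.minimum(np.arange(n), n - np.arange(n)).astype(float)
kernel = np.exp(-t ** 2 / 8.0)
kernel /= kernel.sum()
A = np.stack([np.roll(kernel, k) for k in range(n)])   # symmetric circulant matrix

x_true = np.zeros(n)
x_true[20:40] = 1.0                                    # piecewise-constant signal
b = A @ x_true
b_delta = b + 1e-3 * np.linalg.norm(b) / np.sqrt(n) * rng.standard_normal(n)

naive = np.linalg.pinv(A) @ b_delta                    # A† b^δ: noise is amplified
mu = 1e-3
tikh = np.linalg.solve(A.T @ A + mu * np.eye(n), A.T @ b_delta)  # (1.2) with L = I

err = lambda v: np.linalg.norm(v - x_true) / np.linalg.norm(x_true)
print(err(naive), err(tikh))   # the naive error is orders of magnitude larger
```

Even with 0.1% noise the unregularized solution is useless, because the smallest singular values of A (here of order 1e-9) amplify the noise, while the Tikhonov term keeps the amplification bounded.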
The formulation (1.2) can be extended to a general ℓ^p-norm

(1.3)   argmin_x ‖A x − b^δ‖² + µ‖L x‖_p^p,

where ‖x‖_p^p = Σ_{i=1}^n |x_i|^p for p > 0. Note that, for p < 1, the map x ↦ ‖x‖_p is not a norm; see, e.g., [12, 14, 16, 21, 25, 32, 36]. In this paper, we consider the graph Laplacian for L in (1.3) with p = 1. Therefore, our minimization problem is of the form

(1.4)   argmin_{x ≥ 0} ‖A x − b^δ‖² + µ‖L x‖₁,

where we have introduced the non-negativity constraint since images cannot attain negative values.

Recently, the graph Laplacian of a network built from the image itself has been proposed as regularization operator, mainly for image denoising; see, e.g., [34, 35, 37, 39, 40, 42, 44]. In this paper, we build such an operator and use it in (1.4) to reconstruct blurred images. We propose an automated procedure for the construction of an appropriate graph Laplacian and show its performance in some selected numerical examples. We compare the graph Laplacian with the standard Total Variation (TV) operator (see [41]) and show that our proposal can lead to substantial improvements in the quality of the reconstructed images.

This paper is structured as follows: in Section 2 we recall the definition of the Laplacian of a given graph and we construct the one that we use in the following. Section 3 presents our algorithmic proposal for the solution of (1.4) and Section 4 contains some selected numerical experiments. Finally, we draw some conclusions in Section 5.

1.1. Notation.
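The remark that ‖·‖_p fails to be a norm for p < 1 can be checked numerically; in the following minimal sketch the triangle inequality fails for p = 1/2:

```python
import numpy as np

def lp(x, p):
    # (sum_i |x_i|^p)^(1/p); a true norm only for p >= 1
    return np.sum(np.abs(x) ** p) ** (1.0 / p)

x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])
p = 0.5
# triangle inequality fails: ||x+y||_{1/2} = 4 > 2 = ||x||_{1/2} + ||y||_{1/2}
print(lp(x + y, p), lp(x, p) + lp(y, p))
```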
Discretized images are made up of several pixels in the plane, and therefore are well represented by nonnegative two-dimensional discrete functions x : {1, …, n₁} × {1, …, n₂} → [0, +∞), x(i₁, i₂) = x_{i₁,i₂} ≥ 0 for i₁ = 1, …, n₁, i₂ = 1, …, n₂. Clearly, this choice is not unique. Since all the operations and analyses we will produce are invariant with respect to the ordering, for the sake of notational simplicity we fix n₁ = n₂ = n = √N and consider the lexicographic one-dimensional ordering, that is, (i₁, i₂) =: i < j := (j₁, j₂) if i₁ < j₁, or i₁ = j₁ and i₂ < j₂. With this choice, an image reads as x : {1, …, N} → [0, +∞), x(i) = x_i ≥ 0 for i = 1, …, N, and we say that x is the vectorization of a square image.

2. Construction of the graph Laplacian
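In NumPy the lexicographic vectorization and its inverse correspond to a row-major reshape; a small sketch (with 0-based indices, whereas the text uses 1-based ones):

```python
import numpy as np

n = 4
image = np.arange(n * n, dtype=float).reshape(n, n)  # x(i1, i2)

x = image.reshape(-1)           # lexicographic vectorization: i = i1*n + i2
i1, i2 = 2, 3
i = i1 * n + i2
print(x[i] == image[i1, i2])    # True

# inverse map: recover (i1, i2) from the linear index i
print(np.unravel_index(i, (n, n)))  # (2, 3)
```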
In this section, we first describe how to construct the Laplacian of a given weighted graph. Then we show how to build an appropriate graph, i.e., a graph whose Laplacian is a "good" regularization operator, given a good approximation of the exact image x†. Finally, we provide an algorithm to construct our L given the problem (1.1).

Given a countable measure space (V, ν), where ν is a positive measure, a symmetric nonnegative function ω : V × V → [0, +∞) with zero diagonal is called an undirected graph on V. The elements i, j of the set V are called vertices, and two vertices are connected if ω(i, j) > 0; in this case ω(i, j) is called the weight associated to the edge {i, j}. For a modern analytic introduction to graph theory we refer to [33]. If V is a finite set of N elements, then the graph ω can be uniquely represented, up to permutations, by the adjacency matrix Ω ∈ R^{N×N},

(Ω)_{i,j} := ω(i, j).

The linear operator L_ω : C(V) → C(V), acting on the space C(V) := {x : V → R} ≃ R^N via

L_ω x(i) := (1/ν(i)) Σ_j ω(i, j)(x(i) − x(j)),

is the graph Laplacian on C(V) associated to the graph ω. It is a symmetric operator with respect to the inner product

⟨x, y⟩ := Σ_i x(i) y(i) ν(i).

In many applications, a quite standard choice for the measure ν is the degree function ν = deg, defined by

deg(i) := Σ_j ω(i, j).

It measures the whole intensity of the weights associated to the vertex i. Clearly, this choice makes L_ω not symmetric with respect to the standard Euclidean inner product. A good compromise is to choose the homogeneous measure associated to the Frobenius norm of Ω, i.e., ν(i) ≡ ‖Ω‖_F. Writing D for the diagonal matrix such that (D)_{i,i} = deg(i), it is easy to check that ‖D‖ and ‖Ω‖_F are of comparable magnitude. Henceforth, we will assume ν ≡ ‖Ω‖_F. In matrix form, the graph Laplacian then reads

L_ω = (D − Ω)/‖Ω‖_F.

We wish to construct a graph ω so that L_ω can be used in (1.4). In principle we would like to construct ω such that

‖L_ω x†‖₁ ≈ 0.

To this aim, let x* be a good approximation of x†. Define ω as the weighted and undirected graph on V, whose nodes are the pixels of x† and whose weights are

ω(i, j) = e^{−(x*(i) − x*(j))²/σ}  if i ≠ j and ‖i − j‖_∞ ≤ R,  and ω(i, j) = 0 otherwise,

where σ > 0 and R ∈ N are user-defined parameters. Let us recall that we are using the one-dimensional lexicographic ordering, but the nodes of V represent points in R². Therefore, i = (i₁, i₂), j = (j₁, j₂), and ‖i − j‖_∞ = max{|i₁ − j₁|, |i₂ − j₂|}. Intuitively, the graph is constructed as follows: we connect two pixels if they are close enough and we weight their connection depending on how similar their values are. In particular, we give a strong connection to pixels that have similar values. The parameter R determines how large a neighborhood we consider for each pixel and σ determines how strong the connection between two close pixels should be.
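A possible implementation of this construction with SciPy sparse matrices is sketched below; the function name and the default values of R and sigma are illustrative, not those used in the paper.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def graph_laplacian(x_star, R=3, sigma=1e-2):
    """L_omega = (D - Omega)/||Omega||_F with weights
    omega(i,j) = exp(-(x*(i) - x*(j))^2 / sigma) for i != j, ||i - j||_inf <= R."""
    n1, n2 = x_star.shape
    N = n1 * n2
    v = x_star.reshape(-1)
    rows, cols, vals = [], [], []
    I1, I2 = np.meshgrid(np.arange(n1), np.arange(n2), indexing="ij")
    # connect each pixel to every other pixel in its (2R+1)x(2R+1) window
    for d1 in range(-R, R + 1):
        for d2 in range(-R, R + 1):
            if d1 == 0 and d2 == 0:
                continue
            J1, J2 = I1 + d1, I2 + d2
            m = (J1 >= 0) & (J1 < n1) & (J2 >= 0) & (J2 < n2)
            i = I1[m] * n2 + I2[m]          # lexicographic (row-major) indices
            j = J1[m] * n2 + J2[m]
            rows.append(i); cols.append(j)
            vals.append(np.exp(-(v[i] - v[j]) ** 2 / sigma))
    Omega = sp.csr_matrix((np.concatenate(vals),
                           (np.concatenate(rows), np.concatenate(cols))),
                          shape=(N, N))
    D = sp.diags(np.asarray(Omega.sum(axis=1)).ravel())   # degree matrix
    return (D - Omega) / spla.norm(Omega, "fro")
```

By construction the resulting matrix is symmetric and its rows sum to zero, so it annihilates constant images, consistently with the requirement ‖L_ω x†‖ ≈ 0 on smooth regions.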
The construction of this graph, and consequently of the graph Laplacian, in turn depends on the construction of an appropriate approximation x*. As we show in Section 4, if we could choose x* = x† we would obtain an almost optimal result. However, this is not possible in realistic scenarios. Therefore, we wish to provide a practical way to determine a good enough x* in a totally automatic way. To compute x* we propose to solve (1.2) with a regularization operator defined as follows. Let L ∈ R^{n×n} be the circulant matrix whose first row is (1, −1, 0, …, 0), i.e., L is a discretization of the first derivative with periodic boundary conditions (BCs). Let I_n be the identity matrix of order n; we define L_TV by

(2.1)   L_TV = [ L ⊗ I_n ; I_n ⊗ L ] ∈ R^{2N×N}.

Note that L_TV is an extremely sparse matrix formed by two Block Circulant with Circulant Blocks (BCCB) matrices stacked one over the other. Therefore, matrix-vector products involving L_TV can be performed extremely cheaply (in particular, the flop count is O(N)) and L_TVᵀ L_TV is a BCCB matrix. We exploit the latter property below.

To simplify the computations we impose periodic BCs on the matrix A. Thanks to this choice, A is a BCCB matrix; see [31] for more details. We recall that BCCB matrices are diagonalized by the two-dimensional Fourier matrix. Let F₁ ∈ C^{n×n} be the Fourier matrix, i.e., (F₁)_{j,k} = n^{−1/2} e^{−2πι(j−1)(k−1)/n} with ι² = −1; then the two-dimensional Fourier matrix is defined by F = F₁ ⊗ F₁. Note that matrix-vector products with F and its inverse F* can be performed in O(N log N) flops with the aid of the fft and ifft algorithms.

As discussed above, we wish to solve (1.2) with L = L_TV to determine L_ω, i.e., we wish to solve a problem of the form

(2.2)   x_µ = argmin_x ‖A x − b^δ‖² + µ‖L_TV x‖²,

for a certain µ > 0. Thanks to the structure of A and L_TV this can be solved cheaply for any µ. We can write

(2.3)   A = F* Σ F   and   L_TV = [ F* Λ_x F ; F* Λ_y F ],

where Σ, Λ_x, and Λ_y are diagonal matrices whose diagonal entries are the eigenvalues of A, L ⊗ I_n, and I_n ⊗ L, respectively. We recall that the eigenvalues of a BCCB matrix C can be computed, up to a normalization, by F c, where c is the first column of C. Assuming that N(A) ∩ N(L_TV) = {0}, we have

x_µ = (AᵀA + µ L_TVᵀL_TV)⁻¹ Aᵀ b^δ
    = ( F*Σ*F F*ΣF + µ [F*Λ_x*F  F*Λ_y*F][F*Λ_x F ; F*Λ_y F] )⁻¹ F*Σ*F b^δ
    = ( F* Σ*Σ F + µ F* (Λ_x*Λ_x + Λ_y*Λ_y) F )⁻¹ F*Σ*F b^δ
    = F* ( Σ*Σ + µ(Λ_x*Λ_x + Λ_y*Λ_y) )⁻¹ Σ* F b^δ,

where the matrix to be inverted is diagonal. Therefore, x_µ can be computed cheaply for any µ.

We now wish to determine the parameter µ in an automatic way. We employ the Generalized Cross Validation (GCV). Denote by G(µ) the function

G(µ) = ‖A x_µ − b^δ‖² / ( trace( I − A(AᵀA + µ L_TVᵀL_TV)⁻¹Aᵀ ) )².

The GCV parameter is the minimizer of G(µ), i.e., µ_GCV = argmin_µ G(µ). Given the decomposition (2.3), the value of G(µ) can be computed in a straightforward way. Introduce the notation

r_µ = ‖A x_µ − b^δ‖²   and   t_µ = trace( I − A(AᵀA + µ L_TVᵀL_TV)⁻¹Aᵀ );

therefore, G(µ) = r_µ / t_µ².
Using the spectral decomposition of A we have

r_µ = ‖A x_µ − b^δ‖²
    = ‖F*ΣF F*(Σ*Σ + µ(Λ_x*Λ_x + Λ_y*Λ_y))⁻¹Σ*F b^δ − b^δ‖²
    = ‖Σ(Σ*Σ + µ(Λ_x*Λ_x + Λ_y*Λ_y))⁻¹Σ* F b^δ − F b^δ‖²
    = ‖( ΣΣ*(Σ*Σ + µ(Λ_x*Λ_x + Λ_y*Λ_y))⁻¹ − I ) c_{b^δ}‖²,

where c_{b^δ} = F b^δ. We now move to the computation of t_µ:

t_µ = trace( I − A(AᵀA + µ L_TVᵀL_TV)⁻¹Aᵀ )
    = trace( I − F*Σ(Σ*Σ + µ(Λ_x*Λ_x + Λ_y*Λ_y))⁻¹Σ*F )
    = trace( I − ΣΣ*(Σ*Σ + µ(Λ_x*Λ_x + Λ_y*Λ_y))⁻¹ ).

We can observe that, once the decomposition (2.3) and c_{b^δ} have been computed, the evaluation of G(µ) can be done in O(N) flops. This allows for an extremely fast determination of µ_GCV. Finally, we select as x* the solution of the minimization problem

(2.4)   x* = argmin_x ‖A x − b^δ‖² + µ_GCV ‖L_TV x‖².

Remark 2.1. If A is constructed with BCs different from the periodic ones, it is still possible to compute a fairly accurate approximation of G(µ) using Krylov subspace methods; see [8, 17, 26, 27].

We summarize the procedure to construct L_ω in Algorithm 1.
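The FFT-based evaluation of x_µ and G(µ) = r_µ/t_µ² can be sketched as follows. The paper does not prescribe a specific one-dimensional minimizer for µ, so this hedged sketch uses a plain grid search over candidate values; all names are illustrative.

```python
import numpy as np

def gcv_tikhonov(psf, b_delta, mus):
    """x_mu and G(mu) for  min ||A x - b||^2 + mu ||L_TV x||^2,
    with A and L_TV BCCB (periodic BCs), diagonalized by the 2-D FFT."""
    n = b_delta.shape[0]
    S = np.fft.fft2(psf)                 # eigenvalues of A (PSF centered at (0, 0))
    d = np.zeros((n, n)); d[0, 0] = 1.0; d[0, -1] = -1.0
    Lx = np.fft.fft2(d)                  # eigenvalues of the horizontal differences
    Ly = np.fft.fft2(d.T)                # eigenvalues of the vertical differences
    c = np.fft.fft2(b_delta)             # c_{b^delta} (unnormalized FFT convention)
    AA = np.abs(S) ** 2
    LL = np.abs(Lx) ** 2 + np.abs(Ly) ** 2
    best = None
    for mu in mus:
        H = AA / (AA + mu * LL)          # Sigma Sigma* (Sigma*Sigma + mu(...))^{-1}
        r = np.linalg.norm((H - 1.0) * c) ** 2
        t = n * n - H.sum().real         # trace(I - Sigma Sigma* (...)^{-1})
        if best is None or r / t ** 2 < best[0]:
            best = (r / t ** 2, mu)
    mu = best[1]
    x = np.real(np.fft.ifft2(np.conj(S) / (AA + mu * LL) * c))
    return x, mu
```

Each candidate µ costs only O(N) once the three FFTs have been computed, in line with the flop count stated above.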
Algorithm 1 Construction of L_ω for image deblurring

Input: A ∈ R^{N×N}, b^δ ∈ R^N, R > 0, σ > 0
Output: L_ω ∈ R^{N×N}

1: Construct L_TV as defined in (2.1)
2: Σ = diag(F A(:,1))
3: Λ_x = diag(F (L_TV)(1:N, 1))
4: Λ_y = diag(F (L_TV)(N+1:2N, 1))
5: c_{b^δ} = F b^δ
6: µ_GCV = argmin_µ r_µ/t_µ², where
      r_µ = ‖( ΣΣ*(Σ*Σ + µ(Λ_x*Λ_x + Λ_y*Λ_y))⁻¹ − I ) c_{b^δ}‖²,
      t_µ = trace( I − ΣΣ*(Σ*Σ + µ(Λ_x*Λ_x + Λ_y*Λ_y))⁻¹ )
7: x* = F*( Σ*Σ + µ_GCV(Λ_x*Λ_x + Λ_y*Λ_y) )⁻¹ Σ* c_{b^δ}
8: Construct Ω ∈ R^{N×N} as
      ω(i, j) = e^{−(x*(i) − x*(j))²/σ} if i ≠ j and ‖i − j‖_∞ ≤ R, and 0 otherwise,
   with i and j representing two-dimensional indexes, sorted in lexicographic order
9: D = diag{ Σ_{j=1}^N (Ω)_{i,j} }
10: L_ω = (D − Ω)/‖Ω‖_F

3. Graph Laplacian deblurring
We now describe the non-linear model we employ to compute an approximate solution of (1.1). We consider the graph Laplacian L_ω constructed by Algorithm 1 and we use it in (1.4). Therefore, we wish to solve

(3.1)   argmin_{x ≥ 0} ‖A x − b^δ‖² + µ‖L_ω x‖₁.

To solve this problem we use the Alternating Direction Method of Multipliers (ADMM); see, e.g., [6] for a recent review. We use ADMM since it allows us to decouple the ℓ² and ℓ¹ norms as well as the matrices A and L_ω. The latter point is extremely relevant since, as we discuss below, both matrices have exploitable structures; however, they are difficult to exploit together.

We first reformulate (3.1) in the equivalent form

argmin_{x,y,w,z} { ½‖A x − b^δ‖² + µ‖z‖₁ + ι(w),   s.t. x = y, x = w, z = L_ω y },

or, equivalently,

(3.2)   argmin_{x,y,w,z} ½‖A x − b^δ‖² + µ‖z‖₁ + ι(w)
        s.t. [I O; O I; I O][x; z] − [I O; L_ω O; O I][y; w] = 0,

where the block matrices are written row-wise, O and 0 denote the zero matrix and the zero vector, respectively, and ι is the indicator function of the nonnegative cone, i.e.,

ι(x) = 0 if x ≥ 0, and ι(x) = +∞ otherwise.

We can construct the augmented Lagrangian of (3.2),

L_ρ(x, y, w, z; λ) = ½‖A x − b^δ‖² + µ‖z‖₁ + ι(w)
   + ⟨ λ, [I O; O I; I O][x; z] − [I O; L_ω O; O I][y; w] ⟩
   + (ρ/2) ‖ [I O; O I; I O][x; z] − [I O; L_ω O; O I][y; w] ‖²,

where ρ > 0 and λ ∈ R^{3N} is the Lagrange multiplier.
Applying ADMM we get the iterations

[x^{(k+1)}; z^{(k+1)}] = argmin_{x,z} L_ρ(x, y^{(k)}, w^{(k)}, z; λ^{(k)}),
[y^{(k+1)}; w^{(k+1)}] = argmin_{y,w} L_ρ(x^{(k+1)}, y, w, z^{(k+1)}; λ^{(k)}),
λ^{(k+1)} = λ^{(k)} + ρ( [I O; O I; I O][x^{(k+1)}; z^{(k+1)}] − [I O; L_ω O; O I][y^{(k+1)}; w^{(k+1)}] ).

We can write λ = [λ₁; λ₂; λ₃] with λ_j ∈ R^N for j = 1, 2, 3. Therefore the minimization problems above decouple and we obtain

x^{(k+1)} = argmin_x ½‖A x − b^δ‖² + ⟨[λ₁^{(k)}; λ₃^{(k)}], [x − y^{(k)}; x − w^{(k)}]⟩ + (ρ/2)‖[x − y^{(k)}; x − w^{(k)}]‖²,
z^{(k+1)} = argmin_z µ‖z‖₁ + ⟨λ₂^{(k)}, z − L_ω y^{(k)}⟩ + (ρ/2)‖z − L_ω y^{(k)}‖²,
y^{(k+1)} = argmin_y (ρ/2)‖x^{(k+1)} − y‖² + ⟨[λ₁^{(k)}; λ₂^{(k)}], [x^{(k+1)} − y; z^{(k+1)} − L_ω y]⟩ + (ρ/2)‖z^{(k+1)} − L_ω y‖²,
w^{(k+1)} = argmin_w (ρ/2)‖x^{(k+1)} − w‖² + ⟨λ₃^{(k)}, x^{(k+1)} − w⟩ + ι(w).
All of these minimization problems have a closed-form solution, namely

x^{(k+1)} = (AᵀA + 2ρI)⁻¹( Aᵀb^δ + ρy^{(k)} − λ₁^{(k)} + ρw^{(k)} − λ₃^{(k)} ),
z^{(k+1)} = S_{µ/ρ}( L_ω y^{(k)} − λ₂^{(k)}/ρ ),
y^{(k+1)} = (L_ωᵀL_ω + I)⁻¹( L_ωᵀ(z^{(k+1)} + λ₂^{(k)}/ρ) + x^{(k+1)} + λ₁^{(k)}/ρ ),
w^{(k+1)} = ( x^{(k+1)} + λ₃^{(k)}/ρ )₊,
λ₁^{(k+1)} = λ₁^{(k)} + ρ( x^{(k+1)} − y^{(k+1)} ),
λ₂^{(k+1)} = λ₂^{(k)} + ρ( z^{(k+1)} − L_ω y^{(k+1)} ),
λ₃^{(k+1)} = λ₃^{(k)} + ρ( x^{(k+1)} − w^{(k+1)} ),

where S_µ denotes the soft-thresholding operator with parameter µ, i.e.,

S_µ(x) = sign(x)( |x| − µ )₊,

where the operations are meant element-wise and (x)₊ = max{x, 0} is the metric projection onto the nonnegative cone. Note that each iteration requires the solution of two linear systems. The linear system involving A can be easily solved using the fft algorithm if periodic BCs are employed; see above. If other BCs are employed, the structure of the matrix, in general, does not allow the use of fast transforms for the solution of the system. Nevertheless, this system can be solved by an iterative method with a circulant preconditioner. On the other hand, the solution of the linear system with the matrix L_ω can be easily computed by applying the lsqr algorithm to an equivalent least-squares problem, since the matrix L_ω is extremely sparse.

We would like to briefly discuss the use of the lsqr method for the solution of the system

(3.3)   (L_ωᵀL_ω + I) y^{(k+1)} = L_ωᵀ(z^{(k+1)} + λ₂^{(k)}/ρ) + x^{(k+1)} + λ₁^{(k)}/ρ.

The linear system of equations (3.3) is equivalent to the least-squares problem

(3.4)   y^{(k+1)} = argmin_y ‖ [L_ω; I] y − [z^{(k+1)} + λ₂^{(k)}/ρ; x^{(k+1)} + λ₁^{(k)}/ρ] ‖² = argmin_y ‖ L̂ y − v̂^{(k)} ‖².

The lsqr algorithm is an iterative method that determines an approximate solution of (3.4) in a Krylov subspace. In particular, at its j-th iteration the lsqr algorithm determines a solution of (3.4) in the Krylov subspace K_j(L̂ᵀL̂, L̂ᵀv̂^{(k)}), where

K_j(L̂ᵀL̂, L̂ᵀv̂^{(k)}) = span{ L̂ᵀv̂^{(k)}, (L̂ᵀL̂)L̂ᵀv̂^{(k)}, …, (L̂ᵀL̂)^{j−1}L̂ᵀv̂^{(k)} }.

The lsqr method requires one matrix-vector product with L̂ and one with L̂ᵀ at each iteration. Therefore, since L̂ is extremely sparse, the flop count per iteration is O(N). Moreover, lsqr is mathematically equivalent to the cg method applied to (3.3); however, its implementation is more stable. In addition, the number of iterations required to converge is proportional to κ(L̂), which is extremely small; see, e.g., [5] for a discussion on lsqr and cg.

We summarize our approach in Algorithm 2. Note that since the functional (1.4) is convex we can apply the following classical result on ADMM.

Theorem 3.1 (see, e.g., Section 3.2 of Boyd et al. [6]). With the notation of Algorithm 2 it holds that
(i) lim_{k→∞} ‖ [I O; O I; I O][x^{(k)}; z^{(k)}] − [I O; L_ω O; O I][y^{(k)}; w^{(k)}] ‖ = 0, i.e., the iterates approach feasibility as k → ∞;
(ii) lim_{k→∞} ‖A x^{(k)} − b^δ‖² + µ‖z^{(k)}‖₁ + ι(w^{(k)}) = p*, where p* is the minimum of (3.1);
(iii) lim_{k→∞} λ^{(k)} = λ*, where λ* is a dual optimal point, i.e., a saddle point of the (non-augmented) Lagrangian L₀.

Algorithm 2 Graph Laplacian image deblurring

Input: A ∈ R^{N×N}, b^δ ∈ R^N, R > 0, σ > 0, ρ > 0, τ > 0, K > 0, µ > 0
Output: x ∈ R^N

1: Run Algorithm 1 to compute L_ω
2: y^{(0)} = w^{(0)} = λ₁^{(0)} = λ₂^{(0)} = λ₃^{(0)} = 0
3: for k = 0, 1, …, K
4:    x^{(k+1)} = (AᵀA + 2ρI)⁻¹( Aᵀb^δ + ρy^{(k)} − λ₁^{(k)} + ρw^{(k)} − λ₃^{(k)} )
5:    z^{(k+1)} = S_{µ/ρ}( L_ω y^{(k)} − λ₂^{(k)}/ρ )
6:    y^{(k+1)} = (L_ωᵀL_ω + I)⁻¹( L_ωᵀ(z^{(k+1)} + λ₂^{(k)}/ρ) + x^{(k+1)} + λ₁^{(k)}/ρ )
7:    w^{(k+1)} = ( x^{(k+1)} + λ₃^{(k)}/ρ )₊
8:    λ₁^{(k+1)} = λ₁^{(k)} + ρ( x^{(k+1)} − y^{(k+1)} )
9:    λ₂^{(k+1)} = λ₂^{(k)} + ρ( z^{(k+1)} − L_ω y^{(k+1)} )
10:   λ₃^{(k+1)} = λ₃^{(k)} + ρ( x^{(k+1)} − w^{(k+1)} )
11:   if k > 0 and ‖x^{(k+1)} − x^{(k)}‖ ≤ τ‖x^{(k)}‖ then
12:      exit
13:   end if
14: end for
15: x = x^{(k+1)}
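The loop of Algorithm 2 can be sketched as follows for periodic BCs: the x-update is performed in the Fourier domain and the y-update via lsqr on the stacked least-squares problem (3.4). This is a hedged sketch: parameter defaults and function names are illustrative, and L_omega is any sparse graph Laplacian, e.g., the output of Algorithm 1.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def soft_threshold(v, t):
    # S_t(v) = sign(v)(|v| - t)_+, element-wise
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_deblur(psf, b_delta, L_omega, mu, rho=1e-2, tau=1e-6, K=500):
    """ADMM iteration of Algorithm 2 under periodic BCs (A is BCCB)."""
    n = b_delta.shape[0]
    N = n * n
    S = np.fft.fft2(psf)                   # eigenvalues of the BCCB matrix A
    AtA = np.abs(S) ** 2
    Atb = np.real(np.fft.ifft2(np.conj(S) * np.fft.fft2(b_delta))).reshape(-1)
    L_hat = sp.vstack([L_omega, sp.identity(N, format="csr")]).tocsr()  # [L; I]
    x = np.zeros(N); y = np.zeros(N); w = np.zeros(N); z = np.zeros(N)
    l1 = np.zeros(N); l2 = np.zeros(N); l3 = np.zeros(N)
    for k in range(K):
        x_old = x
        # x-update: (A^T A + 2 rho I)^{-1}(...) via the 2-D FFT
        rhs = Atb + rho * y - l1 + rho * w - l3
        x = np.real(np.fft.ifft2(np.fft.fft2(rhs.reshape(n, n))
                                 / (AtA + 2 * rho))).reshape(-1)
        z = soft_threshold(L_omega @ y - l2 / rho, mu / rho)
        # y-update: lsqr on the stacked system (3.4)
        v_hat = np.concatenate([z + l2 / rho, x + l1 / rho])
        y = lsqr(L_hat, v_hat, atol=1e-10, btol=1e-10)[0]
        w = np.maximum(x + l3 / rho, 0.0)  # projection onto the nonnegative cone
        l1 = l1 + rho * (x - y)
        l2 = l2 + rho * (z - L_omega @ y)
        l3 = l3 + rho * (x - w)
        if k > 0 and np.linalg.norm(x - x_old) <= tau * np.linalg.norm(x_old):
            break
    return x.reshape(n, n)
```

Each iteration costs O(N log N) for the FFTs plus O(N) per lsqr step, in line with the cost analysis above.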
ADMM can be slow to converge in certain scenarios. It is beyond the scope of this paper to propose a fast algorithm for the solution of (3.1); rather, we wish to show the potential of L_ω as a regularization operator. Nevertheless, it is possible to accelerate the convergence of ADMM by extrapolation methods; see, e.g., [9, 29].

4. Numerical Examples
We now report some selected numerical examples to show the performance of the proposed method. We are particularly interested in showing that the graph Laplacian constructed in Algorithm 2 provides better reconstructions than the classical TV approach. We compare the results obtained using L = L_TV and L = L_ω in (1.4) with the Tikhonov solution x* computed within Algorithm 1. To compute the solution of (1.4) with L = L_TV we use the algorithm described in [22]. Moreover, we want to show the full potential of the proposed approach. To this aim we also construct the operator L_ω using the exact image x_true, and we denote it by L̃_ω. Obviously this approach is not feasible in realistic scenarios; nevertheless, it allows us to show the full potential of the proposed method.

In all our experiments we set the parameters as specified in Table 1. We compare the considered methods in terms of accuracy using the Relative Restoration Error (RRE), computed as

RRE(x) = ‖x − x_true‖ / ‖x_true‖,

and the Peak Signal to Noise Ratio (PSNR), defined by

PSNR(x) = 20 log₁₀( √N m / ‖x − x_true‖ ),

where m denotes the maximum value achievable by x_true. Moreover, we consider the Structure SIMilarity index (SSIM), constructed in [43]. The definition of the SSIM is extremely involved; here we simply recall that this statistical index measures how structurally similar two images are. In particular, the higher the SSIM the more similar the images are, and its highest achievable value is 1.

Table 1. Setting of the parameters in Algorithm 2.

Parameter   Value        Description
R           10           Support of the weight function in the graph
σ           –            Variance of the weight function in the graph
ρ           –            Augmentation parameter in ADMM
τ           –            Stopping criterion for ADMM
K           –            Maximum number of ADMM iterations
µ           Hand-tuned   Regularization parameter in (1.4)

Example 1. Our first example is the atmosphericBlur50 test case of the RestoreTools toolbox [2]. We report the exact image, the PSF, and the blurred and noisy image in Fig. 2. The norm of the noise, denoted by δ, that corrupts the data is approximately 1% of the norm of the exact right-hand side b.

We report the obtained results with the considered methods in Table 2. We can observe that the ℓ²-ℓ¹ methods provide much more accurate results than the classical Tikhonov method, especially in terms of SSIM. The reconstruction obtained with L = L̃_ω, i.e., using the graph related to the exact image, is extremely accurate. To validate our model we show in Fig. 1 a visualization of |L̃_ω x†|. We can observe that it is extremely sparse; therefore, we expect our model to provide accurate reconstructions. However, this approach is not feasible in real scenarios. Nevertheless, we can also observe that using L = L_ω improves the quality of the restoration with respect to the classic TV. This is confirmed by the visual inspection of the reconstructions in Fig. 3.
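The two scalar metrics can be computed as follows (a hedged sketch; m is the maximal attainable pixel value, and the function names are illustrative):

```python
import numpy as np

def rre(x, x_true):
    # Relative Restoration Error: ||x - x_true|| / ||x_true||
    return np.linalg.norm(x - x_true) / np.linalg.norm(x_true)

def psnr(x, x_true, m=255.0):
    # 20 log10( sqrt(N) * m / ||x - x_true|| ), N = number of pixels
    N = x_true.size
    return 20 * np.log10(np.sqrt(N) * m / np.linalg.norm(x - x_true))

x_true = np.zeros((32, 32)); x_true[8:24, 8:24] = 255.0
x = x_true + 5.0                 # a uniformly biased "reconstruction"
print(rre(x, x_true), psnr(x, x_true))
```

For the SSIM one would typically rely on an existing implementation, e.g., skimage.metrics.structural_similarity, rather than re-implementing its involved definition.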
Comparing the reconstructions obtained by the ℓ²-ℓ¹ methods, we can observe that the choice L = L_TV leads to noisier reconstructions than the ones obtained with the graph Laplacian.

Figure 1. Example 1: Visualization of |L̃_ω x†| in the jet colormap. The color blue represents the 0s in the image.

Figure 2. Example 1: (a) True image (256 × 256 pixels), (b) PSF (256 × 256 pixels), (c) blurred and noisy image (256 × 256 pixels with δ ≈ 0.01‖b‖).

Figure 3. Example 1 reconstructions: (a) ℓ²-ℓ¹ with L = L_TV, (b) ℓ²-ℓ¹ with L = L_ω, (c) ℓ²-ℓ¹ with L = L̃_ω.

Example 2. For our second example we consider the Hubble image in Fig. 4(a) and we blur it with the PSF in Fig. 4(b). We then add white Gaussian noise, obtaining the blurred and noisy image in Fig. 4(c).

We compute approximate solutions with the considered algorithms and report the obtained RRE, PSNR, and SSIM in Table 2. We can observe that our proposal provides the best reconstruction both in terms of RRE (and, therefore, of PSNR) and SSIM. This is confirmed by the visual inspection of the reconstructions in Fig. 5. We would like to stress that, similarly to the previous example, the unconstrained Tikhonov method computes extremely noisy reconstructions.

Figure 4. Example 2: (a) True image (256 × 256 pixels), (b) PSF (9 × 9 pixels), (c) blurred and noisy image (256 × 256 pixels).

Figure 5. Example 2 reconstructions: (a) ℓ²-ℓ¹ with L = L_TV, (b) ℓ²-ℓ¹ with L = L_ω, (c) ℓ²-ℓ¹ with L = L̃_ω.

Example 3. For our third example we consider the Saturn image in Fig. 6(a). We blur it with a non-symmetric PSF (see Fig. 6(b)) and add 5% of white Gaussian noise, i.e., δ = 0.05‖b‖, obtaining the image in Fig. 6(c).

We report in Table 2 the obtained results with the considered algorithms. We can observe that our proposal provides a very accurate reconstruction in terms of RRE and PSNR. However, the SSIM of the computed solution is slightly lower than the one obtained with the standard TV regularization. In Fig. 7 we report all the computed solutions. We would like to observe that the reconstruction obtained with the classical TV regularization presents a very heavy staircase effect, while the approximate solution obtained with our proposal avoids this issue. To show this we propose in Fig. 8 blow-ups of the central part of the exact solution and of the reconstructions obtained by TV regularization and by our approach. We can observe that the TV reconstruction presents a very heavy staircase effect that is avoided by our proposal.
Figure 6. Example 3: (a) True image (256 × 256 pixels), (b) PSF (17 × 17 pixels), (c) blurred and noisy image (256 × 256 pixels with δ = 0.05‖b‖).

Figure 7. Example 3 reconstructions: (a) ℓ²-ℓ¹ with L = L_TV, (b) ℓ²-ℓ¹ with L = L_ω, (c) ℓ²-ℓ¹ with L = L̃_ω.

The solution computed by our method manages to remain extremely sharp, while avoiding any staircasing.

Example 4. We consider the exact image in Fig. 9(a) and we blur it with an average PSF (see Fig. 9(b)); we then add white Gaussian noise, obtaining Fig. 9(c). We construct the graph Laplacian related to the exact solution, L̃_ω, and deblur the blurred and noisy image with Tikhonov (with µ = µ_GCV) and by minimizing (1.4) with both L = L_TV and L = L̃_ω. We report the results obtained in Table 2. We can observe that the proposed algorithm with the perfect choice of L̃_ω largely outperforms the other approaches, furnishing a very accurate reconstruction of the proposed image. Moreover, the choice L = L_ω, which we recall does not require any a-priori information on the exact solution, is still more accurate than the classical TV. This is confirmed by the visual inspection of the reconstructions in Fig. 10.

Figure 8. Example 3 blow-ups of the exact solution and of two reconstructions in the jet colormap: (a) True solution, (b) ℓ²-ℓ¹ with L = L_TV, (c) ℓ²-ℓ¹ with L = L_ω.
Figure 9. Example 4: (a) True image (256 × 256 pixels), (b) PSF (12 × 12 pixels), (c) blurred and noisy image (256 × 256 pixels).

5. Conclusions

In this paper we have proposed a new regularization operator for ℓ²-ℓ¹ minimization. The construction of this operator is automatic and extremely cheap to perform. We have shown that the proposed method outperforms the classical TV approach. Matters of future research include the application of the proposed method to more general inverse problems, as well as the integration of the considered method with the ℓp-ℓq minimization proposed in [12, 14–16, 32, 36] or with iterative regularization methods like, e.g., linearized Bregman splitting [11, 13, 18–20] and iterated Tikhonov with general penalty terms [3, 4, 7, 10]. Another line of future research is the construction of more sophisticated graphs ω which can better exploit the structure of the given image itself. Such constructions may stem from a PDE approach; see, e.g., [1, 28, 38].
Figure 10. Example 4 reconstructions: (a) ℓ²-ℓ¹ with L = L_TV, (b) ℓ²-ℓ¹ with L = L_ω, (c) ℓ²-ℓ¹ with L = L̃_ω.

Table 2. Comparison of the RRE, PSNR, and SSIM.

Example     Method                    RRE    PSNR    SSIM
Example 1   Tikhonov                  –      –       –
            ℓ²-ℓ¹ with L = L_TV       –      –       –
            ℓ²-ℓ¹ with L = L_ω        –      –       –
            ℓ²-ℓ¹ with L = L̃_ω       –      –       –
Example 2   Tikhonov                  –      –       –
            ℓ²-ℓ¹ with L = L_TV       –      –       –
            ℓ²-ℓ¹ with L = L_ω        –      –       –
            ℓ²-ℓ¹ with L = L̃_ω       –      –       –
Example 3   Tikhonov                  –      –       –
            ℓ²-ℓ¹ with L = L_TV       –      –       –
            ℓ²-ℓ¹ with L = L_ω        –      –       –
            ℓ²-ℓ¹ with L = L̃_ω       –      –       –
Example 4   Tikhonov                  –      –       –
            ℓ²-ℓ¹ with L = L_TV       –      –       –
            ℓ²-ℓ¹ with L = L_ω        –      –       –
            ℓ²-ℓ¹ with L = L̃_ω       –      –       –

Acknowledgments
A.B. and M.D. are members of the GNCS-INdAM group. D.B. is a member of the GNAMPA-INdAM group. A.B.'s research is partially supported by the Regione Autonoma della Sardegna research project "Algorithms and Models for Imaging Science [AMIS]" (RASSR57257, intervento finanziato con risorse FSC 2014-2020 - Patto per lo Sviluppo della Regione Sardegna).
References

[1] Adriani, A., Bianchi, D., Serra-Capizzano, S.: Asymptotic spectra of large (grid) graphs with a uniform local structure (Part I): Theory. Milan J. Math. (2), 169–199 (2020)
[2] Berisha, S., Nagy, J.G.: Iterative methods for image restoration. Tech. rep., Department of Mathematics and Computer Science, Emory University (2012)
[3] Bianchi, D., Buccini, A., Donatelli, M., Serra-Capizzano, S.: Iterated fractional Tikhonov regularization. Inverse Problems (5), 055005 (2015)
[4] Bianchi, D., Donatelli, M.: On generalized iterated Tikhonov regularization with operator-dependent seminorms. Electron. Trans. Numer. Anal., 73–99 (2017)
[5] Björck, Å.: Numerical Methods for Least Squares Problems. SIAM (1996)
[6] Boyd, S., Parikh, N., Chu, E.: Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers. Now Publishers Inc. (2011)
[7] Buccini, A.: Regularizing preconditioners by non-stationary iterated Tikhonov with general penalty term. Applied Numerical Mathematics, 64–81 (2017)
[8] Buccini, A.: Generalized Cross Validation stopping rule for iterated Tikhonov regularization. In preparation (2021)
[9] Buccini, A., Dell'Acqua, P., Donatelli, M.: A general framework for ADMM acceleration. Numerical Algorithms, 829–848 (2020)
[10] Buccini, A., Donatelli, M., Reichel, L.: Iterated Tikhonov with general penalty term. Accepted for publication in Numerical Linear Algebra with Applications (2016)
[11] Buccini, A., Park, Y., Reichel, L.: Numerical aspects of the nonstationary modified linearized Bregman algorithm. Applied Mathematics and Computation, 386–398 (2018)
[12] Buccini, A., Pasha, M., Reichel, L.: Modulus-based iterative methods for constrained ℓp-ℓq minimization. Inverse Problems (8), 084001 (2020)
[13] Buccini, A., Pasha, M., Reichel, L.: Linearized Krylov subspace Bregman iteration with nonnegativity constraint. Numerical Algorithms (in press)
[14] Buccini, A., Reichel, L.: An ℓ2-ℓq regularization method for large discrete ill-posed problems. Journal of Scientific Computing (3), 1526–1549 (2019)
[15] Buccini, A., Reichel, L.: An ℓ2-ℓq regularization method for large discrete ill-posed problems. Journal of Scientific Computing (3), 1526–1549 (2019)
[16] Buccini, A., Reichel, L.: An ℓp-ℓq minimization method with cross-validation for the restoration of impulse noise contaminated images. Journal of Computational and Applied Mathematics, 112824 (2020)
[17] Buccini, A., Reichel, L.: Generalized cross validation for ℓp-ℓq minimization. Under review (2021)
[18] Cai, J.F., Osher, S., Shen, Z.: Linearized Bregman iterations for frame-based image deblurring. SIAM Journal on Imaging Sciences (1), 226–252 (2009)
[19] Cai, J.F., Osher, S., Shen, Z.: Split Bregman methods and frame based image restoration. Multiscale Modeling & Simulation (2), 337–369 (2009)
[20] Cai, Y., Donatelli, M., Bianchi, D., Huang, T.Z.: Regularization preconditioners for frame-based image deblurring with reduced boundary artifacts. SIAM Journal on Scientific Computing (1), B164–B189 (2016)
[21] Chan, R.H., Liang, H.X.: Half-quadratic algorithm for ℓp-ℓq problems with applications to TV-ℓ1 image restoration and compressive sensing. In: Efficient Algorithms for Global Optimization Methods in Computer Vision, pp. 78–103. Springer (2014)
[22] Chan, R.H., Tao, M., Yuan, X.: Constrained total variation deblurring models and fast algorithms based on alternating direction method of multipliers. SIAM Journal on Imaging Sciences (1), 680–697 (2013)
[23] Donatelli, M., Reichel, L.: Square smoothing regularization matrices with accurate boundary conditions. Journal of Computational and Applied Mathematics, 334–349 (2014)
[24] Engl, H.W., Hanke, M., Neubauer, A.: Regularization of Inverse Problems. Kluwer, Dordrecht (1996)
[25] Estatico, C., Gratton, S., Lenti, F., Titley-Peloquin, D.: A conjugate gradient like method for p-norm minimization in functional spaces. Numerische Mathematik (4), 895–922 (2017)
[26] Fenu, C., Reichel, L., Rodriguez, G.: GCV for Tikhonov regularization via global Golub-Kahan decomposition. Numerical Linear Algebra with Applications (3), 467–484 (2016)
[27] Fenu, C., Reichel, L., Rodriguez, G., Sadok, H.: GCV for Tikhonov regularization by partial SVD. BIT Numerical Mathematics, 1019–1039 (2017)
[28] Gilboa, G., Osher, S.: Nonlocal operators with applications to image processing. SIAM Multiscale Model. Simul. (3), 1005–1028 (2009)
[29] Goldstein, T., O'Donoghue, B., Setzer, S., Baraniuk, R.: Fast alternating direction optimization methods. SIAM Journal on Imaging Sciences (3), 1588–1623 (2014)
[30] Hansen, P.C.: Rank-Deficient and Discrete Ill-Posed Problems: Numerical Aspects of Linear Inversion. SIAM (1998)
[31] Hansen, P.C., Nagy, J.G., O'Leary, D.P.: Deblurring Images: Matrices, Spectra, and Filtering. SIAM, Philadelphia (2006)
[32] Huang, G., Lanza, A., Morigi, S., Reichel, L., Sgallari, F.: Majorization-minimization generalized Krylov subspace methods for ℓp-ℓq optimization applied to image restoration. BIT Numerical Mathematics, pp. 1–28 (2017)
[33] Keller, M., Lenz, D., Wojciechowski, R.K.: Graphs and Discrete Dirichlet Spaces. Grundlehren der mathematischen Wissenschaften. Springer, forthcoming (2021)
[34] Kheradmand, A., Milanfar, P.: Motion deblurring with graph Laplacian regularization. In: Digital Photography XI, vol. 9404, p. 94040C. International Society for Optics and Photonics (2015)
[35] Krishnan, D., Fergus, R.: Fast image deconvolution using hyper-Laplacian priors. In: Advances in Neural Information Processing Systems, pp. 1033–1041 (2009)
[36] Lanza, A., Morigi, S., Reichel, L., Sgallari, F.: A generalized Krylov subspace method for ℓp-ℓq minimization. SIAM Journal on Scientific Computing (5), S30–S50 (2015)
[37] Li, F., Ng, M.K.: Image colorization by using graph bi-Laplacian. Advances in Computational Mathematics (3), 1521–1549 (2019)
[38] Lou, Y., Zhang, X., Osher, S., Bertozzi, A.: Image recovery via nonlocal operators. J. Sci. Comput., 185–197 (2010)
[39] Meyer, F.G., Shen, X.: Perturbation of the eigenvectors of the graph Laplacian: Application to image denoising. Applied and Computational Harmonic Analysis (2), 326–334 (2014)
[40] Pang, J., Cheung, G.: Graph Laplacian regularization for image denoising: Analysis in the continuous domain. IEEE Transactions on Image Processing (4), 1770–1785 (2017)
[41] Rudin, L.I., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena (1-4), 259–268 (1992)
[42] Susnjara, A., Perraudin, N., Kressner, D., Vandergheynst, P.: Accelerated filtering on graphs using the Lanczos method. arXiv preprint arXiv:1509.04537 (2015)
[43] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing (4), 600–612 (2004)
[44] Yağan, A.C., Özgen, M.T.: A spectral graph Wiener filter in graph Fourier domain for improved image denoising. In: 2016 IEEE Global Conference on Signal and Information Processing (GlobalSIP), pp. 450–454. IEEE (2016)

Dipartimento di Scienze e Alta Tecnologia, Università dell'Insubria, via Valleggio 11, I-22100 Como, Italy
Email address: [email protected]

Department of Mathematics and Computer Science, University of Cagliari, 09124 Cagliari, Italy
Email address: [email protected]

Dipartimento di Scienze e Alta Tecnologia, Università dell'Insubria, via Valleggio 11, I-22100 Como, Italy
Email address: [email protected]

Dipartimento di Scienze e Alta Tecnologia, Università dell'Insubria, via Valleggio 11, I-22100 Como, Italy
Email address: