Proposal of fault-tolerant tomographic image reconstruction
Hiroyuki Kudo$^{a,b}$, Keita Takaki$^{a}$, Fukashi Yamazaki$^{a}$, and Takuya Nemoto$^{a}$

$^{a}$ Faculty of Engineering, Information and Systems, University of Tsukuba, Tennoudai 1-1-1, Tsukuba 305-8573, Japan
$^{b}$ JST-ERATO Momose Quantum-Beam Phase Imaging Project, Katahira, Aoba-ku, Sendai 980-8577, Japan
ABSTRACT
This paper deals with tomographic image reconstruction under the situation where some of the projection data bins are contaminated with abnormal data. Such situations occur in various instances of tomography. We propose a new reconstruction algorithm called the Fault-Tolerant reconstruction, outlined as follows. The least-squares ($L_2$-norm) error function $\|A\vec{x}-\vec{b}\|_2^2$ used in ordinary iterative reconstructions is sensitive to the existence of abnormal data. The proposed algorithm utilizes the $L_1$-norm error function $\|A\vec{x}-\vec{b}\|_1$ instead of the $L_2$-norm, and we develop a row-action-type iterative algorithm using the proximal splitting framework from the convex optimization field. We also propose an improved version of the $L_1$-norm reconstruction called the $L_1$-TV reconstruction, in which a weak Total Variation (TV) penalty is added to the cost function. Simulation results demonstrate that images reconstructed with the $L_2$-norm were severely damaged by the effect of abnormal bins, whereas images from the $L_1$-norm and $L_1$-TV reconstructions were robust to the existence of abnormal bins.
Keywords: Tomography, Image Reconstruction, Image Processing, Robust Estimation, Iterative Reconstruction, Proximal Splitting
1. INTRODUCTION
This paper deals with tomographic image reconstruction under the situation where some of the projection data (sinogram) bins are contaminated with abnormal data, i.e. impulsive noise with heavy tails, and the locations of the abnormal bins are unknown. We believe that such situations occur in various instances of tomography due to unexpected troubles in the detector, x-ray source, or power supply, data transfer errors, metal in the object, misalignments in the scanner geometry, and phase wrapping in phase tomography. However, to the best of our knowledge, there exists very little literature dealing with image reconstruction under such situations. On the other hand, in other fields such as statistics, computer vision, and machine learning, the problem of estimating physical parameters from a set of measured data containing abnormal errors (outliers) has been recognized as an important research topic, and there is a large body of research activity.

We propose a new image reconstruction algorithm called the Fault-Tolerant reconstruction, outlined as follows. In ordinary iterative reconstructions, the least-squares error $\|A\vec{x}-\vec{b}\|_2^2$, defined as the squared $L_2$-norm difference between the forward-projected data $A\vec{x}$ and the measured projection data $\vec{b}$, is used as the cost function [1],[2]. However, it is well-known that the least-squares criterion is sensitive to the existence of abnormal data in the measurement $\vec{b}$. A simple method to estimate the locations of abnormal bins and exclude the corresponding data from the data fitting is to utilize the $L_1$-norm difference $\|A\vec{x}-\vec{b}\|_1$ between $A\vec{x}$ and $\vec{b}$ instead of the $L_2$-norm [3],[4]. Based on this idea, we develop a row-action-type iterative algorithm minimizing the $L_1$-norm error function using the proximal splitting framework from the convex optimization field [5],[6]. The resulting algorithm resembles the structure of the ART (Algebraic Reconstruction Technique) algorithm [7],[8]. We also propose an improved version of the $L_1$-norm approach called the $L_1$-TV reconstruction, in which a weak Total Variation (TV) penalty function is added to the cost function [9]-[11]. We performed simulation studies for typical scenarios assuming detector errors, x-ray radiation errors, and random errors in projection data space. In all of the simulated scenarios, images reconstructed with the $L_2$-norm were severely damaged by the effect of abnormal bins, whereas images from the $L_1$-norm and $L_1$-TV reconstructions were surprisingly robust to the existence of abnormal bins.

Further author information: (Send correspondence to H.K.) H.K.: E-mail: [email protected]

2. PROPOSED METHOD

2.1 Abnormal Error Noise Model
This section describes the setup of the image reconstruction problem dealt with in this paper, and derives the corresponding noise model together with the cost function used in image reconstruction. We denote an image by a $J$-dimensional vector $\vec{x} = (x_1, x_2, \ldots, x_J)^T$ and the corresponding projection data (sinogram) by an $I$-dimensional vector $\vec{b} = (b_1, b_2, \ldots, b_I)^T$. We denote the system matrix which relates $\vec{x}$ to $\vec{b}$ by $A = \{a_{ij}\}$, where we assume that $I > J$, i.e. the linear equation $A\vec{x} = \vec{b}$ is overdetermined. Then, the measurement process of projection data is expressed as

$\vec{b} = A\vec{x} + \vec{n},$   (1)

where $\vec{n} = (n_1, n_2, \ldots, n_I)^T$ denotes the measurement noise. In ordinary iterative reconstructions, the cost function is designed by assuming that the noise component $\vec{n}$ follows an independent Gaussian distribution or an independent Poisson distribution. In this paper, on the other hand, we consider the Abnormal Error Noise (AEN) model, in which most elements of $\vec{n}$ are zeros and only a small number of elements of $\vec{n}$ take non-zero abnormal error values caused by the various physical reasons described in Section 1. We assume that an abnormal projection data bin takes a value over the pre-specified interval $[A\vec{x}|_i - m_1, A\vec{x}|_i + m_2]$ with uniform probability, where $m_1 > 0$ and $m_2 > 0$. We consider three typical error scenarios, i.e. the data corresponding to one detector element contains the error, the data corresponding to one projection angle contains the error, and five randomly selected projection data bins contain the error. Furthermore, we assume that the locations of the abnormal data bins are unknown. Below, we explain that such a noise model can be well represented by a particular case of the generalized Gaussian probability density function. In the generalized Gaussian model, the likelihood function of $\vec{b}$ given $\vec{x}$ is expressed as

$\Pr(\vec{b}\,|\,\vec{x}) = \frac{1}{Z} \exp(-\alpha \|A\vec{x}-\vec{b}\|_p^p),$   (2)

where $Z$ denotes the partition function, $\alpha$ denotes the (variance) parameter of the density function, and $\|\vec{r}\|_p$ denotes the $L_p$-norm with $0 \le p \le 2$. More concretely, $\|\vec{r}\|_p^p$ is expressed as

$\|\vec{r}\|_p^p = \begin{cases} \sum_{i=1}^{I} |r_i|^p & (p \neq 0) \\ \lim_{\epsilon \to +0} \sum_{i=1}^{I} |r_i|^{\epsilon} & (p = 0) \end{cases},$   (3)

where $\vec{r} = (r_1, r_2, \ldots, r_I)^T$. It is well-known that the noise model corresponding to $p = 2$ is the independent Gaussian model, which is used when the additive noise $\vec{n}$ follows an independent Gaussian distribution with the same variance for all bins. Also, the noise model corresponding to $p = 1$ is the independent Laplacian model, which can handle heavy-tailed noise such as impulsive noise better than the Gaussian [12]. The most appropriate noise model for the above AEN model is the case of $p = 0$, for the following reason. In the case of $p = 0$ in Eq. (2), $\|A\vec{x}-\vec{b}\|_p^p$ inside $\exp(\cdot)$ becomes the so-called $L_0$-norm, which represents the number of non-zero elements in the noise vector $\vec{n} = \vec{b} - A\vec{x}$. Therefore, the corresponding likelihood $\Pr(\vec{b}\,|\,\vec{x})$ becomes large when $\vec{n} = \vec{b} - A\vec{x}$ has only a small number of non-zero elements, which matches the assumption of the AEN model. A similar discussion of this property of the $L_0$-norm can be found in much of the Compressed Sensing (CS) literature [13],[14]. Furthermore, by taking the negative logarithm of Eq. (2), we obtain the cost function of image reconstruction $f(\vec{x})$ to be minimized corresponding to the AEN model as

$f(\vec{x}) = -\log \Pr(\vec{b}\,|\,\vec{x}) = \|A\vec{x}-\vec{b}\|_0 + \mathrm{constant}$   ($L_0$-norm reconstruction).   (4)

Figure 1. Typical image degradations caused by the abnormal data bins in projection data. Top images show projection data, where the black line or dots show the abnormal bins. Bottom images show the corresponding reconstructed images.
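For concreteness, the AEN model above can be simulated in a few lines. The following sketch (our own illustration in NumPy; the function name and arguments are not from the paper) contaminates a noiseless sinogram $A\vec{x}$ in the random-bins scenario:

```python
import numpy as np

def add_abnormal_errors(proj, n_bad, m1, m2, rng=None):
    """Contaminate a noiseless sinogram with the AEN model.

    A small number of randomly chosen bins are replaced by values drawn
    uniformly from [A x|_i - m1, A x|_i + m2]; all other bins are untouched.
    Returns the degraded data and the (normally unknown) bad-bin indices.
    """
    rng = np.random.default_rng(rng)
    b = proj.ravel().copy()
    bad = rng.choice(b.size, size=n_bad, replace=False)  # locations unknown to the reconstructor
    b[bad] = b[bad] + rng.uniform(-m1, m2, size=n_bad)   # uniform abnormal values around the true data
    return b.reshape(proj.shape), bad
```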
However, as is well-known in the CS field [13],[14], the $L_0$-norm is a non-convex function whose exact minimization is of NP-hard complexity. Therefore, following the standard approximation used in the CS field, we replace the $L_0$-norm by the $L_1$-norm as

$f(\vec{x}) = \|A\vec{x}-\vec{b}\|_1$   ($L_1$-norm reconstruction).   (5)

This replacement is popular in the CS field and is called convex relaxation in the optimization literature [13],[14]. On the other hand, the standard cost function in the image reconstruction field is the $L_2$-norm expressed as

$f(\vec{x}) = \|A\vec{x}-\vec{b}\|_2^2$   ($L_2$-norm reconstruction).   (6)

Equation (6) is derived from the Gaussian noise model by putting $p = 2$ in Eq. (2), which is quite different from the AEN model. Therefore, in image reconstruction under the AEN model, we expect significant differences in image quality between Eq. (5) and Eq. (6). Indeed, in Section 3, we demonstrate that there exist surprisingly large differences between the $L_1$-norm reconstruction and the $L_2$-norm reconstruction.

One more remark is in order. The above derivation suggests that our method is the same as image reconstruction using the CS technique. In the CS field, however, the $L_1$-norm is normally used in regularization terms to pick out sparse solutions $\vec{x}$ from a large number of feasible solutions, whereas we use the $L_1$-norm for the data fidelity (likelihood) term. Such approaches are frequently used in statistics, computer vision, and data mining to mitigate the effect of outliers in data fitting, under various terminologies such as robust statistics, robust estimation, and outlier detection [3],[4],[15],[16]. However, to the best of our knowledge, there exists very little literature using this approach in tomographic image reconstruction.
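The robustness of the $L_1$ data fidelity can already be seen in a one-parameter toy problem: fitting a constant $x$ to measurements $b_i$ yields the mean under the $L_2$ cost but the median under the $L_1$ cost, so a single outlier shifts the $L_2$ estimate badly while leaving the $L_1$ estimate almost untouched. A minimal numerical sketch (our own illustration, not from the paper):

```python
import numpy as np

b = np.array([1.0, 1.02, 0.98, 1.01, 50.0])  # last bin is an abnormal error

x_l2 = b.mean()      # argmin_x sum (x - b_i)^2  -> pulled toward the outlier
x_l1 = np.median(b)  # argmin_x sum |x - b_i|    -> essentially ignores it

print(x_l2)  # 10.802
print(x_l1)  # 1.01
```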
2.2 $L_1$-Norm Reconstruction

The most significant contribution of this paper is a row-action-type iterative algorithm for the $L_1$-norm image reconstruction, which resembles the structure of the famous ART algorithm [7],[8]. The algorithm derivation is based on a framework called proximal splitting from the convex optimization field [5],[6]. Before going into the concrete derivation, we briefly review proximal splitting together with some related basic knowledge. Let us consider a convex minimization problem formulated as

$\min_{\vec{x}} f(\vec{x}),$   (7)

where we assume that $f(\vec{x})$ is a possibly non-differentiable lower semi-continuous (lsc) convex function.

[Proximity Operator and Proximal Algorithm] [17] The proximity (prox) operator corresponding to $f(\vec{x})$ is defined by

$\vec{x} = \mathrm{prox}_{\alpha f}(\vec{z}) \equiv \arg\min_{\vec{x}} \left( f(\vec{x}) + \frac{1}{2\alpha} \|\vec{x}-\vec{z}\|_2^2 \right),$   (8)

where the parameter $\alpha$ is called the stepsize parameter. Two nice properties of the prox operator are that it can be defined for any lsc convex function even if it is non-differentiable like the $L_1$-norm (including convex constraints), and that its fixed points, i.e. $\vec{x}$ satisfying $\vec{x} = \mathrm{prox}_{\alpha f}(\vec{x})$, coincide with the minimizers of $f(\vec{x})$ for any $\alpha > 0$. Furthermore, the prox operator is necessarily a non-expansive mapping. These properties allow us to use the following iteration formula to find a minimizer of $f(\vec{x})$:

$\vec{x}^{(k+1)} = \mathrm{prox}_{\alpha f}(\vec{x}^{(k)}),$   (9)

where $k$ denotes the iteration number. This iterative algorithm is called the proximal minimization algorithm, which provides a powerful framework for non-differentiable convex minimization.

[Proximal Splitting] [5],[6] Let us consider a convex minimization problem formulated as

$\min_{\vec{x}} f(\vec{x}) \equiv \sum_{i=1}^{I} f_i(\vec{x}),$   (10)

where $f_i(\vec{x})\ (i = 1, 2, \ldots, I)$ are possibly non-differentiable lsc component functions and $I \ge 2$.
We consider the situation where the prox operator corresponding to $f(\vec{x})$ is expensive to compute, but the prox operator corresponding to each $f_i(\vec{x})$ can be computed at low cost. Proximal splitting is a framework for constructing an iterative algorithm to minimize $f(\vec{x})$ under such situations. In most of the literature on proximal splitting, the number of component functions (the splitting number) $I$ is 2 [5]. In this paper, we employ a multi-splitting version, i.e. $I \ge 2$, proposed by Passty [6], in which the prox operators corresponding to $f_i(\vec{x})$ are applied sequentially in the order $i = 1, 2, \ldots, I$ to obtain the next iterate $\vec{x}^{(k+1)}$ from the previous iterate $\vec{x}^{(k)}$. The iteration formula of the Passty proximal splitting is summarized in Algorithm 1.

Algorithm 1: Passty Proximal Splitting Algorithm
Give an initial vector $\vec{x}^{(0,1)}$. Execute the following.
loop $k = 0, 1, 2, \ldots$
  loop $i = 1, 2, \ldots, I$
    $\vec{x}^{(k,i+1)} = \mathrm{prox}_{\alpha^{(k)} f_i}(\vec{x}^{(k,i)})$
  $\vec{x}^{(k+1,1)} = \vec{x}^{(k,I+1)}$

In Algorithm 1, $k$ denotes the main iteration number, $i$ denotes the sub-iteration number, and $\alpha^{(k)} > 0$ denotes the stepsize parameter at the $k$-th main iteration. With respect to the convergence of Algorithm 1, Passty proved the following theorem [6].

[Theorem] Consider the ergodic average of the iterates $\vec{x}^{(k',I+1)}\ (k' = 0, 1, 2, \ldots, k)$ in Algorithm 1 defined by

$\bar{\vec{x}}^{(k)} = \left. \sum_{k'=0}^{k} \alpha^{(k')} \vec{x}^{(k',I+1)} \right/ \sum_{k'=0}^{k} \alpha^{(k')}.$   (11)

Then, $\bar{\vec{x}}^{(k)}$ converges to a minimizer of $f(\vec{x})$ when a diminishing stepsize rule satisfying the following conditions is used:

$\alpha^{(k)} \to 0\ (k \to \infty),\quad \sum_{k=0}^{\infty} \alpha^{(k)} = \infty,\quad \sum_{k=0}^{\infty} (\alpha^{(k)})^2 < \infty.$   (12)

We note that the above convergence is called ergodic convergence, which is weaker than convergence of the sequence $\vec{x}^{(k,I+1)}$ itself to a minimizer of $f(\vec{x})$. In practice, however, we have never observed a situation in which the ergodic convergence occurs but the sequence $\vec{x}^{(k,I+1)}$ itself is not convergent. Therefore, we conjecture that the proposed algorithm can be used without anxiety about non-convergence.

The proposed row-action-type iterative algorithm for the $L_1$-norm reconstruction is derived from Algorithm 1 by the following steps. First, Eq. (5) is split into the sum of $I$ component cost functions as

$f(\vec{x}) \equiv \sum_{i=1}^{I} f_i(\vec{x}), \quad f_i(\vec{x}) = |\vec{a}_i^T \vec{x} - b_i|\ (i = 1, 2, \ldots, I),$   (13)

where $\vec{a}_i$ is the column vector corresponding to the $i$-th row of the system matrix $A$. Next, the prox operator corresponding to each $f_i(\vec{x})$ appearing in Algorithm 1 is calculated, which amounts to solving the following optimization problem:

$\vec{x}^{(k,i+1)} = \mathrm{prox}_{\alpha^{(k)} f_i}(\vec{x}^{(k,i)}) \equiv \arg\min_{\vec{x}} \left( |\vec{a}_i^T \vec{x} - b_i| + \frac{1}{2\alpha^{(k)}} \|\vec{x} - \vec{x}^{(k,i)}\|_2^2 \right).$   (14)

This problem can be solved in closed form by using the standard Lagrange multiplier technique [18],[19]. First, by introducing an additional variable $z$, the above minimization problem can be converted into the constrained minimization

$\arg\min_{(\vec{x},z)} \left( |z - b_i| + \frac{1}{2\alpha^{(k)}} \|\vec{x} - \vec{x}^{(k,i)}\|_2^2 \right)$ subject to $\vec{a}_i^T \vec{x} = z.$   (15)

The Lagrangian function corresponding to Eq. (15) is defined by

$L(\vec{x}, z, \lambda) = |z - b_i| + \frac{1}{2\alpha^{(k)}} \|\vec{x} - \vec{x}^{(k,i)}\|_2^2 + \lambda (\vec{a}_i^T \vec{x} - z),$   (16)

where $\lambda$ is the Lagrange multiplier, also called the dual variable. The dual problem corresponding to Eq. (16) is defined by

$\max_{\lambda} D(\lambda) \equiv \min_{(\vec{x},z)} L(\vec{x}, z, \lambda)$ subject to $\lambda \in \Omega$, $\quad \Omega = \{\lambda \mid D(\lambda) > -\infty\},$   (17)

where $D(\lambda)$ is the dual function and $\Omega$ is the domain on which $D(\lambda)$ is defined.
In the current case, $D(\lambda)$ and $\Omega$ can be explicitly calculated as

$D(\lambda) = \min_{\vec{x}} \min_{z} \left[ |z - b_i| + \frac{1}{2\alpha^{(k)}} \|\vec{x} - \vec{x}^{(k,i)}\|_2^2 + \lambda (\vec{a}_i^T \vec{x} - z) \right]$
$\quad = \min_{\vec{x}} \left( -\lambda b_i + \frac{1}{2\alpha^{(k)}} \|\vec{x} - \vec{x}^{(k,i)}\|_2^2 + \lambda \vec{a}_i^T \vec{x} \right)$
$\quad = -\lambda b_i - \frac{1}{2\alpha^{(k)}} \|\vec{x}^{(k,i)} - \lambda \alpha^{(k)} \vec{a}_i\|_2^2 + \frac{1}{2\alpha^{(k)}} \|\vec{x}^{(k,i)}\|_2^2,$
$\Omega = \{\lambda \mid -1 \le \lambda \le 1\},$   (18)

where the relation between the primal variable $\vec{x}$ and the dual variable $\lambda$ is expressed as

$\vec{x} = \vec{x}^{(k,i)} - \lambda \alpha^{(k)} \vec{a}_i.$   (19)

Some additional explanations of the above calculation are in order. To derive the second equation from the first equation in Eq. (18), we used the fact that $\min_z (|z - b_i| - \lambda z) = -\infty$ when $\lambda < -1$ or $\lambda > 1$, whereas $\min_z (|z - b_i| - \lambda z) = -\lambda b_i$ when $-1 \le \lambda \le 1$; hence the domain on which $D(\lambda)$ is defined becomes $\Omega = \{\lambda \mid -1 \le \lambda \le 1\}$. Furthermore, Eq. (19) is obtained from the minimization with respect to $\vec{x}$ in the second equation of Eq. (18). Therefore, by maximizing $D(\lambda)$ subject to the constraint $\lambda \in \Omega$, the solution to the dual problem (Eq. (17)) is given by

$\lambda = \begin{cases} -1 & (q \le -1) \\ q & (-1 < q < 1) \\ 1 & (q \ge 1) \end{cases}, \quad q = -\frac{b_i - \vec{a}_i^T \vec{x}^{(k,i)}}{\alpha^{(k)} \|\vec{a}_i\|_2^2}.$   (20)

Substituting Eq. (20) into Eq. (19) yields the closed-form expression of $\vec{x}^{(k,i+1)} = \mathrm{prox}_{\alpha^{(k)} f_i}(\vec{x}^{(k,i)})$.

With respect to the selection of the stepsize parameter $\alpha^{(k)}$, we use a diminishing stepsize rule satisfying Eq. (12), expressed as

$\alpha^{(k)} = \frac{\alpha_0}{1 + \epsilon k},$   (21)

where $(\alpha_0 > 0, \epsilon > 0)$ are pre-specified parameters of the stepsize control.
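In code, the closed-form prox update of Eqs. (19)-(20) is a clipped ART-like correction. A minimal sketch (our own notation; `a_i`, `b_i`, and `alpha` stand for $\vec{a}_i$, $b_i$, and $\alpha^{(k)}$):

```python
import numpy as np

def prox_l1_row(x, a_i, b_i, alpha):
    """One row-wise prox step for f_i(x) = |a_i^T x - b_i| (Eqs. (19)-(20)).

    q is an ART-like correction scale; clipping it to [-1, 1] limits the
    influence of any single (possibly abnormal) measurement b_i.
    """
    q = -(b_i - a_i @ x) / (alpha * (a_i @ a_i))
    lam = np.clip(q, -1.0, 1.0)   # dual solution of Eq. (20)
    return x - lam * alpha * a_i  # primal update of Eq. (19)
```

Note that when $|q| < 1$ the update reduces exactly to the ART (Kaczmarz) projection onto the hyperplane $\vec{a}_i^T \vec{x} = b_i$; the clipping is what caps the influence of abnormal bins.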
Using the obtained expression of the prox operator in the Passty proximal splitting, the proposed algorithm is summarized in Algorithm 2.

Algorithm 2: $L_1$-Norm Reconstruction
Set the stepsize control parameters $(\alpha_0 > 0, \epsilon > 0)$. Give an initial vector $\vec{x}^{(0,1)}$. Execute the following steps for $k = 0, 1, 2, \ldots$.
[Step 1] Set the stepsize parameter $\alpha^{(k)} = \dfrac{\alpha_0}{1 + \epsilon k}$.
[Step 2] Execute the following steps for $i = 1, 2, \ldots, I$.
(2.1) Compute $\lambda$:
  $q = -\dfrac{b_i - \vec{a}_i^T \vec{x}^{(k,i)}}{\alpha^{(k)} \|\vec{a}_i\|_2^2}, \quad \lambda = \begin{cases} -1 & (q \le -1) \\ q & (-1 < q < 1) \\ 1 & (q \ge 1) \end{cases}$
(2.2) Update $\vec{x}$:
  $\vec{x}^{(k,i+1)} = \vec{x}^{(k,i)} - \lambda \alpha^{(k)} \vec{a}_i$
[Step 3] $\vec{x}^{(k+1,1)} = \vec{x}^{(k,I+1)}$

Algorithm 2 possesses a row-action-type structure similar to the ART algorithm, in which each image update is performed along the direction of the $i$-th row $\vec{a}_i$ of the system matrix [20]. In the tomographic reconstruction field, there is a strong preference for such iterative algorithms, because they can be implemented easily thanks to needing access to only a single row of $A$ for each image update. Furthermore, it is known that such algorithms can be accelerated by using the special data access order described by Herman and Meyer [8]. We use this data access order in our implementations.

2.3 $L_2$-Norm Reconstruction

The above algorithm derivation can also be used to develop an analogous row-action-type algorithm for the $L_2$-norm reconstruction in Eq. (6). A comparison between the iteration formula for the $L_2$-norm reconstruction and that for the $L_1$-norm reconstruction provides a useful insight, together with a nice intuitive interpretation of the proposed algorithm. In this section, we perform such a comparison. First, by applying the same algorithm derivation outlined above to the $L_2$-norm cost function in Eq. (6), we obtain the following iteration formula:

$\lambda = \frac{-2(b_i - \vec{a}_i^T \vec{x}^{(k,i)})}{1 + 2\alpha^{(k)} \|\vec{a}_i\|_2^2}, \quad \vec{x}^{(k,i+1)} = \vec{x}^{(k,i)} - \lambda \alpha^{(k)} \vec{a}_i.$   (22)

Figure 2. Definitions of the horizontal difference $\vec{h}_j^T \vec{x}$ and the vertical difference $\vec{v}_j^T \vec{x}$ used in the TV norm.

The iteration formulae for both the $L_1$-norm and $L_2$-norm reconstructions possess a row-action-type structure [20]. The major difference lies in how the value of the variable $\lambda$, which can be interpreted as the amount of update along the vector $\vec{a}_i$, is computed. From the description of Algorithm 2, in the $L_1$-norm reconstruction the value $\lambda$ is truncated to $-1$ or $1$ when the residual error $r_i = |b_i - \vec{a}_i^T \vec{x}^{(k,i)}|$ is large. On the other hand, from Eq. (22), no such truncation is performed in the $L_2$-norm reconstruction at all. Intuitively, this truncation can be considered a judgement of whether the measured data $b_i$ contains an abnormal error or not. When $r_i$ is large, it can be expected that $b_i$ is contaminated by an abnormal error, so the amount of update $\lambda$ is truncated to a small value to avoid overfitting to the erroneous data. On the other hand, when $r_i$ is small, it is expected that $b_i$ is free from abnormal error, so the amount of update is used without truncation. Since the $L_2$-norm reconstruction is based on the independent Gaussian noise model, such a truncation operation is unnecessary there, because every projection data value is assumed to be contaminated with noise having the same standard deviation. In conclusion, from the above discussion, we expect that the $L_1$-norm reconstruction is robust against the existence of abnormal data in the projection data. We demonstrate that the difference between the $L_1$-norm reconstruction and the $L_2$-norm reconstruction is surprisingly large in Section 3.
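The complete row-action loop of Algorithm 2 can be sketched as follows (an illustrative NumPy implementation under our own naming; rows are visited in natural order here instead of the Herman-Meyer access order [8] used in the paper):

```python
import numpy as np

def l1_reconstruction(A, b, n_iter=50, alpha0=1.0, eps=0.1):
    """Row-action L1-norm reconstruction (a sketch of Algorithm 2).

    A           : (I, J) system matrix with rows a_i
    b           : (I,) measured projection data, possibly with abnormal bins
    alpha0, eps : stepsize control parameters of Eq. (21)
    """
    I, J = A.shape
    x = np.zeros(J)
    row_norm2 = np.einsum('ij,ij->i', A, A)    # ||a_i||^2, precomputed
    for k in range(n_iter):
        alpha = alpha0 / (1.0 + eps * k)       # diminishing stepsize, Eq. (21)
        for i in range(I):
            q = -(b[i] - A[i] @ x) / (alpha * row_norm2[i])
            lam = np.clip(q, -1.0, 1.0)        # Eq. (20): truncation rejects outliers
            x -= lam * alpha * A[i]            # Eq. (19)
    return x
```

For the $L_2$ variant of Eq. (22), the `q`/`lam` lines would simply be replaced by `lam = -2 * (b[i] - A[i] @ x) / (1 + 2 * alpha * row_norm2[i])`, i.e. with no truncation.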
2.4 $L_1$-TV Reconstruction

The power of the proposed algorithm (Algorithm 2) to remove artifacts can be strengthened by adding a weak Total Variation (TV) penalty to the cost function $f(\vec{x})$ [9]-[11]. The reason is that the TV penalty imposed on $\vec{x}$ helps to eliminate streak artifacts generated by the abnormal data in image space, in addition to identifying and excluding the abnormal data in projection space. We call this modified algorithm the $L_1$-TV reconstruction, in which the cost function $f(\vec{x})$ is expressed as

$f(\vec{x}) = \beta \|\vec{x}\|_{TV} + \|A\vec{x}-\vec{b}\|_1$   ($L_1$-TV reconstruction),   (23)

where $\beta > 0$ is a hyperparameter controlling the strength of the penalty, and $\|\vec{x}\|_{TV}$ is the TV norm defined by

$\|\vec{x}\|_{TV} \equiv \sum_{j=1}^{J} \sqrt{(\vec{h}_j^T \vec{x})^2 + (\vec{v}_j^T \vec{x})^2}.$   (24)

In Eq. (24), $\vec{h}_j^T \vec{x}$ and $\vec{v}_j^T \vec{x}$ are inner-product representations of the finite difference operations around the $j$-th pixel along the horizontal and vertical directions, respectively. See Fig. 2 for the detailed definitions of $\vec{h}_j$ and $\vec{v}_j$.

The iterative algorithm to minimize Eq. (23) can be derived based on proximal splitting as follows. First, we split the cost function $f(\vec{x})$ into the sum of $I + 1$ component cost functions as

$f(\vec{x}) \equiv \sum_{i=1}^{I+1} f_i(\vec{x}), \quad f_i(\vec{x}) = |\vec{a}_i^T \vec{x} - b_i|\ (i = 1, 2, \ldots, I), \quad f_{I+1}(\vec{x}) = \beta \|\vec{x}\|_{TV}.$   (25)

Applying the Passty proximal splitting to the split in Eq. (25) leads to the iterative algorithm summarized in Algorithm 3, which is similar to Algorithm 2.

Algorithm 3: $L_1$-TV Reconstruction
Set the hyperparameter $\beta > 0$. Set the stepsize control parameters $(\alpha_0 > 0, \epsilon > 0)$. Give an initial vector $\vec{x}^{(0,1)}$. Execute the following steps for $k = 0, 1, 2, \ldots$.
[Step 1] Set the stepsize parameter $\alpha^{(k)} = \dfrac{\alpha_0}{1 + \epsilon k}$.
[Step 2] Execute the following steps for $i = 1, 2, \ldots, I$.
(2.1) Compute $\lambda$:
  $q = -\dfrac{b_i - \vec{a}_i^T \vec{x}^{(k,i)}}{\alpha^{(k)} \|\vec{a}_i\|_2^2}, \quad \lambda = \begin{cases} -1 & (q \le -1) \\ q & (-1 < q < 1) \\ 1 & (q \ge 1) \end{cases}$
(2.2) Update $\vec{x}$:
  $\vec{x}^{(k,i+1)} = \vec{x}^{(k,i)} - \lambda \alpha^{(k)} \vec{a}_i$
[Step 3] Compute the proximity operator for the TV term:
  $\vec{x}^{(k,I+2)} = \mathrm{prox}_{\alpha^{(k)} \beta \|\cdot\|_{TV}}(\vec{x}^{(k,I+1)}) = \arg\min_{\vec{x}} \left( \beta \|\vec{x}\|_{TV} + \frac{1}{2\alpha^{(k)}} \|\vec{x} - \vec{x}^{(k,I+1)}\|_2^2 \right)$
[Step 4] $\vec{x}^{(k+1,1)} = \vec{x}^{(k,I+2)}$

In Algorithm 3, we need to compute the prox operator corresponding to the TV term (Step 3). This minimization problem is the same as that appearing in so-called TV denoising (the ROF model), so we can use a standard algorithm such as Chambolle's projection algorithm [9],[10]. In our implementation, we used Chambolle's projection algorithm.
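Step 3 of Algorithm 3 is an ROF denoising subproblem with effective weight $\theta = \alpha^{(k)}\beta$. A compact sketch of Chambolle's projection algorithm [10] is given below; the fixed inner iteration count and step $\tau = 1/8$ are our own choices, and boundary handling follows the usual forward-difference/Neumann convention:

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with Neumann boundary, shape (2, H, W)."""
    g = np.zeros((2,) + u.shape)
    g[0, :-1, :] = u[1:, :] - u[:-1, :]   # vertical differences
    g[1, :, :-1] = u[:, 1:] - u[:, :-1]   # horizontal differences
    return g

def div(p):
    """Divergence, the negative adjoint of grad."""
    d = np.zeros(p.shape[1:])
    d[:-1, :] += p[0, :-1, :]; d[1:, :] -= p[0, :-1, :]
    d[:, :-1] += p[1, :, :-1]; d[:, 1:] -= p[1, :, :-1]
    return d

def prox_tv(g, theta, n_iter=30, tau=0.125):
    """Chambolle's projection algorithm for argmin_x ||x||_TV + ||x - g||^2 / (2*theta)."""
    p = np.zeros((2,) + g.shape)              # dual field
    for _ in range(n_iter):
        q = grad(div(p) - g / theta)
        p = (p + tau * q) / (1.0 + tau * np.sqrt(q[0]**2 + q[1]**2))
    return g - theta * div(p)                 # primal solution from the dual field
```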
3. SIMULATION STUDIES

We performed simulation studies to demonstrate the performance of the proposed $L_1$-norm and $L_1$-TV reconstruction algorithms. We considered three typical scenarios, summarized below. In all the scenarios, projection data corresponding to erroneous bins were replaced by values selected from a pre-specified range $[A\vec{x}|_i - m_1, A\vec{x}|_i + m_2]$, where $m_1 > 0$ and $m_2 > 0$. The phantom image was discretized into 320 $\times$ 320 pixels, and the simulated projection data was computed with a sampling of 320 (angles) $\times$ 320 (bins). We compared the following four reconstruction methods.

[$L_2$-Norm Reconstruction] The $L_2$-norm reconstruction of Eq. (22) was implemented with an iteration number of 50.

[Projection Space Median Filter] This is an empirical method to remove the effect of abnormal errors. A median filter is applied to the degraded projection data to remove the abnormal errors, followed by the $L_2$-norm reconstruction. The window size of the median filter was determined empirically for each case in such a way that the visual quality of the reconstructed image was best.

[$L_1$-Norm Reconstruction] The $L_1$-norm reconstruction (Algorithm 2) was implemented with an iteration number of 50.

[$L_1$-TV Reconstruction] The $L_1$-TV reconstruction (Algorithm 3) was implemented with an iteration number of 50.

We show the reconstructed images together with the degraded projection data for all the scenarios in Figs. 3-5. In all the scenarios, the images from the $L_2$-norm reconstruction were severely damaged by the abnormal errors, with the artifact patterns, i.e. streaks, random errors, etc., depending on the locations of the abnormal bins. The empirical projection space median filtering succeeded in reducing the artifacts, but the filtering also affects the correct data, so some additional artifacts can be observed in the final images. On the other hand, for the cases of Detector Error 1, Angle Error 1, and Random Error 1 with relatively mild errors, the power of identifying the abnormal bins in both the $L_1$-norm and $L_1$-TV reconstructions was significant, and they succeeded in reconstructing almost perfect images. However, the difference between the $L_1$-norm reconstruction and the $L_1$-TV reconstruction became apparent in the more difficult cases of Detector Error 2, Angle Error 2, and Random Error 2. In these cases, the $L_1$-TV reconstruction correctly identified most of the abnormal bins, whereas the $L_1$-norm reconstruction did not succeed perfectly. With respect to convergence speed, the $L_1$-norm and $L_1$-TV reconstructions were a bit slower than the $L_2$-norm reconstruction and the standard ART algorithm, mainly because early iterations need to be spent correctly identifying the locations of the abnormal bins. However, thanks to their row-action structures, they still appear to be significantly faster than standard $L_1$-norm and $L_1$-TV minimization algorithms (for example, the popular iterative reweighted least-squares method [3],[4]) that could be used for the same cost functions.

Figure 3. Projection data and corresponding reconstructed images for the scenarios of Detector Error 1 and Detector Error 2.

Figure 4. Projection data and reconstructed images for the scenarios of Angle Error 1 and Angle Error 2.

Figure 5. Projection data and reconstructed images for the scenarios of Random Error 1 and Random Error 2.
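As a usage illustration only, the sketches above can be chained into a toy version of this comparison. This assumes the hypothetical helpers `add_abnormal_errors` and `l1_reconstruction` defined earlier, and substitutes a random overdetermined matrix for a real tomographic projector:

```python
import numpy as np

# Toy system: a random matrix stands in for a real projector A.
rng = np.random.default_rng(0)
I_rows, J_cols = 800, 200
A = rng.random((I_rows, J_cols)) / J_cols
x_true = rng.random(J_cols)

b_clean = A @ x_true
b_bad, bad_idx = add_abnormal_errors(b_clean, n_bad=5, m1=5.0, m2=5.0)

x_l1 = l1_reconstruction(A, b_bad, n_iter=50)
print("relative L1 reconstruction error:",
      np.linalg.norm(x_l1 - x_true) / np.linalg.norm(x_true))
```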
4. CONCLUSIONS
In this paper, we proposed a new image reconstruction algorithm called the Fault-Tolerant reconstruction for the tomographic imaging situation where a small number of projection data bins are contaminated by abnormal errors occurring during the measurement for various physical reasons. More specifically, we derived two row-action-type iterative algorithms called the $L_1$-norm and $L_1$-TV reconstructions. In the simulation studies, compared with naive approaches such as the $L_2$-norm reconstruction and the projection space median filter (followed by the $L_2$-norm reconstruction), we demonstrated that the proposed algorithms are robust to the existence of abnormal projection data bins. We believe that this work is valuable because, to the best of our knowledge, there exists very little prior work handling tomographic reconstruction under the existence of abnormal errors.

ACKNOWLEDGMENTS
This work was partially supported by JSPS KAKENHI Grant Number 15K06103.
REFERENCES

[1] P. Gilbert, "Iterative methods for the three-dimensional reconstruction of an object from projections," J. Theor. Biol., Vol. 36, pp. 105-117, 1972.
[2] S. Kawata and O. Nalcioglu, "Constrained iterative reconstruction by the conjugate gradient method," IEEE Trans. Med. Imaging, Vol. 4, pp. 65-71, 1985.
[3] P. J. Huber and E. M. Ronchetti, "Robust Statistics," Wiley, 2009.
[4] R. A. Maronna, D. R. Martin, and V. J. Yohai, "Robust Statistics: Theory and Methods," Wiley, 2006.
[5] P. L. Combettes and J.-C. Pesquet, "Proximal splitting methods in signal processing," in Fixed-Point Algorithms for Inverse Problems in Science and Engineering, Springer, pp. 185-212, 2011.
[6] G. B. Passty, "Ergodic convergence to a zero of the sum of monotone operators in Hilbert space," J. Math. Anal. Appl., Vol. 72, pp. 383-390, 1979.
[7] R. Gordon, R. Bender, and G. T. Herman, "Algebraic Reconstruction Techniques (ART) for three-dimensional electron microscopy and x-ray photography," J. Theor. Biol., Vol. 29, pp. 471-481, 1970.
[8] G. T. Herman and L. B. Meyer, "Algebraic reconstruction techniques can be made computationally efficient," IEEE Trans. Med. Imaging, Vol. 12, pp. 600-609, 1993.
[9] L. I. Rudin, S. Osher, and E. Fatemi, "Nonlinear total variation based noise removal algorithms," Physica D, Vol. 60, pp. 259-268, 1992.
[10] A. Chambolle, "An algorithm for total variation minimization and applications," J. Math. Imaging Vis., Vol. 20, pp. 89-97, 2004.
[11] E. Y. Sidky and X. Pan, "Image reconstruction in circular cone-beam computed tomography by constrained, total-variation minimization," Phys. Med. Biol., Vol. 53, pp. 4777-4807, 2008.
[12] J. Astola and P. Kuosmanen, "Fundamentals of Nonlinear Digital Filtering," CRC Press, 1997.
[13] M. Elad, "Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing," Springer, 2010.
[14] I. Rish and G. Grabarnik, "Sparse Modeling: Theory, Algorithms, and Applications," CRC Press, 2014.
[15] C. V. Stewart, "Robust parameter estimation in computer vision," SIAM Review, Vol. 41, pp. 513-537, 1999.
[16] C. C. Aggarwal, "Outlier Analysis," Springer, 2013.
[17] N. Parikh and S. Boyd, "Proximal algorithms," Foundations and Trends in Optimization, Now Publishers, 2013.
[18] D. P. Bertsekas, "Nonlinear Programming," Athena Scientific, 1999.
[19] R. T. Rockafellar, "Convex Analysis," Princeton University Press, 1996.
[20] Y. Censor, "Row-action methods for huge and sparse systems and their applications," SIAM Review, Vol. 23, pp. 444-466, 1981.