Regularization by Discretization in Banach Spaces
Uno Hämarik, Barbara Kaltenbacher, Urve Kangro, Elena Resmerita
Institute of Mathematics, University of Tartu, Estonia
Institute of Mathematics, Alpen-Adria-Universität Klagenfurt, Austria
E-mail: [email protected], [email protected], [email protected], [email protected]
Abstract.
We consider ill-posed linear operator equations with operators acting between Banach spaces. For solution approximation, the methods of choice here are projection methods onto finite dimensional subspaces, thus extending existing results from Hilbert space settings. More precisely, general projection methods, the least squares method and the least error method are analyzed. In order to appropriately choose the dimension of the subspace, we consider a priori and a posteriori choices by the discrepancy principle and by the monotone error rule. Analytical considerations and numerical tests are provided for a collocation method applied to a Volterra integral equation in one space dimension.

PACS numbers: 02.30.Zz, 02.30.Rz, 02.60.Cb
Appeared in:
Inverse Problems
1. Introduction
Consider an ill-posed linear operator equation
$$Au = f \quad (1.1)$$
with $A \in \mathcal{L}(E,F)$ mapping between nontrivial Banach spaces $E$ and $F$. In practice only noisy data $f^\delta$ will be given. We assume here that the noise level $\delta$ satisfying
$$\|f^\delta - f\|_F \le \delta \quad (1.2)$$
is known and consider convergence of regularized solutions to an exact solution $u_*$ of (1.1) as $\delta$ goes to zero.

Regularization by projection onto finite dimensional subspaces of $E$ and/or $F$ has been studied in detail, e.g., in [9, 10, 11, 12, 14, 17, 21] in the Hilbert space setting. Here the dimension of the projection spaces plays the role of a regularization parameter. The error estimates of [9, 15, 17] allow for an a priori choice of this dimension; in [10, 12, 14, 21] also an a posteriori choice of the dimension is considered. Our aim is to extend these results (or at least part of them) to the general Banach space setting. This is motivated, e.g., by the use of $L^p$ spaces with $p \neq 2$ to recover sparse solutions or to model uniform or impulsive noise. Also the space $C(\bar\Omega)$ of continuous functions on some domain $\Omega$ and its dual $\mathcal{M}(\Omega)$ are of particular interest, since our setting then allows to analyze, e.g., collocation of integral equations as a regularization method. Note that some results regarding regularization by discretization in Banach spaces are known in a general setting (see [15] and [20]), as well as about the quadrature formulae method (see [2, 4]), the collocation method (see, e.g., [4, 7, 8, 15]) and the Galerkin method (see [4]).

Let $E_n \subseteq E$, $Z_n \subseteq F^*$, $n \in \mathbb{N}$, be finite dimensional nontrivial subspaces which have the role of approximating the spaces $E$ and $F^*$, respectively. For instance, the subspaces can be chosen in the following manner, as will be emphasized later:
$$\forall n \in \mathbb{N}: \ E_n \subseteq E_{n+1} \ \text{ and } \ \overline{\bigcup_{n \in \mathbb{N}} E_n} = E, \quad (1.3)$$
$$\forall n \in \mathbb{N}: \ Z_n \subseteq Z_{n+1} \ \text{ and } \ \overline{\bigcup_{n \in \mathbb{N}} Z_n} = F^*. \quad (1.4)$$

The general projection method defines a finite dimensional approximation $u_n$ to $u_*$ by
$$u_n \in E_n \ \text{ and } \ \forall z_n \in Z_n: \ \langle z_n, Au_n \rangle_{F^*,F} = \langle z_n, f^\delta \rangle_{F^*,F}. \quad (1.5)$$
As in the Hilbert space case, the least squares method
$$u_n \in \operatorname{argmin}\{ \|A\tilde u_n - f^\delta\|_F : \tilde u_n \in E_n \} \quad (1.6)$$
and the least error method
$$u_n \in \operatorname{argmin}\{ \|\tilde u\|_E : \forall z_n \in Z_n: \ \langle z_n, A\tilde u \rangle_{F^*,F} = \langle z_n, f^\delta \rangle_{F^*,F} \} \quad (1.7)$$
will be considered. In the following, $P_n : E \to E_n$ denotes some projection. For drawing certain conclusions, this will sometimes be assumed to have the properties
$$(I - P_n)^2 = I - P_n \ \text{ and } \ \forall x \in E, \lambda \in \mathbb{R}: \ \|(I - P_n)(\lambda x)\| = |\lambda|\,\|(I - P_n)(x)\|. \quad (1.8)$$
As opposed to the Hilbert space setting, $P_n$ is not necessarily linear any more.

Remark 1.1. i) One can use the metric projection operator $P_n : E \to E_n$,
$$P_n(w) = \operatorname{argmin}\{ \|w - w_n\|_E : w_n \in E_n \},$$
in case it is single valued (as happens in strictly convex Banach spaces), but it can also be some differently defined projection operator; for an example see Section 6 below. Note that the metric projection $P_n$ is obviously homogeneous and idempotent and does fulfill $(I - P_n)^2 = I - P_n$, as one can see in what follows: $(I - P_n)^2(u) = (I - P_n)(u)$ if and only if $u - P_n(u) - P_n(u - P_n(u)) = u - P_n(u)$, which is equivalent to $P_n(u - P_n(u)) = 0$. Indeed,
$$\|u - P_n(u) - 0\|_E \le \|u - (P_n(u) + v_n)\|_E = \|u - P_n(u) - v_n\|_E \ \text{ for all } v_n \in E_n,$$
since $P_n(u) + v_n \in E_n$ and $P_n(u)$ is a best approximation to $u$ in $E_n$.
ii) In general, single valued metric projections onto finite dimensional subspaces of a Banach space $X$ are nonlinear; otherwise $X$ would be linearly isometric to an inner product space, cf., e.g., [13, p. 210].

Let $Q_n$ be the linear operator defined by
$$Q_n : F \to Z_n^*, \quad \forall g \in F, z_n \in Z_n: \ \langle Q_n g, z_n \rangle_{Z_n^*,Z_n} = \langle z_n, g \rangle_{F^*,F},$$
which allows to write (1.5) as
$$u_n \in E_n \ \text{ and } \ Q_n A u_n = Q_n f^\delta.$$
(1.9)

The norm of $Q_n$ equals one, since
$$\|Q_n\| = \sup_{g \in F, \|g\|_F = 1} \|Q_n g\|_{Z_n^*} = \sup_{\substack{g \in F, \|g\|_F = 1 \\ z_n \in Z_n, \|z_n\|_{F^*} = 1}} \langle Q_n g, z_n \rangle_{Z_n^*,Z_n} = \sup_{\substack{g \in F, \|g\|_F = 1 \\ z_n \in Z_n, \|z_n\|_{F^*} = 1}} \langle z_n, g \rangle_{F^*,F} = 1. \quad (1.10)$$
Moreover, $Q'_n$ will stand for the metric projection onto the subspace $AE_n$ (or a single valued choice of the metric projection in case it is multivalued), whenever $E_n$ is a linear subspace of $E$, so that (1.6) can be rewritten as $Au_n = Q'_n f^\delta$.

In the Hilbert space setting one works with orthogonal projections onto $E_n$ and $Z_n$, respectively. This can be extended to the Banach space setting under certain conditions. For this purpose we will make use of the duality mappings
$$J_q^{F \to F^*} = \partial\bigl(\tfrac{1}{q}\|\cdot\|_F^q\bigr) = \partial\Phi_F, \qquad J_{q^*}^{F^* \to F^{**}} = \partial\bigl(\tfrac{1}{q^*}\|\cdot\|_{F^*}^{q^*}\bigr), \quad q^* = \tfrac{q}{q-1}, \qquad J_q^{E \to E^*} = \partial\bigl(\tfrac{1}{q}\|\cdot\|_E^q\bigr) = \partial\Phi_E, \quad (1.11)$$
cf., e.g., [6, Chapters I-II]. Moreover, we will make use of the Bregman distance induced by the functional $\Phi_E = \tfrac{1}{q}\|\cdot\|_E^q$, which in case of a single valued duality mapping $J_q^{E \to E^*}$ is defined by
$$D_q(\tilde u, u) = \tfrac{1}{q}\|\tilde u\|_E^q - \tfrac{1}{q}\|u\|_E^q + \langle J_q^{E \to E^*}(u), u - \tilde u \rangle_{E^*,E}. \quad (1.12)$$
We will also use the symmetric Bregman distance
$$D_q^{\mathrm{sym}}(\tilde u, u) = D_q(\tilde u, u) + D_q(u, \tilde u) = \langle J_q^{E \to E^*}(u) - J_q^{E \to E^*}(\tilde u), u - \tilde u \rangle_{E^*,E} \quad (1.13)$$
and the identity
$$D_{q^*}\bigl(J_q^{E \to E^*}(u), J_q^{E \to E^*}(\tilde u)\bigr) = D_q(\tilde u, u), \quad (1.14)$$
provided that $\forall u \in E: \ J_{q^*}^{E^* \to E^{**}}(J_q^{E \to E^*}(u)) = u$ holds, cf. [19, Lemma 2.63]. Note that in smooth and uniformly convex spaces (such as $L^p$ with $p \in (1,\infty)$), convergence with respect to the Bregman distance implies convergence with respect to the norm and vice versa, cf., e.g., [19, Theorem 2.60]:
$$D_q(u_*, u_k) \to 0 \ \Leftrightarrow \ D_q(u_k, u_*) \to 0 \ \Leftrightarrow \ \|u_k - u_*\|_E \to 0 \quad \text{as } k \to \infty. \quad (1.15)$$
While tools like the Bregman distance have only relatively recently been applied in the context of regularization, some of the fundamental concepts we use are still those from the seminal papers [15, 21].
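For concreteness, the duality mapping (1.11) and the Bregman distance (1.12) can be sketched in a finite dimensional stand-in for $L^p$, with the vector $p$-norm on $\mathbb{R}^n$ replacing the $L^p$ norm. This is an illustrative sketch only; the function names are ours, not the paper's:

```python
import numpy as np

def J_q(u, p, q):
    # Duality mapping J_q = grad((1/q)||.||_p^q) on R^n with the p-norm,
    # a finite dimensional stand-in for (1.11); single valued for p in (1, inf).
    nrm = np.linalg.norm(u, p)
    if nrm == 0.0:
        return np.zeros_like(u)
    return nrm ** (q - p) * np.sign(u) * np.abs(u) ** (p - 1)

def bregman(u_tilde, u, p, q):
    # Bregman distance D_q(u_tilde, u) induced by Phi(u) = (1/q)||u||_p^q,
    # cf. (1.12): Phi(u_tilde) - Phi(u) - <J_q(u), u_tilde - u>.
    phi = lambda v: np.linalg.norm(v, p) ** q / q
    return phi(u_tilde) - phi(u) - J_q(u, p, q) @ (u_tilde - u)
```

For $p = q = 2$ the duality mapping is the identity and $D_2(\tilde u, u) = \tfrac12\|\tilde u - u\|_2^2$, which recovers the familiar Hilbert space case.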
In [15], which partly also works with general Banach spaces, error estimates for the general projection method (1.5) rely on the norms of the linear operator $B_n : F \to E_n$ mapping $f^\delta$ to a solution of (1.5), as well as the special projection $\tilde P_n : E \to E_n$, $\tilde P_n u = B_n A u$. Note that well-definedness of these operators can be shown under certain conditions, see, e.g., (2.1), (2.2) below. As a matter of fact, it is readily checked that for $u_n$ defined by (1.5), the error estimate
$$\|u_* - u_n\| \le (1 + \|\tilde P_n\|)\,\operatorname{dist}(u_*, E_n) + \|B_n\|\,\delta$$
holds, which splits the total error into an approximation error term and a term bounding the noise propagation. Error estimates of this type will enable the construction of convergent parameter choice rules also here, and the concepts of quasi-optimality (uniform boundedness of $\|\tilde P_n\|$) and robustness (uniform boundedness of $\|B_n\|\,\alpha_n^{-1}$ with $\alpha_n = \sup_{u_n \in E_n, \|A u_n\| = 1} \|u_n\|$) can be recovered in the boundedness conditions (2.12), (2.20), (2.21), (3.6), (3.8).

Note that computing general projection, least squares and least error approximations in general Banach spaces might not be trivial. The reader is referred, e.g., to [20, Section 3], [18] for some iterative methods (Landweber type, sequential subspace optimization) in uniformly convex and smooth Banach spaces.

This work is organized as follows. Well-definedness, stability and convergence with a priori and a posteriori choices of the dimension parameter are shown for the general projection method, the least squares method and the least error method in Sections 2, 3 and 4, respectively. This theory has, of course, its limitations and can approach problems in various couples of smaller or larger function spaces $E$ and $F$, as shortly outlined in Section 5. Some applications are discussed in Section 6.
Namely, analytical considerations and numerical tests are provided for a collocation method applied to a Volterra integral equation in one space dimension.
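As a toy preview of that application, the Volterra operator $(Au)(t) = \int_0^t u(s)\,ds$ on $[0,1]$ can be discretized by a midpoint quadrature rule and collocated at the grid points $t_j = j/n$; the rows of the resulting matrix play the role of the collocation functionals spanning $Z_n$. This is an illustrative sketch, not the scheme of Section 6:

```python
import numpy as np

def volterra_collocation_matrix(n):
    # Midpoint-rule discretization of (Au)(t) = integral_0^t u(s) ds,
    # collocated at t_j = j/n with quadrature nodes s_i = (i - 1/2)/n:
    # (Au)(t_j) ~ (1/n) * sum_{i <= j} u(s_i), a lower triangular matrix.
    return np.tril(np.ones((n, n))) / n
```

The matrix is lower triangular and invertible for every $n$, yet increasingly ill-conditioned as $n$ grows, which is the finite dimensional footprint of the ill-posedness of (1.1).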
2. The general projection method
Throughout this section, $E_n \subseteq E$ and $Z_n \subseteq F^*$, $n \in \mathbb{N}$, are finite dimensional subspaces. The following lemma gives conditions for well-definedness of $u_n$ according to (1.5).

Lemma 2.1.
Let
$$\dim(E_n) = \dim(Z_n) \quad (2.1)$$
and
$$N(Q_n A) \cap E_n = \{0\} \quad (2.2)$$
hold. Then (1.5) is uniquely solvable for any $f^\delta \in F$.

Proof. Since (1.5) is a finite dimensional linear system, with (2.1), unique solvability for any right hand side is equivalent to uniqueness, i.e., to the condition
$$\bigl( w_n \in E_n \ \text{ and } \ \forall z_n \in Z_n: \ \langle z_n, A w_n \rangle_{F^*,F} = 0 \bigr) \ \Rightarrow \ w_n = 0.$$
This is the same as (2.2). $\Box$

For stating stability we will make use of the following quantity:
$$\tilde\kappa_n := \sup_{w_n \in E_n,\, w_n \neq 0} \frac{\|w_n\|_E}{\sup_{z_n \in Z_n, \|z_n\|_{F^*}=1} \langle z_n, A w_n \rangle_{F^*,F}} = \frac{1}{\min\limits_{w_n \in E_n, \|w_n\|_E=1} \ \max\limits_{z_n \in Z_n, \|z_n\|_{F^*}=1} \langle z_n, A w_n \rangle_{F^*,F}} = \frac{1}{\min\limits_{w_n \in E_n, \|w_n\|_E=1} \|Q_n A w_n\|_{Z_n^*}}. \quad (2.3)$$
Whenever the quantity $\tau_n$ defined as
$$\tau_n := \sup_{w_n \in E_n,\, w_n \neq 0} \frac{\|A w_n\|_F}{\|Q_n A w_n\|_{Z_n^*}} \quad (2.4)$$
is finite, which, e.g., is ensured by (2.2), one can bound $\tilde\kappa_n$ by means of the simpler quantity
$$\kappa_n := \sup_{w_n \in E_n,\, w_n \neq 0} \frac{\|w_n\|_E}{\|A w_n\|_F} = \frac{1}{\min\limits_{w_n \in E_n, \|w_n\|_E=1} \|A w_n\|_F}. \quad (2.5)$$

Lemma 2.2.
Suppose that (2.1) and (2.2) hold. Then the operator $A_n := Q_n A|_{E_n} : E_n \to Z_n^*$ has an inverse and
$$\kappa_n \le \|A_n^{-1}\| = \tilde\kappa_n \le \tau_n \kappa_n. \quad (2.6)$$

Proof.
For any $w_n \in E_n$ we have $\|A_n w_n\|_{Z_n^*} \ge \tau_n^{-1}\|A w_n\|_F \ge \tau_n^{-1}\kappa_n^{-1}\|w_n\|_E$, which yields $\tilde\kappa_n = \|A_n^{-1}\| \le \tau_n \kappa_n$. Let $v_n \in E_n$ be an element for which the supremum in (2.5) is attained. Then by (1.10) we have $\|A_n v_n\|_{Z_n^*} \le \|A v_n\|_F = \kappa_n^{-1}\|v_n\|_E$, hence $\kappa_n \le \|A_n^{-1}\|$. $\Box$

Remark 2.3.
Under conditions (2.1) and (2.2) of Lemma 2.1 one has $\tilde\kappa_n < \infty$, since one takes the supremum over the unit sphere, which is compact in the finite dimensional spaces under consideration. The definition of the reciprocal of the stability factor $\tilde\kappa_n$ in the general projection method reveals the relation to the Ladyzhenskaya-Babuška-Brezzi (or inf-sup) conditions used for showing well-posedness of Petrov-Galerkin discretizations of partial differential equations.

For the general projection method we get:
Lemma 2.4.
Let the assumptions of Lemma 2.1 be satisfied and consider, for $f_1, f_2 \in F$, the solutions of
$$u_{n,i} \in E_n \ \text{ and } \ \forall z_n \in Z_n: \ \langle z_n, A u_{n,i} \rangle_{F^*,F} = \langle z_n, f_i \rangle_{F^*,F}, \quad i = 1,2.$$
Then the estimate
$$\|u_{n,1} - u_{n,2}\|_E \le \tilde\kappa_n \|f_1 - f_2\|_F$$
holds.

Proof. According to Lemma 2.1, the solutions $u_{n,i}$, $i = 1,2$, are well defined, and $\hat u_n = u_{n,1} - u_{n,2}$ satisfies
$$\hat u_n \in E_n \ \text{ and } \ \forall z_n \in Z_n: \ \langle z_n, A \hat u_n \rangle_{F^*,F} = \langle z_n, f_1 - f_2 \rangle_{F^*,F}.$$
Therefore, by definition of $\tilde\kappa_n$ and $u_{n,1} - u_{n,2} \in E_n$ (due to linearity of the space $E_n$) we have
$$\|u_{n,1} - u_{n,2}\|_E \le \tilde\kappa_n \sup_{z_n \in Z_n, \|z_n\|_{F^*}=1} \langle z_n, A(u_{n,1} - u_{n,2}) \rangle_{F^*,F} = \tilde\kappa_n \sup_{z_n \in Z_n, \|z_n\|_{F^*}=1} \langle z_n, f_1 - f_2 \rangle_{F^*,F} \le \tilde\kappa_n \|f_1 - f_2\|_F. \qquad \Box$$

2.3. Convergence with a priori choice of n

Theorem 2.5.
Let for all $n \in \mathbb{N}$ the assumptions of Lemma 2.1 be satisfied and let $u_n$ be defined by the projection method (1.5). Additionally, we assume that there exists a sequence of approximations $(\hat u_n)_{n \in \mathbb{N}}$, $\hat u_n \in E_n$, satisfying the convergence conditions
$$\|u_* - \hat u_n\|_E \to 0 \ \text{ as } n \to \infty \quad (2.7)$$
and
$$\tilde\kappa_n \sup_{z_n \in Z_n, \|z_n\|_{F^*}=1} \langle z_n, A(u_* - \hat u_n) \rangle_{F^*,F} \to 0 \ \text{ as } n \to \infty. \quad (2.8)$$
Then for exact data $\delta = 0$ we have convergence $\|u_n - u_*\|_E \to 0$ as $n \to \infty$. For noisy data and with the dimension $n = n(\delta)$ chosen such that
$$n(\delta) \to \infty \ \text{ and } \ \tilde\kappa_{n(\delta)}\,\delta \to 0 \ \text{ as } \delta \to 0, \quad (2.9)$$
we have convergence $\|u_{n(\delta)} - u_*\|_E \to 0$ as $\delta \to 0$.

Proof.
For any $w_n \in E_n$ we have, by definition (2.3) of $\tilde\kappa_n$ and $u_n - w_n \in E_n$ (here linearity of the space $E_n$ is used), that
$$\begin{aligned}
\|u_n - u_*\|_E &\le \|u_* - w_n\|_E + \|u_n - w_n\|_E \\
&\le \|u_* - w_n\|_E + \tilde\kappa_n \sup_{z_n \in Z_n, \|z_n\|_{F^*}=1} \langle z_n, A(u_n - w_n) \rangle_{F^*,F} \\
&\le \|u_* - w_n\|_E + \tilde\kappa_n \sup_{z_n \in Z_n, \|z_n\|_{F^*}=1} \bigl( \langle z_n, A(u_n - u_*) \rangle_{F^*,F} + \langle z_n, A(u_* - w_n) \rangle_{F^*,F} \bigr) \\
&= \|u_* - w_n\|_E + \tilde\kappa_n \sup_{z_n \in Z_n, \|z_n\|_{F^*}=1} \bigl( \langle z_n, f^\delta - f \rangle_{F^*,F} + \langle z_n, A(u_* - w_n) \rangle_{F^*,F} \bigr) \\
&\le \|u_* - w_n\|_E + \tilde\kappa_n \Bigl( \delta + \sup_{z_n \in Z_n, \|z_n\|_{F^*}=1} \langle z_n, A(u_* - w_n) \rangle_{F^*,F} \Bigr),
\end{aligned} \quad (2.10)$$
where we have also used linearity of $A$. Inserting $w_n = \hat u_n$ and using (2.7), (2.8), together with our assumptions on the choice of $n(\delta)$, we immediately get the assertions in both cases $\delta = 0$ and $\delta > 0$. $\Box$

Remark 2.6. i) The approximation property (2.7) holds, e.g., when the subspaces $E_n$ are chosen according to (1.3).
ii) Under conditions (1.8) and
$$\|u_* - P_n u_*\|_E \to 0 \ \text{ as } n \to \infty \quad (2.11)$$
on some sequence of operators $\{P_n : E \to E_n, \ n \in \mathbb{N}\}$, the uniform boundedness condition
$$\exists C < \infty \ \forall n \in \mathbb{N}: \ \tilde\kappa_n \sup_{w \in E, \|w\|_E = 1} \ \sup_{z_n \in Z_n, \|z_n\|_{F^*}=1} \langle z_n, A(I - P_n)w \rangle_{F^*,F} \le C \quad (2.12)$$
is sufficient for (2.8) and, by (2.10) with $w_n = P_n u_*$, yields the estimate
$$\|u_n - u_*\|_E \le (1 + C)\,\|u_* - P_n u_*\|_E + \tilde\kappa_n\,\delta. \quad (2.13)$$
In the context of Petrov-Galerkin discretizations of PDEs, estimate (2.10) is known as Strang's First Lemma.

2.4. Convergence with a posteriori choice of n – the discrepancy principle

Theorem 2.7.
Let the assumptions of Lemma 2.1 be satisfied for all $n \in \mathbb{N}$ and let $u_n$ be defined by the projection method (1.5). We also assume that there exists a sequence of approximations $(\hat u_n)_{n \in \mathbb{N}}$, $\hat u_n \in E_n$, satisfying (2.7) and the conditions
$$\kappa_n \sup_{z_n \in Z_n, \|z_n\|_{F^*}=1} \langle z_n, A(u_* - \hat u_n) \rangle_{F^*,F} \to 0 \ \text{ as } n \to \infty, \quad (2.14)$$
$$\kappa_{n+1} \|(I - Q'_n) A(u_* - \hat u_n)\|_F \to 0 \ \text{ as } n \to \infty. \quad (2.15)$$
Additionally, we assume that there exists $\tau < \infty$ such that
$$\tau \sup_{z_n \in Z_n, \|z_n\|_{F^*}=1} \langle z_n, A w_n \rangle_{F^*,F} \ge \|A w_n\|_F \quad \forall w_n \in E_n, \quad (2.16)$$
i.e., $\tau_n \le \tau$ for all $n \in \mathbb{N}$, where $\tau_n$ is defined by (2.4). Denote $d_{DP}(n) = \|A u_n - f^\delta\|_F$, $n \in \mathbb{N}$. Let $b > \tau + 1$ be fixed and for $\delta > 0$, let $n = n_{DP}(\delta)$ be the first index such that
$$d_{DP}(n) \le b\delta. \quad (2.17)$$
Then $n_{DP}(\delta)$ is finite. Moreover, $u_{n_{DP}(\delta)} \to u_*$ as $\delta \to 0$ subsequentially in the following sense: there exists a convergent subsequence, and the limit of every convergent subsequence solves (1.1); if $u_*$ is unique, then $\|u_{n_{DP}(\delta)} - u_*\|_E \to 0$ as $\delta \to 0$.

Proof. By Lemma 2.2 and assumption (2.16) we can use $\kappa_n$ as in (2.5) instead of $\tilde\kappa_n$ as in (2.3) here. For any $n$ let $w_n \in E_n$ be such that $Q'_n f^\delta = A w_n$. From (2.16) it follows that
$$\|A u_n - f^\delta\|_F \le \|A(u_n - w_n)\|_F + \|A w_n - f^\delta\|_F \le \tau \sup_{z_n \in Z_n, \|z_n\|_{F^*}=1} \langle z_n, A(u_n - w_n) \rangle_{F^*,F} + \|A w_n - f^\delta\|_F = \tau \sup_{z_n \in Z_n, \|z_n\|_{F^*}=1} \langle z_n, f^\delta - A w_n \rangle_{F^*,F} + \|A w_n - f^\delta\|_F \le (\tau + 1)\,\operatorname{dist}(f^\delta, AE_n). \quad (2.18)$$
In particular, since
$$\limsup_{n \to \infty} \operatorname{dist}(f^\delta, AE_n) \le \delta + \limsup_{n \to \infty} \operatorname{dist}(A u_*, AE_n) \le \delta + \limsup_{n \to \infty} \|A(u_* - \hat u_n)\| = \delta$$
and $b > \tau + 1$, $n_{DP}(\delta)$ is finite.

If for some $\delta_k \to 0$ ($k \to \infty$) the discrepancy principle gives $n_{DP}(\delta_k) \le N$ with $N \ge 0$, then the sequence $(u_{n_{DP}(\delta_k)})$ lies in a finite dimensional subspace, namely the linear hull of $E_n$, $n = 0, \dots, N$. Boundedness and therefore relative compactness of $(u_{n_{DP}(\delta_k)})$ follows from (2.10) (e.g., with $w_n = 0$). Since $\|A u_{n_{DP}(\delta_k)} - f^{\delta_k}\|_F \le b\delta_k$, we have $A u_{n_{DP}(\delta_k)} \to f$ as $k \to \infty$. Hence $(u_{n_{DP}(\delta_k)})$ has a convergent subsequence and the limit of every convergent subsequence solves (1.1).

Otherwise, $n_{DP}(\delta)$ will be larger than zero. In this case, let $m = n_{DP}(\delta) - 1 \ge 0$. For $n = m$ the inequality (2.17) does not hold, and (2.18) gives
$$b\delta < \|A u_m - f^\delta\|_F \le (\tau + 1)\,\operatorname{dist}(f^\delta, AE_m) \le (\tau + 1)\bigl(\delta + \operatorname{dist}(f, AE_m)\bigr).$$
Since $b > \tau + 1$ we have
$$\frac{(b - 1 - \tau)\,\delta}{\tau + 1} < \operatorname{dist}(f, AE_m) = \|A u_* - Q'_m A u_*\|_F = \|(I - Q'_m) A(u_* - \hat u_m)\|_F. \quad (2.19)$$
Inserting this into (2.10) with $w_n = \hat u_n$, $n = n_{DP}(\delta)$, and using (2.6), (2.7), (2.15), (2.14), we get convergence if $n_{DP}(\delta) \to \infty$ as $\delta \to 0$. $\Box$

Remark 2.8.
If some sequence of operators $\{P_n : E \to E_n, \ n \in \mathbb{N}\}$ satisfies (1.8) and (2.11), then (2.14) and (2.15) follow from (2.7) for $\hat u_n = P_n u_*$ and from the uniform boundedness conditions
$$\exists C_1 < \infty \ \forall n \in \mathbb{N}: \ \kappa_n \sup_{w \in E, \|w\|_E=1} \ \sup_{z_n \in Z_n, \|z_n\|_{F^*}=1} \langle z_n, A(I - P_n)w \rangle_{F^*,F} \le C_1, \quad (2.20)$$
$$\exists C_2 < \infty \ \forall n \in \mathbb{N}: \ \kappa_{n+1} \sup_{w \in E, \|w\|_E=1} \|(I - Q'_n) A(I - P_n)w\|_F \le C_2. \quad (2.21)$$
If additionally $I - Q'_n$ is homogeneous, one has by (2.19) and by homogeneity of $I - P_n$
$$\kappa_{m+1} \frac{(b - 1 - \tau)\,\delta}{\tau + 1} \le \kappa_{m+1} \sup_{w \in E, \|w\|_E=1} \|(I - Q'_m) A(I - P_m)w\|_F \ \|(I - P_m)u_*\|_E \le C_2\,\|(I - P_m)u_*\|_E. \quad (2.22)$$
Hence, by (2.13) and Lemma 2.2, we obtain the error estimate
$$\|u_{n_{DP}(\delta)} - u_*\|_E \le (1 + \tau C_1)\,\|(I - P_{n_{DP}(\delta)})u_*\|_E + \frac{C_2\,\tau(\tau + 1)}{b - \tau - 1}\,\|(I - P_{n_{DP}(\delta)-1})u_*\|_E \quad (2.23)$$
in case $n_{DP}(\delta) \ge 1$. Note that conditions (2.20) and (2.21) correspond to $m = 1$ in condition (1.27) of [21].
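In a discretized (Euclidean) setting, the discrepancy principle (2.17) can be sketched as follows. The nested spaces $E_n$ are taken as spans of the first $n$ coordinate directions and $u_n$ is computed by least squares; this is an illustrative stand-in for the rule of Theorem 2.7, not the paper's implementation:

```python
import numpy as np

def discrepancy_choice(A, f_delta, delta, b=2.5, n_max=None):
    # Return the first n with ||A u_n - f_delta|| <= b * delta, cf. (2.17),
    # where E_n = span of the first n coordinate directions and u_n is the
    # least squares solution in E_n.
    n_max = n_max or A.shape[1]
    u = np.zeros(A.shape[1])
    for n in range(1, n_max + 1):
        c = np.linalg.lstsq(A[:, :n], f_delta, rcond=None)[0]
        u = np.zeros(A.shape[1])
        u[:n] = c
        if np.linalg.norm(A @ u - f_delta) <= b * delta:
            return n, u
    return n_max, u
```

Stopping as soon as the residual falls below $b\delta$ avoids fitting the noise: refining further would only decrease the residual below the noise level while the stability factor grows.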
3. The least squares method
Throughout this section, $E_n \subseteq E$ is a finite dimensional subspace. We show below that the least squares method is well defined and converges to a solution under a priori and a posteriori choices of the discretization dimension.

Lemma 3.1.
Let
$$N(A) \cap E_n = \{0\}. \quad (3.1)$$
Then the set of minimizers $\operatorname{argmin}\{\|A\tilde u_n - f^\delta\|_F : \tilde u_n \in E_n\}$ is nonempty. If, for some $q > 1$, the functional
$$\Phi_F : g \mapsto \tfrac{1}{q}\|g\|_F^q \quad (3.2)$$
is strictly convex, then the minimizer is unique.

Proof. The finite dimensional linear subspace $E_n$ is reflexive, closed, convex and nonempty. The cost functional $j : \tilde u_n \mapsto \|A\tilde u_n - f^\delta\|_F$ is convex, weakly lower semicontinuous and bounded from below. It is also coercive: the minimum $\kappa_n^{-1} = \min\{\|A\hat u_n\|_F : \hat u_n \in E_n, \|\hat u_n\| = 1\}$ exists on the finite dimensional, hence compact, unit sphere and is positive by condition (3.1), so boundedness of some sequence $(\|A\tilde u_n^k - f^\delta\|_F)_{k \in \mathbb{N}}$ implies boundedness of $(\|\tilde u_n^k\|_E)_{k \in \mathbb{N}}$ as follows:
$$\|\tilde u_n^k\|_E \le \sup_{\tilde u_n \in E_n, \tilde u_n \neq 0} \frac{\|\tilde u_n\|_E}{\|A\tilde u_n\|_F}\,\|A\tilde u_n^k\|_F = \kappa_n \|A\tilde u_n^k\|_F \le \kappa_n \bigl(\|A\tilde u_n^k - f^\delta\|_F + \|f^\delta\|_F\bigr).$$
Thus we can conclude existence of a minimizer. Minimizing $j$ over $E_n$ is obviously equivalent to minimizing $\tfrac{1}{q}j^q$ over $E_n$. Moreover, strict convexity of the functional $\Phi_F$ together with (3.1) transfers to the functional $\tfrac{1}{q}j^q : \tilde u_n \mapsto \tfrac{1}{q}\|A\tilde u_n - f^\delta\|_F^q$ on $E_n$. This implies uniqueness. $\Box$

In the Hilbert space setting, the least squares method can be shown to be a special case of the general projection method (1.5) upon appropriate choice of the spaces $Z_n$.

Lemma 3.2.
Let (3.1) hold and let $u_n$ be defined by the least squares method (1.6). Assume that, for some $q > 1$, the functional (3.2) is strictly convex, the duality mappings satisfy $\forall g \in F: \ J_{q^*}^{F^* \to F^{**}}(J_q^{F \to F^*}(g)) = g$, and $J_q^{F \to F^*}$ is Gateaux differentiable at $A u_n - f^\delta$ with Gateaux derivative $G_n : F \to F^*$, $G_n = (J_q^{F \to F^*})'(A u_n - f^\delta)$. Then we have equivalence of (1.6) and (1.5) upon considering the linear space $Z_n = G_n A E_n$.

Proof.
Using the identity $\|f\|_F^{q-1} = \|J_q^{F \to F^*}(f)\|_{F^*}$ and the functional $\Phi_{F^*} : z \mapsto \tfrac{1}{q^*}\|z\|_{F^*}^{q^*}$, we have that (1.6) is equivalent to
$$u_n \in \operatorname{argmin}\bigl\{\Phi_{F^*}\bigl(J_q^{F \to F^*}(A\tilde u_n - f^\delta)\bigr) : \tilde u_n \in E_n\bigr\}.$$
The necessary and, by convexity, also sufficient condition for this optimality problem reads as
$$\forall w_n \in E_n: \ 0 = \frac{d}{du_n}\Bigl(\Phi_{F^*}\bigl(J_q^{F \to F^*}(A u_n - f^\delta)\bigr)\Bigr)[w_n] = \Bigl\langle \frac{d}{du_n}\bigl(J_q^{F \to F^*}(A u_n - f^\delta)\bigr)[w_n], \ \partial\Phi_{F^*}\bigl(J_q^{F \to F^*}(A u_n - f^\delta)\bigr) \Bigr\rangle_{F^*,F^{**}}$$
$$= \bigl\langle (J_q^{F \to F^*})'(A u_n - f^\delta) A w_n, \ J_{q^*}^{F^* \to F^{**}}\bigl(J_q^{F \to F^*}(A u_n - f^\delta)\bigr) \bigr\rangle_{F^*,F^{**}} = \bigl\langle (J_q^{F \to F^*})'(A u_n - f^\delta) A w_n, \ A u_n - f^\delta \bigr\rangle_{F^*,F},$$
which is (1.5) with $Z_n = (J_q^{F \to F^*})'(A u_n - f^\delta) A E_n$. $\Box$

Remark 3.3.
Since $u_n$ appearing in the definition of the operator $G_n$ is unknown, Lemma 3.2 is only of theoretical use. Later on, it will enable us to conclude convergence from the respective result for general projection methods, see Corollary 3.6 below. For practical computation of $u_n$, the finite dimensional minimization problem (1.6) should be solved. Note that the equality $\forall g \in F: \ J_{q^*}^{F^* \to F^{**}}(J_q^{F \to F^*}(g)) = g$ required by the previous lemma holds, e.g., in reflexive spaces, cf. [6].

For the least squares method, the crucial quantity in the stability estimate is $\kappa_n$ defined as in (2.5). As in Remark 2.3, under the conditions of Lemma 3.1, we have $\kappa_n < \infty$. Therewith we obtain the following stability result.

Lemma 3.4.
Let all the assumptions of Lemma 3.2 be satisfied and consider, for $f_1, f_2 \in F$, the solutions of
$$u_{n,i} \in \operatorname{argmin}\{\|A\tilde u_n - f_i\|_F : \tilde u_n \in E_n\}, \quad i = 1,2.$$
Then the estimate
$$\|u_{n,1} - u_{n,2}\|_E \le \kappa_n \|Q'_n f_1 - Q'_n f_2\|_F$$
holds, where $Q'_n$ is some single valued selection of the metric projection onto the subspace $AE_n$. If $Q'_n$ is continuous, then $u_{n,i}$ depends continuously on $f_i$.

Proof. The proof follows by the definition of $\kappa_n$ and the fact that $A u_{n,i} = Q'_n f_i$. $\Box$

Remark 3.5.
The metric projection operator $Q'_n$ onto closed convex sets is single valued and continuous in uniformly convex Banach spaces (see, e.g., [1]). Thus, the above result is applicable to the setting $F = L^p$ with $p \in (1,+\infty)$, but not to the space $C(\Omega)$ in general. However, since the subspaces $AE_n$ are finite dimensional according to the rank-nullity theorem for linear mappings, one might work with continuous selections of the metric projections in this nonreflexive Banach space setting if those subspaces have certain properties, see, e.g., Theorem 6.34 in [16]. More precisely, the metric projection $P_S$ onto an $n$-dimensional subspace $S$ of $C[a,b]$ admits a unique continuous selection if and only if every function $f \in S$, $f \neq 0$, has at most $n$ zeros and every $f \in S$ has at most $n-1$ changes of sign.

3.3. Convergence with a priori choice of n

Together with Lemma 3.2, Theorem 2.5 immediately implies convergence of the least squares method.
Corollary 3.6.
Let all the assumptions of Lemma 3.2 be satisfied. Additionally, we assume that there exists a sequence of approximations $(\hat u_n)_{n \in \mathbb{N}}$, $\hat u_n \in E_n$, satisfying (2.7), (2.8), where $\tilde\kappa_n$ is defined as in (2.3) with $Z_n = (J_q^{F \to F^*})'(A u_n - f^\delta) A E_n$. Then for exact data $\delta = 0$ we have convergence
$$\|u_n - u_*\|_E \to 0 \ \text{ as } n \to \infty.$$
For noisy data and with the dimension $n = n(\delta)$ chosen according to (2.9), we have convergence
$$\|u_{n(\delta)} - u_*\|_E \to 0 \ \text{ as } \delta \to 0.$$

Alternatively, we can also prove convergence directly:
Theorem 3.7.
Let condition (3.1) be satisfied for all $n \in \mathbb{N}$. Then an approximation $u_n$ according to the least squares method (1.6) exists and the error estimate
$$\|u_n - u_*\|_E \le \inf_{w_n \in E_n}\bigl\{\|u_* - w_n\|_E + 2\kappa_n\|A u_* - A w_n\|_F\bigr\} + 2\kappa_n\delta \quad (3.3)$$
holds. If there exists a sequence of approximations $(\hat u_n)_{n \in \mathbb{N}}$, $\hat u_n \in E_n$, satisfying (2.7) and
$$\kappa_n \|A(u_* - \hat u_n)\|_F \to 0 \ \text{ as } n \to \infty, \quad (3.4)$$
then we have in case of exact data ($\delta = 0$) convergence $\|u_n - u_*\|_E \to 0$ as $n \to \infty$, and in case of noisy data with the choice of $n = n(\delta)$ according to
$$n(\delta) \to \infty \ \text{ and } \ \kappa_{n(\delta)}\,\delta \to 0 \ \text{ as } \delta \to 0 \quad (3.5)$$
convergence $\|u_{n(\delta)} - u_*\|_E \to 0$ as $\delta \to 0$.

Proof.
Let $w_n \in E_n$ be arbitrary. We have $\|A u_n - f^\delta\|_F \le \|A w_n - f^\delta\|_F$ due to the least squares property (1.6), therefore
$$\|A u_n - A w_n\|_F \le \|A u_n - f^\delta\|_F + \|A w_n - f^\delta\|_F \le 2\|A w_n - f^\delta\|_F \le 2\bigl(\|A w_n - f\|_F + \delta\bigr)$$
and
$$\|u_n - u_*\|_E \le \|u_* - w_n\|_E + \|u_n - w_n\|_E \le \|u_* - w_n\|_E + \kappa_n\|A u_n - A w_n\|_F \le \|u_* - w_n\|_E + 2\kappa_n\bigl(\|A w_n - f\|_F + \delta\bigr).$$
In this error estimate $w_n \in E_n$ is arbitrary, hence (3.3) holds. If some sequence of approximations $(\hat u_n)_{n \in \mathbb{N}}$ satisfies conditions (2.7) and (3.4), then insertion of $w_n = \hat u_n$ into (3.3) gives the convergence assertions. $\Box$

Remark 3.8.
Note that convergence condition (3.4) is satisfied if some sequence of operators $\{P_n : E \to E_n, \ n \in \mathbb{N}\}$ satisfies conditions (1.8), (2.11) and
$$\exists C < \infty \ \forall n \in \mathbb{N}: \ \kappa_n \sup_{w \in E, \|w\|_E=1} \|A(I - P_n)w\|_F \le C \quad (3.6)$$
(compare (2.12), (2.20)). Namely, the equality $(I - P_n)^2 = I - P_n$ allows to estimate
$$\kappa_n \|A(I - P_n)u_*\|_F \le C\,\|(I - P_n)u_*\|_E.$$

3.4. Convergence with a posteriori choice of n – the discrepancy principle

Theorem 3.9.
Let for all $n \in \mathbb{N}$ condition (3.1) be satisfied, so that $u_n$ according to the least squares method (1.6) exists. Additionally, we assume that there exists a sequence of approximations $(\hat u_n)_{n \in \mathbb{N}}$, $\hat u_n \in E_n$, satisfying (2.7) and the condition
$$\kappa_n \bigl(\|A(u_* - \hat u_n)\|_F + \|A(u_* - \hat u_{n-1})\|_F\bigr) \to 0 \ \text{ as } n \to \infty. \quad (3.7)$$
Let $b > 1$ be fixed and for $\delta > 0$, let $n = n_{DP}(\delta)$ be the first index such that (2.17) holds. Then for $\delta > 0$ we have that $n_{DP}(\delta)$ is finite. Moreover, $u_{n_{DP}(\delta)} \to u_*$ as $\delta \to 0$ subsequentially.

Proof. The proof is the same as for Theorem 2.7, but (2.18) with $\tau = 0$ is now trivial, using optimality of $u_n$ for the minimization problem (1.6); hence $\tau$ can be omitted in formula (2.19) (with $m = n_{DP}(\delta) - 1$), and (3.3) with $w_n = \hat u_n$ is used instead of (2.10). $\Box$

Remark 3.10.
If some sequence of operators $\{P_n : E \to E_n, \ n \in \mathbb{N}\}$ satisfies (1.8) and (2.11), then the uniform boundedness condition
$$\exists C < \infty \ \forall n \in \mathbb{N}: \ (\kappa_n + \kappa_{n+1}) \sup_{w \in E, \|w\|_E=1} \|A(I - P_n)w\|_F \le C \quad (3.8)$$
is sufficient for (3.7).
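For $F = L^p$ with $p \neq 2$, the finite dimensional minimization (1.6) is no longer a linear least squares problem. One common way to approximate the $p$-norm residual minimizer in a discretized setting is iteratively reweighted least squares; the following is an illustrative solver under these assumptions, not an algorithm from the paper:

```python
import numpy as np

def lp_least_squares(A_E, f_delta, p=1.5, iters=200, eps=1e-12):
    # Approximate argmin_c ||A_E c - f_delta||_p (cf. (1.6) with F = L^p)
    # by iteratively reweighted least squares, smoothed by eps near zero
    # residuals and started from the ordinary (p = 2) least squares solution.
    c = np.linalg.lstsq(A_E, f_delta, rcond=None)[0]
    for _ in range(iters):
        r = A_E @ c - f_delta
        w = np.sqrt((r ** 2 + eps) ** (p / 2 - 1))   # weights ~ |r|^{(p-2)/2}
        c = np.linalg.lstsq(w[:, None] * A_E, w * f_delta, rcond=None)[0]
    return c
```

Each sweep solves a weighted quadratic surrogate of the smoothed $p$-norm objective, so the $p$-norm residual of the iterates does not exceed that of the plain least squares starting point.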
4. The least error method
Throughout this section, $Z_n \subseteq F^*$ is a finite dimensional subspace. We establish well-definedness and convergence of the least error method to a solution under a priori and a posteriori choices of the discretization dimension.

Lemma 4.1. (i) Let $E$ be a Banach space in which the unit ball is weakly compact and assume that
$$N(A^*) \cap Z_n = \{0\}. \quad (4.1)$$
Then the set of minimizers $\operatorname{argmin}\{\|\tilde u\|_E : \forall z_n \in Z_n: \ \langle z_n, A\tilde u \rangle_{F^*,F} = \langle z_n, f^\delta \rangle_{F^*,F}\}$ is nonempty.
(ii) If additionally for some $q > 1$ the functional
$$\Phi_E : \tilde u \mapsto \tfrac{1}{q}\|\tilde u\|_E^q \quad (4.2)$$
is strictly convex, then the minimizer $u_n$ of (1.7) is unique, and so is the minimum norm solution $u^\dagger$ of (1.1).

Proof. Condition (4.1) implies that the admissible set $E_n^{ad} = \{\tilde u \in E : \forall z_n \in Z_n: \ \langle z_n, A\tilde u \rangle_{F^*,F} = \langle z_n, f^\delta \rangle_{F^*,F}\}$ is nonempty. To see this, we apply the Closed Range Theorem to the linear operator $Q_n A : E \to Z_n^*$, whose finite dimensional range is obviously closed. Hence we have the identity
$$Q_n A E = N\bigl((Q_n A)^*\bigr)^\perp = \{g_n \in Z_n^* \mid \forall z_n \in Z_n: \ (Q_n A)^* z_n = 0 \Rightarrow \langle g_n, z_n \rangle_{Z_n^*,Z_n} = 0\} = \{g_n \in Z_n^* \mid \forall z_n \in Z_n: \ A^* z_n = 0 \Rightarrow \langle g_n, z_n \rangle_{Z_n^*,Z_n} = 0\} = Z_n^*$$
under condition (4.1), since by definition of $Q_n$ we have $Q_n^* z_n = z_n$ for all $z_n \in Z_n$:
$$\forall g \in F: \ \langle Q_n^* z_n, g \rangle_{F^*,F} = \langle Q_n g, z_n \rangle_{Z_n^*,Z_n} = \langle z_n, g \rangle_{F^*,F}.$$
Thus, the equation $Q_n A\tilde u = f_n$ with $f_n = Q_n f^\delta \in Z_n^*$ defining $E_n^{ad}$ is always solvable under condition (4.1).

Due to our assumption on $E$, level sets of the cost function $j : \tilde u \mapsto \|\tilde u\|_E$ are weakly compact. Moreover, $j$ is weakly lower semicontinuous and bounded from below. This implies existence of a minimizer, which in case of strict convexity of the cost function $\tfrac{1}{q}j^q = \Phi_E$ is obviously unique. $\Box$

Also the least error method is to some extent a special case of (1.5). However, different from the Hilbert space situation, the ansatz space might be nonlinear in general Banach spaces.

Lemma 4.2.
Let the conditions of Lemma 4.1 (i) be satisfied and let additionally, for some $q > 1$, the norm functional $u \mapsto \tfrac{1}{q}\|u\|_E^q$ be Frechet differentiable and the single valued duality mapping $J_q^{E \to E^*}$ be invertible. Then (1.7) and (1.5) are equivalent when $E_n = (J_q^{E \to E^*})^{-1}(A^* Z_n)$.

Proof. By convexity and Frechet differentiability of the cost function as well as linearity of the constraints, optimality in (1.7) is equivalent to existence of a Lagrange multiplier $\lambda \in \mathbb{R}^{m_n}$ with $m_n = \dim Z_n$ such that stationarity for the Lagrange function
$$L : E \times \mathbb{R}^{m_n} \to \mathbb{R}, \quad (\tilde u, \lambda) \mapsto \tfrac{1}{q}\|\tilde u\|_E^q + \sum_{i=1}^{m_n} \lambda_i \langle z_{n,i}, A\tilde u - f^\delta \rangle_{F^*,F} = \tfrac{1}{q}\|\tilde u\|_E^q + \Bigl\langle A^*\Bigl(\sum_{i=1}^{m_n} \lambda_i z_{n,i}\Bigr), \tilde u \Bigr\rangle_{E^*,E} - \Bigl\langle \sum_{i=1}^{m_n} \lambda_i z_{n,i}, f^\delta \Bigr\rangle_{F^*,F}$$
holds, where $Z_n = \operatorname{span}\{z_{n,i}, \ i \in \{1, 2, \dots, m_n\}\}$. That is, there exists $\bar\lambda \in \mathbb{R}^{m_n}$ such that
$$J_q^{E \to E^*}(u_n) + A^* v_n = 0 \quad \text{and} \quad \forall z_n \in Z_n: \ \langle z_n, A u_n - f^\delta \rangle_{F^*,F} = 0,$$
where $v_n = \sum_{i=1}^{m_n} \bar\lambda_i z_{n,i}$. The first of these two equations with invertibility of $J_q^{E \to E^*}$ yields that (1.7) is equivalent to (1.5) with $E_n = (J_q^{E \to E^*})^{-1}(A^* Z_n)$.

The implication (1.5) $\Rightarrow$ (1.7) can also be shown in a variational manner, by exploiting duality mapping properties; we include the alternative proof for the sake of completeness. Thus, assume that $u_n$ satisfies (1.5) with $E_n = (J_q^{E \to E^*})^{-1}(A^* Z_n)$ and let $u$ be an arbitrary element of the feasible set $E_n^{ad} = \{\tilde u \in E : \forall z_n \in Z_n: \ \langle z_n, A\tilde u \rangle_{F^*,F} = \langle z_n, f^\delta \rangle_{F^*,F}\}$. Then we can write $u_n = (J_q^{E \to E^*})^{-1}(A^* v_n)$ for some $v_n \in Z_n$ and insert $z_n = v_n$ to obtain the identity $\langle v_n, A u_n \rangle_{F^*,F} = \langle v_n, f^\delta \rangle_{F^*,F}$, which together with feasibility of $u$ yields
$$\langle J_q(u_n), u_n \rangle_{E^*,E} = \langle A^* v_n, u_n \rangle_{E^*,E} = \langle v_n, A u_n \rangle_{F^*,F} = \langle v_n, f^\delta \rangle_{F^*,F} = \langle v_n, A u \rangle_{F^*,F} = \langle A^* v_n, u \rangle_{E^*,E} \le \|A^* v_n\|_{E^*}\,\|u\|_E.$$
On the other hand, we have $\langle J_q(u_n), u_n \rangle_{E^*,E} = \|J_q(u_n)\|_{E^*}\,\|u_n\|_E = \|A^* v_n\|_{E^*}\,\|u_n\|_E$, thus altogether $\|u_n\|_E \le \|u\|_E$. $\Box$

Remark 4.3.
Note that $E_n = (J_q^{E \to E^*})^{-1}(A^* Z_n)$ is not necessarily a linear space, though. So in the proofs of stability and convergence we cannot resort to the respective results on the general projection method, but have to carry out separate proofs for the least error method, see Lemma 4.5 and Theorem 4.6 below.

Theorem 4.4.
Let $u_*$ be some solution of (1.1). Then, for any $n \in \mathbb{N}$, the minimizer $u_n$ defined by (1.7) in case $f^\delta = f$ attains the least error in $E_n = (J_q^{E \to E^*})^{-1}(A^* Z_n)$ measured with respect to the Bregman distance, that is,
$$D(u_*, u_n) \le D(u_*, u) \quad \forall u \in E_n = (J_q^{E \to E^*})^{-1}(A^* Z_n).$$

Proof.
In the case of exact data, equation (1.5) can be written as
$$\langle z_n, A u_n - A u_* \rangle_{F^*,F} = 0 \quad \forall z_n \in Z_n. \quad (4.3)$$
Let $u = (J_q^{E \to E^*})^{-1}(A^* v)$ with $v \in Z_n$ be an arbitrary element of $E_n = (J_q^{E \to E^*})^{-1}(A^* Z_n)$. Then one has
$$D(u_*, u_n) + D(u_n, u) - D(u_*, u) = \langle J_q^{E \to E^*}(u) - J_q^{E \to E^*}(u_n), u_* - u_n \rangle_{E^*,E} = \langle A^* v - A^* v_n, u_* - u_n \rangle_{E^*,E} = \langle v - v_n, f - A u_n \rangle_{F^*,F} = 0,$$
as $v, v_n \in Z_n$ satisfy (4.3). Since $D(u_n, u)$ is nonnegative, this implies the desired inequality, showing that $u_n$ is the Bregman projection of $u_*$ onto $E_n$. $\Box$

A stability result for the least error method can be formulated by using
$$\hat\kappa_n := \sup_{\substack{z_{n,1}, z_{n,2} \in Z_n \\ z_{n,1} \neq z_{n,2}}} \frac{\|z_{n,1} - z_{n,2}\|_{F^*}}{\Bigl(D_q^{\mathrm{sym}}\bigl((J_q^{E \to E^*})^{-1}(A^* z_{n,1}), (J_q^{E \to E^*})^{-1}(A^* z_{n,2})\bigr)\Bigr)^{1/q^*}} = \sup_{\substack{z_{n,1}, z_{n,2} \in Z_n \\ z_{n,1} \neq z_{n,2}}} \frac{\|z_{n,1} - z_{n,2}\|_{F^*}}{\bigl(D_{q^*}^{\mathrm{sym}}(A^* z_{n,1}, A^* z_{n,2})\bigr)^{1/q^*}},$$
$$\kappa_n^* := \sup_{z_n \in Z_n,\, z_n \neq 0} \frac{\|z_n\|_{F^*}}{\|A^* z_n\|_{E^*}} = \frac{1}{\min\limits_{z_n \in Z_n, \|z_n\|_{F^*}=1} \|A^* z_n\|_{E^*}}. \quad (4.4)$$
Again, as in Remark 2.3, one sees that $\hat\kappa_n$ and $\kappa_n^*$ are finite under the conditions of Lemma 4.1, in particular condition (4.1).

Lemma 4.5.
Let the assumptions of Lemma 4.2 be satisfied and consider, for $f_1, f_2 \in F$, the solutions of
$$u_{n,i} \in \operatorname{argmin}\{\|\tilde u\|_E :\ \forall z_n \in Z_n:\ \langle z_n, A\tilde u\rangle_{F^*\!,F} = \langle z_n, f_i\rangle_{F^*\!,F}\}, \quad i = 1,2.$$
Then the estimate
$$D_q^{\mathrm{sym}}(u_{n,1},u_{n,2})^{1/q} \le \hat\kappa_n\,\|f_1 - f_2\|_F$$
holds; in particular, if $E$ is a $q$-convex space, then one has
$$\|u_{n,1}-u_{n,2}\|_E \le c_q^{-1/q}\,\hat\kappa_n\,\|f_1-f_2\|_F.$$
If additionally $E$ is $s$-smooth, then one has
$$\|u_{n,1}-u_{n,2}\|_E^{\,q-s+1} \le \frac{C_s}{c_q}\,\max\{\|u_{n,1}\|_E, \|u_{n,2}\|_E\}^{q-s}\,\kappa_n^*\,\|f_1-f_2\|_F$$
for some constants $c_q$, $C_s$ independent of $n$.

Proof. According to Lemma 4.1, the solutions $u_{n,i}$, $i = 1,2$, are well defined. Lemma 4.2 implies existence of $v_{n,i} \in Z_n$ such that $J_q^{E\to E^*}(u_{n,i}) = A^*v_{n,i}$, $i = 1,2$. Therefore, we get the identity
$$D_q^{\mathrm{sym}}(u_{n,1},u_{n,2}) = \langle J_q^{E\to E^*}(u_{n,1}) - J_q^{E\to E^*}(u_{n,2}),\, u_{n,1}-u_{n,2}\rangle_{E^*\!,E} = \langle A^*(v_{n,1}-v_{n,2}),\, u_{n,1}-u_{n,2}\rangle_{E^*\!,E} = \langle v_{n,1}-v_{n,2},\, f_1-f_2\rangle_{F^*\!,F} \le \hat\kappa_n \big(D_q^{\mathrm{sym}}(u_{n,1},u_{n,2})\big)^{1/q^*}\,\|f_1-f_2\|_F.$$
Similarly, in the $q$-convex and $s$-smooth case, which implies
$$D_q^{\mathrm{sym}}(\tilde u,u) \ge D_q(\tilde u,u) \ge c_q\,\|\tilde u - u\|_E^q \qquad (4.5)$$
and
$$\|J_q^{E\to E^*}(\tilde u) - J_q^{E\to E^*}(u)\|_{E^*} \le C_s \max\{\|\tilde u\|_E, \|u\|_E\}^{q-s}\,\|\tilde u - u\|_E^{s-1}$$
for some constants $c_q, C_s > 0$ and all $\tilde u, u \in E$ (see, e.g., [3, Lemma 2.7], [19, Theorem 2.42]), we get
$$c_q\,\|u_{n,1}-u_{n,2}\|_E^q \le D_q^{\mathrm{sym}}(u_{n,1},u_{n,2}) = \langle v_{n,1}-v_{n,2},\, f_1-f_2\rangle_{F^*\!,F} \le \|v_{n,1}-v_{n,2}\|_{F^*}\,\|f_1-f_2\|_F \le \kappa_n^*\,\|A^*v_{n,1}-A^*v_{n,2}\|_{E^*}\,\|f_1-f_2\|_F \le \kappa_n^*\,C_s \max\{\|u_{n,1}\|_E,\|u_{n,2}\|_E\}^{q-s}\,\|u_{n,1}-u_{n,2}\|_E^{s-1}\,\|f_1-f_2\|_F.$$
Note that for $p \in (1,\infty)$, $L^p(\Omega)$ is $\max\{p,2\}$-convex and $\min\{p,2\}$-smooth, see, e.g., [19, Example 2.47].

For the least error method, due to possible nonlinearity of the space $E_n$ according to Lemma 4.2, convergence cannot be directly concluded from Theorem 2.5. We obtain the following result with a priori discretization level choice.

Theorem 4.6.
Let $E$ be a Banach space in which the unit ball is weakly compact and assume that, for some $q > 1$, the functional (4.2) is strictly convex and Fréchet differentiable, and the single-valued duality mapping $J_q^{E\to E^*}$ is invertible. Let $u_n$ be defined by the least error method (1.7), where the operator $A$ is assumed to satisfy (4.1) and
$$\forall z \in F^*:\quad \inf_{z_n\in Z_n}\|A^*(z - z_n)\|_{E^*} \to 0 \ \text{ as } n \to \infty. \qquad (4.6)$$
Then the minimum-norm solution $u^\dagger$ of (1.1) is unique and for exact data ($\delta = 0$) we have convergence
$$D_q(u_n^\dagger, u^\dagger) \to 0 \ \text{ as } n \to \infty.$$
If, additionally, the space $E$ is smooth and uniformly convex, one has $\|u_n^\dagger - u^\dagger\|_E \to 0$ as $n \to \infty$ for exact data, while for noisy data and with the dimension $n = n(\delta)$ chosen such that
$$n(\delta) \to \infty \ \text{ and } \ \hat\kappa_{n(\delta)}\,\delta \to 0 \ \text{ as } \delta \to 0, \qquad (4.7)$$
we have convergence $\|u_{n(\delta)} - u^\dagger\|_E \to 0$ as $\delta \to 0$.

Proof. Let $u_n^\dagger$ be the well-defined elements (due to Lemma 4.1)
$$u_n^\dagger \in \operatorname{argmin}\{\|\tilde u\|_E :\ \forall z_n\in Z_n:\ \langle z_n, A\tilde u\rangle_{F^*\!,F} = \langle z_n, f\rangle_{F^*\!,F}\}, \qquad (4.8)$$
i.e., $u_n$ with exact data. Then the following holds for any solution $u^*$ of (1.1):
$$\|u_n^\dagger\|_E \le \|u^*\|_E. \qquad (4.9)$$
By the assumed weak compactness of the unit ball in $E$, the sequence $(u_n^\dagger)_{n\in\mathbb N}$ has a weakly convergent subsequence $(u_{n_l}^\dagger)_{l\in\mathbb N}$ whose limit $u$ solves (1.1), since $A$ is weakly continuous and by $\langle z_{n_l}, Au_{n_l}^\dagger - f\rangle_{F^*\!,F} = 0$ for all $z_{n_l}\in Z_{n_l}$ and (4.6) we have for all $z\in F^*$
$$\langle z, Au_{n_l}^\dagger - f\rangle_{F^*\!,F} = \inf_{z_{n_l}\in Z_{n_l}} \langle z - z_{n_l}, Au_{n_l}^\dagger - f\rangle_{F^*\!,F} = \inf_{z_{n_l}\in Z_{n_l}} \langle z - z_{n_l}, A(u_{n_l}^\dagger - u^*)\rangle_{F^*\!,F} \le \inf_{z_{n_l}\in Z_{n_l}} \|A^*(z - z_{n_l})\|_{E^*}\;2\|u^*\|_E \to 0 \ \text{ as } l \to \infty.$$
Moreover, by (4.9) and weak lower semicontinuity of the norm, this limit $u$ satisfies $\|u\|_E \le \|u^*\|_E$ for any solution $u^*$ of (1.1), thus it has to coincide with the unique minimum-norm solution $u^\dagger$. A subsequence-subsequence argument yields weak convergence of the whole sequence $(u_n^\dagger)_{n\in\mathbb N}$ to $u^\dagger$ as $n \to \infty$.
Hence, for the Bregman distance we get, again using (4.9) with $u^* = u^\dagger$, that
$$D_q(u_n^\dagger, u^\dagger) = \tfrac1q\|u_n^\dagger\|_E^q - \tfrac1q\|u^\dagger\|_E^q + \langle J_q^{E\to E^*}(u^\dagger),\, u^\dagger - u_n^\dagger\rangle_{E^*\!,E} \le \langle J_q^{E\to E^*}(u^\dagger),\, u^\dagger - u_n^\dagger\rangle_{E^*\!,E} \to 0 \ \text{ as } n \to \infty$$
by the already shown weak convergence. This proves the assertion in the case of exact data, since then we have $u_n = u_n^\dagger$.

In the case of noisy data we estimate the difference between $u_n$ and $u_n^\dagger$ by means of Lemma 4.5:
$$D_q^{\mathrm{sym}}(u_n, u_n^\dagger)^{1/q} \le \hat\kappa_n\,\delta.$$
So by choosing $n = n(\delta)$ such that $n(\delta)\to\infty$ and $\hat\kappa_{n(\delta)}\,\delta \to 0$ as $\delta\to 0$, we have
$$D_q(u_n^\dagger, u^\dagger) \to 0 \ \text{ and } \ D_q^{\mathrm{sym}}(u_n, u_n^\dagger)^{1/q} \to 0 \ \text{ as } \delta \to 0.$$
However, the Bregman distance does not satisfy a triangle inequality, thus we need $q$-convexity of $E$ at this point to conclude from Lemma 4.5 and (4.5)
$$\|u_n^\dagger - u^\dagger\|_E \to 0 \ \text{ and } \ \|u_n - u_n^\dagger\|_E \to 0 \ \text{ as } \delta \to 0,$$
thus the assertion.

Remark 4.7.
The approximation property (4.6) is ensured, e.g., by choosing $Z_n$ according to (1.4).

4.4. Convergence with a posteriori choice of n – the monotone error rule

Under the conditions of Lemma 4.2 we can carry over some results for the least error method from the Hilbert space setting by closely following [12, 10]. In particular we will show monotonicity of the error measured in the Bregman distance defined by (1.12), as well as convergence if the stopping index determined by the monotone error rule goes to infinity as $\delta \to 0$, see [10, Theorem 2].
Theorem 4.8.
Let the assumptions of Lemmas 4.1 (i), (ii) and 4.2 be satisfied. Then for $u_n$ defined by the least error method we have:

(a) There exists $v_n \in Z_n$ such that $u_n = (J_q^{E\to E^*})^{-1}(A^*v_n)$.

(b) With $v_n$ as in (a), the identity $\|u_n\|_E^q = \langle v_n, f^\delta\rangle_{F^*\!,F}$ holds. If $Z_n \subseteq Z_{n+1}$ for all $n\in\mathbb N$, then $\|u_n\|_E \le \|u_{n+1}\|_E$ for all $n\in\mathbb N$.

(c) With $d_{ME}(n)$ defined by
$$d_{ME}(n) = \frac{\langle v_{n+1}-v_n, f^\delta\rangle_{F^*\!,F}}{q\,\|v_{n+1}-v_n\|_{F^*}},$$
the identities
$$d_{ME}(n) = \frac{\|u_{n+1}\|_E^q - \|u_n\|_E^q}{q\,\|v_{n+1}-v_n\|_{F^*}} = \frac{D_q(u_{n+1},u_n)}{\|v_{n+1}-v_n\|_{F^*}}$$
and the estimate
$$D_q(u^\dagger, u_{n+1}) - D_q(u^\dagger, u_n) \le -\big(d_{ME}(n) - \delta\big)\,\|v_{n+1}-v_n\|_{F^*}$$
hold. In particular, if $Z_n \subseteq Z_{n+1}$ for all $n\in\mathbb N$, then by minimality of $u_n$ we have $d_{ME}(n) \ge 0$, and the error measured in the Bregman distance is monotonically decreasing as long as
$$\delta \le d_{ME}(n). \qquad (4.10)$$

(d) Let $n = n_{ME}(\delta)$ be the first index such that (4.10) is violated. If $n_{ME}(\delta) \to \infty$ as $\delta \to 0$ and (4.6) holds, then $\|u_{n_{ME}(\delta)} - u^\dagger\|_E \to 0$ as $\delta \to 0$, provided that $E$ is smooth and $q$-convex.

Proof. Item (a) has already been proven in Lemma 4.2. Since the duality mapping satisfies
$$\langle J_q^{E\to E^*}(w), w\rangle_{E^*\!,E} = \|w\|_E^q, \qquad (4.11)$$
we get the first part of item (b):
$$\|u_n\|_E^q = \langle J_q^{E\to E^*}(u_n), u_n\rangle_{E^*\!,E} = \langle A^*v_n, u_n\rangle_{E^*\!,E} = \langle v_n, Au_n\rangle_{F^*\!,F} = \langle v_n, f^\delta\rangle_{F^*\!,F}.$$
Note that $v_n$ as in (a) satisfies $\langle v_n, Au_n\rangle_{F^*\!,F} = \langle v_n, f^\delta\rangle_{F^*\!,F}$ and $\langle v_n, Au_{n+1}\rangle_{F^*\!,F} = \langle v_n, f^\delta\rangle_{F^*\!,F}$, due to the assumption $Z_n \subseteq Z_{n+1}$.
Then (1.7) yields the second part of (b).

The first identity in (c) is an immediate consequence of (b), while the second one follows from $\langle v_n, Au_{n+1}\rangle_{F^*\!,F} = \langle v_n, Au_n\rangle_{F^*\!,F}$, which can be rewritten as $\langle J_q^{E\to E^*}(u_n),\, u_{n+1}-u_n\rangle_{E^*\!,E} = 0$. Considering the differences between the Bregman distances and using that the term $\tfrac1q\|u^*\|_E^q$ cancels out, we get
$$D_q(u^*,u_{n+1}) - D_q(u^*,u_n) = \tfrac1q\|u_n\|_E^q - \tfrac1q\|u_{n+1}\|_E^q + \langle J_q^{E\to E^*}(u_{n+1}),\, u_{n+1}-u^*\rangle_{E^*\!,E} - \langle J_q^{E\to E^*}(u_n),\, u_n-u^*\rangle_{E^*\!,E}$$
$$= \tfrac1q\|u_n\|_E^q - \tfrac1q\|u_{n+1}\|_E^q + \langle J_q^{E\to E^*}(u_{n+1}), u_{n+1}\rangle_{E^*\!,E} - \langle J_q^{E\to E^*}(u_n), u_n\rangle_{E^*\!,E} - \langle J_q^{E\to E^*}(u_{n+1}) - J_q^{E\to E^*}(u_n),\, u^*\rangle_{E^*\!,E}$$
$$= \tfrac1{q^*}\|u_{n+1}\|_E^q - \tfrac1{q^*}\|u_n\|_E^q - \langle A^*(v_{n+1}-v_n),\, u^*\rangle_{E^*\!,E} = \tfrac1{q^*}\langle v_{n+1}-v_n, f^\delta\rangle_{F^*\!,F} - \langle v_{n+1}-v_n, f\rangle_{F^*\!,F}$$
$$= -\tfrac1q\langle v_{n+1}-v_n, f^\delta\rangle_{F^*\!,F} + \langle v_{n+1}-v_n, f^\delta - f\rangle_{F^*\!,F} \le -\tfrac1q\langle v_{n+1}-v_n, f^\delta\rangle_{F^*\!,F} + \|v_{n+1}-v_n\|_{F^*}\,\delta = -\big(d_{ME}(n)-\delta\big)\,\|v_{n+1}-v_n\|_{F^*},$$
where we have used again (4.11) in the second equality.

Let $n(\delta)$ be an a priori stopping rule satisfying (4.7), let $(\delta_k)_{k\in\mathbb N}$ be a sequence of noise levels tending to zero and denote by $n_k = n(\delta_k)$, $n_{ME}^k = n_{ME}(\delta_k)$ the corresponding stopping indices. If there exists $k_0$ such that $n_{ME}^k > n_k$ for all $k \ge k_0$, then by monotone decay of the error up to $n_{ME}^k$ we have $D_q(u^*, u_{n_{ME}^k}) \le D_q(u^*, u_{n_k}) \to 0$ as $k \to \infty$. Otherwise there exists a subsequence $(k_l)_{l\in\mathbb N}$ such that for all $l\in\mathbb N$ we have $n_{ME}^{k_l} \le n_{k_l}$ and therefore $\hat\kappa_{n_{ME}^{k_l}} \le \hat\kappa_{n_{k_l}}$, so the right-hand limit in (4.7) together with Lemma 4.5 implies $\|u^\dagger_{n_{ME}^{k_l}} - u_{n_{ME}^{k_l}}\|_E \to 0$ as $l\to\infty$. On the other hand, by assumption we have $n_{ME}^{k_l} \to \infty$ as $l\to\infty$, thus by Theorem 4.6, $\|u^\dagger_{n_{ME}^{k_l}} - u^\dagger\|_E \to 0$ as $l \to \infty$. Thus a subsequence-subsequence argument yields the assertion.

Remark 4.9.
Convergence in the degenerate case when $(n_{ME}(\delta))$ has finite accumulation points remains an open problem even in Hilbert spaces. As regards a relation of the type
$$d_{ME}(n) \le \|Au_n - f^\delta\|_F/2 \quad \forall n\in\mathbb N,$$
shown in Hilbert spaces (see, e.g., [10, Th. 2, 5)]), it is not clear whether such a connection could be established in the Banach space framework.
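In the Hilbert space case ($q = 2$, $J_q$ the identity), the quantities of Theorem 4.8 can be computed explicitly: $u_n = A^*v_n$ with $v_n \in Z_n$ solving the Gram system of the collocation-type constraints. The following NumPy sketch is a toy finite-dimensional analogue (the random operator, nested subspaces $Z_n$ spanned by canonical basis vectors, and the noise level are our own choices, not from the paper); it illustrates the monotonicity $\|u_n\| \le \|u_{n+1}\|$ and $d_{ME}(n) \ge 0$ from items (b) and (c):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 40
# mildly ill-conditioned forward operator and exact/noisy data
A = rng.standard_normal((m, m)) @ np.diag(0.9 ** np.arange(m))
u_star = rng.standard_normal(m)
f_delta = A @ u_star + 1e-3 * rng.standard_normal(m)

def least_error(n):
    """Minimum norm u with <z, A u> = <z, f_delta> for all z in Z_n
    (Z_n = span of the first n canonical basis vectors): u = A^T v,
    where v = Z c and (Z^T A A^T Z) c = Z^T f_delta."""
    Z = np.eye(m)[:, :n]
    G = Z.T @ A @ A.T @ Z
    c = np.linalg.solve(G, Z.T @ f_delta)
    v = Z @ c
    return A.T @ v, v

norms, d_me = [], []
prev_v = None
for n in range(1, 15):
    u_n, v_n = least_error(n)
    norms.append(np.linalg.norm(u_n))
    if prev_v is not None:
        dv = v_n - prev_v
        # d_ME(n) from Theorem 4.8(c) with q = 2
        d_me.append(float(dv @ f_delta) / (2 * np.linalg.norm(dv)))
    prev_v = v_n

monotone = all(norms[i] <= norms[i + 1] + 1e-12 for i in range(len(norms) - 1))
nonnegative = all(d >= -1e-12 for d in d_me)
```

Both flags come out true, in accordance with items (b) and (c); the first index where $d_{ME}(n)$ drops below $\delta$ would then be the monotone-error-rule stopping index of item (d).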
5. On the requirements for spaces and subspaces
The three projection methods investigated in this work require different theoretical settings as concerns stability and convergence. Note that reflexivity of the space $E$ is essential in the convergence results for the least error method, thus ruling out the case $E = C(\bar\Omega)$ or $E = M(\Omega)$, while allowing $F = C(\bar\Omega)$ (thus, e.g., collocation) or $F = M(\Omega)$ (for modelling impulsive noise). The additional restrictions on uniform boundedness (e.g., (2.12)) will be discussed in the following section; they are more severe in case of a posteriori choice of $n$, a fact which is already known from the Hilbert space setting.

The preimage and image space combinations we are interested in are
$$E = L^p(\Omega),\quad F = L^r(\Omega),\qquad p, r \in (1,\infty), \qquad (5.1)$$
$$E = L^p(\Omega),\quad F = C(\bar\Omega),\qquad p \in (1,\infty), \qquad (5.2)$$
$$E = C(\bar\Omega)^* = M(\Omega),\quad F = L^r(\Omega),\qquad r \in (1,\infty), \qquad (5.3)$$
$$E = M(\Omega),\quad F = C(\bar\Omega), \qquad (5.4)$$
for some smooth open domain $\Omega \subseteq \mathbb R^d$. $L^p$ spaces with $p\in(1,\infty)$ are reflexive, smooth and $q(p)$-convex with $q(p) = \max\{2,p\}$; the duality mappings are given by
$$\big(J_{q(p)}^{L^p\to L^{p^*}}(w)\big)(x) = \|w\|_{L^p}^{q(p)-p}\,|w(x)|^{p-1}\,\operatorname{sign}(w(x)), \qquad p^* = \frac{p}{p-1},$$
and $\big(J_{q(p)}^{L^p\to L^{p^*}}\big)^{-1} = J_{q(p)^*}^{L^{p^*}\to L^p}$ with $q(p)^* = \frac{q(p)}{q(p)-1}$. If $r \ge 2$, i.e., $q(r) = \max\{2,r\} = r$, then $J_{q(r)}^{L^r\to L^{r^*}}$ is additionally Gâteaux differentiable with Gâteaux derivative
$$\big((J_{q(r)}^{L^r\to L^{r^*}})'(g)[h]\big)(x) = (r-1)\,|g(x)|^{r-2}\,h(x).$$
Therefore in case (5.1) all well-definedness, characterization, stability and convergence results, namely Lemmas 2.1, 3.1, 4.1, 3.2, 4.2, 2.4, 3.4, 4.5, Corollary 3.6, and Theorems 2.5, 4.6, 2.7, 3.9, 4.8, are applicable. In case (5.2), we still have all these results except for those on the stability of the least squares method, Lemma 3.4, unless the projection spaces are chosen appropriately (cf. Remark 3.5). Likewise, in case (5.3) all results except for those concerning the least error method apply. Finally, in the situation (5.4), only the results for the general projection method, Lemmas 2.1, 2.4, and Theorems 2.5, 2.7, remain valid.
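The duality mapping formula for $L^p$ can be sanity-checked numerically. The sketch below (our own midpoint-rule discretization of $L^p(0,1)$; the test function is arbitrary) verifies the defining properties $\langle J_{q(p)}(w), w\rangle = \|w\|_{L^p}^{q(p)}$ and $\|J_{q(p)}(w)\|_{L^{p^*}} = \|w\|_{L^p}^{q(p)-1}$ for $p = 1.5$:

```python
import numpy as np

p = 1.5
q = max(2.0, p)          # q(p) = max{2, p}
p_star = p / (p - 1)

N = 1000
h = 1.0 / N
x = (np.arange(N) + 0.5) * h           # midpoint grid on (0, 1)
w = np.sin(3 * np.pi * x) + 0.3        # some test function

def norm(v, r):
    """Discrete L^r(0,1) norm via the midpoint rule."""
    return (h * np.sum(np.abs(v) ** r)) ** (1.0 / r)

# duality mapping J_{q(p)} : L^p -> L^{p*}
Jw = norm(w, p) ** (q - p) * np.abs(w) ** (p - 1) * np.sign(w)

pairing = h * np.sum(Jw * w)                      # <J_q(w), w>
lhs1, rhs1 = pairing, norm(w, p) ** q             # should agree
lhs2, rhs2 = norm(Jw, p_star), norm(w, p) ** (q - 1)
```

Both identities hold exactly in the discretization, since $|J_q(w)|^{p^*} = \|w\|_{L^p}^{(q-p)p^*}|w|^p$ pointwise.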
6. Applications
We will now consider the applicability of the results derived in the previous sections to concrete discretizations, so that the crucial conditions for convergence and stability, (2.7), (2.8), (2.14), (2.15), (3.4), (3.7), (4.6), become conditions on the smoothing properties of the forward operator. These will be interpreted for the case of integral equations. For certain test examples we will also provide numerical experiments.
For applying the results from Sections 2, 3, 4 in the respective cases, it still remains to verify the crucial convergence conditions. However, the convergence conditions (2.8), (2.14), (2.15), (3.4), (3.7) (recall the corresponding sufficient boundedness conditions (2.12), (2.20), (2.21), (3.6), (3.8)) require an appropriate trade-off between stability and approximation. Note that these conditions are only needed for the general projection and the least squares method, but not for the least error method. We will now illustrate these conditions for integral equations with discretization in spline spaces.

Let $k, n \in \mathbb N$, $h = 1/n$, $1 < p < \infty$, $1 < r < \infty$. We denote by $S^{(l)}_{k-1}(I_h)$ the spline space defined as the set of functions $w_h \in C^l[0,1]$ which on each subinterval $I_{ih} := ((i-1)h, ih]$, $i = 1,\dots,n$, are polynomials of degree $\le k-1$, i.e., $w_h|_{I_{ih}} \in \Pi_{k-1}$. The case of potentially discontinuous piecewise polynomial functions $w_h$ will be denoted by $S^{(-1)}_{k-1}(I_h)$.

We recall below several well-known properties of splines.

1) Approximation property:
$$\forall v \in W^{l,p}(0,1)\ \exists v_h \in S^{(-1)}_{k-1}(I_h):\quad \|v - v_h\|_{L^p(0,1)} \le C_{\mathrm{app}}\,h^{\min(k,l)}\,\|v\|_{W^{l,p}(0,1)},$$
$$\forall v \in C^l[0,1]\ \exists v_h \in S^{(-1)}_{k-1}(I_h):\quad \|v - v_h\|_{C[0,1]} \le C_{\mathrm{app}}\,h^{\min(k,l)}\,\|D^l v\|_{C[0,1]},$$
where $D^l$ is the differential operator of order $l$.

2) Inverse inequality:
$$\forall v_h \in S^{(l-1)}_{k-1}(I_h):\quad \|D^l v_h\|_{L^p(0,1)} \le C_{\mathrm{inv}}\,n^l\,\|v_h\|_{L^p(0,1)}, \qquad \|D^l v_h\|_{C[0,1]} \le C_{\mathrm{inv}}\,n^l\,\|v_h\|_{C[0,1]}.$$

On each subinterval $I_{ih}$ we define the local projection $P_n$, using an $L^2(I_{ih})$-orthonormal basis $\{\varphi_1^{I_{ih}}, \dots, \varphi_k^{I_{ih}}\}$ of $\Pi_{k-1}$:
$$(P_n w)(t) = \sum_{j=1}^k \Big(\int_{I_{ih}} \varphi_j^{I_{ih}}(s)\,w(s)\,ds\Big)\,\varphi_j^{I_{ih}}(t), \qquad t \in I_{ih}.$$
We consider $P_n$ as a mapping $P_n: L^p(0,1) \to L^p(0,1)$ with $R(P_n) = E_n$, where
$$E_n = \{v \in L^p(0,1)\ |\ \forall i = 1,\dots,n:\ v|_{I_{ih}} \in \Pi_{k-1}\}.$$
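The local projection $P_n$ is straightforward to implement. The sketch below (our own construction, for $k = 2$) uses the orthonormal basis $\varphi_1 = h^{-1/2}$, $\varphi_2(t) = \sqrt{12/h^3}\,(t - t_{\mathrm{mid}})$ of $\Pi_1$ on each subinterval and a Gauss–Legendre rule for the coefficient integrals; $P_n$ reproduces affine functions exactly and approximates smooth $w$ to order $h^2$:

```python
import numpy as np

def local_projection(w, n, t_eval):
    """Piecewise-linear (discontinuous) local L^2 projection P_n w,
    evaluated at the points t_eval; subintervals I_i = ((i-1)h, ih]."""
    h = 1.0 / n
    # 3-point Gauss-Legendre nodes/weights on [-1, 1] (exact up to degree 5)
    gx = np.array([-np.sqrt(3 / 5), 0.0, np.sqrt(3 / 5)])
    gw = np.array([5 / 9, 8 / 9, 5 / 9])
    out = np.empty_like(t_eval)
    for i in range(n):
        a, mid = i * h, (i + 0.5) * h
        s = mid + 0.5 * h * gx            # quadrature nodes in I_i
        ws = 0.5 * h * gw                 # quadrature weights
        phi1 = lambda t: np.full_like(np.asarray(t, float), h ** -0.5)
        phi2 = lambda t: np.sqrt(12 / h ** 3) * (np.asarray(t, float) - mid)
        c1 = np.sum(ws * phi1(s) * w(s))  # coefficients <w, phi_j>
        c2 = np.sum(ws * phi2(s) * w(s))
        mask = (t_eval > a) & (t_eval <= a + h) if i > 0 else (t_eval >= 0) & (t_eval <= h)
        out[mask] = c1 * phi1(t_eval[mask]) + c2 * phi2(t_eval[mask])
    return out

t = np.linspace(0, 1, 501)
err_lin = np.max(np.abs(local_projection(lambda s: 2 * s + 1, 8, t) - (2 * t + 1)))
err4 = np.max(np.abs(local_projection(lambda s: s ** 2, 4, t) - t ** 2))
err8 = np.max(np.abs(local_projection(lambda s: s ** 2, 8, t) - t ** 2))
```

Here `err_lin` vanishes up to rounding, while the error for $w(s) = s^2$ drops by roughly a factor of four when $n$ is doubled, in line with the approximation property above.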
By the $L^2(I_{ih})$-orthonormality of the basis functions, it is easily checked that $P_n^*$ is defined in exactly the same manner, but considered as a mapping $L^{p^*}(0,1) \to L^{p^*}(0,1)$ with $R(P_n^*) = E_n$ (now regarded as a subspace of $L^{p^*}(0,1)$). Obviously $I - P_n^*$ annihilates polynomials of degree lower or equal to $k-1$ on each $I_{ih}$. For checking conditions (2.12), (2.20), (2.21), (3.6), (3.8) we can use the following lemma, which follows from the approximation property of splines.

Lemma 6.1.
Let $A \in L(E,F)$, $E = L^p(0,1)$, $E_n = S^{(-1)}_{k-1}(I_h)$. If $F = L^r(0,1)$ and $A^*Z_n \subset W^{l,p^*}(0,1)$, or $F = C[0,1]$ and $A^*Z_n \subset C^l[0,1]$, then
$$\sup_{w\in E,\,\|w\|_E=1}\ \sup_{z_n\in Z_n,\,\|z_n\|_{F^*}=1} \langle z_n, A(I-P_n)w\rangle_{F^*\!,F} = \sup_{w\in E,\,\|w\|_E=1}\ \sup_{z_n\in Z_n,\,\|z_n\|_{F^*}=1} \langle (I-P_n^*)A^*z_n, w\rangle_{E^*\!,E} \le C_{\mathrm{app}}\,h^{\min(k,l)}.$$
Due to this lemma, for conditions (2.12), (2.20), (2.21), (3.6), (3.8), we need the inequality $k \ge l$ and the estimate $\kappa_n \le C n^l$. We are able to guarantee the latter estimate only for specific operators.

Lemma 6.2.
Let $A \in L(E,F)$ with $E = L^p(0,1)$, $E_n = S^{(-1)}_{k-1}(I_h)$ and $F = L^r(0,1)$ or $F = C[0,1]$. If for all $w_n \in S^{(-1)}_{k-1}(I_h)$ we have $\|w_n\|_E \le C_1\,\|D^l A w_n\|_F$ and $v_n := A w_n \in S^{(l-1)}_{k+l-1}(I_h)$, then
$$\kappa_n = \sup_{w_n \in S^{(-1)}_{k-1}(I_h)} \frac{\|w_n\|_{L^p(0,1)}}{\|Aw_n\|_F} \le C\,n^l, \qquad C = C_1\,C_{\mathrm{inv}}. \qquad (6.1)$$

Proof.
Let $w_n \in S^{(-1)}_{k-1}(I_h)$ satisfy the above assumptions. Since $v_n := Aw_n$ is a spline of degree $\le k+l-1$, we can apply the inverse inequality to $v_n$:
$$\|w_n\|_E \le C_1\,\|D^l A w_n\|_F \le C_1\,C_{\mathrm{inv}}\,n^l\,\|Aw_n\|_F.$$

The conditions of Lemmas 6.1 and 6.2 are satisfied for integral equations of the first kind
$$(Au)(t) := \int_0^1 K(t,s)\,u(s)\,ds = f(t), \qquad t \in [0,1], \qquad (6.2)$$
where $K(t,s)$ is the Green's function of the differential operator $D^l$, $l \in \mathbb N$, under different homogeneous boundary conditions, such that the equation $D^l z = 0$ has only the trivial solution $z = 0$. Here $K(t,s)$ has different forms $K_1(t,s)$ and $K_2(t,s)$ for the regions $0 \le s < t \le 1$ and $0 \le t \le s \le 1$. For $D^l$ with boundary conditions $f^{(j)}(0) = 0$, $j = 0,1,\dots,l-1$, we have $K_1(t,s) = (t-s)^{l-1}/(l-1)!$ and $K_2(t,s) = 0$. For $l = 2$ and boundary conditions $f(0) = f(1) = 0$ we have $K_1(t,s) = s(t-1) = K_2(s,t)$; for $l = 4$ and boundary conditions $f(0) = f'(0) = f(1) = f'(1) = 0$ we have $K_1(t,s) = -s^2(1-t)^2(s + 2st - 3t)/6 = K_2(s,t)$.

Let us formulate the convergence theorem.

Theorem 6.3.
Consider $A \in L(E,F)$ defined by (6.2) with $E = L^p(0,1)$, $1 < p < \infty$, where $F = L^r(0,1)$, $1 < r < \infty$, and $f \in W^{l,r}(0,1)$, or $F = C[0,1]$ and $f \in C^l[0,1]$ is assumed. Let $K(t,s)$ be a Green's function of $D^l$ with homogeneous boundary conditions such that $D^l z = 0$ has only the trivial solution $z = 0$, and let $f(t)$ satisfy these boundary conditions. Then the following statements hold:

(i) Equation (6.2) has a unique solution $u^*$.

(ii) Let $E_n = S^{(-1)}_{k-1}(I_h)$ with $k \ge l$. Then the least squares method determines a unique approximation $u_n \in E_n$ for all $n \in \mathbb N$.

(iii) If (2.2) holds, the general projection method (1.5) determines a unique approximation $u_n \in E_n$ for all $n \in \mathbb N$.

Under these assumptions we have for both methods convergence $\|u_n - u^*\| \to 0$ as $n \to \infty$ in case of exact data $\delta = 0$. In case of noisy data one has convergence $\|u_{n(\delta)} - u^*\| \to 0$ as $\delta \to 0$, if $n = n(\delta)$ is chosen a priori such that $n(\delta) \to \infty$, $n(\delta)^l\,\delta \to 0$ as $\delta \to 0$, or a posteriori according to the discrepancy principle, where in the least squares method $b > 1$, while in the general projection method assumptions (2.16) and $b > \tau + 1$ are assumed.

Proof. The assumptions of Lemmas 6.1 and 6.2 are satisfied, since for any $w \in E$ we have $\|w\|_E = \|D^l A w\|_E$, and for any $w_n \in E_n$ we have $Aw_n \in S^{(l-1)}_{k+l-1}(I_h)$, a spline with increased degree and global smoothness. The assertions follow with Lemmas 6.1, 6.2 from Theorems 3.7, 3.9 for the least squares method, and from Theorems 2.5, 2.7 and inequalities (2.6), (2.16) for the general projection method, respectively.

One can compare the above results to their counterparts in Hilbert spaces (see [21]). For the general projection method (1.5), the Hilbert space analog of Theorems 2.5, 2.7 is the following.

Theorem 6.4.
Let $A \in L(E,F)$, where $E$ and $F$ are Hilbert spaces. Let $f \in R(A)$ and let $P_n: E \to E_n$, $Q_n: F \to F_n$, $Q_n': F \to AE_n$ be orthoprojectors, where $F_n$ are finite-dimensional subspaces of $F$. Let the following conditions (i)-(iii) hold:

(i) $\forall u \in E$: $\|P_n u - u\|_E \to 0$ as $n \to \infty$,

(ii) $\forall n \in \mathbb N$: $N(A^*) \cap F_n = \{0\}$,

(iii) $\exists \tau^* < \infty\ \forall z_n \in F_n$: $\tau^*\,\|P_n A^* z_n\|_E \ge \|A^* z_n\|_E$.

Then the equations $Au = f$ and (1.9) have unique solutions $u^* \in E$ and $u_n \in E_n$, respectively. If $\delta = 0$, then $\|u_n - u^*\|_E \to 0$ as $n \to \infty$ ((i)-(iii) are necessary and sufficient conditions for this convergence for arbitrary $f \in R(A)$). If $\delta > 0$, then for an a priori choice of $n = n(\delta)$ such that $n(\delta) \to \infty$, $\delta\cdot\kappa^*_{n(\delta)} \to 0$ as $\delta \to 0$, one has $\|u_{n(\delta)} - u^*\|_E \to 0$ as $\delta \to 0$. If $\delta > 0$ and the following additional conditions (iv)-(vi) hold:

(iv) $N(A) \cap E_n = \{0\}$,

(v) $\exists \tau < \infty\ \forall v_n \in E_n$: $\tau\,\|Q_n A v_n\|_F \ge \|A v_n\|_F$,

(vi) $\exists C < \infty\ \forall n \in \mathbb N$: $\kappa_{n+1}\,\|(I - Q_n')A\| \le C$,

then convergence $\|u_{n(\delta)} - u^*\|_E \to 0$ as $\delta \to 0$ also holds for a choice of $n = n(\delta)$ by the discrepancy principle with $b > \tau$.

Note that in Hilbert spaces, conditions (iii), (v) are automatically fulfilled by the least error method ($E_n = A^*F_n$) and by the least squares method ($F_n = AE_n$), respectively, and that for condition (vi) the inequality $\|(I-Q_n')A\| \le \|(I-P_n)(A^*A)^l\|^{1/(2l)}$, $\forall l \in \mathbb N$, is useful. Conditions (iii), (v) here seem to be weaker than the corresponding conditions (2.12), (2.20), (2.21) in the Banach space theorems.

For the least squares method (1.6), the Hilbert space analog of Theorems 3.7, 3.9 is Theorem 6.5, and the analog of Theorem 6.3 is Theorem 6.6.

Theorem 6.5.
Let $A \in L(E,F)$, where $E$, $F$ are Hilbert spaces, $N(A) = \{0\}$, $f \in R(A)$, let $P_n: E \to E_n$ be an orthoprojector with $\|P_n u - u\| \to 0$ as $n \to \infty$ for all $u \in E$, and let
$$\exists l \in \mathbb N\ \exists C < \infty:\quad (\kappa_n + \kappa_{n+1})\,\|(I-P_n)(A^*A)^l\|^{1/(2l)} \le C \quad \forall n \in \mathbb N. \qquad (6.3)$$
Then the equations $Au = f$ and (1.6) have unique solutions $u^* \in E$ and $u_n \in E_n$, respectively. If $\delta = 0$, then $\|u_n - u^*\| \to 0$ as $n \to \infty$. If $\delta > 0$, then $\|u_{n(\delta)} - u^*\| \to 0$ as $\delta \to 0$ for an a priori choice of $n = n(\delta)$ such that $n(\delta) \to \infty$, $\delta\cdot\kappa_{n(\delta)} \to 0$ as $\delta \to 0$, and also for a choice of $n = n(\delta)$ according to the discrepancy principle with $b > 1$.

Theorem 6.6.
Let $E = F = L^2(0,1)$ and let $K(t,s)$ in (6.2) be a Green's function of the differential operator
$$L_l z = \sum_{j=0}^l b_j(t)\,z^{(j)}, \qquad b_j \in C[0,1],\quad b_l(t) \ne 0 \ \forall t \in (0,1),$$
with boundary conditions $\sum_{j=0}^{l-1}\big(\alpha_{i,j} z^{(j)}(0) + \beta_{i,j} z^{(j)}(1)\big) = 0$ ($i = 1,\dots,l$) such that $L_l z = 0$ only has the trivial solution $z = 0$, and let $f(t)$ satisfy these boundary conditions. Then equation (6.2) has a unique solution $u^*$ and the least squares method with $E_n = S^{(-1)}_{k-1}(I_h)$ determines a unique approximation $u_n \in E_n$ for all $n, k \in \mathbb N$. Convergence $\|u_{n(\delta)} - u^*\|_E \to 0$ as $\delta \to 0$ holds with an a priori choice of $n = n(\delta)$ such that $n(\delta) \to \infty$, $\delta\cdot n(\delta)^l \to 0$ as $\delta \to 0$, and also with a choice of $n = n(\delta)$ by the discrepancy principle with $b > 1$.

The proof of Theorem 6.6 exploits the freedom in choosing $l$ in (6.3): if $R(A^*) = R(A^*A) \subset W^{l,2}(0,1)$, then $R((A^*A)^l)$ is contained in a Sobolev space of correspondingly higher order, so in case $E_n = S^{(-1)}_{k-1}(I_h)$, $\kappa_n \le Cn^l$, condition (6.3) is satisfied for all $k \in \mathbb N$, whereas (3.6), (3.8) require $k \ge l$ in Theorem 6.3.

We list below several open problems:

(1) Is it possible to weaken the assumption $k \ge l$?

(2) Is it possible to extend the results of Theorem 6.3 using a more general operator $L_l$ instead of the operator $D^l$, as in Theorem 6.6?

Concerning (1), computational results for the collocation method indicate that $k \ge l$ is really needed there. Note that (2) can be reduced to the (also open) question whether the following lemma, proved in [21] for the case $q = r = 2$, remains valid for general $q, r \in [1,\infty]$.

Lemma 6.7.
Let $B \in L(L^q, L^r)$ with $W_0^{l,r}(0,1) \subset B(L^q(0,1)) \subset W^{l,r}(0,1)$, where
$$W_0^{l,r}(0,1) = \{z \in W^{l,r}(0,1):\ z^{(j)}(0) = z^{(j)}(1) = 0,\ j = 0,\dots,l-1\},$$
$L^q = L^q(0,1)$, $1 < q < \infty$, $1 < r < \infty$. Then
$$\|B^*v\|_{L^{q^*}} \ge C\,\|D^{(-l)}v\|_{L^{r^*}} \quad \forall v \in L^{r^*}, \qquad q^* = \frac{q}{q-1},\quad r^* = \frac{r}{r-1},$$
where $D^{(-l)}v = D^l\Gamma_{2l}v$ and $\Gamma_{2l}: L^{r^*} \to W^{2l,r^*}$ is the inverse of the differential operator $D^{2l}$ for the boundary conditions $z^{(j)}(0) = z^{(j)}(1) = 0$, $j = 0,1,\dots,l-1$.

In the next section we consider the collocation method as a special case of the general projection method, applying Theorem 6.3 to a Volterra integral equation of the first kind and estimating $\tau$. Note that in [12] a collocation method for integral equations of the first kind is considered using kernel functions as basis functions, the number of which was determined by the monotone error rule.

We consider a Volterra integral equation of the first kind
$$(Au)(t) := \int_0^t K(t,s)\,u(s)\,ds = f(t), \qquad t \in [0,1], \qquad (6.4)$$
with the operator $A \in L(L^p(0,1), C[0,1])$, $1 \le p \le \infty$. A special case of equation (6.4) is the model problem
$$(Au)(t) := \int_0^t \frac{(t-s)^{l-1}}{(l-1)!}\,u(s)\,ds = f(t), \qquad t \in [0,1]. \qquad (6.5)$$
In the collocation method we find $u_n \in E_n = S^{(-1)}_{k-1}(I_h)$ such that
$$(Au_n)(t_{i,j}) = f^\delta(t_{i,j}), \qquad i = 1,\dots,n,\quad j = 1,\dots,k,$$
where $t_{i,j} = (i-1+c_j)h \in [0,1]$, $i = 1,\dots,n$, $j = 1,\dots,k$, are the collocation nodes and $0 < c_1 < \dots < c_k \le 1$.

In [5] the case $\delta = 0$, $E = L^\infty$, $F = C$, $|K(t,t)| > 0$ (case $l = 1$ in (6.5)) is studied; Theorem 2.4.2 in [5] (page 123) proves that convergence holds if and only if
$$\prod_{j=1}^k \frac{1-c_j}{c_j} < 1.$$
In [7] the case $K(t,t) = 0$, $\frac{\partial K(t,s)}{\partial t}\big|_{t=s} \ne 0$ (case $l = 2$ in (6.5)) is considered, with convergence if $c_k = 1$ and $\prod_{j=1}^{k-1}(1-c_j)/c_j < 1$.

We choose the number $n = n(\delta)$ of the subintervals by the discrepancy principle, thus we use the first $n$ such that $\|Au_n - f^\delta\|_F \le b\delta$. According to Theorem 2.7 we need that $b > \tau + 1$, so the value of $\tau$ in (2.16) is needed. For the use of the inequality $\tau_n \le \tau$ for all $n \in \mathbb N$ we need to estimate
$$\tau_n = \sup_{w_n\in E_n} \frac{\|Aw_n\|_F}{\sup_{z_n\in Z_n,\,\|z_n\|_{F^*}=1} \langle z_n, Aw_n\rangle_{F^*\!,F}} = \sup_{w_n\in E_n} \frac{\sup_{t\in[0,1]}|Aw_n(t)|}{\sup_{i,j}|Aw_n(t_{i,j})|}.$$
In the numerical experiments of the next section we solve equation (6.5) with $l = 2$. We use linear splines ($k = 2$) and collocation nodes $t_{i,1} = (i-1)h + ch$ with $c \in (0.5, 1)$ and $t_{i,2} = ih$.
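For the simplest case $l = 1$, $k = 1$ of (6.5), i.e. $(Au)(t) = \int_0^t u(s)\,ds$ with piecewise-constant trial functions and a single node $c_1 = c$ per subinterval, the collocation system is lower triangular and can be solved by forward substitution. A minimal sketch (the test solution $u^*(s) = \cos s$ and the value $c = 0.8$, which satisfies the condition $(1-c)/c < 1$ from [5], are our own choices):

```python
import numpy as np

def collocate(f, n, c):
    """Piecewise-constant collocation for (Au)(t) = int_0^t u(s) ds = f(t)
    at the nodes t_i = (i - 1 + c) h: h * sum_{j<i} u_j + c h u_i = f(t_i)."""
    h = 1.0 / n
    u = np.empty(n)
    acc = 0.0                      # h * sum_{j < i} u_j
    for i in range(n):
        t_i = (i + c) * h
        u[i] = (f(t_i) - acc) / (c * h)
        acc += h * u[i]
    return u

n, c = 100, 0.8
u_exact = lambda s: np.cos(s)
f = lambda t: np.sin(t)            # f = A u_exact for the l = 1 model problem
u_n = collocate(f, n, c)
mids = (np.arange(n) + 0.5) / n
err = np.max(np.abs(u_n - u_exact(mids)))
```

Since $(1-c)/c = 0.25 < 1$, the forward recursion is stable and the error behaves like $O(h)$; for $c \le 1/2$ the product condition is violated (or borderline) and the computed values start to oscillate.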
It can be shown that $\tau = \tau(c)$ depends on $c$ in the form
$$\tau(c) = 1 + \frac{4(y^2-y+1)^{3/2} - 4y^3 + 6y^2 + 6y - 4}{27\,y\,(1-c)(2c-1)}, \qquad y = \frac{1}{c(1-c)(2c^2+c+1)}. \qquad (6.6)$$
Actually, it is sufficient to consider cubic functions $z(t)$ on the interval $[0,1]$ which satisfy $z(0) = z(c) = 1$, $z(1) = -1$, $z'(0) = 2/(c(1-c)(2c-1))$; such functions describe the extremal behaviour of $Aw_n$ at the points $ih$ under the conditions $|Aw_n(t_{i,j})| \le 1$, as $n \to \infty$. The value of $\tau(c)$ in (6.6) is the maximum of $z(t)$.

We consider equation (6.5) with the exact solutions $u^*(s) = s^r$, $r \in \{1/2, 3/2\}$, where the exact right-hand side is computed as $f(t) = (Au^*)(t)$. The noisy data were generated by the formula $f^\delta(t_{i,j}) = f(t_{i,j}) + \delta\theta_{i,j}$, where $\delta = 10^{-m}$ for a range of integers $m$, and the $\theta_{i,j}$ are random numbers with normal distribution, normed after being generated: $\max_{i,j}|\theta_{i,j}| = 1$. In the space setting we used $p = 1$, i.e., we consider $A$ as an operator from $L^1(0,1)$ to $C[0,1]$.
We took $k = 2$ (linear splines) and used collocation nodes $t_{i,1} = (i-1)h + ch$ with $c \in (0.5, 1)$ and $t_{i,2} = ih$. Table 1 contains the results for $c \in \{0.6, 0.7, 0.8, 0.9\}$; according to formula (6.6), the corresponding values of $\tau(c)$ are 5.67, 4.10, 4.22 and 6.51, respectively. For fulfilling the theoretical requirement (2.16) in Theorem 2.7 we actually used $b(c) = 1.01 + \tau(c)$ in the discrepancy principle. The discrepancy principle gave a number $n_D$ of subintervals with corresponding error $e_D = \|u_{n_D} - u^*\|$. We also found the optimal number $n_{\mathrm{opt}}$ of subintervals and the corresponding error $e_{\mathrm{opt}} = \min_{n\in\mathbb N}\|u_n - u^*\|_E = \|u_{n_{\mathrm{opt}}} - u^*\|_E$, as well as the best coefficient $b = b_{\mathrm{opt}}$ for the choice of $n = n(\delta)$ in the discrepancy principle, according to $b_{\mathrm{opt}} = \|Au_{n_{\mathrm{opt}}} - f^\delta\|_F/\delta$.

Table 1. Results for optimal $n$ and for $n$ according to the discrepancy principle (columns: $c$, $\delta$, and, for each of the exact solutions $u^*(s) = s^{1/2}$ and $u^*(s) = s^{3/2}$: $n_{\mathrm{opt}}$, $n_D$, $r_b$, $e_D$, $r_e$).

Table 1 contains our results for the exact solutions $u^*(s) = s^r$ with $r = 1/2$ and $r = 3/2$. The columns $r_b$ and $r_e$ contain the ratios of the $b$-values, $r_b = b(c)/b_{\mathrm{opt}}$, and of the corresponding errors, $r_e = e_D/e_{\mathrm{opt}}$. The performance of the discrepancy principle is determined by the constant $b$. According to column $r_b$, the lowest values of the constants $b = b(c)$ needed by the assumptions of Theorem 2.7 are typically 1.5 to 3 times larger than the optimal values $b_{\mathrm{opt}}$. Nevertheless, column $r_e$ shows that the errors $e_D$ of the approximate solutions with the choice of the dimension by the discrepancy principle were typically not larger than 1 to 1.4 times the optimal errors $e_{\mathrm{opt}}$. Comparison of the errors $e_D$ for different $c$-values suggests to use medium $c$-values 0.7 or 0.8.
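The closed form (6.6) can be cross-checked against the cubic characterization given above: build the cubic $z$ with $z(0) = z(c) = 1$, $z(1) = -1$, $z'(0) = 2/(c(1-c)(2c-1))$, maximize it on $[0,1]$ via the roots of $z'$, and compare with the formula. A sketch of this purely numerical verification (the loop values of $c$ are a subset of those used in Table 1):

```python
import numpy as np

def tau_formula(c):
    # closed form (6.6)
    y = 1.0 / (c * (1 - c) * (2 * c ** 2 + c + 1))
    num = 4 * (y ** 2 - y + 1) ** 1.5 - 4 * y ** 3 + 6 * y ** 2 + 6 * y - 4
    return 1 + num / (27 * y * (1 - c) * (2 * c - 1))

def tau_cubic(c):
    # cubic z(t) = a t^3 + b t^2 + d t + 1 with z(c) = 1, z(1) = -1,
    # z'(0) = d = 2 / (c (1 - c)(2 c - 1)); tau(c) = max of z on [0, 1]
    d = 2.0 / (c * (1 - c) * (2 * c - 1))
    M = np.array([[c ** 3, c ** 2], [1.0, 1.0]])
    rhs = np.array([-d * c, -2.0 - d])
    a, b = np.linalg.solve(M, rhs)
    crit = np.roots([3 * a, 2 * b, d])            # zeros of z'
    ts = [t.real for t in crit if abs(t.imag) < 1e-12 and 0 <= t.real <= 1]
    z = lambda t: a * t ** 3 + b * t ** 2 + d * t + 1
    return max([z(t) for t in ts] + [z(0.0), z(1.0)])

taus = {c: (tau_formula(c), tau_cubic(c)) for c in (0.6, 0.7, 0.8)}
max_gap = max(abs(fv - cv) for fv, cv in taus.values())
```

The two computations agree to machine precision, and they reproduce the values $\tau(0.6) \approx 5.67$, $\tau(0.7) \approx 4.10$, $\tau(0.8) \approx 4.22$ quoted above.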
7. Conclusions and Remarks
In this paper we have extended some results on regularization by projection in Hilbert spaces to a more general Banach space setting. Besides being applicable in the case of "nice" reflexive Banach spaces like $L^p$ with $p \in (1,\infty)$, some of our results also give new insights concerning certain cases of nonreflexive Banach spaces like $L^\infty$, $L^1$, $C$, $M$, which are currently of high interest for several applications. Analytical considerations and numerical results are provided for a Volterra integral equation in one space dimension, using a spline discretization.

Future work in this context will be devoted to proving convergence rates, particularly also in nonreflexive spaces, and to more general applications in higher space dimensions.

Acknowledgment
The first and third authors are supported by the Estonian Science Foundation Grant 9120 and by institutional research funding IUT20-57 of the Estonian Ministry of Education and Research. The second and fourth authors are supported by the Karl Popper Kolleg "Modeling-Simulation-Optimization" funded by the Alpen-Adria-Universität Klagenfurt and by the Carinthian Economic Promotion Fund (KWF). We also thank Reimo Palm from the University of Tartu for numerical tests, and the three referees for the careful reading of the manuscript and for the valuable comments.

[1]
Y. Alber and A. Notik , On some estimates for projection operators in Banach spaces , Comm.Appl. Nonlinear Anal., 2 (1995), pp. 47–55.[2]
A. Apartsyn , Nonclassical Linear Volterra Equations of the First Kind , Inverse and Ill-PosedProblems Series, De Gruyter, 2003.[3]
T. Bonesky, K. S. Kazimierski, P. Maaß, F. Schöpfer, and T. Schuster, Minimization of Tikhonov functionals in Banach spaces, Abstract and Applied Analysis, Volume 2007 (2007), p. 192679 (19 pp).[4]
G. Bruckner, S. Prössdorf, and G. Vainikko, Error bounds of discretization methods for boundary integral equations with noisy data, Applicable Analysis, 63 (1996), pp. 25–37.[5]
H. Brunner , Collocation Methods for Volterra Integral and Related Functional Equations ,Cambridge University Press, Cambridge, 2004.[6]
I. Cioranescu , Geometry of Banach spaces, duality mappings and nonlinear problems , Kluwer,Dordrecht, 1990.[7]
P. Eggermont , Collocation for Volterra integral equations of the first kind with iterated kernel ,SIAM J. Numer. Anal., 20 (1983), pp. 1032–1048.[8]
P. Eggermont , Stability and robustness of collocation methods for Abel-type integral equations ,Numerische Mathematik, 45 (1984), pp. 431–445.[9]
H. W. Engl and A. Neubauer , On projection methods for solving linear ill-posed problems ,in Model Optimization in Exploration Geophysics, A. Vogel, ed., Braunschweig, 1987, Vieweg,pp. 73–92.[10]
A. Ganina, U. Hämarik, and U. Kangro, On the self-regularization of ill-posed problems by the least error projection method, Mathematical Modelling and Analysis, 19 (2014), pp. 299–308.[11]
C. W. Groetsch and A. Neubauer, Convergence of a general projection method for an operator equation of the first kind, Houston J. Math., 14 (1988), pp. 201–208.
[12] U. Hämarik, E. Avi, and A. Ganina, On the solution of ill-posed problems by projection methods with a posteriori choice of the discretization level, Mathematical Modelling and Analysis, 7 (2002), pp. 241–252.[13]
M. Hazewinkel , Encyclopaedia of Mathematics, Volume 6 , Springer Science and Business Media,1990.[14]
P. Mathé and N. Schöne, Regularization by projection in variable Hilbert scales, Applicable Analysis, 87 (2008), pp. 201–219.[15]
F. Natterer , Regularisierung schlecht gestellter Probleme durch Projektionsverfahren , Numer.Math., 28 (1977), pp. 329–341.[16]
D. Repovs and P. Semenov , Continuous Selections of Multivalued Mappings , Kluwer, Berlin,1998.[17]
G. Richter , Numerical solution of integral equations of the first kind with nonsmooth kernels ,SIAM J. Numer. Anal., 15 (1978), pp. 511–522.[18]
F. Schöpfer, T. Schuster, and A. Louis, Metric and Bregman projections onto affine subspaces and their computation via sequential subspace optimization methods, Inverse and Ill-posed Problems, 16 (2008), pp. 479–506.[19]
T. Schuster, B. Kaltenbacher, B. Hofmann, and K. Kazimierski , Regularization Methodsin Banach Spaces , de Gruyter, Berlin, New York, 2012. Radon Series on Computational andApplied Mathematics.[20]
T. Schuster, A. Rieder, and F. Schöpfer, The approximate inverse in action IV: semi-discrete equations in a Banach space setting, Inverse Problems, 28 (2012), p. 104001.[21]