Modified iterated Tikhonov methods for solving systems of nonlinear ill-posed equations
J. Baumeister†  A. De Cezaro‡  A. Leitão§

December 23, 2020
Abstract
We investigate iterated Tikhonov methods coupled with a Kaczmarz strategy for obtaining stable solutions of nonlinear systems of ill-posed operator equations. We show that the proposed method is a convergent regularization method. In the case of noisy data we propose a modification, the so-called loping iterated Tikhonov-Kaczmarz method, where a sequence of relaxation parameters is introduced and a different stopping rule is used. A convergence analysis for this method is also provided.
Keywords.
Nonlinear systems; ill-posed equations; regularization; iterated Tikhonov method.
AMS Classification:
1 Introduction

In this paper we propose a new method for obtaining regularized approximations of systems of nonlinear ill-posed operator equations.

The inverse problem we are interested in consists of determining an unknown physical quantity $x \in X$ from the set of data $(y_0, \dots, y_{N-1}) \in Y^N$, where $X$, $Y$ are Hilbert spaces and $N \ge 1$. In practical situations, we do not know the data exactly. Instead, we have only approximate measured data $y_i^\delta \in Y$ satisfying

$$ \| y_i^\delta - y_i \| \le \delta_i \,, \quad i = 0, \dots, N-1 \,, \tag{1} $$

with noise levels $\delta_i > 0$ and $\delta := (\delta_0, \dots, \delta_{N-1})$. The finite set of data above is obtained by indirect measurements of the parameter, this process being described by the model

$$ F_i(x) = y_i \,, \quad i = 0, \dots, N-1 \,, \tag{2} $$

where $F_i : D_i \subset X \to Y$, and $D_i$ are the corresponding domains of definition.

† Department of Mathematics, Goethe University, Robert-Mayer-Str. 6-10, D-60054 Frankfurt am Main, Germany, [email protected].
‡ Institute of Mathematics, Statistics and Physics, Federal University of Rio Grande, Av. Italia km 8, 96201-900 Rio Grande, Brazil, [email protected].
§ Department of Mathematics, Federal University of St. Catarina, P.O. Box 476, 88040-900 Florianópolis, Brazil, [email protected].

Standard methods for the solution of system (2) are iterative type regularization methods [1, 10, 20] or Tikhonov type regularization methods [10, 25, 29], applied after rewriting (2) as a single equation $F(x) = y$, where

$$ F := (F_0, \dots, F_{N-1}) : \bigcap_{i=0}^{N-1} D_i \to Y^N \tag{3} $$

and $y := (y_0, \dots, y_{N-1})$. However, these methods become inefficient if $N$ is large or the evaluations of $F_i(x)$ and $F_i'(x)^*$ are expensive. In such a situation, Kaczmarz type methods [18, 24, 26], which cyclically consider each equation in (2) separately, are much faster [26] and are often the method of choice in practice. For a recent analysis of Kaczmarz type methods for systems of ill-posed equations, we refer the reader to [3, 13, 9, 12].

The starting point of our approach is the iterated Tikhonov method [15, 5, 23] for solving linear ill-posed problems. This regularization method is defined by $x_{k+1}^\delta \in \arg\min_x \{ \| F x - y^\delta \|^2 + \alpha \| x - x_k^\delta \|^2 \}$, which corresponds to the iteration $x_{k+1}^\delta = x_k^\delta - \alpha^{-1} F^* ( F x_{k+1}^\delta - y^\delta )$.

Motivated by the ideas in [3, 12], we propose in this article an iterated Tikhonov-Kaczmarz method (iTK method) for solving (2). This iterative method is defined by

$$ x_{k+1}^\delta \in \arg\min_x \big\{ \| F_{[k]}(x) - y_{[k]}^\delta \|^2 + \alpha \| x - x_k^\delta \|^2 \big\} \,. \tag{4} $$

Here $\alpha > 0$, $[k] := (k \bmod N) \in \{0, \dots, N-1\}$, and $x_0^\delta = x_0 \in X$ is an initial guess, possibly incorporating some a priori knowledge about the exact solution.

Remark 1.1. Any minimizer in (4) satisfies the first-order optimality condition $F_{[k]}'(x)^* ( F_{[k]}(x) - y_{[k]}^\delta ) + \alpha ( x - x_k^\delta ) = 0$. Thus, from the iteration formula in (4) we conclude that

$$ x_{k+1}^\delta = x_k^\delta - \alpha^{-1} F_{[k]}'(x_{k+1}^\delta)^* \big( F_{[k]}(x_{k+1}^\delta) - y_{[k]}^\delta \big) \,. \tag{5} $$

As usual for nonlinear Tikhonov type regularization, the global minimizers of the Tikhonov functionals in (4) need not be unique. For exact data we obtain the same convergence statements for any possible sequence of iterates (see Section 3), and we accept any global minimizer. For noisy data, a (strong) semi-convergence result is obtained under a smoothness assumption on the operators $F_i$ (see assumption (A4) in Section 4), which guarantees uniqueness of the global minimizers in (4).

Remark 1.2. It is worth noticing that some authors consider iterated Tikhonov regularization with the number of iterations $n \in \mathbb{N}$ being fixed [11, 21, 27]. In this case, $\alpha$ plays the role of the regularization parameter. This regularization method is also called the $n$-th iterated Tikhonov method.

The iTK method consists in incorporating the Kaczmarz strategy into the iterated Tikhonov method. This strategy is analogous to the one introduced in [12] regarding the Landweber-Kaczmarz (LK) iteration, in [9] regarding the Steepest-Descent-Kaczmarz (SDK) iteration, and in [13] regarding the Expectation-Maximization-Kaczmarz (EMK) iteration. As usual in Kaczmarz type algorithms, a group of $N$ subsequent steps (starting at some multiple $k$ of $N$) shall be called a cycle. The iteration should be terminated when, for the first time, at least one of the residuals $\| F_{[k]}(x_{k+1}^\delta) - y_{[k]}^\delta \|$ drops below a specified threshold within a cycle. That is, we stop the iteration at

$$ k_*^\delta := \min \big\{ lN \in \mathbb{N} : \| F_i(x_{lN+i+1}^\delta) - y_i^\delta \| \le \tau \delta_i \ \text{for some } 0 \le i \le N-1 \big\} \,, \tag{6} $$

where $\tau > 1$ is chosen according to (9) below. Notice that at $k = k_*^\delta$ we do not necessarily have $\| F_i(x_{k_*^\delta + i}^\delta) - y_i^\delta \| \le \tau \delta_i$ for all $i = 0, \dots, N-1$. In the case of noise free data, $\delta_i = 0$ in (1), the stopping criterion in (6) may never be reached, i.e. $k_*^\delta = \infty$ for $\delta_i = 0$.

In the case of noisy data, we also propose a loping version of iTK, namely the l-iTK iteration. In the l-iTK iteration we omit an update of the iTK iteration (within one cycle) if the corresponding $i$-th residual is below some threshold. Consequently, the l-iTK method is not stopped until all residuals are below the specified threshold. We provide a complete convergence analysis for both the iTK and the l-iTK iteration. In particular we prove that l-iTK is a convergent regularization method in the sense of [10].

The article is outlined as follows. In Section 2 we formulate basic assumptions and derive some auxiliary estimates required for the analysis. In Section 3 a convergence result for the iTK method is proved. In Section 4 a semi-convergence result for the iTK method for noisy data is proved. In Section 5 we introduce (for the case of noisy data) a loping version of the iTK method and prove a semi-convergence result for this new method. In Section 6 we discuss some possible applications related to parameter identification in elliptic PDEs. Section 7 is devoted to final remarks and conclusions.

2 Assumptions and auxiliary results

We begin this section by introducing some assumptions, which are necessary for the convergence analysis presented in the following sections. These assumptions derive from the classical assumptions used in the analysis of iterative regularization methods [10, 20, 27].

(A1) The operators $F_i$ are weakly sequentially continuous and Fréchet differentiable; the corresponding domains of definition $D_i$ are weakly closed. Moreover, we assume the existence of $x_0 \in X$, $M > 0$, and $\rho > 0$ such that

$$ \| F_i'(x) \| \le M \,, \quad x \in B_\rho(x_0) \subset \bigcap_{i=0}^{N-1} D_i \,. \tag{7} $$

Notice that $x_0^\delta = x_0$ is used as starting value of the iTK iteration.

(A2) This is a uniform assumption on the nonlinearity of the operators $F_i$. We assume that the local tangential cone condition [10, 20]

$$ \| F_i(x) - F_i(\bar{x}) - F_i'(\bar{x})(x - \bar{x}) \|_Y \le \eta \, \| F_i(x) - F_i(\bar{x}) \|_Y \,, \quad x, \bar{x} \in B_\rho(x_0) \,, \tag{8} $$

holds for some $\eta < 1$.

(A3) There exists an element $x_* \in B_{\rho/4}(x_0)$ such that $F(x_*) = y$, where $y = (y_0, \dots, y_{N-1})$ are the exact data satisfying (1).

We are now in position to choose the positive constants $\alpha$ and $\tau$ in (5), (6). For the rest of this article we shall assume

$$ \alpha > \big( 4 \delta_{\max} / \rho \big)^2 \,, \qquad \tau > \frac{1+\eta}{1-\eta} \ge 1 \,, \tag{9} $$

where $\delta_{\max} := \max_j \{ \delta_j \}$. In particular, for linear problems ($\eta = 0$) any $\tau > 1$ can be chosen. Moreover, for exact data (i.e., $\delta_j = 0$ for $j = 0, \dots, N-1$) we require simply $\alpha > 0$.

In what follows we denote by $J_k$ the Tikhonov functionals

$$ J_k(x) := \| F_{[k]}(x) - y_{[k]}^\delta \|^2 + \alpha \| x - x_k^\delta \|^2 \,, \tag{10} $$

which obviously relate to iteration (5) due to the fact that $x_{k+1}^\delta \in \arg\min_x J_k(x)$.

Lemma 2.1.
Let assumption (A1) be satisfied. Then each Tikhonov functional $J_k$ in (10) attains a minimizer on $X$.

Proof. See [10, Chapter 10]. □
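When the operator $F_{[k]}$ is linear, the functional $J_k$ is strictly convex, so the minimizer is not only attained but unique and computable from the normal equations. The following sketch (a hypothetical NumPy toy example, not part of the paper's analysis; operator, data and parameter values are invented) computes one iterated Tikhonov step in this linear setting and verifies that it satisfies the fixed-point formula (5):

```python
import numpy as np

rng = np.random.default_rng(0)

A = rng.standard_normal((4, 5))   # toy linear stand-in for F_[k]
y = rng.standard_normal(4)        # data y^delta_[k]
x_k = np.zeros(5)                 # current iterate x^delta_k
alpha = 2.0                       # regularization parameter alpha

# Minimizer of J_k(x) = ||A x - y||^2 + alpha ||x - x_k||^2:
# normal equations (A^T A + alpha I) x = A^T y + alpha x_k.
x_next = np.linalg.solve(A.T @ A + alpha * np.eye(5), A.T @ y + alpha * x_k)

# Formula (5): x_{k+1} = x_k - alpha^{-1} A^* (A x_{k+1} - y)
print(np.allclose(x_next, x_k - (A.T @ (A @ x_next - y)) / alpha))  # True

# The residual does not increase, as in Lemma 2.2 below
print(np.linalg.norm(A @ x_next - y) <= np.linalg.norm(A @ x_k - y))  # True
```

The equivalence of the variational definition and formula (5) checked here is exactly the first-order optimality condition of $J_k$.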
The assertion of Lemma 2.1 still holds true if, instead of (A1), we assume that the operator $F_{[k]}$ is continuous and weakly closed, and that $D(F_{[k]})$ is weakly closed [10]. In the next lemma we prove an estimate for the residual of the iTK iteration.

Lemma 2.2.
Let $x_k^\delta$ and $\alpha$ be defined by (5) and (9), respectively. Then

$$ \| F_{[k]}(x_{k+1}^\delta) - y_{[k]}^\delta \| \le \| F_{[k]}(x_k^\delta) - y_{[k]}^\delta \| \,, \quad k < k_*^\delta \,. \tag{11} $$

Proof.
The inequality in (11) is a direct consequence of

$$ \| F_{[k]}(x_{k+1}^\delta) - y_{[k]}^\delta \|^2 \le J_k(x_{k+1}^\delta) \le J_k(x_k^\delta) \le \| F_{[k]}(x_k^\delta) - y_{[k]}^\delta \|^2 \,, \quad k < k_*^\delta \,. \quad \square $$

The following lemma is an important auxiliary result, which will be used to prove a monotonicity property of the iTK iteration.
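Before stating it, the cyclic control $[k] = k \bmod N$ and the stopping rule (6) can be illustrated numerically in the linear case, where each Tikhonov subproblem reduces to a linear solve. Everything below (operators, data, parameter values) is a made-up toy setup used only to show the control flow; for simplicity, exact data are combined with an artificial positive tolerance $\delta_i$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear system F_i(x) = A_i x = y_i, i = 0, ..., N-1
N, dim = 3, 6
A = [rng.standard_normal((4, dim)) for _ in range(N)]
x_true = rng.standard_normal(dim)
y = [A[i] @ x_true for i in range(N)]   # exact data, consistent by construction
delta = [1e-3] * N                      # artificial noise levels delta_i
alpha, tau = 0.5, 2.0                   # parameters alpha and tau

x = np.zeros(dim)                       # initial guess x_0
k_stop = None
for l in range(2000):                   # cycles of N steps each
    hit = False
    for i in range(N):                  # [k] = k mod N = i within the cycle
        # x_{k+1} minimizes ||A_i z - y_i||^2 + alpha ||z - x||^2
        x = np.linalg.solve(A[i].T @ A[i] + alpha * np.eye(dim),
                            A[i].T @ y[i] + alpha * x)
        if np.linalg.norm(A[i] @ x - y[i]) <= tau * delta[i]:
            hit = True                  # some residual fell below tau * delta_i
    if hit:                             # rule (6): stop at a multiple of N
        k_stop = l * N
        break

print(k_stop is not None)
```

In the noise-free limit the rule never fires ($k_*^\delta = \infty$ for $\delta_i = 0$), which is why a positive tolerance is imposed in this sketch even though the data are exact.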
Lemma 2.3.
Let $x_k^\delta$ and $\alpha$ be defined by (5) and (9), respectively. Moreover, assume that (A1)-(A3) hold true. If $x_{k+1}^\delta \in B_\rho(x_0)$ for some $k \in \mathbb{N}$, then

$$ \| x_{k+1}^\delta - x_* \|^2 - \| x_k^\delta - x_* \|^2 \le \frac{2}{\alpha} \| F_{[k]}(x_{k+1}^\delta) - y_{[k]}^\delta \| \Big[ (\eta - 1) \| F_{[k]}(x_{k+1}^\delta) - y_{[k]}^\delta \| + (1 + \eta) \, \delta_{[k]} \Big] \,. \tag{12} $$

Proof.
From (5) it follows that

$$ \| x_{k+1}^\delta - x_* \|^2 - \| x_k^\delta - x_* \|^2 \le 2 \, \langle x_{k+1}^\delta - x_* , \, x_{k+1}^\delta - x_k^\delta \rangle = \frac{2}{\alpha} \, \langle x_{k+1}^\delta - x_* , \, F_{[k]}'(x_{k+1}^\delta)^* \big( y_{[k]}^\delta - F_{[k]}(x_{k+1}^\delta) \big) \rangle $$

$$ = \frac{2}{\alpha} \, \langle y_{[k]}^\delta - F_{[k]}(x_{k+1}^\delta) , \, F_{[k]}'(x_{k+1}^\delta)(x_{k+1}^\delta - x_*) \pm F_{[k]}(x_{k+1}^\delta) \pm F_{[k]}(x_*) \rangle $$

$$ \le \frac{2}{\alpha} \Big( \langle F_{[k]}(x_{k+1}^\delta) - y_{[k]}^\delta , \, F_{[k]}(x_{k+1}^\delta) - F_{[k]}(x_*) - F_{[k]}'(x_{k+1}^\delta)(x_{k+1}^\delta - x_*) \rangle + \langle F_{[k]}(x_{k+1}^\delta) - y_{[k]}^\delta , \, F_{[k]}(x_*) - F_{[k]}(x_{k+1}^\delta) \pm y_{[k]}^\delta \rangle \Big) \,. $$

Applying (8) with $x = x_* \in B_{\rho/4}(x_0)$ and $\bar{x} = x_{k+1}^\delta \in B_\rho(x_0)$ leads to

$$ \| x_{k+1}^\delta - x_* \|^2 - \| x_k^\delta - x_* \|^2 \le \frac{2}{\alpha} \| F_{[k]}(x_{k+1}^\delta) - y_{[k]}^\delta \| \Big( \eta \, \| F_{[k]}(x_{k+1}^\delta) - y_{[k]} \pm y_{[k]}^\delta \| - \| F_{[k]}(x_{k+1}^\delta) - y_{[k]}^\delta \| + \| y_{[k]} - y_{[k]}^\delta \| \Big) \,, $$

and (12) follows from this inequality together with (1). □

It is worth noticing that the proof of Lemma 2.3 requires an assumption on $x_{k+1}^\delta$, namely that $x_{k+1}^\delta \in B_\rho(x_0)$. In the next lemma we make sure that this assumption is satisfied.

Lemma 2.4.
Let $x_k^\delta$ and $\alpha$ be defined by (5) and (9), respectively. Moreover, assume that (A1), (A3) hold true. If $x_k^\delta \in B_{\rho/4}(x_*)$ for some $k \in \mathbb{N}$, then $x_{k+1}^\delta \in B_\rho(x_0)$.

Proof. It follows from the definition of $x_{k+1}^\delta$ that

$$ \alpha \, \| x_{k+1}^\delta - x_k^\delta \|^2 \le J_k(x_{k+1}^\delta) \le J_k(x_*) \le \| y_{[k]} - y_{[k]}^\delta \|^2 + \alpha \, (\rho/4)^2 \,. $$

From this inequality and (9) we obtain $\| x_{k+1}^\delta - x_k^\delta \| \le \delta_{[k]} (\sqrt{\alpha})^{-1} + \rho/4 \le \rho/2$. Therefore, since $\| x_k^\delta - x_0 \| \le \| x_k^\delta - x_* \| + \| x_* - x_0 \| \le \rho/2$, it follows that $\| x_{k+1}^\delta - x_0 \| \le \| x_{k+1}^\delta - x_k^\delta \| + \| x_k^\delta - x_0 \| \le \rho/2 + \rho/2 = \rho$, completing the proof. □

Our next goal is to prove a monotonicity property, known to be satisfied by other iterative regularization methods, e.g., by the Landweber method [10], the steepest descent method [28], the LK method [22], the l-LK method [12], and the l-SDK method [9].

Proposition 2.5 (Monotonicity). Under the assumptions of Lemma 2.3, for all $k < k_*^\delta$ the iterates $x_k^\delta$ remain in $B_{\rho/4}(x_*) \subset B_\rho(x_0)$ and satisfy (12). Moreover,

$$ \| x_{k+1}^\delta - x_* \| \le \| x_k^\delta - x_* \| \,, \quad k < k_*^\delta \,. \tag{13} $$

Proof.
From (A3) it follows that $x_0 \in B_{\rho/4}(x_*)$. Moreover, Lemma 2.4 guarantees that $x_1^\delta \in B_\rho(x_0)$. Therefore, it follows from Lemma 2.3 that (12) holds for $k = 0$. Then we conclude from (12) and (6) that

$$ \| x_1^\delta - x_* \|^2 - \| x_0 - x_* \|^2 \le \frac{2}{\alpha} \| F_{[0]}(x_1^\delta) - y_{[0]}^\delta \| \, \delta_{[0]} \, \big[ \tau (\eta - 1) + (1 + \eta) \big] \,. $$

Thus, it follows from (9) that (13) holds for $k = 0$. In particular we have $x_1^\delta \in B_{\rho/4}(x_*)$. The proof now follows using an inductive argument. □

In the next two sections we provide a complete convergence analysis for the iTK iteration (see Theorems 3.2 and 4.3 below).

3 iTK Method: Convergence for exact data

Throughout this section, we assume that (A1)-(A3) hold true and that $x_k^\delta$, $\alpha$ and $\tau$ are defined by (5) and (9). Our main goal in this section is to prove convergence of the iTK iteration for $\delta_i = 0$, $i = 0, \dots, N-1$. For exact data $y = (y_0, \dots, y_{N-1})$, the iterates in (5) are denoted by $x_k$, to contrast with $x_k^\delta$ in the noisy data case.

Lemma 3.1.
There exists an $x_0$-minimal norm solution of (2) in $B_{\rho/4}(x_0)$, i.e., a solution $x^\dagger$ of (2) such that

$$ \| x^\dagger - x_0 \| = \inf \big\{ \| x - x_0 \| : x \in B_{\rho/4}(x_0) \ \text{and} \ F(x) = y \big\} \,. $$

Moreover, $x^\dagger$ is the only solution of (2) in $B_{\rho/4}(x_0) \cap \big( x_0 + \ker(F'(x^\dagger))^\perp \big)$.

Proof. Lemma 3.1 is a consequence of [16, Proposition 2.1]. For a detailed proof we refer the reader to [20]. □

Throughout the rest of this article, $x^\dagger$ denotes the $x_0$-minimal norm solution of (2). We define $e_k := x^\dagger - x_k$. From Proposition 2.5 (applied with $x_* = x^\dagger$) it follows that $\| e_k \|$ is monotone non-increasing. Notice that Proposition 2.5 guarantees that (12) holds for all $k \in \mathbb{N}$. Since the data are exact, (12) can be rewritten as

$$ \| x_{k+1} - x^\dagger \|^2 - \| x_k - x^\dagger \|^2 \le 2 \, \alpha^{-1} (\eta - 1) \, \| F_{[k]}(x_{k+1}) - y_{[k]} \|^2 \,. $$

Summing over all $k$, this leads to

$$ \sum_{k=0}^{\infty} \| F_{[k]}(x_{k+1}) - y_{[k]} \|^2 \le \frac{\alpha}{2 (1 - \eta)} \, \| x_0 - x^\dagger \|^2 < \infty \,. \tag{14} $$

Inequality (14) and the monotonicity of $\| e_k \|$ are the main arguments in the following proof of convergence of the iTK iteration.

Theorem 3.2 (Convergence for exact data). For exact data, the iteration $(x_k)$ converges to a solution of (2) as $k \to \infty$. Moreover, if

$$ \mathcal{N}(F_i'(x^\dagger)) \subseteq \mathcal{N}(F_i'(x)) \quad \text{for all } x \in B_\rho(x_0) \,, \ i = 0, \dots, N-1 \,, \tag{15} $$

then $x_k \to x^\dagger$.

Proof. We have already observed that $\| e_k \|$ decreases monotonically. Therefore, $\| e_k \|$ converges to some $\epsilon \ge 0$. In the following we show that $(e_k)$ is in fact a Cauchy sequence. This is done similarly as in the proof of [9, Theorem 3.3], with auxiliary indices $k \le n \le l$ and $i^*$ chosen as there. The crucial difference is the fact that the term $|\langle e_n - e_k, e_n \rangle|$ is here estimated by

$$ |\langle e_n - e_k, e_n \rangle| \le \sum_{i=k}^{n-1} \alpha^{-1} \| F_{[i]}(x_{i+1}) - y_{[i]} \| \, \| F_{[i]}'(x_{i+1})(x^\dagger - x_{i+1}) \| + \sum_{i=k}^{n-1} \alpha^{-1} \| F_{[i]}(x_{i+1}) - y_{[i]} \| \, \| F_{[i]}'(x_{i+1})(x_{i+1} - x_{i^*+1}) \| + \sum_{i=k}^{n-1} \alpha^{-1} \| F_{[i]}(x_{i+1}) - y_{[i]} \| \, \| F_{[i]}'(x_{i+1})(x_{i^*+1} - x_n) \| \,. \tag{16} $$

Then, it follows from (8) that

$$ \| F_{[i]}'(x_{i+1})(x^\dagger - x_{i+1}) \| \le (1 + \eta) \, \| F_{[i]}(x_{i+1}) - y_{[i]} \| \,, \tag{17} $$

$$ \| F_{[i]}'(x_{i+1})(x_{i+1} - x_{i^*+1}) \| \le (1 + \eta) \big( \| F_{[i]}(x_{i+1}) - y_{[i]} \| + \| y_{[i]} - F_{[i]}(x_{i^*+1}) \| \big) \,. \tag{18} $$

Moreover, from the definition of the iterated Tikhonov method and (7) it follows that

$$ \| F_{[i]}'(x_{i+1})(x_{i^*+1} - x_n) \| \le \alpha^{-1} M \sum_{j=0}^{N-1} \| F_j(x_{nN+j+1}) - y_j \| \le \alpha^{-1} M \gamma \,, \tag{19} $$

with $\gamma = \gamma(n) := \sum_{j=0}^{N-1} \| F_j(x_{nN+j+1}) - y_j \|$. Substituting (17), (18), (19) in (16) leads to

$$ |\langle e_n - e_k, e_n \rangle| \le \sum_{i=k}^{n-1} \alpha^{-1} \| F_{[i]}(x_{i+1}) - y_{[i]} \| \Big( 2 (1 + \eta) \| F_{[i]}(x_{i+1}) - y_{[i]} \| + \big[ (1 + \eta) + M \alpha^{-1} \big] \gamma \Big) $$

(we used the fact that $\| y_{[i]} - F_{[i]}(x_{i^*+1}) \| \le \gamma$), and we finally obtain the estimate

$$ |\langle e_n - e_k, e_n \rangle| \le c \sum_{i=k}^{n-1} \| F_{[i]}(x_{i+1}) - y_{[i]} \|^2 $$

with $c := (N + 2) \, \alpha^{-1} (1 + \eta)^2 + N M \alpha^{-2}$. The remainder of the argument (including the proof of the second assertion) follows the lines of the proof of [9, Theorem 3.3]. □

4 iTK Method: Convergence for noisy data

Throughout this section, we assume that (A1)-(A3) hold true and that $x_k^\delta$, $\alpha$ and $\tau$ are defined by (5) and (9). Our main goal in this section is to prove that $x_{k_*^\delta}^\delta$ converges to a solution of (2) as $\delta \to 0$, where $k_*^\delta$ is defined in (6). Our first step is to verify the finiteness of the stopping index $k_*^\delta$.

Proposition 4.1.
Assume $\delta_{\min} := \min \{ \delta_0, \dots, \delta_{N-1} \} > 0$. Then $k_*^\delta$ defined in (6) is finite.

Proof. Assume by contradiction that for every $l \in \mathbb{N}$ there exists no $i(l) \in \{0, \dots, N-1\}$ such that $\| F_{i(l)}(x_{lN+i(l)+1}^\delta) - y_{i(l)}^\delta \| \le \tau \delta_{i(l)}$. From Proposition 2.5 it follows that (12) can be applied recursively for $k = 1, \dots, lN$, and we obtain

$$ - \| x_0 - x_* \|^2 \le \sum_{k=1}^{lN-1} \frac{2}{\alpha} \| F_{[k]}(x_{k+1}^\delta) - y_{[k]}^\delta \| \big[ (\eta - 1) \| F_{[k]}(x_{k+1}^\delta) - y_{[k]}^\delta \| + (1 + \eta) \delta_{[k]} \big] \,, \quad l \in \mathbb{N} \,. $$

Using the fact that $\| F_{[k]}(x_{k+1}^\delta) - y_{[k]}^\delta \| > \tau \delta_{[k]}$, we obtain the estimate

$$ \| x_0 - x_* \|^2 \ge \sum_{k=1}^{lN-1} \frac{2}{\alpha} \| F_{[k]}(x_{k+1}^\delta) - y_{[k]}^\delta \| \, \delta_{[k]} \big[ \tau (1 - \eta) - (1 + \eta) \big] \ge \frac{2}{\alpha} \big[ \tau (1 - \eta) - (1 + \eta) \big] \tau \delta_{\min}^2 \, (lN - 1) \,, \quad l \in \mathbb{N} \,. \tag{20} $$

Due to (9), the right hand side of (20) tends to $+\infty$ as $l \to \infty$, which gives a contradiction. Consequently, the minimum in (6) takes a finite value. □

For the rest of this section we assume, additionally to (A1)-(A3), that

(A4) The operators $F_i$ in (2) and their derivatives $F_i'$ are Lipschitz continuous, i.e., there exists a constant $L$ such that

$$ \| F_i(x) - F_i(\bar{x}) \| + \| F_i'(x) - F_i'(\bar{x}) \| \le L \, \| x - \bar{x} \| \,, \quad \text{for all } x, \bar{x} \in B_\rho(x_0) \,. $$

Moreover, the constants $\alpha$ in (9) and $M$ in (7) are such that $(M_1 + M) L < \alpha$, where

$$ M_1 = M_1(\rho, x_0, y, \Delta) := \sup \big\{ \| F_i(x) - y_i^\delta \| : i = 0, \dots, N-1 \,, \ x \in B_\rho(x_0) \,, \ \| y_i^\delta - y_i \| \le \delta_i \,, \ |\delta| \le \Delta \big\} \,. $$

The next result concerns the continuity of $x_k^\delta$ at $\delta = 0$ for fixed $k \in \mathbb{N}$.

Lemma 4.2. Let $\delta^j = (\delta_{j,0}, \dots, \delta_{j,N-1}) \in (0, \infty)^N$ be given with $\lim_{j \to \infty} \delta^j = 0$. Moreover, let $y^{\delta^j} = (y_0^{\delta^j}, \dots, y_{N-1}^{\delta^j}) \in Y^N$ be a corresponding sequence of noisy data satisfying $\| y_i^{\delta^j} - y_i \| \le \delta_{j,i}$, $i = 0, \dots, N-1$, $j \in \mathbb{N}$. Then, for each $k \in \mathbb{N}$ we have $\lim_{j \to \infty} x_{k+1}^{\delta^j} = x_{k+1}$.

Proof. Notice that the global minimizers of $J_k$ in (10) are unique. Indeed, let $\delta \in (0, \infty)^N$ and $y^\delta \in Y^N$ be given as in (1).
If $x_1, x_2 \in B_\rho(x_0)$ are minimizers of $J_k$, we have, using (5),

$$ \| x_1 - x_2 \|^2 = \alpha^{-1} \langle F_{[k]}'(x_2)^* ( F_{[k]}(x_2) - y_{[k]}^\delta ) - F_{[k]}'(x_1)^* ( F_{[k]}(x_1) - y_{[k]}^\delta ) \,, \ x_1 - x_2 \rangle $$

$$ = \alpha^{-1} \big[ \langle F_{[k]}(x_2) - y_{[k]}^\delta \,, \ ( F_{[k]}'(x_2) - F_{[k]}'(x_1) )(x_1 - x_2) \rangle + \langle F_{[k]}(x_2) - F_{[k]}(x_1) \,, \ F_{[k]}'(x_1)(x_1 - x_2) \rangle \big] \le (M_1 + M) L \, \alpha^{-1} \| x_1 - x_2 \|^2 \,, $$

and from (A4) it follows that $x_1 = x_2$. An immediate consequence of this uniqueness is the fact that the iterative steps $x_{k+1}^\delta$ in (5) are uniquely defined (see (10)).

The proof of Lemma 4.2 uses an inductive argument in $k$. First we consider the case $k = 0$. Notice that $x_0^{\delta^j} = x_0$ for $j \in \mathbb{N}$, and we can estimate

$$ \| x_1^{\delta^j} - x_1 \|^2 = \alpha^{-1} \langle F_{[0]}'(x_1)^* ( F_{[0]}(x_1) - y_{[0]} ) - F_{[0]}'(x_1^{\delta^j})^* ( F_{[0]}(x_1^{\delta^j}) - y_{[0]}^{\delta^j} ) \,, \ x_1^{\delta^j} - x_1 \rangle $$

$$ = \alpha^{-1} \big[ \langle F_{[0]}(x_1) - y_{[0]} \,, \ ( F_{[0]}'(x_1) - F_{[0]}'(x_1^{\delta^j}) )(x_1^{\delta^j} - x_1) \rangle + \langle F_{[0]}(x_1) - F_{[0]}(x_1^{\delta^j}) \,, \ F_{[0]}'(x_1^{\delta^j})(x_1^{\delta^j} - x_1) \rangle + \langle y_{[0]}^{\delta^j} - y_{[0]} \,, \ F_{[0]}'(x_1^{\delta^j})(x_1^{\delta^j} - x_1) \rangle \big] $$

$$ \le (M_1 + M) L \, \alpha^{-1} \| x_1^{\delta^j} - x_1 \|^2 + M \alpha^{-1} \delta_{j,0} \, \| x_1^{\delta^j} - x_1 \| \,. \tag{21} $$

Therefore, it follows from (A4) that $\lim_{j \to \infty} x_1^{\delta^j} = x_1$. Next, let $k > 0$ and assume that for all $k' < k$ we have $\lim_{j \to \infty} x_{k'+1}^{\delta^j} = x_{k'+1}$. Arguing as in (21), we obtain the estimate

$$ \| x_{k+1}^{\delta^j} - x_{k+1} \|^2 \le (M_1 + M) L \, \alpha^{-1} \| x_{k+1}^{\delta^j} - x_{k+1} \|^2 + \big( M \alpha^{-1} \delta_{j,[k]} + \| x_k^{\delta^j} - x_k \| \big) \| x_{k+1}^{\delta^j} - x_{k+1} \| \,. $$

From (A4) it follows that

$$ \big[ \alpha - (M_1 + M) L \big] \, \alpha^{-1} \| x_{k+1}^{\delta^j} - x_{k+1} \| \le M \alpha^{-1} \delta_{j,[k]} + \| x_k^{\delta^j} - x_k \| \,, \tag{22} $$

and from the induction hypothesis we conclude that $\lim_{j \to \infty} x_{k+1}^{\delta^j} = x_{k+1}$. □

Theorem 4.3 (Convergence for noisy data). Let $\delta^j = (\delta_{j,0}, \dots, \delta_{j,N-1})$ be a given sequence in $(0, \infty)^N$ with $\lim_{j \to \infty} \delta^j = 0$, and let $y^{\delta^j} = (y_0^{\delta^j}, \dots, y_{N-1}^{\delta^j}) \in Y^N$ be a corresponding sequence of noisy data satisfying $\| y_i^{\delta^j} - y_i \| \le \delta_{j,i}$, $i = 0, \dots, N-1$, $j \in \mathbb{N}$.
Denote by $k_*^j := k_*(\delta^j, y^{\delta^j})$ the corresponding stopping index defined in (6), and assume that the sequence $\{ k_*^j \}_{j \in \mathbb{N}}$ is unbounded. Then $x_{k_*^j}^{\delta^j}$ converges to a solution of (2) as $j \to \infty$. Moreover, if (15) holds, then $x_{k_*^j}^{\delta^j} \to x^\dagger$.

Proof. The proof is analogous to the proof of [9, Theorem 3.6] and will be omitted. In the proof, [9, Theorem 3.5] has to be replaced by Lemma 4.2 above. □

Remark 4.4. The assumption on the unboundedness of the sequence $\{ k_*^j \}_{j \in \mathbb{N}}$ in Theorem 4.3 is crucial for the proof. This assumption is natural when dealing with ill-posed problems and noisy data, since in practical applications one generally has $k_*^\delta \to \infty$ as $\delta \to 0$. A similar assumption is also needed in [22] to prove convergence of the Landweber-Kaczmarz iteration for noisy data.

In Section 5 we investigate the coupling of the iTK iteration with a loping strategy, which allows us to drop the above assumption on the unboundedness of $\{ k_*^j \}_{j \in \mathbb{N}}$ and still prove a semi-convergence result analogous to Theorem 4.3.

5 The l-iTK method

Motivated by the ideas in [12, 9, 13, 3], we investigate in this section a loping iterated Tikhonov-Kaczmarz method (l-iTK method) for solving (2). This iterative method is defined by

$$ x_{k+1}^\delta = x_k^\delta - \alpha^{-1} \omega_k \, F_{[k]}'(x_{k+1}^\delta)^* \big( F_{[k]}(x_{k+1}^\delta) - y_{[k]}^\delta \big) \,, \tag{23} $$

where

$$ \omega_k := \begin{cases} 1 \,, & \| F_{[k]}(x_{k+1}^\delta) - y_{[k]}^\delta \| \ge \tau \delta_{[k]} \,, \\ 0 \,, & \text{otherwise} \,. \end{cases} \tag{24} $$

The positive constants $\alpha$ and $\tau$ are defined as in (9). The meaning of (23), (24) is the following: at each iterative step an element $x_{k+1/2} \in D_{[k]}$ satisfying

$$ x_{k+1/2} = x_k^\delta - \alpha^{-1} F_{[k]}'(x_{k+1/2})^* \big( F_{[k]}(x_{k+1/2}) - y_{[k]}^\delta \big) $$

is computed. If $\| F_{[k]}(x_{k+1/2}) - y_{[k]}^\delta \| \ge \tau \delta_{[k]}$ we set $x_{k+1}^\delta = x_{k+1/2}$; otherwise $x_{k+1}^\delta = x_k^\delta$.

For exact data ($\delta = 0$) the l-iTK method reduces to the iTK iteration investigated in the previous sections. For noisy data, however, the l-iTK method is fundamentally different from the iTK method: the bang-bang relaxation parameter $\omega_k$ effects that the iterates defined in (23) become stationary once all components of the residual vector $\big( \| F_i(x_k^\delta) - y_i^\delta \| \big)_i$ fall below a pre-specified threshold. This characteristic renders (23) a regularization method, as we shall see in Subsection 5.1.

Remark 5.1.
As observed in Remark 1.1, the iteration in (23) corresponds to $x_{k+1}^\delta \in \arg\min_x \big\{ \omega_k \| F_{[k]}(x) - y_{[k]}^\delta \|^2 + \alpha \| x - x_k^\delta \|^2 \big\}$ and is not necessarily uniquely defined. For noisy data, a semi-convergence result is obtained under the smoothness assumption (A4) on the operators $F_i$, which guarantees that the l-iTK iteration is uniquely defined.

The l-iTK iteration should be terminated when, for the first time, all $x_k^\delta$ are equal within a cycle. That is, we stop the iteration at

$$ k_*^\delta := \min \big\{ lN \in \mathbb{N} : x_{lN}^\delta = x_{lN+1}^\delta = \dots = x_{lN+N-1}^\delta \big\} \,. \tag{25} $$

Notice that $k_*^\delta$ is the smallest multiple of $N$ such that

$$ x_{k_*^\delta}^\delta = x_{k_*^\delta+1}^\delta = \dots = x_{k_*^\delta+N-1}^\delta \,. \tag{26} $$

5.1 Convergence analysis

In what follows we assume that (A1)-(A3) and (A4) hold true and that $x_k^\delta$, $\omega_k$, $\alpha$ and $\tau$ are defined by (23), (24) and (9). We start by listing some straightforward facts about the l-iTK iteration:

• Lemma 2.2 holds true.
• Lemma 2.3 still holds true, but (12) has to be replaced by

$$ \| x_{k+1}^\delta - x_* \|^2 - \| x_k^\delta - x_* \|^2 \le \frac{2 \omega_k}{\alpha} \| F_{[k]}(x_{k+1}^\delta) - y_{[k]}^\delta \| \Big[ (\eta - 1) \| F_{[k]}(x_{k+1}^\delta) - y_{[k]}^\delta \| + (1 + \eta) \delta_{[k]} \Big] \,. \tag{27} $$

• Lemma 2.4 and Proposition 2.5 hold true.
• Theorem 3.2 holds true (for exact data, the l-iTK iteration reduces to iTK).

Before proving the main semiconvergence theorem we need two auxiliary results: the first guarantees that, for noisy data, the stopping index $k_*^\delta$ in (25) is finite (compare with Proposition 4.1); the second is the analogue of Lemma 4.2 for the l-iTK iteration.

Proposition 5.2.
Assume $\delta_{\min} := \min \{ \delta_0, \dots, \delta_{N-1} \} > 0$. Then $k_*^\delta$ in (25) is finite, and

$$ \| F_i(x_{k_*^\delta}^\delta) - y_i^\delta \| < \kappa \tau \delta_i \,, \quad i = 0, \dots, N-1 \,, \tag{28} $$

where $\kappa := \big[ (1 + \eta) + M^2 / \alpha \big] / (1 - \eta)$.

Proof. Assume by contradiction that for every $l \in \mathbb{N}$ there exists $i(l) \in \{0, \dots, N-1\}$ such that $x_{lN+i(l)}^\delta \ne x_{lN}^\delta$. From Proposition 2.5 it follows that (27) can be applied recursively for $k = 1, \dots, lN$, and we obtain

$$ - \| x_0 - x_* \|^2 \le \sum_{k=1}^{lN-1} \frac{2 \omega_k}{\alpha} \| F_{[k]}(x_{k+1}^\delta) - y_{[k]}^\delta \| \big[ (\eta - 1) \| F_{[k]}(x_{k+1}^\delta) - y_{[k]}^\delta \| + (1 + \eta) \delta_{[k]} \big] \,, \quad l \in \mathbb{N} \,. $$

Using the fact that either $\omega_k = 0$ or $\| F_{[k]}(x_{k+1}^\delta) - y_{[k]}^\delta \| \ge \tau \delta_{[k]}$, we obtain the estimate

$$ \| x_0 - x_* \|^2 \ge \sum_{k=1}^{lN-1} \frac{2 \omega_k}{\alpha} \| F_{[k]}(x_{k+1}^\delta) - y_{[k]}^\delta \| \, \delta_{[k]} \big[ \tau (1 - \eta) - (1 + \eta) \big] \,. \tag{29} $$

Inequality (29) and the fact that $x_{l'N+i(l')}^\delta \ne x_{l'N}^\delta$ for all $l' \in \mathbb{N}$ (i.e., in every cycle at least one update with $\omega_k = 1$ occurs) imply

$$ \| x_0 - x_* \|^2 \ge \frac{2}{\alpha} \big[ \tau (1 - \eta) - (1 + \eta) \big] \tau \delta_{\min}^2 \, l \,, \quad l \in \mathbb{N} \,. \tag{30} $$

Due to (9), the right hand side of (30) tends to $+\infty$ as $l \to \infty$, which gives a contradiction. Consequently, the set $\{ l \in \mathbb{N} : x_{lN+i}^\delta = x_{lN}^\delta \,, \ 0 \le i \le N-1 \}$ is not empty and the minimum in (25) takes a finite value.

It remains to prove (28). For each fixed $i \in \{0, \dots, N-1\}$ we have

$$ \| F_i(x_{k_*^\delta}^\delta) - y_i^\delta \| \le \| F_i(x_{k_*^\delta}^\delta) - F_i(x_{k_*^\delta+1/2}^\delta) + F_i'(x_{k_*^\delta+1/2}^\delta)(x_{k_*^\delta+1/2}^\delta - x_{k_*^\delta}^\delta) \| + \| F_i(x_{k_*^\delta+1/2}^\delta) - y_i^\delta \| + \| F_i'(x_{k_*^\delta+1/2}^\delta)(x_{k_*^\delta+1/2}^\delta - x_{k_*^\delta}^\delta) \| $$

$$ \le \eta \, \| F_i(x_{k_*^\delta}^\delta) - F_i(x_{k_*^\delta+1/2}^\delta) \pm y_i^\delta \| + \tau \delta_i + M \, \| x_{k_*^\delta+1/2}^\delta - x_{k_*^\delta}^\delta \| $$

$$ \le \eta \, \| F_i(x_{k_*^\delta}^\delta) - y_i^\delta \| + (1 + \eta) \tau \delta_i + M \alpha^{-1} \| F_i'(x_{k_*^\delta+1/2}^\delta)^* ( F_i(x_{k_*^\delta+1/2}^\delta) - y_i^\delta ) \| $$

(here we used that $\omega_{k_*^\delta+i} = 0$ and $\| F_i(x_{k_*^\delta+1/2}^\delta) - y_i^\delta \| < \tau \delta_i$). Therefore, we obtain the estimate

$$ (1 - \eta) \, \| F_i(x_{k_*^\delta}^\delta) - y_i^\delta \| \le (1 + \eta) \tau \delta_i + M^2 \alpha^{-1} \| F_i(x_{k_*^\delta+1/2}^\delta) - y_i^\delta \| \,, \tag{31} $$

and (28) follows. □

Lemma 5.3.
Let $\delta^j = (\delta_{j,0}, \dots, \delta_{j,N-1}) \in (0, \infty)^N$ be given with $\lim_{j \to \infty} \delta^j = 0$. Moreover, let $y^{\delta^j} = (y_0^{\delta^j}, \dots, y_{N-1}^{\delta^j}) \in Y^N$ be a corresponding sequence of noisy data satisfying $\| y_i^{\delta^j} - y_i \| \le \delta_{j,i}$, $i = 0, \dots, N-1$, $j \in \mathbb{N}$. Then, for each fixed $k \in \mathbb{N}$ we have $\lim_{j \to \infty} x_{k+1}^{\delta^j} = x_{k+1}$.

Proof. Arguing as in the first part of the proof of Lemma 4.2, we conclude that the iterative steps $x_{k+1}^\delta$ in (23)-(24) are uniquely defined.

The proof of Lemma 5.3 uses an inductive argument in $k$. First we take $k = 0$ (notice that $x_0^{\delta^j} = x_0$ for $j \in \mathbb{N}$). We have to consider two cases. If $\omega_0 = 1$, we argue as in (21) and obtain the estimate

$$ \| x_1^{\delta^j} - x_1 \| \le M \big[ \alpha - (M_1 + M) L \big]^{-1} \delta_{j,0} \,. \tag{32} $$

Otherwise, if $\omega_0 = 0$, we have $x_1^{\delta^j} = x_0$ and $\| F_{[0]}(x_{1/2}) - y_{[0]}^{\delta^j} \| \le \tau \delta_{j,0}$. Therefore,

$$ \| x_1^{\delta^j} - x_1 \|^2 = \alpha^{-1} \langle F_{[0]}'(x_1)^* \big( F_{[0]}(x_1) - y_{[0]} \pm F_{[0]}(x_0) \pm y_{[0]}^{\delta^j} \big) \,, \ x_1^{\delta^j} - x_1 \rangle $$

$$ \le M \alpha^{-1} \| x_1^{\delta^j} - x_1 \| \big\{ \| F_{[0]}(x_1) - F_{[0]}(x_0) \| + \| F_{[0]}(x_0) - y_{[0]}^{\delta^j} \| + \| y_{[0]}^{\delta^j} - y_{[0]} \| \big\} $$

$$ \le (M_1 + M) \alpha^{-1} \| x_1^{\delta^j} - x_1 \| \big\{ L \| x_1 - x_0 \| + \| F_{[0]}(x_0) - y_{[0]}^{\delta^j} \| + \delta_{j,0} \big\} \,. $$

Arguing as in (31), we estimate $\| F_{[0]}(x_0) - y_{[0]}^{\delta^j} \| \le \kappa \tau \delta_{j,0}$. Therefore, it follows that

$$ \| x_1^{\delta^j} - x_1 \| \le \alpha \big[ \alpha - (M_1 + M) L \big]^{-1} ( \kappa \tau + 1 ) \, \delta_{j,0} \,. \tag{33} $$

Thus, it follows from (32), (33) and (A4) that $\lim_{j \to \infty} x_1^{\delta^j} = x_1$.

Now take $k > 0$ and assume that for all $k' < k$ we have $\lim_{j \to \infty} x_{k'+1}^{\delta^j} = x_{k'+1}$. Once again two cases must be considered: $\omega_k = 1$ and $\omega_k = 0$. Arguing as in the case $k = 0$, we obtain estimates similar to (32) and (33). Thus, $\lim_{j \to \infty} x_{k+1}^{\delta^j} = x_{k+1}$ follows using the induction hypothesis (compare with (22) and the corresponding step in the proof of Lemma 4.2). □

We are now ready to state and prove a semiconvergence result for the l-iTK iteration.

Theorem 5.4.
Let $\delta^j = (\delta_{j,0}, \dots, \delta_{j,N-1})$ be a given sequence in $(0, \infty)^N$ with $\lim_{j \to \infty} \delta^j = 0$, and let $y^{\delta^j} = (y_0^{\delta^j}, \dots, y_{N-1}^{\delta^j}) \in Y^N$ be a corresponding sequence of noisy data satisfying $\| y_i^{\delta^j} - y_i \| \le \delta_{j,i}$, $i = 0, \dots, N-1$, $j \in \mathbb{N}$. Denote by $k_*^j := k_*(\delta^j, y^{\delta^j})$ the corresponding stopping index defined in (25). Then $x_{k_*^j}^{\delta^j}$ converges to a solution of (2) as $j \to \infty$. Moreover, if (15) holds, then $x_{k_*^j}^{\delta^j}$ converges to $x^\dagger$.

Proof. The proof is analogous to the proof of [9, Theorem 3.6] and is divided into two cases. In the second case (the sequence $\{ k_*^j \}$ is not bounded) one has to argue with Lemma 5.3. □

(Notice that for distinct $i \in \{0, \dots, N-1\}$ the points $x_{k_*^\delta+1/2}^\delta$ may be different, since they are minimizers of the Tikhonov functionals $J_{k_*^\delta+i}(x) := \| F_i(x) - y_i^\delta \|^2 + \alpha \| x - x_{k_*^\delta}^\delta \|^2$.)

6 Applications
In this section we address parameter identification problems in elliptic equations. The focus is on the question whether the local tangential cone condition (8) is satisfied.

Part of the following analysis is based on the verification of a stronger condition which implies the local tangential cone condition, namely the (adjoint) range invariance condition: there exists a family of bounded linear operators $R_x : Y \to Y$ and a positive constant $c$ such that

$$ F'(x) = R_x F'(x^\dagger) \quad \text{and} \quad \| R_x - \mathrm{id} \| \le c \, \| x - x^\dagger \|_X \,, \quad x \in B_\rho(x_0) \,. \tag{34} $$

It is a well known fact that the range invariance condition implies that $\mathrm{range}(F'(x)) = \mathrm{range}(F'(x^\dagger))$, $x \in B_\rho(x_0)$.

The model problem under investigation is the elliptic boundary value problem

$$ -(a u_s)_s + (b u)_s + c u = f \,, \quad \text{in } (0, 1) \,, \tag{35} $$

$$ -\alpha_0 u_s(0) + \beta_0 u(0) = g_0 \,, \qquad -\alpha_1 u_s(1) + \beta_1 u(1) = g_1 \,. \tag{36} $$

Here $f$ is a given function in $L^2(0,1)$ and $\alpha_i, \beta_i, g_i$ are real numbers specified below. To simplify the discussion we consider here the one-dimensional case only, but we shall give some hints concerning the two- and three-dimensional cases.

The equation in (35) may be considered as a simplified model for a steady state convection-diffusion equation. The term $cu$ is a production term, where the function $c$ depends on properties of the material. The term $-(a u_s)_s + (b u)_s$ results from an ansatz for the flux $j := -a u_s + b u$. Here $a, b$ are functions describing the diffusive and convective part, respectively. For a concrete application see for instance [2], Chapter I.2.

We want to identify the parameters $a, b, c$ from a measurement $u^\delta \in L^2(0,1)$ of the solution $u \in L^2(0,1)$ of the boundary value problem (35), (36). We distinguish between three different inverse problems, namely the so-called a/b/c-problems:
The a-problem: Find $a$ under the assumptions $b \equiv 0$, $c \equiv 0$.
The b-problem: Find $b$ under the assumptions $a \equiv 1$, $c \equiv 1$.
The c-problem: Find $c$ under the assumptions $a \equiv 1$, $b \equiv 0$.

Each of these problems can be written in the form $F(x) = y$ for an appropriately chosen parameter-to-output mapping $F : D \subset X \to Y$.

The a- and c-problems are considered in a large number of references, whereas the b-problem has received less attention. It seems that the tangential cone condition for this problem has not been investigated up to now; we do that below. A detailed analysis of regularization methods for the identification in elliptic and parabolic equations can be found in [4].

6.1 The c-problem

Let us start the discussion with the c-problem, the simplest one. Here the mapping $F$ is defined as follows:

$$ F : D \ni c \mapsto u(c) \in L^2(0,1) \,, \quad D \subset X := Y := L^2(0,1) \,, $$

(For a proof that the local tangential cone condition follows from the range invariance condition, see [16].) Here $u(c)$ solves the boundary value problem

$$ -u_{ss} + c u = f \,, \ \text{in } (0,1) \,, \qquad u(0) = g_0 \,, \ u(1) = g_1 $$

in the weak sense. The domain of definition is chosen as a ball in $X = L^2(0,1)$ (see [8]):

$$ D := B_\rho(c_0) \,, \quad \text{where } c_0 \in L^2(0,1) \,, \ c_0 \ge 0 \ \text{a.e.} $$

Then the mapping $F$ is Fréchet-differentiable in $D$ (see [10, 20]) and we have

$$ F'(c) h = \Gamma(c)^{-1} \big( -h \, u(c) \big) \,, \qquad F'(c)^* w = -u(c) \, \Gamma(c)^{-1} w \,, \quad h, w \in L^2(0,1) \,, $$

where $\Gamma(c) : H^2(0,1) \cap H_0^1(0,1) \to L^2(0,1)$ is defined by $\Gamma(c) u := -u_{ss} + c u$. We assume that the data are such that $u(c) \ge \kappa$ a.e. for each $c \in D$, where $\kappa$ is a positive constant. Then we have

$$ F'(\tilde{c}) = R(\tilde{c}, c) \, F'(c) \,, \quad c, \tilde{c} \in D \,, \tag{37} $$

with

$$ R(\tilde{c}, c)^* w = \Gamma(\tilde{c}) \big[ u(\tilde{c}) \, u(c)^{-1} \, \Gamma(c)^{-1} w \big] \,, \quad w \in L^2(0,1) \,, \qquad \| R(\tilde{c}, c) - \mathrm{id} \| \le \kappa_1 \, \| \tilde{c} - c \| \,, \quad c, \tilde{c} \in D \,. $$

Here $\kappa_1$ is a positive constant. As a result, we see that the range invariance condition is satisfied and the tangential cone condition follows.

Remark 6.1.
The results above hold also in the two- and three-dimensional cases; no further assumptions are necessary (see, e.g., [14, 19]). Clearly, the boundary conditions have now to be understood in the sense of trace operators.
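To experiment with the c-problem numerically, the parameter-to-output map $c \mapsto u(c)$ can be approximated by standard finite differences. The sketch below is our own illustration, not taken from the paper (grid size, data, and boundary values are arbitrary choices); it discretizes $-u_{ss} + c u = f$ on $(0,1)$ with $u(0) = g_0$, $u(1) = g_1$:

```python
import numpy as np

def solve_c_problem(c, f, g0, g1, n=100):
    """Approximate u with -u'' + c u = f on (0,1), u(0)=g0, u(1)=g1,
    via central finite differences on n interior grid points."""
    h = 1.0 / (n + 1)
    # Tridiagonal matrix of -u'' plus the diagonal of c at interior nodes
    main = 2.0 / h**2 + c
    off = -1.0 / h**2 * np.ones(n - 1)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    rhs = f.copy()
    rhs[0] += g0 / h**2        # boundary contributions
    rhs[-1] += g1 / h**2
    return np.linalg.solve(A, rhs)

# Sanity check: for c = 0, f = 0 the solution is linear, u(s) = g0 + (g1-g0)s,
# and central differences reproduce it exactly.
n = 100
s = np.linspace(0, 1, n + 2)[1:-1]
u = solve_c_problem(np.zeros(n), np.zeros(n), g0=1.0, g1=3.0, n=n)
print(np.allclose(u, 1.0 + 2.0 * s))  # True
```

With such a discrete forward map, each iTK step (4) for the c-problem amounts to a small nonlinear least-squares problem.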
6.2 The b-problem

Here the parameter-to-output mapping $F$ is defined as follows:

$$ F : D \ni b \mapsto u(b) \in L^2(0,1) \,, \quad D \subset X := H^1(0,1) \,, \ Y := L^2(0,1) \,, $$

where $u(b)$ solves the boundary value problem

$$ -u_{ss} + (b u)_s + u = f \,, \ \text{in } (0,1) \,, \qquad -u_s(0) + b u(0) = g_0 \,, \ -u_s(1) + b u(1) = g_1 $$

in the weak sense. The boundary value problem above is uniquely solvable in $H^1(0,1)$ whenever $\| b \|_X$ is small enough, which can be seen from an application of the Lax-Milgram lemma. Therefore we choose $D$ as a ball $B_\rho := \{ x \in X : \| x \|_X \le \rho \}$ in $X$, with $\rho$ small enough such that $u(b)$ is uniquely determined for each $b \in B_\rho$. Additionally, the assumption that each parameter $b$ belongs to $H^1(0,1)$ ensures that the solution $u(b)$ is in $H^2(0,1)$ for each $b \in B_\rho$. Then $F$ is Fréchet-differentiable in $b$ and $F'(b) h = v$, where $v$ solves

$$ -v_{ss} + (b v)_s + v = -(h u)_s \ \text{in } (0,1) \,, \tag{38} $$

$$ \big( -v_s + b v \big) \big|_{s=0,1} = -h u \big|_{s=0,1} \,. \tag{39} $$

We want to verify an inequality which leads to the tangential cone condition. Let $u = u(b)$, $\tilde{u} = u(\tilde{b})$ with $\tilde{b}, b \in B_\rho$. Moreover, let $v := F'(b)(\tilde{b} - b)$. We define the mapping $Q(b) : Y \to H^1(0,1)$, where $\psi := Q(b) w$ solves the boundary value problem

$$ -\psi_{ss} - b \psi_s + \psi = w \ \text{in } (0,1) \,, \qquad \psi_s(0) = \psi_s(1) = 0 \,, $$

in the weak sense. Since $b \in H^1(0,1)$, we see that $\psi$ is more regular, namely $\psi \in H^2(0,1)$. Let $w \in Y$, $\| w \|_Y \le 1$, and let $\psi := Q(b) w$. Then

$$ \langle \tilde{u} - u - F'(b)(\tilde{b} - b), w \rangle_Y = \langle \tilde{u} - u - v, w \rangle_Y = \langle \tilde{u} - u - v, \, -\psi_{ss} - b \psi_s + \psi \rangle_Y $$

$$ = \langle -(\tilde{u} - u)_{ss} + [ b (\tilde{u} - u) ]_s + (\tilde{u} - u), \, \psi \rangle_Y + \langle v_{ss} - [ b v ]_s - v, \, \psi \rangle_Y + \big[ (\tilde{b} - b)(\tilde{u} - u) \psi \big]_{s=0}^{s=1} $$

$$ = \langle [ (b - \tilde{b}) \tilde{u} ]_s, \, \psi \rangle_Y + \langle [ (\tilde{b} - b) u ]_s, \, \psi \rangle_Y + \big[ (\tilde{b} - b)(\tilde{u} - u) \psi \big]_{s=0}^{s=1} = \langle (\tilde{b} - b)(\tilde{u} - u), \, \psi_s \rangle_Y \,. $$

This implies

$$ \| F(\tilde{b}) - F(b) - F'(b)(\tilde{b} - b) \|_Y = \sup_{\| w \|_Y \le 1} | \langle \tilde{u} - u - F'(b)(\tilde{b} - b), w \rangle_Y | \le \sup_{\| w \|_Y \le 1} | \langle (\tilde{b} - b)(\tilde{u} - u), \, (Q(b) w)_s \rangle_Y | $$

$$ \le \| (\tilde{b} - b)(\tilde{u} - u) \|_{L^2(0,1)} \sup_{\| w \|_Y \le 1} \| (Q(b) w)_s \|_{L^2(0,1)} \le \| \tilde{b} - b \|_{L^\infty(0,1)} \| \tilde{u} - u \|_{L^2(0,1)} \sup_{\| w \|_Y \le 1} \| Q(b) w \|_{H^1(0,1)} \,, $$

and, using the continuous embedding $H^1(0,1) \hookrightarrow L^\infty(0,1)$, we derive the estimate

$$ \| F(\tilde{b}) - F(b) - F'(b)(\tilde{b} - b) \|_Y \le \kappa \, \| \tilde{b} - b \|_{H^1(0,1)} \, \| \tilde{u} - u \|_{L^2(0,1)} \,, \tag{40} $$

where the constant $\kappa$ depends on the norm of the mapping $Q(b)$.

Remark 6.2.
The formulation of the $b$-problem above can easily be generalized to the two-dimensional case. The convection term is then $\partial_1(bu) + \partial_2(bu)$, and again a scalar function $b$ has to be identified. The situation is different when one models the first-order term in the equation by $b_1 \partial_1 u + b_2 \partial_2 u$ [17]. Then one has to identify two parameters and the analysis is much more delicate. It seems that this identification problem has not been considered in the framework chosen above; see [7] for an investigation of identifiability for this inverse problem. (Due to the Sobolev embedding of $H^s$ into $L^\infty$, in the two-dimensional case the parameter space $X$ has to be chosen as a subset of $H^{1+\varepsilon}$, for some $\varepsilon > 0$.)

Here the parameter-to-solution mapping $F$ is defined by
$$ F : D \ni a \mapsto u(a) \in L^2(0,1), \qquad D \subset X := H^1(0,1), \quad Y := L^2(0,1), $$
where $u(a)$ solves the boundary value problem
$$ -(a u_s)_s = f \ \text{ in } (0,1), \qquad u(0) = g_0, \quad u(1) = g_1 $$
in the weak sense. The domain of definition is chosen as
$$ D := \{ a \in H^1(0,1) \mid a(s) \ge \underline{a} \ \text{a.e.} \}, $$
where $\underline{a}$ is a positive constant. One can prove [20] that $F$ is Fréchet-differentiable in $D$ with
$$ F'(a)h = A(a)^{-1}\big( (h\, u(a)_s)_s \big), \qquad F'(a)^* w = -J^{-1}\big[\, u(a)_s\, (A(a)^{-1} w)_s \,\big], \qquad h, w \in L^2(0,1), \qquad (41) $$
where $A(a) : H^2(0,1) \cap H^1_0(0,1) \to L^2(0,1)$ is defined by $A(a)u := -(a u_s)_s$, and $J$ is defined by $J\psi := -\psi_{ss} + \psi$ together with homogeneous Neumann boundary conditions ($J^{-1}$ coincides with the adjoint of the embedding of $H^1(0,1)$ into $L^2(0,1)$).

Remark 6.3.
The results in this section strongly benefit from the fact that the model is one-dimensional. One can see this, for instance, in the fact that, due to the choice of the parameter space, each admissible parameter is a continuous function. In the two- or three-dimensional case, additional assumptions are necessary in order to obtain the same results (see, e.g., [14]).
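For illustration only (again, not part of the original text), the representation (41) of $F'(a)$ can be checked numerically in the same spirit as for the other examples; the discretization and the data $f$, $g_0$, $g_1$ below are arbitrary choices:

```python
import numpy as np

def solve_bvp_a(a_mid, rhs, g0, g1):
    """Finite-difference solve of -(a u_s)_s = rhs on (0,1), u(0)=g0, u(1)=g1.

    a_mid holds a at the n+1 cell midpoints, rhs at the n interior nodes."""
    n = rhs.size
    d = 1.0 / (n + 1)
    A = (np.diag((a_mid[:-1] + a_mid[1:]) / d**2)
         + np.diag(-a_mid[1:-1] / d**2, 1)
         + np.diag(-a_mid[1:-1] / d**2, -1))
    b = rhs.copy()
    b[0] += a_mid[0] * g0 / d**2
    b[-1] += a_mid[-1] * g1 / d**2
    return np.linalg.solve(A, b)

def F(a_mid, f, g0, g1):
    # parameter-to-solution map a -> u(a)
    return solve_bvp_a(a_mid, f, g0, g1)

def dF(a_mid, h_mid, f, g0, g1):
    # F'(a) h = A(a)^{-1} ((h u(a)_s)_s), with homogeneous Dirichlet data
    n = f.size
    d = 1.0 / (n + 1)
    u = np.concatenate(([g0], F(a_mid, f, g0, g1), [g1]))
    flux = h_mid * np.diff(u) / d          # h * u_s at the midpoints
    return solve_bvp_a(a_mid, np.diff(flux) / d, 0.0, 0.0)

n = 200
mid = (np.arange(n + 1) + 0.5) / (n + 1)  # cell midpoints
f, g0, g1 = np.ones(n), 1.0, 2.0          # illustrative data
a = 1.0 + mid                             # admissible parameter, a >= 1
h = np.cos(np.pi * mid)                   # perturbation direction

def remainder(t):
    # || F(a + t h) - F(a) - t F'(a) h ||, which should decay like t^2
    return np.linalg.norm(F(a + t * h, f, g0, g1) - F(a, f, g0, g1)
                          - t * dF(a, h, f, g0, g1))

print(remainder(1e-2) / remainder(1e-3))  # close to 100: quadratic remainder
```

Sampling $a$ at cell midpoints keeps the discrete operator symmetric and makes the discrete directional derivative consistent with the continuous formula in (41).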
Remark 6.4.
It seems that the range invariance condition cannot be proved (even under stronger regularity assumptions) for the $a$- and the $b$-problem, respectively; for the $a$-problem see [16]. Notice that the representation of the Fréchet derivative in (41), (38) cannot be handled in the same way as in the case of the $c$-problem.

In this paper we propose a new iterative method for inverse problems of the form (2), namely the iTK iteration. In the case of noisy data, we also propose a loping version of iTK, namely the l-iTK iteration. In the particular case of a single operator equation ($N = 1$ in (2)), iTK and l-iTK coincide and reduce to the classical iterated Tikhonov method. To the best of our knowledge, this method has so far been investigated only for linear problems [5, 15, 23], and the convergence analysis for nonlinear operator equations was still open.

Three good reasons for using the loping iteration
The first reason is a numerical one. Notice that (11) allows us to conclude $\omega_k = 0$ without having to compute $x_{k+1/2}$ at all. Therefore, after a large number of iterations, $\omega_k$ will vanish for some $k$ within each iteration cycle, and the computationally expensive evaluation of $x_{k+1/2}$ (the solution of a nonlinear equation) might be loped, making the l-iTK method in (23) a fast alternative to the iTK method as well as to classical Kaczmarz-type methods [22, 6].

The second reason is of an analytical nature. An alternative way to relax the assumption on the boundedness of the sequence $\{k_j^*\}_{j \in \mathbb{N}}$ in Theorem 4.3 and still prove a semiconvergence result is the introduction of the loping strategy above. This is done in Theorem 5.4.

The third reason is of a heuristic nature. The rules for choosing the stopping index $k^\delta_*$ in (6) and in (25) are quite different. According to (6), the iTK iteration should be stopped when, for the first time, one of the equations of system (2) is satisfied within a specified threshold. Therefore, at the iteration step $x^\delta_{k^\delta_*}$, we cannot control all the residuals $\|F_i(x^\delta_k) - y^\delta_i\|$ within the cycle. According to (25), however, the l-iTK iteration only stops when all the residuals $\|F_i(x^\delta_k) - y^\delta_i\|$, $i = 0, \dots, N-1$, are below the specified threshold. Even though the l-iTK iteration needs more steps to reach discrepancy, it produces an approximate solution $x^\delta_{k^\delta_*}$ which better fits all the system data.

Acknowledgments
We would like to thank Prof. M. Burger (Münster) for useful and stimulating discussions. A.DC. acknowledges support from CNPq grant 474593/2007-0. The work of A.L. is supported by the Brazilian National Research Council CNPq, grant 303098/2009-0, and by the Alexander von Humboldt Foundation AvH.
References

[1] A.B. Bakushinsky and M.Y. Kokurin, Iterative methods for approximate solution of inverse problems, Mathematics and Its Applications, vol. 577, Springer, Dordrecht, 2004.
[2] H.T. Banks and K. Kunisch, Estimation techniques for distributed parameter systems, Birkhäuser, 1989.
[3] J. Baumeister, B. Kaltenbacher, and A. Leitão, On Levenberg-Marquardt Kaczmarz methods for regularizing systems of nonlinear ill-posed equations, Inverse Problems and Imaging (2010), to appear.
[4] B. Blaschke(-Kaltenbacher), Some Newton type methods for the solution of nonlinear ill-posed problems, Ph.D. thesis, Johannes Kepler University, Linz, 2005.
[5] M. Brill and E. Schock, Iterative solution of ill-posed problems: a survey, in: Model Optimization in Exploration Geophysics (A. Vogel, ed.), pp. 13-38, Vieweg, Braunschweig, 1987.
[6] C. Byrne, Block-iterative algorithms, Int. Trans. Oper. Res. (2009), 1-37.
[7] J. Cheng and M. Yamamoto, Identification of convection term in a parabolic equation with a single measurement, Nonlinear Analysis (2002), no. 1, 163-171.
[8] F. Colonius and K. Kunisch, Stability of parameter estimation in two point boundary value problems, J. Reine Angew. Math. (1986), 1-29.
[9] A. De Cezaro, M. Haltmeier, A. Leitão, and O. Scherzer, On steepest-descent-Kaczmarz methods for regularizing systems of nonlinear ill-posed equations, Appl. Math. Comput. (2008), no. 2, 596-607.
[10] H.W. Engl, M. Hanke, and A. Neubauer, Regularization of inverse problems, Kluwer Academic Publishers, Dordrecht, 1996.
[11] C.W. Groetsch and O. Scherzer, Non-stationary iterated Tikhonov-Morozov method and third-order differential equations for the evaluation of unbounded operators, Math. Methods Appl. Sci. (2000), no. 15, 1287-1300.
[12] M. Haltmeier, A. Leitão, and O. Scherzer, Kaczmarz methods for regularizing nonlinear ill-posed equations. I. Convergence analysis, Inverse Probl. Imaging (2007), no. 2, 289-298.
[13] M. Haltmeier, A. Leitão, and E. Resmerita, On regularization methods of EM-Kaczmarz type, Inverse Problems (2009), 075008.
[14] M. Hanke, Regularizing properties of a truncated Newton-CG algorithm for nonlinear inverse problems, Numer. Funct. Anal. Optim. (1997), no. 9-10, 971-993.
[15] M. Hanke and C.W. Groetsch, Nonstationary iterated Tikhonov regularization, J. Optim. Theory Appl. (1998), no. 1, 37-53.
[16] M. Hanke, A. Neubauer, and O. Scherzer, A convergence analysis of Landweber iteration for nonlinear ill-posed problems, Numer. Math. (1995), 21-37.
[17] V. Isakov, Inverse problems for partial differential equations, second ed., Applied Mathematical Sciences, vol. 127, Springer, New York, 2006.
[18] S. Kaczmarz, Approximate solution of systems of linear equations, Internat. J. Control (1993), no. 6, 1269-1271.
[19] B. Kaltenbacher, Some Newton-type methods for the regularization of nonlinear ill-posed problems, Inverse Problems (1997), 729-753.
[20] B. Kaltenbacher, A. Neubauer, and O. Scherzer, Iterative regularization methods for nonlinear ill-posed problems, Radon Series on Computational and Applied Mathematics, vol. 6, Walter de Gruyter, Berlin, 2008.
[21] S. Kindermann and A. Neubauer, On the convergence of the quasioptimality criterion for (iterated) Tikhonov regularization, Inverse Probl. Imaging (2008), no. 2, 291-299.
[22] R. Kowar and O. Scherzer, Convergence analysis of a Landweber-Kaczmarz method for solving nonlinear ill-posed problems, in: Ill-Posed and Inverse Problems (book series) (2002), 69-90.
[23] L.J. Lardy, A series representation for the generalized inverse of a closed linear operator, Atti Accad. Naz. Lincei, Rend. Cl. Sci. Fis. Mat. Natur., Serie VIII (1975), 152-157.
[24] S. McCormick, The methods of Kaczmarz and row orthogonalization for solving linear equations and least squares problems in Hilbert space, Indiana Univ. Math. J. (1977), 1137-1150.
[25] V.A. Morozov, Regularization methods for ill-posed problems, CRC Press, Boca Raton, 1993.
[26] F. Natterer, Algorithms in tomography, in: State of the Art in Numerical Analysis, vol. 63, 1997, pp. 503-524.
[27] O. Scherzer,