A preconditioner based on sine transform for two-dimensional Riesz space fractional diffusion equations in convex domains
Xin Huang^a, Hai-Wei Sun^{a,*}

^a Department of Mathematics, University of Macau, Macao.
Abstract
In this paper, we develop a fast numerical method for solving time-dependent Riesz space fractional diffusion equations with a nonlinear source term in a convex domain. An implicit finite difference method is employed to discretize the Riesz space fractional diffusion equations with a penalty term on a rectangular region obtained by the volume-penalization approach. The stability and convergence of the proposed method are studied. Since the coefficient matrix has a Toeplitz-like structure, the generalized minimum residual (GMRES) method with a preconditioner based on the sine transform is exploited to solve the discretized linear system, where the preconditioner is constructed as a combination of two approximate inverse τ matrices, each of which can be diagonalized by the sine transform. The spectrum of the preconditioned matrix is also investigated. Numerical experiments are carried out to demonstrate the efficiency of the proposed method.

Keywords: Riesz fractional derivative, Toeplitz matrix, sine transform based preconditioner, GMRES method, penalization
1. Introduction
Consider the following two-dimensional Riesz space fractional diffusion equations (RSFDEs) defined in a convex domain with the homogeneous Dirichlet boundary condition [8]:

$$\frac{\partial u}{\partial t} = k_x \frac{\partial^{\alpha_1} u}{\partial |x|^{\alpha_1}} + k_y \frac{\partial^{\alpha_2} u}{\partial |y|^{\alpha_2}} + f(u,x,y,t), \quad (x,y,t)\in\Omega\times(0,T], \qquad (1.1)$$
$$u(x,y,0) = u_0(x,y), \quad (x,y)\in\Omega, \qquad (1.2)$$
$$u(x,y,t) = 0, \quad (x,y,t)\in\partial\Omega\times(0,T], \qquad (1.3)$$

where $\Omega\subset\mathbb{R}^2$ is a convex region whose left and right boundaries are $a(y)$ and $b(y)$, while the lower and upper boundaries are $a(x)$ and $b(x)$, respectively; $k_x$ and $k_y$ are positive constants representing the diffusivity coefficients; and the nonlinear source term $f(u,x,y,t)$ is supposed to satisfy the Lipschitz condition with respect to $u$ and $t$. The Riesz fractional derivatives $\frac{\partial^{\alpha_1}u}{\partial|x|^{\alpha_1}}$ and $\frac{\partial^{\alpha_2}u}{\partial|y|^{\alpha_2}}$ of orders $\alpha_i$ ($1<\alpha_i<2$, $i=1,2$) are described by

$$\frac{\partial^{\alpha_1}u}{\partial|x|^{\alpha_1}} = c_{\alpha_1}\left({}_{a(y)}D_x^{\alpha_1}u + {}_xD_{b(y)}^{\alpha_1}u\right), \qquad \frac{\partial^{\alpha_2}u}{\partial|y|^{\alpha_2}} = c_{\alpha_2}\left({}_{a(x)}D_y^{\alpha_2}u + {}_yD_{b(x)}^{\alpha_2}u\right),$$

where $c_{\alpha_i} = -\frac{1}{2\cos(\alpha_i\pi/2)} > 0$, and the left and right Riemann–Liouville fractional derivatives above are given by

$${}_{a(y)}D_x^{\alpha_1}u(x,y,t) = \frac{1}{\Gamma(2-\alpha_1)}\,\frac{\partial^2}{\partial x^2}\int_{a(y)}^{x}\frac{u(s,y,t)}{(x-s)^{\alpha_1-1}}\,\mathrm{d}s,$$
$${}_xD_{b(y)}^{\alpha_1}u(x,y,t) = \frac{1}{\Gamma(2-\alpha_1)}\,\frac{\partial^2}{\partial x^2}\int_{x}^{b(y)}\frac{u(s,y,t)}{(s-x)^{\alpha_1-1}}\,\mathrm{d}s.$$

Analogously, ${}_{a(x)}D_y^{\alpha_2}u$ and ${}_yD_{b(x)}^{\alpha_2}u$ can be defined in the same way.

∗ Corresponding author.
Email addresses: [email protected] (Xin Huang), [email protected] (Hai-Wei Sun)
Preprint submitted to Elsevier, February 24, 2021

Fractional differential equations provide plenty of efficient and powerful models for describing complex phenomena arising in various fields such as science, engineering, and physics; see [2, 4, 13]. The Riesz fractional diffusion equation, as one of the most popular fractional differential equations, has received tremendous attention in practical applications, including nonlocal dynamics [34], the lattice model [9], the FitzHugh–Nagumo model [12], and so on. The potential significance of Riesz fractional diffusion equations has attracted many researchers to study them.

Since the analytical solutions of fractional differential equations are usually unavailable, seeking numerical solutions has become a major research direction and has been intensively developed; see [24, 27, 29]. Because of the nonlocal character of the Riesz fractional operator, the coefficient matrix stemming from a numerical discretization is usually dense. Therefore, inverting it by direct solvers requires $O(N^3)$ computational cost and $O(N^2)$ storage, where $N$ denotes the number of unknowns. To overcome these demerits, numerous strategies have been proposed for the fast solution of fractional diffusion equations with cheaper memory; see [21, 32, 35].

However, if the fractional diffusion equations are defined in convex domains, the resulting coefficient matrices no longer have tensor forms. Hence the methods that are available for rectangular domains may not extend to problems in convex domains. In 2018, Chen et al.
in [8] employed the alternating direction implicit (ADI) method to solve the RSFDEs (1.1)–(1.3) in convex domains. In their work, the resulting coefficient matrix consists of Toeplitz blocks of different sizes, and the preconditioned conjugate gradient (PCG) method with a circulant preconditioner is exploited to solve the Toeplitz block system. Likewise, Jia and Wang [19] adopted the same approach to solve the distributed-order space-fractional diffusion equation defined on convex domains; see [11, 22]. Generally speaking, although the total amount of computation arising from a problem in a convex domain has the same order of magnitude as that in a rectangular domain, its computational complexity is greater than that of the rectangular cases.

To avoid the inconvenience of storing and inverting coefficient matrices with Toeplitz blocks of different sizes, more effective and robust approaches are urgently desired. Jia and Wang [18] first applied the volume-penalization method [1] to solve fractional differential equations defined in convex domains. More precisely, they enlarge the convex domain to a rectangular domain, whereby the resulting coefficient matrix becomes the sum of an exact tensor form of a Toeplitz-like matrix and a diagonal matrix which derives from the penalty term. As a result, the methods designed for problems in rectangular domains become available for those in convex domains. In essence, the volume-penalization approach transforms the problem in the convex domain into a more general problem. Numerically speaking, for many problems the solutions of the penalized fractional differential equations converge to the solutions of the original fractional differential equations, and such facts have been confirmed in [1, 5, 20].
Consequently, preconditioning techniques have been developed for those penalized systems; see [6, 10].

In this paper, we study the RSFDEs defined in convex domains in combination with the volume-penalization strategy. We discretize the model equations with the penalty term on a rectangular region via an implicit finite difference method, which is proved to be unconditionally stable and first-order accurate in both the temporal and spatial directions. Note that the resulting coefficient matrix is the sum of a diagonal matrix, whose entries equal 1 or 0, and an exact tensor form of a Toeplitz-like matrix. Therefore, the coefficient matrix can be stored in $O(N)$ memory, and the complexity of a matrix–vector multiplication is $O(N\log N)$ by the fast Fourier transform (FFT). Thus, when a Krylov subspace method is exploited to solve the discretized linear system, the computational cost per iteration stays at $O(N\log N)$ operations. In order to speed up the convergence rate of the iterative method, constructing an efficient and feasible preconditioner is necessary and significant. To this end, we establish a two-step approximation to construct the preconditioner. First, the sine transform based preconditioner, which is also called the τ matrix and can be diagonalized by the discrete sine transform [3, 26], is applied to approximate the Toeplitz matrix. We thereby obtain a preliminary preconditioner for which the spectrum of the preconditioned matrix is proved to be uniformly bounded in the open interval $(1/2, 3/2)$; however, this preliminary preconditioner cannot be inverted in $O(N\log N)$ operations. Therefore, we resort to the strategy adopted in [6, 10] to overcome this weakness: the second step is to combine two inverse τ matrices to approximate the previous preconditioner. The spectra of the preconditioned matrix are also analysed. Numerical results fully exhibit the efficiency of the proposed method.

The rest of the paper is organized as follows.
In Section 2, an implicit finite difference method is employed to discretize the RSFDEs in the extended rectangular domain. In Section 3, the convergence and stability of the difference scheme are studied. A preconditioner based on the sine transform is constructed in Section 4, and the spectra of the preconditioned matrix are discussed as well. In Section 5, numerical results are reported to demonstrate the effectiveness of the proposed method. Concluding remarks are given in Section 6.

2. Finite difference discretization

To seek the numerical solution of the problem (1.1)–(1.3), we extend the convex domain to a rectangular one, following the idea in [10]. Suppose that the domain $\Omega$ is contained in a rectangular domain $\bar\Omega = (a,b)\times(c,d)\supset\Omega$. Then, we reformulate the problem (1.1)–(1.3) as the following penalized equations:

$$\frac{\partial u_\eta}{\partial t} - k_x\frac{\partial^{\alpha_1}u_\eta}{\partial|x|^{\alpha_1}} - k_y\frac{\partial^{\alpha_2}u_\eta}{\partial|y|^{\alpha_2}} + \frac{1-\mathbb{1}_\Omega(x,y)}{\eta}\,u_\eta = \hat f(u_\eta,x,y,t), \quad (x,y,t)\in\bar\Omega\times(0,T], \qquad (2.1)$$
$$u_\eta(x,y,0) = \hat u_0(x,y), \quad (x,y)\in\bar\Omega, \qquad (2.2)$$
$$u_\eta(x,y,t) = 0, \quad (x,y,t)\in\partial\bar\Omega\times(0,T], \qquad (2.3)$$

where $\mathbb{1}_\Omega(x,y)$ is the indicator function satisfying $\mathbb{1}_\Omega(x,y)=1$ if $(x,y)\in\Omega$ and $0$ elsewhere, and $\hat f(u_\eta,x,y,t)$ and $\hat u_0(x,y)$ are extensions of the source term $f(u,x,y,t)$ and the initial value $u_0(x,y)$ from $\Omega$ to $\bar\Omega$, respectively. For convenience, zero extensions of $\hat f(u_\eta,x,y,t)$ are used in the theoretical analysis and the numerical experiments. Hence, (2.1) reduces to the RSFDE (1.1) if $(x,y)\in\Omega$. Moreover, the solution $u_\eta$ is supposed to satisfy the homogeneous Dirichlet boundary condition on the extended region $\bar\Omega\setminus\Omega$; i.e.,

$$\lim_{\eta\to 0^+} u_\eta(x,y,t) = 0.$$

Indeed, it has been proved that the solution of the penalized equation converges to the solution of the original equation for many problems; see [1, 20]. Therefore, in the following, we focus on the discretization of the penalized equations.

Let $n_1$, $n_2$, $m$ be positive integers. Denote by $h_x = \frac{b-a}{n_1+1}$ and $h_y = \frac{d-c}{n_2+1}$ the mesh sizes in the $x$ and $y$ directions. We further define uniform spatial partitions as $x_i = a + ih_x$ for $i = 0,\dots,n_1+1$ and $y_j = c + jh_y$ for $j = 0,\dots,n_2+1$, respectively. Let $\Delta t = T/m$ and $t_k = k\Delta t$ for $k = 0,\dots,m$. In order to discretize equation (2.1), we assume that the problem (2.1)–(2.3) is uniquely solvable and the solution $u_\eta(x,y,t)$ is sufficiently smooth on $\bar\Omega$.
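On this uniform mesh, the penalty coefficient $(1-\mathbb{1}_\Omega(x,y))/\eta$ of (2.1) becomes a simple Boolean mask over the interior grid points. The following Python sketch (ours, not the authors' code; the elliptical $\Omega$ and all parameters are hypothetical) illustrates its evaluation:

```python
import numpy as np

def penalty_coefficients(n1, n2, a, b, c, d, eta, inside):
    """Evaluate (1 - indicator_Omega(x, y)) / eta from (2.1) on the
    interior nodes x_i = a + i*h_x, y_j = c + j*h_y of the extended
    rectangle (a, b) x (c, d); `inside` tests membership in Omega."""
    x = a + (b - a) * np.arange(1, n1 + 1) / (n1 + 1)
    y = c + (d - c) * np.arange(1, n2 + 1) / (n2 + 1)
    X, Y = np.meshgrid(x, y, indexing="ij")
    return np.where(inside(X, Y), 0.0, 1.0 / eta)

# hypothetical example: Omega is the ellipse (x-2)^2/4 + (y-1)^2 <= 1,
# extended to the rectangle (0, 4) x (0, 2)
pen = penalty_coefficients(7, 7, 0.0, 4.0, 0.0, 2.0, 1e-6,
                           lambda X, Y: (X - 2) ** 2 / 4 + (Y - 1) ** 2 <= 1)
```

The coefficient vanishes on nodes inside $\Omega$ and takes the large value $1/\eta$ on the extended region, which is exactly what forces $u_\eta$ toward $0$ outside $\Omega$ as $\eta\to 0^+$.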
Then, applying the shifted Grünwald–Letnikov difference scheme to approximate the left and right Riemann–Liouville fractional derivatives with respect to $x$ at the grid point $(x_i,y_j,t_k)$, we obtain

$${}_aD_x^{\alpha_1}u_\eta(x_i,y_j,t_k) = \frac{1}{h_x^{\alpha_1}}\sum_{l=0}^{i+1} g_l^{(\alpha_1)} u_\eta(x_{i-l+1},y_j,t_k) + O(h_x), \qquad (2.4)$$
$${}_xD_b^{\alpha_1}u_\eta(x_i,y_j,t_k) = \frac{1}{h_x^{\alpha_1}}\sum_{l=0}^{n_1-i+2} g_l^{(\alpha_1)} u_\eta(x_{i+l-1},y_j,t_k) + O(h_x), \qquad (2.5)$$

where the coefficients $g_l^{(\alpha)}$ are defined by

$$g_0^{(\alpha)} = 1, \qquad g_l^{(\alpha)} = \left(1 - \frac{\alpha+1}{l}\right) g_{l-1}^{(\alpha)}, \quad \text{for } l \ge 1. \qquad (2.6)$$

Similarly, the above approximation results hold for the Riesz fractional derivative in the $y$ direction.

Next, we consider the discretization of the time derivative. The backward Euler difference scheme is exploited to approximate the time derivative as follows:

$$\frac{\partial u_\eta(x_i,y_j,t_k)}{\partial t} = \frac{u_\eta(x_i,y_j,t_k) - u_\eta(x_i,y_j,t_{k-1})}{\Delta t} + O(\Delta t). \qquad (2.7)$$

According to the above assumption and (2.7), we derive that there exists a constant $c_1$ such that $u_\eta(x,y,t)$ satisfies

$$|u_\eta(x_i,y_j,t_k) - u_\eta(x_i,y_j,t_{k-1})| \le c_1\Delta t.$$

To deal with the nonlinear term, noting that $\hat f(u_\eta,x,y,t)$ satisfies the Lipschitz condition in $u_\eta$ and $t$ on $\Omega$ and making use of the zero extensions on the extended region, we derive that, for arbitrary $u_1$ and $u_2$, there exists a constant $c_2 > 0$ such that

$$|\hat f(u_1,x,y,t) - \hat f(u_2,x,y,t)| < c_2 |u_1 - u_2|.$$

Likewise, there exists a constant $c_3 > 0$ associated with $t$ such that

$$|\hat f(u_\eta,x,y,t_1) - \hat f(u_\eta,x,y,t_2)| \le c_3 |t_1 - t_2|.$$
Thus, for $1\le i\le n_1$, $1\le j\le n_2$ and $1\le k\le m$, we have

$$\begin{aligned}
&|\hat f(u_\eta(x_i,y_j,t_k),x_i,y_j,t_k) - \hat f(u_\eta(x_i,y_j,t_{k-1}),x_i,y_j,t_{k-1})| \\
&\quad\le |\hat f(u_\eta(x_i,y_j,t_k),x_i,y_j,t_k) - \hat f(u_\eta(x_i,y_j,t_k),x_i,y_j,t_{k-1})| \\
&\qquad + |\hat f(u_\eta(x_i,y_j,t_k),x_i,y_j,t_{k-1}) - \hat f(u_\eta(x_i,y_j,t_{k-1}),x_i,y_j,t_{k-1})| \\
&\quad\le c_3|t_k - t_{k-1}| + c_2|u_\eta(x_i,y_j,t_k) - u_\eta(x_i,y_j,t_{k-1})| \\
&\quad\le c_3\Delta t + c_2 c_1\Delta t = (c_3 + c_1 c_2)\Delta t,
\end{aligned}$$

from which we obtain an approximation of the nonlinear source term $\hat f(u_\eta,x,y,t)$; i.e.,

$$\hat f(u_\eta(x_i,y_j,t_k),x_i,y_j,t_k) = \hat f(u_\eta(x_i,y_j,t_{k-1}),x_i,y_j,t_{k-1}) + O(\Delta t). \qquad (2.8)$$

Denote $c_x = \frac{\Delta t\,k_x c_{\alpha_1}}{h_x^{\alpha_1}} > 0$ and $c_y = \frac{\Delta t\,k_y c_{\alpha_2}}{h_y^{\alpha_2}} > 0$. By applying (2.4), (2.5), (2.7) and (2.8) to (2.1), we obtain the following expression:

$$\begin{aligned}
&u_\eta(x_i,y_j,t_k) - c_x\left(\sum_{l=0}^{i+1} g_l^{(\alpha_1)} u_\eta(x_{i-l+1},y_j,t_k) + \sum_{l=0}^{n_1-i+2} g_l^{(\alpha_1)} u_\eta(x_{i+l-1},y_j,t_k)\right) \\
&\quad - c_y\left(\sum_{l=0}^{j+1} g_l^{(\alpha_2)} u_\eta(x_i,y_{j-l+1},t_k) + \sum_{l=0}^{n_2-j+2} g_l^{(\alpha_2)} u_\eta(x_i,y_{j+l-1},t_k)\right) + \Delta t\,\frac{1-\mathbb{1}_\Omega(x_i,y_j)}{\eta}\,u_\eta(x_i,y_j,t_k) \\
&= u_\eta(x_i,y_j,t_{k-1}) + \Delta t\,\hat f(u_\eta(x_i,y_j,t_{k-1}),x_i,y_j,t_{k-1}) + \Delta t\,r^k_{ij},
\end{aligned} \qquad (2.9)$$

where there exists a constant $c_4$ such that

$$|r^k_{ij}| < c_4(h_x + h_y + \Delta t), \quad 1\le i\le n_1,\ 1\le j\le n_2,\ 1\le k\le m. \qquad (2.10)$$

Define the penalization coefficients $d_{i,j} = 0$ for $(x_i,y_j)\in\Omega$ and $d_{i,j} = \frac{\Delta t}{\eta}$ for $(x_i,y_j)\in\bar\Omega\setminus\Omega$. Denoting $f^k_{i,j} = \hat f(u_\eta(x_i,y_j,t_k),x_i,y_j,t_k)$, setting $u^k_{i,j}$ as the numerical approximation of $u_\eta(x_i,y_j,t_k)$, and omitting the small term $r^k_{i,j}$, we construct the difference scheme for solving (2.1) with the initial and boundary conditions (2.2) and (2.3) as follows:

$$u^k_{i,j} - c_x\left(\sum_{l=0}^{i+1} g_l^{(\alpha_1)} u^k_{i-l+1,j} + \sum_{l=0}^{n_1-i+2} g_l^{(\alpha_1)} u^k_{i+l-1,j}\right) - c_y\left(\sum_{l=0}^{j+1} g_l^{(\alpha_2)} u^k_{i,j-l+1} + \sum_{l=0}^{n_2-j+2} g_l^{(\alpha_2)} u^k_{i,j+l-1}\right) + d_{i,j} u^k_{i,j}$$
$$= u^{k-1}_{i,j} + \Delta t f^{k-1}_{i,j}, \quad 1\le i\le n_1,\ 1\le j\le n_2,\ 1\le k\le m, \qquad (2.11)$$
$$u^0_{i,j} = \hat u_0(x_i,y_j), \quad 0\le i\le n_1+1,\ 0\le j\le n_2+1, \qquad (2.12)$$
$$u^k_{0,j} = u^k_{n_1+1,j} = u^k_{i,0} = u^k_{i,n_2+1} = 0, \quad 1\le i\le n_1,\ 1\le j\le n_2,\ 1\le k\le m. \qquad (2.13)$$

Let $I$ be the identity matrix of appropriate size and $N = n_1 n_2$. Denote

$$u^k = [u^k_{1,1},\dots,u^k_{n_1,1},u^k_{1,2},\dots,u^k_{n_1,2},\dots,u^k_{1,n_2},\dots,u^k_{n_1,n_2}]^\intercal,$$
$$f^k = [f^k_{1,1},\dots,f^k_{n_1,1},f^k_{1,2},\dots,f^k_{n_1,2},\dots,f^k_{1,n_2},\dots,f^k_{n_1,n_2}]^\intercal,$$
$$D = \mathrm{diag}(d_{1,1},\dots,d_{n_1,1},d_{1,2},\dots,d_{n_1,2},\dots,d_{1,n_2},\dots,d_{n_1,n_2}).$$
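The weights $g_l^{(\alpha)}$ can be generated directly from the recurrence (2.6). The short Python sketch below (an illustration of ours, not the authors' code) does so and checks the sign pattern recorded in Lemma 2.1:

```python
import numpy as np

def gl_weights(alpha, n):
    """Grünwald-Letnikov weights g_0, ..., g_n for order alpha via the
    recurrence (2.6): g_0 = 1, g_l = (1 - (alpha + 1)/l) * g_{l-1}."""
    g = np.empty(n + 1)
    g[0] = 1.0
    for l in range(1, n + 1):
        g[l] = (1.0 - (alpha + 1.0) / l) * g[l - 1]
    return g

alpha = 1.5
g = gl_weights(alpha, 2000)
# Lemma 2.1: g_0 = 1, g_1 = -alpha < 0, g_2 > g_3 > ... > 0,
# every partial sum over l = 0..n (n >= 1) is negative, total sum -> 0
```

Each weight costs $O(1)$, so the full stencil of length $n$ is assembled in $O(n)$ operations and reused at every time step.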
Accordingly, the difference scheme (2.11)–(2.13) can be expressed in the following matrix–vector form:

$$(I - A + D)\,u^k = u^{k-1} + \Delta t f^{k-1}, \qquad (2.14)$$

with $D$ representing the penalization matrix and

$$A = I_{n_2}\otimes A_x + A_y\otimes I_{n_1}, \qquad (2.15)$$

where $A_x = c_x G^{(\alpha_1)}_{n_1}$ and $A_y = c_y G^{(\alpha_2)}_{n_2}$, in which

$$G^{(\alpha)}_n = \begin{bmatrix}
2g^{(\alpha)}_1 & g^{(\alpha)}_0 + g^{(\alpha)}_2 & g^{(\alpha)}_3 & \cdots & g^{(\alpha)}_{n-1} & g^{(\alpha)}_n \\
g^{(\alpha)}_0 + g^{(\alpha)}_2 & 2g^{(\alpha)}_1 & g^{(\alpha)}_0 + g^{(\alpha)}_2 & g^{(\alpha)}_3 & \cdots & g^{(\alpha)}_{n-1} \\
g^{(\alpha)}_3 & g^{(\alpha)}_0 + g^{(\alpha)}_2 & 2g^{(\alpha)}_1 & \ddots & \ddots & \vdots \\
\vdots & \ddots & \ddots & \ddots & \ddots & g^{(\alpha)}_3 \\
g^{(\alpha)}_{n-1} & \cdots & \ddots & \ddots & 2g^{(\alpha)}_1 & g^{(\alpha)}_0 + g^{(\alpha)}_2 \\
g^{(\alpha)}_n & g^{(\alpha)}_{n-1} & \cdots & \cdots & g^{(\alpha)}_0 + g^{(\alpha)}_2 & 2g^{(\alpha)}_1
\end{bmatrix}. \qquad (2.16)$$

It is obvious that $G^{(\alpha)}_n$ is a symmetric Toeplitz matrix. In addition, the entries $g^{(\alpha)}_l$ have the following properties.

Lemma 2.1. (See [28]) For $\alpha\in(1,2)$, the coefficients $g^{(\alpha)}_l$, $l = 0,1,\dots$, satisfy

$$g^{(\alpha)}_0 = 1, \quad g^{(\alpha)}_1 = -\alpha < 0, \quad g^{(\alpha)}_2 > g^{(\alpha)}_3 > \cdots > 0, \quad \sum_{l=0}^{\infty} g^{(\alpha)}_l = 0, \quad \sum_{l=0}^{n} g^{(\alpha)}_l < 0 \ \text{ for } n \ge 1.$$

Combining the results shown in [17] with Lemma 2.1, we conclude that $G^{(\alpha)}_n$ is negative definite for $\alpha\in(1,2)$, and hence $A$ has the following property.

Corollary 2.1.
The matrix A defined in (2.15) is symmetric negative definite. For convenience in our later study, we simplify the linear system (2.14) as
$$Mu^k = b^{k-1}, \qquad (2.17)$$

where $M = I - A + D$ and $b^{k-1} = u^{k-1} + \Delta t f^{k-1}$. Based on the results shown in Corollary 2.1, the following conclusion can be drawn.

Lemma 2.2.
The coefficient matrix $M = I - A + D$ is symmetric positive definite and satisfies

$$\sum_{j=1,\,j\ne i}^{n_1 n_2} |[M]_{ij}| \le [M]_{ii} - 1, \quad \text{for } i = 1,\dots,n_1 n_2.$$

Proof.
From Corollary 2.1, we derive that the coefficient matrix $M$ is symmetric positive definite. It is easy to check that $[M]_{ii} = 1 - [A]_{ii} + \frac{\Delta t}{\eta}$ if $(x_i,y_i)\in\bar\Omega\setminus\Omega$, and $[M]_{ii} = 1 - [A]_{ii}$ otherwise. By the fact that $A$ is strictly diagonally dominant with negative diagonal elements, we obtain

$$\sum_{j=1,\,j\ne i}^{n_1 n_2} |[M]_{ij}| = \sum_{j=1,\,j\ne i}^{n_1 n_2} |[A]_{ij}| \le |[A]_{ii}| \le [M]_{ii} - 1. \qquad\square$$
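A small numerical sanity check, sketched in Python (ours, with arbitrary small sizes and coefficients), assembles $G^{(\alpha)}_n$ of (2.16) and $A$ of (2.15) and verifies Corollary 2.1 together with the diagonal-dominance bound of Lemma 2.2 in its weakest case $D = 0$:

```python
import numpy as np
from scipy.linalg import toeplitz

def gl_weights(alpha, n):
    # recurrence (2.6): g_0 = 1, g_l = (1 - (alpha + 1)/l) g_{l-1}
    g = np.empty(n + 1)
    g[0] = 1.0
    for l in range(1, n + 1):
        g[l] = (1.0 - (alpha + 1.0) / l) * g[l - 1]
    return g

def G_matrix(alpha, n):
    """Symmetric Toeplitz matrix (2.16): diagonal 2 g_1, first
    off-diagonal g_0 + g_2, and k-th off-diagonal g_{k+1} for k >= 2."""
    g = gl_weights(alpha, n)
    col = np.concatenate([[2.0 * g[1], g[0] + g[2]], g[3:n + 1]])
    return toeplitz(col)

n1, n2 = 8, 6                      # arbitrary small grid sizes
alpha1, alpha2, cx, cy = 1.3, 1.7, 0.4, 0.2
Ax, Ay = cx * G_matrix(alpha1, n1), cy * G_matrix(alpha2, n2)
A = np.kron(np.eye(n2), Ax) + np.kron(Ay, np.eye(n1))   # (2.15)

M = np.eye(n1 * n2) - A            # M = I - A + D with D = 0 here
offdiag = np.abs(M).sum(axis=1) - np.abs(np.diag(M))
```

Since $D$ only adds nonnegative entries to the diagonal, the bound checked for $D = 0$ a fortiori holds for any admissible penalization.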
3. Stability and convergence analysis
In this section, the stability and convergence of the difference scheme (2.11)–(2.13) are discussed. Firstly, we introduce an auxiliary lemma, which plays a critical role in the later investigation.
Lemma 3.1. (See [7])
Let $v = [v_1,v_2,\dots,v_n]^\intercal \in \mathbb{R}^n$ be an arbitrary vector. If the matrix $B = [b_{i,j}]_{n\times n}$ satisfies the condition

$$\sum_{l=1,\,l\ne i}^{n} |b_{i,l}| \le b_{i,i} - 1, \quad i = 1,\dots,n,$$

then we have $\|v\|_\infty \le \|Bv\|_\infty$.
In order to investigate the stability of the difference scheme, we begin with some notation. Suppose $\tilde u^k_{i,j}$ ($1\le i\le n_1$, $1\le j\le n_2$) is the solution of the difference scheme (2.11) computed from perturbed data, and denote by $\epsilon^k_{i,j} = \tilde u^k_{i,j} - u^k_{i,j}$ ($1\le i\le n_1$, $1\le j\le n_2$) the corresponding error. Then, the error vector can be written as

$$\epsilon^k = [\epsilon^k_{1,1},\dots,\epsilon^k_{n_1,1},\epsilon^k_{1,2},\dots,\epsilon^k_{n_1,2},\dots,\epsilon^k_{1,n_2},\dots,\epsilon^k_{n_1,n_2}]^\intercal.$$

Let $\gamma^k_{i,j} = \hat f(\tilde u^k_{i,j},x_i,y_j,t_k) - \hat f(u^k_{i,j},x_i,y_j,t_k)$ and denote

$$\gamma^k = [\gamma^k_{1,1},\dots,\gamma^k_{n_1,1},\gamma^k_{1,2},\dots,\gamma^k_{n_1,2},\dots,\gamma^k_{1,n_2},\dots,\gamma^k_{n_1,n_2}]^\intercal.$$

Lemma 3.2.
The difference scheme (2.11)–(2.13) is unconditionally stable.

Proof. According to (2.11) and (2.14), we obtain the error equation
$$M\epsilon^k = \epsilon^{k-1} + \Delta t\gamma^{k-1}.$$

Since $\hat f(u_\eta,x,y,t)$ satisfies the Lipschitz condition, it yields

$$|\hat f(\tilde u^{k-1}_{ij},x_i,y_j,t_{k-1}) - \hat f(u^{k-1}_{ij},x_i,y_j,t_{k-1})| \le c_2|\tilde u^{k-1}_{ij} - u^{k-1}_{ij}| = c_2|\epsilon^{k-1}_{ij}|,$$

i.e., $\|\gamma^{k-1}\|_\infty \le c_2\|\epsilon^{k-1}\|_\infty$. Combining Lemma 2.2 with Lemma 3.1, we have the following inequality:

$$\|\epsilon^k\|_\infty \le \|M\epsilon^k\|_\infty = \|\epsilon^{k-1} + \Delta t\gamma^{k-1}\|_\infty \le (1 + \Delta t c_2)\|\epsilon^{k-1}\|_\infty.$$

By repeating the above inequality $k$ times and making use of the Gronwall inequality [15], we derive

$$\|\epsilon^k\|_\infty \le (1 + \Delta t c_2)^k\|\epsilon^0\|_\infty \le e^{c_2 T}\|\epsilon^0\|_\infty,$$

which indicates that the difference scheme is unconditionally stable. $\square$

Now, we pay attention to the convergence of the difference scheme. Some further notation is needed. Let $r^k_{i,j}$ be the truncation error between the difference scheme (2.11) and equation (2.1) shown in (2.10), and denote

$$r^k = [r^k_{1,1},\dots,r^k_{n_1,1},r^k_{1,2},\dots,r^k_{n_1,2},\dots,r^k_{1,n_2},\dots,r^k_{n_1,n_2}]^\intercal.$$

Let $\delta^k_{i,j} = u_\eta(x_i,y_j,t_k) - u^k_{i,j}$, $1\le i\le n_1$, $1\le j\le n_2$, be the error between the exact solution of the problem (2.1)–(2.3) and the numerical solution of the difference scheme (2.11)–(2.13). The error vector can be written as

$$\delta^k = [\delta^k_{1,1},\dots,\delta^k_{n_1,1},\delta^k_{1,2},\dots,\delta^k_{n_1,2},\dots,\delta^k_{1,n_2},\dots,\delta^k_{n_1,n_2}]^\intercal.$$

Let $\xi^k_{i,j}$ ($1\le i\le n_1$, $1\le j\le n_2$) be the error between $\hat f(u_\eta(x_i,y_j,t_k),x_i,y_j,t_k)$ and $\hat f(u^k_{i,j},x_i,y_j,t_k)$, and denote

$$\xi^k = [\xi^k_{1,1},\dots,\xi^k_{n_1,1},\xi^k_{1,2},\dots,\xi^k_{n_1,2},\dots,\xi^k_{1,n_2},\dots,\xi^k_{n_1,n_2}]^\intercal.$$

Then, we have the following convergence result.

Lemma 3.3.
The difference scheme defined in (2.11)–(2.13) is convergent and satisfies

$$\|\delta^k\|_\infty \le c_5(\Delta t + h_x + h_y),$$

where $c_5$ is a positive constant independent of the temporal step $\Delta t$ and the spatial steps $h_x$ and $h_y$.

Proof. Since $\hat f(u_\eta,x,y,t)$ satisfies the Lipschitz condition with respect to $u_\eta$, we have

$$|\xi^k_{i,j}| \le c_2|\delta^k_{i,j}|, \quad \text{for all } i, j.$$

Then, it follows that $\|\xi^k\|_\infty \le c_2\|\delta^k\|_\infty$. Since the difference scheme (2.11) is consistent, according to (2.9) and (2.14), we obtain the following error equation:
$$M\delta^k = \delta^{k-1} + \Delta t\xi^{k-1} + \Delta t r^k,$$

where $\delta^0 = 0$ and $\|r^k\|_\infty \le c_4(\Delta t + h_x + h_y)$. By virtue of Lemma 2.2 and Lemma 3.1 again, it leads to

$$\|\delta^k\|_\infty \le \|M\delta^k\|_\infty \le \|\delta^{k-1}\|_\infty + \Delta t\|\xi^{k-1}\|_\infty + \Delta t\|r^k\|_\infty \le (1 + \Delta t c_2)\|\delta^{k-1}\|_\infty + \Delta t c_4(\Delta t + h_x + h_y).$$

By repeating the above process $k$ times, it follows that

$$\|\delta^k\|_\infty \le c_4 T(1 + \Delta t c_2)^k(\Delta t + h_x + h_y) \le c_4 T e^{c_2 T}(\Delta t + h_x + h_y) = c_5(\Delta t + h_x + h_y),$$

where $c_5 = c_4 T e^{c_2 T}$. Therefore, we confirm that the difference method is convergent. $\square$
4. Implementation
In this section, we solve the linear system (2.17) numerically. As the coefficient matrix of the system has a Toeplitz-like structure, a Krylov subspace method is employed to solve the linear system. In order to speed up the convergence rate of the iterative method, an efficient preconditioner is indispensable. In the following, we concentrate on constructing a preconditioner and discussing the spectrum of the preconditioned matrix.

4.1. τ preconditioner

Firstly, we recall the sine transform based preconditioner, which is also called the τ preconditioner. Let $T_n = [t_{|i-j|}]_{n\times n}$ be an $n\times n$ symmetric Toeplitz matrix. Then, the corresponding τ preconditioner of $T_n$ can be determined by the Hankel correction [3]. More precisely, the τ matrix can be described as

$$\tau(T_n) = T_n - H_n, \qquad (4.1)$$

where $H_n$ is a Hankel matrix whose entries are constant along the antidiagonals, the antidiagonals being given by

$$[t_2, t_3, \dots, t_{n-1}, t_n... \,]$$

wait, more precisely by

$$[t_2, t_3, \dots, t_{n-1}, 0, 0, 0, t_{n-1}, \dots, t_3, t_2]^\intercal.$$

Remark that the τ matrix can be diagonalized by the sine transform matrix:

$$\tau(T_n) = S_n\Lambda_n S_n,$$

where the diagonal matrix $\Lambda_n$ consists of the eigenvalues of $\tau(T_n)$, and $S_n$ is a symmetric orthogonal matrix whose entries are given by

$$[S_n]_{i,j} = \sqrt{\frac{2}{n+1}}\,\sin\!\left(\frac{\pi i j}{n+1}\right), \quad 1\le i,j\le n.$$

Then, the matrix–vector multiplication $S_n v$ for any vector $v$ can be done by the discrete sine transform in only $O(n\log n)$ operations, and the eigenvalues of the τ matrix are determined by its first column, so only $O(n)$ storage is needed. In the following, we construct the preconditioner for system (2.17).

Recalling the coefficient matrix $M$, the corresponding preconditioner based on the sine transform is described as

$$P = I - \tau(A) + D, \qquad (4.2)$$

where

$$\tau(A) = I_{n_2}\otimes\tau(A_x) + \tau(A_y)\otimes I_{n_1}. \qquad (4.3)$$

However, the diagonal matrix $D$ cannot be diagonalized by the sine transform matrix, so $P$ is hard to invert in $O(N\log N)$ operations.
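As a concrete illustration of the Hankel correction (4.1) and of the diagonalization of τ matrices by $S_n$, the following Python sketch (ours, not the authors' code; the Toeplitz symbol is an arbitrary decaying sequence) builds $\tau(T_n)$ and verifies that $S_n$ diagonalizes it:

```python
import numpy as np
from scipy.linalg import toeplitz, hankel

def tau_matrix(t):
    """tau(T_n) = T_n - H_n, where T_n is the symmetric Toeplitz matrix
    with first column t = (t_0, ..., t_{n-1}) and H_n is the Hankel matrix
    with antidiagonals [t_2, ..., t_{n-1}, 0, 0, 0, t_{n-1}, ..., t_2]."""
    n = len(t)
    anti = np.concatenate([t[2:], np.zeros(3), t[2:][::-1]])
    return toeplitz(t) - hankel(anti[:n], anti[n - 1:])

n = 16
i = np.arange(1, n + 1)
# sine transform matrix: [S_n]_{ij} = sqrt(2/(n+1)) sin(pi*i*j/(n+1))
S = np.sqrt(2.0 / (n + 1)) * np.sin(np.pi * np.outer(i, i) / (n + 1))

t = (1.0 + np.arange(n)) ** -1.5       # arbitrary symmetric Toeplitz symbol
T = tau_matrix(t)
Lam = S @ T @ S                        # numerically diagonal
```

Since $S_n$ is symmetric orthogonal, the $O(n^2)$ products above are, in practice, replaced by $O(n\log n)$ discrete sine transforms.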
We then follow the idea in [10] to construct a workable and efficient preconditioner. First of all, rewrite the coefficient matrix $M$ in the following splitting form:

$$M = (I - \Phi_d)(I - A) + \Phi_d\left(\left(1 + \frac{\Delta t}{\eta}\right)I - A\right), \qquad (4.4)$$

where $\Phi_d = \mathrm{diag}(\phi_{1,1},\dots,\phi_{n_1,1},\phi_{1,2},\dots,\phi_{n_1,2},\dots,\phi_{1,n_2},\dots,\phi_{n_1,n_2})$ is a diagonal matrix with entries

$$\phi_{ij} = \begin{cases} 0, & (x_i,y_j)\in\Omega, \\ 1, & (x_i,y_j)\in\bar\Omega\setminus\Omega. \end{cases}$$

Accordingly, the preconditioner can be expressed as

$$P = (I - \Phi_d)(I - \tau(A)) + \Phi_d\left(\left(1 + \frac{\Delta t}{\eta}\right)I - \tau(A)\right). \qquad (4.5)$$

Likewise, it is too expensive to compute $P^{-1}v$ for an arbitrary vector $v$. Therefore, we construct a preconditioner $\hat P$ approximating $P$ such that

$$\hat P^{-1} = (I - \Phi_d)\big(I - \tau(A)\big)^{-1} + \Phi_d\left(\left(1 + \frac{\Delta t}{\eta}\right)I - \tau(A)\right)^{-1}. \qquad (4.6)$$

In this circumstance, the product of $\hat P^{-1}$ and a vector can be computed in $O(N\log N)$ operations by the discrete sine transform, and hence the computational cost of each time iteration stays at $O(N\log N)$ with the preconditioned Krylov subspace method. Since the diagonal matrices destroy the symmetric structure, the generalized minimum residual (GMRES) method with the preconditioner $\hat P$ is exploited to solve the linear system (2.17).

In the following, we discuss the spectrum of the preconditioned matrix. For the study of the spectra, it suffices to first establish the invertibility of the preconditioner. A useful result from [17] will be utilized to demonstrate the invertibility of the matrices $P$ and $\hat P$.

Lemma 4.1. (See [17])
Let $G^{(\alpha_1)}_{n_1}$ and $G^{(\alpha_2)}_{n_2}$ be the Toeplitz matrices defined in (2.16). Then the matrices $\tau(A_x) = c_x\tau(G^{(\alpha_1)}_{n_1})$ and $\tau(A_y) = c_y\tau(G^{(\alpha_2)}_{n_2})$ are both negative definite.

In light of the results shown in Lemma 4.1, all the eigenvalues of $\tau(A_x)$ and $\tau(A_y)$ are less than 0, and hence we have the following lemma.

Lemma 4.2.
The matrix $P$ defined in (4.2) is positive definite, and so $P$ is invertible.

Proof. Let $\Lambda_x$ and $\Lambda_y$ be the diagonal matrices containing the eigenvalues of $\tau(A_x)$ and $\tau(A_y)$, respectively. Then, we have

$$I - \tau(A) = (S_{n_2}\otimes S_{n_1})\,(I - I_{n_2}\otimes\Lambda_x - \Lambda_y\otimes I_{n_1})\,(S_{n_2}\otimes S_{n_1}).$$

By the conclusion stemming from Lemma 4.1, all the eigenvalues of $I - \tau(A)$ are positive, and hence $I - \tau(A)$ is positive definite. Therefore, the matrix $P \succeq I - \tau(A)$ is positive definite, and so $P$ is invertible. The proof is complete. $\square$

Evoking the results from Lemma 4.2, the invertibility of the preconditioner $\hat P$ is established in the following lemma.

Lemma 4.3.
The preconditioner $\hat P$ defined in (4.6) is invertible.

Proof. Since the factors $I - \tau(A)$ and $\big(1 + \Delta t/\eta\big)I - \tau(A)$ commute, a direct computation gives

$$\begin{aligned}
\hat P^{-1}\,(I-\tau(A))\Big(\big(1+\tfrac{\Delta t}{\eta}\big)I-\tau(A)\Big)
&= (I-\Phi_d)\Big(\big(1+\tfrac{\Delta t}{\eta}\big)I-\tau(A)\Big) + \Phi_d\big(I-\tau(A)\big) \\
&= I - \tau(A) + \frac{\Delta t}{\eta}(I-\Phi_d).
\end{aligned}$$

We certify that $\hat P^{-1}$ is invertible owing to the fact that both $I - \tau(A)$ and $\big(1+\Delta t/\eta\big)I - \tau(A)$ are positive definite while the right-hand side is positive definite as well, from which we confirm that $\hat P$ is invertible. $\square$

Next, we focus on the spectrum of the matrix $P^{-1}M$. Some known conclusions are recalled for our later investigation.

Lemma 4.4. (See [17])
Let $A$ be the block Toeplitz with Toeplitz blocks matrix defined in (2.15), and let $\tau(A)$ be the corresponding block τ with τ blocks matrix defined in (4.3). Then, the spectrum of $\tau(A)^{-1}A$ is uniformly bounded in the open interval $(1/2, 3/2)$.

Based on the results shown in Lemma 4.4, we have the following result.
Lemma 4.5.
The spectrum of the matrix $P^{-1}M$ is uniformly bounded below by $1/2$ and bounded above by $3/2$.

Proof. Let $x\in\mathbb{R}^N$ be an arbitrary nonzero vector. By the Rayleigh quotient theorem and the conclusion presented in Lemma 4.4, it holds that

$$\frac{1}{2} \le \frac{x^\ast Ax}{x^\ast\tau(A)x} \le \frac{3}{2}.$$

Since adding the same positive quantity $x^\ast(I+D)x$ to the positive numerator $-x^\ast Ax$ and denominator $-x^\ast\tau(A)x$ pulls their ratio toward 1, it immediately follows that

$$\min\left\{1, \frac{x^\ast Ax}{x^\ast\tau(A)x}\right\} \le \frac{x^\ast(I+D)x - x^\ast Ax}{x^\ast(I+D)x - x^\ast\tau(A)x} = \frac{x^\ast(I-A+D)x}{x^\ast(I-\tau(A)+D)x} \le \max\left\{1, \frac{x^\ast Ax}{x^\ast\tau(A)x}\right\}.$$

Therefore, we derive

$$\lambda_{\min}(P^{-1}M) = \min_{x\ne 0}\frac{x^\ast(I-A+D)x}{x^\ast(I-\tau(A)+D)x} > \frac{1}{2}, \qquad \lambda_{\max}(P^{-1}M) = \max_{x\ne 0}\frac{x^\ast(I-A+D)x}{x^\ast(I-\tau(A)+D)x} < \frac{3}{2}.$$

The proof is complete. $\square$

This lemma implies that when the matrix $P$ is applied as the preconditioner, the spectrum of the preconditioned matrix is uniformly bounded, and hence the GMRES method converges linearly. From the theoretical point of view, the matrix $P$ is worth treating as a preconditioner. However, from the perspective of practical computation, computing the inverse of $P$ requires more computational cost and storage, as mentioned before, which does not comply with our original intention. In practice, the matrix $\hat P$ is therefore utilized to accelerate the convergence rate. Remark 4.1.
We wish to establish a result similar to the one for the spectrum of $P^{-1}M$, providing upper and lower bounds. Unfortunately, up to now there is no theoretical analysis for the spectrum of the preconditioned matrix $\hat P^{-1}M$, and a conclusion such as Lemma 4.5 seems out of reach. Nevertheless, it is remarkable that this kind of preconditioner has been applied to several model equations, and the numerical results indicate that the preconditioner $\hat P$ is efficient and feasible. Therefore, we continue to make use of the preconditioner even without theoretical support.

5. Numerical results

In this section, numerical experiments are carried out to demonstrate the effectiveness of the proposed method. The GMRES method and the preconditioned GMRES method are applied to solve the linear system (2.14). The restart number is set to 20, and the stopping criterion of these methods requires the relative residual $\|r^{(k)}\|_2/\|r^{(0)}\|_2$ to fall below the prescribed tolerance, where $r^{(k)}$ denotes the residual vector after $k$ iterations. The initial guess for the $(m+1)$-st time step is chosen as

$$v_0 = \begin{cases} u^0, & m = 0, \\ 2u^m - u^{m-1}, & m > 0. \end{cases}$$

To exhibit the performance of the proposed method, the ADI method and the preconditioned ADI method [8] are implemented as comparisons. In the following tables, 'PGMRES' and 'PADI' represent the preconditioned GMRES method and the preconditioned ADI method, respectively. 'Iter' means the average number of iterations of the iterative methods, and 'CPU(s)' displays the total CPU time in seconds for solving the whole discretized system. In addition, 'Error' denotes the infinity norm of the relative error between the exact solution and the numerical solution on the original region $\Omega$:
$$\mathrm{Error} = \frac{\|u_e - u_\eta\|_\infty}{\|u_e\|_\infty},$$

where $u_e$ is the exact solution and $u_\eta$ is the numerical solution. All numerical results are obtained with MATLAB R2017a on a Dell PC with the following configuration: Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz and 8 GB RAM.

Example 5.1. (See [8])
Consider the RSFDEs defined on the following elliptical domain:

$$\Omega = \left\{(x,y)\ \middle|\ \frac{(x-a)^2}{a^2} + \frac{(y-b)^2}{b^2} \le 1\right\},$$

with the initial condition

$$u(x,y,0) = \left(\frac{(x-a)^2}{a^2} + \frac{(y-b)^2}{b^2} - 1\right)^2, \quad (x,y)\in\Omega,$$

and the zero Dirichlet boundary condition $u(x,y,t) = 0$, $(x,y)\in\partial\Omega$. The exact solution is given by

$$u(x,y,t) = e^{-t}\left(\frac{(x-a)^2}{a^2} + \frac{(y-b)^2}{b^2} - 1\right)^2,$$

and the source term is depicted as

$$f(u,x,y,t) = \frac{k_x c_{\alpha_1} e^{-t}}{a^4}\left[h(\alpha_1, x-a+c_y, c_y) + h(\alpha_1, a+c_y-x, c_y)\right] + \frac{k_y c_{\alpha_2} e^{-t}}{b^4}\left[h(\alpha_2, y-b+c_x, c_x) + h(\alpha_2, b+c_x-y, c_x)\right] - u(x,y,t),$$

where

$$h(\alpha,s,d) = \frac{s^{4-\alpha}}{\Gamma(5-\alpha)} - \frac{d\,s^{3-\alpha}}{\Gamma(4-\alpha)} + \frac{d^2 s^{2-\alpha}}{\Gamma(3-\alpha)}, \qquad c_y = a\sqrt{1 - (y-b)^2/b^2}, \qquad c_x = b\sqrt{1 - (x-a)^2/a^2}.$$

Table 5.1. Comparisons for solving Example 5.1 by the GMRES method, the PGMRES method, the ADI method and the PADI method for different coefficients.
k_x | n | GMRES: Error, Iter, CPU(s) | PGMRES: Iter, CPU(s) | ADI: Error, Iter, CPU(s) | PADI: Iter, CPU(s)

Table 5.2. Comparisons for solving Example 5.1 by the PGMRES method with the τ-based preconditioner and the PADI method with the circulant preconditioner for T = 10.

n | PGMRES: Error, Iter, CPU(s) | PADI: Error, Iter, CPU(s)

In this example, we extend Ω to the rectangular domain $\bar\Omega = (0, 2a)\times(0, 2b)$. Assume that the values of $\hat f$ and $\hat u_0$ on the extended region $\bar\Omega\setminus\Omega$ are both 0. In the following tables, we take fixed fractional orders $\alpha_1, \alpha_2\in(1,2)$, $a = 2$ and $b = 1$, and let $k_x = k_y$ and $n_1 = n_2$. Table 5.1 shows the numerical results of Example 5.1 with $T = 1$, $m = n_1$, and a small penalization parameter $\eta$. From this table, we see that all the mentioned methods perform well when the diffusion coefficients $k_x$ and $k_y$ are small. In particular, the unpreconditioned solvers remain powerful in this regime and are even superior to the preconditioned ones in terms of consumed CPU time. The reason for this phenomenon is that the model equations become dominated by the time direction when the diffusion coefficients are small, so that the coefficient matrix is almost equal to the identity matrix. In this situation, a preconditioner is superfluous and merely adds computational cost and memory. Nevertheless, the GMRES method still compares well with both the ADI method and the preconditioned ADI method from the perspective of the required iterations and CPU times. Moreover, the merit of the preconditioner shows up once the coefficients become large: when $k_x = k_y = 1$, only a few iterations and little CPU time are required for the preconditioned GMRES method to attain convergence.

On the other hand, the advantage of the proposed method is more evident when the final time $T$ becomes large, as shown in Table 5.2, where we take $T = 10$, $m = 10 n_1$, and $k_x = 1$. It is remarkable that our method is more accurate than the ADI method: with the same matrix size, the error arising from the ADI method is nearly twice the error generated by our method, and for the same error, the CPU time required by the ADI method to converge is much greater than that of the proposed method. In conclusion, the proposed method is a good choice for handling convex-domain problems and is suitable for general cases.

Table 5.3. Values of $\|u_\eta\|_\infty$ in the extended region $\bar\Omega\setminus\Omega$ for a small diffusion coefficient $k_x$ and different penalization parameters $\eta$.

n | $\|u_\eta\|_\infty$ for each of the three decreasing values of $\eta$

In addition, Table 5.3 lists the numerical solutions of the penalized equation (2.1) on the extended region $\bar\Omega\setminus\Omega$ for different parameters $\eta$. From Table 5.3, we observe that the solutions tend to 0 as the parameter approaches 0 and that their magnitudes depend on the value of the parameter, which is consistent with the fact that the solution of the penalized equation converges to the solution of the original equation. Therefore, the effectiveness of the proposed method is demonstrated.
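To tie the pieces together, the following self-contained Python sketch mirrors the solver of Sections 2 and 4 on a small grid (our illustration, not the authors' MATLAB code: dense matrices replace the FFT/DST-based products, and the domain, orders, and parameters are arbitrary). It assembles $M = I - A + D$, applies $\hat P^{-1}$ of (4.6) as a preconditioner inside SciPy's GMRES, and solves one time step:

```python
import numpy as np
from scipy.linalg import toeplitz, hankel
from scipy.sparse.linalg import LinearOperator, gmres

def gl_weights(alpha, n):
    g = np.empty(n + 1); g[0] = 1.0
    for l in range(1, n + 1):
        g[l] = (1.0 - (alpha + 1.0) / l) * g[l - 1]
    return g

def G_matrix(alpha, n):                     # symmetric Toeplitz (2.16)
    g = gl_weights(alpha, n)
    return toeplitz(np.concatenate([[2 * g[1], g[0] + g[2]], g[3:n + 1]]))

def tau(T):                                 # Hankel correction (4.1)
    t, n = T[:, 0], T.shape[0]
    anti = np.concatenate([t[2:], np.zeros(3), t[2:][::-1]])
    return T - hankel(anti[:n], anti[n - 1:])

def sine_transform(n):
    i = np.arange(1, n + 1)
    return np.sqrt(2.0 / (n + 1)) * np.sin(np.pi * np.outer(i, i) / (n + 1))

n1 = n2 = 12
a1, a2, cx, cy, dt, eta = 1.4, 1.6, 0.3, 0.3, 1e-2, 1e-4
Ax, Ay = cx * G_matrix(a1, n1), cy * G_matrix(a2, n2)
A = np.kron(np.eye(n2), Ax) + np.kron(Ay, np.eye(n1))      # (2.15)

# indicator of an arbitrary ellipse inside the unit square (x-fastest ordering)
x = np.arange(1, n1 + 1) / (n1 + 1)
y = np.arange(1, n2 + 1) / (n2 + 1)
phi = np.array([((x[i] - .5) / .4) ** 2 + ((y[j] - .5) / .3) ** 2 > 1
                for j in range(n2) for i in range(n1)])     # diagonal of Phi_d
M = np.eye(n1 * n2) - A + np.diag(np.where(phi, dt / eta, 0.0))

Sx, Sy = sine_transform(n1), sine_transform(n2)
lx = np.diag(Sx @ tau(Ax) @ Sx)             # eigenvalues of tau(A_x)
ly = np.diag(Sy @ tau(Ay) @ Sy)             # eigenvalues of tau(A_y)
L = 1.0 - np.add.outer(ly, lx).ravel()      # eigenvalues of I - tau(A)
S2 = np.kron(Sy, Sx)                        # diagonalizes tau(A)

def phat_inv(v):                            # apply (4.6) via sine transforms
    w1 = S2 @ ((S2 @ v) / L)                # (I - tau(A))^{-1} v
    w2 = S2 @ ((S2 @ v) / (L + dt / eta))   # ((1 + dt/eta) I - tau(A))^{-1} v
    return np.where(phi, w2, w1)            # (I - Phi_d) w1 + Phi_d w2

b = np.ones(n1 * n2)                        # right-hand side of one time step
u, info = gmres(M, b, M=LinearOperator(M.shape, matvec=phat_inv),
                restart=20, maxiter=500)
```

In an actual implementation the dense products with `S2` would be replaced by fast DSTs applied dimension by dimension, which is what brings the per-iteration cost down to $O(N\log N)$.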
6. Concluding remarks