Explicit high-order generalized-α methods for isogeometric analysis of structural dynamics
Pouria Behnoudfar, Gabriele Loli, Alessandro Reali, Giancarlo Sangalli, Victor M. Calo
a Curtin Institute for Computation & School of Earth and Planetary Sciences, Curtin University, Kent Street, Bentley, Perth, WA 6102, Australia
b Dipartimento di Matematica “F. Casorati”, Università di Pavia, 27100 Pavia, Italy
c Department of Civil Engineering and Architecture, Università di Pavia, 27100 Pavia, Italy
d School of Electrical Engineering, Computing and Mathematical Sciences, Curtin University, P.O. Box U1987, Perth, WA 6845, Australia
Abstract
We propose a new family of high-order explicit generalized-α methods for hyperbolic problems with the feature of dissipation control. Our approach delivers accuracy of order 2k (k ∈ N) in time by solving k matrix systems explicitly and updating the other 2k variables at each time step. The user can control the numerical dissipation in the high-frequency regions of the discrete spectrum by adjusting the method's coefficients. We study the method's spectral behaviour and show that the CFL condition is independent of the accuracy order; the stability region remains invariant as we increase the accuracy order. Next, we exploit efficient preconditioners for the isogeometric mass matrix to minimize the computational cost. These preconditioners use a diagonally scaled Kronecker product of univariate parametric mass matrices; they perform robustly with respect to the spline degree and the mesh size, and their decomposition structure implies that their application is faster than a matrix-vector product involving the fully assembled mass matrix. Our high-order schemes require simple modifications of the available implementations of the generalized-α method. Finally, we present numerical examples demonstrating the methodology's performance on single- and multi-patch IGA discretizations.

Keywords: generalized-α method, high-order time integrator, explicit method, dissipation control, CFL condition, hyperbolic, preconditioner, isogeometric analysis
1. Introduction
Chung and Hulbert [1] introduced the generalized-α method for hyperbolic systems arising in structural dynamics; later, Hulbert and Chung [2] presented an explicit version of this time-marching method. The method is second-order accurate in time and provides user control of the high-frequency numerical dissipation. Likewise, other well-known methods, e.g., the Newmark-β [3] and the HHT-α [4] methods, have both implicit and explicit formulations and are limited to second-order accuracy. Nevertheless, the Newmark method does not control the dissipation while, using the HHT-α method, one obtains dissipative solutions in the low-frequency regions. The generalized-α method produces an algorithm that combines high-frequency and low-frequency dissipation optimally; see [1, 2]. These methods, including the generalized-α method, are limited to second-order accuracy in time. In contrast, high-order methods like Lax-Wendroff, Runge-Kutta, Adams-Moulton, and backward differentiation schemes (see [5]), or high-order IGA collocation methods for dynamics (see, e.g., [6]), lack explicit control over the numerical dissipation of high frequencies. Thus, we propose an explicit k-step generalized-α method that delivers accuracy of order 2k in time for problems that are second order in time. Our method obtains high-order accuracy in time with optimal control of the resulting system's spectral behaviour. Thus, we build an algorithm that consists of 3k equations; for each three-equation set, we solve an explicit system for one variable and update the other two. We then study the resulting amplification matrix's spectral properties to determine the CFL condition and introduce user-defined parameters that control the numerical dissipation. Later, we prove that the CFL condition of our method is independent of the temporal accuracy.

The explicit generalized-α method, at each time step, solves a system with the isogeometric mass matrix; thus, we use preconditioned conjugate gradients (PCG) as an iterative solver with the mass preconditioner of [7], which is easy to implement, extremely efficient, and robust.

∗ Corresponding author.
Email address: [email protected] (Pouria Behnoudfar)

Preprint submitted to arXiv, February 24, 2021
We build the preconditioner using a diagonal scaling of the parametric mass matrix (i.e., the matrix associated with the d-dimensional cube pre-image of the parametric space) for single-patch geometries. For multi-patch geometries, we combine the preconditioners defined above for each patch using an additive Schwarz domain-decomposition method. In [7], the authors proved that these preconditioners are efficient to apply and are robust with respect to the mesh size; in the single-patch case, they show that the behaviour improves as the problem size grows (mesh refinement).

We present numerical simulations to demonstrate the performance of the time-marching scheme and its preconditioner. We provide numerical evidence of the time integrator's high-order accuracy and its dispersion properties. Additionally, we show optimal convergence in the spatial domain for several problems. The outline for the remainder of the paper is as follows. Section 2 describes the hyperbolic problem we consider and introduces the spatial discretizations to obtain the matrix formulation of the problem. Section 3 presents our fourth-order explicit generalized-α method; therein, we also analyze the method's stability, its CFL condition, and its temporal accuracy. Section 4 generalizes the method to 2k-th order of accuracy. Then, we introduce our solver in Section 5 with details on single- and multi-patch isogeometric analysis. We verify the solver's convergence and its computational performance numerically in Section 6. Section 7 describes our contributions and further applications.

2. Problem Statement

We start with an initial boundary-value hyperbolic problem, a model problem for structural dynamics:

    ü(x, t) − ∇·(ω∇u(x, t)) + c(u̇(x, t)) = f(x, t),   (x, t) ∈ Ω × [0, T],
    u(x, t) = u_D,   x ∈ ∂Ω,
    u(x, 0) = u_0,   x ∈ Ω,
    u̇(x, 0) = v_0,   x ∈ Ω.                                              (1)

Let Ω ⊂ R^d, d = 1, 2, 3, be an open bounded domain. The operator ∇ is the spatial gradient, and a superscript dot denotes a time derivative, such that u̇(x, t) = ∂u(x, t)/∂t and ü(x, t) = ∂²u(x, t)/∂t². The term c(u̇(x, t)) models linear damping. The source f, propagation speed ω, initial data u_0, v_0, and Dirichlet boundary conditions u_D are given and assumed regular enough for the problem to admit a weak solution. In order to derive our numerical method for (1), we first obtain a semi-discretized problem by discretizing in space; then, we deploy our explicit generalized-α method to obtain a fully discretized system.

Adopting a Galerkin method (in particular, isogeometric analysis), the matrix problem resulting from the semi-discretization of (1) reads:

    M Ü + C U̇ + K U = F,                                                  (2)

where M, C, and K are the mass, damping, and stiffness matrices, respectively. U denotes the vector of the unknowns, and F is the source vector. The initial conditions also read:

    U(0) = U_0,   V(0) = V_0,                                             (3)

where U_0 and V_0 represent the given vectors of initial conditions corresponding to u_{0,h} and u̇_{0,h}, respectively. In the next section, we propose our numerical technique to deal with the time derivatives Ü and U̇ in (2) with accuracy of order 2k in the temporal domain, with k ∈ N.

Remark 1.
Herein, we propose an explicit generalized-α scheme for a general spatial discretization leading to the matrix problem (2). Therefore, the use of isogeometric analysis does not limit the method's applicability; effectively, our method applies to any spatial discretization that yields a time-dependent semi-discrete problem.

Remark 2.
In problem (1) and, accordingly, (2), for simplicity, we only consider constant ω and assume that the solution u(·, t) satisfies homogeneous boundary conditions. One requires slight modifications of the discrete bilinear and linear forms for the cases of heterogeneous propagation speed and non-homogeneous boundary conditions [8, 9].

2.2. Time discretization

To obtain a fully discrete version of (2), we adopt an appropriate time-marching scheme to deal with ü_h and u̇_h; in the next section, we propose a new high-order explicit generalized-α method.

3. Explicit generalized-α method

Consider a partitioning of the time interval [0, T] as 0 = t_0 < t_1 < ··· < t_N = T with a grid size τ_n = t_{n+1} − t_n. We approximate U(t_n), U̇(t_n), Ü(t_n) by U_n, V_n, A_n, respectively. The explicit generalized-α method with second-order accuracy in time solves (2) at time-step t_{n+1} as

    M A_{n+α_m} + C V_n + K U_n = F_{n+α_f},                              (4a)
    V_{n+1} = V_n + τ A_n + τ γ ⟦A_n⟧,                                    (4b)
    U_{n+1} = U_n + τ V_n + (τ²/2) A_n + τ² β ⟦A_n⟧,                      (4c)

where

    F_{n+α_f} = F(t_{n+α_f}),   A_{n+α_m} = A_n + α_m ⟦A_n⟧,   ⟦A_n⟧ = A_{n+1} − A_n.   (5)

To retrieve the unknown A at the initial state, we solve

    A_0 = M⁻¹ (F_0 − K U_0 − C V_0).                                      (6)

At each time step, we first compute A_{n+1} using (4a), and then evaluate V_{n+1} and U_{n+1} from (4b) and (4c), respectively. The method (4) has a truncation error in time of O(τ²) (for further details, see [10, 11]). We extend the accuracy, assuming sufficient temporal regularity of the solution; thus, we use a Taylor expansion with higher-order terms.
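As a concrete illustration, one step of the second-order scheme (4) can be sketched in a few lines. This is our own minimal NumPy sketch, not the paper's implementation (the paper's experiments use Matlab and GeoPDEs); the argument F_mid stands for the load vector F(t_{n+α_f}), evaluated by the caller.

```python
import numpy as np

def explicit_gen_alpha_step(M, C, K, F_mid, U, V, A, tau, alpha_m, beta, gamma):
    """One step of the explicit generalized-alpha method (4).

    Solves (4a), M A_{n+alpha_m} + C V_n + K U_n = F_mid, for the acceleration
    jump [[A_n]] = A_{n+1} - A_n, then applies the updates (4b)-(4c).
    """
    rhs = F_mid - C @ V - K @ U                       # right-hand side of (4a)
    jump = (np.linalg.solve(M, rhs) - A) / alpha_m    # from A_{n+alpha_m} = A_n + alpha_m [[A_n]]
    A_new = A + jump
    V_new = V + tau * A + tau * gamma * jump                        # update (4b)
    U_new = U + tau * V + 0.5 * tau**2 * A + tau**2 * beta * jump   # update (4c)
    return U_new, V_new, A_new
```

Only the mass matrix M enters a linear solve; C @ V and K @ U are plain matrix-vector products, which is what makes the scheme explicit and motivates the mass preconditioner of Section 5.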
We introduce new variables L^a(A_n) to approximate the a-th order time derivative of A_n. Therefore, for example, to derive a fourth-order explicit generalized-α method, we solve

    M A_{n+α_1} + C V_n + K U_n = F_{n+α_{f1}},
    M L³(A_n)_{α_2} + C L²(A_n) + K L¹(A_n) = F⁽³⁾_{n+α_{f2}},            (7)

with the updates

    U_{n+1} = U_n + τ V_n + (τ²/2) A_n + (τ³/6) L¹(A_n) + (τ⁴/24) L²(A_n) + (τ⁵/120) L³(A_n) + β_1 τ² P_n,
    V_{n+1} = V_n + τ A_n + (τ²/2) L¹(A_n) + (τ³/6) L²(A_n) + (τ⁴/24) L³(A_n) + γ_1 τ P_n,
    L¹(A_{n+1}) = L¹(A_n) + τ L²(A_n) + (τ²/2) L³(A_n) + τ² β_2 ⟦L³(A_n)⟧,
    L²(A_{n+1}) = L²(A_n) + τ L³(A_n) + τ γ_2 ⟦L³(A_n)⟧,                  (8)

where

    P_n = A_{n+1} − A_n − τ L¹(A_n) − (τ²/2) L²(A_n) − (τ³/6) L³(A_n),
    A_{n+α_1} = A_n + τ L¹(A_n) + (τ²/2) L²(A_n) + (τ³/6) L³(A_n) + α_1 P_n,
    L³(A_n)_{α_2} = L³(A_n) + α_2 ⟦L³(A_n)⟧.                              (9)

We approximate ∂^a f(·, t_{n+α_f}) / ∂t^a by F⁽ᵃ⁾_{n+α_f}. Following (6), one can readily obtain the initial data of the unknowns using the given information on U_0 and V_0 as

    A_0 = M⁻¹ (F_0 − K U_0 − C V_0),
    L¹(A_0) = M⁻¹ (F⁽¹⁾_0 − K V_0 − C A_0),
    L²(A_0) = M⁻¹ (F⁽²⁾_0 − K A_0 − C L¹(A_0)),
    L³(A_0) = M⁻¹ (F⁽³⁾_0 − K L¹(A_0) − C L²(A_0)).                       (10)

Next, we derive the corresponding coefficients that deliver the desired accuracy and discuss the method's stability. We now determine the parameters γ_1 and γ_2 that guarantee the fourth-order accuracy of (7); thus, we use the following result.

Theorem 1.
Assuming that the solution is sufficiently smooth with respect to time, the method (7) is fourth-order accurate in time given

    γ_1 = 1/2 − α_{f1} + α_1,   γ_2 = 1/2 − α_{f2} + α_2.                 (11)

Proof.
Following [12], we determine the amplification matrix by associating λ = θ² with the eigenvalues of the matrix M⁻¹K. The eigenvalues of the damping term M⁻¹C are ζθ, where ζ is the damping coefficient. Without loss of generality, we set ζ = 0 and ignore damping. Then, substituting (8) into (7), we obtain a system of equations at each time step,

    A U_{n+1} = B U_n + F_{n+α_f},                                        (12)

setting U_nᵀ = [U_n, τ V_n, τ² A_n, τ³ L¹(A_n), τ⁴ L²(A_n), τ⁵ L³(A_n)]ᵀ. Here,

    A = [ 1  0  −β_1  0  0  0
          0  1  −γ_1  0  0  0
          0  0   α_1  0  0  0
          0  0   0    1  0  −β_2
          0  0   0    0  1  −γ_2
          0  0   0    0  0   α_2 ],                                       (13)

while B collects the coefficients of the right-hand sides of the updates (8)–(9) together with the stiffness contributions −τ²λ of (7) (we omit its lengthy entries), and F_{n+α_f} consists of the forcing terms F_{n+α_{f1}} and F⁽³⁾_{n+α_{f2}}. Thus, the amplification matrix G becomes

    G = A⁻¹ B,                                                            (14)

with G being an upper-block-triangular matrix written as

    G = [ Λ_1  Ξ
          0    Λ_2 ],                                                      (15)

where, with Θ := τ²λ,

    Λ_1 = (1/α_1) [ α_1 − β_1 Θ   α_1 − β_1 Θ   α_1 − β_1 (Θ + 2)
                    −γ_1 Θ        α_1 − γ_1 Θ   α_1 − γ_1 (Θ + 2)
                    −Θ            −Θ            α_1 − 1 − Θ/2      ],

    Λ_2 = (1/α_2) [ α_2 − β_2 Θ   α_2   α_2 − β_2
                    −γ_2 Θ        α_2   α_2 − γ_2
                    −Θ            0     α_2 − 1   ].                       (16)

Following [11, 13], firstly, we obtain the high-order unknowns L²(A_n) and L³(A_n) with second-order accuracy. Secondly, we set the parameters such that the upper diagonal block also delivers second-order accuracy. Then, we obtain the fourth-order accuracy by adding the high-order terms through the upper off-diagonal block Ξ to our solution. Therefore, we study each diagonal block to derive the related parameters.
For this aim, for any arbitrary amplification matrix G, we can state the following:

    G_0 L(A_{n+1}) − G_1 L(A_n) + G_2 L(A_{n−1}) − G_3 L(A_{n−2}) = 0,     (17)

where the coefficients are invariants of the amplification matrix: G_0 = 1, G_1 is the trace of G, G_2 is the sum of the principal minors of G, and G_3 is the determinant of G. Using a Taylor series expansion, we have:

    L(A_{n+1}) = L(A_n) + τ L̇(A_n) + (τ²/2) L̈(A_n) + O(τ³),
    L(A_{n−1}) = L(A_n) − τ L̇(A_n) + (τ²/2) L̈(A_n) + O(τ³),
    L(A_{n−2}) = L(A_n) − 2τ L̇(A_n) + 2τ² L̈(A_n) + O(τ³).               (18)

Setting γ_2 = 1/2 − α_{f2} + α_2, we obtain that L²(A_{n+1}) and L³(A_{n+1}) are second-order accurate in time. Next, we rewrite (18) for the unknown U_{n+1} as

    U_{n+1} = U_n + τ V_n + (τ²/2) A_n + R_1,
    U_{n−1} = U_n − τ V_n + (τ²/2) A_n + R_2,
    U_{n−2} = U_n − 2τ V_n + 2τ² A_n + R_3,                               (19)

where R_i is a function of L¹(A_n), L²(A_n), and L³(A_n). We neglect this term in the analysis as it is a residual. Thus, we can prove that the remaining terms are second-order accurate in time. Then, we add the residuals to the second-order accurate solution to produce a truncation error of O(τ⁵) and, consequently, we obtain a fourth-order accurate scheme in time, which completes the proof.

We bound the spectral radius of the amplification matrix by one to deliver a stable time-marching scheme. For this, firstly, we calculate the eigenvalues of the matrix G in (14) for the case Θ := τ²λ →
0. Therefore, in this case, the diagonal blocks in (16) are:

    Λ_1 = (1/α_1) [ α_1   α_1   α_1 − 2β_1
                    0     α_1   α_1 − 2γ_1
                    0     0     α_1 − 1    ],

    Λ_2 = (1/α_2) [ α_2   α_2   α_2 − β_2
                    0     α_2   α_2 − γ_2
                    0     0     α_2 − 1    ].                              (20)

Then, G's eigenvalues are:

    λ_1 = λ_2 = λ_3 = λ_4 = 1,   λ_5 = 1 − 1/α_1,   λ_6 = 1 − 1/α_2.      (21)

The boundedness of λ_5 and λ_6 in (21) implies that α_1 ≥ 1/2 and α_2 ≥ 1/2.

Now, we derive the CFL condition for our method. In the analysis of the implicit generalized-α methods of second- and higher-order accuracy [1, 11], one analyzes the discrete system's eigenvalue distribution in the limit Θ → ∞ and sets the method's free parameters such that all eigenvalues are equal to a real constant ρ_∞ ∈ [0, 1]. Here, instead, we study the eigenvalues of G, which is equivalent to finding the eigenvalues of each diagonal block in (16). Therefore, the characteristic polynomials for the diagonal blocks are:

    Λ_1:  2α_1 λ³ + λ² (Θ(2β_1 + 2γ_1 + 1) − 6α_1 + 2) + λ (Θ(−β_1 − γ_1 + 1) + 6α_1 − 4) + 2β_1 Θ − 2α_1 + 2 = 0,
    Λ_2:  2α_2 λ³ + λ² (2Θβ_2 − 6α_2 + 2) + λ (Θ(−β_2 + 2γ_2 + 1) + 6α_2 − 4) + Θ(β_2 + 1 − γ_2) − 2α_2 + 2 = 0,     (22)

where two roots of each characteristic polynomial in (22) are the principal roots while the third is spurious (unphysical). We require the two principal roots to be complex conjugates except in the high-frequency regions. This requirement maximizes high-frequency dissipation while setting two eigenvalues to one in the low-frequency range to improve the approximation accuracy. Thus, we set β_1 and β_2 such that the complex parts of the principal roots of the blocks Λ_1 and Λ_2, respectively, vanish in the high-frequency regions; this setting changes the largest eigenvalue from one to a user-defined value, which results in an approach similar to the implicit generalized-α methods. Additionally, we denote by Ω_{bi} the limit at which this bifurcation happens at block i, i = 1, 2. We define the critical stability limit for our explicit method as Ω_{si} and show the stability region of block i. We find the parameter values using the characteristic equation corresponding to the diagonal block i of the amplification matrix:

    Σ_{j=0}^{3} (ã_{ji} + ỹ_{ji} Θ) λ_i^{3−j} = 0,                        (23)

where the ã_i and ỹ_i are functions of the parameters γ_i and β_i. Each block i has three eigenvalues; we set two of them to ρ_{bi} and one becomes ρ_{si}. Therefore, we rewrite the characteristic polynomial (23) as

    (λ_i² + ρ_{bi}² + 2 λ_i ρ_{bi}) (λ_i + ρ_{si}) = 0.                   (24)

Next, to have all three roots with real values at the bifurcation limit, we equate (24) to (22) to obtain:

    Ω_{b1} = 2 − ρ_{b1} − ρ_{s1} + ρ_{s1} ρ_{b1},   Ω_{b2} = 2 + 2ρ_{b2} + ρ_{s2} − ρ_{s2} ρ_{b2},
    α_1 = (2 − (1 + ρ_{b1}) ρ_{s1}) / ((1 − ρ_{b1})(1 − ρ_{s1})),   α_2 = (2 + (1 − ρ_{b2}) ρ_{s2}) / ((1 + ρ_{b2})(1 + ρ_{s2})).   (25)

Remark 3.
Herein, we constrain ρ_{si} ≤ ρ_{bi}, which results in Ω_{bi} ≤ Ω_{si}. We can maximize the bifurcation region Ω_{bi} by setting ρ_{si} = ρ_{bi} and, consequently, obtain a one-parameter family of algorithms; Figure 1 shows this numerically.

Figure 1 shows how the bifurcation and stability regions change as the user-defined parameter changes.

[Figure 1: Effect of user-defined parameters on the bifurcation regions Ω_{b1} and Ω_{b2}; panel (a): ρ_{s1} = ρ_{s2} = ρ_{b1} = C; panel (b): ρ_{s1} = ρ_{s2} = ρ_{b2} = C, for C = 0, 0.3, 0.6, 1.]

To set the complex part of the eigenvalues equal to zero in the high-frequency regions, we define the parameters β_i as

    β_1 = (1 + ρ_{b1})(1 − ρ_{b1} ρ_{s1}) / ((1 − ρ_{b1})(1 − ρ_{s1})(1 − ρ_{s1} + ρ_{b1} ρ_{s1})),
    β_2 = −(1 − ρ_{b2} − ρ_{s2} + 2ρ_{b2} ρ_{s2} + 2ρ_{b2} ρ_{s2}² − ρ_{s2}² + ρ_{b2} ρ_{s2}²) / ((1 + ρ_{b2})(−1 − ρ_{s2} + ρ_{b2} ρ_{s2} − ρ_{s2}² + ρ_{b2} ρ_{s2}²)).   (26)

Finally, we find the critical values at which each block's spectral radius becomes larger than one. For this, we set Θ = Ω_{si} and λ_i = 1 in (23) and solve the resulting equation. Thus, we introduce the critical values Ω_{si} as

    Ω_{s1} = 4(1 − ρ_{b1})(2 − ρ_{b1} ρ_{s1} − ρ_{s1})(3 + ρ_{b1} − ρ_{s1} − ρ_{b1} ρ_{s1}) / (2(5 − ρ_{b1}) + (5 − ρ_{b1} − ρ_{b1}² − ρ_{b1}³) ρ_{s1} − (1 + ρ_{b1}) ρ_{s1}²),
    Ω_{s2} = 4(1 + ρ_{b2})(2 − ρ_{b2} ρ_{s2} + ρ_{s2})(3 − ρ_{b2} + ρ_{s2} − ρ_{b2} ρ_{s2}) / (2(5 − ρ_{b2}) + (5 − ρ_{b2} − ρ_{b2}² + ρ_{b2}³) ρ_{s2} − (1 − ρ_{b2}) ρ_{s2}²).   (27)

Remark 4. Following closely [2, 14], α_{f1} and α_{f2} are free parameters with respect to the spectral radius. Therefore, we set α_{f1} = 1 to deliver fourth-order accuracy. Accordingly, to optimally combine low- and high-frequency dissipation, we set α_{f2} = 0.

Therefore, by setting 0 ≤ ρ_{b1}, ρ_{b2} ≤ 1, we control the system's spectral radius ρ and, consequently, the high-frequency numerical damping. Figure 2 shows how the user controls ρ and the stability region Ω_s; setting ρ_b = ρ_s = 0.99 leads to the largest stability region, Θ = (θτ)² ≤ Ω_s = 4, equivalent to the stability region of the second-order central difference method [12].

[Figure 2: Spectral radius ρ behaviour for different ρ_b values, where ρ_{b1} = ρ_{b2} = ρ_{s1} = ρ_{s2}.]
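Numerically, the CFL bound reads Θ = τ²λ ≤ Ω_s for every eigenvalue λ of M⁻¹K, so the critical step follows from the largest eigenvalue. The sketch below is ours, not the paper's: it estimates λ_max with a plain power iteration; with Ω_s = 4 it reproduces the central-difference limit τ ≤ 2/θ_max, where θ_max = √λ_max.

```python
import numpy as np

def critical_time_step(M, K, omega_s=4.0, n_iter=200, seed=0):
    """Estimate the largest stable step from tau^2 * lambda_max(M^{-1} K) <= omega_s."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(M.shape[0])
    v /= np.linalg.norm(v)
    lam = 1.0
    for _ in range(n_iter):              # power iteration on M^{-1} K
        w = np.linalg.solve(M, K @ v)
        lam = np.linalg.norm(w)          # converges to lambda_max for an SPD pencil
        v = w / lam
    return np.sqrt(omega_s / lam)
```

For a diagonal example M = I, K = diag(1, 4, 9), the routine returns √(4/9) = 2/3, i.e., τ_crit = 2/θ_max with θ_max = 3.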
4. General 2k-th order accuracy in time

This section extends our approach to deliver 2k-th order accurate methods with k ≥ 2. We write the 2k-th order explicit generalized-α method as:

    M A_{n+α_1} + C V_n + K U_n = F_{n+α_{f1}},
    M L^{3j−3}(A_n)_{α_j} + C L^{3j−4}(A_n) + K L^{3j−5}(A_n) = F^{(3j−3)}_{n+α_{fj}},   j = 2, ..., k − 1,
    M L^{3k−3}(A_n)_{α_k} + C L^{3k−4}(A_n) + K L^{3k−5}(A_n) = F^{(3k−3)}_{n+α_{fk}},   (28)

and we update the system using

    U_{n+1} = U_n + τ V_n + (τ²/2) A_n + (τ³/6) L¹(A_n) + ··· + (τ^{3k−1}/(3k−1)!) L^{3k−3}(A_n) + β_1 τ² P_{n,1},
    V_{n+1} = V_n + τ A_n + (τ²/2) L¹(A_n) + ··· + (τ^{3k−2}/(3k−2)!) L^{3k−3}(A_n) + τ γ_1 P_{n,1},
    L^{3j−5}(A_{n+1}) = L^{3j−5}(A_n) + τ L^{3j−4}(A_n) + ··· + (τ^{3(k−j)+2}/(3(k−j)+2)!) L^{3k−3}(A_n) + τ² β_j P_{n,j},
    L^{3j−4}(A_{n+1}) = L^{3j−4}(A_n) + ··· + (τ^{3(k−j)+1}/(3(k−j)+1)!) L^{3k−3}(A_n) + τ γ_j P_{n,j},   j = 2, ..., k − 1,
    L^{3k−5}(A_{n+1}) = L^{3k−5}(A_n) + τ L^{3k−4}(A_n) + (τ²/2) L^{3k−3}(A_n) + τ² β_k ⟦L^{3k−3}(A_n)⟧,
    L^{3k−4}(A_{n+1}) = L^{3k−4}(A_n) + τ L^{3k−3}(A_n) + τ γ_k ⟦L^{3k−3}(A_n)⟧,   (29)

with

    P_{n,1} = A_{n+1} − A_n − τ L¹(A_n) − ··· − (τ^{3k−3}/(3k−3)!) L^{3k−3}(A_n),
    A_{n+α_1} = A_n + τ L¹(A_n) + ··· + (τ^{3k−3}/(3k−3)!) L^{3k−3}(A_n) + α_1 P_{n,1},
    P_{n,j} = L^{3j−3}(A_{n+1}) − L^{3j−3}(A_n) − ··· − (τ^{3(k−j)}/(3(k−j))!) L^{3k−3}(A_n),
    L^{3j−3}(A_n)_{α_j} = L^{3j−3}(A_n) + ··· + (τ^{3(k−j)}/(3(k−j))!) L^{3k−3}(A_n) + α_j P_{n,j},   j = 2, ..., k − 1,
    L^{3k−3}(A_n)_{α_k} = L^{3k−3}(A_n) + α_k ⟦L^{3k−3}(A_n)⟧.   (30)

Letting k = 2 recovers the fourth-order explicit generalized-α method of Section 3. Using arguments similar to the proof of Theorem 1, we can establish higher-order schemes in the form of (28) and (29) by using higher-order Taylor series. Then, to seek 2k-th order of accuracy, we substitute (29) into (28) and find a matrix system,

    L U_{n+1} = R U_n + F_{n+α_f}.   (31)

Therefore, the amplification matrix of the 2k-th order accurate scheme becomes

    G_k = L⁻¹ R = [ Λ_1   Ξ_{12}   ···   Ξ_{1k}
                          Λ_2     ···   Ξ_{2k}
                                  ⋱     ⋮
                                        Λ_k   ],   (32)

with

    Λ_j = (1/α_j) [ α_j − β_j Θ   α_j − β_j Θ   α_j − β_j (Θ + 2)
                    −γ_j Θ        α_j − γ_j Θ   α_j − γ_j (Θ + 2)
                    −Θ            −Θ            α_j − 1 − Θ/2      ],   j = 1, ..., k − 1,

    Λ_k = (1/α_k) [ α_k − β_k Θ   α_k   α_k − β_k
                    −γ_k Θ        α_k   α_k − γ_k
                    −Θ            0     α_k − 1   ].   (33)

Remark 5.
The amplification matrix (32) is an upper-triangular block matrix; thus, we neglect the non-diagonal block contributions in the eigenvalue analysis.
Theorem 2.
Assuming u_h(t) and u̇_h(t) have sufficient regularity in time, our semi-discrete method (28)–(30) for advancing (2) is 2k-th order accurate in time when

    γ_j = 1/2 − α_{fj} + α_j,   j = 1, ..., k − 1,   γ_k = 1/2 − α_{fk} + α_k.   (34)

Proof.
The amplification matrix (32) is an upper-triangular block matrix, and each diagonal block is a 3 × 3 matrix. The blocks Λ_1 and Λ_k have entries similar to those of the amplification matrix of the fourth-order method; the other diagonal blocks are analogous to Λ_1. Hence, separately for each block, we determine the relevant terms for (17) and consider the higher-order terms to obtain second-order accuracy. Consequently, after solving the whole system and adding the higher-order terms to the unknowns u_h and v_h, we have a truncation error of O(τ^{2k+1}).

We define the system's spectral behaviour, the bifurcation regions, and its CFL conditions; thus, we study the eigenvalues of the amplification matrix (32). Accordingly, we calculate each diagonal block's eigenvalues. The first k − 1 blocks give

    Ω_{bj} = 2 − ρ_{bj} − ρ_{sj} + ρ_{sj} ρ_{bj},   j = 1, ..., k − 1,
    α_j = (2 − (1 + ρ_{bj}) ρ_{sj}) / ((1 − ρ_{bj})(1 − ρ_{sj})),
    β_j = (1 + ρ_{bj})(1 − ρ_{bj} ρ_{sj}) / ((1 − ρ_{bj})(1 − ρ_{sj})(1 − ρ_{sj} + ρ_{bj} ρ_{sj})),

while the last block gives

    Ω_{bk} = 2 + 2ρ_{bk} + ρ_{sk} − ρ_{sk} ρ_{bk},
    α_k = (2 + (1 − ρ_{bk}) ρ_{sk}) / ((1 + ρ_{bk})(1 + ρ_{sk})),
    β_k = −(1 − ρ_{bk} − ρ_{sk} + 2ρ_{bk} ρ_{sk} + 2ρ_{bk} ρ_{sk}² − ρ_{sk}² + ρ_{bk} ρ_{sk}²) / ((1 + ρ_{bk})(−1 − ρ_{sk} + ρ_{bk} ρ_{sk} − ρ_{sk}² + ρ_{bk} ρ_{sk}²)),   (35)

and the critical values Ω_{sj} are

    Ω_{sj} = 4(1 − ρ_{bj})(2 − ρ_{bj} ρ_{sj} − ρ_{sj})(3 + ρ_{bj} − ρ_{sj} − ρ_{bj} ρ_{sj}) / (2(5 − ρ_{bj}) + (5 − ρ_{bj} − ρ_{bj}² − ρ_{bj}³) ρ_{sj} − (1 + ρ_{bj}) ρ_{sj}²),   j = 1, ..., k − 1,
    Ω_{sk} = 4(1 + ρ_{bk})(2 − ρ_{bk} ρ_{sk} + ρ_{sk})(3 − ρ_{bk} + ρ_{sk} − ρ_{bk} ρ_{sk}) / (2(5 − ρ_{bk}) + (5 − ρ_{bk} − ρ_{bk}² + ρ_{bk}³) ρ_{sk} − (1 − ρ_{bk}) ρ_{sk}²).   (36)

Remark 6.
We constrain ρ_{sm} ≤ ρ_{bm}, m = 1, ..., k, and maximize the bifurcation regions Ω_{bm} by setting ρ_{sm} = ρ_{bm} = ρ, with 0 ≤ ρ < 1 a user-defined parameter; our method is thus a one-parameter family of time-marching algorithms. Additionally, the stability region is independent of the accuracy order; we obtain higher-order accuracy without affecting any features of the second-order algorithm (i.e., we preserve the stability regions, the bifurcation limit, and the dissipation control). Similarly, the α_{fj} and α_{fk} are free parameters; thus, we set α_{fj} = 1, j = 1, ..., k − 1, and α_{fk} = 0.

Remark 7.
The dominant computational cost of our explicit time-marching scheme is the solution of linear systems with the mass matrix. Next, we discuss a state-of-the-art approach to preconditioning the mass matrices resulting from isogeometric analysis that minimizes this cost for complex geometries as well as for single- and multi-patch discretizations.
5. Solver
The method we propose has many advantages: for instance, the CFL condition and the dissipation control are independent of the order of accuracy; however, it requires solving k mass-matrix systems per time step. Herein, we discuss a method to accelerate these systems' solution when using isogeometric analysis. Firstly, we describe efficient and robust preconditioners for maximum-continuity isogeometric mass matrices for single- and multi-patch geometries. Then, we adopt an iterative solver (i.e., preconditioned conjugate gradients, PCG) to calculate the solutions.

Given two positive integers p and m, consider an open knot vector Ξ := {ξ_1, ..., ξ_{m+p+1}} such that

    ξ_1 = ... = ξ_{p+1} < ξ_{p+2} ≤ ... ≤ ξ_m < ξ_{m+1} = ... = ξ_{m+p+1},

where interior repeated knots are allowed with maximum multiplicity p. We assume ξ_1 = 0 and ξ_{m+p+1} = 1. From the knot vector Ξ, we define degree-p B-spline functions using the Cox-de Boor recursive formula. We start with piecewise constants (p = 0):

    b̂_{i,0}(ζ) = 1 if ξ_i ≤ ζ < ξ_{i+1}, and 0 otherwise;

for p ≥ 1, the following recursion defines the B-spline functions:

    b̂_{i,p}(ζ) = ((ζ − ξ_i)/(ξ_{i+p} − ξ_i)) b̂_{i,p−1}(ζ) + ((ξ_{i+p+1} − ζ)/(ξ_{i+p+1} − ξ_{i+1})) b̂_{i+1,p−1}(ζ),

where 0/0 is taken as 0. Each b̂_{i,p} depends only on p + 2 knots, which we collect in a local knot vector Ξ_{i,p} := {ξ_i, ..., ξ_{i+p+1}}; it is non-negative, and its support is the interval [ξ_i, ξ_{i+p+1}]. Moreover, these B-spline functions constitute a partition of unity, that is,

    Σ_{i=1}^{m} b̂_{i,p}(x) = 1,   ∀x ∈ (0, 1).   (37)

The univariate spline space is

    Ŝ_h = Ŝ_h([0, 1]) := span{b̂_{i,p}}_{i=1}^{m},

where h denotes the maximal mesh size. We may drop the degree p from the notation when it does not lead to confusion (see, e.g., [15, 16]).

We define multivariate B-splines from univariate ones by tensorization, as is common practice. Let d be the space dimension, and consider open knot vectors Ξ_k = {ξ_{k,1}, ..., ξ_{k,m+p+1}} and a set of multi-indices I := {i = (i_1, ..., i_d) : 1 ≤ i_l ≤ m}. For each multi-index i = (i_1, ..., i_d), we introduce the d-variate B-spline

    B̂_i(ζ) := b̂[Ξ_{i_1,p}](ζ_1) ··· b̂[Ξ_{i_d,p}](ζ_d).

The corresponding spline space is

    Ŝ_h = Ŝ_h([0, 1]^d) := span{B̂_i : i ∈ I},

where h is the maximal mesh size in all dimensions, that is,

    h := max_{1 ≤ k ≤ d, 1 ≤ i ≤ m+p} {|ξ_{k,i+1} − ξ_{k,i}|}.

Assumption 1.
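The Cox-de Boor recursion translates directly into code. The following Python sketch is ours (the paper's experiments use Matlab and GeoPDEs instead); the zero-denominator guards implement the 0/0 := 0 convention, and the knots are 0-indexed.

```python
def bspline_basis(i, p, knots, z):
    """Evaluate the degree-p B-spline b_{i,p}(z) by the Cox-de Boor recursion."""
    if p == 0:
        return 1.0 if knots[i] <= z < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] > knots[i]:                      # skip term when 0/0 would occur
        left = (z - knots[i]) / (knots[i + p] - knots[i]) * bspline_basis(i, p - 1, knots, z)
    if knots[i + p + 1] > knots[i + 1]:
        right = (knots[i + p + 1] - z) / (knots[i + p + 1] - knots[i + 1]) * bspline_basis(i + 1, p - 1, knots, z)
    return left + right
```

On the open knot vector {0, 0, 0, 1/2, 1, 1, 1} with p = 2 (so m = 4), the four basis functions are non-negative and sum to one at every point of [0, 1), in agreement with (37).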
Knot vectors are quasi-uniform; that is, there exists α > 0, independent of h, such that each non-empty knot span (ξ_{k,i}, ξ_{k,i+1}) fulfils αh ≤ ξ_{k,i+1} − ξ_{k,i}, for 1 ≤ k ≤ d.

5.2. Single-patch geometric space

We consider a single-patch domain Ω ⊂ R^d with a d-dimensional parametrization F,

    Ω = F(Ω̂),   with   F(ξ) = Σ_i C_i B̂_i(ξ),

where the C_i are the control points and the B̂_i are tensor-product B-spline basis functions defined on the parametric patch Ω̂ := (0, 1)^d, and F is an invertible map. Following the isoparametric paradigm, the isogeometric basis functions B_i are the push-forward of the parametric basis, that is, B_i = B̂_i ∘ F⁻¹. Thus, the isogeometric space on Ω is defined as

    S_h = S_h(Ω) := span{B_i := B̂_i ∘ F⁻¹ : i ∈ I}.

We introduce a co-lexicographical reordering of the basis functions and write

    S_h = span{B_i : i ∈ I} = span{B_i}_{i=1}^{N_dof}.   (38)

Following [17], a multi-patch domain Ω ⊂ R^d is an open set given by the union of subdomains

    Ω = ∪_{r=1}^{N_patch} Ω^(r),   (39)

where N_patch is the number of subdomains, the Ω^(r) = F^(r)(Ω̂) are the disjoint patches (subdomain pre-images), each F^(r) has a different spline parametrization, and the superindex (r) refers to Ω^(r). We introduce, for each patch Ω^(r), the B-spline spaces

    Ŝ_h^(r) := span{B̂_i^(r) : i = 1, ..., N_dof^(r)}

and the isogeometric spaces

    S_h^(r) := span{B_i^(r) : i = 1, ..., N_dof^(r)}.

We assume for simplicity that all patches have the same degree p. We define an isogeometric space on Ω by imposing continuity at the interfaces between patches, that is,

    V_h := {v ∈ C⁰(Ω) : v|_{Ω^(r)} ∈ S_h^(r) for r = 1, ..., N_patch}.   (40)

We assume a suitable conformity to construct a basis for the space V_h; for all r, s ∈ {1, ..., N_patch} with r ≠ s, let Γ_rs = ∂Ω^(r) ∩ ∂Ω^(s) be the interface between the patches Ω^(r) and Ω^(s).

Assumption 2.
We assume:
1. Γ_rs is either a vertex, the image of a full edge, or the image of a full face for both parametric domains.
2. For each B_i^(r) ∈ S_h^(r) such that supp(B_i^(r)) ∩ Γ_rs ≠ ∅, there exists a function B_j^(s) ∈ S_h^(s) such that B_i^(r)|_{Γ_rs} = B_j^(s)|_{Γ_rs}.

We define, for each patch Ω^(r), an application

    G^(r) : {1, ..., N_dof^(r)} → J = {1, ..., dim(V_h)},

such that G^(r)(i) = G^(s)(j) if and only if Γ_rs ≠ ∅ and B_i^(r)|_{Γ_rs} = B_j^(s)|_{Γ_rs}. Moreover, we define, for each global index l ∈ J, a set of pairs

    J_l := {(r, i) : G^(r)(i) = l},

which collects the local indices of the patch-wise contributions to a global function, and the scalar

    n_l := #J_l,   (41)

which expresses the patch multiplicity of the global index l. Furthermore, let

    N_adj := max{n_l : l ∈ J}   (42)

be the maximum number of adjacent patches (i.e., those with non-empty closure intersection). We define, for each l ∈ J, the global basis function

    B_l(x) := B_i^(r)(x) if x ∈ Ω^(r) and (r, i) ∈ J_l, and 0 otherwise,   (43)

which is continuous due to Assumption 2. Then

    V_h = span{B_l : l ∈ J}.   (44)

The set {B_l : l ∈ J}, with B_l defined as in (43), represents a basis for V_h. Finally, we define the index set J^(r) ⊂ J such that l ∈ J^(r) if and only if l = G^(r)(i) for some i; then #J^(r) = N_dof^(r) and, abusing notation, J^(r) is the index set for both Ŝ_h^(r) and S_h^(r).

In this section, we briefly revisit the preconditioner described in [7] for an isogeometric mass matrix associated with a single-patch domain Ω, that is,

    [M]_{i,j} = ∫_{Ω̂} B̂_i B̂_j |det(DF)|.   (45)

Assumption 3.
We assume F ∈ C¹([0, 1]^d) and that there exists δ > 0 such that, for all x ∈ [0, 1]^d, det(DF) ≥ δ.

Hence, as a preconditioner for the mass matrix M, we consider

    M̃ := D^{1/2} D̂^{−1/2} M̂ D̂^{−1/2} D^{1/2},   (46)

where

    [M̂]_{i,j} := ∫_{Ω̂} B̂_i B̂_j,   D̂ := diag(M̂),   D := diag(M).   (47)

We define

    κ(A) := λ_max(A) / λ_min(A)   (48)

for a symmetric, positive definite matrix A. The preconditioner (46) has several good properties (see [7] for details):

• asymptotic exactness, that is,

    lim_{h→0} κ(M̃^{−1/2} M M̃^{−1/2}) = 1;   (49)

• p-robustness; in fact, numerical tests show a slow (almost linear) growth of κ(M̃^{−1/2} M M̃^{−1/2}) with respect to the spline degree p;

• good behaviour with respect to the spline parametrization F; numerical evidence shows that κ(M̃^{−1/2} M M̃^{−1/2}) and the number of PCG iterations needed to converge are small even for distorted geometries.

We introduce a mass matrix preconditioner for multi-patch domains, that is,

    [M]_{i,j} = ∫_Ω B_i B_j,   B_i, B_j ∈ B_mp,

where Ω is the union of the Ω^(r); see (39). Following [7], we combine the single-patch preconditioner of (46) with an additive Schwarz method. We define a family of local spaces

    V_h^(r) := span{B_mp^(r)},   r = 1, ..., N_patch,   (50)

where we have set B_mp^(r) := {B_l : l ∈ J^(r)}, with J^(r) defined as in Section 5.3. Therefore, V_h^(r) is the subspace of V_h spanned by the B-splines whose support intersects Ω^(r). Moreover, following the notation of [18], we consider restriction operators R^(r) : V_h → V_h^(r), with r = 1, ..., N_patch, defined by

    R^(r) ( Σ_{l∈J} u_l B_l ) = Σ_{l∈J^(r)} u_l B_l,

and their transposes in the basis representation, R^(r)T : V_h^(r) → V_h, which correspond, in our case, to the inclusion of V_h^(r) into V_h.
We denote by $R^{(r)}$ and $R^{(r)T}$ also the rectangular matrices associated with the operators $R^{(r)}$ and $R^{(r)T}$, respectively. The additive Schwarz preconditioner, in inverse form, is

$$\mathcal{M}_{\mathrm{ad}}^{-1} := \sum_{r=1}^{N_{\mathrm{patch}}} R^{(r)T} \big(\mathcal{M}^{(r)}\big)^{-1} R^{(r)}, \qquad (51)$$

where we set

$$\mathcal{M}^{(r)} := \big(D^{(r)}\big)^{1/2} \big(\widehat{D}^{(r)}\big)^{-1/2} \widehat{M}^{(r)} \big(\widehat{D}^{(r)}\big)^{-1/2} \big(D^{(r)}\big)^{1/2} \qquad (52)$$

and

$$[\widehat{M}^{(r)}]_{i,j} := \int_{\widehat{\Omega}} \widehat{B}_i^{(r)} \widehat{B}_j^{(r)}, \qquad \widehat{D}^{(r)} := \mathrm{diag}\big(\widehat{M}^{(r)}\big), \qquad D^{(r)} := \mathrm{diag}\big(M^{(r)}\big), \qquad (53)$$

assuming that the ordering of the basis functions $\{\widehat{B}_i^{(r)}\}_{i=1}^{N_{\mathrm{dof}}^{(r)}}$ and $\{B_i^{(r)}\}_{i=1}^{N_{\mathrm{dof}}^{(r)}}$ follows Section 5.3.

Assumption 4.
For all $r = 1,\ldots,N_{\mathrm{patch}}$, we assume that $F^{(r)}$ fulfils Assumption 3.

An upper bound for the condition number of the preconditioned system is [7]

$$\kappa\big(\mathcal{M}_{\mathrm{ad}}^{-1/2}\, M\, \mathcal{M}_{\mathrm{ad}}^{-1/2}\big) \leq C\, N_{\mathrm{adj}},$$

where the constant $C$ is independent of $h$ and $N_{\mathrm{patch}}$; equation (42) defines $N_{\mathrm{adj}}$. Numerical experiments show the method's $p$-robustness and good behaviour with respect to the spline parametrizations $F^{(r)}$.

The mass matrices and preconditioners of Sections 5.4 and 5.5 are symmetric and positive definite; thus, we adopt the preconditioned conjugate gradient (PCG) method to solve the associated linear systems. The application of the single-patch preconditioner requires the solution of a linear system associated with $\mathcal{M} = D^{1/2}\,\widehat{D}^{-1/2}\,\widehat{M}\,\widehat{D}^{-1/2}\,D^{1/2}$; the Kronecker-product structure of $\widehat{M}$ implies that the preconditioner's application cost is proportional to $N_{\mathrm{dof}}$ [7, 19-21]. For the multi-patch case, the application of $\mathcal{M}_{\mathrm{ad}}^{-1}$ in (51) involves, for $r \in \{1,\ldots,N_{\mathrm{patch}}\}$, the application of the operators $R^{(r)}$ and $R^{(r)T}$, whose cost is negligible, and the application of $\big(\mathcal{M}^{(r)}\big)^{-1}$, whose cost we analyzed above. In conclusion, also for $\mathcal{M}_{\mathrm{ad}}^{-1}$, the application cost is $O(N_{\mathrm{dof}})$ floating-point operations (FLOPs).
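For concreteness, the application of the additive Schwarz inverse (51) is just a loop over patches with a scatter-add. A minimal NumPy sketch follows, where `patches`, `dofs`, and `local_solve` are illustrative names and a dense local solve stands in for the actual Kronecker-based application of $(\mathcal{M}^{(r)})^{-1}$:

```python
import numpy as np

def schwarz_apply(r_glob, patches):
    # Additive Schwarz inverse: sum_r R^(r)T (M^(r))^{-1} R^(r) applied to a
    # global residual. Each patch is a pair (dofs, local_solve), where `dofs`
    # lists the global indices of the patch (the action of R^(r)) and
    # `local_solve` applies the inverse of the local preconditioner.
    z = np.zeros_like(r_glob)
    for dofs, local_solve in patches:
        z[dofs] += local_solve(r_glob[dofs])   # scatter-add: action of R^(r)T
    return z
```

Inside PCG, a call such as `z = schwarz_apply(residual, patches)` plays the role of the preconditioner application; since each `local_solve` costs $O(N_{\mathrm{dof}}^{(r)})$ with the Kronecker structure, the total remains $O(N_{\mathrm{dof}})$.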
6. Numerical results
In this section, we first show the performance of the proposed explicit generalized-α method; we then provide further numerical results on our solver. All the tests are performed with Matlab R2015a and the GeoPDEs toolbox [22]. To show how our explicit method and solver work, the linear systems are solved by PCG, with tolerance equal to $10^{-}$ and with the null vector as the initial guess. We denote by $n_{\mathrm{sub}}$ the number of subdivisions, which is the same in each parametric direction and in each patch, by $p$ the spline degree, and by $\tau$ the time-step size. Moreover, we underline that we only consider splines of maximal regularity. The symbol "*" denotes that the matrix $M$ could not be formed, due to memory requirements.

Figure 3: $L^2(\Omega)$-norm errors of the displacement and the velocity at $T = 0.1$, exhibiting the expected $O(\tau^4)$ decay for $\rho_\infty \in \{0, 0.5, 1\}$.

In all the examples, we consider the following hyperbolic problem:

$$\begin{aligned} \ddot{u}(\mathbf{x},t) - \nabla \cdot \big(\omega \nabla u(\mathbf{x},t)\big) &= f(\mathbf{x},t), && (\mathbf{x},t) \in \Omega \times (0,T], \\ u(\mathbf{x},t) &= u_D, && \mathbf{x} \in \partial\Omega, \\ u(\mathbf{x},0) &= u_0, && \mathbf{x} \in \Omega, \\ \dot{u}(\mathbf{x},0) &= v_0, && \mathbf{x} \in \Omega, \end{aligned} \qquad (54)$$

where $\omega$ is equal to one, except in the problem considered in Section 6.2.

6.1. Accuracy of the fourth-order explicit generalized-α method

To verify the accuracy of the fourth-order explicit generalized-α method, we solve (54) on $\Omega = [0,1] \times [0,1]$ with the exact solution

$$u(x,y,t) = \sin(10\pi x)\sin(10\pi y)\left[\cos\big(10\sqrt{2}\,\pi t\big) + \sin\big(10\sqrt{2}\,\pi t\big)\right].$$

In Figure 3, we show the fourth order of convergence in time of the unknown displacement and velocity at the final time $T = 0.1$. In this example, we use a mesh of $100 \times 100$ elements and spline degree $p = 7$ with $C^6$ continuity. We also set all the user-defined parameters $\rho_{b_i} = \rho_{s_i} = \rho_\infty$. As the preceding analysis showed, the obtained solutions $u_h^n$ and $v_h^n$, which respectively approximate $u(\mathbf{x},t_n)$ and $\dot{u}(\mathbf{x},t_n)$, converge with order four.

6.2. Dispersion of the explicit generalized-α method

In this example, we numerically show that choosing higher-order methods leads to better approximations in the low-frequency regions. Again, we consider a homogeneous Dirichlet boundary condition with an exact solution given by

$$u(x,t) = \sin(j\pi x)\cos(\pi t), \qquad (55)$$
Figure 4: Comparison between the dispersion behaviour of the second- and fourth-order methods, with $\tau = 0.05$ (top) and $\tau = 10^{-}$ (bottom), for the final time $t_n = 5$.

and setting $\omega = j^{-2}$. We discretize the spatial domain using $N = 400$ elements with polynomials of degree $p = 4$ and regularity $C^3$. We refer to Figure 4, where we consider two cases. In the first, the time step is $\tau = 0.05$, for which the bifurcation region is reached; therefore, we obtain distinguishable results when setting different values of $\rho_\infty$, which also shows the importance of picking $\rho_\infty$ wisely. In the second, we set the time step $\tau = 10^{-}$. Here, the spectral behaviour of the system does not change as $j$ varies, which results in similar solutions for the different values of $\rho_\infty$.

6.3. Performance of the preconditioners

To analyze the behaviour of the proposed preconditioners and of the proposed fourth-order method, we solve (54) on a regular single-patch domain (Figure 5a), a singular one (Figure 6a), and a multi-patch domain (Figure 7a), obtained by glueing together seven blade-shaped patches like the one represented in Figure 5a. For each of them, the source term, the Dirichlet boundary condition, and the initial conditions are chosen such that the exact solution is always

$$u(x_1,x_2,x_3,t) = \sin(x_1)\sin(x_2)\sin(x_3)\left[\cos(20\pi t) + \sin(20\pi t)\right].$$

Figure 5: (a) Single-patch domain (blade geometry); (b) optimal convergence of the $L^2(\Omega)$-norm relative error at $T = 64\,\tau$ with $\tau = 10^{-}$ for the blade geometry.

Table 1: Mean values, across all the time steps, of the iterations needed by PCG on the blade, for $\tau = 10^{-}$ and $T = 64\,\tau$.

n_sub   p = 1   p = 2   p = 3   p = 4
  8      7.5    10.1    10.2    10.4
 16      6.1     8.4     8.4     8.9
 32      5.5     6.5     6.5     7.2
 64      5.0     5.5     5.5     6.0

We report the mean value, across all the time steps, of the number of PCG iterations needed to reach the given tolerance on the three spatial domains. Tables 1 and 3 show that the number of PCG iterations is always very low and decreases when the number of subdivisions is increased.
This is true also for the singularly parametrized donut domain (see Table 2), even though this case is beyond the robustness result presented in Section 5.4.

Table 2: Mean values, across all the time steps, of the iterations needed by PCG on the donut, for $\tau = 10^{-}$ and $T = 64\,\tau$.

n_sub   p = 1   p = 2   p = 3   p = 4
  8      6.0     8.0     9.0    10.0
 16      5.0     7.0     8.0     8.0
 32      5.0     6.0     7.0     8.0
 64      5.3     6.5     7.0     7.8

Figure 6: (a) Singular domain (donut geometry); (b) optimal convergence of the $L^2(\Omega)$-norm relative error at $T = 64\,\tau$ with $\tau = 10^{-}$ for the donut geometry.

Figure 7: (a) Multi-patch domain (fan geometry); (b) optimal convergence of the $L^2(\Omega)$-norm relative error at $T = 64\,\tau$ with $\tau = 10^{-}$ for the fan geometry.

Table 3: Mean values, across all the time steps, of the iterations needed by PCG on the fan problem, for $\tau = 5 \cdot 10^{-}$ and $T = 64\,\tau$.

n_sub   p = 1   p = 2   p = 3   p = 4
  8     22.5    32.0    37.7    41.9
 16     21.0    27.8    30.8    35.6
 32     19.0    25.5    27.9    29.9
 64     17.0    22.5    26.0    27.5

Furthermore, to assess the good behaviour of the isogeometric discretization, we compute the relative $L^2(\Omega)$-norm error at the final instant $T = 64\,\tau$ with $\tau = 10^{-}$ for different mesh sizes and spline degrees. In Figure 5b, we can see that the rates of convergence are optimal with respect to the mesh size $h \approx n_{\mathrm{sub}}^{-1}$, i.e., of order $O(h^{p+1})$ for $p = 1, 2, 3, 4$, as expected from the standard a priori error estimates. This optimal convergence in space also holds for the singular and multi-patch domains; for this, we refer the reader to Figures 6b and 7b.
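Observed convergence orders such as those above are typically estimated by comparing errors on successively refined grids. The following small helper sketches the computation; the error values in the example are synthetic, chosen only to illustrate the $O(h^{p+1})$ rate for $p = 4$, and are not data from the experiments.

```python
import math

def observed_orders(errors, refinement=2.0):
    # Estimated convergence orders between consecutive refinement levels:
    # order_k = log(e_k / e_{k+1}) / log(refinement).
    return [math.log(e0 / e1) / math.log(refinement)
            for e0, e1 in zip(errors, errors[1:])]

# Synthetic errors decaying like O(h^{p+1}) with p = 4, on meshes h, h/2, h/4, h/8:
errs = [0.5 * (1.0 / 2**k) ** 5 for k in range(4)]
```

Here `observed_orders(errs)` returns values equal to 5, i.e., $p + 1$ for $p = 4$, which is the slope one reads off a log-log error plot such as Figure 5b.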
7. Contributions
We propose a new class of higher-order explicit generalized-α methods for solving hyperbolic problems that provide dissipation control. Additionally, the method's stability region is independent of the accuracy order in time. We obtain $2k$th-order accuracy by solving $k$ mass-matrix systems per time step. We adopt a preconditioner designed and built for the isogeometric mass matrix, which significantly reduces the computational cost. We discuss several numerical examples that show the stability and performance of our one-parameter family of explicit time-marching methods.

Acknowledgements
This publication was made possible in part by the CSIRO Professorial Chair in Computational Geoscience at Curtin University and the Deep Earth Imaging Enterprise Future Science Platforms of the Commonwealth Scientific and Industrial Research Organisation (CSIRO) of Australia. This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 777778 (MATHROCKS). The Curtin Corrosion Centre and the Curtin Institute for Computation kindly provide ongoing support. The Australian Government Research Training Program Scholarship supported P. Behnoudfar's research. Part of this work was carried out while P. Behnoudfar was visiting Prof. A. Reali in Pavia, partially supported by the MIUR-PRIN project XFAST-SIMS (no. 20173C478N). G. Loli and G. Sangalli were partially supported by the European Research Council through the FP7 Ideas Consolidator Grant HIGEOM n. 616563, and by the Italian Ministry of Education, University and Research (MIUR) through the "Dipartimenti di Eccellenza Program (2018-2022) - Dept. of Mathematics, University of Pavia". They are also members of the Gruppo Nazionale Calcolo Scientifico - Istituto Nazionale di Alta Matematica (GNCS-INDAM). These supports are gratefully acknowledged.
References

[1] J. Chung and G. M. Hulbert. A time integration algorithm for structural dynamics with improved numerical dissipation: the generalized-α method. Journal of Applied Mechanics, 60(2):371-375, 1993.
[2] G. M. Hulbert and J. Chung. Explicit time integration algorithms for structural dynamics with optimal numerical dissipation. Computer Methods in Applied Mechanics and Engineering, 137(2):175-188, 1996.
[3] N. M. Newmark. A method of computation for structural dynamics. Journal of the Engineering Mechanics Division, 85(3):67-94, 1959.
[4] H. M. Hilber, T. J. R. Hughes, and R. L. Taylor. Improved numerical dissipation for time integration algorithms in structural dynamics. Earthquake Engineering & Structural Dynamics, 5(3):283-292, 1977.
[5] J. C. Butcher. Numerical Methods for Ordinary Differential Equations. John Wiley & Sons, 2016.
[6] J. A. Evans, R. R. Hiemstra, T. J. R. Hughes, and A. Reali. Explicit higher-order accurate isogeometric collocation methods for structural dynamics. Computer Methods in Applied Mechanics and Engineering, 338:208-240, 2018.
[7] G. Loli, G. Sangalli, and M. Tani. Easy and efficient preconditioning of the isogeometric mass matrix. Computers & Mathematics with Applications, 2021.
[8] T. J. R. Hughes, J. A. Evans, and A. Reali. Finite element and NURBS approximations of eigenvalue, boundary-value, and initial-value problems. Computer Methods in Applied Mechanics and Engineering, 272:290-320, 2014.
[9] T. J. R. Hughes. The Finite Element Method: Linear Static and Dynamic Finite Element Analysis. Courier Corporation, 2012.
[10] P. Behnoudfar, Q. Deng, and V. M. Calo. High-order generalized-alpha method. Applications in Engineering Science, 4:100021, 2020.
[11] P. Behnoudfar, Q. Deng, and V. M. Calo. Higher-order generalized-α methods for hyperbolic problems. arXiv preprint arXiv:1906.06081, 2019.
[12] E. Hairer and G. Wanner. Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems, volume 14. Springer, 2010.
[13] P. Behnoudfar, Q. Deng, and V. M. Calo. Split generalized-α method: A linear-cost solver for multi-dimensional second-order hyperbolic systems. Computer Methods in Applied Mechanics and Engineering, 376:113656, 2021.
[14] P. Behnoudfar, V. M. Calo, Q. Deng, and P. D. Minev. A variationally separable splitting for the generalized-α method for parabolic equations. International Journal for Numerical Methods in Engineering, 121(5):828-841, 2020.
[15] J. A. Cottrell, T. J. R. Hughes, and Y. Bazilevs. Isogeometric Analysis: Toward Integration of CAD and FEA. John Wiley & Sons, Chichester, 2009.
[16] C. de Boor. A Practical Guide to Splines, Revised Edition, volume 27 of Applied Mathematical Sciences. Springer-Verlag, New York, 2001.
[17] L. Beirão da Veiga, A. Buffa, G. Sangalli, and R. Vázquez. Mathematical analysis of variational isogeometric methods. Acta Numerica, 23:157-287, 2014.
[18] A. Toselli and O. Widlund. Domain Decomposition Methods - Algorithms and Theory, volume 34 of Springer Series in Computational Mathematics. Springer-Verlag, Berlin, 2005.
[19] L. Gao and V. M. Calo. Fast isogeometric solvers for explicit dynamics. Computer Methods in Applied Mechanics and Engineering, 274:19-41, 2014.
[20] M. Łoś, M. Paszyński, A. Kłusek, and W. Dzwinel. Application of fast isogeometric L2 projection solver for tumor growth simulations. Computer Methods in Applied Mechanics and Engineering, 316:1257-1269, 2017.
[21] L. Gao and V. M. Calo. Preconditioners based on the alternating-direction-implicit algorithm for the 2D steady-state diffusion equation with orthotropic heterogeneous coefficients. Journal of Computational and Applied Mathematics, 273:274-295, 2015.
[22] R. Vázquez. A new design for the implementation of isogeometric analysis in Octave and Matlab: GeoPDEs 3.0. Computers & Mathematics with Applications, 72(3):523-554, 2016.