A Galerkin Isogeometric Method for Karhunen-Loève Approximation of Random Fields
Sharif Rahman
College of Engineering, The University of Iowa, Iowa City, Iowa 52242, U.S.A.
✩ Grant sponsor: U.S. National Science Foundation; Grant No. CMMI-1607398.
Abstract
This paper marks the debut of a Galerkin isogeometric method for solving a Fredholm integral eigenvalue problem, enabling random field discretization by means of the Karhunen-Loève expansion. The method involves a Galerkin projection onto a finite-dimensional subspace of a Hilbert space, basis splines (B-splines) and non-uniform rational B-splines (NURBS) spanning the subspace, and standard methods of eigensolutions. Compared with the existing Galerkin methods, such as the finite-element and mesh-free methods, the NURBS-based isogeometric method upholds exact geometrical representation of the physical or computational domain and exploits the regularity of basis functions, delivering globally smooth eigensolutions. Therefore, the introduction of the isogeometric method for random field discretization is not only new; it also offers a few computational advantages over existing methods. In the big picture, the use of NURBS for random field discretization enriches the isogeometric paradigm. As a result, an uncertainty quantification pipeline of the future can be envisioned where geometric modeling, stress analysis, and stochastic simulation are all integrated using the same building blocks of NURBS. Three numerical examples, including a three-dimensional random field discretization problem, illustrate the accuracy and convergence properties of the isogeometric method for obtaining eigensolutions.
Keywords:
B-splines, NURBS, Fredholm integral eigenvalue problem, Hilbert-Schmidt operator, uncertainty quantification.
1. Introduction
Many uncertainty quantification problems in engineering and applied sciences require modeling spatial variability of random input parameters. For instance, the tensile and fracture toughness properties of engineering materials, the size and shape characteristics of mechanical components, and the wind and snow loads in structural systems all exhibit randomness that varies not only from sample to sample, but also from point to point in their respective domains. Therefore, random field treatment of spatially varying randomness is a vital ingredient in computational analysis. Loosely speaking, a random field represents a random quantity at each point of the domain and, therefore, engenders an infinite number of random variables. In practice, though, the number of random variables must be finite and manageable but also large enough to ensure an optimal or accurate approximation of the original random field. This process is often referred to as random field discretization.

A number of methods and approaches are available for random field discretization. For brevity, this paper will not perform a comprehensive review, but will direct readers to a paper by Betz et al. [1], which provides a summary of existing works, including many references cited therein. A popular approach, known by the name of Karhunen-Loève (KL) expansion [2, 3, 4], entails spectral decomposition of the covariance
Email address: [email protected] (Sharif Rahman), Professor.
function, leading to an infinite series consisting of deterministic functions of space and uncorrelated random variables. The expansion is well known with diverse applications in engineering and applied sciences [5]. However, the KL expansion mandates solution of a Fredholm integral eigenvalue problem [6], which is not an easy task in general. Analytical solutions are available only when the covariance function has simpler functional forms, such as exponential or linear functions, and/or the problem domain is rectangular. For arbitrary covariance functions or arbitrary domains in two or three dimensions, numerical methods are often needed to solve the eigenvalue problem approximately.

For numerical solution of the integral eigenvalue problem, a well-known method is the Galerkin finite-element method (FEM) employed by Ghanem and Spanos [7] in the 1990s. Roughly speaking, the finite-element solution consists of a variational formulation and function spaces defined by its basis functions [8]. These basis functions are described by local representations via finite elements, resulting in a mesh or grid, which constitutes a non-overlapping decomposition of the computational domain into elementary shapes, such as triangles or tetrahedra and quadrilaterals or hexahedra. However, for mechanical systems with complex geometry, a finite-element mesh is often created from a computer-aided design (CAD) model, where the former is an approximation of the latter. Therefore, an additional source of imprecision is embedded in the FEM-based eigensolution. Another Galerkin approach, which sidesteps the need for element-wise decomposition, is the meshless or mesh-free method, especially the element-free Galerkin method [9], upon which Rahman and Xu [10, 11] capitalize for the solution of the integral eigenvalue problem. The fundamental aspects of both FEM and the mesh-free method are identical, as they are rooted in the same Galerkin formulation, but the function spaces and their basis functions are different: in FEM, the basis functions are interpolatory polynomials with $C^0$-continuity across element boundaries, whereas in the mesh-free method, the basis functions are non-interpolatory rational functions with at least $C^1$-continuity everywhere. In consequence, the approximate eigenfunctions of the KL expansion obtained by the mesh-free method are usually globally smoother than those derived from FEM. Nonetheless, as in FEM, the link between the mesh-free method and CAD geometry is, at best, tenuous [12]. Indeed, FEM or the mesh-free method may never faithfully replicate the CAD geometry. More importantly, for complex engineering designs, generating a high-quality finite-element mesh or mesh-free discretization from the CAD geometry is more formidable than performing the analysis. This is the principal motivation behind replacing finite-element- or mesh-free-generated basis functions with CAD-generated basis functions for solving the integral eigenvalue problem directly, leading to effective random field discretization.

This paper presents a Galerkin isogeometric method for solving the integral eigenvalue problem stemming from the KL expansion of a random field with an arbitrary covariance function and an arbitrary computational domain in three dimensions.
The method entails performing a Galerkin discretization of the integral eigenvalue problem, formulation of the associated matrix eigenvalue problem by constructing the isogeometric function spaces spanned by basis splines (B-splines) and non-uniform rational B-splines (NURBS), and solution of the resultant matrix eigenvalue problem by standard methods. The paper is organized as follows. A brief exposition of NURBS paraphernalia and the isogeometric concept is given in Section 2. Section 3 formally defines a random field and its KL expansion, followed by truncation of the KL expansion and a description of associated error measures. The limitation of the KL expansion is also discussed. Section 4 presents the proposed isogeometric method for solving the integral eigenvalue problem. The properties and construction of the system matrices involved in the matrix eigenvalue problem are explained. The results from three numerical examples of increasing dimensions and hence complexity are reported in Section 5 and Appendix A. Section 6 discusses future work. Finally, conclusions are drawn in Section 7.
2. Isogeometric Analysis
Let $\mathbb{N} := \{1, 2, \ldots\}$, $\mathbb{N}_0 := \mathbb{N} \cup \{0\}$, $\mathbb{R} := (-\infty, +\infty)$, $\mathbb{R}^+_0 := [0, +\infty)$, and $\mathbb{R}^+ := (0, +\infty)$ represent the sets of positive integer (natural), non-negative integer, real, non-negative real, and positive real numbers, respectively. Denote by $d$ the dimension of the physical or computational domain $\mathcal{D}$ of a geometrical object, which can be a curve, surface, or solid in $\mathbb{R}^d$. In this work, $d = 1$, 2, or 3, and $\mathcal{D} \subset \mathbb{R}^d$ is a bounded domain. These standard notations will be used throughout the paper.

Isogeometric analysis (IGA) employs basis functions from CAD, such as B-splines and NURBS, directly in computational analysis. Hughes et al. [13] were the first to propose the isogeometric paradigm and its computational framework. For the paper to be self-contained, a brief summary of NURBS-based IGA is presented here. The description is restricted to geometries modeled as a single patch. However, for NURBS-based IGA, it is sometimes necessary to represent the physical or computational domain by a multi-patch geometric model, for example, when analyzing multiply-connected domains. Multi-patch geometries were not considered in this work.

Consider a $d$-dimensional Cartesian coordinate system in the parametric domain $\hat{\mathcal{D}} = [0,1]^d$, where an arbitrary point has coordinates $\boldsymbol{\xi} = (\xi_1, \ldots, \xi_d)$. For the coordinate direction $k$, where $k = 1, \ldots, d$, define a positive integer $n_k \in \mathbb{N}$ and a non-negative integer $p_k \in \mathbb{N}_0$, representing the total number of basis functions and the polynomial degree, respectively. Given $n_k$ and $p_k$, introduce on the parametric interval $[0,1] \subset \mathbb{R}$ an ordered knot vector

$$\Xi_k := (0 = \xi_{k,1}, \xi_{k,2}, \ldots, \xi_{k,n_k+p_k+1} = 1), \quad \xi_{k,1} \le \xi_{k,2} \le \cdots \le \xi_{k,n_k+p_k+1},$$

where $\xi_{k,i_k}$ is the $i_k$th knot with $i_k = 1, 2, \ldots, n_k+p_k+1$ representing the knot index for the coordinate direction $k$. Although not absolutely necessary, assume that $\xi_{k,1} = 0$ and $\xi_{k,n_k+p_k+1} = 1$ for any $k$, so that all parametric intervals are the same as $[0,1]$. A knot $\xi_{k,i_k}$ of the knot vector $\Xi_k$ may appear $1 \le m_{k,i_k} \le p_k+1$ times, where $m_{k,i_k}$ is referred to as its multiplicity. The multiplicity has important implications on the regularity properties of B-spline functions, to be discussed later. To monitor knots without repetitions, say, there are $r_k$ distinct knots in $\Xi_k$. Collect them into an auxiliary knot vector $Z_k := (\zeta_{k,1}, \ldots, \zeta_{k,r_k})$ and define the vector $M_k := (m_{k,1}, \ldots, m_{k,r_k})$ of their corresponding multiplicities such that

$$\Xi_k = (0 = \underbrace{\zeta_{k,1}, \ldots, \zeta_{k,1}}_{m_{k,1}\ \text{times}}, \underbrace{\zeta_{k,2}, \ldots, \zeta_{k,2}}_{m_{k,2}\ \text{times}}, \ldots, \underbrace{\zeta_{k,r_k}, \ldots, \zeta_{k,r_k}}_{m_{k,r_k}\ \text{times}} = 1), \quad \sum_{i_k=1}^{r_k} m_{k,i_k} = n_k + p_k + 1.$$

A knot vector is called open if its first and last knots appear $p_k+1$ times. Open knot vectors are standard in CAD [14].

The B-spline functions for a given degree are defined in a recursive manner using the knot vector. Denote by $N^k_{i_k,p_k}(\xi_k)$ the $i_k$th univariate B-spline function with degree $p_k$ for the coordinate direction $k$. Given the zero-degree basis functions,

$$N^k_{i_k,0}(\xi_k) = \begin{cases} 1, & \xi_{k,i_k} \le \xi_k < \xi_{k,i_k+1}, \\ 0, & \text{otherwise}, \end{cases}$$

for $k = 1, \ldots, d$, all higher-order B-spline functions are efficiently generated by the recursive Cox-de Boor formula [15, 16],

$$N^k_{i_k,p_k}(\xi_k) = \frac{\xi_k - \xi_{k,i_k}}{\xi_{k,i_k+p_k} - \xi_{k,i_k}}\, N^k_{i_k,p_k-1}(\xi_k) + \frac{\xi_{k,i_k+p_k+1} - \xi_k}{\xi_{k,i_k+p_k+1} - \xi_{k,i_k+1}}\, N^k_{i_k+1,p_k-1}(\xi_k), \quad (1)$$

where $1 \le k \le d$, $1 \le i_k \le n_k$, $1 \le p_k < \infty$, and $0/0$ is considered as zero.

The B-spline functions for any $k = 1, \ldots, d$ and $p_k \in \mathbb{N}_0$ satisfy the following desirable properties [13, 14, 15, 16]: (1) they are non-negative, that is, $N^k_{i_k,p_k}(\xi_k) \ge 0$ for all $i_k$ and $\xi_k$; (2) they are locally supported on the interval $[\xi_{k,i_k}, \xi_{k,i_k+p_k+1}]$ for all $i_k$; (3) they are linearly independent, that is, if $\sum_{i_k=1}^{n_k} c^k_{i_k} N^k_{i_k,p_k}(\xi_k) = 0$, then $c^k_{i_k} = 0$ for all $i_k$; (4) they form a partition of unity, that is, $\sum_{i_k=1}^{n_k} N^k_{i_k,p_k}(\xi_k) = 1$ for all $\xi_k \in [\xi_{k,1}, \xi_{k,n_k+p_k+1}]$; and (5) they are everywhere pointwise $C^\infty$-continuous except at the knots $\xi_{k,i_k}$ of multiplicity $m_{k,i_k}$, where they are $C^{p_k - m_{k,i_k}}$-continuous, provided that $1 \le m_{k,i_k} < p_k + 1$. (The nouns degree and order associated with IGA are used synonymously in the paper.)
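To make the recursion (1) concrete, a minimal Python sketch follows; it is an illustration added here rather than part of the original formulation, and the function name, the endpoint convention, and the interior values of the example knot vector are hypothetical choices of this sketch. Note how the printed sums confirm the partition-of-unity property listed above.

    import numpy as np

    def bspline_basis(knots, n, p, xi):
        """Evaluate all n univariate B-splines of degree p at points xi
        by the Cox-de Boor recursion (1); 0/0 is treated as zero."""
        knots = np.asarray(knots, dtype=float)
        xi = np.atleast_1d(np.asarray(xi, dtype=float))
        # Zero-degree basis: indicators of the half-open knot spans.
        N = np.zeros((len(knots) - 1, xi.size))
        for i in range(len(knots) - 1):
            N[i] = np.where((knots[i] <= xi) & (xi < knots[i + 1]), 1.0, 0.0)
        # Standard convention: the last nonempty span includes xi = 1.
        last = np.flatnonzero(knots[:-1] < knots[1:]).max()
        N[last, xi == knots[-1]] = 1.0
        for q in range(1, p + 1):
            Nq = np.zeros((N.shape[0] - 1, xi.size))
            for i in range(N.shape[0] - 1):
                d1 = knots[i + q] - knots[i]
                d2 = knots[i + q + 1] - knots[i + 1]
                a = (xi - knots[i]) / d1 if d1 > 0.0 else 0.0
                b = (knots[i + q + 1] - xi) / d2 if d2 > 0.0 else 0.0
                Nq[i] = a * N[i] + b * N[i + 1]
            N = Nq
        return N[:n]

    # Open quadratic knot vector with one repeated interior knot
    # (multiplicity 2): C^0 there, C^1 at the other interior knots.
    # The interior values below are hypothetical stand-ins.
    Xi = [0, 0, 0, 0.2, 0.4, 0.5, 0.5, 0.8, 1, 1, 1]
    vals = bspline_basis(Xi, n=8, p=2, xi=np.linspace(0, 1, 5))
    print(vals.sum(axis=0))   # partition of unity: all entries equal 1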
Figure 1: Quadratic B-splines generated from the knot vectors $\Xi_1 = \Xi_2 = (0, 0, 0, \ldots, 1, 1, 1)$ with $n_1 = n_2 = 8$ and $p_1 = p_2 = 2$; (a) eight univariate B-splines for the coordinate direction $\xi_1$; (b) a bivariate B-spline from the tensor product of $N^1_{5,2}(\xi_1)$ and $N^2_{2,2}(\xi_2)$.
Define by

$$\mathcal{N}^k := \mathcal{N}^k(\Xi_k; p_k) := \mathrm{span}\{N^k_{i_k,p_k}(\xi_k)\}_{i_k=1,\ldots,n_k}$$

the space of univariate B-splines with degree $p_k$ for the coordinate direction $k$. Figure 1(a) shows eight univariate quadratic B-spline basis functions $N^1_{i_1,p_1}(\xi_1)$, $i_1 = 1, \ldots, n_1$, when $n_1 = 8$, $p_1 = 2$, and $\Xi_1$ is the open knot vector of Figure 1 with five interior knots. The multiplicity of each interior knot is one, except at the fifth knot, where it is two. Therefore, the basis functions are $C^0$-continuous at the repeated interior knot and $C^1$-continuous at the other interior knots. Clearly, the regularities of B-splines depend on the multiplicities of the knots selected.

The multivariate B-splines in $d$ variables $\xi_1, \ldots, \xi_d$ are constructed from the tensor product of the univariate B-splines stemming from the chosen knot vectors $\Xi_1, \ldots, \Xi_d$. The corresponding auxiliary knot vectors and multiplicity vectors are $Z_1, \ldots, Z_d$ and $M_1, \ldots, M_d$, respectively. A mesh $\mathcal{Q}_h$ in the parametric domain $\hat{\mathcal{D}} = [0,1]^d$ is defined by its partition into $d$-dimensional parametric elements $Q$, that is,

$$\mathcal{Q}_h := \big\{ Q = \otimes_{k=1}^d (\zeta_{k,i_k}, \zeta_{k,i_k+1}) : 1 \le i_k \le r_k - 1 \big\}.$$

Relatedly, if the size of an element $Q \in \mathcal{Q}_h$ is defined as $\hat{h}_Q := \mathrm{diam}(Q)$, then $\hat{h} := \max_{Q \in \mathcal{Q}_h} \hat{h}_Q$ defines the global mesh size in the parametric domain.

Define two multi-indices $\mathbf{i} := (i_1, \ldots, i_d)$ and $\mathbf{p} := (p_1, \ldots, p_d)$. For the first multi-index, denote by

$$\mathcal{I} := \{ \mathbf{i} = (i_1, \ldots, i_d) : 1 \le i_k \le n_k, \ 1 \le k \le d \}$$

a multi-index set. Then, for $\mathbf{i} \in \mathcal{I}$ and $\mathbf{p} \in \mathbb{N}_0^d$, the multivariate B-spline function $B_{\mathbf{i},\mathbf{p}} : \hat{\mathcal{D}} \to \mathbb{R}$ is defined as

$$B_{\mathbf{i},\mathbf{p}}(\boldsymbol{\xi}) := \prod_{k=1}^d N^k_{i_k,p_k}(\xi_k) \quad (2)$$

with the corresponding tensor-product B-spline space

$$\mathcal{B}_h := \bigotimes_{k=1}^d \mathcal{N}^k(\Xi_k; p_k) = \bigotimes_{k=1}^d \mathrm{span}\{N^k_{i_k,p_k}(\xi_k)\}_{i_k=1,\ldots,n_k} = \mathrm{span}\{B_{\mathbf{i},\mathbf{p}}(\boldsymbol{\xi})\}_{\mathbf{i} \in \mathcal{I}}. \quad (3)$$

Note that the functions in $\mathcal{B}_h$ are piecewise polynomials of degree $p_k$ along each coordinate direction $k = 1, \ldots, d$. Figure 1(b) depicts a bivariate quadratic B-spline, which is generated from the same knot vectors $\Xi_1 = \Xi_2$ and the tensor product of $N^1_{5,2}(\xi_1)$ and $N^2_{2,2}(\xi_2)$.
Due to the tensor-product structure, multivariate B-spline functions inherit most of the aforementioned properties of their univariate counterparts, namely, non-negativity, local support, linear independence, partition of unity, and regularity. The functions are $C^\infty$-continuous in the interior of each element $Q \in \mathcal{Q}_h$, while, across element boundaries, the regularity is decided by the directional regularity in each coordinate.

With the multivariate B-spline functions and their space described by (2) and (3), the multivariate NURBS functions and the corresponding space can now be defined using a projective transformation [14, 17]. Associated with each $\mathbf{i} \in \mathcal{I}$, denote by $w_{\mathbf{i}} \in \mathbb{R}^+$ a constant positive weight. As a result, the weight function $w : \hat{\mathcal{D}} \to \mathbb{R}$ can be defined as

$$w(\boldsymbol{\xi}) := \sum_{\mathbf{i} \in \mathcal{I}} w_{\mathbf{i}} B_{\mathbf{i},\mathbf{p}}(\boldsymbol{\xi}).$$

Using the properties of B-splines, it is elementary to show that the weight function is also positive. Given $\mathbf{i} \in \mathcal{I}$ and $\mathbf{p} \in \mathbb{N}_0^d$, the multivariate NURBS function $R_{\mathbf{i},\mathbf{p}} : \hat{\mathcal{D}} \to \mathbb{R}$ is defined as [14, 17]

$$R_{\mathbf{i},\mathbf{p}}(\boldsymbol{\xi}) := \frac{w_{\mathbf{i}} B_{\mathbf{i},\mathbf{p}}(\boldsymbol{\xi})}{w(\boldsymbol{\xi})} = \frac{w_{\mathbf{i}} B_{\mathbf{i},\mathbf{p}}(\boldsymbol{\xi})}{\sum_{\mathbf{j} \in \mathcal{I}} w_{\mathbf{j}} B_{\mathbf{j},\mathbf{p}}(\boldsymbol{\xi})},$$

producing the NURBS function space

$$\mathcal{R}_h := \mathrm{span}\{R_{\mathbf{i},\mathbf{p}}(\boldsymbol{\xi})\}_{\mathbf{i} \in \mathcal{I}} \quad (4)$$

on the parametric domain $\hat{\mathcal{D}}$.

The NURBS functions described in the preceding inherit all of the important properties from their piecewise polynomial counterparts as follows [13]: (1) they constitute a partition of unity; (2) the NURBS and B-spline functions have the same continuity and support; (3) they possess the property of affine transformations; (4) setting all the weights to be equal, a NURBS function reduces to a scaled B-spline function; and (5) the NURBS surfaces and solids are projective transformations of tensor-product, piecewise polynomial entities.

Using multivariate B-spline and NURBS functions, a geometric object in $\mathbb{R}^d$, such as a curve, surface, or solid, can be readily generated. For each $\mathbf{i} \in \mathcal{I}$, let $\mathbf{C}_{\mathbf{i}} \in \mathbb{R}^d$ be a control point. Denote by $n_c := |\mathcal{I}|$ the cardinality of $\mathcal{I}$, representing the number of such control points. Call the collection of such control points $\{\mathbf{C}_{\mathbf{i}}\}_{\mathbf{i} \in \mathcal{I}}$ a control mesh. Using NURBS functions, the physical domain $\mathcal{D} \subset \mathbb{R}^d$ is obtained by a geometrical mapping $\mathbf{x} : \hat{\mathcal{D}} \to \mathcal{D} \subset \mathbb{R}^d$, which is described more explicitly by

$$\mathbf{x}(\boldsymbol{\xi}) = \sum_{\mathbf{i} \in \mathcal{I}} R_{\mathbf{i},\mathbf{p}}(\boldsymbol{\xi})\, \mathbf{C}_{\mathbf{i}}. \quad (5)$$

A similar mapping can be defined using multivariate B-spline functions. However, not all objects or domains, some of which are commonly used in engineering, can be represented by B-splines. For instance, free-form surfaces and conic sections, such as circles, ellipses, cylinders, spheres, ellipsoids, and tori, cannot be described by piecewise polynomials. In contrast, NURBS functions equipped with judiciously selected weights can represent them exactly [14, 17]. Therefore, the use of NURBS, that is, (5), becomes necessary in the CAD community.
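The tensor-product construction (2) and its rational counterpart are straightforward to evaluate. The sketch below, a freestanding illustration under the same notation, computes a bivariate B-spline and the corresponding NURBS value at a parametric point; it leans on SciPy's BSpline.basis_element for the univariate factors, and the knot vectors and weights shown are arbitrary choices of this example.

    import numpy as np
    from scipy.interpolate import BSpline

    def univariate_basis(T, p, i, x):
        """N_{i,p} on knot vector T, evaluated at x (zero off-support)."""
        b = BSpline.basis_element(T[i:i + p + 2], extrapolate=False)
        return np.nan_to_num(b(x))

    # Open quadratic knot vectors in each direction (example values).
    T1 = np.array([0, 0, 0, 0.5, 1, 1, 1]); p1 = 2
    T2 = np.array([0, 0, 0, 0.5, 1, 1, 1]); p2 = 2
    n1, n2 = len(T1) - p1 - 1, len(T2) - p2 - 1   # 4 functions per direction

    w = np.ones((n1, n2)); w[1, 2] = 4.0   # arbitrary positive weights

    def B(i1, i2, xi1, xi2):
        """Bivariate tensor-product B-spline, Eq. (2)."""
        return univariate_basis(T1, p1, i1, xi1) * univariate_basis(T2, p2, i2, xi2)

    def R(i1, i2, xi1, xi2):
        """Bivariate NURBS: weighted B-spline over the weight function."""
        W = sum(w[j1, j2] * B(j1, j2, xi1, xi2)
                for j1 in range(n1) for j2 in range(n2))
        return w[i1, i2] * B(i1, i2, xi1, xi2) / W

    xi = (0.3, 0.6)
    print(sum(R(j1, j2, *xi) for j1 in range(n1) for j2 in range(n2)))  # 1.0

With a set of control points, the same rational functions drive the geometrical mapping (5); with all weights equal, R reduces to a scaled B-spline, as noted in property (4) above.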
Using the geometrical mapping (5), the physical mesh $\mathcal{K}_h$, say, of the physical domain can now be viewed as the image of the parametric mesh $\mathcal{Q}_h$, that is,

$$\mathcal{K}_h := \{ K = \mathbf{x}(Q) : Q \in \mathcal{Q}_h \}, \quad (6)$$

where the element $K$ of the physical mesh is the image of the element $Q$ of the parametric mesh. Following Bazilevs et al. [18], define the global physical mesh size as $h := \max_{K \in \mathcal{K}_h} h_K$, where $h_K = \|\nabla\mathbf{x}\|_{L^\infty(Q)}\, \hat{h}_Q$ is the size of element $K$. Here, $\nabla\mathbf{x}$ is the Jacobian of the mapping $\mathbf{x} : \hat{\mathcal{D}} \to \mathcal{D} \subset \mathbb{R}^d$, comprising a matrix of partial derivatives of the components of $\mathbf{x}$ with respect to those of $\boldsymbol{\xi}$, and $\|\cdot\|_{L^\infty(Q)}$ is the infinity norm of the matrix. Moreover, define the space of NURBS functions in the physical domain $\mathcal{D}$ as the push-forward of the NURBS space $\mathcal{R}_h$ in (4) via

$$\mathcal{V}_h := \mathrm{span}\big\{R_{\mathbf{i},\mathbf{p}} \circ \mathbf{x}^{-1}\big\}_{\mathbf{i}\in\mathcal{I}} = \mathrm{span}\big\{\bar{R}_{\mathbf{i},\mathbf{p}}\big\}_{\mathbf{i}\in\mathcal{I}}, \quad (7)$$

where $\bar{R}_{\mathbf{i},\mathbf{p}} := R_{\mathbf{i},\mathbf{p}} \circ \mathbf{x}^{-1}$ is the NURBS function in the physical domain. It is assumed that the mapping (5) is invertible almost everywhere in $\mathcal{D}$ and has a smooth inverse on each element $K$ of the physical mesh $\mathcal{K}_h$.

The accuracy of IGA depends on the enrichment of the NURBS spaces $\mathcal{R}_h$ and $\mathcal{V}_h$ in (4) and (7) via refinement. There are three principal types of refinement. A simple and straightforward type, namely knot insertion, is equivalent to the $h$-refinement commonly used in FEM. For knot insertion, a finer mesh is constructed by adding knots to the existing knot vectors without changing the geometry. As an example, consider inserting a new knot $\xi'_k \in [\xi_{k,l}, \xi_{k,l+1})$, $1 \le l \le n_k + p_k$, into the existing knot vector $\Xi_k := (\xi_{k,1}, \xi_{k,2}, \ldots, \xi_{k,n_k+p_k+1})$, which produces $n_k$ B-spline functions. Applying the Cox-de Boor formula (1) to the new knot vector, say,

$$\Xi'_k := (\xi'_{k,1}, \xi'_{k,2}, \ldots, \xi'_{k,n_k+p_k+2}) = (\xi_{k,1}, \xi_{k,2}, \ldots, \xi_{k,l}, \xi'_k, \xi_{k,l+1}, \ldots, \xi_{k,n_k+p_k+1}),$$

a new set of $n_k + 1$ basis functions is created with their span nesting the span of the existing basis functions. The process can be repeated for additional knots. Moreover, the $h$-refinement can be performed globally in all $d$ coordinate directions or individually in select coordinate directions. (The $h$-refinements in all and in a single coordinate direction(s) are illustrated in Examples 2 and 3, respectively, of Section 5.) Henceforth, for a NURBS object in $\mathbb{R}^d$, a new set of control points should be defined for the new basis functions to obtain an object that is geometrically and parametrically the same as the original one. In other words, the geometry of the physical domain is preserved in $h$-refinement. Nonetheless, under $h$-refinement, there are increases in the number of elements as well as in the number of basis functions and, consequently, in the number of control points.

Another prominent type of refinement is called order elevation, which is reminiscent of $p$-refinement in FEM. For order elevation, higher-order polynomials are used on the same mesh, again, without changing the geometry. In other words, the geometry of the physical domain is also upheld in $p$-refinement. A third type of refinement, called $k$-refinement, has no analog in FEM, but it encompasses both knot insertion and order elevation. For a detailed description of all three types of refinement, including examples, see the book by Cottrell et al. [12].

Let $L^2(\mathcal{D})$ be a Hilbert space of real-valued square-integrable functions on $\mathcal{D} \subset \mathbb{R}^d$ equipped with the usual inner product and induced norm

$$(u, v)_{L^2(\mathcal{D})} := \int_{\mathcal{D}} u(\mathbf{x})\, v(\mathbf{x})\, d\mathbf{x} \quad \text{and} \quad \|u\|_{L^2(\mathcal{D})} := (u, u)^{1/2}_{L^2(\mathcal{D})},$$

respectively, for any $u, v \in L^2(\mathcal{D})$. Define a Sobolev space $H^l(\mathcal{D})$ of order $l \in \mathbb{N}_0$, where $H^0(\mathcal{D}) = L^2(\mathcal{D})$, with the standard norm and seminorm

$$\|u\|_{H^l(\mathcal{D})} := \Big(\sum_{k=0}^l |u|^2_{H^k(\mathcal{D})}\Big)^{1/2} \quad \text{and} \quad |u|_{H^k(\mathcal{D})} := \Big(\sum_{|\mathbf{i}|=k} \|\partial^{\mathbf{i}} u\|^2_{L^2(\mathcal{D})}\Big)^{1/2},$$

respectively, for any $u \in H^l(\mathcal{D})$,
where the multi-index $\mathbf{i} = (i_1, \ldots, i_d) \in \mathbb{N}_0^d$, $|\mathbf{i}| = i_1 + \cdots + i_d$, and $\partial^{\mathbf{i}} u = \partial^{|\mathbf{i}|} u / \partial x_1^{i_1} \cdots \partial x_d^{i_d}$.

For error analysis of NURBS-based IGA, consider interpolating a function $u \in L^2(\mathcal{D})$ defined on $\mathcal{D} \subset \mathbb{R}^d$. However, the decay of the interpolation error depends on how the NURBS space is refined. Here, an important result pertaining to the error analysis of NURBS, derived by Bazilevs et al. [18], is briefly summarized. In doing so, consider a family of meshes $\{\mathcal{Q}_h\}_{h>0}$ over the parametric domain $\hat{\mathcal{D}}$, which is subjected to $h$-refinement, while keeping the polynomial degree fixed. The error estimate is based on introducing a support extension $\bar{Q}$ of an element $Q$ of the parametric mesh $\mathcal{Q}_h$, defined as the union of the supports of basis functions whose supports intersect the element $Q$. Similarly, the physical support extension $\bar{K}$, say, of an element $K = \mathbf{x}(Q)$ of the physical mesh $\mathcal{K}_h$ in (6) is obtained as the image of $\bar{Q}$ via the geometrical mapping in (5). Given a function $u \in L^2(\mathcal{D})$, denote by $\Pi_{\mathcal{V}_h} : L^2(\mathcal{D}) \to \mathcal{V}_h$ a projective operator, where the NURBS space $\mathcal{V}_h$ is defined in (7). The local error, described in Theorem 1, proves convergence of NURBS-based function interpolations.

Theorem 1 (Local Error Estimate [18]). Let $\mathcal{V}_h$ be the NURBS space in the physical domain $\mathcal{D}$ defined by the NURBS functions endowed with degrees $p_1 = \cdots = p_d = p$, where $p \in \mathbb{N}$. Given the integers $l$ and $r$ such that $0 \le l \le r \le p + 1$, the local estimate of the interpolation error for a function $u \in L^2(\mathcal{D}) \cap H^r(\bar{K})$, measured in terms of the $l$th-order seminorm $|\cdot|_{H^l(K)}$ of the Sobolev space $H^l(K)$, is

$$|u - \Pi_{\mathcal{V}_h} u|_{H^l(K)} \le C \sum_{K \in \mathcal{K}_h} h_K^{r-l} \sum_{i=0}^{r} \|\nabla\mathbf{x}\|^{i-r}_{L^\infty(\bar{Q})}\, |u|_{H^i(\bar{K})},$$

where $h_K$ is the size of element $K$ of $\mathcal{K}_h$ and $C$ is a constant that depends on $p$ and the shape of the domain $\mathcal{D}$.

In addition, a follow-up global error estimate, described by Theorem 2 of the paper by Bazilevs et al. [18], is given. These error estimates demonstrate that the NURBS space delivers the optimal rate of convergence, as do the finite-element spaces of the same degree.
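The convergence rate asserted by Theorem 1 can be observed numerically in one dimension. The following sketch, a freestanding check rather than the paper's computation, fits least-squares splines of fixed degree on successively h-refined open knot vectors as a discrete stand-in for the L2-projection and reports the observed convergence order for a smooth test function; the target function and mesh sequence are choices of this illustration.

    import numpy as np
    from scipy.interpolate import make_lsq_spline

    p = 2                                   # fixed degree; h-refinement only
    u = lambda x: np.sin(2 * np.pi * x)     # smooth target function
    x = np.linspace(0.0, 1.0, 4001)         # dense fit/evaluation grid
    errs, hs = [], []
    for n_el in [4, 8, 16, 32, 64]:
        interior = np.linspace(0.0, 1.0, n_el + 1)[1:-1]
        t = np.r_[[0.0] * (p + 1), interior, [1.0] * (p + 1)]  # open knots
        spl = make_lsq_spline(x, u(x), t, k=p)  # discrete L2 projection
        errs.append(np.sqrt(np.mean((u(x) - spl(x)) ** 2)))   # ~ L2 error
        hs.append(1.0 / n_el)
    orders = np.diff(np.log(errs)) / np.diff(np.log(hs))
    print(orders)   # approaches p + 1 = 3, i.e., Theorem 1 with l = 0, r = p + 1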
3. Karhunen-Loève Representation

Let $(\Omega, \mathcal{F}, P)$ be a complete probability space, where $\Omega$ is a sample space, $\mathcal{F}$ is a $\sigma$-field on $\Omega$, and $P : \mathcal{F} \to [0,1]$ is a probability measure. Denote by $L^2(\Omega, \mathcal{F}, P)$ a Hilbert space of random variables defined on $(\Omega, \mathcal{F}, P)$ and by $L^2(\mathcal{D} \times \Omega)$ a Hilbert space of random fields defined on $\mathcal{D}$. A random variable or random field, if it is a member of the associated Hilbert space, has finite second-moment properties.

A real-valued random field $\alpha$ defined on a bounded domain $\mathcal{D} \subset \mathbb{R}^d$, where $d = 1$, 2, or 3, is a mapping $\alpha : \mathcal{D} \times \Omega \to \mathbb{R}$ such that for each $\mathbf{x} \in \mathcal{D}$, $\alpha(\mathbf{x}, \cdot)$ is a random variable with respect to $(\Omega, \mathcal{F}, P)$. Given the expectation operator $\mathbb{E}$ with respect to the probability measure $P$, denote by $\mu(\mathbf{x}) := \mathbb{E}[\alpha(\mathbf{x}, \cdot)]$ the mean function and by

$$\Gamma(\mathbf{x}, \mathbf{x}') := \mathbb{E}[(\alpha(\mathbf{x}, \cdot) - \mu(\mathbf{x}))(\alpha(\mathbf{x}', \cdot) - \mu(\mathbf{x}'))], \quad \mathbf{x}, \mathbf{x}' \in \mathcal{D},$$

the covariance function of $\alpha(\mathbf{x}, \cdot)$. Without loss of generality, assume that $\mu(\mathbf{x}) = 0$. A non-zero-mean random field can be obtained by simply adding the mean function to a zero-mean random field.

Random fields are often assumed to be homogeneous or stationary, meaning that their finite-dimensional probability distributions are invariant under arbitrary translations. This implies that the covariance function is a function of the argument difference $\mathbf{x} - \mathbf{x}'$. Moreover, random fields are sometimes assumed to be isotropic, that is, invariant under orthogonal transformations, in which case the covariance function becomes a function of the distance $\|\mathbf{x} - \mathbf{x}'\|$. Finally, additional assumptions are needed to ensure that the samples of random fields are continuous and differentiable in a mean-square or almost sure sense. Figure 2 shows four commonly used covariance functions of a homogeneous random field defined on $[0, L]$, expressed in terms of the variance $\sigma^2$ and correlation length parameter $b$ of the random field.

Let $\alpha(\mathbf{x}, \cdot) \in L^2(\mathcal{D} \times \Omega)$ be a random field such that for each $\mathbf{x} \in \mathcal{D}$, $\alpha(\mathbf{x}, \cdot)$ is a random variable in $L^2(\Omega, \mathcal{F}, P)$ and, given a realization $\omega \in \Omega$, $\alpha(\mathbf{x}, \omega) \in L^2(\mathcal{D})$. A definition of the Hilbert-Schmidt integral operator [19], followed by a few relevant propositions and a theorem, leads to a formal description of the KL expansion.

Definition 2.
For a bounded domain $\mathcal{D} \subset \mathbb{R}^d$, a function $\kappa : \mathcal{D} \times \mathcal{D} \to \mathbb{R}$ is called a Hilbert-Schmidt kernel if $\kappa \in L^2(\mathcal{D} \times \mathcal{D})$, that is, if

$$\int_{\mathcal{D}}\int_{\mathcal{D}} |\kappa(\mathbf{x}, \mathbf{x}')|^2\, d\mathbf{x}\, d\mathbf{x}' < \infty.$$

Correspondingly, the Hilbert-Schmidt operator $\mathcal{G}_\kappa : L^2(\mathcal{D}) \to L^2(\mathcal{D})$ is defined as

$$(\mathcal{G}_\kappa \phi)(\mathbf{x}) := \int_{\mathcal{D}} \kappa(\mathbf{x}, \mathbf{x}')\, \phi(\mathbf{x}')\, d\mathbf{x}' \quad \forall\, \phi(\mathbf{x}) \in L^2(\mathcal{D}).$$

Proposition 3.
The covariance function $\Gamma : \mathcal{D} \times \mathcal{D} \to \mathbb{R}$ of a random field $\alpha(\mathbf{x}, \cdot) \in L^2(\mathcal{D} \times \Omega)$ is a Hilbert-Schmidt kernel, and, therefore, $\mathcal{G}_\Gamma : L^2(\mathcal{D}) \to L^2(\mathcal{D})$, defined as

$$(\mathcal{G}_\Gamma \phi)(\mathbf{x}) := \int_{\mathcal{D}} \Gamma(\mathbf{x}, \mathbf{x}')\, \phi(\mathbf{x}')\, d\mathbf{x}' \quad \forall\, \phi(\mathbf{x}) \in L^2(\mathcal{D}), \quad (8)$$

is the Hilbert-Schmidt operator associated with the covariance function.

Proof. Since $\alpha(\mathbf{x}, \cdot) \in L^2(\mathcal{D} \times \Omega)$, the covariance function is square-integrable and hence a bounded function. Therefore, the results of the proposition follow readily.

Figure 2: Four commonly used covariance functions of a homogeneous random field defined on $[0, L]$: (a) exponential function: $\Gamma(x, x') = \sigma^2 \exp[-|x - x'|/(bL)]$; (b) Gaussian function: $\Gamma(x, x') = \sigma^2 \exp[-|x - x'|^2/(bL)^2]$; (c) sinusoidal function: $\Gamma(x, x') = \sigma^2 \sin(|x - x'|/(bL)) / [|x - x'|/(bL)]$; and (d) Bessel function: $\Gamma(x, x') = \sigma^2\, |x - x'|/(bL)\, K_1(|x - x'|/(bL))$. Here, $L = 1$; $K_1$ is the modified second-kind Bessel function of order one. Each panel shows the cases $b = 1$ and $b = 0.1$.
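The four covariance models of Figure 2 are easy to encode for later reuse; the hedged Python sketch below is an illustration added here, where the only subtlety is the removable limit at x = x' of the sinusoidal and Bessel models, and the function names are choices of this example.

    import numpy as np
    from scipy.special import kv   # modified Bessel function, second kind

    def cov_exponential(r, sigma=1.0, b=1.0, L=1.0):
        # Panel (a): exponential covariance
        return sigma**2 * np.exp(-np.abs(r) / (b * L))

    def cov_gaussian(r, sigma=1.0, b=1.0, L=1.0):
        # Panel (b): Gaussian (squared-exponential) covariance
        return sigma**2 * np.exp(-(np.abs(r) / (b * L))**2)

    def cov_sinusoidal(r, sigma=1.0, b=1.0, L=1.0):
        # Panel (c): sin(s)/s, with its removable limit 1 at s = 0
        s = np.abs(r) / (b * L)
        return sigma**2 * np.sinc(s / np.pi)   # np.sinc(x) = sin(pi x)/(pi x)

    def cov_bessel(r, sigma=1.0, b=1.0, L=1.0):
        # Panel (d): s*K1(s), whose limit as s -> 0 is 1
        s = np.abs(np.atleast_1d(r).astype(float)) / (b * L)
        out = np.ones_like(s)
        nz = s > 0
        out[nz] = s[nz] * kv(1, s[nz])
        return sigma**2 * out

    r = np.linspace(0.0, 1.0, 5)
    print(cov_exponential(r, b=0.1))   # short correlation length: rapid decay
    print(cov_bessel(r, b=1.0))        # equals sigma^2 at r = 0 by the limit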
Proposition 4.
Let $\mathcal{G}_\Gamma : L^2(\mathcal{D}) \to L^2(\mathcal{D})$ defined in (8) be the Hilbert-Schmidt operator associated with the covariance function $\Gamma : \mathcal{D} \times \mathcal{D} \to \mathbb{R}$ of a zero-mean random field $\alpha(\mathbf{x}, \cdot) \in L^2(\mathcal{D} \times \Omega)$. Then $\mathcal{G}_\Gamma$ is a linear, compact, positive-semidefinite, and self-adjoint operator.

Proof. From (8), $\mathcal{G}_\Gamma$ is obviously linear. Any Hilbert-Schmidt operator from $L^2(\mathcal{D})$ to $L^2(\mathcal{D})$ can be expressed as the limit of a sequence of bounded finite-rank operators. Then, from Lemma 1.2.3 of Atkinson [20], $\mathcal{G}_\Gamma$ is compact. To prove positive-semidefiniteness, invoke the usual inner product $(\cdot, \cdot)_{L^2(\mathcal{D})}$ of $L^2(\mathcal{D})$ to demonstrate, for any $0 \ne \phi(\mathbf{x}) \in L^2(\mathcal{D})$, that

$$((\mathcal{G}_\Gamma \phi)(\mathbf{x}), \phi(\mathbf{x}))_{L^2(\mathcal{D})} = \int_{\mathcal{D}}\int_{\mathcal{D}} \Gamma(\mathbf{x}, \mathbf{x}')\,\phi(\mathbf{x})\,\phi(\mathbf{x}')\, d\mathbf{x}\, d\mathbf{x}' = \int_{\mathcal{D}}\int_{\mathcal{D}} \mathbb{E}[\alpha(\mathbf{x}, \cdot)\,\alpha(\mathbf{x}', \cdot)]\,\phi(\mathbf{x})\,\phi(\mathbf{x}')\, d\mathbf{x}\, d\mathbf{x}'$$
$$= \mathbb{E}\left[\left(\int_{\mathcal{D}} \alpha(\mathbf{x}, \cdot)\,\phi(\mathbf{x})\, d\mathbf{x}\right)\left(\int_{\mathcal{D}} \alpha(\mathbf{x}', \cdot)\,\phi(\mathbf{x}')\, d\mathbf{x}'\right)\right] = \mathbb{E}\left[\left(\int_{\mathcal{D}} \alpha(\mathbf{x}, \cdot)\,\phi(\mathbf{x})\, d\mathbf{x}\right)^2\right] \ge 0,$$

where Fubini's theorem is employed to interchange the integrals.

Finally, as the covariance function $\Gamma(\mathbf{x}, \mathbf{x}')$ is symmetric with respect to its arguments $\mathbf{x}$ and $\mathbf{x}'$,

$$((\mathcal{G}_\Gamma \phi)(\mathbf{x}), \psi(\mathbf{x}))_{L^2(\mathcal{D})} = (\phi(\mathbf{x}), (\mathcal{G}_\Gamma \psi)(\mathbf{x}))_{L^2(\mathcal{D})}$$

for any $\phi(\mathbf{x}), \psi(\mathbf{x}) \in L^2(\mathcal{D})$, proving that $\mathcal{G}_\Gamma$ is self-adjoint.

A linear, compact, positive-semidefinite, and self-adjoint operator, such as $\mathcal{G}_\Gamma : L^2(\mathcal{D}) \to L^2(\mathcal{D})$, on an infinite-dimensional Hilbert space has spectral properties resembling those of a positive-semidefinite, symmetric matrix. Proposition 5 describes the spectral properties of $\mathcal{G}_\Gamma$.

Proposition 5.
Let $\mathcal{G}_\Gamma : L^2(\mathcal{D}) \to L^2(\mathcal{D})$ defined in (8) be the Hilbert-Schmidt operator associated with the covariance function $\Gamma : \mathcal{D} \times \mathcal{D} \to \mathbb{R}$ of a random field $\alpha(\mathbf{x}, \cdot) \in L^2(\mathcal{D} \times \Omega)$. There exists an infinite sequence of eigenpairs $\{\lambda_i, \phi_i(\mathbf{x})\}_{i \in \mathbb{N}}$ of $\mathcal{G}_\Gamma$, which is the solution of

$$(\mathcal{G}_\Gamma \phi)(\mathbf{x}) = \lambda\,\phi(\mathbf{x}) \quad \text{or} \quad \int_{\mathcal{D}} \Gamma(\mathbf{x}, \mathbf{x}')\,\phi(\mathbf{x}')\, d\mathbf{x}' = \lambda\,\phi(\mathbf{x}), \quad (9)$$

known as the Fredholm integral equation of the second kind. Moreover, the eigensolutions, where the eigenfunctions have been normalized such that

$$\|\phi_i(\mathbf{x})\|^2_{L^2(\mathcal{D})} := \int_{\mathcal{D}} \phi_i^2(\mathbf{x})\, d\mathbf{x} = 1,$$

satisfy the following properties.
1. The eigenvalues $\lambda_i \in \mathbb{R}^+_0$, $i \in \mathbb{N}$, are real and non-negative, having zero as the only point of accumulation. Moreover, the eigenvalues can be arranged in a descending order as follows: $\lambda_1 \ge \lambda_2 \ge \cdots \ge 0$.
2. The eigenfunctions $\phi_i(\mathbf{x}) \in L^2(\mathcal{D})$, $i \in \mathbb{N}$, corresponding to distinct eigenvalues, are mutually orthonormal, that is,
$$(\phi_i(\mathbf{x}), \phi_j(\mathbf{x}))_{L^2(\mathcal{D})} := \int_{\mathcal{D}} \phi_i(\mathbf{x})\,\phi_j(\mathbf{x})\, d\mathbf{x} = \delta_{ij}, \quad (10)$$
where $\delta_{ij}$ is the Kronecker delta, that is, $\delta_{ij} = 1$ when $i = j$ and $\delta_{ij} = 0$ when $i \ne j$.
3. The number of eigenfunctions corresponding to each non-zero eigenvalue is finite.
4. The sequence of eigenfunctions $\{\phi_i(\mathbf{x})\}_{i \in \mathbb{N}}$ forms an orthonormal basis of $L^2(\mathcal{D})$, that is, $L^2(\mathcal{D}) = \mathrm{span}\{\phi_i(\mathbf{x})\}_{i \in \mathbb{N}}$.

Theorem 6 (Mercer's theorem [21]).
Let $\Gamma : \mathcal{D} \times \mathcal{D} \to \mathbb{R}$ be a continuous covariance function of a random field $\alpha(\mathbf{x}, \cdot) \in L^2(\mathcal{D} \times \Omega)$ and $\mathcal{G}_\Gamma : L^2(\mathcal{D}) \to L^2(\mathcal{D})$ defined in (8) be the associated Hilbert-Schmidt operator. If $\{\lambda_i, \phi_i(\mathbf{x})\}_{i \in \mathbb{N}}$ is an infinite sequence of eigenpairs of $\mathcal{G}_\Gamma$, then

$$\Gamma(\mathbf{x}, \mathbf{x}') = \sum_{i=1}^\infty \lambda_i\,\phi_i(\mathbf{x})\,\phi_i(\mathbf{x}') \quad (11)$$

for all $\mathbf{x}, \mathbf{x}' \in \mathcal{D}$, where the infinite series on the right converges pointwise and uniformly on $\mathcal{D} \times \mathcal{D}$.

Given the mathematical results of Propositions 3, 4, and 5 and Theorem 6, the KL expansion is presented as follows.
Theorem 7 (Karhunen-Loève).
Let $\alpha(\mathbf{x}, \cdot) \in L^2(\mathcal{D} \times \Omega)$ be a real-valued random field with zero mean, continuous covariance function $\Gamma : \mathcal{D} \times \mathcal{D} \to \mathbb{R}$, and associated Hilbert-Schmidt operator $\mathcal{G}_\Gamma : L^2(\mathcal{D}) \to L^2(\mathcal{D})$ defined in (8). Given an infinite sequence of eigenpairs $\{\lambda_i, \phi_i(\mathbf{x})\}_{i \in \mathbb{N}}$ of $\mathcal{G}_\Gamma$, the random field admits a convergent infinite series expansion

$$\alpha(\mathbf{x}, \cdot) \sim \sum_{i=1}^\infty \sqrt{\lambda_i}\,\phi_i(\mathbf{x})\, X_i, \quad (12)$$

where $\{X_i\}_{i \in \mathbb{N}}$ is an infinite sequence of zero-mean, standardized, uncorrelated random variables, that is,

$$\mathbb{E}[X_i] = \int_\Omega X_i(\omega)\, dP(\omega) = 0, \quad \mathbb{E}[X_i X_j] = \int_\Omega X_i(\omega)\, X_j(\omega)\, dP(\omega) = \delta_{ij}, \quad i, j \in \mathbb{N},$$

with each random variable $X_i$ defined, for $\lambda_i \ne 0$, as

$$X_i := \frac{1}{\sqrt{\lambda_i}} \int_{\mathcal{D}} \alpha(\mathbf{x}, \cdot)\,\phi_i(\mathbf{x})\, d\mathbf{x}, \quad \mathbf{x} \in \mathcal{D}. \quad (13)$$

Proof.
With the recognition that $\alpha(\mathbf{x}, \cdot)$ has zero mean, apply the expectation operator on (13) to obtain $\mathbb{E}[X_i] = 0$ for all $i$. The orthogonality of the random variables $\{X_i\}_{i \in \mathbb{N}}$ follows from the orthogonality of the eigenfunctions of $\mathcal{G}_\Gamma$. Indeed, using (13) again, with (9) and (10) in mind,

$$\mathbb{E}[X_i X_j] = \frac{1}{\sqrt{\lambda_i}\sqrt{\lambda_j}}\,\mathbb{E}\left[\int_{\mathcal{D}}\int_{\mathcal{D}} \alpha(\mathbf{x}, \cdot)\,\alpha(\mathbf{x}', \cdot)\,\phi_i(\mathbf{x})\,\phi_j(\mathbf{x}')\, d\mathbf{x}\, d\mathbf{x}'\right] = \frac{1}{\sqrt{\lambda_i}\sqrt{\lambda_j}} \int_{\mathcal{D}}\left(\int_{\mathcal{D}} \Gamma(\mathbf{x}, \mathbf{x}')\,\phi_j(\mathbf{x}')\, d\mathbf{x}'\right)\phi_i(\mathbf{x})\, d\mathbf{x} = \frac{1}{\sqrt{\lambda_i}\sqrt{\lambda_j}} \int_{\mathcal{D}} \lambda_j\,\phi_j(\mathbf{x})\,\phi_i(\mathbf{x})\, d\mathbf{x} = \delta_{ij}.$$

For convergence analysis, given an integer $N \in \mathbb{N}$, define a second-moment error

$$e_N(\mathbf{x}) := \mathbb{E}\left[\left(\alpha(\mathbf{x}, \cdot) - \sum_{i=1}^N \sqrt{\lambda_i}\,\phi_i(\mathbf{x})\, X_i\right)^2\right]$$

committed by the $N$-term truncation of the infinite series on the right side of (12). On expansion,

$$e_N(\mathbf{x}) = \mathbb{E}[\alpha^2(\mathbf{x}, \cdot)] + \mathbb{E}\left[\left(\sum_{i=1}^N \sqrt{\lambda_i}\,\phi_i(\mathbf{x})\, X_i\right)^2\right] - 2\,\mathbb{E}\left[\alpha(\mathbf{x}, \cdot)\sum_{i=1}^N \sqrt{\lambda_i}\,\phi_i(\mathbf{x})\, X_i\right]$$
$$= \Gamma(\mathbf{x}, \mathbf{x}) + \mathbb{E}\left[\sum_{i=1}^N\sum_{j=1}^N \sqrt{\lambda_i}\sqrt{\lambda_j}\,\phi_i(\mathbf{x})\,\phi_j(\mathbf{x})\, X_i X_j\right] - 2\,\mathbb{E}\left[\sum_{i=1}^N \int_{\mathcal{D}} \alpha(\mathbf{x}, \cdot)\,\alpha(\mathbf{x}', \cdot)\,\phi_i(\mathbf{x}')\,\phi_i(\mathbf{x})\, d\mathbf{x}'\right]$$
$$= \Gamma(\mathbf{x}, \mathbf{x}) + \sum_{i=1}^N \lambda_i\,\phi_i^2(\mathbf{x}) - 2\sum_{i=1}^N \int_{\mathcal{D}} \mathbb{E}[\alpha(\mathbf{x}, \cdot)\,\alpha(\mathbf{x}', \cdot)]\,\phi_i(\mathbf{x}')\,\phi_i(\mathbf{x})\, d\mathbf{x}'$$
$$= \Gamma(\mathbf{x}, \mathbf{x}) + \sum_{i=1}^N \lambda_i\,\phi_i^2(\mathbf{x}) - 2\sum_{i=1}^N \left(\int_{\mathcal{D}} \Gamma(\mathbf{x}, \mathbf{x}')\,\phi_i(\mathbf{x}')\, d\mathbf{x}'\right)\phi_i(\mathbf{x})$$
$$= \Gamma(\mathbf{x}, \mathbf{x}) + \sum_{i=1}^N \lambda_i\,\phi_i^2(\mathbf{x}) - 2\sum_{i=1}^N \lambda_i\,\phi_i^2(\mathbf{x}) = \Gamma(\mathbf{x}, \mathbf{x}) - \sum_{i=1}^N \lambda_i\,\phi_i^2(\mathbf{x}).$$

Here, the second line is obtained by recognizing that the variance of $\alpha(\mathbf{x}, \cdot)$ is $\Gamma(\mathbf{x}, \mathbf{x})$ and applying (13); the third line is attained by using the orthogonality of the random variables and bringing the expectation operator inside the integral; the fourth and fifth lines are acquired by the definition of the covariance function and applying (9) and (10); and finally, the last line is the result of reduction. Taking the limit $N \to \infty$ and invoking Mercer's theorem yields

$$\lim_{N \to \infty} e_N(\mathbf{x}) = \Gamma(\mathbf{x}, \mathbf{x}) - \sum_{i=1}^\infty \lambda_i\,\phi_i^2(\mathbf{x}) = 0,$$

meaning that the expansion (12) converges to $\alpha(\mathbf{x}, \cdot)$ uniformly on $\mathcal{D}$ and in $L^2(\Omega, \mathcal{F}, P)$.

The KL expansion contains an infinite number of eigenpairs and random variables. In practice, the number must be finite, meaning that the expansion must be truncated. A straightforward approach, assuming that the eigenvalues have been arranged in a descending sequence, entails retaining the first $N \in \mathbb{N}$ terms of the expansion. The result is an $N$-term truncation or KL approximation

$$\alpha_N(\mathbf{x}, \cdot) = \sum_{i=1}^N \sqrt{\lambda_i}\,\phi_i(\mathbf{x})\, X_i \quad (14)$$

of $\alpha(\mathbf{x}, \cdot)$, comprising eigenpairs $\{\lambda_i, \phi_i(\mathbf{x})\}_{1 \le i \le N}$ and random variables $X_i$, $i = 1, \ldots, N$. This is commonly referred to as random field discretization. In consequence, the statistical variation of the random field $\alpha(\mathbf{x}, \cdot)$ is being swapped with that possessed by $N$ uncorrelated random variables $X_1, \ldots, X_N$. Therefore, the value of $N$ should be selected judiciously, not only for maintaining the desired accuracy of the discretization, but also for computational expediency.

To determine the quality of a KL approximation, a simple and efficient approach entails second-moment analysis.
As already alluded to in the proof of Theorem 7, the second-moment error of $\alpha_N(\mathbf{x}, \cdot)$ can be expressed by

$$\bar{e}_N(\mathbf{x}) := \frac{e_N(\mathbf{x})}{\sigma^2(\mathbf{x})} = 1 - \frac{1}{\sigma^2(\mathbf{x})}\sum_{i=1}^N \lambda_i\,\phi_i^2(\mathbf{x}) \quad (15)$$

when normalized with respect to the variance $\sigma^2(\mathbf{x}) = \Gamma(\mathbf{x}, \mathbf{x})$ of $\alpha(\mathbf{x}, \cdot)$. However, the error measure in (15) is given at a specific point $\mathbf{x} \in \mathcal{D}$ and is, therefore, local. Henceforth, a global error measure, obtained by averaging over all points of $\mathcal{D}$, can also be obtained as

$$\tilde{e}_N := \frac{1}{|\mathcal{D}|}\int_{\mathcal{D}} \bar{e}_N(\mathbf{x})\, d\mathbf{x} = 1 - \frac{1}{|\mathcal{D}|}\sum_{i=1}^N \lambda_i \int_{\mathcal{D}} \frac{\phi_i^2(\mathbf{x})}{\sigma^2(\mathbf{x})}\, d\mathbf{x}, \quad (16)$$

where $|\mathcal{D}| := \int_{\mathcal{D}} d\mathbf{x}$ is the Lebesgue measure, such as length for $d = 1$, area for $d = 2$, or volume for $d = 3$, of $\mathcal{D}$. If, in addition, the variance is constant, say, $\sigma^2$, as is the case for homogeneous random fields, then (16) reduces to

$$\tilde{e}_N := 1 - \frac{1}{|\mathcal{D}|\,\sigma^2}\sum_{i=1}^N \lambda_i. \quad (17)$$

Both the local and global error measures in (15)-(17) indicate that the truncated KL expansion always underestimates the variance of the original random field. Clearly, the variance of the KL approximation $\alpha_N(\mathbf{x}, \cdot)$ is obtained by adding individual contributions from all $N$ eigenmodes. Therefore, the larger the value of $N$, the smaller the respective error. More importantly, given a value of $N$, the effectiveness of an $N$-term KL approximation is predicated on how fast the eigenvalues decay with respect to the eigenmodes. The rate of decay depends strongly on the properties of the covariance function, especially the correlation length parameter of the covariance function. The smoothness of the covariance function also determines the rate of eigenvalue decay and the regularity of eigenfunctions [22]. Nonetheless, a remarkable property of the KL expansion is its error-minimizing property; that is, given a fixed $N$, the KL approximation in (14) has been proven to be optimal among all series expansion methods with respect to a global mean-square error [7].

The KL expansion is useful for a number of reasons. First, the approximation holds for both homogeneous and inhomogeneous fields. Second, it is optimal in the sense that the mean-square error of the approximation is minimized. Third, the sequence of KL approximations $\{\alpha_N(\mathbf{x}, \cdot)\}_{N \in \mathbb{N}}$ of $\alpha(\mathbf{x}, \cdot)$ converges in mean-square to the correct limit as $N \to \infty$ at each $\mathbf{x} \in \mathcal{D}$. Finally, as $N \to \infty$, the covariance function of $\alpha_N$ approaches the covariance function of $\alpha$, rendering $\alpha_N$ equal to $\alpha$ in the second-moment sense. If $\alpha$ is a Gaussian random field, then $X_i$, $i = 1, \ldots, N$, in (14) are independent standard Gaussian random variables. In consequence, each element of the sequence $\{\alpha_N\}_{N \in \mathbb{N}}$ is a Gaussian random field.

However, if $\alpha$ is a non-Gaussian random field, then $X_i$, $i = 1, \ldots, N$, are uncorrelated yet dependent non-Gaussian random variables, in which case finding their probability distribution is not trivial. One class of non-Gaussian random fields for which the use of the KL expansion can possibly be exploited is the class of translation random fields, where a non-Gaussian random field is defined as a nonlinear, memoryless transformation of a Gaussian random field [23]. But, even then, there are conditions on the covariance properties that must be fulfilled before proceeding with the transformation [11, 23]. Finally, it should be noted that the KL expansion uses only the second-moment properties, such as the covariance function, of a random field.
If two non-Gaussian random fields have identical covariance functions, but significantly different higher-order moments or higher-order finite-dimensional distributions, then they will have the same KL representation, but their sample properties may vary markedly. This has important ramifications in reliability analysis.
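Because (17) involves only the eigenvalues, the quality of an N-term KL approximation of a homogeneous field can be monitored with a one-line computation once the eigenvalue problem has been solved. A minimal sketch follows, with a made-up spectrum standing in for the solution of (9).

    import numpy as np

    def global_kl_error(eigvals, domain_measure, variance):
        # Global second-moment error (17) of the N-term KL approximation
        # of a homogeneous (constant-variance) field, for N = 1, 2, ...
        lam = np.sort(np.asarray(eigvals, dtype=float))[::-1]  # descending
        return 1.0 - np.cumsum(lam) / (domain_measure * variance)

    lam = 0.5 ** np.arange(1, 11)  # hypothetical spectrum, sums to just under 1
    for N, e in enumerate(global_kl_error(lam, 1.0, 1.0), start=1):
        print(f"N = {N:2d}   e_N = {e:.4f}")  # decreases monotonically with N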
4. Galerkin Isogeometric Method
The implementation of a KL approximation is predicated on the knowledge of the eigensolutions of the integral eigenvalue problem defined by (9). However, analytical or exact solutions of the eigenvalue problem exist when the covariance function $\Gamma : \mathcal{D} \times \mathcal{D} \to \mathbb{R}$ is separable and has simpler functional forms, such as exponential functions, or the domain $\mathcal{D}$ is rectangular. For arbitrary covariance functions or arbitrary domains, numerical methods are often needed to solve the eigenvalue problem. In this section, the basis functions from isogeometric analysis, such as B-splines and NURBS, in conjunction with Galerkin's finite-dimensional approximation, are exploited to solve the eigenvalue problem.

The variational or weak formulation for solving the Fredholm integral equation (9) entails finding an eigenpair $\{\lambda, \phi(\mathbf{x})\} \subset \mathbb{R}^+_0 \times L^2(\mathcal{D})$ such that

$$((\mathcal{G}_\Gamma \phi)(\mathbf{x}), \psi(\mathbf{x}))_{L^2(\mathcal{D})} = \lambda\,(\phi(\mathbf{x}), \psi(\mathbf{x}))_{L^2(\mathcal{D})} \quad (18)$$

for any $\psi(\mathbf{x}) \in L^2(\mathcal{D})$. However, since (9) is, in general, not exactly solvable, neither is (18). Therefore, (18) must be solved approximately, say, by Galerkin discretization.

Let $\{\mathcal{S}_h\}_{h>0}$ be a sequence of finite-dimensional approximating subspaces of $L^2(\mathcal{D})$. Denote by $\mathcal{P}_h$ a projection of $L^2(\mathcal{D})$ onto $\mathcal{S}_h$. Then, a Galerkin solution calls for finding an eigenpair $\{\lambda_h, \phi_h(\mathbf{x})\} \subset \mathbb{R}^+_0 \times \mathcal{S}_h$ such that

$$((\mathcal{G}_\Gamma \phi_h)(\mathbf{x}), \psi(\mathbf{x}))_{L^2(\mathcal{D})} = (\lambda_h\,\phi_h(\mathbf{x}), \psi(\mathbf{x}))_{L^2(\mathcal{D})} \quad (19)$$

for any $\psi(\mathbf{x}) \in \mathcal{S}_h \subset L^2(\mathcal{D})$ or, equivalently, solving

$$\int_{\mathcal{D}}\left(\int_{\mathcal{D}} \Gamma(\mathbf{x}, \mathbf{x}')\,\phi_h(\mathbf{x}')\, d\mathbf{x}'\right)\psi(\mathbf{x})\, d\mathbf{x} = \lambda_h \int_{\mathcal{D}} \phi_h(\mathbf{x})\,\psi(\mathbf{x})\, d\mathbf{x} \quad \forall\, \psi(\mathbf{x}) \in \mathcal{S}_h \subset L^2(\mathcal{D}). \quad (20)$$

This is known as the Galerkin variational or weak form of (9) relative to the subspace $\mathcal{S}_h$. In general, the solution of (19) or (20) is an approximate solution of (9). The existence and uniqueness of the Galerkin solution of integral equations of the second kind were discussed by Atkinson [20].

A standard error analysis of $\lambda_h$ and $\phi_h(\mathbf{x})$ entails demonstrating that [20]

$$\|\mathcal{G}_\Gamma - \mathcal{P}_h\mathcal{G}_\Gamma\|_{L^2(\mathcal{D})} \to 0 \ \text{as} \ h \to 0 \quad (21)$$

on $L^2(\mathcal{D})$. For a general $\phi(\mathbf{x}) \in L^2(\mathcal{D})$, define $\mathcal{P}_h\phi(\mathbf{x})$ to be the solution of the following minimization problem:

$$\|\phi(\mathbf{x}) - \mathcal{P}_h\phi(\mathbf{x})\|_{L^2(\mathcal{D})} := \min_{\psi(\mathbf{x}) \in \mathcal{S}_h} \|\phi(\mathbf{x}) - \psi(\mathbf{x})\|_{L^2(\mathcal{D})}. \quad (22)$$

Since $\mathcal{S}_h$ is finite-dimensional, (22) has a solution. Moreover, by $\mathcal{S}_h$ being an inner product space, the solution is unique. If, indeed,

$$\mathcal{P}_h\phi(\mathbf{x}) \to \phi(\mathbf{x}) \ \text{as} \ h \to 0 \quad \forall\, \phi(\mathbf{x}) \in L^2(\mathcal{D}), \quad (23)$$

then (21) follows from the compactness of $\mathcal{G}_\Gamma$ on $L^2(\mathcal{D})$, as stated in Proposition 4 already. Consequently, $\|\phi(\mathbf{x}) - \phi_h(\mathbf{x})\|_{L^2(\mathcal{D})} \to 0$ as $h \to 0$,
demonstrating convergence of the Galerkin solutions. However, the speed of convergence depends on $\mathcal{S}_h$ and the smoothness of the unknown solution $\phi(\mathbf{x})$.

The Galerkin discretization discussed in the preceding subsection is general because the finite-dimensional subspaces of $L^2(\mathcal{D})$ have yet to be specified. More often than not, finite-element subspaces are chosen [7], although subspaces from mesh-free analysis have also been employed [10]. In this work, the subspaces derived from B-spline and NURBS functions, which are the building blocks of IGA, are proposed.

Theorem 8.
Let the Galerkin variational form be as described in (19) or (20). Given the space of NURBS functions $\bar{R}_{\mathbf{i},\mathbf{p}}(\mathbf{x})$, $\mathbf{i} \in \mathcal{I}$, $\mathbf{p} \in \mathbb{N}_0^d$, select

$$\mathcal{S}_h = \mathcal{V}_h = \mathrm{span}\big\{\bar{R}_{\mathbf{i},\mathbf{p}}\big\}_{\mathbf{i} \in \mathcal{I}} \subset L^2(\mathcal{D}), \quad h > 0,$$

as the finite-dimensional subspaces, where $\mathcal{V}_h$ is defined in (7). Then the eigenpairs of the variational form are obtained by solving the linear matrix eigenvalue problem

$$\mathbf{A}\mathbf{f}_h = \lambda_h\,\mathbf{B}\mathbf{f}_h, \quad (24)$$

yielding an eigenvalue $\lambda_h \in \mathbb{R}^+_0$ and an eigenvector $\mathbf{f}_h \in \mathbb{R}^{n_c}$, where $n_c$ is the number of control points of IGA. Here, $\mathbf{A} \in \mathbb{R}^{n_c \times n_c}$ and $\mathbf{B} \in \mathbb{R}^{n_c \times n_c}$ are system matrices, which have components

$$A_{\mathbf{i}\mathbf{j}} := \int_{\mathcal{D}}\int_{\mathcal{D}} \Gamma(\mathbf{x}, \mathbf{x}')\,\bar{R}_{\mathbf{i},\mathbf{p}}(\mathbf{x})\,\bar{R}_{\mathbf{j},\mathbf{p}}(\mathbf{x}')\, d\mathbf{x}\, d\mathbf{x}', \quad \mathbf{i}, \mathbf{j} \in \mathcal{I}, \quad (25)$$

and

$$B_{\mathbf{i}\mathbf{j}} := \int_{\mathcal{D}} \bar{R}_{\mathbf{i},\mathbf{p}}(\mathbf{x})\,\bar{R}_{\mathbf{j},\mathbf{p}}(\mathbf{x})\, d\mathbf{x}, \quad \mathbf{i}, \mathbf{j} \in \mathcal{I}, \quad (26)$$

with $\mathbf{i}$, $\mathbf{j}$ representing any two control points of IGA. Henceforth, the corresponding eigenfunction is obtained as

$$\phi_h(\mathbf{x}) = \sum_{\mathbf{j} \in \mathcal{I}} f_{h,\mathbf{j}}\,\bar{R}_{\mathbf{j},\mathbf{p}}(\mathbf{x}), \quad (27)$$

where $f_{h,\mathbf{j}}$ is the $\mathbf{j}$th component of $\mathbf{f}_h \in \mathbb{R}^{n_c}$.

Proof. Since $\psi(\mathbf{x}) \in \mathcal{V}_h$, expand the function

$$\psi(\mathbf{x}) = \sum_{\mathbf{i} \in \mathcal{I}} a_{\mathbf{i}}\,\bar{R}_{\mathbf{i},\mathbf{p}}(\mathbf{x}) \quad (28)$$

with respect to the basis of $\mathcal{V}_h \subset L^2(\mathcal{D})$, where $a_{\mathbf{i}}$, $\mathbf{i} \in \mathcal{I}$, are the associated coefficients. Applying (27) and (28) into (20) and interchanging the integral and summation operators gives

$$\sum_{\mathbf{i} \in \mathcal{I}}\sum_{\mathbf{j} \in \mathcal{I}} a_{\mathbf{i}}\, f_{h,\mathbf{j}} \int_{\mathcal{D}}\int_{\mathcal{D}} \Gamma(\mathbf{x}, \mathbf{x}')\,\bar{R}_{\mathbf{i},\mathbf{p}}(\mathbf{x})\,\bar{R}_{\mathbf{j},\mathbf{p}}(\mathbf{x}')\, d\mathbf{x}\, d\mathbf{x}' = \lambda_h \sum_{\mathbf{i} \in \mathcal{I}}\sum_{\mathbf{j} \in \mathcal{I}} a_{\mathbf{i}}\, f_{h,\mathbf{j}} \int_{\mathcal{D}} \bar{R}_{\mathbf{i},\mathbf{p}}(\mathbf{x})\,\bar{R}_{\mathbf{j},\mathbf{p}}(\mathbf{x})\, d\mathbf{x}. \quad (29)$$

From the definitions of the system matrices $\mathbf{A}$ and $\mathbf{B}$ in (25) and (26), (29) becomes

$$\sum_{\mathbf{i} \in \mathcal{I}} a_{\mathbf{i}}\left(\sum_{\mathbf{j} \in \mathcal{I}} A_{\mathbf{i}\mathbf{j}}\, f_{h,\mathbf{j}} - \lambda_h \sum_{\mathbf{j} \in \mathcal{I}} B_{\mathbf{i}\mathbf{j}}\, f_{h,\mathbf{j}}\right) = 0. \quad (30)$$

Since $\psi(\mathbf{x})$ is an arbitrary member of $\mathcal{V}_h$, the constants $a_{\mathbf{i}}$, $\mathbf{i} \in \mathcal{I}$, are also arbitrary. Therefore, the parenthetical term of (30) must vanish for all $\mathbf{i} \in \mathcal{I}$, resulting in (24).

An alternative proof of Theorem 8 involves the following steps. Define the residual error

$$e_{\mathcal{I}} := \sum_{\mathbf{j} \in \mathcal{I}} f_{h,\mathbf{j}}\left(\int_{\mathcal{D}} \Gamma(\mathbf{x}, \mathbf{x}')\,\bar{R}_{\mathbf{j},\mathbf{p}}(\mathbf{x}')\, d\mathbf{x}' - \lambda_h\,\bar{R}_{\mathbf{j},\mathbf{p}}(\mathbf{x})\right)$$

committed by an isogeometric approximation using the subspace $\mathcal{V}_h$ and the associated control points of $\mathcal{I}$. Seek the coefficients $f_{h,\mathbf{j}}$, $\mathbf{j} \in \mathcal{I}$, by requiring the error to be orthogonal to the subspace $\mathcal{V}_h$. Indeed, setting $(e_{\mathcal{I}}, \bar{R}_{\mathbf{i},\mathbf{p}}(\mathbf{x}))_{L^2(\mathcal{D})} = 0$ for all $\mathbf{i} \in \mathcal{I}$ produces (24).

Does the Galerkin isogeometric method for solving the integral eigenvalue problem converge? The question can be readily answered using Theorem 1, which demonstrates that the sequence of isogeometric approximations using NURBS functions converges for any square-integrable function on $\mathcal{D}$. In this case, the projection $\mathcal{P}_h : L^2(\mathcal{D}) \to \mathcal{V}_h$ satisfies the condition in (23). Therefore, the eigensolutions from IGA addressed in this work should converge, as discussed in Subsection 4.1. Furthermore, a NURBS subspace is usually endowed with smooth functions, depending on the polynomial order of the underlying B-splines and the multiplicity of knots. In consequence, the results of IGA are expected to be smoother than those of FEM, especially when concerned with continuity across element boundaries. The smoothness of IGA solutions will be further examined in the following section.
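Equation (24) is a generalized symmetric-definite matrix eigenvalue problem, so any standard solver applies. The sketch below uses SciPy's eigh in place of the MATLAB routines reported later in the paper; the small random matrices are stand-ins, constructed only to possess the symmetry and definiteness properties established in the next subsection.

    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(0)
    n_c = 6                            # stand-in for the control-point count
    M = rng.standard_normal((n_c, n_c))
    A = M @ M.T                        # symmetric positive-semidefinite stand-in
    C = rng.standard_normal((n_c, n_c))
    B = C @ C.T + n_c * np.eye(n_c)    # symmetric positive-definite stand-in

    lam, F = eigh(A, B)                # solves A f = lambda B f
    lam, F = lam[::-1], F[:, ::-1]     # descending: lambda_1 >= lambda_2 >= ...
    print(lam)
    print(np.allclose(F.T @ B @ F, np.eye(n_c)))  # eigenvectors B-orthonormal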
The solution of the matrix eigenvalue problem described in Theorem 8 depends on the properties of the system matrices $\mathbf{A}$ and $\mathbf{B}$. The following proposition demonstrates that both matrices are symmetric and are either positive-semidefinite or positive-definite.

Proposition 9.
Let the system matrices $\mathbf{A}$ and $\mathbf{B}$ be as defined in (25) and (26), respectively. Then $\mathbf{A}$ is symmetric and positive-semidefinite, and $\mathbf{B}$ is symmetric and positive-definite.

Proof. From their definitions and the fact that the covariance function $\Gamma(\mathbf{x}, \mathbf{x}')$ is symmetric with respect to its arguments $\mathbf{x}$ and $\mathbf{x}'$, $\mathbf{A}$ and $\mathbf{B}$ are both symmetric matrices. To prove the positive-semidefiniteness of $\mathbf{A}$, let $\mathbf{0} \ne \mathbf{c} \in \mathbb{R}^{n_c}$ be a column vector of arbitrary, but non-zero, constants $c_{\mathbf{i}}$, $\mathbf{i} \in \mathcal{I}$. Applying (11) from Mercer's theorem and interchanging the summation and integral operators, including the convergent infinite sum,

$$\mathbf{c}^T\mathbf{A}\mathbf{c} = \sum_{\mathbf{i} \in \mathcal{I}}\sum_{\mathbf{j} \in \mathcal{I}} c_{\mathbf{i}}\, c_{\mathbf{j}}\, A_{\mathbf{i}\mathbf{j}} = \sum_{\mathbf{i} \in \mathcal{I}}\sum_{\mathbf{j} \in \mathcal{I}} c_{\mathbf{i}}\, c_{\mathbf{j}} \int_{\mathcal{D}}\int_{\mathcal{D}} \Gamma(\mathbf{x}, \mathbf{x}')\,\bar{R}_{\mathbf{i},\mathbf{p}}(\mathbf{x})\,\bar{R}_{\mathbf{j},\mathbf{p}}(\mathbf{x}')\, d\mathbf{x}\, d\mathbf{x}'$$
$$= \int_{\mathcal{D}}\int_{\mathcal{D}} \sum_{k=1}^\infty \lambda_k\,\phi_k(\mathbf{x})\,\phi_k(\mathbf{x}') \sum_{\mathbf{i} \in \mathcal{I}} c_{\mathbf{i}}\,\bar{R}_{\mathbf{i},\mathbf{p}}(\mathbf{x}) \sum_{\mathbf{j} \in \mathcal{I}} c_{\mathbf{j}}\,\bar{R}_{\mathbf{j},\mathbf{p}}(\mathbf{x}')\, d\mathbf{x}\, d\mathbf{x}'$$
$$= \sum_{k=1}^\infty \lambda_k \int_{\mathcal{D}} \sum_{\mathbf{i} \in \mathcal{I}} c_{\mathbf{i}}\,\bar{R}_{\mathbf{i},\mathbf{p}}(\mathbf{x})\,\phi_k(\mathbf{x})\, d\mathbf{x} \int_{\mathcal{D}} \sum_{\mathbf{j} \in \mathcal{I}} c_{\mathbf{j}}\,\bar{R}_{\mathbf{j},\mathbf{p}}(\mathbf{x}')\,\phi_k(\mathbf{x}')\, d\mathbf{x}' = \sum_{k=1}^\infty \lambda_k\left(\int_{\mathcal{D}} \sum_{\mathbf{i} \in \mathcal{I}} c_{\mathbf{i}}\,\bar{R}_{\mathbf{i},\mathbf{p}}(\mathbf{x})\,\phi_k(\mathbf{x})\, d\mathbf{x}\right)^2 \ge 0,$$

as each $\lambda_k$ is a non-negative real number while the square of the integral in the last line is non-negative. Therefore, $\mathbf{A}$ is positive-semidefinite.

Finally, for the matrix $\mathbf{B}$, swapping, again, the summation and integral operators,

$$\mathbf{c}^T\mathbf{B}\mathbf{c} = \sum_{\mathbf{i} \in \mathcal{I}}\sum_{\mathbf{j} \in \mathcal{I}} c_{\mathbf{i}}\, c_{\mathbf{j}}\, B_{\mathbf{i}\mathbf{j}} = \int_{\mathcal{D}} \sum_{\mathbf{i} \in \mathcal{I}} c_{\mathbf{i}}\,\bar{R}_{\mathbf{i},\mathbf{p}}(\mathbf{x}) \sum_{\mathbf{j} \in \mathcal{I}} c_{\mathbf{j}}\,\bar{R}_{\mathbf{j},\mathbf{p}}(\mathbf{x})\, d\mathbf{x} = \Big\|\sum_{\mathbf{i} \in \mathcal{I}} c_{\mathbf{i}}\,\bar{R}_{\mathbf{i},\mathbf{p}}(\mathbf{x})\Big\|^2_{L^2(\mathcal{D})} > 0,$$

as the norm is positive for any $\mathbf{c} \ne \mathbf{0}$, thereby proving its positive-definiteness.

Proposition 9 is applicable for any polynomial order $\mathbf{p}$ of NURBS. In fact, the symmetry and positive-(semi)definiteness remain valid when the system matrices are defined in conjunction with other subspaces rooted in finite-element and mesh-free analyses. However, the definitions of the respective system matrices depend on the basis functions of the individual subspaces.

Given the results of Proposition 9, the following properties of the eigensolutions are fulfilled: (1) all eigenvalues are real and non-negative; (2) the eigenvectors corresponding to distinct eigenvalues are mutually orthogonal; (3) for repeated eigenvalues, $m \in \mathbb{N}$ mutually orthogonal eigenvectors can be found for each eigenvalue of multiplicity $m$; and (4) the eigenvectors constitute a basis of $\mathbb{R}^{n_c}$. These properties are consistent with those endowed to the Hilbert-Schmidt operator $\mathcal{G}_\Gamma : L^2(\mathcal{D}) \to L^2(\mathcal{D})$, as explained by Proposition 5.

The size of both system matrices is $n_c \times n_c$, where $n_c := |\mathcal{I}|$ is the number of control points defined in Section 2. Obviously, the largest number of eigensolutions of (24) is limited to $n_c$. This is not an issue for most IGA meshes, as $n_c$ is typically larger than $N$, the number of eigensolutions retained in the KL approximation. On the contrary, if $n_c < N$, then all needed eigensolutions cannot be obtained. This will be further clarified in Section 5, where numerical results are discussed.

The assembly of the system matrices $\mathbf{A}$ and $\mathbf{B}$ requires domain integrations in the physical space. In general, these integrals cannot be determined exactly. Therefore, the matrices must be estimated by numerical integration.
However, the use of NURBS functions in isogeometric analysis introduces the parametric domain, as explained in Section 2. This slightly complicates the matter because, for numerical integration, an additional domain $[-1, +1]^d$ is needed. The latter domain is commonly referred to as the parent element in the isogeometric literature [12].

Consider an arbitrary element $Q \in \mathcal{Q}_h$ in the parametric domain $\hat{\mathcal{D}} = [0,1]^d$. Each such element can be viewed as the image of the parent element $[-1, +1]^d$ defined by an affine mapping $\boldsymbol{\xi} : [-1, +1]^d \to Q$ or, equivalently, by $\boldsymbol{\xi}(\boldsymbol{\eta})$, where $\boldsymbol{\eta}$ is the coordinate of the parent element. Similarly, there is a corresponding element $K \in \mathcal{K}_h$ in the physical domain $\mathcal{D} \subset \mathbb{R}^d$, which is the image of that very element $Q$ in the parametric domain. Recall that $\mathbf{x} : \hat{\mathcal{D}} \to \mathcal{D}$, that is, $\mathbf{x}(\boldsymbol{\xi})$, is the mapping between the parametric and physical domains. The same mapping is used for these two corresponding elements. To integrate a function on an element of the physical domain, a pullback of the physical element to the parent element is required. This is accomplished using the composition of the inverses, that is, $\mathbf{x}^{-1} : K \to Q$ and $\boldsymbol{\xi}^{-1} : Q \to [-1, +1]^d$, of the two aforementioned mappings.

Let the Jacobians of the mappings $\boldsymbol{\xi}(\boldsymbol{\eta})$ and $\mathbf{x}(\boldsymbol{\xi})$ be defined as

$$\mathbf{J}_{\boldsymbol{\eta}} := \left[\frac{\partial\boldsymbol{\xi}}{\partial\boldsymbol{\eta}}\right] = \begin{bmatrix} \partial\xi_1/\partial\eta_1 & \cdots & \partial\xi_1/\partial\eta_d \\ \vdots & \ddots & \vdots \\ \partial\xi_d/\partial\eta_1 & \cdots & \partial\xi_d/\partial\eta_d \end{bmatrix} \quad \text{and} \quad \mathbf{J}_{\boldsymbol{\xi}} := \left[\frac{\partial\mathbf{x}}{\partial\boldsymbol{\xi}}\right] = \begin{bmatrix} \partial x_1/\partial\xi_1 & \cdots & \partial x_1/\partial\xi_d \\ \vdots & \ddots & \vdots \\ \partial x_d/\partial\xi_1 & \cdots & \partial x_d/\partial\xi_d \end{bmatrix},$$

respectively. As the mapping $\boldsymbol{\xi}(\boldsymbol{\eta})$ is affine, the calculation of the partial derivatives $\partial\xi_i/\partial\eta_j$, $i, j = 1, \ldots, d$, is straightforward. However, to determine the partial derivatives $\partial x_i/\partial\xi_j$, $i, j = 1, \ldots, d$, the derivatives of NURBS and B-spline functions are involved. For brevity, explicit details of the derivatives of NURBS and B-spline functions are not reported here, as they are available elsewhere [12].

Given the mappings and their respective Jacobians, the components of the system matrices are then evaluated by summing contributions from all element-level integrations on the parent element, that is,

$$A_{\mathbf{i}\mathbf{j}} := \int_{\mathcal{D}}\int_{\mathcal{D}} \Gamma(\mathbf{x}, \mathbf{x}')\,\bar{R}_{\mathbf{i},\mathbf{p}}(\mathbf{x})\,\bar{R}_{\mathbf{j},\mathbf{p}}(\mathbf{x}')\, d\mathbf{x}\, d\mathbf{x}' = \sum_{K \in \mathcal{K}_h}\sum_{K' \in \mathcal{K}_h} \int_K\int_{K'} \Gamma(\mathbf{x}, \mathbf{x}')\,\bar{R}_{\mathbf{i},\mathbf{p}}(\mathbf{x})\,\bar{R}_{\mathbf{j},\mathbf{p}}(\mathbf{x}')\, d\mathbf{x}\, d\mathbf{x}'$$
$$= \sum_{Q \in \mathcal{Q}_h}\sum_{Q' \in \mathcal{Q}_h} \int_Q\int_{Q'} \Gamma(\mathbf{x}(\boldsymbol{\xi}), \mathbf{x}'(\boldsymbol{\xi}'))\,\bar{R}_{\mathbf{i},\mathbf{p}}(\mathbf{x}(\boldsymbol{\xi}))\,\bar{R}_{\mathbf{j},\mathbf{p}}(\mathbf{x}'(\boldsymbol{\xi}'))\,|\det\mathbf{J}_{\boldsymbol{\xi}}|\,|\det\mathbf{J}_{\boldsymbol{\xi}'}|\, d\boldsymbol{\xi}\, d\boldsymbol{\xi}'$$
$$= \sum_{Q \in \mathcal{Q}_h}\sum_{Q' \in \mathcal{Q}_h} \int_{[-1,+1]^d}\int_{[-1,+1]^d} \Gamma(\mathbf{x}(\boldsymbol{\xi}(\boldsymbol{\eta})), \mathbf{x}'(\boldsymbol{\xi}'(\boldsymbol{\eta}')))\,\bar{R}_{\mathbf{i},\mathbf{p}}(\mathbf{x}(\boldsymbol{\xi}(\boldsymbol{\eta})))\,\bar{R}_{\mathbf{j},\mathbf{p}}(\mathbf{x}'(\boldsymbol{\xi}'(\boldsymbol{\eta}')))\,|\det\mathbf{J}_{\boldsymbol{\xi}}|\,|\det\mathbf{J}_{\boldsymbol{\eta}}|\,|\det\mathbf{J}_{\boldsymbol{\xi}'}|\,|\det\mathbf{J}_{\boldsymbol{\eta}'}|\, d\boldsymbol{\eta}\, d\boldsymbol{\eta}' \quad (31)$$

and

$$B_{\mathbf{i}\mathbf{j}} := \int_{\mathcal{D}} \bar{R}_{\mathbf{i},\mathbf{p}}(\mathbf{x})\,\bar{R}_{\mathbf{j},\mathbf{p}}(\mathbf{x})\, d\mathbf{x} = \sum_{K \in \mathcal{K}_h} \int_K \bar{R}_{\mathbf{i},\mathbf{p}}(\mathbf{x})\,\bar{R}_{\mathbf{j},\mathbf{p}}(\mathbf{x})\, d\mathbf{x} = \sum_{Q \in \mathcal{Q}_h} \int_Q \bar{R}_{\mathbf{i},\mathbf{p}}(\mathbf{x}(\boldsymbol{\xi}))\,\bar{R}_{\mathbf{j},\mathbf{p}}(\mathbf{x}(\boldsymbol{\xi}))\,|\det\mathbf{J}_{\boldsymbol{\xi}}|\, d\boldsymbol{\xi}$$
$$= \sum_{Q \in \mathcal{Q}_h} \int_{[-1,+1]^d} \bar{R}_{\mathbf{i},\mathbf{p}}(\mathbf{x}(\boldsymbol{\xi}(\boldsymbol{\eta})))\,\bar{R}_{\mathbf{j},\mathbf{p}}(\mathbf{x}(\boldsymbol{\xi}(\boldsymbol{\eta})))\,|\det\mathbf{J}_{\boldsymbol{\xi}}|\,|\det\mathbf{J}_{\boldsymbol{\eta}}|\, d\boldsymbol{\eta}. \quad (32)$$

Here, the summations in the last lines of (31) and (32) are over all $n_e := |\mathcal{Q}_h| = |\mathcal{K}_h|$ elements of IGA. The final integrals in (31) and (32) are estimated by a suitable numerical integration scheme, such as the Gauss-Legendre quadrature. Even though the NURBS functions are not necessarily polynomials, the Gauss quadrature is still effective [12].
In this case, the same quadrature rule employed for a $p$th-order polynomial can be used for a NURBS function built from an underlying $p$th-order B-spline. Having said so, a Gauss quadrature is not an optimal choice for IGA. That is why current research is focused on finding optimal or near-optimal numerical integration techniques to tackle NURBS functions [24].

It is important to underscore that the construction of the system matrices $\mathbf{A}$ and $\mathbf{B}$ mandates, respectively, $2d$- and $d$-dimensional domain integrations, a fundamental imposition of the Galerkin discretization. While forming $\mathbf{B}$ requires an effort similar to that of assembling the mass matrix in solid mechanics, building $\mathbf{A}$ is computationally daunting, as it is $d$ orders more expensive than forming $\mathbf{B}$. For instance, on three-dimensional ($d = 3$) domains, $\mathbf{A}$ requires a six-fold integration as opposed to the three-fold integration needed by $\mathbf{B}$. To speed up the assembly, a few researchers [25, 26] have suggested replacing $\mathbf{A}$ with a hierarchical matrix [27], but this also adds a new layer of approximation. Nonetheless, the implementation of an industrial-scale matrix eigenvalue problem in high dimension is a formidable task. Such a computational challenge is not unique to IGA; it persists in finite-element and mesh-free analyses as well.

While the symmetry and positive-(semi)definiteness of the system matrices facilitate seeking only real-valued eigensolutions, the eigenvalue problem presents a few computational challenges of its own. First, depending on how fast or how slow the covariance function decays, the number of eigenmodes retained in a truncated KL expansion can be large or small. If, indeed, the required number of eigenmodes is very large, especially for a problem with a complex domain encompassing a large number of elements or control points, the computational effort in deriving the concomitant eigensolutions can also be very large. Second, in many cases, $\mathbf{B}$ is a relatively sparse and diagonalizable matrix. In contrast, $\mathbf{A}$ is a dense matrix, making its generation and storage expensive. That is why the selection of efficient eigenvalue solvers, such as Krylov subspace iteration in conjunction with the Arnoldi method [28], the Lanczos-based thick-restart method [29], and the fast multipole method [30], continues to be studied by some researchers [22, 25, 26].
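As a concrete one-dimensional instance of (31) and (32), the following sketch assembles A and B for the exponential covariance on [0, 1] with an identity geometrical mapping, so that the Jacobian factors are constant per element, and a (p + 1)-point Gauss-Legendre rule per element, then solves (24). The degree, mesh, and kernel parameters are illustrative choices, not the paper's settings.

    import numpy as np
    from scipy.interpolate import BSpline
    from scipy.linalg import eigh

    p, n_el = 1, 16                           # degree and number of elements
    t = np.r_[[0.0] * (p + 1), np.linspace(0, 1, n_el + 1)[1:-1], [1.0] * (p + 1)]
    n = len(t) - p - 1                        # number of basis functions

    def basis(i, x):
        # i-th B-spline of degree p on the open knot vector t; zero off-support
        b = BSpline.basis_element(t[i:i + p + 2], extrapolate=False)
        return np.nan_to_num(b(x))

    cov = lambda x, xp: np.exp(-np.abs(x - xp))   # exponential kernel, b = L = 1

    # (p + 1)-point Gauss-Legendre rule on each element; with the identity
    # mapping, |det J_xi| = 1 and |det J_eta| is half the element length.
    g, w = np.polynomial.legendre.leggauss(p + 1)
    edges = np.linspace(0.0, 1.0, n_el + 1)
    xq = np.concatenate([0.5 * (e1 - e0) * (g + 1) + e0
                         for e0, e1 in zip(edges[:-1], edges[1:])])
    wq = np.concatenate([0.5 * (e1 - e0) * w
                         for e0, e1 in zip(edges[:-1], edges[1:])])

    Phi = np.array([basis(i, xq) for i in range(n)])   # basis table, n x n_q
    Bmat = (Phi * wq) @ Phi.T                          # single integral, (26)
    Amat = (Phi * wq) @ cov(xq[:, None], xq[None, :]) @ (Phi * wq).T  # (25)

    lam = eigh(Amat, Bmat, eigvals_only=True)[::-1]
    print(lam[:5])   # largest approximate KL eigenvalues for this kernel

The quadratic growth of the quadrature-point grid in the kernel evaluation illustrates, already in one dimension, why assembling A is so much more expensive than assembling B.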
5. Numerical Examples
Three examples, each with a progressively higher dimension, are presented to illustrate the proposed Galerkin isogeometric method for solving the integral eigenvalue problem associated with the KL expansion. The random fields have zero means and are homogeneous and isotropic in all examples. The polynomial order $p$ of B-splines, leading to NURBS, and the order of the Gauss-Legendre quadrature used to estimate the system matrices vary from example to example. The isogeometric analysis and subsequent matrix eigenvalue calculations were performed using MATLAB (Version 2016a) [31]. The eigenvalue calculations were checked using multiple algorithms and methods, such as Cholesky factorization, the QZ algorithm, and the Lanczos method, all of which are available in MATLAB.

Consider a one-dimensional random field $\alpha(x, \cdot)$ with covariance function $\Gamma(x, x') = \mathbb{E}[\alpha(x, \cdot)\,\alpha(x', \cdot)]$ defined on $\mathcal{D} = [0, L] \subset \mathbb{R}$. Three types of covariance functions, described by exponential, Gaussian, and Bessel functions, were selected. Mathematically,

$$\Gamma(x, x') = \begin{cases} \sigma^2\exp\left[-\dfrac{|x - x'|}{bL}\right], & \text{Type 1 (exponential)}, \\[1ex] \sigma^2\exp\left[-\dfrac{|x - x'|^2}{(bL)^2}\right], & \text{Type 2 (Gaussian)}, \\[1ex] \sigma^2\,\dfrac{|x - x'|}{bL}\, K_1\left(\dfrac{|x - x'|}{bL}\right), & \text{Type 3 (Bessel)}, \end{cases}$$

where $K_1$ is the modified second-kind Bessel function of order one, $\sigma = 1$, $L = 1$, and $b = 1$ or $0.1$ as the correlation length parameter. These covariance functions, depicted in Figure 2, are commonly used in engineering and applied sciences.

Three polynomial orders, $p = 1$, $p = 2$, and $p = 3$, representing, respectively, linear, quadratic, and cubic elements, were employed. The knot vectors for the coarsest one-element IGA meshes are as follows:
Three polynomial orders, p = 1, p = 2, and p = 3, representing linear, quadratic, and cubic elements, respectively, were employed. The knot vectors for the coarsest one-element IGA meshes are as follows: (1) $\Xi = (0, 0, 1, 1)$ for linear elements; (2) $\Xi = (0, 0, 0, 1, 1, 1)$ for quadratic elements; and (3) $\Xi = (0, 0, 0, 0, 1, 1, 1, 1)$ for cubic elements. Each mesh was h-refined globally, resulting in a series of progressively finer meshes with an increasing number of elements. Two mesh sizes were examined: (1) a coarse mesh with 16 elements, that is, $n_e = 16$; and (2) a fine mesh with 256 elements, that is, $n_e = 256$. The number of control points varies accordingly with the order of the elements. A $(p+1)$-point Gauss-Legendre quadrature was employed to estimate the system matrices and produce the IGA results of Figures 3 through 5.

[Figure 3: Ten largest exact and IGA-derived eigensolutions for the exponential covariance function with b = 1 in Example 1; (a) exact eigenvalues and linear IGA solutions; (b) exact eigenvalues and quadratic IGA solutions; (c) exact eigenvalues and cubic IGA solutions; (d) exact eigenfunctions.]

Figures 3(a) through 3(c) present scatter plots of the ten largest eigenvalues for the exponential (Type 1) covariance function with b = 1. The exact eigenvalues, which exist for this covariance function [5, 7], and the approximate eigenvalues obtained using the proposed isogeometric method are displayed. The eigenvalues decay with the mode number, as expected. The IGA results are arranged according to linear [Figure 3(a)], quadratic [Figure 3(b)], and cubic [Figure 3(c)] elements, each containing the results of the coarse mesh (16 elements) and the fine mesh (256 elements), as defined previously. The agreement between the exact eigenvalues and their respective IGA approximations is excellent even for the coarse mesh; any distinction between the exact and IGA solutions is imperceptible to the naked eye, especially for the higher-order IGA results.

The same observation holds when comparing the exact (scaled) eigenfunctions, also available for the exponential covariance function [5, 7], in Figure 3(d) with their IGA-derived approximations in Figures 4(a) through 4(d) for b = 1. Again, the first ten eigenfunctions are shown. The eigenfunctions obtained from IGA are organized according to the use of (1) linear elements with the coarse [Figure 4(a)] and fine [Figure 4(b)] meshes; (2) quadratic elements with the coarse mesh [Figure 4(c)]; and (3) cubic elements with the coarse mesh [Figure 4(d)]. In Figure 4(a), the IGA-derived eigenfunctions show a lack of smoothness, but this is expected for linear elements on the coarse mesh. The smoothness returns, and the results improve, for the fine mesh, as confirmed in Figure 4(b). Even so, the eigenfunctions remain only $C^0$-continuous across element boundaries, regardless of the mesh size, because of the linear basis functions. Indeed, the same behavior should be expected from traditional FEM employing piecewise-linear basis functions.
[Figure 4: Ten largest IGA-derived eigenfunctions for the exponential covariance function with b = 1 in Example 1; (a) linear IGA with 16 elements; (b) linear IGA with 256 elements; (c) quadratic IGA with 16 elements; (d) cubic IGA with 16 elements.]

In contrast, when quadratic or cubic elements are used, the eigenfunctions from IGA, even for the coarse mesh, shown in Figure 4(c) or 4(d), are already smooth and comparable to, if not better than, the results of linear elements with the fine mesh. In addition, these higher-order IGA solutions are endowed with $C^1$- or $C^2$-continuity across element boundaries, whereas low- or high-order FEM delivers only $C^0$-continuity.

Figures 5(a) and 5(b) contrast the reference and IGA solutions of the first ten eigenvalues for the Gaussian (Type 2) and Bessel (Type 3) covariance functions, respectively, both with the correlation length parameter b = 1. Since no exact solutions exist for these two covariance functions, the IGA solutions obtained on a further refined mesh of 512 elements were used as reference solutions. In each figure, the top, middle, and bottom sub-figures contain the results of linear, quadratic, and cubic elements, respectively, from both the coarse and the fine mesh. The IGA-derived eigenvalues converge to the reference solutions for both types of covariance functions. No comparisons between the reference and IGA eigenfunctions are reported here, as they show essentially the same qualitative trend exhibited for the exponential covariance function.

Finally, a limited numerical error analysis of IGA-derived eigenvalues was performed for the Gaussian and exponential covariance functions with b = 0.1.
More specifically, the relative error $e = \|\boldsymbol{\lambda}^{\mathrm{ref}} - \boldsymbol{\lambda}^{\mathrm{IGA}}\|_2 / \|\boldsymbol{\lambda}^{\mathrm{ref}}\|_2$, where $\boldsymbol{\lambda}^{\mathrm{ref}}$ and $\boldsymbol{\lambda}^{\mathrm{IGA}}$ are the vectors of the first ten reference (or exact) eigenvalues and IGA eigenvalues, respectively, was studied. To eliminate possible errors due to numerical integration, a higher-order quadrature, that is, a $(p+5)$-point Gauss-Legendre quadrature, was employed for obtaining all IGA solutions, including the reference IGA solution entailing 512 cubic elements for the Gaussian covariance function. The results, delineated in Figures 6(a) and 6(b), show how the relative error decays with respect to the element size h, the reciprocal of the number of elements.

[Figure 5: Ten largest eigenvalues from the reference solutions and IGA for the two other covariance functions with b = 1 in Example 1; (a) Gaussian covariance function; (b) Bessel covariance function.]

For either covariance function, the relative error decreases with decreasing h for every polynomial order p. Given a fixed mesh size, the error from a higher-order IGA is consistently lower than that from a low-order IGA. Moreover, the rate of decay in Figure 6(a) is faster for the higher-order IGA for the Gaussian covariance function. This is due not only to IGA, but also to the smoothness of the Gaussian covariance function. By contrast, no such convergence acceleration is found in Figure 6(b), as the exponential covariance function is non-differentiable. Therefore, the regularity of the covariance function plays an important role in the performance of any numerical method, including IGA. A theoretical error analysis of IGA-derived eigensolutions supporting these numerical trends merits further study.

[Figure 6: Relative errors in IGA-derived eigenvalues with b = 0.1; (a) Gaussian covariance function; (b) exponential covariance function.]

5.2. Example 2

For the second example, let $\alpha(\mathbf{x},\cdot)$ be a two-dimensional random field with covariance function
$$
\Gamma(\mathbf{x},\mathbf{x}') = \sigma^2 \exp\!\left(-\frac{\|\mathbf{x}-\mathbf{x}'\|}{bL}\right), \quad \mathbf{x},\mathbf{x}' \in \mathcal{D} \subset \mathbb{R}^2, \qquad (33)
$$
defined on a quarter-annulus of inner radius $R_i = 0.6$ and outer radius $R_o = 1$, as depicted in Figure 7(a). The following covariance parameters were selected: $\sigma = 1$, $L = 1$, and $b = 0.$ The initial knot vectors were $\Xi_1 = (0, 0, 1, 1)$ and $\Xi_2 = (0, 0, 0, 1, 1, 1)$, with polynomial orders $p_1 = 1$ and $p_2 = 2$. The initial knot index space and parametric domain are depicted in Figures 7(b) and 7(c), respectively. The associated control points and weights are given in Table A.2 of Appendix A. The weights were chosen in such a way that the resulting NURBS functions replicate a circular arc exactly. Adopting a global h-refinement strategy, the knot index space was successively subdivided, and new control points and weights were added as needed. Figure 8 displays six meshes, Meshes 1 through 6, with the corresponding control points marked in red. The number of elements in these meshes varies from one to 1024.
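The claim that the chosen weights reproduce a circular arc exactly can be checked directly. Below is a minimal sketch, not the paper's code, that evaluates the quadratic rational (NURBS) segment for the inner arc with weights (1, 1/√2, 1) and confirms that every curve point lies at radius R_i = 0.6 to machine precision.

```python
import numpy as np

# Check that a quadratic rational (NURBS) segment with weights (1, 1/sqrt(2), 1)
# reproduces the 90-degree inner arc of radius R_i = 0.6 exactly.
R = 0.6
P = np.array([[R, 0.0], [R, R], [0.0, R]])      # control points (cf. Table A.2)
w = np.array([1.0, 1.0 / np.sqrt(2.0), 1.0])

t = np.linspace(0.0, 1.0, 201)
Bern = np.stack([(1 - t) ** 2, 2 * t * (1 - t), t ** 2])   # quadratic Bernstein basis
C = (Bern * w[:, None]).T @ P / (Bern * w[:, None]).sum(axis=0)[:, None]

print(np.abs(np.linalg.norm(C, axis=1) - R).max())  # ~1e-16, exact to machine precision
```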
The one-element mesh (Mesh 1), obtained using the initial knot index space and the control points and weights in Table A.2, already represents the exact geometry of the quarter-annulus. In fact, all finer meshes, although they contain progressively more elements, represent the exact geometry of the physical domain. This is in sharp contrast with FEM, where a mesh, especially a coarse one, always incurs some geometrical error. A 4-point Gauss-Legendre quadrature was employed in each coordinate direction.

[Figure 7: A quarter-annulus in Example 2, with inner radius R_i = 0.6 and outer radius R_o = 1.0; (a) physical domain; (b) initial knot index space; (c) parametric domain.]

[Figure 8: Six IGA meshes obtained by h-refinement in Example 2; Mesh 1 (n_e = 1) through Mesh 6 (n_e = 1024).]

Table 1 presents the ten largest eigenvalues for the covariance function in (33) calculated by IGA for all six meshes in Figure 8. With the exception of Mesh 1, which has six control points, Meshes 2 through 6 have more than ten control points. Therefore, all ten eigenvalues were calculated for Meshes 2 through 6, whereas only the first six were calculated for Mesh 1. Clearly, the eigenvalues converge with mesh refinement, as expected. A comparison of the first four eigenfunctions, displayed as contour plots in Figure 9 for Mesh 5 (256 elements) and in Figure 10 for Mesh 6 (1024 elements), tells the same tale with regard to the convergence of the eigenfunctions. Note that, unlike in Example 1, no analytical eigensolutions exist for the annular domain and inseparable covariance function of Example 2. That is why multiple meshes were analyzed.

Table 1: Ten largest eigenvalues estimated by IGA using six meshes in Example 2.

Mode  Mesh 1        Mesh 2        Mesh 3         Mesh 4         Mesh 5          Mesh 6
      (1 element)   (4 elements)  (16 elements)  (64 elements)  (256 elements)  (1024 elements)
1     0.241387791   0.234271475   0.233414419    0.233302062    0.233287647     0.233285822
2     0.087233705   0.086324241   0.085394757    0.085289015    0.085275164     0.085273409
3     0.035232598   0.032777456   0.035032958    0.034994641    0.034980189     0.034978308
4     0.024420835   0.020293213   0.019896928    0.019806854    0.019793397     0.019791635
5     0.013968298   0.016212808   0.016475201    0.016407932    0.016395342     0.016393401
6     0.008526274   0.012551817   0.01211771     0.012029456    0.012017404     0.012015812
7-10  —(a)

(a) Not calculable, as Mesh 1 has only six control points.

[Figure 9: Contour plots of the first four IGA-derived eigenfunctions using Mesh 5.]

[Figure 10: Contour plots of the first four IGA-derived eigenfunctions using Mesh 6.]

5.3. Example 3

The final example entails a three-dimensional random field $\alpha(\mathbf{x},\cdot)$ with covariance function
$$
\Gamma(\mathbf{x},\mathbf{x}') = \sigma^2 \exp\!\left(-\frac{\|\mathbf{x}-\mathbf{x}'\|}{bL}\right), \quad \mathbf{x},\mathbf{x}' \in \mathcal{D} \subset \mathbb{R}^3, \qquad (34)
$$
defined on a half-cylinder of inner radius $R_i = 8$, outer radius $R_o = 10$, and length $L_c = 15$, as displayed in Figure 11(a). The covariance parameters were chosen as follows: $\sigma = 1$, $L = 10$, and $b = 0.$

[Figure 11: A half-cylinder in Example 3; (a) physical domain; (b) two IGA meshes obtained by circumferential h-refinement.]

Table 2: Twenty largest eigenvalues estimated by IGA using the coarse and fine meshes in Example 3.
Mode  Coarse mesh      Fine mesh
      (128 elements)   (256 elements)
1     162.8012956      162.7993688
2     91.43276902      91.43079317
3     57.56984816      57.56765769
4     51.09209411      51.09029418
5     38.80033743      38.79808752
6     27.90574988      27.90392169
7     25.05887115      25.05681566
8     19.37089182      19.36866576
9     16.15870621      16.15702078
10    15.79809919      15.79601336
11    15.1484597       15.1461372
12    11.21556267      11.21341529
13    10.1816058       10.17964708
14    9.694963725      9.693535348
15    8.057339474      8.055143387
16    7.579083704      7.576970969
17    6.724572177      6.722768096
18    6.446973861      6.444800351
19    6.178216445      6.177288031
20    5.766381043      5.764324956
Table 2 lists the twenty largest eigenvalues obtained by IGA using the coarse (128 elements) and fine (256 elements) meshes for the covariance function defined in (34). The respective eigenvalues from the two meshes are very close to each other. The same can be said of the eigenfunctions; for brevity, however, only the first six eigenfunctions obtained for the fine mesh are portrayed in Figure 12. Again, no analytical solutions exist for this problem, but the near invariance of the eigensolutions across the two meshes provides confidence in the proposed IGA method.

[Figure 12: Contour plots of the first six IGA-derived eigenfunctions using the fine mesh.]

While the computational efforts in the first two examples are low or modest, generating the IGA solution in Example 3, which involves a three-dimensional domain, is computationally demanding. For instance, the total run time for the fine-mesh IGA results in Example 3, obtained on a PC with an Intel Core i7-4770 processor (3.4 GHz) and 16 GB of RAM, exceeded 24 hours. The root cause of this high computational cost is the double integral over the three-dimensional domain required to assemble the system matrix A, as alluded to in Subsection 4.4. Therefore, developing efficient methods for estimating or constructing A within the Galerkin framework is desirable future work.
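A rough operation count explains the long run time. Assuming, for illustration only, a 4-point Gauss-Legendre rule per direction (the rule of Example 2; the rule actually used in Example 3 is not restated here), the number of kernel evaluations for A grows with the square of the total number of quadrature points:

```python
# Back-of-the-envelope cost of assembling A in Example 3, assuming (for
# illustration only) a 4-point Gauss-Legendre rule per direction on the
# fine mesh of 256 elements in d = 3 dimensions.
n_e, n_q, d = 256, 4, 3
pts = n_e * n_q ** d          # quadrature points covering the physical domain
print(pts, pts ** 2)          # 16384 points -> ~2.7e8 kernel evaluations for A
```

Each of those evaluations also carries the cost of accumulating products of basis functions, so a multi-hour assembly is unsurprising.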
6. Discussion
While the paper focuses on the Galerkin-based IGA for random field discretization, a brief discussion of the practical implementation of the work and of future endeavors is warranted. First, the Galerkin method mandates 2d-dimensional domain integration for building the system matrix A; this is computationally intensive, as illustrated in the third example, and possibly prohibitive for industrial-scale, three-dimensional applications. Therefore, alternative isogeometric formulations entailing, for instance, collocation methods, which require at most d-dimensional domain integrations to construct the system matrices, should be explored to determine their accuracy. Indeed, a collocation method for random field discretization, while exploiting the geometrical flexibility of IGA, is expected to offer a substantial computational advantage over the Galerkin method.
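For perspective, a quadrature-based Nyström scheme, a simple relative of the collocation idea rather than the isogeometric collocation formulation envisioned above, illustrates how merely sampling the kernel at d-dimensional quadrature nodes yields an approximate eigensolution; this one-dimensional sketch assumes the Type 1 kernel with σ = b = L = 1:

```python
import numpy as np

# Nystrom-type sketch on D = [0, 1]: the covariance kernel is sampled at
# quadrature nodes (no 2d-dimensional Galerkin projection), and the
# symmetrized matrix W^(1/2) Gamma W^(1/2) is diagonalized directly.
n_q = 200
gp, gw = np.polynomial.legendre.leggauss(n_q)
x = 0.5 * (gp + 1.0)                     # nodes mapped from [-1, 1] to [0, 1]
w = 0.5 * gw

Gamma = np.exp(-np.abs(x[:, None] - x[None, :]))   # exponential kernel

sw = np.sqrt(w)
lam = np.linalg.eigvalsh(sw[:, None] * Gamma * sw[None, :])
print(np.sort(lam)[::-1][:10])           # ten largest KL eigenvalues
```

Here the kernel is only evaluated at node pairs and never integrated against products of basis functions over a 2d-dimensional domain, which is the kind of saving anticipated for collocation.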
7. Conclusion
A Galerkin isogeometric method was developed for solving an integral eigenvalue problem, resulting in an effective discretization of random fields by means of the well-known Karhunen-Loève expansion. The method employs a Galerkin discretization, which projects the eigensolutions onto a finite-dimensional subspace of a Hilbert space. Using B-splines and NURBS functions as the basis of the subspace, a concomitant matrix eigenvalue problem is formulated, where the system matrices are constructed by domain integrations. Finally, the eigensolutions are obtained using standard methods. Although similar Galerkin methods exist, such as the finite-element and mesh-free methods, the NURBS-based isogeometric method offers a few computational advantages. First, many physical or computational domains, such as freeform and sculptured surfaces and conic sections, are exactly represented by NURBS. In consequence, potential numerical errors originating from the imprecise geometry accepted in the finite-element and mesh-free methods are avoided. Second, as NURBS functions have higher-order continuity, the eigensolutions derived from isogeometric analysis are usually globally smoother than those derived from finite-element analysis. The smoothness can be controlled by judiciously selecting or adjusting the polynomial order of the underlying B-splines as well as the multiplicity of knots. Therefore, the introduction of the isogeometric method for random field discretization is not only novel, but also presents an attractive alternative to existing methods. More importantly, using NURBS for random field discretization enhances the isogeometric paradigm. In consequence, one can envision a seamless uncertainty quantification pipeline in which geometric modeling, stress analysis, and stochastic simulation are all consolidated using the same building blocks of NURBS. Numerical results, obtained for three random field discretization problems in one, two, and three dimensions, indicate that the isogeometric method developed here provides accurate and convergent eigensolutions.
Appendix A. IGA Details of Numerical Examples
Tables A.1 and A.2 list the control points and weights for the coarsest one-element meshes in Examples 1 and 2, respectively. Table A.3 describes the knot vectors and polynomial orders, including basic mesh properties, for both the coarse and fine meshes in Example 3.

Table A.1: Control points and weights for the coarsest one-element IGA mesh in Example 1.
Linear (p = 1)    Quadratic (p = 2)    Cubic (p = 3)
C_i    w_i        C_i      w_i        C_i      w_i
(0,0)  1          (0,0)    1          (0,0)    1
(1,0)  1          (1/2,0)  1          (1/3,0)  1
                  (1,0)    1          (2/3,0)  1
                                      (1,0)    1

Table A.2: Control points and weights for the coarsest one-element IGA mesh in Example 2.

C_i        w_i
(0.6,0)    1
(1,0)      1
(0.6,0.6)  1/√2
(1,1)      1/√2
(0,0.6)    1
(0,1)      1

Table A.3: Knot vectors and polynomial orders for the coarse and fine IGA meshes in Example 3.(a)
Coarse mesh: Ξ₁ = (0, …, 1), Ξ₂ = (0, …, 1), Ξ₃ = (0, …, 1); p₁ = p₂ = p₃ = 2; number of elements n_e = 128; number of control points n_c = 570.
Fine mesh: Ξ₁ = (0, …, 1), Ξ₂ = (0, …, 1), Ξ₃ = (0, …, 1); p₁ = p₂ = p₃ = 2; number of elements n_e = 256; number of control points n_c = 1050.

(a) 1 = radial direction; 2 = circumferential direction; 3 = length direction.

References

[1] W. Betz, I. Papaioannou, D. Straub, Numerical methods for the discretization of random fields by means of the Karhunen-Loève expansion, Computer Methods in Applied Mechanics and Engineering 271 (2014) 109–129.
[2] K. Karhunen, Über lineare Methoden in der Wahrscheinlichkeitsrechnung, Ann. Acad. Sci. Fenn. Ser. A. I. 37 (1947) 3–79.
[3] M. Loève, Fonctions aléatoires de second ordre, in: Processus Stochastiques et Mouvement Brownien, Gauthier-Villars: Paris.
[4] M. Loève, Probability Theory, Vol. II, Springer: Berlin, Heidelberg, New York, 1977.
[5] O. P. Le Maître, O. M. Knio, Spectral Methods for Uncertainty Quantification, Springer: Dordrecht, Heidelberg, London, New York, 2010.
[6] E. I. Fredholm, Sur une classe d'équations fonctionnelles, Acta Mathematica 27 (1903) 365–390.
[7] R. Ghanem, P. D. Spanos, Stochastic Finite Elements: A Spectral Approach, World Publishing Corp., 1991.
[8] T. J. R. Hughes, The Finite Element Method: Linear Static and Dynamic Finite Element Analysis, Dover Publications, 2000.
[9] T. Belytschko, Y. Y. Lu, L. Gu, Element-free Galerkin methods, International Journal for Numerical Methods in Engineering 37 (1994) 229–256.
[10] S. Rahman, H. Xu, A meshless method for computational stochastic mechanics, International Journal of Computational Methods in Engineering Science and Mechanics 64 (2005) 41–58.
[11] S. Rahman, Meshfree methods in computational stochastic mechanics, in: Reliability-Based Civil Engineering (edited by A. Haldar), World Scientific Publishing Company: Singapore, 2005.
[12] J. A. Cottrell, T. J. R. Hughes, Y. Bazilevs, Isogeometric Analysis: Toward Integration of CAD and FEA, John Wiley & Sons, 2009.
[13] T. J. R. Hughes, J. A. Cottrell, Y. Bazilevs, Isogeometric analysis: CAD, finite elements, NURBS, exact geometry and mesh refinement, Computer Methods in Applied Mechanics and Engineering 194 (2005) 4135–4195.
[14] L. A. Piegl, W. Tiller, The NURBS Book, Second Edition, Springer-Verlag: Berlin, 1997.
[15] M. Cox, The numerical evaluation of B-splines (1971).
[16] C. De Boor, On calculating with B-splines, Journal of Approximation Theory 6 (1972) 50–62.
[17] G. E. Farin, NURBS Curves and Surfaces: From Projective Geometry to Practical Use, A. K. Peters, Ltd.: Natick, MA, 1999.
[18] Y. Bazilevs, L. Beirão da Veiga, J. A. Cottrell, T. J. R. Hughes, G. Sangalli, Isogeometric analysis: approximation, stability and error estimates for h-refined meshes, Mathematical Models and Methods in Applied Sciences 16 (2006) 1031–1090.
[19] N. Dunford, J. T. Schwartz, Linear Operators, Part 2: Spectral Theory, Self Adjoint Operators in Hilbert Space, Wiley-Interscience, 1988.
[20] K. Atkinson, The Numerical Solution of Integral Equations of the Second Kind, Cambridge University Press: Cambridge, UK, 1997.
[21] J. Mercer, Functions of positive and negative type and their connection with the theory of integral equations, Philosophical Transactions of the Royal Society A 209 (1909) 415–446.
[22] C. Schwab, R. A. Todor, Karhunen-Loève approximation of random fields by generalized fast multipole methods, Journal of Computational Physics 217 (2006) 100–122.
[23] M. Grigoriu, Applied Non-Gaussian Processes, Prentice-Hall: Englewood Cliffs, NJ, 1988.
[24] F. Auricchio, F. Calabrò, T. J. R. Hughes, A. Reali, G. Sangalli, A simple algorithm for obtaining nearly optimal quadrature rules for NURBS-based isogeometric analysis, Computer Methods in Applied Mechanics and Engineering 249–252 (2012) 15–27.
[25] M. Eiermann, O. G. Ernst, E. Ullmann, Computational aspects of the stochastic finite element method, Computing and Visualization in Science 10 (2007) 3–15.
[26] B. N. Khoromskij, A. Litvinenko, H. G. Matthies, Application of hierarchical matrices for computing the Karhunen-Loève expansion, Computing 84 (2009) 49–67.
[27] S. Börm, L. Grasedyck, W. Hackbusch, Introduction to hierarchical matrices with applications, Engineering Analysis with Boundary Elements 27 (2003) 405–422.
[28] R. B. Lehoucq, D. C. Sorensen, C. Yang, ARPACK Users' Guide: Solution of Large-Scale Eigenvalue Problems with Implicitly Restarted Arnoldi Methods, SIAM: Philadelphia, PA, 1998.
[29] K. Wu, H. Simon, Thick-restart Lanczos method for large symmetric eigenvalue problems, SIAM Journal on Matrix Analysis and Applications 22 (2000) 602–616.
[30] L. Greengard, V. Rokhlin, A new version of the fast multipole method for the Laplace equation in three dimensions, Acta Numerica 6 (1997) 229–269.
[31] MATLAB, Version 2016a, The MathWorks, Inc.: Natick, MA, 2016.