A new numerical scheme for simulating non-gaussian and non-stationary stochastic processes
Zhibao Zheng a,b,∗, Hongzhe Dai a,b, Yuyin Wang a,b, Wei Wang a,b
a Key Lab of Structures Dynamic Behavior and Control, Harbin Institute of Technology, Ministry of Education, Harbin 150090, China
b School of Civil Engineering, Harbin Institute of Technology, Harbin 150090, China
Abstract
This paper presents a new numerical scheme for simulating stochastic processes specified by their marginal distribution functions and covariance functions. Stochastic samples are first generated so as to automatically satisfy the target marginal distribution functions. An iterative algorithm is then proposed to match the simulated covariance function of the samples to the target covariance function; only a few iterations are needed to converge to a required accuracy. Several explicit representations, based on Karhunen-Loève expansion and Polynomial Chaos expansion, are further developed to represent the obtained stochastic samples in series form. The proposed methods can be applied to non-gaussian and non-stationary stochastic processes, and three examples illustrate their accuracy and efficiency.
Keywords:
Stochastic samples, Non-gaussian, Non-stationary, Karhunen-Loève expansion, Polynomial Chaos expansion

∗ Corresponding author.
Email address: [email protected] (Zhibao Zheng)
Preprint submitted to Journal, August 11, 2020.

1. Introduction

With the wide development of uncertainty quantification theories and methods, stochastic problems involving uncertainties commonly arise in various fields of engineering, such as computational mechanics [1], financial analysis [2] and biomedical science [3]. A large number of these problems involve uncertain quantities which should be modeled as random processes or fields. On the one hand, assumptions regarding probabilistic distributions are made when experimental data are incomplete [4]. On the other hand, stochastic simulations are built from sufficient observation data. Thus, applications of stochastic process and field theories to engineering problems have gained considerable interest. In general, stochastic processes are assumed to be gaussian because of simplicity and the Central Limit Theorem. Since gaussian stochastic processes can be completely described by their second-order statistics [5], methods for simulating gaussian stochastic processes [6, 7, 8] have been quite well established. However, the gaussian assumption does not hold for physical phenomena that are markedly non-gaussian [9]. The difficulty in simulating non-gaussian stochastic processes is that all the joint distribution functions are needed to completely characterize the non-gaussian properties. The problem is further complicated when the process is also non-stationary, since the marginal distributions then depend on time or space. With these motivations, the efficient simulation of non-gaussian stochastic processes is of practical and theoretical importance.

Spectral representation was the first widely developed method for the simulation of non-gaussian stochastic processes [10]. This method is implemented in the frequency domain and was initially developed for gaussian stochastic processes [11].
It has been extended to non-gaussian stochastic processes by combining the spectral representation method with non-linear transformations [10], i.e., transforming gaussian stochastic samples generated by the spectral representation method into the non-gaussian stochastic process while matching the target power spectral density function and non-gaussian marginal distribution function. Extensive studies based on this method can be found in [12, 13, 14, 15]. Different from the spectral representation method, the Karhunen-Loève (KL) expansion [1, 16] is implemented in the time or space domain and is usually used in the simulation of stationary and non-stationary gaussian processes [17, 18, 19, 20]. Iterative algorithms for updating the non-gaussian expanded random variables are proposed in [21, 22] for the simulation of non-gaussian stochastic processes; the method can be applied to highly skewed non-gaussian marginal distribution functions. Hence, the KL expansion provides a unified and powerful framework for the simulation of stochastic processes, which is potentially capable of providing a better fit to non-gaussian and non-translational data [23]. Another important technique, Polynomial Chaos (PC) expansion, has also been developed for the simulation of non-gaussian and non-stationary stochastic processes and fields in [24, 25]. It represents the target stochastic process or field as a multidimensional Hermite polynomial chaos in a set of normalized gaussian random variables. The accuracy and efficiency of this method were further examined in [26, 27].

In this paper, we present numerical schemes for simulating non-gaussian and non-stationary stochastic processes that are specified by their covariance functions and non-gaussian marginal distribution functions. The basic idea is to first generate stochastic samples that satisfy the target marginal distribution functions, and then match the target covariance functions with a dedicated iterative algorithm.
In this way, the simulation of both gaussian and non-gaussian stochastic processes can be implemented in a unified framework, since the marginal distribution functions are automatically satisfied by the generated samples, and the accuracy and efficiency of the simulation depend only on matching the target covariance functions. Another advantage is that there is no difference in applying the proposed iterative algorithm to stationary and non-stationary stochastic processes. Thus, the proposed method can be considered a unified numerical scheme for simulating samples of stochastic processes. Further, it is usually not convenient to apply raw stochastic samples to practical problems. In this paper, we exploit the KL expansion for expanding the obtained stochastic samples, since the KL expansion is optimal among series expansion methods in the global mean square error with respect to the number of random variables in the representation. Thus, the proposed strategy is capable of representing stochastic processes with sufficient accuracy using as few random variables as possible. To meet the requirements of different practical problems, we also exploit the PC expansion and the KL-PC expansion (a combination of KL and PC expansions) to represent the obtained stochastic samples; their methodologies are similar to that of the KL expansion but rely on different expansions. The accuracies and efficiencies are demonstrated by several numerical examples. The proposed methods can be readily generalized to multi-dimensional random fields [25, 28, 29, 30, 31], but this is beyond the scope of this article and will be studied in subsequent papers.

The paper is organized as follows: a new algorithm for simulating stochastic samples is presented in Section 2, Section 3 develops several numerical algorithms for representing the obtained stochastic samples, and three illustrative examples are finally given in Section 4 to demonstrate the proposed algorithms.
2. Simulation of stochastic samples
Consider a stochastic process ω(x, θ), x ∈ D, specified by its covariance function C(x₁, x₂) and marginal distribution function F(y; x). In order to obtain stochastic samples that satisfy the target covariance function and marginal distribution function, we discretize the spatial domain as x = {x₁, · · · , x_n} and generate random variable samples {{η_i(θ_k)}_{k=1}^N}_{i=1}^n according to the target marginal distribution function F(y; x), where N is the number of samples and η_i(θ) = ω(x_i, θ), i = 1, · · · , n. Note that the random variables {η_i(θ)}_{i=1}^n automatically satisfy the target marginal distribution function, i.e., η_i(θ) ∼ F(y; x_i). However, the generated samples of {η_i(θ)}_{i=1}^n do not match the target covariance function C(x₁, x₂), since no constraints are imposed on the statistical correlations between the samples {η_i(θ_k)}_{k=1}^N and {η_j(θ_k)}_{k=1}^N. Thus, a dedicated algorithm is required to match the target covariance function C(x₁, x₂).

The statistical correlation between the random variables η_i(θ) and η_j(θ) is given by

T_ij = (1/(N−1)) Σ_{k=1}^N [η_i(θ_k) − η̄_i][η_j(θ_k) − η̄_j]   (1)

where η̄_i and η̄_j are the means of the random variables η_i(θ) and η_j(θ), respectively.
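Eq.(1) is the standard unbiased sample covariance between the i-th and j-th discretization points. A minimal numpy check (the array name `eta` and the gaussian test data are illustrative assumptions, not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 1000, 5                       # N sample realizations of n random variables
eta = rng.standard_normal((N, n))    # columns play the role of eta_i(theta_k)

# Eq.(1): unbiased sample correlation between eta_i and eta_j
i, j = 1, 3
T_ij = np.sum((eta[:, i] - eta[:, i].mean())
              * (eta[:, j] - eta[:, j].mean())) / (N - 1)

# identical to the (i, j) entry of numpy's sample covariance matrix
assert np.isclose(T_ij, np.cov(eta, rowvar=False)[i, j])
```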
Expanding Eq.(1) yields

T_ij = (1/(N−1)) Σ_{k=1}^N [η_i(θ_k)η_j(θ_k) − η_i(θ_k)η̄_j − η̄_i η_j(θ_k) + η̄_i η̄_j]
     = (1/(N−1)) [Σ_{k=1}^N η_i(θ_k)η_j(θ_k) − (Σ_{k=1}^N η_i(θ_k))((1/N) Σ_{k=1}^N η_j(θ_k)) − ((1/N) Σ_{k=1}^N η_i(θ_k))(Σ_{k=1}^N η_j(θ_k)) + N ((1/N) Σ_{k=1}^N η_i(θ_k))((1/N) Σ_{k=1}^N η_j(θ_k))]
     = (1/(N−1)) [Σ_{k=1}^N η_i(θ_k)η_j(θ_k) − (1/N)(Σ_{k=1}^N η_i(θ_k))(Σ_{k=1}^N η_j(θ_k))]
     = (1/(N−1)) Σ_{k=1}^N η_i(θ_k)η_j(θ_k) − (1/(N(N−1)))(Σ_{k=1}^N η_i(θ_k))(Σ_{k=1}^N η_j(θ_k))   (2)

By introducing the matrix Y and assembling the random samples, we have

Y = [{η₁(θ_k)}_{k=1}^N, · · · , {η_n(θ_k)}_{k=1}^N] =
  [ η₁(θ₁)   · · ·  η_n(θ₁)  ]
  [   ⋮               ⋮      ]
  [ η₁(θ_N)  · · ·  η_n(θ_N) ]   (3)

Then Eq.(2) can be rewritten in matrix form as

T = YᵀY/(N−1) − YᵀUUᵀY/(N(N−1))   (4)

where U = [1]_{N×1}, and T is the simulated covariance matrix of the random variable samples {{η_i(θ_k)}_{k=1}^N}_{i=1}^n.

Both the target covariance matrix C and the simulated covariance matrix T are symmetric and positive definite; thus Cholesky decompositions can be performed on C and T as

C = PᵀP   (5)

and

T = QᵀQ   (6)

where P and Q are upper triangular matrices.

In order to match the target covariance matrix C, a new random sample matrix Y′ is introduced as

Y′ = Y Q⁻¹ P   (7)

By the transformation in Eq.(7), the simulated covariance matrix T′ of Y′ matches the target covariance matrix C; the proof is as follows.

Proof.
The simulated covariance matrix T′ of the new random samples Y′ is

T′ = Y′ᵀY′/(N−1) − Y′ᵀUUᵀY′/(N(N−1))
   = PᵀQ⁻ᵀYᵀYQ⁻¹P/(N−1) − PᵀQ⁻ᵀYᵀUUᵀYQ⁻¹P/(N(N−1))
   = PᵀQ⁻ᵀ [YᵀY/(N−1) − YᵀUUᵀY/(N(N−1))] Q⁻¹P
   = PᵀQ⁻ᵀ T Q⁻¹P
   = PᵀP = C

It has to be noted that the sample realizations {η_i(θ_k)}_{k=1}^N in each column of Y′ in Eq.(7) differ from those in each column of Y in Eq.(3) due to the factor matrix Q⁻¹P. The realizations in each column of Y′ change with the updated factor matrix Q⁻¹P and no longer match the marginal distribution function F(y; x_i). The key observation is that re-ordering the sample realizations does not change the distributions of the random variables, but does change their statistical correlations, i.e., the simulated covariance matrix. Hence, we adopt the strategy of re-ordering the sample realizations {η_i(θ_k)}_{k=1}^N in each column of Y in Eq.(3) to follow the ranking of the realizations in the corresponding column of Y′ in Eq.(7). The target covariance matrix C is matched by repeating the procedure of Eq.(4), Eq.(6), Eq.(7) and the sample re-ordering.

The resulting procedure for simulating stochastic process samples is summarized in Algorithm 1 as follows.

Algorithm 1
Algorithm for simulating stochastic process samples
1: Discretize the spatial domain x = {x₁, · · · , x_i, · · · , x_n} and generate random variable samples {η_i(θ_k)}_{k=1}^N ∼ F(y; x_i), i = 1, · · · , n.
2: Compute the upper triangular matrix P by the Cholesky decomposition in Eq.(5).
3: repeat
4:   Compute the upper triangular matrix Q^(k−1) by Eq.(6), based on the simulated covariance matrix T^(k−1) obtained by Eq.(4).
5:   Compute Y′^(k−1) through Eq.(7) and re-order the samples in each column of Y following the ranking of the realizations in the corresponding column of Y′.
6:   Compute the simulated covariance matrix T^(k) by Eq.(4).
7: until ‖T^(k) − C‖ / ‖C‖ < ε

Algorithm 1 provides a simple and efficient framework to simulate stochastic process samples. Spatial discretization points and initial random variable samples are generated in Step 1. The number of random variables is equal to that of the spatial points, so it is not necessary to discretize excessively many spatial points. A Cholesky decomposition of the target covariance matrix C is performed in Step 2; its computational cost is negligible since the decomposition of C is computed only once. Steps 3 to 7 form an iterative loop to match the target covariance matrix C, and the computational cost of these steps is low since only Cholesky decompositions and sample re-orderings are involved. The convergence error in Step 7 can adopt the 2-norm or the infinity-norm (here identical to the 1-norm by symmetry), and we adopt the 2-norm in this paper. Note that Algorithm 1 can be applied to non-gaussian and non-stationary stochastic processes and can be readily generalized to high-dimensional random fields.
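Algorithm 1 can be sketched in a few lines of numpy. This is an illustrative implementation, not the authors' code: the function name, the `ppf` argument (inverse marginal CDF), and the default parameter values are all assumptions.

```python
import numpy as np

def simulate_samples(C, ppf, N=10_000, eps=5e-3, max_iter=50, seed=0):
    """Sketch of Algorithm 1. C: (n, n) target covariance matrix on the grid;
    ppf: inverse CDF of the target marginal distribution (hypothetical name)."""
    rng = np.random.default_rng(seed)
    n = C.shape[0]
    P = np.linalg.cholesky(C).T                 # Step 2: C = P^T P, P upper triangular
    Y = ppf(rng.random((N, n)))                 # Step 1: marginals hold by construction
    Y_sorted = np.sort(Y, axis=0)               # fixed column-wise order statistics
    for _ in range(max_iter):
        T = np.cov(Y, rowvar=False)             # simulated covariance, Eq.(4)
        if np.linalg.norm(T - C) / np.linalg.norm(C) < eps:
            break                               # Step 7 convergence test
        Q = np.linalg.cholesky(T).T             # Eq.(6): T = Q^T Q
        Yp = Y @ np.linalg.inv(Q) @ P           # Eq.(7): covariance of Y' equals C
        # re-order each column of Y to follow the ranking of the same column of Y',
        # which restores the marginals while keeping the improved correlations
        ranks = np.argsort(np.argsort(Yp, axis=0), axis=0)
        Y = np.take_along_axis(Y_sorted, ranks, axis=0)
    return Y
```

For instance, a zero-mean, unit-variance exponential marginal (a stand-in for the paper's beta marginal) is `ppf = lambda u: -np.log1p(-u) - 1.0`. The re-ordering step is in the spirit of rank-based (Iman-Conover type) reordering.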
3. Expansions of stochastic processes
Algorithm 1 provides an efficient procedure to simulate samples of stochastic processes. However, sample-based descriptions of stochastic processes are not suitable for subsequent applications in some cases [1, 4], and it is necessary to develop methods to represent the obtained samples of the stochastic process. In general, a stochastic process ω(x, θ) can be expressed as

ω(x, θ) = Σ_{i=0}^∞ ξ_i(θ) f_i(x)   (8)

where {ξ_i(θ)}_{i=0}^∞ and {f_i(x)}_{i=0}^∞ are a set of random variables and deterministic functions, respectively. In practice, Eq.(8) is truncated at term M as

ω(x, θ) = Σ_{i=0}^M ξ_i(θ) f_i(x)   (9)

There are three unknown 'variables' in Eq.(9), i.e., the stochastic process ω(x, θ), the random variables {ξ_i(θ)}_{i=0}^M and the deterministic functions {f_i(x)}_{i=0}^M; Eq.(9) is determined once any two of them are available. In this context, only a set of random variables {ξ_i(θ)}_{i=0}^M or deterministic functions {f_i(x)}_{i=0}^M needs to be determined, since samples of the stochastic process ω(x, θ) have been obtained by Algorithm 1. Three strategies are developed to simulate stochastic processes through Eq.(9): (i) select a set of deterministic basis functions {f_i(x)}_{i=0}^M, then compute the random variables {ξ_i(θ)}_{i=0}^M; (ii) select a set of random variables {ξ_i(θ)}_{i=0}^M, then compute the deterministic functions {f_i(x)}_{i=0}^M; (iii) select both a set of deterministic basis functions {f_i(x)}_{i=0}^M and random variables {ξ_i(θ)}_{i=0}^M, then compute the unknown projection coefficients. We develop the corresponding algorithms based on these strategies in Section 3.1, Section 3.2 and Section 3.3, respectively.
In order to obtain the deterministic basis {f_i(x)}_{i=0}^M in Strategy (i), we adopt the Karhunen-Loève (KL) expansion due to its minimum mean-square error property. The KL expansion is a special case of Eq.(8) and is based on a spectral decomposition of the covariance function C(x₁, x₂) of the stochastic process ω(x, θ), with the form

ω(x, θ) = ω̄(x) + Σ_{i=1}^M ξ_i(θ) √λ_i f_i(x)   (10)

where ω̄(x) is the mean function of the stochastic process ω(x, θ), M is the number of terms of the KL expansion, and {ξ_i(θ)}_{i=1}^M is a set of uncorrelated random variables with zero mean and unit variance, i.e.,

E{ξ_i(θ)} = 0,  E{ξ_i(θ)ξ_j(θ)} = δ_ij   (11)

given by

ξ_i(θ) = (1/√λ_i) ∫_D [ω(x, θ) − ω̄(x)] f_i(x) dx   (12)

where {λ_i} and {f_i(x)} are the eigenvalues and eigenfunctions of the covariance function C(x₁, x₂), obtained by solving the following homogeneous Fredholm integral equation of the second kind

∫_D C(x₁, x₂) f_i(x₂) dx₂ = λ_i f_i(x₁)   (13)

whose eigenfunctions satisfy

∫_D f_i(x) f_j(x) dx = δ_ij   (14)

The solution of Eq.(13) can be determined numerically for problems of practical interest. It is known that, for fixed M, the resulting random process approximation ω(x, θ) is optimal among series expansion methods with respect to the global mean square error [1]. If the stochastic process ω(x, θ) is gaussian, then {ξ_i(θ)}_{i=1}^M are independent standard gaussian random variables. For non-gaussian stochastic processes, however, {ξ_i(θ)}_{i=1}^M are generally non-gaussian and there is no general method for determining them. In order to determine their distributions, Eq.(12) has to be solved. Here we adopt a sample-based method to compute the samples of the random variables {ξ_i(θ)}_{i=1}^M as

ξ_i(θ_k) = (1/√λ_i) ∫_D [ω(x, θ_k) − ω̄(x)] f_i(x) dx,  k = 1, · · · , N   (15)

which requires only N deterministic integrals and has very low computational cost.
The above method is summarized in Algorithm 2 as

Algorithm 2: Algorithm based on stochastic samples and KL expansion
1: Generate samples of the stochastic process by use of Algorithm 1.
2: Solve for the set of deterministic basis functions {f_i(x)}_{i=1}^M by Eq.(13).
3: Compute the random variable samples {{ξ_i(θ_k)}_{k=1}^N}_{i=1}^M by Eq.(15).

Algorithm 2 provides an efficient method to expand the stochastic samples obtained from Algorithm 1. The deterministic basis functions {f_i(x)}_{i=1}^M are solved by use of the KL expansion, thus guaranteeing the minimum number of expansion terms M. The random variables {ξ_i(θ)}_{i=1}^M are described by samples and can be computed at very low computational cost.

For the
Strategy (ii), it is not easy to select the random variables {ξ_i(θ)}_{i=1}^M in Eq.(9) directly; here we utilize the Polynomial Chaos (PC) expansion to represent the random part and then compute the deterministic coefficient functions {f_i(x)}. A general representation of the stochastic process ω(x, θ) in PC expansion has the form

ω(x, θ) = ω̄(x) + Σ_{i=1}^P Γ_i(θ) f_i(x)   (16)

where ω̄(x) is the mean function of the stochastic process ω(x, θ) and {Γ_i(θ)}_{i=1}^P are Polynomial Chaos basis functions, which can be generated by the Rodrigues formula [32]. The deterministic coefficient functions {f_i(x)}_{i=1}^P can be obtained by

f_i(x) = E{[ω(x, θ) − ω̄(x)] Γ_i(θ)} / E{Γ_i²(θ)}   (17)

where E{·} is the expectation operator. The above method is summarized in Algorithm 3 as

Algorithm 3: Algorithm based on stochastic samples and PC expansion
1: Generate samples of the stochastic process by use of Algorithm 1.
2: Choose standard random variables {γ_i(θ)}_{i=1}^{M_γ}.
3: Generate the PC basis {Γ_i(θ)}_{i=1}^P of the random variables {γ_i(θ)}_{i=1}^{M_γ}.
4: Compute {f_i(x)}_{i=1}^P by Eq.(17).

There are various choices for the random variables γ_i(θ) in Step 2, such as gaussian random variables and uniform random variables; the corresponding PC bases are the Hermite Polynomial Chaos basis [1] and the Generalized Polynomial Chaos basis [1, 32], respectively. Further, Algorithm 3 provides a suitable method to simulate stochastic processes when Eq.(13) is not easy to solve. However, the computational efficiency decreases if the number M_γ of random variables γ_i(θ) or the order of the PC basis is too large. A natural idea is to combine the methods of Section 3.1 and Section 3.2 for the
Strategy (iii). Hence, we choose the deterministic functions {f_i(x)}_{i=1}^M obtained by Eq.(13) and express the random variables ξ_i(θ) in Eq.(10) by use of the PC expansion as

ξ_i(θ) = Σ_{j=1}^P c_ij Γ_j(θ)   (18)

Substituting Eq.(18) into Eq.(10) yields

ω(x, θ) = ω̄(x) + Σ_{i=1}^M Σ_{j=1}^P c_ij Γ_j(θ) √λ_i f_i(x)   (19)

Hence, the unknown projection coefficients c_ij can be computed by

c_ij = (1/(√λ_i E{Γ_j²(θ)})) ∫_D E{[ω(x, θ) − ω̄(x)] Γ_j(θ)} f_i(x) dx   (20)

The above method is summarized in Algorithm 4 as

Algorithm 4
Algorithm based on stochastic samples and KL-PC expansion
1: Generate samples of the stochastic process by use of Algorithm 1.
2: Solve for the set of deterministic basis functions {f_i(x)}_{i=1}^M by Eq.(13).
3: Choose standard random variables {γ_i(θ)}_{i=1}^{M_γ}.
4: Generate the PC basis {Γ_i(θ)}_{i=1}^P of the random variables {γ_i(θ)}_{i=1}^{M_γ}.
5: Compute the coefficients {c_ij} by Eq.(20).

Algorithm 4 combines the KL and PC expansions, since the PC expansion provides an explicit representation of the random variables {ξ_i(θ)}_{i=1}^M obtained in Algorithm 2. Algorithm 4 is a good choice when sample-based descriptions are not convenient for some practical problems. Further, stochastic processes may be described directly by real samples instead of being specified by their covariance functions and marginal probability distributions. In that case, real samples can be used instead of simulated samples in Step 1 of Algorithm 2, Algorithm 3 and Algorithm 4, and Algorithm 1 is no longer necessary.

This section provides three numerical algorithms to expand the stochastic samples obtained by Algorithm 1. Different algorithms can be chosen according to the properties of practical problems, and the performances of the proposed algorithms are shown in the next section.

4. Numerical examples

Three numerical examples, namely a non-gaussian and stationary stochastic process, a non-gaussian and non-stationary stochastic process and a strongly non-gaussian and non-stationary stochastic process, are used to demonstrate the accuracies and efficiencies of the proposed algorithms. The performances of Algorithm 2, Algorithm 3 and Algorithm 4 are examined in Example 4.1, while only Algorithm 2 is used to represent the stochastic samples in Example 4.2 and Example 4.3. Since the simulated marginal distribution function automatically matches the target one, the accuracies of the proposed methods are validated by comparing the simulated covariance function with the exact one.
For all examples, the spatial domains are discretized into 50 points, the convergence tolerance is set to 5 × 10⁻³ and 1 × 10⁴ initial samples are generated for each random variable.

Example 4.1. Consider a stochastic process with beta marginal distribution function [21]

F(y; p, q) = [Γ(p + q)/(Γ(p)Γ(q))] ∫₀ᵘ z^{p−1}(1 − z)^{q−1} dz   (21)

and squared exponential covariance function

C(x₁, x₂) = e^{−(x₁−x₂)²}   (22)

In Eq.(21), Γ(·) is the gamma function and the parameters are given by

u = (y − y_min)/(y_max − y_min),  p = 4,  q = 2   (23)

The expectation function and variance function of the beta distribution in Eq.(21) are

μ_F(x) = (y_max − y_min) p/(p + q) + y_min,
σ²_F(x) = (y_max − y_min)² pq/[(p + q)²(p + q + 1)]   (24)

According to Eq.(22), we have σ²_F(x) = 1. Letting μ_F(x) = 0 and solving Eq.(24) yields

y_min = −√14 ≈ −3.7417,  y_max = √3.5 ≈ 1.8708

Fig.1 shows the iterations of the simulated covariance functions {T^(k)}, where T^(0) is the covariance function of the initial stochastic samples. It indicates that a rough approximation of the target covariance function is already obtained after a single iteration. The convergence error of each iteration is shown in Fig.2, which demonstrates that Algorithm 1 has good convergence and high accuracy.

Figure 1: Iterations of simulated covariance functions {T^(k)}.

Figure 2: Iterative errors.
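The bounds y_min and y_max above follow from matching the beta moments in Eq.(24) to zero mean and unit variance; a quick check of the arithmetic:

```python
import math

p, q = 4, 2
mean_u = p / (p + q)                            # 2/3
var_u = p * q / ((p + q) ** 2 * (p + q + 1))    # 2/63
# y = y_min + (y_max - y_min) u; require mean 0 and variance 1 via Eq.(24)
span = 1 / math.sqrt(var_u)                     # y_max - y_min = sqrt(63/2)
y_min = -mean_u * span                          # = -sqrt(14) ~ -3.7417
y_max = y_min + span                            # = sqrt(3.5) ~  1.8708
```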
In order to verify the performance of the proposed methods, we utilize Algorithm 2, Algorithm 3 and Algorithm 4 to represent the obtained stochastic samples, respectively. For Algorithm 2, Fig.3 shows the first four eigenfunctions and eigenvalues of C(x₁, x₂) obtained by Eq.(13).

Figure 3: Eigenfunctions (left) and eigenvalues (right) of covariance C(x₁, x₂).

Fig.4 shows the cumulative distribution functions (CDFs) of the random variables {ξ_i(θ)}_{i=1}⁴ computed by Eq.(15), and Table 1 indicates that they are uncorrelated random variables, which is consistent with the theory of the KL expansion.

Figure 4: CDFs of random variables {ξ_i(θ)}_{i=1}⁴.

Table 1: Statistical correlations between ξ_i(θ) and ξ_j(θ)
E{ξ_i ξ_j}   ξ₁(θ)    ξ₂(θ)    ξ₃(θ)    ξ₄(θ)
ξ₁(θ)        1.0005
ξ₂(θ)        0.0004   0.9995            sym.
ξ₃(θ)        0.0042   0.0016   0.9927
ξ₄(θ)        0.0272   0.0087   −        −

Figure 5: Comparisons between target, sample and KL-simulated covariance.
For Algorithm 3, we adopt a 2nd-order Hermite Polynomial Chaos basis; the total number of terms P is 14. Fig.6 shows the coefficient functions {f_i(x)}_{i=1}^P obtained by Eq.(17).

Figure 6: Coefficient functions {f_i(x)} of the PC expansion.

Comparisons between the target, sample and PC-simulated covariances (the corresponding legends of relative errors are the same as in Fig.5) are shown in Fig.7, which demonstrates the high accuracy of the proposed Algorithm 3.
Figure 7: Comparisons between target, sample and PC-simulated covariance.

For Algorithm 4, Table 2 lists the coefficients c_ij obtained by Eq.(20), and comparisons between the target, sample and KL-PC-simulated covariances (the corresponding legends of relative errors are the same as in Fig.5) are shown in Fig.8, which demonstrates the high accuracy of the proposed Algorithm 4.

Table 2: Coefficients c_ij of the KL-PC expansion.

Figure 8: Comparison between target, sample and KL-PC-simulated covariance.

Example 4.2. Consider the stochastic process with the beta marginal distribution function in Eq.(21) and the Brownian-bridge covariance function [21]

C(x₁, x₂) = min(x₁, x₂) − x₁x₂   (25)

According to Eq.(25), we obtain σ²_F(x) = C(x, x) = x − x². Letting μ_F(x) = 0 and solving Eq.(24) yields

y_min = −√(14(x − x²)),  y_max = √(3.5(x − x²))   (26)

Fig.9 shows the iterations of the sample covariance functions {T^(k)} and Fig.10 shows the convergence error of each iteration, which once again demonstrate the good convergence and the high accuracy of Algorithm 1.

Figure 9: Iterations of sample covariance functions {T^(k)}.

Figure 10: Iterative errors.
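For the non-stationary case, the bounds in Eq.(26) depend on x. The following sketch checks that they reproduce zero mean and the Brownian-bridge variance x − x² through the beta moments of Eq.(24):

```python
import numpy as np

x = np.linspace(0.01, 0.99, 50)
var = x - x ** 2                    # Brownian-bridge variance C(x, x) = x - x^2
y_min = -np.sqrt(14 * var)          # Eq.(26)
y_max = np.sqrt(3.5 * var)

# moment check for the scaled beta(4, 2) marginal: mean 0, variance x - x^2
mean = (y_max - y_min) * (2 / 3) + y_min
assert np.allclose(mean, 0)
assert np.allclose((y_max - y_min) ** 2 * (2 / 63), var)
```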
Example 4.1 validated Algorithm 2, Algorithm 3 and Algorithm 4; here we only consider using Algorithm 2 to expand the stochastic samples obtained from Algorithm 1. According to Eq.(13), the analytical eigenfunctions and eigenvalues of C(x₁, x₂) are

f_k(x) = √2 sin(kπx),  λ_k = 1/(kπ)²,  k = 1, 2, · · ·   (27)

The first six eigenfunctions and eigenvalues are shown in Fig.11.

Figure 11: Eigenfunctions (left) and eigenvalues (right) of covariance C(x₁, x₂).

Fig.12 shows the CDFs of the random variables {ξ_i(θ)}_{i=1}⁶, and they are uncorrelated as shown in Table 3, which is once again consistent with the theory of the KL expansion. Fig.13 shows comparisons between the target, sample and KL-simulated covariances (the corresponding legends of relative errors are the same as in Fig.5), which verify the applicability of the proposed Algorithm 2 to non-gaussian and non-stationary stochastic processes.

Figure 12: CDFs of random variables {ξ_i(θ)}_{i=1}⁶.

Table 3: Statistical correlations between ξ_i(θ) and ξ_j(θ).

Figure 13: Comparisons between target, sample and KL-simulated covariance.

Example 4.3. Consider a stochastic process with the shifted lognormal marginal distribution proposed in [21]

F(y; μ, δ, σ) = Φ([ln(y − δ(x)) − μ(x)] / σ)   (28)

and covariance function

C(x₁, x₂) = e^{−(x₁+x₂)−|x₁−x₂|}   (29)

The expectation function and variance function of the shifted lognormal distribution in Eq.(28) are

μ_F(x) = δ(x) + e^{μ(x)+σ²/2},
σ²_F(x) = (e^{σ²} − 1) e^{2μ(x)+σ²}   (30)

According to Eq.(29), the variance function is σ²_F(x) = C(x, x) = e^{−2x}. Letting μ_F(x) = 0, σ = 1 and solving Eq.(30) yields

μ(x) = −x − ln√(e(e − 1)) ≈ −x − 0.7707,
δ(x) = −e^{−x}/√(e − 1) ≈ −0.7629 e^{−x}

Fig.14 shows the iterations of the sample covariance functions {T^(k)}, and the corresponding convergence error of each iteration is shown in Fig.15. The good convergence of Algorithm 1 for strongly non-gaussian and non-stationary stochastic processes is demonstrated.

Figure 14: Iterations of sample covariance functions {T^(k)}.

Figure 15: Iterative errors.
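The parameters μ(x) and δ(x) above follow from solving Eq.(30) for zero mean and variance e^{−2x} with σ = 1; a numerical check (the helper function is illustrative only):

```python
import math

def shifted_lognormal_moments(mu, delta, sigma=1.0):
    """Mean and variance of the shifted lognormal distribution, Eq.(30)."""
    mean = delta + math.exp(mu + sigma ** 2 / 2)
    var = (math.exp(sigma ** 2) - 1) * math.exp(2 * mu + sigma ** 2)
    return mean, var

for x in (0.0, 0.5, 1.0):
    mu = -x - math.log(math.sqrt(math.e * (math.e - 1)))   # ~ -x - 0.7707
    delta = -math.exp(-x) / math.sqrt(math.e - 1)          # ~ -0.7629 e^{-x}
    mean, var = shifted_lognormal_moments(mu, delta)
    assert abs(mean) < 1e-12 and abs(var - math.exp(-2 * x)) < 1e-12
```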
According to Eq.(13), the first six eigenfunctions and eigenvalues of C(x₁, x₂) are shown in Fig.16.

Figure 16: Eigenfunctions (left) and eigenvalues (right) of covariance C(x₁, x₂).

Fig.17 shows the CDFs of the random variables {ξ_i(θ)}_{i=1}⁶, and their uncorrelated properties are shown in Table 4. Fig.18 shows comparisons between the target, sample and KL-simulated covariances (the corresponding legends of relative errors are the same as in Fig.5), which verify the applicability of the proposed Algorithm 2 to strongly non-gaussian and non-stationary stochastic processes.

Figure 17: CDFs of random variables {ξ_i(θ)}_{i=1}⁶.

Table 4: Statistical correlations between ξ_i(θ) and ξ_j(θ).

Figure 18: Comparisons between target, sample and KL-simulated covariance.
5. Conclusion
In this paper, efficient numerical schemes have been presented for simulating non-gaussian and non-stationary stochastic processes specified by covariance functions and marginal distribution functions. In order to simulate samples of the target stochastic process, stochastic samples automatically matching the target marginal distribution function are first generated, and an iterative algorithm is proposed to match the target covariance function by re-ordering the initial stochastic samples. Three numerical examples demonstrate the fast convergence and the high accuracy of the proposed algorithm. In order to overcome the difficulty that sample-based descriptions are not convenient for subsequent stochastic analysis, three numerical algorithms are developed to represent the obtained stochastic samples based on the KL expansion and the PC expansion. Different algorithms can be used for different problems of practical interest, and the performances of the developed algorithms are indicated by the numerical examples. All proposed algorithms can be readily extended to multi-dimensional random fields, which will be shown in subsequent research.
Acknowledgments
This research was supported by the National Natural Science Foundationof China (Project 11972009). This support is gratefully acknowledged.