Asymptotic expansion for the transition densities of stochastic differential equations driven by the gamma processes
arXiv preprint [q-fin.CP]
Fan Jiang ∗ Xin Zang † Jingping Yang ‡ March 16, 2020
Abstract
In this paper, enlightened by the asymptotic expansion methodology developed by Li (2013b) and Li and Chen (2016), we propose a Taylor-type approximation for the transition densities of the stochastic differential equations (SDEs) driven by the gamma processes, a special type of Lévy processes. After representing the transition density as a conditional expectation of the Dirac delta function acting on the solution of the related SDE, the key technical method for calculating the expectation of multiple stochastic integrals conditional on the gamma process is presented. To numerically test the efficiency of our method, we examine the pure jump Ornstein–Uhlenbeck (OU) model and its extensions to two jump-diffusion models. For each model, the maximum relative error between our approximated transition density and the benchmark density obtained by the inverse Fourier transform of the characteristic function is sufficiently small, which shows the efficiency of our approximation method.
It is known that Lévy-driven stochastic differential equations (SDEs) have been discussed in detail (Applebaum, 2009; Kunita, 2019; Kohatsu-Higa and Takeuchi, 2019). The jump-diffusion SDE driven by the gamma process, as one important type of the Lévy-driven SDEs, has been widely used in finance. For instance, the Ornstein–Uhlenbeck (OU) type SDEs driven by the gamma processes were applied for modeling the short rate (Eberlein et al., 2013) and the returns of the S&P 500 index (James et al., 2017). The various sensitivity indices for the asset price dynamics driven by the gamma processes were discussed in Kawai and Takeuchi (2010) and Kawai and Takeuchi (2011). Note that the gamma process is a pure-jump increasing Lévy process (Yor, 2007; Cont and Tankov, 2004).

∗ School of Mathematical Sciences, Peking University, China. Email: jiangf [email protected]
† Corresponding author. School of Mathematical Sciences, Peking University, China. Email: [email protected]
‡ School of Mathematical Sciences, Peking University, China. Email: [email protected]
2.1 Preliminaries on the Dirac delta function

Before introducing our asymptotic expansion methodology, we first give a brief introduction of the Dirac delta function. Please refer to Kanwal (2004), Hayashi (2008), Ishikawa (2013) and Kunita (2019) for more details.

For ease of exposition later, we introduce the following notations and concepts. Denote by $\mathcal{S}'(\mathbb{R})$ the set of all real-valued tempered distributions. According to Section 6.2 in Kanwal (2004), the Dirac delta function, denoted as $\delta(\cdot)$, and its associated derivative operators $\frac{d^{\ell}\delta(\cdot)}{dx^{\ell}}$ for $\ell \ge 1$ belong to $\mathcal{S}'(\mathbb{R})$. Here, for each $\ell \ge 1$, the derivative operator $\frac{d^{\ell}\delta(\cdot)}{dx^{\ell}}$ is defined through an inner product with an infinitely differentiable function $f(\cdot)$ with compact support on $\mathbb{R}$, i.e.,
$$\left\langle \frac{d^{\ell}\delta(x-y)}{dx^{\ell}},\, f(x) \right\rangle_x = (-1)^{\ell}\left\langle \delta(x-y),\, \frac{d^{\ell}f}{dx^{\ell}}(x) \right\rangle_x \tag{1}$$
for a fixed $y \in \mathbb{R}$, where the inner product $\langle f(x), g(x)\rangle_x := \int_{-\infty}^{+\infty} f(x)g(x)\,dx$; see Section 2.6 in Kanwal (2004) for more details.

Let $\mathbf{D}_{\infty}(\mathbb{R})$ be the set of all real-valued smooth Wiener-Poisson functionals and $\mathbf{D}'_{\infty}(\mathbb{R})$ be the set of all real-valued generalized Wiener-Poisson functionals (Kunita, 2019). According to Theorem 5.12.1 and equation (5.175) in Kunita (2019), for a tempered distribution $\Phi \in \mathcal{S}'(\mathbb{R})$, a regular nondegenerate Wiener-Poisson functional $F \in \mathbf{D}_{\infty}(\mathbb{R})$ and a smooth Wiener-Poisson functional $G \in \mathbf{D}_{\infty}(\mathbb{R})$, the generalized expectation $\mathbf{E}[\cdot]$ is defined as
$$\mathbf{E}[\Phi(F)\cdot G] := \frac{1}{2\pi}\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} e^{-ivx}\,\Phi(x)\,E\!\left[G\,e^{ivF}\right]dx\,dv, \tag{2}$$
where $F$ and $G$ can be treated as random variables on the Wiener-Poisson space and the expectation on the right-hand side of (2) is the usual expectation. Hereafter, the notations $\mathbf{E}[\cdot]$ and $E[\cdot]$ represent the generalized expectation and the usual expectation, respectively.

For a fixed $y \in \mathbb{R}$, when we take $\Phi(\cdot) = \delta(\cdot - y)$ and $G \equiv 1$, noting that
$$\int_{-\infty}^{+\infty} e^{-ivx}\,\delta(x-y)\,dx = e^{-ivy}, \tag{3}$$
we obtain that
$$\mathbf{E}[\delta(F-y)] = \frac{1}{2\pi}\int_{-\infty}^{+\infty} e^{-ivy}\,E\!\left[e^{ivF}\right]dv. \tag{4}$$
Moreover, the regular nondegenerate Wiener-Poisson functional $F$ in (4) can also be taken as the strong solution of a homogeneous jump-diffusion SDE satisfying the nondegenerate bounded (NDB) condition; see Sections 3.5–3.6 in Ishikawa (2013) for more details. Especially, taking $F \equiv 0$ in (4) leads to the formal identity
$$\delta(y) = \frac{1}{2\pi}\int_{-\infty}^{+\infty} e^{ivy}\,dv. \tag{5}$$

Given a parameter $\epsilon > 0$, let $F(\epsilon)$ be a regular Wiener-Poisson functional in $\mathbf{D}_{\infty}(\mathbb{R})$. For example, $F(\epsilon)$ can be taken as the strong solution of some jump-diffusion SDE with a parameter $\epsilon > 0$. In the remainder of this section, for a fixed $y \in \mathbb{R}$, we introduce some theoretical results about the asymptotic expansion of $\delta(F(\epsilon)-y)$ with respect to the parameter $\epsilon$ (see Section 4.1 of Ishikawa (2013)). We assume that the functional $F(\epsilon)$ satisfies the uniformly nondegenerate condition (Definition 4.1 in Ishikawa (2013)) and has an expansion
$$F(\epsilon) = \sum_{j=0}^{\infty} f_j\,\epsilon^{j} \tag{6}$$
with respect to the norm in $\mathbf{D}_{\infty}(\mathbb{R})$, where $f_0, f_1, f_2, \ldots$ are smooth Wiener-Poisson functionals. According to Theorem 4.1 of Ishikawa (2013), for each fixed $y \in \mathbb{R}$, $\delta(F(\epsilon)-y)$ belongs to $\mathbf{D}'_{\infty}(\mathbb{R})$ and has an asymptotic expansion
$$\delta(F(\epsilon)-y) = \sum_{m=0}^{M} \Phi_m(y)\,\epsilon^{m} + O(\epsilon^{M+1}) \tag{7}$$
with respect to the norm in $\mathbf{D}'_{\infty}(\mathbb{R})$, where $M \in \mathbb{N}$ denotes an arbitrary order of the expansion. Given the functionals $\{f_0, f_1, f_2, \ldots\}$ defined by (6), the coefficients $\Phi_m(y) \in \mathbf{D}'_{\infty}(\mathbb{R})$ are given by $\Phi_0(y) = \delta(f_0 - y)$ and
$$\Phi_m(y) = \sum_{(\ell,(j_1,j_2,\cdots,j_{\ell})) \in \mathcal{S}_m} \frac{1}{\ell!}\,\frac{d^{\ell}\delta(f_0-y)}{dx^{\ell}}\prod_{i=1}^{\ell} f_{j_i} \quad \text{for } m \ge 1, \tag{8}$$
where the index set $\mathcal{S}_m$ is defined as
$$\mathcal{S}_m := \left\{(\ell, j(\ell)) \,\middle|\, \ell = 1,2,\ldots,m,\ j(\ell) = (j_1, j_2, \ldots, j_{\ell}) \text{ with } j_1, j_2, \ldots, j_{\ell} \ge 1 \text{ and } j_1 + j_2 + \cdots + j_{\ell} = m\right\}. \tag{9}$$
For example, the coefficients $\Phi_1(y)$ and $\Phi_2(y)$ are given by
$$\Phi_1(y) = f_1\,\frac{d\delta(f_0-y)}{dx} \quad \text{and} \quad \Phi_2(y) = f_2\,\frac{d\delta(f_0-y)}{dx} + \frac{1}{2}\,f_1^2\,\frac{d^2\delta(f_0-y)}{dx^2}.$$
According to Section 4 in Ishikawa (2013), by taking the generalized expectation defined in (2) on both sides of equation (7), we can obtain that
$$\mathbf{E}[\delta(F(\epsilon)-y)] = \sum_{m=0}^{M} \mathbf{E}[\Phi_m(y)]\,\epsilon^{m} + O(\epsilon^{M+1}). \tag{10}$$
For example, the terms $\mathbf{E}[\Phi_0(y)]$ and $\mathbf{E}[\Phi_1(y)]$ in (10) can be evaluated via (1), (2) and (3) as
$$\mathbf{E}[\Phi_0(y)] = \mathbf{E}[\delta(f_0-y)] = \frac{1}{2\pi}\int_{-\infty}^{+\infty} e^{-ivy}\,E[e^{ivf_0}]\,dv$$
and
$$\mathbf{E}[\Phi_1(y)] = \mathbf{E}\left[f_1\,\frac{d\delta(f_0-y)}{dx}\right] = \frac{1}{2\pi}\int_{-\infty}^{+\infty} iv\,e^{-ivy}\,E[f_1 e^{ivf_0}]\,dv.$$
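Formula (4) is the computational backbone of everything that follows: it recovers a density from a characteristic function. As a quick sanity check, which is our own illustration rather than part of the paper's derivation, the following sketch inverts the characteristic function of a standard normal variable by trapezoidal quadrature and compares the result to the exact density:

```python
import numpy as np

def density_from_cf(cf, y, v_max=40.0, n=200001):
    """Recover p(y) = (1/(2*pi)) * Int e^{-i v y} E[e^{i v F}] dv, as in (4), by the trapezoidal rule."""
    v, dv = np.linspace(-v_max, v_max, n, retstep=True)
    f = np.exp(-1j * v * y) * cf(v)
    return ((f[:-1] + f[1:]).sum() * dv / 2.0).real / (2.0 * np.pi)

# Characteristic function of F ~ N(0,1); the inversion should return the standard normal density.
p_hat = density_from_cf(lambda v: np.exp(-0.5 * v * v), y=0.7)
p_exact = np.exp(-0.5 * 0.7**2) / np.sqrt(2.0 * np.pi)
```

The same routine, fed with the conditional characteristic function of a model, is what produces the benchmark densities mentioned in the abstract.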
In the subsequent calculation of the expansion terms for the transition density, by using equation (2), we will transform specific generalized expectations like $\mathbf{E}[\Phi_m(y)]$ into usual expectations.

2.2 The model setup

In this paper, we consider the following homogeneous jump-diffusion SDE driven by a gamma process:
$$dX(t) = \mu(X(t);\theta)\,dt + \sigma(X(t);\theta)\,dW(t) + dL(t), \quad X(0) = x_0, \tag{11}$$
where the functions $\mu(x;\theta)$ and $\sigma(x;\theta)$ are assumed to depend on some parameter vector $\theta$ belonging to an open bounded set $\Theta$, $\{W(t), t \ge 0\}$ is a Brownian motion and $\{L(t), t \ge 0\}$ is a gamma process. Moreover, we assume that the gamma process $\{L(t), t \ge 0\}$ starts at $L(0) = 0$ with the density function
$$p_{L(t)}(x) = \frac{b^{at}\,x^{at-1}\,e^{-bx}}{\Gamma(at)}, \quad x \ge 0, \tag{12}$$
for $t > 0$, where $a$ and $b$ are positive constants and $\Gamma(\cdot)$ denotes the gamma function. The two processes $\{W(t), t \ge 0\}$ and $\{L(t), t \ge 0\}$ are independent. The characteristic function of the gamma process $L(t)$ is calculated as
$$E\,e^{i\lambda L(t)} = \left(1 - \frac{i\lambda}{b}\right)^{-at} =: \exp[t\,\psi_L(\lambda)], \tag{13}$$
where $\psi_L(\lambda) = -a\log(1 - i\lambda/b)$ is the characteristic exponent of $L(t)$.

To guarantee the existence and uniqueness of the strong solution $X(t)$ of SDE (11) and to obtain other desirable properties for implementing our method, the following standard and technical assumptions are made in this paper:

Assumption 1. The diffusion function $\sigma(x;\theta)$ satisfies $\inf_{x \in \mathbb{R}} \sigma(x;\theta) > 0$ for any $\theta \in \Theta$.

Assumption 2. For each $k \in \mathbb{N}_+$, the $k$-th order partial derivatives in $x$ of $\mu(x;\theta)$ and $\sigma(x;\theta)$ are uniformly bounded for any $(x,\theta) \in \mathbb{R} \times \Theta$.

Assumption 3. The functions $\mu(x;\theta)$ and $\sigma(x;\theta)$ satisfy the linear growth conditions $|\mu(x;\theta)| \le c_1(1+|x|)$ and $|\sigma(x;\theta)| \le c_2(1+|x|)$ for some $c_1, c_2 \in \mathbb{R}_+$ and any $(x,\theta) \in \mathbb{R} \times \Theta$.

Assumption 1 and Assumption 2 guarantee the NDB condition and the uniformly nondegenerate condition for justifying the validity and convergence of our proposed asymptotic expansion method, which will be shown in Section 2.3. Assumption 2 and Assumption 3 guarantee the existence and uniqueness of the strong solution $X(t)$ of SDE (11).

2.3 The closed-form expansion of the transition density

For the jump-diffusion SDE (11), by its time-homogeneity, the transition density of $X(t+\Delta)$ given $X(t) = x_0$ can be expressed as
$$P(X(t+\Delta) \in dx \mid X(t) = x_0; \theta) = p_{X(\Delta)}(x \mid x_0; \theta)\,dx, \tag{14}$$
where $\Delta$ denotes the time interval. For most SDEs defined in (11), the transition densities do not admit closed-form expressions. Even for some special cases with closed-form conditional characteristic functions, the inversion to transition densities may not be easy, especially for pure jump processes (Barndorff-Nielsen et al., 2001; Schoutens, 2003). In the following, we propose a closed-form expansion for approximating the transition density $p_{X(\Delta)}(x \mid x_0; \theta)$ of the jump-diffusion SDE (11).

To start with, we parameterize the dynamics of $X(t)$ in (11) via a parameter $\epsilon \in (0,1]$ as
$$dX(\epsilon,t) = \epsilon\left[\mu(X(\epsilon,t);\theta)\,dt + \sigma(X(\epsilon,t);\theta)\,dW(t) + dL(t)\right], \quad X(\epsilon,0) = x_0. \tag{15}$$
Note that the solution $X(\epsilon,t)$ of (15) satisfies $X(\epsilon,t)|_{\epsilon=1} = X(t)$. By regarding $\epsilon \in (0,1]$ as an extra element of the parameter vector, we see that the SDE (15) still satisfies Assumption 2 and Assumption 3, which implies the existence and uniqueness of the strong solution $X(\epsilon,t)$ (Platen and Bruti-Liberati, 2010). The transition density of $X(\epsilon,t)$ in (15) can be expressed as
$$P(X(\epsilon,t+\Delta) \in dx \mid X(\epsilon,t) = x_0; \theta) = p_{X(\epsilon,\Delta)}(x \mid x_0;\theta)\,dx. \tag{16}$$
Once we obtain an asymptotic expansion of $p_{X(\epsilon,\Delta)}(x \mid x_0;\theta)$ as a series of $\epsilon$, the transition density $p_{X(\Delta)}(x \mid x_0;\theta)$ in (14) can be obtained by letting $\epsilon = 1$.

To derive the asymptotic expansion of $p_{X(\epsilon,\Delta)}(x\mid x_0;\theta)$ in (16), we first claim that $X(\epsilon,t)$ satisfies the NDB condition introduced in Section 2.1, which justifies the representation of the transition density $p_{X(\epsilon,\Delta)}(x\mid x_0;\theta)$ as a conditional expectation shown below. Note that for $X(\epsilon,t)$, the NDB condition from Definition 3.5 in Ishikawa (2013) reduces to the condition that $\sigma(x;\theta) \neq 0$ for any $(x,\theta) \in \mathbb{R} \times \Theta$, which is guaranteed by Assumption 1.

Based on the NDB condition and the time-homogeneity of $X(\epsilon,t)$, we represent $p_{X(\epsilon,\Delta)}(x\mid x_0;\theta)$ as a conditional expectation of the Dirac delta function acting on $X(\epsilon,\Delta) - x$:
$$p_{X(\epsilon,\Delta)}(x \mid x_0;\theta) = \mathbf{E}[\delta(X(\epsilon,\Delta) - x) \mid X(\epsilon,0) = x_0;\theta]. \tag{17}$$
The validity of (17) will be verified in Remark 1 below in detail. For brevity, we omit the initial condition $X(\epsilon,0) = x_0$ and drop the dependence on $\theta$ in the dynamics of (11) and (15) hereafter, unless especially noted. The starting point of the expansion of (17) lies in the fact that $X(\epsilon,\Delta)$ admits the pathwise Taylor-type expansion
$$X(\epsilon,\Delta) = \sum_{m=0}^{M} X_m(\Delta)\,\epsilon^m + O(\epsilon^{M+1}), \tag{18}$$
where $M \in \mathbb{N}$ denotes an arbitrary order of expansion; see, e.g., Chapter 4 in Platen and Bruti-Liberati (2010) for the validity of this expansion.

In the following, we first derive the expressions of the expansion terms $X_m(\Delta)$ in (18). We rewrite (15) in integrated form as
$$X(\epsilon,\Delta) = x_0 + \epsilon\left[\int_0^{\Delta}\mu(X(\epsilon,s))\,ds + \int_0^{\Delta}\sigma(X(\epsilon,s))\,dW(s) + L(\Delta)\right]. \tag{19}$$
Letting $\epsilon \downarrow 0$ in (19) gives
$$X(0,\Delta) = x_0 \tag{20}$$
for any fixed $\Delta \ge 0$. Further, from the expansion of $X(\epsilon,\Delta)$ in (18), we can obtain that
$$\mu(X(\epsilon,t)) := \sum_{m=0}^{M}\mu_m(t)\,\epsilon^m + O(\epsilon^{M+1}) \tag{21}$$
and
$$\sigma(X(\epsilon,t)) := \sum_{m=0}^{M}\sigma_m(t)\,\epsilon^m + O(\epsilon^{M+1}) \tag{22}$$
in the SDE (15), where
$$\mu_m(t) := \frac{1}{m!}\left.\frac{d^m \mu(X(\epsilon,t))}{d\epsilon^m}\right|_{\epsilon=0} = \sum_{(\ell,(j_1,j_2,\cdots,j_{\ell}))\in\mathcal{S}_m}\frac{1}{\ell!}\,\frac{d^{\ell}\mu(x_0)}{dx^{\ell}}\prod_{i=1}^{\ell}X_{j_i}(t) \tag{23}$$
and
$$\sigma_m(t) := \frac{1}{m!}\left.\frac{d^m \sigma(X(\epsilon,t))}{d\epsilon^m}\right|_{\epsilon=0} = \sum_{(\ell,(j_1,j_2,\cdots,j_{\ell}))\in\mathcal{S}_m}\frac{1}{\ell!}\,\frac{d^{\ell}\sigma(x_0)}{dx^{\ell}}\prod_{i=1}^{\ell}X_{j_i}(t) \tag{24}$$
with the index set $\mathcal{S}_m$ defined by (9) and the condition $X(0,t) = x_0$ as in (20). Plugging (18), (21) and (22) into (19), we have
$$\sum_{m=0}^{M}X_m(\Delta)\,\epsilon^m + O(\epsilon^{M+1}) = x_0 + \sum_{m=0}^{M}\left(\int_0^{\Delta}\mu_m(s)\,ds + \int_0^{\Delta}\sigma_m(s)\,dW(s)\right)\epsilon^{m+1} + \epsilon L(\Delta) + O(\epsilon^{M+2}).$$
By comparing the coefficients of $\epsilon^m$ for $m = 0,1,2,\ldots$, we conclude that $X_0(\Delta) \equiv x_0$,
$$X_1(\Delta) = \mu(x_0)\,\Delta + \sigma(x_0)\,W(\Delta) + L(\Delta), \tag{25}$$
and
$$X_m(\Delta) = \int_0^{\Delta}\mu_{m-1}(s)\,ds + \int_0^{\Delta}\sigma_{m-1}(s)\,dW(s) \quad \text{for } m \ge 2, \tag{26}$$
where $\mu_{m-1}(s)$ and $\sigma_{m-1}(s)$ are defined in equations (23) and (24), respectively. As the expressions $\mu_{m-1}(s)$ and $\sigma_{m-1}(s)$ on the right-hand side of (26) are determined by the expansion terms $X_1(s), X_2(s), \ldots, X_{m-1}(s)$ in (18) with orders at most $m-1$, we notice that $X_m(\Delta)$ is fully determined by $X_i(s)$, $i = 0,1,2,\ldots,m-1$, for $0 \le s \le \Delta$.

Next, we illustrate the expansion of $p_{X(\epsilon,\Delta)}(x\mid x_0;\theta)$ in (17) as a convergent series of $\epsilon$. To do this, we standardize $X(\epsilon,\Delta)$ into
$$Y(\epsilon,\Delta) = \frac{X(\epsilon,\Delta) - x_0}{\sigma(x_0)\sqrt{\Delta}\,\epsilon}, \tag{27}$$
from which the transition density $p_{X(\epsilon,\Delta)}(x\mid x_0;\theta)$ in (17) can be represented in terms of $Y(\epsilon,\Delta)$ as
$$p_{X(\epsilon,\Delta)}(x\mid x_0;\theta) = \frac{1}{\sigma(x_0)\sqrt{\Delta}\,\epsilon}\,\mathbf{E}[\delta(Y(\epsilon,\Delta)-y)]\bigg|_{y = \frac{x - x_0}{\sigma(x_0)\sqrt{\Delta}\,\epsilon}}. \tag{28}$$
According to Hayashi and Ishikawa (2012) and Definition 4.1 in Ishikawa (2013), $Y(\epsilon,\Delta)$ in (27) satisfies the uniformly nondegenerate condition, which is guaranteed by Assumption 1 and Assumption 2. Then the expectation on the right-hand side of (28) admits a convergent series in $\epsilon$. To obtain an expansion of $\mathbf{E}[\delta(Y(\epsilon,\Delta)-y)]$ in (28) with respect to $\epsilon$, we notice from (18), (27) and $X_0(\Delta)\equiv x_0$ that
$$Y(\epsilon,\Delta) = \sum_{m=0}^{M}Y_m(\Delta)\,\epsilon^m + O(\epsilon^{M+1}), \tag{29}$$
where
$$Y_m(\Delta) = \frac{X_{m+1}(\Delta)}{\sigma(x_0)\sqrt{\Delta}}, \quad m = 0,1,2,\ldots. \tag{30}$$
Since the functional $\delta(\cdot - y)$ belongs to $\mathcal{S}'(\mathbb{R})$, according to (7), we obtain a Taylor-type expansion of $\delta(Y(\epsilon,\Delta)-y)$ as
$$\delta(Y(\epsilon,\Delta)-y) = \sum_{m=0}^{M}\Phi_m(y)\,\epsilon^m + O(\epsilon^{M+1}) \tag{31}$$
for any $M \in \mathbb{N}$. Here in (31), it follows from (8), (29) and (30) that
$$\Phi_0(y) = \delta(Y_0(\Delta)-y) \tag{32}$$
and
$$\Phi_m(y) = \sum_{(\ell,(j_1,j_2,\cdots,j_{\ell}))\in\mathcal{S}_m}\frac{1}{\ell!}\,\frac{1}{(\sigma(x_0)\sqrt{\Delta})^{\ell}}\,\frac{d^{\ell}\delta(Y_0(\Delta)-y)}{dx^{\ell}}\prod_{i=1}^{\ell}X_{j_i+1}(\Delta) \tag{33}$$
for $m \ge 1$, with the index set $\mathcal{S}_m$ defined in (9) and $Y_0(\Delta) = \frac{X_1(\Delta)}{\sigma(x_0)\sqrt{\Delta}}$. Further, based on (10), we take the generalized expectation on both sides of (31) to obtain that
$$\mathbf{E}[\delta(Y(\epsilon,\Delta)-y)] = \sum_{m=0}^{M}\Omega_m(y)\,\epsilon^m + O(\epsilon^{M+1}), \tag{34}$$
where the generalized expectation $\Omega_m(y) := \mathbf{E}\,\Phi_m(y)$ for $m \ge 0$. Once the terms $\Omega_0(y)$ and $\Omega_m(y)$ for $m \ge 1$ are specified, plugging (34) into (28) and letting $\epsilon = 1$, the approximated transition density of $X(\Delta)$ up to the $M$-th order is proposed as
$$p^{(M)}_{X(\Delta)}(x\mid x_0;\theta) := \frac{1}{\sigma(x_0)\sqrt{\Delta}}\sum_{m=0}^{M}\Omega_m\!\left(\frac{x-x_0}{\sigma(x_0)\sqrt{\Delta}}\right). \tag{35}$$
Consequently, to approximate the transition density $p_{X(\Delta)}(x\mid x_0;\theta)$ in (14) up to any finite order, it suffices to specify the functions $\Omega_0(y)$ and $\Omega_m(y)$ for $m \ge 1$.

Remark 1.
The equation (17) can be verified as follows. According to Section 6.4 in Kunita (2019), under the initial condition $X(\epsilon,0) = x_0$, the strong solution $X(\epsilon,t)$ of SDE (15) is uniquely tied to a stochastic flow, which is a regular Wiener-Poisson functional belonging to the set $\mathbf{D}_{\infty}(\mathbb{R})$ (Section 3.1 in Kunita (2019)). Thus, by noting that $X(\epsilon,t)$ satisfies the NDB condition, under the initial condition $X(\epsilon,0) = x_0$, we take $F = X(\epsilon,\Delta)$ in (4) to obtain that
$$\mathbf{E}[\delta(X(\epsilon,\Delta)-x) \mid X(\epsilon,0)=x_0;\theta] = \frac{1}{2\pi}\int_{-\infty}^{+\infty} e^{-ivx}\,E\!\left[e^{ivX(\epsilon,\Delta)} \,\middle|\, X(\epsilon,0)=x_0;\theta\right]dv = p_{X(\epsilon,\Delta)}(x\mid x_0;\theta),$$
which verifies (17).

Remark 2.
The above closed-form expansion method can also be applied to the special case of SDE (11) with $\sigma(X(t);\theta) \equiv 0$. In this context, we only adjust the above algorithm to standardize $X(\epsilon,\Delta)$ defined by (15) into
$$Y(\epsilon,\Delta) = \frac{X(\epsilon,\Delta) - x_0}{\epsilon}$$
instead of (27), which implies that
$$\mathbf{E}[\delta(X(\epsilon,\Delta)-x)] = \frac{1}{\epsilon}\,\mathbf{E}[\delta(Y(\epsilon,\Delta)-y)]\bigg|_{y=\frac{x-x_0}{\epsilon}}.$$
The remaining procedures are performed in a similar manner. Therefore, the approximated transition density of $X(\Delta)$ up to the $M$-th order can be obtained by
$$p^{(M)}_{X(\Delta)}(x\mid x_0;\theta) = \sum_{m=0}^{M}\Omega_m(x - x_0), \tag{36}$$
where $\Omega_m(y) = \mathbf{E}\,\Phi_m(y)$ and the terms $\Phi_m(y)$ are calculated by $\Phi_0(y) = \delta(Y_0(\Delta)-y)$ and
$$\Phi_m(y) = \sum_{(\ell,(j_1,j_2,\cdots,j_{\ell}))\in\mathcal{S}_m}\frac{1}{\ell!}\,\frac{d^{\ell}\delta(Y_0(\Delta)-y)}{dx^{\ell}}\prod_{i=1}^{\ell}X_{j_i+1}(\Delta)$$
for $m \ge 1$, with the index set $\mathcal{S}_m$ defined in (9) and $Y_0(\Delta) = X_1(\Delta)$.

2.4 General expressions of the leading term and high-order terms

In this part, we give the explicit expression of the leading term $\Omega_0(y)$ and the general representations of the higher-order terms $\Omega_m(y)$ for $m \ge 1$. Denote by $\phi(\cdot)$ the density function of a standard normal variable, and recall that $p_{L(t)}(\cdot)$ is the density function of the gamma process $L(t)$ given by (12). From (32), the leading term $\Omega_0(y)$ is expressed as
$$\Omega_0(y) = \mathbf{E}[\delta(Y_0(\Delta)-y)], \tag{37}$$
which is exactly the density function of $Y_0(\Delta)$ evaluated at $y$. The explicit expression of $\Omega_0(y)$ is given in the following proposition.

Proposition 1.
The leading term $\Omega_0(y)$ in (35) admits the following explicit expression:
$$\Omega_0(y) = \int_0^{+\infty}\phi\!\left(y - \frac{\mu(x_0)\Delta + u}{\sigma(x_0)\sqrt{\Delta}}\right)p_{L(\Delta)}(u)\,du,$$
where $p_{L(\Delta)}(\cdot)$ is the density function of the gamma process $L(\Delta)$ given by (12).

Proof. From (4) and (37), we obtain that
$$\Omega_0(y) = \mathbf{E}[\delta(Y_0(\Delta)-y)] = \frac{1}{2\pi}\int_{-\infty}^{+\infty} e^{-ivy}\,E\!\left[e^{ivY_0(\Delta)}\right]dv = E\left[\frac{1}{2\pi}\int_{-\infty}^{+\infty} e^{-ivy}\,E\!\left[e^{ivY_0(\Delta)}\,\middle|\,L(\Delta)\right]dv\right]. \tag{38}$$
We notice from (25) and (30) that $Y_0(\Delta)$ in (38) can be represented as
$$Y_0(\Delta) = \frac{X_1(\Delta)}{\sigma(x_0)\sqrt{\Delta}} = \frac{W(\Delta)}{\sqrt{\Delta}} + \frac{\mu(x_0)\Delta + L(\Delta)}{\sigma(x_0)\sqrt{\Delta}}. \tag{39}$$
Here, conditioned on the jump term $L(\Delta)$, the variable $Y_0(\Delta)$ in (39) follows a normal distribution. Therefore, since $W(\Delta)/\sqrt{\Delta}$ is standard normal and independent of $L(\Delta)$, the inner term of the expectation in the last equality of (38) can be calculated as
$$\frac{1}{2\pi}\int_{-\infty}^{+\infty} e^{-ivy}\,E\!\left[e^{ivY_0(\Delta)}\,\middle|\,L(\Delta)\right]dv = E\left[\int_{-\infty}^{\infty}\left(\frac{1}{2\pi}\int_{-\infty}^{+\infty} e^{-ivy}\,e^{iv\left(x + \frac{\mu(x_0)\Delta + L(\Delta)}{\sigma(x_0)\sqrt{\Delta}}\right)}dv\right)\phi(x)\,dx\,\middle|\,L(\Delta)\right]. \tag{40}$$
By the relation (5), we obtain that
$$\frac{1}{2\pi}\int_{-\infty}^{+\infty} e^{-ivy}\,e^{iv\left(x + \frac{\mu(x_0)\Delta + L(\Delta)}{\sigma(x_0)\sqrt{\Delta}}\right)}dv = \delta\!\left(x + \frac{\mu(x_0)\Delta + L(\Delta)}{\sigma(x_0)\sqrt{\Delta}} - y\right).$$
Then, plugging the above equation into (40) and noting the definition of the Dirac delta function, we obtain that
$$\frac{1}{2\pi}\int_{-\infty}^{+\infty} e^{-ivy}\,E\!\left[e^{ivY_0(\Delta)}\,\middle|\,L(\Delta)\right]dv = E\left[\int_{-\infty}^{\infty}\delta\!\left(x + \frac{\mu(x_0)\Delta + L(\Delta)}{\sigma(x_0)\sqrt{\Delta}} - y\right)\phi(x)\,dx\,\middle|\,L(\Delta)\right] = E\left[\phi\!\left(y - \frac{\mu(x_0)\Delta + L(\Delta)}{\sigma(x_0)\sqrt{\Delta}}\right)\middle|\,L(\Delta)\right]. \tag{41}$$
Plugging (41) into (38), the leading term $\Omega_0(y)$ can be finally calculated as
$$\Omega_0(y) = E\left[E\left[\phi\!\left(y - \frac{\mu(x_0)\Delta + L(\Delta)}{\sigma(x_0)\sqrt{\Delta}}\right)\middle|\,L(\Delta)\right]\right] = \int_0^{+\infty}\phi\!\left(y - \frac{\mu(x_0)\Delta + u}{\sigma(x_0)\sqrt{\Delta}}\right)p_{L(\Delta)}(u)\,du. \qquad\square$$
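As a numerical illustration of Proposition 1, entirely our own sketch with hypothetical parameter values, the leading term $\Omega_0(y)$ can be evaluated by a one-dimensional quadrature against the gamma density (12). Since $\phi$ integrates to one, $\Omega_0(\cdot)$ must itself integrate to one in $y$, which serves as a convenient self-check:

```python
import math
import numpy as np

# Hypothetical parameters: drift mu and diffusion sigma at x0, gamma process (a, b), time step Delta.
mu, sigma, a, b, Delta = 0.1, 0.3, 2.0, 4.0, 0.5

def gamma_density(u, t):
    """Density (12) of the gamma process: p_{L(t)}(u) = b^{at} u^{at-1} e^{-bu} / Gamma(at)."""
    at = a * t
    return b**at * u**(at - 1.0) * np.exp(-b * u) / math.gamma(at)

def phi(x):
    """Standard normal density."""
    return np.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def omega0(y, n=20000, u_max=8.0):
    """Omega_0(y) = Int_0^inf phi(y - (mu*Delta + u)/(sigma*sqrt(Delta))) p_{L(Delta)}(u) du."""
    u = np.linspace(1e-8, u_max, n)
    f = phi(y - (mu * Delta + u) / (sigma * math.sqrt(Delta))) * gamma_density(u, Delta)
    return np.sum((f[:-1] + f[1:]) * np.diff(u) / 2.0)

# Self-check: the leading term is a probability density in y, so it integrates to one.
ys = np.linspace(-8.0, 15.0, 2000)
vals = np.array([omega0(y) for y in ys])
total = np.sum((vals[:-1] + vals[1:]) * np.diff(ys) / 2.0)
```

The same quadrature, with the model-specific $\mu(x_0)$ and $\sigma(x_0)$ plugged in, gives the zeroth-order density approximation in (35).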
To calculate the higher-order terms $\Omega_m(y)$ for $m \ge 1$, we introduce the following notation. For $\ell \ge 1$ and $j(\ell) = (j_1, j_2, \ldots, j_{\ell})$ with $j_i \ge 1$, we define
$$K^{(\ell,j(\ell))}(z_1,z_2) := \left.E\left(\prod_{i=1}^{\ell}X_{j_i+1}(\Delta)\,\middle|\,W(\Delta),L(\Delta)\right)\right|_{W(\Delta)=z_1\sqrt{\Delta},\;L(\Delta)=z_2}. \tag{42}$$
Meanwhile, for any bivariate differentiable function $u(x,y)$ defined on $\mathbb{R}^2$, we introduce the following partial differential operators with respect to the first variable:
$$D_1^{(1)}(u(x,y)) := \frac{\partial u(x,y)}{\partial x} - x\,u(x,y) \quad \text{and} \quad D_1^{(n)}(u(x,y)) := D_1^{(1)}\!\left(D_1^{(n-1)}(u(x,y))\right) \text{ for } n \ge 2. \tag{43}$$
The representations of $\Omega_m(y)$ for $m \ge 1$ are given in the following theorem.

Theorem 1.
For any integer $m \ge 1$, the high-order term $\Omega_m(y)$ in (35) admits the following expression:
$$\Omega_m(y) = \sum_{(\ell,(j_1,j_2,\cdots,j_{\ell}))\in\mathcal{S}_m}\frac{(-1)^{\ell}}{\ell!}\,\frac{1}{(\sigma(x_0)\sqrt{\Delta})^{\ell}}\int_0^{+\infty}D_1^{(\ell)}\!\left(K^{(\ell,j(\ell))}(z_1,z_2)\right)\cdot\phi(z_1)\cdot p_{L(\Delta)}(z_2)\,dz_2, \tag{44}$$
where the index set $\mathcal{S}_m$ is defined in (9), $p_{L(\Delta)}(\cdot)$ is the density function of the gamma process $L(\Delta)$ given by (12) and
$$z_1 = y - \frac{\mu(x_0)\Delta + z_2}{\sigma(x_0)\sqrt{\Delta}}. \tag{45}$$

Proof. We see from the definition of $\Phi_m(y)$ in (33) that
$$\Omega_m(y) = \mathbf{E}\,\Phi_m(y) = \sum_{(\ell,(j_1,j_2,\cdots,j_{\ell}))\in\mathcal{S}_m}\frac{1}{\ell!}\,\frac{1}{(\sigma(x_0)\sqrt{\Delta})^{\ell}}\,\mathbf{E}\!\left[\frac{d^{\ell}\delta(Y_0(\Delta)-y)}{dx^{\ell}}\prod_{i=1}^{\ell}X_{j_i+1}(\Delta)\right]. \tag{46}$$
For the generalized expectation in the last line of (46), according to (2), we have
$$\mathbf{E}\!\left[\frac{d^{\ell}\delta(Y_0(\Delta)-y)}{dx^{\ell}}\prod_{i=1}^{\ell}X_{j_i+1}(\Delta)\right] = \frac{1}{2\pi}\int_{-\infty}^{+\infty}\!\int_{-\infty}^{+\infty} e^{-ivx}\,\frac{d^{\ell}\delta(x-y)}{dx^{\ell}}\,E\!\left[\prod_{i=1}^{\ell}X_{j_i+1}(\Delta)\cdot e^{ivY_0(\Delta)}\right]dx\,dv.$$
By the relations (1) and (3), the above equation is further calculated as
$$\mathbf{E}\!\left[\frac{d^{\ell}\delta(Y_0(\Delta)-y)}{dx^{\ell}}\prod_{i=1}^{\ell}X_{j_i+1}(\Delta)\right] = \int_{-\infty}^{+\infty}\frac{1}{2\pi}\,(iv)^{\ell}\,e^{-ivy}\,E\!\left[\prod_{i=1}^{\ell}X_{j_i+1}(\Delta)\cdot e^{ivY_0(\Delta)}\right]dv. \tag{47}$$
Then, following the definition of $Y_0(\Delta)$ in (39) and using the independence between the Brownian motion $W(t)$ and the gamma process $L(t)$, the expectation in the last line of (47) is calculated as
$$E\!\left[\prod_{i=1}^{\ell}X_{j_i+1}(\Delta)\cdot e^{ivY_0(\Delta)}\right] = \int_0^{\infty}\!\!\int_{-\infty}^{\infty}\left.E\!\left[\prod_{i=1}^{\ell}X_{j_i+1}(\Delta)\,\middle|\,W(\Delta),L(\Delta)\right]\right|_{W(\Delta)=z_1\sqrt{\Delta},\,L(\Delta)=z_2} e^{iv\left(z_1+\frac{\mu(x_0)\Delta+z_2}{\sigma(x_0)\sqrt{\Delta}}\right)}\,\phi(z_1)\,p_{L(\Delta)}(z_2)\,dz_1\,dz_2. \tag{48}$$
By the definition of $K^{(\ell,j(\ell))}(z_1,z_2)$ in (42), the expectation (48) can be expressed as
$$E\!\left[\prod_{i=1}^{\ell}X_{j_i+1}(\Delta)\cdot e^{ivY_0(\Delta)}\right] = \int_0^{\infty}\!\!\int_{-\infty}^{\infty} e^{iv\left(z_1+\frac{\mu(x_0)\Delta+z_2}{\sigma(x_0)\sqrt{\Delta}}\right)}\,K^{(\ell,j(\ell))}(z_1,z_2)\,\phi(z_1)\,p_{L(\Delta)}(z_2)\,dz_1\,dz_2. \tag{49}$$
Plugging (49) into (47) and noting that $(iv)^{\ell}\,e^{iv(z_1+\cdot)} = \partial^{\ell}e^{iv(z_1+\cdot)}/\partial z_1^{\ell}$, we obtain that
$$\mathbf{E}\!\left[\frac{d^{\ell}\delta(Y_0(\Delta)-y)}{dx^{\ell}}\prod_{i=1}^{\ell}X_{j_i+1}(\Delta)\right] = \int_0^{+\infty}\!\!\int_{-\infty}^{\infty}\frac{1}{2\pi}\,e^{-ivy}\,p_{L(\Delta)}(z_2)\int_{-\infty}^{\infty}\frac{\partial^{\ell}e^{iv\left(z_1+\frac{\mu(x_0)\Delta+z_2}{\sigma(x_0)\sqrt{\Delta}}\right)}}{\partial z_1^{\ell}}\,K^{(\ell,j(\ell))}(z_1,z_2)\,\phi(z_1)\,dz_1\,dv\,dz_2. \tag{50}$$
Using integration by parts, the last line of (50) can be further calculated as
$$\int_0^{+\infty}\!\!\int_{-\infty}^{\infty}\left(\frac{1}{2\pi}\int_{-\infty}^{\infty}e^{-ivy}\,e^{iv\left(z_1+\frac{\mu(x_0)\Delta+z_2}{\sigma(x_0)\sqrt{\Delta}}\right)}dv\right)(-1)^{\ell}\,\frac{\partial^{\ell}\!\left(K^{(\ell,j(\ell))}(z_1,z_2)\,\phi(z_1)\right)}{\partial z_1^{\ell}}\,p_{L(\Delta)}(z_2)\,dz_1\,dz_2. \tag{51}$$
By the relation (5), it follows that
$$\frac{1}{2\pi}\int_{-\infty}^{+\infty}e^{-ivy}\,e^{iv\left(z_1+\frac{\mu(x_0)\Delta+z_2}{\sigma(x_0)\sqrt{\Delta}}\right)}dv = \delta\!\left(z_1 + \frac{\mu(x_0)\Delta+z_2}{\sigma(x_0)\sqrt{\Delta}} - y\right).$$
Plugging the above equation into (51), we obtain that
$$\mathbf{E}\!\left[\frac{d^{\ell}\delta(Y_0(\Delta)-y)}{dx^{\ell}}\prod_{i=1}^{\ell}X_{j_i+1}(\Delta)\right] = \int_0^{+\infty}(-1)^{\ell}\left.\frac{\partial^{\ell}\!\left(K^{(\ell,j(\ell))}(z_1,z_2)\,\phi(z_1)\right)}{\partial z_1^{\ell}}\right|_{z_1=y-\frac{\mu(x_0)\Delta+z_2}{\sigma(x_0)\sqrt{\Delta}}}\,p_{L(\Delta)}(z_2)\,dz_2. \tag{52}$$
Thus, plugging (52) into (46), we obtain that
$$\Omega_m(y) = \sum_{(\ell,(j_1,j_2,\cdots,j_{\ell}))\in\mathcal{S}_m}\frac{(-1)^{\ell}}{\ell!}\,\frac{1}{(\sigma(x_0)\sqrt{\Delta})^{\ell}}\int_0^{+\infty}\left.\frac{\partial^{\ell}\!\left(K^{(\ell,j(\ell))}(z_1,z_2)\,\phi(z_1)\right)}{\partial z_1^{\ell}}\right|_{z_1=y-\frac{\mu(x_0)\Delta+z_2}{\sigma(x_0)\sqrt{\Delta}}}\,p_{L(\Delta)}(z_2)\,dz_2. \tag{53}$$
From the definition (43), for any bivariate differentiable function $u(z_1,z_2)$, we have
$$\frac{\partial}{\partial z_1}\!\left(u(z_1,z_2)\,\phi(z_1)\right) = \left(\frac{\partial u(z_1,z_2)}{\partial z_1} - z_1\,u(z_1,z_2)\right)\phi(z_1) \equiv D_1^{(1)}(u(z_1,z_2))\cdot\phi(z_1),$$
and, by induction,
$$\frac{\partial^{\ell}}{\partial z_1^{\ell}}\!\left(u(z_1,z_2)\,\phi(z_1)\right) = D_1^{(\ell)}(u(z_1,z_2))\cdot\phi(z_1)$$
for any $\ell \ge 1$, from which we obtain that
$$\frac{\partial^{\ell}\!\left(K^{(\ell,j(\ell))}(z_1,z_2)\,\phi(z_1)\right)}{\partial z_1^{\ell}} = D_1^{(\ell)}\!\left(K^{(\ell,j(\ell))}(z_1,z_2)\right)\cdot\phi(z_1). \tag{54}$$
Plugging (54) into (53), we obtain the formula (44). $\square$

According to (44) in Theorem 1, to calculate the high-order terms $\Omega_m(y)$ for $m \ge 1$,
it suffices to derive the bivariate function $K^{(\ell,j(\ell))}(z_1,z_2)$ in (42), which will be shown in Section 3.

3 Calculation of $K^{(\ell,j(\ell))}(z_1,z_2)$

In this section, we explicitly derive the function $K^{(\ell,j(\ell))}(z_1,z_2)$ for every fixed $\ell \ge 1$ and $j(\ell) = (j_1,j_2,\ldots,j_{\ell})$ with $j_i \ge 1$, which yields the high-order terms $\Omega_m(y)$ for $m \ge 1$ and hence the approximated transition density $p^{(M)}_{X(\Delta)}(x\mid x_0;\theta)$ in (35). Moreover, for illustration purposes, the pure jump OU model, the constant diffusion model and the square-root diffusion model are introduced as examples of SDE (11) to exhibit the first several expansion terms of $\{\Omega_m(y), m \ge 1\}$ in (35).

To present our algorithm for calculating the function $K^{(\ell,j(\ell))}(z_1,z_2)$, we introduce the following notation. For any integer $h \ge 1$ and any $h$-dimensional index $n(h) = (n_1,n_2,\ldots,n_h)$ with nonnegative integers $n_1,n_2,\ldots,n_h$, we define the $h$-dimensional vector
$$L^{n(h)}(t) := \left(L^{n_1}(t), L^{n_2}(t), \ldots, L^{n_h}(t)\right) \tag{55}$$
by using the gamma process $L(\cdot)$. For example, $L^{n(1)}(t) = (L(t))$ when $h = 1$ and $n(1) = (1)$, and $L^{n(2)}(t) = (1, L(t))$ when $h = 2$ and $n(2) = (0,1)$. For any $h$-dimensional index $i(h) = (i_1,i_2,\ldots,i_h)$ with $i_1,i_2,\ldots,i_h \in \{0,1\}$ and $h$-dimensional vector $L^{n(h)}(t)$ in (55), we define an iterated stochastic integral
$$I_{i(h),L^{n(h)}}(\Delta) := \int_0^{\Delta}\!\!\int_0^{s_h}\!\!\cdots\!\!\int_0^{s_2} L^{n_1}(s_1)\cdots L^{n_{h-1}}(s_{h-1})\,L^{n_h}(s_h)\,dW_{i_1}(s_1)\cdots dW_{i_{h-1}}(s_{h-1})\,dW_{i_h}(s_h), \tag{56}$$
where $W_0(t) := t$ and $W_1(t) := W(t)$. For example, we have $I_{(0),L^{(0)}}(\Delta) = \Delta$, $I_{(1),L^{(0)}}(\Delta) = W(\Delta)$,
$$I_{(0),L^{(1)}}(\Delta) = \int_0^{\Delta}L(s)\,ds, \qquad I_{(1),L^{(1)}}(\Delta) = \int_0^{\Delta}L(s)\,dW(s),$$
$$I_{(0,0),L^{(0,0)}}(\Delta) = \int_0^{\Delta}\!\!\int_0^{s_2}ds_1\,ds_2, \qquad I_{(0,1),L^{(0,0)}}(\Delta) = \int_0^{\Delta}\!\!\int_0^{s_2}ds_1\,dW(s_2),$$
and
$$I_{(1,0),L^{(1,0)}}(\Delta) = \int_0^{\Delta}\!\!\int_0^{s_2}L(s_1)\,dW(s_1)\,ds_2, \qquad I_{(1,1),L^{(1,0)}}(\Delta) = \int_0^{\Delta}\!\!\int_0^{s_2}L(s_1)\,dW(s_1)\,dW(s_2).$$
We notice that the iterated stochastic integral $I_{i(h),L^{n(h)}}(\Delta)$ defined by (56) involves two independent processes, i.e., the Brownian motion $W(\cdot)$ and the gamma process $L(\cdot)$.
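The first moments of such iterated integrals are easy to probe by simulation. The Monte Carlo sketch below, our own illustration with hypothetical parameter values, checks the identity $E[I_{(0),L^{(1)}}(\Delta)] = \int_0^{\Delta} E[L(s)]\,ds = \frac{a}{b}\frac{\Delta^2}{2}$, simulating the gamma process through its independent $\mathrm{Gamma}(a\,dt,\,1/b)$ increments:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, Delta, n_steps, n_paths = 2.0, 4.0, 1.0, 400, 20000
dt = Delta / n_steps

# Gamma process increments: L(t + dt) - L(t) ~ Gamma(shape=a*dt, scale=1/b), independent.
dL = rng.gamma(shape=a * dt, scale=1.0 / b, size=(n_paths, n_steps))
L = np.cumsum(dL, axis=1)  # L(dt), L(2 dt), ..., L(Delta), path by path

# Left-point Riemann approximation of I_{(0),L^(1)}(Delta) = int_0^Delta L(s) ds.
L_left = np.hstack([np.zeros((n_paths, 1)), L[:, :-1]])  # L(0), ..., L(Delta - dt)
I = np.sum(L_left * dt, axis=1)

mc_mean = I.mean()
exact = (a / b) * Delta**2 / 2.0  # = 0.25 for these parameters
```

The same simulation scaffold extends to the mixed integrals involving $dW$, which have zero mean by the martingale property and the independence of $W$ and $L$.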
Such independence will simplify the calculation related to $I_{i(h),L^{n(h)}}(\Delta)$, as seen below.

In order to clarify the procedure of calculating $K^{(\ell,j(\ell))}(z_1,z_2)$ in (42), we briefly outline a general algorithm before the detailed descriptions below, which can be implemented by traditional symbolic software, e.g., Wolfram Mathematica.

Algorithm: Framework of calculating $K^{(\ell,j(\ell))}(z_1,z_2)$ in (42).
Step 1. Convert the product of the expansion terms in $K^{(\ell,j(\ell))}(z_1,z_2)$, i.e., $\prod_{i=1}^{\ell}X_{j_i+1}(\Delta)$, into a linear combination of iterated Itô integrals as defined in (56);
Step 2. Simplify the conditional expectation of the iterated Itô integral $I_{i(h),L^{n(h)}}(\Delta)$ via the Brownian bridge;
Step 3. Compute the conditional expectation of the result from Step 2 with respect to the gamma process.

In the following Sections 3.1, 3.2 and 3.3, we give detailed descriptions of Steps 1, 2 and 3 in the above algorithm, respectively. In Section 3.4, we consider three examples of SDE (11) for illustration.

3.1 Converting $\prod_{i=1}^{\ell}X_{j_i+1}(\Delta)$ into a linear combination of iterated Itô integrals

First, we illustrate that the product of iterated Itô integrals defined in (56) can be converted into a linear combination of iterated Itô integrals of the same form as in (56). Given an index $i(h) = (i_1,i_2,\ldots,i_h)$, we denote by $i(h)^-$ the index obtained by deleting the last element of $i(h)$, i.e.,
$$i(h)^- := (i_1,i_2,\ldots,i_{h-1}).$$
Similarly, we denote by
$$L^{n(h)-}(t) := \left(L^{n_1}(t), L^{n_2}(t), \ldots, L^{n_{h-1}}(t)\right)$$
the $(h-1)$-dimensional vector obtained by deleting the last element of $L^{n(h)}(t)$ in (55). Consequently, the iterated Itô integral $I_{i(h)^-,L^{n(h)-}}(\Delta)$ can be defined as
$$I_{i(h)^-,L^{n(h)-}}(\Delta) := \int_0^{\Delta}\!\!\int_0^{s_{h-1}}\!\!\cdots\!\!\int_0^{s_2} L^{n_1}(s_1)\cdots L^{n_{h-2}}(s_{h-2})\,L^{n_{h-1}}(s_{h-1})\,dW_{i_1}(s_1)\cdots dW_{i_{h-2}}(s_{h-2})\,dW_{i_{h-1}}(s_{h-1}).$$
For two fixed positive integers $h, q$ and the gamma process $L(\cdot)$, we consider the $h$-dimensional vector $L^{n(h)}(t)$ and the $q$-dimensional vector $L^{m(q)}(t)$,
$$L^{n(h)}(t) = \left(L^{n_1}(t),\ldots,L^{n_h}(t)\right) \quad\text{and}\quad L^{m(q)}(t) = \left(L^{m_1}(t),\ldots,L^{m_q}(t)\right),$$
for some indices $n(h) = (n_1,\ldots,n_h)$ and $m(q) = (m_1,\ldots,m_q)$ with nonnegative integers $n_1,\ldots,n_h,m_1,\ldots,m_q$.
Then for two indices i ( h ) = ( i , i , . . . , i h ) and j ( q ) = ( j , j , . . . , j q ) with i , i , . . . , i h , j , j , . . . , j q ∈ { , } , the product of two iterated Itˆo integrals I i ( h ) , L n ( h ) (∆) and I j ( q ) , L m ( q ) (∆) satisfies the following iterative relation I i ( h ) , L n ( h ) (∆) I j ( q ) , L m ( q ) (∆)= (cid:20)Z ∆0 I i ( h ) − , L n ( h ) − ( s ) · L n h ( s ) dW i h ( s ) (cid:21) · (cid:20)Z ∆0 I j ( q ) − , L m ( q ) − ( s ) · L m q ( s ) dW j q ( s ) (cid:21) = Z ∆0 I i ( h ) , L n ( h ) ( s ) I j ( q ) − , L m ( q ) − ( s ) · L m q ( s ) dW j q ( s )+ Z ∆0 I i ( h ) − , L n ( h ) − ( s ) I j ( q ) , L m ( q ) ( s ) · L n h ( s ) dW i h ( s )+ Z ∆0 I i ( h ) − , L n ( h ) − ( s ) I j ( q ) − , L m ( q ) − ( s ) · L n h + m q ( s ) · { i h = j q =1 } ds , (57)where the second equation follows from the Itˆo product formula Z ∆0 f ( s ) dW i ( s ) · Z ∆0 g ( s ) dW j ( s )= Z ∆0 Z s f ( s ) dW i ( s ) g ( s ) dW j ( s ) + Z ∆0 Z s g ( s ) dW j ( s ) f ( s ) dW i ( s )+ Z ∆0 f ( s ) g ( s )1 { i = j =1 } ds and 1 { i h = j q =1 } is the indicator function defined as1 { i h = j q =1 } = ( , if i h = j q = 1 , , otherwise.By iterative applications of the relation (57), the product of I i ( h ) , L n ( h ) (∆) and I j ( q ) , L m ( q ) (∆) can beexpressed as a linear combination of the iterated Itˆo integrals defined by (56).Next, we show that the expansion terms X j +1 (∆) , X j +1 (∆) , . . . , X j ℓ +1 (∆) in (42) can be ex-pressed as a linear combination of iterated Itˆo integrals I i ( h ) , L n ( h ) (∆) defined in (56), with coefficients16epending on µ ( x ), σ ( x ) and their higher-order derivatives evaluated at x . Based on this, it fol-lows from (57) that the multiplication Q ℓi =1 X j i +1 (∆) can be converted into a linear combinationof I i ( h ) , L n ( h ) (∆) defined in (56). To do this, in what follows, we illustrate that X m (∆) admits theaforementioned linear combination form for m ≥
1. By the notation (56), X (∆) in (25) can bewritten as X (∆) = µ ( x ) I (0) , L (0) (∆) + σ ( x ) I (1) , L (0) (∆) + L (∆) , (58)which admits the linear combination form. For m ≥
1, we notice from (26) that X m +1 (∆) = Z ∆0 µ m ( s ) ds + Z ∆0 σ m ( s ) dW ( s ) , (59)with µ m ( s ) and σ m ( s ) defined by (23) – (24). Since both µ m ( s ) and σ m ( s ) are linear combinationsof the products of the terms chosen among { X ( s ) , X ( s ) , . . . , X m ( s ) } (cf. Section 2.3), by iterativeapplications of (57), (58) and (59), we can also derive X m +1 (∆) for m ≥ I i ( h ) , L n ( h ) (∆) for h ≤ m + 1 formed as (56), with the coefficients dependingon µ ( x ), σ ( x ) and their higher-order derivatives evaluated at x .In summary, to calculate K ( ℓ, j ( ℓ )) ( z , z ) in (42) for every fixed ℓ ≥ j ( ℓ ) = ( j , j , . . . , j ℓ )with j i ≥
1, it suffices to focus on the following type of conditional expectation
\[
E\left(I_{i(h),L_{n(h)}}(\Delta)\,\middle|\,W(\Delta),L(\Delta)\right)\Big|_{W(\Delta)=z_1\sqrt{\Delta},\,L(\Delta)=z_2} \tag{60}
\]
with $I_{i(h),L_{n(h)}}(\Delta)$ defined by (56).

Starting from this part, we focus on calculating the conditional expectation
\begin{align}
&E\left(I_{i(h),L_{n(h)}}(\Delta)\,\middle|\,W(\Delta),L(\Delta)\right)\Big|_{W(\Delta)=z_1\sqrt{\Delta},\,L(\Delta)=z_2}\notag\\
&=E\left(\int_0^\Delta\!\!\int_0^{s_h}\!\!\cdots\int_0^{s_2}L^{n_1}(s_1)\cdots L^{n_{h-1}}(s_{h-1})L^{n_h}(s_h)\,dW_{i_1}(s_1)\cdots dW_{i_{h-1}}(s_{h-1})\,dW_{i_h}(s_h)\,\middle|\,W(\Delta),L(\Delta)\right)\Big|_{W(\Delta)=z_1\sqrt{\Delta},\,L(\Delta)=z_2}, \tag{61}
\end{align}
where the iterated Itô integral $I_{i(h),L_{n(h)}}(\Delta)$ is defined by (56) with $i_1,i_2,\dots,i_h\in\{0,1\}$, $W_0(t)=t$ and $W_1(t)=W(t)$.

To simplify (61), we utilize the following representation of the Brownian bridge, i.e.,
\[
\left(W(s)\,\middle|\,W(\Delta)=z_1\sqrt{\Delta}\right)\stackrel{d}{=}B^{z_1}(s):=B(s)-\frac{s}{\Delta}B(\Delta)+\frac{s}{\sqrt{\Delta}}z_1 \tag{62}
\]
for $0\leq s\leq\Delta$, where the symbol "$\stackrel{d}{=}$" means distributional identity and $B(\cdot)$ is a 1-dimensional standard Brownian motion. Then, by the independence between $W(\cdot)$ and $L(\cdot)$, (61) can be equivalently expressed as
\begin{align}
&E\left(I_{i(h),L_{n(h)}}(\Delta)\,\middle|\,W(\Delta),L(\Delta)\right)\Big|_{W(\Delta)=z_1\sqrt{\Delta},\,L(\Delta)=z_2}\notag\\
&=E\left(\int_0^\Delta\!\!\int_0^{s_h}\!\!\cdots\int_0^{s_2}L^{n_1}(s_1)\cdots L^{n_{h-1}}(s_{h-1})L^{n_h}(s_h)\,dB^{z_1}_{i_1}(s_1)\cdots dB^{z_1}_{i_{h-1}}(s_{h-1})\,dB^{z_1}_{i_h}(s_h)\,\middle|\,L(\Delta)\right)\Big|_{L(\Delta)=z_2}, \tag{63}
\end{align}
where $B^{z_1}_1(s):=B^{z_1}(s)$ and $B^{z_1}_0(s):=s$. Therefore, we only need to focus on the conditional expectation
\[
E\left(\int_0^\Delta\cdots\int_0^{s_2}L^{n_1}(s_1)\cdots L^{n_h}(s_h)\,dB^{z_1}_{i_1}(s_1)\cdots dB^{z_1}_{i_h}(s_h)\,\middle|\,L(\Delta)\right), \tag{64}
\]
from which (63) can be obtained by letting $L(\Delta)=z_2$. For the sake of simplicity, we denote by $E_L(\cdot):=E(\cdot\,|\,L(\Delta))$ the conditional expectation given $L(\Delta)$ hereafter.
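As a quick numerical illustration of the bridge representation (62), the following sketch (our own illustrative Python snippet, not part of the original derivation; the function name and parameter choices are assumptions) simulates paths of $B^{z_1}(s)$ and confirms that every path is pinned to the conditioned terminal value $z_1\sqrt{\Delta}$:

```python
import numpy as np

def brownian_bridge_paths(z1, delta, n_steps, n_paths, rng):
    """Simulate B^{z1}(s) = B(s) - (s/delta) B(delta) + (s/sqrt(delta)) z1,
    the representation (62) of W(s) conditioned on W(delta) = z1*sqrt(delta)."""
    dt = delta / n_steps
    s = np.linspace(0.0, delta, n_steps + 1)
    # standard Brownian motion paths on [0, delta]
    dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    B = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dB, axis=1)], axis=1)
    return s, B - (s / delta) * B[:, -1:] + (s / np.sqrt(delta)) * z1

rng = np.random.default_rng(0)
z1, delta = 0.7, 1.0 / 52
s, paths = brownian_bridge_paths(z1, delta, n_steps=200, n_paths=2000, rng=rng)
# every path is pinned at B^{z1}(delta) = z1*sqrt(delta), by construction
print(np.allclose(paths[:, -1], z1 * np.sqrt(delta)))  # True
```

The pinning is an algebraic identity of (62), so it holds pathwise, not just in distribution; the snippet is only a sanity check of the representation used below.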
By plugging (62) into (64), we obtain that
\begin{align}
&E\left(\int_0^\Delta\!\!\int_0^{s_h}\!\!\cdots\int_0^{s_2}L^{n_1}(s_1)\cdots L^{n_{h-1}}(s_{h-1})L^{n_h}(s_h)\,dB^{z_1}_{i_1}(s_1)\cdots dB^{z_1}_{i_{h-1}}(s_{h-1})\,dB^{z_1}_{i_h}(s_h)\,\middle|\,L(\Delta)\right)\notag\\
&=E_L\left(\int_0^\Delta\!\!\int_0^{s_h}\!\!\cdots\int_0^{s_2}L^{n_1}(s_1)\,dB^{z_1}_{i_1}(s_1)\cdots L^{n_{h-1}}(s_{h-1})\,dB^{z_1}_{i_{h-1}}(s_{h-1})\,L^{n_h}(s_h)\,dB^{z_1}_{i_h}(s_h)\right)\notag\\
&=E_L\left(\int_0^\Delta\!\!\int_0^{s_h}\!\!\cdots\int_0^{s_2}L^{n_1}(s_1)\left\{\mathbf{1}_{\{i_1=1\}}\left(dB(s_1)-\frac{B(\Delta)}{\Delta}ds_1+\frac{z_1}{\sqrt{\Delta}}ds_1\right)+\mathbf{1}_{\{i_1=0\}}ds_1\right\}\times\cdots\right.\notag\\
&\qquad\times L^{n_{h-1}}(s_{h-1})\left\{\mathbf{1}_{\{i_{h-1}=1\}}\left(dB(s_{h-1})-\frac{B(\Delta)}{\Delta}ds_{h-1}+\frac{z_1}{\sqrt{\Delta}}ds_{h-1}\right)+\mathbf{1}_{\{i_{h-1}=0\}}ds_{h-1}\right\}\notag\\
&\qquad\left.\times L^{n_h}(s_h)\left\{\mathbf{1}_{\{i_h=1\}}\left(dB(s_h)-\frac{B(\Delta)}{\Delta}ds_h+\frac{z_1}{\sqrt{\Delta}}ds_h\right)+\mathbf{1}_{\{i_h=0\}}ds_h\right\}\right). \tag{65}
\end{align}
In order to derive the explicit expression of (65), for any $h$-dimensional index $i(h)=(i_1,i_2,\dots,i_h)$ with $i_1,i_2,\dots,i_h\in\{0,1\}$ and the $h$-dimensional vector $L_{n(h)}(t)$ in (55), we define an iterated stochastic integral
\[
J_{i(h),L_{n(h)}}(\Delta):=\int_0^\Delta\!\!\int_0^{s_h}\!\!\cdots\int_0^{s_2}L^{n_1}(s_1)\cdots L^{n_{h-1}}(s_{h-1})L^{n_h}(s_h)\,dB_{i_1}(s_1)\cdots dB_{i_{h-1}}(s_{h-1})\,dB_{i_h}(s_h), \tag{66}
\]
where $B_0(t):=t$ and $B_1(t):=B(t)$, with the Brownian motion $B(t)$ introduced in (62). Then we fully expand the product of the differential forms in the last equation of (65) and find that it suffices to calculate the following two kinds of conditional expectations
\[
\left(\frac{z_1}{\sqrt{\Delta}}\right)^{k_1}E_L\left(J_{i(h),L_{n(h)}}(\Delta)\right) \tag{67}
\]
and
\[
\left(\frac{z_1}{\sqrt{\Delta}}\right)^{k_2}E_L\left(B(\Delta)^{k_3}\cdot J_{i(h),L_{n(h)}}(\Delta)\right), \tag{68}
\]
where the integers $k_1,k_2,k_3\in\{0,1,\dots,h\}$ satisfy the condition $k_2+k_3\leq h$.
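Pathwise, plugging (62) into an integral simply replaces each differential $dB^{z_1}(s)$ by $dB(s)-\frac{B(\Delta)}{\Delta}ds+\frac{z_1}{\sqrt{\Delta}}ds$, which is what drives the expansion in (65). A small Euler-discretization check of this substitution for a single integral $\int_0^\Delta s\,dB^{z_1}(s)$ (an illustrative sketch of ours; variable names and step counts are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
delta, n, z1 = 1.0 / 52, 1000, 0.3
dt = delta / n
s = np.linspace(0.0, delta, n + 1)

dB = rng.normal(0.0, np.sqrt(dt), size=n)   # Brownian increments on [0, delta]
B_delta = dB.sum()                           # B(delta)
# bridge increments: dB^{z1}(s) = dB(s) - (B(delta)/delta) ds + (z1/sqrt(delta)) ds
dBz = dB - (B_delta / delta) * dt + (z1 / np.sqrt(delta)) * dt

f = s[:-1]                                   # integrand f(s) = s, left endpoints (Ito convention)
lhs = np.sum(f * dBz)                        # integral against the bridge
rhs = (np.sum(f * dB)
       - (B_delta / delta) * np.sum(f) * dt
       + (z1 / np.sqrt(delta)) * np.sum(f) * dt)  # expanded form, as in (65)
print(np.isclose(lhs, rhs))  # True
```

Since the substitution is linear in the increments, the two sides agree exactly (up to floating point); for the iterated integrals (66) the same expansion is applied to each differential in turn.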
To proceed, we notice the following relation
\begin{align}
B(\Delta)\,J_{i(h),L_{n(h)}}(\Delta)&=\sum_{m=1}^{h+1}J_{(i_1,\dots,i_{m-1},1,i_m,\dots,i_h),\,(L^{n_1},\dots,L^{n_{m-1}},L^{0},L^{n_m},\dots,L^{n_h})}(\Delta)\notag\\
&\quad+\sum_{m=1}^{h}\mathbf{1}_{\{i_m=1\}}\,J_{(i_1,\dots,i_{m-1},0,i_{m+1},\dots,i_h),\,L_{n(h)}}(\Delta), \tag{69}
\end{align}
which can be verified similarly as in Proposition 5.2.3 of Kloeden and Platen (1992). By iterative applications of (69), the conditional expectation (68) can be converted into a linear combination of the conditional expectations uniformly represented as in (67). Then, from the martingale property of stochastic integrals and the independence between the gamma process and the Brownian motion, the conditional expectation (67) equals zero if there exists some integer $m\in\{1,2,\dots,h\}$ such that $i_m=1$. Therefore, the conditional expectation (64) can be finally derived as a linear combination of the terms uniformly represented as
\[
z_1^{m}\cdot\int_0^\Delta\!\!\int_0^{s_h}\!\!\cdots\int_0^{s_2}E\left[L^{n_1}(s_1)\cdots L^{n_{h-1}}(s_{h-1})L^{n_h}(s_h)\,\middle|\,L(\Delta)\right]ds_1\cdots ds_{h-1}\,ds_h, \tag{70}
\]
for some $h\geq$
1, $0\leq m\leq h$, $0<s_1<s_2<\cdots<s_h<\Delta$, and nonnegative integers $n_1,n_2,\dots,n_h$. Thus, to calculate (64), it suffices to derive the conditional expectation in (70).

Calculation of $E[L^{n_1}(s_1)L^{n_2}(s_2)\cdots L^{n_h}(s_h)\,|\,L(\Delta)]$

In this part, we focus on the conditional expectation
\[
E\left[L^{n_1}(s_1)L^{n_2}(s_2)\cdots L^{n_h}(s_h)\,\middle|\,L(\Delta)\right] \tag{71}
\]
that appeared in (70), for some $0<s_1<s_2<\cdots<s_h<\Delta$ and nonnegative integers $n_1,n_2,\dots,n_h$. The expectation (71) involves the product of values of the gamma process $L(\cdot)$ evaluated at different intermediate times, conditional on the value of $L(\cdot)$ at the terminal time $\Delta$, and can be represented as a function of $L(\Delta)$ by the following theorem.

Theorem 2.
For $h\geq 1$, $0<s_1<s_2<\cdots<s_h<\Delta$ and nonnegative integers $n_1,n_2,\dots,n_h$, we have
\[
E\left[L^{n_1}(s_1)L^{n_2}(s_2)\cdots L^{n_h}(s_h)\,\middle|\,L(\Delta)\right]=\frac{\prod\limits_{r=0}^{m_1-1}(as_1+r)\prod\limits_{r=m_1}^{m_2-1}(as_2+r)\cdots\prod\limits_{r=m_{h-1}}^{m_h-1}(as_h+r)}{\prod\limits_{r=0}^{m_h-1}(a\Delta+r)}\,L^{m_h}(\Delta), \tag{72}
\]
where $m_k=n_1+n_2+\cdots+n_k$ for $k=1,2,\dots,h$, and the parameter $a$ is defined through the density function of $L(\cdot)$ in (12).

Proof. We first notice the fact that, for the gamma process $L(\cdot)$ with density function given by (12) and $t_1<t_2<t_3$, conditional on $L(t_1)=v_1$ and $L(t_3)=v_3$, we have (cf. Ribeiro and Webber (2004))
\[
L(t_2)\stackrel{d}{=}v_1+p\,(v_3-v_1), \tag{73}
\]
where $p$ is a random variable following the Beta distribution $p\sim B(a(t_2-t_1),a(t_3-t_2))$.

Now we return to the proof of this theorem. For $0<s_1<s_2<\cdots<s_h<\Delta$, by the property of iterated expectation and $L(0)=0$, we can get
\begin{align}
E\left[L^{n_1}(s_1)L^{n_2}(s_2)\cdots L^{n_h}(s_h)\,\middle|\,L(\Delta)\right]&=E\left\{E\left[L^{n_1}(s_1)L^{n_2}(s_2)\cdots L^{n_h}(s_h)\,\middle|\,L(s_2),\dots,L(s_h),L(\Delta)\right]\,\middle|\,L(\Delta)\right\}\notag\\
&=E\left\{L^{n_2}(s_2)\cdots L^{n_h}(s_h)\,E\left[L^{n_1}(s_1)\,\middle|\,L(s_2),\dots,L(s_h),L(\Delta)\right]\,\middle|\,L(\Delta)\right\}\notag\\
&=E\left\{L^{n_2}(s_2)\cdots L^{n_h}(s_h)\,E\left[L^{n_1}(s_1)\,\middle|\,L(s_2)\right]\,\middle|\,L(\Delta)\right\}, \tag{74}
\end{align}
where the last equality follows from the harness property of general Lévy processes (see, for example, Section 11.2.7 of Jeanblanc et al. (2009)). To calculate $E[L^{n_1}(s_1)\,|\,L(s_2)]$ in the last line of (74), we see from (73) that, given $L(s_2)$,
\[
L(s_1)\stackrel{d}{=}p_1\,L(s_2),\quad\text{where } p_1\sim B(as_1,a(s_2-s_1)),
\]
from which $E[L^{n_1}(s_1)\,|\,L(s_2)]=L^{n_1}(s_2)\,E[p_1^{n_1}]$, and (74) can be further calculated as
\[
E\left[L^{n_1}(s_1)L^{n_2}(s_2)\cdots L^{n_h}(s_h)\,\middle|\,L(\Delta)\right]=E[p_1^{n_1}]\,E\left[L^{n_1+n_2}(s_2)L^{n_3}(s_3)\cdots L^{n_h}(s_h)\,\middle|\,L(\Delta)\right]. \tag{75}
\]
Similarly, in the right-hand side of (75), we notice that
\begin{align}
E\left[L^{n_1+n_2}(s_2)L^{n_3}(s_3)\cdots L^{n_h}(s_h)\,\middle|\,L(\Delta)\right]&=E\left\{L^{n_3}(s_3)\cdots L^{n_h}(s_h)\,E\left[L^{n_1+n_2}(s_2)\,\middle|\,L(s_3)\right]\,\middle|\,L(\Delta)\right\}\notag\\
&=E\left[p_2^{n_1+n_2}\right]E\left[L^{n_1+n_2+n_3}(s_3)L^{n_4}(s_4)\cdots L^{n_h}(s_h)\,\middle|\,L(\Delta)\right],\notag
\end{align}
where $p_2\sim B(as_2,a(s_3-s_2))$, so that
\[
E\left[L^{n_1}(s_1)L^{n_2}(s_2)\cdots L^{n_h}(s_h)\,\middle|\,L(\Delta)\right]=E[p_1^{n_1}]\,E\left[p_2^{n_1+n_2}\right]E\left[L^{n_1+n_2+n_3}(s_3)L^{n_4}(s_4)\cdots L^{n_h}(s_h)\,\middle|\,L(\Delta)\right].
\]
Continuing the above procedure in a similar manner, for $h\geq$
2, we deduce that
\begin{align}
E\left[L^{n_1}(s_1)L^{n_2}(s_2)\cdots L^{n_h}(s_h)\,\middle|\,L(\Delta)\right]&=E[p_1^{n_1}]\,E\left[p_2^{n_1+n_2}\right]\cdots E\left[p_h^{n_1+n_2+\cdots+n_h}\right]L^{n_1+n_2+\cdots+n_h}(\Delta)\notag\\
&=E[p_1^{m_1}]\,E[p_2^{m_2}]\cdots E\left[p_h^{m_h}\right]L^{m_h}(\Delta), \tag{76}
\end{align}
where
\[
p_k\sim B(as_k,a(s_{k+1}-s_k)),\quad\text{for } 1\leq k\leq h-1, \tag{77}
\]
\[
p_h\sim B(as_h,a(\Delta-s_h)), \tag{78}
\]
with $m_k=n_1+n_2+\cdots+n_k$ for $1\leq k\leq h$.

To evaluate the expectation $E\left[p_k^{m_k}\right]$ for $k=1,2,\dots,h$ in (76), we notice that for a random variable $X\sim B(\alpha,\beta)$,
\[
E\left[X^k\right]=\frac{\alpha^{(k)}}{(\alpha+\beta)^{(k)}}:=\prod_{r=0}^{k-1}\frac{\alpha+r}{\alpha+\beta+r}
\]
holds for any positive integer $k$. Then it follows from (77) and (78) that
\[
E\left[p_k^{m_k}\right]=\prod_{r=0}^{m_k-1}\frac{as_k+r}{as_{k+1}+r}=\frac{\prod_{r=0}^{m_k-1}(as_k+r)}{\prod_{r=0}^{m_k-1}(as_{k+1}+r)},\quad\text{for } 1\leq k\leq h-1,
\]
and
\[
E\left[p_h^{m_h}\right]=\prod_{r=0}^{m_h-1}\frac{as_h+r}{a\Delta+r}=\frac{\prod_{r=0}^{m_h-1}(as_h+r)}{\prod_{r=0}^{m_h-1}(a\Delta+r)}.
\]
Plugging the above two equations into (76), we obtain that
\begin{align}
&E\left[L^{n_1}(s_1)L^{n_2}(s_2)\cdots L^{n_h}(s_h)\,\middle|\,L(\Delta)\right]\notag\\
&=\frac{\prod_{r=0}^{m_1-1}(as_1+r)}{\prod_{r=0}^{m_1-1}(as_2+r)}\cdot\frac{\prod_{r=0}^{m_2-1}(as_2+r)}{\prod_{r=0}^{m_2-1}(as_3+r)}\cdots\frac{\prod_{r=0}^{m_{h-1}-1}(as_{h-1}+r)}{\prod_{r=0}^{m_{h-1}-1}(as_h+r)}\cdot\frac{\prod_{r=0}^{m_h-1}(as_h+r)}{\prod_{r=0}^{m_h-1}(a\Delta+r)}\,L^{m_h}(\Delta)\notag\\
&=\frac{\prod_{r=0}^{m_1-1}(as_1+r)\prod_{r=m_1}^{m_2-1}(as_2+r)\cdots\prod_{r=m_{h-1}}^{m_h-1}(as_h+r)}{\prod_{r=0}^{m_h-1}(a\Delta+r)}\,L^{m_h}(\Delta),\notag
\end{align}
which concludes the proof.

Based on the previous calculations, the function $K^{(\ell,j(\ell))}(z_1,z_2)$ defined by (42) can be expressed as a linear combination of $z_1^{n_1}z_2^{n_2}$ for $n_1,n_2\geq 0$, with coefficients depending on $\mu(x_0)$, $\sigma(x_0)$ and their higher-order derivatives evaluated at $x_0$. According to the definition of the partial differential operators $D_1^{(\ell)}(\cdot)$ for $\ell\geq 1$, $D_1^{(\ell)}\left(K^{(\ell,j(\ell))}(z_1,z_2)\right)$ can also be established as a linear combination of $z_1^{n_1}z_2^{n_2}$ for $n_1,n_2\geq$
0, denoted by
\[
D_1^{(\ell)}\left(K^{(\ell,j(\ell))}(z_1,z_2)\right)=\sum_{n_1,n_2\geq 0}P^{\mu,\sigma}_{(\ell,j(\ell))}(n_1,n_2)\,z_1^{n_1}z_2^{n_2}
\]
for some coefficient functions $P^{\mu,\sigma}_{(\ell,j(\ell))}(n_1,n_2)$ and $n_1,n_2\geq$
0. Then it follows from (44)–(45) that $\Omega_m(y)$ can be finally represented as
\[
\Omega_m(y)=\sum_{(\ell,(j_1,j_2,\cdots,j_\ell))\in\mathcal{S}_m}\frac{(-1)^\ell}{\ell!}\frac{1}{(\sigma(x_0)\sqrt{\Delta})^\ell}\sum_{n_1,n_2\geq 0}P^{\mu,\sigma}_{(\ell,j(\ell))}(n_1,n_2)\int_0^{+\infty}z_1^{n_1}z_2^{n_2}\,\phi(z_1)\,p_{L(\Delta)}(z_2)\,dz_2,
\]
where $z_1=y-\frac{\mu(x_0)\Delta+z_2}{\sigma(x_0)\sqrt{\Delta}}$ and $y=\frac{x-x_0}{\sigma(x_0)\sqrt{\Delta}}$, the index set $\mathcal{S}_m$ is defined in (9), $\mu(\cdot)$ and $\sigma(\cdot)$ are defined through the SDE (11), $\phi(\cdot)$ is the standard normal density function, and $p_{L(\Delta)}(\cdot)$ is the density function of $L(\Delta)$ in (12).

Remark 3. For the special case of the SDE (11) with $\sigma(X(t);\theta)\equiv 0$, to derive $\Omega_m(y)$ for $m=0,1,2,\dots$ in (36), we skip Step 2 in the general algorithm stated prior to Section 3.1. The remaining procedures are performed in a similar manner.

In this section, we consider the pure jump OU model, the constant diffusion model and the square-root diffusion model as three examples of the SDE (11), and give the concrete expressions of the first several expansion terms $\{\Omega_m(y),m\geq 0\}$ in (35). The first model below is the pure jump OU process, which is a special case of the non-Gaussian OU processes proposed by Barndorff-Nielsen and Shephard (2001a). The pure jump OU process is widely used in finance and statistical analysis, e.g., to specify the stochastic volatility driving the dynamics of asset prices. We refer to Barndorff-Nielsen and Shephard (2001a), Schoutens (2003) and Cont and Tankov (2004) for more details of the non-Gaussian OU processes.

Model 1 (Pure jump OU model). By taking $\theta=\{\kappa,\theta\}$ and letting $\mu(x;\theta)=\kappa(\theta-x)$ and $\sigma(x;\theta)\equiv 0$, the SDE (11) becomes
\[
dX(t)=\kappa(\theta-X(t))\,dt+dL(t),\qquad X(0)=x_0. \tag{79}
\]
The functions $\Omega_m(y)$ for $m=0,1,2,3$ are, with $\eta:=\kappa(\theta-x_0)$,
\[
\Omega_0(y)=\frac{b^{a\Delta}(y-\eta\Delta)^{a\Delta-1}e^{-b(y-\eta\Delta)}}{\Gamma(a\Delta)},
\]
\[
\Omega_1(y)=-\frac{b^{a\Delta}(y-\eta\Delta)^{a\Delta-1}e^{-b(y-\eta\Delta)}}{2\Gamma(a\Delta)}\,\kappa\Delta\left[(by-a\Delta)y+\eta\Delta(1-by)\right],
\]
and $\Omega_2(y)=b^{a\Delta}(y-\eta\Delta)^{a\Delta-1}e^{-b(y-\eta\Delta)}\,\big/$
24 (1 + a ∆) Γ ( a ∆) κ ∆ h b ( y − η ∆) (cid:0) y (4 + 3 a ∆) − yη ∆ + η ∆ (cid:1) − b (1 + a ∆) ( y − η ∆) (cid:0) y (2 + 3 a ∆) − η ∆ y + κ η (cid:1) + (1 + a ∆) (cid:0) a y ∆ + 2 (2 − a ∆) η ∆ y + (2 + a ∆) η ∆ (cid:1)(cid:3) , andΩ ( y ) = − b a ∆ ( y − η ∆) a ∆ − e − b ( y − η ∆)
48 (1 + a ∆) Γ ( a ∆) κ ∆ × (cid:8)(cid:2) b (2 + a ∆) y − b (6 + a ∆ (8 + 3 a ∆)) y + b (1 + a ∆) (2 + a ∆ (4 + 3 a ∆)) y − a ∆ (1 + a ∆) (cid:3) y − η ∆ (cid:2) b (8 + 3 a ∆) y − b (26 + 3 a ∆ (9 + 2 a ∆)) y + b (1 + a ∆) (6 + a ∆ (20 + 3 a ∆)) y − (1 + a ∆) (2 + a ∆ ( − a ∆))] y + η ∆ (cid:2) b (13 + 3 a ∆) y − b (40 + 3 a ∆ (11 + a ∆)) y + b (1 + a ∆) (14 + 19 a ∆) y − (1 + a ∆) ( − a ∆ (6 + a ∆))] y − η ∆ (cid:2) b (11 + a ∆) y − b (27 + 17 a ∆) y + 3 b (1 + a ∆) (4 + a ∆) y − a ∆ (1 + a ∆) (cid:3) + bη ∆ [ by ( − − a ∆ + 5 by ) + 2 (1 + a ∆)] + b η ∆ (1 − by ) (cid:9) , η := κ ( θ − x ).The following two models generalize the pure jump OU process (79) in Model 1 with extra inno-vation driven by the Brownian motion, specified as the constant diffusion and square-root diffusionrespectively. We refer to Kunita (2019) for more advanced descriptions of the jump-diffusion SDEsdriven by general L´evy processes.Model 2 (Constant diffusion model). By taking θ = { κ, θ, σ } and letting µ ( x ; θ ) = κ ( θ − x ) and σ ( x ; θ ) ≡ σ > dX ( t ) = κ ( θ − X ( t )) dt + σdW ( t ) + dL ( t ) , X (0) = x . (80)The function Ω m ( y ) for m = 0 , , , ( y ) = S ( y ) , Ω ( y ) = κ ∆ / σ n yS ( y ) + h η ∆ y + σ ∆ / (cid:0) − y (cid:1)i S ( y ) o , Ω ( y ) = κ σ (1 + a ∆) × n S ( y ) + 2 (cid:16) η ∆ − σ ∆ / y (cid:17) S ( y ) + h η ∆ − ησ ∆ / y + σ (cid:0) a ∆ + (4 + 3 a ∆) y (cid:1)i ∆ S ( y )+ 2 σ ∆ / (1 + a ∆) h η (cid:0) y (cid:1) ∆ / + 3 σ (cid:0) − y (cid:1) y i S ( y ) σ ∆ (1 + a ∆) h η (cid:0) y (cid:1) ∆ + 6 ησ ∆ / (cid:0) − y (cid:1) y + σ (cid:0) − y + 3 y (cid:1)i S ( y ) o , and Ω ( y ) = κ ∆ / σ (1 + a ∆) × n yS ( y ) + h η ∆ y − σ ∆ / (cid:0) − a ∆ + 21 y (cid:1)i S ( y )+ h η ∆ y + ησ ∆ / (cid:0) − a ∆ − y (cid:1) + σ (cid:0) −
19 + 16 a ∆ + 7 (4 + a ∆) y (cid:1) y i ∆ S ( y )+ h η ∆ / y − η σ ∆ (cid:0) a ∆ + 21 y (cid:1) + ησ ∆ / y (cid:0) a ∆ + 21 (2 + a ∆) y (cid:1) + σ (cid:0) a ∆ − a ∆) y + (33 + 5 a ∆) y (cid:1)(cid:3) ∆ / S ( y )+ σ ∆ (1 + a ∆) h − η ∆ / + 3 η σ ∆ (cid:0)
10 + 7 y (cid:1) y + ησ ∆ / (cid:0)
23 + 19 y − y (cid:1) + σ (cid:0) − − y + 21 y (cid:1) y (cid:3) S ( y ) + 7 σ ∆ / (1 + a ∆) × h η ∆ / (cid:0) y (cid:1) y + η σ ∆ (cid:0) y − y (cid:1) + ησ ∆ / (cid:0) − − y + 3 y (cid:1) y − σ (cid:0) y − y + y (cid:1)i S ( y ) o , where η := κ ( θ − x ) and for m = 0 , , , . . . , S m ( y ) := 12 √ π b a ∆ Γ ( a ∆) A − − r m / exp (cid:20) − (cid:16) y − κ ( θ − x ) ∆ / /σ (cid:17) (cid:21) × (cid:20) B Γ (cid:16) r m (cid:17) F (cid:18) r m ,
32 ; B A (cid:19) + √ A Γ (cid:18) r m (cid:19) F (cid:18) r m ,
12 ; B A (cid:19)(cid:21) r m := m + a ∆ − A := 1 / (cid:0) σ ∆ (cid:1) , B := yσ ∆ / − κ ( θ − x ) σ − b , and F ( a, b ; z ) := X ∞ k =0 (( a ) k / ( b ) k ) z k /k !as the Kummer confluent hypergeometric function.Model 3 (Square-root diffusion model). By taking θ = { κ, θ, σ } and letting µ ( x ; θ ) = κ ( θ − x )and σ ( x ; θ ) ≡ σ √ x with σ > dX ( t ) = κ ( θ − X ( t )) dt + σ p X ( t ) dW ( t ) + dL ( t ) , X (0) = x . (81)The function Ω m ( y ) for m = 0 , , ( y ) = S ( y ) , Ω ( y ) = 14 σx √ ∆ n yS ( y ) + h κθ ∆ y + 2 σ (cid:0) x − y (cid:1) √ ∆ i S ( y ) h κη ( θ + x ) ∆ y + 2 κθσ (cid:0) x − y (cid:1) ∆ / + σ (cid:0) − x + y (cid:1) ∆ y i S ( y ) o , andΩ ( y ) = 1480 σ x ∆ (1 + a ∆) n S ( y ) + 20 (cid:16) κθ ∆ − σ √ ∆ y (cid:17) S ( y )+ h κ (cid:0) θ − x (cid:1) ∆ − κθσ ∆ / y + (cid:0)
15 (3 + a ∆) y + ( −
12 + 17 a ∆) x (cid:1) σ ∆ i S ( y )+ 20 κ θ (cid:0) θ − x (cid:1) ∆ − κ σ (cid:0) θ − x (cid:1) ∆ / y − κσ (cid:0) (7 + 6 a ∆) x + θ (cid:0) x (1 − a ∆) −
30 (2 + a ∆) y (cid:1)(cid:1) ∆ + 2 σ (cid:0) x (38 + 9 a ∆) −
10 (4 + 3 a ∆) y (cid:1) ∆ / y i S ( y )+ h κ (cid:0) θ − x (cid:1) ∆ − κ σθ (cid:0) θ − x (cid:1) ∆ / y + (cid:0) η (cid:0) y (4 + 3 a ∆) + x (37 + 66 a ∆) (cid:1) + 12 κηx (cid:0) y (4 + 3 a ∆) + x (4 + 9 a ∆) (cid:1) + 20 κ x (cid:0) x a ∆ + (4 + 3 a ∆) y (cid:1)(cid:1) σ ∆ + 2 σ η (cid:0) − y (10 + 9 a ∆) + x (77 + 48 a ∆) (cid:1) √ ∆ y + 4 κσ x (cid:0) − y (10 + 9 a ∆) + x (48 + 33 a ∆) (cid:1) √ ∆ y − σ (cid:0) − a ∆) y + (281 + 252 a ∆) x y + (9 + 23 a ∆) x (cid:1)(cid:3) ∆ S ( y ) + σ (1 + a ∆) ∆ / h − κ (cid:0) θ − x (cid:1) (cid:0) x − θx − θy (cid:1) ∆ / + 6 σ (cid:0) η (cid:0) x − y (cid:1) + 4 κηx (cid:0) x − y (cid:1) + 20 κ x (cid:0) x − y (cid:1)(cid:1) ∆ y + σ (cid:0) η (cid:0) x − x y + 180 y (cid:1) + 36 κx (cid:0) x − x y + 5 y (cid:1)(cid:1) √ ∆+ σ (cid:0) − x + 382 x y − y (cid:1) y (cid:3) S ( y )+ 5 σ (1 + a ∆) ∆ κ η (cid:0) x + 3 y (cid:1) ( θ + x ) ∆ + 12 κ θση (cid:0) x − y (cid:1) ( θ + x ) ∆ / y + 2 σ ∆ (cid:0) x − x y + 3 y (cid:1) (cid:0) η + 6 κηx + 2 κ x (cid:1) − κθσ (cid:0) x − x y + 3 y (cid:1) √ ∆ y + 3 σ (cid:0) − x + 21 x y − x y + y (cid:1)(cid:3) S ( y ) (cid:9) , η := κ ( θ − x ) and for m = 0 , , , . . . , S m ( y ) := 12 √ π b a ∆ Γ ( a ∆) A − − r m / exp − y − κ ( θ − x ) ∆ / σ √ x ! × (cid:20) B Γ (cid:16) r m (cid:17) F (cid:18) r m ,
32 ; B A (cid:19) + √ A Γ (cid:18) r m (cid:19) F (cid:18) r m ,
12 ; B A (cid:19)(cid:21) with r m := m + a ∆ − A := 1 / (cid:0) σ x ∆ (cid:1) , and B := yσ √ x ∆ / − κ ( θ − x ) σ x − b .

In this section, we demonstrate the performance of the density approximations for the pure jump OU model, the constant diffusion model and the square-root diffusion model introduced in Section 3.4. To test the accuracy of the asymptotic expansion methodology, we calculate the true transition density by the inverse Fourier transform of its known characteristic function as the benchmark for each of the three models. Here, we use the numerical inverse Fourier transform method proposed by Abate and Whitt (1992), which has been proved to be efficient and accurate. By comparing the approximated transition densities obtained by our asymptotic expansion method with the true densities obtained by the inverse Fourier transform, we show that the approximation errors decrease quickly as the approximation order $M$ in (35) increases.

For each example, the true transition density of $X(\Delta)$ can be obtained via its characteristic function $\phi(\Delta;\omega):=Ee^{i\omega X(\Delta)}$ by
\[
p_{X(\Delta)}(x\,|\,x_0;\theta)=\frac{1}{2\pi}\int_{-\infty}^{+\infty}e^{-ix\omega}\phi(\Delta;\omega)\,d\omega=\frac{1}{\pi}\int_0^{+\infty}\left[\cos(x\omega)\,\mathrm{Re}(\phi)(\Delta;\omega)+\sin(x\omega)\,\mathrm{Im}(\phi)(\Delta;\omega)\right]d\omega, \tag{82}
\]
which can be efficiently approximated via the following Euler summation:
\[
E(m,n,x)=\sum_{k=0}^{m}\binom{m}{k}2^{-m}\,s_{n+k}(x),
\]
where the truncated series is defined by
\[
s_n(x):=\frac{h}{2\pi}+\frac{h}{\pi}\sum_{k=1}^{n}\left[\mathrm{Re}(\phi)(\Delta;kh)\cos(khx)+\mathrm{Im}(\phi)(\Delta;kh)\sin(khx)\right].
\]
We refer to Abate and Whitt (1992) for more technical details.

By using (13), we can derive the explicit expressions of the characteristic functions for the above three models as follows. For the pure jump OU model, we have
\[
\phi(t;\omega)=\exp\left\{i\omega\left[e^{-\kappa t}x_0+\theta\left(1-e^{-\kappa t}\right)\right]-\frac{a}{\kappa}\left[\mathrm{Li}_2\left(\frac{i\omega e^{-\kappa t}}{b}\right)-\mathrm{Li}_2\left(\frac{i\omega}{b}\right)\right]\right\},
\]
where $\mathrm{Li}_s(z):=\sum_{k=1}^{+\infty}z^k/k^s$ is the polylogarithm function.
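The polylogarithm entering the characteristic functions above can be evaluated by truncating its defining series; the following minimal sketch (our own helper, with the truncation level `n_terms` as a tunable assumption) checks the truncation against closed forms of $\mathrm{Li}_1$ and $\mathrm{Li}_2$:

```python
import math

def polylog(s, z, n_terms=200):
    """Truncated defining series Li_s(z) = sum_{k>=1} z^k / k^s (converges for |z| < 1)."""
    return sum(z**k / k**s for k in range(1, n_terms + 1))

# sanity checks against known closed forms:
print(abs(polylog(1, 0.5) - (-math.log(1 - 0.5))) < 1e-12)                      # Li_1(z) = -log(1-z)
print(abs(polylog(2, 0.5) - (math.pi**2 / 12 - math.log(2)**2 / 2)) < 1e-12)    # Li_2(1/2) identity
```

The same series applies verbatim to complex arguments with $|z|<1$, as needed for $\mathrm{Li}_2(i\omega e^{-\kappa t}/b)$ when $|\omega|<b$; for larger $|\omega|$ an analytic continuation (e.g., via the integral representation) is required, which is beyond this sketch.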
For the constant diffusion model, we have
\[
\phi(t;\omega)=\exp\left\{i\omega\left[e^{-\kappa t}x_0+\theta\left(1-e^{-\kappa t}\right)\right]-\frac{\omega^2\sigma^2\left(1-e^{-2\kappa t}\right)}{4\kappa}-\frac{a}{\kappa}\left[\mathrm{Li}_2\left(\frac{i\omega e^{-\kappa t}}{b}\right)-\mathrm{Li}_2\left(\frac{i\omega}{b}\right)\right]\right\}.
\]
For the square-root diffusion model, we have $\phi(t;\omega)=E\left[e^{i\omega X(t)}\,\middle|\,X(0)=x_0\right]=e^{\alpha(t)+\beta(t)x_0}$, where
\[
\beta(t)=\frac{i\omega\kappa}{\kappa e^{\kappa t}+i\omega\sigma^2\left(1-e^{\kappa t}\right)/2}
\]
and α ( t ) = 1 σ { κ θt + aσ t log ( b ) − ( κθ + aσ t ) log [ − e κt ( − κiωσ ) ] + 2 κθ log ( κiωσ ) + aσ t [ log ( − be κt ( κ − iωσ ) / ( iω (2 κ − bσ )) ) − log ( b − iωκiωσ (1 − e κt ) + 2 κe κt ) ] + (a/κ) [ Li₂ ( − κiωσ ) − Li₂ ( e κt ( − κiωσ ) ) − Li₂ ( b ( κ − iωσ ) / ( iω (2 κ − bσ )) ) + Li₂ ( be κt ( κ − iωσ ) / ( iω (2 κ − bσ )) ) ] }.

For the numerical comparison, according to Barndorff-Nielsen and Shephard (2001a), Li and Chen (2016) and James et al. (2013), we set the parameters of the above three examples as follows. For the pure jump OU model in (79), we set κ = 0., θ = 0., a = 100, and b = 10. For the constant diffusion model in (80), we set κ = 0., θ = 0., σ = 0., a = 100, and b = 10. For the square-root diffusion model in (81), we set κ = 0., θ = 0., σ = 0., a = 100, and b = 10. In each model, we set the initial value x₀ = 0.. To compare the true density $p_{X(\Delta)}(x\,|\,x_0;\theta)$ in (82) and the approximated density $p^{(M)}_{X(\Delta)}(x\,|\,x_0;\theta)$ derived by (35) or (36), we denote by
\[
e_M(\Delta,x\,|\,x_0;\theta)=p_{X(\Delta)}(x\,|\,x_0;\theta)-p^{(M)}_{X(\Delta)}(x\,|\,x_0;\theta) \tag{83}
\]
the $M$-th order approximation error and define the maximum relative error as
\[
\max_{x\in\mathcal{D}}\left|\frac{e_M(\Delta,x\,|\,x_0;\theta)}{p_{X(\Delta)}(x\,|\,x_0;\theta)}\right|
\]
over a region $\mathcal{D}$. We consider monthly, weekly, and daily monitoring frequencies ($\Delta=1/12$, $1/52$, $1/252$) and approximation orders $M=0,1,2,3$. With $\Delta=1/$
252 and the order $M=2$, the maximum relative error of each model can attain the level of $10^{-}$. Besides, when the monitoring frequency rises, i.e., the time interval $\Delta$ becomes smaller, the maximum relative error decreases correspondingly for each model.

Figure 1: Maximum relative errors of density approximation for Models 1, 2 and 3 with monthly, weekly and daily frequencies and orders $M=0,1,2,3$: (a) pure jump OU model; (b) constant diffusion model; (c) square-root diffusion model.

In the following figures, we consider the weekly monitoring frequency $\Delta=1/$
52 and denote by $e_0,e_1,e_2,e_3$ the abbreviations of $e_M(\Delta,x\,|\,x_0;\theta)$ in (83) for $M=0,1,2,3$.

Figure 2: Errors of density approximation for weekly monitoring frequency ($\Delta=1/$
52) in the pure jump OU model, i.e., $e_0,e_1,e_2,e_3$ as the approximation errors with respect to the expansion orders $M=0,1,2,3$.

Figure 3: Errors of density approximation for weekly monitoring frequency ($\Delta=1/$
52) in the constant diffusion model, i.e., $e_0,e_1,e_2,e_3$ as the approximation errors with respect to the expansion orders $M=0,1,2,3$.

Figure 4: Errors of density approximation for weekly monitoring frequency ($\Delta=1/$
52) in the square-root diffusion model, i.e., $e_0,e_1,e_2,e_3$ as the approximation errors with respect to the expansion orders $M=0,1,2,3$.

Based on the numerical results, we make the following remarks. (1) When the solution of the SDE (11) does not admit an explicit expression of $X(t)$ or of the characteristic function $\phi(t;\omega)$, our method can still be used to approximate the transition density. (2) The approximation errors decrease quickly as the approximation order $M$ in (35) increases; thus it suffices to use the first several expansion terms for the approximation. (3) Since the SDE driven by the gamma process involves the fat-tail characteristic, the characteristic function $\phi(t;\omega)$ for the pure jump SDE decreases slowly as $\omega\to+\infty$, which induces a heavy computation burden in the inverse Fourier transform method. In contrast, our expansion terms for the pure jump SDE achieve quick convergence and can be evaluated in a few seconds for any rational initial value of $X_0$.

In this paper, we propose a closed-form asymptotic expansion to approximate the transition density of the jump-diffusion SDE driven by the gamma process. We employ three examples with known characteristic functions for numerical illustrations and comparisons. Compared with the method of calculating the transition density by the inverse Fourier transform of the characteristic function, our method is more efficient while achieving low approximation errors. In terms of applications in financial engineering, our approximation method can be directly applied to option pricing and hedging to obtain analytically tractable results, which is left for further study.
Acknowledgements
Jiang and Yang's research was supported by the National Natural Science Foundation of China (Grant No. 11671021).