A representation formula for the probability density in stochastic dynamical systems with memory
Fang Yang^{a,b} and Xu Sun^{a,b,∗,†}

^a School of Mathematics and Statistics, Huazhong University of Science and Technology, Wuhan 430074, Hubei, China.
^b Center for Mathematical Sciences, Huazhong University of Science and Technology, Wuhan 430074, Hubei, China.
February 8, 2021
Abstract: Marcus stochastic delay differential equations (SDDEs) are often used to model stochastic dynamical systems with memory in science and engineering. Since no infinitesimal generators exist for Marcus SDDEs, due to their non-Markovian property, conventional Fokker-Planck equations, which govern the evolution of the density, are not available for Marcus SDDEs. In this paper, we identify the Marcus SDDE with a Marcus stochastic differential equation (SDE) without delays but subject to extra constraints. This provides an efficient way to establish existence and uniqueness of the solution and to obtain a representation formula for the probability density of the Marcus SDDE. In this formula, the probability density for the Marcus SDDE is expressed in terms of that for a Marcus SDE without delay.

Key words: Marcus integral, stochastic differential equations, stochastic delay differential equations, Lévy processes, non-Gaussian white noise.
Stochastic differential equations (SDEs) driven by Lévy processes are widely used to model stochastic dynamical systems perturbed by non-Gaussian white noises. While Itô SDEs driven by Lévy processes are widely applied in biology and finance [1, 2], Marcus SDEs [3, 4, 5] are more appropriate models in physics and engineering [6, 7]. To study the propagation and evolution of uncertainty in stochastic dynamical systems, it is essential to study the dynamics of the probability density, which contains all the statistical information about the uncertainty. For SDEs without delays, the dynamical behavior of the probability density is governed by Fokker-Planck equations. Fokker-Planck equations corresponding to Marcus SDEs driven by Lévy processes are derived in [7].

∗ Corresponding author: [email protected]
† The authors are supported by National Natural Science Foundation Grant 11531006.
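To make the driving noise concrete, increments of such a process can be sampled directly. The following is a minimal numerical sketch under simplifying assumptions not made in the paper: one dimension, finite jump activity and Gaussian jump sizes, so the compound-Poisson jump part needs no small-jump compensation. All names and parameter values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def levy_increments(dt, b, sigma, lam, jump_mean, jump_std, size):
    """Sample i.i.d. increments of an illustrative 1-d Levy process over steps
    of length dt: drift b*dt, Gaussian part sigma*dB, and compound-Poisson
    jumps with rate lam and Gaussian jump sizes (finite activity, so no
    compensation of small jumps is required)."""
    gauss = sigma * np.sqrt(dt) * rng.standard_normal(size)
    counts = rng.poisson(lam * dt, size)               # number of jumps per step
    jumps = np.array([rng.normal(jump_mean, jump_std, c).sum() for c in counts])
    return b * dt + gauss + jumps

# Sanity check: for a finite-activity Levy process, E[L(dt)] = (b + lam * E[jump]) * dt.
inc = levy_increments(1.0, b=0.2, sigma=0.5, lam=2.0,
                      jump_mean=1.0, jump_std=0.1, size=50000)
```

With these parameters the sample mean of `inc` should be close to (b + λ·E[jump])·dt = 2.2, which is what makes such noise "non-Gaussian white": stationary independent increments with a jump component.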
First, we introduce some notation for vectors. Throughout this paper, each element of R^d is represented as a column vector. Given x_1, x_2, ..., x_k ∈ R^d, (x_1^T, x_2^T, ..., x_k^T)^T is a column vector in R^{kd}. Here {·}^T denotes the transpose of {·}.

Now, we review the definition of Marcus SDEs driven by Lévy processes and without delays. Consider

    dZ(t) = α(Z(t), t) dt + β(Z(t), t) ⋄ dL(t),  for t > 0,
    Z(0) = z_0 ∈ R^d,                                            (1)

where Z(t) ∈ R^d, α: R^d × R^+ → R^d, (x, t) ↦ α(x, t), β: R^d × R^+ → M_{d×n}, (x, t) ↦ β(x, t) = (β_{ij})_{d×n}, and L(t) ∈ R^n. Here M_{d×n} is the set of all d-by-n real matrices.

By the Lévy-Itô decomposition [1], the Lévy process L(t) can be expressed as

    L(t) = bt + B_A(t) + ∫_{‖y‖<1} y Ñ(t, dy) + ∫_{‖y‖≥1} y N(t, dy),   (2)

where b ∈ R^n is a drift vector, B_A(t) is an n-dimensional Brownian motion with covariance matrix A, and N(t, dy) is the Poisson random measure defined by

    N(t, S)(ω) = #{s | 0 ≤ s ≤ t; ΔL(s)(ω) ∈ S},   (3)

with #{·} denoting the number of elements in the set {·}. Here S is a Borel set in B(R^n \ {0}), ΔL(s) is the jump of L(s) at time s, defined as ΔL(s) = L(s) − L(s−), and Ñ(dt, dy) is the compensated Poisson measure defined by Ñ(dt, dy) = N(dt, dy) − ν(dy) dt, where ν is the jump (Lévy) measure.

Note that it is convenient to write the n-dimensional Brownian motion B_A(t) in the form B_A(t) = σB(t) [1], where B(t) is a standard n-dimensional Brownian motion and σ is an n × n nonzero matrix for which A = σσ^T, so that B_A^i(t) = Σ_{j=1}^n σ_{ij} B^j(t) for i = 1, 2, ..., n.

To proceed, the following definition is needed.

Definition 1.
For u = (u_1, u_2, ..., u_d)^T ∈ R^d and v = (v_1, v_2, ..., v_n)^T ∈ R^n, the mapping H is defined by

    H: R^d × R^n → R^d,  (u, v) ↦ H(u, v) = Ψ(1),   (4)

where Ψ: R → R^d, r ↦ Ψ(r), is the solution of the following ordinary differential equation (ODE):

    dΨ(r)/dr = β(Ψ(r)) v,  Ψ(0) = u.   (5)

The R^d-valued strong solution Z(t) of Marcus SDE (1) is defined as

    Z(t) = z_0 + ∫_0^t α(Z(s), s) ds + ∫_0^t β(Z(s−), s) ⋄ dL(s),   (6)

where the left limit Z(s−) = lim_{u↑s} Z(u).
As shown in [5], the Marcus integral operation "⋄" satisfies the chain rule of classical calculus and can be obtained as the limit of a Wong-Zakai approximation. We have the following result from [1, 5, 10].
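Definition 1 is easy to sketch numerically. The snippet below is an illustrative sketch, not taken from the paper: it approximates the Marcus map H(u, v) = Ψ(1) of (4)-(5) for a scalar state by integrating the ODE, and checks it against the explicit flow available for the linear choice β(u) = u, where H(u, v) = u·e^v.

```python
import math

def marcus_map(u, v, beta, steps=1000):
    """Approximate the Marcus map H(u, v) = Psi(1), where
    dPsi/dr = beta(Psi) * v with Psi(0) = u, via classical RK4 on [0, 1]."""
    h = 1.0 / steps
    psi = u
    for _ in range(steps):
        k1 = beta(psi) * v
        k2 = beta(psi + 0.5 * h * k1) * v
        k3 = beta(psi + 0.5 * h * k2) * v
        k4 = beta(psi + h * k3) * v
        psi += (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return psi

# For the linear noise coefficient beta(u) = u, the flow is explicit:
# H(u, v) = u * exp(v).
approx = marcus_map(2.0, 0.5, lambda x: x)
exact = 2.0 * math.exp(0.5)
```

Unlike the Itô jump rule u + β(u)v, the Marcus rule transports the state along the full flow of the noise vector field; this is precisely why "⋄" obeys the classical chain rule.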
Lemma 1.
Suppose that for all x ∈ R^d, α(x, t), β(x, t) and (∂/∂x)β(x, t) satisfy the following two conditions: (i) they are continuous with respect to t; (ii) they are globally Lipschitz with respect to x, i.e., for all x_1, x_2 ∈ R^d and t ∈ R^+,

    ‖α(x_1, t) − α(x_2, t)‖ + ‖β(x_1, t) − β(x_2, t)‖ + ‖(∂/∂x)β(x_1, t) − (∂/∂x)β(x_2, t)‖ ≤ C‖x_1 − x_2‖,   (9)

where C is a positive constant. Then there exists a unique strong solution to the Marcus SDE (1).

Consider the following Marcus SDDE:

    dX(t) = f(X(t), X(t−τ)) dt + g(X(t), X(t−τ)) ⋄ dL(t),  for t > 0,
    X(t) = γ(t),  t ∈ [−τ, 0],                                   (10)

where X(t) is an R^d-valued stochastic process, L(t) is an R^n-valued Lévy process defined on a probability space (Ω, F, P), τ ∈ R^+ is the time delay, f: R^d × R^d → R^d, g = (g_{ij})_{d×n}: R^d × R^d → M_{d×n}, and γ: [−τ, 0] → R^d.

The "⋄" in (10), as stated before, can be obtained by a Wong-Zakai approximation. However, unlike in (1), there seems to be no neat expression for "⋄" in (10), due to the delay τ appearing in the diffusion coefficient g(X(t), X(t−τ)). We circumvent this problem by associating (10) with the following Marcus SDEs without delays:

    dX_1(t′) = f(X_1(t′), γ(t′ − τ)) dt′ + g(X_1(t′), γ(t′ − τ)) ⋄ dL_1(t′),
    dX_2(t′) = f(X_2(t′), X_1(t′)) dt′ + g(X_2(t′), X_1(t′)) ⋄ dL_2(t′),
    ...
    dX_k(t′) = f(X_k(t′), X_{k−1}(t′)) dt′ + g(X_k(t′), X_{k−1}(t′)) ⋄ dL_k(t′),  for t′ ∈ (0, τ],   (11)

constrained by the condition that the initial value of X_1(t′) is prescribed and the final value of X_i(t′) is set to be equal to the initial value of X_{i+1}(t′) for i = 1, 2, ..., k − 1, i.e.,

    X_1(0) = γ_0  and  X_i(τ) = X_{i+1}(0).   (12)

In (11) and (12), X_i(t′) ∈ R^d (i = 1, 2, ..., k) and γ_0 = γ(0) is a constant in R^d. Note that condition (12) is different from the conventional initial condition

    (X_1^T(0), X_2^T(0), ..., X_k^T(0))^T = x_0,   (13)

where x_0 ∈ R^{kd} is a constant. Equation (11) under condition (13), which is component-wise, can also be rewritten in the form of (1), but with higher dimensionality.

Equation (11) under condition (12) can be obtained from (10) in the way given below. For each solution to Marcus SDDE (10), we obtain a solution to (11) and (12) by constructing X_i(t′) = X(t′ + (i−1)τ) and L_i(t′) = L(t′ + (i−1)τ) − L((i−1)τ) for i = 1, 2, ..., k. It is straightforward to check that the path of X_i(t′) with t′ ∈ (0, τ] coincides with the path of X(t) with t ∈ ((i−1)τ, iτ]. Therefore, Marcus SDDE (10) is equivalent to Marcus SDE (11) under condition (12) in the following sense:

    X(t) a.s.= X_1(t),             for t ∈ (0, τ],
               X_2(t − τ),         for t ∈ (τ, 2τ],
               ...
               X_k(t − (k−1)τ),    for t ∈ ((k−1)τ, kτ],   (14)

or equivalently,

    X_i(t′) a.s.= X(t′ + (i−1)τ)  for t′ ∈ (0, τ] and i = 1, 2, ..., k.   (15)

The solution to Marcus SDDE (10) can thus be interpreted through Marcus SDE (11) under condition (12). We are now ready to establish existence and uniqueness of the solution to (10).

We first introduce some notation that will be used later. Denote by Φ_k(γ_0, t′), with Φ_k: R^d × [0, τ] → R^{kd}, (γ_0, t′) ↦ Φ_k(γ_0, t′) = (X_1^T(t′), X_2^T(t′), ..., X_k^T(t′))^T, the solution at time t′ to (11) under condition (12). Examining the recursive structure of (11) and (12), it is straightforward to check that for k ≥ 2, Φ_{k−1}(γ_0, t′) coincides with the first (k−1)d components of Φ_k(γ_0, t′).

Assumption (H1). Suppose that γ(t) is continuous on [−τ, 0], and f(x, y), g(x, y), (∂/∂x)g(x, y) and (∂/∂y)g(x, y) are globally Lipschitz, i.e., for all x_1, x_2, y_1, y_2 ∈ R^d,

    ‖f(x_1, y_1) − f(x_2, y_2)‖ + ‖g(x_1, y_1) − g(x_2, y_2)‖ + ‖(∂/∂x)g(x_1, y_1) − (∂/∂x)g(x_2, y_2)‖ + ‖(∂/∂y)g(x_1, y_1) − (∂/∂y)g(x_2, y_2)‖ ≤ C_1‖x_1 − x_2‖ + C_2‖y_1 − y_2‖.   (16)

Theorem 1.
Under Assumption (H1), there exists a unique strong solution to Marcus SDDE (10).

Proof. According to (14) or (15), we only need to show that for any given k ∈ N, there is a unique strong solution to (11) under condition (12). We finish the proof by induction.

First, we argue that the conclusion is true for k = 1. In fact, for k = 1, Marcus SDE (11) with (12) becomes

    dX_1(t′) = F_1(X_1(t′), t′) dt′ + G_1(X_1(t′), t′) ⋄ dL_1(t′),  t′ ∈ (0, τ],
    X_1(0) = γ_0,                                                     (17)

where F_1(X_1(t′), t′) = f(X_1(t′), γ(t′ − τ)) and G_1(X_1(t′), t′) = g(X_1(t′), γ(t′ − τ)). It is straightforward to check that F_1 and G_1 satisfy the conditions in Lemma 1; therefore, (17) has a unique strong solution by Lemma 1.

Suppose that the conclusion is true for k = i, i.e., the Marcus SDE

    dX_1(t′) = f(X_1(t′), γ(t′ − τ)) dt′ + g(X_1(t′), γ(t′ − τ)) ⋄ dL_1(t′),
    dX_2(t′) = f(X_2(t′), X_1(t′)) dt′ + g(X_2(t′), X_1(t′)) ⋄ dL_2(t′),
    ...
    dX_i(t′) = f(X_i(t′), X_{i−1}(t′)) dt′ + g(X_i(t′), X_{i−1}(t′)) ⋄ dL_i(t′),
    X_1(0) = γ_0, X_2(0) = X_1(τ), ..., X_i(0) = X_{i−1}(τ),   (18)

has a unique strong solution (X_1^T(t′), X_2^T(t′), ..., X_i^T(t′))^T.

It remains to show that the conclusion is true for k = i + 1, i.e., that there is a unique strong solution to the Marcus SDE

    dX_1(t′) = f(X_1(t′), γ(t′ − τ)) dt′ + g(X_1(t′), γ(t′ − τ)) ⋄ dL_1(t′),
    dX_2(t′) = f(X_2(t′), X_1(t′)) dt′ + g(X_2(t′), X_1(t′)) ⋄ dL_2(t′),
    ...
    dX_i(t′) = f(X_i(t′), X_{i−1}(t′)) dt′ + g(X_i(t′), X_{i−1}(t′)) ⋄ dL_i(t′),
    dX_{i+1}(t′) = f(X_{i+1}(t′), X_i(t′)) dt′ + g(X_{i+1}(t′), X_i(t′)) ⋄ dL_{i+1}(t′),
    X_1(0) = γ_0, X_2(0) = X_1(τ), ..., X_i(0) = X_{i−1}(τ), X_{i+1}(0) = X_i(τ).   (19)

Note that existence and uniqueness for (18) imply that the solution (X_1^T(t′), X_2^T(t′), ..., X_i^T(t′))^T at t′ = τ is a fixed value equal to Φ_i(γ_0, τ), i.e.,

    (X_1^T(τ), X_2^T(τ), ..., X_i^T(τ))^T = Φ_i(γ_0, τ).   (20)

With (20), equation (19) can be written as

    dX_1(t′) = f(X_1(t′), γ(t′ − τ)) dt′ + g(X_1(t′), γ(t′ − τ)) ⋄ dL_1(t′),
    dX_2(t′) = f(X_2(t′), X_1(t′)) dt′ + g(X_2(t′), X_1(t′)) ⋄ dL_2(t′),
    ...
    dX_i(t′) = f(X_i(t′), X_{i−1}(t′)) dt′ + g(X_i(t′), X_{i−1}(t′)) ⋄ dL_i(t′),
    dX_{i+1}(t′) = f(X_{i+1}(t′), X_i(t′)) dt′ + g(X_{i+1}(t′), X_i(t′)) ⋄ dL_{i+1}(t′),   (21)

with initial condition

    (X_1^T(0), X_2^T(0), ..., X_i^T(0), X_{i+1}^T(0))^T = (γ_0^T, Φ_i^T(γ_0, τ))^T.   (22)

Equations (21) and (22) can be rewritten in the form of (1) as

    dX(t′) = F_{i+1}(X(t′), t′) dt′ + G_{i+1}(X(t′), t′) ⋄ dL(t′),  X(0) = x_0,   (23)

where

    X(t′) = (X_1^T(t′), X_2^T(t′), ..., X_i^T(t′), X_{i+1}^T(t′))^T,
    F_{i+1}(X(t′), t′) = (f^T(X_1(t′), γ(t′ − τ)), f^T(X_2(t′), X_1(t′)), ..., f^T(X_i(t′), X_{i−1}(t′)), f^T(X_{i+1}(t′), X_i(t′)))^T,
    L(t′) = (L_1^T(t′), L_2^T(t′), ..., L_i^T(t′), L_{i+1}^T(t′))^T,
    x_0 = (γ_0^T, Φ_i^T(γ_0, τ))^T,

and G_{i+1}(X(t′), t′) is the block-diagonal matrix

    G_{i+1}(X(t′), t′) = diag( g(X_1(t′), γ(t′ − τ)), g(X_2(t′), X_1(t′)), ..., g(X_i(t′), X_{i−1}(t′)), g(X_{i+1}(t′), X_i(t′)) ).

It is straightforward to show that F_{i+1} and G_{i+1} satisfy the conditions in Lemma 1; therefore, (23) has a unique strong solution by Lemma 1.

Definition 2.
For k ∈ N, define Q_k: R^{kd} × [0, τ] × R^{kd} × [0, τ] → [0, ∞), (u, t′, v, s) ↦ Q_k(u; t′ | v; s), such that for all u, v ∈ R^{kd} and 0 ≤ s < t′ ≤ τ, Q_k(u; t′ | v; s) represents the probability density at u of the solution (X_1^T(t′), X_2^T(t′), ..., X_k^T(t′))^T to Marcus SDE (11), under the condition (X_1^T(s), X_2^T(s), ..., X_k^T(s))^T = v.

The following three notations are used in this paper to represent probability densities.

(i) P_A: the density for the solution X(t) of Marcus SDDE (10). For example, P_A(x, t) represents the density of X(t) at X(t) = x, and P_A(x, 3τ | y, τ; z, 2τ) represents the conditional density of X(3τ) at X(3τ) = x given X(τ) = y and X(2τ) = z.

(ii) Q_k: as given in Definition 2, Q_k is the transition density of the R^{kd}-valued solution (X_1^T(t′), X_2^T(t′), ..., X_k^T(t′))^T to Marcus SDE (11). For example, Q_2(x, y; t′ | w, z; s) with 0 ≤ s < t′ ≤ τ represents the density of (X_1^T(t′), X_2^T(t′))^T at X_1(t′) = x and X_2(t′) = y given X_1(s) = w and X_2(s) = z.

(iii) p: the density in general cases. For example, p(X = x; Y = y) represents the density of (X, Y) at X = x and Y = y, and p(X = x; Y = y | Z = z) represents the density of (X, Y) at X = x and Y = y given Z = z.

Note that P_A and Q_k can be expressed in terms of p. For instance,

    P_A(x, t) = p(X(t) = x | X(0) = γ_0),   (24)
    P_A(x, 3τ | y, τ; z, 2τ) = p(X(3τ) = x | X(0) = γ_0; X(τ) = y; X(2τ) = z),   (25)
    Q_2(x, y; t′ | u, v; s) = p(X_1(t′) = x; X_2(t′) = y | X_1(0) = γ_0; X_1(s) = u; X_2(s) = v).   (26)

To proceed, we need the following assumptions.

Assumption (H2). Suppose that for all x_0 ∈ R^{kd} with k ∈ N, the Marcus SDE defined by (11) under the initial condition (13) has a unique strong solution.

Assumption (H3).
Suppose that for all u, v ∈ R^{kd} with k ∈ N and 0 ≤ s < t′ ≤ τ, the probability density Q_k(u; t′ | v; s), which represents the density of X(t′) at u given X(s) = v, exists and is strictly positive.

Lemma 1 gives a sufficient condition for Assumption (H2) to hold. As for Assumption (H3), sufficient conditions exist for the existence and regularity of the probability density for some SDEs driven by Lévy processes, with certain restrictions imposed on the jump measures; see [1, 11] and the references therein, among others. However, sufficient conditions for SDEs driven by general Lévy processes are still not available. There are also some sufficient conditions available for the density to be strictly positive in cases with Gaussian white noise. The strict positivity of the density for a general class of SDEs can be concluded from heat kernel estimates [12, 13]. A more general sufficient condition for strict positivity of the density is presented in [14].

Now, we use the transition density Q_k of Marcus SDE (11) to represent the density P_A of Marcus SDDE (10).

Theorem 2. [Representation formula for the density of Marcus SDDE]
Suppose that Assumptions (H2) and (H3) hold. Then for all t > 0, the probability density function P_A(x, t) for the solution X(t) of Marcus SDDE (10) exists. Moreover, for all x ∈ R^d, the following statements are true.

(i) For t ∈ (0, τ],

    P_A(x, t) = Q_1(x; t | γ_0; 0).   (27)

(ii) For t ∈ ((k−1)τ, kτ) with k ≥ 2 and k ∈ N,

    P_A(x, t) = ∫_{R^{2(k−1)d}} Q_{k−1}(x_1, ..., x_{k−1}; τ | y_1, ..., y_{k−1}; t − (k−1)τ)
                × Q_k(y_1, ..., y_{k−1}, x; t − (k−1)τ | γ_0, x_1, ..., x_{k−1}; 0) ∏_{i=1}^{k−1} dx_i ∏_{i=1}^{k−1} dy_i.   (28)

(iii) For t = kτ with k ≥ 2 and k ∈ N,

    P_A(x, t) = ∫_{R^{(k−1)d}} Q_k(x_1, ..., x_{k−1}, x; τ | γ_0, x_1, ..., x_{k−1}; 0) ∏_{i=1}^{k−1} dx_i.   (29)

Remark 2.
Theorem 2 shows that the density for a Marcus SDDE can be expressed in terms of that for Marcus SDEs without delays. Governing equations for the density of Marcus SDEs without delays have been established; see [7].
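As a concrete illustration of statement (i) of Theorem 2, the following Monte Carlo sketch uses simplifying assumptions not made in the paper: scalar state, purely Gaussian noise, f(x, y) = −y, g ≡ 1 and constant history γ ≡ γ_0. On (0, τ] the delayed argument is frozen at the history value, so the SDDE coincides with a delay-free SDE whose density Q_1(x; t | γ_0; 0) is Gaussian and can be compared with simulated samples.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma0, tau, t, h, N = 1.0, 1.0, 0.5, 0.01, 20000
steps = int(round(t / h))

# Euler-Maruyama for dX = -X(s - tau) ds + dB(s) with X = gamma0 on [-tau, 0].
# For s in (0, tau] the delayed term X(s - tau) equals gamma0, so the SDDE is
# the delay-free SDE dX = -gamma0 ds + dB(s): statement (i) of Theorem 2.
X = np.full(N, gamma0)
for _ in range(steps):
    X += -gamma0 * h + np.sqrt(h) * rng.standard_normal(N)

# The exact density Q_1(x; t | gamma0; 0) is N(gamma0 * (1 - t), t) = N(0.5, 0.5).
emp_mean, emp_var = X.mean(), X.var()
```

Here the representation is trivial because t ≤ τ; for t in later delay windows one would instead integrate products of Q_{k−1} and Q_k as in (28).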
Remark 3.
Note that (29) can be absorbed into (28). In fact, for all u, v ∈ R^{kd},

    Q_k(u; s | v; s) = lim_{t′→s} Q_k(u; t′ | v; s) = δ(u − v),  and  f(u) = ∫_{R^{kd}} δ(u − v) f(v) dv.

Therefore, for t = kτ, equation (28) reduces to (29).

Proof. First, we prove statement (i) of Theorem 2. In fact, by equation (14) or (15), the density of X(t) for 0 < t ≤ τ defined by Marcus SDDE (10) is the same as the density of X_1(t) defined by Marcus SDE (11) with initial value γ_0 ∈ R^d and k = 1. Therefore, (i) is true.

Next, we prove statement (ii) of Theorem 2. For t ∈ ((k−1)τ, kτ) with k ≥ 2 and k ∈ N, as shown in the Appendix, if Assumption (H3) holds (i.e., Q_k exists), then P_A(x, t | x_1, τ; ...; x_{k−1}, (k−1)τ; x_k, kτ), P_A(x_k, kτ | x_1, τ; ...; x_{k−1}, (k−1)τ) and P_A(x_1, τ; ...; x_{k−1}, (k−1)τ) exist, and can be respectively expressed as

    P_A(x, t | x_1, τ; ...; x_{k−1}, (k−1)τ; x_k, kτ)
    = ∫_{R^{(k−1)d}} Q_k(x_1, ..., x_{k−1}, x_k; τ | y_1, ..., y_{k−1}, x; t − (k−1)τ)
      × Q_k(y_1, ..., y_{k−1}, x; t − (k−1)τ | γ_0, x_1, ..., x_{k−1}; 0) / Q_k(x_1, ..., x_k; τ | γ_0, x_1, ..., x_{k−1}; 0) ∏_{i=1}^{k−1} dy_i,   (30)

    P_A(x_k, kτ | x_1, τ; ...; x_{k−1}, (k−1)τ)
    = Q_k(x_1, ..., x_k; τ | γ_0, x_1, ..., x_{k−1}; 0) / Q_{k−1}(x_1, ..., x_{k−1}; τ | γ_0, x_1, ..., x_{k−2}; 0),   (31)

and

    P_A(x_1, τ; ...; x_{k−1}, (k−1)τ) = Q_{k−1}(x_1, ..., x_{k−1}; τ | γ_0, x_1, ..., x_{k−2}; 0).   (32)

Furthermore, the conditional density P_A(x, t | x_1, τ; ...; x_{k−1}, (k−1)τ) exists due to the identity

    P_A(x, t | x_1, τ; ...; x_{k−1}, (k−1)τ)
    = ∫_{R^d} P_A(x, t | x_1, τ; ...; x_{k−1}, (k−1)τ; x_k, kτ) × P_A(x_k, kτ | x_1, τ; ...; x_{k−1}, (k−1)τ) dx_k.   (33)

Substituting equations (30) and (31) into (33), we get

    P_A(x, t | x_1, τ; ...; x_{k−1}, (k−1)τ)
    = ∫_{R^{(k−1)d}} ∫_{R^d} Q_k(x_1, ..., x_{k−1}, x_k; τ | y_1, ..., y_{k−1}, x; t − (k−1)τ) dx_k
      × Q_k(y_1, ..., y_{k−1}, x; t − (k−1)τ | γ_0, x_1, ..., x_{k−1}; 0) / Q_{k−1}(x_1, ..., x_{k−1}; τ | γ_0, x_1, ..., x_{k−2}; 0) ∏_{i=1}^{k−1} dy_i
    = ∫_{R^{(k−1)d}} Q_{k−1}(x_1, ..., x_{k−1}; τ | y_1, ..., y_{k−1}, x; t − (k−1)τ)
      × Q_k(y_1, ..., y_{k−1}, x; t − (k−1)τ | γ_0, x_1, ..., x_{k−1}; 0) / Q_{k−1}(x_1, ..., x_{k−1}; τ | γ_0, x_1, ..., x_{k−2}; 0) ∏_{i=1}^{k−1} dy_i
    = ∫_{R^{(k−1)d}} Q_{k−1}(x_1, ..., x_{k−1}; τ | y_1, ..., y_{k−1}; t − (k−1)τ)
      × Q_k(y_1, ..., y_{k−1}, x; t − (k−1)τ | γ_0, x_1, ..., x_{k−1}; 0) / Q_{k−1}(x_1, ..., x_{k−1}; τ | γ_0, x_1, ..., x_{k−2}; 0) ∏_{i=1}^{k−1} dy_i.   (34)

To derive the last identity in (34), we use

    Q_{k−1}(x_1, ..., x_{k−1}; τ | y_1, ..., y_{k−1}, x; t − (k−1)τ) = Q_{k−1}(x_1, ..., x_{k−1}; τ | y_1, ..., y_{k−1}; t − (k−1)τ),

which follows from the fact that (X_1^T(τ), X_2^T(τ), ..., X_{k−1}^T(τ))^T in (11) depends only on (X_1^T(t − (k−1)τ), X_2^T(t − (k−1)τ), ..., X_{k−1}^T(t − (k−1)τ))^T and is independent of X_k(t − (k−1)τ).

By using equations (34) and (32), the density P_A(x, t) of Marcus SDDE (10) exists by the identity

    P_A(x, t) = ∫_{R^{(k−1)d}} P_A(x, t | x_1, τ; ...; x_{k−1}, (k−1)τ) × P_A(x_1, τ; ...; x_{k−1}, (k−1)τ) ∏_{i=1}^{k−1} dx_i
    = ∫_{R^{2(k−1)d}} Q_{k−1}(x_1, ..., x_{k−1}; τ | y_1, ..., y_{k−1}; t − (k−1)τ)
      × Q_k(y_1, ..., y_{k−1}, x; t − (k−1)τ | γ_0, x_1, ..., x_{k−1}; 0) ∏_{i=1}^{k−1} dx_i ∏_{i=1}^{k−1} dy_i.   (35)

Therefore, (ii) is true.

Finally, we show that statement (iii) of Theorem 2 is true. In fact, for t = kτ with k ≥ 2 and k ∈ N,

    P_A(x, kτ) = ∫_{R^{(k−1)d}} P_A(x, kτ | x_1, τ; ...; x_{k−1}, (k−1)τ) × P_A(x_1, τ; ...; x_{k−1}, (k−1)τ) ∏_{i=1}^{k−1} dx_i.   (36)

Substituting equations (31) and (32) into (36), we get equation (29). Therefore, (iii) is true.

Appendix

A.1 Proof of equation (30). For t ∈ ((k−1)τ, kτ] with k ≥ 2 and k ∈ N,

    P_A(x, t | x_1, τ; ...; x_{k−1}, (k−1)τ; x_k, kτ)
    = ∫_{R^{(k−1)d}} P_A(y_1, t − (k−1)τ; y_2, t − (k−2)τ; ...; y_{k−1}, t − τ; x, t | x_1, τ; ...; x_{k−1}, (k−1)τ; x_k, kτ) ∏_{i=1}^{k−1} dy_i
    = ∫_{R^{(k−1)d}} p(X(t − (k−1)τ) = y_1; X(t − (k−2)τ) = y_2; ...; X(t − τ) = y_{k−1}; X(t) = x
        | X(0) = γ_0; X(τ) = x_1; ...; X((k−1)τ) = x_{k−1}; X(kτ) = x_k) ∏_{i=1}^{k−1} dy_i
    = ∫_{R^{(k−1)d}} p(X_1(t − (k−1)τ) = y_1; ...; X_{k−1}(t − (k−1)τ) = y_{k−1}; X_k(t − (k−1)τ) = x
        | X_1(0) = γ_0; X_1(τ) = X_2(0) = x_1; ...; X_{k−1}(τ) = X_k(0) = x_{k−1}; X_k(τ) = x_k) ∏_{i=1}^{k−1} dy_i
    = ∫_{R^{(k−1)d}} p(X_1(t − (k−1)τ) = y_1; ...; X_{k−1}(t − (k−1)τ) = y_{k−1}; X_k(t − (k−1)τ) = x
        | X_1(0) = γ_0; X_2(0) = x_1; ...; X_k(0) = x_{k−1}; X_1(τ) = x_1; ...; X_k(τ) = x_k) ∏_{i=1}^{k−1} dy_i
    = ∫_{R^{(k−1)d}} p(X_1(τ) = x_1; ...; X_k(τ) = x_k | X_1(t − (k−1)τ) = y_1; ...; X_k(t − (k−1)τ) = x)
      × p(X_1(t − (k−1)τ) = y_1; ...; X_k(t − (k−1)τ) = x | X_1(0) = γ_0; ...; X_k(0) = x_{k−1})
        / p(X_1(τ) = x_1; ...; X_k(τ) = x_k | X_1(0) = γ_0; ...; X_k(0) = x_{k−1}) ∏_{i=1}^{k−1} dy_i
    = ∫_{R^{(k−1)d}} Q_k(x_1, ..., x_k; τ | y_1, ..., y_{k−1}, x; t − (k−1)τ)
      × Q_k(y_1, ..., y_{k−1}, x; t − (k−1)τ | γ_0, x_1, ..., x_{k−1}; 0) / Q_k(x_1, ..., x_k; τ | γ_0, x_1, ..., x_{k−1}; 0) ∏_{i=1}^{k−1} dy_i.   (37)

To derive the last identity, we use the notation as expressed in (26).

A.2 Proof of equation (31).
For t = kτ with k ≥ 2 and k ∈ N,

    P_A(x_k, kτ | x_1, τ; ...; x_{k−1}, (k−1)τ)
    = p(X(kτ) = x_k | X(0) = γ_0; X(τ) = x_1; ...; X((k−1)τ) = x_{k−1})
    = p(X_k(τ) = x_k | X_1(0) = γ_0; X_1(τ) = X_2(0) = x_1; ...; X_{k−1}(τ) = X_k(0) = x_{k−1})
    = p(X_k(τ) = x_k | X_1(0) = γ_0; X_2(0) = x_1; ...; X_k(0) = x_{k−1}; X_1(τ) = x_1; ...; X_{k−1}(τ) = x_{k−1})
    = p(X_1(τ) = x_1; ...; X_k(τ) = x_k | X_1(0) = γ_0; ...; X_k(0) = x_{k−1})
        / p(X_1(τ) = x_1; ...; X_{k−1}(τ) = x_{k−1} | X_1(0) = γ_0; ...; X_k(0) = x_{k−1})
    = p(X_1(τ) = x_1; ...; X_k(τ) = x_k | X_1(0) = γ_0; ...; X_k(0) = x_{k−1})
        / p(X_1(τ) = x_1; ...; X_{k−1}(τ) = x_{k−1} | X_1(0) = γ_0; ...; X_{k−1}(0) = x_{k−2})
    = Q_k(x_1, ..., x_k; τ | γ_0, x_1, ..., x_{k−1}; 0) / Q_{k−1}(x_1, ..., x_{k−1}; τ | γ_0, x_1, ..., x_{k−2}; 0),   (38)

where the second-to-last "=" follows from

    p(X_1(τ) = x_1; ...; X_{k−1}(τ) = x_{k−1} | X_1(0) = γ_0; ...; X_{k−1}(0) = x_{k−2}; X_k(0) = x_{k−1})
    = p(X_1(τ) = x_1; ...; X_{k−1}(τ) = x_{k−1} | X_1(0) = γ_0; ...; X_{k−1}(0) = x_{k−2}),   (39)

which is a consequence of the fact that X_1(τ), X_2(τ), ..., X_{k−1}(τ) in Marcus SDE (11) depend only on their initial values X_1(0), X_2(0), ..., X_{k−1}(0) and are independent of X_k(0).

A.3 Proof of equation (32).
For t = kτ with k ≥ 2 and k ∈ N,

    P_A(x_1, τ; x_2, 2τ; ...; x_{k−1}, (k−1)τ)
    = P_A(x_1, τ) × P_A(x_2, 2τ | x_1, τ) × ... × P_A(x_{k−1}, (k−1)τ | x_1, τ; x_2, 2τ; ...; x_{k−2}, (k−2)τ)
    = Q_1(x_1; τ | γ_0; 0) × Q_2(x_1, x_2; τ | γ_0, x_1; 0) / Q_1(x_1; τ | γ_0; 0) × ...
      × Q_{k−1}(x_1, ..., x_{k−1}; τ | γ_0, x_1, ..., x_{k−2}; 0) / Q_{k−2}(x_1, ..., x_{k−2}; τ | γ_0, x_1, ..., x_{k−3}; 0)
    = Q_{k−1}(x_1, ..., x_{k−1}; τ | γ_0, x_1, ..., x_{k−2}; 0).   (40)

Equation (31) has been used to derive each "=" after the first in (40); the telescoping product then collapses to the final expression.

References

[1] D. Applebaum.
Lévy Processes and Stochastic Calculus, 2nd Edition. Cambridge University Press, 2009.

[2] J. Duan. An Introduction to Stochastic Dynamics. Cambridge University Press, 2015.

[3] S. I. Marcus. Modeling and Analysis of Stochastic Differential Equations Driven by Point Processes. IEEE Transactions on Information Theory, IT-24(2):164–172, 1978.

[4] S. I. Marcus. Modeling and Approximation of Stochastic Differential Equations Driven by Semimartingales. Stochastics, 4:223–245, 1981.

[5] T. Kurtz, E. Pardoux, and P. Protter. Stratonovich Stochastic Differential Equations Driven by General Semimartingales. Annales de l'Institut Henri Poincaré, Probabilités et Statistiques, 31(2):351–377, 1995.

[6] X. Sun, J. Duan, and X. Li. An Alternative Expression for Stochastic Dynamical Systems with Parametric Poisson White Noise. Probabilistic Engineering Mechanics, 32:1–4, 2013.

[7] X. Sun, X. Li, and Y. Zheng. Governing Equations for Probability Densities of Marcus Stochastic Differential Equations with Lévy Noise. Stochastics and Dynamics, 17(5):1750033, 2017.

[8] K. Kümmel. On the Dynamics of Marcus Type Stochastic Differential Equations. Doctoral dissertation, Friedrich-Schiller-Universität Jena, 2016.

[9] Y. Zheng and X. Sun. Governing Equations for Probability Densities of Stochastic Differential Equations with Discrete Time Delays. Discrete and Continuous Dynamical Systems, Series B, 22(9):3615–3628, 2017.

[10] P. E. Protter. Stochastic Integration and Differential Equations, 2nd Edition. Springer, 2004.

[11] K. Bichteler, J. Gravereaux, and J. Jacod. Malliavin Calculus for Processes with Jumps. Gordon and Breach, 1987.

[12] D. G. Aronson. Non-negative Solutions of Linear Parabolic Equations. Annali della Scuola Normale Superiore di Pisa, Classe di Scienze, 22:607–694, 1968.

[13] E. B. Davies. Heat Kernels and Spectral Theory. Cambridge University Press, 1989.

[14] V. I. Bogachev, M. Röckner, and S. V. Shaposhnikov. Positive Densities of Transition Probabilities of Diffusion Processes.