Pathwise construction of affine processes
Nicoletta Gabrielli, Josef Teichmann

Abstract
Based on the theory of multivariate time changes for Markov processes, we show how to identify affine processes as solutions of certain time change equations. The result is a strong version of a theorem presented by J. Kallsen in [Kal06], which provides a representation in law of an affine process as a time–change transformation of a family of independent Lévy processes.
Keywords:
Affine processes, Lamperti transform, time–change
1. Introduction
During the last decades, many alternatives to the Black–Scholes model have been proposed in the literature to overcome its deficiencies. Possible extensions include jumps, stochastic volatility and/or other high dimensional models. Among the most popular ones, we recall the exponential Lévy models, which generalize the Black–Scholes model by introducing jumps. They allow to generate implied volatility smiles and skews similar to the ones observed in the markets. However, on some occasions, independence of increments is too big a restriction. Stochastic volatility models give a way to overcome this problem: when we model the variance parameter in the Black–Scholes model by a CIR process, we get the Heston model, see [Hes93]. The Heston model can be extended by adding jumps in the return component, as in the Bates model (see [Bat96]), and also in the stochastic variance component, as in the Barndorff–Nielsen and Shephard model (see [BNS01]). The class of affine processes includes all the above mentioned examples.

Affine processes are a class of time homogeneous Markov processes $X = (X_t)_{t\ge 0}$ taking values in a state space $D \subseteq \mathbb{R}^d$, characterized by the fact that, for all $(t,x) \in \mathbb{R}_{\ge 0} \times D$, their characteristic function has the following exponential affine form
$$\mathbb{E}_x\left[e^{\langle u, X_t\rangle}\right] = e^{\phi(t,u) + \langle x, \psi(t,u)\rangle}, \qquad u \in i\mathbb{R}^d,$$
where $\phi$ and $\psi$ are two functions taking values in $\mathbb{C}$ and $\mathbb{C}^d$, respectively. The theory of affine processes is dominated by weak characterizations, since affine processes are characterized by properties of their marginal distributions.

University of Zürich, Plattenstrasse 22, CH-8032, Switzerland. [email protected]
ETH Zürich, Rämistrasse 101, CH-8092, Switzerland. [email protected]
July 30, 2018

The functions $\phi$ and $\psi$ in the specification of the affine property solve a system of ODEs, also known in the literature under the name of generalized Riccati equations. These equations arise from the regularity property of affine processes. More precisely, in [CT13] it has been proved that, even on a general state space, stochastically continuous processes having the aforementioned affine property admit a version with càdlàg trajectories. The path regularity implies that the process is a semimartingale with differentiable characteristics up to its lifetime. From this characterization it is possible to conclude differentiability with respect to time of the Fourier–Laplace transform. This property, also called the regularity property, is crucial to relate the marginal laws of affine processes with a solution of a system of generalized Riccati equations.

This paper is devoted to a pathwise construction of affine processes, when the state space is $\mathbb{R}^m_{\ge 0} \times \mathbb{R}^n$. The representation proposed in this paper is a multivariate generalization of the Lamperti transformation of Lévy processes in $\mathbb{R}$ with no negative jumps. When $D = \mathbb{R}_{\ge 0}$, it has been proved that there exists a one-to-one correspondence between affine processes taking values in $D$ and Lévy processes, see [CPGUB13]. More precisely, let $Z^{(1)} = (Z^{(1)}_t)_{t\ge 0}$ be a Lévy process starting from 0 taking values in $\mathbb{R}$, whose Lévy measure has support in $\mathbb{R}_{\ge 0}$, and let $Z^{(0)}$ be an independent subordinator. Theorem 2 in [CPGUB13] shows that there exists a solution of the following time–change equation
$$X_t = x + Z^{(0)}_t + Z^{(1)}\left(\int_0^t X_s\,ds\right)$$
for all $(t,x) \in \mathbb{R}_{\ge 0} \times \mathbb{R}_{\ge 0}$. Moreover, it is proved that the solution is a time homogeneous Markov process taking values in $\mathbb{R}_{\ge 0}$, starting from $x$, characterized by the property that the logarithm of the characteristic function of the transition semigroup is given by an affine function of the initial state $x$.
Hence, by definition, it is an affine process taking values in $\mathbb{R}_{\ge 0}$.

In this paper we aim to obtain the analogous result in the multivariate case. In [Kal06] it has been proved that – in distribution – affine processes can be represented by means of $d+1$ independent Lévy processes taking values in $\mathbb{R}^d$. Under some natural assumptions on the Lévy triplets, the time change equation
$$X_t = x + Z^{(0)}_t + \sum_{i=1}^d Z^{(i)}\left(\int_0^t X^{(i)}_s\,ds\right), \qquad t \ge 0, \qquad (1)$$
admits a weak solution. More precisely, (1) admits a weak solution if there exists a probability space $(\Omega, \mathcal{G}, \mathbb{P})$ carrying two processes $(X, Z)$ such that (1) holds.
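For intuition, the one–dimensional case of (1) can be simulated directly: discretise time, advance the inner clock of $Z^{(1)}$ by $X_t\,dt$, and use the stationarity and independence of Lévy increments to sample the time–changed contribution. The following sketch is only illustrative: the compound Poisson choices for $Z^{(0)}$ and $Z^{(1)}$ (drifts, jump rates, exponential jump laws) are assumptions of this example, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(42)

def subordinator_inc(dt):
    """Increment of Z0 over dt: drift 0.05 plus compound-Poisson Exp(0.1) jumps at rate 1."""
    n = rng.poisson(1.0 * dt)
    return 0.05 * dt + rng.exponential(0.1, n).sum()

def spectrally_positive_inc(dt):
    """Increment of Z1 over dt: drift -0.5 plus compound-Poisson Exp(0.2) jumps at rate 2."""
    n = rng.poisson(2.0 * dt)
    return -0.5 * dt + rng.exponential(0.2, n).sum()

def simulate(x, T=1.0, dt=1e-3):
    """Euler-type scheme for X_t = x + Z0_t + Z1(int_0^t X_s ds)."""
    X, path = x, [x]
    for _ in range(int(round(T / dt))):
        # the inner clock of Z1 advances by X*dt >= 0; note that X = 0 gives a
        # zero increment of the time-changed part, so 0 is naturally absorbing
        X = max(X + subordinator_inc(dt) + spectrally_positive_inc(X * dt), 0.0)
        path.append(X)  # max(...) only guards against discretisation overshoot
    return np.array(path)

path = simulate(1.0)
print(len(path), path[-1] >= 0.0)
```

The scheme produces a nonnegative path, in line with the statement that the solution takes values in $\mathbb{R}_{\ge 0}$.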
Remark 1.1.
Recall that, a priori, the process $X$ takes values in the state space $\mathbb{R}^m_{\ge 0} \times \mathbb{R}^n$ and, therefore, for all $j = m+1, \ldots, d$, the process $X^{(j)}$ is real valued. As we announced, existence of a solution of (1) will be proved under a set of conditions on the Lévy triplets. In particular, we will require that, for all $j = m+1, \ldots, d$, the Lévy process $Z^{(j)}$ is deterministic (see Table 1 on page 6). This ensures that the sum in (1) is well defined also for the indices $i = m+1, \ldots, d$.

A natural question is then: is $X$ a strong solution of the time change equation (1)? In this paper we address this problem and the exposition is organized as follows. In Chapter 2 we provide an overview of some basic results for affine processes. We introduce a particular class of affine processes, called of Heston type, which, up to a pathspace transformation, represents the full class of affine processes; see Definition 2.10 and Proposition 2.11. Chapter 3 contains the core of the proof of existence of a strong solution of (1). The final result is stated in Theorem 4.3 and the proof is divided into several steps. Using the results from Chapter 3, we will see how to construct a solution $X$ of (1) which lives on the same probability space where the Lévy processes are defined. In Chapter 4 we show that, starting from a family of Lévy processes $\{Z^{(k)}\}_{k=0,1,\ldots,d}$ specified by some restrictions on their Lévy triplets, the solution of the time–change equation (1) is a time homogeneous Markov process having the affine property. Observe that this new existence proof of affine processes gives, as a straightforward consequence, the càdlàg property of affine processes.
2. Preliminaries
Henceforth $D$ denotes the subset $\mathbb{R}^m_{\ge 0} \times \mathbb{R}^n$ of $\mathbb{R}^d$. The canonical basis of $\mathbb{R}^d$ is denoted by $\{e_i\}_{i=1,\ldots,d}$. Given $\Delta \notin D$, define $D_\Delta = D \cup \{\Delta\}$. The set $\mathcal{B}(D)$ is the space of measurable functions on $D$, while $\mathcal{B}_b(D)$ is the space of measurable bounded functions on $D$.

In order to simplify the notation, we introduce the sets of indices $I$ and $J$ defined as
$$I = \{1, \ldots, m\} \quad \text{and} \quad J = \{m+1, \ldots, d\}.$$
Moreover, given a set $H \subseteq \{1, \ldots, d\}$, the map $\pi_H$ is the projection of $\mathbb{R}^d$ onto the lower dimensional subspace with indices in $H$. In particular
$$\pi_I : \mathbb{R}^m_{\ge 0} \times \mathbb{R}^n \to \mathbb{R}^m_{\ge 0}, \quad x \mapsto \pi_I x := (x_i)_{i \in I},$$
and
$$\pi_J : \mathbb{R}^m_{\ge 0} \times \mathbb{R}^n \to \mathbb{R}^n, \quad x \mapsto \pi_J x := (x_j)_{j \in J}.$$
Due to the geometry of the state space, the function
$$f_u(x) := e^{\langle x, u\rangle}, \qquad x \in D, \qquad (2)$$
is bounded if and only if $u$ belongs to
$$\mathcal{U} := \mathbb{C}^m_{\le 0} \times i\mathbb{R}^n. \qquad (3)$$
The notation $\langle \cdot, \cdot\rangle$ with input variables in $\mathbb{R}^d$ denotes the usual scalar product. The same notation is used also when the scalar product is considered in the space $\mathbb{R}^d + i\mathbb{R}^d$. In this case we mean the extension of $\langle \cdot, \cdot\rangle$ to $\mathbb{R}^d + i\mathbb{R}^d$ without conjugation. Unless differently specified, the notation $\mathbb{E}_x[\cdot]$ indicates that the expectation is taken under the probability measure $\mathbb{P}_x$.

Fix $N \in \mathbb{N}$ and let $s \in \mathbb{R}^N_{\ge 0}$. Whenever we are going to consider $s$ as a time parameter, we emphasize its multidimensionality by writing $\mathbf{s}$. When $\mathbf{s} = (s_1, \ldots, s_N)$ is a multivariate time parameter and $X$ is a stochastic process in $\mathbb{R}^N$, we use the notation
$$X(\mathbf{s}) := (X^{(1)}_{s_1}, \ldots, X^{(N)}_{s_N}) \in \mathbb{R}^N.$$
In line with the literature, we introduce affine processes as a class of time homogeneous Markov processes characterized by two additional properties: the first one is stochastic continuity, the second one a condition which characterizes the Fourier–Laplace transform of the one time marginal distributions. This introduction of affine processes is taken from [DFS03, CT13] and [KST13].
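The multivariate-time notation $X(\mathbf{s})$ — each component read off at its own clock $s_i$ — is used throughout the time-change construction; on discretised sample paths it amounts to the following (the grid and the three synthetic paths below are illustrative assumptions):

```python
import numpy as np

dt = 0.01
grid = np.arange(0.0, 2.0, dt)
# three discretised sample paths standing in for X^(1), X^(2), X^(3)
paths = np.vstack([np.sin(grid), np.cos(grid), grid**2])

def X_at(s):
    """Multivariate-time evaluation X(s) = (X^(1)_{s_1}, ..., X^(N)_{s_N}):
    component i is read off at its own clock s_i (nearest grid point)."""
    idx = np.minimum(np.rint(np.asarray(s) / dt).astype(int), len(grid) - 1)
    return paths[np.arange(len(idx)), idx]

print(X_at([0.0, 1.0, 0.5]))
```

Each coordinate process thus runs on its own time scale, which is exactly the structure of the time-change equations below.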
Definition 2.1.
Let $(\Omega, (X_t)_{t\ge 0}, (\mathcal{F}^\natural_t)_{t\ge 0}, (p_t)_{t\ge 0}, (\mathbb{P}_x)_{x \in D_\Delta})$ be a time homogeneous Markov process. In particular we assume that

- $\Omega$ is a probability space,
- $(X_t)_{t\ge 0}$ is a stochastic process taking values in $D_\Delta$,
- $\mathcal{F}^\natural_t = \sigma(\{X_s,\ s \le t\})$,
- $(p_t)_{t\ge 0}$ is a semigroup of transition functions on $(D_\Delta, \mathcal{B}(D_\Delta))$,
- $(\mathbb{P}_x)_{x \in D_\Delta}$ is a family of probability measures on $(\Omega, \mathcal{F}^\natural)$, with $\mathcal{F}^\natural = \bigvee_{t\ge 0} \mathcal{F}^\natural_t$,

satisfying
$$\mathbb{E}_x\left[f(X_{t+s}) \,\middle|\, \mathcal{F}^\natural_t\right] = \mathbb{E}_{X_t}\left[f(X_s)\right], \quad \mathbb{P}_x\text{-a.s. for all } f \in \mathcal{B}_b(D_\Delta). \qquad (4)$$
The process $X$ is said to be an affine process if it satisfies the following properties:

- for every $t \ge 0$ and $x \in D$, $\lim_{s \to t} p_s(x, \cdot) = p_t(x, \cdot)$ weakly,
- there exist functions $\phi : \mathbb{R}_{\ge 0} \times \mathcal{U} \to \mathbb{C}$ and $\psi : \mathbb{R}_{\ge 0} \times \mathcal{U} \to \mathbb{C}^d$ such that
$$\mathbb{E}_x\left[e^{\langle u, X_t\rangle}\right] = \int_D e^{\langle u, \xi\rangle}\, p_t(x, d\xi) = e^{\phi(t,u) + \langle x, \psi(t,u)\rangle}, \qquad (5)$$
for all $x \in D$ and $(t,u) \in \mathbb{R}_{\ge 0} \times \mathcal{U}$.

Definition 2.2.
An affine process $X$ is called regular if, for every $u \in \mathcal{U}$, the derivatives
$$F(u) := \partial_t \phi(t,u)\big|_{t=0}, \qquad R(u) := \partial_t \psi(t,u)\big|_{t=0}, \qquad (6)$$
exist for all $u \in \mathcal{U}$ and are continuous in
$$\mathcal{U}_m = \left\{ u \in \mathbb{C}^d \;\middle|\; \sup_{x \in D} \mathrm{Re}\,\langle u, x\rangle \le m \right\},$$
for all $m \ge 1$.

Regularity has been proved in [CT13] for the class of affine processes on general state spaces. The proof is based on the fact that affine processes always admit a version which has càdlàg paths. From this path regularity it is possible to conclude differentiability of the Fourier–Laplace transform. We summarize here the main results.
Theorem 2.3 (Theorem 6.4 in [CT13]). Every affine process is regular. On the set $\mathbb{R}_{\ge 0} \times \mathcal{U}$, the functions $\phi$ and $\psi$ satisfy the following system of generalized Riccati equations:
$$\partial_t \phi(t,u) = F(\psi(t,u)), \qquad \phi(0,u) = 0,$$
$$\partial_t \psi(t,u) = R(\psi(t,u)), \qquad \psi(0,u) = u, \qquad (7)$$
with
$$F(u) = \langle b, u\rangle + \tfrac12 \langle u, au\rangle - c + \int_{D\setminus\{0\}} \left( e^{\langle u,\xi\rangle} - 1 - \langle \pi_J u, \pi_J h(\xi)\rangle \right) m(d\xi), \qquad (8)$$
$$R_k(u) = \langle \beta_k, u\rangle + \tfrac12 \langle u, \alpha_k u\rangle - \gamma_k + \int_{D\setminus\{0\}} \left( e^{\langle u,\xi\rangle} - 1 - \left\langle \pi_{J\cup\{k\}} u, \pi_{J\cup\{k\}} h(\xi)\right\rangle \right) M_k(d\xi), \qquad (9)$$
for $k = 1, \ldots, d$, where we take as truncation function $h(x) = x\,\mathbb{1}_{\{|x|\le 1\}}$. The set of parameters
$$(b, \beta, a, \alpha, c, \gamma, m, M) \qquad (10)$$
is specified by

- $b, \beta_i \in \mathbb{R}^d$ for $i = 1, \ldots, d$,
- $a, \alpha_i \in S^d_+$ for $i = 1, \ldots, d$, where $S^d_+$ denotes the cone of positive semidefinite $d \times d$ matrices,
- $c, \gamma_i \in \mathbb{R}_{\ge 0}$ for $i = 1, \ldots, d$,
- $m, M_i$ for $i = 1, \ldots, d$ are Lévy measures.

diffusion: $a_{kl} = 0$ for $k \in I$ or $l \in I$; $\alpha_j = 0$ for all $j \in J$; $(\alpha_i)_{kl} = 0$ if $k \in I\setminus\{i\}$ or $l \in I\setminus\{i\}$,
drift: $b \in D$; $(\beta_i)_k \ge 0$ for $i \in I$ and $k \in I\setminus\{i\}$; $(\beta_j)_k = 0$ for all $j \in J$, $k \in I$,
killing: $\gamma_j = 0$ for all $j \in J$,
jumps: $\mathrm{supp}\, m \subseteq D$ and $\int_{D\setminus\{0\}} \left( (|\pi_I \xi| + |\pi_J \xi|^2) \wedge 1 \right) m(d\xi) < \infty$; $M_j = 0$ for all $j \in J$; $\mathrm{supp}\, M_i \subseteq D$ for all $i \in I$ and $\int_{D\setminus\{0\}} \left( (|\pi_{I\setminus\{i\}} \xi| + |\pi_{J\cup\{i\}} \xi|^2) \wedge 1 \right) M_i(d\xi) < \infty$.

Table 1: Set of conditions for admissible parameters.

This set of parameters is called admissible if the conditions in Table 1 are satisfied, with $I$ and $J$ defined as $I = \{1, \ldots, m\}$ and $J = \{m+1, \ldots, d\}$. The set of admissible parameters fully characterizes an affine process in $D$.

Remark 2.4.
If, additionally, the semigroup of transition functions $(p_t)_{t\ge 0}$ is homogeneous in the space variable, meaning that, for all $x \in D$ and $B \in \mathcal{B}(D)$,
$$p_t(x, B) = p_t(0, B - x),$$
then necessarily $R = 0$ and it holds
$$\mathbb{E}_x\left[e^{\langle u, X_t\rangle}\right] = \int e^{\langle u, \xi\rangle}\, p_t(x, d\xi) = e^{tF(u) + \langle x, u\rangle},$$
for all $(t,x) \in \mathbb{R}_{\ge 0} \times D$ and $u \in \mathcal{U}$. Hence $X$ is a (possibly killed) Lévy process with Lévy exponent $F$ starting from $x$.

2.3. Towards the multivariate Lamperti transform

When $D = \mathbb{R}_{\ge 0}$, it has been proved that there exists a one-to-one correspondence between affine processes taking values in $D$ and Lévy processes, see [CPGUB13]. More precisely, let $Z^{(1)} = (Z^{(1)}_t)_{t\ge 0}$ be a Lévy process starting from 0 taking values in $\mathbb{R}$ whose Lévy measure has support in $\mathbb{R}_{\ge 0}$. This implies that there exists a function $R : i\mathbb{R} \to \mathbb{C}$ such that
$$\mathbb{E}\left[e^{u Z^{(1)}_s}\right] = e^{s R(u)}, \quad \text{for all } (s,u) \in \mathbb{R}_{\ge 0} \times i\mathbb{R}.$$
Due to the restrictions on the jump measure, the function $R$ takes the form
$$R(u) = \beta u + \tfrac12 \alpha u^2 - \gamma + \int_{\mathbb{R}_{\ge 0}} \left( e^{u\xi} - 1 - u\xi\, \mathbb{1}_{\{|\xi|\le 1\}} \right) M(d\xi),$$
where $u \in i\mathbb{R}$, $\alpha, \beta \in \mathbb{R}$ and $M$ is a measure on $\mathbb{R}_{\ge 0}$ which satisfies
$$\int \left(1 \wedge |\xi|^2\right) M(d\xi) < \infty.$$
Moreover, let $Z^{(0)}$ be an independent subordinator with
$$\mathbb{E}\left[e^{u Z^{(0)}_s}\right] = e^{s F(u)}, \quad \text{for all } (s,u) \in \mathbb{R}_{\ge 0} \times i\mathbb{R}.$$
Since $Z^{(0)}$ is a subordinator, there exist a constant $b \in \mathbb{R}_{\ge 0}$ and a measure $m$ on $\mathbb{R}_{\ge 0}$ satisfying
$$\int \left(1 \wedge |\xi|\right) m(d\xi) < \infty,$$
such that, for all $u \in i\mathbb{R}$,
$$F(u) = b u + \int_{\mathbb{R}_{\ge 0}} \left( e^{u\xi} - 1 \right) m(d\xi).$$
Theorem 2 in [CPGUB13] shows that there exists a solution of the following time–change equation
$$X_t = x + Z^{(0)}_t + Z^{(1)}\left(\int_0^t X_s\,ds\right)$$
for all $(t,x) \in \mathbb{R}_{\ge 0} \times \mathbb{R}_{\ge 0}$. Moreover, it is proved that the solution is a time homogeneous Markov process taking values in $\mathbb{R}_{\ge 0}$, starting from $x$, such that the logarithm of the characteristic function of the transition semigroup is given by an affine function of the initial state $x$.
Hence, by definition, it is an affine process taking values in $\mathbb{R}_{\ge 0}$. Here we are interested in the multivariate generalization of this result, whose weak version is already known in the literature:

Theorem 2.5 (Theorem 3.4 in [Kal06]). Let $X$ be an affine process with admissible parameters satisfying
$$\int_{\{|\xi| \ge 1\}} |\xi_k|\, M_i(d\xi) < \infty \quad \text{and} \quad c = 0,\ \gamma_i = 0, \quad \text{for } 1 \le i, k \le m.$$
On a possibly enlarged probability space, there exist $d+1$ independent Lévy processes $Z^{(k)}$ such that
$$X_t \stackrel{d}{=} x + Z^{(0)}_t + \sum_{k=1}^d Z^{(k)}\left(\int_0^t X^{(k)}_s\,ds\right), \qquad t \ge 0. \qquad (11)$$
This result has to be understood in the distributional sense because, without any additional assumptions, it is not clear how to conclude that the process $X$ is adapted with respect to the (properly time–changed) filtration generated by the Lévy processes. In this paper we provide a strong solution of (11) defined on the probability space $(\Omega, \mathcal{G}, \mathbb{P})$ which carries $Z^{(0)}, \ldots, Z^{(d)}$.

In this section, we are going to specify a particular subclass of affine processes, which we will call affine processes of Heston type. They are characterized by more restrictive admissible parameters but, at the same time, they constitute a canonical family, in the sense that every affine process can be obtained as a pathwise transformation of a canonical one. Instead of stating the conditions directly, we work through an example, where we point out the main motivations and reasonings for the forthcoming Assumptions 2.7 – 2.9.
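Since the Heston model is the guiding example for this subclass, here is a minimal numerical sketch of how its marginal characteristic function arises from the generalized Riccati equations (7). For the state $(V, X)$ (variance, log-price) one has $F(u) = \kappa\theta u_1$, $R_1(u) = \tfrac12(u_2^2 - u_2) + \rho\sigma u_1 u_2 - \kappa u_1 + \tfrac12\sigma^2 u_1^2$ and $R_2 = 0$; the parameter values below are illustrative assumptions.

```python
import cmath

# Heston parameters (kappa, theta, sigma, rho); illustrative values only
kappa, theta, sigma, rho = 2.0, 0.04, 0.3, -0.7

def F(u1, u2):
    # functional characteristic of the state-independent part: F(u) = kappa*theta*u1
    return kappa * theta * u1

def R1(u1, u2):
    # R_1 acts on the variance direction; R_2 = 0, so psi_2 stays constant
    return 0.5 * (u2 * u2 - u2) + rho * sigma * u1 * u2 - kappa * u1 + 0.5 * sigma**2 * u1 * u1

def phi_psi1(t, u1, u2, n=1000):
    """RK4 integration of d/dt phi = F(psi), d/dt psi_1 = R_1(psi), from (7)."""
    h = t / n
    phi, p1 = 0j, complex(u1)
    for _ in range(n):
        k1 = R1(p1, u2)
        k2 = R1(p1 + h / 2 * k1, u2)
        k3 = R1(p1 + h / 2 * k2, u2)
        k4 = R1(p1 + h * k3, u2)
        # phi integrates F along the psi-trajectory with matching RK4 stages
        phi += h / 6 * (F(p1, u2) + 2 * F(p1 + h / 2 * k1, u2)
                        + 2 * F(p1 + h / 2 * k2, u2) + F(p1 + h * k3, u2))
        p1 += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return phi, p1

# characteristic function of the log-price at t = 1: u = (0, i), v0 = 0.04, x0 = 0
phi, psi1 = phi_psi1(1.0, 0.0, 1j)
cf = cmath.exp(phi + 0.04 * psi1)   # exp(phi + <x, psi>) as in (5)
print(abs(cf))
```

The modulus of the resulting value is at most 1, as it must be for a characteristic function, which gives a quick sanity check on the Riccati system.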
Example 2.6.
Let us start by writing (11) componentwise. Denote by $Z^{(k,j)}$ the $j$-th coordinate of the $k$-th Lévy process. Then (11) reads
$$X^{(1)}_t = x_1 + Z^{(0,1)}_t + \sum_{k=1}^d Z^{(k,1)}\left(\int_0^t X^{(k)}_s\,ds\right), \qquad t \ge 0,$$
$$\vdots$$
$$X^{(d)}_t = x_d + Z^{(0,d)}_t + \sum_{k=1}^d Z^{(k,d)}\left(\int_0^t X^{(k)}_s\,ds\right), \qquad t \ge 0. \qquad (12)$$
Due to the drift conditions summarized in Table 1, we conclude that, for $k = m+1, \ldots, d$, $Z^{(k)}$ is a Lévy process with triplet $(\beta_k, 0, 0)$, with $\pi_I \beta_k$ identically zero. Therefore we can write
$$X^{(1)}_t = x_1 + Z^{(0,1)}_t + \sum_{k=1}^m Z^{(k,1)}\left(\int_0^t X^{(k)}_s\,ds\right), \qquad t \ge 0,$$
$$\vdots$$
$$X^{(d)}_t = x_d + Z^{(0,d)}_t + \sum_{k=1}^m Z^{(k,d)}\left(\int_0^t X^{(k)}_s\,ds\right) + \sum_{k=m+1}^d (\beta_k)_d \left(\int_0^t X^{(k)}_s\,ds\right), \qquad t \ge 0.$$
We first transform the process into another affine process with functional characteristic $F = 0$. We will see that, up to an enlargement of the state space, there is no loss of generality in assuming that the parameters in the Lévy–Khintchine form of $F$ are all identically zero. Just for simplicity assume that $n = m = 1$. Augment the process $X = (X^{(1)}, X^{(2)})$ by considering
$$Y = (Y^{(0)}, Y^{(1)}, Y^{(2)}) := (1, X^{(1)}, X^{(2)}).$$
Moreover define, for $k = 0, 1, 2$,
$$Z^{(k)} = (Z^{(k,0)}, Z^{(k,1)}, Z^{(k,2)}) := (0, Z^{(k,1)}, Z^{(k,2)}).$$
Then we can write
$$\begin{pmatrix} Y^{(0)}_t \\ Y^{(1)}_t \\ Y^{(2)}_t \end{pmatrix} = \begin{pmatrix} 1 \\ x_1 \\ x_2 \end{pmatrix} + Z^{(0)}\left(\int_0^t Y^{(0)}_s\,ds\right) + Z^{(1)}\left(\int_0^t Y^{(1)}_s\,ds\right) + Z^{(2)}\left(\int_0^t Y^{(2)}_s\,ds\right), \qquad t \ge 0.$$
Observe that the process $Y$ takes values in $\mathbb{R}^2_{\ge 0} \times \mathbb{R}$. Hence, up to a change of the state space, we are led to consider solutions of
$$X^{(1)}_t = x_1 + \sum_{k=1}^m Z^{(k,1)}\left(\int_0^t X^{(k)}_s\,ds\right), \qquad t \ge 0,$$
$$\vdots$$
$$X^{(d)}_t = x_d + \sum_{k=1}^m Z^{(k,d)}\left(\int_0^t X^{(k)}_s\,ds\right) + \sum_{k=m+1}^d (\beta_k)_d \left(\int_0^t X^{(k)}_s\,ds\right), \qquad t \ge 0.$$
In order to simplify the system further, we introduce a second pathspace transformation, which allows us to work only with affine processes whose admissible parameters satisfy the additional property $(\beta_j)_k = 0$ for all $j, k \in J$. This means that the Lévy processes $Z^{(k)}$ with $k = m+1, \ldots, d$ are not only deterministic but actually identically equal to zero. This transformation has been introduced in [KST11] and it is based on the method of moving frames. The general case will be treated in the proof of Proposition 2.11; here we present the case $n = m = 1$. Let $X = (X^{(1)}, X^{(2)})$ be an affine process in $\mathbb{R}_{\ge 0} \times \mathbb{R}$. Consider the process $Y = (Y^{(1)}, Y^{(2)})$ with $Y_0 = x$ and
$$Y^{(1)}_t := X^{(1)}_t, \qquad t \ge 0,$$
$$Y^{(2)}_t := X^{(2)}_t - (\beta_2)_2 \int_0^t X^{(2)}_s\,ds, \qquad t \ge 0.$$
Theorem 5.1 in [KST11] guarantees that $Y$ is again an affine process in $\mathbb{R}_{\ge 0} \times \mathbb{R}$ with admissible parameter $\beta^Y_2 = (0, 0)$. Moreover this transformation can be inverted. Hence, when $n = m = 1$, up to an invertible pathspace transformation, we can restrict ourselves to the solution of a system of the type
$$X^{(1)}_t = x_1 + Z^{(1,1)}\left(\int_0^t X^{(1)}_s\,ds\right), \qquad t \ge 0,$$
$$X^{(2)}_t = x_2 + Z^{(1,2)}\left(\int_0^t X^{(1)}_s\,ds\right), \qquad t \ge 0,$$
or, more generally,
$$X^{(1)}_t = x_1 + \sum_{k=1}^m Z^{(k,1)}\left(\int_0^t X^{(k)}_s\,ds\right), \qquad t \ge 0,$$
$$\vdots$$
$$X^{(d)}_t = x_d + \sum_{k=1}^m Z^{(k,d)}\left(\int_0^t X^{(k)}_s\,ds\right), \qquad t \ge 0.$$
Then it is evident that only the equations determining $\pi_I X$ are genuine time–change equations. As soon as we provide a strong solution for the system of time–change equations describing the positive components, we automatically find a solution for the components taking values in $\mathbb{R}^n$.

Let $X$ be an affine process taking values in $D$ and denote by $(b, \beta, a, \alpha, c, \gamma, m, M)$ its set of admissible parameters. The next condition implies that the function $\phi$ in the definition of the affine property is identically zero.

Assumptions 2.7.
The condition $A_m$ is satisfied if $(b, a, c, m) = (0, 0, 0, 0)$.

The next assumption implies that the process is homogeneous in the last $n$ variables.

Assumptions 2.8.
The condition $A_H$ is satisfied if, for all $i, j \in J$, it holds $(\beta_i)_j = 0$.

Finally, to ensure that a solution of the system exists for all $t \ge 0$, we introduce a last set of conditions.
Assumptions 2.9.
The condition $\mathring{A}$ is satisfied if $c = 0$ and, for all $i \in I$, it holds $\gamma_i = 0$ and
$$\int \left( |\pi_I \xi| \wedge |\pi_I \xi|^2 \right) M_i(d\xi) < \infty.$$
Definition 2.10.
We call an affine process with admissible parameters $(b, \beta, a, \alpha, c, \gamma, m, M)$ satisfying $A_m$, $A_H$ and $\mathring{A}$ an affine process of Heston type.

Among the previous assumptions, only Assumption 2.9 is a real restriction on the structure of the admissible parameters. As observed also in [Kal06] (compare also with Lemma 9.2 in [DFS03]), Assumption 2.9 guarantees that the solution process does not explode in finite time and hence the time–change process is always well defined. Up to an enlargement of the state space and a pathwise transformation, there is no loss of generality in assuming that both Assumption 2.7 and Assumption 2.8 hold. In the following proposition we present all steps which allow us to reduce a general affine process to an affine process of Heston type.
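That the drift-removing pathspace transformation loses no information can be checked on discretised paths: with $Y_t = X_t - \beta \int_0^t X_s\,ds$, the path $X$ is recovered from $Y$ by a forward recursion. The coefficient $\beta$ and the input path below are arbitrary illustrative choices, not values from the text.

```python
import numpy as np

beta, dt = 0.8, 1e-3
t = np.arange(0.0, 1.0, dt)
X = np.cos(3.0 * t) + 2.0            # synthetic stand-in for a real-valued component

# forward transformation Y_t = X_t - beta * int_0^t X_s ds (left Riemann sums)
I = np.concatenate([[0.0], np.cumsum(X[:-1]) * dt])
Y = X - beta * I

# inverse: solve X_t = Y_t + beta * int_0^t X_s ds forward in time
Xr = np.empty_like(Y)
acc = 0.0                            # running Riemann sum of the reconstructed path
for k, y in enumerate(Y):
    Xr[k] = y + beta * acc
    acc += Xr[k] * dt

print(np.max(np.abs(Xr - X)))
```

On the discrete grid the reconstruction is exact up to floating-point roundoff, mirroring the invertibility used in the reduction to Heston type.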
Proposition 2.11.
Let $X$ be an affine process satisfying Assumption 2.9. On a possibly enlarged probability space, there exists a process $X^m$ such that $X^m$ is an affine process taking values in $\mathbb{R}^{m+1}_{\ge 0} \times \mathbb{R}^n$ satisfying the following properties:

- there exists a function $\psi^m : \mathbb{C}^{m+1}_{\le 0} \times i\mathbb{R}^n \to \mathbb{C}^{d+1}$ such that for all $(t, x^m) \in \mathbb{R}_{\ge 0} \times (\mathbb{R}^{m+1}_{\ge 0} \times \mathbb{R}^n)$ it holds
$$\mathbb{E}_{x^m}\left[e^{\langle u, X^m_t\rangle}\right] = e^{\langle x^m, \psi^m(t,u)\rangle} \quad \text{for } u \in \mathbb{C}^{m+1}_{\le 0} \times i\mathbb{R}^n,$$
- for all $u = (u_1, u_2) \in \mathbb{C}^{m+1}_{\le 0} \times i\mathbb{R}^n$ it holds $\pi_{\{m+2,\ldots,d+1\}} \psi^m(t, u_1, u_2) = u_2$, for all $t \ge 0$,
- the set of admissible parameters for $X^m$ satisfies the assumptions $A_m$, $A_H$ and $\mathring{A}$ and, moreover, for all $k = 1, \ldots, m+1$, the matrix $\alpha_k$ has all entries equal to zero except for the diagonal entry $(\alpha_k)_{kk} \ge 0$ and the lower-right $n \times n$ block $\alpha^J_k \in S^n_+$,
- for all $(t,x) \in \mathbb{R}_{\ge 0} \times D$ and $u \in \mathcal{U}$, defining $x^m = (1, x)$ and $v = (0, u)$, it holds
$$\mathbb{E}_x\left[e^{\langle u, X_t\rangle}\right] = \mathbb{E}_{x^m}\left[e^{\langle v, X^m_t\rangle}\right].$$

Proof.
Given two indices $i, j = 1, \ldots, d+1$ with $i < j$, denote by
$$[i:j] := \{i, i+1, \ldots, j-1, j\}.$$
We start using Proposition 1.23 in [Gab14]. Fix $x_0 \in \mathbb{R}_{\ge 0}$ and define
$$x^m := (x_0, x) \in \mathbb{R}^{m+1}_{\ge 0} \times \mathbb{R}^n, \qquad (13)$$
$$\mathcal{U}^m := \mathbb{C}^{m+1}_{\le 0} \times i\mathbb{R}^n, \qquad (14)$$
$$\psi^m(t, u_0, u_1, \ldots, u_d) := \begin{pmatrix} \phi(t, u_1, \ldots, u_d) + u_0 \\ \psi(t, u_1, \ldots, u_d) \end{pmatrix}. \qquad (15)$$
Due to the regularity in $t$ of $\phi(t,u)$ and $\psi(t,u)$, we conclude that $\psi^m(t, \cdot)$ is a regular semiflow. Hence, from Proposition 7.4 in [DFS03], we conclude that there exists an affine process $X^m$ with state space $\mathbb{R}^{m+1}_{\ge 0} \times \mathbb{R}^n$ satisfying
$$\mathbb{E}_{x^m}\left[e^{\langle u, X^m_t\rangle}\right] = e^{\langle x^m, \psi^m(t,u)\rangle}, \qquad u \in \mathcal{U}^m.$$
Now we can apply the method of moving frames (see Theorem 5.1 in [KST11]) to the affine process $X^m$. Let $(0, \beta, 0, \alpha, 0, 0, 0, M)$ be its set of admissible parameters. Denote by $B$ the matrix obtained by placing each $\beta_i$ as a column,
$$B = \begin{pmatrix} B_I & B_{IJ} \\ 0 & B_J \end{pmatrix}. \qquad (16)$$
Define the matrix $T$ having $B_J^\top$ as lower-right block and zeros elsewhere, and the map
$$\mathcal{T} : X^m \mapsto X^m - T^\top \int_0^\cdot X^m_s\,ds.$$
The process $\mathcal{T} X^m$ is an affine process with Fourier–Laplace transform given by
$$\mathbb{E}_{x^m}\left[e^{\langle u, \mathcal{T} X^m_t\rangle}\right] = e^{\langle \pi_{[1:m+1]} x^m,\, \pi_{[1:m+1]} \psi^m(t,u)\rangle + \langle \pi_{[m+2:d+1]} x^m,\, \pi_{[m+2:d+1]} u\rangle}.$$
In particular, the assumptions $A_m$ and $A_H$ are satisfied. Now we move on to the structure of the matrices $\alpha_k$, $k = 1, \ldots, m+1$. Due to the restrictions on the admissible parameters, $\alpha_1$ is already in the specified form with $(\alpha_1)_{11} = 0$. The matrices $\alpha_k$, $k = 1, \ldots, m+1$, can be transformed simultaneously into block diagonal form by means of a linear map, see [FM09]. Finally, if $v = (0, u)$ with $u \in \mathcal{U}$,
$$\mathbb{E}_{(1,x)}\left[e^{\langle v, X^m_t\rangle}\right] = e^{\phi(t,u) + \langle x, \psi(t,u)\rangle} = \mathbb{E}_x\left[e^{\langle u, X_t\rangle}\right],$$
with
$$\psi^m(t, (0, u)) := \begin{pmatrix} \phi(t,u) \\ \psi(t,u) \end{pmatrix}. \qquad (17)$$
3. Existence of the solution of the time–change equation
Let $Z^{(1)}, \ldots, Z^{(d)}$ be $d$ independent càdlàg $\mathbb{R}^d$-valued Lévy processes, each of them with Lévy triplet $(\beta_k, \alpha_k, M_k)$, $k = 1, \ldots, d$, defined on the same probability space $(\Omega, \mathcal{G}, \mathbb{P})$. Henceforth, we assume that the following restrictions on the Lévy triplets hold:

(H) the family $(0, \beta, 0, \alpha, 0, 0, 0, M)$ consisting of the collection of the triplets $(\beta_k, \alpha_k, M_k)$, $k = 1, \ldots, d$, satisfies the assumptions $A_H$ and $\mathring{A}$.

Now we consider the process $Z = (Z^{(1)}, \ldots, Z^{(d)})$ on the product space
$$(\Omega, \mathcal{G}, \mathbb{P}) := \left( \prod_{k=1}^d \Omega^{(k)},\ \otimes_{k=1}^d \mathcal{G}^{(k)},\ \otimes_{k=1}^d \mathbb{P}^{(k)} \right).$$
We fix $x \in D$ and consider the functions
$$f^{(k)}_i(y) := \langle x + N y, e_k\rangle, \qquad k = 1, \ldots, d,\ i = 1, \ldots, d,\ y \in \mathbb{R}^{d^2}, \qquad (18)$$
where $N \in \mathbb{R}^{d \times d^2}$ is the matrix obtained by horizontally concatenating $d$ times the identity matrix of dimension $d$. In the next section it will be essential to construct the solution of a system of time–change equations of the type
$$Y^{(k)}_i(t) := Z^{(k)}_i\left(\int_0^t f^{(k)}_i(Y_s)\,ds\right), \qquad k, i = 1, \ldots, d,\ t \ge 0. \qquad (19)$$
The aim of this section is to prove the following result.

Theorem 3.1.
Let $Z^{(1)}, \ldots, Z^{(d)}$ be $d$ independent $\mathbb{R}^d$-valued Lévy processes with càdlàg paths defined on the same probability space $(\Omega, \mathcal{G}, \mathbb{P})$. For $k = 1, \ldots, d$, denote by $(\beta_k, \alpha_k, M_k)$ the respective Lévy triplets. Under the assumption that the triplets satisfy (H), for all $x \in D$ there exists a solution of the following time–change problem
$$Y^{(k)}_i(t) := Z^{(k)}_i\left(\int_0^t f^{(k)}_i(Y_s)\,ds\right), \qquad k, i = 1, \ldots, d,\ t \ge 0, \qquad (19)$$
with
$$f^{(k)}_i(y) := \langle x + N y, e_k\rangle, \qquad k, i = 1, \ldots, d,\ y \in \mathbb{R}^{d^2}, \qquad (20)$$
and $N \in \mathbb{R}^{d \times d^2}$ the matrix obtained by horizontally concatenating $d$ times the identity matrix of dimension $d$.

3.2. The proof

The proof of Theorem 3.1 is done in several steps. We first translate the problem of existence and uniqueness of a solution for (19) into the problem of existence and uniqueness of a solution of a system of ODEs. Introduce
$$\tau^{(k)}_i(t) := \int_0^t f^{(k)}_i(Y_s)\,ds, \qquad k, i = 1, \ldots, d,\ t \ge 0, \qquad (21)$$
and define
$$\tau(t) := \left( \tau^{(1)}_1(t), \ldots, \tau^{(1)}_d(t), \ldots, \tau^{(k)}_i(t), \ldots, \tau^{(d)}_1(t), \ldots, \tau^{(d)}_d(t) \right). \qquad (22)$$
Existence of a solution of (19) is equivalent to the existence of a solution of the following system of ODEs
$$\dot\tau^{(k)}_i(t) = f^{(k)}_i(Z(\tau(t))), \qquad k, i = 1, \ldots, d,\ t \ge 0, \qquad \tau^{(k)}_i(0) = 0, \qquad (23)$$
where
$$Z(\tau(t)) := \left( Z^{(1)}_1(\tau^{(1)}_1(t)), \ldots, Z^{(k)}_i(\tau^{(k)}_i(t)), \ldots, Z^{(d)}_d(\tau^{(d)}_d(t)) \right).$$
We start by showing that existence of a solution of (23) can be proved by focusing on the components with $k = 1, \ldots, m$ and $i = 1, \ldots, m$.

Lemma 3.2. If
$$\dot\tau^{(k)}_i(t) = f^{(k)}_i(Z(\tau(t))), \quad t \ge 0, \qquad \tau^{(k)}_i(0) = 0, \qquad (24)$$
admits a solution for all $k = 1, \ldots, m$ and $i = 1, \ldots, m$, then it admits also a solution for all $k = 1, \ldots, d$ and $i = 1, \ldots, d$.

Proof. For all $k = 1, \ldots, d$ it holds
$$f^{(k)}_1(y) = x_k + \sum_{h=1}^d y^{(h)}_k = f^{(k)}_2(y) = \cdots$$
$= f^{(k)}_d(y)$, for all $y \in \mathbb{R}^{d^2}$, and therefore
$$\tau^{(k)}_1(t) = \cdots = \tau^{(k)}_d(t), \qquad t \ge 0.$$
Denote $\tau^{(k)}(t) := \tau^{(k)}_1(t)$, for all $t \ge 0$ and $k = 1, \ldots, d$. By definition, for each $k = 1, \ldots, d$,
$$\tau^{(k)}(t) = \int_0^t \left( x_k + \sum_{i=1}^d Y^{(i)}_k(s) \right) ds = \int_0^t \left( x_k + \sum_{i=1}^d Z^{(i)}_k(\tau^{(i)}_k(s)) \right) ds = \int_0^t \left( x_k + \sum_{i=1}^d Z^{(i)}_k(\tau^{(i)}(s)) \right) ds.$$
By Assumption 2.8, each $Z^{(j)}$ for $j \in J$ is a Lévy process with Lévy triplet $(0, 0, 0)$. Hence, if
$$\tau^{(k)}(t) = \int_0^t \left( x_k + \sum_{i=1}^m Z^{(i)}_k(\tau^{(i)}(s)) \right) ds$$
holds for all $k = 1, \ldots, m$, then, for all $t \ge 0$ and $j = m+1, \ldots, d$, we can compute
$$\tau^{(j)}(t) = \int_0^t \left( x_j + \sum_{i=1}^m Z^{(i)}_j(\tau^{(i)}(s)) \right) ds,$$
since the right hand side of the last equation does not depend anymore on the left hand side.

Henceforth, $Z^{(1)}, \ldots, Z^{(m)}$ are $m$ independent Lévy processes on $\mathbb{R}^m$, each of them with Lévy triplet $(\beta_i, \alpha_i, M_i)$, $i = 1, \ldots, m$, satisfying
$$(\alpha_i)_{kl} = 0 \ \text{for all } k, l \in I \text{ such that } (k,l) \ne (i,i), \qquad (\beta_i)_k \ge 0 \ \text{for } i \in I \text{ and } k \in I\setminus\{i\}.$$
We are thus led to consider the system
$$\dot\tau^{(k)}(t) = x_k + Z_k(\tau(t)), \qquad k = 1, \ldots, m,\ t \ge 0, \qquad \tau^{(k)}(0) = 0, \qquad (25)$$
where
$$Z : \mathbb{R}^m_{\ge 0} \to \mathbb{R}^m, \qquad s \mapsto \sum_{i=1}^m Z^{(i)}(s_i). \qquad (26)$$
In vector notation the previous initial value problem reads
$$\dot\tau(t) = x + Z(\tau(t)), \quad t \ge 0, \qquad \tau(0) = 0. \qquad (27)$$

Remark 3.3.
By definition it holds $\dot\tau(0) = x \in \mathbb{R}^m_{\ge 0}$. Due to the restrictions on the parameters, each $Z^{(i)}$, $i = 1, \ldots, m$, is a process with no negative jumps. This implies that, whenever a component $\tau_{i^*}$ reaches zero for some $i^*$, the corresponding component Lévy process $Z^{(i^*)}$ is stopped. In particular, each trajectory of $x + Z$ stays positive until it is absorbed at zero.

3.2.1. Approximation of the vector field

In order to construct a solution for (27), we seek a decomposition of the type $Z = {\sim}Z + {\nsim}Z$ such that the system
$$\dot\tau(t) = (x + {\sim}Z)(\tau(t)), \quad t \ge 0, \qquad \tau(0) = 0, \qquad (28)$$
reduces to a decoupled system of $m$ one dimensional problems, and ${\nsim}Z := Z - {\sim}Z$. The Lévy–Itô decomposition, together with the canonical form of the admissible parameters, gives
$$Z^{(i)}_t = \beta_i t + \sigma_i B^{(i)}_t + \int_0^t \int \xi\, \mathbb{1}_{\{|\xi| > 1\}}\, J^{(i)}(d\xi, ds) + \int_0^t \int \xi\, \mathbb{1}_{\{|\xi| \le 1\}}\, \left( J^{(i)}(d\xi, ds) - M_i(d\xi)\,ds \right),$$
where $\sigma_i = \sqrt{(\alpha_i)_{ii}}$, $B^{(i)}$ is a process in $\mathbb{R}^m$ which evolves only along the $i$-th coordinate as a Brownian motion, and $J^{(i)}$ is the jump measure of the process $Z^{(i)}$. Now, from the assumptions on the set of admissible parameters, $\pi_{I\setminus\{i\}} \beta_i \in \mathbb{R}^{m-1}_{\ge 0}$ and $(\beta_i)_i \in \mathbb{R}$. Decompose $Z^{(i)} =: {\sim}Z^{(i)} + {\nsim}Z^{(i)}$, where ${\sim}Z^{(i)}$ and ${\nsim}Z^{(i)}$ are two stochastic processes on $\mathbb{R}^m$ defined by
$${\sim}Z^{(i)}_k(t) := 0, \qquad k \ne i,$$
$${\sim}Z^{(i)}_i(t) := \sigma_i B^{(i)}_i(t) + (\beta_i)_i t + \int_0^t \int \xi_i\, \mathbb{1}_{\{|\xi| > 1\}}\, J^{(i)}(d\xi, ds) + \int_0^t \int \xi_i\, \mathbb{1}_{\{|\xi| \le 1\}}\, \left( J^{(i)}(d\xi, ds) - M_i(d\xi)\,ds \right),$$
$${\nsim}Z^{(i)}(t) := {\nsim}\beta_i t + \int_0^t \int (\xi - \xi_i e_i)\, \mathbb{1}_{\{|\xi| > 1\}}\, J^{(i)}(d\xi, ds) + \int_0^t \int (\xi - \xi_i e_i)\, \mathbb{1}_{\{|\xi| \le 1\}}\, \left( J^{(i)}(d\xi, ds) - M_i(d\xi)\,ds \right),$$
where ${\nsim}\beta_i = \beta_i - e_i (\beta_i)_i$. The following lemma, which is an obvious consequence of the restrictions on the admissible parameters, collects some path properties of the processes ${\sim}Z^{(i)}$ and ${\nsim}Z^{(i)}$. We would like to remark that both the càdlàg property and this special structure of the paths are essential ingredients of our proof.
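The structure of the ODE system (27) with frozen sample paths can be illustrated with a deterministic toy case: taking "paths" $Z^{(i)}(s_i) = B_{\cdot i}\, s_i$ (so that $Z(s) = Bs$) turns (27) into a linear ODE with an explicit solution, against which a forward-Euler solve can be checked. The matrix $B$ and the vector $x$ below are illustrative; the nonnegative off-diagonal entries mimic the admissible drift condition $(\beta_i)_k \ge 0$ for $k \ne i$.

```python
import numpy as np

m = 2
x = np.array([1.0, 0.5])
B = np.array([[-1.0, 0.3],
              [0.2, -0.5]])   # off-diagonal entries >= 0, as for admissible drifts

def Z(s):
    # frozen deterministic "paths" Z^(i)(s_i) = B[:, i] * s_i, so Z(s) = B @ s
    return B @ s

def euler(T=1.0, dt=1e-4):
    """Forward Euler for tau' = x + Z(tau), tau(0) = 0, as in (27)."""
    tau = np.zeros(m)
    for _ in range(int(round(T / dt))):
        tau = tau + dt * (x + Z(tau))
    return tau

# closed-form solution tau(T) = B^{-1} (exp(BT) - I) x, via diagonalisation
T = 1.0
w, V = np.linalg.eig(B)
expBT = (V * np.exp(w * T)) @ np.linalg.inv(V)
exact = np.linalg.solve(B, (expBT - np.eye(m)) @ x)
print(np.max(np.abs(euler() - exact)))
```

For the actual random field, the paths are càdlàg rather than linear, which is precisely why the decomposition and the pasting algorithm below are needed.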
Lemma 3.4.
For all $i = 1, \ldots, m$ it holds:

- ${\sim}Z^{(i)}$ is a Lévy process with no negative jumps,
- ${\nsim}Z^{(i)}$ is a process with increasing paths.

Introduce, for all $s \in \mathbb{R}^m_{\ge 0}$,
$${\sim}Z(s) := \sum_{i=1}^m {\sim}Z^{(i)}(s_i), \qquad {\nsim}Z(s) := \sum_{i=1}^m {\nsim}Z^{(i)}(s_i).$$
We will consider separately the initial value problems with vector fields $x + {\sim}Z$ and ${\nsim}Z$. The next result shows that it is possible to find a unique solution for the initial value problem
$$\dot\tau((t_0, \tau_0, x); t) = (x + {\sim}Z)(\tau((t_0, \tau_0, x); t)), \qquad \tau((t_0, \tau_0, x); t_0) = \tau_0.$$
Later, we will show how to construct a solution of the general problem.

Proposition 3.5.
There exists a unique solution of
$$\dot\tau((t_0, \tau_0, x); t) = (x + {\sim}Z)(\tau((t_0, \tau_0, x); t)), \qquad \tau((t_0, \tau_0, x); t_0) = \tau_0, \qquad (29)$$
with $\tau_0 \in \mathbb{R}^m_{\ge 0}$ and $t_0 \ge 0$.

Proof. Observe that (29) is a decoupled system of $m$ equations of the type
$$\dot\tau_i((t_0, \tau_0, x); t) = (x_i + {\sim}Z^{(i)}_i)(\tau_i((t_0, \tau_0, x); t)), \qquad i = 1, \ldots, m, \qquad \tau_i((t_0, \tau_0, x); t_0) = \pi_{\{i\}} \tau_0, \qquad (30)$$
where each ${\sim}Z^{(i)}_i$ is a Lévy process with no negative jumps. The existence of a unique solution of (30) follows from Section 6.1 in [EK86].

For the proof of the general result, we will need to approximate ${\nsim}Z$ with piecewise constant functions. Fix $M \in \mathbb{N}$ and consider the partition
$$\mathcal{T}_M := \left( \tfrac{k}{2^M},\ k \ge 0 \right).$$
Define the following approximations on the partition $\mathcal{T}_M$:
$${\uparrow}{\nsim}Z^{(i,M)}_t := \sum_{k=0}^\infty {\nsim}Z^{(i)}_{k/2^M}\, \mathbb{1}_{\left[\frac{k}{2^M}, \frac{k+1}{2^M}\right)}(t), \qquad {\downarrow}{\nsim}Z^{(i,M)}_t := \sum_{k=0}^\infty {\nsim}Z^{(i)}_{(k+1)/2^M}\, \mathbb{1}_{\left[\frac{k}{2^M}, \frac{k+1}{2^M}\right)}(t).$$
Introduce, for $s \in \mathbb{R}^m_{\ge 0}$, the processes ${\uparrow}{\nsim}Z^{(M)}(s)$ and ${\downarrow}{\nsim}Z^{(M)}(s)$ obtained by taking the sums of ${\uparrow}{\nsim}Z^{(i,M)}_{s_i}$ and ${\downarrow}{\nsim}Z^{(i,M)}_{s_i}$, respectively.

Notation 3.6.
Let
$$\Sigma_M := \bigcup_{i=1}^m \left\{ s \ge 0 \;\middle|\; \Delta Z^{(i)}_s > 0 \right\}$$
and augment the partition $\mathcal{T}_M$ with $\Sigma_M$. Denote the family obtained in this way by $\mathcal{T}^\Sigma_M$. We will first construct a solution for the equation (27) when ${\nsim}Z$ is replaced by ${\uparrow}{\nsim}Z^{(M)}$. Hereafter, given $x, y \in \mathbb{R}^m$, we write $x \le y$ if $x_i \le y_i$ for all $i = 1, \ldots, m$.

3.2.2. The algorithm

Let ${\sim}Z$ and ${\uparrow}{\nsim}Z^{(M)}$ be defined as above.

Input:
Start by defining the random variables
$$\overleftarrow{\sigma} := (0, \ldots, 0), \qquad (31)$$
$$\overrightarrow{\sigma}(\omega) := \left( \sigma^{(1,M)}_1(\omega), \ldots, \sigma^{(m,M)}_1(\omega) \right), \qquad (32)$$
where each $\sigma^{(i,M)}_1(\omega)$ is the first jump time of the path $t \mapsto {\uparrow}{\nsim}Z^{(i,M)}_t(\omega)$.

Step 1:
Let $\tau((t_0, \tau_0, x); t)$ be the solution of the system (29) starting from $t_0 = 0$, $\tau_0 = (0, \ldots, 0)$ and $x \in \mathbb{R}^m_{\ge 0}$. Consider the solution of (29) for all times $t$ such that
$$\tau((t_0, \tau_0, x); t) < \overrightarrow{\sigma}. \qquad (\dagger)$$
Let $t^*$ be the first time such that $(\dagger)$ does not hold anymore. Stop the solution $\tau((t_0, \tau_0, x); \cdot)$ at time $t^*$. Observe that the condition $(\dagger)$ is violated if there exists an index $i^* \in \{1, \ldots, m\}$ such that
$$\tau_{i^*}((t_0, \tau_0, x); t^*) = \sigma^{(i^*,M)}_1.$$
Notice here that there might be more than one $i^*$ where the above equality is valid; however, for the sake of convenience, we assume for the moment that there exists only one such index. We will deal with the general case in the proof of Theorem 3.7.

Step 2:
Update
\[
\overleftarrow\sigma := (0,\dots,\sigma^{(i^*,M)}_1,\dots,0), \tag{33}
\]
\[
\overrightarrow\sigma := \big(\sigma^{(1,M)}_1,\dots,\sigma^{(i^*,M)}_2,\dots,\sigma^{(m,M)}_1\big), \tag{34}
\]
\[
x := x + \Delta \widehat Z^{\uparrow}(M)(\overleftarrow\sigma), \tag{35}
\]
where $\sigma^{(i^*,M)}_2(\omega)$ is the second jump time in the path $t \mapsto \widehat Z^{\uparrow}(i^*,M)_t(\omega)$.

Step 3:
Let $\tau((t_1,\tau_1,x_1);t)$ be the solution of the system (29) starting from the updated values $t_1 = t^*$, $\tau_1 = \tau((t_0,\tau_0,x);t^*)$ and $x_1 = x \in \mathbb R^m_{\geq 0}$. As before, we let $\tau((t_1,\tau_1,x_1);\cdot)$ evolve as long as
\[
\tau((t_1,\tau_1,x_1);t) < \overrightarrow\sigma \tag{36}
\]
holds. As soon as this condition does not hold anymore, we stop the solution again.

End: Iterate Step 2 and Step 3.

The above algorithm describes the guiding principle for the proof of the next result.
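As a numerical aside, the jump-to-jump stepping of Steps 1–3 can be sketched in a one-dimensional caricature. The Python snippet below is purely illustrative (the function name, the Euler discretization and the affine stand-in vector field are our own choices, not part of the construction above): between two jump levels of the piecewise constant path, the vector field is frozen and integrated with small Euler steps, and each jump is absorbed into the state as soon as the clock $\tau$ crosses the corresponding level, as in Step 2.

```python
# One-dimensional caricature of the jump-to-jump stepping scheme:
# solve  tau'(t) = x + b*tau(t) + J(tau(t)),  tau(0) = 0,
# where J is a piecewise constant path with a jump of size jumps[k]
# at the (time-change) level sigma[k].

def solve_time_change(x, b, sigma, jumps, T, dt=1e-4):
    """Return tau(T) for the caricature above (illustrative names)."""
    tau, t = 0.0, 0.0
    level = 0          # index of the next jump level of J
    offset = 0.0       # accumulated jumps J(tau) absorbed so far
    while t < T:
        speed = x + b * tau + offset      # current (frozen) vector field
        if speed <= 0.0:                  # the clock stops if the field hits 0
            break
        tau += speed * dt                 # Euler step between jump levels
        t += dt
        # Step 2 of the algorithm: when tau reaches the next jump level,
        # absorb the jump into the state and continue with the new field.
        while level < len(sigma) and tau >= sigma[level]:
            offset += jumps[level]
            level += 1
    return tau

# tau is nondecreasing in the jumps: each crossed level speeds up the clock.
tau_no_jump = solve_time_change(1.0, 0.0, [], [], T=1.0)       # about 1.0
tau_jump = solve_time_change(1.0, 0.0, [0.5], [2.0], T=1.0)    # about 2.0
assert tau_no_jump <= tau_jump
```

Without jumps the clock runs at speed $x = 1$, so $\tau(1) \approx 1$; with a jump of size $2$ at level $0.5$, the speed becomes $3$ once $\tau$ crosses $0.5$, so $\tau(1) \approx 0.5 + 3 \cdot 0.5 = 2$.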
Theorem 3.7.
There exists a solution of
\[
\dot\tau^{(M)}((0,0,x);t) = (x + \widetilde Z + \widehat Z^{\uparrow}(M))(\tau^{(M)}((0,0,x);t)), \qquad \tau^{(M)}((0,0,x);0) = 0. \tag{37}
\]

Proof.
We have already carried out all the main steps for the proof of this result. Let $\mathcal T_M$ and $\Sigma_M$ be the sets defined in Notation 3.6. Recall that $\mathcal T^{\Sigma}_M$ is a countable family. Enumerate the elements of $\mathcal T^{\Sigma}_M$ such that $\sigma^{(i,M)}_k$ denotes the $k$-th jump time of $\widehat Z^{\uparrow}(i,M)$. Fix $x \in D$ and set
\[
(t_0,\tau_0,x_0) := (0,0,x)
\]
and
\[
\overleftarrow\sigma := (0,\dots,0), \qquad \overrightarrow\sigma := \big(\sigma^{(1,M)}_1,\dots,\sigma^{(i,M)}_1,\dots,\sigma^{(m,M)}_1\big),
\]
where $\sigma^{(i,M)}_k$ denotes the $k$-th jump time in the path $t \mapsto \widehat Z^{\uparrow}(i,M)_t$ for all $i = 1,\dots,m$. By definition, $\widehat Z^{\uparrow}(M)(s) = 0$ for all $s < \overrightarrow\sigma$. Proposition 3.5 gives the existence of the solution of (29) with this set of input parameters. Denote it by $\tau((t_0,\tau_0,x_0);t)$. As soon as the solution $\tau((t_0,\tau_0,x_0);t)$ reaches a jump time of $\widehat Z^{\uparrow}(M)$, the vector field in the equation (37) changes. Precisely, set
\[
t_1 := \sup\{ t > 0 \mid \tau((t_0,\tau_0,x_0);t) < \overrightarrow\sigma \}.
\]
Again there might be one or more indices $i^*$ for which the condition fails. Collect them in a set $I^* \subseteq \{1,\dots,m\}$. Update the values
\[
\pi_{I^*}\overleftarrow\sigma := \pi_{I^*}\overrightarrow\sigma, \qquad \pi_{I^*}\overrightarrow\sigma := \pi_{I^*}\overrightarrow\sigma^{++}, \tag{38}
\]
where $\overrightarrow\sigma^{++}$ contains, for all $i \in I^*$, the next jump time of $\widehat Z^{\uparrow}(i,M)$ after $\overrightarrow\sigma_i$. Then define
\[
\tau_1 := \tau((t_0,\tau_0,x_0);t_1), \qquad x_1 := x_0 + \Delta \widehat Z^{\uparrow}(M)(\overleftarrow\sigma).
\]
Now, consider again the solution of (29), but this time with parameters $(t_1,\tau_1,x_1)$. Denote it by $\tau((t_1,\tau_1,x_1);t)$ and observe that it is well defined as long as all the coordinates of $\tau((t_1,\tau_1,x_1);t)$ stay below the next jump times of $\widehat Z^{\uparrow}(M)$. We obtain the solution of (37) by pasting together a finite number of solutions obtained on the time subintervals defined by $\mathcal T^{\Sigma}_M$. Define iteratively, for all $n \geq 0$,
\[
t_{n+1} := \sup\{ t > t_n \mid \tau((t_n,\tau_n,x_n);t) < \overrightarrow\sigma \}, \tag{39}
\]
\[
\tau_{n+1} := \tau((t_n,\tau_n,x_n);t_{n+1}), \tag{40}
\]
\[
x_{n+1} := x_n + \Delta \widehat Z^{\uparrow}(M)(\overleftarrow\sigma), \tag{41}
\]
where, at each step, $\overleftarrow\sigma$ and $\overrightarrow\sigma$ are updated using the prescription in (38).
Continuity follows by construction.

Now that we have found a solution of the approximated problems, we would like to show convergence to the solution of (27). The following results focus on monotonicity and convergence of (37).

Lemma 3.8.
Let $i = 1,\dots,m$ and $M \in \mathbb N$ be fixed. Then, for all $t \geq 0$ it holds
\[
\widehat Z^{\uparrow}(i,M)_t \leq \widehat Z^{(i)}_t \leq \widehat Z^{\downarrow}(i,M)_t \quad \text{almost surely}.
\]
Moreover, for each $\omega \in \Omega$, the sequences $\{\widehat Z^{\uparrow}(i,M)(\omega)\}_{M \in \mathbb N}$ and $\{\widehat Z^{\downarrow}(i,M)(\omega)\}_{M \in \mathbb N}$ are monotone in the sense that, for all $t \geq 0$,
\[
\widehat Z^{\uparrow}(i,M+1)_t(\omega) \geq \widehat Z^{\uparrow}(i,M)_t(\omega) \quad \text{and} \quad \widehat Z^{\downarrow}(i,M+1)_t(\omega) \leq \widehat Z^{\downarrow}(i,M)_t(\omega).
\]

Proof.
Since $Z^{(i)}$ has no negative jumps and, by assumption, $(\beta_i)_k \geq 0$ for $k \neq i$, the paths of $\widehat Z^{(i)}$ are increasing. Therefore,
\[
\widehat Z^{(i)}_t \geq \widehat Z^{(i)}_{k/2^M} = \widehat Z^{\uparrow}(i,M)_t, \quad \text{a.s. for all } t \in \Big[\tfrac{k}{2^M}, \tfrac{k+1}{2^M}\Big).
\]
For the same reason,
\[
\widehat Z^{(i)}_t \leq \widehat Z^{(i)}_{(k+1)/2^M} = \widehat Z^{\downarrow}(i,M)_t, \quad \text{a.s. for all } t \in \Big[\tfrac{k}{2^M}, \tfrac{k+1}{2^M}\Big).
\]
Now, since for every $M \in \mathbb N$ the partition $\mathcal T_{M+1}$ is obtained by halving all the subintervals of the partition $\mathcal T_M$, it clearly holds
\[
\widehat Z^{\uparrow}(i,M+1)_t(\omega) =
\begin{cases}
\widehat Z^{\uparrow}(i,M)_t(\omega), & t \in \big[\tfrac{2k}{2^{M+1}}, \tfrac{2k+1}{2^{M+1}}\big), \\[2pt]
\widehat Z^{(i)}_{(2k+1)/2^{M+1}}(\omega), & t \in \big[\tfrac{2k+1}{2^{M+1}}, \tfrac{2k+2}{2^{M+1}}\big).
\end{cases}
\]
Using again the fact that the paths of $\widehat Z^{(i)}$ are increasing, we conclude that
\[
\widehat Z^{\uparrow}(i,M+1)_t \geq \widehat Z^{\uparrow}(i,M)_t, \quad \text{a.s.},
\]
because $\widehat Z^{(i)}_{(2k+1)/2^{M+1}} \geq \widehat Z^{(i)}_{2k/2^{M+1}} = \widehat Z^{(i)}_{k/2^M}$. The case of $\widehat Z^{\downarrow}(i,M)$ goes analogously.

Proposition 3.9.
Let $M \in \mathbb N$ be fixed and denote by $\tau^{(M)}((0,0,x);t)$ the solution of (37) constructed in Theorem 3.7. Then, for all $t \geq 0$ and $x \in \mathbb R^m_{\geq 0}$ it holds
\[
\tau^{(M)}((0,0,x);t) \leq \tau^{(M+1)}((0,0,x);t), \quad \text{almost surely}.
\]

Proof.
This follows by construction, using the monotonicity proved in Lemma 3.8. Indeed, denote by $\mathcal T^{\Sigma}_M := \{\sigma^{(M)}_k\}_{k \in \mathbb N}$ and $\mathcal T^{\Sigma}_{M+1} := \{\sigma^{(M+1)}_k\}_{k \in \mathbb N}$ the sets of jump times of $\widehat Z^{\uparrow}(M)$ and $\widehat Z^{\uparrow}(M+1)$, respectively. By construction, $\mathcal T^{\Sigma}_M \subset \mathcal T^{\Sigma}_{M+1}$ in the sense that for each $\sigma^{(M)}_k \in \mathcal T^{\Sigma}_M$ there exists $h \in \mathbb N$ such that $\sigma^{(M)}_k = \sigma^{(M+1)}_h \in \mathcal T^{\Sigma}_{M+1}$. Denote by $\{\sigma^{(M+1)}_{k_h}\}_h$ the jump times of $\widehat Z^{\uparrow}(M+1)$ occurring on the subinterval $[\sigma^{(M)}_k, \sigma^{(M)}_{k+1}]$. By construction, there is only one new jump inside this interval, so we may write $\{\sigma^{(M+1)}_{k_h}\}_{h=1,2,3}$ with $\sigma^{(M+1)}_{k_1} = \sigma^{(M)}_k$ and $\sigma^{(M+1)}_{k_3} = \sigma^{(M)}_{k+1}$. Then $\tau^{(M+1)}$ is obtained by pasting together a finite number of solutions of initial value problems with piecewise linear vector fields. For each $h = 1,2,3$, $\widehat Z^{\uparrow}(M+1)(\sigma^{(M+1)}_{k_h}) \geq \widehat Z^{\uparrow}(M)(\sigma^{(M)}_k)$. Therefore, on each subinterval $[\sigma^{(M)}_k, \sigma^{(M)}_{k+1}]$, the solution $\tau^{(M+1)}((t_k,\tau_k,x_k);t)$ is constructed by pasting together a finite number of solutions of the type $\tau((t_{k,h},\tau_{k,h},x_{k,h});t)$, where $x_{k,h}$ is an increasing sequence in $h$. Hence we conclude that, for all $k \in \mathbb N$ and for $t \in [\sigma^{(M)}_k, \sigma^{(M)}_{k+1}]$, it holds
\[
\tau^{(M+1)}((t_k,\tau_k,x_k);t) \geq \tau^{(M)}((t_k,\tau_k,x_k);t).
\]

The last monotonicity argument we need follows directly from the definition of the ODEs.
Lemma 3.10.
Let $M$, $t_0$, $\tau_0$ be fixed and $x \leq y$. Consider the systems
\[
\dot\tau^{(M)}((t_0,\tau_0,x);t) = (x + \widetilde Z + \widehat Z^{\uparrow}(M))(\tau^{(M)}((t_0,\tau_0,x);t)), \qquad \tau^{(M)}((t_0,\tau_0,x);t_0) = \tau_0,
\]
\[
\dot\tau^{(M)}((t_0,\tau_0,y);t) = (y + \widetilde Z + \widehat Z^{\uparrow}(M))(\tau^{(M)}((t_0,\tau_0,y);t)), \qquad \tau^{(M)}((t_0,\tau_0,y);t_0) = \tau_0.
\]
Then, for all $t \geq t_0$ it holds
\[
\tau^{(M)}((t_0,\tau_0,x);t) \leq \tau^{(M)}((t_0,\tau_0,y);t), \quad \text{almost surely}.
\]

Finally, due to monotonicity, we know that the sequence $\tau^{(M)}$ admits a limit. With the next result we show that the limit is actually finite and, by monotone convergence, coincides with the solution of (27).

Proposition 3.11. For all $t \geq 0$ and $x \in \mathbb R^m_{\geq 0}$ the sequence $\tau^{(M)}((0,0,x);t)$ converges,
\[
\lim_{M \to \infty} \tau^{(M)}((0,0,x);t) = \tau^{(*)}((0,0,x);t),
\]
and the limit can be identified with the solution of (27).

Proof. Let $\tau^{(*)}((0,0,x);\cdot)$ be the limit of the sequence $\{\tau^{(M)}((0,0,x);\cdot)\}_{M \geq 0}$. Since the sequence is monotone, the convergence is actually uniform. Observe that the same holds for the limit of the sequence of solutions of the system (37) when $\widehat Z^{\uparrow}(M)$ is replaced by $\widehat Z^{\downarrow}(M)$. Applying the dominated convergence theorem, it follows that $\tau^{(*)}((0,0,x);t)$ coincides with the solution of (27).

At this point, most of the results we need for the proof of Theorem 3.1 have been proved. The final step is to construct the solution of the time change equation (19) using the solution of the system (27).

Proof of Theorem 3.1.
Let $\tau = (\tau^{(1)},\dots,\tau^{(m)})$ be the solution of (27) with $Z := \sum_{i=1}^m \pi_I Z^{(i)}$. Then, for $k = 1,\dots,m$ and $i = 1,\dots,d$, define
\[
Y^{(k)}_i(t) := Z^{(k)}_i\big(\tau^{(k)}(t)\big).
\]
Moreover, observe that, due to the restrictions in (H), the Lévy processes $Z^{(k)}$ for $k = m+1,\dots,d$ are identically zero, and therefore the $Y^{(k)}$ are identically zero as well.
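To close the section, the sandwich and monotonicity properties of the dyadic approximations (Lemma 3.8) can be checked numerically. The following Python snippet is purely illustrative: `Z` below is a hand-made increasing path (drift plus unit jumps), not a Lévy sample path, and `Z_up`/`Z_down` mimic the left- and right-endpoint approximations $\widehat Z^{\uparrow}(i,M)$ and $\widehat Z^{\downarrow}(i,M)$ on the grid $k/2^M$.

```python
# Illustration of Lemma 3.8 for an increasing path t -> Z(t):
# the left-endpoint approximation on the grid k/2^M lies below the path,
# the right-endpoint approximation lies above, and refining M moves the
# lower approximation up and the upper approximation down.

import math

def Z(t):
    # stand-in increasing path: drift plus unit jumps at t = 1/3, 2/3, 1, ...
    return t + math.floor(3 * t)

def Z_up(t, M):
    # left endpoint of the dyadic interval containing t: lower approximation
    return Z(math.floor(t * 2**M) / 2**M)

def Z_down(t, M):
    # right endpoint of the dyadic interval containing t: upper approximation
    return Z((math.floor(t * 2**M) + 1) / 2**M)

for t in [k / 100 for k in range(100)]:
    assert Z_up(t, 4) <= Z(t) <= Z_down(t, 4)   # sandwich property
    assert Z_up(t, 4) <= Z_up(t, 5)             # lower approx. increases in M
    assert Z_down(t, 5) <= Z_down(t, 4)         # upper approx. decreases in M
```

The assertions hold for exactly the reason used in the proof: the grid point on the left of $t$ moves right (and the one on the right moves left) as $M$ increases, and $Z$ is increasing.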
4. Pathwise construction of affine processes with time–change
We start by summarizing the results of Section 3. In Proposition 3.11 we have shown that the system of ODEs
\[
\dot\tau(t) = (x + Z)(\tau(t)), \qquad \tau(0) = 0, \tag{27}
\]
admits a solution which can be constructed as the limit of approximated problems. Then, in Theorem 3.1 we showed how to use these solutions in order to construct the processes $\{Y^{(k)}_i\}_{k,i=1,\dots,d}$ defined by means of the time–change equation (19). In this section we are going to see how to combine these processes in order to construct an affine process. Before doing so, we clarify the main steps by means of an easy two dimensional example with $n = m = 1$.

Example 4.1.
The results in Section 3, in the particular case $m = n = 1$, give the existence of a solution for the time change equation
\[
Y^{(k)}_i(t) := Z^{(k)}_i\Big(\int_0^t f^{(k)}_i(Y_s)\,ds\Big), \qquad k, i = 1, 2, \quad t \geq 0.
\]
Under the assumption (H), $Y^{(2)}_t = 0$ for all $t \geq 0$ and
\[
Y^{(1)}_1(t) := Z^{(1)}_1\Big(\int_0^t \big(x_1 + Y^{(1)}_1(s)\big)\,ds\Big).
\]
Define $X = x + NY$ with
\[
N = \begin{pmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \end{pmatrix}.
\]
Inserting the definitions of the $Y^{(k)}_i$, it is clear that $X$ satisfies
\[
\begin{pmatrix} X^{(1)} \\ X^{(2)} \end{pmatrix}
=
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}
+
\begin{pmatrix}
Z^{(1)}_1\big(\int_0^{\cdot} X^{(1)}_s\,ds\big) + Z^{(2)}_1\big(\int_0^{\cdot} X^{(2)}_s\,ds\big) \\[2pt]
Z^{(1)}_2\big(\int_0^{\cdot} X^{(1)}_s\,ds\big) + Z^{(2)}_2\big(\int_0^{\cdot} X^{(2)}_s\,ds\big)
\end{pmatrix}.
\]
In vector notation, we can write
\[
X = x + \sum_{i=1}^2 Z^{(i)}\Big(\int_0^{\cdot} X^{(i)}_s\,ds\Big),
\]
which is indeed the formulation in Theorem 2.5.

The next theorem is a re-formulation of the above argument in the general multivariate case. The additional problem we still need to address is the measurability of the time–change process with respect to the filtration generated by the $Z^{(i)}$, $i = 1,\dots,d$. In order to do so, we will need the notions of multivariate filtration and multivariate stopping time taken from [EK86]. Recall that $Z^{(i)}$, $i = 1,\dots,d$, is a family of Lévy processes as in the setting of Section 3. For all $s = (s_1,\dots,s_d) \in \mathbb R^d_{\geq 0}$, define the $\sigma$-algebra
\[
\mathcal G^{\natural}_s := \sigma\big(\{ Z^{(h)}_{t_h},\ t_h \leq s_h, \text{ for } h = 1,\dots,d \}\big), \tag{42}
\]
and then complete it by
\[
\mathcal G_s := \bigcap_{n \in \mathbb N} \mathcal G^{\natural}_{s(n)} \vee \sigma(\mathcal N), \tag{43}
\]
where $\mathcal N$ is the collection of sets in $\mathcal G$ with $\mathbb P$-probability zero and $s(n)$ is the sequence defined by $s(n)_k = s_k + 1/n$.

Definition 4.2.
A random variable $\tau = (\tau_1,\dots,\tau_d) \in \mathbb R^d_{\geq 0}$ is a $(\mathcal G_s)$-stopping time if
\[
\{\tau \leq s\} := \{\tau_1 \leq s_1,\dots,\tau_d \leq s_d\} \in \mathcal G_s, \qquad \text{for all } s \in \mathbb R^d_{\geq 0}.
\]
If $\tau$ is a stopping time,
\[
\mathcal G_{\tau} := \{ B \in \mathcal G \mid B \cap \{\tau \leq s\} \in \mathcal G_s \text{ for all } s \in \mathbb R^d_{\geq 0} \}.
\]

Now that we have introduced the necessary notation, we are ready to prove the following result.
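As a numerical aside before the general multivariate result, the scalar time–change equation of Example 4.1 can be simulated with a naive Euler scheme. The sketch below is illustrative only: the path $Z(s) = s$ is a deterministic stand-in for the Lévy path $Z^{(1)}_1$, chosen because it reduces the equation to the linear ODE $Y' = x + Y$, whose explicit solution $Y(t) = x(e^t - 1)$ serves as a sanity check; the function name and step size are our own choices.

```python
# Euler sketch of the one-dimensional time-change equation of Example 4.1,
#     Y(t) = Z( int_0^t (x + Y(s)) ds ),
# for a given nondecreasing path Z.  With the stand-in Z(s) = s the
# equation becomes Y'(t) = x + Y(t), so Y(t) = x*(e^t - 1).

import math

def time_changed_path(Z, x, T, dt=1e-4):
    """Track theta(t) = int_0^t (x + Y(s)) ds and set Y(t) = Z(theta(t))."""
    theta, t = 0.0, 0.0
    Y = Z(0.0)
    while t < T:
        theta += (x + Y) * dt   # advance the operational clock
        t += dt
        Y = Z(theta)            # read the path at the time-changed clock
    return Y

Y1 = time_changed_path(lambda s: s, x=1.0, T=1.0)
assert abs(Y1 - (math.e - 1)) < 1e-2   # matches Y(1) = e - 1
```

Replacing the lambda by a sampled subordinator path would give a draw of the CIR-type process of Example 4.1; the Euler scheme itself is unchanged.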
Theorem 4.3.
Let $(b, \beta, a, \alpha, c, \gamma, m, M)$ be a set of admissible parameters satisfying the Assumptions $A_m$, $A_H$ and $\mathring A$.

(i) The time–change equation
\[
X_t = x + \sum_{i=1}^d Z^{(i)}(\theta^{(i)}_t), \qquad \text{with } \theta^{(i)}_t = \int_0^t X^{(i)}_r\,dr, \tag{44}
\]
admits a unique solution.

(ii) Define
\[
\theta^x_t := (\underbrace{\theta^{(1)}_t,\dots,\theta^{(1)}_t}_{d \text{ times}},\dots,\underbrace{\theta^{(d)}_t,\dots,\theta^{(d)}_t}_{d \text{ times}}) \in \mathbb R^{d^2}.
\]
The random variable $\theta^x_t$ is a $\mathcal G_s$-stopping time for all $t \geq 0$. Hence the time–changed filtration
\[
\mathcal G_{\theta^x_t} := \{ A \mid A \cap \{\theta^x_t \leq s\} \in \mathcal G_s, \text{ for all } s \in \mathbb R^{d^2}_{\geq 0} \}
\]
is well defined.

(iii) Let $R$ be the function defined as in (9). The solution of (44) is an affine process with functional characteristics $(0, R)$ with respect to the time–changed filtration $(\mathcal G_{\theta^x_t})_{t \geq 0}$.

Proof. Let $Y \in \mathbb R^{d^2}$ be the process obtained by collecting the solutions of (19) as
\[
Y := (Y^{(1)}_1,\dots,Y^{(1)}_d, Y^{(2)}_1,\dots,Y^{(2)}_d,\dots,Y^{(d)}_1,\dots,Y^{(d)}_d).
\]
Consider the matrix
\[
N := \begin{pmatrix} I_d & I_d & \cdots & I_d \end{pmatrix} \in \mathbb R^{d \times d^2},
\]
where $I_d$ denotes the $d \times d$ identity matrix. Then
\[
X = x + \sum_{k=1}^d Y^{(k)}
\]
is a solution of the time–change equation (44). Indeed, in vector notation, we can write $X = x + NY$.
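As a quick sanity check (illustrative pure-Python code, not part of the proof), one can verify that with $N = (I_d \mid I_d \mid \cdots \mid I_d)$ the identity $x + NY = x + \sum_k Y^{(k)}$ holds coordinatewise for the stacked vector $Y$.

```python
# Check that the block matrix N = (I_d | I_d | ... | I_d) in R^{d x d^2}
# turns the stacked vector Y = (Y^(1), ..., Y^(d)) into the coordinatewise
# sum sum_k Y^(k), i.e. that x + N Y agrees with x + sum_{k=1}^d Y^(k).

d = 3
# column i of N belongs to block k = i // d and coordinate j = i % d,
# so N[j][i] = 1 exactly when i % d == j
N = [[1 if i % d == j else 0 for i in range(d * d)] for j in range(d)]

Y = [float(i) for i in range(d * d)]   # stacked (Y^(1), ..., Y^(d))
x = [10.0] * d

# matrix-vector product N Y, computed by hand
NY = [sum(N[j][i] * Y[i] for i in range(d * d)) for j in range(d)]
# direct coordinatewise sum x_j + sum_k Y^(k)_j, with Y^(k)_j = Y[k*d + j]
direct = [x[j] + sum(Y[k * d + j] for k in range(d)) for j in range(d)]

assert [x[j] + NY[j] for j in range(d)] == direct
```

The same check works for any $d$; only the block indexing $Y^{(k)}_j = Y[k\cdot d + j]$ is a convention chosen here for the illustration.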
Then, if $Z^{(k)}_j$ denotes the $j$-th coordinate of the $k$-th Lévy process,
\[
Z^{(k)}_j\Big(\int_0^t f^{(k)}_j(Y_s)\,ds\Big) = Z^{(k)}_j\Big(\int_0^t \langle x + NY_s, e_k\rangle\,ds\Big) = Z^{(k)}_j\Big(\int_0^t X^{(k)}_s\,ds\Big)
\]
and
\[
X_j = x_j + \sum_{k=1}^d Y^{(k)}_j = x_j + \sum_{k=1}^d Z^{(k)}_j\Big(\int_0^t X^{(k)}_s\,ds\Big).
\]
Moreover, the random vector
\[
\tau(t) := \big(\tau^{(1)}_1(t),\dots,\tau^{(1)}_d(t),\dots,\tau^{(d)}_1(t),\dots,\tau^{(d)}_d(t)\big), \qquad \text{where } \tau^{(k)}_i(t) = \int_0^t f^{(k)}_i(Y_s)\,ds,
\]
is a $\mathcal G_s$-stopping time for all $t \geq 0$. This follows from Theorem VI.2.2 in [EK86]. From the affine relationship between $X$ and $Y$ we conclude that $\theta^x_t$ is a $\mathcal G_s$-stopping time and therefore the time–changed filtration is well defined.

Now, we need to check that $X$ is a homogeneous Markov process with respect to $(\mathcal G_{\theta^x_t})_{t \geq 0}$. Applying Proposition I.6 in [Ber98] to each component $Z^{(k)}$, $k = 1,\dots,d$, we get that $(Z(\theta^x_{t+h}) - Z(\theta^x_t))_{h \geq 0}$ has the same law as $(Z(\theta^x_h))_{h \geq 0}$ and is independent of $\mathcal G_{\theta^x_t}$. Therefore,
\[
X^x_{t+h} = X^x_t + N\big(Z(\theta^x_{t+h}) - Z(\theta^x_t)\big) =: S_t\big(Z(\theta^x_{t+h}) - Z(\theta^x_t), X^x_t\big),
\]
with
\[
S_t : \mathbb R^{d^2} \times \mathbb R^d \to \mathbb R^d, \qquad (Z, X) \mapsto X + NZ.
\]
Therefore, we conclude that the conditional law of $X^x_{t+h}$ given $\mathcal G_{\theta^x_t}$ is $X^x_t$-measurable. The Markov property translates into
\[
X^x_{t+h} = S\big(Z(\theta^y_h), y\big)\big|_{y = X^x_t}.
\]
Additionally, the time–change process is absolutely continuous with
\[
\frac{d}{dt}\theta^{(k)}_i(t) = X^{(k)}_{t-}, \qquad \text{for all } k, i = 1,\dots,d.
\]
The characteristics of the time–changed semimartingale can be computed using the formulas in Theorem 8.4 in [BNS10], from which we conclude that the process $(S(Z(\theta^x_t), x))_{t \geq 0}$ has characteristics $(\beta(X_-), \alpha(X_-), M(X_-))$, where
\[
\beta(x) = x_1\beta_1 + \dots + x_m\beta_m, \qquad \alpha(x) = x_1\alpha_1 + \dots + x_m\alpha_m, \qquad M(x, B) = x_1 M_1(B) + \dots + x_m M_m(B), \quad B \in \mathcal B(D).
\]

References

[Bat96] D. S. Bates. Jumps and stochastic volatility: exchange rate processes implicit in deutsche mark options. Review of Financial Studies, 9(1):69–107, 1996.

[Ber98] J. Bertoin.
Lévy Processes. Cambridge University Press, 1998.

[BNS01] O. E. Barndorff-Nielsen and N. Shephard. Modelling by Lévy processes for financial econometrics. In Lévy Processes, pages 283–318. Birkhäuser Boston, Boston, MA, 2001.

[BNS10] O. E. Barndorff-Nielsen and A. Shiryaev. Change of Time and Change of Measure. World Scientific, 2010.

[CPGUB13] M. E. Caballero, J. L. Pérez Garmendia, and G. Uribe Bravo. A Lamperti-type representation of continuous-state branching processes with immigration. The Annals of Probability, 41(3):1585–1627, May 2013.

[CT13] C. Cuchiero and J. Teichmann. Path properties and regularity of affine processes on general state spaces. In Séminaire de Probabilités XLV, volume 2078 of Lecture Notes in Math., pages 201–244. Springer, Cham, 2013.

[DFS03] D. Duffie, D. Filipović, and W. Schachermayer. Affine processes and applications in finance. The Annals of Applied Probability, 13(3):984–1053, 2003.

[EK86] S. N. Ethier and T. G. Kurtz. Markov Processes. Wiley Series in Probability and Mathematical Statistics: Probability and Mathematical Statistics. John Wiley & Sons, Inc., New York, 1986.

[FM09] D. Filipović and E. Mayerhofer. Affine diffusion processes: theory and applications. Advanced Financial Modelling, 8:1–40, 2009.

[Gab14] N. Gabrielli. Affine Processes from the Perspective of Path Space Valued Lévy Processes. PhD thesis, ETH Zürich, 2014.

[Hes93] S. L. Heston. A closed-form solution for options with stochastic volatility with applications to bond and currency options. The Review of Financial Studies, 6(2):327–343, 1993.

[Kal06] J. Kallsen. A didactic note on affine stochastic volatility models. In From Stochastic Calculus to Mathematical Finance, page 343. Bachelier Colloquium on Stochastic Calculus and Probability, 2006.

[KST11] M. Keller-Ressel, W. Schachermayer, and J. Teichmann. Affine processes are regular. Probab. Theory Related Fields, 151(3-4):591–611, 2011.

[KST13] M. Keller-Ressel, W. Schachermayer, and J. Teichmann. Regularity of affine processes on general state spaces. Electron. J. Probab., 18: no. 43, 17 pp., 2013.