Factorization and discrete-time representation of multivariate CARMA processes
Vicky Fasen-Hartmann† and Markus Scholz‡§

Abstract: In this paper we show that stationary and non-stationary multivariate continuous-time ARMA (MCARMA) processes have a representation as a sum of multivariate complex-valued Ornstein-Uhlenbeck processes under some mild assumptions. The proof benefits from properties of rational matrix polynomials. A conclusion is an alternative description of the autocovariance function of a stationary MCARMA process. Moreover, that representation is used to show that the discrete-time sampled MCARMA(p, q) process is a weak VARMA(p, p − 1) process if second moments exist. That result complements the weak VARMA(p, p − 1) representation derived in Chambers and Thornton [8]. In particular, it relates the right solvents of the autoregressive polynomial of the MCARMA process to the right solvents of the autoregressive polynomial of the VARMA process; in the one-dimensional case the right solvents are the zeros of the autoregressive polynomial. Finally, a factorization of the sample autocovariance function of the noise sequence is presented which is useful for statistical inference.

AMS Subject Classification 2020:
Primary: 62M10. Secondary: 62M86, 60G10.
Keywords: autocovariance function, latent root, matrix polynomial, MCARMA process, Ornstein-Uhlenbeck process, rational matrix function, right solvent, VARMA process.
† Institute of Stochastics, Englerstraße 2, D-76131 Karlsruhe, Germany, email: [email protected]
‡ Allianz Lebensversicherung-AG, Reinsburgstraße 19, D-70197 Stuttgart, Germany.
§ Financial support by the Deutsche Forschungsgemeinschaft through the research grant FA 809/2-2 is gratefully acknowledged.

1. Introduction

A multivariate continuous-time ARMA (MCARMA) process is a continuous-time version of the well-known vector ARMA (VARMA) process in discrete time. MCARMA processes are applied in diversified fields such as, e.g., signal processing and control (cf. [12, 15]), high-frequency financial econometrics (cf. [23]) and financial mathematics (cf. [1]). The driving process of an MCARMA process is a Lévy process L = (L(t))_{t≥0}, an R^m-valued stochastic process with L(0) = 0_m P-a.s. and stationary and independent increments. A d-dimensional MCARMA(p, q) process (p > q ≥ 0 integers) is the solution of the stochastic differential equation

    A(D)Y(t) = B(D)DL(t) for t ≥ 0,    (1.1)

where D is the differential operator with respect to t,

    A(λ) := I_d λ^p + A_1 λ^{p−1} + … + A_p  and  B(λ) := B_0 λ^q + … + B_{q−1} λ + B_q    (1.2)

are the autoregressive and the moving average polynomial, respectively, with A_1, …, A_p ∈ R^{d×d} and B_0, …, B_q ∈ R^{d×m}. The matrix I_d denotes the d × d-dimensional identity matrix and 0_{d×m} denotes a d × m-dimensional matrix whose entries are all zero in the following. In contrast, in discrete time the differential operator is replaced by the backshift operator and the differential DL(t) of the Lévy process by a weak white noise. Since a Lévy process is not differentiable, the question arises what the formal definition of an MCARMA process is. We can interpret (1.1) via linear continuous-time state space models as in Marquardt and Stelzer [19]. Therefore, define

    A* := ( 0_{d×d}   I_d       0_{d×d}   ⋯   0_{d×d}
            0_{d×d}   0_{d×d}   I_d       ⋱   ⋮
            ⋮                   ⋱         ⋱   0_{d×d}
            0_{d×d}   0_{d×d}   ⋯   0_{d×d}   I_d
            −A_p      −A_{p−1}  ⋯   ⋯        −A_1 ) ∈ R^{pd×pd},    (1.3)

C* := (I_d, 0_{d×d}, …, 0_{d×d}) ∈ R^{d×pd} and B* := (β_1^⊤, …, β_p^⊤)^⊤ ∈ R^{pd×m} with

    β_1 := … := β_{p−q−1} := 0_{d×m}  and  β_{p−j} := −∑_{i=1}^{p−j−1} A_i β_{p−j−i} + B_{q−j},  j = 0, …, q.

Then the R^d-valued MCARMA(p, q) process Y := (Y(t))_{t≥0} is defined by the state space equation

    Y(t) = C* X(t)  and  dX(t) = A* X(t) dt + B* dL(t).    (1.4)

It is interesting that if we define

    A := ( I_d       0_{d×d}   ⋯     0_{d×d}
           A_1       I_d       ⋱     ⋮
           ⋮         ⋱         ⋱     0_{d×d}
           A_{p−1}   ⋯         A_1   I_d ) ∈ R^{pd×pd},
    B := ( 0_{(p−(q+1))d×m} ; B_0 ; ⋮ ; B_q ) ∈ R^{pd×m},    (1.5)

then

    A B* = B.    (1.6)

The class of MCARMA processes is very rich. Under the constraint of finite second moments, Schlemm and Stelzer [21, Corollary 3.4] show that the class of stationary MCARMA processes and the class of stationary state space models are equivalent (see Fasen-Hartmann and Scholz [11] for cointegrated MCARMA processes).

The aim of the paper is to present sufficient criteria for stationary and non-stationary MCARMA(p, q) processes to have a representation as a sum of p multivariate Ornstein-Uhlenbeck processes (which are MCAR(1) = MCARMA(1,0) processes). For d =
1, it is well known that if the zeros r_1, …, r_p of A(λ) are distinct and have strictly negative real parts, then

    Y(t) = ∑_{k=1}^p Y_k(t)  with  Y_k(t) = ∫_{−∞}^t e^{r_k(t−u)} (B(r_k)/A′(r_k)) dL(u)    (1.7)

is a stationary solution of the state space model (1.4) and hence, a CARMA process (see Brockwell, Davis and Yang [5, Proposition 2]). Note that B(r_k)/A′(r_k) is the residue of A(λ)^{−1}B(λ) at r_k. In the present paper we extend this result to the multivariate setup for both stationary and non-stationary MCARMA processes. The zero r_k of A(λ) in the one-dimensional case is replaced by a d × d matrix R_k, which is as well a kind of multivariate "zero" of A(λ), the so-called right solvent, satisfying

    A_R(R_k) := I_d R_k^p + A_1 R_k^{p−1} + … + A_p = 0_{d×d}.

The result is derived in Theorem 3.1. Essential for our proof are basic principles for rational matrix polynomials from linear algebra which are not necessary in dimension d =
1. A main feature is that we obtain a sum of multivariate Ornstein-Uhlenbeck processes and not only some linear combination of multivariate Ornstein-Uhlenbeck processes. Since matrix multiplication is not commutative, this is not trivial. This differs from the one-dimensional case, where any linear combination of stationary Ornstein-Uhlenbeck processes is as well a sum of stationary Ornstein-Uhlenbeck processes. A straightforward consequence of our result is an alternative representation of the autocovariance function of a stationary MCARMA process in Proposition 3.6.

Although we consider in this paper a continuous-time model, the corresponding discrete-time models are of special interest. The reason for this is that despite having a continuous-time model, one often observes the process only at discrete time points as, e.g., in the context of high-frequency data. Hence, we use the representation of an MCARMA(p, q) process as a sum of p multivariate Ornstein-Uhlenbeck processes to derive a vector-valued ARMA (VARMA)(p, p − 1) representation for the low-frequency sampled MCARMA process (Y(nh))_{n∈N} (h > 0); for this purpose the representation of Y as a linear combination of multivariate Ornstein-Uhlenbeck processes is not sufficient. The statement is a direct extension of the ARMA(p, p − 1) representation of discretely sampled CARMA processes in Brockwell, Davis and Yang [5, Proposition 3], whose autoregressive polynomial ∏_{k=1}^p (1 − λ e^{r_k h}) of the ARMA representation has as zeros e^{−r_1 h}, …, e^{−r_p h}. In analogy, in the multivariate setup of this paper the autoregressive polynomial of the VARMA representation has right solvents e^{−R_1 h}, …, e^{−R_p h}.

In the econometric literature, the VARMA(p, p − 1) representation of a discretely sampled MCARMA process is well known, see, e.g., Chambers and Thornton [8, Corollary 1]; a nice overview on this topic is presented in Chambers, McCrorie and Thornton [7].
In contrast to us, Chambers and Thornton [8] assume some kind of observability and controllability conditions on submatrices of e^{A**}, where A** is constructed from A* by reflecting the entries of A* at the diagonal from the left lower corner to the right upper corner. There, the coefficients of the autoregressive polynomial in the VARMA representation are complicated functions of these submatrices. The current paper presents an alternative and simpler representation of the VARMA parameters and, in particular, it connects the autoregressive polynomial in the MCARMA representation to the autoregressive polynomial in the VARMA representation via the solvents. Our proof is an alternative proof requiring only assumptions on the right solvents of A(λ). In the multivariate setting, Schlemm and Stelzer [21, Proposition 5.1] proved that an MCARMA process has a representation as a linear combination of pd dependent one-dimensional Ornstein-Uhlenbeck processes. In the present paper, we will have multivariate Ornstein-Uhlenbeck processes and p instead of pd Ornstein-Uhlenbeck processes.

Similarly, as in the above-mentioned papers, our conclusions are advantageous for statistical inference of MCARMA processes. Brockwell and Lindner [3] use the representation (1.7) to solve both the sampling and the embedding problem for CARMA processes. In the first case, they deduce the explicit parameters of the ARMA representation of (Y(nh))_{n∈N}. In the second case, they present conditions under which an ARMA(p, q) process can be embedded in a CARMA(p, p − 1) process. Therefore, we think that our results might be helpful for a multivariate version of the sampling and embedding problem as well. But this is outside the scope of the present paper. Moreover, our findings are helpful to derive probabilistic properties of an MCARMA process.
Brockwell and Lindner [6], for example, use the ARMA(p, p − 1) representation of a CARMA process to derive necessary and sufficient conditions for the existence of a stationary CARMA process.

The paper is structured in the following way. In Section 2 we present preliminary results on matrix polynomials and rational matrix polynomials which lay the background for the upcoming results. The main results of the paper are given in Section 3.

2. Matrix polynomials and rational matrix functions

In this section, we review main results on matrix polynomials and rational matrix functions. References about matrix analysis and matrix polynomials are, e.g., the textbooks of Bernstein [2], Horn and Johnson [13] and Kailath [14]. The aim is to obtain matrix-valued "roots" of a matrix polynomial which help to define linear factors of a matrix polynomial. However, a challenge is that there is no analogue of the Fundamental Theorem of Algebra for matrix polynomials and matrix multiplication is not commutative.
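As a toy numerical illustration of this non-commutativity (a sketch, not code from the paper; the two matrices below are arbitrary choices): expanding the product of two linear factors (λI_d − R_a)(λI_d − R_b) gives the constant coefficient R_a R_b, so swapping the factors changes the resulting matrix polynomial.

```python
import numpy as np

# (lam*I - Ra)(lam*I - Rb) = lam^2*I - (Ra + Rb)*lam + Ra @ Rb,
# so the constant coefficient depends on the order of the factors.
Ra = np.array([[0., -1.], [2., -3.]])   # arbitrary example matrices
Rb = np.array([[-3., -2.], [0., -4.]])

const_ab = Ra @ Rb   # constant coefficient of (lam*I - Ra)(lam*I - Rb)
const_ba = Rb @ Ra   # constant coefficient of (lam*I - Rb)(lam*I - Ra)

# The two factorizations produce different matrix polynomials.
assert not np.allclose(const_ab, const_ba)
```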
Definition 2.1.
(a) A λ-matrix A: C → C^{d×m} of degree p and order (d, m) is defined as

    A(λ) = A_0 λ^p + A_1 λ^{p−1} + … + A_{p−1} λ + A_p, λ ∈ C,

where A_k ∈ C^{d×m} for k = 0, …, p. If additionally d = m, we say shortly that A(λ) is of degree p and order d, and define the spectrum of A(λ) as σ(A(·)) := {λ ∈ C : det(A(λ)) = 0}. If σ(A(·)) lies in the complement of the closed unit disc, then A(λ) is called Schur-stable. The λ-matrix A(λ) is called a monic λ-matrix of degree p and order d if A_0 := I_d.
(b) Let Z ∈ C^{d×d} and d = m. Then the right matrix polynomial A_R: C^{d×d} → C^{d×d} of the λ-matrix A(λ) is defined as

    A_R(Z) := A_0 Z^p + A_1 Z^{p−1} + … + A_{p−1} Z + A_p.

Next, we extend the definition of a root to the matrix polynomial case.
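Before doing so, the right evaluation of Definition 2.1(b) can be sketched numerically (an illustration in our own notation, not code from the paper): since the powers of Z stand on the right of the coefficients, A_R(Z) admits a right-sided Horner scheme.

```python
import numpy as np

def right_eval(coeffs, Z):
    """Right evaluation A_R(Z) = A_0 Z^p + ... + A_{p-1} Z + A_p of a
    lambda-matrix with coefficient list [A_0, ..., A_p].
    Right-sided Horner scheme: ((A_0 Z + A_1) Z + ...) Z + A_p."""
    out = coeffs[0]
    for Ak in coeffs[1:]:
        out = out @ Z + Ak
    return out

rng = np.random.default_rng(0)
A0, A1, A2 = np.eye(2), rng.normal(size=(2, 2)), rng.normal(size=(2, 2))
Z = rng.normal(size=(2, 2))

direct = A0 @ Z @ Z + A1 @ Z + A2   # degree p = 2, written out
assert np.allclose(right_eval([A0, A1, A2], Z), direct)
```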
Definition 2.2.
For a monic λ-matrix A(λ) of degree p and order d we define

    A^{(k)}(λ) := (d^k/dλ^k) A(λ), k = 1, …, p.

A matrix R ∈ C^{d×d} is defined to be a right solvent of A(λ) with multiplicity ν ∈ {1, …, p} if

    A_R(R) = 0_{d×d}, A_R^{(1)}(R) = 0_{d×d}, …, A_R^{(ν−1)}(R) = 0_{d×d} and A_R^{(ν)}(R) ≠ 0_{d×d}.

If A_R(R) = 0_{d×d} we simply say that R is a right solvent of A(λ). A right solvent R of A(λ) is called regular if σ(R) ∩ σ(Q(·)) = ∅, where Q(λ) is the monic λ-matrix of degree p − 1 satisfying A(λ) = Q(λ)(λ I_d − R).

Definition 2.3.
A set of right solvents R_1, …, R_µ ∈ C^{d×d} of the λ-matrix A(λ) of degree p is called complete if σ(A(·)) = ⋃_{j=1}^µ σ(R_j), where σ(R_j) = {λ ∈ C : det(λ I_d − R_j) = 0} is the spectrum of R_j. In this case, ν_1 + … + ν_µ = p, where ν_k is the multiplicity of the right solvent R_k.

The Vandermonde matrix is extended in the next definition.
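For solvents of multiplicity one, the confluent Vandermonde matrix of the next definition reduces to the block Vandermonde matrix V(R_1, R_2) = [[I_d, I_d], [R_1, R_2]]. A quick numerical illustration of the non-singularity criterion of the upcoming Lemma 2.5 (a sketch, not code from the paper; the matrices below are right solvents of the λ-matrix of Example 3.5, with spectra {−1, −2}, {−3, −4} and {−1, −4}):

```python
import numpy as np

# Three 2x2 right solvents; spec(R1) = {-1, -2}, spec(R2) = {-3, -4},
# spec(R3) = {-1, -4}, so spec(R1) and spec(R3) overlap at -1.
R1 = np.array([[0., -1.], [2., -3.]])
R2 = np.array([[-3., -2.], [0., -4.]])
R3 = np.array([[-7., 6.], [-3., 2.]])
I = np.eye(2)

def vander(Ra, Rb):
    """Block Vandermonde matrix V(Ra, Rb) for p = d = 2."""
    return np.block([[I, I], [Ra, Rb]])

# Disjoint spectra -> non-singular (here det V(R1, R2) = det(R2 - R1) = 1).
assert abs(np.linalg.det(vander(R1, R2))) > 1e-6

# Overlapping spectra -> singular, as Lemma 2.5 predicts.
assert abs(np.linalg.det(vander(R1, R3))) < 1e-8
```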
Definition 2.4.
Suppose R_1, …, R_µ are a complete set of right solvents of the matrix polynomial A(λ) with multiplicities ν_1, …, ν_µ, respectively. We define the confluent Vandermonde matrix W := W(R_1, …, R_µ) ∈ C^{pd×pd} by W = [W_1, …, W_µ], where for k = 1, …, µ,

    W_k = ( I_d          0_{d×d}          ⋯   0_{d×d}
            R_k          I_d                  ⋮
            R_k^2        2 R_k
            ⋮            ⋮
            R_k^{p−1}    (p−1) R_k^{p−2}  ⋯   binom(p−1, ν_k−1) R_k^{p−ν_k} ) ∈ C^{pd×ν_k d};

that is, the (i, j)-th d × d block of W_k is binom(i−1, j−1) R_k^{i−j} for i ≥ j and 0_{d×d} otherwise. In the case µ = p and ν_1 = … = ν_p = 1 we shortly write the (block) Vandermonde matrix V(R_1, …, R_p) = W(R_1, …, R_p).

Lemma 2.5 (Maroulas [18], Theorem 3.4).
Let R_1, …, R_µ be right solvents of a monic λ-matrix A(λ) with multiplicities ν_1, …, ν_µ, respectively. Then W(R_1, …, R_µ) is non-singular if and only if

    σ(A(·)) = ⋃_{j=1}^µ σ(R_j)  and  σ(R_j) ∩ σ(R_i) = ∅ for j, i = 1, …, µ, j ≠ i.

Thus, we have the following relation between the solvents of the λ-matrix A(λ) and the coefficient matrices A_1, …, A_p of A(λ).

Lemma 2.6 (Maroulas [18]).
Let R_1, …, R_p be a complete set of regular right solvents of the monic λ-matrix A(λ) = I_d λ^p + A_1 λ^{p−1} + … + A_{p−1} λ + A_p. Then

    [A_p, …, A_1] = −[R_1^p, …, R_p^p] V^{−1}(R_1, …, R_p)

and

    A(λ) = (λ I_d − R*_p) ⋯ (λ I_d − R*_2)(λ I_d − R_1),

where for k = 2, …, p,

    R*_k = M_k(R_k) R_k M_k^{−1}(R_k)

and M_k(R_k) is the right evaluation at R_k of M_k(λ) := (λ I_d − R_{k−1}) ⋯ (λ I_d − R_1).

Note that R*_k is not necessarily equal to R_k for k = 2, …, p, as it is in the one-dimensional case d = 1.

Definition 2.7. A strictly proper rational left λ-matrix F(λ) of degree p and order (d, m) has the representation

    F(λ) = A(λ)^{−1} B(λ),

where A(λ) is a monic λ-matrix of degree p and order d, and B(λ) is a λ-matrix of degree p − 1 and order (d, m). The rational λ-matrix F(λ) is called irreducible if A(λ) and B(λ) are left coprime. If F(λ) is irreducible and R is a regular right solvent of A(λ), then the residue of the rational λ-matrix F(λ) at R is defined by

    Res[F, R] := (1/(2πi)) ∮_{Γ_R} F(λ) dλ,

where Γ_R is a simple closed contour such that σ(R) is contained in the interior of Γ_R and σ(A(·)) \ σ(R) is contained in the exterior of Γ_R.

The next result characterizes a rational left matrix function. Although Tsay and Shieh [24] assume that d = m, it is straightforward to extend the result to the case d ≠ m (cf. Leyva-Ramos [17]).

Theorem 2.8 (Tsay and Shieh [24], Theorem 4.1).
Let F(λ) = A(λ)^{−1} B(λ) be an irreducible strictly proper rational left λ-matrix of degree p and order (d, m), and suppose that A(λ) has a complete set of regular right solvents {R_k : k = 1, …, p}. Then

    F(λ) = ∑_{k=1}^p (λ I_d − R_k)^{−1} Res[F, R_k].

An assumption of Theorem 2.8 is that the right solvents are regular, which excludes right solvents with multiplicities.

A formula for the calculation of a matrix residue is given in Leyva-Ramos [17, Section 6, eq. (6.13)]: Suppose the strictly proper left λ-matrix F(λ) = A(λ)^{−1} B(λ) is irreducible and A(λ) has a complete set of regular right solvents {R_k : k = 1, …, p}. Notice that the matrix A as defined in (1.5) is non-singular because A has the only eigenvalue 1. Then due to Lemma 2.5 the Vandermonde matrix V(R_1, …, R_p) is non-singular (cf. Leyva-Ramos [17, Definition 4]) and

    ( Res[F, R_1]
      ⋮
      Res[F, R_p] ) = V(R_1, …, R_p)^{−1} A^{−1} B.    (2.1)

Finally, the question arises how to calculate the right solvents of the λ-matrix A(λ). A possibility to characterize a right solvent is by latent roots and latent vectors as is done in Dennis et al. [9].

Definition 2.9.
Let A(λ) be a λ-matrix of order d. If λ_i ∈ C satisfies det(A(λ_i)) = 0, then λ_i is called a latent root of A(λ). A non-zero vector p_i ∈ C^d satisfying A(λ_i) p_i = 0_d is called a right latent vector of A(λ) associated to the latent root λ_i.

Theorem 2.10.
Suppose the monic λ-matrix A(λ) has distinct latent roots λ_1, …, λ_{pd} with corresponding right latent vectors p_1, …, p_{pd}, respectively. Define P_k := (p_{(k−1)d+1}, …, p_{kd}) ∈ C^{d×d} and Λ_k := diag(λ_{(k−1)d+1}, …, λ_{kd}) for k = 1, …, p, where the latent vectors are grouped such that the matrices P_k are non-singular.

(a) Then R_k := P_k Λ_k P_k^{−1} for k = 1, …, p is a complete set of regular right solvents of A(λ).

(b) Suppose the strictly proper left λ-matrix F(λ) = A(λ)^{−1} B(λ) is irreducible. Then the residues of F(λ) can be calculated as in (2.1) and

    F(λ) = ∑_{k=1}^p (λ I_d − R_k)^{−1} Res[F, R_k].

Proof. (a) is proven in Dennis et al. [9], Theorem 4.5. (b) follows from (a) and Theorem 2.8.
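Theorem 2.10 and the residue formula (2.1) can be tried out numerically. The sketch below (an illustration, not code from the paper) uses the λ-matrix of Example 3.5 in Section 3 (d = 2, p = 2, q = 0, B(λ) = I_2): it obtains the latent roots as eigenvalues of the block companion matrix, groups them into two sets of d, builds the solvents, and checks the partial-fraction identity of Theorem 2.8.

```python
import numpy as np

# lambda-matrix A(lam) = I*lam^2 + A1*lam + A2 from Example 3.5 (d = 2, p = 2).
A1 = np.array([[-11., 22.], [-12., 21.]])
A2 = np.array([[-42., 52.], [-36., 44.]])
d = 2
I = np.eye(d)

# Latent roots = eigenvalues of the block companion matrix of A(lam);
# the first d components of each eigenvector are right latent vectors.
Astar = np.block([[np.zeros((d, d)), I], [-A2, -A1]])
lam, vecs = np.linalg.eig(Astar)
order = np.argsort(lam.real)                   # roots -4, -3, -2, -1
lam, vecs = lam[order], vecs[:, order]

def solvent(idx):
    """R_k = P_k Lambda_k P_k^{-1} built from a group of d latent roots."""
    Pk = vecs[:d, idx]
    return (Pk @ np.diag(lam[idx]) @ np.linalg.inv(Pk)).real

R1 = solvent([3, 2])                           # latent roots -1, -2
R2 = solvent([1, 0])                           # latent roots -3, -4

# Right-solvent property A_R(R) = R^2 + A1 R + A2 = 0 (Definition 2.2).
for R in (R1, R2):
    assert np.allclose(R @ R + A1 @ R + A2, 0, atol=1e-8)

# Residues via (2.1): stacked residues = V(R1, R2)^{-1} A^{-1} B, with
# A = [[I, 0], [A1, I]] and B = [0; B_0] from (1.5), here B_0 = I.
V = np.block([[I, I], [R1, R2]])
Acal = np.block([[I, np.zeros((d, d))], [A1, I]])
Bcal = np.vstack([np.zeros((d, d)), I])
Res = np.linalg.solve(V, np.linalg.solve(Acal, Bcal))
Res1, Res2 = Res[:d], Res[d:]

# Partial-fraction form of Theorem 2.8 at an arbitrary test point:
# F(lam) = A(lam)^{-1} B(lam) = sum_k (lam*I - R_k)^{-1} Res_k.
z = 1.7
F = np.linalg.inv(z**2 * I + z * A1 + A2)      # B(lam) = I_2
F_sum = np.linalg.inv(z * I - R1) @ Res1 + np.linalg.inv(z * I - R2) @ Res2
assert np.allclose(F, F_sum, atol=1e-8)
```

With this grouping one recovers the solvent pair R_1, R_2 and the residues ±(R_1 − R_2)^{−1} appearing in Example 3.5.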
3. The MCARMA process as a sum of Ornstein-Uhlenbeck processes

In this section we present criteria for an MCARMA process to be a sum of multivariate Ornstein-Uhlenbeck processes. For the rest of the paper we will assume the following:
Assumption A.
Let A(λ), B(λ) be defined as in (1.2) and let F(λ) = A(λ)^{−1} B(λ) be irreducible. Assume further that A(λ) has a complete set of regular right solvents {R_k : k = 1, …, p}.

Instead of assuming that the right solvents {R_k : k = 1, …, p} are complete and regular, it is equivalent to assume that V(R_1, …, R_p) is non-singular (see Lemma 2.5). A sufficient condition for A(λ) to have a complete set of regular right solvents is that A* as defined in (1.3) has distinct eigenvalues, because σ(A*) := {λ ∈ C : det(A* − λ I_{pd}) = 0} = σ(A(·)) due to Marquardt and Stelzer [19, Lemma 3.8], such that the statement follows by Theorem 2.10. However, this is only a sufficient but not a necessary assumption.

Theorem 3.1.
Define for k = 1, …, p the multivariate complex-valued Ornstein-Uhlenbeck processes

    Y_k(t) = e^{R_k t} Y_k(0) + ∫_0^t e^{R_k(t−u)} Res[F, R_k] dL(u), t ≥ 0,    (3.1)

with some initial condition Y_k(0) in C^d such that V(R_1, …, R_p)[Y_1(0)^⊤, …, Y_p(0)^⊤]^⊤ ∈ R^{pd}. Then Y(t) = ∑_{k=1}^p Y_k(t) is an R^d-valued solution of the state space model (1.4) and hence, an MCARMA(p, q) process.

Proof. Of course,

    Y(t) = C* e^{A* t} X(0) + ∫_0^t C* e^{A*(t−u)} B* dL(u)

is an R^d-valued solution of the state space model (1.4) with some initial condition X(0) ∈ R^{pd}. Define E* := [I_d, …, I_d] ∈ R^{d×pd},

    F* := ( Res[F, R_1]
            ⋮
            Res[F, R_p] ) ∈ C^{pd×d}

and R* := diag(R_1, …, R_p) ∈ C^{pd×pd} as a block diagonal matrix. Due to (2.1) and (1.6) the relations

    F* = V(R_1, …, R_p)^{−1} A^{−1} B = V(R_1, …, R_p)^{−1} B*,
    A* V(R_1, …, R_p) = V(R_1, …, R_p) R*  and  C* V(R_1, …, R_p) = E*

hold, where we used that R_k is a right solvent of A(λ). Therefore, define T := V(R_1, …, R_p), Y*(0) := [Y_1(0)^⊤, …, Y_p(0)^⊤]^⊤ and X*(0) := T Y*(0) ∈ R^{pd}, such that A* = T R* T^{−1}, B* = T F* and C* = E* T^{−1}. In particular, e^{A* t} = T e^{R* t} T^{−1}, t ∈ R. Then for t ≥ 0

    Y(t) = C* e^{A* t} X*(0) + ∫_0^t C* e^{A*(t−u)} B* dL(u)
         = E* e^{R* t} T^{−1} T Y*(0) + ∫_0^t E* e^{R*(t−u)} F* dL(u)
         = ∑_{k=1}^p [ e^{R_k t} Y_k(0) + ∫_0^t e^{R_k(t−u)} Res[F, R_k] dL(u) ]
         = ∑_{k=1}^p Y_k(t)

is R^d-valued.

Remark 3.2.
(a) If A* has only distinct eigenvalues, then Theorem 2.10 gives the possibility to calculate a complete set of regular right solvents. Due to Equation (2.1) we are able to calculate the residues as well. Thus, we obtain via (3.1) a representation of the MCARMA process as a sum of Ornstein-Uhlenbeck processes.
(b) Since the solvents R_1, . . .
, R_p are not unique, the representation of Y as a sum of Ornstein-Uhlenbeck processes is not unique as well (cf. Example 3.5); only in the case d = 1 is it unique. Moreover, any linear combination ∑_{k=1}^p L_k Y_k(t), t ≥
0, where L_1, …, L_p ∈ R^{d×d}, of R^d-valued multivariate Ornstein-Uhlenbeck processes Y_1, …, Y_p is an MCARMA(p, p − 1) process. But the exponent R_k in the definition of Y_k is not necessarily a right solvent of the autoregressive polynomial of the MCARMA process. This is essential to derive a VARMA representation of the discrete-time sampled MCARMA process later on.

Corollary 3.3.
Suppose σ(A(·)) ⊂ (−∞, 0) + iR and E[log(max(1, ‖L(1)‖))] < ∞. Define for k = 1, …, p the multivariate complex-valued Ornstein-Uhlenbeck processes

    Y_k(t) = ∫_{−∞}^t e^{R_k(t−u)} Res[F, R_k] dL(u), t ∈ R.

Then Y(t) = ∑_{k=1}^p Y_k(t) = ∫_{−∞}^t ∑_{k=1}^p e^{R_k(t−u)} Res[F, R_k] dL(u), t ∈ R, is a stationary R^d-valued solution of the state space model (1.4) and hence, an MCARMA(p, q) process.

Due to Sato and Yamazato [20, Theorem 4.1] the stationary Ornstein-Uhlenbeck processes Y_k are well-defined.

Remark 3.4.
(a) Let Γ_k be a simple closed contour such that σ(R_k) lies in the interior of Γ_k and the residuary spectrum σ(A(·)) \ σ(R_k) lies in the exterior of Γ_k, and let Γ := ⋃_{k=1}^p Γ_k. Due to Cauchy's integral formula (see Lax [16, Theorem 17.5]) and Theorem 2.8 we obtain for t ≥ 0

    ∑_{k=1}^p e^{tR_k} Res[F, R_k] = (1/(2πi)) ∑_{k=1}^p ∮_Γ e^{tλ} (λ I_d − R_k)^{−1} Res[F, R_k] dλ = (1/(2πi)) ∮_Γ e^{tλ} F(λ) dλ.

In particular, if σ(A(·)) ⊂ (−∞, 0) + iR, then the kernel function satisfies for t ≥ 0

    ∑_{k=1}^p e^{tR_k} Res[F, R_k] = (1/(2π)) ∫_{−∞}^∞ e^{tiω} F(iω) dω.

(b) In the case of repeated right solvents, Y does not have the representation as a sum of multivariate Ornstein-Uhlenbeck processes, due to the representation F(λ) = ∑_{k=1}^µ ∑_{j=1}^{ν_k} (λ I_d − R_k)^{−j} F_{k,j} in Shieh et al. [22], and hence

    ∑_{k=1}^p e^{tR_k} Res[F, R_k] ≠ (1/(2πi)) ∮_Γ e^{tλ} F(λ) dλ

(cf. Brockwell and Lindner [6, Lemma 2.4] in the case of one-dimensional CARMA processes).

Example 3.5.
Let

    A(λ) = I_2 λ² + ( −11  22 ; −12  21 ) λ + ( −42  52 ; −36  44 )  and  B(λ) = I_2, λ ∈ C,

be given. Then

    R_1 = ( 0  −1 ; 2  −3 ),  R_2 = ( −3  −2 ; 0  −4 ),  R_3 = ( −7  6 ; −3  2 ),  R_4 = ( −3  0.5 ; 0  −2 )

are right solvents of A(λ). The pair R_1, R_2 and the pair R_3, R_4, respectively, build a complete set of regular right solvents of A(λ). Then Theorem 3.1 and the formula for the residues (2.1) give that both Y_1(t) + Y_2(t) with

    Y_1(t) = e^{R_1 t} Y_1(0) + ∫_0^t e^{R_1(t−u)} ( 1  −1 ; −2  3 ) dL(u),
    Y_2(t) = e^{R_2 t} Y_2(0) + ∫_0^t e^{R_2(t−u)} ( −1  1 ; 2  −3 ) dL(u),

and Y_3(t) + Y_4(t) with

    Y_3(t) = e^{R_3 t} Y_3(0) + ∫_0^t e^{R_3(t−u)} ( 8  −11 ; 6  −8 ) dL(u),
    Y_4(t) = e^{R_4 t} Y_4(0) + ∫_0^t e^{R_4(t−u)} ( −8  11 ; −6  8 ) dL(u),

are MCARMA(2, 0) processes with AR polynomial A(λ) and MA polynomial B(λ).

For the rest of the paper we assume:

Assumption B.
Y has the representation as given in Theorem 3.1, E‖L(1)‖² < ∞, EL(1) = 0_m, and Σ_L := E[L(1) L(1)^⊤] denotes the covariance matrix of L(1).

Now we are able to present an alternative representation of the covariance function of a stationary MCARMA process.
Proposition 3.6.
Suppose the setting of Corollary 3.3. The covariance function (γ_Y(l))_{l≥0} = (Cov(Y(t + l), Y(t)))_{l≥0} of Y has the representation

    γ_Y(l) = ∑_{i=1}^p e^{lR_i} Σ_i, l ≥ 0,  where  Σ_i := ∑_{j=1}^p ∫_0^∞ e^{uR_i} Res[F, R_i] Σ_L Res[F, R_j]^H e^{uR_j^H} du,

and for a matrix Z ∈ C^{d×d} we denote by Z^H the transposed complex conjugate of Z.

Proof. An application of Corollary 3.3 gives

    γ_Y(l) = ∑_{i,j=1}^p Cov( ∫_{−∞}^{t+l} e^{R_i(t+l−u)} Res[F, R_i] dL(u), ∫_{−∞}^t e^{R_j(t−u)} Res[F, R_j] dL(u) )
           = ∑_{i,j=1}^p e^{lR_i} ∫_0^∞ e^{uR_i} Res[F, R_i] Σ_L Res[F, R_j]^H e^{uR_j^H} du,

which completes the proof.

A final aim is to derive a VARMA representation for an MCARMA process observed at discrete time points. To distinguish the notation between the continuous-time process and the sampled discrete-time process, we write Y_n^{(h)} for Y(nh) in the following and accordingly Y_{k,n}^{(h)} for Y_k(nh) for some fixed h > 0.

Lemma 3.7.
For any k = 1, …, p, n ≥ p, l = 1, …, n and any matrices C_1, …, C_l ∈ C^{d×d} it holds that

    Y_{k,n}^{(h)} = ∑_{r=1}^l C_r Y_{k,n−r}^{(h)} + ( e^{hlR_k} − ∑_{r=1}^l C_r e^{h(l−r)R_k} ) Y_{k,n−l}^{(h)} + ∑_{r=0}^{l−1} ( e^{hrR_k} − ∑_{j=1}^r C_j e^{h(r−j)R_k} ) N_{k,n−r}^{(h)},

where N_{k,n}^{(h)} = ∫_{(n−1)h}^{nh} e^{R_k(nh−u)} Res[F, R_k] dL(u).

Proof. The proof goes in the same vein as the proof of equation (2.8) in Brockwell and Lindner [6] for scalars c_1, …, c_l instead of matrices C_1, …, C_l, since Y_k is a multivariate Ornstein-Uhlenbeck process.

Eventually, we obtain a VARMA(p, p − 1) representation for the sampled version of an MCARMA(p, q) process.

Theorem 3.8.
Define Ψ_0 := I_d and

    [Ψ_p, …, Ψ_1] := −[e^{−phR_1}, …, e^{−phR_p}] V^{−1}(e^{−hR_1}, …, e^{−hR_p}) ∈ C^{d×pd},

and the λ-matrix Φ(λ) := I_d − Φ_1 λ − … − Φ_p λ^p of degree p and order d with Φ_j := −Ψ_p^{−1} Ψ_{p−j} for j = 1, …, p. Then there exists a λ-matrix Θ(λ) = I_d + Θ_1 λ + … + Θ_{p−1} λ^{p−1} of degree p − 1 and order d such that

    Φ(B) Y_n^{(h)} = Θ(B) ε_n^{(h)}, n ≥ p,    (3.2)

where B denotes the backshift operator (i.e. B^j Y_n^{(h)} = Y_{n−j}^{(h)} for j ∈ N) and (ε_n^{(h)})_{n≥p} is a d-dimensional weak white noise. Thus, (Y_n^{(h)})_{n≥p} admits a weak VARMA(p, p − 1) representation.

Proof. First, we will show that Φ(λ) is well-defined and has the complete set of regular right solvents e^{−hR_1}, …, e^{−hR_p}. Due to Assumption A and Lemma 2.5, the Vandermonde matrix V(e^{−hR_1}, …, e^{−hR_p}) is non-singular and finally, Ψ_1, …, Ψ_p are well-defined. A conclusion of Assumption A and Lemma 2.6 is then that e^{−hR_1}, …, e^{−hR_p} is a complete set of regular right solvents of Ψ(λ) := I_d λ^p + Ψ_1 λ^{p−1} + … + Ψ_p. Note that Ψ_p = (−1)^p e^{−hR*_p} ⋯ e^{−hR*_2} e^{−hR_1}, where R*_2, …, R*_p are defined as in Lemma 2.6. Since the eigenvalues of e^{−hR*_k}, k = 2, …, p, and of e^{−hR_1} are non-zero, the matrix Ψ_p is non-singular. Finally, Φ(λ) = Ψ_p^{−1} Ψ(λ) is well-defined and has the complete set of regular right solvents e^{−hR_1}, …, e^{−hR_p}.

Due to (3.1) we obtain that Y_n^{(h)} = ∑_{k=1}^p Y_{k,n}^{(h)}, n ≥ p, where

    Y_{k,n}^{(h)} = e^{hR_k} Y_{k,n−1}^{(h)} + N_{k,n}^{(h)}  and  N_{k,n}^{(h)} = ∫_{(n−1)h}^{nh} e^{R_k(nh−u)} Res[F, R_k] dL(u)

(cf. Schlemm and Stelzer [21, Lemma 5.2]). An application of Lemma 3.7 with l = p and C_r = Φ_r for r = 1, …, p gives

    Y_{k,n}^{(h)} = ∑_{r=1}^p Φ_r Y_{k,n−r}^{(h)} + ( e^{hpR_k} − ∑_{r=1}^p Φ_r e^{h(p−r)R_k} ) Y_{k,n−p}^{(h)} + ∑_{r=0}^{p−1} ( e^{hrR_k} − ∑_{j=1}^r Φ_j e^{h(r−j)R_k} ) N_{k,n−r}^{(h)}.
The fact that e^{−hR_k} is a right solvent of Φ(λ) implies that

    e^{hpR_k} − ∑_{r=1}^p Φ_r e^{h(p−r)R_k} = Φ(e^{−hR_k}) e^{phR_k} = 0_{d×d}, k = 1, …, p.

Hence, we obtain

    Φ(B) Y_{k,n}^{(h)} = Y_{k,n}^{(h)} − ∑_{r=1}^p Φ_r Y_{k,n−r}^{(h)} = ∑_{r=0}^{p−1} ( e^{hrR_k} − ∑_{j=1}^r Φ_j e^{h(r−j)R_k} ) N_{k,n−r}^{(h)}  and  U_n^{(h)} := ∑_{k=1}^p Φ(B) Y_{k,n}^{(h)}.    (3.3)

Define for r = 0, …, p − 1 the sequences (W_{r,n}^{(h)})_{n∈Z} in C^d as

    W_{r,n}^{(h)} := ∫_{(n−1)h}^{nh} ∑_{k=1}^p ( e^{hrR_k} − ∑_{j=1}^r Φ_j e^{h(r−j)R_k} ) e^{R_k(nh−u)} Res[F, R_k] dL(u).    (3.4)

Summation over k and rearranging leads to

    Φ(B) Y_n^{(h)} = U_n^{(h)} = ∑_{r=0}^{p−1} W_{r,n−r}^{(h)}, n ≥ p.

Since ([W_{0,n}^{(h)⊤}, …, W_{p−1,n}^{(h)⊤}]^⊤)_{n∈Z} is a sequence of iid random vectors, the d-dimensional sequence (U_n^{(h)})_{n∈Z} is (p − 1)-dependent. Define

    ε_n^{(h)} := U_n^{(h)} − P_{M_{n−1}} U_n^{(h)}, n ∈ Z,

where P_{M_{n−1}} denotes the orthogonal projection on M_{n−1} := sp̄{U_j^{(h)} : −∞ < j ≤ n − 1} and the closure is taken in the Hilbert space of square integrable complex random vectors with inner product ⟨U_1, U_2⟩ = E(U_1^H U_2) for random vectors U_1, U_2 in C^d. Then Θ_1, …, Θ_{p−1} are given as the solution of the equation

    P_{sp{ε_{n−p+1}^{(h)}, …, ε_{n−1}^{(h)}}} U_n^{(h)} = Θ_1 ε_{n−1}^{(h)} + … + Θ_{p−1} ε_{n−p+1}^{(h)}.

As in the proof of Brockwell and Davis [4, Proposition 3.2.1] for one-dimensional (p − 1)-dependent processes, the statement then follows.

Remark 3.9.
(a) Characteristic is that the λ-matrix Ψ(λ) has the complete set of right solvents e^{−hR_1}, …, e^{−hR_p}, but due to Lemma 2.6 it does not necessarily have the representation ∏_{k=1}^p (λ I_d − e^{−hR_k}). Thus, the λ-matrix Φ(λ) is not necessarily Ψ_p^{−1} ∏_{k=1}^p (λ e^{hR_k} − I_d). This differs from the one-dimensional case, where multiplication is commutative. However, Ψ(λ) is the unique λ-matrix with right solvents e^{−hR_1}, . . .
, e^{−hR_p} and leading coefficient Ψ_0 = I_d.
(b) If σ(A(·)) ⊂ (−∞, 0) + iR holds, then

    σ(Ψ(·)) = ⋃_{k=1}^p σ(e^{−hR_k}) = { e^{−hλ} : λ ∈ ⋃_{k=1}^p σ(R_k) } = { e^{−hλ} : λ ∈ σ(A(·)) }

lies outside the closed unit disc and hence, Ψ(λ) is Schur-stable.

Finally, we state the covariance function of the series U^{(h)} := (U_n^{(h)})_{n≥p} given in (3.3). The second-order properties of the series U^{(h)} are of interest for indirect estimation as is done, e.g., in Fasen-Hartmann and Kimmig [10] for CARMA processes. The basic idea is that the VARMA parameters of (Y(nh))_{n∈N} are estimated by standard techniques. Taking identifiability issues into account, the autoregressive parameters of the continuous-time process are then estimated from the autoregressive parameters of the discrete-time VARMA process. Finally, a comparison of the autocorrelation function of U^{(h)} for the estimated and the parametric model gives the moving average parameters of the MCARMA process.

Proposition 3.10.
Let (U_n^{(h)})_{n≥p} be the d-dimensional time series defined by Φ(B) Y_n^{(h)} = U_n^{(h)}, and let (γ_{U^{(h)}}(l))_{l∈Z} = (Cov(U_{n+l}^{(h)}, U_n^{(h)}))_{l∈Z} denote its autocovariance function. Then for l = 0, …, p − 1,

    γ_{U^{(h)}}(l) = ∑_{ν=1}^p ∑_{µ=1}^p ∑_{r=0}^{p−l−1} ( e^{h(r+l)R_ν} − ∑_{j=1}^{r+l} Φ_j e^{h(r+l−j)R_ν} ) Σ_{ν,µ}^{(h)} ( e^{hrR_µ} − ∑_{j=1}^r Φ_j e^{h(r−j)R_µ} )^H,

and γ_{U^{(h)}}(l) = 0_{d×d} for l ≥ p, where

    Σ_{ν,µ}^{(h)} := Cov( N_{ν,1}^{(h)}, N_{µ,1}^{(h)} ) = ∫_0^h e^{R_ν u} Res[F, R_ν] Σ_L Res[F, R_µ]^H e^{R_µ^H u} du.

Proof. For lag l ∈ {0, …, p − 1} we obtain due to (3.4):

    γ_{U^{(h)}}(l) = Cov( W_{0,n+l}^{(h)} + … + W_{p−1,n+l−p+1}^{(h)}, W_{0,n}^{(h)} + … + W_{p−1,n−p+1}^{(h)} )
     = ∑_{r=0}^{p−l−1} Cov( W_{r+l,n−r}^{(h)}, W_{r,n−r}^{(h)} )
     = ∑_{r=0}^{p−l−1} ∑_{ν=1}^p ∑_{µ=1}^p ( e^{h(r+l)R_ν} − ∑_{j=1}^{r+l} Φ_j e^{h(r+l−j)R_ν} ) Cov( N_{ν,n−r}^{(h)}, N_{µ,n−r}^{(h)} ) ( e^{hrR_µ} − ∑_{j=1}^r Φ_j e^{h(r−j)R_µ} )^H,

where Cov( N_{ν,n−r}^{(h)}, N_{µ,n−r}^{(h)} ) = Σ_{ν,µ}^{(h)}, and finally, the assertion follows.

References
[1] Benth, F., Koekebakker, S. and Zakamouline, V. (2014). The CARMA interest rate model. Journal of Theoretical and Applied Finance.
[2] Bernstein, D. S. (2009). Matrix Mathematics: Theory, Facts, and Formulas.
[3] Brockwell, P. and Lindner, A. (2019). Sampling, embedding and inference for CARMA processes. J. Time Ser. Anal.
[4] Brockwell, P. J. and Davis, R. A. (1998). Time Series: Theory and Methods.
[5] Brockwell, P. J., Davis, R. A. and Yang, Y. (2011). Estimation for non-negative Lévy-driven CARMA processes. J. Bus. Econom. Statist.
[6] Brockwell, P. J. and Lindner, A. (2009). Existence and uniqueness of stationary Lévy-driven CARMA processes. Stochastic Process. Appl.
[7] Chambers, M., McCrorie, J. and Thornton, M. (2018). Continuous time modelling based on an exact discrete time representation. In Continuous Time Modeling in the Behavioral and Related Sciences, ed. K. van Montfort, J. Oud, and M. Voelkle. Springer. pp. 317–357.
[8] Chambers, M. and Thornton, M. (2012). Discrete time representations of continuous time ARMA processes. Econometric Theory.
[9] Dennis, J., Traub, J. and Weber, R. (1976). The algebraic theory of matrix polynomials. SIAM J. Numer. Anal.
[10] Fasen-Hartmann, V. and Kimmig, S. (2020). Robust estimation of stationary continuous-time ARMA models via indirect inference. J. Time Series Anal.
[11] Fasen-Hartmann, V. and Scholz, M. (2020). Cointegrated continuous-time linear state space and MCARMA models. Stochastics.
[12] Garnier, H. and Wang, L., Eds. (2008). Identification of Continuous-time Models from Sampled Data. Advances in Industrial Control. Springer, London.
[13] Horn, R. A. and Johnson, C. R. (2013). Matrix Analysis.
[14] Kailath, T. (1980). Linear Systems. Prentice-Hall, Inc., Englewood Cliffs, N.J.
[15] Larsson, E. K., Mossberg, M. and Söderström, T. (2006). An overview of important practical aspects of continuous-time ARMA system identification. Circuits Systems Signal Process.
[16] Lax, P. (2002). Functional Analysis. Pure and Applied Mathematics. Wiley, New York.
[17] Leyva-Ramos, J. (1991). Partial-fraction expansion in system analysis. Internat. J. Control.
[18] Maroulas, J. (1985). Factorization of matrix polynomials with multiple roots. Linear Algebra Appl.
[19] Marquardt, T. and Stelzer, R. (2007). Multivariate CARMA processes. Stochastic Process. Appl.
[20] Sato, K.-i. and Yamazato, M. (1984). Operator-self-decomposable distributions as limit distributions of processes of Ornstein-Uhlenbeck type. Stochastic Process. Appl.
[21] Schlemm, E. and Stelzer, R. (2012). Multivariate CARMA processes, continuous-time state space models and complete regularity of the innovations of the sampled processes. Bernoulli.
[22] Shieh, L. S., Chang, F. and McInnis, B. C. (1986). The block partial fraction expansion of a matrix fraction description with repeated block poles. IEEE Trans. Automat. Control.
[23] Todorov, V. (2009). Estimation of continuous-time stochastic volatility models with jumps using high-frequency data. J. Econometrics.
[24] Tsay, Y. T. and Shieh, L. S. (1982). Some applications of rational matrices to problems in systems theory. Internat. J. Systems Sci.