Cointegrated Solutions of Unit-Root VARs: An Extended Representation Theorem
Mario Faliva and Maria Grazia Zoia*

* Corresponding author. Department of Economic Policy, Università Cattolica del Sacro Cuore, Largo Gemelli 1, 20123 Milano, Italy. Tel: +39 02 7234 2948, Fax: +39 02 7234 2324. [email protected]

Department of Economic Policy, Università Cattolica del Sacro Cuore, Largo Gemelli 1, 20123 Milano, Italy. [email protected]
Abstract
This paper establishes an extended representation theorem for unit-root VARs. A specific algebraic technique is devised to recover stationarity from the solution of the model in the form of a cointegrating transformation. Closed forms of the results of interest are derived for integrated processes up to the 4-th order. An extension to higher-order processes turns out to be within the reach of an induction argument.

Keywords: Unit roots, VAR models, Cointegrated solutions, Stationarity recovering, Parallel sum.
JEL codes : C01, C02, C32
Introduction
As is well known, the solution of a unit-root VAR,
A(L) y_t = ε_t, crucially rests on the inversion of the matrix polynomial A(L) = ∑_{k=0}^{K} A_k L^k and eventually of the isomorphic matrix A(z) in the complex variable z about the unit root z = 1. The solution takes the form of an integrated stochastic process, where stationarity can be recovered by a cointegrating transformation. The inversion of a matrix polynomial plays a crucial role in the representation theory of linear processes. The seminal and best known contribution to this topic is the so-called Granger representation theorem by Granger (1981), Granger (1983) and Engle and Granger (1987), which addressed the issue of the inversion of the matrix polynomial inherent in the MA representation of an integrated process and derived the so-called error-correction model. Since then, a stream of research and contributions has been registered on this issue, leading to a specialized literature. Among them, Phillips (1991), Phillips and Hansen (1990), Sims et al. (1990) and Stock and Watson (1993), who worked out the triangular representation, and Engle and Yoo (1991) and Haldrup and Salmon (1998), who introduced the use of the Smith-McMillan form of A(L). Johansen (Johansen (1985), Johansen (1991), Johansen (1992) and Johansen (1996)) developed Granger's representation theorem in the context of integrated VAR models and established the necessary and sufficient conditions for the occurrence of first and second-order integrated processes (see Franchi and Paruolo (2019b) for a detailed discussion of the history of the Granger representation theorem). Schumacher (1991) first pointed out that Johansen's I(1) conditions could be restated in terms of the existence of a simple pole in the inverse of the VAR autoregressive polynomial. Several authors have investigated the problem of the inversion of a matrix polynomial about a pole (see e.g., Avrachenkov et al.
(2001), Faliva and Zoia (2002), Faliva and Zoia (2003), Faliva and Zoia (2009), Faliva and Zoia (2011), Langenhop (1971), Franchi and Paruolo (2019b)) ever since. This topic has also recently been revisited from the Hilbert-space standpoint, via the theory of functional time series, by Beare et al. (2017), Beare and Seo (2019) and Franchi and Paruolo (2019a). This paper develops an approach to unit-root VAR models which crucially hinges on the Laurent expansion of the inverse of the isomorphic matrix polynomial A(z). This eventually leads to determining the VAR solution, corresponding to the order of the pole of
A(z)^{-1}, along with its integration and cointegration properties. Closed-form expressions of the results of interest are provided for VAR models with (co)integrated solutions up to the 4-th order. By an induction argument, the analysis can be further advanced to cover processes of higher orders. The paper develops in a twofold way. Once an extended unit-root VAR representation theorem is stated in Section 1, the paper switches to set up the required analytical apparatus, which is provided in Sections 2 and 3. Here, closed-form expressions of the principal-part matrices in the Laurent expansion of A(z)^{-1} about a unit root are worked out. The key issue of recovering stationarity, via a linear transformation of A(z)^{-1} which annihilates the principal part, is successfully addressed. The algebraic set-up of the paper pivots around the twin equalities A^{-1}(z) A(z) = I and A(z) A^{-1}(z) = I, which hold true in a deleted neighbourhood of z =
1. The twin equation systems that arise from the said equalities allow us to obtain informative closed-form expressions of the principal-part matrices. It can be shown that, besides the leading matrix, all the other principal-part matrices obey a regular scheme, as they can be expressed as a sum of two components: a term whose representation does not vary with the order of the pole and another which changes according to the latter. For a principal-part matrix,
N_j, which weights the power (z − 1)^{−j}, the former term turns out to be a linear combination of all the principal-part matrices weighting (negative) higher-order powers of (z − 1), namely the matrices N_i weighting (z − 1)^{−i}, i = j + 1, ..., m, where m is the order of the pole. The coefficient matrices of the said linear combination play a crucial role in the cointegration analysis. The multiplicity of the pole is determined, and analyticity at z = 1 is established for P A^{-1}(z), where P is a projection operator annihilating the principal part. Use of the notion of parallel sum of matrices is made to find out the cointegrating relationships and their rank. When the order of the pole is multiple, the problem of annihilating the principal part is solved by means of several operators that jointly meet the target. To this end, the parallel sum of matrices proves effective in combining all the projectors needed to annihilate the principal part. Having established the results we needed, the paper turns back to the econometric side of the problem and provides in Section 4 the proof of the unit-root VAR theorem of Section 1. From an econometric standpoint, the value added by the paper is attributable to the approach used to determine the unit-root VAR solutions and to the innovative stationarity-recovery technique. In the paper such topics are fully developed for VAR models with solutions integrated up to the 4-th order. Thus the paper crosses the virtual threshold of I(2) processes and clears the way to tackling cointegrated processes of higher order by induction. The paper is organized as follows. Section 1 formulates the extended theorem which provides the solution of unit-root VARs together with its integration and cointegration properties. Sections 2 and 3 work out the analytical premise and the algebraic results demanded to establish the main theorem of Section 1. Section 4 gives the proof of the said theorem. Section 5 provides some concluding remarks.
Two appendices complete the paper: the former is devoted to parallel sums of matrices and the latter derives the results and formulas demanded by Theorem 3.1.

In this section an extended representation theorem is established for unit-root VAR models whose solutions are integrated processes up to the 4-th order, and stationarity recovering via cointegration is thoroughly investigated. The latter is not only interesting in itself but plays a crucial role in economic analysis insofar as it offers a key to the interpretation of the long-run dynamics inherent in the economic phenomena under investigation (see e.g., Banerjee et al. (1993)). As the theorem demands an ad hoc analytic apparatus, its proof is postponed until the intended algebraic toolkit is made available in the next two sections. Let us now state the following

Theorem 1.1
Consider the VAR model

A(L) y_t = ε_t,  ε_t (n,1) ~ WN_n(0, Σ)  (1)

where

A(L) = I_n − ∑_{k=1}^{K} A_k L^k.  (2)

Let z = 1 be a root of multiplicity μ of the characteristic polynomial det A(z), with the other roots lying outside the unit circle. Then, the solution y_t of (1) is an integrated (I) and cointegrated process, that is

y_t ~ I(m)  (3)

P_m y_t ~ I(0)  (4)

where m (m ≤ μ) is the least positive integer for which the matrix

K_m = (B'_{(m−1)⊥} ··· B'_{1⊥} B'_⊥) A^{[m]} (C_⊥ C_{1⊥} ··· C_{(m−1)⊥})  (5)

is non-singular. The matrices B_j, C_j arise from the rank factorizations

A(1) = B C'  (6)

K_i = B_i C'_i,  1 ≤ i < m,  (7)

where the subscript ⊥ stands for orthogonal complement, and

A^{[1]} = A^{(1)}  if m = 1  (8)

A^{[2]} = (1/2) A^{(2)} − Ã^{(2)},  Ã^{(2)} = A^{(1)} A^+ A^{(1)}  if m = 2  (9)

A^{[3]} = (1/6) A^{(3)} − Ã^{(3)},  Ã^{(3)} = (1/2) A^{(2)} A^+ A^{(1)} + A^{(1)} A^+ A^{[2]} + A^{[2]} Θ_1 A^{[2]}  if m = 3  (10)

A^{[4]} = (1/24) A^{(4)} − Ã^{(4)}  if m = 4  (11)

where Ã^{(4)} is the analogous block-matrix correction built from A^{(1)} A^+, A^{(2)}, A^{[2]}, A^{[3]}, Θ_1 and Θ_2. Here A^+ denotes the Moore-Penrose generalized inverse of A,

A^{(k)} = ∂^k A(z)/∂z^k |_{z=1},  A^{(0)} = A(1) = A  (12)

Θ_1 = C_⊥ K_1^+ B'_⊥  (13)

Θ_2 = C_⊥ C_{1⊥} K_2^+ B'_{1⊥} B'_⊥  (14)

The cointegration matrices in (4) are

P_m = (C')^+ C'  if m = 1  (15)

P_m = (P_1 : Π_2)  if m = 2  (16)

P_m = (P_2 : Π_3)  if m = 3  (17)

P_m = (P_3 : Π_4)  if m = 4  (18)

where

Π_2 = (A^+ A^{(1)} C_⊥ C_{1⊥})^⊤  (19)

Π_3 = ((A^+ A^{(1)} C_⊥)^⊤ : (A^+ A^{[2]} C_⊥ C_{1⊥} C_{2⊥})^⊤)  (20)

Π_4 = (Π_{4,1} : (A^+ A^{[3]} C_⊥ C_{1⊥} C_{2⊥} C_{3⊥})^⊤)  (21)

with

Π_{4,1} = ((A^+ A^{(1)} C_⊥)^⊤ : (A^+ A^{[2]} C_⊥)^⊤)  (22)

Here X : Z = X (X + Z)^+ Z denotes the parallel sum of X and Z, and G^⊤ = I − G G^+. The cointegration ranks are

r(P_m) = r(C)  if m = 1  (23)

r(P_m) = r(C) − r(A^+ A^{(1)} C_⊥ C_{1⊥})  if m = 2  (24)

r(P_m) = r(C) − r([A^+ A^{(1)} C_⊥, A^+ A^{[2]} C_⊥ C_{1⊥} C_{2⊥}])  if m = 3  (25)

r(P_m) = r(C) − r([A^+ A^{(1)} C_⊥, A^+ A^{[2]} C_⊥]) + r(Ξ) − r([Ξ, Γ])  if m = 4  (26)

with Γ Γ^+ = Π_{4,1} and Ξ Ξ^+ = (A^+ A^{[3]} C_⊥ C_{1⊥} C_{2⊥} C_{3⊥})^⊤.

Proof
Go to Section 4. □
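The building blocks of the theorem (the rank factorization A(1) = B C', the orthogonal complements, and the test matrix K_1) can be illustrated numerically. The sketch below is not taken from the paper: the VAR coefficient matrix A1, the SVD-based factorization and the tolerances are assumptions made for the example.

```python
import numpy as np

# Toy VAR(1): A(L) = I - A1 L, so A(z) = I - A1 z.
# A(1) = I - A1 is singular (one unit root); the derivative A^(1)(1) = -A1.
A1 = np.array([[1.0, 0.0],
               [0.0, 0.5]])
A = np.eye(2) - A1          # A(1), singular
dA = -A1                    # first derivative of A(z) at z = 1

def rank_factorization(M, tol=1e-10):
    """Rank factorization M = B C' via the SVD (in the spirit of (6)-(7))."""
    U, s, Vt = np.linalg.svd(M)
    r = int(np.sum(s > tol))
    B = U[:, :r] * s[:r]    # n x r, full column rank
    C = Vt[:r, :].T         # n x r, full column rank
    return B, C

def orth_complement(X, tol=1e-10):
    """Columns spanning the orthogonal complement of col(X)."""
    U, s, _ = np.linalg.svd(X, full_matrices=True)
    r = int(np.sum(s > tol))
    return U[:, r:]

B, C = rank_factorization(A)
B_perp = orth_complement(B)
C_perp = orth_complement(C)

# K1 = B'_perp A^(1) C_perp; if K1 is non-singular the theorem gives m = 1,
# i.e. an I(1) solution.
K1 = B_perp.T @ dA @ C_perp
m_equals_one = abs(np.linalg.det(K1)) > 1e-10
```

For this toy model the factorization yields one common trend, K1 is a non-zero scalar, and the theorem classifies the solution as I(1).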
Several facts on the inversion of a matrix polynomial about a pole must be established before proving the theorem. This is done in the following two sections.
As the solution of a unit-root VAR model,
A(L) y_t = ε_t, crucially rests on the operator A^{-1}(L) (see e.g., Faliva and Zoia (2009), Sections 2.3 and 2.9), and the algebras of matrix polynomials in the lag operator L and in the complex variable z are isomorphic (Dhrymes (1971)), let us address the issue of the inversion of A(z) about a pole, z = z_0, with z_0 = 1. Starting from the identities A^{-1}(z) A(z) = I and A(z) A^{-1}(z) = I, which hold true in a deleted neighbourhood of the pole, we derive two equation systems which eventually allow us to work out closed-form expressions for the coefficient matrices of the principal part of A^{-1}(z).

Let

A(z) = ∑_{k=0}^{K} A_k z^k,  A_K (n,n) ≠ 0  (27)

be a matrix polynomial of order n and degree K, and let z_0 denote a root of det(A(z)) = 0. A Taylor expansion of A(z) about z = z_0 yields

A(z) = ∑_{k=0}^{K} (1/k!) A^{(k)} (z − z_0)^k  (29)

As the matrix function A^{-1}(z) is analytic throughout the z-plane except for the zeros, z_0, of det A(z), the following Laurent expansion of A^{-1}(z)

A^{-1}(z) = ∑_{j=−m}^{∞} N_j (z − z_0)^j = ∑_{j=−m}^{−1} N_j (z − z_0)^j + M(z)  (30)

holds in a deleted neighbourhood of the pole located at z = z_0 (see, e.g., Faliva and Zoia (2009)). Here m is the order of the pole. The first term on the right-hand side of (30) is the principal part, while M(z) = ∑_{j=0}^{∞} N_j (z − z_0)^j is the regular part.

In a deleted neighbourhood of z = z_0, the product of the right-hand sides of (30) and (29) yields the equalities

I_n = (∑_{j=−m}^{∞} N_j (z − z_0)^j)(∑_{k=0}^{K} (1/k!) A^{(k)} (z − z_0)^k) = ∑_{h=0}^{∞} (∑_{j=0}^{h} (1/j!) N_{h−m−j} A^{(j)}) (z − z_0)^{h−m}  (31)

In turn, reversing the order of multiplication yields

I_n = (∑_{k=0}^{K} (1/k!) A^{(k)} (z − z_0)^k)(∑_{j=−m}^{∞} N_j (z − z_0)^j) = ∑_{h=0}^{∞} (∑_{j=0}^{h} (1/j!) A^{(j)} N_{h−m−j}) (z − z_0)^{h−m}  (32)

The coefficient matrices on the right-hand side of (31) associated with negative powers of (z − z_0) are null matrices, whereas the matrix associated with (z − z_0)^0 is the identity matrix, that is

∑_{j=0}^{h} (1/j!) N_{h−m−j} A^{(j)} = 0 if 0 ≤ h ≤ m − 1,  = I_n if h = m  (33)

The same argument applies to equation (32), and

∑_{j=0}^{h} (1/j!) A^{(j)} N_{h−m−j} = 0 if 0 ≤ h ≤ m − 1,  = I_n if h = m  (34)

follows accordingly. Hereafter, the analysis is concerned with unit roots, z_0 = 1, which entails that A(1) = A is a singular matrix.
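For a simple pole, the leading Laurent coefficient in (30) can be checked numerically: N_{-1} is the limit of (z - 1) A^{-1}(z) as z tends to 1. The sketch below uses an assumed toy polynomial A(z) = I - A1 z (the matrix A1 and the projector P1 are illustrative choices, not taken from the paper):

```python
import numpy as np

A1 = np.array([[1.0, 0.0],
               [0.0, 0.5]])

def A(z):
    # Matrix polynomial A(z) = I - A1 z, singular at the unit root z = 1
    return np.eye(2) - A1 * z

# Numerical residue: N_{-1} ~ (z - 1) A^{-1}(z) for z close to 1
z = 1.0 + 1e-7
N_minus_1 = (z - 1.0) * np.linalg.inv(A(z))

# Exact value for this example: A^{-1}(z) = diag(1/(1-z), 1/(1-0.5 z)),
# whose principal part is diag(-1, 0) / (z - 1)
N_exact = np.diag([-1.0, 0.0])

# A projector annihilating the principal part: here (C')^+ C' = diag(0, 1),
# so P1 A^{-1}(z) is analytic at z = 1 (stationarity recovering)
P1 = np.diag([0.0, 1.0])
```

P1 @ N_minus_1 vanishes up to the numerical tolerance, which is the analyticity statement that, through the isomorphism with the lag-operator algebra, delivers stationarity of the transformed solution.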
In this section we establish under which conditions
A^{-1}(z) has a pole of order m at z = z_0 = 1, derive closed-form expressions of the principal-part matrices and determine linear functions of A^{-1}(z) which are analytic at z = 1, annihilating the principal part of A^{-1}(z). This is done in Theorem 3.1. Here, both a basic result on the leading principal-part matrix and useful representations of the non-leading ones are provided for multiple poles. These representations turn out to be the resultant of two terms: a term whose structure is maintained as the pole order changes and a term which is peculiar to the multiplicity of the pole and vanishes if the pole is simple. In order to unburden the exposition, the working out of formulas is left to Appendix B. Afterwards, Theorem 3.2 determines the order of the pole of A^{-1}(z) and gives closed-form representations of the leading principal-part matrix of the Laurent expansion of A^{-1}(z) about z = z_0 = 1.

Theorem 3.1
Let z = z_0 = 1 be a pole of A^{-1}(z). Then, the following holds. The leading principal-part matrix N_{−m} of the Laurent expansion of A^{-1}(z) about z = 1 has the representation

N_{−m} = C_⊥ Z_m B'_⊥  (35)

for some Z_m. Each non-leading principal-part matrix N_{−m+θ}, 1 ≤ θ < m ≤ 4, is composed of two terms,

N_{−m+θ} = Λ_θ + Λ_{m,θ}  (36)

The first term,
Λ_θ, has a structure which depends only on θ and plays a crucial role in determining the linear transformations which recover stationarity from the VAR solution. The matrix Λ_θ is the sum of the discrete convolutions of (1/j!) A^+ A^{(j)} and −N_{−m+θ−j}, of −N_{−m+θ−j} and (1/j!) A^{(j)} A^+, and of A^+ A N_{−m+θ−j} and (1/j!) A^{(j)} A^+, respectively, that is

Λ_θ = −A^+ (∑_{j=1}^{θ} (1/j!) A^{(j)} N_{−m+θ−j}) − (∑_{j=1}^{θ} N_{−m+θ−j} (1/j!) A^{(j)}) A^+ + A^+ A (∑_{j=1}^{θ−1} N_{−m+θ−j} (1/j!) A^{(j)}) A^+  (37)
Λ_{m,θ}, has a structure which depends on both θ and the order m of the pole. In particular, the following holds:

Λ_{m,1} = 0 if m < 2,  Λ_{m,1} = C_{1⊥} S_{1,m−1} B'_{1⊥} if m = 2,

Λ_{m,1} = −∑_{j=1}^{m−2} (Θ_j A^{[j+1]} N_{−m} + N_{−m} A^{[j+1]} Θ_j) + (C_{1⊥} ··· C_{(m−1)⊥}) S_{1,m−1} (B'_{(m−1)⊥} ··· B'_{1⊥}) if m > 2,  (38)

for θ = 1 and for some S_{1,m−1}. Here
Θ_1 and Θ_2 are the matrices specified in (13) and (14), respectively, and

K_1 = B'_⊥ A^{(1)} C_⊥  (39)

K_2 = B'_{1⊥} B'_⊥ A^{[2]} C_⊥ C_{1⊥}  (40)

Λ_{m,2} =
0 if m < 3,  Λ_{m,2} = C_{2⊥} S_{2,m−1} B'_{2⊥} if m = 3,

Λ_{m,2} = −Θ_1 A^{[2]} N_{−m+1} (B'_⊥)^+ K_1^⊤ B'_⊥ − Θ_2 Ȧ^{[2]} N_{−m+1} − C_⊥ C^+_⊥ N_{−m+1} A^{[2]} Θ_1 − N_{−m} Ă^{[2]} Θ_2 + C_⊥ C_{1⊥} S_{2,m−1} B'_{1⊥} B'_⊥ if m > 3,  (41)

for θ = 2 and for some S_{2,m−1}. Here Ȧ^{[2]} and Ă^{[2]} denote the matrices (169) and (170) in Appendix B.

Proof
The proof rests on the representations of N_{−m+1}, N_{−m+2} and N_{−m+3} of formulas (156), (161) and (173) (Appendix B). In particular, formula (38) rests on (156) together with (160) and (167) in the said Appendix, while formula (41) hinges on (161) and (171). □

At this point we can derive the results we are mostly interested in. To this end, the next theorem establishes the order of the pole of
A(z)^{-1} by a determinantal criterion, gives the closed-form representations of the leading matrices of the principal part for poles of order 1 ≤ m ≤
4, and determines which linear forms
P_m A(z)^{-1} = P_m M(z) recover analyticity at z = 1, the operators P_m being orthogonal to the principal part of A(z)^{-1}. It is worth noting that the propositions of the theorem which follows have an econometric counterpart of prominent interest, insofar as they clear the way, thanks to the isomorphism of the algebras in L and z, to the cointegration analysis of unit-root VAR models. Indeed, the order of the pole of A(z)^{-1} determines the integration order of the solution of A(L) y_t = ε_t, and the analyticity of P_m A(z)^{-1} pairs off with the stationarity of the linear transformations P_m y_t, which shed light on the otherwise hidden long-run relationships of an economic system.

Theorem 3.2
Let A(z) have a possibly repeated unit root, z = 1, and let A = B C' be a rank factorization of the singular matrix A(1) = A. Then, the following statements hold.

1. The unit root z = 1 is a simple pole of A^{-1}(z), that is m = 1, if

det K_1 ≠ 0  (42)

where K_1 is defined as in (39). Under (42), the matrix A^{-1}(z) has the Laurent expansion (30) about z = 1, with m = 1 and

N_{−1} = C_⊥ (B'_⊥ A^{(1)} C_⊥)^{−1} B'_⊥  (43)

as residue matrix. The matrix function P_1 A^{-1}(z) is analytic at z = 1 for

P_1 = (C')^+ C'  (44)

2. The unit root z = 1 is a double pole of A^{-1}(z), that is m = 2, if

det K_1 = 0  (45)

det K_2 ≠ 0  (46)

where K_2 is given by (40). Under (46), the matrix A^{-1}(z) has the Laurent expansion (30) about z = 1 with m = 2 and

N_{−2} = C_⊥ C_{1⊥} (B'_{1⊥} B'_⊥ A^{[2]} C_⊥ C_{1⊥})^{−1} B'_{1⊥} B'_⊥  (47)

as leading matrix of the principal part. The matrix function P_2 A^{-1}(z) is analytic at z = 1 for

P_2 = (P_1 : Π_2)  (48)

where
Π_2 = (A^+ A^{(1)} C_⊥ C_{1⊥})^⊤  (49)

The rank of P_2 is

r(P_2) = r(C) − r(A^+ A^{(1)} C_⊥ C_{1⊥})  (50)

3. The unit root z = 1 is a triple pole of A^{-1}(z), that is m = 3, if

det K_2 = 0  (51)

det K_3 ≠ 0  (52)

where K_3 = B'_{2⊥} B'_{1⊥} B'_⊥ A^{[3]} C_⊥ C_{1⊥} C_{2⊥}. Under (52), the matrix A^{-1}(z) has the Laurent expansion (30) about z = 1 with m = 3 and

N_{−3} = C_⊥ C_{1⊥} C_{2⊥} (B'_{2⊥} B'_{1⊥} B'_⊥ A^{[3]} C_⊥ C_{1⊥} C_{2⊥})^{−1} B'_{2⊥} B'_{1⊥} B'_⊥  (53)

as leading matrix of the principal part. The matrix function P_3 A^{-1}(z) is analytic at z = 1 for

P_3 = (P_2 : Π_3)  (54)

where
Π_3 = [(A^+ A^{(1)} C_⊥)^⊤ : (A^+ A^{[2]} C_⊥ C_{1⊥} C_{2⊥})^⊤]  (55)

The rank of P_3 is

r(P_3) = r(C) − r([A^+ A^{(1)} C_⊥, A^+ A^{[2]} C_⊥ C_{1⊥} C_{2⊥}])  (56)

4. The unit root z = 1 is a fourth-order pole of A^{-1}(z), that is m = 4, if

det K_3 = 0  (57)

det K_4 ≠ 0  (58)

where K_4 = B'_{3⊥} B'_{2⊥} B'_{1⊥} B'_⊥ A^{[4]} C_⊥ C_{1⊥} C_{2⊥} C_{3⊥}. Under (58), the matrix A^{-1}(z) has the Laurent expansion (30) about z = 1 with m = 4 and

N_{−4} = C_⊥ C_{1⊥} C_{2⊥} C_{3⊥} (B'_{3⊥} B'_{2⊥} B'_{1⊥} B'_⊥ A^{[4]} C_⊥ C_{1⊥} C_{2⊥} C_{3⊥})^{−1} B'_{3⊥} B'_{2⊥} B'_{1⊥} B'_⊥  (59)

as leading matrix of the principal part. The matrix function P_4 A^{-1}(z) is analytic at z = 1 for

P_4 = (P_3 : Π_4)  (60)

where
Π_4 = (Π_{4,1} : (A^+ A^{[3]} C_⊥ C_{1⊥} C_{2⊥} C_{3⊥})^⊤)  (61)

and

Π_{4,1} = ((A^+ A^{(1)} C_⊥)^⊤ : (A^+ A^{[2]} C_⊥)^⊤)  (62)

The rank of P_4 is

r(P_4) = r(C) − r([A^+ A^{(1)} C_⊥, A^+ A^{[2]} C_⊥]) + r(Ξ) − r([Ξ, Γ])  (63)

with Γ Γ^+ = Π_{4,1} and Ξ Ξ^+ = (A^+ A^{[3]} C_⊥ C_{1⊥} C_{2⊥} C_{3⊥})^⊤.

Proof
Let m =
1, then the following
N_{−m+1} A + N_{−m} A^{(1)} = I  (64)

holds true because of (33), and the other way around. Pre- and post-multiplication of (64) by
C^+_⊥ and C_⊥, respectively, and making use of (35), leads to the equation

Z_m K_1 = I  (65)

which is consistent if and only if
K_1 is non-singular. Solving for Z_m yields

Z_m = K_1^{−1}  (66)

and (43) follows from (35), accordingly. About the simple pole located at z = 1, the Laurent expansion (30) takes the form

A^{-1}(z) = (z − 1)^{−1} N_{−1} + M(z)  (67)

with N_{−1} given by (43). By inspection of (43) it is easy to see that

P_1 N_{−1} =
0  (68)

It follows that
P_1 A^{-1}(z) = P_1 M(z) is analytic at z = 1. If K_1 is singular, then equation (65) becomes inconsistent and we are facing a multiple pole. It follows that the right-hand sides of both (64) and (65) are no longer identity matrices, but null matrices instead. Eventually, the homogeneous equation

Z_m B_1 =
0  (69)

takes the place of (65). Equation (69) pairs off with
C'_1 Z_m =
0  (70)

which follows from (34) by using the same argument. Solving the systems (69) and (70) for
Z_m yields

Z_m = C_{1⊥} C^+_{1⊥} Ψ_m (B'_{1⊥})^+ B'_{1⊥} = C_{1⊥} Φ_m B'_{1⊥}  (71)

for some Ψ_m, with Φ_m = C^+_{1⊥} Ψ_m (B'_{1⊥})^+. The representation

N_{−m} = C_⊥ C_{1⊥} Φ_m B'_{1⊥} B'_⊥  (72)

follows from (35), accordingly. Now, let m =
2. Then, the following
N_{−m+2} A + N_{−m+1} A^{(1)} + (1/2) N_{−m} A^{(2)} = I  (73)

holds true because of (33), and the other way around. Pre- and post-multiplying (73) by
C^+_{1⊥} C^+_⊥ and C_⊥ C_{1⊥}, respectively, and making use of (156) and (72), leads to the equation

Φ_m K_2 = I  (74)

which is consistent if and only if
K_2 is non-singular. Solving for Φ_m yields

Φ_m = K_2^{−1}  (75)

and (47) follows from (72), accordingly. About the double pole located at z = 1, the Laurent expansion (30) takes the form

A^{-1}(z) = (z − 1)^{−2} N_{−2} + (z − 1)^{−1} N_{−1} + M(z)  (76)

with N_{−2} given by (47). Then, taking into account (156) and (47), it is easy to see that

P_1 [(z − 1)^{−2} N_{−2} + (z − 1)^{−1} (N_{−1} + A^+ A^{(1)} N_{−2})] =
0  (77)

Upon noting that
Π_2 (A^+ A^{(1)} N_{−2}) =
0  (78)

where
Π_2 is given by (49), applying Lemma A.1 in Appendix A to P_1 and Π_2 yields a projector P_2 = (P_1 : Π_2) such that P_2 A^{-1}(z) = P_2 M(z) is analytic at z = 1. As for the rank of P_2, formula (149) applies, yielding

r(P_2) = r(C) + r(Π_2) − n = r(C) − r(A^+ A^{(1)} C_⊥ C_{1⊥})  (79)

as

Π_2 P_1^⊤ = Θ^⊤ P_1^⊤ = (I − Θ Θ^+) P_1^⊤ = (I − (Θ^+)' Θ') P_1^⊤ = P_1^⊤ − (Θ^+)' C'_{1⊥} C'_⊥ (A^{(1)})' (A^+)' P_1^⊤ = P_1^⊤  (80)

where Θ = A^+ A^{(1)} C_⊥ C_{1⊥}, P_1^⊤ = C_⊥ C^+_⊥, and (A^+)' = (B^+)' (C'C)^{−1} C'. Since (A^+)' P_1^⊤ = 0, Θ' P_1^⊤ =
0 follows as a by-product. Turning back to (74), if
K_2 is singular, then the equation becomes inconsistent and we are facing a pole of order higher than two. It follows that the right-hand sides of (73) and (74) are no longer identity matrices but null matrices instead. Eventually, the homogeneous equation

Φ_m B_2 =
0  (81)

takes the place of (74). Equation (81) pairs off with
C'_2 Φ_m =
0  (82)

which follows from (34) by using the same argument. Solving the systems (81) and (82) for
Φ_m yields

Φ_m = C_{2⊥} C^+_{2⊥} Ψ_m (B'_{2⊥})^+ B'_{2⊥} = C_{2⊥} Z_m B'_{2⊥}  (83)

for some Ψ_m, with Z_m = C^+_{2⊥} Ψ_m (B'_{2⊥})^+. The representation

N_{−m} = C_⊥ C_{1⊥} C_{2⊥} Z_m B'_{2⊥} B'_{1⊥} B'_⊥  (84)

follows from (72), accordingly. Now, let m =
3. Then, the following
N_{−m+3} A + N_{−m+2} A^{(1)} + (1/2) N_{−m+1} A^{(2)} + (1/6) N_{−m} A^{(3)} = I  (85)

holds true because of (33), and the other way around. Pre- and post-multiplying (85) by
F^− = C^+_{2⊥} C^+_{1⊥} C^+_⊥ and F = C_⊥ C_{1⊥} C_{2⊥}, respectively, yields

F^− N_{−m+2} A^{(1)} F + (1/2) F^− N_{−m+1} A^{(2)} F + (1/6) F^− N_{−m} A^{(3)} F = I  (86)

as
A F = 0. Then, replacing N_{−m+2}, given by (161), into (86) gives

F^− N_{−m+1} A^{[2]} F − (1/2) F^− N_{−m} A^{(2)} A^+ A^{(1)} F + (1/6) F^− N_{−m} A^{(3)} F = I  (87)

as
F^− A^+ and B'_⊥ A^{(1)} F are null matrices. Then, replacing N_{−m+1}, given by (160), into (87) gives

F^− N_{−m} A^{[3]} F = I  (88)

as
F^− Θ_1 and B'_{1⊥} B'_⊥ A^{[2]} F are null matrices. Equation (88), in light of (84) and (5), can also be written as follows

Z_m K_3 = I  (89)

as
F^− N_{−m} = Z_m B'_{2⊥} B'_{1⊥} B'_⊥. Equation (89) is consistent if and only if K_3 is non-singular. Solving (89) yields

Z_m = K_3^{−1}  (90)

and (53) follows, accordingly. About the third-order pole located at z =
1, the Laurent expansion (30) takes the form
A^{-1}(z) = (z − 1)^{−3} N_{−3} + (z − 1)^{−2} N_{−2} + (z − 1)^{−1} N_{−1} + M(z)  (91)

with N_{−3} given by (53). Then, taking into account formulas (156), (163) and (53), it is easy to verify that

P_2 [(z − 1)^{−3} N_{−3} + (z − 1)^{−2} (N_{−2} + A^+ A^{(1)} N_{−3}) + (z − 1)^{−1} (N_{−1} + A^+ A^{[2]} N_{−3} + Ξ)] = 0  (92)

where Ξ = −A^+ A^{(1)} N_{−3} A^{(1)} A^+ + A^+ A^{(1)} C_⊥ S_{1,2} B'_⊥. Upon noting that

(A^+ A^{[2]} C_⊥ C_{1⊥} C_{2⊥})^⊤ A^+ A^{[2]} N_{−3} =
0  (93)

(A^+ A^{(1)} C_⊥)^⊤ ((z − 1)^{−2} A^+ A^{(1)} N_{−3} + (z − 1)^{−1} Ξ) =
0  (94)

the application of Lemma A.1 in Appendix A leads to the conclusion
Π_3 [(z − 1)^{−2} A^+ A^{(1)} N_{−3} + (z − 1)^{−1} (A^+ A^{[2]} N_{−3} + Ξ)] =
0  (95)

where
Π_3 is the matrix given by (55), and eventually that

P_3 A^{-1}(z) = P_3 M(z)  (96)

where P_3 = (P_2 : Π_3). In light of (96), P_3 A^{-1}(z) is analytic at z =
1. The expression (54) follows from (143) in Appendix A. The rank of
P_3 can be established following an argument similar to that used to obtain the rank of P_2. Applying formula (149) in Appendix A yields

r(P_3) = r(P_2) + r(Π_3) − n  (97)

as

Π_3 P_2^⊤ = (Θ̃^⊤ : Θ^⊤) P_2^⊤ = Θ̃^⊤ P_2^⊤ − (Θ̃^⊤ − Θ^⊤ Θ̃^⊤)(Θ̃^⊤ − Θ^⊤ Θ̃^⊤)^+ P_2^⊤ = P_2^⊤ − (Θ̃^⊤ − Θ^⊤ Θ̃^⊤)(Θ̃^⊤ − Θ^⊤ Θ̃^⊤)^+ (P_2^⊤ − Θ^⊤ P_2^⊤) = P_2^⊤  (98)

Here Θ̃ = A^+ A^{(1)} C_⊥, Θ = A^+ A^{[2]} C_⊥ C_{1⊥} C_{2⊥}, and use has been made of (145) in Appendix A, of (80) above and of the equality Θ^⊤ P_2^⊤ = P_2^⊤, which can be proved by using the same approach followed to obtain (80). Thanks to (151), formula (97) can be worked out as follows

r(P_3) = r(C) + r(Θ̃^⊤ : Θ^⊤) − n = r(C) − r([Θ̃, Θ])  (99)

Turning back to (89), if K_3 is singular, then the equation becomes inconsistent and we are facing a pole of order higher than three. It follows that the right-hand sides of (85) and (89) are no longer identity matrices but null matrices instead. Eventually, the homogeneous equation

Z_m B_3 =
0  (100)

takes the place of the former (89). Equation (100) pairs off with
C'_3 Z_m =
0  (101)

which follows from (34) by making use of the same argument. Solving (100) and (101) for
Z_m yields

Z_m = C_{3⊥} C^+_{3⊥} Ψ̃_m (B'_{3⊥})^+ B'_{3⊥} = C_{3⊥} Φ̃_m B'_{3⊥}  (102)

for some Ψ̃_m, with Φ̃_m = C^+_{3⊥} Ψ̃_m (B'_{3⊥})^+. The representation

N_{−m} = C_⊥ C_{1⊥} C_{2⊥} C_{3⊥} Φ̃_m B'_{3⊥} B'_{2⊥} B'_{1⊥} B'_⊥  (103)

follows from (84), accordingly. Now, let m =
4. Then, the following
N_{−m+4} A + N_{−m+3} A^{(1)} + (1/2) N_{−m+2} A^{(2)} + (1/6) N_{−m+1} A^{(3)} + (1/24) N_{−m} A^{(4)} = I  (104)

holds true because of (33), and the other way around. Pre- and post-multiplying (104) by
F^− = C^+_{3⊥} C^+_{2⊥} C^+_{1⊥} C^+_⊥ and F = C_⊥ C_{1⊥} C_{2⊥} C_{3⊥}, respectively, yields

F^− N_{−m+3} A^{(1)} F + (1/2) F^− N_{−m+2} A^{(2)} F + (1/6) F^− N_{−m+1} A^{(3)} F + (1/24) F^− N_{−m} A^{(4)} F = I  (105)

as
A F = 0. Then, replacing N_{−m+3}, given by (173), into (105) gives

F^− N_{−m+2} A^{[2]} F − (1/2) F^− N_{−m+1} A^{(2)} A^+ A^{(1)} F − (1/6) F^− N_{−m} A^{(3)} A^+ A^{(1)} F + (1/6) F^− N_{−m+1} A^{(3)} F + (1/24) F^− N_{−m} A^{(4)} F = I  (106)

as
F^− A^+ and B'_⊥ A^{(1)} F are null matrices. Then, replacing N_{−m+2}, given by (172), into (106) gives

F^− N_{−m+1} A^{[3]} F − F^− N_{−m} A^{(1)} A^+ A^{(1)} A^{[2]} F − F^− N_{−m} A^{[2]} Θ_2 A^{[2]} F − F^− N_{−m} A^{[2]} A^+ A^{[2]} F − F^− N_{−m} A^{(1)} A^+ A^{[2]} Θ_1 A^{[2]} F − F^− N_{−m} A^{[2]} Θ_1 A^{[2]} Θ_1 A^{[2]} F − (1/6) F^− N_{−m} A^{(3)} A^+ A^{(1)} F + (1/24) F^− N_{−m} A^{(4)} F = I  (107)

as F^− Θ_1 and B'_{1⊥} B'_⊥ A^{[2]} F are null matrices. Finally, replacing N_{−m+1}, given by (167), into (107) gives

F^− N_{−m} A^{[4]} F = I  (108)

Equation (108), taking into account (103) and (5), can also be written as

Φ̃_m K_4 = I  (109)

because
F^− N_{−m} = Φ̃_m B'_{3⊥} B'_{2⊥} B'_{1⊥} B'_⊥. Equation (109) is consistent if and only if K_4 is non-singular. Solving (109) yields

Φ̃_m = K_4^{−1}  (110)

and (59) follows, accordingly. About the 4-th order pole located at z =
1, the Laurent expansion (30) takes the form
A^{-1}(z) = (z − 1)^{−4} N_{−4} + (z − 1)^{−3} N_{−3} + (z − 1)^{−2} N_{−2} + (z − 1)^{−1} N_{−1} + M(z)  (111)

with N_{−4} given by (59). Taking into account (156), (163), (175) and (59), it is easy to see that

P_3 [(z − 1)^{−4} N_{−4} + (z − 1)^{−3} (N_{−3} + A^+ A^{(1)} N_{−4}) + (z − 1)^{−2} (N_{−2} + A^+ A^{[2]} N_{−4} + Ξ̃) + (z − 1)^{−1} (N_{−1} + A^+ A^{[3]} N_{−4} + Ξ_1 + Ξ_2)] =
0  (112)

where

Ξ̃ = −A^+ A^{(1)} N_{−4} A^{(1)} A^+ + A^+ A^{(1)} C_⊥ S_{1,3} B'_⊥,  (113)

Ξ_1 = −A^+ A^{[2]} N_{−4} A^{(1)} A^+ + A^+ A^{[2]} Θ_1 A^{[2]} N_{−4} + A^+ A^{[2]} C_⊥ S_{2,3} B'_⊥,  (114)

Ξ_2 = −A^+ A^{(1)} N_{−4} A^{[2]} A^+ − A^+ A^{(1)} C_⊥ S_{1,3} B'_⊥ A^{(1)} A^+ + A^+ A^{(1)} C_⊥ S_{2,3} B'_⊥  (115)

Upon noting that

(A^+ A^{[3]} C_⊥ C_{1⊥} C_{2⊥} C_{3⊥})^⊤ A^+ A^{[3]} N_{−4} =
0  (116)

(A^+ A^{[2]} C_⊥)^⊤ ((z − 1)^{−2} A^+ A^{[2]} N_{−4} + (z − 1)^{−1} Ξ_1) =
0  (117)

(A^+ A^{(1)} C_⊥)^⊤ ((z − 1)^{−3} A^+ A^{(1)} N_{−4} + (z − 1)^{−2} Ξ̃ + (z − 1)^{−1} Ξ_2) =
0  (118)

it follows from Lemma A.1 in Appendix A that
Π_{4,1} ((z − 1)^{−3} A^+ A^{(1)} N_{−4} + (z − 1)^{−2} (A^+ A^{[2]} N_{−4} + Ξ̃) + (z − 1)^{−1} (Ξ_1 + Ξ_2)) =
0  (119)

where
Π_{4,1} is the matrix given by (62), and

Π_4 [(z − 1)^{−3} A^+ A^{(1)} N_{−4} + (z − 1)^{−2} (A^+ A^{[2]} N_{−4} + Ξ̃) + (z − 1)^{−1} (A^+ A^{[3]} N_{−4} + Ξ_1 + Ξ_2)] = 0  (120)

where Π_4 is the matrix given by (61). Eventually, it follows that

P_4 A^{-1}(z) = P_4 M(z)  (121)

where P_4 = (P_3 : Π_4). In light of (121), P_4 A^{-1}(z) is analytic at z =
1. The expression (60) follows from (143) in Appendix A. The rank of
P_4 can be established following an argument similar to that used for P_3. Applying formula (149) in Appendix A yields

r(P_4) = r(P_3 : Π_4) = r(P_3) + r(Π_4) − r(P_3 + Π_4) = r(C) + r(Π_4) − n  (122)

as it can be proved that Π_4 P_3^⊤ = P_3^⊤ by repeating the argument of formula (98). Applying (147) to Π_4 yields

r(Π_4) = r[Π_{4,1} : (A^+ A^{[3]} C_⊥ C_{1⊥} C_{2⊥} C_{3⊥})^⊤] = r(Π_{4,1}) + r((A^+ A^{[3]} C_⊥ C_{1⊥} C_{2⊥} C_{3⊥})^⊤) − r(Π_{4,1} + (A^+ A^{[3]} C_⊥ C_{1⊥} C_{2⊥} C_{3⊥})^⊤)  (123)

Here

r(Π_{4,1}) = r[(A^+ A^{(1)} C_⊥)^⊤ : (A^+ A^{[2]} C_⊥)^⊤] = n − r[(A^+ A^{(1)} C_⊥), (A^+ A^{[2]} C_⊥)]  (124)

in light of (151) in Appendix A, and

r(Π_{4,1} + (A^+ A^{[3]} C_⊥ C_{1⊥} C_{2⊥} C_{3⊥})^⊤) = r([Γ, Ξ])  (125)

by setting Π_{4,1} = Γ Γ^+ and (A^+ A^{[3]} C_⊥ C_{1⊥} C_{2⊥} C_{3⊥})^⊤ = Ξ Ξ^+. Thanks to (123), (124) and (125), formula (122) can be worked out as follows

r(P_4) = r(C) − r[(A^+ A^{(1)} C_⊥), (A^+ A^{[2]} C_⊥)] + r(Ξ) − r([Γ, Ξ])  (126)

This proves (63). □

Thanks to the analytic toolkit settled in the previous two sections, we are eventually ready to give the following
Proof of Theorem 1.1. The solution of model (1) takes the form

y_t = A^{-1}(L) ε_t + A^{-1}(L)
0  (127)

where the first term is a particular solution of the non-homogeneous equation and the second is the so-called complementary solution. Both depend on the operator
A^{-1}(L), and eventually on the matrix A^{-1}(z), as the algebras of the polynomial functions of the lag operator L and of the complex variable z are isomorphic (see, e.g., Dhrymes (1971)). The particular solution A^{-1}(L) ε_t is composed of a (coloured) noise term and random walks up to the m-th order, where m is the order of the pole of A^{-1}(z) at z =
1, while the complementary solution is apolynomial in t of ( m -1)-th degree: altogether they lead to an m -th order integrated process(see, e.g., Faliva and Zoia (2009) Sections 1.8 and 2.3). As for the pole order ( m ), this is es-tablished by Theorem 3.2, and (3) ensues accordingly.As for (5)-(14), the formulas are proved in Theorem 3.2 as well.The cointegration relationship (4) recovers stationary by annihilating the principal-partof AAA − ( z ) . The matrices PPP m , for m =1, 2, 3, 4 as per (15)-(18), are obtained in the aforesaidtheorem, too. The cointegration ranks in formulas (23)- (26) are derived in the same theorem. (cid:3) The paper investigates unit-root VARs whose solutions are (co)-integrated processes up to 4-th order. This is in itself worthy of note when compared the extant literature which is almostentirely dedicated to first and second order processes. What is more, the algebraic apparatusset forth in the paper can be successfully applied to higher order processes by induction, thuspaving the way to a wide-spread field of applications. It is worth noticing that the key issue ofstationary recovering via cointegration is settled for integrated processes of increasing ordervia a cunning algebraic argument which hinges on the notion of parallel sum.24 ppendix A
Parallel sum of matrices
The notion of parallel sum plays a key role in the approach to cointegration devised in the paper. As the non-stationarity of the solution $\boldsymbol{y}_{t}=\boldsymbol{A}^{-1}(L)\boldsymbol{\varepsilon}_{t}+\boldsymbol{A}^{-1}(L)\,\boldsymbol{0}$ is driven by the principal part $\sum_{j=-m}^{-1}\boldsymbol{N}_{j}(z-1)^{j}$ of the Laurent expansion of $\boldsymbol{A}^{-1}(z)$, through $\boldsymbol{A}^{-1}(L)$, a transformation $\boldsymbol{P}\boldsymbol{A}^{-1}(z)$ that annihilates the principal part recovers stationarity; that is, if

$$\boldsymbol{P}\boldsymbol{A}^{-1}(z)=\boldsymbol{P}\boldsymbol{M}(z)\qquad(129)$$

for some $\boldsymbol{P}$, then

$$\boldsymbol{P}\boldsymbol{A}^{-1}(L)=\boldsymbol{P}\boldsymbol{M}(L)\;\Rightarrow\;\boldsymbol{P}\boldsymbol{y}_{t}\sim I(0)\qquad(130)$$

In the case of a simple pole, the determination of an idempotent operator $\boldsymbol{P}=\boldsymbol{P}^{2}$ is straightforward (see Theorem 3.2). In the case of a double pole (and, more generally, of a multiple pole) a stepwise procedure must be devised, that is:

1st step) check that $\widetilde{\boldsymbol{P}}\,(\boldsymbol{N}_{-2},\,\boldsymbol{N}_{-1}+\boldsymbol{\Phi})=\boldsymbol{0}$, where $\boldsymbol{\Phi}$ is a matrix that can be easily found;

2nd step) determine an idempotent operator $\boldsymbol{\Pi}$ such that $\boldsymbol{\Pi}\boldsymbol{\Phi}=\boldsymbol{0}$.

Then an idempotent operator $\boldsymbol{P}=f(\widetilde{\boldsymbol{P}},\boldsymbol{\Pi})$, generated by a subset of vectors which are common to $\widetilde{\boldsymbol{P}}$ and $\boldsymbol{\Pi}$, can be obtained so that $\boldsymbol{P}\,(\boldsymbol{N}_{-2},\,\boldsymbol{N}_{-1})=\boldsymbol{0}$, as the lemma which follows shows. The case of higher-order poles can be tackled by repeating the argument above, as shown in Theorem 3.2.

The following lemma gives the results on parallel sums we are primarily interested in.

Lemma A.1
Let $\boldsymbol{V}$ and $\boldsymbol{W}$ be square matrices of the same order and $\boldsymbol{R}$, $\boldsymbol{S}$ be two matrices satisfying

$$\boldsymbol{R}'\boldsymbol{V}=\boldsymbol{0}\qquad(131)$$

$$\boldsymbol{S}'\boldsymbol{W}=\boldsymbol{0}\qquad(132)$$

so that

$$\boldsymbol{P}_{R}=(\boldsymbol{R}')^{+}\boldsymbol{R}'\qquad(133)$$

$$\boldsymbol{P}_{S}=(\boldsymbol{S}')^{+}\boldsymbol{S}'\qquad(134)$$

are projection operators of $\boldsymbol{V}$ on $\boldsymbol{0}$ and of $\boldsymbol{W}$ on $\boldsymbol{0}$, respectively. Then, the parallel sum $\boldsymbol{P}_{R}:\boldsymbol{P}_{S}$ of $\boldsymbol{P}_{R}$ and $\boldsymbol{P}_{S}$ is defined as

$$\boldsymbol{P}_{R}:\boldsymbol{P}_{S}=\boldsymbol{P}_{R}(\boldsymbol{P}_{R}+\boldsymbol{P}_{S})^{+}\boldsymbol{P}_{S}\qquad(135)$$

and it is such that

$$\boldsymbol{P}^{\frown}=2(\boldsymbol{P}_{R}:\boldsymbol{P}_{S})=2\,\boldsymbol{P}_{R}(\boldsymbol{P}_{R}+\boldsymbol{P}_{S})^{+}\boldsymbol{P}_{S}\qquad(136)$$

is a projection operator of $\alpha\boldsymbol{V}+\beta\boldsymbol{W}$ on $\boldsymbol{0}$ and of $[\boldsymbol{V},\boldsymbol{W}]$ on $\boldsymbol{0}$ as well, i.e.

$$\boldsymbol{P}^{\frown}(\alpha\boldsymbol{V}+\beta\boldsymbol{W})=\boldsymbol{0}\qquad(137)$$

$$\boldsymbol{P}^{\frown}[\boldsymbol{V},\boldsymbol{W}]=\boldsymbol{0}\qquad(138)$$
Proof. The parallel sum $\boldsymbol{P}_{R}:\boldsymbol{P}_{S}$ enjoys the property (see, e.g., Anderson Jr and Duffin (1969))

$$\boldsymbol{P}_{R}:\boldsymbol{P}_{S}=\boldsymbol{P}_{R}(\boldsymbol{P}_{R}+\boldsymbol{P}_{S})^{+}\boldsymbol{P}_{S}=\boldsymbol{P}_{S}(\boldsymbol{P}_{R}+\boldsymbol{P}_{S})^{+}\boldsymbol{P}_{R}=\boldsymbol{P}_{S}:\boldsymbol{P}_{R}\qquad(139)$$

Straightforward computations show that

$$\boldsymbol{P}^{\frown}(\alpha\boldsymbol{V}+\beta\boldsymbol{W})=2(\boldsymbol{P}_{R}:\boldsymbol{P}_{S})(\alpha\boldsymbol{V}+\beta\boldsymbol{W})=2\alpha\,\boldsymbol{P}_{S}(\boldsymbol{P}_{S}+\boldsymbol{P}_{R})^{+}\boldsymbol{P}_{R}\boldsymbol{V}+2\beta\,\boldsymbol{P}_{R}(\boldsymbol{P}_{R}+\boldsymbol{P}_{S})^{+}\boldsymbol{P}_{S}\boldsymbol{W}=\boldsymbol{0}\qquad(140)$$

and that

$$\boldsymbol{P}^{\frown}[\boldsymbol{V},\boldsymbol{W}]=2(\boldsymbol{P}_{R}:\boldsymbol{P}_{S})[\boldsymbol{V},\boldsymbol{W}]=2\big[\boldsymbol{P}_{S}(\boldsymbol{P}_{S}+\boldsymbol{P}_{R})^{+}\boldsymbol{P}_{R}\boldsymbol{V},\;\boldsymbol{P}_{R}(\boldsymbol{P}_{R}+\boldsymbol{P}_{S})^{+}\boldsymbol{P}_{S}\boldsymbol{W}\big]=\boldsymbol{0}\qquad(141)$$

$\square$
The results here below prove useful. Let $\boldsymbol{A}$, $\boldsymbol{B}$ and $\boldsymbol{C}$ be idempotent square matrices of order $n$. Note that for any idempotent matrix $\boldsymbol{A}$ a representation of the form $\boldsymbol{A}=\boldsymbol{\Gamma}\boldsymbol{\Gamma}^{+}$ holds. As for the idempotent matrix $\bar{\boldsymbol{A}}=\boldsymbol{I}-\boldsymbol{A}\boldsymbol{A}^{+}$, this representation can be obtained from the rank factorization of $\boldsymbol{A}$: let $\boldsymbol{A}=\boldsymbol{D}\boldsymbol{E}'$; then $\bar{\boldsymbol{A}}=\boldsymbol{I}-\boldsymbol{A}\boldsymbol{A}^{+}=\boldsymbol{I}-\boldsymbol{D}\boldsymbol{D}^{+}=\boldsymbol{\Gamma}\boldsymbol{\Gamma}^{+}$, where $\boldsymbol{\Gamma}=\boldsymbol{D}_{\perp}$.

Then, the following statements hold (see Berkics (2017), Bernstein (2009) pp. 201, 527, Piziak et al. (1999), Tian (2002) and Tian and Styan (2006)):

1. $\alpha(\boldsymbol{A}:\boldsymbol{B})=(\alpha\boldsymbol{A}:\alpha\boldsymbol{B})$, $\alpha>0$ (142)

2. $$(\boldsymbol{A}:\boldsymbol{B})=-\begin{bmatrix}\boldsymbol{0}&\boldsymbol{0}&\boldsymbol{I}\end{bmatrix}\begin{bmatrix}\boldsymbol{A}&\boldsymbol{0}&\boldsymbol{I}\\\boldsymbol{0}&\boldsymbol{B}&\boldsymbol{I}\\\boldsymbol{I}&\boldsymbol{I}&\boldsymbol{0}\end{bmatrix}^{+}\begin{bmatrix}\boldsymbol{0}\\\boldsymbol{0}\\\boldsymbol{I}\end{bmatrix}\qquad(143)$$

3. The matrix
$$\boldsymbol{P}=2(\boldsymbol{A}:\boldsymbol{B})\qquad(144)$$
is idempotent.

4. The matrix $\boldsymbol{P}$ admits the representations
$$\boldsymbol{P}=2\,\boldsymbol{A}(\boldsymbol{A}+\boldsymbol{B})^{+}\boldsymbol{B}=\boldsymbol{A}-(\boldsymbol{A}-\boldsymbol{B}\boldsymbol{A})^{+}(\boldsymbol{A}-\boldsymbol{B}\boldsymbol{A})\qquad(145)$$

5. $(\boldsymbol{A}:\boldsymbol{B}):\boldsymbol{C}=\boldsymbol{A}:(\boldsymbol{B}:\boldsymbol{C})$ (146)

6. $$r(\boldsymbol{A}:\boldsymbol{B})=r(\boldsymbol{A})+r(\boldsymbol{B})-r(\boldsymbol{A}+\boldsymbol{B})\qquad(147)$$
$$\phantom{r(\boldsymbol{A}:\boldsymbol{B})}=r(\boldsymbol{A})+r(\boldsymbol{B})-r([\boldsymbol{B}\bar{\boldsymbol{A}},\boldsymbol{A}])\quad\text{in general}\qquad(148)$$
$$\phantom{r(\boldsymbol{A}:\boldsymbol{B})}=r(\boldsymbol{A})+r(\boldsymbol{B})-r([\bar{\boldsymbol{A}},\boldsymbol{A}])=r(\boldsymbol{A})+r(\boldsymbol{B})-n,\quad\text{if }\boldsymbol{B}\bar{\boldsymbol{A}}=\bar{\boldsymbol{A}}\qquad(149)$$
as $r([\bar{\boldsymbol{A}},\boldsymbol{A}])=n$.

7. Since
$$r(\bar{\boldsymbol{A}}+\bar{\boldsymbol{B}})=r(\boldsymbol{A}+\boldsymbol{B})+n-r(\boldsymbol{A})-r(\boldsymbol{B})\qquad(150)$$
the following holds
$$r(\bar{\boldsymbol{A}}:\bar{\boldsymbol{B}})=r(\bar{\boldsymbol{A}})+r(\bar{\boldsymbol{B}})-r(\bar{\boldsymbol{A}}+\bar{\boldsymbol{B}})=n-r(\boldsymbol{A}+\boldsymbol{B})=n-r([\boldsymbol{\Gamma},\boldsymbol{\Xi}])\qquad(151)$$
as $r(\bar{\boldsymbol{A}})=n-r(\boldsymbol{A})$, $r(\bar{\boldsymbol{B}})=n-r(\boldsymbol{B})$ and $\boldsymbol{A}=\boldsymbol{\Gamma}\boldsymbol{\Gamma}^{+}$, $\boldsymbol{B}=\boldsymbol{\Xi}\boldsymbol{\Xi}^{+}$. $\square$

Appendix B
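A representation result used repeatedly in what follows is Lemma 2.3.1 in Rao and Mitra (1973): the solutions of the pair of homogeneous equations $\boldsymbol{N}\boldsymbol{A}=\boldsymbol{A}\boldsymbol{N}=\boldsymbol{0}$ are exactly the matrices $\boldsymbol{N}=(\boldsymbol{I}-\boldsymbol{A}^{+}\boldsymbol{A})\,\boldsymbol{\Phi}\,(\boldsymbol{I}-\boldsymbol{A}\boldsymbol{A}^{+})$ for arbitrary $\boldsymbol{\Phi}$. A minimal numerical sketch (assuming NumPy; the matrices `A` and `Phi` are hypothetical random choices):

```python
import numpy as np

rng = np.random.default_rng(0)

n = 5
# Hypothetical singular matrix A of order n with rank 3
A = rng.standard_normal((n, 3)) @ rng.standard_normal((3, n))
A_pinv = np.linalg.pinv(A)

A_perp = np.eye(n) - A_pinv @ A   # orthogonal projector onto the null space of A
A_bar = np.eye(n) - A @ A_pinv    # orthogonal projector onto the left null space of A

# Any Phi generates a solution N of N A = A N = 0 (Rao and Mitra (1973), Lemma 2.3.1)
Phi = rng.standard_normal((n, n))
N = A_perp @ Phi @ A_bar

assert np.allclose(N @ A, 0.0)
assert np.allclose(A @ N, 0.0)
```

The same two projectors reappear below in the guise of the rank-factorization products $\boldsymbol{A}_{\perp}=\boldsymbol{C}_{\perp}\boldsymbol{C}_{\perp}^{+}$ and $\bar{\boldsymbol{A}}=(\boldsymbol{B}_{\perp}')^{+}\boldsymbol{B}_{\perp}'$.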
In this Appendix we work out closed-form representations of the principal-part matrices of the Laurent expansion of $\boldsymbol{A}^{-1}(z)$ about $z=1$.

To start with, notice that the following holds

$$\boldsymbol{N}_{-m}\boldsymbol{A}=\boldsymbol{A}\boldsymbol{N}_{-m}=\boldsymbol{0}\qquad(152)$$

by virtue of (33) and (34) by taking $h=0$. Solving for $\boldsymbol{N}_{-m}$ (see Lemma 2.3.1 in Rao and Mitra (1973)) yields

$$\boldsymbol{N}_{-m}=\boldsymbol{A}_{\perp}\boldsymbol{\Phi}_{m}\bar{\boldsymbol{A}}=\boldsymbol{C}_{\perp}\boldsymbol{C}_{\perp}^{+}\boldsymbol{\Phi}_{m}(\boldsymbol{B}_{\perp}')^{+}\boldsymbol{B}_{\perp}'\qquad(153)$$

for some $\boldsymbol{\Phi}_{m}$, where $\boldsymbol{A}_{\perp}=\boldsymbol{I}-\boldsymbol{A}^{+}\boldsymbol{A}=\boldsymbol{C}_{\perp}\boldsymbol{C}_{\perp}^{+}$ and $\bar{\boldsymbol{A}}=\boldsymbol{I}-\boldsymbol{A}\boldsymbol{A}^{+}=(\boldsymbol{B}_{\perp}')^{+}\boldsymbol{B}_{\perp}'$ (see, e.g., Faliva and Zoia (2009)). This in turn yields (35) by putting

$$\boldsymbol{Z}_{m}=\boldsymbol{C}_{\perp}^{+}\boldsymbol{\Phi}_{m}(\boldsymbol{B}_{\perp}')^{+}\qquad(154)$$

As for $\boldsymbol{N}_{-m+1}$, from (33) and (34) we have for $h=1$

$$\boldsymbol{N}_{-m+1}\boldsymbol{A}+\boldsymbol{N}_{-m}\boldsymbol{A}^{(1)}=\boldsymbol{A}\boldsymbol{N}_{-m+1}+\boldsymbol{A}^{(1)}\boldsymbol{N}_{-m}=\boldsymbol{0}\qquad(155)$$

Solving for $\boldsymbol{N}_{-m+1}$ (see Theorem 2.3.3, formula (2.3.7), in Rao and Mitra (1973)) yields

$$\boldsymbol{N}_{-m+1}=-\boldsymbol{A}^{+}\boldsymbol{A}^{(1)}\boldsymbol{N}_{-m}-\boldsymbol{N}_{-m}\boldsymbol{A}^{(1)}\boldsymbol{A}^{+}+\boldsymbol{C}_{\perp}\boldsymbol{S}_{1,1}\boldsymbol{B}_{\perp}'\qquad(156)$$

for some $\boldsymbol{S}_{1,1}$, as $\boldsymbol{N}_{-m}\bar{\boldsymbol{A}}=\boldsymbol{N}_{-m}$ in light of (152).

Now, let $m>2$. Then from (33) and (34) we have for $h=2$

$$\boldsymbol{N}_{-m+2}\boldsymbol{A}+\boldsymbol{N}_{-m+1}\boldsymbol{A}^{(1)}+\boldsymbol{N}_{-m}\boldsymbol{A}^{(2)}=\boldsymbol{A}\boldsymbol{N}_{-m+2}+\boldsymbol{A}^{(1)}\boldsymbol{N}_{-m+1}+\boldsymbol{A}^{(2)}\boldsymbol{N}_{-m}=\boldsymbol{0}\qquad(157)$$

Pre- and post-multiply the first equation by $\boldsymbol{C}_{\perp}^{+}$ and $\boldsymbol{C}_{\perp}$, and the second one by $\boldsymbol{B}_{\perp}'$ and $(\boldsymbol{B}_{\perp}')^{+}$, respectively. Then, by making use of (156), a simple computation yields

$$\begin{cases}\boldsymbol{S}_{1,1}\boldsymbol{K}=-\boldsymbol{C}_{\perp}^{+}\boldsymbol{N}_{-m}\boldsymbol{A}^{[2]}\boldsymbol{C}_{\perp}\\ \boldsymbol{K}\boldsymbol{S}_{1,1}=-\boldsymbol{B}_{\perp}'\boldsymbol{A}^{[2]}\boldsymbol{N}_{-m}(\boldsymbol{B}_{\perp}')^{+}\end{cases}\qquad(158)$$

where $\boldsymbol{K}$ and $\boldsymbol{A}^{[2]}$ are defined in (39) and (9). Solving for $\boldsymbol{S}_{1,1}$ and pre- and post-multiplying the result by $\boldsymbol{C}_{\perp}$ and $\boldsymbol{B}_{\perp}'$ gives

$$\boldsymbol{C}_{\perp}\boldsymbol{S}_{1,1}\boldsymbol{B}_{\perp}'=-\boldsymbol{\Theta}_{1}\boldsymbol{A}^{[2]}\boldsymbol{N}_{-m}-\boldsymbol{N}_{-m}\boldsymbol{A}^{[2]}\boldsymbol{\Theta}_{1}+\boldsymbol{C}_{\perp}\boldsymbol{C}_{2\perp}\boldsymbol{S}_{2,1}\boldsymbol{B}_{2\perp}'\boldsymbol{B}_{\perp}'\qquad(159)$$

for some $\boldsymbol{S}_{2,1}$ and $\boldsymbol{\Theta}_{1}$ given by (13), as $\boldsymbol{N}_{-m}(\boldsymbol{B}_{\perp}')^{+}(\boldsymbol{I}-\boldsymbol{K}\boldsymbol{K}^{+})\boldsymbol{B}_{\perp}'=\boldsymbol{N}_{-m}$ and $\boldsymbol{C}_{\perp}\boldsymbol{C}_{\perp}^{+}\boldsymbol{N}_{-m}=\boldsymbol{N}_{-m}$. Accordingly, the representation of $\boldsymbol{N}_{-m+1}$ for $m>2$ is

$$\boldsymbol{N}_{-m+1}=-\boldsymbol{A}^{+}\boldsymbol{A}^{(1)}\boldsymbol{N}_{-m}-\boldsymbol{N}_{-m}\boldsymbol{A}^{(1)}\boldsymbol{A}^{+}-\boldsymbol{\Theta}_{1}\boldsymbol{A}^{[2]}\boldsymbol{N}_{-m}-\boldsymbol{N}_{-m}\boldsymbol{A}^{[2]}\boldsymbol{\Theta}_{1}+\boldsymbol{C}_{\perp}\boldsymbol{C}_{2\perp}\boldsymbol{S}_{2,1}\boldsymbol{B}_{2\perp}'\boldsymbol{B}_{\perp}'\qquad(160)$$

At this point, let us move to $\boldsymbol{N}_{-m+2}$. Solving (157) for $\boldsymbol{N}_{-m+2}$ yields

$$\boldsymbol{N}_{-m+2}=-\boldsymbol{A}^{+}\boldsymbol{A}^{(1)}\boldsymbol{N}_{-m+1}-\boldsymbol{A}^{+}\boldsymbol{A}^{(2)}\boldsymbol{N}_{-m}-\boldsymbol{N}_{-m+1}\boldsymbol{A}^{(1)}\boldsymbol{A}^{+}-\boldsymbol{N}_{-m}\boldsymbol{A}^{(2)}\boldsymbol{A}^{+}+\boldsymbol{A}^{+}\boldsymbol{A}\boldsymbol{N}_{-m+1}\boldsymbol{A}^{(1)}\boldsymbol{A}^{+}+\boldsymbol{C}_{\perp}\boldsymbol{S}_{1,2}\boldsymbol{B}_{\perp}'\qquad(161)$$

which can also be written as

$$\boldsymbol{N}_{-m+2}=-\boldsymbol{A}^{+}\boldsymbol{A}^{(1)}\boldsymbol{N}_{-m+1}\bar{\boldsymbol{A}}-\boldsymbol{A}^{+}\boldsymbol{A}^{[2]}\boldsymbol{N}_{-m}-\boldsymbol{A}^{+}\boldsymbol{A}^{(1)}\boldsymbol{A}^{+}\boldsymbol{A}^{(1)}\boldsymbol{N}_{-m}-\boldsymbol{N}_{-m+1}\boldsymbol{A}^{(1)}\boldsymbol{A}^{+}-\boldsymbol{N}_{-m}\boldsymbol{A}^{[2]}\boldsymbol{A}^{+}-\boldsymbol{N}_{-m}\boldsymbol{A}^{(1)}\boldsymbol{A}^{+}\boldsymbol{A}^{(1)}\boldsymbol{A}^{+}+\boldsymbol{C}_{\perp}\boldsymbol{S}_{1,2}\boldsymbol{B}_{\perp}'\qquad(162)$$

for some $\boldsymbol{S}_{1,2}$. Formula (162) can be expressed in terms of the leading principal-part matrix $\boldsymbol{N}_{-m}$ as follows

$$\boldsymbol{N}_{-m+2}=\boldsymbol{A}^{+}\boldsymbol{A}^{(1)}\boldsymbol{N}_{-m}\boldsymbol{A}^{(1)}\boldsymbol{A}^{+}-\boldsymbol{A}^{+}\boldsymbol{A}^{[2]}\boldsymbol{N}_{-m}-\boldsymbol{N}_{-m}\boldsymbol{A}^{[2]}\boldsymbol{A}^{+}-\boldsymbol{C}_{\perp}\boldsymbol{S}_{1,1}\boldsymbol{B}_{\perp}'\boldsymbol{A}^{(1)}\boldsymbol{A}^{+}-\boldsymbol{A}^{+}\boldsymbol{A}^{(1)}\boldsymbol{C}_{\perp}\boldsymbol{S}_{1,1}\boldsymbol{B}_{\perp}'+\boldsymbol{C}_{\perp}\boldsymbol{S}_{1,2}\boldsymbol{B}_{\perp}'\qquad(163)$$

Coming back to $\boldsymbol{N}_{-m+1}$, let $m>3$. Then, from (33) and (34) we have for $h=3$

$$\boldsymbol{N}_{-m+3}\boldsymbol{A}+\boldsymbol{N}_{-m+2}\boldsymbol{A}^{(1)}+\boldsymbol{N}_{-m+1}\boldsymbol{A}^{(2)}+\boldsymbol{N}_{-m}\boldsymbol{A}^{(3)}=\boldsymbol{A}\boldsymbol{N}_{-m+3}+\boldsymbol{A}^{(1)}\boldsymbol{N}_{-m+2}+\boldsymbol{A}^{(2)}\boldsymbol{N}_{-m+1}+\boldsymbol{A}^{(3)}\boldsymbol{N}_{-m}=\boldsymbol{0}\qquad(164)$$

Pre- and post-multiply the first equation by $\boldsymbol{C}_{2\perp}^{+}\boldsymbol{C}_{\perp}^{+}$ and $\boldsymbol{C}_{\perp}\boldsymbol{C}_{2\perp}$, respectively, and the second one by $\boldsymbol{B}_{2\perp}'\boldsymbol{B}_{\perp}'$ and $(\boldsymbol{B}_{\perp}')^{+}(\boldsymbol{B}_{2\perp}')^{+}$, respectively. Then, by making use of (161), (156) and (159), a simple computation yields the system

$$\begin{cases}\boldsymbol{S}_{2,1}\boldsymbol{K}_{2}=-\boldsymbol{C}_{2\perp}^{+}\boldsymbol{C}_{\perp}^{+}\boldsymbol{N}_{-m}\boldsymbol{A}^{[3]}\boldsymbol{C}_{\perp}\boldsymbol{C}_{2\perp}\\ \boldsymbol{K}_{2}\boldsymbol{S}_{2,1}=-\boldsymbol{B}_{2\perp}'\boldsymbol{B}_{\perp}'\boldsymbol{A}^{[3]}\boldsymbol{N}_{-m}(\boldsymbol{B}_{\perp}')^{+}(\boldsymbol{B}_{2\perp}')^{+}\end{cases}\qquad(165)$$

as $\boldsymbol{C}_{2\perp}^{+}\boldsymbol{C}_{\perp}^{+}\boldsymbol{\Theta}_{1}=\boldsymbol{0}$ and $\boldsymbol{\Theta}_{1}(\boldsymbol{B}_{\perp}')^{+}(\boldsymbol{B}_{2\perp}')^{+}=\boldsymbol{0}$, with $\boldsymbol{K}_{2}$ and $\boldsymbol{A}^{[3]}$ as defined in (40) and (10). Solving for $\boldsymbol{S}_{2,1}$ and pre- and post-multiplying by $\boldsymbol{C}_{\perp}\boldsymbol{C}_{2\perp}$ and $\boldsymbol{B}_{2\perp}'\boldsymbol{B}_{\perp}'$ yields

$$\boldsymbol{C}_{\perp}\boldsymbol{C}_{2\perp}\boldsymbol{S}_{2,1}\boldsymbol{B}_{2\perp}'\boldsymbol{B}_{\perp}'=-\boldsymbol{\Theta}_{2}\boldsymbol{A}^{[3]}\boldsymbol{N}_{-m}-\boldsymbol{N}_{-m}\boldsymbol{A}^{[3]}\boldsymbol{\Theta}_{2}+\boldsymbol{C}_{\perp}\boldsymbol{C}_{2\perp}\boldsymbol{C}_{3\perp}\boldsymbol{S}_{3,1}\boldsymbol{B}_{3\perp}'\boldsymbol{B}_{2\perp}'\boldsymbol{B}_{\perp}'\qquad(166)$$

for some $\boldsymbol{S}_{3,1}$ and $\boldsymbol{\Theta}_{2}$ given by (14), as $\boldsymbol{N}_{-m}(\boldsymbol{B}_{\perp}')^{+}(\boldsymbol{B}_{2\perp}')^{+}(\boldsymbol{I}-\boldsymbol{K}_{2}\boldsymbol{K}_{2}^{+})\boldsymbol{B}_{2\perp}'\boldsymbol{B}_{\perp}'=\boldsymbol{N}_{-m}$. Accordingly, the representation of $\boldsymbol{N}_{-m+1}$ for $m>3$ becomes

$$\boldsymbol{N}_{-m+1}=-\boldsymbol{A}^{+}\boldsymbol{A}^{(1)}\boldsymbol{N}_{-m}-\boldsymbol{N}_{-m}\boldsymbol{A}^{(1)}\boldsymbol{A}^{+}-\boldsymbol{\Theta}_{1}\boldsymbol{A}^{[2]}\boldsymbol{N}_{-m}-\boldsymbol{N}_{-m}\boldsymbol{A}^{[2]}\boldsymbol{\Theta}_{1}-\boldsymbol{\Theta}_{2}\boldsymbol{A}^{[3]}\boldsymbol{N}_{-m}-\boldsymbol{N}_{-m}\boldsymbol{A}^{[3]}\boldsymbol{\Theta}_{2}+\boldsymbol{C}_{\perp}\boldsymbol{C}_{2\perp}\boldsymbol{C}_{3\perp}\boldsymbol{S}_{3,1}\boldsymbol{B}_{3\perp}'\boldsymbol{B}_{2\perp}'\boldsymbol{B}_{\perp}'\qquad(167)$$

Coming back to $\boldsymbol{N}_{-m+2}$, let $m>3$ and refer to the system (164). Pre- and post-multiplying the first equation by $\boldsymbol{C}_{\perp}^{+}$ and $\boldsymbol{C}_{\perp}$, and the second equation by $\boldsymbol{B}_{\perp}'$ and $(\boldsymbol{B}_{\perp}')^{+}$, gives

$$\begin{cases}\boldsymbol{S}_{1,2}\boldsymbol{K}=-\boldsymbol{C}_{\perp}^{+}\boldsymbol{N}_{-m+1}\boldsymbol{A}^{[2]}\boldsymbol{C}_{\perp}-\boldsymbol{C}_{\perp}^{+}\boldsymbol{N}_{-m}\breve{\boldsymbol{A}}^{[3]}\boldsymbol{C}_{\perp}\\ \boldsymbol{K}\boldsymbol{S}_{1,2}=-\boldsymbol{B}_{\perp}'\boldsymbol{A}^{[2]}\boldsymbol{N}_{-m+1}(\boldsymbol{B}_{\perp}')^{+}-\boldsymbol{B}_{\perp}'\dot{\boldsymbol{A}}^{[3]}\boldsymbol{N}_{-m}(\boldsymbol{B}_{\perp}')^{+}\end{cases}\qquad(168)$$

where

$$\breve{\boldsymbol{A}}^{[3]}=\boldsymbol{A}^{(3)}-\boldsymbol{A}^{(2)}\boldsymbol{A}^{+}\boldsymbol{A}^{(1)}=\boldsymbol{A}^{[3]}+\boldsymbol{A}^{(1)}\boldsymbol{A}^{+}\boldsymbol{A}^{[2]}+\boldsymbol{A}^{[2]}\boldsymbol{\Theta}_{1}\boldsymbol{A}^{[2]}\qquad(169)$$

$$\dot{\boldsymbol{A}}^{[3]}=\boldsymbol{A}^{(3)}-\boldsymbol{A}^{(1)}\boldsymbol{A}^{+}\boldsymbol{A}^{(2)}=\boldsymbol{A}^{[3]}+\boldsymbol{A}^{[2]}\boldsymbol{A}^{+}\boldsymbol{A}^{(1)}+\boldsymbol{A}^{[2]}\boldsymbol{\Theta}_{1}\boldsymbol{A}^{[2]}\qquad(170)$$

Solving for $\boldsymbol{S}_{1,2}$ and pre- and post-multiplying by $\boldsymbol{C}_{\perp}$ and $\boldsymbol{B}_{\perp}'$ gives

$$\boldsymbol{C}_{\perp}\boldsymbol{S}_{1,2}\boldsymbol{B}_{\perp}'=-\boldsymbol{\Theta}_{1}\boldsymbol{A}^{[2]}\boldsymbol{N}_{-m+1}(\boldsymbol{B}_{\perp}')^{+}(\boldsymbol{I}-\boldsymbol{K}\boldsymbol{K}^{+})\boldsymbol{B}_{\perp}'-\boldsymbol{\Theta}_{1}\dot{\boldsymbol{A}}^{[3]}\boldsymbol{N}_{-m}-\boldsymbol{C}_{\perp}\boldsymbol{C}_{\perp}^{+}\boldsymbol{N}_{-m+1}\boldsymbol{A}^{[2]}\boldsymbol{\Theta}_{1}-\boldsymbol{N}_{-m}\breve{\boldsymbol{A}}^{[3]}\boldsymbol{\Theta}_{1}+\boldsymbol{C}_{\perp}\boldsymbol{C}_{2\perp}\boldsymbol{S}_{2,2}\boldsymbol{B}_{2\perp}'\boldsymbol{B}_{\perp}'\qquad(171)$$

for some $\boldsymbol{S}_{2,2}$, as $\boldsymbol{N}_{-m}(\boldsymbol{B}_{\perp}')^{+}(\boldsymbol{I}-\boldsymbol{K}\boldsymbol{K}^{+})\boldsymbol{B}_{\perp}'=\boldsymbol{N}_{-m}$ and $\boldsymbol{C}_{\perp}\boldsymbol{C}_{\perp}^{+}\boldsymbol{N}_{-m}=\boldsymbol{N}_{-m}$. Accordingly, the representation of $\boldsymbol{N}_{-m+2}$ when $m>3$ is

$$\boldsymbol{N}_{-m+2}=-\boldsymbol{A}^{+}\boldsymbol{A}^{(1)}\boldsymbol{N}_{-m+1}\bar{\boldsymbol{A}}-\boldsymbol{A}^{+}\boldsymbol{A}^{[2]}\boldsymbol{N}_{-m}-\boldsymbol{A}^{+}\boldsymbol{A}^{(1)}\boldsymbol{A}^{+}\boldsymbol{A}^{(1)}\boldsymbol{N}_{-m}-\boldsymbol{N}_{-m+1}\boldsymbol{A}^{(1)}\boldsymbol{A}^{+}-\boldsymbol{N}_{-m}\boldsymbol{A}^{[2]}\boldsymbol{A}^{+}-\boldsymbol{N}_{-m}\boldsymbol{A}^{(1)}\boldsymbol{A}^{+}\boldsymbol{A}^{(1)}\boldsymbol{A}^{+}-\boldsymbol{\Theta}_{1}\boldsymbol{A}^{[2]}\boldsymbol{N}_{-m+1}(\boldsymbol{B}_{\perp}')^{+}(\boldsymbol{I}-\boldsymbol{K}\boldsymbol{K}^{+})\boldsymbol{B}_{\perp}'-\boldsymbol{\Theta}_{1}\dot{\boldsymbol{A}}^{[3]}\boldsymbol{N}_{-m}-\boldsymbol{C}_{\perp}\boldsymbol{C}_{\perp}^{+}\boldsymbol{N}_{-m+1}\boldsymbol{A}^{[2]}\boldsymbol{\Theta}_{1}-\boldsymbol{N}_{-m}\breve{\boldsymbol{A}}^{[3]}\boldsymbol{\Theta}_{1}+\boldsymbol{C}_{\perp}\boldsymbol{C}_{2\perp}\boldsymbol{S}_{2,2}\boldsymbol{B}_{2\perp}'\boldsymbol{B}_{\perp}'\qquad(172)$$

Finally, as for $\boldsymbol{N}_{-m+3}$, let $m>3$ and refer to system (164). Solving for $\boldsymbol{N}_{-m+3}$ yields

$$\boldsymbol{N}_{-m+3}=-\boldsymbol{A}^{+}\boldsymbol{A}^{(1)}\boldsymbol{N}_{-m+2}-\boldsymbol{A}^{+}\boldsymbol{A}^{(2)}\boldsymbol{N}_{-m+1}-\boldsymbol{A}^{+}\boldsymbol{A}^{(3)}\boldsymbol{N}_{-m}-\boldsymbol{N}_{-m+2}\boldsymbol{A}^{(1)}\boldsymbol{A}^{+}-\boldsymbol{N}_{-m+1}\boldsymbol{A}^{(2)}\boldsymbol{A}^{+}-\boldsymbol{N}_{-m}\boldsymbol{A}^{(3)}\boldsymbol{A}^{+}+\boldsymbol{A}^{+}\boldsymbol{A}\boldsymbol{N}_{-m+2}\boldsymbol{A}^{(1)}\boldsymbol{A}^{+}+\boldsymbol{A}^{+}\boldsymbol{A}\boldsymbol{N}_{-m+1}\boldsymbol{A}^{(2)}\boldsymbol{A}^{+}+\boldsymbol{C}_{\perp}\boldsymbol{S}_{1,3}\boldsymbol{B}_{\perp}'\qquad(173)$$

which can be alternatively expressed as follows

$$\boldsymbol{N}_{-m+3}=-\boldsymbol{A}^{+}\boldsymbol{A}^{(1)}\boldsymbol{N}_{-m+2}\bar{\boldsymbol{A}}-\boldsymbol{A}^{+}\boldsymbol{A}^{[2]}\boldsymbol{N}_{-m+1}\bar{\boldsymbol{A}}-\boldsymbol{A}^{+}\widetilde{\boldsymbol{A}}^{(3)}\boldsymbol{N}_{-m}\bar{\boldsymbol{A}}-\boldsymbol{A}^{+}\boldsymbol{A}^{[2]}\boldsymbol{N}_{-m+1}-\boldsymbol{A}^{+}\widetilde{\boldsymbol{A}}^{(3)}\boldsymbol{N}_{-m}-\boldsymbol{N}_{-m+2}\boldsymbol{A}^{(1)}\boldsymbol{A}^{+}-\boldsymbol{N}_{-m+1}\boldsymbol{A}^{[2]}\boldsymbol{A}^{+}-\boldsymbol{N}_{-m+1}\widetilde{\boldsymbol{A}}^{(3)}\boldsymbol{A}^{+}-\boldsymbol{N}_{-m}\boldsymbol{A}^{[2]}\boldsymbol{A}^{+}-\boldsymbol{N}_{-m}\widetilde{\boldsymbol{A}}^{(3)}\boldsymbol{A}^{+}+\boldsymbol{C}_{\perp}\boldsymbol{S}_{1,3}\boldsymbol{B}_{\perp}'\qquad(174)$$

or in terms of the leading principal-part matrix $\boldsymbol{N}_{-m}$ as

$$\boldsymbol{N}_{-m+3}=-\boldsymbol{A}^{+}\boldsymbol{A}^{[3]}\boldsymbol{N}_{-m}-\boldsymbol{N}_{-m}\boldsymbol{A}^{[3]}\boldsymbol{A}^{+}+\boldsymbol{A}^{+}\boldsymbol{A}^{(1)}\boldsymbol{N}_{-m}\boldsymbol{A}^{[2]}\boldsymbol{A}^{+}+\boldsymbol{A}^{+}\boldsymbol{A}^{[2]}\boldsymbol{N}_{-m}\boldsymbol{A}^{(1)}\boldsymbol{A}^{+}+\boldsymbol{A}^{+}\boldsymbol{A}^{[2]}\boldsymbol{\Theta}_{1}\boldsymbol{A}^{[2]}\boldsymbol{N}_{-m}-\boldsymbol{N}_{-m}\boldsymbol{A}^{[2]}\boldsymbol{\Theta}_{1}\boldsymbol{A}^{[2]}\boldsymbol{A}^{+}-\boldsymbol{A}^{+}\boldsymbol{A}^{(1)}\boldsymbol{C}_{\perp}\boldsymbol{S}_{1,2}\boldsymbol{B}_{\perp}'+\boldsymbol{A}^{+}\boldsymbol{A}^{(1)}\boldsymbol{C}_{\perp}\boldsymbol{S}_{1,1}\boldsymbol{B}_{\perp}'\boldsymbol{A}^{(1)}\boldsymbol{A}^{+}-\boldsymbol{C}_{\perp}\boldsymbol{S}_{1,1}\boldsymbol{B}_{\perp}'\boldsymbol{A}^{[2]}\boldsymbol{A}^{+}-\boldsymbol{A}^{+}\boldsymbol{A}^{[2]}\boldsymbol{C}_{\perp}\boldsymbol{S}_{1,1}\boldsymbol{B}_{\perp}'-\boldsymbol{C}_{\perp}\boldsymbol{S}_{1,2}\boldsymbol{B}_{\perp}'\boldsymbol{A}^{(1)}\boldsymbol{A}^{+}+\boldsymbol{C}_{\perp}\boldsymbol{S}_{1,3}\boldsymbol{B}_{\perp}'\qquad(175)$$

$\square$

References
Anderson Jr, W. N. and Duffin, R. J. (1969). Series and parallel addition of matrices. Journal of Mathematical Analysis and Applications, 26(3):576–594.

Avrachenkov, K. E., Haviv, M., and Howlett, P. G. (2001). Inversion of analytic matrix functions that are singular at the origin. SIAM Journal on Matrix Analysis and Applications, 22(4):1175–1189.

Banerjee, A., Dolado, J. J., Galbraith, J. W., Hendry, D., et al. (1993). Co-integration, Error Correction, and the Econometric Analysis of Non-stationary Data. Oxford University Press.

Beare, B. K., Seo, J., and Seo, W.-K. (2017). Cointegrated linear processes in Hilbert space. Journal of Time Series Analysis, 38(6):1010–1027.

Beare, B. K. and Seo, W.-K. (2019). Representation of I(1) and I(2) autoregressive Hilbertian processes. Econometric Theory, 36(5):773–802.

Berkics, P. (2017). On parallel sum of matrices. Linear and Multilinear Algebra, 65(10):2114–2123.

Bernstein, D. S. (2009). Matrix Mathematics: Theory, Facts, and Formulas. Princeton University Press.

Dhrymes, P. (1971). Distributed Lags: Problems of Formulation and Estimation. Holden Day, San Francisco.

Engle, R. and Yoo, B. (1991). Cointegrated economic time series: An overview with new results. In Engle, R. F. and Granger, C. W. J., editors, Long-Run Economic Relationships: Readings in Cointegration, pages 237–266.

Engle, R. F. and Granger, C. W. (1987). Co-integration and error correction: Representation, estimation, and testing. Econometrica, 55(2):251–276.

Faliva, M. and Zoia, M. G. (2002). On a partitioned inversion formula having useful applications in econometrics. Econometric Theory, 18:525–530.

Faliva, M. and Zoia, M. G. (2003). A new proof of the representation theorem for I(2) processes. Journal of Interdisciplinary Mathematics, 6:331–347.

Faliva, M. and Zoia, M. G. (2009). Dynamic Model Analysis: Advanced Matrix Methods and Unit-Root Econometrics Representation Theorems. Springer, Heidelberg.

Faliva, M. and Zoia, M. G. (2011). An inversion formula for a matrix polynomial about a (unit) root. Linear and Multilinear Algebra, 59(5):541–556.

Franchi, M. and Paruolo, P. (2019a). Cointegration in functional autoregressive processes. Econometric Theory, 35:1–37.

Franchi, M. and Paruolo, P. (2019b). A general inversion theorem for cointegration. Econometric Reviews, 38(10):1176–1201.

Granger, C. W. (1981). Some properties of time series data and their use in econometric model specification. Journal of Econometrics, 16(1):121–130.

Granger, C. W. (1983). Co-integrated variables and error-correcting models. UCSD Discussion Paper 83-13.

Haldrup, N. and Salmon, M. (1998). Representations of I(2) cointegrated systems using the Smith-McMillan form. Journal of Econometrics, 84(2):303–325.

Johansen, S. (1985). The mathematical structure of error correction models. Technical report, Johns Hopkins University, Baltimore MD, Department of Mathematical Sciences.

Johansen, S. (1991). Estimation and hypothesis testing of cointegration vectors in Gaussian vector autoregressive models. Econometrica, 59(6):1551–1580.

Johansen, S. (1992). A representation of vector autoregressive processes integrated of order 2. Econometric Theory, 8(2):188–202.

Johansen, S. (1996). Likelihood-Based Inference in Cointegrated Vector Autoregressive Models. Oxford University Press, Oxford.

Langenhop, C. (1971). The Laurent expansion for a nearly singular matrix. Linear Algebra and its Applications, 4(4):329–340.

Phillips, P. C. (1991). Optimal inference in cointegrated systems. Econometrica, 59(2):283–306.

Phillips, P. C. and Hansen, B. E. (1990). Statistical inference in instrumental variables regression with I(1) processes. The Review of Economic Studies, 57(1):99–125.

Piziak, R., Odell, P., and Hahn, R. (1999). Constructing projections on sums and intersections. Computers & Mathematics with Applications, 37(1):67–74.

Rao, C. R. and Mitra, S. K. (1973). Generalized Inverse of a Matrix and its Applications. Wiley, New York.

Schumacher, J. M. (1991). System-theoretic trends in econometrics. In Mathematical System Theory, pages 559–577. Springer, Heidelberg.

Sims, C. A., Stock, J. H., and Watson, M. W. (1990). Inference in linear time series models with some unit roots. Econometrica, 58(1):113–144.

Stock, J. H. and Watson, M. W. (1993). A simple estimator of cointegrating vectors in higher order integrated systems. Econometrica, 61(4):783–820.

Tian, Y. (2002). How to express a parallel sum of k matrices. Journal of Mathematical Analysis and Applications, 266:333–341.

Tian, Y. and Styan, G. P. (2006). Rank equalities for idempotent matrices with applications.