Regularized Solutions to Linear Rational Expectations Models
Majid M. Al-Sadoon∗
Durham University Business School
October 28, 2020
Abstract
This paper proposes an algorithm for computing regularized solutions to linear rational expectations models. The algorithm allows for regularization cross-sectionally as well as across frequencies. A variety of numerical examples illustrate the advantage of regularization.

JEL Classification: C62, C63, E00.
Keywords: Linear rational expectations model, regularization, indeterminacy, computational methods.

∗ Thanks are due to Thomas Lubik, Davide Debortoli and Fei Tan for helpful comments and suggestions.

1 Introduction
Recently, Al-Sadoon (2020) showed that non-unique solutions to multivariate linear rational expectations models (LREMs) are not generally continuous with respect to their parameters, invalidating crucial assumptions for both frequentist and Bayesian methods. For frequentist analyses, the objective function (e.g. the likelihood function) has to be at least continuous. For Bayesian analysis, the posterior cannot have atoms at unknown locations. Al-Sadoon (2020) demonstrated that these two conditions are not guaranteed under current methodology and proposed a regularization solution.

Regularization is a method for selecting from among infinitely many solutions to an LREM a unique solution that accords with prior information that the researcher may have about what a solution should look like (e.g. that its spectral density should concentrate in the range of observed business cycles). Al-Sadoon (2020) provided a theoretical analysis of regularization. The aim of this paper is to provide an algorithm for computing such solutions based on the Sims (2002) framework.

This work is related to several more recent works. The main result of this paper builds on Lubik & Schorfheide (2003) and Al-Sadoon (2018). Farmer et al. (2015) and Bianchi & Nicolò (2019) provide alternative parametrizations of solutions to LREMs to Lubik & Schorfheide (2003). Funovits (2017) counts the dimension of the solution space of a given LREM. This paper can also be seen as part of the recent interest in frequency domain analysis of LREMs as exemplified by Onatski (2006), Tan & Walker (2015), and Tan (2019). Such methods have found important applications in addressing the identification problem for LREMs, as seen in Komunjer & Ng (2011), Qu & Tkachenko (2017), Kociecki & Kolasa (2018), and Al-Sadoon & Zwiernik (2019).

This paper is organized as follows. Section 2 reviews results of Sims (2002) and Lubik & Schorfheide (2003). Section 3 shows how regularization can be achieved and provides the main result of this paper.
Section 4 provides illustrative examples of how regularization works. Section 5 concludes. The Matlab code for reproducing the computations presented in this paper can be found in the accompanying file, regular.zip.

2 Solutions to Linear Rational Expectations Models

We begin by reviewing results developed by Sims (2002) and Lubik & Schorfheide (2003). This is necessary in order to set the notation and obtain the basic ingredients that we will need. Because regularization is only defined in a stationary context, we will restrict attention to covariance stationary solutions.

Definition 1. Given (Γ_0, Γ_1, Ψ, Π) ∈ R^{n×n} × R^{n×n} × R^{n×l} × R^{n×k}, an l-dimensional i.i.d. process z of mean zero and finite and positive definite variance matrix, and the formal LREM

Γ_0 y(t) = Γ_1 y(t-1) + Ψ z(t) + Π η(t), t ∈ Z, (1)

a solution to (1) is a pair (y, η) such that:
(i) y is an n-dimensional process such that y(t) is measurable with respect to z(t), z(t-1), ... for all t ∈ Z.
(ii) η is a k-dimensional martingale difference sequence with respect to z. That is, η(t) is measurable with respect to z(t), z(t-1), ... and E_t η(t+1) = 0 almost surely for all t ∈ Z, where E_t(·) = E(· | z(t), z(t-1), ...).
(iii) The process (z, y, η) is jointly covariance stationary.
(iv) The pair satisfies equations (1) almost surely.

A solution (y, η) is unique if for every other solution (ỹ, η̃), y(t) = ỹ(t) almost surely for all t ∈ Z. (For ease of exposition, we will drop the "almost surely" in the subsequent analysis.)

Assuming, as Sims (2002) does, that det(Γ_0 + Γ_1 x) is not identically zero (i.e. it is impossible to cancel out y by elementary algebraic operations), then by Theorem VI.1.9 and Exercise VI.1.3 of Stewart & Sun (1990), there are orthogonal matrices Q, Z ∈ R^{n×n} such that Q Γ_0 Z and Q Γ_1 Z are block upper triangular with either 1×1 or 2×2 diagonal blocks. If, moreover, det(Γ_0 + Γ_1 x) ≠ 0 for all x ∈ C with |x| = 1 (i.e. the aforementioned cancellation is impossible and there are no unit roots in the system), then these matrices can be partitioned conformably as

Q Γ_0 Z = [Λ_{11} Λ_{12}; 0 Λ_{22}], Q Γ_1 Z = [Ω_{11} Ω_{12}; 0 Ω_{22}], (2)

where the polynomial det(Λ_{11} + Ω_{11} x) has all its zeros outside the unit circle (this implies that Λ_{11} is non-singular), and the polynomial det(Λ_{22} + Ω_{22} x) has all its zeros inside the unit circle (this implies that Ω_{22} is non-singular). As shown in the online appendix to Al-Sadoon (2018), this step is an implicit Wiener-Hopf factorization.
Note that Sims (2002) and Lubik & Schorfheide (2003) use the complex QZ decomposition but never explain how the final answer is real; using the real QZ decomposition obviates any need for such a discussion.

Now suppose (y, η) is a solution to (1), define w(t) = Z' y(t), and rewrite the system as

Λ w(t) = Ω w(t-1) + Q Ψ z(t) + Q Π η(t), t ∈ Z,

where Λ = Q Γ_0 Z and Ω = Q Γ_1 Z. If we partition w(t) = (w_1(t); w_2(t)) conformably with (2), then

Λ_{22} w_2(t) = Ω_{22} w_2(t-1) + Q_{2·} Ψ z(t) + Q_{2·} Π η(t), t ∈ Z, (3)

where Q = (Q_{1·}; Q_{2·}) is partitioned conformably with (2). Applying the conditional expectation E_{t-1}, we obtain

w_2(t-1) = Ω_{22}^{-1} Λ_{22} E_{t-1} w_2(t), t ∈ Z.

This implies that w_2(t) = (Ω_{22}^{-1} Λ_{22})^{s-t} E_t w_2(s) for s ≥ t. Therefore,

E‖w_2(t)‖² ≤ ‖(Ω_{22}^{-1} Λ_{22})^{s-t}‖² E‖E_t w_2(s)‖² ≤ ‖(Ω_{22}^{-1} Λ_{22})^{s-t}‖² E‖w_2(s)‖², s ≥ t,

where we have used the fact that E‖E_t w_2(s)‖² ≤ E(E_t‖w_2(s)‖²) = E‖w_2(s)‖² (Williams, 1991, Theorem 9.7). The covariance stationarity of y implies that E‖w_2(t)‖² = E‖w_2(s)‖². Since our choice of QZ decomposition ensures that the eigenvalues of Ω_{22}^{-1} Λ_{22} are inside the unit circle, ‖(Ω_{22}^{-1} Λ_{22})^{s-t}‖ → 0 as s → ∞, and then it must be the case that E‖w_2(t)‖² = E‖w_2(s)‖² = 0. Therefore,

w_2(t) = 0, t ∈ Z.

Now plugging this back into (3) we have that

Q_{2·} Ψ z(t) + Q_{2·} Π η(t) = 0, t ∈ Z.

Multiplying on the right by z'(t), taking expectations, and utilizing the joint covariance stationarity of η and z, we arrive at

Q_{2·} Ψ E(z(0) z'(0)) + Q_{2·} Π E(η(0) z'(0)) = 0.

But since E(z(0) z'(0)) is invertible by assumption, a necessary condition for existence is

im(Q_{2·} Ψ) ⊆ im(Q_{2·} Π). (4)

It also follows that (Q_{2·} Π)† Q_{2·} Ψ z(t) + η(t) ∈ ker(Q_{2·} Π) for t ∈ Z, where (Q_{2·} Π)† is the Moore-Penrose generalized inverse of Q_{2·} Π, and

E_{t-1}( (Q_{2·} Π)† Q_{2·} Ψ z(t) + η(t) ) = 0, t ∈ Z.

Thus, for a given matrix K whose columns form a basis for ker(Q_{2·} Π), there is a martingale difference sequence with respect to z, denoted by ν, such that

K ν(t) = (Q_{2·} Π)† Q_{2·} Ψ z(t) + η(t), t ∈ Z.

Every solution is therefore representable as

y(t) = Θ y(t-1) + Θ_z z(t) + Θ_ν ν(t), η(t) = K ν(t) - (Q_{2·} Π)† Q_{2·} Ψ z(t), t ∈ Z, (5)

with

Θ = Z [Λ_{11}^{-1} Ω_{11} 0; 0 0] Z', Θ_z = Z [Λ_{11}^{-1} (Q_{1·} Ψ - Q_{1·} Π (Q_{2·} Π)† Q_{2·} Ψ); 0], Θ_ν = Z [Λ_{11}^{-1} Q_{1·} Π; 0] K.

Note that ν enters into the system along rank(Q_{1·} Π K) independent directions, what Funovits (2017) calls the dimension of indeterminacy.

In fact, (4) is not only necessary but also sufficient for existence. To see this, simply construct the pair (y, η) from (5) with ν set to the zero process; it is easily checked that this pair is a solution to (1).

Turning now to uniqueness, we see that the arbitrary ν plays no role in the solution if and only if Θ_ν = 0 or, equivalently, if and only if Q_{1·} Π K = 0, which can be expressed as

ker(Q_{2·} Π) ⊆ ker(Q_{1·} Π). (6)

Since (5), generated with ν(t) = A z(t) for t ∈ Z, defines a solution for any matrix A, it must be that (6) is necessary and sufficient for uniqueness. Note that Sims (2002) expresses (6) equivalently in terms of the row spaces of Q_{1·} Π and Q_{2·} Π.

To summarize, we have proven the following.
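Computationally, the construction of (Θ, Θ_z, Θ_ν) above amounts to a few lines of linear algebra. The following is a sketch in Python/SciPy rather than the paper's Matlab code, with illustrative names; it uses SciPy's ordered real QZ, reordering the stable block first, and assumes the existence condition (4) holds:

```python
import numpy as np
from scipy.linalg import ordqz, null_space

def solution_matrices(G0, G1, Psi, Pi):
    """Theta, Theta_z, Theta_nu of representation (5), via ordered real QZ.
    A sketch only; assumes the existence condition (4) holds."""
    n, k = G0.shape[0], Pi.shape[1]
    # SciPy convention: G0 = Q @ Lam @ Z.T, G1 = Q @ Omg @ Z.T; generalized
    # eigenvalues alpha/beta solve det(G0 - lam*G1) = 0.  'ouc' orders
    # |alpha| > |beta| first, i.e. the stable block of the system.
    Lam, Omg, alpha, beta, Q, Z = ordqz(G0, G1, sort='ouc', output='real')
    n1 = int(np.sum(np.abs(alpha) > np.abs(beta)))   # stable block size
    Qp = Q.T                                # the paper's Q: Qp @ G0 @ Z = Lam
    Q1, Q2 = Qp[:n1], Qp[n1:]
    Q2Pi = Q2 @ Pi
    K = null_space(Q2Pi) if n1 < n else np.eye(k)    # basis of ker(Q2 Pi)
    L11inv = np.linalg.inv(Lam[:n1, :n1])
    def lift(M):                            # builds Z [Lam11^{-1} M; 0]
        out = np.zeros((n, M.shape[1]))
        out[:n1] = L11inv @ M
        return Z @ out
    Theta = np.zeros((n, n))
    Theta[:n1, :n1] = L11inv @ Omg[:n1, :n1]
    Theta = Z @ Theta @ Z.T
    Theta_z = lift(Q1 @ Psi - Q1 @ Pi @ np.linalg.pinv(Q2Pi) @ Q2 @ Psi)
    Theta_nu = lift(Q1 @ Pi) @ K
    return Theta, Theta_z, Theta_nu
```

For the Cagan example of Section 4 below, this construction yields Θ = [0 1; 0 0.5], Θ_z = (0, -0.5)' and Θ_ν = (1, 0.5)'.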
Theorem 1 (Sims (2002)). Let det(Γ_0 + Γ_1 x) ≠ 0 for all x ∈ C with |x| = 1. A solution to (1) exists if and only if (4) holds. A solution is unique if and only if (6) holds.

3 Regularization

Current methodology utilizes (5) or variants thereof. Al-Sadoon (2020) has demonstrated that these solutions can be discontinuous (as we will see shortly) and proposed using regularized solutions to ensure continuity. We now turn to the problem of computing such solutions.

We begin with the basic setting. Suppose a symmetric positive semi-definite matrix W ∈ R^{n×n} is given and we are interested in selecting, among all solutions to (1), one that minimizes

E‖W^{1/2} y(0)‖² = E(y'(0) W y(0)) = tr(W E(y(0) y'(0))).

If the solution to (1) is unique, there is nothing to solve for. If not, it will be convenient in the subsequent computations to introduce the martingale difference sequence, ζ, defined as

ν(t) = B z(t) + ζ(t), E(z(t) ζ'(t)) = 0, t ∈ Z.

The process ζ is the residual from regressing ν(t) on z(t). This implies that

E(y(0) y'(0)) = Σ_{j=0}^∞ Θ^j (Θ_z Σ_zz Θ_z' + Θ_z Σ_zz B' Θ_ν' + Θ_ν B Σ_zz Θ_z' + Θ_ν B Σ_zz B' Θ_ν' + Θ_ν C C' Θ_ν') Θ^{j'},

where C C' = E(ζ(0) ζ'(0)) and Σ_zz = E(z(0) z'(0)). Thus, finding a regularized solution is equivalent to minimizing

L = (1/2) tr W Σ_{j=0}^∞ Θ^j (Θ_z Σ_zz Θ_z' + Θ_z Σ_zz B' Θ_ν' + Θ_ν B Σ_zz Θ_z' + Θ_ν B Σ_zz B' Θ_ν' + Θ_ν C C' Θ_ν') Θ^{j'}

with respect to B and C. Using the properties of the trace of a product of matrices,

L = (1/2) tr((Θ_z Σ_zz Θ_z' + Θ_z Σ_zz B' Θ_ν' + Θ_ν B Σ_zz Θ_z' + Θ_ν B Σ_zz B' Θ_ν' + Θ_ν C C' Θ_ν') Ξ),

where

Ξ = Σ_{j=0}^∞ Θ^{j'} W Θ^j.

Note that Ξ is the unique solution to the Lyapunov equation

Ξ = Θ' Ξ Θ + W.

See Section B.1.8 of Lindquist & Picci (2015). Taking the gradient of L, we obtain the following first order conditions:

Θ_ν' Ξ (Θ_z + Θ_ν B*) = 0, Θ_ν' Ξ Θ_ν C* = 0.

If Θ_ν' Ξ Θ_ν is invertible, there exists a unique regularized solution determined by

B* = -(Θ_ν' Ξ Θ_ν)^{-1} Θ_ν' Ξ Θ_z, C* = 0.
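Given (Θ, Θ_z, Θ_ν) and a constant weight matrix W, the computation reduces to one discrete Lyapunov equation and one linear solve. A minimal Python/SciPy sketch of the invertible case (illustrative names; the paper's own implementation is in Matlab):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def regularized_B(Theta, Theta_z, Theta_nu, W):
    """B* = -(Theta_nu' Xi Theta_nu)^{-1} Theta_nu' Xi Theta_z, where Xi
    solves the Lyapunov equation Xi = Theta' Xi Theta + W (invertible case)."""
    # solve_discrete_lyapunov solves X = A X A' + Q; passing A = Theta'
    # gives exactly Xi = Theta' Xi Theta + W.
    Xi = solve_discrete_lyapunov(Theta.T, W)
    M = Theta_nu.T @ Xi @ Theta_nu
    B = -np.linalg.solve(M, Theta_nu.T @ Xi @ Theta_z)
    Theta_reg = Theta_z + Theta_nu @ B   # z-loading of the regularized solution
    return B, Theta_reg
```

For the Cagan example of Section 4, with Θ = [0 1; 0 0.5], Θ_z = (0, -0.5)', Θ_ν = (1, 0.5)' (derived by hand) and W = diag(1, 0), this yields B* = 0.25 and Θ_reg = (0.25, -0.375)'.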
If Θ_ν' Ξ Θ_ν is not invertible, there are infinitely many regularized solutions determined by

B* = -(Θ_ν' Ξ Θ_ν)† Θ_ν' Ξ Θ_z + X, C* = Y,

for arbitrary X and Y of the appropriate sizes such that im(X), im(Y) ⊆ ker(Θ_ν' Ξ Θ_ν).

We have established the following. First, a regularized solution to (1) exists if and only if solutions to (1) exist. Second, the regularized solution is unique if and only if either the solution to (1) is unique, in which case the regularized solution is the unique solution,

y(t) = Θ y(t-1) + Θ_z z(t), η(t) = -(Q_{2·} Π)† Q_{2·} Ψ z(t), t ∈ Z, (7)

or Θ_ν' Ξ Θ_ν is invertible, in which case the regularized solution has the representation,

y(t) = Θ y(t-1) + Θ_reg z(t), η(t) = -( K (Θ_ν' Ξ Θ_ν)^{-1} Θ_ν' Ξ Θ_z + (Q_{2·} Π)† Q_{2·} Ψ ) z(t), t ∈ Z, (8)

where Θ_reg = (I - Θ_ν (Θ_ν' Ξ Θ_ν)^{-1} Θ_ν' Ξ) Θ_z.

The intuition of this result is quite simple. Write

Θ_ν' Ξ Θ_ν = [Θ_ν' W^{1/2'}  Θ_ν' Θ' W^{1/2'}  Θ_ν' Θ^{2'} W^{1/2'}  ···] [W^{1/2} Θ_ν; W^{1/2} Θ Θ_ν; W^{1/2} Θ² Θ_ν; ⋮].

Now if W^{1/2} Θ_ν is of full column rank, then the regularized solution is unique. That is, if W attaches non-trivial weight to every contemporaneous instance of indeterminacy, then regularization eliminates indeterminacy. More generally, we have proven that regularization eliminates indeterminacy if and only if W attaches non-trivial weight to every instance of indeterminacy, whether contemporaneous or lagged. From a linear systems point of view, regularization leads to uniqueness if and only if the triple (Θ, Θ_ν, W^{1/2}) is input observable (Sain & Massey, 1969), which is to say, again, that the weight matrix detects all of the indeterminacy in the system.

The analysis above suggests a generalization of the basic setting. We have constructed an algorithm for minimizing

E‖W^{1/2} y(0)‖² = tr( (1/2π) ∫_{-π}^{π} W f(ω) dω ),

where f is the spectral density of y. The above expression allows us to choose different weights along the cross-section of y. More generally, we may consider choosing weights on frequencies of oscillation of y. In particular, we may consider minimizing

L = (1/2) tr( (1/2π) ∫_{-π}^{π} W(ω) f(ω) dω ), (9)

where W is a bounded measurable function, with W(ω) Hermitian positive semi-definite and W(ω)* = W(-ω)' for all ω ∈ (-π, π].
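For a frequency-dependent weight W(ω), the Lyapunov-equation solution for Ξ is replaced by the integral Ξ = (1/2π) ∫_{-π}^{π} (I - Θ' e^{iω})^{-1} W(ω) (I - Θ e^{-iω})^{-1} dω. Since the integrand is smooth and periodic, a Riemann sum on a uniform grid converges quickly; a sketch (illustrative names, not the paper's code):

```python
import numpy as np

def xi_frequency(Theta, W_of_omega, n_grid=2048):
    """Approximate Xi = (1/2pi) * integral over (-pi, pi] of
    (I - Theta' e^{iw})^{-1} W(w) (I - Theta e^{-iw})^{-1} dw
    by a Riemann sum on a uniform frequency grid."""
    n = Theta.shape[0]
    I = np.eye(n)
    Xi = np.zeros((n, n), dtype=complex)
    for w in np.linspace(-np.pi, np.pi, n_grid, endpoint=False):
        left = np.linalg.inv(I - Theta.T * np.exp(1j * w))
        right = np.linalg.inv(I - Theta * np.exp(-1j * w))
        Xi += left @ W_of_omega(w) @ right
    # step size is 2*pi/n_grid, so (1/2pi) * sum * step = sum / n_grid;
    # the imaginary part vanishes in the limit.
    return Xi.real / n_grid
```

With a constant weight, W_of_omega = lambda w: W, this reproduces the Lyapunov-equation solution of the previous section, which provides a convenient accuracy check.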
If, for example, we would like to impose that the solution should display the frequency characteristics of the business cycle, we could choose

W(ω) = 0 for π/16 ≤ |ω| ≤ π/2, and W(ω) = I otherwise,

which penalizes oscillations of period smaller than a year and greater than eight years in quarterly data. To that end, we first note that

f(ω) = (I - Θ e^{-iω})^{-1} (Θ_z Σ_zz Θ_z' + Θ_z Σ_zz B' Θ_ν' + Θ_ν B Σ_zz Θ_z' + Θ_ν B Σ_zz B' Θ_ν' + Θ_ν C C' Θ_ν') (I - Θ' e^{iω})^{-1}.

This implies that

L = (1/2) tr((Θ_z Σ_zz Θ_z' + Θ_z Σ_zz B' Θ_ν' + Θ_ν B Σ_zz Θ_z' + Θ_ν B Σ_zz B' Θ_ν' + Θ_ν C C' Θ_ν') Ξ),

where

Ξ = (1/2π) ∫_{-π}^{π} (I - Θ' e^{iω})^{-1} W(ω) (I - Θ e^{-iω})^{-1} dω.

It is easily checked that Ξ is a real symmetric positive semi-definite matrix and that it reduces to our previous expression when W(ω) is constant. Following the same line of argument as above, we arrive finally at the main result of the paper.

Theorem 2. Let det(Γ_0 + Γ_1 x) ≠ 0 for all x ∈ C with |x| = 1. A regularized solution to (1) that minimizes (9) exists if and only if (4) holds. A regularized solution is unique if and only if either (6) holds, in which case it is represented as (7), or Θ_ν' Ξ Θ_ν is invertible, in which case it is represented as (8).

4 Examples

4.1 The Cagan Model

Consider first the Cagan model with mean zero, independent, and identically distributed shocks,

X_t = 2 E_t X_{t+1} + ε_t, t ∈ Z.

There are infinitely many solutions to this system. To compute the regularized solution minimizing E X_t², we reformulate this model as

y(t) = (X_t; E_t X_{t+1}), z(t) = ε_t, η(t) = X_t - E_{t-1} X_t, t ∈ Z,

with
Γ_0 = [1 -2; 1 0], Γ_1 = [0 0; 0 1], Ψ = [1; 0], Π = [0; 1], W = [1 0; 0 0].

This implies that

Θ = [0 1; 0 0.5], Θ_reg = [0.25; -0.375].

Solving for the first element, we obtain X_t = 0.25 (1 - 2L)/(1 - 0.5L) ε_t, which was obtained analytically in Al-Sadoon (2020). This regularized solution is actually a white noise process and therefore has a flat spectral density. We may instead impose that the solution avoid empirically unlikely frequencies. If we use the weight matrix

W(ω) = 0 for π/16 ≤ |ω| ≤ π/2, and W(ω) = W otherwise,

we obtain a different regularized solution with the spectral density plotted in Figure 1.

Figure 1: Regularized Solutions to the Cagan Model.
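The white-noise property of the constant-weight regularized solution, X_t = 0.25 (1 - 2L)/(1 - 0.5L) ε_t, can be confirmed numerically: the squared gain of the filter is flat across frequencies (a quick check, not part of the paper's code):

```python
import numpy as np

# Squared gain of X_t = 0.25*(1 - 2L)/(1 - 0.5L) eps_t on a frequency grid;
# a flat profile means the regularized solution is white noise.
omega = np.linspace(0.0, np.pi, 9)
z = np.exp(-1j * omega)          # the lag operator L maps to e^{-i omega}
gain2 = np.abs(0.25 * (1 - 2 * z) / (1 - 0.5 * z)) ** 2
# gain2 is constant at 0.25 for every omega, so the solution is white noise
# with variance 0.25 * var(eps_t).
```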
4.2 A New Keynesian Model

Consider next the New Keynesian model of Lubik & Schorfheide (2004),

x_t = E_t x_{t+1} - τ (R_t - E_t π_{t+1}) + g_t,
π_t = β E_t π_{t+1} + κ (x_t - z_t),
R_t = ρ_R R_{t-1} + (1 - ρ_R) ψ_1 π_t + (1 - ρ_R) ψ_2 (x_t - z_t) + ε_{R,t},
g_t = ρ_g g_{t-1} + ε_{g,t},
z_t = ρ_z z_{t-1} + ε_{z,t},

where x_t is the output gap, π_t is inflation, and R_t is the nominal interest rate. In the notation of (1), y(t) stacks (x_t, π_t, R_t, g_t, z_t) together with the conditional expectations E_t x_{t+1} and E_t π_{t+1}, z(t) = (ε_{R,t}, ε_{g,t}, ε_{z,t})', and η(t) collects the forecast errors of x_t and π_t; the matrices Γ_0, Γ_1, Ψ, and Π collect the corresponding coefficients.

The model is calibrated using Lubik & Schorfheide's estimates reported in their Table 3 in the column titled "Pre-Volcker (Prior 1)". Figure 2 plots the impulse responses of the first three variables to the three shocks. The impulse responses are generated from the non-regularized solution, the solution regularized with a constant weight matrix with equal weights on the first three variables, and the solution regularized with a variable weight matrix emphasizing business cycle frequencies in the first three variables.

Clearly, regularization produces more stable dynamics. Unlike the case in Figure 1, however, regularizing by constant and variable weight matrices did not produce dramatically different results.
Notes to Figure 2. Dashed: non-regularized solution. Continuous: constant weight matrix. Dotted: variable weight matrix.

4.3 A Discontinuous Example

Consider now the system

E_t X_{1,t+2} = ε_{1t}, θ E_t X_{1,t+1} + X_{2t} = ε_{2t}, t ∈ Z.

The shocks are again zero mean, independent, and identically distributed. This system also has infinitely many solutions. Although it is simple, it concretely illustrates the failure of current methodology to account for discontinuity of solutions to LREMs. Al-Sadoon (2020) demonstrates its discontinuity analytically and studies its Gaussian likelihood function. We will now demonstrate its discontinuity numerically.

In order to reformulate this system into the form (1), we use the second equation to obtain

θ E_t X_{1,t+2} + E_t X_{2,t+1} = 0, t ∈ Z,

and then combine this equation with the first equation of the original system to obtain

E_t X_{2,t+1} = -θ ε_{1t}, θ E_t X_{1,t+1} + X_{2t} = ε_{2t}, t ∈ Z.

This system is equivalent to the original one, provided θ ≠ 0. We can now set

y(t) = (X_{1t}; X_{2t}; E_t X_{1,t+1}; E_t X_{2,t+1}), z(t) = (ε_{1t}; ε_{2t}), η(t) = (X_{1t} - E_{t-1} X_{1t}; X_{2t} - E_{t-1} X_{2t}), t ∈ Z,

with

Γ_0 = [0 0 0 1; 0 1 θ 0; 1 0 0 0; 0 1 0 0], Γ_1 = [0 0 0 0; 0 0 0 0; 0 0 1 0; 0 0 0 1], Ψ = [-θ 0; 0 1; 0 0; 0 0], Π = [0 0; 0 0; 1 0; 0 1].

The weight matrix W places weight on X_{1t} and X_{2t}. For θ = 10^{-3}, the first three impulse responses (at horizons 0, 1, and 2) of the non-regularized solution are quite far from the impulse responses of the θ = 0 model, which ought to be

[0 0; 0 1], [0 0; 0 0], [1 0; 0 0],

with rows corresponding to (X_{1t}, X_{2t}) and columns to (ε_{1t}, ε_{2t}). The first three impulse responses of the regularized solution, on the other hand, are close to these. The continuity of regularized solutions is proven in Theorem 6 of Al-Sadoon (2020).
5 Conclusion

This paper has provided an algorithm for computing regularized solutions to LREMs. This work suggests at least three avenues for further investigation. First, it is likely that regularization helps resolve identifiability issues in LREMs due to its imposition of uniqueness, but since it is strictly more general than the class of solutions considered in Al-Sadoon & Zwiernik (2019), its identifiability requires separate examination. Second, the algorithm presented here is given without any claim to efficiency; it would be helpful to consider other methods of obtaining regularized solutions and compare their accuracy and speed. Finally, recent work has sought to relax the assumption that the information set includes all exogenous variables (e.g. Huo & Takayama (2015), Rondina & Walker (2017), Angeletos & Huo (2018), and Han et al. (2019)); regularization in that context would be a fruitful venue for follow-up work.
References
Al-Sadoon, M. M. (2018). The linear systems approach to linear rational expectations models.
Econometric Theory, (3), 628–658.

Al-Sadoon, M. M. (2020). The spectral approach to linear rational expectations models. arXiv preprint arXiv:2007.13804.

Al-Sadoon, M. M. & Zwiernik, P. (2019). The identification problem for linear rational expectations models. arXiv preprint arXiv:1908.09617.

Angeletos, G.-M. & Huo, Z. (2018). Myopia and anchoring. Technical report, National Bureau of Economic Research.

Bianchi, F. & Nicolò, G. (2019). A Generalized Approach to Indeterminacy in Linear Rational Expectations Models. Finance and Economics Discussion Series 2019-033, Board of Governors of the Federal Reserve System (U.S.).

Farmer, R. E., Khramov, V., & Nicolò, G. (2015). Solving and estimating indeterminate DSGE models. Journal of Economic Dynamics and Control, (C), 17–36.

Funovits, B. (2017). The full set of solutions of linear rational expectations models. Economics Letters, 47–51.

Han, Z., Tan, F., & Wu, J. (2019). Analytic policy function iteration. Available at SSRN 3512320.

Huo, Z. & Takayama, N. (2015). Higher order beliefs, confidence, and business cycles. Report, Yale University.

Kociecki, A. & Kolasa, M. (2018). Global identification of linearized DSGE models. Quantitative Economics, (3), 1243–1263.

Komunjer, I. & Ng, S. (2011). Dynamic identification of dynamic stochastic general equilibrium models. Econometrica, (6), 1995–2032.

Lindquist, A. & Picci, G. (2015). Linear Stochastic Systems: A Geometric Approach to Modeling, Estimation, and Identification. Series in Contemporary Mathematics 1. Berlin Heidelberg: Springer-Verlag.

Lubik, T. A. & Schorfheide, F. (2003). Computing sunspot equilibria in linear rational expectations models. Journal of Economic Dynamics and Control, (2), 273–285.

Lubik, T. A. & Schorfheide, F. (2004). Testing for indeterminacy: An application to U.S. monetary policy. American Economic Review, (1), 190–217.

Onatski, A. (2006). Winding number criterion for existence and uniqueness of equilibrium in linear rational expectations models. Journal of Economic Dynamics and Control, (2), 323–345.

Qu, Z. & Tkachenko, D. (2017). Global identification in DSGE models allowing for indeterminacy. The Review of Economic Studies, (3), 1306–1345.

Rondina, G. & Walker, T. (2017). Confounding dynamics. Technical report, Working paper.

Sain, M. & Massey, J. (1969). Invertibility of linear time-invariant dynamical systems. IEEE Transactions on Automatic Control, (2), 141–149.

Sims, C. A. (2002). Solving linear rational expectations models. Computational Economics, (1), 1–20.

Stewart, G. W. & Sun, J. (1990). Matrix Perturbation Theory. New York, USA: Academic Press, Inc.

Tan, F. (2019). A frequency-domain approach to dynamic macroeconomic models. Macroeconomic Dynamics, 1–31.

Tan, F. & Walker, T. B. (2015). Solving generalized multivariate linear rational expectations models. Journal of Economic Dynamics and Control, 95–111.

Williams, D. (1991). Probability with Martingales. Cambridge, UK: Cambridge University Press.