A Simple Algorithm for Solving Ramsey Optimal Policy with Exogenous Forcing Variables
Jean-Bernard Chatelain ∗ and Kirsten Ralf † June 20, 2018
Abstract
This algorithm extends the Ljungqvist and Sargent (2012) algorithm for Stackelberg dynamic games to the case of dynamic stochastic general equilibrium models including exogenous forcing variables. It is based on the Anderson, Hansen, McGrattan and Sargent (1996) discounted augmented linear quadratic regulator. It adds an intermediate step that solves a Sylvester equation. Forward-looking variables are also optimally anchored on forcing variables. This simple algorithm calls only routines for Riccati equations, Sylvester equations and matrix inversion that are already programmed in Matlab and Scilab. A final step, using a change of basis, computes a vector auto-regressive representation including the Ramsey optimal policy rule as a function of lagged observable variables, for the case when the exogenous forcing variables are not observable.
JEL classification numbers: C61, C62, C73, E47, E52, E61, E63.
Keywords:
Ramsey optimal policy, Stackelberg dynamic game, algorithm, forcing variables, augmented linear quadratic regulator.
Ljungqvist and Sargent (2012, chapter 19) offer an elegant algorithm for the Stackelberg dynamic game used for Ramsey optimal policy. All dynamic stochastic general equilibrium (DSGE) models include exogenous auto-regressive forcing variables, which are not included in their algorithm. This algorithm extends the Ljungqvist and Sargent (2012, chapter 19) algorithm for the dynamic Stackelberg game to the case of DSGE models including exogenous forcing variables. We use the Anderson, Hansen, McGrattan and Sargent (1996) discounted augmented linear quadratic regulator. After the usual algorithm for solving the Riccati equation of the linear quadratic regulator (Amman (1996)), this algorithm adds another step that solves a Sylvester equation to complete the policy rule. It also adds a term for the optimal initial anchor of forward-looking variables on the predetermined forcing variables.
This algorithm is easy to code and check. It is simple because it only calls already optimized routines solving Riccati and Sylvester equations and inverting matrices in Matlab and Scilab.

∗ Paris School of Economics, Université Paris I Panthéon Sorbonne, PjSE, 48 Boulevard Jourdan, 75014 Paris. Email: [email protected]
† ESCE International Business School, 10 rue Sextius Michel, 75015 Paris. Email: [email protected].
We refer to Ljungqvist and Sargent (2012), chapter 19, step by step. The Stackelberg leader is the government and the Stackelberg follower is the private sector. Let $k_t$ be an $n_k \times 1$ vector of predetermined variables with a given initial condition $k_0$, $x_t$ an $n_x \times 1$ vector of forward-looking variables without a given initial condition for $x_0$, and $u_t$ a vector of government policy instruments. Let $y_t = (k_t^T, x_t^T)^T$ be an $(n_k + n_x) \times 1$ vector of controllable policy targets and $z_t$ an $n_z \times 1$ vector of exogenous forcing variables. Knowing initial conditions for $k_0$ and $z_0$, but not for $x_0$, a government wants to maximize:

$$-\sum_{t=0}^{+\infty} \beta^t \left( y_t^T Q_{yy} y_t + 2 y_t^T Q_{yz} z_t + u_t^T R u_t \right) \quad (1)$$

where $\beta$ is the policy maker's discount factor and her policy preferences are the relative weights included in the matrices $Q$ and $R$. $Q_{yy} \geq 0$ is an $(n_k + n_x) \times (n_k + n_x)$ symmetric positive semi-definite matrix and $R > 0$ is a $p \times p$ symmetric positive definite matrix, so that the policy maker has at least a very small concern for the volatility of policy instruments. The cross-product of controllable policy targets with non-controllable forcing variables $y_t^T Q_{yz} z_t$ is introduced by Anderson, Hansen, McGrattan and Sargent (1996). To our knowledge, it has always been set to zero ($Q_{yz} = 0$) so far in models of Ramsey optimal policy. This simplifies the Sylvester equation in step 2. The policy transmission mechanism of the private sector's behavior is summarized by this system of equations written in a Kalman controllable staircase form:

$$\begin{pmatrix} E_t y_{t+1} \\ z_{t+1} \end{pmatrix} = \begin{pmatrix} A_{yy} & A_{yz} \\ 0 & A_{zz} \end{pmatrix} \begin{pmatrix} y_t \\ z_t \end{pmatrix} + \begin{pmatrix} B_y \\ 0 \end{pmatrix} u_t \quad (2)$$

$A$ is the $(n_k + n_x + n_z) \times (n_k + n_x + n_z)$ transition matrix.
$B$ is the $(n_k + n_x + n_z) \times p$ matrix of the marginal effects of policy instruments $u_t$ on next period policy targets $y_{t+1}$. The government minimizes its discounted objective function by choosing sequences $\{u_t, x_t, k_{t+1}, z_{t+1}\}_{t=0}^{+\infty}$ subject to the policy transmission mechanism (2) and subject to $2(n_x + n_k + n_z)$ boundary conditions detailed below.

The certainty equivalence principle of the linear quadratic regulator (Simon (1956)) allows us to work with a non-stochastic model: "We would attain the same decision rule if we were to replace $x_{t+1}$ with the forecast $E_t x_{t+1}$ and to add a shock process $C\varepsilon_{t+1}$ to the right hand side of the private sector policy transmission mechanism, where $\varepsilon_{t+1}$ is an i.i.d. random vector with mean of zero and identity covariance matrix." (Ljungqvist and Sargent, 2012, p. 767).

The policy maker's choice can be solved with Lagrange multipliers using Bellman's method (Ljungqvist and Sargent (2012)). It is practical (but not necessary) to solve the policy maker's choice by attaching a sequence of Lagrange multipliers $2\beta^{t+1}\mu_{t+1}$ to the sequence of private sector's policy transmission mechanism constraints and then forming the Lagrangian:

$$-\sum_{t=0}^{+\infty} \beta^t \left[ y_t^T Q_{yy} y_t + 2 y_t^T Q_{yz} z_t + u_t^T R u_t + 2\beta \mu_{t+1}^T \left( A_{yy} y_t + A_{yz} z_t + B_y u_t - y_{t+1} \right) \right] \quad (3)$$

The dynamics of the non-controllable variables can be excluded from the Lagrangian (Anderson, Hansen, McGrattan and Sargent (1996)). It is important to partition the Lagrange multipliers $\mu_t$ conformably with our partition of $y_t = (k_t^T, x_t^T)^T$, so that $\mu_t = (\mu_{k,t}^T, \mu_{x,t}^T)^T$, where $\mu_{x,t}$ is an $n_x \times 1$ vector. There are $2(n_x + n_k + n_z)$ boundary conditions determining the policy maker's Lagrangian system with $2(n_x + n_k + n_z)$ variables $(y_t, \mu_t, z_t)$, with $\mu_t$ the policy maker's Lagrange multipliers related to each of the controllable variables $y_t$ (table 1).
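The passage from the Lagrangian (3) to the stabilizing solutions of the following steps goes through its first-order conditions. As a sketch (not displayed in the original derivation), differentiating (3) with respect to $u_t$ and $y_t$ gives:

```latex
% First-order conditions of the Lagrangian (3)
\frac{\partial L}{\partial u_t} = 0 \;\Rightarrow\; R\, u_t + \beta B_y^T \mu_{t+1} = 0,
\qquad
\frac{\partial L}{\partial y_t} = 0 \;\Rightarrow\; \mu_t = Q_{yy}\, y_t + Q_{yz}\, z_t + \beta A_{yy}^T \mu_{t+1}.
```

Guessing $\mu_t = P_y y_t + P_z z_t$ and substituting these conditions into the transmission mechanism (2) delivers the Riccati and Sylvester equations of steps 1 and 2 below.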
Table 1: $2(n_x + n_k + n_z)$ boundary conditions

- $n_z$: $\lim_{t \to +\infty} \beta^t z_t = z^* = 0$, $z_t$ bounded
- $n_k + n_x$: $\lim_{t \to +\infty} \beta^t y_t = y^* = 0 \Leftrightarrow \lim_{t \to +\infty} \partial L / \partial y_t = 0 = \lim_{t \to +\infty} \beta^t \mu_t$, $\mu_t$ bounded
- $n_k + n_z$: $k_0$ and $z_0$ predetermined (given)
- $n_x$: $x_0 = x_0^* \Leftrightarrow \partial L / \partial x_0 = 0 = \mu_{x,t=0}$ predetermined

Essential boundary conditions are the initial conditions of the predetermined variables $k_0$ and $z_0$, which are given.

Natural boundary conditions are such that the policy maker anchors unique optimal initial values of the private sector's forward-looking variables. The policy maker's Lagrange multipliers on the private sector's forward-looking variables are predetermined at the value zero, $\mu_{x,t=0} = 0$, in order to determine the unique optimal initial value $x_0 = x_0^*$ of the private sector's forward variables.

Bryson and Ho ((1975), p. 55) explain natural boundary conditions as follows: "If $x(t_0)$ is not prescribed at $t = t_0$, it does not follow that $\delta x(t_0) = 0$. In fact, there will be an optimum value for $x(t_0)$ and it will be such that $\delta L = 0$ for arbitrary small variations of $x(t_0)$ around this value. For this to be the case, we choose $\partial L / \partial x(t_0) = \mu_{x,t_0} = 0$ (1) which simply says that the effect of small changes of the optimal initial value of the forward variables $x(t_0)$ on the loss function is zero. We have simply traded one boundary condition, $x(t_0)$ given, for another, (1). Boundary conditions such as (1) are sometimes called "natural boundary conditions" or transversality conditions associated with the extremum problem."

Anderson, Hansen, McGrattan and Sargent (1996) assume a bounded discounted quadratic loss function:

$$\sum_{t=0}^{+\infty} \beta^t \left( y_t^T y_t + z_t^T z_t + u_t^T u_t \right) < +\infty \quad (4)$$

This implies a stability criterion for the eigenvalues of the dynamic system such that $\left| \left( \beta \lambda_i^2 \right)^t \right|$ remains bounded, i.e. $\left| \beta \lambda_i^2 \right| < 1$, so that stable eigenvalues are such that $|\lambda_i| < 1/\sqrt{\beta} < 1/\beta$. A preliminary step is to multiply the matrices by $\sqrt{\beta}$, as follows: $\sqrt{\beta} A_{yy}$ and $\sqrt{\beta} B_y$, in order to apply the formulas of the Riccati and Sylvester equations of the non-discounted augmented linear quadratic regulator (Anderson, Hansen, McGrattan and Sargent (1996)).

Assumption 1:
The matrix pair $(\sqrt{\beta} A_{yy}, \sqrt{\beta} B_y)$ is controllable (all forward-looking variables are controllable). The matrix pair $(\sqrt{\beta} A_{yy}, \sqrt{\beta} B_y)$ is controllable if the Kalman (1960) controllability matrix has full rank:

$$\text{rank} \left( \sqrt{\beta} B_y, \; \left(\sqrt{\beta}\right)^2 A_{yy} B_y, \; \ldots, \; \left(\sqrt{\beta}\right)^{n_k+n_x} A_{yy}^{n_k+n_x-1} B_y \right) = n_k + n_x \quad (5)$$

Assumption 2:
The system is stabilizable when the transition matrix $A_{zz}$ of the non-controllable variables has stable eigenvalues, such that $|\lambda_i| < 1/\sqrt{\beta}$.

Step 1: Stabilizing solution of the linear quadratic regulator

"Step 1 and 2 seems to disregard the forward-looking aspect of the problem (step 3 will take account of that). If we temporarily ignore the fact that the $x_t$ component of the state $y_t$ is not actually a state vector, then superficially the Stackelberg problem has the form of an optimal linear regulator." (Ljungqvist and Sargent (2012, p. 769)).

When the forcing variables are set to zero ($z_t = 0$), a stabilizing solution of the linear quadratic regulator satisfies:

$$\mu_t = P_y y_t \quad (6)$$

where $P_y$ solves the matrix Riccati equation (Anderson, Hansen, McGrattan and Sargent (1996)):

$$P_y = Q_{yy} + \beta A_{yy}^T P_y A_{yy} - \beta^2 A_{yy}^T P_y B_y \left( R + \beta B_y^T P_y B_y \right)^{-1} B_y^T P_y A_{yy} \quad (7)$$

The optimal rule of the linear quadratic regulator is:

$$u_t = F_y y_t \quad (8)$$

where $F_y$ is computed knowing $P_y$ (Anderson, Hansen, McGrattan and Sargent (1996)):

$$F_y = -\left( R + \beta B_y^T P_y B_y \right)^{-1} \beta B_y^T P_y A_{yy} \quad (9)$$

As demonstrated by Simon's (1956) certainty equivalence principle and by Kalman's (1960) solution, the optimal rule parameters $F_y$ and $P_y$ of the linear quadratic regulator are independent of additive random shocks and of initial conditions. This confirms that it is correct to temporarily ignore the fact that $x_t$ is not a state vector.

Step 2: Stabilizing solution of an augmented linear quadratic regulator

This is the additional step missing in the Ljungqvist and Sargent (2012) algorithm.
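The preliminary $\sqrt{\beta}$ scaling, the two assumptions and step 1 can be sketched numerically. Below is a minimal NumPy sketch on a hypothetical two-target, one-forcing-variable example (all matrix values are made up for illustration, and the Riccati equation is solved by a naive fixed-point iteration; production code would instead call an already programmed routine such as Matlab's `dare` or SciPy's `solve_discrete_are`, as the paper suggests):

```python
import numpy as np

beta = 0.99                       # discount factor (hypothetical)
A_yy = np.array([[0.9, 0.1],      # controllable block (hypothetical values)
                 [0.2, 1.05]])
A_yz = np.array([[0.3],           # effect of the forcing variable on targets
                 [0.5]])
A_zz = np.array([[0.8]])          # forcing-variable transition
B_y  = np.array([[0.1],           # marginal effects of the instrument
                 [1.0]])
Q_yy = np.eye(2)                  # policy preferences on targets
R    = np.array([[1.0]])          # penalty on instrument volatility

# Preliminary step: multiply A_yy and B_y by sqrt(beta).
sb = np.sqrt(beta)
A_t, B_t = sb * A_yy, sb * B_y

# Assumption 1: (sqrt(beta) A_yy, sqrt(beta) B_y) controllable (Kalman rank test).
ctrb = np.hstack([np.linalg.matrix_power(A_t, j) @ B_t for j in range(2)])
assert np.linalg.matrix_rank(ctrb) == 2

# Assumption 2: eigenvalues of A_zz stable, |lambda| < 1/sqrt(beta).
assert np.max(np.abs(np.linalg.eigvals(A_zz))) < 1 / sb

# Step 1: solve the Riccati equation (7) by fixed-point iteration.
P_y = Q_yy.copy()
for _ in range(10000):
    gain = np.linalg.solve(R + B_t.T @ P_y @ B_t, B_t.T @ P_y @ A_t)
    P_new = Q_yy + A_t.T @ P_y @ A_t - A_t.T @ P_y @ B_t @ gain
    if np.max(np.abs(P_new - P_y)) < 1e-12:
        break
    P_y = P_new
P_y = P_new

# Optimal rule (8)-(9): u_t = F_y y_t with a stabilizing feedback.
F_y = -np.linalg.solve(R + beta * B_y.T @ P_y @ B_y, beta * B_y.T @ P_y @ A_yy)
print(np.max(np.abs(np.linalg.eigvals(A_yy + B_y @ F_y))))  # below 1/sqrt(beta)
```

The fixed-point iteration converges here because the scaled pair is controllable and $Q_{yy} > 0$; a library Riccati solver is both faster and more robust for larger systems.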
A stabilizing solution of the augmented linear quadratic regulator satisfies (Anderson, Hansen, McGrattan and Sargent (1996)):

$$\mu_t = P_y y_t + P_z z_t \quad (10)$$

where $P_z$ solves the matrix Sylvester equation:

$$P_z = Q_{yz} + \beta \left( A_{yy} + B_y F_y \right)^T P_y A_{yz} + \beta \left( A_{yy} + B_y F_y \right)^T P_z A_{zz} \quad (11)$$

The optimal rule of the augmented linear quadratic regulator is:

$$u_t = F_y y_t + F_z z_t \quad (12)$$

where $F_z$ is computed knowing $P_z$:

$$F_z = -\left( R + \beta B_y^T P_y B_y \right)^{-1} \beta B_y^T \left( P_y A_{yz} + P_z A_{zz} \right) \quad (13)$$

As demonstrated by Simon's (1956) certainty equivalence principle and by the Anderson, Hansen, McGrattan and Sargent (1996) solution, the optimal rule parameters $F_z$ and $P_z$ of the augmented linear quadratic regulator are independent of additive random shocks and of initial conditions. This confirms that it is correct to temporarily ignore the fact that $x_t$ is not a state vector, until step 3.

Step 3: Compute $x_0$, the optimal initial anchor of forward-looking variables

The policy maker's Lagrange multipliers on private sector forward-looking variables are such that $\mu_{0,x} = 0$ at the initial date. The optimal stabilizing condition is:

$$\begin{pmatrix} \mu_{0,k} \\ \mu_{0,x} \end{pmatrix} = \begin{pmatrix} P_{y,k} & P_{y,kx} \\ P_{y,kx}^T & P_{y,x} \end{pmatrix} \begin{pmatrix} k_0 \\ x_0 \end{pmatrix} + \begin{pmatrix} P_{z,k} \\ P_{z,x} \end{pmatrix} z_0 = \begin{pmatrix} \mu_{0,k} \\ 0 \end{pmatrix} \quad (14)$$

This implies:

$$P_{y,kx}^T k_0 + P_{y,x} x_0 + P_{z,x} z_0 = 0 \quad (15)$$

which provides the optimal initial anchor:

$$x_0 = -P_{y,x}^{-1} P_{y,kx}^T k_0 - P_{y,x}^{-1} P_{z,x} z_0 \quad (16)$$

The exogenous forcing variables add the term $-P_{y,x}^{-1} P_{z,x} z_0$ with respect to the Ljungqvist and Sargent (2012) algorithm.

Step 4: Compute impulse response functions and the optimal loss function

The transmission mechanism is given. Computing $F_y$ and $F_z$ provides a reduced form of the optimal policy rule. Computing $P_y$ and $P_z$ provides the missing initial conditions.
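Steps 2 and 3 can be sketched in the same hypothetical NumPy example as above. The Sylvester equation (11) is solved here by Kronecker-product vectorization; a packaged Sylvester routine in Matlab or Scilab would do the same job:

```python
import numpy as np

# Same hypothetical example matrices as in the step 1 sketch.
beta = 0.99
A_yy = np.array([[0.9, 0.1], [0.2, 1.05]])
A_yz = np.array([[0.3], [0.5]])
A_zz = np.array([[0.8]])
B_y  = np.array([[0.1], [1.0]])
Q_yy, Q_yz, R = np.eye(2), np.zeros((2, 1)), np.array([[1.0]])

# Step 1 (recap): Riccati fixed point on the sqrt(beta)-scaled pair.
sb = np.sqrt(beta)
A_t, B_t = sb * A_yy, sb * B_y
P_y = Q_yy.copy()
for _ in range(10000):
    gain = np.linalg.solve(R + B_t.T @ P_y @ B_t, B_t.T @ P_y @ A_t)
    P_new = Q_yy + A_t.T @ P_y @ A_t - A_t.T @ P_y @ B_t @ gain
    if np.max(np.abs(P_new - P_y)) < 1e-12:
        break
    P_y = P_new
P_y = P_new
F_y = -np.linalg.solve(R + beta * B_y.T @ P_y @ B_y, beta * B_y.T @ P_y @ A_yy)

# Step 2: Sylvester equation (11), P_z = K + G P_z A_zz with
# G = beta (A_yy + B_y F_y)', solved by column-stacking vectorization:
# vec(G X A_zz) = (A_zz' kron G) vec(X).
G = beta * (A_yy + B_y @ F_y).T
K = Q_yz + G @ P_y @ A_yz
n_y, n_z = K.shape
lhs = np.eye(n_y * n_z) - np.kron(A_zz.T, G)
P_z = np.linalg.solve(lhs, K.reshape(-1, 1, order="F")).reshape(n_y, n_z, order="F")

# Completing the rule (12)-(13): u_t = F_y y_t + F_z z_t.
F_z = -np.linalg.solve(R + beta * B_y.T @ P_y @ B_y,
                       beta * B_y.T @ (P_y @ A_yz + P_z @ A_zz))

# Step 3: optimal initial anchor (16), with n_k = n_x = 1 in this example.
k0, z0 = np.array([1.0]), np.array([1.0])      # given initial conditions
x0 = -np.linalg.solve(P_y[1:, 1:], P_y[1:, :1] @ k0 + P_z[1:, :] @ z0)
```

By construction, the computed $x_0$ sets the initial Lagrange multiplier on the forward-looking variable to zero, $\mu_{0,x} = 0$.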
$$\begin{pmatrix} E_t y_{t+1} \\ z_{t+1} \end{pmatrix} = \begin{pmatrix} A_{yy} & A_{yz} \\ 0 & A_{zz} \end{pmatrix} \begin{pmatrix} y_t \\ z_t \end{pmatrix} + \begin{pmatrix} B_y \\ 0 \end{pmatrix} u_t$$

$$u_t = F_y y_t + F_z z_t$$

$$x_0 = -P_{y,x}^{-1} P_{y,kx}^T k_0 - P_{y,x}^{-1} P_{z,x} z_0, \quad k_0 \text{ and } z_0 \text{ given}$$

This information is sufficient to compute impulse response functions (the optimal path of the expected values of the variables $y_t$, $z_t$ and $u_t$) and to sum their values over time in the discounted loss function.

By contrast to other algorithms based on the Miller and Salmon (1985) solution, it is not necessary to compute all the values over time of all the policy maker's Lagrange multipliers $\mu_t$. These algorithms then add a step which is a change of vector basis for eliminating the Lagrange multipliers. Knowing the optimal path of the variables $(y_t, z_t)$, one can compute the Lagrange multipliers at the end of this algorithm:

$$\mu_t = P_y y_t + P_z z_t \quad (17)$$

Step 5: An implementable representation on lagged observable variables

Policy makers cannot implement a Ramsey optimal policy rule where policy instruments respond to non-observable variables, such as the forcing variables $z_t$ or the Lagrange multipliers $\mu_t$. They can implement an observationally equivalent representation of the Ramsey optimal policy rule where policy instruments respond to lagged observable variables, including the lags of the policy instruments.
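Step 4 can be sketched by simulating the closed-loop system from the optimal initial anchor and accumulating the discounted loss (same hypothetical example as above; the impulse response shown is to a unit forcing shock $z_0 = 1$ with $k_0 = 0$):

```python
import numpy as np

# Same hypothetical example matrices as in the earlier sketches.
beta = 0.99
A_yy = np.array([[0.9, 0.1], [0.2, 1.05]])
A_yz = np.array([[0.3], [0.5]])
A_zz = np.array([[0.8]])
B_y  = np.array([[0.1], [1.0]])
Q_yy, Q_yz, R = np.eye(2), np.zeros((2, 1)), np.array([[1.0]])

def lqr(A, B, Q, Rm, beta):
    """Riccati fixed point on the sqrt(beta)-scaled pair; returns P and F."""
    sb = np.sqrt(beta)
    A_t, B_t = sb * A, sb * B
    P = Q.copy()
    for _ in range(10000):
        gain = np.linalg.solve(Rm + B_t.T @ P @ B_t, B_t.T @ P @ A_t)
        P_new = Q + A_t.T @ P @ A_t - A_t.T @ P @ B_t @ gain
        if np.max(np.abs(P_new - P)) < 1e-12:
            break
        P = P_new
    P = P_new
    F = -np.linalg.solve(Rm + beta * B.T @ P @ B, beta * B.T @ P @ A)
    return P, F

# Steps 1-3 (recap): P_y, F_y, then P_z, F_z, then the anchor x_0.
P_y, F_y = lqr(A_yy, B_y, Q_yy, R, beta)
G = beta * (A_yy + B_y @ F_y).T
K = Q_yz + G @ P_y @ A_yz
P_z = np.linalg.solve(np.eye(2) - np.kron(A_zz.T, G),
                      K.reshape(-1, 1, order="F")).reshape(2, 1, order="F")
F_z = -np.linalg.solve(R + beta * B_y.T @ P_y @ B_y,
                       beta * B_y.T @ (P_y @ A_yz + P_z @ A_zz))

# Step 4: impulse response to z_0 = 1 with k_0 = 0 and the optimal anchor x_0.
k0, z0 = np.array([0.0]), np.array([1.0])
x0 = -np.linalg.solve(P_y[1:, 1:], P_y[1:, :1] @ k0 + P_z[1:, :] @ z0)
y, z, loss = np.concatenate([k0, x0]), z0, 0.0
path_u = []
for t in range(300):
    u = F_y @ y + F_z @ z
    loss += beta**t * (y @ Q_yy @ y + 2 * y @ Q_yz @ z + u @ R @ u)
    path_u.append(u.copy())          # impulse response of the instrument
    y = A_yy @ y + A_yz @ z + B_y @ u
    z = A_zz @ z
print(loss)
```

The same loop delivers the paths of $y_t$, $z_t$ and $u_t$, from which the Lagrange multipliers can be recovered at the end via $\mu_t = P_y y_t + P_z z_t$.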
This is also a useful representation for testing Ramsey optimal policy using a vector auto-regressive system of equations.

$$(H): \quad \begin{pmatrix} E_t y_{t+1} \\ z_{t+1} \end{pmatrix} = \begin{pmatrix} A_{yy} + B_y F_y & A_{yz} + B_y F_z \\ 0 & A_{zz} \end{pmatrix} \begin{pmatrix} y_t \\ z_t \end{pmatrix} + \begin{pmatrix} 0 \\ C \end{pmatrix} \varepsilon_{t+1}$$

$$u_t = F_y y_t + F_z z_t$$

$$x_0 = -P_{y,x}^{-1} P_{y,kx}^T k_0 - P_{y,x}^{-1} P_{z,x} z_0, \quad k_0 \text{ and } z_0 \text{ given}$$

$$\Leftrightarrow \quad \begin{pmatrix} E_t y_{t+1} \\ u_{t+1} \end{pmatrix} = M^{-1} \left( A + BF \right) M \begin{pmatrix} y_t \\ u_t \end{pmatrix} + M^{-1} \begin{pmatrix} 0 \\ C \end{pmatrix} \varepsilon_{t+1}$$

$$z_t = F_z^{-1} u_t - F_z^{-1} F_y y_t$$

$$x_0 = -P_{y,x}^{-1} P_{y,kx}^T k_0 - P_{y,x}^{-1} P_{z,x} z_0, \quad k_0 \text{ and } z_0 \text{ given}$$

where

$$A + BF = \begin{pmatrix} A_{yy} + B_y F_y & A_{yz} + B_y F_z \\ 0 & A_{zz} \end{pmatrix}, \quad \begin{pmatrix} y_t \\ u_t \end{pmatrix} = M^{-1} \begin{pmatrix} y_t \\ z_t \end{pmatrix} \text{ with } M^{-1} = \begin{pmatrix} I & 0 \\ F_y & F_z \end{pmatrix}$$

In the estimation of dynamic stochastic general equilibrium models, the controllable predetermined variables are usually set to zero at all periods. There are as many auto-regressive forcing variables as controllable forward-looking variables. If the number of policy instruments is equal to the number of controllable forward-looking policy targets, $F_z$ is a square matrix, which can be invertible. One eliminates the forcing variables $z_t$ and replaces them by the policy instruments $u_t$ in the recursive equation, doing a change of vector basis. This yields a representation of the optimal dynamics of the forward-looking variables and of the policy instruments rule in a vector auto-regressive model. In this representation of the Ramsey optimal policy rule, the policy instruments $u_t$ respond to lags of the policy instruments $u_{t-1}$ and to lags of the observable policy targets $y_{t-1}$. This representation can be implemented by policy makers. It can be estimated by econometricians (Chatelain and Ralf (2017a)).

Chatelain and Ralf (2017a) use this algorithm for the new-Keynesian Phillips curve as a monetary policy transmission mechanism. They check that it is equivalent to Gali's (2015) solution, which used the method of undetermined coefficients.
They use the implementable representation of step 5 to estimate structural parameters.

Chatelain and Ralf (2017b) use this algorithm for the new-Keynesian Phillips curve and the consumption Euler equation as a monetary policy transmission mechanism. They check the determinacy property of the step 2 reduced form of the Ramsey optimal policy rule.

Chatelain and Ralf (2016) use this algorithm for the Taylor (1999) monetary policy transmission mechanism. They check whether the Taylor principle applies to Ramsey optimal policy.
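Returning to step 5: the change of basis that eliminates the unobservable forcing variables can be checked numerically in the same hypothetical example, where $F_z$ is $1 \times 1$ and invertible:

```python
import numpy as np

# Same hypothetical example matrices as in the earlier sketches.
beta = 0.99
A_yy = np.array([[0.9, 0.1], [0.2, 1.05]])
A_yz = np.array([[0.3], [0.5]])
A_zz = np.array([[0.8]])
B_y  = np.array([[0.1], [1.0]])
Q_yy, Q_yz, R = np.eye(2), np.zeros((2, 1)), np.array([[1.0]])

# Steps 1-2 (recap): P_y, F_y from the Riccati fixed point, then P_z, F_z.
sb = np.sqrt(beta)
A_t, B_t = sb * A_yy, sb * B_y
P_y = Q_yy.copy()
for _ in range(10000):
    gain = np.linalg.solve(R + B_t.T @ P_y @ B_t, B_t.T @ P_y @ A_t)
    P_new = Q_yy + A_t.T @ P_y @ A_t - A_t.T @ P_y @ B_t @ gain
    if np.max(np.abs(P_new - P_y)) < 1e-12:
        break
    P_y = P_new
P_y = P_new
F_y = -np.linalg.solve(R + beta * B_y.T @ P_y @ B_y, beta * B_y.T @ P_y @ A_yy)
G = beta * (A_yy + B_y @ F_y).T
K = Q_yz + G @ P_y @ A_yz
P_z = np.linalg.solve(np.eye(2) - np.kron(A_zz.T, G),
                      K.reshape(-1, 1, order="F")).reshape(2, 1, order="F")
F_z = -np.linalg.solve(R + beta * B_y.T @ P_y @ B_y,
                       beta * B_y.T @ (P_y @ A_yz + P_z @ A_zz))

# Step 5: closed-loop matrix A + BF in the (y, z) basis ...
ABF = np.block([[A_yy + B_y @ F_y, A_yz + B_y @ F_z],
                [np.zeros((1, 2)), A_zz]])
# ... and the change of basis (y, z) -> (y, u) with M^{-1} = [[I, 0], [F_y, F_z]].
M_inv = np.block([[np.eye(2), np.zeros((2, 1))],
                  [F_y,       F_z]])
M = np.linalg.inv(M_inv)
T_yu = M_inv @ ABF @ M           # recursion matrix of the VAR in (y_t, u_t)

# Check on one transition: propagating (y_t, u_t) with T_yu matches the
# original recursion in (y_t, z_t) followed by the policy rule.
y0, z0 = np.array([1.0, -0.5]), np.array([0.7])
u0 = F_y @ y0 + F_z @ z0
y1 = A_yy @ y0 + A_yz @ z0 + B_y @ u0
z1 = A_zz @ z0
u1 = F_y @ y1 + F_z @ z1
print(np.allclose(T_yu @ np.concatenate([y0, u0]), np.concatenate([y1, u1])))
# prints: True
```

The similarity transform leaves the eigenvalues of the closed-loop system unchanged, so the determinacy properties of the Ramsey optimal policy can be read off either representation.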
This algorithm complements the Ljungqvist and Sargent (2012) algorithm by taking into account forcing variables. It is easy to code, check and implement.
References

[1] Amman H. (1996). Numerical methods for linear-quadratic models. In Amman H.M., Kendrick D.A. and Rust J. (editors), Handbook of Computational Economics, Elsevier, Amsterdam, 1, 587-618.
[2] Anderson E.W., Hansen L.P., McGrattan E.R. and Sargent T.J. (1996). Mechanics of Forming and Estimating Dynamic Linear Economies. In Amman H.M., Kendrick D.A. and Rust J. (editors), Handbook of Computational Economics, Elsevier, Amsterdam, 1, 171-252.
[3] Bryson A.E. and Ho Y.C. (1975). Applied Optimal Control, John Wiley and Sons, New York.
[4] Chatelain J.B. and Ralf K. (2016). Countercyclical versus Procyclical Taylor Principles. Econstor working papers.
[5] Chatelain J.B. and Ralf K. (2017a). Can We Identify the Fed's Preferences? Econstor working papers.
[6] Chatelain J.B. and Ralf K. (2017b). Hopf Bifurcation from New-Keynesian Taylor Rule to Ramsey Optimal Policy. Econstor working papers.
[7] Kalman R.E. (1960). Contributions to the Theory of Optimal Control. Boletin de la Sociedad Matematica Mexicana, 5, 102-109.
[8] Ljungqvist L. and Sargent T.J. (2012). Recursive Macroeconomic Theory, 3rd edition. The MIT Press, Cambridge, Massachusetts.
[9] Miller M. and Salmon M. (1985). Dynamic Games and the Time Inconsistency of Optimal Policy in Open Economies. Economic Journal, 95 supplement: conference papers, 124-137.
[10] Simon H.A. (1956). Dynamic Programming under Uncertainty with a Quadratic Criterion Function. Econometrica, 24(1), 74-81.
[11] Taylor J.B. (1999). The robustness and efficiency of monetary policy rules as guidelines for interest rate setting by the European Central Bank.