Nowcasting in a Pandemic using Non-Parametric Mixed Frequency VARs
Florian Huber, Gary Koop, Luca Onorante, Michael Pfarrhofer, Josef Schreiner
University of Salzburg; University of Strathclyde; European Commission; European Central Bank; Oesterreichische Nationalbank
September 9, 2020
Abstract
This paper develops Bayesian econometric methods for posterior and predictive inference in a non-parametric mixed frequency VAR using additive regression trees. We argue that regression tree models are ideally suited for macroeconomic nowcasting in the face of the extreme observations produced by the pandemic due to their flexibility and ability to model outliers. In a nowcasting application involving four major countries in the European Union, we find substantial improvements in nowcasting performance relative to a linear mixed frequency VAR. A detailed examination of the predictive densities in the first six months of 2020 shows where these improvements are achieved.
Keywords:
Regression tree, Bayesian, macroeconomic forecasting, vector autoregression
JEL Codes:
C11, C30, E3, D31

∗ Corresponding author: Michael Pfarrhofer, Salzburg Centre of European Union Studies, University of Salzburg. Address: Mönchsberg 2a, 5020 Salzburg, Austria. Email: [email protected]. Florian Huber and Michael Pfarrhofer gratefully acknowledge financial support from the Austrian Science Fund (FWF, grant no. ZK 35).

1 Introduction
Mixed Frequency Vector Autoregressions (MF-VARs) have enjoyed great popularity in recent years as a tool for producing timely high frequency nowcasts of low frequency variables. A common practice (see, e.g., Schorfheide and Song, 2015) is to choose a quarterly macroeconomic variable such as GDP and a set of monthly variables and model them together in a VAR so as to produce monthly nowcasts of GDP. The fact that statistical agencies release data such as GDP with a delay, whereas appropriately chosen monthly variables are released with less of a delay, further enhances the benefits of the MF-VAR: nowcasts can be produced in a timely fashion.

The pandemic lockdown of 2020 has further increased the need for timely, high frequency nowcasts of economic activity. And the increasing availability of a variety of high frequency (i.e. monthly, weekly or daily) and quickly released data (i.e. some variables are released almost instantly) presents rich opportunities for the mixed frequency modeller. However, the pandemic also poses challenges to the conventional, linear, MF-VAR. During the pandemic, we have seen values of variables that are far from the range of past values. Linear time series econometric methods seek to find patterns in past data. If current data is very different, using such patterns and linearly extrapolating them may be highly questionable. This has led researchers to try to develop new VAR frameworks for nowcasting during the pandemic. For instance, Schorfheide and Song (2020) find that the model developed in Schorfheide and Song (2015) nowcasts poorly, but that if they estimate their MF-VAR using data through 2019 and then produce conditional forecasts for the first half of 2020, improvements are obtained. In essence, the extreme data in the first half of 2020 caused estimates of the full sample MF-VAR coefficients to change in a manner which led to poor forecasts.
Lenza and Primiceri (2020) propose an alternative VAR-based approach which allows the error covariance matrix to have a mixture distribution. In essence, the pandemic is treated as a large variance shock, and pandemic observations are thus drastically downweighted in the model estimation. They conclude: "Our results show that the ad-hoc strategy of dropping these observations may be acceptable for the purpose of parameter estimation. However, disregarding these recent data is inappropriate for forecasting the future evolution of the economy, because it vastly underestimates uncertainty." Thus, although Schorfheide and Song (2020) and Lenza and Primiceri (2020) adopt very different approaches, they end up with similar advice: discard the pandemic observations when estimating the model.

It is possible to envisage other approaches to modifying the MF-VAR for pandemic times. These would involve parameter change of some form (e.g. structural break or time-varying parameter, TVP, models). But structural break models would be plagued by the fact that there are too few observations post-break to permit reliable estimation. This problem would not occur with TVP models, which assume smoothly adjusting coefficients. But TVP models are not capable of adjusting for sudden and strong jumps in the endogenous variables within a few months such as have been occurring in the pandemic. In light of these considerations, in this paper we adopt a different, non-parametric, approach. We argue that such an approach should automatically decide how to treat the pandemic observations in a sensible fashion. In an empirical exercise involving four European countries, we demonstrate the superior nowcasting performance of our approach.

The non-parametric model we adopt involves Bayesian Additive Regression Trees (BART; see Chipman et al., 2010). BART is a flexible and popular approach in many fields of statistics, but it has rarely been used in time series econometrics.
Huber and Rossini (2020) develop Bayesian methods which build BART into a VAR, leading to the Bayesian Additive Vector Autoregressive Tree (BAVART) model, and demonstrate that it forecasts well. In this paper, we develop Bayesian methods for the mixed frequency version of this model (MF-BAVART). This requires a method for drawing the latent monthly values of the quarterly variables in a non-linear state space model, which we obtain using a linear approximation based on the literature on effect sizes in black-box models (see Crawford et al., 2018, 2019).

(A few other recent MF-VAR references adopting similar strategies include Eraker et al. (2015), Ghysels (2016), Brave et al. (2019) and Koop et al. (2020). Tan and Roy (2019) is an excellent introduction to BART and includes a long list of papers using BART in a variety of scientific disciplines.)

We apply the resulting model to nowcast GDP growth in selected Eurozone economies (Germany, Spain, France and Italy) and show that our approach outperforms the linear MF-VAR model. With some exceptions, it produces slightly better nowcasts through 2019. But when the first two months of 2020 are included, the improvements offered by MF-BAVART rise substantially. We investigate where these improvements are coming from in a detailed study of the predictive densities for the first six months of 2020.

The remainder of this paper is organized as follows. In the next section, we define the MF-BAVART, illustrate how it can effectively handle extreme observations such as have occurred during the pandemic, and briefly sketch the Markov Chain Monte Carlo (MCMC) algorithm for posterior and predictive Bayesian inference. The third section of the paper contains our empirical work. The fourth section offers a summary and conclusions. Appendix A provides full details of our Bayesian methods including the prior and MCMC algorithm. Appendix B provides additional empirical results.

Suppose we are interested in modeling an M-dimensional vector of time series y_t = (y'_{m,t}, y'_{q,t})', where y_{m,t} is an M_m vector, y_{q,t} is an M_q vector, and t = 1, ..., T indicates time at the monthly frequency.
The variables in y_{m,t} are observed, but we do not observe y_{q,t} at any point in time. Instead, the statistical agency produces a quarterly figure, y_{Q,t}. Assuming that y_{q,t} are monthly growth rates (log difference relative to the previous month) and y_{Q,t} are quarterly growth rates (log difference relative to the previous quarter), the relationship between them is (see Mariano and Murasawa, 2003):

y_{Q,t} = (1/9) y_{q,t} + (2/9) y_{q,t-1} + (1/3) y_{q,t-2} + (2/9) y_{q,t-3} + (1/9) y_{q,t-4}.   (1)

We refer to this as the intertemporal restriction and note that it applies every third month (e.g. the statistical agency produces quarterly data for the quarter covering January, February and March, but not the quarter covering February, March, April). We divide our quarterly growth rates by three to make their scale comparable to the monthly growth rates; thus, the right hand side of this equation divides that of Mariano and Murasawa (2003) by 3.

We assume that y_t evolves according to a VAR of the form:

y_t = F(X_t) + ε_t,   ε_t ∼ N(0, Σ),   (2)

with X_t = (y'_{t-1}, ..., y'_{t-p})', F(X_t) = (f_1(X_t), ..., f_M(X_t))' being a vector of potentially non-linear functions with f_j : R^K → R, and Σ denoting an M × M-dimensional variance-covariance matrix. The methods derived in this literature and used in the present paper can also be used with other black box models such as neural networks or Gaussian process regressions.

This is a non-linear state space model in which the unobserved monthly values of the quarterly variables, y_{q,t}, are treated as states. The state equations are given by (2). The measurement equations are the intertemporal restriction in (1) (applicable every third month) and those which simply state that y_{m,t} are observed every month.

If F(X_t) is a vector of linear functions, then we obtain the linear MF-VAR of, e.g., Schorfheide and Song (2015). Assuming a conditionally Gaussian prior for the VAR coefficients (e.g. the Minnesota prior or a conditionally Gaussian global-local shrinkage prior), posterior and predictive inference is straightforward.
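To make the intertemporal restriction concrete, the following sketch (our own illustration, not the authors' code; function and variable names are invented) applies the weights in (1) to a vector of latent monthly growth rates:

```python
import numpy as np

# Weights of the intertemporal restriction in equation (1):
# (1/9, 2/9, 1/3, 2/9, 1/9), i.e. the Mariano-Murasawa weights
# divided by 3 so quarterly and monthly growth share a scale.
WEIGHTS = np.array([1.0, 2.0, 3.0, 2.0, 1.0]) / 9.0

def quarterly_from_monthly(y_q, t):
    """Quarterly growth implied at month t (the last month of a
    quarter) by the current and four preceding monthly growth rates."""
    window = y_q[t - 4:t + 1][::-1]   # y_{q,t}, y_{q,t-1}, ..., y_{q,t-4}
    return float(WEIGHTS @ window)

# With constant monthly growth, quarterly growth matches it,
# because the weights sum to one.
monthly = np.full(12, 0.5)
quarterly = quarterly_from_monthly(monthly, 5)   # ≈ 0.5
```

In the state space form, this mapping is the measurement equation that links the latent y_{q,t} to the observed y_{Q,t} every third month.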
For such Gaussian linear state space models, standard Bayesian MCMC methods such as Forward-Filtering Backward-Sampling (FFBS; see, e.g., Frühwirth-Schnatter, 1994) can be used.

In this paper, we wish to treat F(X_t) non-parametrically. In principle, any model can be used for F (e.g. kernel regression, deep neural networks, tree-based models, Gaussian process regression) and the methods derived below could be used with minor modifications. In this paper, we approximate F using BART as, for reasons discussed below, it should be well-designed to capture large shocks and outliers such as those produced by the pandemic. BART approximates each f_j(X_t) as follows:

f_j(X_t) = Σ_{s=1}^{S} g_{js}(X_t | T_{js}, μ_{js}),   (3)

where T_{js} are so-called tree structures related to the j-th element in y_t, μ_{js} are tree-specific terminal nodes, and S denotes the total number of trees used. The dimension of μ_{js} is denoted by b_{js}, which depends on the complexity of the tree (i.e. this dimension is the number of leaves on the tree). In our empirical work, we follow Chipman et al. (2010) and set S = 250.

To understand how BART works, we begin with a single tree (and, for simplicity, suppress the js subscripts which distinguish the various trees and equations in the VAR). In the language of regression, a tree takes as an input the values of the explanatory variables for an observation and produces as an output a fitted value for the dependent variable for that observation. These fitted values are the parameters related to the terminal nodes. It does this by dividing the space of explanatory variables into various regions using a sequence of binary rules. These so-called splitting rules take the form {X ∈ A_r} or {X ∉ A_r}, with A_r being a partition set for r = 1, ..., b and X = (X_1, ..., X_T)' a full-data matrix of dimension T × K.
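A toy sketch of these objects (ours; the tree structures, thresholds and leaf values are invented for illustration) shows how a single tree routes an observation to a terminal node and how the trees in (3) add up:

```python
import numpy as np

# Toy single trees and their sum. Internal nodes hold a splitting
# rule (variable index i, threshold c); leaves hold terminal-node
# parameters mu_r. All structures and values here are invented.

def make_leaf(mu):
    return {"leaf": True, "mu": mu}

def make_split(i, c, left, right):
    return {"leaf": False, "i": i, "c": c, "left": left, "right": right}

def tree_predict(tree, x):
    """g(X; T, mu): route observation x to its terminal node."""
    node = tree
    while not node["leaf"]:
        node = node["left"] if x[node["i"]] <= node["c"] else node["right"]
    return node["mu"]

def bart_predict(trees, x):
    """The BART fit is the sum of S weak-learner trees, as in (3)."""
    return sum(tree_predict(t, x) for t in trees)

# Two small trees; their fitted values add up.
t1 = make_split(0, -0.96, make_leaf(-0.5), make_leaf(0.2))
t2 = make_split(1, 0.0, make_leaf(-0.1), make_leaf(0.1))
fit = bart_predict([t1, t2], np.array([-1.2, 0.3]))   # ≈ -0.5 + 0.1
```

Estimation treats every ingredient of each tree (splits, thresholds, leaf values, tree depth) as an unknown parameter, which is what the MCMC algorithm sweeps over.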
The partition rules involve an explanatory variable and depend on whether it is above or below a threshold, c. If we let X_{•i} denote the i-th column of X, then the partition set takes the form {X_{•i} ≤ c} or {X_{•i} > c}.

The fitted value of the dependent variable for an observation with explanatory variables in the set A_r produced by a single regression tree takes the form

g(X; T, μ) = μ_r, if X ∈ A_r, r = 1, ..., b.

A key point to emphasize is that everything defining the tree is treated as an unknown parameter and estimated. This includes the terminal node parameters (μ, which is the vector of fitted values the algorithm can choose between), their number (b), all the elements of the tree structure (i.e. the explanatory variable, X_{•i}, and threshold, c, chosen to define each splitting rule), and even the number of splitting rules each tree involves. In the following sub-section, we provide a brief empirical illustration of what an estimated tree looks like in our data set.

The preceding discussion involved a single tree and illustrated its flexibility. But BART involves not just one regression tree, but rather a sum of them. By adding up various regression trees, even greater flexibility is produced. Thus, BART can be interpreted as a non-parametric approach capable of approximating any nonlinear function. But, as with any non-parametric approach, additive regression trees risk over-fitting. This is why Bayesian methods have been commonly used: prior information can mitigate this problem. We use regularization priors to reduce the complexity of the tree structures and to shrink the terminal nodes. In the jargon of this literature, we force each tree to be small and, thus, act as a weak learner. This essentially implies that for a large S, each tree explains only a limited fraction of the variation in y_t.

In terms of posterior and predictive computation, the point to note is that efficient MCMC algorithms have been derived for estimating BART models.
In our MF-BAVART model, we use these conditional on y_{q,t}. That is, one block of the MCMC algorithm (to be discussed below) provides draws of y_{q,t} and, conditional on these draws, we use standard algorithms for drawing the BART parameters. In principle, we could draw the parameters of the trees and Σ as an entire M-dimensional system. However, we follow Carriero et al. (2019) and estimate the model on an equation-by-equation basis by conditioning on the lower Cholesky factor of Σ. This speeds up computation time enormously. Complete details of our prior and posterior simulation methods are provided in Appendix A.

To provide some additional intuition of what BART is doing and why it might be a good approach to handle the extreme observations associated with the pandemic, we preview our empirical application in a simple way. Full details of our data and application are provided below; suffice it to note here that results in this sub-section are for GDP growth for Germany, estimated on the full sample of data which runs through 2020Q2. We use a single tree with a relatively non-informative prior so as to allow for more complex tree structures. This is just for illustration. In our main empirical work, we use many trees and a regularization prior.

Figure 1 shows the estimated regression tree for Germany. The tree is organized with a condition (e.g.
XIP(t−1) < −0.96) at the top of every binary split. If this condition holds, you move down the left branch; otherwise, you move down the right branch. So, for example, the rightmost terminal node is reached by observations which have XIP(t−1) greater than or equal to −0.96 (go right at the first split), have XGDP(t−1) greater than or equal to −0.392 (go right at the second split), and have XIP(t−1) greater than or equal to a further, higher threshold (go right at the third split). One of the terminal nodes is chosen by a single observation with a large negative fitted value: this is a pandemic observation. What BART is doing is creating nodes for capturing outliers. Whereas the parameter estimates in a linear model can be substantially affected by an outlier, BART can simply add a new branch to control for it without affecting the main body of the tree. We will explore this issue in more detail in our empirical work, but this is the intuition for why our MF-BAVART ends up nowcasting better than the linear MF-VAR, particularly around the time of the pandemic.
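This intuition can be demonstrated with a toy simulation (our own example, unrelated to the paper's data): a single extreme observation shifts the least squares slope for every observation, while a single split can isolate the outlier in its own terminal node, leaving the fit for the remaining observations untouched.

```python
import numpy as np

# Toy illustration: one pandemic-like outlier moves the OLS slope
# for all observations, while a tree-type split quarantines it.
rng = np.random.default_rng(0)
x = np.linspace(-2.0, 2.0, 40)
y = 0.5 * x + 0.1 * rng.standard_normal(40)
x_out = np.append(x, -2.5)     # add one extreme observation
y_out = np.append(y, -20.0)

def ols_slope(x, y):
    """Least squares slope of y on a constant and x."""
    X = np.column_stack([np.ones_like(x), x])
    return float(np.linalg.lstsq(X, y, rcond=None)[0][1])

slope_clean = ols_slope(x, y)          # close to the true slope 0.5
slope_dirty = ols_slope(x_out, y_out)  # pulled well away from 0.5

# A split at x <= -2.25 puts the outlier alone in one leaf, so the
# fitted values for all other observations are unaffected.
leaf_outlier = y_out[x_out <= -2.25].mean()   # the outlier itself
leaf_rest = y_out[x_out > -2.25].mean()       # unchanged by the outlier
```

The design choice mirrors the discussion above: the "new branch" is a local patch, whereas the OLS coefficient is a global quantity that every observation moves.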
Sub-section 2.1 defined the MF-BAVART and discussed how well-established MCMC methods can be used to draw the BART parameters conditional on the states (i.e. the unobserved high frequency values of the low frequency variables). To complete the MCMC algorithm, we need a method for drawing the states conditional on the BART parameters. In a linear MF-VAR this is done using standard Bayesian state space algorithms such as FFBS. But with MF-BAVART this is more complicated, since the model is highly non-linear and FFBS is not directly applicable. Accordingly, we borrow from the literature that deals with estimating effect sizes in black-box models (see Crawford et al., 2018, 2019) to produce a linear approximation to F(X_t). Given this linear approximation, FFBS can be used to draw y_{q,t}. Thus, this step in the MCMC algorithm is an approximate one, but our empirical results indicate the approximation is a good one.

To explain our linear approximation, note that in linear regression models, the effect size is commonly interpreted as the magnitude of the projection of X onto Y = (y_1, ..., y_T)', which takes the form:

Â = Proj(X, Y) = X†Y,

with X† (which is K × T) being the Moore-Penrose inverse. In the case where X is a full rank matrix, this projection is simply (X'X)^{-1}X'Y and the effect size is simply the least squares estimate (i.e. it is an estimate of the magnitude of the marginal effect of the explanatory variables on the dependent variables). We follow Crawford et al. (2018, 2019) and adopt this idea but using the non-parametric functions F = (F(X_1), ..., F(X_T))' in place of Y. This produces the following estimate, which can be interpreted as an effect size:

Ã = Proj(X, F) = X†F.

The argument for why Ã can be interpreted as an effect size similar to the least squares estimator in linear models is provided in detail in papers such as Crawford et al. (2018), Crawford et al. (2019) and Ish-Horowicz et al.
(2020). But in essence, the justification is based on the idea that it can be shown that, at the T observations, F ≈ XÃ. We use this fact to produce a linear approximation to the non-parametric multivariate model:

y_t = Ã'X_t + ε_t.   (4)

Since we now have a linear model with Gaussian shocks, standard techniques such as FFBS can be used to draw y_{q,t} based on the Gaussian linear state space model defined by (4) and the intertemporal restriction, (1).

In this section, we investigate the performance of our MF-BAVART model for forecasting GDP growth using four data sets with relatively short samples. The short samples arise since some of the variables have only been collected for a short time period. This is an issue which arises with many of the new data sets that are becoming popular (e.g. internet search data) and, accordingly, we felt it useful to test our methodology in the type of context where it might be used in the future. All models use a lag length of 5, since this is the number of lags in the intertemporal restriction in (1).
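The effect-size projection used above for the FFBS step can be sketched in a few lines (our illustration; the nonlinear function standing in for the BART fit is an arbitrary placeholder):

```python
import numpy as np

# Effect-size projection in the spirit of Crawford et al.: project
# the fitted non-parametric function values F onto the predictors
# via the Moore-Penrose inverse, A_tilde = X^+ F. The nonlinear F
# below is an arbitrary stand-in for BART fitted values.
rng = np.random.default_rng(1)
T, K, M = 120, 8, 3
X = rng.standard_normal((T, K))
F = np.column_stack([
    np.tanh(X[:, :3]).sum(axis=1),   # placeholder nonlinearities
    np.maximum(X[:, 0], 0.0),
    X[:, 1] * X[:, 2],
])

A_tilde = np.linalg.pinv(X) @ F   # K x M matrix of effect sizes

# X @ A_tilde is the least squares approximation of F at the sample
# points, which is what makes the Gaussian linear state space in (4),
# and hence FFBS, applicable.
F_lin = X @ A_tilde
```

Because the projection is least squares, the approximation error F − XÃ is orthogonal to the columns of X at the sample points.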
Figure 1: Estimated tree structure for Germany using a single tree.

Notes: The variables in the tree are GDP, IP (industrial production), ESI (economic sentiment indicator), PMI (purchasing manager's index), and X denotes the growth rate. Complete definitions are given in the empirical section of this paper. The number of observations choosing each terminal node is denoted by n. The splitting rules are defined such that, if the condition holds, you move down the left branch of the tree, else you move down the right branch.

3.1 Data

We use monthly and quarterly data on Germany (DE), France (FR), Italy (IT) and Spain (ES) from 2005M03/2005Q1 to 2020M06/2020Q2 on the following M = 6 variables:

1. GDP growth: quarterly GDP growth (abbreviated GDP), released six weeks after the end of the respective quarter.
2. Industrial production: monthly growth rate of industrial production (abbreviated IP), released with approximately a six week lag.
3. Economic sentiment indicator: monthly growth rate of the economic sentiment indicator (abbreviated ESI), released on the next-to-last working day of the respective month.
4. New car registrations: monthly growth rate of new car registrations (abbreviated CAR), released with a delay of two and a half weeks.
5. Purchasing manager index: monthly growth rate of the purchasing manager index (abbreviated PMI), released on the first working day of the next month.
6.
One-year-ahead interest rates (abbreviated EUR): monthly average, available immediately after the end of the respective month.

Data on GDP and industrial production are obtained from Eurostat, the Economic Sentiment Indicator is provided by the European Commission, figures on new car registrations are released by the European Automobile Manufacturers Association (ACEA), PMI readings come from Markit, and the interest rate data is obtained from Macrobond.

Given the relatively short sample size, we begin evaluating nowcasts in 2011Q1. Within each quarter, we produce three nowcasts, one for each month in the quarter. Our model nowcasts monthly growth rates, y_{q,t}, which are turned into quarterly growth rates for comparison with the actual realization of quarterly GDP growth. All of our nowcasts respect the release calendar (e.g. a nowcast produced for January will be made at the beginning of February using the data that has been released by then).

We compare results from our MF-BAVART specification to a standard linear MF-VAR which is identical in all respects except that it is linear. This implies that we set F(X_t) = AX_t with A being an M × K coefficient matrix. On a = vec(A) we use a Horseshoe prior analogous to the one defined in (A.2). The prior on Σ is the same in the two models.

Tables 1 and 2 summarise our findings. They offer a comparison of MF-BAVART to the conventional MF-VAR in terms of root mean squared forecast errors (RMSEs), log predictive scores (LPSs, which are log predictive likelihoods summed over the nowcast evaluation period) and continuously ranked probability scores (CRPSs). To investigate the pandemic period, we produce two sets of results: one for the full sample (including the pandemic period) and one ending in 2019.

Note first that, as we move from month to month within a quarter, our nowcasts almost always improve.
This statement holds true for all nowcast evaluation metrics and countries. This provides evidence that mixed frequency methods are useful for nowcasting in these data sets. As new information is released each month, our nowcasts of GDP growth improve.

Table 1:
Results of the nowcasting exercise through the end of 2019.
Timing   Country   RMSE (MF-BAVART / MF-VAR)   LPS (MF-BAVART / MF-VAR)   CRPS (MF-BAVART / MF-VAR)
M/Q 1    DE        0.588 / 0.610               -47.781 / -70.337          0.342 / 0.357
M/Q 2    DE        0.521 / 0.455               -35.669 / -32.870          0.304 / 0.262
M/Q 3    DE        0.499 / 0.395               -31.400 / -22.975          0.290 / 0.228
M/Q 1    ES        0.320 / 0.436               -12.656 / -57.506          0.184 / 0.267
M/Q 2    ES        0.304 / 0.406               -10.185 / -47.043          0.174 / 0.245
M/Q 3    ES        0.296 / 0.388                -9.208 / -39.913          0.168 / 0.233
M/Q 1    FR        0.322 / 0.312               -39.576 / -53.952          0.212 / 0.208
M/Q 2    FR        0.299 / 0.292               -26.279 / -40.745          0.188 / 0.184
M/Q 3    FR        0.296 / 0.259               -26.775 / -27.002          0.186 / 0.152
M/Q 1    IT        0.397 / 0.475               -19.412 / -60.077          0.210 / 0.248
M/Q 2    IT        0.353 / 0.383               -12.880 / -30.691          0.195 / 0.208
M/Q 3    IT        0.327 / 0.367                -9.364 / -25.893          0.181 / 0.207
Notes: M/Q denotes which month within the quarter the nowcast was made.
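For reference, the three metrics in Tables 1 and 2 can be sketched for a single Gaussian predictive density N(μ, σ²); this is a simplification we introduce for illustration, since the paper's predictive densities are built from MCMC output rather than a closed form:

```python
import math

# RMSE, log predictive score, and CRPS for a Gaussian predictive
# density N(mu, sigma^2). The CRPS closed form for the Gaussian is
# from Gneiting and Raftery (2007).

def phi(z):
    """Standard normal pdf."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def Phi(z):
    """Standard normal cdf."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def log_score(y, mu, sigma):
    """Log predictive likelihood of realization y (higher is better)."""
    return math.log(phi((y - mu) / sigma) / sigma)

def crps(y, mu, sigma):
    """Continuously ranked probability score (lower is better)."""
    z = (y - mu) / sigma
    return sigma * (z * (2.0 * Phi(z) - 1.0) + 2.0 * phi(z)
                    - 1.0 / math.sqrt(math.pi))

def rmse(errors):
    """Root mean squared forecast error over a list of errors."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))
```

The LPS entries in the tables are such log scores summed over the evaluation period; CRPS, unlike RMSE, rewards densities that concentrate near the realization while remaining calibrated.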
In terms of the comparison of linear versus non-parametric mixed frequency methods, with some exceptions, prior to the pandemic we find that MF-BAVART nowcasts somewhat better than the linear MF-VAR. But the improvements from using the non-parametric model are not large. The main exception is Germany, where the MF-VAR nowcasts slightly better than MF-BAVART. For Spain and Italy, which had more volatile GDP growth over our sample period, MF-BAVART nowcasts substantially better than the MF-VAR. For France, the various methods of nowcast comparison tell slightly different stories: LPSs indicate the non-parametric approach is nowcasting better, but CRPSs indicate the linear model is doing slightly better.

When we turn to Table 2, which includes the pandemic period, we tend to see much better nowcast performance of the MF-BAVART relative to the MF-VAR. There are exceptions to this, both for Germany and in some RMSEs. It is interesting to note that the measures using the entire predictive density (i.e. LPSs and CRPSs) show large improvements for the MF-BAVART relative to the MF-VAR, whereas the measure which uses only point nowcasts does not. Clearly, the benefits of using the non-parametric approach lie largely in its ability to better model second and higher predictive moments for the extreme observations in the first half of 2020.

The pattern for Germany's LPSs is interesting. In Table 1, the MF-VAR produced slightly better LPSs in the second and third months within the quarter. However, in Table 2, MF-BAVART produces LPSs which are worse for the first month within a quarter, but better for the second and third months. To investigate such patterns more deeply, consider Figures 2a, 2b, 2c and 2d, which plot cumulative sums of log predictive likelihoods over time. For Spain and Italy, it can be seen that MF-BAVART nowcasts well relative to MF-VAR throughout the entire sample, but there is a particularly large jump in 2020.
For France, pre-2020 the performance of MF-BAVART is mixed, but in the first half of 2020 there is the same kind of large jump in BART's performance as was observed for Italy and Spain. For these three countries, for every month within each quarter, BART is clearly doing a much better job of modelling the pandemic shock than the linear model. For Germany, a similar jump in BART's performance is observed during the pandemic, but only in the second and third months of the quarter. In the first month, however, BART nowcasts very poorly during the pandemic, and it is this that drives the aforementioned finding for Germany. But with this one exception, the MF-BAVART model does an excellent job of handling the pandemic.

Figure 2:
Cumulative LPSs for MF-BAVART (solid line) relative to MF-VAR (dashed line). Panels: (a) DE, (b) ES, (c) FR, (d) IT. Each panel reports results for months M1, M2 and M3 within the quarter, for the sample until 2019 ("Until") and the sample including 2020 ("Incl.").

Table 2: Results of the nowcasting exercise through 2020Q2.
Timing   Country   RMSE (MF-BAVART / MF-VAR)   LPS (MF-BAVART / MF-VAR)   CRPS (MF-BAVART / MF-VAR)
M/Q 1    DE        1.700 / 1.725               -215.016 / -116.339        0.624 / 0.639
M/Q 2    DE        1.672 / 1.219                -76.042 / -125.467        0.549 / 0.472
M/Q 3    DE        1.588 / 1.191                -79.296 / -112.404        0.544 / 0.440
M/Q 1    ES        3.204 / 2.831               -309.236 / -851.437        0.748 / 0.818
M/Q 2    ES        2.859 / 3.556               -319.677 / -633.894        0.686 / 0.908
M/Q 3    ES        2.783 / 2.486               -311.633 / -537.136        0.693 / 0.703
M/Q 1    FR        2.117 / 2.314               -787.116 / -1198.590       0.622 / 0.675
M/Q 2    FR        1.881 / 1.392               -707.394 / -997.313        0.552 / 0.467
M/Q 3    FR        1.902 / 1.549               -743.594 / -979.229        0.561 / 0.475
M/Q 1    IT        1.871 / 2.047               -304.824 / -624.947        0.558 / 0.648
M/Q 2    IT        1.730 / 1.939               -353.395 / -647.272        0.519 / 0.592
M/Q 3    IT        1.702 / 1.919               -333.255 / -621.686        0.521 / 0.591
Notes: M/Q denotes which month within the quarter the nowcast was made.
The preceding sub-section compared the relative performance of the MF-BAVART to the MF-VAR, but did not present any evidence on the nowcast performance of either in an absolute sense. In Appendix B, we provide graphs of the nowcasts of both approaches plotted against realized GDP growth for the four countries and the three monthly nowcasts within each quarter. An examination of them indicates that the MF-BAVART's nowcasts are better calibrated, particularly for Spain.

In this sub-section, we investigate this issue more formally using Probability Integral Transforms (PITs). In particular, we follow a common practice (e.g. Clark, 2011) and produce PITs for our nowcasts and transform them using the inverse of the c.d.f. of a standard Gaussian. We denote these transformed PITs as r_t for the time t of our nowcast evaluation period. Perfectly calibrated nowcasts should lead to r_t having mean zero, variance one and being uncorrelated over time. We calculate the sample mean (labelled μ in the tables), variance (labelled σ) and estimated AR(1) coefficient (labelled AR(1)), along with 95% credible intervals. Tables 3 and 4 report these summary statistics for the sample through 2019 and the full sample, respectively.

Beginning with the linear MF-VAR, note that even in the pre-pandemic sample there is some evidence of poor calibration. For the sample mean, the point estimates are consistently well away from zero, although the credible intervals always contain zero. The sample variances are substantially higher than one, and the credible intervals lie completely above one. There is also evidence of serial correlation in r_t, particularly for the first month in a quarter.
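The transformed PITs can be sketched as follows (our illustration; the predictive c.d.f. is approximated by the empirical c.d.f. of hypothetical MCMC draws, and the toy data are simulated):

```python
import numpy as np
from statistics import NormalDist

# Transformed PITs: u_t = predictive cdf at the realization,
# r_t = Phi^{-1}(u_t). Well-calibrated nowcasts give r_t mean 0,
# variance 1, and no serial correlation.

def transformed_pits(draws, realizations):
    """draws: (n_periods, n_draws) predictive draws per nowcast;
    realizations: (n_periods,) realized values."""
    nd = NormalDist()
    r = []
    for d, y_t in zip(draws, realizations):
        u = float((d <= y_t).mean())          # empirical predictive cdf
        u = min(max(u, 1e-6), 1.0 - 1e-6)     # keep inv_cdf finite
        r.append(nd.inv_cdf(u))
    return np.array(r)

def ar1_coef(r):
    """Sample AR(1) coefficient of the transformed PITs."""
    x = r - r.mean()
    return float((x[1:] * x[:-1]).sum() / (x[:-1] ** 2).sum())

# Calibrated toy example: draws and realizations from the same N(0, 1).
rng = np.random.default_rng(2)
draws = rng.standard_normal((200, 5000))
realized = rng.standard_normal(200)
r = transformed_pits(draws, realized)
```

The μ, σ and AR(1) entries in Tables 3 and 4 are then the sample mean and variance of r_t and its AR(1) coefficient, with posterior uncertainty attached in the paper.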
When we move to the full sample, these calibration problems get much worse, particularly the sample variance of r_t, which now becomes very large.

If we turn to the MF-BAVART in Table 4, it can be seen that its nowcasts are better calibrated. Even for the full sample, the credible intervals for the sample mean of r_t always contain zero and, with the exception of a couple of cases in the first month, the estimated AR(1) coefficient is insignificant. It is the case that the sample variance of r_t is still too high, but to a much lesser extent than for the MF-VAR. Indeed, this sample variance tends to be roughly half of what it was with the MF-VAR (again with the exception of Germany in the first month of the quarter). Thus, use of the MF-BAVART has gone a long way towards improving the calibration problems of the MF-VAR, even if it has not completely fixed them.

Table 3: Summary Statistics of Transformed PITs through 2019.
Columns: M/Q 1 | M/Q 2 | M/Q 3, each reported as MF-BAVART / MF-VAR.

IT  μ:  -0.34 (-0.72, 0.03) / -0.35 (-0.95, 0.24) | -0.3 (-0.63, 0.04) / -0.26 (-0.73, 0.25) | -0.26 (-0.57, 0.06) / -0.29 (-0.77, 0.2)

Notes: M/Q denotes which month within the quarter the nowcast was made. μ, σ and AR(1) denote the sample mean, variance and AR(1) coefficient of the transformed PITs. Numbers in parentheses are 95% credible intervals.

Table 4:
Summary Statistics of Transformed PITs through 2020.
Columns: M/Q 1 | M/Q 2 | M/Q 3, each reported as MF-BAVART / MF-VAR.

DE  μ:  -0.55 (-1.47, 0.32) / -0.07 (-0.79, 0.62) | -0.24 (-0.79, 0.33) / -0.19 (-0.93, 0.55) | -0.27 (-0.84, 0.29) / -0.19 (-0.88, 0.51)
ES  μ:  -0.66 (-1.77, 0.48) / -1.09 (-2.96, 0.75) | -0.68 (-1.83, 0.46) / -0.76 (-2.41, 0.78) | -0.74 (-1.87, 0.37) / -0.64 (-2.14, 0.85)
FR  μ:  -0.56 (-1.6, 0.54) / -0.53 (-1.72, 0.67) | -1 (-2.67, 0.7) / -0.31 (-1.38, 0.79) | -0.51 (-1.54, 0.5) / -0.43 (-1.49, 0.67)
IT  μ:  -1.02 (-2.13, 0.06) / -1.48 (-3.04, 0.11) | -1.02 (-2.17, 0.17) / -1.29 (-2.91, 0.34) | -1.06 (-2.16, 0.13) / -1.35 (-2.91, 0.22)

Notes: M/Q denotes which month within the quarter the nowcast was made. μ, σ and AR(1) denote the sample mean, variance and AR(1) coefficient of the transformed PITs. Numbers in parentheses are 95% credible intervals.

In this section, we provide more insight into how MF-BAVART nowcasts the two pandemic quarters and into the role of the individual variables. We do so by estimating five different versions of our models using different sets of variables. The first version is the one we have used thus far, involving all six variables (labelled Full in the figures). The other models all involve GDP growth and industrial production (the two main variables) along with one additional high frequency variable (these are labelled by the name of the additional variable in the figures). We can then examine aspects of the six predictive densities (i.e. for the six months in the first half of 2020) for the five different models for each country, for both MF-BAVART and MF-VAR.

Figure 3 presents the log predictive likelihoods for each individual observation. The story that emerges reinforces our previous evidence that MF-BAVART offers substantial advantages in nowcasting during pandemic times. When the pandemic hit, the log predictive likelihoods from both models did tend to become quite negative, particularly during 2020Q2.
Figure 3: Log predictive likelihoods for each month within the first two quarters of 2020. [Panels by country (DE, ES, FR, IT) and model (MF-BAVART, MF-VAR), with lines for the Full, CAR, ESI, EUR and PMI specifications.]

But, with one main exception, this drop in predictive likelihoods was much larger for the linear model than for the non-parametric one. The one exception was noted previously and occurred for Germany, for nowcasts made in the first month of each quarter. It can now be seen why this occurs. In the first month of 2020Q2, the MF-VAR produced a nowcast of German GDP growth which was much better than the one produced by MF-BAVART. This pattern is not repeated in the second or third months of 2020Q2, where the MF-BAVART nowcasts are better than the MF-VAR ones.

Turning to the issue of which variables are most useful in the nowcasts: for MF-BAVART, the five different models tend to produce similar log predictive scores during the pandemic months. However, for MF-VAR there are sometimes substantial differences between models. We noted previously that the one time and country where the MF-VAR nowcasts better than MF-BAVART was Germany in the first month of 2020Q2. This finding occurred for the full model, but it can be seen that for several of the smaller models the MF-VAR is actually nowcasting worse than MF-BAVART for this month.

It is interesting to note that, particularly for 2020Q1, we often observe that the full model produces slightly inferior density forecasts compared to the smaller models. For instance, we find in 2020Q1 that models which contain just GDP, IP and CAR perform as well as or slightly better than the full model. In 2020Q2, however, the full specifications tend to nowcast better, with different variables tending to be of varying importance across countries (e.g.
interest rates seem to be important for Italy and France, while car registrations work well for Germany).

Figures 4a, 4b, 4c and 4d plot the predictive densities for the various models and countries for the first six months of 2020. The key general finding is that, as expected, MF-BAVART is much more flexible than the MF-VAR. Particularly in 2020Q2, the predictive densities it produces tend to be much more dispersed, feature fatter tails, are often asymmetric, and there is even some slight evidence (in the case of Germany) of multi-modality. This contrasts with the MF-VAR, where the predictive densities tend to be closer to Gaussian. In light of the recent interest in macroeconomics in models involving asymmetries and multimodalities (see, e.g., Adrian et al., 2019a,b), this feature of BART is particularly attractive and is the source of the improvements in nowcast performance during the pandemic.

Another feature of the predictive densities worth noting is that for the MF-VAR, the nowcast density for the Full model is sometimes shifted towards zero relative to the smaller models. It does not look like a combination of the nowcast densities for the smaller models. These properties do not occur with MF-BAVART. This is due to the horseshoe prior used with the MF-VAR shrinking larger models more aggressively, and it highlights the importance of prior elicitation in linear VARs. With MF-BAVART we are using a standard prior from the BART literature (see Chipman et al., 2010) and are obtaining results which are robust over model dimension, such that the Full model nowcast density appears like a sensible and flexible combination of those of the small models.

Figure 4: Predictive densities for the first two quarters of 2020 across countries. [Panels: (a) DE, (b) ES, (c) FR, (d) IT; for each country, MF-BAVART and MF-VAR densities by month within the quarter (M1-M3), with densities for the Full, CAR, ESI, EUR and PMI specifications; horizontal axis: GDP growth; vertical axis: density.]

Summary and Conclusions
MF-VARs have been a standard tool for producing timely, high frequency nowcasts of low frequency variables for several years. With the arrival of the pandemic, the need for such nowcasts has become even more acute. However, conventional linear MF-VARs have nowcast poorly during the pandemic due to their inability to deal effectively with the extreme observations that have occurred. In this paper, we have developed MF-BAVART, a non-parametric model using additive regression trees. MF-BAVART can be cast as a nonlinear state space model. We develop an approximate MCMC algorithm where the parameters defining the conditional mean of the VAR are drawn using a standard BART algorithm and, conditional on these, the states are drawn using a linear approximation. This linear approximation is taken from the machine learning literature on black box models.

Our nowcasting exercise, involving four major EU countries, shows that MF-BAVART, with few exceptions, forecasts better than the linear MF-VAR at all times in our sample, but particularly big nowcasting benefits occur during the pandemic. We show how and why this occurs by providing a detailed comparison of nowcast densities in the first six months of 2020.
References
Adrian T, Boyarchenko N, and Giannone D (2019a), "Multi-modality in macro-financial dynamics," Federal Reserve Bank of New York Staff Reports.
——— (2019b), "Vulnerable growth," American Economic Review, 1263–1289.
Brave S, Butters R, and Justiniano A (2019), "Forecasting economic activity with mixed frequency BVARs," International Journal of Forecasting, 1692–1707.
Carriero A, Clark TE, and Marcellino M (2019), "Large Bayesian vector autoregressions with stochastic volatility and non-conjugate priors," Journal of Econometrics (1), 137–154.
Chipman HA, George EI, and McCulloch RE (1998), "Bayesian CART Model Search," Journal of the American Statistical Association (443), 935–948.
——— (2010), "BART: Bayesian additive regression trees," Annals of Applied Statistics (1), 266–298.
Clark T (2011), "Real-Time Density Forecasts From Bayesian Vector Autoregressions With Stochastic Volatility," Journal of Business and Economic Statistics, 327–341.
Crawford L, Flaxman S, Runcie D, and West M (2019), "Variable Prioritization in Nonlinear Black Box Methods: A Genetic Association Case Study," The Annals of Applied Statistics, 958–989.
Crawford L, Wood K, Zhou X, and Mukherjee S (2018), "Bayesian Approximate Kernel Regression With Variable Selection," Journal of the American Statistical Association, 1710–1721.
Eraker B, Chiu C, Foerster A, Kim T, and Seoane H (2015), "Bayesian mixed frequency VARs," Journal of Financial Econometrics, 698–721.
Frühwirth-Schnatter S (1994), "Data augmentation and dynamic linear models," Journal of Time Series Analysis, 183–202.
Ghysels E (2016), "Macroeconomics and the reality of mixed frequency data," Journal of Econometrics, 294–314.
Huber F, and Rossini L (2020), "Inference in Bayesian Additive Vector Autoregressive Tree Models," https://arxiv.org/abs/2006.16333.
Ish-Horowicz J, Udwin D, Scharfstein K, Flaxman S, Crawford L, and Filippi S (2020), "Interpreting Deep Neural Networks Through Variable Importance," Journal of Machine Learning Research, 1–30.
Koop G, McIntyre S, Mitchell J, and Poon A (2020), "Regional Output Growth in the United Kingdom: More Timely and Higher Frequency Estimates From 1970," Journal of Applied Econometrics, 176–197.
Lenza M, and Primiceri G (2020), "How to estimate a VAR after March 2020," manuscript.
Makalic E, and Schmidt DF (2015), "A simple sampler for the horseshoe estimator," IEEE Signal Processing Letters (1), 179–182.
Mariano R, and Murasawa Y (2003), "A new coincident index of business cycles based on monthly and quarterly series," Journal of Applied Econometrics, 427–443.
Schorfheide F, and Song D (2015), "Real-time forecasting with a mixed-frequency VAR," Journal of Business and Economic Statistics (3), 366–380.
——— (2020), "Real-Time Forecasting with a (Standard) Mixed-Frequency VAR During a Pandemic," manuscript.
Tan Y, and Roy J (2019), "Bayesian additive regression trees and the General BART model," https://arxiv.org/abs/1901.07504.

A Priors and Posterior Simulation Algorithm
The model outlined in Section 2 is estimated using Bayesian techniques. This implies that we have to specify suitable priors on the parameters associated with the trees as well as on Σ. Before discussing the precise prior setup, we show how to rewrite the VAR as a system of unrelated regression models. This approach has the advantage that the computational burden is drastically reduced, since we can perform equation-by-equation estimation. Let Q be an M × M lower triangular matrix with unit diagonal such that Σ = QHQ′, where H = diag(σ_1², ..., σ_M²) denotes a diagonal matrix with variances σ_j². Notice that the first equation of (2) can be written as:

y_{1t} = f_1(X_t) + η_{1t},  η_{1t} ∼ N(0, σ_1²).

The second equation is given by:

y_{2t} = f_2(X_t) + q_{21} η_{1t} + η_{2t},  η_{2t} ∼ N(0, σ_2²).

In general, the jth equation (for j > 1) is:

y_{jt} = f_j(X_t) + q_j′ Z_{jt} + η_{jt},  η_{jt} ∼ N(0, σ_j²).  (A.1)

This implies that, conditional on the shocks to the previous j − 1 equations, the jth equation is a standard regression model that features a non-parametric part given by f_j(X_t) and a regression part q_j′ Z_{jt}, with q_j = (q_{j1}, ..., q_{j,j−1})′ and Z_{jt} = (η_{1t}, ..., η_{j−1,t})′. The (j − 1)-dimensional vector q_j stores the first j − 1 elements of the jth row of Q. These equations are conditionally independent and standard MCMC techniques can be readily applied. Alternative algorithms replace the shocks with the contemporaneous values of y_t. This introduces order dependence, which we avoid by conditioning on the shocks. Thus, we are using a standard sampling algorithm that is commonly used to sample from the multivariate Gaussian (Carriero et al., 2019).

A.1 The Prior
The priors we use are all specified in an equation-specific manner and are thus (up to minor differences caused by the fact that the dimension of Z_{jt} differs across equations) symmetric across equations. For each equation j, we closely follow Chipman et al. (2010) and use a regularization prior that can be factorized as follows:

p((T_{j1}, µ_{j1}), ..., (T_{jS}, µ_{jS}), σ_j², q_j) = {∏_s p(µ_{js} | T_{js}) p(T_{js})} p(q_j) p(σ_j²),

with p(µ_{js} | T_{js}) = ∏_i p(µ_{i,js} | T_{js}) and µ_{i,js} being the ith element of µ_{js}. This prior implies independence between equations, trees, covariance parameters and error variances. Within trees, we assume that the terminal leaf parameters are independent of each other but depend on the specific tree structure T_{js}.

Starting with the prior on T_{js}, we follow Chipman et al. (1998) and specify a tree-generating stochastic process that consists of three parts. The first part relates to the probability that a given node at depth n = 0, 1, 2, ... is not a terminal node. This probability is specified as α(1 + n)^{−β}, with α ∈ (0, 1) and β > 0. Larger values of β (smaller values of α) introduce a larger penalty on more complex tree structures. This prior thus controls for overparameterization by keeping trees rather small and simple (so that they act as weak learners). In our empirical application we set α = 0.95 and β = 2. This is the standard choice proposed by Chipman et al. (2010), which works well for a wide range of different datasets and in simulations. The second part concerns the possible values the thresholds c can take. Here we assume a discrete uniform distribution over all possible values of the ith covariate X_{•i}. Finally, the last part deals with the specific variables used in the splitting rule. Again, in the absence of substantial prior information, we use a uniform distribution over the K columns of X.

Consistent with Chipman et al. (2010), we construct the prior on µ_{i,js} by transforming Y such that the transformed values range from −0.5 to 0.5. This allows us to use a zero-mean Gaussian prior on µ_{i,js} that places substantial prior mass of µ_{i,js} between the minimum and maximum values of the columns of Y:

µ_{i,js} | T_{js} ∼ N(0, V_µ).

The prior variance V_µ is set as follows:

V_µ = 1 / (2π√S),

with π denoting a suitable positive constant. Notice that if S or π are increased, the prior is increasingly pushed towards zero and the effect of a single tree becomes smaller. This prior has the big advantage that values far outside the range of Y are highly unlikely but not ruled out a priori. We follow much of the recent literature and set π = 2.

For σ_j² we use the conjugate inverse chi-square distribution:

σ_j² ∼ ν_j ξ_j / χ²_{ν_j},

whereby ν_j and ξ_j denote hyperparameters that are calibrated using a data-based estimate of σ_j, denoted σ̂_j. This data-based estimate is taken to be the OLS standard deviation from a univariate AR(5) model. The values of ν_j and ξ_j are then chosen such that the vth quantile of the prior is centered on σ̂_j, with P(σ_j < σ̂_j) = v. In our application we use v = 0.75 and set the degrees of freedom ν_j = T/2. We found that this choice avoids too large values of σ_j² during the pandemic. Smaller values of ν_j yield similar but slightly more unstable results if the sample is expanded to include the first two quarters of 2020.

Finally, on the elements of q_j we use a Horseshoe (HS) prior:

q_{ji} | τ_{ji}, λ ∼ N(0, τ_{ji}² λ²),  τ_{ji} ∼ C⁺(0, 1),  λ ∼ C⁺(0, 1).  (A.2)

Here we let C⁺ denote the half Cauchy distribution, with τ_{ji} and λ scaling parameters. Note that λ does not feature any indices and thus serves as a common shrinkage factor across the free elements of Q. For later convenience we let V_j = λ² × diag(τ_{j1}², ..., τ_{j,j−1}²) denote a (j − 1) × (j − 1) prior covariance matrix.

A.2 Posterior Simulation

Under this prior and likelihood configuration we can derive a posterior simulation algorithm that consists of simple, well-known steps. Hence, we only briefly summarize the main steps involved and provide relevant references with more details on the specific steps.

The tree structure T_{js} can be obtained marginally of µ_{js} using the strategy outlined in Section 5.1 of Chipman et al. (1998). In brief, this consists of sampling the tree T_{js} marginally of µ_{js} and conditional on the other trees T_{js'} for s' ≠ s. Using a Metropolis-Hastings algorithm, we propose a new tree using the last accepted tree and then pick one of four moves. The first move grows a terminal node with a probability of 0.25, the second move prunes two terminal nodes with a probability of 0.25, the third changes a non-terminal rule with probability 0.4, and the fourth swaps a rule between a parent and a child node with probability 0.1. In all cases, µ_{js} is integrated out and thus the dimension of the estimation problem is kept fixed.

Next, we simulate the terminal node parameters µ_{js}. These can be obtained by simulating µ_{i,js} from independent Gaussian distributions which take a textbook conjugate form. The same can be said about the error variances, which can be obtained by simulating from a conditional posterior that follows an inverse Gamma distribution.

We sample q_j using (A.1) from a multivariate Gaussian posterior:

q_j | • ∼ N(m_j, Ω_j),

with moments

Ω_j = (Z_j′ Z_j / σ_j² + V_j^{−1})^{−1},  m_j = Ω_j Z_j′ ỹ_j / σ_j²,

where ỹ_j = y_{•j} − f_j(X), y_{•j} denotes the jth column of Y, and Z_j = (Z_{j1}, ..., Z_{jT})′. The • notation indicates that we condition on the remaining model parameters and the latent states.

The scaling parameters of the HS prior are obtained using methods outlined in Makalic and Schmidt (2015). Introducing additional auxiliary parameters allows us to simulate τ_{ji}² and λ² from inverse Gamma distributions. More precisely, the corresponding full conditional posteriors are:

τ_{ji}² | • ∼ G⁻¹(1, 1/w_{ji} + q_{ji}²/(2λ²)),  for i = 1, ..., j − 1,

λ² | • ∼ G⁻¹((M(M − 1)/2 + 1)/2, 1/ζ + (1/2) Σ_i Σ_j q_{ji}²/τ_{ji}²),

where G⁻¹ denotes the inverse Gamma distribution. The auxiliary parameters w_{ji} and ζ are simulated from:

w_{ji} | • ∼ G⁻¹(1, 1 + τ_{ji}^{−2}),  ζ | • ∼ G⁻¹(1, 1 + λ^{−2}).

Finally, we use the methods outlined in Sub-section 4 to simulate y_{q,t}. We repeat this algorithm 30,000 times and discard the first 15,000 draws as burn-in. Standard convergence diagnostics point towards rapid convergence to the joint posterior distribution and thus closely mirror the excellent performance of the original algorithm of Chipman et al. (2010).
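The auxiliary-variable horseshoe updates above are simple to implement. The following sketch (our own illustration, not the authors' code) performs one Gibbs sweep over the scaling parameters for a single equation, drawing each inverse-Gamma variate as the reciprocal of a Gamma draw:

```python
import numpy as np

def inv_gamma(shape, scale, rng):
    # If X ~ Gamma(shape, scale=1/b), then 1/X ~ InvGamma(shape, b).
    return 1.0 / rng.gamma(shape, 1.0 / scale)

def horseshoe_sweep(q, tau2, lam2, w, zeta, rng):
    """One Gibbs sweep for the horseshoe scales of one equation, following the
    auxiliary-variable scheme of Makalic and Schmidt (2015).
    q: free covariance coefficients; tau2: local scales; lam2: global scale;
    w, zeta: auxiliary inverse-Gamma parameters."""
    k = q.size
    # local scales tau2_i | rest and their auxiliaries w_i | rest
    tau2 = np.array([inv_gamma(1.0, 1.0 / w[i] + q[i] ** 2 / (2.0 * lam2), rng)
                     for i in range(k)])
    w = np.array([inv_gamma(1.0, 1.0 + 1.0 / tau2[i], rng) for i in range(k)])
    # global scale lam2 | rest and its auxiliary zeta | rest
    lam2 = inv_gamma((k + 1) / 2.0,
                     1.0 / zeta + 0.5 * np.sum(q ** 2 / tau2), rng)
    zeta = inv_gamma(1.0, 1.0 + 1.0 / lam2, rng)
    return tau2, lam2, w, zeta

# usage on dummy coefficients: one large, two near-zero
rng = np.random.default_rng(0)
q = np.array([0.8, -0.05, 0.01])
tau2, lam2, w, zeta = np.ones(3), 1.0, np.ones(3), 1.0
for _ in range(100):
    tau2, lam2, w, zeta = horseshoe_sweep(q, tau2, lam2, w, zeta, rng)
```

Because every full conditional is inverse Gamma, the whole horseshoe block adds negligible cost per MCMC iteration relative to the tree updates.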
B Additional Empirical Results
In this appendix, we plot the nowcasts against the realizations. Our model produces monthly nowcasts of GDP growth, which are converted into quarterly nowcasts to be comparable to the realization. To improve readability, we present results through 2019 and through 2020Q2 as separate graphs.
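The conversion from monthly to quarterly growth is standard in the mixed-frequency literature. As an illustration of what such a conversion can look like, we sketch below a hypothetical helper using the Mariano and Murasawa (2003) triangular-weight approximation; the paper's exact aggregation may differ:

```python
import numpy as np

def quarterly_growth(monthly_growth):
    """Approximate quarter-on-quarter growth at the end of a quarter from
    month-on-month growth rates, using the Mariano-Murasawa (2003) triangular
    weights (1/3, 2/3, 1, 2/3, 1/3) on the last five months. Illustrative
    only; not necessarily the authors' exact aggregation."""
    w = np.array([1 / 3, 2 / 3, 1.0, 2 / 3, 1 / 3])
    g = np.asarray(monthly_growth, dtype=float)
    if g.size < 5:
        raise ValueError("need at least five monthly growth rates")
    # weights are symmetric, so the ordering of the last five months is moot
    return float(w @ g[-5:])

# a flat 1% monthly growth path aggregates to roughly 3% quarterly growth
print(quarterly_growth([1.0] * 6))  # approximately 3.0 (weights sum to 3)
```

The triangular weights arise because a quarterly level is (approximately) the average of three monthly levels, so differencing it spreads each monthly growth rate across two adjacent quarters.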
Figure B.1: Predictive densities for Germany. (a) Until 2019; (b) Including the pandemic. [Panels: MF-BAVART and MF-VAR by month within the quarter (M1-M3); horizontal axis: Date; vertical axis: GDP growth.] Note: Columns are months per quarter in which the nowcast was produced. No data for the third month in the final quarter yet available. Realizations are marked as X's; the posterior median and 68 percent credible set are given.

Figure B.2: Predictive densities for Spain. (a) Until 2019; (b) Including the pandemic. [Same layout as Figure B.1.] Note: as for Figure B.1.

Figure B.3: Predictive densities for France. (a) Until 2019; (b) Including the pandemic. [Same layout as Figure B.1.] Note: as for Figure B.1.

Figure B.4: Predictive densities for Italy. (a) Until 2019; (b) Including the pandemic. [Same layout as Figure B.1.] Note: as for Figure B.1.