Publication


Featured research published by Jennifer L. Castle.


Journal of Time Series Econometrics | 2011

Evaluating Automatic Model Selection

Jennifer L. Castle; Jurgen A. Doornik; David F. Hendry

We outline a range of criteria for evaluating model selection approaches that have been used in the literature. Focusing on three key criteria, we evaluate automatically selecting the relevant variables in an econometric model from a large candidate set. General-to-specific selection is outlined for a regression model in orthogonal variables, where only one decision is required to select, irrespective of the number of regressors. Comparisons with an automated model selection algorithm, Autometrics (Doornik, 2009), show similar properties, but not restricted to orthogonal cases. Monte Carlo experiments examine the roles of post-selection bias corrections and diagnostic testing as well as evaluate selection in dynamic models by costs of search versus costs of inference.
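The single-decision property for orthogonal regressors can be sketched in a few lines: estimate the full model once, then retain every variable whose |t|-statistic exceeds the chosen critical value, since orthogonality makes the decisions decouple. A minimal illustration in Python (the function name and cut-off are ours, not part of Autometrics):

```python
import numpy as np

def gets_orthogonal_select(y, X, c=2.0):
    """One-cut general-to-specific selection with mutually orthogonal
    regressors: fit the full model once, then keep each variable whose
    |t|-statistic exceeds the critical value c. Hypothetical helper
    illustrating the single-decision property described in the abstract."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - k)
    # Orthogonal X => (X'X)^{-1} is diagonal, so standard errors decouple
    # and one critical value decides every retention at once.
    se = np.sqrt(sigma2 / np.sum(X**2, axis=0))
    t = beta / se
    return np.abs(t) > c, t
```

The key point the code makes concrete: with orthogonal regressors there is no search path, so the number of regressors does not multiply the number of decisions.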


National Institute Economic Review | 2009

Nowcasting Is Not Just Contemporaneous Forecasting

Jennifer L. Castle; Nicholas W.P. Fawcett; David F. Hendry

We consider the reasons for nowcasting, the timing of information and sources thereof, especially contemporaneous data, which introduce different aspects compared to forecasting. We allow for the impact of location shifts inducing nowcast failure and nowcasting during breaks, probably with measurement errors. We also apply a variant of the nowcasting strategy proposed in Castle and Hendry (2009) to nowcast Euro Area GDP growth. Models of disaggregate monthly indicators are built by automatic methods, forecasting all variables that are released with a publication lag each period, then testing for shifts in available measures including survey data, switching to robust forecasts of missing series when breaks are detected.


Archive | 2012

Automatic Selection for Non-linear Models

Jennifer L. Castle; David F. Hendry

Our strategy for automatic selection in potentially non-linear processes is: test for non-linearity in the unrestricted linear formulation; if that test rejects, specify a general model using polynomials, to be simplified to a minimal congruent representation; finally, select by encompassing tests of specific non-linear forms against the selected model. Non-linearity poses many problems: extreme observations leading to non-normal (fat-tailed) distributions; collinearity between non-linear functions; usually more variables than observations when approximating the non-linearity; and excess retention of irrelevant variables. Solutions are proposed for each. A returns-to-education empirical application demonstrates the feasibility of the non-linear automatic model selection algorithm Autometrics.
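The first step of the strategy — testing for non-linearity in the linear formulation before committing to a polynomial general model — can be illustrated with a joint F-test on added squares and cubes. This is a generic RESET-style sketch under our own naming, not the authors' specific statistic:

```python
import numpy as np

def nonlinearity_ftest(y, X):
    """Joint F-test of added polynomial terms: compare the restricted
    linear model against the model augmented with squares and cubes of
    each regressor. A rejection signals that a non-linear general model
    is worth specifying. Illustrative sketch only."""
    n, k = X.shape
    Z = np.column_stack([np.ones(n), X])       # restricted: linear model
    W = np.column_stack([Z, X**2, X**3])       # unrestricted: + polynomials
    rss = lambda A: np.sum((y - A @ np.linalg.lstsq(A, y, rcond=None)[0])**2)
    q = W.shape[1] - Z.shape[1]                # number of added terms
    F = ((rss(Z) - rss(W)) / q) / (rss(W) / (n - W.shape[1]))
    return F, q, n - W.shape[1]
```

A large F on a quadratic data-generating process, for example, would direct the strategy to the polynomial general model rather than the linear one.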


Oxford Bulletin of Economics and Statistics | 2013

Model Selection in Equations with Many ‘Small’ Effects

Jennifer L. Castle; Jurgen A. Doornik; David F. Hendry

High dimensional general unrestricted models (GUMs) may include important individual determinants, many small relevant effects, and irrelevant variables. Automatic model selection procedures can handle more candidate variables than observations, allowing substantial dimension reduction from GUMs with salient regressors, lags, nonlinear transformations, and multiple location shifts, together with all the principal components, possibly representing ‘factor’ structures, as perfect collinearity is also unproblematic. ‘Factors’ can capture small influences that selection may not retain individually. The final model can implicitly include more variables than observations, entering via ‘factors’. We simulate selection in several special cases to illustrate.
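The 'factor' device can be sketched as: append the leading principal components of the candidate set alongside the original variables before selection, so that a retained component can carry the combined influence of many small effects that selection would drop individually. A minimal sketch under our own naming assumptions:

```python
import numpy as np

def augment_with_factors(X, n_factors=2):
    """Append the leading principal components of X to the candidate set.
    Even if selection drops each small individual effect, a retained
    component can proxy their combined influence. Illustrative helper,
    not the authors' implementation."""
    Xc = X - X.mean(axis=0)
    # SVD gives components ordered by explained variance.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    F = Xc @ Vt[:n_factors].T          # leading principal components
    return np.column_stack([X, F])
```

Because the components are exact linear combinations of the original variables, this augmented set is perfectly collinear — which, as the abstract notes, the selection procedure must tolerate.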


Journal of Economic Surveys | 2013

Using Model Selection Algorithms to Obtain Reliable Coefficient Estimates

Jennifer L. Castle; Xiaochuan Qin; W. Robert Reed

This review surveys a number of common Model Selection Algorithms (MSAs), discusses how they relate to each other, and identifies factors that explain their relative performances. At the heart of MSA performance is the trade-off between Type I and Type II errors. Some relevant variables will be mistakenly excluded, and some irrelevant variables will be retained by chance. A successful MSA will find the optimal trade-off between the two types of errors for a given data environment. Whether a given MSA will be successful in a given environment depends on the relative costs of these two types of errors. We use Monte Carlo experimentation to illustrate these issues. We confirm that no MSA does best in all circumstances. Even the worst MSA in terms of overall performance – the strategy of including all candidate variables – sometimes performs best (viz., when all candidate variables are relevant). We also show how (i) the ratio of relevant to total candidate variables and (ii) DGP noise affect relative MSA performance. Finally, we discuss a number of issues complicating the task of MSAs in producing reliable coefficient estimates.
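The Type I / Type II trade-off at the heart of the survey can be made concrete with a toy Monte Carlo: a simple t-test-based MSA keeps any regressor with |t| above a critical value c, so tightening c retains fewer irrelevant variables but misses more relevant ones. A hedged sketch (parameters and names are ours, not from the paper):

```python
import numpy as np

def selection_error_rates(c, n=100, k_rel=3, k_irr=7, beta=0.3,
                          reps=200, seed=0):
    """Toy Monte Carlo for the Type I / Type II trade-off: simulate a DGP
    with k_rel relevant and k_irr irrelevant regressors, apply a one-pass
    |t| > c rule, and return (Type I rate, Type II rate). Illustrative
    only -- not any specific MSA from the survey."""
    rng = np.random.default_rng(seed)
    k = k_rel + k_irr
    b = np.r_[np.full(k_rel, beta), np.zeros(k_irr)]
    type1 = type2 = 0
    for _ in range(reps):
        X = rng.normal(size=(n, k))
        y = X @ b + rng.normal(size=n)
        bh, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ bh
        s2 = resid @ resid / (n - k)
        se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))
        keep = np.abs(bh / se) > c
        type1 += keep[k_rel:].sum()        # irrelevant retained
        type2 += (~keep[:k_rel]).sum()     # relevant missed
    return type1 / (reps * k_irr), type2 / (reps * k_rel)
```

Running this at c = 2 versus c = 3 shows the trade-off directly: the stricter rule lowers the Type I rate and raises the Type II rate, and which c is 'better' depends on the relative costs of the two errors in the given environment.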


Econometric Reviews | 2014

Misspecification Testing: Non-Invariance of Expectations Models of Inflation

Jennifer L. Castle; Jurgen A. Doornik; David F. Hendry; Ragnar Nymoen

Many economic models (such as the new-Keynesian Phillips curve, NKPC) include expected future values, often estimated after replacing the expected value by the actual future outcome, using Instrumental Variables (IV) or Generalized Method of Moments (GMM). Although crises, breaks, and regime shifts are relatively common, the underlying theory does not allow for their occurrence. We show the consequences for such models of breaks in data processes, and propose an impulse-indicator saturation test of such specifications, applied to USA and Euro-area NKPCs.
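Impulse-indicator saturation can be sketched in split-half form: add an impulse dummy for every observation, half of the sample at a time, and retain dummies whose t-statistics exceed a critical value — retained dummies flag outliers or shifts that the model cannot accommodate. A toy version under our own assumptions, not the authors' implementation:

```python
import numpy as np

def iis_split_half(y, X, c=2.5):
    """Split-half impulse-indicator saturation sketch: for each half of
    the sample, add an impulse dummy per observation in that half, fit by
    OLS on the full sample, and record dummies with |t| > c. Returns the
    indices of flagged observations. Illustrative only."""
    n = len(y)
    idx = np.arange(n)
    flagged = []
    for half in (idx < n // 2, idx >= n // 2):
        D = np.eye(n)[:, half]             # dummies for this half only
        A = np.column_stack([X, D])
        b, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ b
        s2 = resid @ resid / (n - A.shape[1])
        se = np.sqrt(s2 * np.diag(np.linalg.pinv(A.T @ A)))
        t = b / se
        kX = X.shape[1]
        flagged += list(np.flatnonzero(half)[np.abs(t[kX:]) > c])
    return sorted(flagged)
```

In the spirit of the abstract, a cluster of retained indicators around a known event would be evidence of a break that the expectations model's theory does not allow for.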


Archive | 2008

Chapter 2 Forecasting UK Inflation: The Roles of Structural Breaks and Time Disaggregation

Jennifer L. Castle; David F. Hendry

Structural models' inflation forecasts are often inferior to those of naive devices. This chapter theoretically and empirically assesses this for UK annual and quarterly inflation, using the theoretical framework in Clements and Hendry (1998, 1999). Forecasts from equilibrium-correction mechanisms, built by automatic model selection, are compared to various robust devices. Forecast-error taxonomies for aggregated and time-disaggregated information reveal that the impacts of structural breaks are identical between these, so no gain results, helping interpret the empirical findings. Forecast failures in structural models are driven by their deterministic terms, confirming location shifts as a pernicious cause thereof, and explaining the success of robust devices.


Journal of Econometrics | 2012

Model Selection when there are Multiple Breaks

Jennifer L. Castle; Jurgen A. Doornik; David F. Hendry


Archive | 2005

Building a Real-Time Database for GDP(E)

Colin Ellis; Jennifer L. Castle


Journal of Econometrics | 2010

Forecasting with Equilibrium-correction Models during Structural Breaks

Jennifer L. Castle; Nicholas W.P. Fawcett; David F. Hendry

Collaboration


Dive into Jennifer L. Castle's collaborations.


Colin Ellis

University of Birmingham
