Rianne Legerstee
Erasmus University Rotterdam
Publications
Featured research published by Rianne Legerstee.
Journal of Economic Surveys | 2009
Philip Hans Franses; Rianne Legerstee
This paper unifies two methodologies for multi-step forecasting from autoregressive time series models. The first is covered in most of the traditional time series literature: it uses short-horizon forecasts to compute longer-horizon forecasts, while the estimation method minimizes one-step-ahead forecast errors. The second methodology considers direct multi-step estimation and forecasting. In this paper, we show that both approaches are special (boundary) cases of a technique called partial least squares (PLS) when this technique is applied to an autoregression. We outline this methodology and show how it unifies the other two. We also illustrate the practical relevance of the resulting PLS autoregression for 17 quarterly, seasonally adjusted industrial production series. Our main finding is that both boundary models can be improved by including factors indicated by the PLS technique.
Journal of the Operational Research Society | 2011
Philip Hans Franses; Rianne Legerstee
Experts (managers) may have domain-specific knowledge that is not included in a statistical model and that can improve short-run and long-run forecasts of SKU-level sales data. While one-step-ahead forecasts address the conditional mean of the variable, model-based forecasts for longer horizons tend to converge to the unconditional mean of a time series variable. Analysing a large database of pharmaceutical sales forecasts for various products, adjusted by a range of experts, we examine whether the forecast horizon affects what experts do and how good they are once they adjust model-based forecasts. Using regression-based methods, we obtain five innovative results. First, forecasts at all horizons are subject to managerial intervention. Second, the horizon most relevant to the managers shows greater overweighting of the expert adjustment. Third, at all horizons the expert-adjusted forecasts are less accurate than pure model-based forecasts, with the least deterioration at distant horizons. Fourth, when expert-adjusted forecasts are significantly better, they are best at those distant horizons. Fifth, when expert adjustment is down-weighted, expert forecast accuracy increases.
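A regression-based check of this kind can be sketched with simulated data: regress realizations on the model forecast and the expert's add-on, and read the adjustment's coefficient as its ideal weight. The data-generating process and variable names below are hypothetical; the abstract does not give the exact specification used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Simulated setting (illustrative only): a model forecast, an expert
# adjustment added on top of it, and the eventual realization.
model_fc = rng.normal(100, 10, n)
adjustment = rng.normal(2, 5, n)  # expert add-on to the model forecast
realization = model_fc + 0.4 * adjustment + rng.normal(0, 4, n)

# Regress realizations on the model forecast and the expert adjustment;
# a coefficient on the adjustment below 1 suggests experts overweight
# their own input and that down-weighting would improve accuracy.
X = np.column_stack([np.ones(n), model_fc, adjustment])
beta, *_ = np.linalg.lstsq(X, realization, rcond=None)
print(round(beta[2], 2))  # close to 0.4 under this simulation
```

Here the adjustment truly carries only 40% of its nominal weight, so the fitted coefficient near 0.4 mirrors the paper's finding that expert adjustments tend to be overweighted.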
Journal of Forecasting | 2015
Rianne Legerstee; Philip Hans Franses
Forecasts from various experts are often used in macroeconomic forecasting models. Usually the focus is on the mean or median of the survey data. In the present study we adopt a different perspective and examine the predictive power of disagreement amongst forecasters. The premise is that this variable could signal upcoming structural or temporary changes in an economic process or in the predictive power of the survey forecasts. In our empirical work, we examine a variety of macroeconomic variables, and we use different measures of the degree of disagreement, together with measures of the location of the survey data and autoregressive components. Forecasts from simple linear models and from Markov regime-switching models with constant and with time-varying transition probabilities are constructed in real time and compared on forecast accuracy. We find that disagreement does indeed have predictive power and that this variable can improve forecasts when used in Markov regime-switching models.
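The basic ingredients can be sketched as follows: compute a disagreement measure (here the cross-sectional standard deviation across forecasters) and a location measure (the survey median), then use them alongside an autoregressive term in a simple linear predictive regression. This is a minimal sketch with simulated survey data; the disagreement measure and regression are assumptions standing in for the paper's richer set of specifications, and the Markov regime-switching extension is not shown.

```python
import numpy as np

rng = np.random.default_rng(3)
T, n_experts = 200, 12

# Simulated survey: each period, a panel of experts forecasts a macro
# variable; their cross-sectional spread is the disagreement measure.
forecasts = rng.normal(2.0, 0.5, (T, n_experts))
disagreement = forecasts.std(axis=1)      # one common disagreement measure
median_fc = np.median(forecasts, axis=1)  # location measure of the survey

# Simple linear benchmark: predict the target from its own lag, the
# survey median, and lagged disagreement.
target = 2.0 + rng.normal(0, 0.5, T)
X = np.column_stack([np.ones(T - 1), target[:-1],
                     median_fc[1:], disagreement[:-1]])
beta, *_ = np.linalg.lstsq(X, target[1:], rcond=None)
print(beta.shape)  # (4,)
```

In the paper, disagreement mainly earns its keep as a driver of time-varying transition probabilities in the regime-switching models rather than as a plain regressor.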
Report / Econometric Institute, Erasmus University Rotterdam | 2011
Philip Hans Franses; Rianne Legerstee; Richard Paap
We propose a new and simple methodology to estimate the loss function associated with experts’ forecasts. Under the assumption of conditional normality of the data and the forecast distribution, the asymmetry parameter of the lin–lin and linex loss function can easily be estimated using a linear regression. This regression also provides an estimate for potential systematic bias in the forecasts of the experts. The residuals of the regression are the input for a test for the validity of the normality assumption. We apply our approach to a large data set of SKU-level sales forecasts made by experts, and we compare the outcomes with those for statistical model-based forecasts of the same sales data. We find substantial evidence for asymmetry in the loss functions of the experts, with underprediction penalized more than overprediction.
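The core of the lin–lin case can be illustrated in a few lines: under conditional normality, a forecaster minimizing lin–lin loss with asymmetry parameter alpha issues the conditional mean shifted by sigma times the alpha-quantile of the standard normal, so alpha can be recovered from the standardized average forecast error. The simulation below is a simplified sketch of that idea, with a hypothetical alpha of 0.7 (underprediction penalized more), not the paper's full regression-based procedure.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n = 1000

# Simulated sales and expert forecasts that systematically overpredict,
# as an optimal lin-lin forecaster with alpha > 0.5 would.
sigma = 5.0
alpha_true = 0.7  # hypothetical asymmetry parameter
mean_sales = rng.normal(100, 10, n)
sales = mean_sales + rng.normal(0, sigma, n)
forecasts = mean_sales + sigma * norm.ppf(alpha_true)

# Under conditional normality the systematic bias equals
# sigma * Phi^{-1}(alpha), so alpha is recovered by standardizing the
# average error and inverting the normal CDF.
errors = forecasts - sales
alpha_hat = norm.cdf(errors.mean() / errors.std(ddof=1))
print(round(alpha_hat, 2))  # close to 0.7
```

The upward bias in the forecasts is thus not a mistake but the optimal response to an asymmetric loss, which is exactly the interpretation the paper gives to the experts' behaviour.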
Report / Econometric Institute, Erasmus University Rotterdam | 2011
Rianne Legerstee; Philip Hans Franses; Richard Paap
Experts can rely on statistical model forecasts when creating their own forecasts, but usually it is not known what experts actually do. In this paper we focus on three questions, which we try to answer given the availability of expert forecasts and model forecasts. First, is the expert forecast related to the model forecast, and how? Second, how is this potential relation influenced by other factors? Third, how does this relation influence forecast accuracy? We propose a new two-level Hierarchical Bayes model to answer these questions. We apply the proposed methodology to a large data set of forecasts and realizations of SKU-level sales data from a pharmaceutical company. We find that expert forecasts can depend on model forecasts in a variety of ways, and that average sales levels, sales volatility and the forecast horizon influence this dependence. We also demonstrate that theoretical implications of expert behavior on forecast accuracy are reflected in the empirical data.
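The two-level structure can be illustrated in miniature: at the first level each SKU gets its own slope of expert forecasts on model forecasts; at the second level those slopes are drawn from a common distribution, so estimates are shrunk toward the group mean. The moment-based shrinkage below (with a hypothetical shrinkage weight) only illustrates that structure; the paper fits a full Hierarchical Bayes model instead.

```python
import numpy as np

rng = np.random.default_rng(4)
n_sku, n_obs = 20, 30

# Simulated data: each SKU's expert forecasts load on the model forecasts
# with an SKU-specific slope drawn from a common (second-level) distribution.
true_slopes = rng.normal(0.8, 0.2, n_sku)
model_fc = rng.normal(100, 15, (n_sku, n_obs))
expert_fc = true_slopes[:, None] * model_fc + rng.normal(0, 10, (n_sku, n_obs))

# Level 1: per-SKU least-squares slope of expert on model forecasts.
ols = (model_fc * expert_fc).sum(axis=1) / (model_fc ** 2).sum(axis=1)

# Level 2: shrink the noisy per-SKU slopes toward their common mean.
weight = 0.7  # hypothetical shrinkage weight (a full HB model infers this)
shrunk = weight * ols + (1 - weight) * ols.mean()
print(shrunk.shape)  # (20,)
```

Shrinkage is what lets SKUs with few or noisy observations borrow strength from the rest of the portfolio, which is the practical appeal of the hierarchical approach for large SKU-level data sets.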
Applied Economics | 2017
Philip Hans Franses; Rianne Legerstee; Richard Paap
International Journal of Forecasting | 2009
Philip Hans Franses; Rianne Legerstee
Journal of Forecasting | 2009
Philip Hans Franses; Rianne Legerstee
Expert Systems With Applications | 2011
Philip Hans Franses; Rianne Legerstee
International Journal of Forecasting | 2013
Philip Hans Franses; Rianne Legerstee