Environment Systems and Decisions | 2021
Forecast of environment systems using expert judgements: performance comparison between the possibilistic and the classical model
Abstract
Expert judgment is widely used to inform forecasts (e.g., the 5th, 50th and 95th percentiles of some variable of interest) in a large variety of applications related to environment systems. This task can rely on Cooke's classical model (CM) within the probabilistic framework, and consists of combining expert information after a preliminary step in which experts are weighted using calibration and informativeness scores estimated from seed questions whose answers are known. In the literature, an alternative model (PM) has been proposed that processes the information supplied by experts within a different framework, namely possibility theory. In the present study, we assess whether both models perform similarly when the questions to be forecast differ from the seed questions used to determine the scores, i.e. from a forecasting viewpoint. Using an extensive out-of-sample validation procedure applied to 33 expert datasets, two aspects are investigated: (1) robustness to the set of calibration questions used to estimate the scores, i.e. whether the best and worst performing experts differ; (2) forecast performance, i.e. the accuracy and informativeness of the derived forecast intervals. Regarding (1), the validation procedure shows that PM is less sensitive. Regarding (2), PM achieves greater accuracy but lower informativeness when the averaging operator is used. Interestingly, the differences from CM remain of moderate magnitude for the considered cases, despite the conceptual dissimilarities of the two models and their disagreement on the selection of the best performing expert.
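The calibration and informativeness scores at the heart of Cooke's classical model can be sketched as follows. This is a minimal illustration only, assuming three-quantile (5th/50th/95th) assessments and a uniform background measure on a known intrinsic range; function names, variable names, and the exact normalizations are our own choices, not necessarily those used in the paper.

```python
# Hedged sketch of the scoring step in Cooke's classical model (CM).
# Experts give (q05, q50, q95) for N seed questions with known realizations.
import numpy as np
from scipy.stats import chi2

# Theoretical probability mass in each inter-quantile bin:
# below q05, q05-q50, q50-q95, above q95.
P_THEORY = np.array([0.05, 0.45, 0.45, 0.05])

def calibration(quantiles, realizations):
    """Calibration score: p-value of the likelihood-ratio statistic
    2*N*KL(empirical bin frequencies || theoretical bin masses)."""
    n = len(realizations)
    bins = np.zeros(4)
    for q, x in zip(quantiles, realizations):
        bins[np.searchsorted(q, x)] += 1  # which inter-quantile bin x hits
    s = bins / n
    kl = sum(si * np.log(si / pi) for si, pi in zip(s, P_THEORY) if si > 0)
    return float(1.0 - chi2.cdf(2 * n * kl, df=3))

def informativeness(quantiles, lo, hi):
    """Mean KL divergence of the expert's implied piecewise-uniform density
    against a uniform background on the intrinsic range [lo, hi]."""
    scores = []
    for q in quantiles:
        widths = np.diff(np.array([lo, q[0], q[1], q[2], hi]))
        scores.append(np.sum(P_THEORY * np.log(P_THEORY * (hi - lo) / widths)))
    return float(np.mean(scores))
```

In CM, an expert's unnormalized weight is then the product of the two scores (with calibration set to zero below a cut-off level), and the resulting decision-maker distribution is the weighted mixture of the experts' distributions.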