Publication


Featured research published by Stefan Siegert.


Journal of Climate | 2016

A Bayesian Framework for Verification and Recalibration of Ensemble Forecasts: How Uncertain is NAO Predictability?

Stefan Siegert; David B. Stephenson; Philip G. Sansom; Adam A. Scaife; Rosie Eade; Alberto Arribas

Predictability estimates of ensemble prediction systems are uncertain due to limited numbers of past forecasts and observations. To account for such uncertainty, this paper proposes a Bayesian inferential framework that provides a simple 6-parameter representation of ensemble forecasting systems and the corresponding observations. The framework is probabilistic, and thus allows for quantifying uncertainty in predictability measures such as correlation skill and signal-to-noise ratios. It also provides a natural way to produce recalibrated probabilistic predictions from uncalibrated ensemble forecasts. The framework is used to address important questions concerning the skill of winter hindcasts of the North Atlantic Oscillation for 1992-2011 issued by the Met Office GloSea5 climate prediction system. Although there is much uncertainty in the correlation between ensemble mean and observations, there is strong evidence of skill: the 95% credible interval of the correlation coefficient, [0.19, 0.68], does not overlap zero. There is also strong evidence that the forecasts are not exchangeable with the observations: with over 99% certainty, the signal-to-noise ratio of the forecasts is smaller than the signal-to-noise ratio of the observations, which suggests that raw forecasts should not be taken as representative scenarios of the observations. Forecast recalibration is thus required, which can be coherently addressed within the proposed framework.
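As a rough illustration of why such predictability estimates are uncertain, the minimal signal-plus-noise simulation below (a sketch, not the paper's Bayesian framework; the hindcast length, ensemble size and correlation value are assumptions for illustration only) shows how widely the sample correlation between ensemble mean and observations can vary across 20-year hindcasts:

```python
# Sketch: simulate many 20-year hindcasts from a simple signal-plus-noise model
# and look at the spread of the sample correlation between ensemble mean and obs.
# All numbers are illustrative assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)
n_years, n_members = 20, 24      # assumed hindcast length and ensemble size
rho_true = 0.5                   # assumed "true" correlation skill

corrs = []
for _ in range(2000):
    signal = rng.normal(size=n_years)                                  # shared predictable signal
    obs = rho_true * signal + np.sqrt(1 - rho_true**2) * rng.normal(size=n_years)
    ens = signal[:, None] + rng.normal(size=(n_years, n_members))      # members = signal + noise
    ens_mean = ens.mean(axis=1)
    corrs.append(np.corrcoef(ens_mean, obs)[0, 1])

print("5th-95th percentile of the sample correlation:",
      np.percentile(corrs, [5, 95]).round(2))
```

With only 20 verification years, the sample correlation scatters substantially around its true value, which is the sampling uncertainty the Bayesian framework is designed to quantify.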


Monthly Weather Review | 2017

Detecting Improvements in Forecast Correlation Skill: Statistical Testing and Power Analysis

Stefan Siegert; Omar Bellprat; Martin Ménégoz; David B. Stephenson; Francisco J. Doblas-Reyes

The skill of weather and climate forecast systems is often assessed by calculating the correlation coefficient between past forecasts and their verifying observations. Improvements in forecast skill can thus be quantified by correlation differences. The uncertainty in the correlation difference needs to be assessed to judge whether the observed difference constitutes a genuine improvement, or is compatible with random sampling variations. A widely used statistical test for correlation difference is known to be unsuitable, because it assumes that the competing forecasting systems are independent. In this paper, appropriate statistical methods are reviewed to assess correlation differences when the competing forecasting systems are strongly correlated with one another. The methods are used to compare correlation skill between seasonal temperature forecasts that differ in initialization scheme and model resolution. A simple power analysis framework is proposed to estimate the probability of correctly...
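One generic way to assess a correlation difference while respecting the dependence between the two competing systems is a paired bootstrap over verification years. The sketch below is an illustration of that idea, not necessarily one of the specific tests reviewed in the paper; the function name and toy data are made up.

```python
# Sketch: paired bootstrap for the difference in correlation skill between two
# forecast systems verified against the same observations. Resampling whole
# years keeps the dependence between the two systems intact.
import numpy as np

def corr_diff_bootstrap(fcst_a, fcst_b, obs, n_boot=10000, seed=0):
    """Bootstrap distribution of corr(fcst_a, obs) - corr(fcst_b, obs)."""
    rng = np.random.default_rng(seed)
    n = len(obs)
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)           # resample years with replacement
        ca = np.corrcoef(fcst_a[idx], obs[idx])[0, 1]
        cb = np.corrcoef(fcst_b[idx], obs[idx])[0, 1]
        diffs[b] = ca - cb
    return diffs

# Toy usage with made-up data: two correlated forecast systems, 30 years.
rng = np.random.default_rng(1)
obs = rng.normal(size=30)
fcst_a = 0.6 * obs + 0.8 * rng.normal(size=30)
fcst_b = 0.5 * obs + 0.9 * rng.normal(size=30)
diffs = corr_diff_bootstrap(fcst_a, fcst_b, obs)
print("95% bootstrap interval for the correlation difference:",
      np.percentile(diffs, [2.5, 97.5]).round(2))
```

If the interval comfortably contains zero, the apparent skill difference is compatible with sampling variation, which is exactly the judgment the paper's tests formalize.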


Monthly Weather Review | 2012

Rank Histograms of Stratified Monte Carlo Ensembles

Stefan Siegert; Jochen Bröcker; Holger Kantz

The application of forecast ensembles to probabilistic weather prediction has spurred considerable interest in their evaluation. Such ensembles are commonly interpreted as Monte Carlo ensembles meaning that the ensemble members are perceived as random draws from a distribution. Under this interpretation, a reasonable property to ask for is statistical consistency, which demands that the ensemble members and the verification behave like draws from the same distribution. A widely used technique to assess statistical consistency of a historical dataset is the rank histogram, which uses as a criterion the number of times that the verification falls between pairs of members of the ordered ensemble. Ensemble evaluation is rendered more specific by stratification, which means that ensembles that satisfy a certain condition (e.g., a certain meteorological regime) are evaluated separately. Fundamental relationships between Monte Carlo ensembles, their rank histograms, and random sampling from the probability simplex according to the Dirichlet distribution are pointed out. Furthermore, the possible benefits and complications of ensemble stratification are discussed. The main conclusion is that a stratified Monte Carlo ensemble might appear inconsistent with the verification even though the original (unstratified) ensemble is consistent. The apparent inconsistency is merely a result of stratification. Stratified rank histograms are thus not necessarily flat. This result is demonstrated by perfect ensemble simulations and supplemented by mathematical arguments. Possible methods to avoid or remove artifacts that stratification induces in the rank histogram are suggested.
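For concreteness, a minimal sketch of how a rank histogram is computed, together with the "perfect ensemble" case in which members and verification are drawn from the same distribution (the function name and toy data are illustrative, not taken from the paper):

```python
# Sketch: rank histogram of an ensemble forecast archive. The rank of the
# verification is the number of ensemble members falling below it; for a
# statistically consistent ensemble the histogram should be roughly flat.
import numpy as np

def rank_histogram(ensembles, verifications):
    """ensembles: (n_cases, n_members); verifications: (n_cases,).
    Returns counts of the verification's rank among the ordered members."""
    n_members = ensembles.shape[1]
    ranks = (ensembles < verifications[:, None]).sum(axis=1)
    return np.bincount(ranks, minlength=n_members + 1)

# Perfect-ensemble simulation: members and verification from the same distribution.
rng = np.random.default_rng(0)
n_cases, n_members = 5000, 10
ens = rng.normal(size=(n_cases, n_members))
ver = rng.normal(size=n_cases)
print(rank_histogram(ens, ver))
```

Stratifying such an archive by a condition that depends on the ensemble itself can make the per-stratum histograms non-flat even for this perfect ensemble, which is the artifact the paper analyzes.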


Quarterly Journal of the Royal Meteorological Society | 2016

Parameter uncertainty in forecast recalibration

Stefan Siegert; Philip G. Sansom; Robin M. Williams

Stefan Siegert was supported by the European Union Programme FP7/2007–2013 under grant agreement 3038378 (SPECS). Philip Sansom was supported by a grant from the National Oceanic and Atmospheric Administration (NOAA) NA12OAR4310086.


Monthly Weather Review | 2011

Comments on “Conditional Exceedance Probabilities”

Jochen Bröcker; Stefan Siegert; Holger Kantz

In a recent paper, Mason et al. propose a reliability test of ensemble forecasts for a continuous, scalar verification. As noted in the paper, the test relies on a very specific interpretation of ensembles, namely, that the ensemble members represent quantiles of some underlying distribution. This quantile interpretation is not the only interpretation of ensembles, another popular one being the Monte Carlo interpretation. Mason et al. suggest estimating the quantiles in this situation; however, this approach is fundamentally flawed. Errors in the quantile estimates are not independent of the exceedance events, and consequently the conditional exceedance probabilities (CEP) curves are not constant, which is a fundamental assumption of the test. The test would reject reliable forecasts with probability much higher than the test size.


Archive | 2016

Prediction of Complex Dynamics: Who Cares About Chaos?

Stefan Siegert; Holger Kantz

We compile knowledge on limitations to the prediction of the time evolution of complex systems. Although such systems are typically highly chaotic, the inverse of the maximal Lyapunov exponent, the Lyapunov time, is not the time scale beyond which predictions fail. Instead, as the example of weather forecasting will show, predictions can be successful on lead times that are several orders of magnitude longer. We analyze the reasons which prevent errors from growing exponentially fast with a rate related to the maximal Lyapunov exponent. Moreover, we advocate that standard practices from weather forecasting should be transferred to other fields of complex-systems prediction; these include providing a statement about the uncertainty of the actual prediction and a performance measure on past predictions, so that a decision maker can assess the potential quality of a forecasting scheme.
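For reference, the quantities mentioned above relate as follows (standard definitions, not quoted from the chapter):

```latex
% An infinitesimal error in a chaotic system grows on average exponentially,
% and the Lyapunov time is the inverse of the maximal Lyapunov exponent.
\[
  \delta(t) \;\approx\; \delta(0)\, e^{\lambda_{\max} t},
  \qquad
  T_{\lambda} \;=\; \frac{1}{\lambda_{\max}} .
\]
```

The chapter's point is that useful predictions can extend far beyond the Lyapunov time $T_{\lambda}$.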


Quarterly Journal of the Royal Meteorological Society | 2014

Variance estimation for Brier Score decomposition

Stefan Siegert

The Brier Score is a widely used criterion to assess the quality of probabilistic predictions of binary events. The expectation value of the Brier Score can be decomposed into the sum of three components called reliability, resolution and uncertainty, which characterize different forecast attributes. Given a dataset of forecast probabilities and corresponding binary verifications, these three components can be estimated empirically. Here, propagation of uncertainty is used to derive expressions that approximate the sampling variances of the estimated components. Variance estimates are provided both for the traditional estimators and for refined estimators that include a bias correction. Applications of the derived variance estimates to artificial data illustrate their validity, and an application to a meteorological prediction problem illustrates a possible use case. The observed increase of variance of the bias-corrected estimators is discussed.
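For orientation, a sketch of the textbook empirical decomposition for forecasts that take a finite set of probability values (these are the standard estimators, not the bias-corrected variants discussed in the paper; the toy data are made up):

```python
# Sketch: empirical decomposition of the Brier score into reliability (REL),
# resolution (RES) and uncertainty (UNC), so that BS = REL - RES + UNC.
import numpy as np

def brier_decomposition(p, y):
    """p: forecast probabilities in [0, 1]; y: binary outcomes (0/1)."""
    p, y = np.asarray(p, float), np.asarray(y, float)
    n, ybar = len(y), y.mean()
    rel = res = 0.0
    for pk in np.unique(p):                       # loop over distinct forecast values
        mask = p == pk
        nk, ok = mask.sum(), y[mask].mean()       # count and conditional event frequency
        rel += nk * (pk - ok) ** 2
        res += nk * (ok - ybar) ** 2
    unc = ybar * (1 - ybar)
    return rel / n, res / n, unc

# Toy check: REL - RES + UNC should equal the plain Brier score.
p = np.array([0.1, 0.1, 0.5, 0.5, 0.9, 0.9])
y = np.array([0,   0,   1,   0,   1,   1  ])
rel, res, unc = brier_decomposition(p, y)
print(rel, res, unc, rel - res + unc, np.mean((p - y) ** 2))
```

The paper's contribution is to attach sampling variances to these estimated components via propagation of uncertainty.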


Physical Review E | 2014

Improved predictions of rare events using the Crooks fluctuation theorem.

Julia Gundermann; Stefan Siegert; Holger Kantz


Quarterly Journal of the Royal Meteorological Society | 2011

Predicting outliers in ensemble forecasts

Stefan Siegert; Jochen Bröcker; Holger Kantz


Remote Sensing of Environment | 2017

Uncertainty propagation in observational references to climate model scales

Omar Bellprat; François Massonnet; Stefan Siegert; Chloé Prodhomme; Daniel Macias-Gómez; Virginie Guemas; Francisco J. Doblas-Reyes

Collaboration


Dive into Stefan Siegert's collaborations.

Top Co-Authors

Francisco J. Doblas-Reyes
European Centre for Medium-Range Weather Forecasts

Omar Bellprat
Barcelona Supercomputing Center

François Massonnet
Université catholique de Louvain