
Evaluating temperature extremes in CMIP6 simulations using statistically proper evaluation methods


Abstract


Reliable projections of extremes in near-surface air temperature (SAT) by climate models are becoming increasingly important as global warming leads to significant increases in the hottest days and decreases in the coldest nights around the world, with considerable impacts on sectors such as agriculture, health and tourism.

Climate model evaluation has traditionally been performed by comparing summary statistics derived from simulated model output with corresponding observed quantities, using, for instance, the root mean squared error (RMSE) or the mean bias, as in the model evaluation chapter of the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC AR5). Both the RMSE and the mean bias compare averages over time and/or space, ignoring the variability, or the uncertainty, in the underlying values. Particularly when evaluating climate extremes, climate models should instead be assessed by comparing the probability distribution of model output to the corresponding distribution of observed data.

To address this shortcoming, we use the integrated quadratic distance (IQD) to compare distributions of simulated indices to the corresponding distributions from a data product. The IQD is the divergence associated with the proper continuous ranked probability score (CRPS) and thus fulfills essential decision-theoretic properties for ranking competing models and testing equality in performance, while also assessing the full distribution.

The IQD is applied to evaluate CMIP5 and CMIP6 simulations of monthly maximum (TXx) and minimum (TNn) near-surface air temperature over the data-dense regions of Europe and North America against both observational and reanalysis datasets. There is no notable difference between the CMIP5 and CMIP6 model generations when the simulations are compared against the observational dataset HadEX2. However, with a few exceptions, the CMIP6 models agree better with the ERA5 reanalysis than the CMIP5 models do. Overall, the climate models show higher skill when compared against ERA5 than against HadEX2. While the model rankings vary with region, season and index, the model evaluation is robust against changes in the grid resolution considered in the analysis.
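As background on the metric (not part of the original abstract): for two cumulative distribution functions F and G, the IQD is commonly defined as IQD(F, G) = ∫ (F(x) − G(x))² dx, which is the divergence associated with the CRPS. Below is a minimal sketch of how the IQD between two finite samples might be approximated numerically; the function names, grid choice and synthetic TXx-like data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def empirical_cdf(samples, grid):
    """Evaluate the empirical CDF of `samples` at each point of `grid`."""
    samples = np.sort(np.asarray(samples))
    return np.searchsorted(samples, grid, side="right") / samples.size

def iqd(samples_f, samples_g, n_grid=2000):
    """Approximate the integrated quadratic distance between the empirical
    distributions of two samples on a common evaluation grid."""
    lo = min(np.min(samples_f), np.min(samples_g))
    hi = max(np.max(samples_f), np.max(samples_g))
    grid = np.linspace(lo, hi, n_grid)
    diff = empirical_cdf(samples_f, grid) - empirical_cdf(samples_g, grid)
    return np.sum(diff**2) * (grid[1] - grid[0])  # rectangle-rule integral

# Illustration with synthetic, hypothetical TXx-like samples (not real data)
rng = np.random.default_rng(42)
model_txx = rng.normal(30.0, 2.0, size=500)  # "simulated" monthly TXx in degC
obs_txx = rng.normal(29.5, 1.8, size=500)    # "observed" monthly TXx in degC
print(f"IQD(model, obs) = {iqd(model_txx, obs_txx):.4f}")
```

A smaller IQD indicates closer agreement between the two distributions; a value of zero is attained only when the distributions coincide.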

DOI 10.5194/egusphere-egu21-9722
Language English
