Gabi Hegerl
University of Edinburgh
Publications
Featured research published by Gabi Hegerl.
Journal of Climate | 2008
Reto Knutti; Myles R. Allen; Pierre Friedlingstein; Jonathan M. Gregory; Gabi Hegerl; Gerald A. Meehl; Malte Meinshausen; James M. Murphy; S. C. B. Raper; Thomas F. Stocker; Peter A. Stott; Haiyan Teng; T. M. L. Wigley
Quantification of the uncertainties in future climate projections is crucial for the implementation of climate policies. Here a review of projections of global temperature change over the twenty-first century is provided for the six illustrative emission scenarios from the Special Report on Emissions Scenarios (SRES) that assume no policy intervention, based on the latest generation of coupled general circulation models, climate models of intermediate complexity, and simple models, and uncertainty ranges and probabilistic projections from various published methods and models are assessed. Despite substantial improvements in climate models, projections for given scenarios on average have not changed much in recent years. Recent progress has, however, increased the confidence in uncertainty estimates and now allows a better separation of the uncertainties introduced by scenarios, physical feedbacks, carbon cycle, and structural uncertainty. Projection uncertainties are now constrained by observations and therefore consistent with past observed trends and patterns. Future trends in global temperature resulting from anthropogenic forcing over the next few decades are found to be comparably well constrained. Uncertainties for projections on the century time scale, when accounting for structural and feedback uncertainties, are larger than captured in single models or methods. This is due to differences in the models, the sources of uncertainty taken into account, the type of observational constraints used, and the statistical assumptions made. It is shown that as an approximation, the relative uncertainty range for projected warming in 2100 is the same for all scenarios. Inclusion of uncertainties in carbon cycle–climate feedbacks extends the upper bound of the uncertainty range by more than the lower bound.
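The stated approximation of a scenario-independent relative uncertainty range lends itself to a quick worked example. The sketch below applies one fractional range to different best-estimate warmings; the -40%/+60% fractions and the best estimates are invented for illustration and are not taken from the paper.

```python
# Tiny worked example of the "same relative uncertainty range" approximation:
# one fractional range applied to different best-estimate warmings.
# The -40%/+60% fractions and best estimates are assumptions, not paper values.
for scenario, best in [("low-emission", 2.0), ("mid", 3.0), ("high", 4.5)]:
    lo, hi = best * (1 - 0.40), best * (1 + 0.60)
    print(f"{scenario:13s} best {best:.1f} K -> range {lo:.1f}-{hi:.1f} K")
```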
Climate Dynamics | 2013
Lisa M. Goddard; Arun Kumar; Amy Solomon; D. Smith; G. J. Boer; Paula Leticia Manuela Gonzalez; Viatcheslav V. Kharin; William J. Merryfield; Clara Deser; Simon J. Mason; Ben P. Kirtman; Rym Msadek; Rowan Sutton; Ed Hawkins; Thomas E. Fricker; Gabi Hegerl; Christopher A. T. Ferro; David B. Stephenson; Gerald A. Meehl; Timothy N. Stockdale; Robert J. Burgman; Arthur M. Greene; Yochanan Kushnir; Matthew Newman; James A. Carton; Ichiro Fukumori; Thomas L. Delworth
Decadal predictions have a high profile in the climate science community and beyond, yet very little is known about their skill. Nor is there any agreed protocol for estimating their skill. This paper proposes a sound and coordinated framework for verification of decadal hindcast experiments. The framework is illustrated for decadal hindcasts tailored to meet the requirements and specifications of CMIP5 (Coupled Model Intercomparison Project phase 5). The chosen metrics address key questions about the information content in initialized decadal hindcasts. These questions are: (1) Do the initial conditions in the hindcasts lead to more accurate predictions of the climate, compared to un-initialized climate change projections? and (2) Is the prediction model’s ensemble spread an appropriate representation of forecast uncertainty on average? The first question is addressed through deterministic metrics that compare the initialized and uninitialized hindcasts. The second question is addressed through a probabilistic metric applied to the initialized hindcasts and comparing different ways to ascribe forecast uncertainty. Verification is advocated at smoothed regional scales that can illuminate broad areas of predictability, as well as at the grid scale, since many users of the decadal prediction experiments who feed the climate data into applications or decision models will use the data at grid scale, or downscale it to even higher resolution. An overall statement on skill of CMIP5 decadal hindcasts is not the aim of this paper. The results presented are only illustrative of the framework, which would enable such studies. However, broad conclusions that are beginning to emerge from the CMIP5 results include (1) Most predictability at the interannual-to-decadal scale, relative to climatological averages, comes from external forcing, particularly for temperature; (2) though moderate, additional skill is added by the initial conditions over what is imparted by external forcing alone; however, the impact of initialization may result in overall worse predictions in some regions than provided by uninitialized climate change projections; (3) limited hindcast records and the dearth of climate-quality observational data impede our ability to quantify expected skill as well as model biases; and (4) as is common to seasonal-to-interannual model predictions, the spread of the ensemble members is not necessarily a good representation of forecast uncertainty. The authors recommend that this framework be adopted to serve as a starting point to compare prediction quality across prediction systems. The framework can provide a baseline against which future improvements can be quantified. The framework also provides guidance on the use of these model predictions, which differ in fundamental ways from the climate change projections that much of the community has become familiar with, including adjustment of mean and conditional biases, and consideration of how to best approach forecast uncertainty.
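As a rough illustration of question (1), the following sketch compares initialized hindcasts against uninitialized projections with a mean-squared-error skill score on synthetic data. This is not the paper's verification code; the function name, array shapes, and the choice of metric are assumptions for the sketch.

```python
# Minimal sketch (not the paper's verification framework): a deterministic
# mean-squared-error skill score of initialized hindcasts relative to an
# uninitialized reference, evaluated against observations. Shapes assumed.
import numpy as np

def msss(hindcast, reference, obs):
    """MSE skill score of `hindcast` relative to `reference`.

    Positive values mean the initialized hindcast beats the reference
    (e.g. an uninitialized projection); 1 is a perfect forecast.
    All inputs share shape (n_start_dates, ...) over common verification times.
    """
    mse_h = np.mean((hindcast - obs) ** 2, axis=0)
    mse_r = np.mean((reference - obs) ** 2, axis=0)
    return 1.0 - mse_h / mse_r

# Toy usage: 20 start dates on a 5x5 grid, all data synthetic.
rng = np.random.default_rng(0)
obs = rng.normal(size=(20, 5, 5))
init = obs + 0.5 * rng.normal(size=obs.shape)   # hindcast with some skill
uninit = rng.normal(size=obs.shape)             # no initial-condition information
print(np.round(msss(init, uninit, obs), 2))     # mostly positive values
```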
Climate Dynamics | 1994
Ulrich Cubasch; Benjamin D. Santer; A. Hellbach; Gabi Hegerl; Heinke Höck; Ernst Maier-Reimer; Uwe Mikolajewicz; Achim Stössel; Reinhard Voss
Four time-dependent greenhouse warming experiments were performed with the same global coupled atmosphere-ocean model, but with each simulation using initial conditions from different “snapshots” of the control run climate. The radiative forcing — the increase in equivalent CO2 concentrations from 1985–2035 specified in the Intergovernmental Panel on Climate Change (IPCC) scenario A — was identical in all four 50-year integrations. This approach to climate change experiments is called the Monte Carlo technique and is analogous to a similar experimental set-up used in the field of extended range weather forecasting. Despite the limitation of a very small sample size, this approach enables the estimation of both a mean response and the “between-experiment” variability, information which is not available from a single integration. The use of multiple realizations provides insights into the stability of the response spatially, seasonally, and in terms of different climate variables. The results indicate that the time evolution of the global mean warming signal is strongly dependent on the initial state of the climate system. While the individual members of the ensemble show considerable variation in the pattern and amplitude of near-surface temperature change after 50 years, the ensemble mean climate change pattern closely resembles that obtained in a 100-year integration performed with the same model. In global mean terms, the climate change signals for near-surface temperature, the hydrological cycle and sea level significantly exceed the variability among the members of the ensemble. Due to the high internal variability of the modelled climate system, the estimated detection time of the global mean temperature change signal is uncertain by at least one decade. While the ensemble mean surface temperature and sea level fields show regionally significant responses to greenhouse-gas forcing, it is not possible to identify a significant response in the precipitation and soil moisture fields, variables which are spatially noisy and characterized by large variability between the individual integrations.
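The "Monte Carlo" design described here, several integrations under identical forcing but different initial states, reduces to a simple calculation of an ensemble mean and a between-experiment spread. A minimal sketch with synthetic numbers (all values invented):

```python
# Hedged sketch of the Monte Carlo ensemble idea: identical forcing,
# different initial states, giving a mean response plus a between-experiment
# spread. The warming ramp and noise level are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_members, n_years = 4, 50
forced_signal = np.linspace(0.0, 1.5, n_years)              # assumed ramp (K)
internal = 0.3 * rng.standard_normal((n_members, n_years))  # initial-state noise
ensemble = forced_signal + internal

ens_mean = ensemble.mean(axis=0)           # best estimate of the forced response
ens_spread = ensemble.std(axis=0, ddof=1)  # "between-experiment" variability
print(f"year-50 signal: {ens_mean[-1]:.2f} K, spread: {ens_spread[-1]:.2f} K")
```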
Environmental Research Letters | 2016
Jürg Luterbacher; Johannes P. Werner; Jason E. Smerdon; Laura Fernández-Donado; Fidel González-Rouco; David Barriopedro; Fredrik Charpentier Ljungqvist; Ulf Büntgen; E. Zorita; S. Wagner; Jan Esper; Danny McCarroll; Andrea Toreti; David Frank; Johann H. Jungclaus; Mariano Barriendos; Chiara Bertolin; Oliver Bothe; Rudolf Brázdil; Dario Camuffo; Petr Dobrovolný; Mary Gagen; E. García-Bustamante; Quansheng Ge; Juan J. Gomez-Navarro; Joël Guiot; Zhixin Hao; Gabi Hegerl; Karin Holmgren; V.V. Klimenko
The spatial context is critical when assessing present-day climate anomalies, attributing them to potential forcings and making statements regarding their frequency and severity in a long-term perspective. Recent international initiatives have expanded the number of high-quality proxy records and developed new statistical reconstruction methods. These advances allow more rigorous regional past temperature reconstructions and, in turn, the possibility of evaluating climate models on policy-relevant, spatiotemporal scales. Here we provide a new proxy-based, annually-resolved, spatial reconstruction of the European summer (June-August) temperature fields back to 755 CE based on Bayesian hierarchical modelling (BHM), together with estimates of the European mean temperature variation since 138 BCE based on BHM and composite-plus-scaling (CPS). Our reconstructions compare well with independent instrumental and proxy-based temperature estimates, but suggest a larger amplitude in summer temperature variability than previously reported. Both CPS and BHM reconstructions indicate that the mean 20th century European summer temperature was not significantly different from some earlier centuries, including the 1st, 2nd, 8th and 10th centuries CE. The 1st century (in BHM also the 10th century) may even have been slightly warmer than the 20th century, but the difference is not statistically significant. Comparing each 50 yr period with the 1951-2000 period reveals a similar pattern. Recent summers, however, have been unusually warm in the context of the last two millennia and there are no 30 yr periods in either reconstruction that exceed the average European summer temperature of the last 3 decades (1986-2015 CE). A comparison with an ensemble of climate model simulations suggests that the reconstructed European summer temperature variability over the period 850-2000 CE reflects changes in both internal variability and external forcing on multi-decadal time-scales. For pan-European temperatures we find slightly better agreement between the reconstruction and the model simulations with high-end estimates for total solar irradiance. Temperature differences between the medieval period, the recent period and the Little Ice Age are larger in the reconstructions than in the simulations. This may indicate inflated variability in the reconstructions, a lack of sensitivity of the simulated European climate to changes in external forcing, and/or an underestimation of internal variability on centennial and longer time scales.
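Of the two reconstruction methods named above, composite-plus-scaling (CPS) is simple enough to sketch: standardize the proxy records, average them into a composite, and rescale the composite to match the instrumental target over their overlap. The version below uses synthetic data and omits the proxy screening, weighting, and uncertainty quantification of a real reconstruction; the BHM approach is substantially more involved and is not shown.

```python
# Illustrative composite-plus-scaling (CPS) on synthetic data; not the
# paper's implementation. Proxies are standardized, composited, and the
# composite is rescaled to the instrumental target over the overlap period.
import numpy as np

def cps(proxies, target, overlap):
    """proxies: (n_proxy, n_time); target: (n_time,), NaN outside `overlap`."""
    z = (proxies - proxies.mean(axis=1, keepdims=True)) \
        / proxies.std(axis=1, keepdims=True)
    composite = z.mean(axis=0)
    # Match the composite's mean and variance to the target over the overlap.
    c, t = composite[overlap], target[overlap]
    return (composite - c.mean()) / c.std() * t.std() + t.mean()

rng = np.random.default_rng(2)
truth = np.sin(np.linspace(0, 12, 500))                  # synthetic "truth"
proxies = truth + 0.8 * rng.standard_normal((6, 500))    # noisy proxy records
target = np.where(np.arange(500) >= 400, truth, np.nan)  # instruments at the end
recon = cps(proxies, target, overlap=np.arange(400, 500))
print(np.corrcoef(recon, truth)[0, 1])                   # skill vs. synthetic truth
```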
Geophysical Research Letters | 2002
Nathan P. Gillett; Francis W. Zwiers; Andrew J. Weaver; Gabi Hegerl; Myles R. Allen; Peter A. Stott
The IPCC Third Assessment Report [Mitchell et al., 2001] describes the application of “optimal fingerprinting” techniques [Hasselmann, 1997] to the detection of a combined greenhouse gas plus sulphate aerosol (GS) response over the past 50 years. The response patterns simulated by seven climate models were found to be detectable in observations of surface temperature; the amplitudes of these patterns in the observations were found to be inconsistent with zero. However, there were considerable differences in the magnitude of the response between the models, with some simulating a response consistent with that observed, while others predicted a response significantly larger than that observed. When these techniques were used to estimate the amplitudes of the greenhouse gas (G) and sulphate aerosol (S) response patterns separately, these inter-model differences became larger, with simultaneous detection of G and S possible with some models, but not with others. These results from multiple models are synthesized only qualitatively by Mitchell et al. [2001]. Here we suggest a method for doing so more quantitatively. Lambert and Boer [2001] compared coupled model climatologies of surface air temperature with observations using the CMIP ensemble. They found that overall the mean climatology of the models matched the observations better than that of any individual model. Similarly in seasonal forecasting, Krishnamurti et al. [1999] and Kharin and Zwiers [2002] argued that a weighted sum of multiple model predictions performs better than predictions using an individual model. These results might be explained if each model has independent errors, each giving different characteristic biases in model output. Thus, much as we can reduce the effects of initial condition uncertainty by averaging over an ensemble of integrations with perturbed initial conditions, so we might also account for model uncertainty by averaging over multiple models. Such an argument is also likely to apply to the anthropogenically-forced responses of these models. Thus here we describe how a mean of the response patterns of five climate models (HadCM2, HadCM3, ECHAM3, CGCM1 and CGCM2) may be used to detect greenhouse gas and sulphate aerosol influences in surface air temperature. In one standard approach to the detection of anthropogenic influence on surface temperature [Allen et al., 2002], signal-to-noise optimised observations are regressed against a modelled response pattern, using a total least squares fit. Climate model output enters the analysis at three points. First, output from integrations of the model with prescribed time-varying forcings is used to derive the signal patterns of climate response. Second, output from a control integration is used to estimate the autocovariance matrix representing internal variability. This covariance matrix is used to derive the EOF basis used for truncation of the signal patterns and signal-to-noise optimisation. Third, an independent section of control data is used to estimate the uncertainty in the derived regression coefficients. In this study, output from multiple models is used in all three stages.
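The core of the total least squares step described in the last paragraph can be sketched compactly: observations are regressed onto a model-derived signal pattern, allowing for noise in both, and the resulting scaling factor is tested against zero. The sketch below omits the EOF truncation and signal-to-noise optimisation stages and uses synthetic data throughout.

```python
# Minimal single-signal total-least-squares (TLS) fit of "observations"
# onto a model-derived signal pattern. Real fingerprint analyses first
# optimize signal-to-noise via control-run EOFs and derive uncertainties
# from independent control data; both are omitted here. All data synthetic.
import numpy as np

def tls_scaling_factor(signal, obs):
    """TLS fit obs ~ beta * signal, allowing noise in both variables
    (the signal comes from a finite ensemble, so it is noisy too)."""
    Z = np.column_stack([signal, obs])
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    v = Vt[-1]              # right singular vector of smallest singular value
    return -v[0] / v[1]

rng = np.random.default_rng(3)
true_pattern = np.linspace(-1, 1, 200)                     # toy response pattern
signal = true_pattern + 0.1 * rng.standard_normal(200)     # ensemble-mean estimate
obs = 0.9 * true_pattern + 0.2 * rng.standard_normal(200)  # "observations"
beta = tls_scaling_factor(signal, obs)
print(f"scaling factor: {beta:.2f}  (detection if its CI excludes zero)")
```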
Climate Dynamics | 1995
Ulrich Cubasch; Gabi Hegerl; Arno Hellbach; Heinke Höck; Uwe Mikolajewicz; Benjamin D. Santer; Reinhard Voss
Due to restrictions in the available computing resources and a lack of suitable observational data, transient climate change experiments with global coupled ocean-atmosphere models have been started from an initial state at equilibrium with the present day forcing. The historical development of greenhouse gas forcing from the onset of industrialization until the present has therefore been neglected. Studies with simplified models have shown that this “cold start” error leads to a serious underestimation of the anthropogenic global warming. In the present study, a 150-year integration has been carried out with a global coupled ocean-atmosphere model starting from the greenhouse gas concentration observed in 1935, i.e., at an early time of industrialization. The model was forced with observed greenhouse gas concentrations up to 1985, and with the equivalent CO2 concentrations stipulated in Scenario A (“Business as Usual”) of the Intergovernmental Panel on Climate Change from 1985 to 2085. The early starting date alleviates some of the cold start problems. The global mean near-surface temperature change in 2085 is about 0.3 K (ca. 10%) higher in the early industrialization experiment than in an integration with the same model and identical Scenario A greenhouse gas forcing, but with a start date in 1985. Comparisons between the experiments with early and late start dates show considerable differences in the amplitude of the regional climate change patterns, particularly for sea level. The early industrialization experiment can be used to obtain a first estimate of the detection time for a greenhouse-gas-induced near-surface temperature signal. Detection time estimates are obtained using globally and zonally averaged data from the experiment and a long control run, as well as principal component time series describing the evolution of the dominant signal and noise modes. The latter approach yields the earliest detection time (in the decade 1990–2000) for the time-evolving near-surface temperature signal. For global-mean temperatures or for temperatures averaged between 45°N and 45°S, the signal detection times are in the decades 2015–2025 and 2005–2015, respectively. The reduction of the “cold start” error in the early industrialization experiment makes it possible to separate the near-surface temperature signal from the noise about one decade earlier than in the experiment starting in 1985. We stress that these detection times are only valid in the context of the coupled model's internally generated natural variability, which possibly underestimates low frequency fluctuations and does not incorporate the variance associated with changes in external forcing factors, such as anthropogenic sulfate aerosols, solar variability or volcanic dust.
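A simplified stand-in for the detection-time idea (not the paper's principal-component method): find the first decade in which a running decadal mean of the forced record exceeds a threshold set by control-run variability. All series and thresholds below are invented for illustration.

```python
# Sketch of a detection-time estimate: the signal is "detected" in the
# first decade where a running decadal mean climbs outside the range of
# control-run internal variability. Numbers are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(4)
years = np.arange(1935, 2086)
signal = 0.02 * (years - 1935)                    # assumed forced warming (K)
noise_std = 0.15                                  # control-run variability (K)
record = signal + noise_std * rng.standard_normal(years.size)

# Detection when the decadal mean exceeds 2 standard errors of the noise.
decadal = np.convolve(record, np.ones(10) / 10, mode="valid")
threshold = 2 * noise_std / np.sqrt(10)
detected = years[9:][decadal > threshold]
print("first detection decade ends:", detected[0] if detected.size else "never")
```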
Geophysical Research Letters | 2011
S. Morak; Gabi Hegerl; J. Kenyon
In this study we analyse gridded observed and multi-model simulated trends in the annual number of warm nights during the second half of the 20th century. We show that there is evidence that external forcing has significantly increased the number of warm nights, both globally and over many regions. We define thirteen regions with a high density of observational data over two datasets, for which we compare observed and simulated trends from 20th century simulations. The main analysis period is 1951–1999, with a sub-period of 1970–1999. In order to investigate if observed trends changed past 1999, we also analysed the periods 1955–2003 and 1974–2003. Both observed and ensemble mean model data from all models analysed show a positive trend for the regional mean number of warm nights in all regions within this 49 year period (1951–1999). The trends tend to become more pronounced over the sub-period 1970–1999 and even more so up to 2003. We apply a fingerprint analysis to assess if trends are detectable relative to internal climate variability. We find that changes in the global-scale analysis, and in 9 out of 13 regions, are detectable at the 5% significance level. A large part of the observed global-scale trend in TN90 results from the trend in mean temperature, which has been attributed largely to anthropogenic greenhouse gas increase. This suggests that the detected global-scale trends in the number of warm nights are at least partly anthropogenic. Citation: Morak, S., G. C. Hegerl, and J. Kenyon (2011), Detectable regional changes in the number of warm nights, Geophys. Res. Lett., 38, L17703, doi:10.1029/2011GL048531.
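The regional trend-detection step can be caricatured as fitting a linear trend to a regional-mean index and asking whether it falls outside the distribution of trends in unforced control segments. The sketch below does that with synthetic stand-ins for the warm-nights index and the control runs; it is not the fingerprint method used in the paper, which optimises signal-to-noise across regions.

```python
# Minimal regional trend-significance sketch: fit a linear trend to an
# invented "warm nights" index and compare it against trends from
# same-length unforced control segments. All values are synthetic.
import numpy as np

rng = np.random.default_rng(7)
years = np.arange(1951, 2000)                    # 49-year analysis period
index = 0.5 * (years - years[0]) / 10 + rng.standard_normal(years.size)

obs_trend = np.polyfit(years, index, 1)[0] * 10  # trend per decade
# Null distribution: trends of 1000 control-noise segments (columns of y).
control = rng.standard_normal((1000, years.size))
null = np.polyfit(years, control.T, 1)[0] * 10
p = np.mean(np.abs(null) >= abs(obs_trend))
verdict = "detected" if p < 0.05 else "not detected"
print(f"trend {obs_trend:.2f} per decade, p ~ {p:.3f} -> {verdict}")
```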
Geophysical Research Letters | 2014
Debbie Polson; Massimo A. Bollasina; Gabi Hegerl; Laura Wilcox
The Northern Hemisphere monsoons are an integral component of Earth's hydrological cycle and affect the lives of billions of people. Observed precipitation in the monsoon regions underwent substantial changes during the second half of the twentieth century, with drying from the 1950s to mid-1980s and increasing precipitation in recent decades. Modeling studies suggest that anthropogenic aerosols have been a key factor driving changes in tropical and monsoon precipitation. Here we apply detection and attribution methods to determine whether observed changes are driven by human influences using fingerprints of individual forcings (i.e., greenhouse gas, anthropogenic aerosol, and natural) derived from climate models. The results show that the observed changes can only be explained when including the influence of anthropogenic aerosols, even after accounting for internal climate variability. Anthropogenic aerosol, not greenhouse gas or natural forcing, has been the dominant influence on Northern Hemisphere monsoon precipitation over the second half of the twentieth century.
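The multi-fingerprint attribution step described here amounts to regressing the observed change simultaneously onto greenhouse-gas, aerosol, and natural fingerprints. A minimal sketch using ordinary least squares on synthetic data follows; published analyses use noise-optimised and total-least-squares variants, and all numbers below are invented.

```python
# Illustrative multi-fingerprint regression: observed change regressed
# simultaneously onto three forcing fingerprints. Ordinary least squares
# here for simplicity; real analyses optimize for noise. Data synthetic.
import numpy as np

rng = np.random.default_rng(6)
n = 300
F = rng.standard_normal((n, 3))            # columns: GHG, aerosol, natural
beta_true = np.array([1.0, -0.8, 0.2])     # aerosol drying opposes GHG wetting
obs = F @ beta_true + 0.3 * rng.standard_normal(n)

beta_hat, *_ = np.linalg.lstsq(F, obs, rcond=None)
for name, b in zip(["GHG", "aerosol", "natural"], beta_hat):
    print(f"{name:8s} scaling factor: {b:+.2f}")
```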
Bulletin of the American Meteorological Society | 2017
Ed Hawkins; Pablo Ortega; Emma B. Suckling; Andrew Schurer; Gabi Hegerl; Phil D. Jones; Manoj Joshi; Timothy J. Osborn; Valérie Masson-Delmotte; Juliette Mignot; Peter W. Thorne; Geert Jan van Oldenborgh
The United Nations Framework Convention on Climate Change (UNFCCC) process agreed in Paris to limit global surface temperature rise to “well below 2°C above pre-industrial levels.” But what period is preindustrial? Somewhat remarkably, this is not defined within the UNFCCC’s many agreements and protocols. Nor is it defined in the IPCC’s Fifth Assessment Report (AR5) in the evaluation of when particular temperature levels might be reached because no robust definition of the period exists. Here we discuss the important factors to consider when defining a preindustrial period, based on estimates of historical radiative forcings and the availability of climate observations. There is no perfect period, but we suggest that 1720–1800 is the most suitable choice when discussing global temperature limits. We then estimate the change in global average temperature since preindustrial using a range of approaches based on observations, radiative forcings, global climate model simulations, and proxy evidence. Our assessment is that this preindustrial period was likely 0.55°–0.80°C cooler than 1986–2005 and that 2015 was likely the first year in which global average temperature was more than 1°C above preindustrial levels. We provide some recommendations for how this assessment might be improved in the future and suggest that reframing temperature limits with a modern baseline would be inherently less uncertain and more policy relevant.
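The baseline arithmetic in this assessment is simple enough to show directly: an anomaly expressed relative to 1986-2005 is shifted to "above preindustrial" by adding the assessed offset. The 0.55°-0.80°C range comes from the abstract; the 2015 anomaly used below is an assumed example value, not the paper's.

```python
# Back-of-envelope baseline shift. The 0.55-0.80 offset is from the
# abstract; the 2015 anomaly is an illustrative assumption only.
offset_low, offset_high = 0.55, 0.80    # preindustrial vs. 1986-2005 (deg C)
anomaly_2015_vs_1986_2005 = 0.35        # assumed example value (deg C)

warming_low = anomaly_2015_vs_1986_2005 + offset_low
warming_high = anomaly_2015_vs_1986_2005 + offset_high
print(f"2015 vs. preindustrial: {warming_low:.2f}-{warming_high:.2f} deg C")
# With these numbers the upper half of the range exceeds 1 deg C,
# consistent with the paper's "likely more than 1 deg C" statement.
```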
Nature | 2014
Ed Hawkins; Bruce T. Anderson; Noah S. Diffenbaugh; Irina Mahlstein; Richard A. Betts; Gabi Hegerl; Manoj Joshi; Reto Knutti; Doug McNeall; Susan Solomon; Rowan Sutton; Jozef Syktus; Gabriel A. Vecchi
Arising from C. Mora et al., Nature 502, 183–187, doi:10.1038/nature12540 (2013). The question of when the signal of climate change will emerge from the background noise of climate variability—the ‘time of emergence’—is potentially important for adaptation planning. Mora et al. presented precise projections of the time of emergence of unprecedented regional climates. However, their methodology produces artificially early dates at which specific regions will permanently experience unprecedented climates and artificially low uncertainty in those dates everywhere. This overconfidence could impair the effectiveness of climate risk management decisions. There is a Reply to this Brief Communication Arising by Mora, C. et al., Nature 511, doi:10.1038/nature13524 (2014).
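For context, a bare-bones "time of emergence" calculation looks like the sketch below: the first year after which a forced series stays permanently outside a noise envelope estimated from historical variability. The critique above is precisely that such estimates become artificially early and precise when the noise is underestimated; everything in the sketch is synthetic.

```python
# Hedged sketch of a time-of-emergence calculation: first year after which
# a forced series stays permanently above a 2-sigma envelope of
# "historical" variability. All series and thresholds are invented.
import numpy as np

rng = np.random.default_rng(5)
years = np.arange(1950, 2101)
signal = np.maximum(0.0, 0.025 * (years - 1980))  # assumed forced trend (K)
noise = 0.25 * rng.standard_normal(years.size)    # internal variability (K)
series = signal + noise

exceed = series > 2 * noise[:50].std()            # envelope from early "record"
# Permanent emergence: one year after the last year still inside the envelope.
inside = np.where(~exceed)[0]
toe = years[inside[-1] + 1] if inside[-1] + 1 < years.size else None
print("time of emergence:", toe)
```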