Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Reto Knutti is active.

Publication


Featured research published by Reto Knutti.


Proceedings of the National Academy of Sciences of the United States of America | 2009

Irreversible climate change due to carbon dioxide emissions

Susan Solomon; Gian-Kasper Plattner; Reto Knutti; Pierre Friedlingstein

The severity of damaging human-induced climate change depends not only on the magnitude of the change but also on the potential for irreversibility. This paper shows that the climate change that takes place due to increases in carbon dioxide concentration is largely irreversible for 1,000 years after emissions stop. Following cessation of emissions, removal of atmospheric carbon dioxide decreases radiative forcing, but is largely compensated by slower loss of heat to the ocean, so that atmospheric temperatures do not drop significantly for at least 1,000 years. Among illustrative irreversible impacts that should be expected if atmospheric carbon dioxide concentrations increase from current levels near 385 parts per million by volume (ppmv) to a peak of 450–600 ppmv over the coming century are irreversible dry-season rainfall reductions in several regions comparable to those of the “dust bowl” era and inexorable sea level rise. Thermal expansion of the warming ocean provides a conservative lower limit to irreversible global average sea level rise of at least 0.4–1.0 m if 21st century CO2 concentrations exceed 600 ppmv and 0.6–1.9 m for peak CO2 concentrations exceeding ≈1,000 ppmv. Additional contributions from glaciers and ice sheets to future sea level rise are uncertain but may equal or exceed several meters over the next millennium or longer.


Nature | 2009

Greenhouse-gas emission targets for limiting global warming to 2 °C

Malte Meinshausen; Nicolai Meinshausen; William Hare; Sarah Raper; Katja Frieler; Reto Knutti; David J. Frame; Myles R. Allen

More than 100 countries have adopted a global warming limit of 2 °C or below (relative to pre-industrial levels) as a guiding principle for mitigation efforts to reduce climate change risks, impacts and damages. However, the greenhouse gas (GHG) emissions corresponding to a specified maximum warming are poorly known owing to uncertainties in the carbon cycle and the climate response. Here we provide a comprehensive probabilistic analysis aimed at quantifying GHG emission budgets for the 2000–50 period that would limit warming throughout the twenty-first century to below 2 °C, based on a combination of published distributions of climate system properties and observational constraints. We show that, for the chosen class of emission scenarios, both cumulative emissions up to 2050 and emission levels in 2050 are robust indicators of the probability that twenty-first century warming will not exceed 2 °C relative to pre-industrial temperatures. Limiting cumulative CO2 emissions over 2000–50 to 1,000 Gt CO2 yields a 25% probability of warming exceeding 2 °C—and a limit of 1,440 Gt CO2 yields a 50% probability—given a representative estimate of the distribution of climate system properties. As known 2000–06 CO2 emissions were ∼234 Gt CO2, less than half the proven economically recoverable oil, gas and coal reserves can still be emitted up to 2050 to achieve such a goal. Recent G8 Communiqués envisage halved global GHG emissions by 2050, for which we estimate a 12–45% probability of exceeding 2 °C—assuming 1990 as emission base year and a range of published climate sensitivity distributions. Emissions levels in 2020 are a less robust indicator, but for the scenarios considered, the probability of exceeding 2 °C rises to 53–87% if global GHG emissions are still more than 25% above 2000 levels in 2020.
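The two quoted budget/probability pairs invite a quick back-of-the-envelope reading. The sketch below is purely illustrative (my own straight-line interpolation between the quoted numbers; the paper's estimates come from a full probabilistic analysis, not from a linear read-off):

```python
# Purely illustrative: linearly interpolating between the two quoted
# budget/probability pairs (1,000 Gt CO2 -> 25%, 1,440 Gt CO2 -> 50%)
# to read off an exceedance probability for an intermediate 2000-50
# cumulative-emissions budget. Not the paper's method.

def exceedance_probability(budget_gt_co2):
    """Interpolated probability of exceeding 2 degC for a given budget."""
    b1, p1 = 1000.0, 0.25
    b2, p2 = 1440.0, 0.50
    return p1 + (p2 - p1) * (budget_gt_co2 - b1) / (b2 - b1)

print(exceedance_probability(1220.0))  # midpoint budget -> 0.375
```

Such a linear reading only makes sense between the two anchor points; the paper's distributions are strongly non-linear outside that range.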


Philosophical Transactions of the Royal Society A | 2007

The use of the multi-model ensemble in probabilistic climate projections

Claudia Tebaldi; Reto Knutti

Recent coordinated efforts, in which numerous climate models have been run for a common set of experiments, have produced large datasets of projections of future climate for various scenarios. Those multi-model ensembles sample initial condition, parameter as well as structural uncertainties in the model design, and they have prompted a variety of approaches to quantify uncertainty in future climate in a probabilistic way. This paper outlines the motivation for using multi-model ensembles, reviews the methodologies published so far and compares their results for regional temperature projections. The challenges in interpreting multi-model results, caused by the lack of verification of climate projections, the problem of model dependence, bias and tuning as well as the difficulty in making sense of an ‘ensemble of opportunity’, are discussed in detail.


Journal of Climate | 2010

Challenges in Combining Projections from Multiple Climate Models

Reto Knutti; Reinhard Furrer; Claudia Tebaldi; Jan Cermak; Gerald A. Meehl

Recent coordinated efforts, in which numerous general circulation climate models have been run for a common set of experiments, have produced large datasets of projections of future climate for various scenarios. Those multimodel ensembles sample initial conditions, parameters, and structural uncertainties in the model design, and they have prompted a variety of approaches to quantifying uncertainty in future climate change. International climate change assessments also rely heavily on these models. These assessments often provide equal-weighted averages as best-guess results, assuming that individual model biases will at least partly cancel and that a model average prediction is more likely to be correct than a prediction from a single model, based on the finding that a multimodel average of present-day climate generally outperforms any individual model. This study outlines the motivation for using multimodel ensembles and discusses various challenges in interpreting them. Among these challenges are that the number of models in these ensembles is usually small, their distribution in the model or parameter space is unclear, and that extreme behavior is often not sampled. Model skill in simulating present-day climate conditions is shown to relate only weakly to the magnitude of predicted change. It is thus unclear by how much the confidence in future projections should increase based on improvements in simulating present-day conditions, a reduction of intermodel spread, or a larger number of models. Averaging model output may further lead to a loss of signal, for example for precipitation change, where the predicted changes are spatially heterogeneous, such that the true expected change is very likely to be larger than suggested by a model average. Last, there is little agreement on metrics to separate “good” and “bad” models, and there is concern that model development, evaluation, and posterior weighting or ranking are all using the same datasets.
While the multimodel average still appears to be useful in some situations, these results show that more quantitative methods to evaluate model performance are critical to maximize the value of climate change projections from global models.
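The signal-loss point about spatially heterogeneous change can be shown with a toy calculation (hypothetical numbers, not model output): when models agree on the magnitude of change but not on its spatial pattern, the grid-point-wise multimodel mean is far weaker than any single model's field.

```python
# Toy illustration with hypothetical numbers (not model output): two
# "models" project precipitation change at four grid points with similar
# magnitudes but opposing spatial patterns, so averaging mutes the signal.

def multimodel_mean(models):
    """Grid-point-wise average across a list of model fields."""
    n = len(models)
    return [sum(m[i] for m in models) / n for i in range(len(models[0]))]

def mean_magnitude(field):
    """Mean absolute change over the field."""
    return sum(abs(v) for v in field) / len(field)

model_a = [+0.8, -0.6, +0.5, -0.7]  # hypothetical changes, mm/day
model_b = [-0.7, +0.8, -0.6, +0.5]

mean_field = multimodel_mean([model_a, model_b])

# The averaged field is much weaker than either model's own change.
print(mean_magnitude(model_a), mean_magnitude(mean_field))
```

Here each model projects changes of about 0.65 mm/day in magnitude, while the ensemble mean suggests less than 0.1 mm/day, understating the change either model would actually deliver.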


Journal of Climate | 2014

Uncertainties in CMIP5 Climate Projections due to Carbon Cycle Feedbacks

Pierre Friedlingstein; Malte Meinshausen; Vivek K. Arora; Chris D. Jones; Alessandro Anav; Spencer Liddicoat; Reto Knutti

In the context of phase 5 of the Coupled Model Intercomparison Project (CMIP5), most climate simulations use prescribed atmospheric CO2 concentration and therefore do not interactively include the effect of carbon cycle feedbacks. However, the representative concentration pathway 8.5 (RCP8.5) scenario has additionally been run by earth system models with prescribed CO2 emissions. This paper analyzes the climate projections of 11 earth system models (ESMs) that performed both emission-driven and concentration-driven RCP8.5 simulations. When forced by RCP8.5 CO2 emissions, models simulate a large spread in atmospheric CO2; the simulated 2100 concentrations range between 795 and 1145 ppm. Seven out of the 11 ESMs simulate a larger CO2 concentration (on average by 44 ppm, 985 ± 97 ppm by 2100) and hence higher radiative forcing (by 0.25 W m−2) when driven by CO2 emissions than for the concentration-driven scenarios (941 ppm). However, most of these models already overestimate the present-day CO2, with the present-day bias…


Geophysical Research Letters | 2005

Thermohaline circulation hysteresis: a model intercomparison

Stefan Rahmstorf; Michel Crucifix; Andrey Ganopolski; Hugues Goosse; Igor V. Kamenkovich; Reto Knutti; Gerrit Lohmann; Robert Marsh; Lawrence A. Mysak; Zhaomin Wang; Andrew J. Weaver

We present results from an intercomparison of 11 different climate models of intermediate complexity, in which the North Atlantic Ocean was subjected to slowly varying changes in freshwater input. All models show a characteristic hysteresis response of the thermohaline circulation to the freshwater forcing, which can be explained by Stommel's salt advection feedback. The width of the hysteresis curves varies between 0.2 and 0.5 Sv in the models. Major differences are found in the location of the present-day climate on the hysteresis diagram. In seven of the models, the present-day climate for standard parameter choices is found in the bi-stable regime, while in four models this climate is in the mono-stable regime. The proximity of the present-day climate to the Stommel bifurcation point, beyond which North Atlantic Deep Water formation cannot be sustained, varies from less than 0.1 Sv to over 0.5 Sv.


Nature | 2004

Strong hemispheric coupling of glacial climate through freshwater discharge and ocean circulation

Reto Knutti; Jacqueline Flückiger; Thomas F. Stocker; Axel Timmermann

The climate of the last glacial period was extremely variable, characterized by abrupt warming events in the Northern Hemisphere, accompanied by slower temperature changes in Antarctica and variations of global sea level. It is generally accepted that this millennial-scale climate variability was caused by abrupt changes in the ocean thermohaline circulation. Here we use a coupled ocean–atmosphere–sea ice model to show that freshwater discharge into the North Atlantic Ocean, in addition to a reduction of the thermohaline circulation, has a direct effect on Southern Ocean temperature. The related anomalous oceanic southward heat transport arises from a zonal density gradient in the subtropical North Atlantic caused by a fast wave-adjustment process. We present an extended and quantitative bipolar seesaw concept that explains the timing and amplitude of Greenland and Antarctic temperature changes, the slow changes in Antarctic temperature and its similarity to sea level, as well as a possible time lag of sea level with respect to Antarctic temperature during Marine Isotope Stage 3.


Journal of Climate | 2010

Risks of Model Weighting in Multimodel Climate Projections

Andreas P. Weigel; Reto Knutti; Mark A. Liniger; Christof Appenzeller

Multimodel combination is a pragmatic approach to estimating model uncertainties and to making climate projections more reliable. The simplest way of constructing a multimodel is to give one vote to each model (‘‘equal weighting’’), while more sophisticated approaches suggest applying model weights according to some measure of performance (‘‘optimum weighting’’). In this study, a simple conceptual model of climate change projections is introduced and applied to discuss the effects of model weighting in more generic terms. The results confirm that equally weighted multimodels on average outperform the single models, and that projection errors can in principle be further reduced by optimum weighting. However, this not only requires accurate knowledge of the single model skill; the relative contributions of the joint model error and unpredictable noise also need to be known to avoid biased weights. If weights are applied that do not appropriately represent the true underlying uncertainties, weighted multimodels perform on average worse than equally weighted ones, a scenario that is not unlikely given that at present there is no consensus on how skill-based weights can be obtained. Particularly when internal variability is large, more information may be lost by inappropriate weighting than could potentially be gained by optimum weighting. These results indicate that for many applications equal weighting may be the safer and more transparent way to combine models. However, even within the presented framework, eliminating models from an ensemble can be justified if they are known to lack key mechanisms that are indispensable for meaningful climate projections.
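The central caveat, that mis-specified weights can make a weighted multimodel worse than an equally weighted one, can be reproduced in a few lines of Monte Carlo. This is a sketch under my own simplified error model (projection = truth + bias + noise), not the paper's conceptual model:

```python
import random

# Sketch under an assumed error model: each model's projection is
# truth + bias + noise. We compare the mean squared error of an
# equal-weighted multimodel mean against a combination whose weights
# do NOT reflect the true model skill.

random.seed(0)
TRUTH = 3.0  # hypothetical "true" climate signal

def projection(bias, sigma):
    """One model's projection: truth distorted by bias plus noise."""
    return TRUTH + bias + random.gauss(0.0, sigma)

def combine(projs, weights):
    """Weighted multimodel combination."""
    total = sum(weights)
    return sum(w * p for w, p in zip(projs, weights)) / total

biases = [0.2, -0.1, 0.4, -0.3, 0.0]     # hypothetical per-model biases
sigmas = [0.5] * 5                        # unpredictable noise
equal_weights = [1.0] * 5
bad_weights = [5.0, 1.0, 5.0, 1.0, 1.0]  # overweights the most biased models

def mean_squared_error(weights, trials=5000):
    err = 0.0
    for _ in range(trials):
        projs = [projection(b, s) for b, s in zip(biases, sigmas)]
        err += (combine(projs, weights) - TRUTH) ** 2
    return err / trials

# Mis-specified weights lose to plain equal weighting on average.
print(mean_squared_error(equal_weights), mean_squared_error(bad_weights))
```

With accurate skill knowledge the weights would instead favor the low-bias models and beat equal weighting, which is the paper's "optimum weighting" case; the failure mode shown here only requires that the weights and the true biases disagree.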


Journal of Climate | 2008

Long-term climate commitments projected with climate-carbon cycle models

Gian-Kasper Plattner; Reto Knutti; Fortunat Joos; Thomas F. Stocker; W. von Bloh; Victor Brovkin; David Cameron; E. Driesschaert; Stephanie Dutkiewicz; Michael Eby; Neil R. Edwards; Thierry Fichefet; J. C. Hargreaves; Chris D. Jones; Marie-France Loutre; H. D. Matthews; Anne Mouchet; S. A. Mueller; S. Nawrath; A.R. Price; Andrei P. Sokolov; Kuno M. Strassmann; Andrew J. Weaver

Eight earth system models of intermediate complexity (EMICs) are used to project climate change commitments for the recent Intergovernmental Panel on Climate Change’s (IPCC’s) Fourth Assessment Report (AR4). Simulations are run until the year 3000 A.D. and extend substantially farther into the future than conceptually similar simulations with atmosphere–ocean general circulation models (AOGCMs) coupled to carbon cycle models. In this paper the following are investigated: 1) the climate change commitment in response to stabilized greenhouse gases and stabilized total radiative forcing, 2) the climate change commitment in response to earlier CO2 emissions, and 3) emission trajectories for profiles leading to the stabilization of atmospheric CO2 and their uncertainties due to carbon cycle processes. Results over the twenty-first century compare reasonably well with results from AOGCMs, and the suite of EMICs proves well suited to complement more complex models. Substantial climate change commitments for sea level rise and global mean surface temperature increase after a stabilization of atmospheric greenhouse gases and radiative forcing in the year 2100 are identified. The additional warming by the year 3000 is 0.6–1.6 K for the low-CO2 IPCC Special Report on Emissions Scenarios (SRES) B1 scenario and 1.3–2.2 K for the high-CO2 SRES A2 scenario. Correspondingly, the post-2100 thermal expansion commitment is 0.3–1.1 m for SRES B1 and 0.5–2.2 m for SRES A2. Sea level continues to rise due to thermal expansion for several centuries after CO2 stabilization. In contrast, surface temperature changes slow down after a century. The meridional overturning circulation is weakened in all EMICs, but recovers to nearly initial values in all but one of the models after centuries for the scenarios considered. Emissions during the twenty-first century continue to impact atmospheric CO2 and climate even at year 3000. 
All models find that most of the anthropogenic carbon emissions are eventually taken up by the ocean (49%–62%) in year 3000, and that a substantial fraction (15%–28%) is still airborne even 900 yr after carbon emissions have ceased. Future stabilization of atmospheric CO2 and climate change requires a substantial reduction of CO2 emissions below present levels in all EMICs. This reduction needs to be substantially larger if carbon cycle–climate feedbacks are accounted for or if terrestrial CO2 fertilization is not operating. Large differences among EMICs are identified in both the response to increasing atmospheric CO2 and the response to climate change. This highlights the need for improved representations of carbon cycle processes in these models apart from the sensitivity to climate change. Sensitivity simulations with one single EMIC indicate that both carbon cycle and climate sensitivity related uncertainties on projected allowable emissions are substantial.


Philosophical Transactions of the Royal Society A | 2008

Should we believe model predictions of future climate change?

Reto Knutti

Predictions of future climate are based on elaborate numerical computer models. As computational capacity increases and better observations become available, one would expect the model predictions to become more reliable. However, are they really improving, and how do we know? This paper discusses how current climate models are evaluated, why and where scientists have confidence in their models, how uncertainty in predictions can be quantified, and why models often tend to converge on what we observe but not on what we predict. Furthermore, it outlines some strategies on how the climate modelling community may overcome some of the current deficiencies in the attempt to provide useful information to the public and policy-makers.

Collaboration


Dive into Reto Knutti's collaborations.

Top Co-Authors

Benjamin M. Sanderson, National Center for Atmospheric Research
Joeri Rogelj, International Institute for Applied Systems Analysis
Malte Meinshausen, Potsdam Institute for Climate Impact Research
Claudia Tebaldi, National Center for Atmospheric Research
David J. Frame, Victoria University of Wellington
Gerald A. Meehl, National Center for Atmospheric Research