Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where David McInerney is active.

Publication


Featured research published by David McInerney.


Risk Analysis | 2012

Robust Climate Policies Under Uncertainty: A Comparison of Robust Decision Making and Info-Gap Methods

Jim W. Hall; Robert J. Lempert; Klaus Keller; Andrew Hackbarth; Christophe Mijere; David McInerney

This study compares two widely used approaches for robustness analysis of decision problems: the info-gap method originally developed by Ben-Haim and the robust decision making (RDM) approach originally developed by Lempert, Popper, and Bankes. The study uses each approach to evaluate alternative paths for climate-altering greenhouse gas emissions given the potential for nonlinear threshold responses in the climate system, significant uncertainty about such a threshold response and a variety of other key parameters, as well as the ability to learn about any threshold responses over time. Info-gap and RDM share many similarities. Both represent uncertainty as sets of multiple plausible futures, and both seek to identify robust strategies whose performance is insensitive to uncertainties. Yet they also exhibit important differences, as they arrange their analyses in different orders, treat losses and gains in different ways, and take different approaches to imprecise probabilistic information. The study finds that the two approaches reach similar but not identical policy recommendations and that their differing attributes raise important questions about their appropriate roles in decision support applications. The comparison not only improves understanding of these specific methods, it also suggests some broader insights into robustness approaches and a framework for comparing them.
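For readers unfamiliar with the info-gap idea referenced here, a minimal numerical sketch may help: a strategy's robustness is the largest "horizon of uncertainty" over which its worst-case performance still meets a critical requirement. The performance model, parameter names, and numbers below are illustrative assumptions, not taken from the paper.

```python
# Minimal illustration of info-gap robustness (generic sketch, not the paper's model).
# A strategy's robustness is the largest uncertainty horizon h for which the
# worst-case outcome over all parameter values within h of a best estimate
# still meets a critical performance requirement.
import numpy as np

def worst_case_performance(strategy, h, performance, s_hat=3.0):
    """Worst performance of `strategy` over sensitivities within h of s_hat."""
    sensitivities = np.linspace(s_hat - h, s_hat + h, 201)
    return min(performance(strategy, s) for s in sensitivities)

def robustness(strategy, critical, performance, h_max=5.0, steps=500):
    """Largest h such that the worst-case performance still exceeds `critical`."""
    h_grid = np.linspace(0.0, h_max, steps)
    feasible = [h for h in h_grid
                if worst_case_performance(strategy, h, performance) >= critical]
    return max(feasible) if feasible else 0.0

# Hypothetical performance model: net benefit falls as climate sensitivity rises,
# and more aggressive abatement (larger `strategy`) flattens that decline.
perf = lambda strategy, s: 10.0 - (1.0 - strategy) * (s - 2.0) ** 2

for abatement in (0.2, 0.5, 0.8):
    print(abatement, robustness(abatement, critical=5.0, performance=perf))
```

In this toy setup, more aggressive abatement tolerates a wider range of sensitivities before the critical requirement is violated, which is the sense in which info-gap labels a strategy "robust".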


Climatic Change | 2012

What are robust strategies in the face of uncertain climate threshold responses?

David McInerney; Robert J. Lempert; Klaus Keller

We use an integrated assessment model of climate change to analyze how alternative decision-making criteria affect preferred investments into greenhouse gas mitigation, the distribution of outcomes, the robustness of the strategies, and the economic value of information. We define robustness as trading a small decrease in a strategy’s expected performance for a significant increase in a strategy’s performance in the worst cases. Specifically, we modify the Dynamic Integrated model of Climate and the Economy (DICE-07) to include a simple representation of a climate threshold response, parametric uncertainty, structural uncertainty, learning, and different decision-making criteria. Economic analyses of climate change strategies typically adopt the expected utility maximization (EUM) framework. We compare EUM with two decision criteria adopted from the finance literature, namely Limited Degree of Confidence (LDC) and Safety First (SF). Both criteria increase the relative weight of the performance under the worst-case scenarios compared to EUM. We show that the LDC and SF criteria provide a computationally feasible foundation for identifying greenhouse gas mitigation strategies that may prove more robust than those identified by the EUM criterion. More robust strategies show higher near-term investments in emissions abatement. Reducing uncertainty has a higher economic value of information for the LDC and SF decision criteria than for EUM.
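The three decision criteria can be illustrated with a short sketch on toy outcome samples. The specific functional forms below (a tail-weighted blend for LDC, a threshold screen for SF) are common textbook-style formulations and illustrative assumptions; the paper's exact implementations may differ.

```python
# Generic sketch of the three decision criteria named in the abstract, applied to
# toy utility samples for two hypothetical strategies. Weights, thresholds, and
# outcome distributions are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
outcomes = {
    "low_abatement":  rng.normal(loc=1.00, scale=0.50, size=10_000),
    "high_abatement": rng.normal(loc=0.95, scale=0.15, size=10_000),
}

def eum(u):
    """Expected utility maximization: rank strategies by mean utility."""
    return u.mean()

def ldc(u, alpha=0.7, tail=0.05):
    """Limited degree of confidence (one common form): blend expected utility
    with the mean utility of the worst `tail` fraction of outcomes."""
    worst = np.sort(u)[: int(tail * len(u))]
    return alpha * u.mean() + (1.0 - alpha) * worst.mean()

def safety_first(u, threshold=0.5):
    """Safety-first: screen out strategies whose probability of falling below a
    critical threshold is too high, then rank survivors by expected utility."""
    return u.mean() if (u < threshold).mean() <= 0.01 else -np.inf

for criterion in (eum, ldc, safety_first):
    print(criterion.__name__,
          {name: round(criterion(u), 3) for name, u in outcomes.items()})
```

With these toy numbers, EUM slightly prefers the low-abatement strategy, while the tail-sensitive LDC and SF criteria prefer the higher-abatement strategy, mirroring the abstract's point that more robust strategies favor higher near-term abatement.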


Water Resources Research | 2014

Comparison of joint versus postprocessor approaches for hydrological uncertainty estimation accounting for error autocorrelation and heteroscedasticity

Guillaume Evin; Mark Thyer; Dmitri Kavetski; David McInerney; George Kuczera

The paper appraises two approaches for the treatment of heteroscedasticity and autocorrelation in residual errors of hydrological models. Both approaches use weighted least squares (WLS), with heteroscedasticity modeled as a linear function of predicted flows and autocorrelation represented using an AR(1) process. In the first approach, heteroscedasticity and autocorrelation parameters are inferred jointly with hydrological model parameters. The second approach is a two-stage “postprocessor” scheme, where Stage 1 infers the hydrological parameters ignoring autocorrelation and Stage 2 conditionally infers the heteroscedasticity and autocorrelation parameters. These approaches are compared to a WLS scheme that ignores autocorrelation. Empirical analysis is carried out using daily data from 12 US catchments in the MOPEX set, using two conceptual rainfall-runoff models, GR4J and HBV. Under synthetic conditions, the postprocessor and joint approaches provide similar predictive performance, though the postprocessor approach tends to underestimate parameter uncertainty. However, the MOPEX results indicate that the joint approach can be nonrobust. In particular, when applied to GR4J, it often produces poor predictions due to strong multiway interactions between a hydrological water balance parameter and the error model parameters. The postprocessor approach is more robust precisely because it ignores these interactions. Practical benefits of accounting for error autocorrelation are demonstrated by analyzing streamflow predictions aggregated to a monthly scale (where ignoring daily-scale error autocorrelation leads to significantly underestimated predictive uncertainty), and by analyzing one-day-ahead predictions (where accounting for the error autocorrelation produces clearly higher precision and better tracking of observed data). Including autocorrelation in the residual error model also significantly affects calibrated parameter values and uncertainty estimates. The paper concludes with a summary of outstanding challenges in residual error modeling, particularly in ephemeral catchments.
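A minimal sketch of the residual error model described here (error standard deviation linear in the predicted flow, AR(1) autocorrelation in the standardized residuals) is given below. The parameter names and likelihood form are a generic formulation, not necessarily the authors' exact notation.

```python
# Sketch of the residual error model described in the abstract: the error standard
# deviation grows linearly with the predicted flow, and standardized residuals
# follow an AR(1) process. Parameter names (sigma_a, sigma_b, phi) are generic.
import numpy as np

def ar1_wls_loglik(q_obs, q_sim, sigma_a, sigma_b, phi):
    """Log-likelihood of observations under a WLS + AR(1) residual error model."""
    sigma = sigma_a + sigma_b * q_sim          # heteroscedastic standard deviation
    eta = (q_obs - q_sim) / sigma              # standardized residuals
    innov = eta[1:] - phi * eta[:-1]           # AR(1) innovations
    innov_var = 1.0 - phi ** 2                 # stationary innovation variance
    ll = -0.5 * np.sum(innov ** 2 / innov_var + np.log(2 * np.pi * innov_var))
    ll += -0.5 * (eta[0] ** 2 + np.log(2 * np.pi))   # marginal term for first point
    ll -= np.sum(np.log(sigma))                # Jacobian of the standardization
    return ll

# In a joint scheme, (sigma_a, sigma_b, phi) would be inferred together with the
# hydrological model parameters; in the two-stage postprocessor scheme, the
# hydrological parameters are calibrated first and these terms are fit afterwards.
```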


Proceedings of the National Academy of Sciences of the United States of America | 2014

Evaluating the utility of dynamical downscaling in agricultural impacts projections

Michael Glotter; Joshua Elliott; David McInerney; Neil Best; Ian T. Foster; Elisabeth J. Moyer

Significance: One of the largest concerns about future climate change is its potential effect on food supply. Crop yield projections require climate inputs at higher resolution than typical for global climate models, and the computationally expensive technique of dynamical downscaling is widely used for this translation. We simulate maize yield in the United States to test whether current dynamical downscaling methods add value over simpler downscaling approaches. Our results suggest that they do not. Addressing large-scale systematic biases in climate output may be a higher priority for understanding future climate change impacts.

Interest in estimating the potential socioeconomic costs of climate change has led to the increasing use of dynamical downscaling—nested modeling in which regional climate models (RCMs) are driven with general circulation model (GCM) output—to produce fine-spatial-scale climate projections for impacts assessments. We evaluate here whether this computationally intensive approach significantly alters projections of agricultural yield, one of the greatest concerns under climate change. Our results suggest that it does not. We simulate US maize yields under current and future CO2 concentrations with the widely used Decision Support System for Agrotechnology Transfer crop model, driven by a variety of climate inputs including two GCMs, each in turn downscaled by two RCMs. We find that no climate model output can reproduce yields driven by observed climate unless a bias correction is first applied. Once a bias correction is applied, GCM- and RCM-driven US maize yields are essentially indistinguishable in all scenarios (<10% discrepancy, equivalent to error from observations). Although RCMs correct some GCM biases related to fine-scale geographic features, errors in yield are dominated by broad-scale (100s of kilometers) GCM systematic errors that RCMs cannot compensate for. These results support previous suggestions that the benefits for impacts assessments of dynamically downscaling raw GCM output may not be sufficient to justify its computational demands. Progress on fidelity of yield projections may benefit more from continuing efforts to understand and minimize systematic error in underlying climate projections.
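As a rough illustration of the bias-correction step mentioned above, the sketch below shifts each calendar month of model output to match an observed climatology. This is one common, simple form of correction and is not claimed to be the scheme used in the paper.

```python
# Illustrative monthly mean bias correction: remove the historical model-minus-obs
# bias from future model output, one calendar month at a time. All inputs are
# plain numpy arrays; the function names are hypothetical.
import numpy as np

def monthly_mean_bias_correction(model_hist, obs_hist, hist_months,
                                 model_future, future_months):
    """Shift future model output by the historical model-minus-obs bias,
    computed separately for each calendar month (1..12)."""
    corrected = np.array(model_future, dtype=float)
    for m in range(1, 13):
        bias = model_hist[hist_months == m].mean() - obs_hist[hist_months == m].mean()
        corrected[future_months == m] -= bias
    return corrected
```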


Journal of Climate | 2014

Statistical Emulation of Climate Model Projections Based on Precomputed GCM Runs

Stefano Castruccio; David McInerney; Michael L. Stein; Feifei Liu Crouch; Robert L. Jacob; Elisabeth J. Moyer

The authors describe a new approach for emulating the output of a fully coupled climate model under arbitrary forcing scenarios that is based on a small set of precomputed runs from the model. Temperature and precipitation are expressed as simple functions of the past trajectory of atmospheric CO2 concentrations, and a statistical model is fit using a limited set of training runs. The approach is demonstrated to be a useful and computationally efficient alternative to pattern scaling and captures the nonlinear evolution of spatial patterns of climate anomalies inherent in transient climates. The approach does as well as pattern scaling in all circumstances and substantially better in many; it is not computationally demanding; and, once the statistical model is fit, it produces emulated climate output effectively instantaneously. It may therefore find wide application in climate impacts assessments and other policy analyses requiring rapid climate projections.
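One way to picture this kind of emulator is as a regression of local temperature on a weighted history of log CO2, fit to a few training runs. The sketch below uses an assumed exponential-memory feature and a plain least-squares fit; it is only in the spirit of the approach, not its exact statistical model.

```python
# Sketch of a trajectory-based emulator: local temperature regressed on an
# exponentially weighted history of log CO2. Functional form and names are
# illustrative, not the paper's exact model.
import numpy as np

def co2_memory_feature(log_co2, decay=0.02):
    """Exponentially weighted history of log CO2 (a crude 'climate memory' term)."""
    weights = np.exp(-decay * np.arange(len(log_co2)))[::-1]
    return np.array([np.sum(weights[-(t + 1):] * log_co2[: t + 1])
                     / np.sum(weights[-(t + 1):]) for t in range(len(log_co2))])

def fit_emulator(log_co2_train, temperature_train):
    """Least-squares fit of local temperature against the CO2 memory feature."""
    x = co2_memory_feature(log_co2_train)
    design = np.column_stack([np.ones_like(x), x])
    coeffs, *_ = np.linalg.lstsq(design, temperature_train, rcond=None)
    return coeffs

def emulate(coeffs, log_co2_scenario):
    """Predict temperature for a new forcing scenario, effectively instantaneously."""
    return coeffs[0] + coeffs[1] * co2_memory_feature(log_co2_scenario)
```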


Water Resources Research | 2017

Improving probabilistic prediction of daily streamflow by identifying Pareto optimal approaches for modeling heteroscedastic residual errors

David McInerney; Mark Thyer; Dmitri Kavetski; Julien Lerat; George Kuczera

Reliable and precise probabilistic prediction of daily catchment-scale streamflow requires statistical characterization of residual errors of hydrological models. This study focuses on approaches for representing error heteroscedasticity with respect to simulated streamflow, i.e., the pattern of larger errors in higher streamflow predictions. We evaluate 8 common residual error schemes, including standard and weighted least squares, the Box-Cox transformation (with fixed and calibrated power parameter λ) and the log-sinh transformation. Case studies include 17 perennial and 6 ephemeral catchments in Australia and the USA, and two lumped hydrological models. Performance is quantified using predictive reliability, precision and volumetric bias metrics. We find the choice of heteroscedastic error modelling approach significantly impacts predictive performance, though no single scheme simultaneously optimizes all performance metrics. The set of Pareto optimal schemes, reflecting performance trade-offs, comprises Box-Cox schemes with λ of 0.2 and 0.5, and the log scheme (λ=0, perennial catchments only). These schemes significantly outperform even the average-performing remaining schemes (e.g., across ephemeral catchments, median precision tightens from 105% to 40% of observed streamflow, and median biases decrease from 25% to 4%). Theoretical interpretations of the empirical results highlight the importance of capturing the skew/kurtosis of raw residuals and reproducing zero flows. Paradoxically, calibration of λ is often counterproductive: in perennial catchments, it tends to overfit low flows, resulting in abysmal precision in high flows. The log-sinh transformation is dominated by the simpler Pareto optimal schemes listed above. Recommendations for researchers and practitioners seeking robust residual error schemes for practical work are provided.
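The Box-Cox residual error schemes compared here can be sketched as follows: transform the flows, treat residuals as roughly Gaussian in the transformed space, then back-transform to generate probabilistic predictions. The offset for zero flows and the sampling step below are illustrative choices, not the study's exact configuration.

```python
# Sketch of Box-Cox residual error schemes: flows are transformed so residuals in
# the transformed space are roughly homoscedastic. The offset `c` (to handle zero
# flows in ephemeral catchments) and the sampling step are generic illustrations.
import numpy as np

def boxcox(q, lam, c=1e-3):
    """Box-Cox transform of streamflow; lam=0 reduces to the log scheme."""
    return np.log(q + c) if lam == 0 else ((q + c) ** lam - 1.0) / lam

def inverse_boxcox(z, lam, c=1e-3):
    if lam == 0:
        return np.exp(z) - c
    return np.clip(lam * z + 1.0, 0.0, None) ** (1.0 / lam) - c

def predictive_samples(q_sim, residual_std, lam=0.2, n=1000, rng=None):
    """Probabilistic streamflow predictions: add Gaussian noise in transformed
    space, then back-transform (negative flows truncated to zero)."""
    rng = rng or np.random.default_rng()
    z = boxcox(np.asarray(q_sim, dtype=float), lam)
    noise = rng.normal(scale=residual_std, size=(n, len(z)))
    return np.clip(inverse_boxcox(z + noise, lam), 0.0, None)
```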


Journal of Atmospheric and Oceanic Technology | 2008

Optimization of an observing system design for the North Atlantic meridional overturning circulation

Johanna Baehr; David McInerney; Klaus Keller; Jochem Marotzke

Three methods are analyzed for the design of ocean observing systems to monitor the meridional overturning circulation (MOC) in the North Atlantic. Specifically, a continuous monitoring array to monitor the MOC at 1000 m at different latitudes is “deployed” into a numerical model. The authors compare array design methods guided by (i) physical intuition (heuristic array design), (ii) sequential optimization, and (iii) global optimization. The global optimization technique can recover the true global solution for the analyzed array design, whereas gradient-based optimization would be prone to misconvergence. Both global optimization and heuristic array design yield considerably improved results over sequential array design. Global optimization always outperforms the heuristic array design in terms of minimizing the root-mean-square error. However, whether the results are physically meaningful is not guaranteed; the apparent success might merely represent a solution in which misfits compensate for each other.
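A toy analogue of the array-design comparison: choose a few "mooring" latitudes so that interpolating point samples reconstructs a synthetic signal with minimal RMSE, comparing greedy sequential selection with an exhaustive (global) search. Everything in the sketch (signal, metric, candidate sites) is a stand-in for illustration, not the study's model setup.

```python
# Toy comparison of sequential (greedy) versus global array design: pick k sites
# so that interpolating point samples reconstructs a synthetic signal with
# minimal RMSE. The signal and error metric are illustrative stand-ins.
import itertools
import numpy as np

rng = np.random.default_rng(1)
latitudes = np.linspace(20, 60, 9)                 # candidate mooring latitudes
signal = np.sin(np.linspace(0, np.pi, 9)) + 0.1 * rng.standard_normal(9)

def rmse_of_array(chosen):
    """RMSE of reconstructing the full signal by interpolating the chosen sites."""
    idx = sorted(chosen)
    recon = np.interp(latitudes, latitudes[idx], signal[idx])
    return np.sqrt(np.mean((recon - signal) ** 2))

def sequential_design(k):
    """Greedy: add one site at a time, picking the best marginal addition."""
    chosen = []
    for _ in range(k):
        best = min((i for i in range(len(latitudes)) if i not in chosen),
                   key=lambda i: rmse_of_array(chosen + [i]))
        chosen.append(best)
    return sorted(chosen)

def global_design(k):
    """Exhaustive search over all k-site combinations (the 'global' benchmark)."""
    return list(min(itertools.combinations(range(len(latitudes)), k),
                    key=rmse_of_array))

print("sequential:", sequential_design(3), "global:", global_design(3))
```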


The Annals of Applied Statistics | 2016

Temperatures in transient climates: Improved methods for simulations with evolving temporal covariances

Andrew Poppick; David McInerney; Elisabeth J. Moyer; Michael L. Stein

Future climate change impacts depend on temperatures not only through changes in their means but also through changes in their variability. General circulation models (GCMs) predict changes in both means and variability; however, GCM output should not be used directly as simulations for impacts assessments because GCMs do not fully reproduce present-day temperature distributions. This paper addresses an ensuing need for simulations of future temperatures that combine both the observational record and GCM projections of changes in means and temporal covariances. Our perspective is that such simulations should be based on transforming observations to account for GCM projected changes, in contrast to methods that transform GCM output to account for discrepancies with observations. Our methodology is designed for simulating transient (non-stationary) climates, which are evolving in response to changes in CO2 concentrations (as is the Earth at present). This work builds on previously described methods for simulating equilibrium (stationary) climates. Since the proposed simulation relies on GCM projected changes in covariance, we describe a statistical model for the evolution of temporal covariances in a GCM under future forcing scenarios, and apply this model to an ensemble of runs from one GCM, CCSM3. We find that, at least in CCSM3, changes in the local covariance structure can be explained as a function of the regional mean change in temperature and the rate of change of warming. This feature means that the statistical model can be used to emulate the evolving covariance structure of GCM temperatures under scenarios for which the GCM has not been run. When combined with an emulator for mean temperature, our methodology can simulate evolving temperatures under such scenarios, in a way that accounts for projections of changes while still retaining fidelity with the observational record.
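The covariance relationship described above can be sketched as a simple regression of local variability changes on the regional mean temperature change and its rate of change. The linear form below is an illustrative assumption, not the paper's exact model.

```python
# Sketch of the kind of statistical relationship described above: local changes in
# temperature variability regressed on the regional mean temperature change and
# its rate of change. The regression form and names are generic illustrations.
import numpy as np

def fit_covariance_model(local_variance_change, regional_dT, regional_dT_rate):
    """Least-squares fit: delta variance ~ a + b * dT + c * d(dT)/dt."""
    design = np.column_stack([np.ones_like(regional_dT), regional_dT, regional_dT_rate])
    coeffs, *_ = np.linalg.lstsq(design, local_variance_change, rcond=None)
    return coeffs

def emulate_variance_change(coeffs, regional_dT, regional_dT_rate):
    """Emulate evolving variability under a scenario the GCM was not run for."""
    return coeffs[0] + coeffs[1] * regional_dT + coeffs[2] * regional_dT_rate
```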


Environmental Modelling and Software | 2018

A generalised approach for identifying influential data in hydrological modelling

David P. Wright; Mark Thyer; Seth Westra; Benjamin Renard; David McInerney



Environmental Modelling and Software | 2018

A simplified approach to produce probabilistic hydrological model predictions

David McInerney; Mark Thyer; Dmitri Kavetski; Bree Bennett; Julien Lerat; Matthew S. Gibbs; George Kuczera


Collaboration


Dive into David McInerney's collaboration.

Top Co-Authors

Mark Thyer
University of Adelaide

Klaus Keller
Pennsylvania State University

Julien Lerat
Commonwealth Scientific and Industrial Research Organisation

John Noye
University of Adelaide