
Publication


Featured research published by John Doherty.


Water Resources Research | 2005

A hybrid regularized inversion methodology for highly parameterized environmental models

Matthew J. Tonkin; John Doherty

A hybrid approach to the regularized inversion of highly parameterized environmental models is described. The method is based on constructing a highly parameterized base model, calculating base parameter sensitivities, and decomposing the base parameter normal matrix into eigenvectors representing principal orthogonal directions in parameter space. The decomposition is used to construct super parameters. Super parameters are factors by which principal eigenvectors of the base parameter normal matrix are multiplied in order to minimize a composite least squares objective function. These eigenvectors define orthogonal axes of a parameter subspace for which information is available from the calibration data. The coordinates of the solution are sought within this subspace. Super parameters are estimated using a regularized nonlinear Gauss-Marquardt-Levenberg scheme. Though super parameters are estimated, Tikhonov regularization constraints are imposed on base parameters. Tikhonov regularization mitigates overfitting and promotes the estimation of reasonable base parameters. Use of a large number of base parameters enables the inversion process to be receptive to the information content of the calibration data, including aspects pertaining to small-scale parameter variations. Because the number of super parameters sustainable by the calibration data may be far less than the number of base parameters used to define the original problem, the computational burden for solution of the inverse problem is reduced. The hybrid methodology is described and applied to a simple synthetic groundwater flow model. It is then applied to a real-world groundwater flow and contaminant transport model. The approach and programs described are applicable to a range of modeling disciplines. Copyright 2005 by the American Geophysical Union.
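The super-parameter construction described in this abstract can be sketched in a few lines of linear algebra. The following is a minimal illustration only, not the PEST implementation; the Jacobian, its dimensions, and the truncation level k are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical base-parameter Jacobian: 20 observations, 50 base parameters.
J = rng.standard_normal((20, 50))

# Normal matrix of the base parameters.
JtJ = J.T @ J

# Eigendecomposition; eigh returns eigenvalues in ascending order.
eigvals, eigvecs = np.linalg.eigh(JtJ)

# Keep the k leading eigenvectors (largest eigenvalues) as the super-parameter basis.
k = 5
V = eigvecs[:, -k:]          # 50 x k basis spanning the best-informed directions

# Base parameters are expressed as p = V @ s, where s holds the k super parameters;
# the inverse problem is then solved for s instead of the 50 base parameters.
s = rng.standard_normal(k)
p = V @ s

# Columns of V are orthonormal, so super parameters are recovered by projection.
s_back = V.T @ p
print(np.allclose(s, s_back))  # True
```

Estimating 5 super parameters instead of 50 base parameters is what reduces the computational burden the abstract refers to.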


Environmental Modelling and Software | 2007

Parameter estimation and uncertainty analysis for a watershed model

Mark R. Gallagher; John Doherty

Where numerical models are employed as an aid to environmental management, the uncertainty associated with predictions made by such models must be assessed. A number of different methods are available to make such an assessment. This paper explores the use of three such methods, and compares their performance when used in conjunction with a lumped parameter model for surface water flow (HSPF) in a large watershed. Linear (or first-order) uncertainty analysis has the advantage that it can be implemented with virtually no computational burden. While the results of such an analysis can be extremely useful for assessing parameter uncertainty in a relative sense, and for ascertaining the degree of correlation between model parameters, its use in analyzing predictive uncertainty is often limited. Markov chain Monte Carlo (MCMC) methods are far more robust, and can produce reliable estimates of parameter and predictive uncertainty. They can also provide the modeler with valuable qualitative information on the shape of parameter and predictive probability distributions; these shapes can be quite complex, especially where local objective function optima lie within those parts of parameter space that are considered probable after calibration has been undertaken. Nonlinear calibration-constrained optimization can also provide good estimates of parameter and predictive uncertainty, even in situations where the objective function surface is complex. Furthermore, it can achieve these estimates using far fewer model runs than MCMC methods. However, it does not provide the same amount of qualitative information on the probability structure of parameter space as do MCMC methods, a shortcoming that can be partially rectified by combining its use with an efficient gradient-based search method specifically designed to locate different local optima.
All methods of parameter and predictive uncertainty analysis discussed herein are implemented using freely-available software. Hence similar studies, or extensions of the present study, can be easily undertaken in other modeling contexts by other modelers.
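The linear (first-order) analysis mentioned in this abstract reduces to propagating a posterior parameter covariance matrix through a prediction's sensitivities, which is why it carries virtually no computational burden beyond filling a Jacobian. A minimal sketch, with a hypothetical Jacobian, noise level, and prediction sensitivity vector:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical well-posed problem: 30 observations, 4 model parameters.
J = rng.standard_normal((30, 4))      # observation sensitivities (Jacobian)
sigma_obs = 0.1                        # assumed measurement noise standard deviation

# First-order posterior parameter covariance: C_post ~ (J^T J / sigma^2)^-1
C_post = np.linalg.inv(J.T @ J / sigma_obs**2)

# Sensitivity of a prediction to the parameters (hypothetical values).
y = np.array([0.5, -1.0, 0.2, 0.8])

# Linear estimate of predictive variance and its standard deviation.
var_pred = y @ C_post @ y
print(f"predictive std = {np.sqrt(var_pred):.4f}")
```

The off-diagonal terms of C_post are what the abstract refers to as the degree of correlation between model parameters.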


Water Resources Research | 2009

Calibration‐constrained Monte Carlo analysis of highly parameterized models using subspace techniques

Matthew J. Tonkin; John Doherty

We describe a subspace Monte Carlo (SSMC) technique that reduces the burden of calibration-constrained Monte Carlo when undertaken with highly parameterized models. When Monte Carlo methods are used to evaluate the uncertainty in model outputs, ensuring that parameter realizations reproduce the calibration data requires many model runs to condition each realization. In the new SSMC approach, the model is first calibrated using a subspace regularization method, ideally the hybrid Tikhonov-TSVD "superparameter" approach described by Tonkin and Doherty (2005). Sensitivities calculated with the calibrated model are used to define the calibration null-space, which is spanned by parameter combinations that have no effect on simulated equivalents to available observations. Next, a stochastic parameter generator is used to produce parameter realizations, and for each a difference is formed between the stochastic parameters and the calibrated parameters. This difference is projected onto the calibration null-space and added to the calibrated parameters. If the model is no longer calibrated, parameter combinations that span the calibration solution space are reestimated while retaining the null-space projected parameter differences as additive values. The recalibration can often be undertaken using existing sensitivities, so that conditioning requires only a small number of model runs. Using synthetic and real-world model applications we demonstrate that the SSMC approach is general (it is not limited to any particular model or any particular parameterization scheme) and that it can rapidly produce a large number of conditioned parameter sets.
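The core null-space projection step of the SSMC method can be illustrated with a linear model. This sketch uses a hypothetical random Jacobian and parameter sets; in practice the Jacobian comes from the calibrated model and the realizations from a geostatistical generator:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical Jacobian: 15 observations, 40 parameters (under-determined).
J = rng.standard_normal((15, 40))

# SVD of the Jacobian; the right singular vectors split parameter space into a
# solution space (informed by the data) and a calibration null space.
U, s, Vt = np.linalg.svd(J)
k = np.sum(s > 1e-8 * s[0])       # effective rank -> solution-space dimension
V_null = Vt[k:].T                 # 40 x (40 - k) null-space basis

p_cal = rng.standard_normal(40)   # calibrated parameter set (hypothetical)
p_stoch = rng.standard_normal(40) # stochastic realization from the prior

# Project the difference onto the null space and add it back to the calibrated
# parameters: simulated equivalents to the observations are unchanged.
p_cond = p_cal + V_null @ (V_null.T @ (p_stoch - p_cal))

print(np.allclose(J @ p_cond, J @ p_cal))  # True for this linear model
```

For a nonlinear model the projected realization only approximately honours the calibration data, which is why the abstract describes a cheap recalibration of the solution-space components.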


Ground Water | 2010

Quantifying Data Worth Toward Reducing Predictive Uncertainty

Alyssa M. Dausman; John Doherty; Christian D. Langevin; Michael C. Sukop

The present study demonstrates a methodology for optimization of environmental data acquisition. Based on the premise that the worth of data increases in proportion to its ability to reduce the uncertainty of key model predictions, the methodology can be used to compare the worth of different data types, gathered at different locations within study areas of arbitrary complexity. The method is applied to a hypothetical nonlinear, variable density numerical model of salt and heat transport. The relative utilities of temperature and concentration measurements at different locations within the model domain are assessed in terms of their ability to reduce the uncertainty associated with predictions of movement of the salt water interface in response to a decrease in fresh water recharge. In order to test the sensitivity of the method to nonlinear model behavior, analyses were repeated for multiple realizations of system properties. Rankings of observation worth were similar for all realizations, indicating robust performance of the methodology when employed in conjunction with a highly nonlinear model. The analysis showed that while concentration and temperature measurements can both aid in the prediction of interface movement, concentration measurements, especially when taken in proximity to the interface at locations where the interface is expected to move, are of greater worth than temperature measurements. Nevertheless, it was also demonstrated that pairs of temperature measurements, taken in strategic locations with respect to the interface, can also lead to more precise predictions of interface movement.
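In linear-Bayes terms, the worth of a candidate observation is the predictive variance it removes, which is the premise this abstract states. A minimal sketch under hypothetical sensitivities, prior covariance, and noise level (not the study's actual variable-density model):

```python
import numpy as np

rng = np.random.default_rng(3)

n_par = 6
C_prior = np.eye(n_par)                 # prior parameter covariance (hypothetical)
y = rng.standard_normal(n_par)          # prediction sensitivity vector

J_base = rng.standard_normal((10, n_par))   # sensitivities of existing data
j_cand = rng.standard_normal(n_par)         # sensitivity of one candidate observation
sigma = 0.05                                # assumed measurement noise std

def predictive_variance(J):
    """Linear-Bayes posterior predictive variance for sensitivity matrix J."""
    C_post = np.linalg.inv(np.linalg.inv(C_prior) + J.T @ J / sigma**2)
    return y @ C_post @ y

var_without = predictive_variance(J_base)
var_with = predictive_variance(np.vstack([J_base, j_cand]))

# Worth of the candidate: the predictive variance it removes. In the linear
# setting, adding data never increases predictive variance.
worth = var_without - var_with
print(f"variance reduction: {worth:.3e}")
```

Ranking candidate observations by this reduction, for each prediction of interest, is the comparison of data types and locations the methodology performs.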


Water Resources Research | 2007

Comparison of hydrologic calibration of HSPF using automatic and manual methods

Sang Min Kim; Brian L. Benham; Kevin M. Brannan; Rebecca W. Zeckoski; John Doherty

The automatic calibration software Parameter Estimation (PEST) was used in the hydrologic calibration of Hydrological Simulation Program-Fortran (HSPF), and the results were compared with a manual calibration assisted by the Expert System for the Calibration of HSPF (HSPEXP). In this study, multiobjective functions based on the HSPEXP model performance criteria were developed for use in PEST, which allowed for the comparison of the calibration results of the two methods. The calibrated results of both methods were compared in terms of HSPEXP model performance criteria, goodness-of-fit measures (R2, E, and RMSE), and base flow index. The automatic calibration results satisfied most of the HSPEXP model performance criteria and performed better with respect to R2, E, RMSE, and base flow index than the manual calibration results. The results of the comparison with the manual calibration suggest that the automatic method using PEST may be a suitable alternative to the manual method assisted by HSPEXP for calibration of hydrologic parameters for HSPF. However, further research on the weights used in the objective functions is necessary to provide guidance when applying PEST to surface water modeling.
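A PEST-style composite objective function is a weighted sum of squared residuals over observation groups, which is how multiple calibration criteria are combined into one quantity to minimize. The sketch below is illustrative only; the group names, weights, and values are invented, not the HSPEXP-derived criteria used in the study:

```python
import numpy as np

# Hypothetical observation groups mimicking a PEST-style weighted objective:
# each group gets its own weight so that different calibration criteria
# (e.g. total flow volume vs. low flows) contribute in balanced proportions.
groups = {
    "total_flow": {"obs": np.array([10.0, 12.0]), "sim": np.array([9.5, 12.4]), "w": 1.0},
    "low_flow":   {"obs": np.array([1.0, 0.8]),   "sim": np.array([1.1, 0.7]),  "w": 5.0},
}

def phi(groups):
    """Composite sum-of-squares objective: sum over groups of (w * residual)^2."""
    return sum(np.sum((g["w"] * (g["obs"] - g["sim"]))**2) for g in groups.values())

print(round(phi(groups), 4))  # 0.91
```

The closing sentence of the abstract concerns exactly the choice of the w values above: how heavily each criterion should be weighted is still an open question.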


Water Resources Research | 2007

Efficient nonlinear predictive error variance for highly parameterized models

Matthew J. Tonkin; John Doherty; Catherine Moore

Predictive error variance analysis attempts to determine how wrong predictions made by a calibrated model may be. Predictive error variance analysis is usually undertaken following calibration using a small number of parameters defined through a priori parsimony. In contrast, we introduce a method for investigating the potential error in predictions made by highly parameterized models calibrated using regularized inversion. Vecchia and Cooley (1987) describe a method of predictive error variance analysis that is constrained by calibration data. We extend this approach to include constraints on parameters that lie within the calibration null space. These constraints are determined by dividing parameter space into combinations of parameters for which estimates can be obtained and those for which they cannot. This enables the contribution to predictive error variance from parameterization simplifications required to solve the inverse problem to be quantified, in addition to the contribution from measurement noise. We also describe a novel technique that restricts the analysis to a strategically defined predictive solution subspace, enabling an approximate predictive error variance analysis to be completed efficiently. The method is illustrated using a synthetic and a real-world groundwater flow and transport model.
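The decomposition described in this abstract separates predictive error variance into a term from parameter combinations the data cannot inform (the null space) and a term from measurement noise propagated through the inverse. A schematic linear version using a truncated SVD, with all dimensions and standard deviations hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)

n_obs, n_par, k = 12, 30, 8        # k = truncation level (solution-space dim)
J = rng.standard_normal((n_obs, n_par))
y = rng.standard_normal(n_par)     # prediction sensitivity vector
sig_p, sig_e = 1.0, 0.1            # prior parameter and noise std (assumed)

U, s, Vt = np.linalg.svd(J, full_matrices=True)
V1 = Vt[:k].T                       # solution-space basis (estimable combinations)
V2 = Vt[k:].T                       # null-space basis (unestimable combinations)

# Null-space term: error from parameter detail the calibration cannot resolve.
term_null = sig_p**2 * np.sum((V2.T @ y)**2)

# Noise term: measurement noise propagated through the truncated pseudo-inverse.
G = V1 @ np.diag(1.0 / s[:k]) @ U[:, :k].T
term_noise = sig_e**2 * np.sum((G.T @ y)**2)

print(f"null-space term: {term_null:.3f}  noise term: {term_noise:.3f}")
```

Raising k shrinks the null-space term but inflates the noise term (small singular values amplify noise), so total error variance is minimized at an intermediate truncation level.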


Scientific Investigations Report | 2010

Approaches to highly parameterized inversion: A guide to using PEST for model-parameter and predictive-uncertainty analysis

John Doherty; Randall J. Hunt; Matthew J. Tonkin



Scientific Investigations Report | 2011

Approaches to highly parameterized inversion: Pilot-point theory, guidelines, and research directions

John Doherty; Michael N. Fienen; Randall J. Hunt



Water Resources Research | 2007

Parameter interdependence and uncertainty induced by lumping in a hydrologic model

Mark R. Gallagher; John Doherty

Throughout the world, watershed modeling is undertaken using lumped parameter hydrologic models that represent real-world processes in a manner that is at once abstract, but nevertheless relies on algorithms that reflect real-world processes and parameters that reflect real-world hydraulic properties. In most cases, values are assigned to the parameters of such models through calibration against flows at watershed outlets. One criterion by which the utility of the model and the success of the calibration process are judged is that realistic values are assigned to parameters through this process. This study employs regularization theory to examine the relationship between lumped parameters and corresponding real-world hydraulic properties. It demonstrates that any kind of parameter lumping or averaging can induce a substantial amount of "structural noise," which devices such as Box-Cox transformation of flows and autoregressive moving average (ARMA) modeling of residuals are unlikely to render homoscedastic and uncorrelated. Furthermore, values estimated for lumped parameters are unlikely to represent average values of the hydraulic properties after which they are named and are often contaminated to a greater or lesser degree by the values of hydraulic properties which they do not purport to represent at all. As a result, the question of how rigidly they should be bounded during the parameter estimation process is still an open one.
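The mismatch between a calibrated lumped parameter and the average of the underlying property can be shown with a toy two-zone example (all values below are invented for illustration):

```python
import numpy as np

# Hypothetical two-zone "watershed": outlet response depends on a hydraulic
# property in each zone, with different sensitivities a1 and a2.
a = np.array([3.0, 1.0])         # zone sensitivities
p_true = np.array([2.0, 6.0])    # true zone properties
q_obs = a @ p_true               # observed outlet response

# The lumped model uses a single property p for both zones: q = (a1 + a2) * p.
p_lumped = q_obs / a.sum()       # exact fit to the single observation

# The calibrated lumped value is sensitivity-weighted, not the spatial average.
print(p_lumped, p_true.mean())   # 3.0 vs 4.0
```

Because calibration weights each zone by its influence on the outlet, the lumped estimate is pulled toward the most influential zone's property, which is the sense in which lumped parameters do not represent average values of the properties after which they are named.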


Water Resources Research | 2014

Quantifying the predictive consequences of model error with linear subspace analysis

Jeremy T. White; John Doherty; Joseph D. Hughes

All computer models are simplified and imperfect simulators of complex natural systems. The discrepancy arising from simplification induces bias in model predictions, which may be amplified by the process of model calibration. This paper presents a new method to identify and quantify the predictive consequences of calibrating a simplified computer model. The method is based on linear theory, and it scales efficiently to the large numbers of parameters and observations characteristic of groundwater and petroleum reservoir models. The method is applied to a range of predictions made with a synthetic integrated surface-water/groundwater model with thousands of parameters. Several different observation processing strategies and parameterization/regularization approaches are examined in detail, including use of the Karhunen-Loeve parameter transformation. Predictive bias arising from model error is shown to be prediction specific and often invisible to the modeler. The amount of calibration-induced bias is influenced by several factors, including how expert knowledge is applied in the design of parameterization schemes, the number of parameters adjusted during calibration, how observations and model-generated counterparts are processed, and the level of fit with observations achieved through calibration. Failure to properly implement any of these factors in a prediction-specific manner may increase the potential for predictive bias in ways that are not visible to the calibration and uncertainty analysis process.
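Calibration-induced predictive bias can be illustrated with a deliberately simplified toy model: when a model omits a process, calibration makes the remaining parameter compensate, and a prediction outside the calibration conditions inherits the bias. This is a schematic analogue only, not the paper's linear subspace method; all values are hypothetical:

```python
import numpy as np

# "True" system has two parameters; the simplified model fixes the second at zero.
x_obs = np.array([1.0, 2.0, 3.0])              # conditions where data were collected
p1_true, p2_true = 2.0, 0.5
d_obs = p1_true * x_obs + p2_true * x_obs**2   # data generated by the true system

# Calibrate the simplified model d = p1 * x by least squares: p1 absorbs the
# omitted quadratic process, so it is biased away from its true value.
p1_cal = (x_obs @ d_obs) / (x_obs @ x_obs)

# A prediction outside the calibration range inherits (and amplifies) the bias.
x_pred = 10.0
pred_simplified = p1_cal * x_pred
pred_true = p1_true * x_pred + p2_true * x_pred**2

print(f"calibrated p1 = {p1_cal:.3f} (true {p1_true})")
print(f"prediction: {pred_simplified:.1f} vs true {pred_true:.1f}")
```

The bias here depends entirely on which prediction is made, echoing the abstract's point that predictive bias arising from model error is prediction specific and often invisible to the modeler.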

Collaboration


Dive into John Doherty's collaborations.

Top Co-Authors

Randall J. Hunt | United States Geological Survey
Mary C. Hill | United States Geological Survey
Michael N. Fienen | United States Geological Survey
David E. Welter | South Florida Water Management District
Edward R. Banta | United States Geological Survey