Publication


Featured research published by Mary C. Hill.


Water Resources Research | 2009

Sensitivity analysis, calibration, and testing of a distributed hydrological model using error-based weighting and one objective function

Laura Foglia; Mary C. Hill; Steffen W. Mehl; Paolo Burlando

We evaluate the utility of three interrelated means of using data to calibrate the fully distributed rainfall-runoff model TOPKAPI as applied to the Maggia Valley drainage area in Switzerland. The use of error-based weighting of observation and prior information data, local sensitivity analysis, and single-objective-function nonlinear regression provides quantitative evaluation of the sensitivity of the 35 model parameters to the data, identification of the data types most important to the calibration, and identification of correlations among parameters that contribute to nonuniqueness. Sensitivity analysis required only 71 model runs, and regression required about 50 model runs. The approach presented appears to be ideal for evaluation of models with long run times or as a preliminary step to more computationally demanding methods. The statistics used include composite scaled sensitivities, parameter correlation coefficients, leverage, Cook's D, and DFBETAS. Tests suggest that the predictive ability of the calibrated model is typical of hydrologic models.
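For readers unfamiliar with the statistics named above, composite scaled sensitivities (CSS) have a compact form. The sketch below (NumPy, with an illustrative Jacobian and weights, not code from the study) computes css_j as the root-mean-square of the dimensionless scaled sensitivities of all observations to parameter j:

```python
import numpy as np

def composite_scaled_sensitivities(jac, params, weights):
    """CSS in the Hill style: css_j = sqrt(mean_i((dy_i/db_j) * b_j * sqrt(w_i))^2).

    jac     -- (n_obs, n_params) Jacobian of simulated values w.r.t. parameters
    params  -- (n_params,) current parameter values b_j
    weights -- (n_obs,) observation weights w_i
    """
    # Dimensionless scaled sensitivities: scale each derivative by the
    # parameter value and the square root of the observation weight.
    dss = jac * params[None, :] * np.sqrt(weights)[:, None]
    # Aggregate over observations for each parameter.
    return np.sqrt((dss ** 2).mean(axis=0))
```

A large css_j relative to the others indicates that the available data are informative about parameter j; very small values flag parameters the data cannot support.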


Water Resources Research | 2004

Investigating the Macrodispersion Experiment (MADE) site in Columbus, Mississippi, using a three-dimensional inverse flow and transport model

Heidi Christiansen Barlebo; Mary C. Hill; Dan Rosbjerg

Flowmeter-measured hydraulic conductivities from the heterogeneous MADE site have been used predictively in advection-dispersion models. The resulting simulated concentrations failed to reproduce even major plume characteristics, and some have concluded that other mechanisms, such as dual porosity, are important. Here an alternative possibility is investigated: that the small-scale flowmeter measurements are too noisy, and possibly too biased, to use so directly in site-scale models, and that the hydraulic head and transport data are more suitable for site-scale characterization. Using a calibrated finite-element model of the site and a new framework to evaluate random and systematic model and measurement errors, the following conclusions are derived. (1) If variations in subsurface fluid velocities like those simulated in this work (0.1 and 2.0 m per day along parallel and reasonably close flow paths) exist, it is likely that classical advection-dispersion processes can explain the measured plume characteristics. (2) The flowmeter measurements are possibly systematically lower than site-scale values, whether considered individually or combined using common averaging methods, and they display variability that obscures abrupt changes in hydraulic conductivity that are well supported by changes in hydraulic gradients and are important to the simulation of transport.


Computers & Geosciences | 1999

UCODE, a computer code for universal inverse modeling

Eileen P. Poeter; Mary C. Hill

This article presents the US Geological Survey computer program UCODE, which was developed in collaboration with the US Army Corps of Engineers Waterways Experiment Station and the International Ground Water Modeling Center of the Colorado School of Mines. UCODE performs inverse modeling, posed as a parameter-estimation problem, using nonlinear regression. Any application model or set of models can be used; the only requirement is that they have numerical (ASCII or text-only) input and output files and that the numbers in these files have sufficient significant digits. Application models can include preprocessors and postprocessors as well as models related to the processes of interest (physical, chemical, and so on), making UCODE extremely powerful for model calibration. Estimated parameters can be defined flexibly with user-specified functions. Observations to be matched in the regression can be any quantity for which a simulated equivalent value can be produced; simulated equivalent values are calculated using values that appear in the application model output files and can be manipulated with additive and multiplicative functions, if necessary. Prior, or direct, information on estimated parameters also can be included in the regression. The nonlinear regression problem is solved by minimizing a weighted least-squares objective function with respect to the parameter values using a modified Gauss–Newton method. Sensitivities needed for the method are calculated approximately by forward or central differences, and problems and solutions related to this approximation are discussed. Statistics are calculated and printed for use in (1) diagnosing inadequate data or identifying parameters that probably cannot be estimated with the available data, (2) evaluating estimated parameter values, (3) evaluating the model representation of the actual processes, and (4) quantifying the uncertainty of model simulated values.
UCODE is intended for use on any computer operating system: it consists of algorithms programmed in Perl, a freeware language designed for text manipulation, and in Fortran 90, which efficiently performs numerical calculations.
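The modified Gauss–Newton iteration described in the abstract can be sketched compactly. The following is a minimal illustrative implementation, not the actual UCODE algorithm: it damps the normal equations (Marquardt-style, one common modification) and builds sensitivities by forward differences, as the abstract describes; the damping constant and step sizes are assumptions for the sketch.

```python
import numpy as np

def gauss_newton(simulate, b0, y_obs, w, n_iter=20, h=1e-6, damp=1e-3):
    """Weighted least-squares parameter estimation by damped Gauss-Newton.

    simulate -- callable mapping a parameter vector to simulated observations
    b0       -- starting parameter values
    y_obs, w -- observations and their weights
    """
    b = np.asarray(b0, float).copy()
    W = np.diag(w)
    for _ in range(n_iter):
        y0 = simulate(b)
        r = y_obs - y0                              # residuals
        J = np.empty((len(y_obs), len(b)))
        for j in range(len(b)):                     # forward-difference sensitivities
            bp = b.copy()
            bp[j] += h * max(1.0, abs(b[j]))
            J[:, j] = (simulate(bp) - y0) / (bp[j] - b[j])
        # Damped (Marquardt-style) normal equations for the update step.
        A = J.T @ W @ J + damp * np.eye(len(b))
        b += np.linalg.solve(A, J.T @ W @ r)
    return b
```

For a linear test model the iteration converges essentially in one step; for nonlinear models the damping keeps poorly conditioned steps under control.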


Water Resources Research | 2001

Predictive modeling of flow and transport in a two‐dimensional intermediate‐scale, heterogeneous porous medium

Gilbert R. Barth; Mary C. Hill; Tissa H. Illangasekare; Harihar Rajaram

As a first step toward understanding the role of sedimentary structures in flow and transport through porous media, this work deterministically examines how small-scale laboratory-measured values of hydraulic conductivity relate to in situ values of simple, artificial structures in an intermediate-scale (10 m long), two-dimensional, heterogeneous, laboratory experiment. Results were judged based on how well simulations using measured values of hydraulic conductivity matched measured hydraulic heads, net flow, and transport through the tank. Discrepancies were investigated using sensitivity analysis and nonlinear regression estimates of the in situ hydraulic conductivity that produce the best fit to measured hydraulic heads and net flow. Permeameter and column experiments produced laboratory measurements of hydraulic conductivity for each of the sands used in the intermediate-scale experiments. Despite explicit numerical representation of the heterogeneity, the laboratory-measured values underestimated net flow by 12–14% and were distinctly smaller than the regression-estimated values. The significance of differences in measured hydraulic conductivity values was investigated by comparing the variability of transport predictions using the different measurement methods to that produced by different realizations of the heterogeneous distribution. Results indicate that the variations in measured hydraulic conductivity were more important to transport than variations between realizations of the heterogeneous distribution of hydraulic conductivity.


Water Resources Research | 2012

Analysis of regression confidence intervals and Bayesian credible intervals for uncertainty quantification

Dan Lu; Ming Ye; Mary C. Hill

Confidence intervals based on classical regression theories augmented to include prior information and credible intervals based on Bayesian theories are conceptually different ways to quantify parametric and predictive uncertainties. Because both confidence and credible intervals are used in environmental modeling, we seek to understand their differences and similarities. This is of interest in part because calculating confidence intervals typically requires tens to thousands of model runs, while Bayesian credible intervals typically require tens of thousands to millions of model runs. Given multi-Gaussian distributed observation errors, our theoretical analysis shows that, for linear or linearized-nonlinear models, confidence and credible intervals are always numerically identical when consistent prior information is used. For nonlinear models, nonlinear confidence and credible intervals can be numerically identical if parameter confidence regions defined using the approximate likelihood method and parameter credible regions estimated using Markov chain Monte Carlo realizations are numerically identical and predictions are a smooth, monotonic function of the parameters. Both occur if intrinsic model nonlinearity is small. While the conditions of Gaussian errors and small intrinsic model nonlinearity are violated by many environmental models, heuristic tests using analytical and numerical models suggest that linear and nonlinear confidence intervals can be useful approximations of uncertainty even under significantly nonideal conditions. In the context of epistemic model error for a complex synthetic nonlinear groundwater problem, the linear and nonlinear confidence and credible intervals for individual models performed similarly enough to indicate that the computationally frugal confidence intervals can be useful in many circumstances. Experiences with these groundwater models are expected to be broadly applicable to many environmental models.
We suggest that for environmental problems with lengthy execution times that make credible intervals inconvenient or prohibitive, confidence intervals can provide important insight. During model development when frequent calculation of uncertainty intervals is important to understanding the consequences of various model construction alternatives and data collection strategies, strategic use of both confidence and credible intervals can be critical.
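At the computationally frugal end of the spectrum discussed above, linear individual confidence intervals need only one fit and the parameter variance-covariance matrix. Below is a minimal sketch for a weighted linear model (illustrative only; z = 1.96 is the large-sample normal quantile, used here in place of the t quantile a small-sample analysis would require):

```python
import numpy as np

def linear_confidence_intervals(X, y, w, z=1.96):
    """Weighted least-squares fit plus approximate 95% individual intervals.

    X -- (n, p) design matrix; y -- (n,) observations; w -- (n,) weights.
    Returns the estimates and an array of (lower, upper) interval bounds.
    """
    W = np.diag(w)
    XtWX = X.T @ W @ X
    b = np.linalg.solve(XtWX, X.T @ W @ y)          # weighted least-squares estimate
    r = y - X @ b
    s2 = (r @ W @ r) / (len(y) - len(b))            # error variance estimate
    se = np.sqrt(s2 * np.diag(np.linalg.inv(XtWX))) # parameter standard errors
    return b, np.column_stack([b - z * se, b + z * se])
```

The contrast with the credible-interval route is the run count: this computation reuses the sensitivities already available from calibration, while MCMC sampling of the posterior requires orders of magnitude more model runs.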


Ground Water | 2016

Practical Use of Computationally Frugal Model Analysis Methods

Mary C. Hill; Dmitri Kavetski; Martyn P. Clark; Ming Ye; Mazdak Arabi; Dan Lu; Laura Foglia; Steffen Mehl

Three challenges compromise the utility of mathematical models of groundwater and other environmental systems: (1) a dizzying array of model analysis methods and metrics makes it difficult to compare evaluations of model adequacy, sensitivity, and uncertainty; (2) the high computational demands of many popular model analysis methods (requiring 1000s, 10,000s, or more model runs) make them difficult to apply to complex models; and (3) many models are plagued by unrealistic nonlinearities arising from the numerical model formulation and implementation. This study proposes a strategy to address these challenges through a careful combination of model analysis and implementation methods. In this strategy, computationally frugal model analysis methods (often requiring a few dozen parallelizable model runs) play a major role, and computationally demanding methods are used for problems where (relatively) inexpensive diagnostics suggest the frugal methods are unreliable. We also argue in favor of detecting and, where possible, eliminating unrealistic model nonlinearities; this increases the realism of the model itself and facilitates the application of frugal methods. Literature examples are used to demonstrate the use of frugal methods and associated diagnostics. We suggest that the strategy proposed in this paper would allow the environmental sciences community to achieve greater transparency and falsifiability of environmental models, and to obtain greater scientific insight from ongoing and future modeling efforts.


Environmental Modelling and Software | 2014

A computer program for uncertainty analysis integrating regression and Bayesian methods

Dan Lu; Ming Ye; Mary C. Hill; Eileen P. Poeter; Gary P. Curtis

This work develops a new functionality in UCODE_2014 to evaluate Bayesian credible intervals using the Markov Chain Monte Carlo (MCMC) method. The MCMC capability in UCODE_2014 is based on the FORTRAN version of the differential evolution adaptive Metropolis (DREAM) algorithm of Vrugt et al. (2009), which estimates the posterior probability density function of model parameters in high-dimensional and multimodal sampling problems. The UCODE MCMC capability provides eleven prior probability distributions and three ways to initialize the sampling process. It evaluates parametric and predictive uncertainties and it has parallel computing capability based on multiple chains to accelerate the sampling process. This paper tests and demonstrates the MCMC capability using a 10-dimensional multimodal mathematical function, a 100-dimensional Gaussian function, and a groundwater reactive transport model. The use of the MCMC capability is made straightforward and flexible by adopting the JUPITER API protocol. With the new MCMC capability, UCODE_2014 can be used to calculate three types of uncertainty intervals, which all can account for prior information: (1) linear confidence intervals which require linearity and Gaussian error assumptions and typically 10s-100s of highly parallelizable model runs after optimization, (2) nonlinear confidence intervals which require a smooth objective function surface and Gaussian observation error assumptions and typically 100s-1,000s of partially parallelizable model runs after optimization, and (3) MCMC Bayesian credible intervals which require few assumptions and commonly 10,000s-100,000s or more partially parallelizable model runs. Ready access allows users to select methods best suited to their work, and to compare methods in many circumstances.
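For contrast with the interval methods above, the Bayesian credible-interval idea can be illustrated with a plain random-walk Metropolis sampler. This is a deliberately simple stand-in for the DREAM algorithm that UCODE_2014 actually uses (no adaptive differential-evolution proposals, no multiple chains); only the accept/reject logic and the percentile-based interval carry over. All tuning values are assumptions for the sketch.

```python
import numpy as np

def metropolis_credible_interval(log_post, b0, step, n=20000, burn=5000, seed=0):
    """Random-walk Metropolis sampling and a 95% credible interval.

    log_post -- log posterior density (up to a constant)
    b0, step -- starting point and proposal standard deviation
    Returns a (2, n_params) array of [2.5th, 97.5th] percentiles.
    """
    rng = np.random.default_rng(seed)
    b = np.array(b0, float)
    lp = log_post(b)
    samples = []
    for _ in range(n):
        cand = b + rng.normal(0.0, step, size=b.shape)  # symmetric proposal
        lp_c = log_post(cand)
        if np.log(rng.uniform()) < lp_c - lp:           # Metropolis accept/reject
            b, lp = cand, lp_c
        samples.append(b.copy())
    s = np.array(samples[burn:])                        # discard burn-in
    return np.percentile(s, [2.5, 97.5], axis=0)
```

Even this toy sampler makes the abstract's run-count point concrete: each of the 20,000 iterations costs one model evaluation, versus the tens to thousands of runs the regression-based intervals need.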


Computers & Geosciences | 2008

Building model analysis applications with the Joint Universal Parameter IdenTification and Evaluation of Reliability (JUPITER) API

Edward R. Banta; Mary C. Hill; Eileen P. Poeter; John Doherty; Justin E. Babendreier

The open-source, public domain JUPITER (Joint Universal Parameter IdenTification and Evaluation of Reliability) API (Application Programming Interface) provides conventions and Fortran-90 modules to develop applications (computer programs) for analyzing process models. The input and output conventions allow application users to access various applications and the analysis methods they embody with a minimum of time and effort. Process models simulate, for example, physical, chemical, and (or) biological systems of interest using phenomenological, theoretical, or heuristic approaches. The types of model analyses supported by the JUPITER API include, but are not limited to, sensitivity analysis, data needs assessment, calibration, uncertainty analysis, model discrimination, and optimization. The advantages provided by the JUPITER API for users and programmers allow for rapid programming and testing of new ideas. Application-specific coding can be in languages other than the Fortran-90 of the API. This article briefly describes the capabilities and utility of the JUPITER API, lists existing applications, and uses UCODE_2005 as an example.


Ground Water | 2015

Simulation of Water‐Table Aquifers Using Specified Saturated Thickness

Rodney A. Sheets; Mary C. Hill; Henk Haitjema; Alden M. Provost; John P. Masterson

Simulating groundwater flow in a water-table (unconfined) aquifer can be difficult because the saturated thickness available for flow depends on model-calculated hydraulic heads. It is often possible to realize substantial time savings and still obtain accurate head and flow solutions by specifying an approximate saturated thickness a priori, thus linearizing this aspect of the model. This specified-thickness approximation often relies on the use of the confined option in numerical models, which has led to confusion and criticism of the method. This article reviews the theoretical basis for the specified-thickness approximation, derives an error analysis for relatively ideal problems, and illustrates the utility of the approximation with a complex test problem. In the transient version of our complex test problem, the specified-thickness approximation produced maximum errors in computed drawdown of about 4% of initial aquifer saturated thickness even when maximum drawdowns were nearly 20% of initial saturated thickness. In the final steady-state version, the approximation produced maximum errors in computed drawdown of about 20% of initial aquifer saturated thickness (mean errors of about 5%) when maximum drawdowns were about 35% of initial saturated thickness. In early phases of model development, such as during initial model calibration efforts, the specified-thickness approximation can be a very effective tool to facilitate convergence. The reduced execution time and increased stability obtained through the approximation can be especially useful when many model runs are required, such as during inverse model calibration, sensitivity and uncertainty analyses, multimodel analysis, and development of optimal resource management scenarios.
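The appeal of the approximation is easy to see in an idealized case (hypothetical numbers, not from the article): for one-dimensional steady Dupuit flow between two fixed heads, specifying the mean saturated thickness reproduces the unconfined discharge exactly, since K(h1^2 - h2^2)/(2L) = K * ((h1 + h2)/2) * (h1 - h2)/L.

```python
# Hypothetical 1D example: unconfined (Dupuit) discharge vs. the
# specified-thickness ("confined") approximation between two fixed heads.
K, L = 10.0, 100.0        # hydraulic conductivity [m/d], flow length [m]
h1, h2 = 20.0, 15.0       # boundary heads above the aquifer base [m]

q_unconfined = K * (h1**2 - h2**2) / (2 * L)   # Dupuit discharge per unit width
b_mean = (h1 + h2) / 2                         # specified (mean) saturated thickness
q_specified = K * b_mean * (h1 - h2) / L       # linearized, confined-style discharge

# With the mean thickness specified, the two discharges agree exactly;
# errors in less ideal problems come from how well the specified
# thickness approximates the actual saturated thickness field.
```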


Hydrogeology Journal | 2016

Parameterization, sensitivity analysis, and inversion: an investigation using groundwater modeling of the surface-mined Tivoli-Guidonia basin (Metropolitan City of Rome, Italy)

Francesco La Vigna; Mary C. Hill; Rudy Rossetto; Roberto Mazza

With respect to model parameterization and sensitivity analysis, this work uses a practical example to suggest that methods that start with simple models and use computationally frugal model analysis methods remain valuable in any toolbox of model development methods. In this work, groundwater model calibration starts with a simple parameterization that evolves into a moderately complex model. The model is developed for a water management study of the Tivoli-Guidonia basin (Rome, Italy) where surface mining has been conducted in conjunction with substantial dewatering. The approach to model development used in this work employs repeated analysis using sensitivity and inverse methods, including use of a new observation-stacked parameter importance graph. The methods are highly parallelizable and require few model runs, which make the repeated analyses and attendant insights possible. The success of a model development design can be measured by insights attained and demonstrated model accuracy relevant to predictions. Example insights were obtained: (1) A long-held belief that, except for a few distinct fractures, the travertine is homogeneous was found to be inadequate, and (2) The dewatering pumping rate is more critical to model accuracy than expected. The latter insight motivated additional data collection and improved pumpage estimates. Validation tests using three other recharge and pumpage conditions suggest good accuracy for the predictions considered. 
The model was used to evaluate management scenarios and showed that similar dewatering results could be achieved using 20% less pumped water, but this would require installing newly positioned wells and cooperation between mine owners.

Collaboration


Dive into Mary C. Hill's collaboration.

Top Co-Authors

Ming Ye, Florida State University
Dan Lu, Oak Ridge National Laboratory
Chunmiao Zheng, University of Science and Technology
Dan Rosbjerg, Technical University of Denmark
Heidi Christiansen Barlebo, Geological Survey of Denmark and Greenland
John Doherty, University of Queensland
O. Rakovec, Helmholtz Centre for Environmental Research - UFZ
Laura Foglia, University of California