
Publications


Featured research published by William L. Oberkampf.


Progress in Aerospace Sciences | 2002

Verification and validation in computational fluid dynamics

William L. Oberkampf; Timothy G. Trucano

Verification and validation (V&V) are the primary means to assess accuracy and reliability in computational simulations. This paper presents an extensive review of the literature in V&V in computational fluid dynamics (CFD), discusses methods and procedures for assessing V&V, and develops a number of extensions to existing ideas. The review of the development of V&V terminology and methodology points out the contributions from members of the operations research, statistics, and CFD communities. Fundamental issues in V&V are addressed, such as code verification versus solution verification, model validation versus solution validation, the distinction between error and uncertainty, conceptual sources of error and uncertainty, and the relationship between validation and prediction. The fundamental strategy of verification is the identification and quantification of errors in the computational model and its solution. In verification activities, the accuracy of a computational solution is primarily measured relative to two types of highly accurate solutions: analytical solutions and highly accurate numerical solutions. Methods for determining the accuracy of numerical solutions are presented and the importance of software testing during verification activities is emphasized. The fundamental strategy of validation is to assess how accurately the computational results compare with the experimental data, with quantified error and uncertainty estimates for both. This strategy employs a hierarchical methodology that segregates and simplifies the physical and coupling phenomena involved in the complex engineering system of interest. A hypersonic cruise missile is used as an example of how this hierarchical structure is formulated. The discussion of validation assessment also encompasses a number of other important topics. A set of guidelines is proposed for designing and conducting validation experiments, supported by an explanation of how validation experiments are different from traditional experiments and testing. A description is given of a relatively new procedure for estimating experimental uncertainty that has proven more effective at estimating random and correlated bias errors in wind-tunnel experiments than traditional methods. Consistent with the authors’ contention that nondeterministic simulations are needed in many validation comparisons, a three-step statistical approach is offered for incorporating experimental uncertainties into the computational analysis. The discussion of validation assessment ends with the topic of validation metrics, where two sample problems are used to demonstrate how such metrics should be constructed. In the spirit of advancing the state of the art in V&V, the paper concludes with recommendations of topics for future research and with suggestions for needed changes in the implementation of V&V in production and commercial software.
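
The verification strategy summarized above rests on quantifying discretization error against highly accurate reference solutions. As an illustration only, and not an excerpt from the paper, the sketch below estimates an observed order of accuracy and a Richardson-extrapolated value from three hypothetical solutions on systematically refined grids; the function names and numerical values are assumptions.

```python
import math

def observed_order(f_fine, f_medium, f_coarse, r):
    """Observed order of accuracy from three grid solutions with constant refinement ratio r."""
    return math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)

def richardson_extrapolate(f_fine, f_medium, r, p):
    """Estimate of the grid-converged value from the two finest solutions."""
    return f_fine + (f_fine - f_medium) / (r**p - 1.0)

# Hypothetical values of a scalar output on coarse, medium, and fine grids (ratio r = 2).
f_coarse, f_medium, f_fine = 0.03125, 0.02672, 0.02564
r = 2.0

p = observed_order(f_fine, f_medium, f_coarse, r)
f_estimate = richardson_extrapolate(f_fine, f_medium, r, p)
print(f"observed order of accuracy: {p:.2f}")
print(f"Richardson-extrapolated value: {f_estimate:.5f}")
```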


Applied Mechanics Reviews | 2004

Verification, Validation, and Predictive Capability in Computational Engineering and Physics

William L. Oberkampf; Timothy G. Trucano; Charles Hirsch

Developers of computer codes, analysts who use the codes, and decision makers who rely on the results of the analyses face a critical question: How should confidence in modeling and simulation be critically assessed? Verification and validation (V&V) of computational simulations are the primary methods for building and quantifying this confidence. Briefly, verification is the assessment of the accuracy of the solution to a computational model. Validation is the assessment of the accuracy of a computational simulation by comparison with experimental data. In verification, the relationship of the simulation to the real world is not an issue. In validation, the relationship between computation and the real world, i.e., experimental data, is the issue.


Reliability Engineering & System Safety | 2004

Challenge problems: uncertainty in system response given uncertain parameters

William L. Oberkampf; Jon C. Helton; Cliff Joslyn; Steven F. Wojtkiewicz; Scott Ferson

The risk assessment community has begun to make a clear distinction between aleatory and epistemic uncertainty in theory and in practice. Aleatory uncertainty is also referred to in the literature as variability, irreducible uncertainty, inherent uncertainty, and stochastic uncertainty. Epistemic uncertainty is also termed reducible uncertainty, subjective uncertainty, and state-of-knowledge uncertainty. Methods to efficiently represent, aggregate, and propagate different types of uncertainty through computational models are clearly of vital importance. The most widely known and developed methods are available within the mathematics of probability theory, whether frequentist or subjectivist. Newer mathematical approaches, which extend or otherwise depart from probability theory, are also available, and are sometimes referred to as generalized information theory (GIT). For example, possibility theory, fuzzy set theory, and evidence theory are three components of GIT. To try to develop a better understanding of the relative advantages and disadvantages of traditional and newer methods and encourage a dialog between the risk assessment, reliability engineering, and GIT communities, a workshop was held. To focus discussion and debate at the workshop, a set of prototype problems, generally referred to as challenge problems, was constructed. The challenge problems concentrate on the representation, aggregation, and propagation of epistemic uncertainty and mixtures of epistemic and aleatory uncertainty through two simple model systems. This paper describes the challenge problems and gives numerical values for the different input parameters so that results from different investigators can be directly compared.
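
The sketch below illustrates, in miniature, the kind of mixed-uncertainty propagation the challenge problems target: the aleatory input is sampled while the epistemic input is carried as an interval, giving an interval-valued probability for an outcome of interest. The model y = a + b, the distribution, the interval, and the threshold are illustrative assumptions, not the paper's specifications.

```python
import random

random.seed(0)

def model(a, b):
    """Toy system response; monotone in both inputs."""
    return a + b

N = 10_000
a_samples = [random.gauss(10.0, 1.0) for _ in range(N)]  # aleatory input: sampled
b_interval = (2.0, 5.0)                                   # epistemic input: interval only
threshold = 14.0                                          # outcome of interest: y > threshold

# Because the model is monotone in b, sweeping the interval endpoints bounds the result.
exceedance = []
for b in b_interval:
    p = sum(1 for a in a_samples if model(a, b) > threshold) / N
    exceedance.append(p)

print(f"P(y > {threshold}) lies in [{min(exceedance):.3f}, {max(exceedance):.3f}]")
```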


Reliability Engineering & System Safety | 2002

Error and uncertainty in modeling and simulation

William L. Oberkampf; Sharon M. DeLand; Brian Milne Rutherford; Kathleen V. Diegert; Kenneth F. Alvin

This article develops a general framework for identifying error and uncertainty in computational simulations that deal with the numerical solution of a set of partial differential equations (PDEs). A comprehensive, new view of the general phases of modeling and simulation is proposed, consisting of the following phases: conceptual modeling of the physical system, mathematical modeling of the conceptual model, discretization and algorithm selection for the mathematical model, computer programming of the discrete model, numerical solution of the computer program model, and representation of the numerical solution. Our view incorporates the modeling and simulation phases that are recognized in the systems engineering and operations research communities, but it adds phases that are specific to the numerical solution of PDEs. In each of these phases, general sources of uncertainty, both aleatory and epistemic, and error are identified. Our general framework is applicable to any numerical discretization procedure for solving ODEs or PDEs. To demonstrate this framework, we describe a system-level example: the flight of an unguided, rocket-boosted, aircraft-launched missile. This example is discussed in detail at each of the six phases of modeling and simulation. Two alternative models of the flight dynamics are considered, along with aleatory uncertainty of the initial mass of the missile and epistemic uncertainty in the thrust of the rocket motor. We also investigate the interaction of modeling uncertainties and numerical integration error in the solution of the ordinary differential equations for the flight dynamics.
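
As a toy illustration of the interaction noted in the final sentence, and not the paper's missile example, the sketch below propagates an assumed aleatory mass and an assumed epistemic thrust interval through a one-dimensional flight model integrated with explicit Euler at two step sizes, so that parameter uncertainty and numerical integration error appear side by side.

```python
import random

random.seed(1)

def final_velocity(mass, thrust, dt, t_end=10.0, drag=0.02):
    """Explicit Euler integration of dv/dt = (thrust - drag*v**2)/mass (toy model)."""
    v, t = 0.0, 0.0
    while t < t_end:
        v += dt * (thrust - drag * v * v) / mass
        t += dt
    return v

masses = [random.gauss(100.0, 5.0) for _ in range(200)]  # aleatory: sampled initial mass
thrust_interval = (900.0, 1100.0)                         # epistemic: thrust known only as an interval

# Repeating the study at two step sizes exposes integration error alongside the spread
# caused by parameter uncertainty.
for dt in (0.5, 0.05):
    results = [final_velocity(m, T, dt) for m in masses for T in thrust_interval]
    print(f"dt = {dt}: final velocity spans [{min(results):.1f}, {max(results):.1f}] m/s")
```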


Journal of Computational Physics | 2006

Measures of agreement between computation and experiment: validation metrics

William L. Oberkampf; Matthew F. Barone

With the increasing role of computational modeling in engineering design, performance estimation, and safety assessment, improved methods are needed for comparing computational results and experimental measurements. Traditional methods of graphically comparing computational and experimental results, though valuable, are essentially qualitative. Computable measures are needed that can quantitatively compare computational and experimental results over a range of input, or control, variables to sharpen assessment of computational accuracy. This type of measure has been recently referred to as a validation metric. We discuss various features that we believe should be incorporated in a validation metric, as well as features that we believe should be excluded. We develop a new validation metric that is based on the statistical concept of confidence intervals. Using this fundamental concept, we construct two specific metrics: one that requires interpolation of experimental data and one that requires regression (curve fitting) of experimental data. We apply the metrics to three example problems: thermal decomposition of a polyurethane foam, a turbulent buoyant plume of helium, and compressibility effects on the growth rate of a turbulent free-shear layer. We discuss how the present metrics are easily interpretable for assessing computational model accuracy, as well as the impact of experimental measurement uncertainty on the accuracy assessment.
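
In that spirit, the sketch below compares a single computed value against replicated measurements using a Student-t confidence interval on the estimated error; it is a simplified stand-in for the interpolation- and regression-based metrics developed in the paper, and all data values are hypothetical.

```python
import statistics

y_computed = 305.0                                  # model prediction at one input setting
y_measured = [298.2, 301.5, 299.8, 302.1, 300.4]    # replicate measurements (hypothetical)

n = len(y_measured)
mean_exp = statistics.mean(y_measured)
sem = statistics.stdev(y_measured) / n**0.5         # standard error of the experimental mean
t_crit = 2.776                                      # Student-t, 95% two-sided, n - 1 = 4 dof

estimated_error = y_computed - mean_exp
half_width = t_crit * sem

print(f"estimated model error: {estimated_error:+.2f}")
print(f"with ~95% confidence the true error lies in "
      f"[{estimated_error - half_width:.2f}, {estimated_error + half_width:.2f}]")
```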


Reliability Engineering & System Safety | 2004

An exploration of alternative approaches to the representation of uncertainty in model predictions

Jon C. Helton; Jay D. Johnson; William L. Oberkampf

Several simple test problems are used to explore the following approaches to the representation of the uncertainty in model predictions that derives from uncertainty in model inputs: probability theory, evidence theory, possibility theory, and interval analysis. Each of the test problems has rather diffuse characterizations of the uncertainty in model inputs obtained from one or more equally credible sources. These given uncertainty characterizations are translated into the mathematical structure associated with each of the indicated approaches to the representation of uncertainty and then propagated through the model with Monte Carlo techniques to obtain the corresponding representation of the uncertainty in one or more model predictions. The different approaches to the representation of uncertainty can lead to very different appearing representations of the uncertainty in model predictions even though the starting information is exactly the same for each approach. To avoid misunderstandings and, potentially, bad decisions, these representations must be interpreted in the context of the theory/procedure from which they derive.
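
The sketch below illustrates the paper's central observation with a deliberately trivial model: the same stated information about an input, an interval, looks very different when propagated as a pure interval than when an additional uniform-distribution assumption is imposed. The model and numbers are assumptions, not the paper's test problems.

```python
import random

random.seed(2)

def model(x):
    """Toy model, monotone on the input range."""
    return x * x

lo, hi = 1.0, 3.0          # the only stated information about x: an interval
threshold = 6.0

# Interval analysis: only bounds on y are claimed (endpoints suffice for a monotone model).
y_bounds = (model(lo), model(hi))

# Probability theory with an added uniform-distribution assumption on the same interval.
samples = [model(random.uniform(lo, hi)) for _ in range(100_000)]
p_exceed = sum(1 for y in samples if y > threshold) / len(samples)

print(f"interval analysis: y in [{y_bounds[0]:.1f}, {y_bounds[1]:.1f}]; y > {threshold} is merely possible")
print(f"uniform assumption: P(y > {threshold}) = {p_exceed:.3f}")
```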


Reliability Engineering & System Safety | 2006

Calibration, validation, and sensitivity analysis: What's what

Timothy G. Trucano; Laura Painton Swiler; Takeru Igusa; William L. Oberkampf; Martin Pilch

One very simple interpretation of calibration is to adjust a set of parameters associated with a computational science and engineering code so that the model agreement is maximized with respect to a set of experimental data. One very simple interpretation of validation is to quantify our belief in the predictive capability of a computational code through comparison with a set of experimental data. Uncertainty in both the data and the code is important and must be mathematically understood to correctly perform both calibration and validation. Sensitivity analysis, being an important methodology in uncertainty analysis, is thus important to both calibration and validation. In this paper, we intend to clarify the language just used and express some opinions on the associated issues. We will endeavor to identify some technical challenges that must be resolved for successful validation of a predictive modeling capability. One of these challenges is a formal description of a “model discrepancy” term. Another challenge revolves around the general adaptation of abstract learning theory as a formalism that potentially encompasses both calibration and validation in the face of model uncertainty.
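
The sketch below contrasts the two activities as defined above using a one-parameter linear model: the parameter is adjusted against one data set (calibration) and the resulting predictions are then compared with held-out data (validation). The model form and all data values are assumptions.

```python
def model(x, k):
    """Toy computational model with a single adjustable parameter k."""
    return k * x

calibration_data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (input, measurement) pairs
validation_data = [(4.0, 8.5), (5.0, 10.3)]              # measurements withheld from calibration

# Calibration: closed-form least-squares estimate of k for the model y = k*x.
k_hat = sum(x * y for x, y in calibration_data) / sum(x * x for x, _ in calibration_data)

# Validation: compare predictions with the held-out data without further adjustment of k.
errors = [model(x, k_hat) - y for x, y in validation_data]

print(f"calibrated parameter k = {k_hat:.3f}")
print("validation errors:", [f"{e:+.2f}" for e in errors])
```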


19th AIAA Applied Aerodynamics Conference | 2001

Mathematical representation of uncertainty

William L. Oberkampf; Jon C. Helton; Kari Sentz

As widely done in the risk assessment community, a distinction is made between aleatory (random) and epistemic (subjective) uncertainty in the modeling and simulation process. The nature of epistemic uncertainty is discussed, including (1) occurrence in parameters contained in mathematical models of a system and its environment, (2) limited knowledge or understanding of a physical process or interactions of processes in a system, and (3) limited knowledge for the estimation of the likelihood of event scenarios of a system. To clarify the options available for representation of epistemic uncertainty, an overview is presented of a hierarchy of theories of uncertainty. Modern theories of uncertainty can represent much weaker statements of knowledge and more diverse types of uncertainty than traditional probability theory. A promising new theory, evidence (Dempster-Shafer) theory, is discussed and applied to a simple system given by an algebraic equation with two uncertain parameters. Multiple sources of information are provided for each parameter, but each source only provides an interval value for each parameter. The uncertainty in the system response is estimated using probability theory and evidence theory. The resultant solutions are compared with regard to their assessment of the likelihood that the system response exceeds a specified failure level. In this example, a traditional application of probability theory results in a significantly lower estimate of risk of failure as compared to evidence theory. Strengths and weaknesses of evidence theory are discussed, and several important open issues are identified that must be addressed before evidence theory can be used successfully in engineering applications.
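
The sketch below works through an evidence-theory calculation of the kind described, for an assumed response y = a + b with interval-valued information about each parameter from two sources; belief and plausibility bound the likelihood of exceeding a failure level. The intervals, masses, threshold, and independence assumption are illustrative, not the paper's values.

```python
# Each source supplies only an interval for its parameter; equal masses are assigned here.
a_focal = [((1.0, 2.0), 0.5), ((2.0, 4.0), 0.5)]   # (interval, mass) pairs for parameter a
b_focal = [((5.0, 6.0), 0.5), ((8.0, 9.0), 0.5)]   # (interval, mass) pairs for parameter b
y_fail = 9.5                                        # failure level: y = a + b > y_fail

belief = 0.0
plausibility = 0.0
for (a_lo, a_hi), m_a in a_focal:
    for (b_lo, b_hi), m_b in b_focal:
        m = m_a * m_b                               # joint mass (independence assumed)
        y_lo, y_hi = a_lo + b_lo, a_hi + b_hi       # response interval on this joint focal element
        if y_lo > y_fail:                           # element lies entirely in the failure region
            belief += m
        if y_hi > y_fail:                           # element at least touches the failure region
            plausibility += m

print(f"Bel(y > {y_fail}) = {belief:.2f}, Pl(y > {y_fail}) = {plausibility:.2f}")
```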


Archive | 2007

Experimental Uncertainty Estimation and Statistics for Data Having Interval Uncertainty

Vladik Kreinovich; William L. Oberkampf; Lev R. Ginzburg; Scott Ferson; Janos Hajagos

This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
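
The sketch below shows two of the simplest interval statistics discussed in the report: bounds on the sample mean and median of interval-valued data, obtained by evaluating each statistic at all lower and all upper endpoints. The data are hypothetical, and bounding the variance and other statistics requires the more careful algorithms the report describes.

```python
import statistics

# Hypothetical measurements reported only as (lower, upper) intervals.
data = [(2.1, 2.4), (1.8, 2.6), (2.3, 2.3), (1.9, 2.2), (2.0, 2.9)]

lows = [lo for lo, _ in data]
highs = [hi for _, hi in data]

# The mean and the median are nondecreasing in each observation, so their extreme values
# over the intervals are attained at the all-lower and all-upper endpoint configurations.
mean_bounds = (statistics.mean(lows), statistics.mean(highs))
median_bounds = (statistics.median(lows), statistics.median(highs))

print(f"sample mean lies in [{mean_bounds[0]:.2f}, {mean_bounds[1]:.2f}]")
print(f"sample median lies in [{median_bounds[0]:.2f}, {median_bounds[1]:.2f}]")
```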


Reliability Engineering & System Safety | 2004

Summary from the epistemic uncertainty workshop: consensus amid diversity

Scott Ferson; Cliff Joslyn; Jon C. Helton; William L. Oberkampf; Kari Sentz

The ‘Epistemic Uncertainty Workshop’ sponsored by Sandia National Laboratories was held in Albuquerque, New Mexico, on 6–7 August 2002. The workshop was organized around a set of Challenge Problems involving both epistemic and aleatory uncertainty that the workshop participants were invited to solve and discuss. This concluding article in a special issue of Reliability Engineering and System Safety based on the workshop discusses the intent of the Challenge Problems, summarizes some discussions from the workshop, and provides a technical comparison among the papers in this special issue. The Challenge Problems were computationally simple models that were intended as vehicles for the illustration and comparison of conceptual and numerical techniques for use in analyses that involve: (i) epistemic uncertainty, (ii) aggregation of multiple characterizations of epistemic uncertainty, (iii) combination of epistemic and aleatory uncertainty, and (iv) models with repeated parameters. There was considerable diversity of opinion at the workshop about both methods and fundamental issues, and yet substantial consensus about what the answers to the problems were, and even about how each of the four issues should be addressed. Among the technical approaches advanced were probability theory, Dempster–Shafer evidence theory, random sets, sets of probability measures, imprecise coherent probabilities, coherent lower previsions, probability boxes, possibility theory, fuzzy sets, joint distribution tableaux, polynomial chaos expansions, and info-gap models. Although some participants maintained that a purely probabilistic approach is fully capable of accounting for all forms of uncertainty, most agreed that the treatment of epistemic uncertainty introduces important considerations and that the issues underlying the Challenge Problems are legitimate and significant. Topics identified as meriting additional research include elicitation of uncertainty representations, aggregation of multiple uncertainty representations, dependence and independence, model uncertainty, solution of black-box problems, efficient sampling strategies for computation, and communication of analysis results.

Collaboration


Dive into William L. Oberkampf's collaborations.

Top Co-Authors

Jon C. Helton | Arizona State University
Timothy G. Trucano | Sandia National Laboratories
Martin Pilch | Sandia National Laboratories
Jay D. Johnson | Science Applications International Corporation
Cliff Joslyn | Pacific Northwest National Laboratory
Kathleen V. Diegert | Sandia National Laboratories