Publication


Featured research published by John Davis Jakeman.


Journal of Computational Physics | 2015

Enhancing ℓ¹-minimization estimates of polynomial chaos expansions using basis selection

John Davis Jakeman; Michael S. Eldred; Khachik Sargsyan

In this paper we present a basis selection method that can be used with ℓ¹-minimization to adaptively determine the large coefficients of polynomial chaos expansions (PCE). The adaptive construction produces anisotropic basis sets that have more terms in important dimensions and limits the number of unimportant terms that increase mutual coherence and thus degrade the performance of ℓ¹-minimization. The important features and the accuracy of basis selection are demonstrated with a number of numerical examples. Specifically, we show that for a given computational budget, basis selection produces a more accurate PCE than would be obtained if the basis were fixed a priori. We also demonstrate that basis selection can be applied with non-uniform random variables and can leverage gradient information.
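The sparse-recovery step that basis selection builds on can be illustrated with a minimal sketch: recover a sparse Legendre coefficient vector from few samples by solving an ℓ¹-regularized least-squares (LASSO) problem with iterative soft-thresholding. The target expansion, sample counts, and solver are illustrative assumptions; this is not the paper's adaptive basis-selection algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical target: a sparse Legendre expansion in one variable.
true_coef = np.zeros(30)
true_coef[[0, 3, 7]] = [1.0, 0.5, -0.25]

def legendre_matrix(x, n_terms):
    # Vandermonde-like matrix of Legendre polynomials P_0..P_{n-1} at x.
    return np.polynomial.legendre.legvander(x, n_terms - 1)

# Few random samples relative to the basis size (underdetermined system).
x = rng.uniform(-1.0, 1.0, size=15)
A = legendre_matrix(x, true_coef.size)
b = A @ true_coef

def ista(A, b, lam=1e-4, n_iter=50000):
    # Iterative soft-thresholding for min 0.5 * ||Ac - b||^2 + lam * ||c||_1.
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    c = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ c - b)
        c = c - step * grad
        c = np.sign(c) * np.maximum(np.abs(c) - lam * step, 0.0)
    return c

coef = ista(A, b)
# Indices of coefficients recovered above a small threshold.
print(np.flatnonzero(np.abs(coef) > 0.05))
```

The adaptive method in the paper goes further by growing the candidate basis anisotropically so that unimportant terms never enter the measurement matrix in the first place.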


Journal of Computational Physics | 2010

Numerical approach for quantification of epistemic uncertainty

John Davis Jakeman; Michael S. Eldred; Dongbin Xiu

In the field of uncertainty quantification, uncertainty in the governing equations may assume two forms: aleatory uncertainty and epistemic uncertainty. Aleatory uncertainty can be characterised by known probability distributions whilst epistemic uncertainty arises from a lack of knowledge of probabilistic information. While extensive research efforts have been devoted to the numerical treatment of aleatory uncertainty, little attention has been given to the quantification of epistemic uncertainty. In this paper, we propose a numerical framework for quantification of epistemic uncertainty. The proposed methodology does not require any probabilistic information on uncertain input parameters. The method only necessitates an estimate of the range of the uncertain variables that encapsulates the true range of the input variables with overwhelming probability. To quantify the epistemic uncertainty, we solve an encapsulation problem, which is a solution to the original governing equations defined on the estimated range of the input variables. We discuss solution strategies for solving the encapsulation problem and the sufficient conditions under which the numerical solution can serve as a good estimator for capturing the effects of the epistemic uncertainty. In the case where probability distributions of the epistemic variables become known a posteriori, we can use the information to post-process the solution and evaluate solution statistics. Convergence results are also established for such cases, along with strategies for dealing with mixed aleatory and epistemic uncertainty. Several numerical examples are presented to demonstrate the procedure and properties of the proposed methodology.
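A minimal sketch of the encapsulation idea: the epistemic input is known only through an estimated range, so the model is evaluated over that whole range without assuming a distribution, and solution statistics are computed only if a distribution becomes known a posteriori. The model function and ranges below are hypothetical stand-ins.

```python
import numpy as np

def f(z):
    # Stand-in for a solution of the governing equations at parameter z.
    return np.exp(-0.5 * z) * np.cos(z)

# Encapsulation step: cover the estimated range [-2, 2]; no distribution assumed.
z_grid = np.linspace(-2.0, 2.0, 201)
u = f(z_grid)
print("output range: [%.3f, %.3f]" % (u.min(), u.max()))

# Post-processing: if z is later found to be, say, uniform on [-1, 1],
# reuse the stored encapsulation solution on that subrange.
mask = np.abs(z_grid) <= 1.0
posterior_mean = u[mask].mean()  # crude average over the a posteriori range
print("mean under a posteriori U(-1, 1): %.3f" % posterior_mean)
```

The key property exploited here is that the encapsulation solution is computed once over the superset range, so any later probabilistic information can be applied purely in post-processing.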


SIAM Journal on Scientific Computing | 2017

A Generalized Sampling and Preconditioning Scheme for Sparse Approximation of Polynomial Chaos Expansions

John Davis Jakeman; Akil Narayan; Tao Zhou

We propose an algorithm for recovering sparse orthogonal polynomial expansions via collocation. A standard sampling approach for recovering sparse polynomials uses Monte Carlo sampling from the density of orthogonality, which results in poor function recovery when the polynomial degree is high. Our proposed approach mitigates this limitation by sampling with respect to the weighted equilibrium measure of the parametric domain and subsequently solving a preconditioned ℓ¹-minimization problem, where the weights of the diagonal preconditioning matrix are given by evaluations of the Christoffel function. Our algorithm can be applied to a wide class of orthogonal polynomial families on bounded and unbounded domains, including all classical families. We present theoretical analysis to motivate the algorithm and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest. Numerical examples are also provided to demonstrate that our proposed algor...


Mathematics of Computation | 2016

A Christoffel function weighted least squares algorithm for collocation approximations

Akil Narayan; John Davis Jakeman; Tao Zhou

We propose, theoretically investigate, and numerically validate an algorithm for the Monte Carlo solution of least-squares polynomial approximation problems in a collocation framework. Our method is motivated by generalized Polynomial Chaos approximation in uncertainty quantification, where a polynomial approximation is formed from a combination of orthogonal polynomials. A standard Monte Carlo approach would draw samples according to the density of orthogonality. Our proposed algorithm samples with respect to the equilibrium measure of the parametric domain, and subsequently solves a weighted least-squares problem, with weights given by evaluations of the Christoffel function. We present theoretical analysis to motivate the algorithm, and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest.


Journal of Computational Physics | 2013

Minimal multi-element stochastic collocation for uncertainty quantification of discontinuous functions

John Davis Jakeman; Akil Narayan; Dongbin Xiu

We propose a multi-element stochastic collocation method that can be applied in high-dimensional parameter space for functions with discontinuities lying along manifolds of general geometries. The key feature of the method is that the parameter space is decomposed into multiple elements defined by the discontinuities, and thus only a minimal number of elements is utilized. On each of the resulting elements the function is smooth and can be approximated using high-order methods with fast convergence properties. The decomposition strategy is in direct contrast to the traditional multi-element approaches, which define the sub-domains by repeated splitting of the axes in the parameter space. Such methods are more prone to the curse of dimensionality because of the fast growth of the number of elements caused by the axis-based splitting. The present method is a two-step approach. First, a discontinuity detector is used to partition parameter space into disjoint elements in each of which the function is smooth. The detector uses an efficient combination of the high-order polynomial annihilation technique along with adaptive sparse grids, and this allows resolution of general discontinuities with a smaller number of points when the discontinuity manifold is low-dimensional. After partitioning, an adaptive technique based on the least orthogonal interpolant is used to construct a generalized Polynomial Chaos surrogate on each element. The adaptive technique reuses all information from the partitioning and is variance-suppressing. We present numerous numerical examples that illustrate the accuracy, efficiency, and generality of the method. When compared against standard locally-adaptive sparse grid methods, the present method uses far fewer collocation samples and is more accurate.


Archive | 2014

Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis version 6.0 theory manual

Brian M. Adams; Mohamed S. Ebeida; Michael S. Eldred; John Davis Jakeman; Laura Painton Swiler; John Adam Stephens; Dena M. Vigil; Timothy Michael Wildey; William J. Bohnhoff; John P. Eddy; Kenneth T. Hu; Keith R. Dalbey; Lara E Bauman; Patricia Diane Hough

The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities. Dakota Version 6.1 Theory Manual generated on November 7, 2014


SIAM Journal on Scientific Computing | 2015

Local Polynomial Chaos Expansion for Linear Differential Equations with High Dimensional Random Inputs

Yi Chen; John Davis Jakeman; Claude Jeffrey Gittelson; Dongbin Xiu

In this paper we present a localized polynomial chaos expansion for partial differential equations (PDE) with random inputs. In particular, we focus on time-independent linear stochastic problems with high dimensional random inputs, where the traditional polynomial chaos methods, and most of the existing methods, incur prohibitively high simulation cost. The local polynomial chaos method employs a domain decomposition technique to approximate the stochastic solution locally. In each subdomain, a subdomain problem is solved independently and, more importantly, in a much lower dimensional random space. In a postprocessing stage, accurate samples of the original stochastic problems are obtained from the samples of the local solutions by enforcing the correct stochastic structure of the random inputs and the coupling conditions at the interfaces of the subdomains. Overall, the method is able to solve stochastic PDEs in very large dimensions by solving a collection of low dimensional local problems and can be h...


Proceedings of International Conference on High Performance Scientific Computing (HPSC 2006) | 2008

Simulation of Tsunami and Flash Floods

Stephen Roberts; O Nielsen; John Davis Jakeman

Impacts to the built environment from hazards such as tsunami or flash floods are critical in understanding the economic and social effects on our communities. In order to simulate the behaviour of water flow from such hazards within the built environment, Geoscience Australia and the Australian National University are developing a software modelling tool for hydrodynamic simulations.


19th AIAA Non-Deterministic Approaches Conference, 2017 | 2017

Scalable Environment for Quantification of Uncertainty and Optimization in Industrial Applications (SEQUOIA)

Juan J. Alonso; Michael S. Eldred; Paul G. Constantine; Karthikeyan Duraisamy; Charbel Farhat; Gianluca Iaccarino; John Davis Jakeman


Journal of Computational Physics | 2015

Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

John Davis Jakeman; Timothy Michael Wildey
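Several of the entries above describe sampling from the equilibrium (arcsine) measure and weighting by the Christoffel function; a minimal one-dimensional sketch of that weighted least-squares step, assuming a Legendre basis on [-1, 1] and an illustrative target function, looks like this:

```python
import numpy as np

rng = np.random.default_rng(1)
n_terms, n_samples = 10, 60

# Samples from the arcsine (equilibrium) measure on [-1, 1],
# rather than the uniform density of orthogonality.
x = np.cos(np.pi * rng.uniform(size=n_samples))

# Legendre Vandermonde matrix with columns scaled to be orthonormal
# with respect to the uniform probability density on [-1, 1].
scale = np.sqrt(2 * np.arange(n_terms) + 1)
V = np.polynomial.legendre.legvander(x, n_terms - 1) * scale

# Christoffel-function weights: N / sum_j p_j(x)^2.
w = n_terms / np.sum(V**2, axis=1)

def target(x):
    return np.sin(np.pi * x)  # hypothetical smooth target

# Solve the weighted least-squares problem sqrt(w) * (V c - f) ~ 0.
sw = np.sqrt(w)
coef, *_ = np.linalg.lstsq(sw[:, None] * V, sw * target(x), rcond=None)

# Check the fit on a dense grid.
xg = np.linspace(-1.0, 1.0, 400)
Vg = np.polynomial.legendre.legvander(xg, n_terms - 1) * scale
err = np.max(np.abs(Vg @ coef - target(xg)))
print("max error: %.2e" % err)
```

The same weights, applied as a diagonal preconditioner, turn this into the preconditioned ℓ¹-minimization variant when the expansion is sparse and the system is underdetermined.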

Collaboration


Dive into John Davis Jakeman's collaboration.

Top Co-Authors

Michael S. Eldred (Sandia National Laboratories)
Troy Butler (University of Colorado Denver)
Andrew G. Salinger (Sandia National Laboratories)
Bert J. Debusschere (Sandia National Laboratories)
Cosmin Safta (Sandia National Laboratories)
Khachik Sargsyan (Sandia National Laboratories)