
Publication


Featured research published by William G. Hanley.


Journal of Applied Meteorology and Climatology | 2008

Bayesian Inference and Markov Chain Monte Carlo Sampling to Reconstruct a Contaminant Source on a Continental Scale

Luca Delle Monache; Julie K. Lundquist; Branko Kosovic; Gardar Johannesson; Kathleen M. Dyer; Roger D. Aines; Fotini Katopodes Chow; Rich D. Belles; William G. Hanley; Shawn Larsen; Gwen A. Loosmore; John J. Nitao; Gayle Sugiyama; Philip J. Vogt

A methodology combining Bayesian inference with Markov chain Monte Carlo (MCMC) sampling is applied to a real accidental radioactive release that occurred on a continental scale at the end of May 1998 near Algeciras, Spain. The source parameters (i.e., source location and strength) are reconstructed from a limited set of measurements of the release. Annealing and adaptive procedures are implemented to ensure a robust and effective parameter-space exploration. The simulation setup is similar to an emergency response scenario, with the simplifying assumptions that the source geometry and release time are known. The Bayesian stochastic algorithm provides likely source locations within 100 km of the true source, after exploring a domain covering an area of approximately 1800 km × 3600 km. The source strength is reconstructed with a distribution of values of the same order of magnitude as the upper end of the range reported by the Spanish Nuclear Security Agency. By running the Bayesian MCMC algorithm...
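The core loop the abstract describes can be sketched compactly: propose source parameters, score them against observations through a forward dispersion model, and accept or reject via annealed Metropolis sampling. The sketch below is a minimal illustration, assuming a made-up Gaussian-footprint forward model and synthetic sensor data; it is not the paper's dispersion code, and every name and setting here is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a dispersion model: predicted concentration
# at sensor locations given a source at (x, y) with strength q.
def forward_model(x, y, q, sensors):
    d2 = (sensors[:, 0] - x) ** 2 + (sensors[:, 1] - y) ** 2
    return q * np.exp(-d2 / (2 * 300.0 ** 2))  # crude Gaussian footprint

sensors = rng.uniform(0, 1800, size=(25, 2))          # sensor layout (km)
true_obs = forward_model(900.0, 600.0, 5.0, sensors)  # synthetic "measurements"
obs = true_obs + rng.normal(0, 0.05, size=true_obs.shape)

def log_posterior(theta):
    x, y, log_q = theta
    if not (0 <= x <= 1800 and 0 <= y <= 1800):   # uniform prior on location
        return -np.inf
    resid = obs - forward_model(x, y, np.exp(log_q), sensors)
    return -0.5 * np.sum(resid ** 2) / 0.05 ** 2  # Gaussian likelihood

# Random-walk Metropolis with a simple annealing schedule on the likelihood.
theta = np.array([100.0, 100.0, 0.0])
lp = log_posterior(theta)
samples = []
for it in range(20000):
    beta = min(1.0, it / 5000.0 + 0.1)            # annealing temperature
    prop = theta + rng.normal(0, [50.0, 50.0, 0.2])
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < beta * (lp_prop - lp):
        theta, lp = prop, lp_prop
    if it > 10000:
        samples.append(theta.copy())

samples = np.array(samples)
print("posterior mean location (km):", samples[:, :2].mean(axis=0))
print("posterior mean strength:", np.exp(samples[:, 2]).mean())
```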


Computational Statistics | 2005

Implementing random scan Gibbs samplers

Richard A. Levine; Zhaoxia Yu; William G. Hanley; John J. Nitao

The Gibbs sampler, a popular routine among Markov chain Monte Carlo (MCMC) sampling methodologies, has revolutionized the application of Monte Carlo methods in statistical computing practice. The performance of the Gibbs sampler relies heavily on the choice of sweep strategy, that is, the means by which the components or blocks of the random vector X of interest are visited and updated. We develop an automated, adaptive algorithm for implementing the optimal sweep strategy as the Gibbs sampler traverses the sample space. The decision rules through which this strategy is chosen are based on convergence properties of the induced chain and precision of statistical inferences drawn from the generated Monte Carlo samples. As part of the development, we analytically derive closed-form expressions for the decision criteria of interest and present computationally feasible implementations of the adaptive random scan Gibbs sampler via a Gaussian approximation to the target distribution. We illustrate the results and algorithms presented by using the adaptive random scan Gibbs sampler developed to sample multivariate Gaussian target distributions, and screening test and image data.
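A random scan Gibbs sampler differs from the usual systematic sweep only in how components are selected for updating. The sketch below shows the mechanic on a bivariate Gaussian target, assuming fixed selection probabilities; the paper's contribution is an adaptive rule for tuning those probabilities online, which this toy stand-in does not implement.

```python
import numpy as np

rng = np.random.default_rng(1)

# Target: zero-mean bivariate Gaussian with correlation rho.
rho = 0.8

def gibbs_update(x, i):
    # Full conditional of component i given the other: N(rho * x_other, 1 - rho^2).
    return rng.normal(rho * x[1 - i], np.sqrt(1 - rho ** 2))

# Random scan: at each iteration pick a component with probabilities p.
# Here p is fixed and uniform, a placeholder for the adaptive selection rule.
p = np.array([0.5, 0.5])
x = np.zeros(2)
samples = []
for _ in range(50000):
    i = rng.choice(2, p=p)
    x[i] = gibbs_update(x, i)
    samples.append(x.copy())

samples = np.array(samples[5000:])  # discard burn-in
print("sample correlation:", np.corrcoef(samples.T)[0, 1])  # ~0.8
```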


Computational Statistics & Data Analysis | 2005

Implementing componentwise Hastings algorithms

Richard A. Levine; Zhaoxia Yu; William G. Hanley; John J. Nitao

Markov chain Monte Carlo (MCMC) routines have revolutionized the application of Monte Carlo methods in statistical practice and statistical computing methodology. The Hastings sampler, encompassing both the Gibbs and Metropolis samplers as special cases, is the most commonly applied MCMC algorithm. The performance of the Hastings sampler relies heavily on the choice of sweep strategy, that is, the method by which the components or blocks of the random variable X of interest are visited and updated, and the choice of proposal distribution, that is, the distribution from which candidate variates are drawn for the accept–reject rule in each iteration of the algorithm. We focus on the random sweep strategy, where the components of X are updated in a random order, and random proposal distributions, where the proposal distribution is characterized by a randomly generated parameter. We develop an adaptive Hastings sampler which learns from and adapts to random variates generated during the algorithm towards choosing the optimal random sweep strategy and proposal distribution for the problem at hand. As part of the development, we prove convergence of the random variates to the distribution of interest and discuss practical implementations of the methods. We illustrate the results presented by applying the adaptive componentwise Hastings samplers developed to sample multivariate Gaussian target distributions and Bayesian frailty models.
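To make the two randomized ingredients concrete, the sketch below runs a componentwise Metropolis–Hastings chain with a random sweep (a component chosen at random each iteration) and a random proposal (a proposal scale drawn fresh each iteration). This is a minimal non-adaptive stand-in, assuming a bivariate Gaussian target; the paper's sampler additionally learns the sweep and proposal parameters online.

```python
import numpy as np

rng = np.random.default_rng(2)

# Target: standardized bivariate Gaussian with correlation 0.8.
cov_inv = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))

def log_target(x):
    return -0.5 * x @ cov_inv @ x

x = np.zeros(2)
lp = log_target(x)
samples = []
for _ in range(50000):
    i = rng.integers(2)                 # random sweep: pick one component
    scale = np.exp(rng.normal(0, 0.5))  # random proposal parameter (illustrative)
    prop = x.copy()
    prop[i] += rng.normal(0, scale)
    lp_prop = log_target(prop)
    # The proposal is symmetric given the drawn scale, so the Hastings
    # ratio reduces to the plain Metropolis ratio.
    if np.log(rng.uniform()) < lp_prop - lp:
        x, lp = prop, lp_prop
    samples.append(x.copy())

samples = np.array(samples[5000:])
print("componentwise means:", samples.mean(axis=0))  # ~ (0, 0)
```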


2006 IEEE Nonlinear Statistical Signal Processing Workshop | 2006

Sequential Monte-Carlo Framework for Dynamic Data-Driven Event Reconstruction for Atmospheric Release

Gardar Johannesson; Kathleen M. Dyer; William G. Hanley; Branko Kosovic; Shawn Larsen; Gwendolen A. Loosmore; Julie K. Lundquist; Arthur A. Mirin

The release of hazardous materials into the atmosphere can have a tremendous impact on dense populations. We propose an atmospheric event reconstruction framework that couples observed data and predictive computer-intensive dispersion models via Bayesian methodology. Due to the complexity of the model framework, a sampling-based approach is taken for posterior inference that combines Markov chain Monte Carlo (MCMC) and sequential Monte Carlo (SMC) strategies.
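The sequential half of the framework can be illustrated with a bare-bones particle filter: maintain a weighted particle cloud over the unknown source parameters and update it as each observation arrives, resampling when the weights degenerate. The toy below assumes a scalar source strength and a trivial Gaussian likelihood in place of the coupled dispersion model; the jitter step is a simplistic stand-in for the MCMC rejuvenation moves a full MCMC/SMC hybrid would use.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy setting: infer a scalar source strength q from noisy readings that
# arrive one at a time; reading_t ~ N(q, sigma).
sigma, q_true = 0.5, 3.0
readings = rng.normal(q_true, sigma, size=30)

n = 2000
particles = rng.uniform(0.0, 10.0, size=n)  # prior over q
weights = np.full(n, 1.0 / n)

for y in readings:
    # Sequential importance update with the new observation.
    weights *= np.exp(-0.5 * ((y - particles) / sigma) ** 2)
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < n / 2:
        idx = rng.choice(n, size=n, p=weights)
        particles = particles[idx] + rng.normal(0, 0.05, size=n)  # jitter
        weights = np.full(n, 1.0 / n)

print("posterior mean of q:", np.sum(weights * particles))  # ~3.0
```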


IEEE Transactions on Knowledge and Data Engineering | 2013

Practical Ensemble Classification Error Bounds for Different Operating Points

Kush R. Varshney; Ryan Prenger; Tracy L. Marlatt; Barry Y. Chen; William G. Hanley

Classification algorithms used to support the decisions of human analysts are often used in settings in which zero-one loss is not the appropriate indication of performance. The zero-one loss corresponds to the operating point with equal costs for false alarms and missed detections, and no option for the classifier to leave uncertain test samples unlabeled. A generalization bound for ensemble classification at the standard operating point has been developed based on two interpretable properties of the ensemble: strength and correlation, using the Chebyshev inequality. Such generalization bounds for other operating points have not been developed previously and are developed in this paper. Significantly, the bounds are empirically shown to have much practical utility in determining optimal parameters for classification with a reject option, classification for ultralow probability of false alarm, and classification for ultralow probability of missed detection. Counter to the usual guideline of large strength and small correlation in the ensemble, different guidelines are recommended by the derived bounds in the ultralow false alarm and missed detection probability regimes.
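The two ensemble properties the bound is built from are easy to estimate from vote data: strength is the expected ensemble margin, and the mean correlation relates the margin's variance to the base classifiers' raw-margin spread. The sketch below computes both on a synthetic voting ensemble and evaluates a Breiman-style Chebyshev bound at the standard equal-cost operating point; the paper's extension to other operating points is not reproduced here, and the data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy ensemble: K base classifiers voting +/-1 on n labeled samples,
# with a shared noise source to induce correlation between classifiers.
n, K = 2000, 25
y = rng.choice([-1, 1], size=n)
shared = rng.choice([-1, 1], size=n)
votes = np.empty((K, n))
for k in range(K):
    flip = rng.uniform(size=n)
    votes[k] = np.where(flip < 0.6, y, shared)

raw_margin = votes * y            # rmg_k(x, y) = y * h_k(x)
margin = raw_margin.mean(axis=0)  # ensemble margin per sample

s = margin.mean()                                         # strength
rho = margin.var() / raw_margin.std(axis=1).mean() ** 2   # mean correlation

# Chebyshev-based bound on generalization error at the equal-cost point.
bound = rho * (1 - s ** 2) / s ** 2
print(f"strength={s:.3f}, correlation={rho:.3f}, bound={min(bound, 1):.3f}")
print("empirical ensemble error:", np.mean(margin <= 0))
```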


Data Mining | 2010

An Extended Study of the Discriminant Random Forest

Tracy D. Lemmond; Barry Y. Chen; Andrew O. Hatch; William G. Hanley

Classification technologies have become increasingly vital to information analysis systems that rely upon collected data to make predictions or informed decisions. Many approaches have been developed, but one of the most successful in recent times is the random forest. The discriminant random forest is a novel extension of the random forest classification methodology that leverages linear discriminant analysis to perform multivariate node splitting during tree construction. An extended study of the discriminant random forest is presented which shows that its individual classifiers are stronger and more diverse than their random forest counterparts, yielding statistically significant reductions in classification error of up to 79.5%. Moreover, empirical tests suggest that this approach is computationally less costly with respect to both memory and efficiency. Further enhancements of the methodology are investigated that exhibit significant performance improvements and greater stability at low false alarm rates.
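The distinguishing ingredient is the node split: instead of thresholding a single feature, each node projects onto Fisher's linear discriminant direction and thresholds the projection. A minimal sketch of that splitting rule follows, assuming two Gaussian-ish classes; a full discriminant random forest would additionally bootstrap the training data and subsample features at each node, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(5)

def lda_split(X, y):
    """Multivariate node split via Fisher's linear discriminant:
    project onto w = S_w^{-1} (mu1 - mu0), threshold at the projected
    midpoint of the class means. Illustrative, not the paper's exact rule."""
    mu0, mu1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    Sw = np.cov(X[y == 0], rowvar=False) + np.cov(X[y == 1], rowvar=False)
    w = np.linalg.solve(Sw + 1e-6 * np.eye(X.shape[1]), mu1 - mu0)
    threshold = w @ (mu0 + mu1) / 2.0
    return w, threshold

# Toy two-class data with correlated features.
X0 = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=300)
X1 = rng.multivariate_normal([1.5, 1.0], [[1, 0.6], [0.6, 1]], size=300)
X = np.vstack([X0, X1])
y = np.array([0] * 300 + [1] * 300)

w, t = lda_split(X, y)
pred = (X @ w > t).astype(int)
print("single-node split accuracy:", np.mean(pred == y))
```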


Knowledge Discovery and Data Mining | 2010

Class-specific error bounds for ensemble classifiers

Ryan Prenger; Tracy D. Lemmond; Kush R. Varshney; Barry Y. Chen; William G. Hanley

The generalization error, or probability of misclassification, of ensemble classifiers has been shown to be bounded above by a function of the mean correlation between the constituent (i.e., base) classifiers and their average strength. This bound suggests that increasing the strength and/or decreasing the correlation of an ensemble's base classifiers may yield improved performance under the assumption of equal error costs. However, this and other existing bounds do not directly address application spaces in which error costs are inherently unequal. For applications involving binary classification, Receiver Operating Characteristic (ROC) curves, performance curves that explicitly trade off false alarms and missed detections, are often utilized to support decision making. To address performance optimization in this context, we have developed a lower bound for the entire ROC curve that can be expressed in terms of the class-specific strength and correlation of the base classifiers. We present empirical analyses demonstrating the efficacy of these bounds in predicting relative classifier performance. In addition, we specify performance regions of the ROC curve that are naturally delineated by the class-specific strengths of the base classifiers and show that each of these regions can be associated with a unique set of guidelines for performance optimization of binary classifiers within unequal error cost regimes.
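The quantities the bound is expressed in are straightforward to estimate: class-specific strength is the expected margin conditioned on each class, and the ROC is traced by sweeping the ensemble's vote-fraction threshold rather than fixing it at one half. The sketch below shows both on synthetic vote data; the bound itself, and any connection to real ensembles, is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy ensemble: K independent base classifiers voting +/-1, 75% accurate.
n, K = 4000, 25
y = rng.choice([-1, 1], size=n)
votes = np.where(rng.uniform(size=(K, n)) < 0.75, y, -y)

frac = (votes == 1).mean(axis=0)   # fraction of votes for the positive class

# Class-specific strengths: expected margin within each class.
margin = (votes * y).mean(axis=0)
s_pos, s_neg = margin[y == 1].mean(), margin[y == -1].mean()
print(f"class-specific strengths: s+={s_pos:.3f}, s-={s_neg:.3f}")

# ROC points: sweep the vote-fraction threshold instead of fixing it at 1/2.
for t in [0.3, 0.5, 0.7, 0.9]:
    pred_pos = frac >= t
    pfa = pred_pos[y == -1].mean()  # false alarm rate
    pd = pred_pos[y == 1].mean()    # detection rate
    print(f"threshold={t:.1f}: P_FA={pfa:.3f}, P_D={pd:.3f}")
```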


Computational Intelligence and Data Mining | 2009

Building ultra-low false alarm rate Support Vector Classifier ensembles using Random Subspaces

Barry Y. Chen; Tracy D. Lemmond; William G. Hanley

This paper presents the Cost-Sensitive Random Subspace Support Vector Classifier (CS-RS-SVC), a new learning algorithm that combines random subspace sampling and bagging with Cost-Sensitive Support Vector Classifiers to more effectively address detection applications burdened by unequal misclassification requirements. When compared to its conventional, non-cost-sensitive counterpart on a two-class signal detection application, random subspace sampling is shown to very effectively leverage the additional flexibility offered by the Cost-Sensitive Support Vector Classifier, yielding a more than four-fold increase in the detection rate at a false alarm rate (FAR) of zero. Moreover, the CS-RS-SVC is shown to be fairly robust to constraints on the feature subspace dimensionality, enabling reductions in computation time of up to 82% with minimal performance degradation.
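The three ingredients named in the abstract (random subspace sampling, bagging, and cost-sensitive support vector classifiers) compose directly in standard libraries. Below is a minimal scikit-learn sketch in the same spirit; the dataset, class weights, and subspace fraction are illustrative assumptions, not the paper's configuration, and this is not the authors' implementation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic, imbalanced detection problem (stand-in for the signal data).
X, y = make_classification(n_samples=2000, n_features=40, n_informative=10,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

clf = BaggingClassifier(
    SVC(kernel="rbf", class_weight={0: 10.0, 1: 1.0}),  # cost-sensitive SVC:
    n_estimators=25,                                    # penalize false alarms
    max_features=0.5,     # random subspace sampling over features
    bootstrap=True,       # bagging over training samples
    random_state=0,
)
clf.fit(X_tr, y_tr)

pred = clf.predict(X_te)
far = np.mean(pred[y_te == 0] == 1)  # false alarm rate
det = np.mean(pred[y_te == 1] == 1)  # detection rate
print(f"FAR={far:.4f}, detection rate={det:.4f}")
```

Weighting class 0 errors more heavily in the SVC shifts each base classifier's boundary toward fewer false alarms, and the random subspaces give the ensemble the diversity that the paper reports exploiting.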


Geophysical Journal International | 2007

A Bayesian hierarchical method for multiple-event seismic location

Stephen C. Myers; Gardar Johannesson; William G. Hanley


Journal of Geophysical Research | 2005

Stochastic Inversion of Electrical Resistivity Changes Using a Markov Chain Monte Carlo Approach

Abelardo Ramirez; John J. Nitao; William G. Hanley; Roger D. Aines; R. E. Glaser; S. K. Sengupta; Kathleen M. Dyer; T. L. Hickling; William Daily

Collaboration


Dive into William G. Hanley's collaborations.

Top Co-Authors

John J. Nitao | Lawrence Livermore National Laboratory
Tracy D. Lemmond | Lawrence Livermore National Laboratory
Barry Y. Chen | Lawrence Livermore National Laboratory
Gardar Johannesson | Lawrence Livermore National Laboratory
Kathleen M. Dyer | Lawrence Livermore National Laboratory
Ronald E. Glaser | Lawrence Livermore National Laboratory
Ryan Prenger | Lawrence Livermore National Laboratory
Stephen C. Myers | Lawrence Livermore National Laboratory
Branko Kosovic | National Center for Atmospheric Research
Julie K. Lundquist | University of Colorado Boulder