Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Joanne Wendelberger is active.

Publication


Featured research published by Joanne Wendelberger.


Technometrics | 2013

Methods for Planning Repeated Measures Degradation Studies

Brian Weaver; William Q. Meeker; Luis A. Escobar; Joanne Wendelberger

Repeated measures degradation studies are used to assess product or component reliability when there are few or even no failures expected during a study. Such studies are often used to assess the shelf life of materials, components, and products. We show how to evaluate the properties of proposed test plans. Such evaluations are needed to identify statistically efficient tests. We consider test plans for applications where parameters related to the degradation distribution or the related lifetime distribution are to be estimated. We use the approximate large-sample variance–covariance matrix of the parameters of a mixed effects linear regression model for repeated measures degradation data to assess the effect of sample size (number of units and number of measurements within the units) on estimation precision of both degradation and failure-time distribution quantiles. We also illustrate the complementary use of simulation-based methods for evaluating and comparing test plans. These test-planning methods are illustrated with two examples. We provide the R code and examples as supplementary materials (available online on the journal web site) for this article.
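As a rough illustration of the simulation-based side of this test-planning work (the abstract's large-sample variance calculations are not reproduced here), the sketch below compares two hypothetical repeated-measures plans under an assumed random-intercept, random-slope linear degradation model. All model parameters, thresholds, and plan sizes are made up for illustration; the authors' actual R code is in the journal's supplementary materials.

```python
# Minimal sketch (not the authors' exact method): Monte Carlo evaluation of a
# candidate repeated-measures degradation test plan under an assumed
# random-intercept, random-slope linear model. All values are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def simulate_plan(n_units, times, beta=(100.0, -2.0),
                  sd_b0=3.0, sd_b1=0.3, sd_eps=1.0):
    """Simulate one degradation study: rows are units, columns are time points."""
    b0 = rng.normal(0.0, sd_b0, n_units)            # unit-specific intercepts
    b1 = rng.normal(0.0, sd_b1, n_units)            # unit-specific slopes
    eps = rng.normal(0.0, sd_eps, (n_units, len(times)))
    return (beta[0] + b0)[:, None] + (beta[1] + b1)[:, None] * times + eps

def estimate_crossing_time(y, times, threshold=80.0):
    """Estimate when the mean degradation path crosses the failure threshold,
    using per-unit least-squares fits averaged across units."""
    slopes, intercepts = [], []
    for row in y:
        b1, b0 = np.polyfit(times, row, 1)          # slope, intercept
        slopes.append(b1)
        intercepts.append(b0)
    return (threshold - np.mean(intercepts)) / np.mean(slopes)

def evaluate_plan(n_units, times, n_rep=500):
    """Precision (std. dev. over replications) of the estimated crossing time."""
    est = [estimate_crossing_time(simulate_plan(n_units, times), times)
           for _ in range(n_rep)]
    return np.std(est)

# Compare two plans with roughly the same total number of measurements.
times_a = np.array([0.0, 6.0, 12.0, 18.0])
times_b = np.array([0.0, 9.0, 18.0])
print("plan A (10 units x 4 times):", evaluate_plan(10, times_a))
print("plan B (13 units x 3 times):", evaluate_plan(13, times_b))
```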


High Performance Graphics | 2011

Randomized selection on the GPU

Laura Monroe; Joanne Wendelberger; Sarah Michalak

We implement here a fast and memory-sparing probabilistic top k selection algorithm on the GPU. The algorithm proceeds via an iterative probabilistic guess-and-check process on pivots for a three-way partition. When the guess is correct, the problem is reduced to selection on a much smaller set. This probabilistic algorithm always gives a correct result and always terminates. Las Vegas algorithms of this kind are a form of stochastic optimization and can be well suited to more general parallel processors with limited amounts of fast memory.
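The paper's contribution is the GPU implementation and its memory behavior; as a plain illustration of the guess-and-check selection loop it describes, here is a minimal CPU sketch in Python. Function names and tie handling are my own.

```python
# CPU sketch of the Las Vegas guess-and-check selection loop: random pivot
# "guesses", a three-way partition, and reduction to a smaller set. Always
# returns a correct answer; only the running time is random.
import random

def randomized_kth_largest(data, k):
    """Return the k-th largest element (1-based k, k <= len(data))."""
    candidates = list(data)
    while True:
        pivot = random.choice(candidates)            # probabilistic guess
        greater = [x for x in candidates if x > pivot]
        equal   = [x for x in candidates if x == pivot]
        if k <= len(greater):
            candidates = greater                     # answer lies above the pivot
        elif k <= len(greater) + len(equal):
            return pivot                             # the guess checked out
        else:
            k -= len(greater) + len(equal)           # reduce to the smaller side
            candidates = [x for x in candidates if x < pivot]

def top_k(data, k):
    """The top-k set follows from the k-th largest value (ties broken arbitrarily)."""
    kth = randomized_kth_largest(data, k)
    above = [x for x in data if x > kth]
    return above + [kth] * (k - len(above))

print(top_k([7, 1, 9, 4, 4, 8, 2], 3))   # -> [9, 8, 7]
```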


Technometrics | 2004

Bayesian Prediction Intervals and Their Relationship to Tolerance Intervals

Michael S. Hamada; Valen E. Johnson; Leslie M. Moore; Joanne Wendelberger

We consider Bayesian prediction intervals that contain a proportion of a finite number of observations with a specified probability. Such intervals arise in numerous applied contexts and are closely related to tolerance intervals. Several examples are provided to illustrate this methodology, and simulation studies are used to demonstrate potential pitfalls of using tolerance intervals when prediction intervals are required.
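To make the idea concrete, the following sketch (my own example under an assumed normal model with a standard noninformative prior, not one of the paper's examples) uses simulation to find how wide an interval of the form xbar ± c·s must be so that it contains at least a stated proportion of a finite number of future observations with a stated posterior probability.

```python
# Minimal sketch (assumed normal model, noninformative prior): find the smallest
# multiplier c so that  xbar +/- c*s  contains at least a proportion p of m
# future observations with posterior probability gamma. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)

def coverage_probability(x, c, m=20, p=0.9, n_draws=2000):
    """P(at least p*m of m future obs fall in xbar +/- c*s), averaged over the
    posterior of (mu, sigma) under the standard noninformative prior."""
    n, xbar, s2 = len(x), np.mean(x), np.var(x, ddof=1)
    lo, hi = xbar - c * np.sqrt(s2), xbar + c * np.sqrt(s2)
    hits = 0
    for _ in range(n_draws):
        sigma2 = (n - 1) * s2 / rng.chisquare(n - 1)   # posterior draw of sigma^2
        mu = rng.normal(xbar, np.sqrt(sigma2 / n))     # posterior draw of mu
        future = rng.normal(mu, np.sqrt(sigma2), m)    # m future observations
        hits += np.sum((future >= lo) & (future <= hi)) >= np.ceil(p * m)
    return hits / n_draws

def prediction_multiplier(x, gamma=0.95, **kw):
    """Grid search for the smallest multiplier meeting the target probability."""
    for c in np.arange(1.0, 6.0, 0.05):
        if coverage_probability(x, c, **kw) >= gamma:
            return c
    return np.nan

x = rng.normal(10.0, 2.0, size=15)                     # observed sample
print("multiplier c:", prediction_multiplier(x))
```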


High Performance Distributed Computing | 2013

Taming massive distributed datasets: data sampling using bitmap indices

Yu Su; Gagan Agrawal; Jonathan Woodring; Kary Myers; Joanne Wendelberger; James P. Ahrens

With growing computational capabilities of parallel machines, scientific simulations are being performed at finer spatial and temporal scales, leading to a data explosion. The growing sizes are making it extremely hard to store, manage, disseminate, analyze, and visualize these datasets, especially as neither the memory capacity of parallel machines, memory access speeds, nor disk bandwidths are increasing at the same rate as the computing power. Sampling can be an effective technique to address the above challenges, but it is extremely important to ensure that dataset characteristics are preserved, and the loss of accuracy is within acceptable levels. In this paper, we address the data explosion problems by developing a novel sampling approach, and implementing it in a flexible system that supports server-side sampling and data subsetting. We observe that to allow subsetting over scientific datasets, data repositories are likely to use an indexing technique. Among these techniques, we see that bitmap indexing can not only effectively support subsetting over scientific datasets, but can also help create samples that preserve both value and spatial distributions over scientific datasets. We have developed algorithms for using bitmap indices to sample datasets. We have also shown how only a small amount of additional metadata stored with bitvectors can help assess loss of accuracy with a particular subsampling level. Some of the other properties of this novel approach include: 1) sampling can be flexibly applied to a subset of the original dataset, which may be specified using a value-based and/or a dimension-based subsetting predicate, and 2) no data reorganization is needed, once bitmap indices have been generated. We have extensively evaluated our method with different types of datasets and applications, and demonstrated the effectiveness of our approach.
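A compact way to see the core idea, independent of the authors' system: one bitvector per value bin supports both value-based subsetting (OR the bitmaps of the requested bins) and a sample that preserves the value distribution (take the same fraction from every bin). The sketch below is a minimal in-memory version with made-up data, bin counts, and sampling rate, not the paper's implementation.

```python
# Minimal sketch of bitmap-index-based subsetting and stratified sampling.
import numpy as np

rng = np.random.default_rng(2)
data = rng.gamma(shape=2.0, scale=3.0, size=100_000)    # stand-in scientific field

# Build a bitmap (boolean vector) per value bin.
edges = np.linspace(data.min(), data.max(), 17)          # 16 equal-width bins
bin_of = np.clip(np.digitize(data, edges) - 1, 0, 15)
bitmaps = [bin_of == b for b in range(16)]               # one bitvector per bin

# Value-based subsetting: OR together the bitmaps for the requested bins.
subset_mask = np.logical_or.reduce([bitmaps[b] for b in range(8, 16)])

def bitmap_sample(bitmaps, restrict_mask, fraction=0.01):
    """Take the same fraction from every bin inside the restriction, so the
    sample's value distribution matches the subset's."""
    keep = np.zeros(restrict_mask.shape, dtype=bool)
    for bm in bitmaps:
        idx = np.flatnonzero(bm & restrict_mask)
        n_take = int(len(idx) * fraction)
        if n_take > 0:
            keep[rng.choice(idx, size=n_take, replace=False)] = True
    return keep

sample_mask = bitmap_sample(bitmaps, subset_mask)
print("subset size:", subset_mask.sum(), " sample size:", sample_mask.sum())
```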


Concurrency and Computation: Practice and Experience | 2016

Power usage of production supercomputers and production workloads

Scott Pakin; Curtis B. Storlie; Michael Lang; Robert E. Fields; Eloy E. Romero; Craig Idler; Sarah Michalak; Hugh Greenberg; Josip Loncaric; Randal Rheinheimer; Gary Grider; Joanne Wendelberger

Power is becoming an increasingly important concern for large supercomputer centers. However, to date, there has been a dearth of studies of power usage ‘in the wild’, on production supercomputers running production workloads. In this paper, we present the initial results of a project to characterize the power usage of the three Top500 supercomputers at Los Alamos National Laboratory: Cielo, Roadrunner, and Luna (#15, #19, and #47, respectively, on the June 2012 Top500 list). Power measurements taken both at the switchboard level and within the compute racks are presented and discussed. Some noteworthy results of this study are that (1) variability in power consumption differs across architectures, even when running a similar workload, and (2) Los Alamos National Laboratory's scientific workload draws, on average, only 70–75% of LINPACK power and only 40–55% of nameplate power, implying that power capping may enable a substantial reduction in power and cooling infrastructure while impacting comparatively few applications.


Human Factors in Computing Systems | 2015

Colormaps that Improve Perception of High-Resolution Ocean Data

Francesca Samsel; Mark R. Petersen; Terece Geld; Greg Abram; Joanne Wendelberger; James P. Ahrens

Scientists from the Climate, Ocean and Sea Ice Modeling Team (COSIM) at Los Alamos National Laboratory (LANL) are interested in gaining a deeper understanding of three primary ocean currents: the Gulf Stream, the Kuroshio Current, and the Agulhas Current & Retroflection. To address these needs, visual artist Francesca Samsel teamed up with experts in computer science, climate science, statistics, and perceptual science. By engaging an artist specializing in color, we created colormaps that reveal greater detail in these high-resolution datasets. Applied to the POP dataset, the new colormaps enabled scientists to see areas of interest that were unclear with standard colormaps. Improvements in the perceptual range of color allowed scientists to highlight structures within specific ocean currents. Work with COSIM team members drove the development of nested colormaps, which provide further detail to the scientists.
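As a small illustration of the nested-colormap idea (my own anchor colors and test field, not the colormaps developed for COSIM), the sketch below builds a matplotlib colormap whose color variation is concentrated in a narrow band of data values, so fine structure in that band becomes visible when compared against a standard colormap.

```python
# A "nested" colormap: most of the color variation is packed into one value band.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LinearSegmentedColormap

# Uneven anchor positions: rapid color change inside [0.45, 0.55].
anchors = [
    (0.00, "#08306b"),   # deep blue for the low end
    (0.45, "#6baed6"),
    (0.48, "#ffffbf"),   # fast transitions inside the band of interest
    (0.52, "#fdae61"),
    (0.55, "#d73027"),
    (1.00, "#67001f"),   # dark red for the high end
]
nested_cmap = LinearSegmentedColormap.from_list("nested_demo", anchors)

# Illustrative field with fine structure superimposed on a broad gradient.
x, y = np.meshgrid(np.linspace(0, 4 * np.pi, 400), np.linspace(0, 4 * np.pi, 400))
field = 0.5 + 0.05 * np.sin(x) * np.cos(y) + 0.4 * np.tanh(x - 2 * np.pi)

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for ax, cmap, title in [(axes[0], "viridis", "standard"),
                        (axes[1], nested_cmap, "nested")]:
    im = ax.imshow(field, cmap=cmap, vmin=0, vmax=1)
    ax.set_title(title)
    fig.colorbar(im, ax=ax)
plt.show()
```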


Cluster Computing | 2014

Effective and efficient data sampling using bitmap indices

Yu Su; Gagan Agrawal; Jonathan Woodring; Kary Myers; Joanne Wendelberger; James P. Ahrens

With growing computational capabilities of parallel machines, scientific simulations are being performed at finer spatial and temporal scales, leading to a data explosion. The growing sizes are making it extremely hard to store, manage, disseminate, analyze, and visualize these datasets, especially as neither the memory capacity of parallel machines, memory access speeds, nor disk bandwidths are increasing at the same rate as the computing power. Sampling can be an effective technique to address the above challenges, but it is extremely important to ensure that dataset characteristics are preserved, and the loss of accuracy is within acceptable levels. In this paper, we address the data explosion problems by developing a novel sampling approach, and implementing it in a flexible system that supports server-side sampling and data subsetting. We observe that to allow subsetting over scientific datasets, data repositories are likely to use an indexing technique. Among these techniques, we see that bitmap indexing can not only effectively support subsetting over scientific datasets, but can also help create samples that preserve both value and spatial distributions over scientific datasets. We have developed algorithms for using bitmap indices to sample datasets. We have also shown how only a small amount of additional metadata stored with bitvectors can help assess loss of accuracy with a particular subsampling level. Some of the other properties of this novel approach include: (1) sampling can be flexibly applied to a subset of the original dataset, which may be specified using a value-based and/or a dimension-based subsetting predicate, and (2) no data reorganization is needed, once bitmap indices have been generated. We have extensively evaluated our method with different types of datasets and applications, and demonstrated the effectiveness of our approach.


Quality Engineering | 2010

Uncertainty in Designed Experiments

Joanne Wendelberger

Statistical experiment design can be used to efficiently select experimental runs to achieve a given experimental purpose. However, uncertainty is a fact of life in experimentation. The experimenter is faced with uncertainty in inputs, uncertainty in outputs from both random variability and uncertainty in measurement processes, as well as uncertainty about the underlying model structure of the phenomenon under investigation. In the face of all this uncertainty, the experimenter must try to collect and analyze data that will address questions of scientific interest.
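A toy sketch of the kinds of uncertainty listed above, using a 2^2 factorial design; the model, noise levels, and settings are invented for illustration and are not from the article.

```python
# Toy example: a 2^2 factorial design whose observed response mixes true
# effects, input-setting error, run-to-run variability, and measurement error.
import itertools
import numpy as np

rng = np.random.default_rng(3)

def true_response(x1, x2):
    return 5.0 + 2.0 * x1 - 1.0 * x2 + 0.5 * x1 * x2        # assumed "truth"

design = np.array(list(itertools.product([-1.0, 1.0], repeat=2)))  # 4 runs

responses = []
for x1, x2 in design:
    x1_actual = x1 + rng.normal(0, 0.05)      # uncertainty in input settings
    x2_actual = x2 + rng.normal(0, 0.05)
    obs = true_response(x1_actual, x2_actual)
    obs += rng.normal(0, 0.30)                # random run-to-run variability
    obs += rng.normal(0, 0.10)                # measurement-process uncertainty
    responses.append(obs)

# Effect estimates based on the nominal (intended) settings.
X = np.column_stack([np.ones(4), design[:, 0], design[:, 1],
                     design[:, 0] * design[:, 1]])
beta_hat, *_ = np.linalg.lstsq(X, np.array(responses), rcond=None)
print("estimated [intercept, x1, x2, x1*x2]:", np.round(beta_hat, 2))
```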


Journal of Quality Technology | 2003

A Bayesian Approach to Calibration Intervals and Properly Calibrated Tolerance Intervals

Michael S. Hamada; A. Pohl; Cliff Spiegelman; Joanne Wendelberger

In this article we consider a Bayesian approach to inference when there is a calibration relationship between measured and true quantities of interest. One situation in which this approach is useful is inference about individual unknowns, for which calibration intervals are obtained; the other is inference about a population, for which tolerance intervals are produced. The Bayesian approach easily handles a general calibration relationship, say nonlinear, with nonnormal errors. The population may also be general, say lognormal, for quantities that are nonnegative. The Bayesian approach is illustrated with three examples and implemented with the freely available WinBUGS software.
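For a concrete picture of the calibration-interval case, here is a minimal sketch assuming a simple linear calibration curve with normal errors and noninformative priors; it is my own code, not the article's WinBUGS models, and the calibration data and new measurement y0 are simulated.

```python
# Posterior draws for the unknown true value x0 behind a new measurement y0,
# giving a Bayesian calibration interval under a linear calibration model.
import numpy as np

rng = np.random.default_rng(4)

# Calibration data: known standards x with measured responses y.
x = np.array([0., 1., 2., 3., 4., 5., 6., 7., 8., 9.])
y = 1.0 + 2.0 * x + rng.normal(0, 0.3, x.size)
y0 = 9.7                                            # new measurement, true x unknown

X = np.column_stack([np.ones_like(x), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat
n, p = X.shape
s2 = resid @ resid / (n - p)
XtX_inv = np.linalg.inv(X.T @ X)

draws = []
for _ in range(10_000):
    sigma2 = (n - p) * s2 / rng.chisquare(n - p)               # posterior sigma^2
    a, b = rng.multivariate_normal(beta_hat, sigma2 * XtX_inv)  # posterior (a, b)
    # With a flat prior on x0, x0 | a, b, sigma, y0 is normal:
    draws.append(rng.normal((y0 - a) / b, np.sqrt(sigma2) / abs(b)))

lo, hi = np.percentile(draws, [2.5, 97.5])
print(f"95% calibration interval for x0: ({lo:.2f}, {hi:.2f})")
```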


Technometrics | 2016

Partitioning a Large Simulation as It Runs

Kary Myers; Earl Lawrence; Michael L. Fugate; Claire McKay Bowen; Lawrence O. Ticknor; Jon Woodring; Joanne Wendelberger; James P. Ahrens

As computer simulations continue to grow in size and complexity, they present a particularly challenging class of big data problems. Many application areas are moving toward exascale computing systems, systems that perform 10^18 FLOPS (FLoating-point Operations Per Second), a billion billion calculations per second. Simulations at this scale can generate output that exceeds both the storage capacity and the bandwidth available for transfer to storage, making post-processing and analysis challenging. One approach is to embed some analyses in the simulation while the simulation is running, a strategy often called in situ analysis, to reduce the need for transfer to storage. Another strategy is to save only a reduced set of time steps rather than the full simulation. Typically the selected time steps are evenly spaced, where the spacing can be defined by the budget for storage and transfer. This article combines these two ideas to introduce an online in situ method for identifying a reduced set of time steps of the simulation to save. Our approach significantly reduces the data transfer and storage requirements, and it provides improved fidelity to the simulation to facilitate post-processing and reconstruction. We illustrate the method using a computer simulation that supported NASA's 2009 Lunar Crater Observation and Sensing Satellite mission.
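The sketch below illustrates the general flavor of online, in situ time-step selection with a deliberately simple rule: save a step only when the field has moved far enough from the most recently saved step. It is a simplification for illustration, not the article's method, and the simulated field and threshold are invented.

```python
# Online keep/skip selection of simulation time steps based on change since
# the last saved step.
import numpy as np

rng = np.random.default_rng(5)

def simulate_step(t, shape=(64, 64)):
    """Stand-in for one time step of simulation output."""
    burst = 5.0 if 40 <= t < 45 else 0.0            # a short-lived event
    return np.sin(0.05 * t) + burst + rng.normal(0, 0.1, shape)

def select_in_situ(n_steps, threshold=0.5):
    """Save step 0, then save a step only when its field has drifted far enough
    (RMS difference) from the most recently saved step."""
    saved = [0]
    last_saved_field = simulate_step(0)
    for t in range(1, n_steps):
        field = simulate_step(t)
        rms_change = np.sqrt(np.mean((field - last_saved_field) ** 2))
        if rms_change > threshold:
            saved.append(t)
            last_saved_field = field
    return saved

# More steps are saved around the event at t = 40..44 than during slow drift.
print("saved time steps:", select_in_situ(100))
```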

Collaboration


Dive into Joanne Wendelberger's collaborations.

Top Co-Authors

James P. Ahrens, Los Alamos National Laboratory
Leslie M. Moore, Los Alamos National Laboratory
Kary Myers, Los Alamos National Laboratory
Michael S. Hamada, Los Alamos National Laboratory
Jonathan Woodring, Los Alamos National Laboratory
Sarah Michalak, Los Alamos National Laboratory
Brian Weaver, Los Alamos National Laboratory
Jon Woodring, Los Alamos National Laboratory
Laura Monroe, Los Alamos National Laboratory