Publication


Featured research published by Allen McPherson.


IEEE Transactions on Visualization and Computer Graphics | 2003

A model for volume lighting and modeling

Joe Kniss; Simon Premoze; Charles D. Hansen; Peter Shirley; Allen McPherson

Direct volume rendering is a commonly used technique in visualization applications. Many of these applications require sophisticated shading models to capture subtle lighting effects and characteristics of volumetric data and materials. For many volumes, homogeneous regions pose problems for typical gradient-based surface shading. Many common objects and natural phenomena exhibit visual quality that cannot be captured using simple lighting models or cannot be solved at interactive rates using more sophisticated methods. We present a simple yet effective interactive shading model that captures volumetric light attenuation effects, incorporating volumetric shadows, an approximation to phase functions, an approximation to forward scattering, and chromatic attenuation that provides the subtle appearance of translucency. We also present a technique for volume displacement or perturbation that allows realistic interactive modeling of high-frequency detail for both real and synthetic volumetric data.
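As a rough illustration of the chromatic attenuation idea described in the abstract, the sketch below (Python/NumPy, not the authors' implementation) marches from a sample point toward the light and accumulates a per-channel optical depth, so each color channel is attenuated differently and the shadowed region takes on a translucent hue. The volume lookup helper and the `extinction_rgb` coefficients are illustrative assumptions.

```python
import numpy as np

def sample_volume(volume, p):
    """Nearest-neighbor lookup with clamping (a stand-in for trilinear filtering)."""
    idx = np.clip(np.round(p).astype(int), 0, np.array(volume.shape) - 1)
    return volume[tuple(idx)]

def chromatic_attenuation(volume, pos, light_dir, step=1.0, n_steps=64,
                          extinction_rgb=(0.02, 0.05, 0.09)):
    """Per-channel transmittance from `pos` toward the light source."""
    tau = np.zeros(3)                                   # RGB optical depth
    p = np.asarray(pos, dtype=float)
    d = np.asarray(light_dir, dtype=float)
    d = d / np.linalg.norm(d)
    for _ in range(n_steps):
        p = p + step * d
        tau += sample_volume(volume, p) * np.asarray(extinction_rgb) * step
    return np.exp(-tau)   # unequal channels give the translucent, colored shadow
```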


IEEE Computer Graphics and Applications | 2001

Interactive texture-based volume rendering for large data sets

Joe Kniss; Patrick S. McCormick; Allen McPherson; James P. Ahrens; James S. Painter; Alan Keahey; Charles D. Hansen

To employ direct volume rendering, TRex uses parallel graphics hardware, software-based compositing, and high-performance I/O to provide near-interactive display rates for time-varying, terabyte-sized data sets. We present a scalable, pipelined approach for rendering data sets too large for a single graphics card. To do so, we take advantage of multiple hardware rendering units and parallel software compositing. The goals of TRex, our system for interactive volume rendering of large data sets, are to provide near-interactive display rates for time-varying, terabyte-sized, uniformly sampled data sets and to provide a low-latency platform for volume visualization in immersive environments. We consider 5 frames per second (fps) to be near-interactive for normal viewing environments, and take 10 fps as the lower-bound frame rate for immersive environments. Using TRex for virtual reality environments requires low latency - around 50 ms per frame, or 100 ms per view update or stereo pair. To achieve lower latency renderings, we either render smaller portions of the volume on more graphics pipes or subsample the volume to render fewer samples per frame by each graphics pipe. Unstructured data sets must be resampled to appropriately leverage the 3D texture volume rendering method.
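The pipelined approach renders separate portions of the volume on separate hardware units and then merges the partial images in software. A minimal sketch of that compositing stage, assuming premultiplied RGBA partial images already sorted front to back (the names are illustrative, not TRex's API, and the loop is sequential here for clarity rather than parallel):

```python
import numpy as np

def over(front, back):
    """'Over' operator on premultiplied RGBA images (front over back)."""
    alpha_front = front[..., 3:4]
    return front + (1.0 - alpha_front) * back

def composite_bricks(partial_images):
    """Software compositing stage: merge per-pipe renderings in depth order."""
    result = partial_images[0]
    for image in partial_images[1:]:
        result = over(result, image)       # accumulate front to back
    return result
```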


IEEE Visualization | 1998

POPTEX: interactive ocean model visualization using texture mapping hardware

Allen McPherson; Mathew Maltrud

Global circulation models are used to gain an understanding of the processes that affect the Earth's climate and may ultimately be used to assess the impact of humanity's activities on it. The POP ocean model developed at Los Alamos is an example of such a global circulation model that is being used to investigate the role of the ocean in the climate system. Data output from POP has traditionally been visualized using video technology, which precludes rapid modification of visualization parameters and techniques. This paper describes a visualization system that leverages high-speed graphics hardware, specifically texture mapping hardware, to accelerate data exploration to interactive rates. We describe the design of the system and the specific hardware features used, and provide examples of its use. The system is capable of viewing ocean circulation simulation results at up to 60 frames per second while loading texture memory at approximately 72 million texels per second.
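As a loose, CPU-side analogue of the color-table step that texture hardware can perform when displaying a scalar ocean field, here is a small NumPy sketch; the palette, value range, and data are placeholders, and this is not the POPTEX code path.

```python
import numpy as np

def apply_palette(field, palette, vmin, vmax):
    """Map a 2-D scalar field (e.g. sea-surface temperature) through a color lookup table."""
    t = np.clip((field - vmin) / (vmax - vmin), 0.0, 1.0)
    idx = (t * (len(palette) - 1)).astype(int)
    return palette[idx]                                 # per-pixel RGB image

# Placeholder example: a 256-entry blue-to-red palette and random "temperature" data.
palette = np.stack([np.linspace(0, 1, 256),
                    np.zeros(256),
                    np.linspace(1, 0, 256)], axis=1)
sst = np.random.uniform(-2.0, 30.0, size=(600, 1200))
rgb = apply_palette(sst, palette, vmin=-2.0, vmax=30.0)
```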


Computer Physics Communications | 2014

Spatial adaptive sampling in multiscale simulation

Bertrand Rouet-Leduc; Kipton Barros; Emmanuel B. Cieren; Venmugil Elango; Christoph Junghans; Turab Lookman; Jamaludin Mohd-Yusof; Robert S. Pavel; Axel Y. Rivera; Dominic Roehm; Allen McPherson; Timothy C. Germann

In a common approach to multiscale simulation, an incomplete set of macroscale equations must be supplemented with constitutive data provided by fine-scale simulation. Collecting statistics from these fine-scale simulations is typically the overwhelming computational cost. We reduce this cost by interpolating the results of fine-scale simulation over the spatial domain of the macro-solver. Unlike previous adaptive sampling strategies, we do not interpolate on the potentially very high dimensional space of inputs to the fine-scale simulation. Our approach is local in space and time, avoids the need for a central database, and is designed to parallelize well on large computer clusters. To demonstrate our method, we simulate one-dimensional elastodynamic shock propagation using the Heterogeneous Multiscale Method (HMM); we find that spatial adaptive sampling requires only ≈ 50 × N^0.14 fine-scale simulations to reconstruct the stress field at all N grid points. Related multiscale approaches, such as Equation Free methods, may also benefit from spatial adaptive sampling.
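A toy 1-D illustration of the idea, not the paper's algorithm: run the expensive fine-scale model only on a coarse subset of grid points, interpolate spatially in between, and fall back to the fine-scale model where interpolation looks unsafe. The names (`fine_scale`, `stride`, `tol`) and the deliberately crude error check are assumptions for this sketch.

```python
import numpy as np

def adaptive_fine_scale(inputs, fine_scale, stride=4, tol=0.1):
    """Spatially adaptive sampling sketch: fine-scale calls only where needed."""
    n = len(inputs)
    values = np.full(n, np.nan)
    anchors = sorted(set(range(0, n, stride)) | {n - 1})
    for i in anchors:
        values[i] = fine_scale(inputs[i])            # costly micro-scale simulation
    for i in range(n):
        if np.isnan(values[i]):
            lo = (i // stride) * stride
            hi = min(lo + stride, n - 1)
            if abs(values[hi] - values[lo]) > tol:   # crude stand-in for an error estimate
                values[i] = fine_scale(inputs[i])
            else:
                t = (i - lo) / (hi - lo)
                values[i] = (1.0 - t) * values[lo] + t * values[hi]
    return values

# Usage with a placeholder "fine-scale" response:
grid = np.linspace(0.0, 1.0, 101)
stress = adaptive_fine_scale(grid, fine_scale=lambda x: np.sin(6.0 * x), tol=0.2)
```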


Computer Physics Communications | 2015

Distributed Database Kriging for Adaptive Sampling (D2KAS)

Dominic Roehm; Robert S. Pavel; Kipton Barros; Bertrand Rouet-Leduc; Allen McPherson; Timothy C. Germann; Christoph Junghans

We present an adaptive sampling method supplemented by a distributed database and a prediction method for multiscale simulations using the Heterogeneous Multiscale Method. A finite-volume scheme integrates the macro-scale conservation laws for elastodynamics, which are closed by momentum and energy fluxes evaluated at the micro-scale. In the original approach, molecular dynamics (MD) simulations are launched for every macro-scale volume element. Our adaptive sampling scheme replaces a large fraction of costly micro-scale MD simulations with fast table lookup and prediction. The cloud database Redis provides the plain table lookup, and with locality-aware hashing we gather input data for our prediction scheme. For the latter we use kriging, which estimates an unknown value and its uncertainty (error) at a specific location in parameter space by using weighted averages of the neighboring points. We find that our adaptive scheme significantly improves simulation performance by a factor of 2.5–25, while retaining high accuracy for various choices of the algorithm parameters.
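For orientation, here is a simplified zero-mean ("simple kriging" / Gaussian-process style) predictor: the estimate at a query point is a weighted average of nearby database entries, and the same weights yield an uncertainty that an adaptive scheme can compare against a threshold before deciding whether to launch a full MD simulation. The covariance model and hyperparameters are illustrative, not the variant or values used in D2KAS.

```python
import numpy as np

def kriging_predict(X, y, x_star, length=1.0, sill=1.0, nugget=1e-8):
    """Return (mean, variance) at x_star from known samples (X, y)."""
    def cov(a, b):
        d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
        return sill * np.exp(-0.5 * d2 / length ** 2)

    K = cov(X, X) + nugget * np.eye(len(X))          # covariance among known samples
    k_star = cov(X, x_star[None, :])[:, 0]           # covariance to the query point
    weights = np.linalg.solve(K, k_star)             # kriging weights
    mean = weights @ y                                # weighted average of neighbors
    var = max(sill - weights @ k_star, 0.0)          # predictive uncertainty
    return mean, var

# Adaptive use (schematic): accept the prediction only if it is trustworthy,
# otherwise fall back to running the molecular-dynamics simulation.
```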


2008 Workshop on Ultrascale Visualization | 2008

Petascale visualization: Approaches and initial results

James P. Ahrens; Li-Ta Lo; Boonthanome Nouanesengsy; John Patchett; Allen McPherson

With the advent of the first petascale supercomputer, Los Alamos's Roadrunner, there is a pressing need to address how to visualize petascale data. The crux of the petascale visualization performance problem is interactive rendering, since it is the most computationally intensive portion of the visualization process. For terascale platforms, commodity clusters with graphics processors (GPUs) have been used for interactive rendering. For petascale platforms, visualization and rendering may be able to run efficiently on the supercomputer platform itself. In this work, we evaluated the rendering performance of multi-core CPU and GPU-based processors. To achieve high performance on multi-core processors, we tested with multi-core-optimized ray tracing engines for rendering. For real-world performance testing, and to prepare for petascale visualization tasks, we interfaced these rendering engines with VTK and ParaView. Initial results show that rendering software optimized for multi-core CPU processors provides competitive performance to GPUs for the parallel rendering of massive data. The current architectural multi-core trend suggests that multi-core-based supercomputers are able to provide interactive visualization and rendering support now and in the future.


Electronic Imaging: Science and Technology | 1996

Interactive layout mechanisms for image database retrieval

John Maccuish; Allen McPherson; Julio E. Barros; Patrick M. Kelly

In this paper we present a user interface, CANDID Camera, for image retrieval using query-by-example technology. Included in the interface are several new layout algorithms based on multidimensional scaling techniques that visually display global and local relationships between images within a large image database. We use the CANDID project algorithms to create signatures of the images, and then measure the dissimilarity between the signatures. The layout algorithms are of two types. The first are those that project the all-pairs dissimilarities to two dimensions, presenting a many-to-many relationship for a global view of the entire database. The second are those that relate a query image to a small set of matched images for a one-to-many relationship that provides a local inspection of the image relationships. Both types are based on well-known multidimensional scaling techniques that have been modified and used together for efficiency and effectiveness. They include nonlinear projection and classical projection. The global maps are hybrid algorithms using classical projection together with nonlinear projection. We have developed several one-to-many layouts based on a radial layout, also using modified nonlinear and classical projection.
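The "classical projection" the layouts build on is classical (Torgerson) multidimensional scaling: double-center the squared dissimilarities and take the leading eigenvectors as 2-D coordinates. A compact sketch of that step, independent of the CANDID signature code:

```python
import numpy as np

def classical_mds(D, dim=2):
    """Project an all-pairs dissimilarity matrix D (n x n) to `dim` coordinates."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n              # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                      # double-centered squared dissimilarities
    evals, evecs = np.linalg.eigh(B)
    order = np.argsort(evals)[::-1][:dim]            # keep the largest eigenvalues
    scale = np.sqrt(np.maximum(evals[order], 0.0))
    return evecs[:, order] * scale                   # n x dim layout coordinates
```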


IEEE International Conference on High Performance Computing, Data, and Analytics | 2015

Database assisted distribution to improve fault tolerance for multiphysics applications

Robert S. Pavel; Allen McPherson; Timothy C. Germann; Christoph Junghans

Multiscale physics applications present an interesting problem from a computer science standpoint, as task granularity can vary drastically, which places a heavy burden upon the task scheduler and load balancer. Additionally, due to the long execution time of some of these computations, fault tolerance becomes a necessity: not being able to recover from a fault during a single long-running task results in the recomputation of all data used to generate its inputs. Traditionally, this is facilitated through the use of checkpointing. However, these checkpoints must be taken sparingly due to their high cost. In this paper, we describe our use of a NoSQL database and asynchronous task-based runtimes to work directly from the checkpoints themselves, with minimal code modifications by domain scientists. To evaluate the performance impact of this approach, we studied the CoHMM proxy application: a co-design proxy application designed to test modern runtimes by simulating the propagation of a shock wave through a material using the heterogeneous multiscale method. We distilled this proxy application into a library that we used to implement CoHMM in a range of runtimes, with and without our database-assisted approach, and we measured the overhead of each with respect to the CoHMM application and the cost of serializing and migrating data in the runtimes themselves.
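The core idea of working "directly from the checkpoints themselves" can be sketched as keying each task's result by a hash of its inputs in a shared store, so that a restarted run (or another worker) looks results up instead of recomputing them. The sketch below uses a plain dictionary as a stand-in for the NoSQL database (for example, a Redis client in a real deployment); the class and method names are hypothetical.

```python
import hashlib
import pickle

class CheckpointStore:
    """Result cache keyed by a hash of task inputs; a dict stands in for the NoSQL store."""

    def __init__(self):
        self._db = {}                                  # swap for a real database client

    @staticmethod
    def _key(inputs):
        return hashlib.sha256(pickle.dumps(inputs)).hexdigest()

    def run_or_restore(self, task, inputs):
        key = self._key(inputs)
        if key in self._db:                            # survived a fault: reuse the result
            return pickle.loads(self._db[key])
        result = task(*inputs)                         # expensive fine-scale task
        self._db[key] = pickle.dumps(result)           # the checkpoint is the result itself
        return result
```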


Archive | 2012

A 2-D Implicit, Energy and Charge Conserving Particle In Cell Method

Allen McPherson; Dana A. Knoll; Emmanuel B. Cieren; Nicolas Feltman; Christopher A. Leibs; Colleen McCarthy; Karthik S. Murthy; Yijie Wang

Recently, a fully implicit electrostatic 1D charge- and energy-conserving particle-in-cell algorithm was proposed and implemented by Chen et al. [2, 3]. Central to the algorithm is an advanced particle pusher. Particles are moved using an energy-conserving scheme and are forced to stop at cell faces to conserve charge. Moreover, a time estimator is used to control errors in momentum. Here we implement and extend this advanced particle pusher to include 2D and electromagnetic fields. Derivations of all modifications made are presented in full. Special consideration is taken to ensure easy coupling into the implicit moment-based method proposed by Taitano et al. [19]. Focus is then given to optimizing the presented particle pusher on emerging architectures. Two multicore implementations and one GPU (graphics processing unit) implementation are discussed and analyzed.
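The distinctive ingredient is the sub-stepped pusher that forces particles to stop at cell faces so that charge deposition stays consistent. The 1-D sketch below illustrates only that sub-stepping; it is explicit rather than the fully implicit, error-controlled update described above, and all names are illustrative.

```python
def push_to_faces(x, v, E, dt, dx, nx, q_over_m=1.0):
    """Advance one particle by dt, stopping exactly at every cell face it crosses."""
    t_left = dt
    while t_left > 0.0 and 0.0 < x < nx * dx:
        eps = 1e-12 * dx
        # Cell index, biased so a particle sitting on a face belongs to the
        # cell it is about to enter.
        cell = int((x + eps) // dx) if v >= 0 else int((x - eps) // dx)
        cell = min(max(cell, 0), nx - 1)
        accel = q_over_m * E[cell]                     # field taken constant within the cell
        face = (cell + 1) * dx if v >= 0 else cell * dx
        t_face = (face - x) / v if v != 0 else float("inf")
        step = min(t_left, t_face)                     # stop at the face or at dt
        x += v * step
        v += accel * step
        t_left -= step
    return x, v
```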


International Conference on Distributed Computing Systems | 2018

BeeFlow: A Workflow Management System for In Situ Processing across HPC and Cloud Systems

Jieyang Chen; Qiang Guan; Zhao Zhang; Xin Liang; Louis James Vernon; Allen McPherson; Li-Ta Lo; Patricia Grubel; Tim Randles; Zizhong Chen; James P. Ahrens

Collaboration


Dive into Allen McPherson's collaborations.

Top Co-Authors

James P. Ahrens, Los Alamos National Laboratory
Christoph Junghans, Los Alamos National Laboratory
Emmanuel B. Cieren, Los Alamos National Laboratory
Li-Ta Lo, Los Alamos National Laboratory
Robert S. Pavel, Los Alamos National Laboratory
Timothy C. Germann, Los Alamos National Laboratory
Bertrand Rouet-Leduc, Los Alamos National Laboratory
Christopher A. Leibs, University of Colorado Boulder
Dana A. Knoll, Los Alamos National Laboratory