Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Todd Harman is active.

Publication


Featured research published by Todd Harman.


Engineering with Computers | 2006

A component-based parallel infrastructure for the simulation of fluid–structure interaction

Steven G. Parker; James Guilkey; Todd Harman

The Uintah computational framework is a component-based infrastructure, designed for highly parallel simulations of complex fluid–structure interaction problems. Uintah utilizes an abstract representation of parallel computation and communication to express data dependencies between multiple physics components. These features allow parallelism to be integrated between multiple components while maintaining overall scalability. Uintah provides mechanisms for load-balancing, data communication, data I/O, and checkpoint/restart. The underlying infrastructure is designed to accommodate a range of PDE solution methods. The primary techniques described here are the material point method (MPM) for structural mechanics and a multi-material fluid mechanics capability. MPM employs a particle-based representation of solid materials that interact through a semi-structured background grid. We describe a scalable infrastructure for problems with large deformation, high strain rates, and complex material behavior. Uintah is a product of the University of Utah Center for Accidental Fires and Explosions (C-SAFE), a DOE-funded Center of Excellence. This approach has been used to simulate numerous complex problems, including the response of energetic devices subject to harsh environments such as hydrocarbon pool fires. This scenario involves a wide range of length and time scales, including a relatively slow heating phase punctuated by pressurization and rupture of the device.
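The abstract's core idea, expressing data dependencies between physics components so a scheduler can derive a parallel execution order, can be illustrated with a toy sketch. Uintah tasks declare the grid variables they require and compute; the class and function names below are illustrative stand-ins, not the real Uintah C++ API.

```python
# Toy sketch of Uintah-style task declaration: each task names the
# variables it requires and computes, and a scheduler derives a valid
# execution order from those data dependencies alone.
from graphlib import TopologicalSorter

class Task:
    def __init__(self, name, requires, computes):
        self.name, self.requires, self.computes = name, set(requires), set(computes)

def schedule(tasks):
    """Order tasks so every variable is computed before it is required."""
    producer = {var: t.name for t in tasks for var in t.computes}
    deps = {t.name: {producer[v] for v in t.requires if v in producer}
            for t in tasks}
    return list(TopologicalSorter(deps).static_order())

tasks = [
    Task("advect",   requires={"velocity"}, computes={"density"}),
    Task("momentum", requires=set(),        computes={"velocity"}),
    Task("mpm",      requires={"density"},  computes={"stress"}),
]
order = schedule(tasks)  # "momentum" before "advect" before "mpm"
```

Because the dependency graph, not the component author, dictates ordering, independent tasks from different physics components can run concurrently, which is the property the paper credits for integrating parallelism across components.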


Extreme Science and Engineering Discovery Environment | 2012

Radiation modeling using the Uintah heterogeneous CPU/GPU runtime system

Alan Humphrey; Qingyu Meng; Martin Berzins; Todd Harman

The Uintah Computational Framework was developed to provide an environment for solving fluid-structure interaction problems on structured adaptive grids on large-scale, long-running, data-intensive problems. Uintah uses a combination of fluid-flow solvers and particle-based methods for solids, together with a novel asynchronous task-based approach with fully automated load balancing. Uintah demonstrates excellent weak and strong scalability at full machine capacity on XSEDE resources such as Ranger and Kraken, and through the use of a hybrid memory approach based on a combination of MPI and Pthreads, Uintah now runs on up to 262k cores on the DOE Jaguar system. In order to extend Uintah to heterogeneous systems, with ever-increasing CPU core counts and additional on-node GPUs, a new dynamic CPU-GPU task scheduler is designed and evaluated in this study. This new scheduler enables Uintah to fully exploit these architectures with support for asynchronous, out-of-order scheduling of both CPU and GPU computational tasks. A new runtime system has also been implemented with an added multi-stage queuing architecture for efficient scheduling of CPU and GPU tasks. This new runtime system automatically handles the details of asynchronous memory copies to and from the GPU and introduces a novel method of pre-fetching and preparing GPU memory prior to GPU task execution. In this study this new design is examined in the context of a developing, hierarchical GPU-based ray tracing radiation transport model that provides Uintah with additional capabilities for heat transfer and electromagnetic wave propagation. The capabilities of this new scheduler design are tested by running at large scale on the modern heterogeneous systems Keeneland and TitanDev, with up to 360 and 960 GPUs, respectively. On these systems, we demonstrate significant speedups per GPU against a standard CPU core for our radiation problem.
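The multi-stage queuing idea can be sketched minimally: GPU tasks pass through a copy stage (standing in for an asynchronous host-to-device transfer) before becoming runnable, while CPU tasks overlap the GPU pipeline. All names are illustrative; Uintah's actual scheduler is a C++ runtime component.

```python
from collections import deque

# Minimal sketch of a multi-stage queue for mixed CPU/GPU tasks. GPU
# tasks must clear the "copy" stage before launch; CPU tasks are
# runnable immediately and execute out of order with respect to GPU work.
def run(tasks):
    cpu_ready, gpu_copy, gpu_ready, log = deque(), deque(), deque(), []
    for t in tasks:
        (gpu_copy if t["device"] == "gpu" else cpu_ready).append(t)
    while cpu_ready or gpu_copy or gpu_ready:
        if gpu_ready:                       # launch a task whose copy finished
            log.append(("gpu", gpu_ready.popleft()["name"]))
        if gpu_copy:                        # async H2D copy completes this step
            gpu_ready.append(gpu_copy.popleft())
        if cpu_ready:                       # CPU work overlaps the GPU pipeline
            log.append(("cpu", cpu_ready.popleft()["name"]))
    return log

log = run([{"name": "t1", "device": "gpu"}, {"name": "t2", "device": "cpu"}])
# The CPU task runs while the GPU task's copy is in flight.
```

Staging copies separately from launches is what lets the real runtime pre-fetch GPU memory before a GPU task executes, rather than stalling the launch on the transfer.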


International Parallel and Distributed Processing Symposium | 2016

Radiative Heat Transfer Calculation on 16384 GPUs Using a Reverse Monte Carlo Ray Tracing Approach with Adaptive Mesh Refinement

Alan Humphrey; Daniel Sunderland; Todd Harman; Martin Berzins

Modeling thermal radiation is computationally challenging in parallel due to its all-to-all physical and resulting computational connectivity, and is also the dominant mode of heat transfer in practical applications such as next-generation clean coal boilers, being modeled by the Uintah framework. However, a direct all-to-all treatment of radiation is prohibitively expensive on large computer systems, whether homogeneous or heterogeneous. DOE Titan and the planned DOE Summit and Sierra machines are examples of current and emerging GPU-based heterogeneous systems where the increased processing capability of GPUs over CPUs exacerbates this problem. These systems require that computational frameworks like Uintah leverage an arbitrary number of on-node GPUs, while simultaneously utilizing thousands of GPUs within a single simulation. We show that radiative heat transfer problems can be made to scale within Uintah on heterogeneous systems through a combination of reverse Monte Carlo ray tracing (RMCRT) techniques combined with AMR, to reduce the amount of global communication. In particular, significant Uintah infrastructure changes, including a novel lock- and contention-free, thread-scalable data structure for managing MPI communication requests and improved memory allocation strategies, were necessary to achieve excellent strong scaling results to 16384 GPUs on Titan.
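One way to avoid contention on a shared pool of communication request objects is to keep a free list per thread, so acquire and release never touch shared state. This is only an analogy for the idea; Uintah's actual data structure is a lock-free C++ container, and the names below are hypothetical.

```python
import threading

# Sketch of a contention-free request pool: each thread recycles its own
# Request objects through a thread-local free list, so no lock is needed.
_local = threading.local()

class Request:
    def __init__(self):
        self.in_use = False

def acquire():
    pool = getattr(_local, "pool", None)
    if pool is None:
        pool = _local.pool = []
    req = pool.pop() if pool else Request()  # reuse before allocating
    req.in_use = True
    return req

def release(req):
    req.in_use = False
    _local.pool.append(req)                  # return to this thread's list
```

Per-thread recycling also helps the "improved memory allocation strategies" the abstract mentions, since hot-path allocations are amortized away after warm-up.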


Archive | 2015

ASC ATDM Level 2 Milestone #5325: Asynchronous Many-Task Runtime System Analysis and Assessment for Next Generation Platforms.

Gavin Matthew Baker; Matthew Tyler Bettencourt; Steven W. Bova; Ken Franko; Marc Gamell; Ryan E. Grant; Simon D. Hammond; David S. Hollman; Samuel Knight; Hemanth Kolla; Paul Lin; Stephen L. Olivier; Gregory D. Sjaardema; Nicole Lemaster Slattengren; Keita Teranishi; Jeremiah J. Wilke; Janine C. Bennett; Robert L. Clay; Laxmikant Kale; Nikhil Jain; Eric Mikida; Alex Aiken; Michael Bauer; Wonchan Lee; Elliott Slaughter; Sean Treichler; Martin Berzins; Todd Harman; Alan Humphrey; John A. Schmidt

This report provides in-depth information and analysis to help create a technical road map for developing next-generation programming models and runtime systems that support Advanced Simulation and Computing (ASC) workload requirements. The focus herein is on asynchronous many-task (AMT) model and runtime systems, which are of great interest in the context of “exascale” computing, as they hold the promise to address key issues associated with future extreme-scale computer architectures. This report includes a thorough qualitative and quantitative examination of three best-of-class AMT runtime systems (Charm++, Legion, and Uintah), all of which are in use as part of the ASC Predictive Science Academic Alliance Program II (PSAAP-II) Centers. The studies focus on each of the runtimes' programmability, performance, and mutability. Through the experiments and analysis presented, several overarching findings emerge. From a performance perspective, AMT runtimes show tremendous potential for addressing extreme-scale challenges. Empirical studies show an AMT runtime can mitigate performance heterogeneity inherent to the machine itself and that Message Passing Interface (MPI) and AMT runtimes perform comparably under balanced conditions. From a programmability and mutability perspective, however, none of the runtimes in this study are currently ready for use in developing production-ready Sandia ASC applications. The report concludes by recommending a codesign path forward, wherein application, programming model, and runtime system developers work together to define requirements and solutions. Such a requirements-driven co-design approach benefits the high-performance computing (HPC) community as a whole, with widespread community engagement mitigating risk for both application developers and runtime system developers.
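The AMT model the report evaluates expresses work as many small tasks whose completion order is driven by data availability rather than a fixed bulk-synchronous schedule. A toy rendering of that idea, using Python futures purely as an analogy (Charm++, Legion, and Uintah each realize the model very differently):

```python
from concurrent.futures import ThreadPoolExecutor

# Schematic AMT-style computation: submit many independent tasks, then
# consume each result as it becomes available instead of synchronizing
# the whole set at a barrier.
def amt_sum_of_squares(values):
    with ThreadPoolExecutor() as pool:
        squares = [pool.submit(lambda v: v * v, v) for v in values]  # many tasks
        return sum(f.result() for f in squares)                      # data-driven joins
```

Under load imbalance or machine-level performance heterogeneity, fast tasks are not held back by slow ones until their results are actually needed, which is the mitigation effect the report's empirical studies observe.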


20th AIAA Computational Fluid Dynamics Conference | 2011

Efficient Parallelization of RMCRT for Large Scale LES Combustion Simulations

Isaac Hunsaker; Todd Harman; Jeremy Thornock; Philip J. Smith

At the high temperatures inherent to combustion systems, radiation is the dominant mode of heat transfer. An accurate simulation of a combustor therefore requires precise treatment of radiative heat transfer. This is accomplished by calculating the radiative-flux divergence at each cell of the discretized domain. Reverse Monte Carlo Ray Tracing (RMCRT) is one of the few numerical techniques that can accurately solve for the radiative-flux divergence while accounting, in an efficient manner, for the effects of participating media. Furthermore, RMCRT lends itself to massive parallelism because the intensities of each ray are mutually exclusive. Therefore, multiple rays can be traced simultaneously at any given time step. We have created a parallelized RMCRT algorithm that solves for the radiative-flux divergence in combustion systems. This algorithm has been verified against a 3D benchmark case involving participating media. The error of this algorithm converges with an increase in the number of rays traced per cell, such that at 700 rays per cell, the L2 error norm of a 41³ mesh is 0.49%. Our algorithm demonstrates strong scaling when run in parallel on 2 to 1536 processors for domains of 128³ and 256³ cells.
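The parallelism claim rests on each ray being an independent Monte Carlo sample, so a cell's estimate is simply the mean over its rays and the error shrinks as the ray count grows. A minimal sketch, where `intensity()` is a hypothetical stand-in for tracing one ray through participating media:

```python
import random

# Each ray contributes one independent sample; the per-cell estimate of
# the radiative-flux divergence is the mean over N rays, so rays can be
# traced in any order, or all at once in parallel.
def intensity(rng):
    return rng.random()          # placeholder for one traced ray's result

def cell_estimate(n_rays, seed=0):
    rng = random.Random(seed)
    return sum(intensity(rng) for _ in range(n_rays)) / n_rays
```

With the uniform placeholder the estimate converges toward 0.5, and the statistical error falls like 1/sqrt(N), which is the convergence-with-ray-count behavior the abstract reports.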


IEEE International Conference on High Performance Computing, Data, and Analytics | 2015

A Scalable Algorithm for Radiative Heat Transfer Using Reverse Monte Carlo Ray Tracing

Alan Humphrey; Todd Harman; Martin Berzins; Phillip Smith

Radiative heat transfer is an important mechanism in a class of challenging engineering and research problems. A direct all-to-all treatment of these problems is prohibitively expensive on large core counts due to pervasive all-to-all MPI communication. The massive heat transfer problem arising from the next generation of clean coal boilers being modeled by the Uintah framework has radiation as a dominant heat transfer mode. Reverse Monte Carlo ray tracing (RMCRT) can be used to solve for the radiative-flux divergence while accounting for the effects of participating media. The ray tracing approach used here replicates the geometry of the boiler on a multi-core node and then uses an all-to-all communication phase to distribute the results globally. The cost of this all-to-all is reduced by using an adaptive mesh approach in which a fine mesh is only used locally, and a coarse mesh is used elsewhere. A model for communication and computation complexity is used to predict performance of this new method. We show this model is consistent with observed results and demonstrate excellent strong scaling to 262K cores on the DOE Titan system on problem sizes that were previously computationally intractable.
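A back-of-the-envelope version of the saving from the local-fine / remote-coarse mesh idea: each node stores its own patch at fine resolution and the rest of the domain coarsened by a refinement ratio r, i.e. a factor r³ fewer cells in 3D. The formulas below are illustrative, not the paper's exact complexity model.

```python
# Memory/communication volume (in cells) for two strategies of making
# the whole domain visible to every node for ray tracing.
def cells_replicated(n_nodes, cells_per_patch):
    return n_nodes * cells_per_patch             # full fine-mesh copy everywhere

def cells_amr(n_nodes, cells_per_patch, r):
    remote = (n_nodes - 1) * cells_per_patch // r**3
    return cells_per_patch + remote              # local fine + coarse remainder
```

For 64 nodes with 32³-cell patches and r = 4, full replication needs 2,097,152 cells per node while the AMR variant needs 65,024, and the gap widens with node count, which is why the all-to-all phase stops dominating at scale.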


Extreme Science and Engineering Discovery Environment | 2012

Multiscale modeling of high explosives for transportation accidents

Joseph R. Peterson; Jacqueline C. Beckvermit; Todd Harman; Martin Berzins; Charles A. Wight

The development of a reaction model to simulate the accidental detonation of a large array of seismic boosters in a semi-truck subject to fire is considered. To test this model, large-scale simulations of explosions and detonations were performed by leveraging the massively parallel capabilities of the Uintah Computational Framework and the XSEDE computational resources. Computed stress profiles in bulk-scale explosive materials were validated using compaction simulations of hundred-micron-scale particles and found to compare favorably with experimental data. A validation study of reaction models for deflagration and detonation showed that computational grid cell sizes up to 10 mm could be used without loss of fidelity. The Uintah Computational Framework shows linear scaling up to 180K cores, which, combined with coarse resolution and validated models, will now enable simulations of semi-truck-scale transportation accidents for the first time.


2006 ASME International Mechanical Engineering Congress and Exposition, IMECE2006 | 2006

The effect of viscous dissipation on two-dimensional microchannel heat transfer

Jennifer van Rij; Tim Ameel; Todd Harman

Microchannel convective heat transfer characteristics in the slip flow regime are numerically evaluated for two-dimensional, steady state, laminar, constant wall heat flux and constant wall temperature flows. The effects of Knudsen number, accommodation coefficients, viscous dissipation, pressure work, second-order slip boundary conditions, axial conduction, and thermally/hydrodynamically developing flow are considered. The effects of these parameters on microchannel convective heat transfer are compared through the Nusselt number. Numerical values for the Nusselt number are obtained using a continuum-based three-dimensional, unsteady, compressible computational fluid dynamics algorithm that has been modified with slip boundary conditions. Numerical results are verified using analytic solutions for thermally and hydrodynamically fully developed flows. The resulting analytical and numerical Nusselt numbers are given as a function of Knudsen number, the first- and second-order velocity slip and temperature jump coefficients, the Peclet number, and the Brinkman number. Excellent agreement between numerical and analytical data is demonstrated. Viscous dissipation, pressure work, second-order slip terms, and axial conduction are all shown to have significant effects on Nusselt numbers in the slip flow regime.
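For reference, the dimensionless groups the abstract compares have standard definitions (the Brinkman number is shown for the constant-wall-heat-flux case; for constant wall temperature it is usually written with k(T_w − T_m) in the denominator):

```latex
% D_h: hydraulic diameter, u_m: mean velocity, \lambda: mean free path,
% h: heat transfer coefficient, k: thermal conductivity,
% \alpha: thermal diffusivity, q''_w: wall heat flux
\mathrm{Kn} = \frac{\lambda}{D_h}, \qquad
\mathrm{Nu} = \frac{h\,D_h}{k}, \qquad
\mathrm{Pe} = \frac{u_m D_h}{\alpha}, \qquad
\mathrm{Br} = \frac{\mu\,u_m^{2}}{q''_w\,D_h}
```

The slip flow regime itself is conventionally taken as roughly 0.001 < Kn < 0.1, where the continuum equations remain valid away from the wall but velocity slip and temperature jump must be imposed at the boundary, as the modified algorithm in the abstract does.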


Computing in Science and Engineering | 2013

Multiscale Modeling of Accidental Explosions and Detonations

Jacqueline C. Beckvermit; Joseph R. Peterson; Todd Harman; Scott Bardenhagen; Charles A. Wight; Qingyu Meng; Martin Berzins

The Uintah Computational Framework is the first software to enable effective simulation of the development of detonation in semi-truck-scale transportation accidents.


SIAM Journal on Scientific Computing | 2016

Extending the Uintah Framework through the Petascale Modeling of Detonation in Arrays of High Explosive Devices

Martin Berzins; Jacqueline C. Beckvermit; Todd Harman; Andrew Bezdjian; Alan Humphrey; Qingyu Meng; John A. Schmidt; Charles A. Wight

The Uintah software framework for the solution of a broad class of fluid-structure interaction problems has been developed by using a problem-driven approach that dates back to its inception. Uintah uses a layered task-graph approach that decouples the problem specification as a set of tasks from the adaptive runtime system that executes these tasks. Using this approach, it is possible to improve the performance of the software components to enable the solution of broad classes of problems as well as the driving problem itself. This process is illustrated by a motivating problem: the computational modeling of the hazards posed by thousands of explosive devices during a deflagration-to-detonation transition that occurred on Highway 6 in Utah. In order to solve this complex fluid-structure interaction problem at the required scale, substantial algorithmic and data structure improvements to Uintah were needed. These improvements enabled scalable runs for the target problem and provided the capability to mode...

Collaboration


Dive into Todd Harman's collaborations.

Top Co-Authors
