Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Jeremy Thornock is active.

Publication


Featured research published by Jeremy Thornock.


IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing | 2013

Large Scale Parallel Solution of Incompressible Flow Problems Using Uintah and Hypre

John A. Schmidt; Martin Berzins; Jeremy Thornock; Tony Saad; James C. Sutherland

The Uintah software framework was developed to provide an environment for solving fluid-structure interaction problems on structured adaptive grids for large-scale, long-running, data-intensive problems. Uintah uses a combination of fluid-flow solvers and particle-based methods for solids, together with a novel asynchronous task-based approach with fully automated load balancing. As Uintah is often used to solve incompressible flow problems in combustion applications, it is important to have a scalable linear solver. While there are many such solvers available, their scalability varies greatly. The hypre software offers a range of solvers and preconditioners for different types of grids. The weak scalability of Uintah and hypre is addressed for particular examples of both packages applied to a number of incompressible flow problems. After careful software engineering to reduce startup costs, much better than expected weak scalability is seen for up to 100K cores on NSF's Kraken architecture and up to 260K CPU cores on DOE's new Titan machine. The scalability is found to depend in a critical way on the choice of algorithm used by hypre for a realistic application problem.
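Weak scaling, the metric studied in this paper, holds the per-core problem size fixed while the core count grows; efficiency is then simply the ratio of baseline time to scaled time. A minimal sketch of that bookkeeping, where the core counts echo the paper's scale but the timings are invented for illustration:

```python
def weak_scaling_efficiency(t_base: float, t_scaled: float) -> float:
    """In ideal weak scaling the time per step stays constant as cores and
    problem size grow together, so efficiency = baseline time / scaled time."""
    return t_base / t_scaled

# Hypothetical seconds-per-timestep at increasing core counts; the work
# per core is held fixed, as in the weak-scaling study described above.
timings = {1_000: 10.0, 10_000: 10.4, 100_000: 11.1, 260_000: 12.5}
efficiency = {cores: weak_scaling_efficiency(timings[1_000], t)
              for cores, t in timings.items()}
# efficiency[260_000] == 0.8, i.e. 80% weak-scaling efficiency
```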


WIT Transactions on State-of-the-art in Science and Engineering | 2008

Heat Transfer to Objects in Pool Fires

Jennifer Spinti; Jeremy Thornock; Eric G. Eddings; Philip J. Smith; Adel F. Sarofim

In accident scenarios involving fire and the transport of explosive material, the time available for escape is dependent on the heat transfer rate from the fire to the energetic material. A review is presented of historical modeling approaches that draw on empiricism for estimating both heat flux from fires and fire hazard. While such methods can be used for conservative estimates of heat flux in determining safe separation distances, they cannot be used in situations where overestimating the heat flux may underestimate the hazard, such as the heating of high-energy explosives. Next, a large eddy simulation (LES) technique for addressing fire phenomena with embedded, heat-sensitive objects is described. With the advent of high performance computing, LES is emerging as a powerful tool for resolving a large set of spatial and temporal scales in fires and for capturing observed pool fire phenomena such as visible flame structures. The development of the LES approach described here is based on verification and validation (V&V) principles, utilizing a V&V hierarchy that is focused on the intended use of the simulation. This LES approach couples surrogate fuel representations of complex hydrocarbon fuels, reaction models for incorporation of the detailed chemical kinetics associated with the surrogate fuel, soot formation models, models for unresolved turbulence/chemistry interactions, radiative heat transfer models, and modifications to the LES algorithm for computing heat transfer to objects. The chapter concludes with an analysis of simulation and experimental data of heat transfer to embedded objects in large JP-8 pool fires and of time to ignition of an energetic device in such a fire. The analysis considers the role of validation, sensitivity analysis and uncertainty quantification in moving toward predictivity.


20th AIAA Computational Fluid Dynamics Conference 2011 | 2011

Efficient Parallelization of RMCRT for Large Scale LES Combustion Simulations

Isaac Hunsaker; Todd Harman; Jeremy Thornock; Philip J. Smith

At the high temperatures inherent to combustion systems, radiation is the dominant mode of heat transfer. An accurate simulation of a combustor therefore requires precise treatment of radiative heat transfer. This is accomplished by calculating the radiative-flux divergence at each cell of the discretized domain. Reverse Monte Carlo Ray Tracing (RMCRT) is one of the few numerical techniques that can accurately solve for the radiative-flux divergence while accounting, in an efficient manner, for the effects of participating media. Furthermore, RMCRT lends itself to massive parallelism because the intensities of the rays are mutually independent; multiple rays can therefore be traced simultaneously at any given time step. We have created a parallelized RMCRT algorithm that solves for the radiative-flux divergence in combustion systems. This algorithm has been verified against a 3D benchmark case involving participating media. The error of this algorithm converges with an increase in the number of rays traced per cell, such that at 700 rays per cell, the L2 error norm of a 41³ mesh is 0.49%. Our algorithm demonstrates strong scaling when run in parallel on 2 to 1536 processors for domains of 128³ and 256³ cells.
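The reported convergence with ray count is characteristic of Monte Carlo methods, whose sampling error shrinks roughly like 1/√N. A toy sketch of that behavior, where the integrand and sample counts are illustrative stand-ins rather than the actual RMCRT kernels:

```python
import random

def mc_estimate(n_rays: int, seed: int = 0) -> float:
    """Toy Monte Carlo estimator: average a smooth 'intensity' over random
    samples, standing in for averaging per-cell ray intensities in RMCRT."""
    rng = random.Random(seed)
    return sum(rng.random() ** 2 for _ in range(n_rays)) / n_rays

# The exact mean of x^2 on [0, 1) is 1/3; the sampling error should fall
# roughly like 1/sqrt(n_rays) as more rays are traced.
errors = {n: abs(mc_estimate(n) - 1.0 / 3.0) for n in (100, 10_000, 1_000_000)}
```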


Proceedings of the Second International Workshop on Extreme Scale Programming Models and Middleware | 2016

An Overview of Performance Portability in the Uintah Runtime System Through the Use of Kokkos

Daniel Sunderland; Brad Peterson; John A. Schmidt; Alan Humphrey; Jeremy Thornock; Martin Berzins

The current diversity in nodal parallel computer architectures is seen in machines based upon multicore CPUs, GPUs, and the Intel Xeon Phi. A class of approaches for enabling scalability of complex applications on such architectures is based upon asynchronous many-task software architectures such as that in the Uintah framework, used for the parallel solution of solid and fluid mechanics problems. Uintah has both an applications layer with its own programming model and a separate runtime system. While Uintah scales well today, it is necessary to address nodal performance portability in order for it to continue to do so. Incrementally modifying Uintah to use the Kokkos performance portability library through prototyping experiments improved kernel performance by more than a factor of two.


Journal of Verification, Validation and Uncertainty Quantification | 2015

A Validation of Flare Combustion Efficiency Predictions from Large Eddy Simulations

Anchal Jatale; Philip J. Smith; Jeremy Thornock; Sean T. Smith; Michal Hradisky

Societal concerns about the widespread flaring of waste gases have motivated methods for predicting the combustion efficiency of industrial flare systems under high-crosswind conditions. The objective of this paper is to demonstrate, with a quantified degree of accuracy, a prediction procedure for the combustion efficiency of industrial flares in crosswind using large eddy simulation (LES). LES is shown to resolve the important mixing between fuel and entrained air governing the extent of reaction to within less than a percent of combustion efficiency. Experimental data from the 4-inch flare tests performed at the CanmetENERGY wind tunnel flare facility were used as measured metrics to validate the simulation with quantified uncertainty. The approach used prior information about the models and experimental data, with the associated likelihood functions, to determine informative posterior distributions. The model values were subjected to a consistency constraint, which requires that all experiments and simulations be bounded by their individual experimental uncertainty. The final result was a predictive capability (in the nearby regime) for flare combustion efficiency where no or sparse experimental data are available, but where the validation process produces error bars for the predicted combustion efficiency.
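The consistency constraint described in this abstract can be sketched in its simplest form: every model prediction must lie within the error bars of its experiment. All numbers below are hypothetical illustrations, not the CanmetENERGY data:

```python
def consistent(predictions, measurements, uncertainties):
    """Simplest form of the consistency constraint: every prediction must
    fall within the error bars of its corresponding experiment."""
    return all(abs(p - m) <= u
               for p, m, u in zip(predictions, measurements, uncertainties))

# Hypothetical combustion-efficiency data (fractions), not the paper's values.
meas = [0.98, 0.95, 0.90]
unc = [0.02, 0.02, 0.03]
ok = consistent([0.97, 0.96, 0.92], meas, unc)   # within all error bars
bad = consistent([0.90, 0.96, 0.92], meas, unc)  # first point is off by 0.08
```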


4th Asian Conference on Supercomputing Frontiers, SCFA 2018 | 2018

Scalable Data Management of the Uintah Simulation Framework for Next-Generation Engineering Problems with Radiation

Sidharth Kumar; Alan Humphrey; Will Usher; Steve Petruzza; Brad Peterson; John A. Schmidt; Derek Harris; Ben Isaac; Jeremy Thornock; Todd Harman; Valerio Pascucci; Martin Berzins

The need to scale next-generation industrial engineering problems to the largest computational platforms presents unique challenges. This paper focuses on data management problems faced by the Uintah simulation framework at a production scale of 260K processes. Uintah provides a highly scalable asynchronous many-task runtime system, which in this work is used for the modeling of a 1000 megawatt electric (MWe) ultra-supercritical (USC) coal boiler. At 260K processes, we faced both parallel I/O and visualization challenges; for example, the default file-per-process I/O approach of Uintah did not scale on Mira. In this paper we present a simple-to-implement, restructuring-based parallel I/O technique. We impose a restructuring step that alters the distribution of data among processes. The goal is to distribute the dataset such that each process holds a larger chunk of data, which is then written to a file independently. This approach finds a middle ground between two of the most common parallel I/O schemes, file-per-process I/O and shared-file I/O, in terms of both the total number of generated files and the extent of communication involved during the data aggregation phase. To address scalability issues when visualizing the simulation data, we developed a lightweight renderer using OSPRay, which allows scientists to visualize the data interactively at high quality and make production movies. Finally, this work presents a highly efficient and scalable radiation model based on the sweeping method, which significantly outperforms previous approaches in Uintah, such as discrete ordinates. The integrated approach allowed the USC boiler problem to run on 260K CPU cores on Mira.
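The restructuring idea, grouping ranks so that each group aggregates its data and writes one larger file, can be sketched as a rank-to-file mapping; a group size of 1 recovers file-per-process I/O and a single group recovers shared-file I/O. The process count below echoes the paper's scale, but the group size of 512 is an invented parameter:

```python
def assign_ranks_to_files(n_procs: int, group_size: int) -> dict[int, int]:
    """Map each MPI-style rank to an output file; the ranks in a group
    aggregate their data and write one file between them. group_size=1 is
    file-per-process I/O; group_size=n_procs is a single shared file."""
    return {rank: rank // group_size for rank in range(n_procs)}

mapping = assign_ranks_to_files(260_000, 512)  # 512 is an invented group size
n_files = len(set(mapping.values()))           # 508 files instead of 260,000
```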


ASME 2012 Heat Transfer Summer Conference collocated with the ASME 2012 Fluids Engineering Division Summer Meeting and the ASME 2012 10th International Conference on Nanochannels, Microchannels, and Minichannels | 2012

A New Model for Virtual Radiometers

Isaac Hunsaker; David J. Glaze; Jeremy Thornock; Philip J. Smith

There exists a general need to compare radiative fluxes from experimental radiometers with fluxes computed in thermal/fluid simulations. Unfortunately, typical numerical simulation suites lack the ability to predict fluxes to objects with small view angles, thus preventing validation of simulation results. A new model has been developed that allows users to specify arbitrary view angles, orientations, and locations of multiple radiometers and receive as output high-accuracy radiative fluxes to these radiometers. This virtual radiometer model incorporates a reverse Monte Carlo ray tracing algorithm adapted to meet these user specifications and runs on both unstructured and structured meshes. Verification testing of the model demonstrated the expected order of convergence. Validation testing showed good agreement between calculated fluxes from the model and measured fluxes from radiometers used in propellant fires. Sandia National Laboratories is a multiprogram laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
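A key ingredient of such a virtual radiometer is restricting sampled ray directions to the instrument's view angle. One standard way to do this, shown here as an illustrative sketch rather than the paper's actual sampling scheme, is to draw directions uniformly over the spherical cap of a cone:

```python
import math
import random

def sample_direction_in_cone(half_angle_rad: float, rng: random.Random):
    """Sample a unit vector uniformly over the spherical cap of a cone about
    the +z axis, mimicking a radiometer's restricted view angle."""
    # Choosing cos(theta) uniformly in [cos(half_angle), 1] yields samples
    # that are uniform in solid angle over the cap.
    cos_t = 1.0 - rng.random() * (1.0 - math.cos(half_angle_rad))
    sin_t = math.sqrt(1.0 - cos_t * cos_t)
    phi = 2.0 * math.pi * rng.random()
    return (sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t)

rng = random.Random(42)
# Hypothetical 15-degree view half-angle; every sampled direction stays
# inside the cone and has unit length.
dirs = [sample_direction_in_cone(math.radians(15.0), rng) for _ in range(1000)]
```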


IEEE International Conference on High Performance Computing, Data, and Analytics | 2011

Large eddy simulation of industrial flares

Philip J. Smith; Jeremy Thornock; Dan Hinckley; Michal Hradisky

At the Institute for Clean and Secure Energy at the University of Utah, we are focused on education through interdisciplinary research on high-temperature fuel-utilization processes for energy generation and the associated health, environmental, policy, and performance issues. We also work closely with government agencies and private industry to promote rapid deployment of new technologies through the use of high-performance computational tools. Industrial flare simulation can provide information on combustion efficiency, pollutant emissions, and operational parameter sensitivities for design or operation that cannot be measured. These simulations may help in designing or operating flares so as to reduce or eliminate harmful pollutants and increase combustion efficiency. Fires and flares have been particularly difficult to simulate with traditional computational fluid dynamics (CFD) tools based on Reynolds-Averaged Navier-Stokes (RANS) approaches: the large-scale mixing due to vortical coherent structures in these flames is not readily reduced to steady-state RANS calculations. Large eddy simulation (LES) has made it possible to more accurately simulate the complex combustion seen in these flares. Resolution of all length and time scales is not possible even on the largest supercomputers; LES is a numerical technique that resolves the large length and time scales while using models for the more homogeneous smaller scales. By using LES, the simulations capture the buoyancy-driven puffing seen in industrial flares. All of our simulations were performed using either the University of Utah's ARCHES simulation tool or the commercially available Star-CCM+ software.
ARCHES is a finite-volume large eddy simulation code built within the Uintah framework, which is a set of software components and libraries that facilitate the solution of partial differential equations on structured adaptive mesh refinement grids using thousands of processors. Uintah is the product of a ten-year partnership with the Department of Energy's Advanced Simulation and Computing (ASC) program through the University of Utah's Center for Simulation of Accidental Fires and Explosions (C-SAFE). The ARCHES component was initially designed for predicting the heat flux from large buoyant pool fires to potential hazards immersed in or near a pool fire of transportation fuel. Since then, this component has been extended to solve many industrially relevant problems such as industrial flares, oxy-coal combustion processes, and fuel gasification. In this report we showcase selected results that help us visualize and understand the physical processes occurring in the simulated systems. Most of the simulations were completed on the University of Utah's Updraft and Ember high-performance computing clusters, which are managed by the Center for High Performance Computing. High-performance computational tools are essential to our research, and we promote their use beyond the research environment by working directly with our industry partners.


IEEE International Conference on High Performance Computing, Data, and Analytics | 2011

Large eddy simulation of a turbulent buoyant helium plume

Philip J. Smith; Michal Hradisky; Jeremy Thornock; Jennifer Spinti; Diem Nguyen

At the Institute for Clean and Secure Energy at the University of Utah, we are focused on education through interdisciplinary research on high-temperature fuel-utilization processes for energy generation and the associated health, environmental, policy, and performance issues. We also work closely with government agencies and private industry to promote rapid deployment of new technologies through the use of high-performance computational tools. Buoyant flows are encountered in many situations of engineering and environmental importance, including fires, subsea and atmospheric exhaust phenomena, gas releases, and geothermal events. Buoyancy-driven flows also play a key role in physical processes such as the spread of smoke or toxic gases from fires. As such, buoyant flow experiments are an important step in developing and validating simulation tools for numerical techniques such as large eddy simulation (LES) for predictive use on complex systems. LES is a turbulence modeling approach that provides a much greater degree of resolution of physical scales than the more common Reynolds-Averaged Navier-Stokes models. The validation activity requires increasing levels of complexity to sequentially quantify the effects of coupling additional physics and to explore the effects of scale on the objectives of the simulation. In this project we are using buoyant flows to examine the validity and accuracy of numerical techniques. By using the non-reacting buoyant helium plume flow, we can study the generation of turbulence due to buoyancy, uncoupled from the complexities of combustion chemistry. We are performing large eddy simulation of a one-meter-diameter buoyancy-driven helium plume using two software simulation tools: ARCHES and Star-CCM+.
ARCHES is a finite-volume large eddy simulation code built within the Uintah framework, which is a set of software components and libraries that facilitate the solution of partial differential equations on structured adaptive mesh refinement grids using thousands of processors. Uintah is the product of a ten-year partnership with the Department of Energy's Advanced Simulation and Computing (ASC) program through the University of Utah's Center for Simulation of Accidental Fires and Explosions (C-SAFE). The ARCHES component was initially designed for predicting the heat flux from large buoyant pool fires to potential hazards immersed in or near a pool fire of transportation fuel. Since then, this component has been extended to solve many industrially relevant problems such as industrial flares, oxy-coal combustion processes, and fuel gasification. The second simulation tool, Star-CCM+, is a commercial, integrated software environment developed by CD-adapco that supports the entire engineering simulation process, starting with CAD preparation, meshing, and model setup, and continuing with running simulations, post-processing, and visualizing the results. This allows for faster development and design turnaround, especially for industry-type applications. Star-CCM+ was built from the ground up to provide scalable parallel performance. Furthermore, it is supported not only on industry-standard Linux HPC platforms but also on Windows HPC, allowing us to explore computational demands on both Linux- and Windows-based HPC clusters.


Archive | 2012

Oxy-coal Combustion Studies

Jost O.L. Wendt; Eric G. Eddings; JoAnn S. Lighty; Terry A. Ring; Philip J. Smith; Jeremy Thornock; W. Morris; Y. Jia; J. Pedel; D. Rezeai; L. Wang; J. Zhang; Kerry E. Kelly

The objective of this project is to move toward the development of a predictive capability with quantified uncertainty bounds for pilot-scale, single-burner, oxy-coal operation. This validation research brings together multi-scale experimental measurements and computer simulations. The combination of simulation development and validation experiments is designed to lead to predictive tools for the performance of existing air fired pulverized coal boilers that have been retrofitted to various oxy-firing configurations. In addition, this report also describes novel research results related to oxy-combustion in circulating fluidized beds. For pulverized coal combustion configurations, particular attention is focused on the effect of oxy-firing on ignition and coal-flame stability, and on the subsequent partitioning mechanisms of the ash aerosol.

Collaboration


Dive into Jeremy Thornock's collaborations.

Top Co-Authors

Alessandro Parente

Université libre de Bruxelles
