Leigh Orf
Central Michigan University
Publications
Featured research published by Leigh Orf.
International Conference on Cluster Computing | 2012
Matthieu Dorier; Gabriel Antoniu; Franck Cappello; Marc Snir; Leigh Orf
With exascale computing on the horizon, the performance variability of I/O systems represents a key challenge in sustaining high performance. In many HPC applications, I/O is concurrently performed by all processes, which leads to I/O bursts. This causes resource contention and substantial variability of I/O performance, which significantly impacts the overall application performance and, most importantly, its predictability over time. In this paper, we propose a new approach to I/O, called Damaris, which leverages dedicated I/O cores on each multicore SMP node, along with the use of shared-memory, to efficiently perform asynchronous data processing and I/O in order to hide this variability. We evaluate our approach on three different platforms including the Kraken Cray XT5 supercomputer (ranked 11th in Top500), with the CM1 atmospheric model, one of the target HPC applications for the Blue Waters postpetascale supercomputer project. By overlapping I/O with computation and by gathering data into large files while avoiding synchronization between cores, our solution brings several benefits: 1) it fully hides jitter as well as all I/O-related costs, which makes simulation performance predictable, 2) it increases the sustained write throughput by a factor of 15 compared to standard approaches, 3) it allows almost perfect scalability of the simulation up to over 9,000 cores, as opposed to state-of-the-art approaches which fail to scale, 4) it enables a 600% compression ratio without any additional overhead, leading to a major reduction of storage requirements.
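The dedicated-core design described above can be illustrated with a short sketch. The following is not the Damaris API but a minimal mpi4py illustration of the underlying pattern, assuming one rank per multicore node is set aside for I/O while the remaining ranks compute and hand their output to it asynchronously; the field shape and message tag are placeholders.

```python
# A minimal sketch of the dedicated-core pattern (not the Damaris API):
# the lowest rank on each shared-memory node is set aside for I/O, and the
# remaining ranks hand it their output asynchronously instead of writing
# to the file system themselves. Field shape and message tag are placeholders.
from mpi4py import MPI
import numpy as np

world = MPI.COMM_WORLD
node_comm = world.Split_type(MPI.COMM_TYPE_SHARED)  # ranks sharing a node
is_io_core = node_comm.Get_rank() == 0

FIELD_TAG = 77

if is_io_core:
    # Dedicated I/O core: collect one field per compute rank on this node
    # and write them while the compute ranks move on to the next timestep.
    for _ in range(node_comm.Get_size() - 1):
        field = node_comm.recv(source=MPI.ANY_SOURCE, tag=FIELD_TAG)
        # ... aggregate `field` into one large file per node here ...
else:
    # Compute rank: post a non-blocking send to the node's I/O core so the
    # cost of writing is hidden behind the next chunk of computation.
    field = np.random.rand(64, 64, 64).astype(np.float32)  # placeholder data
    req = node_comm.isend(field, dest=0, tag=FIELD_TAG)
    # ... continue computing the next timestep here ...
    req.wait()
```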
Journal of the Atmospheric Sciences | 1996
Leigh Orf; John R. Anderson; Jerry M. Straka
Abstract A parameter study of colliding microburst outflows is performed using a high-resolution three-dimensional model. The colliding microburst pairs are simulated in a domain of 18 km × 16 km × 4.25 km with 50-m resolution. Microburst pairs are examined in varying space and time separations, and the authors find that for certain geometries strong elevated wind fields are generated from the interactions between outflows. For a narrow range of space-time geometries, this elevated wind field is extremely divergent. An examination of the F-factor aircraft hazard parameter reveals that both the divergent wind fields and microburst downdraft cores are regions of danger to jet aircraft. Trajectory analysis reveals that the air composing the elevated jets can be traced back to the shallow outflow formed beneath each microburst core. An analysis of the parcel kinetic energy budget indicates that the pressure domes beneath and between the microbursts are the primary mechanisms for directing energy into the eleva...
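For readers unfamiliar with the F-factor mentioned above, a hedged sketch of its commonly used form (after Bowles) is shown below; the threshold and example numbers are illustrative and are not taken from the paper.

```python
# F-factor wind-shear hazard index in its commonly used form
# F = (dU/dt)/g - w/V: dU/dt is the rate of change of the along-track
# horizontal wind (m s^-2), w the vertical wind (m s^-1, positive up),
# and V the aircraft airspeed (m s^-1). Sustained values above roughly
# 0.1 are generally treated as hazardous. Example numbers are illustrative.
G = 9.81  # gravitational acceleration, m s^-2

def f_factor(du_dt, w, airspeed):
    return du_dt / G - w / airspeed

# A 0.5 m s^-2 headwind-to-tailwind shear plus a 6 m s^-1 downdraft
# at a 75 m s^-1 approach speed:
print(f_factor(du_dt=0.5, w=-6.0, airspeed=75.0))  # ~0.13, above the ~0.1 threshold
```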
Monthly Weather Review | 1999
Leigh Orf; John R. Anderson
Abstract An analysis of traveling microbursts in unidirectionally sheared environments is undertaken using a three-dimensional numerical model with 50-m resolution in a 19 × 12 × 4 km domain. For each run, the cooling source is centered at a height of 2 km and travels eastward at a speed Cm, where Cm = 3, 6, 9, 12, and 15 m s−1. Environmental winds above 2 km are equal to Cm and decay linearly to 0 m s−1 below 2 km. The authors examine the kinetic energy budget of each run, focusing on the dynamic features that are not found in a static microburst simulation. As the source speed Cm increases from 0 to 9 m s−1, the magnitude of the surface horizontal winds increases in the direction of source movement. An examination of the dynamic pressure equation shows that rotationally induced pressure work forces are primarily responsible for increasing surface horizontal winds for the moving-source microbursts. In a similar form to previous studies of vertical perturbations in a sheared environment, elevated h...
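The environmental wind profile used in these runs is simple enough to state as a one-line function; the sketch below is an assumed restatement of the profile described in the abstract, not the authors' model code.

```python
# Eastward environmental wind (m s^-1) as described above: equal to the
# source speed Cm above 2 km and decaying linearly to zero at the surface.
def environmental_wind(z_m, cm=9.0):
    source_height_m = 2000.0
    return cm * min(z_m / source_height_m, 1.0)

for z in (0, 500, 1000, 2000, 3000):
    print(z, environmental_wind(z))  # 0.0, 2.25, 4.5, 9.0, 9.0
```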
Meteorology and Atmospheric Physics | 1992
J. R. Anderson; Leigh Orf; Jerry M. Straka
Summary A new three-dimensional numerical model system has been designed to study the complex near-surface flow features that arise from collisions between microburst outflow events and other sub-cloud phenomena with complex geometry. The model was designed specifically for implementation on massively parallel computers, and makes use of the high computation rates and large memory sizes of these machines to achieve spatial resolutions of 50 m or less in each dimension. Here we will report on one of the first model applications, a parameter study of colliding microburst outflows. Results from this study indicate that the collision zone between the two downdrafts can be a region of violent and complicated dynamics which can often lead to an elevated region of significant aircraft hazard.
Bulletin of the American Meteorological Society | 2017
Leigh Orf; Robert B. Wilhelmson; Bruce D. Lee; Catherine A. Finley; Adam L. Houston
Abstract Tornadoes are among nature’s most destructive forces. The most violent, long-lived tornadoes form within supercell thunderstorms. Tornadoes ranked EF4 and EF5 on the Enhanced Fujita scale that exhibit long paths are the least common but most damaging and deadly type of tornado. In this article we describe an ultra-high-resolution (30-m gridpoint spacing) simulation of a supercell that produces a long-track tornado that exhibits instantaneous near-surface storm-relative winds reaching as high as 143 m s−1. The computational framework that enables this work is described, including the Blue Waters supercomputer, the CM1 cloud model, a data management framework built around the HDF5 scientific data format, and the VisIt and Vapor visualization tools. We find that tornadogenesis occurs in concert with processes not clearly seen in previous supercell simulations, including the consolidation of numerous vortices and vorticity patches along the storm’s forward-flank downdraft boundary and the intensificat...
Parallel Computing | 2016
Matthieu Dorier; Gabriel Antoniu; Franck Cappello; Marc Snir; Roberto Sisneros; Orcun Yildiz; Shadi Ibrahim; Tom Peterka; Leigh Orf
With exascale computing on the horizon, reducing performance variability in data management tasks (storage, visualization, analysis, etc.) is becoming a key challenge in sustaining high performance. This variability significantly impacts the overall application performance at scale and its predictability over time. In this article, we present Damaris, a system that leverages dedicated cores in multicore nodes to offload data management tasks, including I/O, data compression, scheduling of data movements, in situ analysis, and visualization. We evaluate Damaris with the CM1 atmospheric simulation and the Nek5000 computational fluid dynamic simulation on four platforms, including NICS’s Kraken and NCSA’s Blue Waters. Our results show that (1) Damaris fully hides the I/O variability as well as all I/O-related costs, thus making simulation performance predictable; (2) it increases the sustained write throughput by a factor of up to 15 compared with standard I/O approaches; (3) it allows almost perfect scalability of the simulation up to over 9,000 cores, as opposed to state-of-the-art approaches that fail to scale; and (4) it enables a seamless connection to the VisIt visualization software to perform in situ analysis and visualization in a way that impacts neither the performance of the simulation nor its variability. In addition, we extended our implementation of Damaris to also support the use of dedicated nodes and conducted a thorough comparison of the two approaches—dedicated cores and dedicated nodes—for I/O tasks with the aforementioned applications.
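The dedicated-node variant compared in this article can be sketched in the same spirit as the dedicated-core sketch above. Again, this is an illustration of the layout rather than the Damaris implementation, and the number of I/O ranks, the rank-to-node placement, and the message tag are assumptions.

```python
# A minimal sketch of the dedicated-node layout (not the Damaris API):
# a fixed set of ranks, assumed to be placed on their own nodes by the
# scheduler, act as data-management ranks; compute ranks ship their
# output to them over MPI instead of to a sibling core on the same node.
from mpi4py import MPI
import numpy as np

world = MPI.COMM_WORLD
N_IO_RANKS = 4          # assumption: enough ranks to fill the reserved nodes
FIELD_TAG = 11
is_io = world.Get_rank() < N_IO_RANKS

if is_io:
    me = world.Get_rank()
    # Compute ranks are statically mapped to I/O ranks by rank modulo.
    senders = [r for r in range(N_IO_RANKS, world.Get_size())
               if r % N_IO_RANKS == me]
    remaining = len(senders)
    while remaining > 0:
        field = world.recv(source=MPI.ANY_SOURCE, tag=FIELD_TAG)
        if field is None:    # shutdown sentinel from a compute rank
            remaining -= 1
        else:
            pass  # ... write `field`, compress it, or run in situ analysis ...
else:
    io_target = world.Get_rank() % N_IO_RANKS
    field = np.random.rand(64, 64, 64).astype(np.float32)  # placeholder data
    world.send(field, dest=io_target, tag=FIELD_TAG)
    world.send(None, dest=io_target, tag=FIELD_TAG)         # signal completion
```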
International Conference on Cluster Computing | 2016
Matthieu Dorier; Robert Sisneros; Leonardo Bautista Gomez; Tom Peterka; Leigh Orf; Lokman Rahmani; Gabriel Antoniu; Luc Bougé
While many parallel visualization tools now provide in situ visualization capabilities, the trend has been to feed such tools with large amounts of unprocessed output data and let them render everything at the highest possible resolution. This leads to an increased run time of simulations that still have to complete within a fixed-length job allocation. In this paper, we tackle the challenge of enabling in situ visualization under performance constraints. Our approach shuffles data across processes according to its content and filters out part of it in order to feed a visualization pipeline with only a reorganized subset of the data produced by the simulation. Our framework leverages fast, generic evaluation procedures to score blocks of data, using information theory, statistics, and linear algebra. It monitors its own performance and adapts dynamically to achieve appropriate visual fidelity within predefined performance constraints. Experiments on the Blue Waters supercomputer with the CM1 simulation show that our approach enables a 5x speedup with respect to the initial visualization pipeline and is able to meet performance constraints.
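As a rough illustration of the content-based scoring and filtering described above (not the actual framework, which uses a broader family of information-theoretic, statistical, and linear-algebra measures), the sketch below scores fixed-size blocks of a 3D field by histogram entropy and keeps only the highest-scoring fraction; the block size and budget are assumptions.

```python
# Score blocks of a 3D field by Shannon entropy and keep only the top
# fraction, as a stand-in for the richer scoring used in the paper.
import numpy as np

def block_entropy(block, bins=32):
    """Shannon entropy (bits) of a histogram of the block's values."""
    hist, _ = np.histogram(block, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def select_blocks(field, block=16, budget=0.25):
    """Split a 3D field into cubes and keep the top `budget` fraction by score."""
    nz, ny, nx = (s // block for s in field.shape)
    scored = []
    for k in range(nz):
        for j in range(ny):
            for i in range(nx):
                cube = field[k*block:(k+1)*block,
                             j*block:(j+1)*block,
                             i*block:(i+1)*block]
                scored.append(((k, j, i), block_entropy(cube)))
    scored.sort(key=lambda kv: kv[1], reverse=True)
    keep = max(1, int(budget * len(scored)))
    return [idx for idx, _ in scored[:keep]]

field = np.random.rand(64, 64, 64).astype(np.float32)
print(select_blocks(field)[:5])  # indices of the highest-scoring blocks
```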
Parallel Computing | 2016
Leigh Orf; Robert B. Wilhelmson; Louis J. Wicker
A breakthrough thunderstorm simulation is rendered with high fidelity. Cloud model I/O is improved by utilizing buffered HDF5 I/O. VisIt and Vapor software tools are used to create high-quality imagery. Visualization techniques reveal new flow structures never seen before. Tornadoes are one of nature’s most destructive forces, creating winds that can exceed 300 miles per hour. The strongest tornadoes are produced by supercells, long-lived thunderstorms characterized by a persistent rotating updraft. The sheer destructive power of the strongest class of tornado (EF5) makes these tornadoes the subject of active research. However, very little is currently known about why some supercells produce long-track (a long damage path) EF5 tornadoes, while other storms in similar environments produce short-lived, weak tornadoes, or produce no tornado at all. Recently, a breakthrough simulation was conducted on the Blue Waters supercomputer in which a simulated supercell produces an EF5 tornado that is on the ground for almost two hours. In this paper we report on the visualizations illuminating the simulation, which elucidate three-dimensional features thought to play an important role in creating and maintaining the tornado. Several obstacles needed to be overcome in order to produce the visualization of this simulation, including managing nearly 100 TB of model output, interfacing the model output format to high-quality visualization tools, and choosing effective visualization parameters.
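The "buffered HDF5 I/O" highlight can be illustrated with a small sketch: several timesteps are accumulated in memory and flushed in one call, so the file system sees a few large writes instead of many small ones. This is an illustration using h5py, not the CM1 I/O code; the dataset name, array shapes, and buffer depth are assumptions.

```python
# Buffer several timesteps of a 3D field in memory and append them to an
# extendable HDF5 dataset in one call per flush.
import h5py
import numpy as np

class BufferedWriter:
    def __init__(self, path, shape, buffer_steps=10):
        self.buf = np.empty((buffer_steps,) + shape, dtype=np.float32)
        self.n = 0
        self.f = h5py.File(path, "w")
        self.dset = self.f.create_dataset(
            "w_field", shape=(0,) + shape, maxshape=(None,) + shape,
            chunks=(buffer_steps,) + shape, dtype=np.float32)

    def append(self, field):
        self.buf[self.n] = field
        self.n += 1
        if self.n == self.buf.shape[0]:
            self.flush()

    def flush(self):
        if self.n == 0:
            return
        old = self.dset.shape[0]
        self.dset.resize(old + self.n, axis=0)       # grow the time axis
        self.dset[old:old + self.n] = self.buf[:self.n]
        self.n = 0

    def close(self):
        self.flush()
        self.f.close()

writer = BufferedWriter("storm.h5", shape=(32, 32, 32))
for _ in range(25):
    writer.append(np.random.rand(32, 32, 32).astype(np.float32))
writer.close()
```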
International Conference on Cluster Computing | 2017
Shaomeng Li; Sudhanshu Sane; Leigh Orf; Pablo D. Mininni; John Clyne; Hank Childs
Data reduction through compression is emerging as a promising approach to ease I/O costs for simulation codes on supercomputers. Typically, this compression is achieved by techniques that operate on individual time slices. However, as simulation codes advance in time, outputting multiple time slices as they go, the opportunity for compression incorporating the time dimension has not been extensively explored. Moreover, recent supercomputers are increasingly equipped with deeper memory hierarchies, including solid state drives and burst buffers, which creates the opportunity to temporarily store multiple time slices and then apply compression to them all at once, i.e., spatiotemporal compression. This paper explores the benefits of incorporating the time dimension into existing wavelet compression, including studying its key parameters and demonstrating its benefits in three axes: storage, accuracy, and temporal resolution. Our results demonstrate that temporal compression can improve each of these axes, and that the impact on performance for real systems, including tradeoffs in memory usage and execution time, is acceptable. We also demonstrate the benefits of spatiotemporal wavelet compression with real-world visualization use cases and tailored evaluation metrics.
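A minimal sketch of the spatiotemporal idea, assuming PyWavelets rather than the compressor actually studied in the paper: consecutive time slices are stacked into one array, a multidimensional wavelet transform is applied across time and space together, and only the largest coefficients are retained before reconstruction.

```python
# Stack time slices, wavelet-transform the resulting 4D array, and keep
# only the largest fraction of coefficients before reconstructing.
import numpy as np
import pywt

def spatiotemporal_compress(slices, wavelet="db2", keep=0.05):
    """Jointly compress a list of 3D time slices, keeping the largest
    `keep` fraction of wavelet coefficients."""
    data = np.stack(slices, axis=0)                    # (t, z, y, x)
    coeffs = pywt.wavedecn(data, wavelet)              # transform over all 4 axes
    arr, slices_info = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1.0 - keep)      # discard small coefficients
    arr = pywt.threshold(arr, thresh, mode="hard")
    coeffs = pywt.array_to_coeffs(arr, slices_info, output_format="wavedecn")
    return pywt.waverecn(coeffs, wavelet)[:data.shape[0]]

# Example: 8 consecutive 32^3 fields compressed together.
frames = [np.random.rand(32, 32, 32).astype(np.float32) for _ in range(8)]
recon = spatiotemporal_compress(frames)
print(recon.shape)
```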
Journal of the Atmospheric Sciences | 2015
Alan Shapiro; Stefan Rahimi; Corey K. Potvin; Leigh Orf
Abstract An advection correction procedure is used to mitigate temporal interpolation errors in trajectory analyses constructed from gridded (in space and time) velocity data. The procedure is based on a technique introduced by Gal-Chen to reduce radar data analysis errors arising from the nonsimultaneity of the data collection. Experiments are conducted using data from a high-resolution Cloud Model 1 (CM1) numerical model simulation of a supercell storm initialized within an environment representative of the 24 May 2011 El Reno, Oklahoma, tornadic supercell storm. Trajectory analyses using advection correction are compared to traditional trajectory analyses using linear time interpolation. Backward trajectories are integrated over a 5-min period for a range of data input time intervals and for velocity-pattern-translation estimates obtained from different analysis subdomain sizes (box widths) and first-guess options. The use of advection correction reduces trajectory end-point position errors for a large m...
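The advection-correction idea can be sketched as follows, under the simplifying assumption of a single constant pattern-translation velocity over the whole domain (the paper estimates translation over subdomains with several first-guess options); this is an illustration of the Gal-Chen-style correction, not the authors' analysis code, and all parameter values are assumptions.

```python
# A field at an intermediate time is built by sampling the earlier analysis
# upstream and the later analysis downstream along the pattern translation
# (U, V), then blending the two samples; ordinary fixed-point linear time
# interpolation is recovered by setting U = V = 0.
import numpy as np
from scipy.ndimage import map_coordinates

def advection_corrected(f1, f2, t_frac, U, V, dx, dt):
    """Interpolate between gridded fields f1 (t=0) and f2 (t=dt) at
    t = t_frac * dt, assuming the pattern translates with constant (U, V)."""
    ny, nx = f1.shape
    jj, ii = np.mgrid[0:ny, 0:nx].astype(float)
    # Grid positions at which the pattern now seen at (ii, jj) was located
    # at t=0 and will be located at t=dt.
    x1, y1 = ii - U * t_frac * dt / dx, jj - V * t_frac * dt / dx
    x2, y2 = ii + U * (1 - t_frac) * dt / dx, jj + V * (1 - t_frac) * dt / dx
    g1 = map_coordinates(f1, [y1, x1], order=1, mode="nearest")
    g2 = map_coordinates(f2, [y2, x2], order=1, mode="nearest")
    return (1.0 - t_frac) * g1 + t_frac * g2
```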