Benjamin Rutt
Ohio State University
Publications
Featured research published by Benjamin Rutt.
Reliability Engineering & System Safety | 2010
Benjamin Rutt; Kyle Metzroth; Aram Hakobyan; Tunc Aldemir; Richard Denning; Sean Dunagan; David Kunsman
Analysis of dynamic accident progression trees (ADAPT) is a mechanized procedure for the generation of accident progression event trees. Use of ADAPT substantially reduces the manual and computational effort for Level 2 probabilistic risk assessment (PRA) of nuclear power plants; reduces the likelihood of input errors; determines the order of events dynamically; and treats accidents in a phenomenologically consistent manner. ADAPT is based on the concept of dynamic event trees, which explicitly model the deterministic dynamic processes that take place within the plant (through system simulation codes such as MELCOR or RELAP) while modeling stochastic system evolution. The computational infrastructure of ADAPT is presented, along with a prototype implementation of ADAPT using MELCOR for the PRA modeling of a station blackout in a pressurized water reactor. The computational infrastructure allows for flexibility in linking with different simulation codes, parallel processing of the scenarios under consideration, on-line scenario management (initiation as well as termination), and user-friendly graphical capabilities.
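To make the dynamic event tree idea concrete, here is a minimal Python sketch of the branching logic: the scenario tree is expanded whenever a stochastic event (here, a hypothetical cooling-system demand) can resolve more than one way, and low-probability branches are pruned. The plant model, thresholds, and probabilities are invented placeholders, not ADAPT's actual code, which drives external simulators such as MELCOR.

```python
from dataclasses import dataclass

# Toy stand-in for one simulator time step: temperature rises during a
# station blackout unless a mitigating cooling system actuates.
def simulate_step(temp, cooled):
    return temp - 25.0 if cooled else temp + 50.0

@dataclass
class Branch:
    prob: float      # cumulative probability of this scenario path
    temp: float      # plant state carried along the path
    history: tuple   # sequence of branching outcomes taken so far

PRUNE_BELOW = 1e-3   # drop scenario branches less likely than this
DAMAGE_TEMP = 900.0  # "core damage" threshold in this toy model

def build_det(p_actuate=0.9, steps=6):
    """Expand a dynamic event tree, branching at each demand for cooling."""
    done, frontier = [], [Branch(1.0, 600.0, ())]
    for _ in range(steps):
        nxt = []
        for b in frontier:
            # Branch on the stochastic event: cooling actuates or fails.
            for cooled, p in ((True, p_actuate), (False, 1.0 - p_actuate)):
                q = b.prob * p
                if q < PRUNE_BELOW:
                    continue  # probability pruning keeps the tree tractable
                t = simulate_step(b.temp, cooled)
                nb = Branch(q, t, b.history + (cooled,))
                # Terminate a path once the damage threshold is crossed.
                (done if t >= DAMAGE_TEMP else nxt).append(nb)
        frontier = nxt
    return done + frontier

for leaf in sorted(build_det(), key=lambda b: -b.prob)[:5]:
    print(f"p={leaf.prob:.4f}  temp={leaf.temp:.0f}  path={leaf.history}")
```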
International Conference of the IEEE Engineering in Medicine and Biology Society | 2008
Vijay Kumar; Benjamin Rutt; Tahsin M. Kurç; Tony Pan; Sunny K. Chow; Stephan Lamont; Maryann E. Martone; Joel H. Saltz
This paper presents the application of a component-based Grid middleware system for processing extremely large images obtained from digital microscopy devices. We have developed parallel, out-of-core techniques for different classes of data processing operations employed on images from confocal microscopy scanners. These techniques are combined into a data preprocessing and analysis pipeline using the component-based middleware system. The experimental results show that 1) our implementation achieves good performance and can handle very large datasets on high-performance Grid nodes consisting of computation and/or storage clusters, and 2) it can take advantage of Grid nodes connected over high-bandwidth wide-area networks by combining task and data parallelism.
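The sketch below illustrates the out-of-core, tile-at-a-time style of processing such a pipeline relies on, assuming a hypothetical two-stage pipeline (normalize, then threshold) over a memory-mapped raw image; the actual middleware components and microscopy operations are not reproduced here.

```python
import numpy as np

TILE = 1024  # tile edge in pixels; chosen so one tile fits in memory

def tiles(shape, tile=TILE):
    """Yield (row, col) slices covering a 2-D image tile by tile."""
    for r in range(0, shape[0], tile):
        for c in range(0, shape[1], tile):
            yield (slice(r, min(r + tile, shape[0])),
                   slice(c, min(c + tile, shape[1])))

# Two illustrative per-tile stages; a real pipeline chains domain operations.
def normalize(t):
    rng = float(t.max() - t.min())
    return (t - t.min()) / (rng if rng > 0 else 1.0)

def threshold(t, cut=0.5):
    return (t > cut).astype(np.uint8)

def run_pipeline(in_path, out_path, shape, dtype=np.uint16):
    # Memory-map input and output so only the active tile is resident.
    src = np.memmap(in_path, dtype=dtype, mode="r", shape=shape)
    dst = np.memmap(out_path, dtype=np.uint8, mode="w+", shape=shape)
    for rs, cs in tiles(shape):
        tile = src[rs, cs].astype(np.float64)  # load one tile out-of-core
        dst[rs, cs] = threshold(normalize(tile))
    dst.flush()

if __name__ == "__main__":
    shape = (4096, 4096)
    # Create a synthetic input image on disk for the demonstration.
    src = np.memmap("in.raw", dtype=np.uint16, mode="w+", shape=shape)
    src[:] = np.random.randint(0, 65536, shape, dtype=np.uint16)
    src.flush()
    run_pipeline("in.raw", "out.raw", shape)
```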
Challenges of Large Applications in Distributed Environments | 2006
Benjamin Rutt; Aram Hakobyan; Kyle Metzroth; Tunc Aldemir; Richard Denning; Sean Dunagan; David Kunsman
Level 2 probabilistic risk assessments of nuclear plants (analysis of radionuclide release from containment) may require hundreds of runs of severe accident analysis codes such as MELCOR or RELAP/SCDAP to analyze the possible sequences of events (scenarios) that may follow given initiating events. With the advances in computer architectures and ubiquitous networking, it is now possible to utilize multiple computing and storage resources for such computational experiments. This paper presents a system software infrastructure that supports execution and analysis of multiple dynamic event-tree simulations on distributed environments. The infrastructure allows for 1) the testing of event-tree completeness and 2) the assessment and propagation of uncertainty in the plant state in the quantification of event trees.
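A rough sketch of the execution side of such an infrastructure, with a local process pool standing in for distributed nodes: independent scenario runs are farmed out and results collected as they complete. The simulator stand-in and release cutoff are invented for illustration.

```python
from concurrent.futures import ProcessPoolExecutor, as_completed
import random

def run_scenario(scenario_id, seed):
    """Stand-in for one severe-accident simulator run (e.g., one MELCOR
    execution); returns the scenario id and a toy 'release' measure."""
    rng = random.Random(seed)
    release = sum(rng.random() for _ in range(1000)) / 1000.0
    return scenario_id, release

def run_event_tree(scenarios, max_workers=4, release_cutoff=0.52):
    """Farm independent scenario runs out to a pool of workers and collect
    results as they finish, mimicking on-line scenario management."""
    results = {}
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(run_scenario, s, 1000 + s): s for s in scenarios}
        for fut in as_completed(futures):
            sid, release = fut.result()
            results[sid] = release
            # On-line management: a driver could stop submitting follow-on
            # scenarios here once an outcome class is sufficiently sampled.
            print(f"scenario {sid}: release={release:.3f}"
                  + ("  (exceeds cutoff)" if release > release_cutoff else ""))
    return results

if __name__ == "__main__":
    run_event_tree(range(8))
```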
Conference on High Performance Computing (Supercomputing) | 2006
Vijay Kumar; Benjamin Rutt; Tahsin M. Kurç; Joel H. Saltz; Sunny K. Chow; Stephan Lamont; Maryann E. Martone
This paper is concerned with efficient execution of a pipeline of data processing operations on very large images obtained from confocal microscopy instruments. We describe parallel, out-of-core algorithms for each operation in this pipeline. One of the challenging steps in the pipeline is the warping operation, which uses inverse-mapping-based methods. We propose and investigate a set of algorithms to handle the warping computations on storage clusters. Our experimental results show that the proposed approaches are scalable both in the number of processors and in the size of images.
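The following sketch shows inverse-mapping warping in miniature: each output pixel is traced back through an inverse affine transform and sampled from the source, one output tile at a time in the out-of-core spirit of the paper. The transform, tile size, and nearest-neighbour sampling are simplifying assumptions, not the paper's algorithms.

```python
import numpy as np

def warp_region(src, affine_inv, rs, cs):
    """Inverse mapping for one output region: for every output pixel, apply
    the inverse transform to find its source location, then sample (nearest
    neighbour here; a production pipeline would interpolate)."""
    rows, cols = np.mgrid[rs, cs]
    ones = np.ones_like(rows, dtype=float)
    coords = np.stack([rows.ravel(), cols.ravel(), ones.ravel()])
    sr, sc, _ = affine_inv @ coords          # homogeneous source coordinates
    sr = np.clip(np.rint(sr), 0, src.shape[0] - 1).astype(int)
    sc = np.clip(np.rint(sc), 0, src.shape[1] - 1).astype(int)
    return src[sr, sc].reshape(rows.shape)

def warp_by_tiles(src, affine_inv, out_shape, tile=512):
    """Out-of-core flavour: compute the warped output one tile at a time so
    only one output tile must be resident; on a storage cluster each tile
    would go to the node holding the source pixels it needs."""
    out = np.empty(out_shape, dtype=src.dtype)
    for r in range(0, out_shape[0], tile):
        for c in range(0, out_shape[1], tile):
            rs = slice(r, min(r + tile, out_shape[0]))
            cs = slice(c, min(c + tile, out_shape[1]))
            out[rs, cs] = warp_region(src, affine_inv, rs, cs)
    return out

if __name__ == "__main__":
    src = np.random.randint(0, 255, (1024, 1024)).astype(np.uint8)
    # Inverse of a pure translation: shifts the image by (10, 20) pixels.
    shift = np.array([[1, 0, -10], [0, 1, -20], [0, 0, 1]], dtype=float)
    print(warp_by_tiles(src, shift, (1024, 1024)).shape)
```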
International Conference on Computational Science | 2005
Manish Parashar; Vincent Matossian; Wolfgang Bangerth; Hector Klie; Benjamin Rutt; Tahsin M. Kurç; Joel H. Saltz; Mary F. Wheeler
The appropriate placement of wells in oil and environmental applications has a significant economic impact on reservoir management. However, determining optimal well locations is both challenging and computationally expensive. The overall goal of this research is to use the emerging Grid infrastructure to realize an autonomic, dynamic data-driven, self-optimizing reservoir framework. In this paper, we present the use of distributed data to dynamically drive the optimization of well placement in an oil reservoir.
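As a loose illustration of a data-driven well-placement loop (not the framework in the paper), the sketch below evaluates candidate locations against a synthetic revenue surface, keeps the best, and refines the search around them; a Grid deployment would distribute the expensive evaluations and refresh the objective with live reservoir data between rounds.

```python
import itertools, random

def simulated_revenue(x, y, rng):
    """Stand-in for an expensive reservoir simulation returning the
    economic value of a well at (x, y); entirely synthetic."""
    return -((x - 3.7) ** 2 + (y - 6.1) ** 2) + rng.gauss(0, 0.1)

def optimize_well_placement(grid=10, rounds=3, keep=4, seed=0):
    """Evaluate candidates, keep the best, refine around them. Each round's
    evaluations are independent, so they could be fanned out to remote
    compute nodes; here they run locally for simplicity."""
    rng = random.Random(seed)
    candidates = list(itertools.product(range(grid), repeat=2))
    for _ in range(rounds):
        scored = sorted(candidates,
                        key=lambda p: simulated_revenue(*p, rng), reverse=True)
        best = scored[:keep]
        # Refine: sample new candidates near the current best locations.
        candidates = [(x + rng.uniform(-1, 1), y + rng.uniform(-1, 1))
                      for x, y in best for _ in range(8)] + best
    return max(candidates, key=lambda p: simulated_revenue(*p, rng))

print(optimize_well_placement())
```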
International Conference on Cluster Computing | 2005
Benjamin Rutt; Vijay Kumar; Tony Pan; Tahsin M. Kurç; Joel H. Saltz; Yujun Wang
We present a combined task- and data-parallel approach for distributed execution of pre-processing operations to support efficient evaluation of polygonal aggregation queries on digitized microscopy images. Our approach targets out-of-core, pipelined processing of very large images on active storage clusters. Our experimental results show that the proposed approach is scalable both in the number of processors and in the size of images.
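A minimal sketch of a polygonal aggregation query in this style: each tile computes a partial (sum, count) over its pixels inside the query polygon, and the associative partials are then combined, which is what makes a task- and data-parallel decomposition possible. The polygon test and query are illustrative, not the paper's implementation.

```python
import numpy as np

def point_in_poly(x, y, poly):
    """Ray-casting point-in-polygon test, vectorized over arrays of points."""
    inside = np.zeros_like(x, dtype=bool)
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        crosses = ((y1 > y) != (y2 > y)) & \
                  (x < (x2 - x1) * (y - y1) / (y2 - y1) + x1)
        inside ^= crosses
    return inside

def partial_aggregate(tile, origin, poly):
    """Per-tile partial result for a polygonal mean-intensity query:
    (sum, count) over the tile's pixels inside the polygon. Partials from
    many tiles/nodes combine associatively."""
    r0, c0 = origin
    rows, cols = np.indices(tile.shape)
    mask = point_in_poly(cols + c0, rows + r0, poly)
    return float(tile[mask].sum()), int(mask.sum())

if __name__ == "__main__":
    img = np.random.randint(0, 255, (512, 512))
    poly = [(50, 50), (400, 80), (300, 450), (60, 300)]  # (x, y) vertices
    total, count = 0.0, 0
    for r in range(0, 512, 256):          # iterate tiles; a cluster would
        for c in range(0, 512, 256):      # process these on separate nodes
            s, n = partial_aggregate(img[r:r+256, c:c+256], (r, c), poly)
            total += s; count += n
    print("mean intensity inside polygon:", total / max(count, 1))
```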
Concurrency and Computation: Practice and Experience | 2006
Shahid H. Bokhari; Benjamin Rutt; Pete Wyckoff; Paul Buerger
Mass storage systems (MSSs) play a key role in data-intensive parallel computing. Most contemporary MSSs are implemented as redundant arrays of independent/inexpensive disks (RAID), in which commodity disks are tied together with proprietary controller hardware. The performance of such systems can be difficult to predict because most internal details of the controller behavior are not public. We present a systematic method for empirically evaluating MSS performance by obtaining measurements on a series of RAID configurations of increasing size and complexity. We apply this methodology to a large MSS at the Ohio Supercomputer Center that has 16 input/output processors, each connected to four 8 + 1 RAID5 units, and provides 128 TB of storage (of which 116.8 TB are usable when formatted). Our methodology permits storage-system designers to evaluate empirically the performance of their systems with considerable confidence. Although we have carried out our experiments in the context of a specific system, our methodology is applicable to all large MSSs. The measurements obtained using our methods permit application programmers to be aware of the limits to the performance of their codes.
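In the spirit of that methodology (though not the authors' benchmark code), a minimal timed-write probe like the following, run against RAID configurations of increasing size, traces how aggregate throughput scales until some shared component saturates:

```python
import os, time

def write_throughput(path, total_mb=256, block_kb=1024):
    """Time a streaming write of total_mb megabytes in block_kb blocks and
    return MB/s. Repeating this across configurations of increasing size
    (one RAID unit, one IOP's four units, several IOPs, ...) shows how far
    the aggregate scales."""
    block = os.urandom(block_kb * 1024)
    n = (total_mb * 1024) // block_kb
    start = time.perf_counter()
    with open(path, "wb", buffering=0) as f:
        for _ in range(n):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())            # include time to reach the device
    elapsed = time.perf_counter() - start
    os.remove(path)
    return total_mb / elapsed

if __name__ == "__main__":
    for size in (64, 128, 256):         # vary load to expose caching effects
        print(f"{size:4d} MB -> {write_throughput('bench.tmp', size):7.1f} MB/s")
```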
Archive | 2008
David Kunsman; Tunc Aldemir; Benjamin Rutt; Kyle Metzroth; Richard Denning; Aram Hakobyan; Sean Dunagan
This LDRD project has produced a tool that makes probabilistic risk assessments (PRAs) of nuclear reactors - analyses which are very resource-intensive - more efficient. PRAs of nuclear reactors are increasingly relied on by the United States Nuclear Regulatory Commission (U.S.N.R.C.) for licensing decisions for current and advanced reactors. Yet PRAs are produced much as they were 20 years ago. The work here applied a modern systems analysis technique - a system-independent, multi-task computer driver routine - to the accident progression analysis portion of the PRA. Initially, the objective of the work was to fuse the accident progression event tree (APET) portion of a PRA to the dynamic system doctor (DSD) created by Ohio State University. Instead, during the initial efforts, it was found that the DSD could be linked directly to a detailed accident progression phenomenological simulation code - the type on which APET construction and analysis relies, albeit indirectly - and thereby directly create and analyze the APET. The expanded DSD computational architecture and infrastructure created during this effort is called ADAPT (Analysis of Dynamic Accident Progression Trees). ADAPT is a system software infrastructure that supports execution and analysis of multiple dynamic event-tree simulations on distributed environments. A simulator abstraction layer was developed, and a generic driver was implemented for executing simulators on a distributed environment. As a demonstration, ADAPT was applied to quantify the likelihood of competing accident progression pathways for a particular accident scenario in a particular reactor type using MELCOR, an integrated severe accident analysis code developed at Sandia. (ADAPT was intentionally created with flexibility, however, and is not limited to interacting with only one code; with minor changes to input files, ADAPT can be linked to other such codes.) The results of this demonstration indicate that the approach can significantly reduce the resources required for Level 2 PRAs. From the phenomenological viewpoint, ADAPT can also treat the associated epistemic and aleatory uncertainties. The methodology can also be used for analyses of other complex systems: any complex system can be analyzed using ADAPT if its workings can be displayed as an event tree, there is a computer code that simulates how those events could progress, and that simulator code has switches to turn system events, phenomena, etc. on and off. Applying ADAPT to particular problems is not human-independent: while the human effort required to create and analyze the accident progression is significantly decreased, knowledgeable analysts are still necessary to apply ADAPT successfully on a given project. This research and development effort has met its original goals and then exceeded them.
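The simulator abstraction layer can be pictured with a sketch like the one below: a generic driver speaks only to an abstract interface, so any simulator that can report branching conditions and accept "switch" settings can be plugged in. The method names and toy plant model are hypothetical, not ADAPT's actual interface.

```python
from abc import ABC, abstractmethod

class Simulator(ABC):
    """Hypothetical simulator abstraction layer in miniature: a driver that
    talks only to this interface could steer MELCOR, RELAP/SCDAP, or a toy
    model without caring which."""

    @abstractmethod
    def run_until_branch(self):
        """Advance until a branching condition fires; return its name, or
        None when the scenario has run to completion."""

    @abstractmethod
    def apply_outcome(self, condition, outcome):
        """Flip the simulator's 'switch' for the condition (e.g., valve
        sticks open vs. closes) and continue from the saved state."""

class ToyBlackout(Simulator):
    def __init__(self):
        self.pressure, self.steps = 120.0, 0

    def run_until_branch(self):
        self.pressure *= 1.4
        self.steps += 1
        if self.steps >= 3:
            return None
        return "relief_valve_demand" if self.pressure > 150 else None

    def apply_outcome(self, condition, outcome):
        if condition == "relief_valve_demand" and outcome == "opens":
            self.pressure *= 0.5

def drive(sim):
    """Generic driver: completely ignorant of the simulator internals."""
    while (cond := sim.run_until_branch()) is not None:
        sim.apply_outcome(cond, "opens")   # a real driver forks both outcomes
    return sim.pressure

print(f"final pressure: {drive(ToyBlackout()):.1f}")
```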
Pacific-Asia Conference on Knowledge Discovery and Data Mining | 2004
Benjamin Rutt; Srinivasan Parthasarathy
In many cases, normal uses of a system form patterns that repeat. The most common patterns can be collected into a prediction model, which essentially predicts that usage patterns common in the past will occur again in the future. Systems can then use these prediction models to give their implementations advance notice of how they are likely to be used in the near future. This technique creates opportunities to improve performance, since an implementation that anticipates upcoming usage can prepare for it in advance.
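One minimal way to realize such a model (an illustration, not the paper's method) is a frequency table over fixed-length contexts: count which operation most often follows each recent-history window, then predict that successor so the system can prepare (prefetch, pre-open, pre-allocate) before the request arrives.

```python
from collections import Counter, defaultdict

class UsagePredictor:
    """Predict the next usage event from the last k events, using simple
    successor counts gathered from past traces."""

    def __init__(self, k=2):
        self.k = k
        self.successors = defaultdict(Counter)

    def train(self, trace):
        # Record which event followed each length-k context in the trace.
        for i in range(len(trace) - self.k):
            ctx = tuple(trace[i:i + self.k])
            self.successors[ctx][trace[i + self.k]] += 1

    def predict(self, recent):
        # Return the most frequent successor of the current context, if any.
        counts = self.successors.get(tuple(recent[-self.k:]))
        return counts.most_common(1)[0][0] if counts else None

trace = ["open", "read", "read", "close", "open", "read", "read", "close"]
p = UsagePredictor(k=2)
p.train(trace)
print(p.predict(["open", "read"]))   # -> 'read': prepare for another read
```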
IEEE International Conference on High Performance Computing, Data and Analytics | 2006
Xi Zhang; Benjamin Rutt; Tahsin M. Kurç; Paul L. Stoffa; Mrinal K. Sen; Joel H. Saltz
The ability to query and process very large, terabyte-scale datasets has become a key step in many scientific and engineering applications. In this paper, we describe the application of two middleware frameworks, in an integrated fashion, to provide a scalable and efficient system for executing seismic data analysis on large datasets in a distributed environment. We investigate different strategies for efficient querying of large datasets and parallel implementations of a seismic image reconstruction algorithm. Our results on a state-of-the-art mass storage system coupled with a high-end compute cluster show that our implementation is scalable and achieves a data processing rate of about 2.9 GB/s, roughly 70% of the storage platform's maximum application-level raw I/O bandwidth of 4.2 GB/s.
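The flavour of the chunked query-and-reconstruct workflow can be sketched as follows, with a local process pool standing in for the compute cluster and trace stacking standing in for image reconstruction; the file layout, chunk size, and stacking operation are assumptions for illustration.

```python
import time
import numpy as np
from concurrent.futures import ProcessPoolExecutor

TRACE_LEN = 500   # samples per seismic trace (synthetic)

def make_dataset(path, n_traces=20000, seed=0):
    """Write a synthetic trace file: each record is TRACE_LEN float32s."""
    rng = np.random.default_rng(seed)
    rng.standard_normal((n_traces, TRACE_LEN), dtype=np.float32).tofile(path)

def stack_chunk(args):
    """Worker: read one chunk of traces from disk and partially stack
    (sum) them, a stand-in for one piece of image reconstruction."""
    path, start, count = args
    data = np.fromfile(path, dtype=np.float32,
                       count=count * TRACE_LEN,
                       offset=start * TRACE_LEN * 4).reshape(-1, TRACE_LEN)
    return data.sum(axis=0), data.nbytes

if __name__ == "__main__":
    path, n, chunk = "traces.f32", 20000, 2500
    make_dataset(path, n)
    jobs = [(path, s, min(chunk, n - s)) for s in range(0, n, chunk)]
    t0 = time.perf_counter()
    with ProcessPoolExecutor() as pool:
        partials = list(pool.map(stack_chunk, jobs))
    stacked = sum(p for p, _ in partials)         # combine partial stacks
    nbytes = sum(b for _, b in partials)
    rate = nbytes / (time.perf_counter() - t0) / 1e6
    print(f"stacked {n} traces into {stacked.shape[0]} samples, "
          f"{rate:.0f} MB/s aggregate")
```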