Publication


Featured research published by Rolf Riesen.


IEEE International Conference on High Performance Computing, Data, and Analytics | 2011

Evaluating the viability of process replication reliability for exascale systems

Kurt Brian Ferreira; Jon Stearley; James H. Laros; Ron A. Oldfield; Kevin Pedretti; Ronald B. Brightwell; Rolf Riesen; Patrick G. Bridges; Dorian C. Arnold

As high-end computing machines continue to grow in size, issues such as fault tolerance and reliability limit application scalability. Current techniques to ensure progress across faults, like checkpoint-restart, are increasingly problematic at these scales due to excessive overheads predicted to more than double an application's time to solution. Replicated computing techniques, particularly state machine replication, long used in distributed and mission-critical systems, have been suggested as an alternative to checkpoint-restart. In this paper, we evaluate the viability of using state machine replication as the primary fault tolerance mechanism for upcoming exascale systems. We use a combination of modeling, empirical analysis, and simulation to study the costs and benefits of this approach in comparison to checkpoint/restart on a wide range of system parameters. These results, which cover different failure distributions, hardware mean times to failure, and I/O bandwidths, show that state machine replication is a potentially useful technique for meeting the fault tolerance demands of HPC applications on future exascale platforms.
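The trade-off the paper studies can be illustrated with a back-of-the-envelope model (a sketch under stated assumptions, not the authors' actual methodology): checkpoint efficiency under Young's approximation for the optimal checkpoint interval, versus replication, which halves the usable machine but makes failures of a logical rank far rarer. The MTBF improvement factor of 1000 and the example figures below are hypothetical.

```python
import math

def checkpoint_efficiency(mtbf, ckpt_time):
    """Fraction of machine time doing useful work under periodic
    checkpoint/restart, using Young's optimal interval
    tau = sqrt(2 * ckpt_time * mtbf)."""
    tau = math.sqrt(2 * ckpt_time * mtbf)
    # Overhead per interval: one checkpoint, plus expected rework
    # (roughly half an interval plus a restart per failure).
    waste = ckpt_time / tau + (tau / 2 + ckpt_time) / mtbf
    return max(0.0, 1.0 - waste)

def replication_efficiency(mtbf, ckpt_time):
    """Duplicating every rank caps efficiency at 50%, but a logical
    rank now fails only when both replicas do, so the effective MTBF
    grows enormously (the factor 1000 is an illustrative stand-in)."""
    return 0.5 * checkpoint_efficiency(mtbf * 1000, ckpt_time)

system_mtbf = 3600.0  # hypothetical 1-hour system MTBF at exascale
ckpt = 900.0          # hypothetical 15-minute checkpoint to the file system
print(checkpoint_efficiency(system_mtbf, ckpt))  # collapses well below 50%
print(replication_efficiency(system_mtbf, ckpt))  # approaches the 50% ceiling
```

With these (hypothetical) parameters, plain checkpoint/restart keeps only a few percent of the machine doing useful work, while replication stays near its 50% ceiling, which is the crossover the paper maps out across failure distributions and I/O bandwidths.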


International Parallel and Distributed Processing Symposium | 2002

Portals 3.0: protocol building blocks for low overhead communication

Ron Brightwell; Rolf Riesen; Bill Lawry; Arthur B. Maccabe

This paper describes the evolution of the Portals message passing architecture and programming interface from its initial development on tightly-coupled massively parallel platforms to the current implementation running on a 1792-node commodity PC Linux cluster. Portals provides the basic building blocks needed for higher-level protocols to implement scalable, low-overhead communication. Portals has several unique characteristics that differentiate it from other high-performance system-area data movement layers. This paper discusses several of these features and illustrates how they can impact the scalability and performance of higher-level message passing protocols.
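A distinguishing building block of Portals is receiver-managed matching: an incoming message is steered by comparing its 64-bit match bits against posted match entries, with ignore bits masking out "don't care" positions. The sketch below shows only that matching rule in simplified form; it is not the Portals API, and the class and field names are invented for illustration.

```python
MASK64 = 0xFFFFFFFFFFFFFFFF  # Portals match bits are 64 bits wide

class MatchEntry:
    """A receiver-posted entry (hypothetical stand-in for a Portals
    match-list entry)."""
    def __init__(self, match_bits, ignore_bits):
        self.match_bits = match_bits
        self.ignore_bits = ignore_bits  # bits the receiver ignores

def matches(entry, incoming_bits):
    """A message matches when every non-ignored bit agrees:
    XOR exposes differing bits, then the ignore mask clears the
    ones the receiver does not care about."""
    return ((incoming_bits ^ entry.match_bits)
            & ~entry.ignore_bits & MASK64) == 0

# Match on tag 7 in the low 32 bits; ignore the high 32 bits
entry = MatchEntry(match_bits=7, ignore_bits=0xFFFFFFFF00000000)
print(matches(entry, 7))                 # True
print(matches(entry, (42 << 32) | 7))    # True: high bits are ignored
print(matches(entry, 8))                 # False: tag differs
```

Because the match decision is this cheap, higher-level protocols such as MPI tag matching can be layered on Portals without the library polling on the host, which is central to the low-overhead claim.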


Parallel Computing | 2000

Massively parallel computing using commodity components

Ron Brightwell; Lee Ann Fisk; David S. Greenberg; Trammell Hudson; Michael J. Levenhagen; Arthur B. Maccabe; Rolf Riesen

The Computational Plant (Cplant) project at Sandia National Laboratories is developing a large-scale, massively parallel computing resource from a cluster of commodity computing and networking components. We are combining the benefits of commodity cluster computing with our expertise in designing, developing, using, and maintaining large-scale, massively parallel processing (MPP) machines. In this paper, we present the design goals of the cluster and an approach to developing a commodity-based computational resource capable of delivering performance comparable to production-level MPP machines. We provide a description of the hardware components of a 96-node Phase I prototype machine and discuss the experiences with the prototype that led to the hardware choices for a 400-node Phase II production machine. We give a detailed description of the management and runtime software components of the cluster and offer computational performance data as well as performance measurements of functions that are critical to the management of large systems.


Conference on High Performance Computing (Supercomputing) | 1997

A System Software Architecture for High End Computing

David S. Greenberg; Ron Brightwell; Lee Ann Fisk; Arthur Maccabe; Rolf Riesen

MPP systems can neither solve Grand Challenge scientific problems nor enable large-scale industrial and governmental simulations if they rely on extensions to workstation system software. We present a new system architecture used at Sandia. Highest performance is achieved through a lightweight applications interface to a collection of processing nodes. Usability is provided by creating node partitions specialized for user access, networking, and I/O. The system is glued together by a data movement interface called portals. Portals allow data to flow between processing nodes with minimal system overhead while maintaining a suitable degree of protection and reconfigurability.


EuroMPI'11: Proceedings of the 18th European MPI Users' Group Conference on Recent Advances in the Message Passing Interface | 2011

libhashckpt: hash-based incremental checkpointing using GPU's

Kurt Brian Ferreira; Rolf Riesen; Ron Brightwell; Patrick G. Bridges; Dorian C. Arnold

Concern is beginning to grow in the high-performance computing (HPC) community regarding the reliability guarantees of future large-scale systems. Disk-based coordinated checkpoint/restart has been the dominant fault tolerance mechanism in HPC systems for the last 30 years. Checkpoint performance is so fundamental to scalability that nearly all capability applications have custom checkpoint strategies to minimize state and reduce checkpoint time. One well-known optimization to traditional checkpoint/restart is incremental checkpointing, which has a number of known limitations. To address these limitations, we introduce libhashckpt, a hybrid incremental checkpointing solution that uses both page protection and hashing on GPUs to determine changes in application data with very low overhead. Using real capability workloads, we show the merit of this technique for a certain class of HPC applications.
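The core idea of hash-based incremental checkpointing can be sketched in a few lines: hash the application's memory page by page and write out only the pages whose hash changed since the last checkpoint. This is a simplified CPU-side illustration, not libhashckpt itself (which offloads the hashing to a GPU and combines it with page protection), and SHA-256 here stands in for whatever hash the library actually uses.

```python
import hashlib

PAGE = 4096  # bytes per page (a typical page size; an assumption here)

def dirty_pages(data, prev_hashes):
    """Return (indices of changed pages, updated hash list).
    Only changed pages need to be written to the checkpoint file."""
    changed, hashes = [], []
    for i in range(0, len(data), PAGE):
        h = hashlib.sha256(bytes(data[i:i + PAGE])).digest()
        hashes.append(h)
        idx = i // PAGE
        if idx >= len(prev_hashes) or prev_hashes[idx] != h:
            changed.append(idx)
    return changed, hashes

buf = bytearray(4 * PAGE)       # pretend application state: 4 pages
_, h0 = dirty_pages(buf, [])    # first checkpoint: every page is new
buf[PAGE + 10] = 0xFF           # the application dirties one byte
changed, h1 = dirty_pages(buf, h0)
print(changed)  # only page 1 must be rewritten
```

The checkpoint volume thus scales with the number of modified pages rather than the full memory footprint, which is exactly where the technique pays off for applications that touch only part of their state between checkpoints.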


Archive | 2011

rMPI: increasing fault resiliency in a message-passing environment

Jon Stearley; James H. Laros; Kurt Brian Ferreira; Kevin Pedretti; Ron A. Oldfield; Rolf Riesen; Ronald Brian Brightwell

As high-end computing machines continue to grow in size, issues such as fault tolerance and reliability limit application scalability. Current techniques to ensure progress across faults, like checkpoint-restart, are unsuitable at these scales due to excessive overheads predicted to more than double an application's time to solution. Redundant computation, long used in distributed and mission-critical systems, has been suggested as an alternative to checkpoint-restart alone. In this paper, we describe the rMPI library, which enables portable and transparent redundant computation for MPI applications. We detail the design of the library as well as two replica consistency protocols, quantify the overheads of this library at scale on a number of real-world applications, and finally outline the significant increase in an application's time to solution at extreme scale as well as the scenarios in which redundant computation makes sense.
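One way to picture transparent redundancy (a simplification, not rMPI's actual internals) is a rank translation layer: the application sees n logical ranks while the library runs 2n physical processes and mirrors every send to the destination's partner replica, so that either replica can keep the application alive. The mirror mapping and function names below are illustrative assumptions.

```python
def replica_of(rank, n):
    """Partner of a physical rank when 2n processes back n logical
    ranks (a simple mirror scheme; rMPI's mapping may differ)."""
    return rank + n if rank < n else rank - n

def send_targets(dest_logical, n, failed):
    """Physical ranks that must receive a message addressed to a
    logical rank, skipping replicas already lost to failures."""
    pair = (dest_logical, replica_of(dest_logical, n))
    live = [r for r in pair if r not in failed]
    if not live:
        raise RuntimeError("both replicas of rank %d failed" % dest_logical)
    return live

n = 4  # 4 logical ranks backed by 8 physical processes
print(send_targets(2, n, failed=set()))  # [2, 6]: the send is mirrored
print(send_targets(2, n, failed={2}))    # [6]: the survivor carries on
```

Doubling every send is where the replica consistency protocols earn their keep: both copies of a receiver must observe the same message order, which is the hard part the paper's two protocols address.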


IEEE International Conference on High Performance Computing, Data, and Analytics | 2005

Analyzing the Impact of Overlap, Offload, and Independent Progress for Message Passing Interface Applications

Ron Brightwell; Rolf Riesen; Keith D. Underwood

The overlap of computation and communication has long been considered to be a significant performance benefit for applications. Similarly, the ability of the Message Passing Interface (MPI) to make independent progress (that is, to make progress on outstanding communication operations while not in the MPI library) is also believed to yield performance benefits. Using an intelligent network interface to offload the work required to support overlap and independent progress is thought to be an ideal solution, but the benefits of this approach have not been studied in depth at the application level. This lack of analysis is complicated by the fact that most MPI implementations do not sufficiently support overlap or independent progress. Recent work has demonstrated a quantifiable advantage for an MPI implementation that uses offload to provide overlap and independent progress. The study is conducted on two different platforms with each having two MPI implementations (one with and one without independent progress). Thus, identical network hardware and virtually identical software stacks are used. Furthermore, one platform, ASCI Red, allows further separation of features such as overlap and offload. Thus, this paper extends previous work by further qualifying the source of the performance advantage: offload, overlap, or independent progress.
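The benefit under study can be captured in a one-line cost model (a sketch, not the paper's methodology): without overlap or independent progress, communication time adds to computation time, because the network only progresses inside MPI calls; with offload providing both, communication hides behind computation and only the longer of the two is visible. The step timings below are arbitrary units.

```python
def time_no_overlap(compute, comm):
    # Communication serializes with computation: costs add up.
    return compute + comm

def time_full_overlap(compute, comm):
    # Offloaded, independently progressing communication runs
    # concurrently; only the longer activity determines step time.
    return max(compute, comm)

steps = [(10.0, 4.0), (6.0, 8.0)]  # (compute, comm) per iteration
print(sum(time_no_overlap(c, m) for c, m in steps))    # 28.0
print(sum(time_full_overlap(c, m) for c, m in steps))  # 18.0
```

The paper's contribution is teasing apart how much of that gap comes from offload, overlap, or independent progress individually, which this aggregate model deliberately blurs together.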


International Conference on Cluster Computing | 2006

A Hybrid MPI Simulator

Rolf Riesen

Performance analysis of large-scale applications and predicting the impact of architectural changes on the behavior of such applications is difficult. Traditional approaches to measuring applications usually change their behavior, require recompilation, and need specialized tools to extract performance information. Often the tools are programming-language specific and not suitable for all applications. If, instead, an application is to be modeled to gather the same kind of information, then in-depth knowledge of the application is required. Furthermore, parameters that control the behavior of the application on a specific machine have to be adjusted, often in ways that are more art than science. In this paper we describe an approach that is a hybrid between running a parallel application in stand-alone mode and simulating the network it uses for MPI data exchanges. The discrete event network simulator is execution-driven by the application. We explain how our early prototype works and how it can be used. We mention several experiments that we have already performed with this prototype and show its potential for future research.
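The skeleton of an execution-driven discrete-event network simulator fits in a few lines: each MPI send the running application performs becomes an event whose delivery time follows a latency-plus-bandwidth cost model, ordered in a priority queue. This is a minimal illustration of the simulator style the paper describes, not its actual implementation; the class name and cost parameters are assumptions.

```python
import heapq

class NetSim:
    """Toy execution-driven discrete-event network model."""
    def __init__(self, latency=2e-6, bandwidth=1e9):  # 2 us, 1 GB/s
        self.latency, self.bandwidth = latency, bandwidth
        self.events, self.now = [], 0.0

    def send(self, src, dst, nbytes):
        # The running application drives the simulator: each send
        # schedules a delivery event under the simple cost model.
        arrival = self.now + self.latency + nbytes / self.bandwidth
        heapq.heappush(self.events, (arrival, src, dst, nbytes))

    def run(self):
        delivered = []
        while self.events:  # process events in timestamp order
            self.now, src, dst, nbytes = heapq.heappop(self.events)
            delivered.append((self.now, src, dst, nbytes))
        return delivered

sim = NetSim()
sim.send(0, 1, 1_000_000)  # a 1 MB message
sim.send(0, 2, 1_000)      # a small message: it arrives first
log = sim.run()
print([dst for _, _, dst, _ in log])  # [2, 1]
```

Because the application itself generates the events, no trace capture or model calibration is needed, which is the point of the hybrid approach: real computation, simulated network.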


Scalable Parallel Libraries Conference | 1993

Out of core, out of mind: practical parallel I/O

David E. Womble; David S. Greenberg; Rolf Riesen; Stephen R. Wheat

Parallel computers are becoming more powerful and more complex in response to the demand for computing power by scientists and engineers. Inevitably, new and more complex I/O systems will be developed for these systems. In particular we believe that the I/O system must provide the programmer with the ability to explicitly manage storage (despite the trend toward complex parallel file systems and caching schemes). One method of doing so is to have a partitioned secondary storage in which each processor owns a logical disk. Along with operating system enhancements which allow overheads such as buffer copying to be avoided and libraries to support optimal remapping of data, this sort of I/O system meets the needs of high performance computing.
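One concrete way to realize "each processor owns a logical disk" is round-robin block striping: a global file offset maps deterministically to an owning processor and an offset on its logical disk, so the programmer can reason about exactly where data lands. The function below is an illustration of that idea under assumed striping rules, not Sandia's implementation.

```python
def locate_block(global_offset, block_size, nprocs):
    """Map a global file offset to (owner processor, offset on that
    processor's logical disk) under round-robin block striping."""
    block = global_offset // block_size       # which global block
    owner = block % nprocs                    # blocks rotate over owners
    local_block = block // nprocs             # position on owner's disk
    local_offset = local_block * block_size + global_offset % block_size
    return owner, local_offset

# 64 KB blocks striped across 4 processors' logical disks
print(locate_block(0, 65536, 4))              # (0, 0)
print(locate_block(65536 * 5 + 7, 65536, 4))  # (1, 65543): block 5 -> proc 1
```

Because the mapping is explicit and static, out-of-core algorithms can schedule their own prefetching and remapping instead of hoping a file-system cache guesses right, which is the paper's argument for programmer-managed storage.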


Measurement and Modeling of Computer Systems | 2011

A framework for architecture-level power, area, and thermal simulation and its application to network-on-chip design exploration

Ming-yu Hsieh; Arun Rodrigues; Rolf Riesen; Kevin Thompson; William J. Song

We describe the integrated power, area, and thermal modeling framework in the Structural Simulation Toolkit (SST) for large-scale high performance computer simulation. It integrates various power and thermal modeling tools and computes run-time energy dissipation for cores, the network on chip, memory controllers, and shared caches. It also has the functionality to update leakage power as temperature changes. We illustrate the utilization of the framework by applying it to explore interconnect options in manycore systems with consideration of temperature variation and leakage feedback. We compare power, energy-delay-area product (EDAP), and energy-delay product (EDP) of four manycore configurations: 1, 2, 4, and 8 cores per cluster. Results from simulation with and without consideration of temperature variation both show that the 4-core per cluster configuration has the best EDAP and EDP. Even so, considering temperature variation increases total power dissipation. We demonstrate the importance of considering temperature variation in the design flow. With this power, area, and thermal modeling capability, SST can be used for hardware/software co-design of future exascale systems.
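The leakage feedback loop the abstract mentions is a fixed-point problem: leakage power grows (roughly exponentially) with temperature, and temperature rises with total power. A simple iteration makes the coupling concrete; all constants below are illustrative assumptions, not SST's calibrated models.

```python
import math

def settle_temperature(p_dynamic, p_leak0=5.0, k=0.02, t_ref=25.0,
                       r_th=0.5, t_amb=25.0, iters=50):
    """Iterate the leakage/temperature coupling to a steady state:
    P_leak = P_leak0 * exp(k * (T - T_ref))  (leakage vs temperature)
    T = T_amb + R_th * (P_dynamic + P_leak)  (temperature vs power)
    All parameters are illustrative, not SST's models."""
    t = t_amb
    p_leak = p_leak0
    for _ in range(iters):
        p_leak = p_leak0 * math.exp(k * (t - t_ref))
        t = t_amb + r_th * (p_dynamic + p_leak)
    return t, p_leak

t_hot, leak_hot = settle_temperature(p_dynamic=60.0)
t_cold, leak_cold = settle_temperature(p_dynamic=20.0)
print(round(t_hot, 1), round(leak_hot, 1))
print(round(t_cold, 1), round(leak_cold, 1))
```

A simulator that ignores this feedback reports the cold-start leakage forever; iterating to the steady state reveals the extra power dissipation, which is why the paper stresses temperature variation in the design flow.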

Collaboration


Dive into Rolf Riesen's collaborations.

Top Co-Authors

- Kurt Brian Ferreira (Sandia National Laboratories)
- Ron Brightwell (Sandia National Laboratories)
- Kevin Pedretti (Sandia National Laboratories)
- Jon Stearley (Sandia National Laboratories)
- Ron A. Oldfield (University of Texas at El Paso)
- Ronald B. Brightwell (Sandia National Laboratories)
- Trammell Hudson (Sandia National Laboratories)