
Publications


Featured research published by Joseph D. Ratterman.


International Conference on Supercomputing | 2008

The Deep Computing Messaging Framework: Generalized Scalable Message Passing on the Blue Gene/P Supercomputer

Sameer Kumar; Gabor Dozsa; Gheorghe Almasi; Philip Heidelberger; Dong Chen; Mark E. Giampapa; Michael Blocksome; Ahmad Faraj; Jeffrey J. Parker; Joseph D. Ratterman; Brian E. Smith; Charles J. Archer

We present the architecture of the Deep Computing Messaging Framework (DCMF), a message passing runtime designed for the Blue Gene/P machine and other HPC architectures. DCMF has been designed to easily support several programming paradigms, such as the Message Passing Interface (MPI), Aggregate Remote Memory Copy Interface (ARMCI), Charm++, and others. This support is made possible because DCMF provides an application programming interface (API) with active messages and non-blocking collectives. DCMF is being open sourced and has a layered, component-based architecture with multiple levels of abstraction, allowing members of the community to contribute new components at the various layers. The DCMF runtime can be extended to other architectures through the development of architecture-specific implementations of interface classes. The production DCMF runtime on Blue Gene/P takes advantage of the direct memory access (DMA) hardware to offload message passing work and achieve good overlap of computation and communication. We take advantage of the fact that the Blue Gene/P node is a symmetric multi-processor with four cache-coherent cores and use multi-threading to optimize performance on the collective network. We also present a performance evaluation of the DCMF runtime on Blue Gene/P and show that it delivers performance close to hardware limits.
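
As an illustration of the active-message style the abstract describes, here is a minimal, self-contained C++ sketch. All names in it (am_engine, am_handler, register_handler, send) are hypothetical and do not reproduce DCMF's actual interface; a real runtime would deliver the message over the network and invoke the handler on the receiving node.

// Minimal sketch of the active-message pattern: the sender names a handler
// by id, and that handler runs on the receiver when the message arrives.
// All names here are hypothetical, not DCMF's actual API.
#include <cstddef>
#include <cstdio>
#include <functional>
#include <map>
#include <utility>

// A handler runs on the receiver when a matching message arrives.
using am_handler = std::function<void(const void* payload, std::size_t len)>;

class am_engine {
public:
    // Register a handler under an integer id; senders refer to it by id.
    void register_handler(int id, am_handler h) { handlers_[id] = std::move(h); }

    // "Send" a message: a real runtime would traverse the network and
    // return immediately (non-blocking); here we invoke locally.
    void send(int id, const void* payload, std::size_t len) {
        handlers_.at(id)(payload, len);
    }

private:
    std::map<int, am_handler> handlers_;
};

int main() {
    am_engine engine;
    engine.register_handler(0, [](const void*, std::size_t n) {
        std::printf("handler ran on %zu bytes\n", n);
    });
    int data = 42;
    engine.send(0, &data, sizeof data);  // handler executes at the "receiver"
}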


International Parallel and Distributed Processing Symposium | 2012

PAMI: A Parallel Active Message Interface for the Blue Gene/Q Supercomputer

Sameer Kumar; Amith R. Mamidala; Daniel Faraj; Brian E. Smith; Michael Blocksome; Bob Cernohous; Douglas Miller; Jeffrey J. Parker; Joseph D. Ratterman; Philip Heidelberger; Dong Chen; Burkhard Steinmacher-Burrow

The Blue Gene/Q machine is the next generation in the line of IBM massively parallel supercomputers, designed to scale to 262,144 nodes and sixteen million threads. With each BG/Q node having 68 hardware threads, hybrid programming paradigms, which use message passing among nodes and multi-threading within nodes, are ideal and will enable applications to achieve high throughput on BG/Q. With such unprecedented parallelism and scale, this paper explores the challenges of designing a communication library that can match and exploit that parallelism. In particular, we present the Parallel Active Messaging Interface (PAMI) library as our BG/Q solution to the many challenges that come with a machine at this scale. PAMI provides (1) novel techniques to partition the application communication overhead into many contexts that can be accelerated by communication threads, (2) client and context objects to support multiple and different programming paradigms, (3) lockless algorithms to speed up the MPI message rate, and (4) novel techniques leveraging new BG/Q architectural features such as the scalable atomic primitives implemented in the L2 cache, the highly parallel hardware messaging unit that supports both point-to-point and collective operations, and the collective hardware acceleration for operations such as broadcast, reduce, and allreduce. We experimented with PAMI on 2048 BG/Q nodes, and the results show high messaging rates as well as low latencies and high throughputs for collective communication operations.
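
The client/context idea in (1) and (2) can be sketched as follows: each context owns its own lock and work queue, so separate communication threads can advance separate contexts without contending on a global lock. This is a hypothetical illustration of the pattern, not PAMI's actual API.

// Hypothetical sketch of the client/context pattern: each context is
// advanced independently by its own communication thread, with only a
// per-context lock. Not PAMI's actual interface.
#include <cstddef>
#include <cstdio>
#include <deque>
#include <functional>
#include <mutex>
#include <thread>
#include <vector>

struct context {
    std::mutex m;                                // per-context lock, not global
    std::deque<std::function<void()>> work;      // pending communication work

    void post(std::function<void()> f) {
        std::lock_guard<std::mutex> g(m);
        work.push_back(std::move(f));
    }
    void advance() {                             // drain this context's queue
        std::lock_guard<std::mutex> g(m);
        while (!work.empty()) {
            work.front()();
            work.pop_front();
        }
    }
};

struct client {                                  // a client groups several contexts
    std::vector<context> contexts;
    explicit client(std::size_t n) : contexts(n) {}
};

int main() {
    client c(4);                                 // e.g. one context per comm thread
    std::vector<std::thread> threads;
    for (std::size_t i = 0; i < c.contexts.size(); ++i) {
        c.contexts[i].post([i] { std::printf("context %zu progressed\n", i); });
        threads.emplace_back([&c, i] { c.contexts[i].advance(); });
    }
    for (auto& t : threads) t.join();
}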


European PVM/MPI Users' Group Meeting on Recent Advances in Parallel Virtual Machine and Message Passing Interface | 2008

Architecture of the Component Collective Messaging Interface

Sameer Kumar; Gabor Dozsa; Jeremy Berg; Bob Cernohous; Douglas Miller; Joseph D. Ratterman; Brian E. Smith; Philip Heidelberger

Different programming paradigms utilize a variety of collective communication operations, often with different semantics. We present the Component Collective Messaging Interface (CCMI), which supports asynchronous non-blocking collectives and is extensible to different programming paradigms and architectures. CCMI is designed with components written in the C++ programming language, allowing for reuse and extensibility. Collective algorithms are embodied in topological schedules and executors that execute them. Portability across architectures is enabled by the multisend data movement component. CCMI includes a programming language adaptor used to implement different APIs with different semantics for different paradigms. We study the effectiveness of CCMI on Blue Gene/P and evaluate its performance for the barrier, broadcast, and allreduce collective operations. We also present the performance of the barrier collective on the Abe InfiniBand cluster.
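
The schedule/executor split can be illustrated with a short C++ sketch: a schedule encodes the communication topology (who sends to whom in each phase), and an executor walks the schedule and issues the data movement. The classes below (schedule, binomial_broadcast, executor) are hypothetical and do not reproduce CCMI's actual class hierarchy.

// Illustrative split between a schedule (topology) and an executor
// (drives the data movement). Hypothetical names, not CCMI's classes.
#include <cstdio>
#include <vector>

// A schedule answers: at phase p, which ranks do I send to?
struct schedule {
    virtual std::vector<int> targets(int rank, int phase, int nranks) const = 0;
    virtual int num_phases(int nranks) const = 0;
    virtual ~schedule() = default;
};

// Binomial-tree broadcast schedule rooted at rank 0.
struct binomial_broadcast : schedule {
    std::vector<int> targets(int rank, int phase, int nranks) const override {
        int peer = rank + (1 << phase);
        if (rank < (1 << phase) && peer < nranks) return {peer};
        return {};
    }
    int num_phases(int nranks) const override {
        int p = 0;
        while ((1 << p) < nranks) ++p;
        return p;
    }
};

// The executor walks the schedule phase by phase; the actual data
// movement (CCMI's "multisend") is stubbed out as a print.
struct executor {
    void run(const schedule& s, int rank, int nranks) {
        for (int p = 0; p < s.num_phases(nranks); ++p)
            for (int t : s.targets(rank, p, nranks))
                std::printf("phase %d: rank %d -> rank %d\n", p, rank, t);
    }
};

int main() {
    binomial_broadcast bcast;
    executor ex;
    for (int r = 0; r < 8; ++r) ex.run(bcast, r, 8);  // simulate 8 ranks
}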


Conference on High Performance Computing (Supercomputing) | 2006

Design and Implementation of a One-Sided Communication Interface for the IBM eServer Blue Gene

Michael Blocksome; Charles J. Archer; Todd Inglett; Patrick McCarthy; Michael Mundy; Joseph D. Ratterman; A. Sidelnik; Brian E. Smith; George S. Almasi; José G. Castaños; Derek Lieber; José E. Moreira; Sriram Krishnamoorthy; Vinod Tipparaju; Jaroslaw Nieplocha

This paper discusses the design and implementation of a one-sided communication interface for the IBM Blue Gene/L supercomputer. This interface supports ARMCI and the Global Arrays toolkit and can be used by other one-sided communication libraries. New protocols, interrupt-driven communication, and compute node kernel enhancements were required to enable these libraries. Three possible methods for enabling ARMCI on the Blue Gene/L software stack are discussed. A detailed look into the development process shows how the implementation of the one-sided communication interface was completed on a compressed time scale, through collaboration among various organizations within IBM and open source communities. In addition to enabling the one-sided libraries, bandwidth enhancements were made for communication along a diagonal on the Blue Gene/L torus network; the maximum bandwidth improved by a factor of three. This work will enable a variety of one-sided applications to run on Blue Gene/L.
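
For readers unfamiliar with one-sided communication, the following sketch shows the put pattern this interface enables: a process writes directly into another process's memory without the target posting a receive. It uses the commonly documented ARMCI calls (ARMCI_Init, ARMCI_Malloc, ARMCI_Put, ARMCI_Fence, ARMCI_Barrier); exact signatures can vary across ARMCI versions, so treat the details as indicative rather than verbatim.

// One-sided put with ARMCI: rank 0 writes into rank 1's buffer directly.
// Call signatures follow commonly documented ARMCI usage; details may
// differ across versions.
#include <mpi.h>
#include <armci.h>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);           // ARMCI runs on top of MPI
    ARMCI_Init();

    int me, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &me);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    // Collectively allocate one buffer per process; every rank learns
    // the addresses of all remote buffers.
    std::vector<void*> bufs(nprocs);
    ARMCI_Malloc(bufs.data(), sizeof(int));

    if (me == 0 && nprocs > 1) {
        int value = 42;
        // One-sided put: no matching receive is posted on rank 1.
        ARMCI_Put(&value, bufs[1], sizeof(int), 1);
        ARMCI_Fence(1);               // wait for remote completion
    }

    ARMCI_Barrier();
    if (me == 1)
        std::printf("rank 1 received %d via one-sided put\n",
                    *static_cast<int*>(bufs[1]));

    ARMCI_Finalize();
    MPI_Finalize();
}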


Archive | 2006

Method of Video Display and Multiplayer Gaming

Charles J. Archer; Mark Megerian; Joseph D. Ratterman; Brian E. Smith; Brian Paul Wallenfelt


Archive | 2007

Executing an Allgather Operation on a Parallel Computer

Charles J. Archer; José E. Moreira; Joseph D. Ratterman


Archive | 2006

Executing an Allgather Operation with an Alltoallv Operation in a Parallel Computer

Charles J. Archer; Philip Heidelberger; José E. Moreira; Joseph D. Ratterman


Archive | 2006

Computer Hardware Fault Diagnosis

Charles J. Archer; Mark Megerian; Joseph D. Ratterman; Brian E. Smith


Archive | 2007

Executing a Scatter Operation on a Parallel Computer

Charles J. Archer; Joseph D. Ratterman


Archive | 2006

Parallel Application Load Balancing and Distributed Work Management

Charles J. Archer; Timothy J. Mullins; Joseph D. Ratterman; Albert Sidelnik; Brian E. Smith
