
Publication


Featured research published by Timothy S. Woodall.


Lecture Notes in Computer Science | 2004

Open MPI: Goals, Concept, and Design of a Next Generation MPI Implementation

Edgar Gabriel; Graham E. Fagg; George Bosilca; Thara Angskun; Jack J. Dongarra; Jeffrey M. Squyres; Vishal Sahay; Prabhanjan Kambadur; Andrew Lumsdaine; Ralph H. Castain; David Daniel; Richard L. Graham; Timothy S. Woodall

A large number of MPI implementations are currently available, each emphasizing different aspects of high-performance computing or intended to solve a specific research problem. The result is a myriad of incompatible MPI implementations, all of which require separate installation, and the combination of which presents significant logistical challenges for end users. Building upon prior research, and influenced by experience gained from the code bases of the LAM/MPI, LA-MPI, and FT-MPI projects, Open MPI is an all-new, production-quality MPI-2 implementation that is fundamentally centered around component concepts. Open MPI provides a unique combination of novel features previously unavailable in an open-source, production-quality implementation of MPI. Its component architecture provides a stable platform for third-party research and enables the run-time composition of independent software add-ons. This paper presents a high-level overview of the goals, design, and implementation of Open MPI.
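The component-centered design the abstract describes can be illustrated with a small sketch. This is not Open MPI code: the class and component names below are hypothetical, and the sketch only shows the general idea of selecting among interchangeable components behind a fixed framework interface at run time.

```python
# Illustrative sketch (not Open MPI code): run-time composition of
# components behind a fixed framework interface, in the spirit of a
# component architecture for transports.

class TransportComponent:
    """Base interface every transport component must implement."""
    name = "base"

    def available(self):
        """Can this component run in the current environment?"""
        raise NotImplementedError

    def send(self, dest, payload):
        raise NotImplementedError


class TcpTransport(TransportComponent):
    name = "tcp"

    def available(self):
        return True                       # assume TCP is always usable

    def send(self, dest, payload):
        return f"tcp->{dest}:{payload}"


class ShmemTransport(TransportComponent):
    name = "shmem"

    def __init__(self, same_node):
        self.same_node = same_node

    def available(self):
        return self.same_node             # only for co-located processes

    def send(self, dest, payload):
        return f"shmem->{dest}:{payload}"


def select_component(candidates):
    """Pick the first available component; list order encodes priority."""
    for comp in candidates:
        if comp.available():
            return comp
    raise RuntimeError("no usable transport component")


# Components are registered at run time; new ones can be added without
# touching the selection logic or the callers.
chosen = select_component([ShmemTransport(same_node=False), TcpTransport()])
```

Because callers only see the `TransportComponent` interface, third-party components can be swapped in or out at run time, which is the composition property the abstract highlights.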


Parallel Processing and Applied Mathematics | 2005

Open MPI: a flexible high performance MPI

Richard L. Graham; Timothy S. Woodall; Jeffrey M. Squyres

A large number of MPI implementations are currently available, each emphasizing different aspects of high-performance computing or intended to solve a specific research problem. The result is a myriad of incompatible MPI implementations, all of which require separate installation, and the combination of which presents significant logistical challenges for end users. Building upon prior research, and influenced by experience gained from the code bases of the LAM/MPI, LA-MPI, FT-MPI, and PACX-MPI projects, Open MPI is an all-new, production-quality MPI-2 implementation that is fundamentally centered around component concepts. Open MPI provides a unique combination of novel features previously unavailable in an open-source, production-quality implementation of MPI. Its component architecture provides a stable platform for third-party research and enables the run-time composition of independent software add-ons. This paper presents a high-level overview of the goals, design, and implementation of Open MPI, as well as performance results for its point-to-point implementation.


International Parallel and Distributed Processing Symposium | 2006

Infiniband scalability in Open MPI

Galen M. Shipman; Timothy S. Woodall; Richard L. Graham; Arthur B. Maccabe; Patrick G. Bridges

Infiniband is becoming an important interconnect technology in high performance computing. Efforts in large scale Infiniband deployments are raising scalability questions in the HPC community. Open MPI, a new open source implementation of the MPI standard targeted for production computing, provides several mechanisms to enhance Infiniband scalability. Initial comparisons with MVAPICH, the most widely used Infiniband MPI implementation, show similar performance but with much better scalability characteristics. Specifically, small message latency is improved by up to 10% in medium/large jobs and memory usage per host is reduced by as much as 300%. In addition, Open MPI provides predictable latency that is close to optimal without sacrificing bandwidth performance.
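One generic way an MPI library can reduce per-host memory on large clusters is to establish connections on demand rather than eagerly to every peer. The sketch below is an assumption-laden illustration of that idea, not Open MPI's actual connection management; the class name and the per-connection cost are invented for the example.

```python
# Illustrative sketch (not Open MPI code): on-demand connection
# establishment. Eager setup costs memory proportional to the total
# peer count; lazy setup costs only what the communication pattern
# actually touches.

class LazyEndpoint:
    def __init__(self, n_peers, bytes_per_connection=4096):
        self.n_peers = n_peers                # size of the whole job
        self.per_conn = bytes_per_connection  # hypothetical cost per connection
        self.connected = set()

    def send(self, peer, msg):
        if peer not in self.connected:        # connect only on first use
            self.connected.add(peer)
        return (peer, msg)

    def memory_used(self):
        return len(self.connected) * self.per_conn


ep = LazyEndpoint(n_peers=10_000)
for peer in (1, 2, 1):                        # nearest-neighbor-style traffic
    ep.send(peer, "hello")
# Eager setup: n_peers * per_conn bytes per host.
# Lazy setup: only the two touched peers cost anything here.
```

For typical HPC communication patterns, where each rank talks to a small subset of peers, this kind of scheme keeps per-host state bounded by the pattern rather than by job size.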


International Parallel and Distributed Processing Symposium | 2004

Architecture of LA-MPI, a network-fault-tolerant MPI

Rob T. Aulwes; David Daniel; Nehal N. Desai; Richard L. Graham; L.D. Risinger; Mark A. Taylor; Timothy S. Woodall; M.W. Sukalski

Summary form only given. We discuss the unique architectural elements of the Los Alamos message passing interface (LA-MPI), a high-performance, network-fault-tolerant, thread-safe MPI library. LA-MPI is designed for use on terascale clusters which are inherently unreliable due to their sheer number of system components and trade-offs between cost and performance. We examine in detail the design concepts used to implement LA-MPI. These include reliability features, such as application-level checksumming, message retransmission, and automatic message rerouting. Other key performance enhancing features, such as concurrent message routing over multiple, diverse network adapters and protocols, and communication-specific optimizations (e.g., shared memory) are examined.
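The reliability features the abstract names, application-level checksumming and message retransmission, can be sketched in a few lines. This is not LA-MPI code: the framing format and retry policy below are invented for illustration; only the general technique (checksum the frame, resend on verification failure) follows the paper's description.

```python
# Illustrative sketch (not LA-MPI code): application-level CRC
# checksumming with retransmission over an unreliable channel.
import zlib

def frame(seq, payload: bytes) -> bytes:
    """Prefix a sequence number and append a CRC32 over the whole body."""
    body = seq.to_bytes(4, "big") + payload
    return body + zlib.crc32(body).to_bytes(4, "big")

def check(data: bytes):
    """Verify the CRC; return (seq, payload) or None if corrupted."""
    body, crc = data[:-4], int.from_bytes(data[-4:], "big")
    if zlib.crc32(body) != crc:
        return None                          # corrupted: needs a resend
    return int.from_bytes(body[:4], "big"), body[4:]

def deliver(payload: bytes, channel, max_retries=3):
    """Retransmit until the receiver's checksum verifies."""
    for attempt in range(max_retries):
        received = channel(frame(attempt, payload))
        result = check(received)
        if result is not None:
            return result[1]
    raise IOError("message lost after retries")

# A channel that corrupts the first transmission only.
state = {"first": True}
def flaky(data):
    if state["first"]:
        state["first"] = False
        return data[:-1] + bytes([data[-1] ^ 0xFF])  # flip one byte
    return data
```

In a real network-fault-tolerant MPI the retransmission is driven by acknowledgements and timeouts rather than a synchronous retry loop, but the end-to-end check is the same: corruption anywhere along the path is caught by the receiver-side checksum.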


Future Generation Computer Systems | 2008

The Open Run-Time Environment (OpenRTE): A transparent multicluster environment for high-performance computing

Ralph H. Castain; Timothy S. Woodall; David Daniel; Jeffrey M. Squyres; Brian Barrett; Graham E. Fagg

The Open Run-Time Environment (OpenRTE), a spin-off from the Open MPI project, was developed to support distributed high-performance computing applications operating in a heterogeneous environment. The system transparently provides support for interprocess communication, resource discovery and allocation, and process launch across a variety of platforms. In addition, users can launch their applications remotely from their desktop, disconnect from them, and reconnect at a later time to monitor progress. This paper describes the capabilities and architecture of the OpenRTE system and discusses future directions for the project.


Lecture Notes in Computer Science | 2004

TEG: A High-Performance, Scalable, Multi-network Point-to-Point Communications Methodology

Timothy S. Woodall; Richard L. Graham; Ralph H. Castain; David Daniel; Mitchel W. Sukalski; Graham E. Fagg; Edgar Gabriel; George Bosilca; Thara Angskun; Jack J. Dongarra; Jeffrey M. Squyres; Vishal Sahay; Prabhanjan Kambadur; Andrew Lumsdaine

TEG is a new component-based methodology for point-to-point messaging. Developed as part of the Open MPI project, TEG provides a configurable fault-tolerant capability for high-performance messaging that utilizes multi-network interfaces where available. Initial performance comparisons with other MPI implementations show comparable ping-pong latencies, but with bandwidths up to 30% higher.


Lecture Notes in Computer Science | 2004

Open MPI's TEG Point-to-Point Communications Methodology: Comparison to Existing Implementations

Timothy S. Woodall; Richard L. Graham; Ralph H. Castain; David Daniel; Mitchel W. Sukalski; Graham E. Fagg; Edgar Gabriel; George Bosilca; Thara Angskun; Jack J. Dongarra; Jeffrey M. Squyres; Vishal Sahay; Prabhanjan Kambadur; Andrew Lumsdaine

TEG is a new methodology for point-to-point messaging developed as a part of the Open MPI project. Initial performance measurements are presented, showing comparable ping-pong latencies in a single NIC configuration, but with bandwidths up to 30% higher than that achieved by other leading MPI implementations. Homogeneous dual-NIC configurations further improved performance, but the heterogeneous case requires continued investigation.


Lecture Notes in Computer Science | 2005

The Open Run-Time Environment (OpenRTE): A Transparent Multi-cluster Environment for High-Performance Computing

Ralph H. Castain; Timothy S. Woodall; David Daniel; Jeffrey M. Squyres; Graham E. Fagg

The Open Run-Time Environment (OpenRTE), a spin-off from the Open MPI project, was developed to support distributed high-performance computing applications operating in a heterogeneous environment. The system transparently provides support for interprocess communication, resource discovery and allocation, and process launch across a variety of platforms. In addition, users can launch their applications remotely from their desktop, disconnect from them, and reconnect at a later time to monitor progress. This paper describes the capabilities and architecture of the OpenRTE system and discusses future directions for the project.


Parallel Processing Letters | 2007

Open MPI: A High Performance, Flexible Implementation of MPI Point-to-Point Communications

Richard L. Graham; Galen M. Shipman; Timothy S. Woodall; George Bosilca

Open MPI's point-to-point communications abstractions, described in this paper, handle several different communications scenarios with a portable, high-performance design and implementation. These abstractions support two types of low-level communication protocols: general purpose point-to-point communications, like the OpenIB interface, and MPI-like interfaces, such as Myricom's MX library. Support for the first type of protocol makes use of all communications resources available to a given application run, with optional support for communications error recovery. The latter provides an interface layer, relying on the communications library to guarantee correct MPI message ordering and matching. This paper describes the three point-to-point communications protocols currently supported in the Open MPI implementation, supported with performance data. This includes comparisons with other MPI implementations using the OpenIB, MX, and GM communications libraries.
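The MPI matching semantics the abstract refers to, delivering messages by (source, tag, communicator) while preserving arrival order per match key, can be sketched as a small queue. This is not Open MPI code: the class, the sentinel values, and the synchronous `receive` are simplifications for illustration; real implementations also keep a posted-receive queue and handle blocking.

```python
# Illustrative sketch (not Open MPI code): MPI-style message matching
# on (source, tag, communicator), with FIFO order among messages that
# match the same criteria.
from collections import deque

ANY_SOURCE, ANY_TAG = -1, -2          # wildcard sentinels, as in MPI

class MatchQueue:
    def __init__(self):
        self.unexpected = deque()     # messages arrived but not yet received

    def post_arrival(self, src, tag, comm, payload):
        self.unexpected.append((src, tag, comm, payload))

    def receive(self, src, tag, comm):
        """Return the oldest message matching (src, tag, comm), honoring
        wildcards; None if nothing matches (a real MPI would block)."""
        for i, (s, t, c, payload) in enumerate(self.unexpected):
            if c == comm and src in (s, ANY_SOURCE) and tag in (t, ANY_TAG):
                del self.unexpected[i]
                return payload
        return None

q = MatchQueue()
q.post_arrival(0, 7, "world", b"first")
q.post_arrival(1, 7, "world", b"second")
```

When the underlying library (an MX-style matching interface) already enforces this ordering and matching, the MPI layer can stay thin; over a general-purpose transport like OpenIB, the MPI library must maintain a queue of this kind itself.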


International Parallel and Distributed Processing Symposium | 2005

Design and implementation of Open MPI over Quadrics/Elan4

Weikuan Yu; Timothy S. Woodall; Richard L. Graham; Dhabaleswar K. Panda

Open MPI is a project recently initiated to provide a fault-tolerant, multi-network capable implementation of MPI-2 (1997), based on experiences gained from the FT-MPI (G. E. Fagg et al., 2003), LA-MPI (R. L. Graham et al., 2003), LAM/MPI (J. Squyres et al., 2003), and MVAPICH projects. Its initial communication architecture is layered on top of TCP/IP. In this paper, we have designed and implemented the Open MPI point-to-point layer on top of a high-end interconnect, Quadrics/Elan4. The restriction of Quadrics' static process model has been overcome to accommodate the requirements of MPI-2 dynamic process management. Quadrics queue-based direct memory access (QDMA) and remote direct memory access (RDMA) mechanisms have been integrated to form a low-overhead, high-performance transport layer. Lightweight asynchronous progress is made possible with a combination of Quadrics chained-event and QDMA mechanisms. Experimental results indicate that the resulting point-to-point transport layer is able to achieve performance comparable to the Quadrics native QDMA operations from which it is derived. Our implementation provides an MPI-2 compliant message passing library over Quadrics/Elan4 with performance comparable to MPICH-Quadrics.

Collaboration


Dive into Timothy S. Woodall's collaborations.

Top Co-Authors

Richard L. Graham (Oak Ridge National Laboratory)
David Daniel (Los Alamos National Laboratory)
Ralph H. Castain (Los Alamos National Laboratory)
Andrew Lumsdaine (Indiana University Bloomington)
Galen M. Shipman (Oak Ridge National Laboratory)