
Publication


Featured research published by Rolf Hempel.


Archive | 1993

A Proposal for a User-Level, Message-Passing Interface in a Distributed Memory Environment

Jack J. Dongarra; Rolf Hempel; Anthony J. G. Hey; David W. Walker

This paper describes Message Passing Interface 1 (MPI1), a proposed library interface standard for supporting point-to-point message passing. The intended standard will be provided with Fortran 77 and C interfaces, and will form the basis of a standard high-level communication environment featuring collective communication and data distribution transformations. The standard proposed here provides blocking and nonblocking message passing between pairs of processes, with message selectivity by source process and message type. Provision is made for noncontiguous messages. Context control provides a convenient means of avoiding message selectivity conflicts between different phases of an application. The ability to form and manipulate process groups permits task parallelism to be exploited, and is a useful abstraction for controlling certain types of collective communication.
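
As an illustration of the point-to-point model with selectivity by source and message tag, here is a minimal sketch written against the present-day MPI C API. The 1993 MPI1 proposal used somewhat different names and conventions, so this should be read as indicative rather than as a faithful rendering of the proposed interface.

/* Minimal blocking point-to-point exchange with selectivity by
 * source and tag; run with at least two processes (e.g. mpirun -np 2).
 * Uses the present-day MPI API, not the 1993 MPI1 draft names. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        /* blocking send of one int to rank 1 with message tag 7 */
        MPI_Send(&value, 1, MPI_INT, 1, 7, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* the receive selects on both source (0) and tag (7);
         * MPI_ANY_SOURCE / MPI_ANY_TAG would relax the selection */
        MPI_Recv(&value, 1, MPI_INT, 0, 7, MPI_COMM_WORLD, &status);
        printf("rank 1 got %d from rank %d, tag %d\n",
               value, status.MPI_SOURCE, status.MPI_TAG);
    }

    MPI_Finalize();
    return 0;
}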


European PVM/MPI Users' Group Meeting on Recent Advances in Parallel Virtual Machine and Message Passing Interface | 1999

Flattening on the Fly: Efficient Handling of MPI Derived Datatypes

Jesper Larsson Träff; Rolf Hempel; Hubert Ritzdorf; Falk Zimmermann

The Message Passing Interface (MPI) incorporates a mechanism for describing structured, non-contiguous memory layouts for use as communication buffers in MPI communication functions. The rationale behind the derived datatype mechanism is to relieve the user of the tedious packing and unpacking of non-consecutive data into contiguous communication buffers. Furthermore, the mechanism makes it possible to improve performance by saving on internal buffering. However, current MPI implementations entail considerable performance penalties when working with derived datatypes. We describe a new method called flattening on the fly for the efficient handling of derived datatypes in MPI. The method aims at exploiting regularities in the memory layout described by the datatype as far as possible. In addition, it considerably reduces the overhead of parsing the datatype. Flattening on the fly has been implemented and evaluated on an NEC SX-4 vector supercomputer. On the SX-4, flattening on the fly performs significantly better than previous methods, resulting in performance comparable to what the user can, in the best case, achieve by packing and unpacking data manually. On a PC cluster, too, the method gives worthwhile improvements in cases that are not handled well by the conventional implementation.
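
As a concrete illustration of the mechanism being optimized here, the following sketch builds a standard MPI derived datatype describing one column of a row-major matrix and sends it without manual packing. The calls are plain MPI; nothing in the sketch is specific to the flattening-on-the-fly implementation.

/* Send one column of a row-major N x N matrix as a derived datatype
 * (a strided vector), avoiding manual packing; run with two processes. */
#include <mpi.h>

#define N 4

int main(int argc, char **argv)
{
    double a[N][N];
    MPI_Datatype column;
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* N blocks of 1 double, a stride of N doubles apart: column 0 of a */
    MPI_Type_vector(N, 1, N, MPI_DOUBLE, &column);
    MPI_Type_commit(&column);

    if (rank == 0) {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                a[i][j] = i * N + j;
        MPI_Send(&a[0][0], 1, column, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        double col[N];  /* the non-contiguous column arrives contiguously */
        MPI_Recv(col, N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    MPI_Type_free(&column);
    MPI_Finalize();
    return 0;
}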


Parallel Computing | 1994

Portable programming with the PARMACS message-passing library

Robin Calkin; Rolf Hempel; H.-C. Hoppe; Peter Wypior

Message passing is the most efficient and most general programming paradigm currently used on parallel machines with distributed memory. In the absence of a message-passing standard, the broad variety of vendor-specific interfaces inhibits the portability of application programs. The PARMACS library presented in this paper defines a portability layer which has been implemented on most MIMD computers, ranging from MPP systems to workstation networks. The new release, version 6.0, is discussed in detail. It is available for applications written in Fortran 77 and C. To assess the time overhead caused by PARMACS, two benchmark applications with differing communication requirements have been implemented both with machine-specific interfaces and portably with PARMACS. The performance has been compared for problems of various sizes on three machines of different architectures. In general, the use of PARMACS does not cause any significant overhead.
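
To make the idea of a portability layer concrete, here is a conceptual sketch of an application coding against a single send routine while a thin, machine-specific back-end is swapped in per platform. All names are hypothetical illustrations; they are not the actual PARMACS macros or any real vendor interface.

#include <stdio.h>
#include <string.h>

/* hypothetical machine-specific back-end; on a real system this would
 * wrap the vendor's native send primitive */
static int native_send(int node, const void *buf, int len)
{
    printf("native_send: %d bytes to node %d\n", len, node);
    return 0;
}

/* the portable call the application uses on every machine; porting the
 * library means rewriting only this thin glue layer */
static int portable_send(int node, const void *buf, int len)
{
    return native_send(node, buf, len);
}

int main(void)
{
    const char msg[] = "hello";
    return portable_send(1, msg, (int)strlen(msg));
}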


Conference on High Performance Computing (Supercomputing) | 2000

The Implementation of MPI-2 One-Sided Communication for the NEC SX-5

Jesper Larsson Träff; Hubert Ritzdorf; Rolf Hempel

We describe the MPI/SX implementation of the MPI-2 standard for one-sided communication (Remote Memory Access) for the NEC SX-5 vector supercomputer. MPI/SX is a non-threaded implementation of the full MPI-2 standard. Essential features of the implementation are presented, including the synchronization mechanisms, the handling of communication windows in global shared and in process-local memory, and the handling of MPI derived datatypes. In comparative benchmarks, the data transfer operations for one-sided communication and point-to-point message passing show very similar performance, both when data reside in global shared memory and when they reside in process-local memory. Derived datatypes, which are of particular importance for applications using one-sided communication, impose only a modest overhead and can be used without any significant loss of performance. Thus, the MPI/SX programmer can freely choose either the message-passing or the one-sided communication model, whichever is more convenient for the given application.
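
For reference, here is a minimal sketch of the one-sided model with fence synchronization, using standard MPI-2 calls (nothing here is specific to MPI/SX): rank 0 writes directly into a window exposed by rank 1.

/* MPI-2 one-sided communication with fence synchronization;
 * run with at least two processes. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, buf = 0;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* every process exposes one int as a communication window */
    MPI_Win_create(&buf, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);              /* open the access epoch */
    if (rank == 0) {
        int value = 42;
        /* put 'value' into rank 1's window at displacement 0 */
        MPI_Put(&value, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
    }
    MPI_Win_fence(0, win);              /* close the epoch; data visible */

    if (rank == 1)
        printf("rank 1 window now holds %d\n", buf);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}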


IEEE International Conference on High Performance Computing, Data and Analytics | 1994

The MPI Standard for Message Passing

Rolf Hempel

The growing number of different message-passing interfaces makes it difficult to port an application program from one parallel computer to another. This is a major problem in parallel computing. The open MPI Forum has developed a de facto message-passing interface standard, which was finalized in the first quarter of 1994. All major parallel machine manufacturers were involved in the process, and the first implementations are already available.


Computer Standards & Interfaces | 1999

The emergence of the MPI message passing standard for parallel computing

Rolf Hempel; David W. Walker

MPI has been widely adopted as the message passing interface of choice in parallel computing environments. This paper examines how MPI grew out of the requirements of the scientific research community through a broad-based consultative process. The importance of MPI in providing a portable platform upon which to build higher level parallel software, such as numerical software libraries, is discussed. The development of MPI is contrasted with other similar standardization efforts, such as those of the Parallel Computing Forum and the HPF Forum. MPI is also compared with the Parallel Virtual Machine (PVM) system. Some general lessons learned from the MPI specification process are presented.


Parallel Computing | 1996

Real applications on the new parallel system NEC Cenju-3

Rolf Hempel; Robin Calkin; Reinhold Hess; Wolfgang Joppich; Cornelis W. Oosterlee; Hubert Ritzdorf; Peter Wypior; Wolfgang Ziegler; Nobuhiko Koike; Takashi Washio; Udo Keller

NEC's new massively parallel computer, the Cenju-3, has recently entered the market. NEC has set up a 64-processor machine at GMD. After the implementation of the PARMACS programming interface, large applications have been ported to the system. Early benchmark results show the performance of the Cenju-3 in a variety of application areas.


European PVM/MPI Users' Group Meeting on Recent Advances in Parallel Virtual Machine and Message Passing Interface | 1997

Implementation of MPI on NEC's SX-4 Multi-Node Architecture

Rolf Hempel; Hubert Ritzdorf; Falk Zimmermann

MPICH is a portable implementation of MPI, the international standard for message-passing programming. This paper describes the port of MPICH to the NEC SX-4 parallel vector supercomputer. By fine-tuning the implementation to the underlying architecture, the message-passing performance could be greatly enhanced. Some of the performance optimizations which led to NEC's current product-level MPI library are presented. Finally, there is an outlook on future activities, which will include further optimizations and functional extensions.
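
Tuning work of this kind is usually assessed with simple communication kernels. The following generic ping-pong sketch measures round-trip time and bandwidth between two ranks; it is illustrative only, not the benchmark used in the paper.

/* Ping-pong kernel: rank 0 and rank 1 bounce a buffer back and forth;
 * run with exactly two processes for a meaningful result. */
#include <mpi.h>
#include <stdio.h>

#define NBYTES (1 << 20)   /* message size: 1 MiB */
#define REPS   100

int main(int argc, char **argv)
{
    static char buf[NBYTES];
    int rank;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (int i = 0; i < REPS; i++) {
        if (rank == 0) {
            MPI_Send(buf, NBYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, NBYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, NBYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, NBYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("avg round trip: %.1f us, bandwidth: %.1f MB/s\n",
               (t1 - t0) / REPS * 1e6,
               2.0 * NBYTES * REPS / (t1 - t0) / 1e6);

    MPI_Finalize();
    return 0;
}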


Parallel Computing | 1999

High Performance Implementation of MPI for Myrinet

Maciej Golebiewski; Markus Baum; Rolf Hempel

This paper presents a new implementation of MPI on a cluster of Linux-based, dual-processor PCs interconnected by a Myricom high-speed network. A survey of existing software for this hardware configuration showed that no fully functional, correct, and complete MPI library exploiting the full hardware potential was available. Our library uses MPICH for the high-level protocol and FM/HPVM for the basic communication layer. It allows multiple processes and multiple users on the same PC, and passes an extensive test suite, including all test programs from the MPICH distribution, for both C and Fortran. The presented benchmarks, both simple communication kernels and full applications, show good performance. The result is the first high-performance MPI library which allows regular multi-user service for applications on our PC cluster.


Wiley Encyclopedia of Electrical and Electronics Engineering | 1999

Message-Passing Software Systems

Jack J. Dongarra; Graham E. Fagg; Rolf Hempel; David W. Walker

The sections in this article are:
1. Introduction
2. Parallel Programming Model
3. Message Passing
4. Message Passing Research and Experimental Systems
5. The Message Passing Interface Standard - MPI
6. Lessons Learned

Keywords: parallel computing; message passing libraries; MPI; PVM

Collaboration

Top co-authors of Rolf Hempel include Hubert Ritzdorf (Center for Information Technology), Falk Zimmermann (Center for Information Technology), and Rajeev Thakur (Argonne National Laboratory).