Publication


Featured research published by Rainer Keller.


European PVM/MPI Users' Group Meeting on Recent Advances in Parallel Virtual Machine and Message Passing Interface | 1998

Distributed Computing in a Heterogeneous Computing Environment

Edgar Gabriel; Michael M. Resch; Thomas Beisel; Rainer Keller

Distributed computing is a means to overcome the limitations of single computing systems. In this paper we describe how clusters of heterogeneous supercomputers can be used to run a single application or a set of applications. We concentrate on the communication problem in such a configuration and present a software library called PACX-MPI that was developed to allow a single system image from the point of view of an MPI programmer. We describe the concepts that have been implemented for heterogeneous clusters of this type and give a description of real applications using this library.
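The single-system-image idea behind PACX-MPI can be illustrated with a small sketch (plain Python; all names and sizes are hypothetical, and the library's real internals differ): a global rank is mapped to a (cluster, local rank) pair, and a message either stays inside one MPI world or is handed to a per-cluster communication daemon.

```python
# Sketch of global-to-local rank mapping across two coupled MPI worlds.
# Cluster sizes and function names are illustrative only.

CLUSTER_SIZES = [4, 4]  # two clusters of 4 processes each

def locate(global_rank):
    """Map a global rank to (cluster id, local rank)."""
    offset = 0
    for cluster, size in enumerate(CLUSTER_SIZES):
        if global_rank < offset + size:
            return cluster, global_rank - offset
        offset += size
    raise ValueError(f"rank {global_rank} out of range")

def route(src, dst):
    """Decide whether a message stays inside one MPI world or must
    be forwarded between clusters by communication daemons."""
    c_src, _ = locate(src)
    c_dst, local_dst = locate(dst)
    if c_src == c_dst:
        return ("local", local_dst)
    return ("daemon", c_dst, local_dst)

print(route(1, 3))  # ('local', 3): same cluster, native MPI
print(route(1, 6))  # ('daemon', 1, 2): crosses the cluster boundary
```

The application itself keeps calling standard MPI on global ranks; only the routing layer underneath changes, which is why no code changes are required.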


Journal of Grid Computing | 2003

Towards Efficient Execution of MPI Applications on the Grid: Porting and Optimization Issues

Rainer Keller; Edgar Gabriel; Bettina Krammer; Matthias S. Müller; Michael M. Resch

The message passing interface (MPI) is a standard used by many parallel scientific applications. It offers the advantage of a smoother migration path for porting applications from high performance computing systems to the Grid. In this paper Grid-enabled tools and libraries for developing MPI applications are presented. The first is MARMOT, a tool that checks the adherence of an application to the MPI standard. The second is PACX-MPI, an implementation of the MPI standard optimized for Grid environments. Besides the efficient development of the program, an optimal execution is of paramount importance for most scientific applications. We therefore discuss not only performance on the level of the MPI library, but also several application-specific optimizations, such as latency hiding, prefetching, caching and topology-aware algorithms, e.g., for a sparse parallel equation solver and an RNA folding code.
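Latency hiding, one of the optimizations mentioned above, can be sketched with a toy cost model (all numbers and names are made up): instead of waiting for the halo exchange and then computing, the interior of the domain, which needs no remote data, is computed while the messages are in flight.

```python
# Toy cost model for latency hiding: overlap halo exchange with
# interior computation. Timings are illustrative, not measured.

def blocking(t_comm, t_interior, t_boundary):
    """Wait for the halo exchange, then compute everything."""
    return t_comm + t_interior + t_boundary

def overlapped(t_comm, t_interior, t_boundary):
    """Post nonblocking receives, compute the interior (which needs
    no halo data), then wait and compute the boundary cells."""
    return max(t_comm, t_interior) + t_boundary

print(blocking(5.0, 8.0, 1.0))    # 14.0
print(overlapped(5.0, 8.0, 1.0))  # 9.0: communication fully hidden
```

The gain is largest in Grid settings, where inter-cluster latency can dwarf the computation of a single iteration.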


Lecture Notes in Computer Science | 2003

Performance Prediction in a Grid Environment

Rosa M. Badia; Francesc Escalé; Edgar Gabriel; Judit Gimenez; Rainer Keller; Jesús Labarta; Matthias S. Müller

Knowing the performance of an application in a Grid environment is an important issue in application development and for scheduling decisions. In this paper we describe the analysis and optimisation of a computation- and communication-intensive application from the field of bioinformatics, which was demonstrated at the HPC-Challenge of Supercomputing 2002 at Baltimore. This application has been adapted to run on a heterogeneous computational Grid by means of PACX-MPI. The analysis and optimisation are based on trace-driven tools, mainly Dimemas and Vampir. All these methodologies and tools are being extended within the DAMIEN IST project.
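The essence of trace-driven prediction can be sketched in a few lines (a deliberately minimal model; Dimemas's actual simulator is far more elaborate, and the network parameters below are invented): replay recorded (compute time, message size) events under a simple latency/bandwidth model.

```python
# Minimal trace-driven performance prediction sketch: replay
# (compute_seconds, message_bytes) events under an assumed
# latency/bandwidth network model. Parameters are made up.

LATENCY = 1e-3     # seconds per message (assumed)
BANDWIDTH = 100e6  # bytes per second (assumed)

def predict(trace):
    """Predicted wall time for a serialized event trace."""
    total = 0.0
    for compute_s, msg_bytes in trace:
        total += compute_s
        if msg_bytes:
            total += LATENCY + msg_bytes / BANDWIDTH
    return total

trace = [(0.5, 1_000_000), (0.2, 0), (0.3, 4_000_000)]
print(round(predict(trace), 4))  # 1.052
```

Changing LATENCY and BANDWIDTH then answers "what if this ran over a slower Grid link?" without rerunning the application.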


Formal Methods | 2013

The ParaPhrase Project: Parallel patterns for adaptive heterogeneous multicore systems

Kevin Hammond; Marco Aldinucci; Christopher Brown; Francesco Cesarini; Marco Danelutto; Horacio González-Vélez; Peter Kilpatrick; Rainer Keller; Michael Rossbory; Gilad Shainer

This paper describes the ParaPhrase project, a new 3-year targeted research project funded under EU Framework 7 Objective 3.4 (Computer Systems), starting in October 2011. ParaPhrase aims to follow a new approach to introducing parallelism using advanced refactoring techniques coupled with high-level parallel design patterns. The refactoring approach will use these design patterns to restructure programs defined as networks of software components into other forms that are more suited to parallel execution. The programmer will be aided by high-level cost information that will be integrated into the refactoring tools. The implementation of these patterns will then use a well-understood algorithmic skeleton approach to achieve good parallelism.
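One of the classic patterns ParaPhrase targets is the task farm, which can be sketched with the Python standard library (an illustrative analogue only; the project itself works on component networks, not this API):

```python
# A minimal "farm" (task-farm) skeleton: a pool of workers applies
# the same function to a stream of independent tasks.
from concurrent.futures import ThreadPoolExecutor

def farm(worker, tasks, n_workers=4):
    """Apply `worker` to each task with a pool of workers,
    preserving input order in the results."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(worker, tasks))

print(farm(lambda x: x * x, range(6)))  # [0, 1, 4, 9, 16, 25]
```

The refactoring idea is that a sequential map like this is mechanically rewritten into the farm form once the tool's cost model predicts a benefit.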


Archive | 2008

Tools for High Performance Computing

Michael M. Resch; Rainer Keller; Valentin Himmler; Bettina Krammer; Alexander Schulz

With the advent of multi-core processors, implementing parallel programming methods in application development is absolutely necessary in order to achieve good performance. Soon, 8-core and possibly 16-core processors will be available, even for desktop machines. To support application developers in the various tasks involved in this process, several different tools need to be at their disposal. This workshop will give users an overview of the existing tools in the area of integrated development environments for clusters, various parallel debuggers, and new-style performance analysis tools, as well as an update on the state of the art of long-term research tools, which have advanced to an industrial level. The proceedings of the 2nd Parallel Tools Workshop guide participants by providing a technical overview to help them decide which tool suits the requirements of the development task at hand. Additionally, through the hands-on sessions, the workshop will enable the user to immediately deploy the tools.


EuroMPI'11 Proceedings of the 18th European MPI Users' Group conference on Recent advances in the message passing interface | 2011

OMPIO: a modular software architecture for MPI I/O

Mohamad Chaarawi; Edgar Gabriel; Rainer Keller; Richard L. Graham; George Bosilca; Jack J. Dongarra

Today, I/O is probably the most limiting factor on high-end machines for large-scale parallel applications. This paper introduces OMPIO, a new parallel I/O architecture for Open MPI. OMPIO provides a highly modular approach to parallel I/O by separating I/O functionality into smaller units (frameworks) and an arbitrary number of modules in each framework. Furthermore, each framework has a customized selection criterion that determines which module to use, depending on the functionality of the framework as well as external parameters.
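The framework/module selection described above can be sketched as follows (a minimal sketch in the spirit of OMPIO's component selection; module names and priority values are hypothetical): each module reports a priority for the current environment, and the framework picks the highest usable one.

```python
# Sketch of priority-based module selection within one framework.
# Module names and priorities are invented for illustration.

MODULES = {
    "posix_fs":  lambda env: 20,  # generic fallback, always usable
    "lustre_fs": lambda env: 80 if env.get("fs") == "lustre" else -1,
}

def select_module(env):
    """Return the usable module with the highest priority;
    a priority below zero means the module cannot run here."""
    ranked = [(prio(env), name) for name, prio in MODULES.items()]
    ranked = [r for r in ranked if r[0] >= 0]
    return max(ranked)[1]

print(select_module({"fs": "lustre"}))  # lustre_fs
print(select_module({"fs": "nfs"}))     # posix_fs
```

Because selection is per framework, a file on a parallel file system can get a specialized module for file access while other frameworks keep their generic defaults.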


international conference on computational science | 2003

Software development in the grid: the DAMIEN tool-set

Edgar Gabriel; Rainer Keller; Peggy Lindner; Matthias S. Müller; Michael M. Resch

The development of applications for Grid environments currently lacks support from tools that end-users are familiar with from their regular working environment. This paper analyzes the requirements for developing, porting and optimizing scientific applications for Grid environments. We then present a toolbox, designed and implemented within the DAMIEN project, that closes some of these gaps and supports the end-user during the development of the application and its day-to-day usage in Grid environments.


international symposium on parallel and distributed computing | 2006

Testing the Correctness of MPI Implementations

Rainer Keller; Michael M. Resch

This paper introduces an MPI test suite to thoroughly test the correctness of an MPI implementation. Scientific applications require stable tools and libraries for portable and efficient programming, e.g., when depending on single-sided communication on multiple platforms. This test suite was originally used to check the correct transmission of data in the PACX-MPI implementation, but has mainly been used and extended to test the development of the new Open MPI implementation. The tool has been designed to be easily extensible so that new tests can be integrated using the underlying functionality.
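The "easily extensible" design can be sketched as a registration-based harness (illustrative only; the real suite exercises actual MPI calls in C, and all names below are hypothetical): new tests register themselves with a decorator and the runner reports per-test results.

```python
# Sketch of an extensible test harness: tests self-register and the
# runner collects pass/fail results without knowing them in advance.

TESTS = []

def mpi_test(func):
    """Decorator that registers a test case with the suite."""
    TESTS.append(func)
    return func

@mpi_test
def test_ring_send():
    # Stand-in for checking data after a ring exchange.
    sent, received = [0, 1, 2], [0, 1, 2]
    assert sent == received

def run_all():
    """Run every registered test and return a name -> status map."""
    results = {}
    for test in TESTS:
        try:
            test()
            results[test.__name__] = "ok"
        except AssertionError:
            results[test.__name__] = "FAIL"
    return results

print(run_all())  # {'test_ring_send': 'ok'}
```

Adding coverage for a new MPI feature then means writing one decorated function, with no changes to the runner.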


Parallel Computational Fluid Dynamics 1998: Development and Applications of Parallel Technology | 1999

A Metacomputing Environment for Computational Fluid Dynamics

Michael M. Resch; Dirk Rantzau; Holger Berger; Katrin Bidmon; Rainer Keller; Edgar Gabriel

The purpose of this article is to present an environment that was set up to allow metacomputing on a cluster of MPPs. The environment makes it possible to run a single MPI application on several massively parallel computers without changing the code. Furthermore, it allows the results of the simulation to be visualized online.


Parallel Tools Workshop | 2012

Advanced Memory Checking Frameworks for MPI Parallel Applications in Open MPI

Shiqing Fan; Rainer Keller; Michael M. Resch

In this paper, we describe the implementation of memory checking functionality that is based on instrumentation tools. The combination of instrumentation-based checking functions and the MPI implementation offers superior debugging functionality for errors that are otherwise impossible to detect with comparable MPI debugging tools. Our implementation contains three parts: first, a memory callback extension that is implemented on top of the Valgrind Memcheck tool for advanced memory checking in parallel applications; second, a new instrumentation tool developed on the Intel Pin framework, which provides functionality similar to Memcheck's and can be used in Windows environments that have no access to the Valgrind suite; third, the integration of all checking functionality as the so-called memchecker framework within Open MPI. This will also allow other memory debuggers that offer a similar API to be integrated. The tight control of the user's memory passed to Open MPI allows us to detect application errors and to track bugs within Open MPI itself. The extension of the callback mechanism targets communication buffer checks in both pre- and post-communication phases, in order to analyze the usage of the received data, e.g. whether the received data has been overwritten before it is used in a computation or whether the data is never used. We describe our actual checks, the classes of errors being found, how memory buffers are handled internally, errors actually found in users' code, and the performance implications of our instrumentation.
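The core shadow-memory idea behind these checks can be sketched in miniature (illustrative only; Memcheck tracks definedness at the machine level, and the class and method names here are invented): every buffer byte carries a defined/undefined bit, and reading a byte that was never written or received is reported.

```python
# Shadow-memory sketch: track which bytes of a buffer are defined,
# and report reads of bytes that were never filled, e.g. a receive
# buffer read before the matching receive completed.

class ShadowBuffer:
    def __init__(self, size):
        self.defined = [False] * size   # one shadow bit per byte

    def mark_received(self, start, length):
        """A receive completed: these bytes become defined."""
        for i in range(start, start + length):
            self.defined[i] = True

    def check_read(self, start, length):
        """Return offsets that are read while still undefined."""
        return [i for i in range(start, start + length)
                if not self.defined[i]]

buf = ShadowBuffer(8)
buf.mark_received(0, 4)
print(buf.check_read(0, 8))  # [4, 5, 6, 7]: bytes 4-7 never received
```

Hooking such marks into the MPI library's pre- and post-communication phases is what lets the checker flag, for example, data that is received but never used.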

Collaboration


Dive into Rainer Keller's collaborations.

Top Co-Authors

Shiqing Fan

University of Stuttgart


Craig A. Stewart

Indiana University Bloomington


Donald K. Berry

Indiana University Bloomington
