Bettina Krammer
University of Tennessee
Publications
Featured research published by Bettina Krammer.
Parallel Computing | 2004
Bettina Krammer; Katrin Bidmon; Matthias S. Müller; Michael M. Resch
Publisher Summary This chapter discusses MARMOT, a tool that checks at runtime whether a message passing interface (MPI) application conforms to the MPI standard, that is, whether resources such as communicators, groups, datatypes and operators, as well as other parameters, are handled correctly. Further problems of parallel programs are the introduction of irreproducibility, race conditions and deadlocks. The idea of MARMOT is to verify the standard conformance of an MPI program automatically during runtime and to help debug the program in case of problems. The benchmark results show that the use of MARMOT introduces a certain overhead, especially when the execution is serialized. As the development of MARMOT is still ongoing, the main emphasis has so far been on implementing its functionality rather than on optimizing its performance. Nevertheless, the benchmarks show that for applications with a reasonable communication-to-computation ratio, the overhead of using MARMOT stays below 20%–50% on up to 16 processors.
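To make the error class concrete, the snippet below is a minimal sketch, not code from the chapter, of a classic send-send deadlock that a runtime checker of this kind flags; the buffer size is chosen only to defeat eager buffering and the two-process assumption is for illustration.

/* Hypothetical sketch: both ranks issue a blocking MPI_Send before
 * the matching MPI_Recv. Whether this hangs depends on the MPI
 * implementation's internal buffering, which is exactly why such
 * errors are hard to catch by testing alone. Assumes 2 processes. */
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, peer;
    double buf[100000];     /* large enough to defeat eager buffering */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    peer = 1 - rank;        /* assumes exactly two processes */

    for (int i = 0; i < 100000; i++)
        buf[i] = (double)rank;

    /* Both ranks block in MPI_Send: potential deadlock. */
    MPI_Send(buf, 100000, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD);
    MPI_Recv(buf, 100000, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD,
             MPI_STATUS_IGNORE);

    MPI_Finalize();
    return 0;
}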
Journal of Grid Computing | 2003
Rainer Keller; Edgar Gabriel; Bettina Krammer; Matthias S. Müller; Michael M. Resch
The message passing interface (MPI) is a standard used by many parallel scientific applications. It offers the advantage of a smoother migration path for porting applications from high performance computing systems to the Grid. In this paper Grid-enabled tools and libraries for developing MPI applications are presented. The first is MARMOT, a tool that checks the adherence of an application to the MPI standard. The second is PACX-MPI, an implementation of the MPI standard optimized for Grid environments. Besides efficient development of the program, optimal execution is of paramount importance for most scientific applications. We therefore discuss not only performance at the level of the MPI library, but also several application-specific optimizations, such as latency hiding, prefetching, caching and topology-aware algorithms, for a sparse parallel equation solver and an RNA folding code.
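Of the optimizations listed above, latency hiding is the most generic, and it matters even more on a high-latency Grid link. The following is a hypothetical sketch, not code from the paper: a halo exchange posted with nonblocking calls so that interior computation overlaps the transfer; the kernels, array sizes and ring topology are placeholders.

/* Hypothetical sketch of latency hiding: post the exchange, compute
 * on data that does not depend on it, and wait only when the
 * boundary values are actually needed. */
#include <mpi.h>

/* Placeholder kernels; a real solver would update the grid here. */
static void compute_interior(void) { /* work needing no halo data */ }
static void compute_boundary(const double *halo) { (void)halo; }

int main(int argc, char **argv)
{
    enum { N = 1024 };
    double halo_in[N], halo_out[N];
    int rank, size, left, right;
    MPI_Request reqs[2];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    left  = (rank - 1 + size) % size;   /* ring neighbours */
    right = (rank + 1) % size;

    for (int i = 0; i < N; i++)
        halo_out[i] = (double)rank;

    /* Post the halo exchange first ... */
    MPI_Irecv(halo_in,  N, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(halo_out, N, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[1]);

    /* ... overlap it with independent computation ... */
    compute_interior();

    /* ... and block only when the received data is needed. */
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    compute_boundary(halo_in);

    MPI_Finalize();
    return 0;
}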
International Conference on Computational Science | 2004
Bettina Krammer; Matthias S. Müller; Michael M. Resch
The Message Passing Interface (MPI) is widely used to write parallel programs based on message passing. Due to the complexity of parallel programming there is a need for tools supporting the development process. There are many situations where incorrect usage of MPI by the application programmer can be detected automatically. Examples are the introduction of irreproducibility, deadlocks and incorrect management of resources like communicators, groups, datatypes and operators. We describe the tool MARMOT, which implements some of these tests. Finally, we report our experiences with three applications of the CrossGrid project regarding the usability and performance of this tool.
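As an illustration of the irreproducibility named above (a hypothetical sketch, not an example from the paper): a wildcard receive accepts messages in whatever order they arrive, so a floating-point reduction can differ from run to run, which is exactly the kind of nondeterminism a checker can warn about.

/* Hypothetical sketch: MPI_ANY_SOURCE makes the summation order,
 * and hence the rounded result, depend on message arrival order. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        double acc = 0.0, val;
        for (int i = 1; i < size; i++) {
            /* Arrival order is nondeterministic, so 'acc' may
             * differ between runs of the same program. */
            MPI_Recv(&val, 1, MPI_DOUBLE, MPI_ANY_SOURCE, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            acc += val;
        }
        printf("sum = %.17g\n", acc);
    } else {
        double val = 1.0 / rank;    /* placeholder contribution */
        MPI_Send(&val, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}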
Archive | 2008
Michael M. Resch; Rainer Keller; Valentin Himmler; Bettina Krammer; Alexander Schulz
With the advent of multi-core processors, implementing parallel programming methods in application development is absolutely necessary in order to achieve good performance. Soon, 8-core and possibly 16-core processors will be available, even for desktop machines. To support application developers in the various tasks involved in this process, several different tools need to be at their disposal. This workshop gives users an overview of the existing tools in the area of integrated development environments for clusters, various parallel debuggers, and new-style performance analysis tools, as well as an update on the state of the art of long-term research tools that have advanced to an industrial level. The proceedings of the 2nd Parallel Tools Workshop guide participants by providing a technical overview to help them decide which tool suits the requirements of the development task at hand. Additionally, through the hands-on sessions, the workshop enables users to immediately deploy the tools.
Lecture Notes in Computer Science | 2006
Bettina Krammer; Michael M. Resch
The MPI-2 standard defines functions for Remote Memory Access (RMA) by allowing one process to specify all communication parameters for both the sending and the receiving side, which is also referred to as one-sided communication. Having experienced parallel programming as a complex and error-prone task, we have developed the MPI correctness checking tool MARMOT covering the MPI-1.2 standard, and we are now aiming to extend it to also support application developers with the more frequently used parts of MPI-2, such as one-sided communication. In this paper we describe our tool, which is designed to check the correct usage of the MPI API automatically at run-time, and we analyse to what extent it is possible to do so for RMA.
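For readers unfamiliar with one-sided communication, the following hypothetical sketch (not from the paper) shows fence-synchronized RMA; the comments point out the epoch rule a checker has to verify. It assumes at least two processes, and the values are illustrative.

/* Hypothetical sketch of MPI-2 one-sided communication with fence
 * synchronization. MPI_Put is legal only inside the access epoch
 * opened and closed by the two fences. Assumes >= 2 processes. */
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank;
    double local = 0.0, value = 42.0;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Expose one double per process as an RMA window. */
    MPI_Win_create(&local, sizeof(double), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);              /* opens the epoch */
    if (rank == 0)
        MPI_Put(&value, 1, MPI_DOUBLE, 1, 0, 1, MPI_DOUBLE, win);
    /* Reading 'local' here, before the closing fence, would be the
     * kind of erroneous access such a checker aims to detect. */
    MPI_Win_fence(0, win);              /* closes the epoch */

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}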
Lecture Notes in Computer Science | 2004
Bettina Krammer; Matthias S. Müller; Michael M. Resch
The most frequently used part of MPI-2 is MPI I/O. Due to the complexity of parallel programming in general, and of handling parallel I/O in particular, there is a need for tools that support the application development process. There are many situations where incorrect usage of MPI by the application programmer can be automatically detected. In this paper we describe the MARMOT tool that uncovers some of these errors and we also analyze to what extent it is possible to do so for MPI I/O.
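A hypothetical sketch of the MPI I/O usage in question (not code from the paper): each rank writes its own block of a shared file at an explicit offset, and the comments name typical error classes a checker can look for. The filename and sizes are illustrative.

/* Hypothetical sketch of basic MPI I/O. Typical errors include
 * mismatched access modes across ranks, writes to a file opened
 * MPI_MODE_RDONLY, or a missing MPI_File_close (a leaked handle). */
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank;
    double chunk[1024];
    MPI_File fh;
    MPI_Offset offset;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (int i = 0; i < 1024; i++)
        chunk[i] = (double)rank;        /* placeholder payload */

    /* The access mode must be identical on all ranks. */
    MPI_File_open(MPI_COMM_WORLD, "out.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY,
                  MPI_INFO_NULL, &fh);

    offset = (MPI_Offset)rank * 1024 * sizeof(double);
    MPI_File_write_at(fh, offset, chunk, 1024, MPI_DOUBLE,
                      MPI_STATUS_IGNORE);

    MPI_File_close(&fh);    /* omitting this leaks the file handle */
    MPI_Finalize();
    return 0;
}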
International Journal of Parallel Programming | 2009
Tobias Hilbrich; Matthias S. Müller; Bettina Krammer
The MPI interface is the de-facto standard for message passing applications, but it is also complex and defines several usage patterns as erroneous. A current trend is the investigation of hybrid programming techniques that use MPI processes and multiple threads per process. As a result, more and more MPI implementations support multi-threading, the use of which is restricted by several rules of the MPI standard. In order to support developers of hybrid MPI applications, we present extensions to the MPI correctness checking tool Marmot. Basic extensions make it aware of OpenMP multi-threading, while further ones add new correctness checks. As a result, Marmot can detect errors that actually occur in a given run. However, some errors only occur for certain execution orders; thus, we present a novel approach using artificial data races, which allows us to employ thread checking tools, e.g., Intel Thread Checker, to detect MPI usage errors.
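To illustrate one of the thread-level rules involved (a hypothetical sketch, not from the paper): an application must not exceed the thread support level granted by MPI_Init_thread, so a guard like the one below is required whenever only MPI_THREAD_FUNNELED is provided.

/* Hypothetical sketch of a hybrid MPI/OpenMP thread-level rule:
 * under MPI_THREAD_FUNNELED only the master thread may call MPI,
 * so unguarded MPI calls from worker threads are erroneous. */
#include <mpi.h>
#include <omp.h>

int main(int argc, char **argv)
{
    int provided, rank;

    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    #pragma omp parallel
    {
        int flag;
        /* Legal for all threads only if provided is
         * MPI_THREAD_MULTIPLE; otherwise restrict to the master. */
        if (provided == MPI_THREAD_MULTIPLE || omp_get_thread_num() == 0)
            MPI_Iprobe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD,
                       &flag, MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}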
Archive | 2005
Bettina Krammer; Matthias S. Müller; Michael M. Resch
Parallel Computing | 2005
Bettina Krammer; Matthias S. Müller
Parallel Computing | 2007
Bettina Krammer; Valentin Himmler; David Lecomber