Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Michael M. Resch is active.

Publication


Featured research published by Michael M. Resch.


Journal of Grid Computing | 2003

Towards Efficient Execution of MPI Applications on the Grid: Porting and Optimization Issues

Rainer Keller; Edgar Gabriel; Bettina Krammer; Matthias S. Müller; Michael M. Resch

The Message Passing Interface (MPI) is a standard used by many parallel scientific applications. It offers the advantage of a smoother migration path for porting applications from high performance computing systems to the Grid. In this paper, Grid-enabled tools and libraries for developing MPI applications are presented. The first is MARMOT, a tool that checks the adherence of an application to the MPI standard. The second is PACX-MPI, an implementation of the MPI standard optimized for Grid environments. Besides the efficient development of the program, optimal execution is of paramount importance for most scientific applications. We therefore discuss not only performance at the level of the MPI library, but also several application-specific optimizations, such as latency hiding, prefetching, caching, and topology-aware algorithms, e.g. for a sparse parallel equation solver and an RNA folding code.
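The latency-hiding optimization mentioned in the abstract can be sketched in miniature. This is not code from the paper: it uses Python threads as a stand-in for nonblocking MPI calls (post receive, compute locally, then wait), and the 0.05 s "network delay" and halo/interior split are illustrative assumptions.

```python
import threading
import time

# Toy sketch of latency hiding: overlap a (simulated) message transfer
# with local computation, as one would with MPI_Irecv followed by
# computation on interior points and a final MPI_Wait.

def fetch_halo(result):
    time.sleep(0.05)              # stand-in for network latency
    result["halo"] = [0.0] * 4    # boundary values arriving from a neighbor

result = {}
t = threading.Thread(target=fetch_halo, args=(result,))
t.start()                         # post the "nonblocking receive"

# Compute on interior points while the transfer is in flight.
interior = sum(x * x for x in range(1000))

t.join()                          # the "MPI_Wait": halo is now usable
total = interior + sum(result["halo"])
```

The point of the pattern is that the communication time is paid for by useful work instead of idle waiting.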


Archive | 2008

Tools for High Performance Computing

Michael M. Resch; Rainer Keller; Valentin Himmler; Bettina Krammer; Alexander Schulz

With the advent of multi-core processors, implementing parallel programming methods in application development is absolutely necessary in order to achieve good performance. Soon, 8-core and possibly 16-core processors will be available, even for desktop machines. To support application developers in the various tasks involved in this process, several different tools need to be at their disposal. This workshop gives users an overview of existing tools in the area of integrated development environments for clusters, various parallel debuggers, and new-style performance analysis tools, as well as an update on the state of the art of long-term research tools that have advanced to an industrial level. The proceedings of the 2nd Parallel Tools Workshop guide participants by providing a technical overview to help them decide which tool suits the requirements of the development task at hand. Additionally, through hands-on sessions, the workshop enables users to deploy the tools immediately.


international parallel and distributed processing symposium | 2008

Outlier detection in performance data of parallel applications

Katharina Benkert; Edgar Gabriel; Michael M. Resch

When an adaptive software component is employed to select the best-performing implementation for a communication operation at runtime, the correctness of the decision taken strongly depends on detecting and removing outliers in the data used for the comparison. This automatic decision is greatly complicated by the fact that the types and quantities of outliers depend on the network interconnect and the nodes assigned to the job by the batch scheduler. This paper evaluates four different statistical methods for handling outliers, namely a standard interquartile range method, a heuristic derived from the trimmed mean value, cluster analysis, and a method using robust statistics. Using performance data from the Abstract Data and Communication Library (ADCL), we evaluate the correctness of the decisions made with each statistical approach over three fundamentally different network interconnects, namely a highly reliable InfiniBand network, a gigabit Ethernet network with larger performance variance, and a hierarchical gigabit Ethernet network.
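The first of the four methods, the standard interquartile-range filter, fits in a few lines. This is a generic sketch of the textbook IQR rule, not the paper's implementation; the sample timings and the 1.5 fence factor are illustrative assumptions.

```python
import statistics

# Standard IQR outlier rule: measurements outside
# [Q1 - k*IQR, Q3 + k*IQR] are treated as outliers (commonly k = 1.5).

def filter_outliers(samples, k=1.5):
    q1, _, q3 = statistics.quantiles(samples, n=4)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [s for s in samples if lo <= s <= hi]

# Hypothetical per-call timings (seconds); 5.70 mimics a network hiccup.
timings = [1.02, 0.98, 1.01, 0.99, 1.00, 5.70, 1.03]
clean = filter_outliers(timings)
```

After filtering, a runtime comparison between candidate implementations is based only on the retained measurements, so a single scheduler or network glitch cannot flip the decision.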


Parallel Tools Workshop | 2012

Advanced Memory Checking Frameworks for MPI Parallel Applications in Open MPI

Shiqing Fan; Rainer Keller; Michael M. Resch

In this paper, we describe the implementation of memory checking functionality based on instrumentation tools. Combining instrumentation-based checking functions with the MPI implementation offers superior debugging functionality for errors that are otherwise impossible to detect with comparable MPI debugging tools. Our implementation contains three parts: first, a memory callback extension implemented on top of the Valgrind Memcheck tool for advanced memory checking in parallel applications; second, a new instrumentation tool developed on the Intel Pin framework, which provides similar functionality to Memcheck and can be used in Windows environments that have no access to the Valgrind suite; third, all the checking functionality is integrated as the so-called memchecker framework within Open MPI. This also allows other memory debuggers that offer a similar API to be integrated. The tight control of the user's memory passed to Open MPI allows us to detect application errors and to track bugs within Open MPI itself. The extension of the callback mechanism targets communication buffer checks in both pre- and post-communication phases, in order to analyze the usage of the received data, e.g. whether the received data has been overwritten before it is used in a computation or whether the data is never used. We describe our actual checks, the classes of errors found, how memory buffers are handled internally, errors actually found in users' code, and the performance implications of our instrumentation.
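The definedness tracking underlying this kind of checker can be illustrated with a toy shadow-state model. This is a simplified sketch of the general Memcheck-style technique, not Open MPI's memchecker API or Valgrind's client interface; the `ShadowBuffer` class and its methods are invented for illustration.

```python
# Toy shadow-memory model: each byte of a buffer carries a "defined"
# bit, and reading an undefined byte is reported as an error -- the
# core idea behind Memcheck-style definedness tracking.

class ShadowBuffer:
    def __init__(self, size):
        self.data = bytearray(size)
        self.defined = [False] * size   # shadow state, one flag per byte

    def write(self, offset, payload):
        self.data[offset:offset + len(payload)] = payload
        for i in range(offset, offset + len(payload)):
            self.defined[i] = True

    def read(self, offset, length):
        bad = [i for i in range(offset, offset + length)
               if not self.defined[i]]
        if bad:
            raise RuntimeError(f"read of undefined bytes at {bad}")
        return bytes(self.data[offset:offset + length])

buf = ShadowBuffer(8)
buf.write(0, b"abcd")      # e.g. data delivered by a completed receive
ok = buf.read(0, 4)        # fully defined: allowed
try:
    buf.read(0, 8)         # touches bytes that were never written
    caught = False
except RuntimeError:
    caught = True
```

A real checker keeps this shadow state outside the application and updates it from instrumented loads and stores, which is what lets it flag, for example, use of a receive buffer before the matching communication has completed.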


Parallel Tools Workshop | 2008

Enhanced Memory debugging of MPI-parallel Applications in Open MPI

Shiqing Fan; Rainer Keller; Michael M. Resch

In this paper, we describe the implementation of memory checking functionality based on instrumentation using the Valgrind Memcheck tool. Combining Valgrind-based checking functions with the MPI implementation offers superior debugging functionality for errors that are otherwise impossible to detect with comparable MPI debugging tools. The functionality is integrated into Open MPI as the so-called memchecker framework, which also allows other memory debuggers that offer a similar API to be integrated. The tight control of the user's memory passed to Open MPI not only allows us to find application errors but also helps track bugs within Open MPI itself. We describe the actual checks, the classes of errors found, how memory buffers are handled internally, errors actually found in users' code, and the performance implications of this instrumentation.


international conference of the ieee engineering in medicine and biology society | 2002

Stent graft treatment optimization in a computer guided simulation environment

Marc Garbey; Michael M. Resch; Yuri V. Vassilevski; Björn Sander; Daniela Pless; Thorsten R. Fleiter

Treatment of Abdominal Aortic Aneurysms (AAA) has seen dramatic improvements in recent years. With improved surgical methods, the mortality rate has fallen to about 20% or less today. One way to treat an AAA is to implant a stent graft using endovascular methods in order to channel the blood on its way through the aneurysm. In this procedure the surgeon delivers the stent graft via a catheter into the dilatation, where it unfolds, taking the pressure away from the weakened aortic wall. This method has been proven to work well, and patients can typically be released as early as 24 hours after the procedure. However, complications may occur after the treatment, including leakage, migration of the stent, or even its elemental breakdown. The causes of these problems are not exactly known. To gain a better understanding of the behavior of this complex mechanical system, simulation is a feasible approach. This requires adequate data gathering for the individual patient, feasible mathematical and numerical models, and substantial compute performance.


Archive | 2005

Runtime Checking of MPI Applications with MARMOT

Bettina Krammer; Matthias S. Müller; Michael M. Resch


Archive | 2003

A Workbench for Teraflop Supercomputing

Michael M. Resch; Uwe Küster; Matthias S. Müller; Ulrich Lang


Archive | 2007

The HLRS-NEC Teraflop Workbench — Strategies, Result and Future

Martin Galle; Thomas Boenisch; Katharina Benkert; Stefan Borowski; Stefan Haberhauer; Peter Lammers; Fredrik Svensson; Sunil R. Tiyyagura; Michael M. Resch; Wolfgang Bez


Archive | 2007

Architectural and Programming Issues for Sustained Petaflop Performance

Uwe Küster; Michael M. Resch

Collaboration


Dive into Michael M. Resch's collaborations.

Top Co-Authors

Shiqing Fan

University of Stuttgart