Network


Latest external collaborations at the country level. Click on the dots for details.

Hotspot


Dive into the research topics where Michael Gerndt is active.

Publication


Featured research published by Michael Gerndt.


Euromicro Workshop on Parallel and Distributed Processing | 1999

Performance analysis on CRAY T3E

Michael Gerndt; Bernd Mohr; Mario Pantano; Felix Wolf

One of the reasons why parallel programming is considered a difficult task is that users frequently cannot predict the performance impact of implementation decisions prior to program execution. This results in a cycle of incremental performance improvements based on run-time performance data. While gathering and analyzing performance data is supported by a large number of tools, typically interactive ones, the task of performance analysis is still too complex for users. This article illustrates this fact based on the current analysis support on the CRAY T3E. As a consequence, we are convinced that automatic analysis tools are required to identify frequently occurring and well-defined performance problems automatically. This article describes the novel design of a generic automatic performance analysis environment called KOJAK. Besides its structure, we also outline its first component, EARL, a new meta-tool designed and implemented as a programmable interface for calculating more abstract metrics from existing trace files and for locating complex patterns that describe performance problems.
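To illustrate the kind of well-defined inefficiency pattern such a programmable trace interface can locate, here is a minimal sketch in Python. It detects a "late sender" situation, where a receive blocks because the matching send is issued later. The event format (dicts with type/time/src/dst/tag) and the function are hypothetical illustrations, not EARL's actual API.

```python
# Illustrative sketch only: locate "late sender" inefficiencies in a
# message trace. An event's "time" is when the operation was issued;
# a receive posted before its matching send leaves the receiver idle.

def late_sender_wait(trace):
    """Return total time receivers spent waiting for late senders."""
    sends = {}  # (src, dst, tag) -> issue times of sends, in trace order
    for ev in trace:
        if ev["type"] == "send":
            key = (ev["src"], ev["dst"], ev["tag"])
            sends.setdefault(key, []).append(ev["time"])
    wait = 0.0
    for ev in trace:
        if ev["type"] == "recv":
            key = (ev["src"], ev["dst"], ev["tag"])
            t_send = sends[key].pop(0)       # matching send (FIFO channel)
            if t_send > ev["time"]:          # send issued after recv posted
                wait += t_send - ev["time"]  # receiver idles for this long
    return wait

trace = [
    {"type": "recv", "time": 1.0, "src": 0, "dst": 1, "tag": 7},
    {"type": "send", "time": 3.5, "src": 0, "dst": 1, "tag": 7},
]
```

Here the receive is posted at time 1.0 but the send is not issued until 3.5, so the pattern accounts 2.5 time units of waiting to this channel.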


Joint International Conference on Vector and Parallel Processing | 1994

A Comparison of Shared Virtual Memory and Message Passing Programming Techniques Based on a Finite Element Application

Rudolf Berrendorf; Michael Gerndt; Zakaria Lahjomri; Thierry Priol

This paper describes the methods used and the experience gained in implementing a finite element application on three different parallel computers with either message passing or shared virtual memory as the programming model. Designing a parallel finite element application with message passing requires finding a domain decomposition that maps the data into the processors' local memories. Since data accesses may be very irregular, communication patterns are unknown prior to parallel execution, which makes parallelization a difficult task. We argue that the use of a shared virtual memory greatly simplifies the parallelization step. It is shown experimentally on an iPSC/2 hypercube that the KOAN/Fortran-S programming environment, based on a shared virtual memory, allows a sequential application to be ported quickly and easily without a significant performance degradation compared to the message-passing version. Results for more recent parallel architectures, the Paragon XP/S for message passing and the KSR1 for shared virtual memory, are also presented.
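The point that communication patterns of an irregular code are only known at run time can be made concrete. The following Python sketch is a hypothetical illustration (not the paper's code): given an irregular element-to-node mesh and a partition of nodes across processors, the halo of off-processor nodes a message-passing version must receive can only be derived by inspecting the actual mesh.

```python
# Hypothetical illustration: derive the communication pattern of a
# message-passing finite element code from an irregular mesh. Each element
# couples its nodes; a processor must receive every node it references
# in one of its elements but does not own (its "halo").

def halo_pattern(elements, owner, me):
    """Nodes processor `me` must receive, grouped by owning processor."""
    needed = {}
    for elem in elements:
        if any(owner[n] == me for n in elem):   # element touches my partition
            for n in elem:
                if owner[n] != me:              # node owned elsewhere -> halo
                    needed.setdefault(owner[n], set()).add(n)
    return needed

# 5 nodes, irregularly connected; nodes 0-2 on proc 0, nodes 3-4 on proc 1.
elements = [(0, 1, 3), (1, 2, 4), (3, 4, 0)]
owner = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1}
```

With a shared virtual memory, no such pattern needs to be computed at all: remote nodes are simply read through the shared address space, which is exactly the simplification the paper argues for.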


GI Jahrestagung | 1997

Sprachunterstützung zur Programmierung von Multiprozessorsystemen mit Shared Virtual Memory (Language Support for Programming Multiprocessor Systems with Shared Virtual Memory)

Michael Gerndt

Besides massively parallel computers, in which processes can communicate only by exchanging messages because memory is distributed, a growing number of machines are being built that realize a shared address space on top of distributed memory, with or without hardware support. When programming such machines, the differing latencies of memory accesses to a processor's local memory and to the memory of other processors must be taken into account. SVM-Fortran is a task-parallel programming language that additionally offers language constructs for specifying the distribution of parallel work among the processes, so as to optimize each process's access behavior with respect to local memory. This article presents the SVM-Fortran language constructs supporting numerical applications with regular and unstructured grids.
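The locality idea behind such work-distribution constructs can be sketched in a few lines. The following Python fragment is an illustration of the principle only, not SVM-Fortran syntax: scheduling loop iterations onto the process that owns the corresponding data element keeps accesses local, whereas a schedule that ignores data placement forces remote accesses (and, on a shared virtual memory, page traffic).

```python
# Illustrative sketch: compare two loop schedules against a block
# distribution of an array over p processes. An iteration is "remote"
# if it runs on a process other than the owner of its data element.

def block_owner(i, n, p):
    """Owner of element i when n elements are block-distributed over p."""
    block = (n + p - 1) // p     # ceiling division: elements per block
    return i // block

def remote_fraction(n, p, assign):
    """Fraction of iterations whose data lives on another process."""
    remote = sum(1 for i in range(n) if assign(i) != block_owner(i, n, p))
    return remote / n

n, p = 16, 4
aligned = remote_fraction(n, p, lambda i: block_owner(i, n, p))  # matches data
cyclic = remote_fraction(n, p, lambda i: i % p)                  # round-robin
```

The data-aligned schedule makes every access local, while the round-robin schedule sends most iterations to a process that does not own their data; annotations that let the programmer express the aligned choice are precisely what the abstract describes.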


Euromicro Workshop on Parallel and Distributed Processing | 1996

Programming shared virtual memory multiprocessors

Michael Gerndt

Highly parallel machines, needed to solve compute-intensive scientific applications, are based on distributing physical memory across the compute nodes. The drawback of such systems is the difficult message-passing programming model, and a lot of research therefore goes into simplifying the programming model. This article investigates a task-parallel programming model implemented on top of a shared virtual address space provided by the operating system of the parallel machine.


GI Jahrestagung | 1993

Massively Parallel Computing in a Production Environment: iPSC/860 Installation at KFA Jülich

Rudolf Berrendorf; Ulrich Detert; Jutta Docter; U. Ehrhart; Michael Gerndt; Inge Gutheil; Renate Knecht

The Research Centre Jülich installed the first Intel Paragon production system in Europe. Prior to that installation, an Intel iPSC/860 system was made available last year to allow users to develop parallel programs for such an architecture. This article describes all aspects of the iPSC/860 installation, such as system access, system administration, operation, user support, and applications. KFA is cooperating with Intel on the evaluation of the Paragon software and the development of necessary tools.


Archive | 1994

Intel Paragon XP/S - Architecture, Software Environment, and Performance

Rudolf Berrendorf; Michael Gerndt; Heribert C. Burg; Renate Knecht; Rüdiger Esser; Ulrich Detert


Archive | 1995

SVM-Fortran Reference Manual, Version 1.4

Rudolf Berrendorf; Michael Gerndt


Archive | 1995

Compiling Data Parallel Languages for Shared Virtual Memory Systems

Rudolf Berrendorf; Michael Gerndt


Archive | 1995

Tool Suite for Partial Parallelization

Ulrich Detert; Michael Gerndt


Archive | 1994

Shared Virtual Memory and Message Passing Programming on a Finite Element Application

Rudolf Berrendorf; Michael Gerndt; Zakaria Lahjomri; Thierry Priol

Collaboration


Dive into Michael Gerndt's collaborations.

Top Co-Authors

Bernd Mohr

Forschungszentrum Jülich


Felix Wolf

Technische Universität Darmstadt
