Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jörn Gehring is active.

Publication


Featured research published by Jörn Gehring.


Future Generation Computer Systems | 1996

MARS—a framework for minimizing the job execution time in a metacomputing environment

Jörn Gehring; Alexander Reinefeld

Utilizing a collection of workstations and supercomputers in a metacomputing environment not only offers an enormous amount of computing power, but also raises new problems. The true potential of WAN-based distributed computing can only be exploited if the application-to-architecture mapping reflects the different processor speeds, network performances, and the application's communication characteristics. In this paper, we present the Metacomputer Adaptive Runtime System (MARS), a framework for minimizing the execution time of distributed applications on a WAN metacomputer. Workload balancing and task migration are based on dynamic information about processor load and network performance. Moreover, MARS uses accumulated statistical data on previous execution runs of the same application to derive an improved task-to-process mapping. Migration decisions are based on: (1) the current system load; (2) the network load; and (3) previously obtained application-specific characteristics. Our current implementation supports C applications with MPI message-passing calls, but the general framework is also applicable to other programming environments such as PVM, PARMACS, and Express.
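
As a rough illustration of that three-part decision rule, the sketch below combines current system load, network load, and application history into a single migration score. The HostState fields, the weighting, and the threshold are assumptions made for this example, not the actual MARS policy.

```python
from dataclasses import dataclass

@dataclass
class HostState:
    cpu_load: float         # current system load, 0.0 (idle) .. 1.0 (saturated)
    net_load: float         # current utilisation of the host's network links
    past_comm_ratio: float  # communication/computation ratio observed in
                            # previous runs of this application (illustrative)

def migration_gain(src: HostState, dst: HostState) -> float:
    """Combine the three inputs named in the abstract: current system load,
    network load, and application-specific history. Weights are invented."""
    load_gain = src.cpu_load - dst.cpu_load
    # Penalise moving communication-heavy tasks towards congested links.
    net_penalty = dst.net_load * src.past_comm_ratio
    return load_gain - net_penalty

def should_migrate(src: HostState, dst: HostState, threshold: float = 0.2) -> bool:
    # Migrate only if the estimated gain clearly exceeds the cost of moving.
    return migration_gain(src, dst) > threshold
```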


Job Scheduling Strategies for Parallel Processing | 1999

Scheduling a Metacomputer with Uncooperative Sub-schedulers

Jörn Gehring; Thomas Preiss

The main advantage of a metacomputer is not its peak performance but the better utilization of its machines. Therefore, efficient scheduling strategies are vitally important to any metacomputing project. A real metacomputer management system will not gain exclusive access to all its resources, because participating centers are not willing to give up autonomy. As a consequence, the scheduling algorithm has to deal with a set of local sub-schedulers performing individual machine management. Based on the proposal made by Feitelson and Rudolph in 1998, we developed a scheduling model that takes these circumstances into account. It has been implemented as a generic simulation environment, which we make available to the public. Using this tool, we examined the behavior of several well-known scheduling algorithms in a metacomputing scenario. The results demonstrate that interaction with the sub-schedulers, communication of parallel applications, and the sheer size of the metacomputer are among the most important aspects of scheduling a metacomputer. Based on these observations, we developed a new technique that makes it possible to use scheduling algorithms developed for less realistic machine models in real-world metacomputing projects. Simulation runs demonstrate that this technique leads to far better results than the algorithms currently used in metacomputer management systems.
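
As a hedged sketch of what scheduling against autonomous sub-schedulers can look like, the example below dispatches jobs using only coarse, observable state (a queue-length estimate) rather than control over local decisions. The SubScheduler interface and the shortest-queue policy are assumptions for illustration, not the algorithms evaluated in the paper.

```python
import heapq
from typing import Protocol

class SubScheduler(Protocol):
    """All the meta-level can do: submit a job and query coarse state.
    The local dispatch order stays under the centre's own control."""
    def submit(self, job_id: str, width: int) -> None: ...
    def queue_length(self) -> int: ...

def dispatch(jobs: list[tuple[str, int]],
             sites: dict[str, SubScheduler]) -> dict[str, str]:
    """Send each (job_id, width) request to the site with the shortest
    queue, using only observable state rather than control over the
    autonomous sub-schedulers."""
    placement: dict[str, str] = {}
    # Min-heap of (estimated backlog, site name) built from live queries.
    heap = [(s.queue_length(), name) for name, s in sites.items()]
    heapq.heapify(heap)
    for job_id, width in jobs:
        backlog, name = heapq.heappop(heap)
        sites[name].submit(job_id, width)
        placement[job_id] = name
        heapq.heappush(heap, (backlog + 1, name))
    return placement
```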


High Performance Distributed Computing | 2000

Robust resource management for metacomputers

Jörn Gehring; Achim Streit

Presents a robust software infrastructure for metacomputing. The system is intended to be used by others as a building block for large and powerful computational grids. Much effort has been invested in developing a fault-tolerant architecture that does not exhibit a single point of failure. Furthermore, we have designed the system to be modular, lean, and portable. It is available as open source code and has been successfully compiled on POSIX- and Microsoft Windows-compliant platforms. The system does not originate from a laboratory environment but has proven its robustness within two large metacomputing installations. It embodies a modular concept that allows easy integration of new or modified components. Hence, it is not necessary to buy into the system as a whole; instead, we encourage others to use only those components that fit their specific environments.
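
The toy sketch below illustrates one way the "no single point of failure" idea can be expressed in code: each service role is backed by several interchangeable instances (modeled here as plain callables), and callers fail over among the survivors. All names are hypothetical; this is not the system's actual architecture.

```python
import random

class ComponentPool:
    """Every service role is backed by several interchangeable instances;
    a caller retries against the survivors when one instance fails, so
    no role depends on a single process."""

    def __init__(self) -> None:
        self._instances: dict[str, list] = {}

    def register(self, role: str, instance) -> None:
        # Several independent instances may register for the same role.
        self._instances.setdefault(role, []).append(instance)

    def call(self, role: str, request):
        candidates = list(self._instances.get(role, []))
        random.shuffle(candidates)  # spread load; no fixed primary
        for instance in candidates:
            try:
                return instance(request)
            except ConnectionError:
                continue  # this replica is down; try the next one
        raise RuntimeError(f"all instances of {role!r} are unavailable")
```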


IEEE International Conference on High Performance Computing, Data, and Analytics | 1999

Dynamite - Blasting Obstacles to Parallel Cluster Computing

G. Dick van Albada; J. Clinckmaillie; A. H. L. Emmen; Jörn Gehring; Oliver Heinz; Frank van der Linden; Benno J. Overeinder; Alexander Reinefeld; Peter M. A. Sloot

Workstations make up a very large fraction of the total available computing capacity in many organisations. In order to use this capacity optimally, dynamic allocation of computing resources is needed. The Esprit project Dynamite addresses this load balancing problem through the migration of tasks in a dynamically linked parallel program. An important goal of the project is to accomplish this in a manner that is transparent both to the application programmer and to the user. As a test bed, the Pam-Crash software from ESI is used.


Job Scheduling Strategies for Parallel Processing | 1996

Architecture-Independent Request-Scheduling with Tight Waiting-Time Estimations

Jörn Gehring; Friedhelm Ramme

Over the course of the last few years, users' interaction with parallel computer systems has changed. A continuous growth in the number of interactive HPC applications can be observed. When considering partitionable MPP systems with exclusive usage of the physically separated regions, issues like the average waiting time become more important to users than the total system throughput.
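
A simplified illustration of a waiting-time estimate on such a partitionable machine is sketched below. It assumes the finish times of running jobs are known (e.g. from user-supplied run-time limits) and is not the estimation scheme from the paper.

```python
def estimate_wait(request_size: int, total_procs: int,
                  running: list[tuple[float, int]]) -> float:
    """Return the earliest time at which `request_size` processors are
    free, given (finish_time, procs) pairs for jobs holding exclusive
    partitions."""
    free = total_procs - sum(procs for _, procs in running)
    if free >= request_size:
        return 0.0  # the request can start immediately
    for finish, procs in sorted(running):
        free += procs  # this partition is released at `finish`
        if free >= request_size:
            return finish
    raise ValueError("request exceeds machine size")
```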


International Parallel and Distributed Processing Symposium | 1994

Sorting large data sets on a massively parallel system

Ralf Diekmann; Jörn Gehring; Reinhard Lüling; Burkhard Monien; Markus Nubel; Rolf Wanka

This paper presents a performance study of many of today's popular parallel sorting algorithms. It is the first to present a comparative study on a large-scale MIMD system. The machine, a Parsytec GCel, contains 1024 processors connected as a two-dimensional grid. To justify the experimental results, we develop a theoretical model that predicts performance in terms of communication and computation times. We find a very close correspondence between the experiments and the theoretical model as long as the edge congestion caused by the algorithms is predicted precisely. We compare Bitonicsort, Shearsort, Gridsort, Samplesort, and Radixsort. Experiments were performed using random instances according to a well-known benchmark problem. Results show that, for the machine we used, Bitonicsort performs best for smaller numbers of keys per processor (<2048) and Samplesort outperforms all other methods for larger instances.
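
Since Samplesort wins for large instances, here is a small sequential model of the samplesort pattern: draw a random sample, choose splitters, partition the keys into buckets, and sort each bucket locally. The oversampling factor is an illustrative choice; on the Parsytec grid, each bucket would be sorted by its own processor after redistribution.

```python
import bisect
import random

def samplesort(keys: list[int], n_buckets: int, oversample: int = 8) -> list[int]:
    """Sample keys, pick splitters, partition into buckets, sort locally.
    On the grid machine each bucket belongs to one processor; here the
    buckets are simply sorted in turn."""
    sample = sorted(random.sample(keys, min(len(keys), n_buckets * oversample)))
    # Every oversample-th sample element becomes a splitter.
    splitters = sample[oversample::oversample][:n_buckets - 1]
    buckets: list[list[int]] = [[] for _ in range(n_buckets)]
    for k in keys:
        buckets[bisect.bisect_right(splitters, k)].append(k)
    return [k for bucket in buckets for k in sorted(bucket)]
```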


Archive | 1998

RSD — Resource and Service Description

Matthias Brune; Jörn Gehring; Axel Keller; Alexander Reinefeld

RSD (Resource and Service Description) is a scheme for specifying resources and services in complex heterogeneous computing systems and metacomputing environments. At the system administrator level, RSD is used to specify the available system components, such as the number of nodes, their interconnection topology, CPU speeds, and available software packages. At the user level, a GUI provides a comfortable, high-level interface for specifying system requests. A textual editor can be used for defining repetitive and recursive structures. This gives service providers the necessary flexibility for fine-grained specification of system topologies, interconnection networks, and system- and software-dependent properties. All these representations are mapped onto a single, coherent internal object-oriented resource representation.
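
As a hypothetical illustration of such a recursive description, the sketch below models every resource as a named node with attributes and nested sub-resources, so a whole centre and a single node are described by the same type. The attribute names are invented and do not follow the actual RSD grammar.

```python
from dataclasses import dataclass, field

@dataclass
class Resource:
    """A named node with attributes and nested sub-resources; the same
    recursive type covers a whole centre and a single CPU."""
    name: str
    attributes: dict = field(default_factory=dict)
    children: list["Resource"] = field(default_factory=list)

# An administrator-level description of a small cluster; the attribute
# names are illustrative only.
cluster = Resource(
    name="example-cluster",
    attributes={"topology": "2d-grid", "nodes": 64},
    children=[
        Resource(f"node{i:02d}", {"cpu_mhz": 300, "memory_mb": 512})
        for i in range(64)
    ],
)
```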


IEEE International Conference on High Performance Computing, Data, and Analytics | 1997

A Lightweight Communication Interface for Parallel Programming Environments

Matthias Brune; Jörn Gehring; Alexander Reinefeld

We present a small, extensible software interface for communication between different parallel programming models. With only four new commands, our PLUS communication interface can be easily integrated into existing parallel codes, allowing tasks to communicate transparently from, e.g., PVM to MPI and PARIX, or any other parallel programming model.
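
To give a feel for how small a four-command interface can be, here is an in-process mock of such a bridge. The method names (plus_init, plus_send, plus_recv, plus_exit) are invented for this sketch and should not be taken as the documented PLUS commands.

```python
import queue

class PlusBridgeMock:
    """In-process stand-in for a four-call bridge between message-passing
    environments. The method names are hypothetical."""

    def __init__(self) -> None:
        self._boxes: dict[int, queue.Queue] = {}
        self._next_id = 0

    def plus_init(self) -> int:
        """Register the calling task and hand out a global task id."""
        tid, self._next_id = self._next_id, self._next_id + 1
        self._boxes[tid] = queue.Queue()
        return tid

    def plus_send(self, dst: int, payload: bytes) -> None:
        """Deliver a message to a task, regardless of which environment
        (PVM, MPI, PARIX, ...) that task natively uses; the real bridge
        converts message formats in between."""
        self._boxes[dst].put(payload)

    def plus_recv(self, tid: int) -> bytes:
        """Block until the next message for `tid` arrives."""
        return self._boxes[tid].get()

    def plus_exit(self, tid: int) -> None:
        """Deregister the task and drop its mailbox."""
        del self._boxes[tid]
```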


Journal of Systems Architecture | 1997

Communicating across parallel message-passing environments

Alexander Reinefeld; Jörn Gehring; Matthias Brune

We present a small, extensible interface for the transparent communication between vendor-specific and standard message-passing environments. With only four new commands, existing parallel applications can make use of our PLUS communication interface, thereby allowing inter-process communication with other programming environments. Much effort has been spent optimizing the communication speed across Internet and intranet links. Our current implementation supports process communication between PVM, MPI, and PARIX. With only marginal additional effort, the interface can be adapted to support other message-passing environments as well.


Grid Computing | 2000

Experiments with Migration of Message-Passing Tasks

Kamil Iskra; Z.W. Hendrikse; G. Dick van Albada; Benno J. Overeinder; Peter M. A. Sloot; Jörn Gehring

The combined computing capacity of the workstations present in many organisations nowadays is often under-utilised, as the performance of parallel programs is unpredictable. Load balancing through dynamic task re-allocation can help to obtain more reliable performance. The Esprit project Dynamite provides such an automated load-balancing system. It can migrate tasks that are part of a parallel program using a message-passing library. Currently, Dynamite supports only PVM, but it is being extended to support MPI as well. The Dynamite package is completely transparent, i.e. neither system (kernel) code nor application source code needs to be modified. Dynamite supports migration of tasks using dynamically linked libraries, open files, and both direct and indirect PVM communication. Monitors and a scheduler are included. In this paper, we first briefly describe the Dynamite system. Next, we describe how migration decisions are made and report on some performance measurements.
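
The sketch below models one monitor/scheduler round in the spirit of this description: compare host loads and propose task migrations when the spread is large. The imbalance threshold and the choice of which task to move are illustrative, not Dynamite's actual policy.

```python
def balance_step(loads: dict[str, float],
                 tasks: dict[str, str],
                 imbalance: float = 0.5) -> list[tuple[str, str, str]]:
    """One monitor/scheduler round: compare host loads and propose
    (task, from_host, to_host) migrations when the spread is large.
    Deliberately naive; the real system also weighs migration cost
    (checkpoint size, open files, communication patterns)."""
    moves = []
    busiest = max(loads, key=loads.get)
    idlest = min(loads, key=loads.get)
    if loads[busiest] - loads[idlest] > imbalance:
        for task, host in tasks.items():
            if host == busiest:
                moves.append((task, busiest, idlest))
                break  # move one task per round, then re-measure
    return moves
```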

Collaboration


Dive into Jörn Gehring's collaborations.

Top Co-Authors

Axel Keller

University of Paderborn

Achim Streit

Karlsruhe Institute of Technology

Peter M. A. Sloot

Nanyang Technological University
