
Publication


Featured research published by Jürgen Brehm.


Concurrency and Computation: Practice and Experience | 1998

Performance modeling for SPMD message-passing programs

Jürgen Brehm; Patrick H. Worley; Manish Madhukar

Today's massively parallel machines are typically message-passing systems consisting of hundreds or thousands of processors. Implementing parallel applications efficiently in this environment is a challenging task, and poor parallel design decisions can be expensive to correct. Tools and techniques that allow the fast and accurate evaluation of different parallelization strategies would significantly improve the productivity of application developers and increase throughput on parallel architectures. This paper investigates one of the major issues in building tools to compare parallelization strategies: determining what type of performance models of the application code and of the computer system are sufficient for a fast and accurate comparison of different strategies. The paper is built around a case study employing the Performance Prediction Tool (PerPreT) to predict performance of the Parallel Spectral Transform Shallow Water Model code (PSTSWM) on the Intel Paragon. PSTSWM is a parallel application code that was designed to evaluate different parallel strategies for the spectral transform method as it is used in climate modeling and weather forecasting. Multiple parallel algorithms and algorithm variants are embedded in the code. PerPreT uses a relatively simple algebraic model to predict execution time for SPMD (single program multiple data) parallel applications. Applications are modeled through parameterized formulae for communication and computation, where the parameters include the problem size, the number of processors used to execute the program, and system characteristics (e.g. setup times for communication, link bandwidth and sustained computing performance per processor). In this paper we describe performance models that predict the performance of the different algorithms in PSTSWM accurately enough to allow them to be compared, establishing the feasibility of such a demanding application of performance modeling. We also discuss issues in generating and validating the performance models, emphasizing the practical importance of tools such as PerPreT in such studies.


MMB '95 Proceedings of the 8th International Conference on Modelling Techniques and Tools for Computer Performance Evaluation: Quantitative Evaluation of Computing and Communication Systems | 1995

PerPreT - A Performance Prediction Tool for Massively Parallel Systems

Jürgen Brehm; Manish Madhukar; Evgenia Smirni; Lawrence W. Dowdy

Today's massively parallel machines are typically message-passing systems consisting of hundreds or thousands of processors. Implementing parallel applications efficiently in this environment is a challenging task. The Performance Prediction Tool (PerPreT) presented in this paper is useful for system designers and application developers. System designers can use the tool to examine the effects of changes of architectural parameters on parallel applications (e.g., reduction of setup time, increase of link bandwidth, faster execution units). Application developers are interested in a fast evaluation of different parallelization strategies for their codes. PerPreT uses a relatively simple analytical model to predict speedup, execution time, computation time, and communication time for a parameterized application. Especially for large numbers of processors, PerPreT's analytical model is preferable to traditional models (e.g., Markov-based approaches such as queueing and Petri net models). The applications are modelled through parameterized formulae for communication and computation. The parameters used by PerPreT include the problem size and the number of processors used to execute the program. The target systems are described by architectural parameters (e.g., setup times for communication, link bandwidth, and sustained computing performance per node).
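The flavour of such an analytical model can be shown with a small sketch. Everything below (the parameter names and the linear cost formulae) is an illustrative assumption, not PerPreT's actual model of any application:

```python
# Sketch of a PerPreT-style analytical model for an SPMD application.
# All parameter names and the linear cost formulae are illustrative
# assumptions, not PerPreT's actual model.

def predicted_time(n, p, flops_per_point, sustained_flops,
                   msgs, bytes_per_msg, setup, bandwidth):
    """Predicted execution time = computation time + communication time."""
    t_comp = (n * flops_per_point) / (p * sustained_flops)  # work shared by p nodes
    t_comm = msgs * (setup + bytes_per_msg / bandwidth)     # per-message cost model
    return t_comp + t_comm

def predicted_speedup(n, p, **params):
    """Speedup over the communication-free single-processor run."""
    t1 = (n * params["flops_per_point"]) / params["sustained_flops"]
    return t1 / predicted_time(n, p, **params)

# Example: 10^6 grid points on 64 nodes sustaining 100 MFLOP/s each.
t64 = predicted_time(n=1_000_000, p=64, flops_per_point=50,
                     sustained_flops=1e8, msgs=64, bytes_per_msg=8_000,
                     setup=1e-4, bandwidth=1e8)
```

With closed-form expressions like these, sweeping over the processor count or problem size to compare parallelization strategies costs essentially nothing, which is what makes such models attractive compared to simulation or Markov-based approaches for large numbers of processors.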


International Conference on Supercomputing | 1989

Parallelizing algorithms for MIMD architectures with shared memory

Jürgen Brehm; Harry F. Jordan

The solution of a system of linear equations Ax = b is an important application in scientific computation. It arises in the numerical solution of self-adjoint problems using finite difference or finite element methods for discretization. For realistic problems the coefficient matrix A is sparse most of the time, i.e. a large number of its elements are zero. The three commonly used classes of algorithms to solve the linear system of equations are direct methods, semi-iterative methods and iterative methods, which differ in their numerical properties and their computer implementations. The multiprocessor LU decomposition for sparse systems was implemented by Gita Alaghband [Git85] as an example of a parallelized direct method. In this paper we present examples of efficient parallelizations of iterative and semi-iterative algorithms and their implementations on various MIMD shared memory architectures. The iterative methods generate a sequence of approximate solutions which converge to the exact solution. To improve the rate of convergence, the multigrid approach [Hac80] is used as an accelerator. Semi-iterative methods operate in an iterative manner but have the property of finite termination in exact arithmetic. The Conjugate Gradient methods shown in this paper are among the most popular representatives of this class of algorithms. They are usually not as robust as direct methods, but they have computational advantages over methods that require the factorization of the coefficient matrix A. First we present FORCE [Jor87] as an example of a portable parallel language; then, in section three, the implementation of the algorithms on three MIMD shared memory architectures is described; finally, runtime measurements and speedup calculations are carried out in section four.
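For reference, the serial skeleton of the Conjugate Gradient iteration discussed above can be sketched as follows (the sparse row storage and all names are ours, chosen for illustration; the paper's versions are parallelized on MIMD shared-memory machines):

```python
# Serial sketch of the Conjugate Gradient method for a symmetric
# positive-definite sparse system Ax = b. Storage scheme and names are
# illustrative; the paper parallelizes this loop structure.

def matvec(A, x):
    """A is stored as a list of rows, each row a list of (col, value) pairs."""
    return [sum(v * x[j] for j, v in row) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def conjugate_gradient(A, b, tol=1e-12, max_iter=1000):
    n = len(b)
    x = [0.0] * n
    r = b[:]                     # residual b - A x for the zero initial guess
    p = r[:]                     # first search direction
    rs_old = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rs_old / dot(p, Ap)                  # step length along p
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol * tol:
            break
        p = [ri + (rs_new / rs_old) * pi for ri, pi in zip(r, p)]
        rs_old = rs_new
    return x

# Small 1-D Laplacian test system; its exact solution is x = (1, 1, 1).
A = [[(0, 2.0), (1, -1.0)],
     [(0, -1.0), (1, 2.0), (2, -1.0)],
     [(1, -1.0), (2, 2.0)]]
x = conjugate_gradient(A, [1.0, 0.0, 1.0])
```

The sparse matrix-vector product and the two dot products are the natural units of work to distribute across processors; organizing the per-iteration synchronization at the dot products is the core of a shared-memory implementation.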


Intelligent Networking and Collaborative Systems | 2009

Social Educational Games Based on Open Content

Monika Steinberg; Jürgen Brehm

Open Content represents a huge knowledge base in nearly every domain of the currently arising Social Semantic Web [1]. However, weaving these precious resources sustainably into agile knowledge transfer and learning has received little attention so far. The focus of this contribution lies on utilizing available, distributed data sets on the web for vast interactive knowledge transfer and learning. Social educational gaming is used to simplify interoperation, to spread knowledge in a handy way and to enhance users' collaboration with Open Content. The learning aims are to broaden the users' perspective on available information and resources. Especially fact-related knowledge can be assessed effectively this way. Participation increases through game-based incentive arrangements such as point and level concepts. Knowledge games allow a broad audience to engage with everyday knowledge topics. The long-term objective is to establish Open Content as a knowledge base in educational contexts, enabled by a proposal of suitable web service compositions and a modular construction kit for user-centered interfaces with high standards of functionality and design. Overall, we promote Open Content as an inherent part of higher-layered, lightweight applications in knowledge and information transfer via standard tasks of knowledge engineering and augmented user interaction. Knowledge management concerns interweave with social gaming and collaboration.


International Parallel Processing Symposium | 1997

Performance prediction for complex parallel applications

Jürgen Brehm; Patrick H. Worley

Today's massively parallel machines are typically message-passing systems consisting of hundreds or thousands of processors. Implementing parallel applications efficiently in this environment is a challenging task, and poor parallel design decisions can be expensive to correct. Tools and techniques that allow the fast and accurate evaluation of different parallelization strategies would significantly improve the productivity of application developers and increase throughput on parallel architectures. This paper investigates one of the major issues in building tools to compare parallelization strategies: determining what type of performance models of the application code and of the computer system are sufficient for a fast and accurate comparison of different strategies. The paper is built around a case study employing the Performance Prediction Tool (PerPreT) to predict performance of the Parallel Spectral Transform Shallow Water Model code (PSTSWM) on the Intel Paragon.


International Conference on Supercomputing | 1991

Parallel conjugate gradient algorithms for solving the Neutron Diffusion Equation on SUPRENUM

Ansgar Böhm; Jürgen Brehm; H. Finnemann

In this paper we present an implementation of a parallelized sparse matrix algorithm for solving the Neutron Diffusion Equation on the SUPRENUM multiprocessor system. The solution of the steady-state and transient Neutron Diffusion Equation is one of the major tasks in reactor physics. We used standard and preconditioned Conjugate Gradient Methods, which are well suited for parallelization and vectorization on multiprocessor architectures. All presented algorithms were implemented on the 2-cluster SUPRENUM at the University of Erlangen-Nuremberg.


Parallel Computing | 1988

Solution of the neutron diffusion equation through multigrid methods implemented on a memory-coupled 25-processor system

H. Finnemann; Jürgen Brehm; E. Michel; Jens Volkert

For the complex task of attaining maximum utilization and maximum safety in the operation of nuclear reactors, the physical behaviour is simulated by means of mathematical models. The stationary solution of the Neutron Diffusion Equation as an eigenvalue problem, presented in this paper, results in a large non-linear partial differential equation (PDE) system whose solution is parallelized and implemented on the DIRMU multiprocessor system. We demonstrate that modern algorithms (e.g. multigrid methods) combined with innovative hardware (e.g. vector-multiprocessor systems) can lead to powerful tools which are needed for real-time simulation of physical events. This approach also makes it possible to solve problems which arise in fluid dynamics, aerodynamics, weather forecasting, etc. in a reasonable amount of time.


IDCS 2015 Proceedings of the 8th International Conference on Internet and Distributed Computing Systems - Volume 9258 | 2015

Task Execution in Distributed Smart Systems

Uwe Jänen; Carsten Grenz; Sarah Edenhofer; Anthony Stein; Jürgen Brehm; Jörg Hähner

This paper presents a holistic approach to execute tasks in distributed smart systems. This is shown by the example of monitoring tasks in smart camera networks. The proposed approach is general and thus not limited to a specific scenario. A job-resource model is introduced to describe the smart system and the tasks, with as much order as necessary and as few rules as possible. Based on that model, a local algorithm is presented, which is developed to achieve optimization transparency. This means that the optimization on system-wide criteria will not be visible to the participants. To a task, the system-wide optimization is a virtual local single-step optimization. The algorithm is based on proactive quotation broadcasting to the local neighborhood. Additionally, it allows the parallel execution of tasks on resources and includes the optimization of multiple-task-to-resource assignments.
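The quotation mechanism can be made concrete with a tiny sketch: resources proactively broadcast cost quotations, and a task makes a purely local choice among them, so a system-wide assignment decision appears to the task as a single local step. All class names and the cost function below are invented for illustration; the paper's job-resource model is far more general:

```python
# Minimal illustration of quotation-based task assignment. Each resource
# broadcasts a quotation (its cost to take on a task); the task then makes a
# purely local choice among received quotations. Names and the cost function
# are invented for this sketch.

class Resource:
    def __init__(self, name, load, capability):
        self.name = name
        self.load = load                # current utilisation in [0, 1]
        self.capability = capability    # fitness for the task type in (0, 1]

    def quotation(self):
        """Proactively broadcast cost: busy or ill-suited resources quote high."""
        return (self.name, self.load / self.capability)

def assign(task, quotations):
    """Local single-step decision: accept the cheapest quotation."""
    name, _cost = min(quotations, key=lambda q: q[1])
    return (task, name)

resources = [Resource("cam-1", load=0.8, capability=0.9),
             Resource("cam-2", load=0.2, capability=0.5),
             Resource("cam-3", load=0.3, capability=0.9)]
quotes = [r.quotation() for r in resources]
assignment = assign("track-object-7", quotes)   # cam-3 quotes lowest here
```

Because the quotations arrive before the task needs them, the decision itself is a single local minimum over cached values, which is the "optimization transparency" idea described above.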


Cooperative Information Systems | 2003

An InfoSpace Paradigm for Local and Ad Hoc Peer-to-Peer Communication

Jürgen Brehm; George Brancovici; Christian Müller-Schloer; Tarek Smaoui; S. Voigt; R. Welge

A key feature of ubiquitous handheld devices will be their simple usability in mobile and highly dynamic ad-hoc peer-to-peer environments. With Linda, JavaSpaces and TSpaces, shared-memory-based communication concepts have been realized at the system level with corresponding APIs. This article proposes to use the same simple communication paradigm on the user interface level as well. In this paper we examine existing basic technologies and analyze typical application scenarios and the underlying communication patterns. We discuss in detail how these patterns can be realized with the InfoSpace mechanism, which offers each user a private local space and a view onto a common space shared between all participants. Finally we show an architecture which realizes the InfoSpace infrastructure and report on first experiences with a Jxta-based solution.
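The shared-space paradigm behind Linda, JavaSpaces and TSpaces can be sketched in a few lines (a minimal in-memory toy; the class and method names are ours, not the InfoSpace API):

```python
# Minimal in-memory sketch of a Linda-style tuple space: write adds a tuple,
# read matches non-destructively, take removes the matched tuple. Names are
# illustrative, not the InfoSpace API.

class TupleSpace:
    def __init__(self):
        self._tuples = []

    def write(self, *tup):
        self._tuples.append(tup)

    def _match(self, tup, pattern):
        """None in a pattern position acts as a wildcard."""
        return len(tup) == len(pattern) and all(
            p is None or p == v for p, v in zip(pattern, tup))

    def read(self, *pattern):
        """Non-destructive read of the first matching tuple."""
        for tup in self._tuples:
            if self._match(tup, pattern):
                return tup
        return None

    def take(self, *pattern):
        """Destructive read (Linda's `in`): remove and return the match."""
        for i, tup in enumerate(self._tuples):
            if self._match(tup, pattern):
                return self._tuples.pop(i)
        return None

space = TupleSpace()
space.write("sensor", "room-42", 21.5)
first = space.read("sensor", "room-42", None)   # matches the tuple just written
```

Decoupling producers and consumers through such a space, rather than through direct connections, is what makes the paradigm attractive for the highly dynamic peer-to-peer settings the paper targets.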


Joint International Conference on Vector and Parallel Processing | 1990

Sparse Matrix Algorithms for SUPRENUM

Jürgen Brehm; Ansgar Böhm; Jens Volkert

In this talk we present the SUPRENUM multiprocessor system and some implementations of parallelized sparse matrix algorithms. The SUPRENUM multiprocessor system was first delivered late in 1989. It is the result of a research project in which German research institutes, universities and industrial companies worked together to build a 256-processor distributed-memory machine. In parallel with the construction of SUPRENUM, considerable time and manpower were invested in software support for the project. As an important application in scientific computation we parallelized the solution of systems of linear equations Ax = b. For realistic problems the large coefficient matrix A is sparse most of the time, i.e. a large number of its entries are zero. We show how direct algorithms based on Gauss elimination and semi-iterative algorithms (Conjugate Gradient Methods) can be implemented on SUPRENUM. Especially the Conjugate Gradient Methods, which are very well suited for parallelization and vectorization, proved to be very efficient on multiprocessor architectures.

Collaboration


Jürgen Brehm's top co-authors, with affiliations:

Ansgar Böhm (University of Erlangen-Nuremberg)
Patrick H. Worley (Oak Ridge National Laboratory)
E. Michel (University of Erlangen-Nuremberg)
Jens Volkert (Johannes Kepler University of Linz)