Welf Löwe
Karlsruhe Institute of Technology
Publications
Featured research published by Welf Löwe.
workshop on program comprehension | 2003
Dirk Heuzeroth; Thomas Holl; Gustav Högström; Welf Löwe
We detect design patterns in legacy code combining static and dynamic analyses. The analyses do not depend on coding or naming conventions. We classify potential pattern instances according to the evidence our analyses provide. We discuss our approach for the observer, composite, mediator, chain of responsibility and visitor patterns. Our tool analyzes Java programs. We evaluate the approach by applying the tool to itself and to the Java SwingSet example, which uses the Swing library.
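The combination of static structure checks and dynamic traces is the core of the approach. As a rough illustration of the static half only, the sketch below classifies a class as an observer-pattern (subject) candidate if it offers add/remove methods for the same listener interface plus a notification method; the heuristic, class names and thresholds are my own illustration, not the authors' tool, which additionally weighs dynamic call traces as evidence.

```java
import java.lang.reflect.Method;
import java.util.*;

// Heuristic, purely structural check for Observer-pattern candidates:
// a class offering add/remove methods for the same listener interface
// plus a notification method. The heuristic and names are illustrative.
public class ObserverCandidateCheck {

    // toy listener interface and subject, only for the demo below
    interface TemperatureListener { void changed(double value); }

    static class Thermometer {
        private final List<TemperatureListener> listeners = new ArrayList<>();
        public void addListener(TemperatureListener l)    { listeners.add(l); }
        public void removeListener(TemperatureListener l) { listeners.remove(l); }
        public void notifyListeners(double v)             { listeners.forEach(l -> l.changed(v)); }
    }

    public static boolean looksLikeSubject(Class<?> cls) {
        Set<Class<?>> added = new HashSet<>(), removed = new HashSet<>();
        boolean hasNotify = false;
        for (Method m : cls.getMethods()) {
            if (m.getDeclaringClass() == Object.class) continue;   // skip Object.notify() etc.
            String n = m.getName().toLowerCase();
            Class<?>[] p = m.getParameterTypes();
            if (p.length == 1 && p[0].isInterface()) {
                if (n.startsWith("add") || n.startsWith("register"))      added.add(p[0]);
                if (n.startsWith("remove") || n.startsWith("unregister")) removed.add(p[0]);
            }
            if (n.startsWith("notify") || n.startsWith("fire")) hasNotify = true;
        }
        added.retainAll(removed);   // the same listener type must be addable and removable
        return !added.isEmpty() && hasNotify;
    }

    public static void main(String[] args) {
        System.out.println(looksLikeSubject(Thermometer.class));  // true
        System.out.println(looksLikeSubject(String.class));       // false
    }
}
```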
Annals of Software Engineering | 2002
Welf Löwe; Markus L. Noga; Thilo S. Gaul
Communication with XML often involves pre-agreed document types. In this paper, we propose an offline parser generation approach to enhance online processing performance for documents conforming to a given DTD. Our examination of DTDs and the languages they define demonstrates the existence of ambiguities. We present an algorithm that maps DTDs to deterministic context-free grammars defining the same languages. We prove the grammars to be LL(1) and LALR(1), making them suitable for standard parser generators. Our experiments show the superior performance of generated optimized parsers. Our results generalize from DTDs to XML schema specifications with certain restrictions, most notably the absence of namespaces, which exceed the scope of context-free grammars.
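To give a flavour of offline parser generation from a document type, here is a hand-written version of the recursive-descent parser a generator might emit for the hypothetical DTD <!ELEMENT note (to, from, body*)> with #PCDATA leaves. The tokenizer and element names are assumptions, and attributes, entities and namespaces are ignored, as in the paper's DTD setting.

```java
import java.util.*;
import java.util.regex.*;

// Hand-written illustration of the LL(1) parser an offline generator might emit
// for the hypothetical DTD  <!ELEMENT note (to, from, body*)>  with text leaves.
public class NoteParser {
    private final List<String> tokens = new ArrayList<>();
    private int pos = 0;

    NoteParser(String xml) {
        Matcher m = Pattern.compile("</?\\w+>|[^<]+").matcher(xml);
        while (m.find()) {
            String t = m.group().trim();
            if (!t.isEmpty()) tokens.add(t);
        }
    }

    private String peek() { return pos < tokens.size() ? tokens.get(pos) : "$"; }
    private void expect(String tok) {
        if (!peek().equals(tok)) throw new IllegalStateException("expected " + tok + " but saw " + peek());
        pos++;
    }

    // note ::= <note> to from body* </note>   -- each rule is chosen by one lookahead token
    void note() { expect("<note>"); to(); from(); while (peek().equals("<body>")) body(); expect("</note>"); }
    void to()   { expect("<to>");   text(); expect("</to>"); }
    void from() { expect("<from>"); text(); expect("</from>"); }
    void body() { expect("<body>"); text(); expect("</body>"); }
    void text() { if (!peek().startsWith("<")) pos++; }   // optional character data

    public static void main(String[] args) {
        new NoteParser("<note><to>Ada</to><from>Bob</from><body>hi</body></note>").note();
        System.out.println("document conforms to the DTD");
    }
}
```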
joint international conference on vector and parallel processing parallel processing | 1994
Wolf Zimmermann; Welf Löwe
Currently, many parallel algorithms are defined for shared-memory architectures. The preferred machine model for designing these algorithms is the PRAM. However, this model does not take into account properties of existing architectures. Recently, Culler et al. defined the LogP machine model, which better reflects the behaviour of massively parallel computers. We discuss an important class of programs for shared-memory architectures and show how they can be mapped to the LogP machine. We define this class and show how to compute the mapping at compile time. For this mapping, a constant-factor delay with respect to the optimal LogP execution time can be guaranteed.
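One concrete ingredient of such a mapping is reasoning about LogP costs. As a minimal sketch (not the paper's construction), the following simulates the standard greedy broadcast under LogP, the kind of primitive needed to realize a PRAM-style one-to-all read on a distributed machine; the parameter values are made up.

```java
import java.util.*;

// Greedy single-item broadcast under the LogP model: every processor that has
// received the datum keeps forwarding it; a message handed to the network at
// time t is available at the receiver at t + o + L + o, and a sender may issue
// a new message every max(o, g) cycles. Parameter values are made up.
public class LogPBroadcast {

    static int broadcastTime(int P, int L, int o, int g) {
        int step = Math.max(o, g);                       // interval between successive sends
        PriorityQueue<Integer> nextSend = new PriorityQueue<>();
        nextSend.add(0);                                 // the root can send at time 0
        int informed = 1, finish = 0;
        while (informed < P) {
            int sendAt = nextSend.poll();                // earliest possible send
            int recvDone = sendAt + o + L + o;           // overhead + latency + overhead
            finish = Math.max(finish, recvDone);
            informed++;
            nextSend.add(sendAt + step);                 // sender transmits again after the gap
            nextSend.add(recvDone);                      // the newly informed processor joins in
        }
        return finish;
    }

    public static void main(String[] args) {
        // e.g. P = 8 processors, L = 6, o = 2, g = 4
        System.out.println(broadcastTime(8, 6, 2, 4));
    }
}
```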
Proceedings. Advances in Parallel and Distributed Computing | 1997
Jörn Eisenbiegler; Welf Löwe; Andreas Wehrenpfennig
We present a strategy for optimizing parallel algorithms by introducing redundant computations. In order to calculate the optimal amount of redundancy, we generalize the LogP model to capture messages of varying sizes, using functions instead of constants for the machine parameters. We validate our method for a wave simulation algorithm on a Parsytec PowerXplorer with eight processors and a workstation cluster with four workstations.
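A toy illustration of the trade-off the generalized model makes explicit, assuming (purely for the example) overhead and latency that grow linearly with the message size: a value is recomputed redundantly on the receiving processor whenever that is cheaper than shipping it.

```java
import java.util.function.IntUnaryOperator;

// Toy illustration of the trade-off behind redundant computation: with the
// LogP parameters generalized to functions of the message size m, we compare
// the cost of shipping an intermediate result of size m against recomputing
// it locally on the receiving processor. All cost functions are invented.
public class RedundancyTradeoff {

    // generalized machine parameters: linear in the message size (an assumption)
    static final IntUnaryOperator o = m -> 2 + m / 4;     // send/receive overhead o(m)
    static final IntUnaryOperator L = m -> 6 + m / 2;     // latency L(m)

    /** Time until the receiver has the value if it is sent as one message of size m. */
    static int communicate(int m) {
        return o.applyAsInt(m) + L.applyAsInt(m) + o.applyAsInt(m);
    }

    /** Prefer redundant recomputation whenever it is cheaper than communicating. */
    static boolean recomputeRedundantly(int recomputeCost, int messageSize) {
        return recomputeCost < communicate(messageSize);
    }

    public static void main(String[] args) {
        System.out.println(communicate(16));               // cost of sending 16 units: 26
        System.out.println(recomputeRedundantly(10, 16));  // true: recomputing is cheaper
        System.out.println(recomputeRedundantly(40, 16));  // false: better to communicate
    }
}
```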
international conference on supercomputing | 1995
Welf Löwe; Wolf Zimmermann
In sequential computing, the step from programming in machine code to programming in machine-independent high-level languages was taken decades ago. Although high-level programming languages are available for parallel machines, today's parallel programs still depend heavily on the architectures they are intended to run on. Designing efficient parallel programs is a difficult task that can be performed by specialists only, and porting those programs to other parallel architectures is nearly impossible without a considerable loss of performance. Abstract machine models for parallel computing, such as the PRAM model, are accepted by theoreticians but have little practical relevance, since they do not take into account properties of existing architectures; the PRAM is, however, easy to program. Recently, Culler et al. defined the LogP machine model, which better reflects the behaviour of massively parallel computers. In this work, we show transformations of a subclass of PRAM programs leading to efficient LogP programs and give upper bounds for executing them on the LogP machine. To this end, we first briefly summarize the transformations from PRAM to LogP programs. Second, we extend the LogP machine model by a set of machine instructions. Third, we define the classes of coarse-grained and fine-grained LogP programs. The former class can be executed within a factor of two of the optimum; the latter has a slightly worse upper bound on execution time. Finally, we show how to decide statically which strategy is promising for a given program optimization problem.
component based software engineering | 2001
Dirk Heuzeroth; Welf Löwe; Andreas Ludwig; Uwe Aßmann
In order to compose components, we have to adapt them. Therefore, we pursue a transformational approach focusing on the communication view. We show how to separate the definition of communication from the definition of other system aspects, how to extract this definition from existing systems, and how to weave it back into the system. Our main concern is the reconfiguration of this aspect.
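The paper proceeds by source-level transformation; the sketch below only illustrates the underlying idea of factoring the communication between components into a single reconfigurable point, here using a standard java.lang.reflect.Proxy rather than code transformation. All component and method names are illustrative.

```java
import java.lang.reflect.*;

// Not the authors' transformational approach (which rewrites source code), but an
// illustration of the underlying idea: the communication between two components is
// factored out into one interception point, where it can be observed, redirected or
// reconfigured without modifying either component. All names are illustrative.
public class CommunicationAspectDemo {

    interface Storage { void put(String key, String value); }

    static class InMemoryStorage implements Storage {
        public void put(String key, String value) {
            System.out.println("stored " + key + " = " + value);
        }
    }

    /** Wraps a component so that every incoming call passes through the communication aspect. */
    static Storage withCommunicationAspect(Storage target) {
        return (Storage) Proxy.newProxyInstance(
                Storage.class.getClassLoader(),
                new Class<?>[]{Storage.class},
                (proxy, method, args) -> {
                    // the reconfigurable communication aspect: here we merely trace the call,
                    // but marshalling, routing or protocol changes would live here as well
                    System.out.println("call: " + method.getName() + java.util.Arrays.toString(args));
                    return method.invoke(target, args);
                });
    }

    public static void main(String[] args) {
        Storage client = withCommunicationAspect(new InMemoryStorage());
        client.put("answer", "42");
    }
}
```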
parallel computing | 2000
Welf Löwe; Wolf Zimmermann
This paper discusses algorithms for scheduling task graphs G = (V, E, τ) on LogP machines. These algorithms depend on the granularity of G, i.e., on the ratio of the computation times τ(v) to the communication times in the LogP cost model, and on the structure of G. We define a class of coarse-grained task graphs that can be scheduled with a performance guarantee of 4 × T_opt(G), where T_opt(G) is the optimal makespan. Furthermore, we define a class of fine-grained task graphs that can be scheduled with a performance guarantee approaching 4 × T_opt(G) for increasing problem sizes. The discussed classes of task graphs cover algorithms such as the fast Fourier transform, stencil computations for solving partial differential equations, matrix multiplication, etc.
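The guarantee hinges on a granularity test. Under one common reading (the paper's exact definition may differ in detail), a task graph is coarse grained when every computation time τ(v) dominates the LogP cost of a message; the task graph and machine parameters below are invented.

```java
import java.util.*;

// Sketch of a granularity test: a task graph counts as coarse grained if every
// task's computation time tau(v) is at least the LogP cost of one message
// (o + L + o). The exact definition in the paper may differ in detail; the
// graph and machine parameters below are invented.
public class Granularity {

    static double granularity(Map<String, Integer> tau, int L, int o) {
        int messageCost = o + L + o;                     // cost of one unit message
        double g = Double.POSITIVE_INFINITY;
        for (int computation : tau.values()) {
            g = Math.min(g, (double) computation / messageCost);
        }
        return g;                                        // >= 1 means coarse grained
    }

    public static void main(String[] args) {
        Map<String, Integer> tau = Map.of("a", 20, "b", 12, "c", 30);  // tau(v) per task
        double g = granularity(tau, 6, 2);               // message cost = 2 + 6 + 2 = 10
        System.out.println("granularity = " + g + (g >= 1 ? " (coarse grained)" : " (fine grained)"));
    }
}
```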
Electronic Notes in Theoretical Computer Science | 2002
Welf Löwe; Markus L. Noga
Metaprogramming is a generic approach described in many articles. Surprisingly, examples of successful applications are scarce. This paper gives such an example. With a metaprogram of less than 2500 lines, we deploy components on the web by adding specific XML-based communication facilities. This underlines the expressiveness of the metaprogramming approach.
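The metaprogram itself rewrites components at the source level; as a small hand-written stand-in, the sketch below only shows the effect of the generated glue, turning an ordinary method call on a component into an XML message and dispatching it reflectively on the receiver side. Component, method and message format are invented for the example.

```java
import java.lang.reflect.Method;

// Hand-written stand-in for the generated wrappers: an ordinary method call on a
// component is turned into an XML message that could be shipped over the web, and
// dispatched reflectively on the receiving side. All names and formats are invented.
public class XmlCallWrapper {

    public static class Calculator {
        public int add(int x, int y) { return x + y; }
    }

    /** Serializes a call to the named method into a simple XML request message. */
    static String toXmlRequest(Object component, String methodName, Object... args) {
        StringBuilder xml = new StringBuilder();
        xml.append("<call component=\"").append(component.getClass().getSimpleName()).append("\"")
           .append(" method=\"").append(methodName).append("\">");
        for (Object a : args) {
            xml.append("<arg type=\"").append(a.getClass().getSimpleName()).append("\">")
               .append(a).append("</arg>");
        }
        return xml.append("</call>").toString();
    }

    /** Dispatches such a request reflectively on the receiver side (types simplified to int). */
    static Object dispatch(Object component, String methodName, Object... args) throws Exception {
        Class<?>[] types = new Class<?>[args.length];
        java.util.Arrays.fill(types, int.class);
        Method m = component.getClass().getMethod(methodName, types);
        return m.invoke(component, args);
    }

    public static void main(String[] args) throws Exception {
        Calculator calc = new Calculator();
        System.out.println(toXmlRequest(calc, "add", 1, 2));  // the message the wrapper would send
        System.out.println(dispatch(calc, "add", 1, 2));      // the receiver-side invocation: 3
    }
}
```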
european conference on parallel processing | 1997
Welf Löwe; Wolf Zimmermann; Jörn Eisenbiegler
We discuss linear schedules of task graphs under the communication cost model of the LogP machine. Extending our previous work, we also consider non-constant parameters L, o and g, i.e., we introduce messages of different sizes into the LogP model. The main results of this work are the following: (i) in the LogP model, less communication in linear schedules does not necessarily imply better performance; (ii) we give an upper bound on the execution time of linear clusterings; and (iii) we give an efficient algorithm that computes linear clusterings with a minimum number of clusters.
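To make the notion of a linear clustering concrete, the sketch below greedily decomposes a small task graph into chains. It illustrates only the concept; unlike the paper's algorithm, this greedy pass does not guarantee a minimum number of clusters. The example graph is invented.

```java
import java.util.*;

// A greedy linear clustering of a small task graph: each cluster is a chain, built
// by following one still-unclustered successor at a time. This only illustrates the
// notion of a linear clustering; it does not guarantee a minimum number of clusters.
public class LinearClustering {

    public static void main(String[] args) {
        // adjacency list of a small DAG: a -> b, a -> c, b -> d, c -> d
        Map<String, List<String>> succ = new LinkedHashMap<>();
        succ.put("a", List.of("b", "c"));
        succ.put("b", List.of("d"));
        succ.put("c", List.of("d"));
        succ.put("d", List.of());

        Set<String> unclustered = new LinkedHashSet<>(succ.keySet());
        List<List<String>> clusters = new ArrayList<>();
        while (!unclustered.isEmpty()) {
            // start a new chain at some still-unclustered task ...
            String v = unclustered.iterator().next();
            List<String> chain = new ArrayList<>();
            // ... and extend it along unclustered successors as long as possible
            while (v != null) {
                chain.add(v);
                unclustered.remove(v);
                v = succ.get(v).stream().filter(unclustered::contains).findFirst().orElse(null);
            }
            clusters.add(chain);
        }
        System.out.println(clusters);   // [[a, b, d], [c]] -- two linear clusters
    }
}
```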
european conference on parallel processing | 2001
Welf Löwe; Alex Liebrich
Efficient scheduling of task graphs for parallel machines is a major issue in parallel computing. Such algorithms are often hard to understand and hard to evaluate. We present a framework for the visualization of scheduling algorithms. Using the LogP cost model for parallel machines, we simulate the effects of scheduling algorithms for specific target machines and task graphs before performing time- and resource-consuming measurements in practice.
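A minimal flavour of what such a simulation computes: from LogP parameters, a fixed task-to-processor assignment and the task dependences, it derives the start and finish times a visualizer could render as a Gantt chart. The graph, schedule and parameters are invented, and the sketch simplifies by not letting send overhead occupy the sending processor.

```java
import java.util.*;

// Simulates a given schedule under LogP costs: tasks start once all predecessor
// results have arrived (o + L + o for remote predecessors, free locally) and the
// assigned processor is idle. Graph, schedule and parameters are invented;
// simplification: send overhead does not block the sending processor.
public class ScheduleSimulation {

    record Task(String name, int duration, int proc, List<String> preds) {}

    public static void main(String[] args) {
        int L = 6, o = 2;                                       // made-up LogP parameters
        List<Task> tasks = List.of(                             // topologically ordered
                new Task("a", 4, 0, List.of()),
                new Task("b", 5, 1, List.of("a")),
                new Task("c", 3, 0, List.of("a")),
                new Task("d", 2, 0, List.of("b", "c")));

        Map<String, Integer> finish = new HashMap<>();
        Map<Integer, Integer> procFree = new HashMap<>();       // processor -> time it becomes idle
        for (Task t : tasks) {
            int ready = 0;
            for (String p : t.preds()) {
                Task pred = tasks.stream().filter(x -> x.name().equals(p)).findFirst().orElseThrow();
                int arrival = finish.get(p) + (pred.proc() == t.proc() ? 0 : o + L + o);
                ready = Math.max(ready, arrival);
            }
            int start = Math.max(ready, procFree.getOrDefault(t.proc(), 0));
            finish.put(t.name(), start + t.duration());
            procFree.put(t.proc(), start + t.duration());
            System.out.println("P" + t.proc() + ": " + t.name() + " [" + start + ", " + (start + t.duration()) + ")");
        }
        System.out.println("makespan = " + Collections.max(finish.values()));
    }
}
```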