Publication

Featured research published by Michael Philippsen.


Concurrency and Computation: Practice and Experience | 1997

JavaParty – transparent remote objects in Java

Michael Philippsen; Matthias Zenger

Java’s threads offer appropriate means either for parallel programming of SMPs or as target constructs when compiling add-on features (e.g. forall constructs, automatic parallelization, etc.). Unfortunately, Java does not provide elegant and straightforward mechanisms for parallel programming on distributed memory machines, such as clusters of workstations. JavaParty transparently adds remote objects to Java purely by declaration, while avoiding the disadvantages of explicit socket communication, the programming overhead of RMI, and many disadvantages of the message-passing approach in general. JavaParty is specifically targeted towards and implemented on clusters of workstations. It hence combines Java-like programming with the concepts of distributed shared memory in heterogeneous networks.
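JavaParty's core idea, remote objects purely by declaration, can be sketched with its `remote` class modifier. Note that `remote` is a JavaParty language extension, not standard Java, so this sketch will not compile with a plain javac; the class and method names are illustrative only.

```java
// JavaParty syntax sketch: 'remote' is a JavaParty extension keyword.
// Instances of a remote class may live on any node of the cluster.
public remote class Worker {
    public int compute(int x) { return x * x; }
}

public class Main {
    public static void main(String[] args) {
        // Looks like a local allocation, but the JavaParty runtime may
        // place the instance on a different node; method calls are
        // turned into remote invocations transparently.
        Worker w = new Worker();
        System.out.println(w.compute(6));
    }
}
```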


IEEE Transactions on Software Engineering | 2002

Two controlled experiments assessing the usefulness of design pattern documentation in program maintenance

Lutz Prechelt; Barbara Unger-Lamprecht; Michael Philippsen; Walter F. Tichy

Using design patterns is claimed to improve programmer productivity and software quality. Such improvements may manifest both at construction time (in faster and better program design) and at maintenance time (in faster and more accurate program comprehension). The paper focuses on the maintenance context and reports on experimental tests of the following question: does it help the maintainer if the design patterns in the program code are documented explicitly (using source code comments) compared to a well-commented program without explicit reference to design patterns? Subjects performed maintenance tasks on two programs ranging from 360 to 560 LOC including comments. The experiments tested whether pattern comment lines (PCL) help during maintenance if patterns are relevant and sufficient program comments are already present. This question is a challenge for the experimental methodology: A setup leading to relevant results is quite difficult to find. We discuss these issues in detail and suggest a general approach to such situations. A conservative analysis of the results supports the hypothesis that pattern-relevant maintenance tasks were completed faster or with fewer errors if redundant design pattern information was provided. The article provides the first controlled experiment results on design pattern usage and it presents a solution approach to an important class of experiment design problems for experiments regarding documentation.


Proceedings of the ACM 1999 conference on Java Grande | 1999

A more efficient RMI for Java

Christian Nester; Michael Philippsen; Bernhard Haumacher

In current Java implementations, Remote Method Invocation (RMI) is too slow, especially for high performance computing. RMI is designed for wide-area and high-latency networks, it is based on a slow object serialization, and it does not support high-performance communication networks. The paper demonstrates that a much faster drop-in RMI and an efficient serialization can be designed and implemented completely in Java without any native code. Moreover, the re-designed RMI supports non-TCP/IP communication networks, even with heterogeneous transport protocols. As a by-product, a benchmark collection for RMI is presented. This collection, asked for by the Java Grande Forum from its first meeting, can guide JVM vendors in their performance optimizations. On PCs connected through Ethernet, the better serialization and the improved RMI save a median of 45% (maximum of 71%) of the runtime for some set of arguments. On our Myrinet-based ParaStation network (a cluster of DEC Alphas) we save a median of 85% (maximum of 96%), compared to standard RMI, standard serialization, and Fast Ethernet; a remote method invocation runs as fast as 115 μs round trip time, compared to about 1.5 ms.
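The standard RMI programming model that the paper sets out to speed up can be sketched as a self-contained round trip in a single JVM: an in-process registry, a servant, and a client-side stub invocation. Every call still goes through serialization and TCP loopback, which is exactly the cost the paper attacks. The class and method names here are illustrative, not from the paper.

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// The remote interface: every method must declare RemoteException.
interface Echo extends Remote {
    String echo(String s) throws RemoteException;
}

public class RmiDemo {
    static class EchoImpl implements Echo {
        public String echo(String s) { return "echo:" + s; }
    }

    // Start a registry, export a servant, look it up, invoke, tear down.
    static String roundTrip(String s) throws Exception {
        Registry reg = LocateRegistry.createRegistry(0);  // anonymous port
        EchoImpl servant = new EchoImpl();
        Echo stub = (Echo) UnicastRemoteObject.exportObject(servant, 0);
        reg.bind("echo", stub);
        try {
            // "Client" side: the call is marshalled, sent over loopback
            // TCP, and unmarshalled, even though everything is local.
            Echo client = (Echo) reg.lookup("echo");
            return client.echo(s);
        } finally {
            UnicastRemoteObject.unexportObject(servant, true);
            UnicastRemoteObject.unexportObject(reg, true);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("hello"));
    }
}
```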


Concurrency and Computation: Practice and Experience | 2000

More efficient serialization and RMI for Java

Michael Philippsen; Bernhard Haumacher; Christian Nester

In current Java implementations, Remote Method Invocation (RMI) is too slow, especially for high-performance computing. RMI is designed for wide-area and high-latency networks, it is based on a slow object serialization, and it does not support high-performance communication networks. The paper demonstrates that a much faster drop-in RMI and an efficient drop-in serialization can be designed and implemented completely in Java without any native code. Moreover, the re-designed RMI supports non-TCP/IP communication networks, even with heterogeneous transport protocols. We demonstrate that for high-performance computing some of the official serialization's generality can and should be traded for speed. As a by-product, a benchmark collection for RMI is presented. On PCs connected through Ethernet, the better serialization and the improved RMI save a median of 45% (maximum of 71%) of the runtime for some set of arguments. On our Myrinet-based ParaStation network (a cluster of DEC Alphas) we save a median of 85% (maximum of 96%), compared to standard RMI, standard serialization, and Fast Ethernet; a remote method invocation runs as fast as 80 μs round trip time, compared with about 1.5 ms.
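The trade between the official serialization's generality and speed can be illustrated with the standard library's `Externalizable` interface, where the programmer writes raw field values instead of relying on reflective, metadata-heavy default encoding. This is a standard-library analogue of the idea, not the paper's drop-in serialization; the `Point` class is made up for illustration.

```java
import java.io.*;

// Default serialization reflectively records field names and types; an
// Externalizable class writes only the raw values, trading generality
// (automatic versioning, reflection) for speed and size.
class Point implements Externalizable {
    int x, y;
    public Point() {}                         // required by Externalizable
    Point(int x, int y) { this.x = x; this.y = y; }
    public void writeExternal(ObjectOutput out) throws IOException {
        out.writeInt(x); out.writeInt(y);     // just 8 payload bytes
    }
    public void readExternal(ObjectInput in) throws IOException {
        x = in.readInt(); y = in.readInt();
    }
}

public class SlimSerialization {
    static byte[] toBytes(Object o) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(o);
        }
        return bos.toByteArray();
    }

    static Object fromBytes(byte[] b) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois =
                 new ObjectInputStream(new ByteArrayInputStream(b))) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] b = toBytes(new Point(3, 4));
        Point p = (Point) fromBytes(b);
        System.out.println(p.x + "," + p.y + " round-tripped in "
                + b.length + " stream bytes");
    }
}
```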


European Conference on Machine Learning | 2005

A quantitative comparison of the subgraph miners MoFa, gSpan, FFSM, and Gaston

Marc Wörlein; Thorsten Meinl; Ingrid Fischer; Michael Philippsen

Several new miners for frequent subgraphs have been published recently. Whereas the new approaches are presented in detail, the quantitative evaluations are often of limited value: only the performance on a small set of graph databases is discussed, and the new algorithm is often compared to just a single competitor based on an executable. It remains unclear how the algorithms perform on bigger or other graph databases and which of their distinctive features is best suited for which database. We have re-implemented the subgraph miners MoFa, gSpan, FFSM, and Gaston within a common code base and with the same level of programming expertise and optimization effort. This paper presents the results of a comparative benchmarking that ran the algorithms on a comprehensive set of graph databases.
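All four miners share a common first step: determining the frequent edges of the database, where support is graph-based, i.e. an edge counts once per database graph it occurs in. A toy sketch of that step (the data structures and names here are made up for illustration, not taken from any of the four implementations):

```java
import java.util.*;

// Toy version of the frequent-edge step shared by MoFa, gSpan, FFSM,
// and Gaston: count in how many database graphs each labeled edge
// occurs, then keep those meeting a minimum support threshold.
public class FrequentEdges {

    // Canonical key for an undirected edge between vertex labels a and b.
    static String edge(String a, String b) {
        return a.compareTo(b) <= 0 ? a + "-" + b : b + "-" + a;
    }

    static Map<String, Integer> edgeSupport(List<String[][]> graphs) {
        Map<String, Integer> support = new HashMap<>();
        for (String[][] g : graphs) {
            Set<String> inThisGraph = new HashSet<>(); // once per graph
            for (String[] e : g) inThisGraph.add(edge(e[0], e[1]));
            for (String key : inThisGraph) support.merge(key, 1, Integer::sum);
        }
        return support;
    }

    static Set<String> frequent(List<String[][]> graphs, int minSupport) {
        Set<String> result = new TreeSet<>();
        for (Map.Entry<String, Integer> e : edgeSupport(graphs).entrySet())
            if (e.getValue() >= minSupport) result.add(e.getKey());
        return result;
    }

    public static void main(String[] args) {
        // Three tiny molecule-like graphs given as edge lists of atom labels.
        List<String[][]> db = Arrays.asList(
            new String[][] { {"C", "C"}, {"C", "O"} },
            new String[][] { {"C", "O"}, {"O", "H"} },
            new String[][] { {"C", "C"}, {"C", "O"}, {"C", "N"} });
        System.out.println(frequent(db, 2));  // edges in >= 2 graphs
    }
}
```

The real miners then differ in how they extend these frequent edges into larger subgraphs and how they avoid generating duplicate candidates, which is where the benchmarked performance differences arise.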


Communications of the ACM | 2001

Multiparadigm communications in Java for grid computing

Vladimir Getov; Gregor von Laszewski; Michael Philippsen; Ian T. Foster

In this article, we argue that the rapid development of Java technology now makes it possible to support, in a single object-oriented framework, the different communication and coordination structures that arise in scientific applications. We outline how this integrated approach can be achieved, reviewing in the process the state-of-the-art in communication paradigms within Java. We also present recent evaluation results indicating that this integrated approach can be achieved without compromising on performance.


Journal of Systems and Software | 2003

A controlled experiment on inheritance depth as a cost factor for code maintenance

Lutz Prechelt; Barbara Unger; Michael Philippsen; Walter F. Tichy

In two controlled experiments we compare the performance on code maintenance tasks for three equivalent programs with 0, 3, and 5 levels of inheritance. For the given tasks, which focus on understanding effort more than change effort, programs with less inheritance were faster to maintain. Daly et al. previously reported similar experiments on the same question with quite different results. They found that the 5-level program tended to be harder to maintain than the 0-level program, while the 3-level program was significantly easier to maintain than the 0-level program. We describe the design and setup of our experiment, the differences from the previous ones, and the results obtained. Our experiments differ from the previous ones in several ways: we used a longer and more complex program, made an inheritance diagram available to the subjects, and added a second kind of maintenance task. Taken together, the previous results and ours suggest that a given inheritance depth is neither useful nor harmful as such. Code maintenance effort is hardly correlated with inheritance depth, but rather depends on other factors (partly related to inheritance depth). Using statistical modeling, we identify the number of relevant methods as one such factor. We use it to build an explanation model of average code maintenance effort that is much more powerful than a model relying on inheritance depth.


Computing in Science and Engineering | 2001

Java and numerical computing

Ronald F. Boisvert; José E. Moreira; Michael Philippsen; Roldan Pozo

Java represents both a challenge and an opportunity to practitioners of numerical computing. The article analyzes the current state of Java in numerical computing and identifies some directions for the realization of its full potential. Many research projects have demonstrated the technology to achieve very high performance in floating-point computations with Java. Its incorporation into commercially available JVMs is more an economic and market issue than a technical one. The combination of Java programming features, pervasiveness, and performance could make it the language of choice for numerical computing. Furthermore, all Java programmers can potentially benefit from the techniques developed for optimizing Java's numerical performance. The authors hope the article will encourage more numerical programmers to pursue developing their applications in Java. This, in turn, will motivate vendors to develop better execution environments, harnessing Java's true potential for numerical computing.


Concurrency and Computation: Practice and Experience | 2000

A survey of concurrent object-oriented languages

Michael Philippsen

During the last decade object-oriented programming has grown from marginal influence into widespread acceptance. During the same period, progress in hardware and networking has changed the computing environment from sequential to parallel. Multi-processor workstations and clusters are now quite common. Numerous proposals have been made to combine both developments. The prime objective has always been to provide the advantages of object-oriented software design at the increased power of parallel machines. However, combining both concepts has proven to be notoriously difficult. Depending on the approach, key characteristics of either the object-oriented paradigm or key performance factors of parallelism are often sacrificed, resulting in unsatisfactory languages. This survey first recapitulates well-known characteristics of both the object-oriented paradigm and parallel programming, and then marks out the design space of possible combinations by identifying various interdependencies of key concepts. The design space is then filled with data points: for 111 proposed languages we provide brief characteristics and feature tables. The feature tables, the comprehensive bibliography, and web addresses may help in identifying open questions and preventing re-inventions.


International Parallel Processing Symposium | 1999

More Efficient Object Serialization

Michael Philippsen; Bernhard Haumacher

In current Java implementations, Remote Method Invocation is too slow for high performance computing. Since Java’s object serialization often takes 25%–50% of the time needed for a remote invocation, an essential step towards a fast RMI is to reduce the cost of serialization.
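Part of the serialization cost comes from the metadata the default mechanism writes alongside the values: the stream carries the class name, field names, and type codes, not just the payload. A small standard-library demonstration (the `Pair` class is made up for illustration):

```java
import java.io.*;
import java.nio.charset.StandardCharsets;

public class SerializationOverhead {
    static class Pair implements Serializable {
        private static final long serialVersionUID = 1L;
        int left, right;
        Pair(int l, int r) { left = l; right = r; }
    }

    // Serialize any object into a byte array with the default mechanism.
    static byte[] serialize(Object o) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(o);
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        byte[] bytes = serialize(new Pair(1, 2));
        // The payload is two ints (8 bytes), but the stream also carries
        // the class name and field names as text.
        System.out.println("stream size: " + bytes.length
                + " bytes for 8 payload bytes");
        System.out.println("contains class name: "
                + new String(bytes, StandardCharsets.ISO_8859_1).contains("Pair"));
    }
}
```

For a remote invocation whose arguments are small, this fixed per-object overhead is why serialization dominates the round-trip time.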

Collaboration


Dive into Michael Philippsen's collaborations.

Top Co-Authors

Walter F. Tichy (Karlsruhe Institute of Technology)
Ronald Veldema (University of Erlangen-Nuremberg)
Christopher Mutschler (University of Erlangen-Nuremberg)
Bernhard Haumacher (Karlsruhe Institute of Technology)
Ernst A. Heinz (Karlsruhe Institute of Technology)
Lutz Prechelt (Carnegie Mellon University)
Michael Klemm (University of Erlangen-Nuremberg)
Thomas M. Warschko (Karlsruhe Institute of Technology)
Ingrid Fischer (University of Erlangen-Nuremberg)
Barbara Unger (Karlsruhe Institute of Technology)