Publication


Featured research published by Christian Kurmann.


International Conference on Cluster Computing | 2000

Partition repositories for partition cloning: OS independent software maintenance in large clusters of PCs

Felix Rauch; Christian Kurmann; Thomas M. Stricker

As a novel approach to software maintenance in large clusters of PCs requiring multiple OS installations, we implemented partition cloning and partition repositories, as well as a set of OS independent tools for software maintenance using entire partitions, thus providing a clean abstraction of all operating system configuration states. We identify the evolution of software installations (different releases) and the customization of installed systems (different machines) as two orthogonal axes. Using this analysis we devise partition repositories as an efficient, incremental storage scheme to maintain all necessary partition images for versatile, large clusters of PCs. We evaluate our approach with a release history of sample images used in the Patagonia multi-purpose clusters at ETH Zurich, including several Linux, Windows NT and Oberon images. The study includes quantitative data that shows the viability of the OS independent approach of working with entire partitions and investigates some relevant tradeoffs, e.g. between difference granularity and compression block size.
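The incremental partition-repository idea can be sketched in a few lines: store the first image in full and each later release as only the blocks that differ from its predecessor, so storage grows with the changes rather than with the partition size. This is a minimal sketch under assumptions of our own (a 4 KB block size, equal-sized releases, Python for brevity), not the authors' actual tools:

```python
BLOCK_SIZE = 4096  # assumed difference granularity (the paper studies this tradeoff)

def split_blocks(image: bytes, size: int = BLOCK_SIZE) -> list:
    """Cut a raw partition image into fixed-size blocks."""
    return [image[i:i + size] for i in range(0, len(image), size)]

class PartitionRepository:
    """Toy incremental store: a full base image plus, per release, only the
    blocks that changed. Assumes all releases have the same partition size."""

    def __init__(self, base_image: bytes):
        self.base = split_blocks(base_image)
        self.deltas = []  # one {block_index: block_bytes} dict per release

    def add_release(self, image: bytes):
        prev = split_blocks(self.materialize(len(self.deltas) - 1))
        blocks = split_blocks(image)
        self.deltas.append({i: b for i, b in enumerate(blocks) if b != prev[i]})

    def materialize(self, release: int) -> bytes:
        """Reconstruct a release (-1 = base) by replaying its deltas in order."""
        blocks = list(self.base)
        for delta in self.deltas[:release + 1]:
            for i, b in delta.items():
                blocks[i] = b
        return b"".join(blocks)
```

A release that touches one block out of thousands then costs one stored block, which is the essence of the incremental scheme.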


Cluster Computing | 2001

Speculative Defragmentation – Leading Gigabit Ethernet to True Zero-Copy Communication

Christian Kurmann; Felix Rauch; Thomas M. Stricker

Clusters of Personal Computers (CoPs) offer excellent compute performance at a low price. Workstations with “Gigabit to the Desktop” can give workers access to a new range of multimedia applications. Networking PCs with their modest memory subsystem performance requires either extensive hardware acceleration for protocol processing or, alternatively, a highly optimized software system to reach the full Gigabit/sec speeds in applications. So far this could not be achieved, since correctly defragmenting packets of the various communication protocols in hardware remains an extremely complex task and prevented a clean “zero-copy” solution in software. We propose and implement a defragmenting driver based on the same speculation techniques that are commonly used to improve processor performance with instruction level parallelism. With a speculative implementation we are able to eliminate the last copy of a TCP/IP stack even on simple, existing Ethernet NIC hardware. We integrated our network interface driver into the Linux TCP/IP protocol stack and added the well known page remapping and fast buffer strategies to reach an overall zero-copy solution. An evaluation with measurement data indicates three trends: (1) for Gigabit Ethernet the CPU load of communication processing can be reduced significantly, (2) speculation will succeed in most cases, and (3) the performance for burst transfers can be improved by a factor of 1.5–2 over the standard communication software in Linux 2.2. Finally we can suggest simple hardware improvements to increase the speculation success rates based on our implementation.
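The speculation can be illustrated schematically: the driver optimistically assumes an arriving fragment belongs to the expected connection and lets its payload land directly in the application's buffer, checking the headers only afterwards; a misspeculation rolls back and falls back to the conventional copying path. The sketch below is illustrative only (the class and the flow-matching rule are our assumptions, not the Linux driver code):

```python
class SpeculativeReceiver:
    """Toy model of speculative defragmentation: payloads are placed straight
    into the destination buffer on the speculation that they belong to the
    expected flow; headers are verified only after the data has landed."""

    def __init__(self, expected_flow, buffer_size):
        self.expected_flow = expected_flow        # e.g. (src_ip, dst_ip, port)
        self.app_buffer = bytearray(buffer_size)  # page-aligned in a real driver
        self.hits = 0    # speculation succeeded: true zero-copy receive
        self.misses = 0  # speculation failed: conventional copying path

    def receive(self, flow, offset, payload):
        # Speculative path: the NIC has already "DMAed" the payload in place.
        self.app_buffer[offset:offset + len(payload)] = payload
        # Only now are the headers checked.
        if flow == self.expected_flow:
            self.hits += 1          # data is already where it belongs
            return "zero_copy"
        self.misses += 1            # misspeculation: roll back, take slow path
        self.app_buffer[offset:offset + len(payload)] = bytes(len(payload))
        return "slow_path"
```

The abstract's finding (2) corresponds to `hits` dominating `misses` in practice, which is what makes the optimistic placement pay off.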


European Conference on Parallel Processing | 2000

Partition Cast — Modelling and Optimizing the Distribution of Large Data Sets in PC Clusters

Felix Rauch; Christian Kurmann; Thomas M. Stricker

Multicasting large amounts of data efficiently to all nodes of a PC cluster is an important operation. In the form of a partition cast it can be used to replicate entire software installations by cloning. Optimizing a partition cast for a given cluster of PCs reveals some interesting architectural tradeoffs, since the fastest solution does not only depend on the network speed and topology, but remains highly sensitive to other resources like the disk speed, the memory system performance and the processing power in the participating nodes. We present an analytical model that guides an implementation towards an optimal configuration for any given PC cluster. The model is validated by measurements on our cluster using Gigabit and Fast Ethernet links. The resulting simple software tool, Dolly, can replicate an entire 2 GByte Windows NT image onto 24 machines in less than 5 minutes.
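The pipelined chain behind a tool like Dolly admits a simple back-of-the-envelope check: each node writes the stream to disk while simultaneously forwarding it to its successor, so throughput is set by the slowest per-node resource and the total time barely depends on the number of nodes. The bandwidth figures below are assumptions chosen for illustration, not the paper's measurements:

```python
def chain_replication_time(image_bytes, disk_write, net, mem):
    """Pipelined multicast chain: every node streams to its successor while
    writing to disk, so the effective throughput is the minimum of the
    per-node resources and is nearly independent of the node count."""
    bottleneck = min(disk_write, net, mem)   # bytes/s
    return image_bytes / bottleneck

GB = 1 << 30
t = chain_replication_time(
    image_bytes=2 * GB,
    disk_write=9 * 10**6,    # assumed ~9 MB/s sustained disk write (late-1990s disks)
    net=12 * 10**6,          # assumed ~12 MB/s usable Fast Ethernet payload rate
    mem=100 * 10**6,         # assumed memory-system streaming bandwidth
)
print(round(t / 60, 1), "minutes")  # prints "4.0 minutes": disk-bound in this sketch
```

Under these assumed numbers the chain is disk-bound and finishes in about four minutes, consistent with the "less than 5 minutes" figure reported for 24 machines.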


Scientia Forestalis | 1999

A Comparison of Three Gigabit Technologies: SCI, Myrinet and SGI/Cray T3D

Christian Kurmann; Thomas M. Stricker

In 1993 Cray Research shipped its first T3D Massively Parallel Processor (MPP) and set high standards for Gigabit/s SAN (System Area Network) interconnects of microprocessor based MPP systems, sustaining 1 Gigabit/s per link in many common applications. Today, in 1999, the communication speed is still at one Gigabit/s, but major advances in technology managed to drastically lower costs and to bring such interconnects to the mainstream market of PCI based commodity personal computers. Two products based on two completely different technologies are readily available: the Scalable Coherent Interface (SCI) implementation by Dolphin Interconnect Solutions and a Myrinet implementation by Myricom Inc. Both networking technologies include cabling for System Area Networking (SAN) and Local Area Networking (LAN) distances and adapter cards that connect to the standard I/O bus of a high end PC. Both technologies can incorporate crossbar switches to extend point to point links into an entire network fabric. Myrinet links are strictly point to point, while SCI links can be rings of multiple nodes that are possibly connected to a switch for expansion. In the meantime two Internet technologies emerging from the inter-networking world also arrived at Gigabit speeds: ATM (Asynchronous Transfer Mode) and Gigabit Ethernet. Based on their specifications and history, these two alternatives are related to the evaluated technologies Myrinet and SCI.


Concurrency and Computation: Practice and Experience | 2002

Optimizing the distribution of large data sets in theory and practice

Felix Rauch; Christian Kurmann; Thomas M. Stricker

Multicasting large amounts of data efficiently to all nodes of a PC cluster is an important operation. In the form of a partition cast it can be used to replicate entire software installations by cloning. Optimizing a partition cast for a given cluster of PCs reveals some interesting architectural tradeoffs, since the fastest solution does not only depend on the network speed and topology, but remains highly sensitive to other resources like the disk speed, the memory system performance and the processing power in the participating nodes. We present an analytical model that guides an implementation towards an optimal configuration for any given PC cluster. The model is validated by measurements on our cluster using Gigabit and Fast Ethernet links. The resulting simple software tool, Dolly, can replicate an entire 2 GB Windows NT image onto 24 machines in less than 5 minutes.


High Performance Distributed Computing | 2003

Zero-copy for CORBA - efficient communication for distributed object middleware

Christian Kurmann; Thomas M. Stricker

Many large applications require distributed computing for the sake of better performance, and software systems that facilitate the development of such applications have attracted a great deal of attention. Modeling the application as distributed objects or components promises the benefits of better abstractions and increased software reuse. Using distributed object middleware (DOM) like CORBA (common object request broker architecture) looks promising, but most often one cannot afford its notorious inefficiency. We address the bandwidth bottleneck by extending a highly efficient zero-copy communication architecture from the operating system through the middleware layers all the way to the application. In contrast to previous attempts at improving efficiency in CORBA, we preserve the advantages of object oriented abstraction for the software design process and propose an efficient CORBA system that can handle bulk data transfers within the object request broker (ORB). Our prototype uses a clean separation of control and data transfers within the ORB and for the ORB-to-ORB communication, and manages to get rid of all inefficient buffering for certain types while still preserving the standard Internet inter-ORB protocol (IIOP). It achieves the full performance that is only available with a strict zero-copy implementation across all layers between the operating system and the application.
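The control/data split can be illustrated with a toy receive path: small control messages are marshalled as usual, while bulk payloads are written into a receive buffer exactly once and handed upward as views rather than fresh copies. This is a hypothetical sketch of the idea (the names and the fixed-size buffer are our assumptions), not the paper's ORB implementation:

```python
class BulkAwareOrb:
    """Toy receive path separating control from bulk data: the payload is
    written into a fixed receive buffer once and handed to the application
    as a zero-copy view instead of being re-marshalled at every layer."""

    def __init__(self, capacity=1 << 16):
        self.rx_buffer = bytearray(capacity)  # stands in for a pinned DMA region
        self.used = 0

    def deliver(self, control: dict, payload: bytes):
        start = self.used
        self.rx_buffer[start:start + len(payload)] = payload  # the one write
        self.used += len(payload)
        view = memoryview(self.rx_buffer)[start:start + len(payload)]
        return control, view  # no further copies on the way up

# Usage: the application reads the bulk data in place, inside the ORB's buffer.
orb = BulkAwareOrb()
ctrl, data = orb.deliver({"op": "put", "len": 5}, b"hello")
```

Because the application only ever sees a `memoryview` into the ORB's buffer, no layer between the transport and the application needs to re-buffer the bulk data, which is the property the prototype extends down to the operating system.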


High Performance Distributed Computing | 2000

Speculative defragmentation - a technique to improve the communication software efficiency for Gigabit Ethernet

Christian Kurmann; Michel G. Muller; Felix Rauch; Thomas M. Stricker


CS Technical Report | 2003

Cost/performance tradeoffs in network interconnects for clusters of commodity PCs

Christian Kurmann; Felix Rauch; Thomas M. Stricker


Cluster Computing | 1999

Patagonia - A Dual Use Cluster of PCs for Computation and Education

Felix Rauch; Christian Kurmann; Blanca Maria Müller-Lagunez; Thomas M. Stricker


CS Technical Report | 2000

Partition cast: Modelling and optimizing the distribution of large data sets in PC clusters

Felix Rauch; Christian Kurmann; Thomas M. Stricker

Collaboration


Dive into Christian Kurmann's collaboration.

Top Co-Authors

Felix Rauch

École Polytechnique Fédérale de Lausanne

Michel G. Muller

École Polytechnique Fédérale de Lausanne

Michel Müller

École Polytechnique Fédérale de Lausanne