
Publication


Featured research published by Felix Rauch.


International Conference on Cluster Computing | 2000

Partition repositories for partition cloning: OS independent software maintenance in large clusters of PCs

Felix Rauch; Christian Kurmann; Thomas M. Stricker

As a novel approach to software maintenance in large clusters of PCs requiring multiple OS installations, we implemented partition cloning and partition repositories, as well as a set of OS independent tools for software maintenance that use entire partitions, thus providing a clean abstraction of all operating system configuration states. We identify the evolution of software installations (different releases) and the customization of installed systems (different machines) as two orthogonal axes. Using this analysis, we devise partition repositories as an efficient, incremental storage scheme to maintain all necessary partition images for versatile, large clusters of PCs. We evaluate our approach with a release history of sample images used in the Patagonia multi-purpose clusters at ETH Zurich, including several Linux, Windows NT and Oberon images. The study includes quantitative data that shows the viability of the OS independent approach of working with entire partitions and investigates some relevant tradeoffs, e.g. between difference granularity and compression block size.
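The incremental storage idea behind partition repositories can be illustrated with a toy content-addressed block store (a minimal Python sketch under assumed names; the class, block size and interface are illustrative, not the authors' actual tools). Images that share blocks across releases store each distinct block only once:

```python
import hashlib

BLOCK_SIZE = 4  # tiny for illustration; the paper studies realistic difference granularities


class PartitionRepository:
    """Toy content-addressed store: each partition image is kept as a
    list of block hashes, so blocks shared between releases or machine
    customizations are stored only once."""

    def __init__(self):
        self.blocks = {}  # block hash -> block bytes
        self.images = {}  # image name -> ordered list of block hashes

    def add_image(self, name, data):
        hashes = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            h = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(h, block)  # deduplicate identical blocks
            hashes.append(h)
        self.images[name] = hashes

    def restore_image(self, name):
        # Reassemble the full partition image from its block list.
        return b"".join(self.blocks[h] for h in self.images[name])

    def stored_bytes(self):
        # Total unique payload actually kept in the repository.
        return sum(len(b) for b in self.blocks.values())
```

With two 12-byte "releases" that share their first 8 bytes, the repository stores only 16 bytes of unique blocks instead of 24, while still restoring either image exactly.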


Cluster Computing | 2001

Speculative Defragmentation – Leading Gigabit Ethernet to True Zero-Copy Communication

Christian Kurmann; Felix Rauch; Thomas M. Stricker

Clusters of Personal Computers (CoPs) offer excellent compute performance at a low price. Workstations with “Gigabit to the Desktop” can give workers access to a new range of multimedia applications. Networking PCs with their modest memory subsystem performance requires either extensive hardware acceleration for protocol processing or, alternatively, a highly optimized software system to reach full Gigabit/s speeds in applications. So far this could not be achieved, since correctly defragmenting packets of the various communication protocols in hardware remains an extremely complex task and has prevented a clean “zero-copy” solution in software. We propose and implement a defragmenting driver based on the same speculation techniques that are commonly used to improve processor performance through instruction level parallelism. With a speculative implementation we are able to eliminate the last copy of a TCP/IP stack even on simple, existing Ethernet NIC hardware. We integrated our network interface driver into the Linux TCP/IP protocol stack and added the well-known page remapping and fast buffer strategies to reach an overall zero-copy solution. An evaluation with measurement data indicates three trends: (1) for Gigabit Ethernet the CPU load of communication processing can be reduced significantly, (2) speculation succeeds in most cases, and (3) the performance for burst transfers can be improved by a factor of 1.5–2 over the standard communication software in Linux 2.2. Finally, based on our implementation, we suggest simple hardware improvements to increase the speculation success rates.
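The speculation idea can be sketched as a toy model (plain Python, not the actual Linux driver; all names are assumptions for illustration): the driver bets that the next fragment continues the current flow in order, places its payload directly at its final position in the application buffer, and falls back to the conventional copy path on a misprediction:

```python
class SpeculativeReceiver:
    """Toy model of speculative defragmentation: guess that each incoming
    fragment continues the expected TCP flow at the expected offset, so its
    payload can land zero-copy in the application's buffer; a wrong guess
    takes the conventional (copying) path instead."""

    def __init__(self, flow_id, total_len):
        self.flow_id = flow_id
        self.buffer = bytearray(total_len)  # stands in for the app's final buffer
        self.expected_offset = 0
        self.hits = 0    # speculation succeeded: payload placed directly
        self.misses = 0  # speculation failed: fallback copy path used

    def receive(self, flow_id, offset, payload):
        if flow_id == self.flow_id and offset == self.expected_offset:
            # Speculation success: payload lands at its final place, no extra copy.
            self.buffer[offset:offset + len(payload)] = payload
            self.expected_offset += len(payload)
            self.hits += 1
        else:
            # Misprediction: handle via the normal stack (extra copy if it is
            # our flow after all, otherwise hand off elsewhere).
            self.misses += 1
            if flow_id == self.flow_id:
                self.buffer[offset:offset + len(payload)] = payload
```

In a bulk transfer, fragments of one flow overwhelmingly arrive in order, so the hit counter dominates; this is the same reason the paper's measurements find speculation succeeding in most cases.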


European Conference on Parallel Processing | 2000

Partition Cast — Modelling and Optimizing the Distribution of Large Data Sets in PC Clusters

Felix Rauch; Christian Kurmann; Thomas M. Stricker

Multicasting large amounts of data efficiently to all nodes of a PC cluster is an important operation. In the form of a partition cast it can be used to replicate entire software installations by cloning. Optimizing a partition cast for a given cluster of PCs reveals some interesting architectural tradeoffs, since the fastest solution depends not only on the network speed and topology, but remains highly sensitive to other resources such as the disk speed, the memory system performance and the processing power of the participating nodes. We present an analytical model that guides an implementation towards an optimal configuration for any given PC cluster. The model is validated by measurements on our cluster using Gigabit and Fast Ethernet links. The resulting simple software tool, Dolly, can replicate an entire 2 GByte Windows NT image onto 24 machines in less than 5 minutes.
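The flavor of such an analytical model can be sketched in a few lines (a toy Python approximation, not the paper's model; the bandwidth figures and the per-hop delay are illustrative assumptions). In a multi-drop chain every node simultaneously receives, stores and forwards the stream, so the sustained rate is bounded by the slowest per-node resource, and adding nodes only adds a small pipeline-fill delay:

```python
def chain_replication_time(image_mb, n_nodes, disk_mb_s, net_mb_s, mem_copy_mb_s,
                           per_hop_delay_s=0.1):
    """Rough time estimate for cloning an image over a multi-drop chain.
    Steady-state throughput is limited by the slowest per-node resource
    (disk write, network link, or memory copy bandwidth); the chain adds
    only a pipeline-fill delay per hop. All figures are illustrative."""
    bottleneck = min(disk_mb_s, net_mb_s, mem_copy_mb_s)  # MB/s
    fill_delay = per_hop_delay_s * (n_nodes - 1)          # seconds
    return image_mb / bottleneck + fill_delay
```

With an assumed 9 MB/s disk as the bottleneck, a 2048 MB image reaches 24 machines in roughly 230 seconds, only seconds more than cloning a single machine, which is the qualitative point behind the reported under-5-minute result.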


Concurrency and Computation: Practice and Experience | 2002

Optimizing the distribution of large data sets in theory and practice

Felix Rauch; Christian Kurmann; Thomas M. Stricker

Multicasting large amounts of data efficiently to all nodes of a PC cluster is an important operation. In the form of a partition cast it can be used to replicate entire software installations by cloning. Optimizing a partition cast for a given cluster of PCs reveals some interesting architectural tradeoffs, since the fastest solution depends not only on the network speed and topology, but remains highly sensitive to other resources such as the disk speed, the memory system performance and the processing power of the participating nodes. We present an analytical model that guides an implementation towards an optimal configuration for any given PC cluster. The model is validated by measurements on our cluster using Gigabit and Fast Ethernet links. The resulting simple software tool, Dolly, can replicate an entire 2 GB Windows NT image onto 24 machines in less than 5 minutes.


Operating Systems Review | 2002

Comments on "transparent user-level process checkpoint and restore for migration" by Bozyigit and Wasiq

Felix Rauch; Thomas M. Stricker

The simple checkpointing and migration system for UNIX processes described in the article by Bozyigit and Wasiq [1] can be improved in two ways: first, by a technique to checkpoint and migrate applications without the need to recompile them, and second, by an alternative approach to precisely locate all the data segments of a process that need to be checkpointed. We fully acknowledge the difficulty of checkpointing, or even portable checkpointing, for the general case of processes, and do not claim to solve the many remaining problems with the simplistic checkpointing and migration approaches presented in the earlier article. Still, we are aware of many systems and applications where a simple solution is extremely helpful once it also works with binaries.
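One common way to locate a process's checkpointable data segments on Linux, in the spirit of the second improvement, is to parse `/proc/<pid>/maps` and select the writable, private mappings (heap, data, stacks); read-only code can be restored from the binary itself. A minimal sketch, assuming the standard maps line format (this is an illustration of the general technique, not the authors' implementation):

```python
def writable_segments(maps_text):
    """Given the text of /proc/<pid>/maps, return (start, end) address
    ranges of writable, private mappings: the segments a user-level
    checkpointer must save. Read-only or shared mappings are skipped."""
    segments = []
    for line in maps_text.splitlines():
        fields = line.split()
        addr, perms = fields[0], fields[1]
        # perms looks like "rw-p": we want writable ('w') and
        # private/copy-on-write (trailing 'p') mappings.
        if "w" in perms and perms.endswith("p"):
            start, end = (int(x, 16) for x in addr.split("-"))
            segments.append((start, end))
    return segments
```

Running this over a live process's maps file (e.g. `open("/proc/self/maps").read()`) yields exactly the regions whose contents, together with the register state, make up a user-level checkpoint.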


High Performance Distributed Computing | 2000

Speculative defragmentation - a technique to improve the communication software efficiency for Gigabit Ethernet

Christian Kurmann; Michel G. Muller; Felix Rauch; Thomas M. Stricker


CS technical report | 2003

Cost/performance tradeoffs in network interconnects for clusters of commodity PCs

Christian Kurmann; Felix Rauch; Thomas M. Stricker


Cluster Computing | 1999

Patagonia - A Dual Use Cluster of PCs for Computation and Education

Felix Rauch; Christian Kurmann; Blanca Maria Müller-Lagunez; Thomas M. Stricker


CS technical report | 2000

Partition cast: Modelling and optimizing the distribution of large data sets in PC clusters

Felix Rauch; Christian Kurmann; Thomas M. Stricker


Australasian Database Conference | 2005

OS support for a commodity database on PC clusters: distributed devices vs. distributed file systems

Felix Rauch; Thomas M. Stricker

Collaboration


Dive into Felix Rauch's collaborations.

Top Co-Authors


Christian Kurmann

École Polytechnique Fédérale de Lausanne


Michel G. Muller

École Polytechnique Fédérale de Lausanne


Michel Müller

École Polytechnique Fédérale de Lausanne
