Publication


Featured research published by Hubertus Franke.


International Symposium on Computer Architecture | 2003

DRPM: dynamic speed control for power management in server class disks

Sudhanva Gurumurthi; Anand Sivasubramaniam; Mahmut T. Kandemir; Hubertus Franke

A large portion of the power budget in server environments goes into the I/O subsystem - the disk array in particular. Traditional approaches to disk power management involve completely stopping the disk rotation, which can take a considerable amount of time, making them less useful in cases where idle times between disk requests may not be long enough to outweigh the overheads. This paper presents a new approach called DRPM to modulate disk speed (RPM) dynamically, and gives a practical implementation to exploit this mechanism. Extensive simulations with different workload and hardware parameters show that DRPM can provide significant energy savings without compromising much on performance. This paper also discusses practical issues when implementing DRPM on server disks.
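The key idea is a disk that steps its rotational speed to match observed load instead of spinning down entirely. The sketch below is a minimal illustration of such a control loop, not the controller from the paper; the RPM levels, window size, and thresholds are assumptions made up for the example.

```python
# Minimal sketch of a DRPM-style speed controller (illustrative only; the RPM
# levels and thresholds are assumptions, not the paper's values).

RPM_LEVELS = [3600, 5400, 7200, 10000, 12000]  # hypothetical multi-speed disk

def choose_rpm(recent_queue_lengths, current_rpm):
    """Pick a rotational speed from observed request pressure.

    Long queues -> step up toward full speed to protect response time.
    Sustained idleness -> step down one level to save power, instead of
    stopping the disk entirely as traditional spin-down schemes do.
    """
    avg_queue = sum(recent_queue_lengths) / max(len(recent_queue_lengths), 1)
    idx = RPM_LEVELS.index(current_rpm)
    if avg_queue > 4 and idx < len(RPM_LEVELS) - 1:
        return RPM_LEVELS[idx + 1]   # demand rising: speed up
    if avg_queue < 0.5 and idx > 0:
        return RPM_LEVELS[idx - 1]   # demand low: slow down, keep spinning
    return current_rpm               # otherwise hold the current speed

if __name__ == "__main__":
    rpm = 12000
    for window in ([6, 7, 5], [0, 1, 0], [0, 0, 0], [8, 9, 10]):
        rpm = choose_rpm(window, rpm)
        print(f"queue window {window} -> {rpm} RPM")
```

Because the disk never stops rotating, short idle periods that would not amortize a full spin-down can still yield savings at a lower speed.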


International Symposium on Low Power Electronics and Design | 2007

Thermal-aware task scheduling at the system software level

Jeonghwan Choi; Chen-Yong Cher; Hubertus Franke; Hendrik F. Hamann; Alan J. Weger; Pradip Bose

Power-related issues have become important considerations in current-generation microprocessor design. One of these issues is that of elevated on-chip temperatures, which has an adverse effect on cooling cost and, if not addressed suitably, on chip reliability. In this paper we investigate the general trade-offs between temporal and spatial hot-spot mitigation schemes and thermal time constants, workload variations, and microprocessor power distributions. By leveraging spatial and temporal heat slacks, our schemes lower on-chip unit temperatures by changing the workload in a timely manner with operating system (OS) and existing hardware support.
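One way to exploit spatial heat slack at the system software level is to migrate work away from a hot core toward a cooler one when a temperature threshold is crossed. The sketch below is a hypothetical policy for illustration only; the threshold, core model, and task names are assumptions, not the schedulers evaluated in the paper.

```python
# Sketch of a spatial hot-spot mitigation policy at the OS level (illustrative
# only; threshold and core/task model are assumptions).

from dataclasses import dataclass

THRESHOLD_C = 85.0  # hypothetical trigger temperature

@dataclass
class Core:
    cid: int
    temp_c: float
    task: str  # task currently bound to this core

def rebalance(cores):
    """Swap the task on the hottest core with the one on the coolest core
    when the hottest core exceeds the threshold, exploiting spatial heat
    slack instead of throttling frequency."""
    hottest = max(cores, key=lambda c: c.temp_c)
    coolest = min(cores, key=lambda c: c.temp_c)
    if hottest.temp_c > THRESHOLD_C and hottest is not coolest:
        hottest.task, coolest.task = coolest.task, hottest.task
        return True
    return False

if __name__ == "__main__":
    cores = [Core(0, 91.0, "hot-loop"), Core(1, 62.0, "idle"), Core(2, 70.0, "io-bound")]
    if rebalance(cores):
        print([(c.cid, c.task) for c in cores])
```

Temporal schemes would instead delay or slow the hot task until the unit cools; the paper's trade-off study compares both directions against thermal time constants.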


Job Scheduling Strategies for Parallel Processing | 1997

Modeling of Workload in MPPs

Joefon Jann; Pratap Pattnaik; Hubertus Franke; Fang Wang; Joseph Skovira; Joseph Riordan

In this paper we characterize the inter-arrival time and service time distributions for jobs at a large MPP supercomputing center. Our findings show that the distributions are dispersive and complex enough that they require hyper-Erlang distributions to capture the first three moments of the observed workload. We also present the parameters from the characterization so that they can be easily used both for theoretical studies and for simulations of various scheduling algorithms.
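A hyper-Erlang distribution is a probabilistic mixture of Erlang branches, each branch being a sum of exponential stages. The sketch below samples from such a mixture and estimates its first three moments empirically; the branch probabilities, stage counts, and rates are invented for illustration and are not the fitted parameters reported in the paper.

```python
# Sketch of sampling a hyper-Erlang distribution (mixture of Erlang branches).
# Branch parameters below are made up for illustration, not the paper's fit.

import random

# (probability, number of exponential stages k, rate lam) per branch
BRANCHES = [(0.6, 2, 1.0), (0.4, 5, 0.2)]

def sample_hyper_erlang():
    r, acc = random.random(), 0.0
    for p, k, lam in BRANCHES:
        acc += p
        if r <= acc:
            # Erlang(k, lam) is the sum of k independent Exp(lam) stages
            return sum(random.expovariate(lam) for _ in range(k))
    p, k, lam = BRANCHES[-1]                      # guard against rounding
    return sum(random.expovariate(lam) for _ in range(k))

if __name__ == "__main__":
    xs = [sample_hyper_erlang() for _ in range(100_000)]
    for m in (1, 2, 3):  # the first three raw moments a workload fit must match
        print(f"E[X^{m}] ~= {sum(x**m for x in xs) / len(xs):.3f}")
```

Fitting the branch parameters to the observed inter-arrival and service times is what lets a simulator reproduce the dispersive behavior of the real workload.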


IEEE Transactions on Parallel and Distributed Systems | 2003

An integrated approach to parallel scheduling using gang-scheduling, backfilling, and migration

Yanyong Zhang; Hubertus Franke; José E. Moreira; Anand Sivasubramaniam

Effective scheduling strategies to improve response times, throughput, and utilization are an important consideration in large supercomputing environments. Parallel machines in these environments have traditionally used space-sharing strategies to accommodate multiple jobs at the same time by dedicating the nodes to a single job until it completes. This approach, however, can result in low system utilization and large job wait times. This paper discusses three techniques that can be used beyond simple space-sharing to improve the performance of large parallel systems. The first technique we analyze is backfilling, the second is gang-scheduling, and the third is migration. The main contribution of this paper is an analysis of the effects of combining the above techniques. Using extensive simulations based on detailed models of realistic workloads, the benefits of combining the various techniques are shown over a spectrum of performance criteria.
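Backfilling lets a smaller job jump ahead of the queue when doing so cannot delay the job holding the next reservation. The sketch below shows one common variant of this check (EASY-style backfilling) under simplified assumptions; the job fields and reservation logic are illustrative, not the simulator used in the paper.

```python
# Sketch of an EASY-backfilling style admission check (illustrative only;
# job fields and reservation handling are simplified assumptions).

from dataclasses import dataclass

@dataclass
class Job:
    name: str
    nodes: int            # nodes requested
    est_runtime: float    # user-estimated runtime

def can_backfill(candidate, free_nodes, head_reservation_time, now):
    """A waiting job may start out of order if it fits in the currently free
    nodes and is estimated to finish before the head-of-queue job's reserved
    start time, so the head job is never delayed."""
    fits = candidate.nodes <= free_nodes
    finishes_in_time = now + candidate.est_runtime <= head_reservation_time
    return fits and finishes_in_time

if __name__ == "__main__":
    small = Job("small", nodes=4, est_runtime=2.0)
    print(can_backfill(small, free_nodes=8, head_reservation_time=10.0, now=1.0))  # True
```

Gang-scheduling and migration then add time-sharing and load rebalancing on top of this space-sharing decision, which is the combination the paper evaluates.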


IEEE Computer | 2003

Reducing disk power consumption in servers with DRPM

Sudhanva Gurumurthi; Anand Sivasubramaniam; Mahmut T. Kandemir; Hubertus Franke

Although effective techniques exist for tackling disk power for laptops and workstations, applying them in a server environment presents a considerable challenge, especially under stringent performance requirements. Using a dynamic rotations-per-minute approach to speed control in server disk arrays can provide significant savings in I/O system power consumption without degrading performance.


IBM Journal of Research and Development | 2010

Workload and network-optimized computing systems

David P. LaPotin; Shahrokh Daijavad; Charles L. Johnson; Steven W. Hunter; Kazuaki Ishizaki; Hubertus Franke; Heather D. Achilles; Dan Peter Dumarot; Nancy Anne Greco; Bijan Davari

This paper describes a recent system-level trend toward the use of massive on-chip parallelism combined with efficient hardware accelerators and integrated networking to enable new classes of applications and computing-systems functionality. This system transition is driven by semiconductor physics and emerging network-application requirements. In contrast to general-purpose approaches, workload and network-optimized computing provides significant cost, performance, and power advantages relative to historical frequency-scaling approaches in a serial computational model. We highlight the advantages of on-chip network optimization that enables efficient computation and new services at the network edge of the data center. Software and application development challenges are presented, and a service-oriented architecture application example is shown that characterizes the power and performance advantages for these systems. We also discuss a roadmap for next-generation systems that proportionally scale with future networking bandwidth growth rates and employ 3-D chip integration methods for design flexibility and modularity.


International Parallel and Distributed Processing Symposium | 2000

Improving parallel job scheduling by combining gang scheduling and backfilling techniques

Yanyong Zhang; Hubertus Franke; José E. Moreira; Anand Sivasubramaniam

Two different approaches have been commonly used to address problems associated with space-sharing scheduling strategies: (a) augmenting space sharing with backfilling, which performs out-of-order job scheduling; and (b) augmenting space sharing with time sharing, using a technique called coscheduling or gang scheduling. With three important experimental results (the impact of priority queue order on backfilling, the impact of overestimation of job execution times, and a comparison of scheduling techniques), this paper presents an integrated strategy that combines backfilling with gang scheduling. Using extensive simulations based on detailed models of realistic workloads, the benefits of combining backfilling and gang scheduling are clearly demonstrated over a spectrum of performance criteria.
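The gang-scheduling side of this combination is commonly visualized as an Ousterhout matrix: rows are time slices, columns are processors, and all tasks of a job sit in one row so they are coscheduled. The placement policy below is a naive first-fit written only to illustrate the data structure; it is not the scheduler evaluated in the paper.

```python
# Sketch of an Ousterhout-matrix view of gang scheduling (illustrative only;
# the slot assignment is a naive first-fit, not the paper's scheduler).

def place_gang(matrix, job, num_tasks):
    """Place all tasks of a job in the same row (time slice) so they are
    coscheduled; return the row index, or None if no row has enough free slots."""
    for row_idx, row in enumerate(matrix):
        free_cols = [c for c, slot in enumerate(row) if slot is None]
        if len(free_cols) >= num_tasks:
            for c in free_cols[:num_tasks]:
                row[c] = job
            return row_idx
    return None

if __name__ == "__main__":
    time_slices, processors = 3, 8
    matrix = [[None] * processors for _ in range(time_slices)]
    for job, tasks in (("A", 6), ("B", 4), ("C", 3)):
        print(job, "-> time slice", place_gang(matrix, job, tasks))
    for row in matrix:
        print(row)
```

Backfilling then decides which waiting job to admit into the free slots of a row, which is where the two techniques meet.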


Conference on Object-Oriented Programming Systems, Languages, and Applications | 2002

Creating and preserving locality of Java applications at allocation and garbage collection times

Yefim Shuf; Manish Gupta; Hubertus Franke; Andrew W. Appel; Jaswinder Pal Singh

The growing gap between processor and memory speeds is motivating the need for optimization strategies that improve data locality. A major challenge is to devise techniques suitable for pointer-intensive applications. This paper presents two techniques aimed at improving the memory behavior of pointer-intensive applications with dynamic memory allocation, such as those written in Java. First, we present an allocation-time object placement technique based on the recently introduced notion of prolific (frequently instantiated) types. We attempt to co-locate, at allocation time, objects of prolific types that are connected via object references. Then, we present a novel locality-based graph traversal technique. The benefits of this technique, when applied to garbage collection (GC), are twofold: (i) it improves the performance of GC due to better locality during a heap traversal and (ii) it restructures surviving objects in a way that enhances locality. On multiprocessors, this technique can further reduce overhead due to synchronization and false sharing. The experimental results from an implementation of these techniques in the Jikes RVM [1], on a well-known suite of Java benchmarks (SPECjvm98 [26], SPECjbb2000 [27], and jOlden [4]), are very encouraging. The object co-allocation technique improves application performance by up to 21% (10% on average) in the Jikes RVM configured with a non-copying mark-and-sweep collector. The locality-based traversal technique reduces GC times by up to 20% (10% on average) and improves the performance of applications by up to 14% (6% on average) in the Jikes RVM configured with a copying semi-space collector. Both techniques combined can improve application performance by up to 22% (10% on average) in the Jikes RVM configured with a non-copying mark-and-sweep collector.
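The locality-based traversal can be thought of as choosing a copy order in which an object and the objects it references land near each other in to-space. The sketch below uses a toy heap model to contrast that order with a plain breadth-first (Cheney-style) scan; it is an assumption-laden illustration, not the Jikes RVM implementation from the paper.

```python
# Sketch of a locality-oriented copy order for a copying collector
# (illustrative only; the heap is modeled as a plain reference dict).

def copy_in_locality_order(roots, references):
    """Copy reachable objects depth-first so an object and the objects it
    references tend to land next to each other in to-space, unlike a
    breadth-first (Cheney-style) scan that separates parents from children."""
    to_space, visited = [], set()

    def copy(obj):
        if obj in visited:
            return
        visited.add(obj)
        to_space.append(obj)                   # "copy": record the new placement
        for child in references.get(obj, []):  # then immediately follow its refs
            copy(child)

    for root in roots:
        copy(root)
    return to_space  # index in this list models the new address order

if __name__ == "__main__":
    refs = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
    print(copy_in_locality_order(["A"], refs))  # ['A', 'B', 'D', 'C']
```

Co-allocation applies the same intuition earlier, at allocation time, by placing connected objects of prolific types in adjacent memory before the collector ever runs.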


IBM Journal of Research and Development | 2014

Software defined environments: an introduction

Chung-Sheng Li; B. L. Brech; Scott W. Crowder; Daniel M. Dias; Hubertus Franke; Matt R. Hogstrom; David Lindquist; Giovanni Pacifici; Stefan Pappe; Bala Rajaraman; Josyula R. Rao; Radha Ratnaparkhi; Rodney A. Smith; Michael D. Williams

During the past few years, enterprises have been increasingly aggressive in moving mission-critical and performance-sensitive applications to the cloud, while at the same time many new mobile, social, and analytics applications are directly developed and operated on cloud computing platforms. These two movements are encouraging the shift of the value proposition of cloud computing from cost reduction to simultaneous agility and optimization. These requirements (agility and optimization) are driving the recent disruptive trend of software defined computing, in which the entire computing infrastructure (compute, storage, and network) is becoming software defined and dynamically programmable. The key elements within software defined environments include capability-based resource abstraction, goal-based and policy-based workload definition, and outcome-based continuous mapping of the workload to the available resources. Furthermore, software defined environments provide the tooling and capabilities to compose workloads from existing components that are then continuously and autonomously mapped onto the underlying programmable infrastructure. These elements enable software defined environments to achieve agility, efficiency, and continuous outcome-optimized provisioning and management, plus continuous assurance for resiliency and security. This paper provides an overview and introduction to the key elements and challenges of software defined environments.
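The combination of a declarative workload definition and continuous, outcome-based mapping onto abstracted resources can be illustrated with a small matching loop. The workload fields, resource pools, and scoring below are invented for the sketch; they are not an actual SDE interface.

```python
# Sketch of policy-based workload-to-resource mapping in a software defined
# environment (illustrative only; all fields and pools are assumptions).

WORKLOAD = {
    "name": "web-tier",
    "goal": {"latency_ms": 50},          # outcome the mapping must preserve
    "policy": {"min_replicas": 2, "prefer": "ssd"},
}

POOLS = [
    {"name": "pool-a", "free_vcpus": 8,  "storage": "ssd", "observed_latency_ms": 35},
    {"name": "pool-b", "free_vcpus": 32, "storage": "hdd", "observed_latency_ms": 70},
]

def map_workload(workload, pools):
    """Pick resource pools that satisfy the declared goal and policy;
    re-running this on fresh telemetry models continuous outcome-based mapping."""
    def eligible(pool):
        return pool["observed_latency_ms"] <= workload["goal"]["latency_ms"]
    candidates = [p for p in pools if eligible(p)]
    # prefer pools matching the storage policy, then those with the most headroom
    candidates.sort(key=lambda p: (p["storage"] != workload["policy"]["prefer"],
                                   -p["free_vcpus"]))
    return [p["name"] for p in candidates[:workload["policy"]["min_replicas"]]]

if __name__ == "__main__":
    print(map_workload(WORKLOAD, POOLS))  # ['pool-a'] given the toy telemetry
```

The essential point is that the workload states goals and policies, not machines; the mapping is recomputed as observed outcomes change.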


Symposium on Frontiers of Massively Parallel Computation | 1996

Gang scheduling for highly efficient, distributed multiprocessor systems

Hubertus Franke; Pratap Pattnaik; Larry Rudolph

We have implemented a job scheduling system for workstation clusters and massively parallel systems with highly efficient message-passing interconnects that supports space and time sharing through multiuser gang scheduling of parallel jobs. The system is available on the IBM SP-2 cluster. It is highly modular and scalable, and it can easily be adapted to a variety of other MPP systems. The system supports various scheduling policies. We architected the system so that the time-sharing of processors avoids any significant serialization and extra resource consumption, yet preserves the reliability and the efficiency of the high-performance communication subsystem that characterize dedicated, non-time-shared systems.
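Time sharing under gang scheduling means that at each quantum one whole row of coscheduled jobs runs while the others are preempted together. The sketch below shows that rotation on a toy matrix; real systems coordinate the switch across nodes through the communication subsystem, and the matrix contents here are made up for illustration.

```python
# Sketch of the time-sharing rotation in gang scheduling (illustrative only;
# the matrix below is a toy, not an actual allocation on the IBM SP-2).

import itertools

# Each row is one time slice; every task of a job sits in the same row,
# so all of a job's processes are context-switched in and out together.
MATRIX = [
    ["A", "A", "A", "B"],
    ["C", "C", "B", "B"],
]

def run_quanta(matrix, quanta):
    """Cycle through the rows: during each quantum, exactly the jobs in the
    active row run on their processors; everything else is preempted."""
    for q, row in zip(range(quanta), itertools.cycle(matrix)):
        active = sorted(set(job for job in row if job is not None))
        print(f"quantum {q}: running {active}")

if __name__ == "__main__":
    run_quanta(MATRIX, 4)
```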

