Chung-Hsing Hsu
Rutgers University
Publications
Featured research published by Chung-Hsing Hsu.
Programming Language Design and Implementation | 2003
Chung-Hsing Hsu; Ulrich Kremer
This paper presents the design and implementation of a compiler algorithm that effectively optimizes programs for energy usage using dynamic voltage scaling (DVS). The algorithm identifies program regions where the CPU can be slowed down with negligible performance loss. It is implemented as a source-to-source transformation using the SUIF2 compiler infrastructure. Physical measurements on a high-performance laptop show that total system (i.e., laptop) energy savings of up to 28% can be achieved with performance degradation of less than 5% for the SPECfp95 benchmarks. On average, the system energy and energy-delay product are reduced by 11% and 9%, respectively, with a performance slowdown of 2%. It was also found that the energy usage of programs compiled with our DVS algorithm is within 6% of the theoretical lower bound. To the best of our knowledge, this is one of the first works to evaluate DVS algorithms by physical measurements.
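To make the region-level slowdown idea above concrete, the sketch below shows one hypothetical way to pick regions to run at a reduced frequency under a fixed slowdown budget. It is not the paper's SUIF2 algorithm; the profile numbers, the 5% budget, and the greedy selection are assumptions made purely for illustration.

# Hypothetical sketch: choose program regions to run at a lower CPU
# frequency so that the total slowdown stays within a budget.
def select_slow_regions(profiles, slowdown_budget=0.05):
    """profiles: list of (region, t_fast, t_slow) execution times in seconds.
    Returns the set of regions chosen to run at the reduced frequency."""
    total_fast = sum(t_fast for _, t_fast, _ in profiles)
    allowed_extra = slowdown_budget * total_fast

    # Rank regions by slowdown penalty relative to their run time; regions
    # that barely slow down (e.g. memory-bound loops) come first.
    ranked = sorted(profiles, key=lambda p: (p[2] - p[1]) / p[1])

    chosen, extra = set(), 0.0
    for region, t_fast, t_slow in ranked:
        penalty = t_slow - t_fast
        if extra + penalty <= allowed_extra:
            chosen.add(region)
            extra += penalty
    return chosen

# Example with made-up profile numbers (times at full vs. reduced frequency):
profiles = [("loop_A", 2.0, 2.02), ("loop_B", 1.0, 1.40), ("init", 0.5, 0.51)]
print(select_slow_regions(profiles))   # -> {'loop_A', 'init'}

The greedy ranking prefers regions whose slowdown penalty is small relative to the time they spend running, which is the same intuition the paper exploits for memory-bound code.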
Languages, Compilers, and Tools for Embedded Systems | 2002
Hendra Saputra; Mahmut T. Kandemir; Narayanan Vijaykrishnan; Mary Jane Irwin; Jie S. Hu; Chung-Hsing Hsu; Ulrich Kremer
As energy consumption has become a major constraint in current system design, it is essential to look beyond the traditional low-power circuit and architectural optimizations. Further, software makes up an increasingly large portion of embedded/portable systems. Consequently, optimizing the software in conjunction with underlying low-power hardware features such as voltage scaling is vital. In this paper, we present two compiler-directed energy optimization strategies based on voltage scaling: static voltage scaling and dynamic voltage scaling. In static voltage scaling, the compiler determines a single supply voltage level for the entire input program. We primarily aim at improving the energy consumption of a given code without increasing its execution time. To accomplish this, we employ classical loop-level compiler optimizations; however, we use these optimizations to create opportunities for voltage scaling to save energy rather than to increase program performance. In dynamic voltage scaling, the compiler can select different supply voltage levels for different parts of the code. Our compilation strategy is based on integer linear programming and can accommodate energy/performance constraints. For a benchmark suite of array-based scientific codes and embedded video/image applications, our experiments show average energy savings of 31.8% when static voltage scaling is used. Our dynamic voltage scaling strategy saves 15.3% more energy than static voltage scaling when invoked under the same performance constraints.
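The dynamic voltage-selection problem described above can be illustrated with a tiny brute-force model: pick one (voltage, frequency) setting per code region so that energy is minimized while total run time stays within a performance constraint. The paper formulates this as an integer linear program; the exhaustive enumeration, the Crusoe-like settings table, and the cycle counts below are illustrative assumptions only.

from itertools import product

# (supply voltage in V, frequency in MHz); loosely Crusoe-like values.
SETTINGS = [(1.65, 700), (1.40, 500), (1.10, 300)]
REGIONS = {"region1": 2.0e8, "region2": 1.0e8}   # made-up cycle counts

def best_assignment(regions, slowdown_limit=1.20):
    # Baseline: every region at the highest frequency.
    base_time = sum(cycles / SETTINGS[0][1] for cycles in regions.values())
    best = None
    for choice in product(range(len(SETTINGS)), repeat=len(regions)):
        time = energy = 0.0
        for (name, cycles), idx in zip(regions.items(), choice):
            volt, freq = SETTINGS[idx]
            time += cycles / freq            # time in microseconds (freq in MHz)
            energy += cycles * volt * volt   # switching energy per cycle ~ C * V^2
        if time <= slowdown_limit * base_time and (best is None or energy < best[0]):
            best = (energy, {name: SETTINGS[i] for name, i in zip(regions, choice)})
    return best

print(best_assignment(REGIONS))
# With these numbers: region1 stays at (1.65, 700), region2 drops to (1.40, 500).

For real programs the enumeration would be replaced by an ILP solver, but an objective and constraint of this shape (voltage-dependent energy per cycle, frequency-dependent time, a bounded slowdown) is what such a formulation captures.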
PACS '00 Proceedings of the First International Workshop on Power-Aware Computer Systems - Revised Papers | 2000
Chung-Hsing Hsu; Ulrich Kremer; Michael S. Hsiao
Dynamic voltage and frequency scaling has been identified as one of the most effective ways to reduce power dissipation. This paper discusses a compilation strategy that identifies opportunities for dynamic voltage and frequency scaling of the CPU without a significant increase in overall program execution time. The paper introduces a simple yet effective performance model to determine an efficient CPU slowdown factor for memory-bound loop computations. Simulation results of a superscalar target architecture and a program kernel compiled at different optimization levels show the potential benefit of the proposed compiler optimization. The energy savings are reported for a hypothetical target machine with power dissipation characteristics similar to Transmeta's Crusoe TM5400 processor.
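The kind of analytical model the abstract refers to can be sketched by splitting execution time into a frequency-dependent CPU part and a frequency-independent memory-stall part; the function below computes the largest CPU slowdown factor that keeps a loop within a given performance tolerance. The decomposition and the numbers are simplifying assumptions for illustration, not the paper's exact model.

def max_slowdown_factor(t_cpu, t_mem, tolerance=0.05):
    """t_cpu: time spent doing CPU work at full frequency.
    t_mem: time spent stalled on memory (assumed unaffected by CPU frequency).
    Returns the largest factor d >= 1 by which the CPU part may be stretched
    so that d * t_cpu + t_mem <= (1 + tolerance) * (t_cpu + t_mem)."""
    total = t_cpu + t_mem
    return ((1 + tolerance) * total - t_mem) / t_cpu

# A loop that stalls on memory 80% of the time tolerates a much larger
# CPU slowdown than a purely CPU-bound loop does:
print(max_slowdown_factor(t_cpu=0.2, t_mem=0.8))   # ~1.25
print(max_slowdown_factor(t_cpu=1.0, t_mem=0.0))   # ~1.05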
PACS '02 Proceedings of the 2nd International Conference on Power-Aware Computer Systems | 2002
Chung-Hsing Hsu; Ulrich Kremer
This paper evaluates five policies for cluster-wide power management in server farms. The policies employ various combinations of dynamic voltage scaling and node vary-on/vary-off (VOVO) to reduce the aggregate power consumption of a server cluster during periods of reduced workload. We evaluate the policies using a validated simulator that calculates the energy usage and response times of a Web server cluster serving traces culled from real-life Web server workloads. Our results show that a relatively simple policy of independent dynamic voltage scaling on each server node can achieve savings of up to 29% and is competitive with more complex schemes for some workloads. A policy that brings nodes online and takes them offline depending on the workload intensity also produces significant savings of up to 42%. The largest savings are obtained by using a coordinated voltage scaling policy in conjunction with VOVO. This policy provides up to 18% more savings than using VOVO in isolation. All five policies maintain server response times within acceptable norms.
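A coordinated policy in the spirit of the ones evaluated here can be sketched as two steps: keep just enough nodes online for the offered load (VOVO), then pick the lowest common CPU frequency that still meets a utilization target (DVS). The thresholds, frequency steps, and capacity figures below are illustrative assumptions, not the policies measured in the paper.

import math

FREQ_STEPS = [300, 500, 700]        # MHz, slowest first
FULL_SPEED_CAPACITY = 1000.0        # requests/s one node sustains at 700 MHz

def coordinated_policy(offered_load, target_utilization=0.7):
    """offered_load: aggregate requests/s arriving at the cluster.
    Returns (nodes_online, cpu_frequency_mhz)."""
    # VOVO step: keep just enough nodes so each stays under the target
    # utilization when running at full speed.
    nodes = max(1, math.ceil(offered_load /
                             (target_utilization * FULL_SPEED_CAPACITY)))
    # DVS step: lowest common frequency that still serves the per-node load.
    per_node = offered_load / nodes
    for freq in FREQ_STEPS:
        capacity = FULL_SPEED_CAPACITY * freq / FREQ_STEPS[-1]
        if per_node <= target_utilization * capacity:
            return nodes, freq
    return nodes, FREQ_STEPS[-1]

print(coordinated_policy(900.0))   # -> (2, 500) with these made-up numbers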
The Journal of Supercomputing | 2004
Chung-Hsing Hsu; Ulrich Kremer
Loop tiling is an effective optimizing transformation to boost the memory performance of a program, especially for dense matrix scientific computations. The magnitude and stability of the achieved performance improvements are heavily dependent on the appropriate selection of tile sizes. Many existing tile selection algorithms try to find tile sizes which eliminate self-interference cache conflict misses, maximize cache utilization, and minimize cross-interference cache conflict misses. These techniques depend heavily on the actual layout of the arrays in memory. Array padding, an effective data layout optimization technique, is therefore incorporated by many algorithms to help loop tiling stabilize its effectiveness by avoiding “pathological” array sizes. In this paper, we examine several such combined algorithms in terms of cost-benefit trade-offs and introduce a new algorithm. The preliminary experimental results show that more precise and costly tile selection and array padding algorithms may not be justified by the resulting performance improvements, since such improvements may also be achieved by much simpler and therefore less expensive strategies. The key issues in finding a good tiling algorithm are (1) to identify critical performance factors and (2) to develop corresponding performance models that allow predictions at a sufficient level of accuracy. Following this insight, we have developed a new tiling algorithm that performs better than previous algorithms in terms of execution time and stability, and generates code with performance comparable to the best measured algorithm. Experimental results on two standard benchmark kernels for matrix multiply and LU factorization show that the new algorithm is orders of magnitude faster than the best previous algorithm without sacrificing stability or the execution speed of the generated code.
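As a point of contrast with the precise conflict-miss models discussed above, the snippet below shows a deliberately crude, capacity-only tile-size heuristic; the cache parameters are assumptions. The abstract's argument is precisely that cheap models of this kind, extended with only the critical performance factors, can come close to far more expensive algorithms.

import math

def square_tile_size(cache_bytes=32 * 1024, elem_bytes=8, occupancy=0.5):
    """Largest T such that a T x T tile fills at most `occupancy` of the
    cache. Conflict misses and the actual array layout are ignored."""
    return math.isqrt(int(occupancy * cache_bytes / elem_bytes))

print(square_tile_size())   # 45 for a 32 KB cache, 8-byte elements, 50% occupancy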
Languages and Compilers for Parallel Computing | 2001
Chung-Hsing Hsu; Ulrich Kremer
Dynamic voltage and frequency scaling (DVFS) of the CPU has been shown to be one of the most effective ways to reduce the energy consumption of a program. This paper discusses the benefit of dynamic voltage and frequency scaling for scientific applications under different optimization levels. The reported experiments show that there are still many opportunities to apply DVFS to highly optimized codes, and that the profitability is significant across the benchmarks. It is also observed that there are performance and energy consumption trade-offs for different optimization levels in the presence of DVFS. While in general compiling for performance improves energy usage as well, in some cases less successful optimizations lead to higher energy savings. Finally, a comparison of the benefits of operating system support versus compiler support for DVFS is discussed.
Archive | 1998
Chung-Hsing Hsu; Ulrich Kremer
PACS | 2002
Chung-Hsing Hsu; Ulrich Kremer
Archive | 2003
Chung-Hsing Hsu; Ulrich Kremer
Archive | 2001
Chung-Hsing Hsu; Ulrich Kremer