
Publication


Featured research published by Rami G. Melhem.


Real-Time Systems Symposium | 2001

Dynamic and aggressive scheduling techniques for power-aware real-time systems

Hakan Aydin; Rami G. Melhem; Daniel Mossé; Pedro Mejía-Alvarez

In this paper, we address power-aware scheduling of periodic hard real-time tasks using dynamic voltage scaling. Our solution includes three parts: (a) a static (offline) solution to compute the optimal speed, assuming worst-case workload for each arrival, (b) an online speed reduction mechanism to reclaim energy by adapting to the actual workload, and (c) an online, adaptive and speculative speed adjustment mechanism to anticipate early completions of future executions by using the average-case workload information. All these solutions still guarantee that all deadlines are met. Our simulation results show that the reclaiming algorithm saves a striking 50% of the energy over the static algorithm. Further, our speculative techniques yield approximately 20% additional savings over the reclaiming algorithm. In this study, we also establish that solving an instance of the static power-aware scheduling problem is equivalent to solving an instance of the reward-based scheduling problem [1, 4] with concave reward functions.
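
As a rough illustration of the static and reclaiming components, the sketch below computes the two speeds in Python; the normalized speed range, the utilization-based static speed, the task parameters, and the slack value are assumptions made for illustration, not the authors' exact algorithm.

# Static speed: the lowest uniform speed at which the worst-case utilization
# stays feasible under EDF (all parameters below are illustrative).
def static_speed(tasks):
    utilization = sum(wcet / period for wcet, period in tasks)
    return min(1.0, utilization)

# Reclaiming: slow the next job so its remaining worst-case work still fits
# in its budget, enlarged by slack left over from earlier early completions.
def reclaimed_speed(remaining_wcet, budget):
    return min(1.0, remaining_wcet / budget)

tasks = [(1.0, 4.0), (2.0, 8.0), (1.0, 10.0)]   # (worst-case execution time, period)
s_static = static_speed(tasks)                   # 0.6 for this task set
s_dynamic = reclaimed_speed(1.0, 2.0 + 1.5)      # 1.5 units of reclaimed slack -> ~0.29
print(s_static, round(s_dynamic, 3))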


IEEE Transactions on Computers | 2004

Power-aware scheduling for periodic real-time tasks

Hakan Aydin; Rami G. Melhem; Daniel Mossé; Pedro Mejía-Alvarez

We address power-aware scheduling of periodic tasks to reduce CPU energy consumption in hard real-time systems through dynamic voltage scaling. Our intertask voltage scheduling solution includes three components: 1) a static (offline) solution to compute the optimal speed, assuming worst-case workload for each arrival, 2) an online speed reduction mechanism to reclaim energy by adapting to the actual workload, and 3) an online, adaptive and speculative speed adjustment mechanism to anticipate early completions of future executions by using the average-case workload information. All these solutions still guarantee that all deadlines are met. Our simulation results show that our reclaiming algorithm alone outperforms other recently proposed intertask voltage scheduling schemes. Our speculative techniques are shown to provide additional gains, approaching the theoretical lower bound by a margin of 10 percent.


IEEE Transactions on Parallel and Distributed Systems | 2003

Scheduling with dynamic voltage/speed adjustment using slack reclamation in multiprocessor real-time systems

Dakai Zhu; Rami G. Melhem; Bruce R. Childers

The high power consumption of modern processors has become a major concern because it leads to decreased mission duration (for battery-operated systems), increased heat dissipation, and decreased reliability. While many techniques have been proposed to reduce power consumption for uniprocessor systems, there has been considerably less work on multiprocessor systems. In this paper, based on the concept of slack sharing among processors, we propose two novel power-aware scheduling algorithms for task sets with and without precedence constraints executing on multiprocessor systems. These scheduling techniques reclaim the time unused by a task to reduce the execution speed of future tasks and, thus, reduce the total energy consumption of the system. We also study the effect of discrete voltage/speed levels on the energy savings for multiprocessor systems and propose a new scheme of slack reservation to incorporate voltage/speed adjustment overhead in the scheduling algorithms. Simulation and trace-based results indicate that our algorithms achieve substantial energy savings on systems with variable voltage processors. Moreover, processors with a few discrete voltage/speed levels obtain nearly the same energy savings as processors with continuous voltage/speed, and the effect of voltage/speed adjustment overhead on the energy savings is relatively small.
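
A toy version of the slack-sharing idea is sketched below; independent jobs, a greedy earliest-available-processor dispatcher, and continuous normalized speeds are simplifying assumptions, whereas the paper's algorithms also handle precedence constraints, discrete speed levels, and adjustment overhead.

import heapq

# Toy slack sharing across two processors: when a job finishes early, the
# leftover time stays with that processor and lets the next job dispatched
# there run at a lower speed.
def dispatch(ready, wcet, actual):
    speeds = []
    for i in range(len(wcet)):
        free_at, slack = heapq.heappop(ready)        # earliest-available processor
        budget = wcet[i] + slack                     # own WCET plus inherited slack
        speed = min(1.0, wcet[i] / budget)           # slower when slack was inherited
        used = actual[i] / speed                     # wall-clock time actually consumed
        heapq.heappush(ready, (free_at + used, budget - used))  # leftover is new slack
        speeds.append(round(speed, 3))
    return speeds

ready = [(0.0, 0.0), (0.0, 0.0)]                     # two processors, no initial slack
heapq.heapify(ready)
print(dispatch(ready, wcet=[4, 4, 4, 4], actual=[2, 3, 4, 4]))  # [1.0, 1.0, 0.667, 0.8]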


International Conference on Computer-Aided Design | 2004

The effects of energy management on reliability in real-time embedded systems

Dakai Zhu; Rami G. Melhem; Daniel Mossé

The slack time in real-time systems can be used by recovery schemes to increase system reliability as well as by frequency and voltage scaling techniques to save energy. Moreover, the rate of transient faults (i.e., soft errors caused, for example, by cosmic ray radiation) also depends on the system's operating frequency and supply voltage. Thus, there is an interesting trade-off between system reliability and energy consumption. This work first investigates the effects of frequency and voltage scaling on the fault rate and proposes two fault rate models based on previously published data. Then, the effects of energy management on reliability are studied. Our analysis shows that energy management through frequency and voltage scaling could dramatically reduce system reliability, and that ignoring the effects of energy management on the fault rate is too optimistic and may lead to unsatisfactory system reliability.
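
The trade-off can be made concrete with an exponential fault-rate model in which lowering the frequency (and voltage) raises the transient-fault rate; the sketch below evaluates such a model, with the functional form and all parameter values chosen purely for illustration rather than taken from the paper's fitted models.

# Illustrative exponential fault-rate model: scaling the frequency (and
# voltage) down raises the transient-fault rate.
def fault_rate(freq, lambda0=1e-6, d=2.0, f_min=0.5):
    """Transient faults per second at normalized frequency freq in [f_min, 1]."""
    return lambda0 * 10 ** (d * (1.0 - freq) / (1.0 - f_min))

for f in (1.0, 0.8, 0.6, 0.5):
    print(f, f"{fault_rate(f):.2e}")   # rate grows by 100x from f = 1.0 to f = 0.5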


Euromicro Conference on Real-Time Systems | 2001

Determining optimal processor speeds for periodic real-time tasks with different power characteristics

Hakan Aydin; Rami G. Melhem; Daniel Mossé; Pedro Mejía-Alvarez

In this paper, we provide an efficient solution to the problem of determining optimal processor speeds for periodic real-time tasks with (potentially) different power consumption characteristics. We show that a task T_i can run at a constant speed S_i at every instance without hurting optimality. We sketch an O(n^2 log n) algorithm to compute the optimal S_i values. We also prove that the EDF (Earliest Deadline First) scheduling policy can be used to obtain a feasible schedule with these optimal speed values.
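
The feasibility claim has a compact form: running task T_i at constant speed S_i stretches each job to C_i / S_i, and EDF schedules the periodic set whenever the stretched utilization does not exceed 1. The sketch below checks this condition for an assumed task set and assumed speeds; it does not compute the optimal S_i values.

# EDF feasibility with per-task constant speeds: task i at speed S_i takes
# C_i / S_i time per job, so the set is schedulable iff the stretched
# utilization is at most 1.
def edf_feasible(tasks):
    """tasks: list of (wcet, period, speed), speeds normalized to (0, 1]."""
    return sum(c / (s * p) for c, p, s in tasks) <= 1.0

tasks = [(1.0, 5.0, 0.5), (2.0, 10.0, 0.8), (3.0, 15.0, 1.0)]
print(edf_feasible(tasks))   # 0.4 + 0.25 + 0.2 = 0.85 -> True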


International Parallel and Distributed Processing Symposium | 2003

Energy aware scheduling for distributed real-time systems

Ramesh Mishra; Namrata Rastogi; Dakai Zhu; Daniel Mossé; Rami G. Melhem

Power management has become popular in mobile computing as well as in server farms. Although a lot of work has been done to manage the energy consumption of uniprocessor real-time systems, considerably less has been done for their multicomputer counterparts. For a set of real-time tasks with precedence constraints executing on a distributed system, we propose new static and dynamic power management schemes. Assuming a static schedule generated by any list scheduling heuristic, our static power management scheme uses the static slack (if any) based on the degree of parallelism in the schedule. To account for the run-time behavior of tasks, an online dynamic power management technique is proposed to further exploit the idle periods of processors. Comparing our static technique with simple static power management, in which the static slack is distributed across the schedule proportionally, we find that our static scheme saves an average of 10% more energy. When combined with the dynamic schemes, our schemes significantly improve energy savings.
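
The proportional baseline is easy to picture: the static slack of a serial schedule is shared among tasks in proportion to their execution times, which amounts to a uniform slowdown. The sketch below illustrates that baseline with assumed task lengths and an assumed deadline; the parallelism-aware scheme described above instead allocates slack according to the degree of parallelism in the schedule.

# Simple proportional static power management (the baseline mentioned above):
# static slack is shared among tasks in proportion to their execution times.
def proportional_slowdown(exec_times, deadline):
    total = sum(exec_times)
    slack = max(0.0, deadline - total)
    # Each task gets slack * (c / total) extra time; its speed drops accordingly.
    return [round(c / (c + slack * c / total), 3) for c in exec_times]

print(proportional_slowdown([2.0, 3.0, 5.0], deadline=12.0))  # uniform 0.833 slowdown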


IEEE Transactions on Parallel and Distributed Systems | 1997

Fault-tolerance through scheduling of aperiodic tasks in hard real-time multiprocessor systems

Sunondo Ghosh; Rami G. Melhem; Daniel Mossé

Real-time systems are being increasingly used in several applications which are time-critical in nature. Fault tolerance is an important requirement of such systems, due to the catastrophic consequences of not tolerating faults. We study a scheme that provides fault tolerance through scheduling in real-time multiprocessor systems. We schedule multiple copies of dynamic, aperiodic, nonpreemptive tasks in the system, and use two techniques that we call deallocation and overloading to achieve a high acceptance ratio (the percentage of arriving tasks scheduled by the system). The paper compares the performance of our scheme with that of other fault-tolerant scheduling schemes and determines how much deallocation and overloading each affect the acceptance ratio of tasks. The paper also provides a technique that can help real-time system designers determine the number of processors required to provide fault tolerance in dynamic systems. Lastly, a formal model is developed for the analysis of systems with uniform tasks.
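
Overloading has a compact core: backup copies whose primaries run on different processors may share one backup slot, because a single processor failure can trigger at most one of them. The bookkeeping below is only an illustrative sketch under that single-fault assumption, not the paper's scheduling algorithm.

# Backup overloading sketch: a slot is a set of processors hosting the
# primaries whose backups are overloaded into it.
def assign_backup(slots, primary_proc):
    for slot in slots:
        if primary_proc not in slot:   # no co-resident backup shares this primary's processor
            slot.add(primary_proc)
            return slots
    slots.append({primary_proc})       # otherwise reserve a new backup slot
    return slots

slots = []
for proc in [0, 1, 0, 2, 1, 0]:        # processors hosting the primary copies
    slots = assign_backup(slots, proc)
print(slots)                           # [{0, 1, 2}, {0, 1}, {0}]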


Conference on High Performance Computing (Supercomputing) | 2005

On the Feasibility of Optical Circuit Switching for High Performance Computing Systems

Kevin J. Barker; Alan F. Benner; Raymond R. Hoare; Adolfy Hoisie; Darren J. Kerbyson; Dan Li; Rami G. Melhem; Ramakrishnan Rajamony; Eugen Schenfeld; Shuyi Shao; Craig B. Stunkel; Peter A. Walker

The interconnect plays a key role in both the cost and performance of large-scale HPC systems. The cost of future high-bandwidth electronic interconnects mushrooms due to the expensive optical transceivers needed between electronic switches. We describe a potentially cheaper and more power-efficient approach to building high-performance interconnects. Through empirical analysis of HPC applications, we find that the bulk of inter-processor communication (barring collectives) is bounded in degree and changes very slowly or never. We therefore propose a two-network interconnect: an Optical Circuit Switching (OCS) network that handles long-lived bulk data transfers using optical switches, and a secondary, lower-bandwidth Electronic Packet Switching (EPS) network. An OCS could be significantly cheaper, as it uses fewer optical transceivers than an electronic network. Collectives and transient communication packets traverse the electronic network. We present compiler techniques and dynamic run-time policies that exploit this two-network interconnect. Simulation results show that our approach provides high performance at low cost.
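
A minimal sketch of the routing split is shown below; the byte threshold, the flow records, and the circuit bookkeeping are assumptions made for illustration rather than the compiler techniques or run-time policies evaluated in the paper.

# Routing split sketch: long-lived, high-volume point-to-point flows go over
# optical circuits (OCS); collectives and short transient messages use the
# electronic packet network (EPS).
def choose_network(flow, circuits, bytes_threshold=1 << 26):
    if flow["collective"] or flow["total_bytes"] < bytes_threshold:
        return "EPS"
    if (flow["src"], flow["dst"]) in circuits:
        return "OCS"
    circuits.add((flow["src"], flow["dst"]))         # request a circuit for next time
    return "EPS (circuit being set up)"

circuits = set()
flows = [
    {"src": 0, "dst": 5, "total_bytes": 1 << 30, "collective": False},
    {"src": 0, "dst": 5, "total_bytes": 1 << 30, "collective": False},
    {"src": 3, "dst": 7, "total_bytes": 4096, "collective": False},
    {"src": 0, "dst": 0, "total_bytes": 1 << 30, "collective": True},
]
for f in flows:
    print(choose_network(f, circuits))   # EPS (circuit being set up), OCS, EPS, EPS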


Design, Automation, and Test in Europe | 2010

Increasing PCM main memory lifetime

Alexandre Peixoto Ferreira; Miao Zhou; Santiago Bock; Bruce R. Childers; Rami G. Melhem; Daniel Mossé

The introduction of Phase-Change Memory (PCM) as a main memory technology has great potential to achieve a large energy reduction. PCM has desirable energy and scalability properties, but its use for main memory also poses challenges such as limited write endurance, with at most 10^7 writes per bit cell before failure. This paper describes techniques to enhance the lifetime of PCM when used for main memory. Our techniques are (a) writeback minimization with new cache replacement policies, (b) avoidance of unnecessary writes, which writes only the bit cells that actually change, and (c) endurance management with a novel PCM-aware swap algorithm for wear-leveling. A failure detection algorithm is also incorporated to improve the reliability of PCM. With these approaches, the lifetime of a PCM main memory is increased from just a few days to over 8 years.
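
Item (b) can be illustrated with a differential write that compares old and new line contents and counts the bit cells that would actually be written; the 64-byte line width and the byte-level comparison below are illustrative assumptions.

# Differential-write sketch ("write only the bit cells that actually change").
def bits_actually_written(old_line: bytes, new_line: bytes) -> int:
    return sum(bin(a ^ b).count("1") for a, b in zip(old_line, new_line))

old = bytes(64)                  # a 64-byte line of zeros
new = bytearray(old)
new[0] = 0xFF                    # only the first byte changes
print(bits_actually_written(old, bytes(new)), "of", len(old) * 8, "bit cells")   # 8 of 512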


IEEE Transactions on Computers | 2004

The interplay of power management and fault recovery in real-time systems

Rami G. Melhem; Daniel Mossé; Elmootazbellah Nabil Elnozahy

We describe how to exploit the scheduling slack in a real-time system to reduce energy consumption and achieve fault tolerance at the same time. During failure-free operation, a task takes checkpoints to enable recovery from failure. Additionally, the system exploits the slack to conserve energy by reducing the processor speed. If a task fails, it will restart from a saved checkpoint and execute at maximum speed to guarantee that the deadlines are met. We show that the number of checkpoints and their placements interact in subtle ways with the power management policy. We study two checkpoint placement policies for aperiodic tasks and analytically derive the optimal number of checkpoints to conserve energy under each. This optimal number allows the CPU speed to be slowed down to the level that yields minimum energy consumption, while still guaranteeing recoverability of tasks under each checkpointing policy. The results show that traditional periodic checkpointing is not the best policy for the combined purpose of conserving energy and guaranteeing recovery. Instead, better energy savings are possible through a nonuniform distribution of checkpoints that takes into account the energy consumption and reliability factors. Depending on the amount of slack and the checkpointing overhead, energy can be reduced by up to 68 percent under nonuniform checkpointing. We also demonstrate the applicability of these checkpoint placement policies to periodic tasks.
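
The interaction between checkpoint count and speed can be sketched with a deliberately simplified timing model: k equally spaced checkpoints add overhead but shrink the segment that must be re-executed at full speed after a fault, which in turn lets the fault-free run proceed more slowly. The model and numbers below are assumptions for illustration, not the paper's analytical derivation.

# Trade-off sketch: k equal segments, recovery of one segment at full speed,
# and energy ~ S^2 * C under a cubic power model (all simplifying assumptions).
def best_checkpoint_count(C, D, r, k_max=20):
    best = None
    for k in range(1, k_max + 1):
        budget = D - k * r - C / k       # time left for the slowed fault-free run
        if budget <= 0:
            continue
        speed = C / budget
        if speed > 1.0:
            continue                     # even full speed cannot fit run + recovery
        energy = speed ** 2 * C
        if best is None or energy < best[1]:
            best = (k, round(energy, 3), round(speed, 3))
    return best

print(best_checkpoint_count(C=10.0, D=20.0, r=0.2))   # (7, 3.391, 0.582) for these numbers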

Collaboration


Dive into Rami G. Melhem's collaborations.

Top Co-Authors

Daniel Mossé
University of Pittsburgh

Taieb Znati
University of Pittsburgh

Xin Yuan
Florida State University

Rajiv Gupta
University of California

Dakai Zhu
University of Texas at San Antonio

Hakan Aydin
George Mason University