Mohamed N. Bennani
George Mason University
Publications
Featured research published by Mohamed N. Bennani.
international conference on autonomic computing | 2005
Mohamed N. Bennani; Daniel A. Menascé
Large data centers host several application environments (AEs) that are subject to workloads whose intensity varies widely and unpredictably. Therefore, the servers of the data center may need to be dynamically redeployed among the various AEs in order to optimize some global utility function. Previous approaches to solving this problem suffer from scalability limitations and cannot easily address the fact that there may be multiple classes of workloads executing on the same AE. This paper presents a solution that addresses these limitations. This solution is based on the use of analytic queuing network models combined with combinatorial search techniques. The paper demonstrates the effectiveness of the approach through simulation experiments. Both online and batch workloads are considered.
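To make the approach concrete, here is a minimal sketch of utility-driven server reallocation, assuming each AE behaves like a single-class M/M/1 queue whose capacity scales with its server count, and using a simple greedy hill climb in place of the paper's combinatorial search. The queuing network models in the paper are multiclass and considerably richer; all names and constants below (response_time, utility, base_rate, sla) are illustrative assumptions.

```python
# Minimal sketch of utility-driven server reallocation. Each application
# environment (AE) is approximated as an M/M/1 queue whose capacity scales
# with its server count; the paper uses multiclass queuing network models
# and a richer combinatorial search. All names here are illustrative.

def response_time(servers, arrival_rate, base_rate=10.0):
    """Approximate mean response time of an AE given its server count."""
    capacity = servers * base_rate
    if arrival_rate >= capacity:
        return float("inf")               # saturated: unbounded response time
    return 1.0 / (capacity - arrival_rate)

def utility(allocation, arrival_rates, sla=0.5):
    """Global utility: reward each AE for beating its SLA response time."""
    return sum(sla - response_time(n, lam)
               for n, lam in zip(allocation, arrival_rates))

def hill_climb(allocation, arrival_rates):
    """Greedily move one server between AEs while global utility improves."""
    best = list(allocation)
    improved = True
    while improved:
        improved = False
        for i in range(len(best)):
            for j in range(len(best)):
                if i == j or best[i] <= 1:
                    continue              # keep at least one server per AE
                cand = list(best)
                cand[i] -= 1
                cand[j] += 1
                if utility(cand, arrival_rates) > utility(best, arrival_rates):
                    best, improved = cand, True
    return best

print(hill_climb([4, 4], arrival_rates=[15.0, 45.0]))  # -> [3, 5]
```

Each control interval, the data center would re-estimate per-AE arrival rates and rerun the search, redeploying servers whenever the best allocation changes.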
international conference on autonomic computing | 2006
Gerald Tesauro; Nicholas K. Jong; Rajarshi Das; Mohamed N. Bennani
Reinforcement Learning (RL) provides a promising new approach to systems performance management that differs radically from standard queuing-theoretic approaches making use of explicit system performance models. In principle, RL can automatically learn high-quality management policies without an explicit performance model or traffic model and with little or no built-in system-specific knowledge. In our original work [1], [2], [3] we showed the feasibility of using online RL to learn resource valuation estimates (in lookup table form) which can be used to make high-quality server allocation decisions in a multi-application prototype Data Center scenario. The present work shows how to combine the strengths of both RL and queuing models in a hybrid approach in which RL trains offline on data collected while a queuing model policy controls the system. By training offline we avoid suffering potentially poor performance in live online training. We also now use RL to train nonlinear function approximators (e.g., multi-layer perceptrons) instead of lookup tables; this enables scaling to substantially larger state spaces. Our results now show that in both open-loop and closed-loop traffic, hybrid RL training can achieve significant performance improvements over a variety of initial model-based policies. We also find that, as expected, RL can deal effectively with both transients and switching delays, which lie outside the scope of traditional steady-state queuing theory.
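A minimal sketch of the hybrid idea, assuming a log of (state, action, reward, next-state) transitions collected while the queuing-model policy ran the system, trained offline with fitted Q-iteration. The paper trains multi-layer perceptrons; a linear least-squares approximator is used here purely to keep the sketch short, and all field names and sizes are assumptions.

```python
import numpy as np

# Sketch of hybrid offline RL (fitted Q-iteration) on a log of transitions
# gathered while a queuing-model policy controlled the live system. The
# paper uses multi-layer perceptron value estimators; linear least squares
# stands in here for brevity. All names and dimensions are illustrative.

rng = np.random.default_rng(0)
GAMMA = 0.9
ACTIONS = [0, 1, 2]                      # e.g. number of servers granted

# Logged transitions: (state_features, action, reward, next_state_features).
log = [(rng.normal(size=3), rng.integers(3), rng.normal(), rng.normal(size=3))
       for _ in range(500)]

def features(s, a):
    """Join state features with a one-hot action encoding and a bias term."""
    onehot = np.eye(len(ACTIONS))[a]
    return np.concatenate([s, onehot, [1.0]])

w = np.zeros(3 + len(ACTIONS) + 1)

for _ in range(20):                      # fitted Q-iteration sweeps
    X, y = [], []
    for s, a, r, s2 in log:
        # Bootstrapped target from the current value estimates.
        target = r + GAMMA * max(features(s2, a2) @ w for a2 in ACTIONS)
        X.append(features(s, a))
        y.append(target)
    w, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)

def policy(s):
    """Greedy policy from the offline-trained value estimates."""
    return max(ACTIONS, key=lambda a: features(s, a) @ w)

print(policy(rng.normal(size=3)))
```

The key hybrid ingredient is that the training log is generated under the queuing-model policy, so no poorly performing exploratory decisions ever reach the live system.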
international conference on autonomic and autonomous systems | 2006
Daniel A. Menascé; Mohamed N. Bennani
Virtualization was invented more than thirty years ago to allow large expensive mainframes to be easily shared among different application environments. As hardware prices went down, the need for virtualization faded away. Virtualization at all levels (system, storage, and network) became important again as a way to improve system security, reliability, and availability, reduce costs, and provide greater flexibility. Virtualization is being used to support server consolidation efforts. In that case, many virtual machines running different application environments share the same hardware resources. This paper shows how autonomic computing techniques can be used to dynamically allocate processing resources to various virtual machines as the workload varies. The goal of the autonomic controller is to optimize a utility function for the virtualized environment. The paper considers dynamic CPU priority allocation and the allocation of CPU shares to the various virtual machines. Results obtained through simulation show that the autonomic controller is capable of achieving its goal.
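The CPU-share side of such a controller can be sketched roughly as below, assuming an exhaustive search over a coarse grid of share vectors and a simple response-time-based utility. The paper's controller, its demand model, and its priority scheme are more sophisticated; every name and constant here is an illustrative assumption.

```python
from itertools import product

# Rough sketch of an autonomic controller that periodically reassigns CPU
# shares to virtual machines to maximize a global utility. The share grid,
# demand model, and utility shape are illustrative assumptions, not the
# paper's actual formulation.

def vm_response_time(share, demand):
    """Crude model: response time blows up as a VM's share nears its demand."""
    return float("inf") if share <= demand else demand / (share - demand)

def global_utility(shares, demands, targets):
    """Sum of per-VM utilities; positive when response time beats its target."""
    return sum(t - vm_response_time(s, d)
               for s, d, t in zip(shares, demands, targets))

def controller_step(demands, targets, step=0.05):
    """Exhaustively search share vectors summing to 1.0 on a coarse grid."""
    n = len(demands)
    grid = [round(k * step, 4) for k in range(1, int(1 / step))]
    best, best_u = None, float("-inf")
    for shares in product(grid, repeat=n):
        if abs(sum(shares) - 1.0) > 1e-9:
            continue                      # enforce the CPU-share budget
        u = global_utility(shares, demands, targets)
        if u > best_u:
            best, best_u = shares, u
    return best

# Each control interval, re-measure per-VM demands and reallocate shares.
print(controller_step(demands=[0.2, 0.4], targets=[2.0, 2.0]))
```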
international conference on autonomic computing | 2004
Mohamed N. Bennani; Daniel A. Menascé
Computer systems are becoming extremely complex due to the large number and heterogeneity of their hardware and software components, the multilayered architecture used in their design, and the unpredictable nature of their workloads. Thus, performance management becomes difficult and expensive when carried out by human beings. One approach, known as self-managing computer systems, is to build into the system the mechanisms required to self-adjust configuration parameters so that the quality-of-service requirements of the system are constantly met. In this paper, we evaluate the robustness of such methods when the workload exhibits high variability in terms of the interarrival times and service times of requests. Another contribution of this paper is the assessment of the use of workload forecasting techniques in the design of QoS controllers.
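As an illustration of the forecasting idea, a minimal sketch: an exponentially smoothed forecast of the observed arrival rate drives the next control interval's capacity decision. The smoothing constant, service rate, and sizing rule are all assumptions for illustration; the paper assesses specific forecasting techniques against highly variable workloads.

```python
import math

# Minimal sketch of workload forecasting inside a QoS controller:
# exponentially smoothed arrival-rate forecasts feed the next control
# interval's capacity decision. All parameters below are illustrative.

ALPHA = 0.3            # smoothing constant (assumption)
SERVICE_RATE = 10.0    # requests/sec one server can handle (assumption)
TARGET_UTIL = 0.7      # keep utilization below this to protect QoS

def smooth(forecast, observed, alpha=ALPHA):
    """One step of exponential smoothing."""
    return alpha * observed + (1 - alpha) * forecast

def servers_needed(lam):
    """Provision so that utilization stays under TARGET_UTIL."""
    return max(1, math.ceil(lam / (SERVICE_RATE * TARGET_UTIL)))

forecast = 20.0
for observed in [22.0, 35.0, 80.0, 75.0, 30.0]:   # bursty arrival rates
    forecast = smooth(forecast, observed)
    print(f"observed={observed:5.1f}  forecast={forecast:6.2f}  "
          f"servers={servers_needed(forecast)}")
```

A small smoothing constant damps transient spikes at the cost of reacting more slowly to sustained load shifts, which is exactly the trade-off a QoS controller must tune.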
Lecture Notes in Computer Science | 2005
Daniel A. Menascé; Mohamed N. Bennani; Honglei Ruan
Current computing environments are becoming increasingly complex in nature and exhibit unpredictable workloads. These environments create challenges to the design of systems that can adapt to changes in the workload while maintaining desired QoS levels. This paper focuses on the use of online analytic performance models in the design of self-managing and self-organizing computer systems. A general approach for building such systems is presented along with the algorithms used by a Quality of Service (QoS) controller. The robustness of the approach with respect to the variability of the workload and service time distributions is evaluated. The use of an adaptive controller that uses workload forecasting is discussed. Finally, the paper shows how online performance models can be used to design QoS-aware service oriented architectures.
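For concreteness, the kind of analytic model such a controller evaluates online can be as simple as an open M/M/1 approximation of each resource; the notation below is standard queuing theory, not the paper's exact model, and the utility form is an assumed example.

```latex
% Open M/M/1 approximation of one resource with arrival rate \lambda and
% mean service demand S (stable only while \rho < 1):
\rho = \lambda S, \qquad R = \frac{S}{1 - \rho}
% The controller recomputes R online for each candidate configuration and
% picks the one maximizing a QoS utility such as
U = \sum_{c} w_c \, u_c(R_c)
% where c ranges over workload classes, w_c are class weights, and u_c
% penalizes response times above the class's QoS target.
```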
european conference on machine learning | 2006
Gerald Tesauro; Nicholas K. Jong; Rajarshi Das; Mohamed N. Bennani
Reinforcement Learning (RL) holds particular promise in an emerging application domain of performance management of computing systems. In recent work, online RL yielded effective server allocation policies in a prototype Data Center, without explicit system models or built-in domain knowledge. This paper presents a substantially improved and more practical “hybrid” approach, in which RL trains offline on data collected while a queuing-theoretic policy controls the system. This approach avoids potentially poor performance in live online training. Additionally, we use nonlinear function approximators instead of tabular value functions; this greatly improves scalability and, surprisingly, eliminates the need for exploratory actions. In experiments using both open-loop and closed-loop traffic as well as large switching delays, our results show significant performance improvement over state-of-the-art queuing model policies.
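Since this version of the work stresses replacing tabular value functions with nonlinear approximators, here is a tiny sketch of that substitution: a one-hidden-layer network regressing value targets from continuous state features, so the policy generalizes across states a lookup table could never enumerate. Layer sizes, learning rate, and the stand-in targets are all illustrative assumptions.

```python
import numpy as np

# Sketch of the lookup-table -> neural-network substitution: a one-hidden-
# layer MLP regressing value targets from continuous state features. Sizes
# and hyperparameters are illustrative assumptions, not the paper's setup.

rng = np.random.default_rng(1)
D_IN, D_HID = 4, 16
W1 = rng.normal(scale=0.5, size=(D_IN, D_HID))
w2 = rng.normal(scale=0.5, size=D_HID)

def value(s):
    """MLP value estimate V(s) with a tanh hidden layer."""
    return np.tanh(s @ W1) @ w2

def train(states, targets, lr=0.01, epochs=200):
    """Plain stochastic gradient descent on squared error."""
    global W1, w2
    for _ in range(epochs):
        for s, t in zip(states, targets):
            h = np.tanh(s @ W1)
            err = h @ w2 - t
            grad_w2 = err * h                                # dL/dw2
            grad_W1 = err * np.outer(s, w2 * (1 - h**2))     # dL/dW1
            w2 -= lr * grad_w2
            W1 -= lr * grad_W1

states = rng.normal(size=(200, D_IN))
targets = states.sum(axis=1)            # stand-in value targets
train(states, targets)
print(float(value(states[0])), float(targets[0]))
```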
Cluster Computing | 2007
Gerald Tesauro; Nicholas K. Jong; Rajarshi Das; Mohamed N. Bennani
Int. CMG Conference | 2003
Daniel A. Menascé; Mohamed N. Bennani
Int. CMG Conference | 2006
Daniel A. Menascé; Mohamed N. Bennani
Archive | 2006
Daniel A. Menascé; Mohamed N. Bennani