Publications


Featured research published by Michael R. Hines.


IEEE International Conference on Cloud Computing Technology and Science | 2011

Applications Know Best: Performance-Driven Memory Overcommit with Ginkgo

Michael R. Hines; Abel Gordon; Marcio A. Silva; Dilma Da Silva; Kyung Dong Ryu; Muli Ben-Yehuda

Memory overcommitment enables cloud providers to host more virtual machines on a single physical server, exploiting spare CPU and I/O capacity when physical memory becomes the bottleneck for virtual machine deployment. However, overcommitting memory can also cause noticeable application performance degradation. We present Ginkgo, a policy framework for overcommitting memory in an informed and automated fashion. By directly correlating application-level performance to memory, Ginkgo automates the redistribution of scarce memory across all virtual machines, satisfying performance and capacity constraints. Ginkgo also achieves memory gains for traditionally fixed-size Java applications by coordinating the redistribution of available memory with the activities of the Java Virtual Machine heap. When compared to a non-overcommitted system, Ginkgo runs the DayTrader 2.0 and SPECweb2009 benchmarks with the same number of virtual machines while saving up to 73% (50% excluding free space) of a physical server's memory and keeping application performance degradation within 7%.
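The abstract describes Ginkgo's control idea but not its interface. The Python sketch below only illustrates that idea, under the assumption of a per-VM model that predicts application performance at a candidate memory size; every name here (VM, perf_model, candidate_sizes, redistribute_memory) is hypothetical and not Ginkgo's API, and the 7% degradation bound simply mirrors the figure quoted above.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class VM:
    vm_id: str
    max_memory_mb: int
    candidate_sizes: List[int]          # sizes the balloon could target
    perf_model: Callable[[int], float]  # predicted performance at a size

def redistribute_memory(vms: List[VM], host_memory_mb: int,
                        max_degradation: float = 0.07) -> Dict[str, int]:
    """Pick per-VM memory targets whose sum fits the host while keeping each
    VM's predicted performance within max_degradation of its baseline."""
    targets = {}
    for vm in vms:
        baseline = vm.perf_model(vm.max_memory_mb)
        # Smallest candidate size still within the degradation bound.
        targets[vm.vm_id] = min(
            s for s in vm.candidate_sizes
            if vm.perf_model(s) >= (1 - max_degradation) * baseline
        )
    total = sum(targets.values())
    if total > host_memory_mb:
        # Even the minimal targets overcommit the host: scale down pro rata.
        scale = host_memory_mb / total
        targets = {vid: int(t * scale) for vid, t in targets.items()}
    return targets
```

In the paper the correlation between memory and application performance is learned automatically; in this sketch the model is simply an input.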


IEEE International Conference on Cloud Engineering | 2013

CloudBench: Experiment Automation for Cloud Environments

Marcio A. Silva; Michael R. Hines; Diego S. Gallo; Qi Liu; Kyung Dong Ryu; Dilma Da Silva

The growth in the adoption of cloud computing is driven by distinct and clear benefits for both cloud customers and cloud providers. However, the increase in the number of cloud providers, as well as in the variety of offerings from each provider, has made it harder for customers to choose. At the same time, the number of options for building a cloud infrastructure, from cloud management platforms to different interconnection and storage technologies, also poses a challenge for cloud providers. In this context, cloud experiments are as necessary as they are labor intensive. CloudBench [1] is an open-source framework that automates cloud-scale evaluation and benchmarking by running controlled experiments in which complex applications are automatically deployed. Experiments are described through experiment plans containing directives with enough descriptive power to keep experiment descriptions brief while allowing customizable multi-parameter variation. Experiments can be executed on multiple clouds through a single interface. CloudBench is capable of managing experiments spread across multiple regions and over long periods of time. Its modular approach allows it to be easily extended by external users to accommodate new cloud infrastructure APIs and benchmark applications. A built-in data collection system collects, aggregates, and stores metrics for cloud management activities (such as VM provisioning and VM image capture) and application runtime information. Experiments can be conducted in a highly controllable fashion in order to assess the stability, scalability, and reliability of multiple cloud configurations. We demonstrate CloudBench's main characteristics through the evaluation of an OpenStack installation, including experiments with approximately 1200 simultaneous VMs at an arrival rate of up to 400 VMs/hour.
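To make the experiment-plan idea concrete, here is a small Python sketch that expands a brief multi-parameter plan into the full cross-product of runs across clouds. The plan format, cloud names, and field names are illustrative assumptions; this is not CloudBench's actual directive syntax.

```python
import itertools

# Hypothetical experiment plan, loosely modeled on the paper's description:
# brief directives plus customizable multi-parameter variation.
PLAN = {
    "clouds": ["openstack-local", "ec2-us-east"],
    "application": "daytrader",
    "vary": {
        "vm_count": [10, 50, 100],
        "arrival_rate_vms_per_hour": [100, 400],
    },
}

def expand_plan(plan):
    """Expand a brief plan into the full cross-product of experiment runs."""
    keys = list(plan["vary"])
    for cloud in plan["clouds"]:
        for values in itertools.product(*(plan["vary"][k] for k in keys)):
            yield {"cloud": cloud, "application": plan["application"],
                   **dict(zip(keys, values))}

for run in expand_plan(PLAN):
    print(run)  # a real driver would deploy, collect metrics, and tear down
```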


Winter Simulation Conference | 2012

Hardware-in-the-loop simulation for automated benchmarking of cloud infrastructures

Qi Liu; Marcio A. Silva; Michael R. Hines; Dilma Da Silva

To address the challenge of automated performance benchmarking in virtualized cloud infrastructures, an extensible and adaptable framework called CloudBench has been developed to conduct scalable, controllable, and repeatable experiments in such environments. This paper presents the hardware-in-the-loop simulation technique used in CloudBench, which integrates an efficient discrete-event simulation with the cloud infrastructure under test in a closed feedback control loop. The technique supports the decomposition of complex resource usage patterns and provides a mechanism for statistically multiplexing application requests of varied characteristics to generate realistic and emergent behavior. It also exploits parallelism at multiple levels to improve simulation efficiency, while maintaining temporal and causal relationships with proper synchronization. Our experiments demonstrate that the proposed technique can synthesize complex resource usage behavior for effective cloud performance benchmarking.
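As a toy illustration of the closed feedback loop described above, the sketch below couples a discrete-event arrival process to live measurements from the system under test: each observed latency feeds back into the simulated arrival rate, and two hypothetical request classes stand in for the varied application requests being statistically multiplexed. The class names, rates, and back-off rule are invented for illustration, not taken from the paper.

```python
import heapq
import random

# Hypothetical request classes with different base arrival rates (events/s),
# standing in for application requests of varied characteristics.
CLASSES = {"web": 2.0, "batch": 0.2}

def run_benchmark(duration_s, measure_latency):
    """Drive a multiplexed, simulated arrival process against a live testbed."""
    events = [(random.expovariate(rate), name) for name, rate in CLASSES.items()]
    heapq.heapify(events)  # event queue ordered by simulated time
    while events:
        now, name = heapq.heappop(events)
        if now >= duration_s:
            break
        latency = measure_latency(name)  # real measurement from the testbed
        # Closed loop: slow this class's simulated arrivals when the system
        # under test slows down, so generated load adapts to observed behavior.
        rate = CLASSES[name] / (1.0 + latency)
        heapq.heappush(events, (now + random.expovariate(rate), name))

# Example with a stub probe standing in for the real cloud under test.
run_benchmark(60.0, measure_latency=lambda name: random.uniform(0.01, 0.2))
```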


Archive | 2012

Dynamic Virtual Machine Resizing in a Cloud Computing Infrastructure

David Breitgand; Dilma Da Silva; Amir Epstein; Alexander Glikson; Michael R. Hines; Kyung Dong Ryu; Marcio A. Silva


Archive | 2012

Dynamic Memory Management in a Virtualized Computing Environment

Shmuel Ben-Yehuda; Dilma Da Silva; Abel Gordon; Michael R. Hines


Archive | 2012

Resource management using reliable and efficient delivery of application performance information in a cloud computing system

Dilma Da Silva; Michael R. Hines; Kyung Dong Ryu; Marcio A. Silva


Archive | 2014

Agile VM Load Balancing through Micro-Checkpointing and Multi-Architecture Emulation

Bulent Abali; Michael R. Hines; Gokul B. Kandiraju; Jack Kouloheris


Archive | 2013

Using RDMA for fast system recovery in virtualized environments

Mohammad Banikazemi; John A. Bivens; Michael R. Hines


Archive | 2014

Virtual Machine Distributed Checkpointing

Bulent Abali; Hubertus Franke; Michael R. Hines; Gokul B. Kandiraju; Makoto Ono


Archive | 2013

Instantaneous Save/Restore of Virtual Machines with Persistent Memory

Bulent Abali; Mohammad Banikazemi; John A. Bivens; Michael R. Hines; Dan E. Poff
