Akshat Verma
IBM
Publication
Featured research published by Akshat Verma.
ACM/IFIP/USENIX International Conference on Middleware | 2008
Akshat Verma; Puneet Ahuja; Anindya Neogi
Workload placement on servers has traditionally been driven mainly by performance objectives. In this work, we investigate the design, implementation, and evaluation of a power-aware application placement controller in the context of an environment with heterogeneous virtualized server clusters. The placement component of the application management middleware takes into account the power and migration costs in addition to the performance benefit while placing application containers on physical servers. The contribution of this work is twofold: first, we present multiple formulations of the cost-aware application placement problem that may be applied in various settings. For each formulation, we provide details on the kind of information required to solve the problem, the model assumptions, and the practicality of those assumptions on real servers. In the second part of our study, we present the pMapper architecture and placement algorithms to solve one practical formulation of the problem: minimizing power subject to a fixed performance requirement. We present comprehensive theoretical and experimental evidence to establish the efficacy of pMapper.
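To make the power-minimizing formulation concrete, here is a minimal illustrative sketch (not the pMapper implementation): containers are placed greedily on the most power-efficient servers subject to a capacity constraint. All server and application figures below are hypothetical.

# Illustrative greedy placement minimizing power subject to capacity;
# the server and application numbers are hypothetical, not from pMapper.

def place_min_power(apps, servers):
    """apps: list of (name, cpu_demand); servers: list of dicts with
    'name', 'capacity', 'idle_w', 'peak_w'."""
    # Prefer servers that cost the least peak power per unit of capacity.
    servers = sorted(servers, key=lambda s: s['peak_w'] / s['capacity'])
    used = {s['name']: 0.0 for s in servers}
    placement = {}
    for name, demand in sorted(apps, key=lambda a: -a[1]):
        for s in servers:
            if used[s['name']] + demand <= s['capacity']:
                placement[name] = s['name']
                used[s['name']] += demand
                break
        else:
            raise RuntimeError(f"no server can host {name}")
    # Estimate total power with a linear idle-to-peak utilization model.
    power = sum(s['idle_w'] + (s['peak_w'] - s['idle_w']) * used[s['name']] / s['capacity']
                for s in servers if used[s['name']] > 0)
    return placement, power

apps = [("web", 0.6), ("db", 0.8), ("batch", 0.3)]
servers = [{"name": "s1", "capacity": 1.0, "idle_w": 160, "peak_w": 300},
           {"name": "s2", "capacity": 2.0, "idle_w": 200, "peak_w": 350}]
print(place_min_power(apps, servers))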
International Conference on Supercomputing | 2008
Akshat Verma; Puneet Ahuja; Anindya Neogi
High Performance Computing applications and platforms have typically been designed without regard to power consumption. With increased awareness of energy costs, power management is now an issue even for compute-intensive server clusters. In this work, we investigate the use of power management techniques for high performance applications on modern power-efficient servers with virtualization support. We consider power management techniques such as dynamic consolidation and use of the dynamic power range enabled by low-power states on servers. We identify application performance isolation and virtualization overhead with multiple virtual machines as the key bottlenecks for server consolidation. We perform a comprehensive experimental study to identify the scenarios where applications are isolated from each other. We also establish that the power consumed by HPC applications may be application dependent, non-linear, and have a large dynamic range. We show that, for HPC applications, working set size is a key parameter to account for when placing applications on virtualized servers. We use the insights obtained from our experimental study to present a framework and methodology for power-aware placement of HPC applications.
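As a toy illustration of the working-set observation (an assumption-laden sketch, not the paper's framework), a consolidation check can treat the combined working set size as a first-class constraint alongside CPU:

# Hypothetical co-location check for HPC jobs on a virtualized host:
# consolidate only if both CPU and the combined working sets fit.
def can_colocate(jobs, host):
    """jobs: dicts with 'cpu' (cores) and 'wss_mb' (working set size in MB);
    host: dict with 'cores' and 'ram_mb'."""
    cpu_ok = sum(j['cpu'] for j in jobs) <= host['cores']
    mem_ok = sum(j['wss_mb'] for j in jobs) <= host['ram_mb']
    return cpu_ok and mem_ok

host = {"cores": 8, "ram_mb": 32768}
jobs = [{"cpu": 4, "wss_mb": 12000}, {"cpu": 3, "wss_mb": 16000}]
print(can_colocate(jobs, host))   # True: 7 cores and ~28 GB fit on this host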
International Conference on Cloud Computing | 2012
Sourav Dutta; Sankalp Gera; Akshat Verma; Balaji Viswanathan
Enterprise clouds today support an on-demand resource allocation model and can provide resources requested by applications in a near-online manner using virtual machine resizing or cloning. However, in order to take advantage of an on-demand resource model, enterprise applications need to be automatically scaled in a way that makes the most efficient use of resources. In this work, we present the SmartScale automated scaling framework. SmartScale uses a combination of vertical (adding more resources to existing VM instances) and horizontal (adding more VM instances) scaling to ensure that the application is scaled in a manner that optimizes both resource usage and the reconfiguration cost incurred due to scaling. The SmartScale methodology is proactive and ensures that the application converges quickly to the desired scaling level even when the workload intensity changes significantly. We evaluate SmartScale using real production traces on Olio, an emerging cloud benchmark, running on a KVM-based cloud testbed. We present both theoretical and experimental evidence that comprehensively establishes the effectiveness of SmartScale.
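The trade-off SmartScale navigates can be illustrated with a simplified sketch (not the SmartScale algorithm itself); the cost constants and capacity model below are assumptions chosen for illustration.

# Illustrative choice between vertical and horizontal scaling for a target
# capacity, trading resource usage against reconfiguration cost.
import math

def pick_scaling(curr_vms, vm_cpu, max_vm_cpu, target_capacity,
                 resize_cost=1.0, clone_cost=5.0, unit_capacity=100.0):
    """Return (action, vm_count, cpu_per_vm) for the cheaper reconfiguration."""
    needed_cpu = target_capacity / unit_capacity            # total cores needed
    # Vertical: grow the existing VMs, bounded by the per-VM CPU limit.
    per_vm = needed_cpu / curr_vms
    vertical_cost = resize_cost * curr_vms if per_vm <= max_vm_cpu else float("inf")
    # Horizontal: add VM instances at the current size.
    new_vms = math.ceil(needed_cpu / vm_cpu)
    horizontal_cost = clone_cost * max(0, new_vms - curr_vms)
    if vertical_cost <= horizontal_cost:
        return "vertical", curr_vms, per_vm
    return "horizontal", new_vms, vm_cpu

print(pick_scaling(curr_vms=2, vm_cpu=2, max_vm_cpu=4, target_capacity=1000))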
International Conference on Autonomic Computing | 2010
Ricardo Koller; Akshat Verma; Anindya Neogi
The increasing heterogeneity between applications in emerging virtualized data centers such as clouds introduces significant challenges in estimating the power drawn by the data center. In this work, we present WattApp: an application-aware power meter for shared data centers that addresses this challenge. In order to deal with heterogeneous applications, WattApp introduces application parameters (e.g., throughput) into the power modeling framework. WattApp is based on a carefully designed set of experiments on a mix of diverse applications: power benchmarks, web-transaction workloads, HPC workloads, and I/O-intensive workloads. Given a set of N applications and M server types, WattApp runs in O(N) time, uses O(N×M) calibration runs, and predicts the power drawn by any arbitrary placement within 5% of the real power for the applications studied.
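A rough sketch of an application-aware power model in this spirit (not the WattApp model itself): fit power as a linear function of throughput per (application, server type) from calibration runs, then sum the fitted terms for a proposed placement. The calibration numbers are made up, and the sketch assumes one application per server so idle power is not double-counted.

# Sketch of an application-aware power model: per (application, server type)
# linear fit of power vs. throughput from calibration data; figures invented.

def fit_linear(samples):
    """Least-squares fit of power = a*throughput + b from (tput, watts) pairs."""
    n = len(samples)
    sx = sum(t for t, _ in samples); sy = sum(w for _, w in samples)
    sxx = sum(t * t for t, _ in samples); sxy = sum(t * w for t, w in samples)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Calibration runs: (app, server_type) -> [(throughput, watts), ...]
calib = {("web", "x3650"): [(0, 210), (500, 260), (1000, 310)],
         ("hpc", "x3650"): [(0, 210), (50, 330), (100, 450)]}
models = {k: fit_linear(v) for k, v in calib.items()}

def predict_power(placement):
    """placement: list of (app, server_type, expected_throughput),
    assuming one application per server instance."""
    total = 0.0
    for app, stype, tput in placement:
        a, b = models[(app, stype)]
        total += a * tput + b
    return total

print(predict_power([("web", "x3650", 800), ("hpc", "x3650", 60)]))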
International World Wide Web Conference | 2003
Akshat Verma; Sugata Ghosal
Variability and diversity among incoming requests to a service hosted on a finite-capacity resource necessitate sophisticated request admission control techniques for providing guaranteed quality of service (QoS). In this paper, we propose a service-time-based online admission control methodology for maximizing the profits of a service provider. The proposed methodology chooses a subset of incoming requests such that the revenue of the provider is maximized. The admission control decision in our proposed system is based on an estimate of the service time of the request, QoS bounds, predictions of the arrivals and service times of requests in the short-term future, and the rewards associated with servicing a request within its QoS bounds. The effectiveness of the proposed admission control methodology is demonstrated using experiments with a content-based messaging middleware service.
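A highly simplified sketch of the idea (names and numbers are illustrative, not the paper's algorithm): admit a request only if its estimated service time, on top of the already admitted backlog, still meets its QoS deadline and its reward clears a threshold.

# Simplified service-time-based admission control: admit only if the
# estimated service time fits the QoS deadline given the current backlog.
class AdmissionController:
    def __init__(self):
        self.backlog = 0.0                     # seconds of admitted, unserved work

    def admit(self, est_service_time, qos_deadline, reward, min_reward=0.0):
        if self.backlog + est_service_time > qos_deadline:
            return False                       # would violate the QoS bound
        if reward < min_reward:
            return False                       # not worth serving
        self.backlog += est_service_time
        return True

    def complete(self, service_time):
        self.backlog = max(0.0, self.backlog - service_time)

ac = AdmissionController()
print(ac.admit(est_service_time=0.2, qos_deadline=1.0, reward=5))   # True
print(ac.admit(est_service_time=0.9, qos_deadline=1.0, reward=1))   # False: backlog too large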
Modeling, Analysis, and Simulation of Computer and Telecommunication Systems | 2011
Akshat Verma; Gautam Kumar; Ricardo Koller; Aritra Sen
Clouds allow enterprises to increase or decrease their resource allocation on demand in response to changes in workload intensity. Virtualization is one of the building blocks of cloud computing and provides the mechanisms to implement the dynamic allocation of resources. These dynamic reconfiguration actions degrade application performance for the duration of the reconfiguration. In this paper, we model the cost of reconfiguring a cloud-based IT infrastructure in response to workload variations. We show that maintaining a cloud requires frequent reconfigurations necessitating both VM resizing and VM live migration, with live migration dominating reconfiguration costs. We design the CosMig model to predict the duration of live migration and its impact on application performance. Our model is based on parameters that are typically monitored in enterprise data centers. Further, the model faithfully captures the impact of shared resources in a virtualized environment. We experimentally validate the accuracy and effectiveness of CosMig using microbenchmarks and representative applications.
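A back-of-the-envelope pre-copy model (a generic sketch, not the CosMig model) shows how migration duration can be estimated from parameters that are routinely monitored: active memory, page dirty rate, and available network bandwidth.

# Generic pre-copy live migration duration estimate; all inputs in MB and MB/s.
def estimate_migration_seconds(active_mem_mb, dirty_rate_mb_s, link_mb_s,
                               max_rounds=30, stop_threshold_mb=32):
    """Iteratively copy memory while the VM keeps dirtying pages."""
    to_copy = active_mem_mb
    total = 0.0
    for _ in range(max_rounds):
        round_time = to_copy / link_mb_s          # seconds for this copy round
        total += round_time
        to_copy = dirty_rate_mb_s * round_time    # memory dirtied meanwhile
        if to_copy <= stop_threshold_mb:
            break
    return total + to_copy / link_mb_s            # final stop-and-copy round

# e.g. 2 GB of active memory, 40 MB/s dirty rate, ~119 MB/s (1 Gb/s) link
print(round(estimate_migration_seconds(2048, 40, 119), 1))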
International Middleware Conference | 2010
Akshat Verma; Gautam Kumar; Ricardo Koller
Emerging clouds promise enterprises the ability to increase or decrease their resource allocation on demand using virtual machine resizing and migration. These dynamic reconfiguration actions degrade application performance for the duration of the reconfiguration. In this paper, we study the cost of reconfiguring a cloud-based IT infrastructure in response to workload variations. We observe that live migration requires a significant amount of spare CPU on the source server (but not on the target server). If spare CPU is not available, it impacts both the duration of the migration and the performance of the application being migrated. Further, the amount of CPU required for live migration varies with the active memory of the VM being migrated. Finally, we show that live migration may impact any co-located VMs based on the cache usage pattern of the co-located VM. We distill all our observations into a list of practical recommendations to cloud providers for minimizing the impact of reconfiguration during dynamic resource allocation.
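One of the recommendations above can be encoded as a trivial admission check before scheduling a migration; the headroom constants below are assumptions, not values from the paper.

# Toy guard: schedule a live migration only if the *source* host has enough
# spare CPU, with the required headroom growing with the VM's active memory.
def safe_to_migrate(source_spare_cpu_pct, vm_active_mem_gb,
                    base_cpu_pct=10.0, per_gb_cpu_pct=5.0):
    required = base_cpu_pct + per_gb_cpu_pct * vm_active_mem_gb
    return source_spare_cpu_pct >= required

print(safe_to_migrate(source_spare_cpu_pct=30, vm_active_mem_gb=2))   # True
print(safe_to_migrate(source_spare_cpu_pct=15, vm_active_mem_gb=4))   # False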
Dependable Systems and Networks | 2013
Bikash Sharma; Praveen Jayachandran; Akshat Verma; Chita R. Das
In this work, we address problem determination in virtualized clouds. We show that high dynamism, resource sharing, frequent reconfiguration, a high propensity to faults, and automated management introduce significant new challenges for fault diagnosis in clouds. Towards this, we propose CloudPD, a fault management framework for clouds. CloudPD leverages (i) a canonical representation of the operating environment to quantify the impact of sharing; (ii) an online learning process to tackle dynamism; (iii) correlation-based performance models for higher detection accuracy; and (iv) an integrated end-to-end feedback loop to synergize with a cloud management ecosystem. Using a prototype implementation with cloud-representative batch and transactional workloads such as Hadoop, Olio, and RUBiS, we show that CloudPD detects and diagnoses faults with low false positives (< 16%) and high accuracy of 88%, 83%, and 83%, respectively. In an enterprise trace-based case study, CloudPD diagnosed anomalies within 30 seconds and with an accuracy of 77%, demonstrating its effectiveness in real-life operations.
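A minimal sketch of correlation-based detection in this spirit (metrics, window sizes, and thresholds are illustrative assumptions, not CloudPD's): learn the normal correlation between two metrics and flag windows where it breaks.

# Minimal correlation-based anomaly detection: flag a window when the
# learned CPU-throughput correlation drifts too far from the baseline.
import statistics

def pearson(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def is_anomalous(cpu_window, tput_window, baseline_corr, tolerance=0.3):
    return abs(pearson(cpu_window, tput_window) - baseline_corr) > tolerance

# Normally CPU tracks throughput closely...
baseline = pearson([10, 20, 30, 40], [100, 200, 310, 390])
# ...a window where CPU rises while throughput stalls looks like a fault.
print(is_anomalous([35, 50, 70, 90], [300, 305, 290, 295], baseline))   # True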
Communications of the ACM | 2011
Gargi Dasgupta; Amit Sharma; Akshat Verma; Anindya Neogi; Ravi Kothari
Power-aware dynamic application placement can address underutilization of servers as well as the rising energy costs in a data center.
International Conference on Data Engineering | 2005
Koustuv Dasgupta; Sugata Ghosal; Rohit Jain; Upendra Sharma; Akshat Verma
Logical reorganization of data and requirements of differentiated QoS in information systems necessitate bulk data migration by the underlying storage layer. Such data migration needs to ensure that regular client I/Os are not impacted significantly while migration is in progress. We formalize the data migration problem in a unified admission control framework that captures both the performance requirements of client I/Os and the constraints associated with migration. We propose an adaptive rate-control-based data migration methodology, QoSMig, that achieves optimal client performance in a differentiated QoS setting while ensuring that the specified migration constraints are met. QoSMig uses both long-term averages and short-term forecasts of client traffic to compute a migration schedule. We present an architecture based on Service Level Enforcement Discipline for Storage (SLEDS) that supports QoSMig. Our trace-driven experimental study demonstrates that QoSMig provides significantly better I/O performance compared to existing migration methodologies.
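A simplified rate-control loop in the same spirit (capacities and forecasts are illustrative, not the QoSMig algorithm): each control interval, the migration stream gets only the bandwidth left after serving the forecast client I/O within the QoS budget.

# Simplified QoS-aware rate controller for background data migration:
# give migration the capacity left over after forecast client I/O.
def migration_rate(forecast_client_iops, capacity_iops,
                   qos_headroom=0.1, min_rate=0):
    """Reserve a headroom fraction of capacity for bursts; the remainder,
    after serving forecast client I/O, goes to background migration."""
    budget = capacity_iops * (1.0 - qos_headroom)
    return max(min_rate, budget - forecast_client_iops)

def migration_schedule(client_forecast, capacity_iops=10000):
    return [migration_rate(f, capacity_iops) for f in client_forecast]

# Hourly client I/O forecast (IOPS): migration speeds up in quiet periods.
print(migration_schedule([8000, 6000, 3000, 9500]))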