J. Lakshmi
Indian Institute of Science
Publication
Featured research published by J. Lakshmi.
IEEE International Conference on Cloud Computing Technology and Science | 2013
Ankit Anand; J. Lakshmi; S. K. Nandy
The Cloud computing model separates usage from ownership in terms of control over resource provisioning. Resources in the cloud are projected as a service and are realized through service models such as IaaS, PaaS, and SaaS. In the IaaS model, end users get a VM whose capacity they can specify, but not its placement on a specific host or which other VMs it is co-hosted with. Typically, placement decisions are driven by goals such as minimizing the number of physical hosts needed to support a given set of VMs while satisfying each VM's capacity requirement. However, the VMM cycles consumed in supporting I/O-specific workloads inside a VM can make this capacity requirement incomplete: I/O workloads inside VMs require substantial VMM CPU cycles to sustain their performance. As a result, placement algorithms need to account for the VMM's usage on a per-VM basis. Secondly, cloud centers encounter situations where changes in existing VMs' capacity, or the launching of new VMs, must be considered at different placement intervals. Usually, such change is handled by migrating existing VMs to meet the goal of optimal placement. We argue that VM migration is not a trivial task and does incur a loss of performance during migration. We quantify this migration overhead based on the VM's workload type and include it in the placement problem. One of the goals of the placement algorithm is to reduce a VM's migration prospects, thereby reducing the chance of performance loss during migration. This paper evaluates existing Integer Linear Programming (ILP) and First Fit Decreasing (FFD) algorithms, extended with these constraints, to arrive at placement decisions. We observe that the ILP algorithm yields optimal results but needs long computing times even in its parallel version. The FFD heuristics, by contrast, are much faster and more scalable, generating a sub-optimal solution compared to ILP but in time-scales useful for real-time decision making. We also observe that including VM migration overheads in the placement algorithm results in a marginal increase in the number of physical hosts but a significant reduction, of about 84 percent, in VM migrations.
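The FFD heuristic discussed above can be sketched as a bin-packing pass in which each VM's effective demand includes the VMM overhead it induces. This is a minimal illustration under assumed inputs, not the paper's implementation; the function and field names are hypothetical:

```python
# Hypothetical sketch of First Fit Decreasing (FFD) placement where each
# VM's demand is its requested CPU capacity plus the VMM CPU overhead it
# induces, so I/O-heavy VMs are not under-provisioned.

def ffd_place(vms, host_capacity):
    """vms: list of (name, cpu_demand, vmm_overhead) tuples.
    Returns {host_index: [vm names]} using first-fit-decreasing order."""
    # Effective demand = VM capacity request + hypervisor cycles it needs.
    demands = [(name, cpu + ovh) for name, cpu, ovh in vms]
    demands.sort(key=lambda t: t[1], reverse=True)  # decreasing order

    hosts = []       # remaining capacity per open host
    placement = {}
    for name, demand in demands:
        for i, free in enumerate(hosts):
            if demand <= free:               # first host it fits on
                hosts[i] -= demand
                placement.setdefault(i, []).append(name)
                break
        else:
            hosts.append(host_capacity - demand)   # open a new host
            placement.setdefault(len(hosts) - 1, []).append(name)
    return placement
```

The migration-overhead term from the paper could be folded in by charging an extra cost whenever a VM's chosen host differs from its current one.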
Grid Computing | 2012
Mohit Dhingra; J. Lakshmi; S. K. Nandy
Monitoring of infrastructural resources in clouds plays a crucial role in providing application guarantees like performance, availability, and security. Monitoring is crucial from two perspectives: that of the cloud user and that of the service provider. The cloud user's interest is in analyzing the workload to arrive at appropriate Service Level Agreement (SLA) demands, and the cloud provider's interest is in assessing whether the demand can be met. To support this, a monitoring framework is necessary, particularly since cloud hosts are subject to varying load conditions. To illustrate the importance of such a framework, we take performance as the Quality of Service (QoS) requirement and show how inappropriate provisioning of resources may lead to unexpected performance bottlenecks. We evaluate existing monitoring frameworks to bring out the motivation for building much more powerful ones. We then propose a distributed monitoring framework, which enables fine-grained monitoring for applications, and demonstrate it with a prototype system implementation for typical use cases.
International Conference on Advanced Computing | 2012
Ankit Anand; Mohit Dhingra; J. Lakshmi; S. K. Nandy
The realization of cloud computing has been possible due to the availability of virtualization technologies on commodity platforms. Measuring resource usage on virtualized servers is difficult because the performance counters used for resource accounting are not virtualized. Hence, many of the prevalent virtualization technologies like Xen, VMware, and KVM use host-specific CPU usage monitoring, which is coarse-grained. In this paper, we present a performance monitoring tool for KVM-based virtual machines, which measures the CPU overhead incurred by the hypervisor on behalf of the virtual machine along with the CPU usage of the virtual machine itself. This fine-grained resource usage information, provided by the above tool, can be used in diverse situations like resource provisioning to support performance-associated QoS requirements, identification of bottlenecks during VM placement, resource profiling of applications in cloud environments, etc. We demonstrate a use case of this tool by measuring the performance of web servers hosted on a KVM-based virtualized server.
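The measurement such a tool performs can be illustrated with a simple calculation: on KVM, each vCPU runs as a thread of the host qemu process, so the hypervisor overhead attributable to a VM can be estimated as the whole process's CPU time minus the sum of its vCPU threads' CPU time. The sketch below assumes tick counts already sampled (on a real host they would come from /proc/<pid>/stat and /proc/<pid>/task/<tid>/stat); the function name is hypothetical:

```python
# Illustrative breakdown of a VM's CPU usage on KVM: vCPU threads account
# for guest execution, and the remainder of the qemu process's CPU time
# approximates work the hypervisor does on the VM's behalf.

def vm_cpu_breakdown(process_ticks, vcpu_thread_ticks, hz=100):
    """process_ticks: utime+stime of the whole qemu process (clock ticks).
    vcpu_thread_ticks: list of utime+stime per vCPU thread.
    hz: clock ticks per second (USER_HZ, commonly 100 on Linux).
    Returns (guest_seconds, hypervisor_overhead_seconds)."""
    guest = sum(vcpu_thread_ticks)
    overhead = process_ticks - guest   # I/O emulation, event loop, etc.
    return guest / hz, overhead / hz
```

For example, a qemu process that consumed 1200 ticks while its two vCPU threads consumed 500 and 400 ticks spent 9 seconds running the guest and about 3 seconds in hypervisor-side work.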
IEEE Transactions on Services Computing | 2015
Nitisha Jain; J. Lakshmi
Virtualization is one of the key enabling technologies for Cloud computing. Although it facilitates improved utilization of resources, virtualization can lead to performance degradation due to the sharing of physical resources like CPU, memory, network interfaces, disk controllers, etc. Multi-tenancy can cause highly unpredictable performance for concurrent I/O applications running inside virtual machines that share local disk storage in the Cloud. Disk I/O requests in a typical Cloud setup may have varied requirements in terms of latency and throughput, as they arise from a range of heterogeneous applications having diverse performance goals. This necessitates providing differential performance services to different I/O applications. In this paper, we present PriDyn, a novel scheduling framework which is designed to consider I/O performance metrics of applications, such as acceptable latency, and convert them to an appropriate priority value for disk access based on the current system state. This framework aims to provide differentiated I/O service to various applications and ensures predictable performance for critical applications in multi-tenant Cloud environments. We demonstrate through experimental validation on real-world I/O traces that this framework achieves appreciable enhancements in I/O performance, indicating that this approach is a promising step towards enabling QoS guarantees for Cloud storage.
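The core idea of converting application latency requirements into disk priorities can be sketched as a slack computation: applications closest to missing their acceptable latency get the highest priority. This is a hedged illustration of the approach, not PriDyn's actual algorithm, and all names are assumptions:

```python
# Hypothetical latency-to-priority mapping: each application's slack is
# its deadline minus the estimated time to finish its outstanding I/O at
# the currently observed throughput; least slack gets priority 0.

def assign_priorities(apps, now=0.0):
    """apps: {name: {'deadline': s, 'remaining_io': MB, 'throughput': MB/s}}
    Returns {name: priority}, where 0 is the most urgent."""
    slack = {
        name: a['deadline'] - now - a['remaining_io'] / a['throughput']
        for name, a in apps.items()
    }
    ordered = sorted(slack, key=slack.get)   # least slack first
    return {name: prio for prio, name in enumerate(ordered)}
```

Recomputing this mapping as the system state changes is what makes the priority value dynamic rather than fixed per application.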
International Conference on Cloud Computing | 2013
Mohit Dhingra; J. Lakshmi; S. K. Nandy; Chiranjib Bhattacharyya; K. Gopinath
Elasticity in cloud systems provides the flexibility to acquire and relinquish computing resources on demand. However, in current virtualized systems resource allocation is mostly static: resources are allocated during VM instantiation, and any change in workload leading to a significant increase or decrease in resource needs is handled by VM migration. Hence, cloud users tend to characterize their workloads at a coarse-grained level, which potentially leads to under-utilized VM resources or an under-performing application. A more flexible and adaptive resource allocation mechanism would benefit variable workloads, such as those characterized by web servers. In this paper, we present an elastic resource framework for the IaaS cloud layer that addresses this need. The framework provides an application workload forecasting engine that predicts the expected demand at run-time, which is input to the resource manager to modulate resource allocation based on the predicted demand. Depending on the prediction errors, resources can be over-allocated or under-allocated compared to the actual demand made by the application. Over-allocation leads to unused resources, and under-allocation can cause under-performance. To strike a good trade-off between over-allocation and under-performance, we derive an excess cost model: excess resources allocated are captured as an over-allocation cost, and under-allocation is captured as a penalty cost for violating the application's service level agreement (SLA). The confidence interval of the predicted workload is used to minimize this excess cost with minimal effect on SLA violations. An example case study for an academic institute's web server workload is presented. Using the confidence interval to minimize excess cost, we achieve a significant reduction in resource allocation requirements while restricting application SLA violations to below 2-3%.
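The excess cost trade-off described above can be written down directly: over-allocation is charged per unit of unused resource, under-allocation per unit of unmet demand, and the allocation is nudged toward the upper confidence bound of the forecast when SLA penalties dominate. A minimal sketch with hypothetical cost coefficients, not the paper's exact model:

```python
# Illustrative excess cost model: c_over prices wasted capacity,
# c_sla prices SLA-violating shortfall (typically c_sla >> c_over).

def excess_cost(allocated, demand, c_over, c_sla):
    over = max(0, allocated - demand)    # unused resources
    under = max(0, demand - allocated)   # shortfall, risks SLA violation
    return c_over * over + c_sla * under

def provision(predicted_mean, ci_upper, headroom=1.0):
    """Allocate between the forecast mean (headroom=0) and its upper
    confidence bound (headroom=1), trading waste against SLA risk."""
    return predicted_mean + headroom * (ci_upper - predicted_mean)
```

With a heavy SLA penalty, the minimizing choice of headroom moves toward the upper confidence bound, which matches the intuition of over-provisioning slightly to keep violations rare.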
International Conference on Big Data and Cloud Computing | 2014
Pavan Kumar Akulakrishna; J. Lakshmi; S. K. Nandy
GPS applications need real-time responsiveness and are location-sensitive. GPS data is time-variant, dynamic, and large. Current methods of centralized or distributed storage with static data impose constraints on meeting the real-time requirements of such applications. In this project, we explore the need for real-timeliness in location-based applications and evolve a storage methodology for GPS application data. In existing approaches, data is distributed based on zones and has limited redundancy, leading to non-availability in case of failures. In our approach, data is partitioned into cells, giving priority to geo-spatial location. The geography of an area, such as a district, state, country, or for that matter the whole world, is divided into data cells. The size of the data cells is decided based on previously observed location-specific queries on the area; the cell size is selected so that a majority of the queries are addressed within the cell itself. This enables computation to happen closer to the data location, and as a result, data communication overheads are eliminated. We also build in some data redundancy, which is used not only to enable failover mechanisms but also to improve performance. This is done by a nine-cell approach wherein each cell stores the data of its eight neighbours along with its own. Cells that have an overload of queries can easily pass off some of their workload to their near neighbours and ensure timeliness in response. Further, effective load balancing of data ensures better utilization of resources. Experimental results show that our approach improves query response times, yields better throughput, and reduces average query waiting time, apart from enabling real-time updates on data.
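The cell partitioning and nine-cell replication can be sketched in a few lines. The one-degree cell size below is an arbitrary example, not the paper's query-tuned value, and both function names are assumptions:

```python
# Illustrative geo-spatial partitioning: map coordinates to grid cells,
# and replicate each cell's data across itself plus its 8 neighbours.

def cell_of(lat, lon, cell_deg=1.0):
    """Map a coordinate to its grid cell (row, col). In the paper the
    cell size is chosen from observed queries so most stay in one cell."""
    return (int(lat // cell_deg), int(lon // cell_deg))

def nine_cell_block(cell):
    """The nine-cell approach: a cell holds its own data plus that of
    its 8 neighbours, giving failover copies and letting an overloaded
    cell shed queries to near neighbours."""
    r, c = cell
    return [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
```

Because every cell's data exists in nine places, a query hitting a hot cell can be answered by any of its eight neighbours without cross-site data movement.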
International Conference on Cloud Computing | 2015
Nitisha Jain; Nikolay Grozev; J. Lakshmi; Rajkumar Buyya
Cloud computing facilitates flexible on-demand provisioning of IT resources over the Internet. It has proven to be very advantageous for a wide range of industries in emerging markets by allowing them to match the infrastructure capabilities of their already established competitors. However, Cloud computing also poses several unique challenges in terms of resource allocation and scheduling. In particular, I/O-bound applications have been mostly neglected in research efforts in the area. Simulation tools play a key role in the evaluation of new resource management policies by providing an affordable and replicable testing environment. In this paper, we propose PriDynSim, a novel simulation framework for priority-based I/O policies, which manage the available resources to ensure adequate performance in the presence of deadline constraints. In a case study, we demonstrate how PriDynSim can be used to evaluate a resource management policy that guarantees Quality of Service (QoS) for I/O-bound applications running in a typical Cloud environment.
International Conference on Cloud Computing | 2014
Nitisha Jain; J. Lakshmi
Virtualization is one of the key enabling technologies for cloud computing. Although it facilitates improved utilization of resources, virtualization can lead to performance degradation due to the sharing of physical resources like CPU, memory, network interfaces, disk controllers, etc. Multi-tenancy can cause highly unpredictable performance for concurrent I/O applications running inside virtual machines that share local disk storage in the cloud. Disk I/O requests in a typical cloud setup may have varied requirements in terms of latency and throughput, as they arise from a range of heterogeneous applications having diverse performance goals. This necessitates providing differential performance services to different I/O applications. In this paper, we present PriDyn, a novel scheduling framework which is designed to consider I/O performance metrics of applications, such as acceptable latency, and convert them to an appropriate priority value for disk access based on the current system state. This framework aims to provide differentiated I/O service to various applications and ensures predictable performance for critical applications in multi-tenant cloud environments. We demonstrate that this framework achieves appreciable enhancements in I/O performance, indicating that this approach is a promising step towards enabling QoS guarantees for cloud storage.
International Conference on Cloud Computing | 2015
Nitisha Jain; J. Lakshmi
Server consolidation is at the heart of the Cloud computing paradigm. Applications running on the Cloud exhibit a broad spectrum of functionality, with varied characteristics and resource requirements. An optimal combination of concurrent applications on a server can enable maximum utilization of the physical resources while also ensuring the desired performance. In this work, we analyze the need for meta-scheduling of I/O workloads in Cloud computing environments in the context of disk consolidation and I/O performance. We propose PCOS, a proactive admission controller and disk scheduling framework for I/O-intensive applications. By foreseeing the resource utilization patterns of applications while scheduling new requests on a server, PCOS enables the selection of a suitable workload combination for servers to optimize disk bandwidth utilization. At the same time, PCOS can guarantee that performance is not adversely affected by the scheduling of new I/O requests on the server. Experimental validation performed on real-world I/O traces demonstrates the utility and importance of this framework for actual Cloud storage environments.
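The admission decision described above can be illustrated as follows: given predicted per-slot disk bandwidth profiles for the applications already running on a server, a new application is admitted only if the combined demand stays within the disk's bandwidth in every future slot. This is a simplified sketch with all names assumed, not the PCOS framework itself:

```python
# Illustrative proactive admission check: reject a new I/O workload if
# its predicted bandwidth profile, stacked on the existing workloads',
# would ever exceed what the disk can deliver.

def admit(current_profiles, new_profile, disk_bandwidth):
    """current_profiles: list of per-slot bandwidth forecasts (equal
    length) for applications already on the server.
    new_profile: forecast for the candidate application.
    Returns True if the combination never saturates the disk."""
    if not current_profiles:
        combined = list(new_profile)
    else:
        combined = [sum(slot) + extra
                    for slot, extra in zip(zip(*current_profiles), new_profile)]
    return all(b <= disk_bandwidth for b in combined)
```

Checking the whole forecast horizon, rather than only the current instant, is what makes the controller proactive: a workload that fits now but collides with a future burst is turned away up front.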
International Conference on Cloud Computing | 2015
Pratima Ashok Dhuldhule; J. Lakshmi; S. K. Nandy
HPC applications are widely used in scientific and industrial research, data analytics and visualization, social behavioral studies, etc. Most HPC applications require dedicated, available, and highly customized resources and environments for computation, since they exhibit intense resource utilization. These needs were traditionally met by clusters and supercomputers, which are difficult to set up, manage, or operate. While the majority of HPC installations ensure good resource utilization, their reach is restricted to the few who are members of a specific HPC community. Cloud computing has emerged as a recent computing technology, and its on-demand nature has provoked interest in exploring whether cloud properties can be useful for HPC setups; this paper is a work in that direction. The prevalent public clouds are accessible to many and have been explored by the HPC community too. The biggest deterrent identified on these computing platforms for HPC workloads is the virtualization layer used by cloud systems for resource provisioning. In this paper, we propose a Platform-as-a-Service model to build an HPC cloud setup. The key goals of the architecture design are to provide on-demand provisioning, both of hardware and of the HPC runtime environment, for the cloud user, and at the same time to ensure that HPC applications do not suffer virtualization overheads. The architecture builds the required HPC platform by providing a dedicated node or a group of nodes booted with the desired HPC environment, without the virtualization layer. Technologies like Wake-on-LAN and network booting are used to achieve this goal. Once the usage of these resources is relinquished, the same nodes are re-deployed for another HPC platform. Thus, this architecture merges cloud properties with HPC platforms to deliver effective performance. We present benchmark results evaluating the performance difference between virtualized and non-virtualized environments to support this observation.
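Wake-on-LAN, one of the technologies mentioned, powers a node on by broadcasting a standard "magic packet": 6 bytes of 0xFF followed by the target MAC address repeated 16 times. The sketch below builds and sends such a packet; it illustrates the standard protocol, not the paper's specific provisioning code:

```python
import socket

def wol_magic_packet(mac):
    """Build the standard Wake-on-LAN magic packet: 6 bytes of 0xFF
    followed by the 6-byte target MAC address repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(':', '').replace('-', ''))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b'\xff' * 6 + mac_bytes * 16

def wake(mac, broadcast='255.255.255.255', port=9):
    """Broadcast the magic packet over UDP (port 9 is the customary
    discard port; port 7 is also commonly used)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(wol_magic_packet(mac), (broadcast, port))
```

The sleeping node's NIC watches for this pattern and triggers power-on, after which network booting (PXE) can load the desired non-virtualized HPC environment.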