Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Upendra Sharma is active.

Publication


Featured research published by Upendra Sharma.


International Conference on Distributed Computing Systems | 2011

A Cost-Aware Elasticity Provisioning System for the Cloud

Upendra Sharma; Prashant J. Shenoy; Sambit Sahu; Anees Shaikh

In this paper we present Kingfisher, a cost-aware system that provides efficient support for elasticity in the cloud by (i) leveraging multiple mechanisms to reduce the time to transition to new configurations, and (ii) optimizing the selection of a virtual server configuration that minimizes cost. We have implemented a prototype of Kingfisher and have evaluated its efficacy on a laboratory cloud platform. Our experiments with varying application workloads demonstrate that Kingfisher is able to (i) decrease the cost of virtual server resources by as much as 24% compared to the current cost-unaware approach, (ii) reduce by an order of magnitude the time to transition to a new configuration through multiple elasticity mechanisms in the cloud, and (iii) illustrate the opportunity for design alternatives which trade off the cost of server resources against the time required to scale the application.
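
Kingfisher's configuration-selection step, choosing the cheapest mix of virtual server types whose aggregate capacity meets the workload, can be illustrated with a minimal sketch. The instance catalog, capacities, and prices below are hypothetical, and the paper's actual formulation is an optimization over richer cost and transition models rather than this brute-force search:

```python
from itertools import product

# Hypothetical instance catalog: type -> (capacity in requests/sec, price in $/hour).
CATALOG = {
    "small":  (100, 0.10),
    "medium": (250, 0.20),
    "large":  (600, 0.45),
}

def cheapest_configuration(target_rps, max_per_type=8):
    """Brute-force the per-type server counts whose total capacity meets
    the target request rate at minimum hourly cost."""
    names = list(CATALOG)
    best_cost, best = float("inf"), None
    for counts in product(range(max_per_type + 1), repeat=len(names)):
        capacity = sum(n * CATALOG[t][0] for n, t in zip(counts, names))
        cost = sum(n * CATALOG[t][1] for n, t in zip(counts, names))
        if capacity >= target_rps and cost < best_cost:
            best_cost, best = cost, dict(zip(names, counts))
    return best, best_cost

config, cost = cheapest_configuration(target_rps=900)
print(config, f"${cost:.2f}/hour")
```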


International Conference on Autonomic Computing | 2010

Autonomic mix-aware provisioning for non-stationary data center workloads

Rahul Singh; Upendra Sharma; Emmanuel Cecchet; Prashant J. Shenoy

Online Internet applications see dynamic workloads that fluctuate over multiple time scales. This paper argues that the non-stationarity in Internet application workloads, which causes the request mix to change over time, can have a significant impact on the overall processing demands imposed on data center servers. We propose a novel mix-aware dynamic provisioning technique that handles both the non-stationarity in the workload and changes in request volumes when allocating server capacity in Internet data centers. Our technique employs the k-means clustering algorithm to automatically determine the workload mix and a queuing model to predict the server capacity for a given workload mix. We implement a prototype provisioning system that incorporates our technique and experimentally evaluate its efficacy on a laboratory Linux data center running the TPC-W web benchmark. Our results show that our k-means clustering technique accurately captures workload mix changes in Internet applications. We also demonstrate that mix-aware dynamic provisioning eliminates SLA violations due to under-provisioning with non-stationary web workloads, and that it offers better resource usage by reducing over-provisioning when compared to a baseline provisioning approach that reacts only to workload volume changes. We also present a case study of our provisioning approach on Amazon's EC2 cloud platform.
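
The two building blocks named in the abstract, k-means over request-mix vectors and a queuing model for capacity prediction, can be sketched as follows. The request classes, service demands, and the M/M/1-style sizing rule are illustrative assumptions, not the paper's exact model:

```python
import math
import numpy as np
from sklearn.cluster import KMeans

# Request-mix vectors per monitoring window: fractions of (browse, search,
# checkout) traffic. Classes and all numbers are hypothetical.
windows = np.array([
    [0.80, 0.15, 0.05], [0.78, 0.17, 0.05],
    [0.30, 0.30, 0.40], [0.32, 0.28, 0.40],
    [0.50, 0.40, 0.10], [0.52, 0.38, 0.10],
])

# Step 1: k-means identifies the characteristic workload mixes.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(windows)

# Assumed per-class service demand (seconds of server time per request).
demand = np.array([0.005, 0.010, 0.040])

def servers_for_mix(mix, arrival_rate, sla_resp=0.2):
    """Step 2: size a tier of load-balanced M/M/1 servers so the mean
    response time 1/(mu - lambda/n) stays within the SLA for this mix."""
    mu = 1.0 / float(mix @ demand)        # per-server service rate (req/s)
    if mu <= 1.0 / sla_resp:
        raise ValueError("SLA unachievable even on an idle server")
    return math.ceil(arrival_rate / (mu - 1.0 / sla_resp))

for mix in km.cluster_centers_:
    print(np.round(mix, 2), "->", servers_for_mix(mix, arrival_rate=500), "servers")
```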


Virtual Execution Environments | 2011

Dolly: virtualization-driven database provisioning for the cloud

Emmanuel Cecchet; Rahul Singh; Upendra Sharma; Prashant J. Shenoy

Cloud computing platforms are becoming increasingly popular for e-commerce applications that can be scaled on demand in a cost-effective way. Dynamic provisioning is used to autonomously add capacity in multi-tier cloud-based applications that see workload increases. While many solutions exist to provision application tiers that hold little or no state, the database tier remains problematic for dynamic provisioning due to the need to replicate its large disk state. In this paper, we explore virtual machine (VM) cloning techniques to spawn database replicas and address the challenges of provisioning shared-nothing replicated databases in the cloud. We argue that being able to determine state replication time is crucial for provisioning databases and show that VM cloning provides this property. We propose Dolly, a database provisioning system based on VM cloning, with cost models to adapt the provisioning policy to the specifics of the cloud infrastructure and the application requirements. We present an implementation of Dolly in a commercial-grade replication middleware and evaluate database provisioning strategies for a TPC-W workload on a private cloud and on Amazon EC2. By being aware of VM-based state replication cost, Dolly can solve the challenge of automated provisioning for replicated databases on cloud platforms.
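
The key property claimed for VM cloning, a predictable state-replication time that the provisioning policy can plan around, can be sketched as below. The helper names, bandwidth figures, and decision rule are hypothetical; Dolly's actual cost models are richer:

```python
# Hypothetical sizing helpers; the real system uses measured snapshot and
# network characteristics rather than fixed constants.

def clone_replication_time_s(disk_state_gb, copy_bandwidth_mbps):
    """Seconds to copy a database VM snapshot: the bulk of replica spawn time."""
    return disk_state_gb * 8_000 / copy_bandwidth_mbps   # GB -> megabits

def should_start_clone(seconds_until_spike, disk_state_gb, copy_bandwidth_mbps,
                       resync_margin_s=60):
    """Trigger cloning only if the replica (copy time plus a margin for
    replaying updates that arrive during the copy) will be ready before
    the predicted load spike."""
    ready_in = clone_replication_time_s(disk_state_gb, copy_bandwidth_mbps) + resync_margin_s
    return ready_in <= seconds_until_spike

print(clone_replication_time_s(50, 1000))   # 400.0 seconds for a 50 GB snapshot
print(should_start_clone(600, 50, 1000))    # True: start spawning the replica now
```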


International Conference on Computer Communications | 2011

Kingfisher: Cost-aware elasticity in the cloud

Upendra Sharma; Prashant J. Shenoy; Sambit Sahu; Anees Shaikh

In this paper we present Kingfisher, a cost-aware system that provides efficient support for elasticity in the cloud by (i) leveraging multiple mechanisms to reduce the time to transition to new configurations, and (ii) optimizing the selection of a virtual server configuration that minimizes cost. We have implemented a prototype of Kingfisher and have evaluated its efficacy on a laboratory cloud platform. Our experiments with varying application workloads demonstrate that Kingfisher is able to (i) decrease the cost of virtual server resources by as much as 24% compared to the current cost-unaware approach, (ii) reduce by an order of magnitude the time to transition to a new configuration through multiple elasticity mechanisms in the cloud, and (iii) illustrate the opportunity for further design alternatives which trade off the cost of server resources against the time required to scale the application.


ACM Transactions on Internet Technology | 2014

Cost-Aware Cloud Bursting for Enterprise Applications

Tian Guo; Upendra Sharma; Prashant J. Shenoy; Timothy Wood; Sambit Sahu

The high cost of provisioning resources to meet peak application demands has led to the widespread adoption of pay-as-you-go cloud computing services to handle workload fluctuations. Some enterprises with existing IT infrastructure employ a hybrid cloud model in which the enterprise uses its own private resources for the majority of its computing but "bursts" into the cloud when local resources are insufficient. However, current commercial tools rely heavily on the system administrator's knowledge to answer key questions such as when a cloud burst is needed and which applications must be moved to the cloud. In this article, we describe Seagull, a system designed to facilitate cloud bursting by determining which applications should be transitioned into the cloud and automating the movement process at the proper time. Seagull optimizes the bursting of applications using an optimization algorithm as well as a more efficient but approximate greedy heuristic. Seagull also reduces the overhead of deploying applications into the cloud using an intelligent precopying mechanism that proactively replicates virtualized applications, lowering the bursting time from hours to minutes. Our evaluation shows over 100% improvement compared to naïve solutions, though the greedy heuristic produces more expensive solutions than the ILP; its scalability, however, is dramatically better as the number of VMs increases. Our evaluation illustrates scenarios where our prototype can reduce cloud costs by more than 45% when bursting to the cloud, and shows that the incremental cost added by precopying applications is offset by a burst-time reduction of nearly 95%.
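
A minimal sketch of a greedy flavor of the bursting decision described above: rank applications by cloud cost per unit of freed local capacity and move the cheapest first. The applications, footprints, and prices are invented for illustration; Seagull's actual algorithm and cost model are richer:

```python
# Hypothetical applications: local footprint (cores) and estimated hourly
# cost of running each in the cloud.
apps = [
    {"name": "batch-report", "cores": 8,  "cloud_cost": 1.20},
    {"name": "wiki",         "cores": 4,  "cloud_cost": 0.90},
    {"name": "ci-runner",    "cores": 6,  "cloud_cost": 0.80},
    {"name": "store-front",  "cores": 12, "cloud_cost": 4.00},
]

def greedy_burst(apps, cores_to_free):
    """Push applications into the cloud, cheapest freed core first, until
    enough local capacity is released for the overloaded application."""
    ranked = sorted(apps, key=lambda a: a["cloud_cost"] / a["cores"])
    chosen, freed = [], 0
    for app in ranked:
        if freed >= cores_to_free:
            break
        chosen.append(app["name"])
        freed += app["cores"]
    if freed < cores_to_free:
        raise RuntimeError("not enough burstable capacity on the private side")
    return chosen, freed

print(greedy_burst(apps, cores_to_free=10))   # (['ci-runner', 'batch-report'], 14)
```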


International Conference on Data Engineering | 2005

QoSMig: adaptive rate-controlled migration of bulk data in storage systems

Koustuv Dasgupta; Sugata Ghosal; Rohit Jain; Upendra Sharma; Akshat Verma

Logical reorganization of data and requirements of differentiated QoS in information systems necessitate bulk data migration by the underlying storage layer. Such data migration needs to ensure that regular client I/Os are not impacted significantly while migration is in progress. We formalize the data migration problem in a unified admission control framework that captures both the performance requirements of client I/Os and the constraints associated with migration. We propose an adaptive rate-control-based data migration methodology, QoSMig, that achieves optimal client performance in a differentiated QoS setting while ensuring that the specified migration constraints are met. QoSMig uses both long-term averages and short-term forecasts of client traffic to compute a migration schedule. We present an architecture based on Service Level Enforcement Discipline for Storage (SLEDS) that supports QoSMig. Our trace-driven experimental study demonstrates that QoSMig provides significantly better I/O performance than existing migration methodologies.
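
The adaptive rate-control idea can be sketched as a per-interval bandwidth computation that reserves headroom for forecast client I/O and gives migration the remainder. The control law and all constants here are illustrative assumptions, not QoSMig's actual policy:

```python
DISK_BANDWIDTH_MBS = 800     # total device bandwidth, MB/s (illustrative)
CLIENT_HEADROOM = 1.15       # reserve 15% above the client-traffic forecast

def migration_rate(forecast_client_mbs, min_rate=10.0, max_rate=400.0):
    """Per-interval controller: migration gets whatever bandwidth remains
    after reserving headroom for forecast client I/O, within bounds."""
    spare = DISK_BANDWIDTH_MBS - CLIENT_HEADROOM * forecast_client_mbs
    return max(min_rate, min(max_rate, spare))

# Forecast client traffic for the next few control intervals (MB/s).
for client_load in [200, 450, 650, 700, 300]:
    print(f"client={client_load:3d} MB/s -> migrate at {migration_rate(client_load):5.1f} MB/s")
```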


IEEE International Workshop on Policies for Distributed Systems and Networks | 2005

Policy-based information lifecycle management in a large-scale file system

Mandis Beigi; Murthy V. Devarakonda; Rohit Jain; Marc Adam Kaplan; David Pease; Jim Rubas; Upendra Sharma; Akshat Verma

Policy-based file lifecycle management is important for balancing storage utilization and for regulatory conformance. It poses two important challenges: the need for simple yet effective policy design, and an implementation that scales to billions of files. This paper describes the design and an innovative implementation technique of policy-based lifecycle management in a prototype built as part of IBM's new SAN file system. The policy specification leverages a key abstraction in the file system called storage pools and its ability to support location independence for files. The policy implementation uses an innovative technique that combines concurrent policy execution with a policy-decision cache, enabling it to scale to billions of files under normal usage patterns.
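
The policy-decision cache can be sketched as memoization over a coarse attribute signature, so the billions of files that share a few attribute patterns reuse a handful of evaluated decisions. The rules, attributes, and bucketing below are hypothetical, not the SAN file system's actual policy language:

```python
from functools import lru_cache

# Hypothetical lifecycle rules: predicate over coarse file attributes -> pool.
POLICIES = [
    (lambda ext, age_days, size_mb: ext == "log" and age_days > 30, "archive-pool"),
    (lambda ext, age_days, size_mb: size_mb > 1024,                 "bulk-pool"),
]
DEFAULT_POOL = "gold-pool"

@lru_cache(maxsize=65536)
def placement_decision(ext, age_bucket, size_bucket):
    """Cached on a coarse attribute signature rather than the file name, so
    millions of similar files share one evaluated decision."""
    for predicate, pool in POLICIES:
        if predicate(ext, age_bucket * 30, size_bucket * 256):
            return pool
    return DEFAULT_POOL

def pool_for(ext, age_days, size_mb):
    return placement_decision(ext, age_days // 30, size_mb // 256)

print(pool_for("log", age_days=90, size_mb=4))     # archive-pool
print(pool_for("ibd", age_days=2, size_mb=2048))   # bulk-pool
```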


IEEE Conference on Mass Storage Systems and Technologies | 2005

An architecture for lifecycle management in very large file systems

Akshat Verma; Upendra Sharma; Jim Rubas; David Pease; Marc Adam Kaplan; Rohit Jain; Murthy V. Devarakonda; Mandis Beigi

We present STEPS, a policy-based architecture for lifecycle management (LCM) in a mass-scale distributed file system. The STEPS architecture is designed in the context of IBM's SAN file system (SFS) and leverages the parallelism and scalability offered by SFS while providing a centralized point of control for policy-based management. The architecture uses novel concepts such as a policy cache and rate-controlled migration for efficient and non-intrusive execution of the LCM functions, while ensuring that the architecture scales to very large numbers of files. The architecture has been implemented and used for lifecycle management in a distributed deployment of SFS with heterogeneous data. Our experiments on the implementation show that STEPS remains highly scalable as both the number and the size of the file objects hosted by SFS increase. The performance study also demonstrated that most of the efficiency of policy execution derives from the policy cache; further, a rate-control mechanism is necessary to ensure that users are isolated from LCM operations.


International Conference on Autonomic Computing | 2012

Provisioning multi-tier cloud applications using statistical bounds on sojourn time

Upendra Sharma; Prashant J. Shenoy; Donald F. Towsley



International Middleware Conference | 2014

Trustworthy geographically fenced hybrid clouds

K. R. Jayaram; David Robert Safford; Upendra Sharma; Vijay K. Naik; Dimitrios Pendarakis; Shu Tao


Collaboration


Dive into Upendra Sharma's collaborations.
