Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Saurabh Kumar Garg is active.

Publication


Featured research published by Saurabh Kumar Garg.


Future Generation Computer Systems | 2013

A framework for ranking of cloud computing services

Saurabh Kumar Garg; Steven Versteeg; Rajkumar Buyya

Cloud computing is revolutionizing the IT industry by enabling companies to offer access to their infrastructure and application services on a subscription basis. As a result, several enterprises, including IBM, Microsoft, Google, and Amazon, have started to offer different Cloud services to their customers. Due to the vast diversity of available Cloud services, it has become difficult for customers to decide whose services they should use and on what basis to select them. Currently, there is no framework that allows customers to evaluate Cloud offerings and rank them based on their ability to meet the users' Quality of Service (QoS) requirements. In this work, we propose a framework and a mechanism that measure the quality of Cloud services and prioritize them. Such a framework can make a significant impact and will create healthy competition among Cloud providers to satisfy their Service Level Agreements (SLAs) and improve their QoS. We have shown the applicability of the ranking framework using a case study.
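
The core idea can be pictured with a toy weighted-scoring sketch: normalize each QoS attribute across providers, weight it by user preference, and sort by the aggregate score. The attribute names, weights, and sample values below are assumptions made for this example; the paper's actual framework is richer and is not reproduced here.

```python
# Illustrative only: rank cloud services by weighted, normalized QoS scores.
# Attribute names, weights, and values are hypothetical, not from the paper.

services = {
    "ProviderA": {"availability": 99.95, "response_time_ms": 120, "cost_per_hour": 0.45},
    "ProviderB": {"availability": 99.50, "response_time_ms": 80,  "cost_per_hour": 0.60},
    "ProviderC": {"availability": 99.90, "response_time_ms": 150, "cost_per_hour": 0.30},
}

weights = {"availability": 0.5, "response_time_ms": 0.3, "cost_per_hour": 0.2}
higher_is_better = {"availability": True, "response_time_ms": False, "cost_per_hour": False}

def normalize(attr, value, column):
    """Scale a value to [0, 1], flipping the scale for lower-is-better attributes."""
    lo, hi = min(column), max(column)
    if hi == lo:
        return 1.0
    score = (value - lo) / (hi - lo)
    return score if higher_is_better[attr] else 1.0 - score

def rank(services):
    columns = {a: [s[a] for s in services.values()] for a in weights}
    scored = {
        name: sum(weights[a] * normalize(a, attrs[a], columns[a]) for a in weights)
        for name, attrs in services.items()
    }
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

for name, score in rank(services):
    print(f"{name}: {score:.3f}")
```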


Journal of Parallel and Distributed Computing | 2011

Environment-conscious scheduling of HPC applications on distributed Cloud-oriented data centers

Saurabh Kumar Garg; Chee Shin Yeo; Arun Anandasivam; Rajkumar Buyya

The use of High Performance Computing (HPC) in commercial and consumer IT applications is becoming popular. HPC users need the ability to gain rapid and scalable access to high-end computing capabilities. Cloud computing promises to deliver such a computing infrastructure using data centers, so that HPC users can access applications and data from a Cloud anywhere in the world on demand and pay based on what they use. However, the growing demand drastically increases the energy consumption of data centers, which has become a critical issue. High energy consumption not only translates to high energy costs, which reduce the profit margin of Cloud providers, but also to high carbon emissions, which are not environmentally sustainable. Hence, there is an urgent need for energy-efficient solutions that address the sharp increase in energy consumption from the perspective not only of the Cloud provider but also of the environment. To address this issue, we propose near-optimal scheduling policies that exploit heterogeneity across multiple data centers for a Cloud provider. We consider a number of energy efficiency factors (such as energy cost, carbon emission rate, workload, and CPU power efficiency) which change across data centers depending on their location, architectural design, and management system. Our carbon/energy-based scheduling policies achieve, on average, up to 25% energy savings compared to profit-based scheduling policies, leading to higher profit and lower carbon emissions.
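
As a rough illustration of the kind of decision such policies make, the sketch below dispatches a single job to the data center with the lowest estimated emissions, given per-site carbon rates and CPU efficiencies. All figures and the greedy single-job policy are assumptions for this example; the paper's near-optimal policies also consider deadlines, profit, and workload.

```python
# A minimal sketch, assuming made-up site parameters: place one HPC job at the
# data center that minimizes its estimated carbon emissions.

from dataclasses import dataclass

@dataclass
class DataCenter:
    name: str
    carbon_rate: float     # kg CO2 emitted per kWh at this site
    cpu_efficiency: float  # work units completed per kWh (higher is better)

def job_emissions(dc: DataCenter, work_units: float) -> float:
    """Energy drawn falls with CPU efficiency; emissions scale with the carbon rate."""
    energy_kwh = work_units / dc.cpu_efficiency
    return energy_kwh * dc.carbon_rate

def dispatch(work_units: float, centers: list[DataCenter]) -> DataCenter:
    """Greedy carbon-minimizing placement for a single job."""
    return min(centers, key=lambda dc: job_emissions(dc, work_units))

centers = [
    DataCenter("site-coal",  carbon_rate=0.50, cpu_efficiency=4000.0),
    DataCenter("site-hydro", carbon_rate=0.05, cpu_efficiency=3500.0),
]
print(dispatch(10_000.0, centers).name)  # site-hydro under these assumed figures
```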


Conference on Decision and Control | 2011

SLA-oriented resource provisioning for cloud computing: Challenges, architecture, and solutions

Rajkumar Buyya; Saurabh Kumar Garg; Rodrigo N. Calheiros

Cloud computing systems promise to offer subscription-oriented, enterprise-quality computing services to users worldwide. With the increased demand for delivering services to a large number of users, providers need to offer differentiated services and meet users' quality expectations. Existing resource management systems in data centers are yet to support Service Level Agreement (SLA)-oriented resource allocation and thus need to be enhanced to realize cloud computing and utility computing. In addition, no work has been done to collectively incorporate customer-driven service management, computational risk management, and autonomic resource management into a market-based resource management system that targets the rapidly changing enterprise requirements of Cloud computing. This paper presents the vision, challenges, and architectural elements of SLA-oriented resource management. The proposed architecture supports the integration of market-based provisioning policies and virtualisation technologies for flexible allocation of resources to applications. The performance results obtained from our working prototype system show the feasibility and effectiveness of SLA-based resource provisioning in Clouds.


Utility and Cloud Computing | 2011

NetworkCloudSim: Modelling Parallel Applications in Cloud Simulations

Saurabh Kumar Garg; Rajkumar Buyya

As interest in adopting Cloud computing for various applications is rapidly growing, it is important to understand how these applications and systems will perform when deployed on Clouds. Due to the scale and complexity of shared resources, it is often hard to analyze the performance of new scheduling and provisioning algorithms on actual Cloud test beds. Therefore, simulation tools are becoming more and more important in the evaluation of the Cloud computing model. Simulation tools allow researchers to rapidly evaluate the efficiency, performance, and reliability of their new algorithms on a large heterogeneous Cloud infrastructure. However, current solutions lack either advanced application models, such as message-passing applications and workflows, or a scalable network model of the data center. To fill this gap, we have extended a popular Cloud simulator (CloudSim) with a scalable network and generalized application model, which allows more accurate evaluation of scheduling and resource provisioning policies to optimize the performance of a Cloud infrastructure.
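
The toy calculation below is not the CloudSim or NetworkCloudSim API; it only illustrates why a network model matters when simulating parallel applications: the same compute stage finishes later when its message exchange crosses the data center network instead of staying on one host. All bandwidth and latency figures are assumptions.

```python
# Toy illustration (not CloudSim code): the makespan of a communicating stage
# depends on whether the two tasks share a host or talk across a switch.

def transfer_time(message_bytes, bandwidth_bps, latency_s):
    return latency_s + message_bytes * 8 / bandwidth_bps

def stage_makespan(compute_s, message_bytes, same_host):
    # Assumed figures: fast intra-host virtual link vs. a 1 Gbps inter-host path.
    bandwidth, latency = (10e9, 1e-5) if same_host else (1e9, 5e-4)
    return compute_s + transfer_time(message_bytes, bandwidth, latency)

print("co-located :", stage_makespan(2.0, 50e6, same_host=True))
print("cross-host :", stage_makespan(2.0, 50e6, same_host=False))
```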


Future Generation Computer Systems | 2010

Time and cost trade-off management for scheduling parallel applications on Utility Grids

Saurabh Kumar Garg; Rajkumar Buyya; Howard Jay Siegel

With the growth of Utility Grids and various Grid market infrastructures, the need for efficient and cost-effective scheduling algorithms is also increasing rapidly, particularly in the area of meta-scheduling. In these environments, users may not only have requirements that conflict with those of other users, but they also have to manage the trade-off between time and cost so that their applications can be executed most economically in the minimum time. Thus, selecting the best Grid resources becomes a challenge in such a competitive environment. This paper presents three novel heuristics for scheduling parallel applications on Utility Grids that manage and optimize the trade-off between time and cost constraints. The performance of the heuristics is evaluated through extensive simulations of a real-world environment with real parallel workload models to demonstrate the practicality of our algorithms. We compare our scheduling algorithms experimentally against existing common meta-schedulers. The results show that our algorithms outperform existing algorithms by minimizing the time and cost of application execution on Utility Grids.
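
A simplified way to picture the trade-off is shown below: among resources that can meet a deadline, pick the cheapest, or among resources within a budget, pick the fastest. The resource figures are invented for this example, and the heuristics in the paper are more sophisticated than this greedy selection.

```python
# Hypothetical sketch of deadline- and budget-constrained resource selection.

resources = [
    {"name": "grid-a", "runtime_h": 4.0, "price_per_h": 1.0},
    {"name": "grid-b", "runtime_h": 2.0, "price_per_h": 3.0},
    {"name": "grid-c", "runtime_h": 6.0, "price_per_h": 0.5},
]

def cost_optimized(resources, deadline_h):
    """Cheapest resource that still finishes before the deadline."""
    feasible = [r for r in resources if r["runtime_h"] <= deadline_h]
    return min(feasible, key=lambda r: r["runtime_h"] * r["price_per_h"], default=None)

def time_optimized(resources, budget):
    """Fastest resource whose total cost stays within the budget."""
    feasible = [r for r in resources if r["runtime_h"] * r["price_per_h"] <= budget]
    return min(feasible, key=lambda r: r["runtime_h"], default=None)

print(cost_optimized(resources, deadline_h=5.0)["name"])  # grid-a
print(time_optimized(resources, budget=4.5)["name"])      # grid-a (grid-b exceeds budget)
```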


Cluster Computing and the Grid | 2012

Pricing Cloud Compute Commodities: A Novel Financial Economic Model

Bhanu Sharma; Ruppa K. Thulasiram; Parimala Thulasiraman; Saurabh Kumar Garg; Rajkumar Buyya

In this study, we design, develop, and simulate a cloud resource pricing model that satisfies two important constraints: the dynamic ability of the model to provide a high satisfaction guarantee, measured as Quality of Service (QoS), from the users' perspective, and profitability constraints from the cloud service providers' perspective. We employ financial option theory and treat the cloud resources as underlying assets to capture the realistic value of the cloud compute commodities (C3). We then price the cloud resources using our model. We discuss the results for four different metrics that we introduce to guarantee quality of service and price: (a) Moore's law based depreciation of asset values, (b) new-technology-based volatility measures for capturing price changes, (c) a new financial option pricing based model combining the above two concepts, and (d) the effect of the age and depreciation of cloud resources on QoS. We show that the cloud parameters can be mapped to a financial economic model, and we discuss the results of cloud compute commodity pricing for various parameters, such as the age of the resource, quality of service, and contract period.
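
To make the option-theoretic idea concrete, the sketch below prices a resource as a standard Black-Scholes European call on an asset whose value depreciates under a rough Moore's-law rule. This is a generic textbook illustration with invented parameters; the paper's C3 model, its depreciation schedule, and its volatility measures are not reproduced here.

```python
# Illustrative only: Black-Scholes call price on a Moore's-law-depreciated asset.
# All parameter values are assumptions, not figures from the paper.

from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def moores_law_value(initial_value, age_years, halving_years=1.5):
    """Asset value halves every `halving_years` as newer hardware arrives."""
    return initial_value * 0.5 ** (age_years / halving_years)

def black_scholes_call(S, K, T, r, sigma):
    """Standard European call price: S = asset value, K = strike, T = maturity (years)."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

S = moores_law_value(initial_value=100.0, age_years=1.0)  # value of a 1-year-old resource
print(f"contract price: {black_scholes_call(S, K=60.0, T=0.5, r=0.03, sigma=0.4):.2f}")
```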


International Conference on Algorithms and Architectures for Parallel Processing | 2011

SLA-based resource provisioning for heterogeneous workloads in a virtualized cloud datacenter

Saurabh Kumar Garg; Srinivasa K. Gopalaiyengar; Rajkumar Buyya

Efficient provisioning of resources is a challenging problem in cloud computing environments due to their dynamic nature and the need to support heterogeneous applications with different performance requirements. Currently, cloud datacenter providers either do not offer any performance guarantee or prefer static VM allocation over dynamic, which leads to inefficient utilization of resources. Earlier solutions, which concentrate on a single type of SLA (Service Level Agreement) or on the resource usage patterns of applications, are not suitable for cloud computing environments. In this paper, we tackle the resource allocation problem within a datacenter that runs different types of application workloads, particularly non-interactive and transactional applications. We propose an admission control and scheduling mechanism that not only maximizes resource utilization and profit, but also ensures that the SLA requirements of users are met. In our experimental study, the proposed mechanism is shown to provide substantial improvement over static server consolidation and to reduce SLA violations.
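
The admission-control side of such a mechanism can be sketched as a simple capacity check: a new request is accepted only if the host can still reserve enough CPU for it alongside already-admitted VMs, otherwise it is rejected rather than risk SLA violations. The class and figures below are illustrative assumptions, not the mechanism from the paper.

```python
# Simplified admission-control sketch with an invented capacity model.

class Host:
    def __init__(self, cpu_capacity):
        self.cpu_capacity = cpu_capacity
        self.reserved = 0.0

    def admit(self, cpu_demand):
        """Accept the request only if the reservation still fits on the host."""
        if self.reserved + cpu_demand > self.cpu_capacity:
            return False  # reject rather than overcommit and risk SLA violations
        self.reserved += cpu_demand
        return True

host = Host(cpu_capacity=32.0)
for name, demand in [("web-1", 8.0), ("batch-1", 16.0), ("web-2", 12.0)]:
    print(name, "admitted" if host.admit(demand) else "rejected")
```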


Journal of Network and Computer Applications | 2014

SLA-based virtual machine management for heterogeneous workloads in a cloud datacenter

Saurabh Kumar Garg; Adel Nadjaran Toosi; Srinivasa K. Gopalaiyengar; Rajkumar Buyya

Efficient provisioning of resources is a challenging problem in cloud computing environments due to their dynamic nature and the need to support heterogeneous applications. Even though VM (Virtual Machine) technology allows several workloads to run concurrently on a shared infrastructure, it still does not guarantee application performance. Thus, cloud datacenter providers currently either do not offer any performance guarantee or prefer static VM allocation over dynamic, which leads to inefficient utilization of resources. Moreover, workloads may have different QoS (Quality of Service) requirements due to the execution of different types of applications, such as HPC and web, which makes resource provisioning much harder. Earlier works concentrate either on a single type of SLA (Service Level Agreement) or on the resource usage patterns of applications, such as web applications, leading to inefficient utilization of datacenter resources. In this paper, we tackle the resource allocation problem within a datacenter that runs different types of application workloads, particularly non-interactive and transactional applications. We propose an admission control and scheduling mechanism that not only maximizes resource utilization and profit, but also ensures that the QoS requirements of users are met as specified in SLAs. In our experimental study, we found that being aware of the different types of SLAs, the applicable penalties, and the mix of workloads is important for better resource provisioning and utilization of datacenters. The proposed mechanism provides substantial improvement over static server consolidation and reduces SLA violations.
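
One detail this journal version emphasizes is that different SLA types carry different penalties. A minimal sketch of folding such penalties into the provider's net revenue is given below; the penalty rates and the linear penalty model are assumptions for illustration, not values from the paper.

```python
# Illustrative only: per-SLA-type penalty rates applied to a provider's revenue.

PENALTY_RATE = {"hpc_deadline": 0.10, "web_response_time": 0.25}  # $ per violated unit

def net_revenue(base_revenue, sla_type, violated_units):
    """Subtract a type-specific penalty proportional to the extent of the violation."""
    return base_revenue - PENALTY_RATE[sla_type] * violated_units

print(net_revenue(100.0, "hpc_deadline", violated_units=20))       # 98.0
print(net_revenue(100.0, "web_response_time", violated_units=20))  # 95.0
```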


International Conference on Parallel Processing | 2013

Energy and carbon-efficient placement of virtual machines in distributed cloud data centers

Atefeh Khosravi; Saurabh Kumar Garg; Rajkumar Buyya

Due to the increasing use of Cloud computing services and the amount of energy used by data centers, there is a growing interest in reducing the energy consumption and carbon footprint of data centers. Cloud data centers use virtualization technology to host multiple virtual machines (VMs) on a single physical server. By applying efficient VM placement algorithms, Cloud providers can enhance energy efficiency and reduce carbon footprint. Previous works have focused on reducing the energy used within a single data center or multiple data centers without considering their energy sources and Power Usage Effectiveness (PUE). In contrast, this paper proposes a novel VM placement algorithm to increase environmental sustainability by taking into account distributed data centers with different carbon footprint rates and PUEs. Simulation results show that the proposed algorithm reduces CO2 emissions and power consumption while maintaining the same level of quality of service as other competitive algorithms.
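
The placement criterion can be pictured as ranking candidate sites by the effective carbon cost of one unit of IT load, i.e. the carbon intensity of the site's energy source multiplied by its PUE, and placing the VM at the cheapest site. The figures below are illustrative, and the paper's algorithm considers more factors than this single ratio.

```python
# Minimal sketch: choose the data center with the lowest carbon intensity x PUE.
# Site figures are invented for illustration.

data_centers = [
    {"name": "dc-coal",  "carbon_intensity": 0.90, "pue": 1.8},  # kg CO2 per kWh
    {"name": "dc-hydro", "carbon_intensity": 0.02, "pue": 1.4},
    {"name": "dc-gas",   "carbon_intensity": 0.45, "pue": 1.2},
]

def effective_carbon(dc):
    # Each kWh of IT load draws PUE kWh from the grid at the site's carbon intensity.
    return dc["carbon_intensity"] * dc["pue"]

best = min(data_centers, key=effective_carbon)
print("place VM at:", best["name"])  # dc-hydro under these assumed figures
```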


International Conference on Parallel Processing | 2011

Green cloud framework for improving carbon efficiency of clouds

Saurabh Kumar Garg; Chee Shin Yeo; Rajkumar Buyya

The energy efficiency of ICT has become a major issue with the growing demand for Cloud computing. More and more companies are investing in building large datacenters to host Cloud services. These datacenters not only consume huge amounts of energy but also have very complex infrastructure. Many approaches have been proposed to make these datacenters energy efficient using technologies such as virtualization and consolidation. Still, these solutions are mostly cost driven and thus do not directly address the critical impact on environmental sustainability in terms of CO2 emissions. Hence, in this work, we propose a user-oriented Cloud architectural framework, the Carbon Aware Green Cloud Architecture, which addresses this environmental problem from the perspective of the overall usage of Cloud computing resources. We also present a case study on IaaS providers. Finally, we present future research directions to enable the overall carbon efficiency of Cloud computing.

Collaboration


Dive into Saurabh Kumar Garg's collaboration.

Top Co-Authors

Dimitrios Georgakopoulos
Swinburne University of Technology

Linlin Wu
University of Melbourne

Peter E. Strazdins
Australian National University

Xuezhi Zeng
Australian National University

Prem Prakash Jayaraman
Swinburne University of Technology