Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Rajiv Ranjan is active.

Publication


Featured research published by Rajiv Ranjan.


Software: Practice and Experience | 2011

CloudSim: a toolkit for modeling and simulation of cloud computing environments and evaluation of resource provisioning algorithms

Rodrigo N. Calheiros; Rajiv Ranjan; Anton Beloglazov; César A. F. De Rose; Rajkumar Buyya

Cloud computing is a recent advancement wherein IT infrastructure and applications are provided as ‘services’ to end‐users under a usage‐based payment model. It can leverage virtualized services even on the fly, based on requirements (workload patterns and QoS) that vary with time. The application services hosted under the Cloud computing model have complex provisioning, composition, configuration, and deployment requirements. Evaluating the performance of Cloud provisioning policies, application workload models, and resource performance models in a repeatable manner under varying system and user configurations and requirements is difficult to achieve. To overcome this challenge, we propose CloudSim: an extensible simulation toolkit that enables modeling and simulation of Cloud computing systems and application provisioning environments. The CloudSim toolkit supports both system and behavior modeling of Cloud system components such as data centers, virtual machines (VMs), and resource provisioning policies. It implements generic application provisioning techniques that can be extended with ease and limited effort. Currently, it supports modeling and simulation of Cloud computing environments consisting of both single and inter‐networked clouds (federation of clouds). Moreover, it exposes custom interfaces for implementing policies and provisioning techniques for allocation of VMs under inter‐networked Cloud computing scenarios. Several researchers from organizations such as HP Labs in the U.S.A. are using CloudSim in their investigations of Cloud resource provisioning and energy‐efficient management of data center resources. The usefulness of CloudSim is demonstrated by a case study involving dynamic provisioning of application services in a hybrid federated Cloud environment. The results of this case study show that the federated Cloud computing model significantly improves compliance with application QoS requirements under fluctuating resource and service demand patterns.
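As a flavour of the kind of experiment the toolkit targets, the sketch below models, in plain Python and at a toy level, a handful of hosts and a first-fit VM provisioning policy, then reports how many requests can be placed. It is illustrative only: the class and method names are invented and do not correspond to CloudSim's actual Java API, which is organized around entities such as Datacenter, Vm, Cloudlet, and DatacenterBroker.

    # Illustrative only: a toy, in-memory model of hosts, VM requests, and a
    # first-fit provisioning policy, loosely mirroring the kind of entities a
    # Cloud simulator models (hosts, VMs, provisioning policies).
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Host:
        host_id: int
        mips: int          # total CPU capacity
        ram: int           # total RAM (MB)
        used_mips: int = 0
        used_ram: int = 0

        def can_fit(self, vm: "VmRequest") -> bool:
            return (self.mips - self.used_mips >= vm.mips and
                    self.ram - self.used_ram >= vm.ram)

        def allocate(self, vm: "VmRequest") -> None:
            self.used_mips += vm.mips
            self.used_ram += vm.ram

    @dataclass
    class VmRequest:
        vm_id: int
        mips: int
        ram: int

    def first_fit(hosts: List[Host], vm: VmRequest) -> Optional[Host]:
        """A trivial provisioning policy: place the VM on the first host that fits."""
        for host in hosts:
            if host.can_fit(vm):
                host.allocate(vm)
                return host
        return None

    if __name__ == "__main__":
        hosts = [Host(i, mips=10_000, ram=16_384) for i in range(4)]
        requests = [VmRequest(i, mips=2_500, ram=4_096) for i in range(20)]
        placed = sum(1 for vm in requests if first_fit(hosts, vm) is not None)
        print(f"accepted {placed}/{len(requests)} VM requests")

Swapping first_fit for another function is the analogue of evaluating a different provisioning policy under the same workload.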


international conference on algorithms and architectures for parallel processing | 2010

InterCloud: utility-oriented federation of cloud computing environments for scaling of application services

Rajkumar Buyya; Rajiv Ranjan; Rodrigo N. Calheiros

Cloud computing providers have set up several data centers at different geographical locations over the Internet in order to optimally serve the needs of their customers around the world. However, existing systems do not support mechanisms and policies for dynamically coordinating load distribution among different Cloud-based data centers in order to determine the optimal location for hosting application services to achieve reasonable QoS levels. Further, Cloud computing providers are unable to predict the geographic distribution of users consuming their services; hence, load coordination must happen automatically, and the distribution of services must change in response to changes in load. To counter this problem, we advocate the creation of a federated Cloud computing environment (InterCloud) that facilitates just-in-time, opportunistic, and scalable provisioning of application services, consistently achieving QoS targets under variable workload, resource, and network conditions. The overall goal is to create a computing environment that supports dynamic expansion or contraction of capabilities (VMs, services, storage, and database) for handling sudden variations in service demands. This paper presents the vision, challenges, and architectural elements of InterCloud for utility-oriented federation of Cloud computing environments. The proposed InterCloud environment supports scaling of applications across multiple vendor clouds. We have validated our approach by conducting a rigorous performance evaluation study using the CloudSim toolkit. The results demonstrate that the federated Cloud computing model has immense potential, as it offers significant performance gains in terms of response time and cost savings under dynamic workload scenarios.
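The sketch below illustrates, in a highly simplified form, the load-coordination idea behind such a federation: a broker redirects each incoming VM request to the member data center with the most spare capacity. All names, capacities, and the placement rule are assumptions for illustration, not the InterCloud mechanisms themselves.

    # Illustrative only: a toy federation broker that sends each incoming VM
    # request to the member data center with the most spare capacity.
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class DataCenter:
        name: str
        capacity_vms: int
        running_vms: int = 0

        @property
        def spare(self) -> int:
            return self.capacity_vms - self.running_vms

    class FederationBroker:
        def __init__(self, members: List[DataCenter]) -> None:
            self.members = members

        def place(self) -> Optional[DataCenter]:
            """Route the request to the member with the most spare capacity."""
            best = max(self.members, key=lambda dc: dc.spare)
            if best.spare <= 0:
                return None            # federation saturated
            best.running_vms += 1
            return best

    if __name__ == "__main__":
        broker = FederationBroker([DataCenter("eu-west", 3), DataCenter("us-east", 5)])
        for i in range(10):
            dc = broker.place()
            print(f"request {i} -> {dc.name if dc else 'rejected'}")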


international conference on high performance computing and simulation | 2009

Modeling and simulation of scalable Cloud computing environments and the CloudSim toolkit: Challenges and opportunities

Rajkumar Buyya; Rajiv Ranjan; Rodrigo N. Calheiros

Cloud computing aims to power the next generation data centers and enables application service providers to lease data center capabilities for deploying applications depending on user QoS (Quality of Service) requirements. Cloud applications have different composition, configuration, and deployment requirements. Quantifying the performance of resource allocation policies and application scheduling algorithms at finer details in Cloud computing environments for different application and service models under varying load, energy performance (power consumption, heat dissipation), and system size is a challenging problem to tackle. To simplify this process, in this paper we propose CloudSim: an extensible simulation toolkit that enables modelling and simulation of Cloud computing environments. The CloudSim toolkit supports modelling and creation of one or more virtual machines (VMs) on a simulated node of a Data Center, jobs, and their mapping to suitable VMs. It also allows simulation of multiple Data Centers to enable a study on federation and associated policies for migration of VMs for reliability and automatic scaling of applications.


IEEE Communications Surveys and Tutorials | 2008

Peer-to-peer-based resource discovery in global grids: a tutorial

Rajiv Ranjan; Aaron Harwood; Rajkumar Buyya

An efficient resource discovery mechanism is one of the fundamental requirements for grid computing systems, as it aids in resource management and scheduling of applications. Resource discovery involves searching for the appropriate resource types that match the user's application requirements. Various kinds of solutions to grid resource discovery have been suggested, including centralized and hierarchical information server approaches. However, both of these approaches have serious limitations with regard to scalability, fault tolerance, and network congestion. To overcome these limitations, indexing resource information using a decentralized (e.g., peer-to-peer (P2P)) network model has been actively proposed in the past few years. This article investigates various decentralized resource discovery techniques primarily driven by the P2P network model. To summarize, this article presents: a summary of the current state of the art in grid resource discovery; a resource taxonomy with a focus on the computational grid paradigm; a P2P taxonomy with a focus on extending current structured systems (e.g., distributed hash tables) for indexing d-dimensional grid resource queries; a detailed survey of existing work that can support d-dimensional grid resource queries; and a classification of the surveyed approaches based on the proposed P2P taxonomy. We believe that this taxonomy and its mapping to relevant systems will be useful for academic and industry-based researchers who are engaged in the design of scalable grid and P2P systems.
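As a concrete illustration of indexing d-dimensional resource information over a DHT, the sketch below uses one common approach: discretize each attribute into grid cells, hash each cell to a key, and resolve a range query by visiting every overlapping cell. The attribute names, cell sizes, and the dict standing in for the DHT are assumptions for illustration, not taken from any of the surveyed systems.

    # Illustrative only: multi-attribute resource descriptions are mapped to
    # DHT keys by discretizing each attribute into cells; a range query visits
    # the keys of all overlapping cells. A dict stands in for the real overlay.
    import hashlib
    from itertools import product
    from typing import Dict, List, Tuple

    CELL_SIZES = {"cpus": 4, "ram_gb": 8}      # grid granularity per attribute (assumed)

    def cell_of(resource: Dict[str, int]) -> Tuple[int, ...]:
        return tuple(resource[a] // CELL_SIZES[a] for a in sorted(CELL_SIZES))

    def dht_key(cell: Tuple[int, ...]) -> str:
        return hashlib.sha1(repr(cell).encode()).hexdigest()[:8]

    def publish(dht: Dict[str, List[dict]], resource: Dict[str, int]) -> None:
        dht.setdefault(dht_key(cell_of(resource)), []).append(resource)

    def range_query(dht: Dict[str, List[dict]],
                    lo: Dict[str, int], hi: Dict[str, int]) -> List[dict]:
        attrs = sorted(CELL_SIZES)
        cell_ranges = [range(lo[a] // CELL_SIZES[a], hi[a] // CELL_SIZES[a] + 1)
                       for a in attrs]
        hits = []
        for cell in product(*cell_ranges):               # every overlapping cell
            for r in dht.get(dht_key(cell), []):
                if all(lo[a] <= r[a] <= hi[a] for a in attrs):
                    hits.append(r)
        return hits

    if __name__ == "__main__":
        dht: Dict[str, List[dict]] = {}
        publish(dht, {"cpus": 8, "ram_gb": 32})
        publish(dht, {"cpus": 2, "ram_gb": 4})
        print(range_query(dht, {"cpus": 4, "ram_gb": 16}, {"cpus": 16, "ram_gb": 64}))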


international world wide web conferences | 2012

CloudGenius: decision support for web server cloud migration

Michael Menzel; Rajiv Ranjan

Cloud computing is the latest computing paradigm that delivers hardware and software resources as virtualized services, freeing users from the burden of low-level system administration details. Migrating Web applications to Cloud services and integrating Cloud services into existing computing infrastructures is non-trivial. It leads to new challenges that often require innovation of paradigms and practices at all levels: technical, cultural, legal, regulatory, and social. The key problem in mapping Web applications to virtualized Cloud services is selecting the best and most compatible mix of software images (e.g., Web server images) and infrastructure services to ensure that the Quality of Service (QoS) targets of an application are achieved. A critical issue is that, when selecting Cloud services, engineers must consider heterogeneous sets of criteria and complex dependencies between infrastructure services and software images, which are impossible to resolve manually. To overcome these challenges, we present a framework (called CloudGenius) which automates the decision-making process based on a model and factors specific to Web server migration to the Cloud. CloudGenius leverages a well-known multi-criteria decision-making technique, the Analytic Hierarchy Process, to automate the selection process based on a model, factors, and QoS parameters related to an application. An example application demonstrates the applicability of the theoretical CloudGenius approach. Moreover, we present an implementation of CloudGenius that has been validated through experiments.
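To make the Analytic Hierarchy Process step concrete, the sketch below computes priority weights from a pairwise-comparison matrix using the standard column-normalization approximation. The criteria and judgment values are invented for illustration; CloudGenius's actual models and factors are defined in the paper.

    # Illustrative only: the column-normalization approximation of AHP priority
    # weights from a pairwise-comparison matrix on Saaty's 1-9 scale.
    def ahp_weights(matrix):
        """Normalize each column, then average across each row to get priorities."""
        n = len(matrix)
        col_sums = [sum(matrix[i][j] for i in range(n)) for j in range(n)]
        normalized = [[matrix[i][j] / col_sums[j] for j in range(n)] for i in range(n)]
        return [sum(row) / n for row in normalized]

    if __name__ == "__main__":
        criteria = ["cost", "latency", "reliability"]
        # invented judgments: cost is 3x as important as latency, 5x as important
        # as reliability; latency is 2x as important as reliability.
        comparisons = [
            [1.0, 3.0, 5.0],
            [1 / 3, 1.0, 2.0],
            [1 / 5, 1 / 2, 1.0],
        ]
        for name, w in zip(criteria, ahp_weights(comparisons)):
            print(f"{name}: {w:.3f}")

With these judgments the weights come out at roughly 0.65, 0.23, and 0.12, which is the kind of ranking the framework would then combine with QoS parameters when comparing candidate Cloud services.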


arXiv: Distributed, Parallel, and Cluster Computing | 2010

Peer-to-Peer Cloud Provisioning: Service Discovery and Load-Balancing

Rajiv Ranjan; Liang Zhao; Xiaomin Wu; Anna Liu; Andres Quiroz; Manish Parashar

Clouds have evolved as the next-generation platform that facilitates on-demand, wide-area renting of computing or storage services for hosting application services that experience highly variable workloads and require high availability and performance. Interconnecting Cloud computing system components (servers, virtual machines (VMs), application services) through a peer-to-peer routing and information dissemination structure is essential to avoid the provisioning efficiency bottlenecks and single points of failure predominantly associated with traditional centralized or hierarchical approaches. These limitations can be overcome by connecting Cloud system components using a structured peer-to-peer network model (such as distributed hash tables (DHTs)). DHTs offer deterministic information/query routing and discovery with close to logarithmic bounds on network message complexity. By maintaining a small routing state of O(log n) per VM, a DHT structure can guarantee deterministic look-ups in a completely decentralized and distributed manner. This chapter presents: (i) a layered peer-to-peer Cloud provisioning architecture; (ii) a summary of the current state of the art in Cloud provisioning with particular emphasis on service discovery and load-balancing; (iii) a classification of existing peer-to-peer network management models with a focus on extending DHTs for indexing and managing complex provisioning information; and (iv) the design and implementation of a novel, extensible software fabric (Cloud peer) that combines public/private clouds, overlay networking, and structured peer-to-peer indexing techniques to support scalable and self-managing service discovery and load-balancing in Cloud computing environments. Finally, an experimental evaluation is presented that demonstrates the feasibility of building next-generation Cloud provisioning systems based on peer-to-peer network management and information dissemination models. The experimental test-bed has been deployed on a public cloud computing platform, Amazon EC2, which demonstrates the effectiveness of the proposed peer-to-peer Cloud provisioning software fabric.
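The sketch below illustrates the O(log n) routing-state and look-up property on a toy Chord-style ring: each node keeps a small finger table and a key is resolved in a handful of forwarding hops. The node identifiers and ring size are invented, and a real DHT would hash node addresses and keys onto the identifier space.

    # Illustrative only: a minimal Chord-style ring showing how a per-node
    # finger table of O(log n) entries yields deterministic lookups.
    M = 8                                              # identifier space of 2**M points
    RING = 2 ** M
    NODES = sorted([5, 30, 62, 99, 140, 188, 231])     # toy node identifiers

    def successor(key):
        """First node clockwise from key, with wraparound."""
        return next((n for n in NODES if n >= key), NODES[0])

    def in_half_open(x, a, b):                         # x in (a, b] on the ring
        return (a < x <= b) if a < b else (x > a or x <= b)

    def in_open(x, a, b):                              # x in (a, b) on the ring
        return (a < x < b) if a < b else (x > a or x < b)

    # finger[i] of node n points at successor(n + 2**i): O(log n) state per node.
    FINGERS = {n: [successor((n + 2 ** i) % RING) for i in range(M)] for n in NODES}

    def closest_preceding_finger(node, key):
        for f in reversed(FINGERS[node]):
            if in_open(f, node, key):
                return f
        return node

    def find_successor(node, key, hops=0):
        succ = FINGERS[node][0]                        # finger[0] is the direct successor
        if in_half_open(key, node, succ):
            return succ, hops
        nxt = closest_preceding_finger(node, key)
        nxt = succ if nxt == node else nxt             # fall back to the successor link
        return find_successor(nxt, key, hops + 1)

    if __name__ == "__main__":
        owner, hops = find_successor(5, 200)
        print(f"key 200 is owned by node {owner}, resolved in {hops} hops")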


IEEE Internet Computing | 2015

Cloud Resource Orchestration Programming: Overview, Issues, and Directions

Rajiv Ranjan; Boualem Benatallah; Schahram Dustdar; Mike P. Papazoglou

Cloud computing provides on-demand access to affordable hardware (such as multicore CPUs, GPUs, disk drives, and networking equipment) and software (databases, application servers, load-balancers, data processors, and frameworks). The pervasiveness and power of cloud computing alleviates some of the problems that application administrators face in their existing hardware and locally managed software environments. However, the rapid increase in scale, dynamicity, heterogeneity, and diversity of cloud resources necessitates having expert knowledge about programming complex orchestration operations (for example, selection, deployment, monitoring, and runtime control) on those resources to achieve the desired quality of service. This article provides an overview of the key cloud resource types and resource orchestration operations, with special focus on research issues involved in programming those operations.
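As a compressed illustration of the four operation classes, the sketch below strings selection, deployment, monitoring, and runtime control into a small reconciliation loop. The offerings, prices, and thresholds are invented stand-ins for real provider catalogues and monitoring feeds, not any particular orchestration framework's API.

    # Illustrative only: selection, deployment, monitoring, and runtime control
    # expressed as one toy control loop over an invented catalogue.
    import random

    OFFERINGS = [                                      # selection: a toy provider catalogue
        {"name": "small", "cpus": 2, "price": 0.05},
        {"name": "large", "cpus": 8, "price": 0.20},
    ]

    def select(min_cpus):
        """Pick the cheapest offering meeting the requirement."""
        fits = [o for o in OFFERINGS if o["cpus"] >= min_cpus]
        return min(fits, key=lambda o: o["price"])

    def deploy(offering):
        return {"offering": offering, "instances": 1}

    def monitor(deployment):
        """Stand-in for a monitoring feed: report average CPU utilization."""
        return random.uniform(0.1, 1.0)

    def control(deployment, utilization, high=0.8, low=0.2):
        """Runtime control: scale out when hot, scale in when idle."""
        if utilization > high:
            deployment["instances"] += 1
        elif utilization < low and deployment["instances"] > 1:
            deployment["instances"] -= 1
        return deployment

    if __name__ == "__main__":
        app = deploy(select(min_cpus=2))
        for step in range(5):
            util = monitor(app)
            app = control(app, util)
            print(f"step {step}: util={util:.2f}, instances={app['instances']}")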


Concurrency and Computation: Practice and Experience | 2011

A taxonomy and survey on autonomic management of applications in grid computing environments

Mustafizur Rahman; Rajiv Ranjan; Rajkumar Buyya; Boualem Benatallah

In Grid computing environments, the availability, performance, and state of resources, applications, services, and data undergo continuous changes during the life cycle of an application. Uncertainty is a fact in Grid environments, triggered by multiple factors, including: (1) failures, (2) dynamism, (3) incomplete global knowledge, and (4) heterogeneity. Unfortunately, existing Grid management methods, tools, and application composition techniques are inadequate to handle these resource, application, and environment behaviors. The aforementioned characteristics impose serious requirements on Grid programming and runtime systems if they are to deliver efficient performance to scientific and commercial applications. To overcome the above challenges, Grid programming and runtime systems must become autonomic or self‐managing in accordance with the high‐level behavior specified by system administrators. Autonomic systems are inspired by biological systems that deal with similar challenges of complexity, dynamism, heterogeneity, and uncertainty. To this end, we propose a comprehensive taxonomy that characterizes and classifies the different software components and high‐level methods required for autonomic management of applications in Grids. We also survey several representative Grid computing systems that have been developed by leading research groups in academia and industry. The taxonomy not only highlights the similarities and differences of state‐of‐the‐art technologies utilized in autonomic application management from the perspective of Grid computing, but also identifies the areas that require further research initiatives. We believe that this taxonomy and its mapping to relevant systems will be highly useful for academic‐ and industry‐based researchers who are engaged in the design of autonomic Grid and, more recently, Cloud computing systems.
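One common way to picture such a self-managing system is the monitor-analyze-plan-execute (MAPE) reference loop; the sketch below applies it to a fake job queue whose capacity grows whenever the backlog exceeds a policy threshold. The managed element and threshold are invented for illustration and are not taken from any of the surveyed systems.

    # Illustrative only: a bare-bones MAPE loop over a fake job queue.
    from collections import deque

    queue = deque(range(12))          # managed element: pending Grid jobs
    workers = 1                       # current capacity

    def monitor():
        return {"backlog": len(queue), "workers": workers}

    def analyze(symptoms, max_backlog_per_worker=4):
        return symptoms["backlog"] > symptoms["workers"] * max_backlog_per_worker

    def plan(symptoms):
        return {"add_workers": 1}     # high-level behaviour: grow capacity by one

    def execute(change):
        global workers
        workers += change["add_workers"]

    for tick in range(4):
        symptoms = monitor()
        if analyze(symptoms):
            execute(plan(symptoms))
        for _ in range(min(workers, len(queue))):   # each worker drains one job per tick
            queue.popleft()
        print(f"tick {tick}: backlog={len(queue)}, workers={workers}")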


Future Generation Computer Systems | 2010

Cooperative and decentralized workflow scheduling in global grids

Mustafizur Rahman; Rajiv Ranjan; Rajkumar Buyya

Existing Grid scheduling systems, such as e-Science workflow brokers, operate in tandem but lack a cooperation mechanism that could lead to efficient application schedules across distributed resources. This lack of coordination degrades the utilization of resources, including computing cycles and network bandwidth. Moreover, current brokering systems have evolved around centralized client/server or hierarchical models, where the responsibility for key functionalities such as resource discovery is delegated to centralized server machines. Centralized models have well-known drawbacks regarding scalability, single points of failure, and network congestion at links leading to the server. To overcome these problems, this paper proposes a novel approach for decentralized and cooperative workflow scheduling in a dynamic and distributed Grid resource sharing environment. The participants in the system, such as the workflow brokers, resources, and users, who belong to multiple control domains, work together to enable a single cooperative resource sharing environment. The proposed approach builds on a Distributed Hash Table (DHT) based d-dimensional logical index space for resource discovery, coordination, and overall system decentralization. The DHT-based d-dimensional index space serves as a blackboard system, where distributed participants can post and search complex coordination objects that regulate system-wide scheduling decisions. With our approach, not only are performance bottlenecks likely to be eliminated, but efficient scheduling with enhanced scalability is also achieved. We evaluate and prove the feasibility of our approach through an extensive trace-driven simulation. To compare the proposed approach against a non-cooperative scheduling approach, we conduct experiments for different sizes of workflow. The results show that our scheduling technique can reduce the makespan by up to 25% and demonstrates improved load-balancing capability. We also compare the performance of the proposed approach against a centralized coordination technique and show that our approach is as efficient as the centralized technique with respect to achieving coordinated schedules.
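The coordination-object idea can be pictured with the toy sketch below: brokers post resource claims to a shared coordination space, and a claim is rejected when it would conflict with an earlier one. Here a plain dict stands in for the DHT-based d-dimensional index space, and the broker and cluster names are invented.

    # Illustrative only: a toy "coordination space" through which distributed
    # workflow brokers claim resources before dispatching tasks, so two brokers
    # never schedule onto the same free slot.
    class CoordinationSpace:
        def __init__(self, resources):
            self.free_slots = dict(resources)      # resource name -> free slots

        def post_claim(self, broker, resource, slots):
            """Grant a claim only if the resource still has spare capacity."""
            if self.free_slots.get(resource, 0) >= slots:
                self.free_slots[resource] -= slots
                return True                        # claim object accepted
            return False                           # conflicting claim rejected

    class Broker:
        def __init__(self, name, space):
            self.name, self.space = name, space

        def schedule(self, task, resource, slots=1):
            ok = self.space.post_claim(self.name, resource, slots)
            state = "scheduled" if ok else "re-planned (conflict)"
            print(f"{self.name}: task {task} on {resource} -> {state}")

    if __name__ == "__main__":
        space = CoordinationSpace({"cluster-A": 2, "cluster-B": 1})
        b1, b2 = Broker("broker-1", space), Broker("broker-2", space)
        b1.schedule("t1", "cluster-A")
        b2.schedule("t2", "cluster-A")
        b1.schedule("t3", "cluster-A")   # third claim on cluster-A is rejected
        b2.schedule("t4", "cluster-B")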


cluster computing and the grid | 2008

A Decentralized and Cooperative Workflow Scheduling Algorithm

Rajiv Ranjan; Mustafizur Rahman; Rajkumar Buyya

In current approaches to workflow scheduling, there is no cooperation between the distributed workflow brokers; as a result, the problem of conflicting schedules occurs. To overcome this problem, in this paper we propose a decentralized and cooperative workflow scheduling algorithm. The proposed approach utilizes a peer-to-peer (P2P) coordination space to coordinate the application schedules among the Grid-wide distributed workflow brokers. The proposed algorithm is completely decentralized in the sense that there is no central point of contact in the system; the responsibility for key functionalities, such as resource discovery and scheduling coordination, is delegated to the P2P coordination space. With our approach, not only are performance bottlenecks likely to be eliminated, but efficient scheduling with enhanced scalability and better autonomy for users is also likely to be achieved. We prove the feasibility of our approach through an extensive trace-driven simulation study.

Collaboration


Dive into Rajiv Ranjan's collaborations.

Top Co-Authors

Boualem Benatallah
University of New South Wales

Liang Zhao
University of New South Wales

Schahram Dustdar
Vienna University of Technology