Maria Alejandra Rodriguez
University of Melbourne
Publications
Featured research published by Maria Alejandra Rodriguez.
IEEE International Conference on Cloud Computing Technology and Science | 2014
Maria Alejandra Rodriguez; Rajkumar Buyya
Cloud computing is the latest distributed computing paradigm, and it offers tremendous opportunities to solve large-scale scientific problems. However, it presents various challenges that need to be addressed in order to be efficiently utilized for workflow applications. Although the workflow scheduling problem has been widely studied, there are very few initiatives tailored for cloud environments. Furthermore, the existing works fail either to meet the users' quality of service (QoS) requirements or to incorporate some basic principles of cloud computing such as the elasticity and heterogeneity of the computing resources. This paper proposes a resource provisioning and scheduling strategy for scientific workflows on Infrastructure as a Service (IaaS) clouds. We present an algorithm based on the meta-heuristic optimization technique, particle swarm optimization (PSO), which aims to minimize the overall workflow execution cost while meeting deadline constraints. Our heuristic is evaluated using CloudSim and various well-known scientific workflows of different sizes. The results show that our approach performs better than the current state-of-the-art algorithms.
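The abstract above describes the PSO-based heuristic only at a high level. The following is a minimal, self-contained sketch of the general idea, not the paper's algorithm or its CloudSim evaluation: particles encode a task-to-VM-type assignment, and the fitness is a toy cost model with a penalty for missing the deadline. All VM types, prices, runtimes, and PSO parameters are hypothetical.

```python
# Illustrative PSO for deadline-constrained workflow scheduling.
# A minimal sketch: task runtimes, VM types, prices, and the penalty term
# are hypothetical and far simpler than the model evaluated in the paper.
import random

NUM_TASKS = 10
VM_TYPES = [(1.0, 0.10), (2.0, 0.20), (4.0, 0.40)]  # (speed, $/hour), toy values
RUNTIME = [random.uniform(10, 60) for _ in range(NUM_TASKS)]  # base minutes
DEADLINE = 120.0  # minutes


def decode(position):
    """Map a continuous particle position to a task -> VM-type assignment."""
    return [min(len(VM_TYPES) - 1, max(0, int(round(x)))) for x in position]


def fitness(position):
    """Total schedule cost, penalised when the deadline is violated."""
    makespan, cost = 0.0, 0.0
    for task, vm in enumerate(decode(position)):
        speed, price = VM_TYPES[vm]
        t = RUNTIME[task] / speed
        makespan += t              # toy model: tasks execute sequentially
        cost += (t / 60.0) * price
    if makespan > DEADLINE:
        cost += 100.0 + (makespan - DEADLINE)  # deadline penalty
    return cost


def pso(particles=30, iterations=200, w=0.5, c1=1.5, c2=1.5):
    dim, hi = NUM_TASKS, len(VM_TYPES) - 1
    pos = [[random.uniform(0, hi) for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=fitness)[:]
    for _ in range(iterations):
        for i in range(particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(float(hi), max(0.0, pos[i][d] + vel[i][d]))
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i][:]
                if fitness(pbest[i]) < fitness(gbest):
                    gbest = pbest[i][:]
    return decode(gbest), fitness(gbest)


if __name__ == "__main__":
    schedule, cost = pso()
    print("task -> VM type:", schedule, "estimated cost:", round(cost, 4))
```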
Concurrency and Computation: Practice and Experience | 2017
Maria Alejandra Rodriguez; Rajkumar Buyya
Large-scale scientific problems are often modeled as workflows. The ever-growing data and compute requirements of these applications have led to extensive research on how to efficiently schedule and deploy them in distributed environments. The emergence of the latest distributed systems paradigm, cloud computing, brings with it tremendous opportunities to run scientific workflows at low costs without the need of owning any infrastructure. It provides a virtually infinite pool of resources that can be acquired, configured, and used as needed and are charged on a pay-per-use basis. However, along with these benefits come numerous challenges that need to be addressed to generate efficient schedules. This work identifies these challenges and studies existing algorithms from the perspective of the scheduling models they adopt as well as the resource and application models they consider. A detailed taxonomy that focuses on features particular to clouds is presented, and the surveyed algorithms are classified according to it. In this way, we aim to provide a comprehensive understanding of existing literature and aid researchers by providing an insight into future directions and open issues.
International Conference on Parallel Processing | 2015
Maria Alejandra Rodriguez; Rajkumar Buyya
Scientific workflows are used to process vast amounts of data and to conduct large-scale experiments and simulations. They are time-consuming and resource-intensive applications that benefit from running on distributed platforms. In particular, scientific workflows can greatly leverage the ease of access, affordability, and scalability offered by cloud computing. To achieve this, innovative and efficient ways of orchestrating the workflow tasks and managing the compute resources in a cost-conscious manner need to be developed. We propose an adaptive resource provisioning and scheduling algorithm for scientific workflows deployed in Infrastructure as a Service clouds. Our algorithm was designed to address challenges specific to clouds such as the pay-as-you-go model, the performance variation of resources, and the on-demand access to unlimited, heterogeneous virtual machines. It is capable of responding to the dynamics of the cloud infrastructure and is successful in generating efficient solutions that meet a user-defined deadline and minimise the overall cost of the used infrastructure. Our simulation experiments demonstrate that it performs better than other state-of-the-art algorithms.
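As an illustration of the adaptive flavour described above, and distinct from the paper's actual heuristic, the sketch below picks the cheapest VM type whose estimated finish time, scaled by an observed slowdown, still meets the deadline. The VM catalogue, workloads, and slowdown factor are invented for the example.

```python
# Illustrative sketch of a dynamic, deadline-aware provisioning decision:
# if the estimated finish time of the remaining work slips past the deadline,
# a faster (more expensive) VM type is chosen; otherwise the cheapest suffices.
# VM types, runtimes, and the slowdown factor are hypothetical values.
VM_TYPES = [("small", 1.0, 0.10), ("medium", 2.0, 0.20), ("large", 4.0, 0.40)]


def pick_vm(remaining_work, elapsed, deadline, observed_slowdown=1.0):
    """Choose the cheapest VM type expected to finish before the deadline."""
    time_left = deadline - elapsed
    for name, speed, price in VM_TYPES:  # ordered cheapest first
        estimated = (remaining_work / speed) * observed_slowdown
        if estimated <= time_left:
            return name, price
    return VM_TYPES[-1][0], VM_TYPES[-1][2]  # best effort: fastest type


if __name__ == "__main__":
    # Early on, with no slowdown observed, the cheapest type is enough...
    print(pick_vm(remaining_work=60, elapsed=10, deadline=120))
    # ...but after observing a 1.8x slowdown the decision switches up a tier.
    print(pick_vm(remaining_work=60, elapsed=60, deadline=120, observed_slowdown=1.8))
```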
Future Generation Computer Systems | 2018
Maria Alejandra Rodriguez; Rajkumar Buyya
With the advent of cloud computing and the availability of data collected from increasingly powerful scientific instruments, workflows have become a prevailing means to achieve significant scientific advances at an increased pace. Emerging Workflow as a Service (WaaS) platforms offer scientists a simple, easily accessible, and cost-effective way of deploying their applications in the cloud at any time and from anywhere. They are multi-tenant frameworks and are designed to manage the execution of a continuous workload of heterogeneous workflows. To achieve this, they leverage the compute, storage, and network resources offered by Infrastructure as a Service (IaaS) providers. Hence, at any given point in time, a WaaS platform should be capable of efficiently scheduling an arbitrarily large number of workflows with different characteristics and quality of service requirements. As a result, we propose a resource provisioning and scheduling strategy designed specifically for WaaS environments. The algorithm is scalable and dynamic to adapt to changes in the environment and workload. It leverages containers to address resource utilization inefficiencies and aims to minimize the overall cost of leasing the infrastructure resources while meeting the deadline constraint of each individual workflow. To the best of our knowledge, this is the first approach that explicitly addresses VM sharing in the context of WaaS by modeling the use of containers in the resource provisioning and scheduling heuristics. Our simulation results demonstrate its responsiveness to environmental uncertainties, its ability to meet deadlines, and its cost-efficiency when compared to a state-of-the-art algorithm.
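To illustrate the VM-sharing idea that containers enable in a WaaS setting, here is a minimal first-fit packing sketch: containerised tasks are placed on already-leased VMs when capacity allows, and a new VM is leased only when nothing fits. The capacities, requests, and first-fit rule are simplifications for illustration, not the paper's provisioning heuristic.

```python
# Illustrative sketch of VM sharing via containers in a WaaS scheduler.
# All capacities and resource requests below are hypothetical.
from dataclasses import dataclass, field
from typing import List


@dataclass
class VM:
    cpu: float
    mem: float
    containers: List[str] = field(default_factory=list)
    used_cpu: float = 0.0
    used_mem: float = 0.0

    def fits(self, cpu, mem):
        return self.used_cpu + cpu <= self.cpu and self.used_mem + mem <= self.mem

    def place(self, task, cpu, mem):
        self.containers.append(task)
        self.used_cpu += cpu
        self.used_mem += mem


def schedule(tasks, vm_cpu=4.0, vm_mem=8.0):
    """First-fit placement of container requests; leases a new VM on demand."""
    vms = []
    for task, cpu, mem in tasks:
        for vm in vms:
            if vm.fits(cpu, mem):
                vm.place(task, cpu, mem)
                break
        else:
            vm = VM(vm_cpu, vm_mem)  # no running VM has room: lease another
            vm.place(task, cpu, mem)
            vms.append(vm)
    return vms


if __name__ == "__main__":
    workload = [("t1", 1, 2), ("t2", 2, 3), ("t3", 1, 1), ("t4", 3, 4)]
    for i, vm in enumerate(schedule(workload)):
        print(f"VM {i}: {vm.containers} (cpu {vm.used_cpu}/{vm.cpu})")
```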
ACM Transactions on Autonomous and Adaptive Systems | 2017
Maria Alejandra Rodriguez; Rajkumar Buyya
With the advent of cloud computing and the availability of data collected from increasingly powerful scientific instruments, workflows have become a prevailing means to achieve significant scientific advances at an increased pace. Scheduling algorithms are crucial in enabling the efficient automation of these large-scale workflows, and considerable effort has been made to develop novel heuristics tailored for the cloud resource model. The majority of these algorithms focus on coarse-grained billing periods that are much larger than the average execution time of individual tasks. Instead, our work focuses on emerging finer-grained pricing schemes (e.g., per-minute billing) that provide users with more flexibility and the ability to reduce the inherent wastage that results from coarser-grained ones. We propose a scheduling algorithm whose objective is to optimize a workflow's execution time under a budget constraint, a quality of service requirement that has been overlooked in favor of optimizing cost under a deadline constraint. Our proposal addresses fundamental challenges of clouds such as resource elasticity, abundance, and heterogeneity, as well as resource performance variation and virtual machine provisioning delays. The simulation results demonstrate our algorithm's responsiveness to environmental uncertainties and its ability to generate high-quality schedules that comply with the budget constraint while achieving faster execution times when compared to state-of-the-art algorithms.
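The wastage argument around billing granularity can be made concrete with a small calculation; the price and runtime below are hypothetical, chosen only to show how rounding usage up to whole billing periods inflates cost.

```python
# Illustrative comparison of billing-period granularity and the resulting
# wastage; the price and runtime are hypothetical, not measured values.
import math


def lease_cost(runtime_minutes, price_per_hour, billing_period_minutes):
    """Cost of a VM lease when usage is rounded up to whole billing periods."""
    periods = math.ceil(runtime_minutes / billing_period_minutes)
    return periods * billing_period_minutes * (price_per_hour / 60.0)


runtime = 95   # the VM is actually needed for 95 minutes
price = 0.20   # hypothetical $/hour

hourly = lease_cost(runtime, price, 60)     # charged for 120 minutes
per_minute = lease_cost(runtime, price, 1)  # charged for 95 minutes
print(f"hourly billing:     ${hourly:.4f}")
print(f"per-minute billing: ${per_minute:.4f} (wastage avoided: ${hourly - per_minute:.4f})")
```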
Software Architecture for Big Data and the Cloud | 2017
Maria Alejandra Rodriguez; Rajkumar Buyya
Infrastructure-as-a-Service clouds offer access to a scalable virtualized infrastructure on a pay-per-use basis. This is greatly beneficial for the deployment of scientific workflows, and as a result considerable effort is being made to develop and update existing workflow management systems to support the cloud resource model. The majority of existing systems are designed to work with traditional distributed platforms such as grids and clusters in which the resources are limited and readily available. In contrast, clouds offer access to elastic and abundant resources that can be provisioned and deprovisioned on demand. In this chapter, we present our efforts to extend an existing workflow system, the Cloudbus WMS, to enable the deployment of scientific applications in cloud computing environments. We present a case study to demonstrate the added functionality and evaluate the performance and cost of a well-known astronomy application on Microsoft Azure.
Future Generation Computer Systems | 2018
Maria Alejandra Rodriguez; Ramamohanarao Kotagiri; Rajkumar Buyya
Technological advances and the emergence of the Internet of Things have led to the collection of vast amounts of scientific data from increasingly powerful scientific instruments and a growing number of distributed sensors. This has not only heightened the significance of the analyses performed by scientific applications but has also increased their complexity and scale. Hence, emerging extreme-scale scientific workflows are becoming widespread, and so is the need to efficiently automate their deployment on a variety of platforms such as high performance computers, dedicated clusters, and cloud environments. Performance anomalies can considerably affect the execution of these applications. They may be caused by different factors including failures and resource contention, and they may lead to undesired circumstances such as lengthy delays in the workflow runtime or unnecessary costs in cloud environments. As a result, it is essential for modern workflow management systems to enable the early detection of this type of anomaly, to identify its cause, and to formulate and execute actions to mitigate its effects. In this work, we propose the use of Hierarchical Temporal Memory (HTM) to detect performance anomalies on real-time infrastructure metrics collected by continuously monitoring the resource consumption of executing workflow tasks. The framework is capable of processing a stream of measurements in an online and unsupervised manner and is successful in adapting to changes in the underlying statistics of the data. This allows it to be easily deployed on a variety of infrastructure platforms without the need to first collect data and train a model. We evaluate our approach by using two real scientific workflows deployed in Microsoft Azure's cloud infrastructure. Our experimental results demonstrate the ability of our model to accurately capture performance anomalies on different resource consumption metrics caused by a variety of competing workloads introduced into the system. A performance comparison of HTM to other online anomaly detection algorithms is also presented, demonstrating the suitability of the chosen algorithm for the problem presented in this work.
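HTM itself requires a dedicated library (e.g., Numenta's implementations), so the sketch below deliberately substitutes a much simpler online detector, a rolling z-score over a sliding window, purely to illustrate the streaming, unsupervised setting the paper targets. It is not HTM, and the metric name, window size, threshold, and injected spike are all invented for the example.

```python
# Not HTM: a deliberately simple online detector (rolling mean/variance with
# a z-score threshold) used only to illustrate detecting anomalies on a
# stream of resource-consumption measurements. All values are hypothetical.
from collections import deque
import math
import random


class RollingZScoreDetector:
    """Flags a sample as anomalous if it deviates strongly from the recent
    window -- a simplistic stand-in for an HTM-based streaming detector."""

    def __init__(self, window=60, threshold=3.5):
        self.window = deque(maxlen=window)
        self.threshold = threshold

    def update(self, value):
        anomalous = False
        if len(self.window) >= 10:  # wait for a minimal history
            mean = sum(self.window) / len(self.window)
            var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
            std = math.sqrt(var) or 1e-9
            anomalous = abs(value - mean) / std > self.threshold
        self.window.append(value)
        return anomalous


if __name__ == "__main__":
    detector = RollingZScoreDetector()
    for step in range(300):
        cpu = random.gauss(40, 3)   # normal CPU utilisation (%)
        if step == 200:
            cpu = 95.0              # injected contention spike
        if detector.update(cpu):
            print(f"step {step}: anomaly on cpu_utilisation = {cpu:.1f}%")
```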
ACM Computing Surveys | 2018
Rajkumar Buyya; Satish Narayana Srirama; Giuliano Casale; Rodrigo N. Calheiros; Yogesh Simmhan; Blesson Varghese; Erol Gelenbe; Bahman Javadi; Luis M. Vaquero; Marco Aurelio Stelmar Netto; Adel Nadjaran Toosi; Maria Alejandra Rodriguez; Ignacio Martín Llorente; Sabrina De Capitani di Vimercati; Pierangela Samarati; Dejan S. Milojicic; Carlos A. Varela; Rami Bahsoon; Marcos Dias de Assunção; Omer Farooq Rana; Wanlei Zhou; Hai Jin; Wolfgang Gentzsch; Albert Y. Zomaya; Haiying Shen
arXiv: Distributed, Parallel, and Cluster Computing | 2018
Maria Alejandra Rodriguez; Rajkumar Buyya
arXiv: Distributed, Parallel, and Cluster Computing | 2018
Rajkumar Buyya; Maria Alejandra Rodriguez; Adel Nadjaran Toosi; Jaeman Park