Ilia Pietri
University of Manchester
Publications
Featured research published by Ilia Pietri.
workshop on workflows in support of large scale science | 2014
Ilia Pietri; Gideon Juve; Ewa Deelman; Rizos Sakellariou
Scientific workflows, which capture large computational problems, may be executed on large-scale distributed systems such as Clouds. Determining the amount of resources to be provisioned for the execution of scientific workflows is a key component in achieving cost-efficient resource management and good performance. In this paper, a performance prediction model is presented to estimate the execution time of scientific workflows for different numbers of resources, taking into account their structure as well as their system-dependent characteristics. In the evaluation, three real-world scientific workflows are used to compare the estimated makespan calculated by the model with the actual makespan achieved on different system configurations of Amazon EC2. The results show that the proposed model can predict execution time with an error of less than 20% for over 96.8% of the experiments.
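As an illustration of this kind of estimate, the sketch below simulates list scheduling of a task graph over a given number of identical resources; the DAG, runtimes, and scheduling heuristic are simplified assumptions for illustration, not the model described in the paper.

```python
import heapq

def estimate_makespan(runtimes, preds, num_resources):
    """runtimes: {task: seconds}; preds: {task: iterable of predecessor tasks}."""
    preds = {t: set(preds.get(t, ())) for t in runtimes}
    succs = {t: [] for t in runtimes}
    for t, ps in preds.items():
        for p in ps:
            succs[p].append(t)
    indeg = {t: len(ps) for t, ps in preds.items()}
    finish = {}
    free_at = [0.0] * num_resources          # time each resource becomes free
    heapq.heapify(free_at)
    ready = [t for t, d in indeg.items() if d == 0]
    while ready:
        # simple heuristic: schedule the ready task whose predecessors finished earliest
        ready.sort(key=lambda t: max((finish[p] for p in preds[t]), default=0.0))
        task = ready.pop(0)
        earliest = heapq.heappop(free_at)
        start = max(earliest, max((finish[p] for p in preds[task]), default=0.0))
        finish[task] = start + runtimes[task]
        heapq.heappush(free_at, finish[task])
        for s in succs[task]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return max(finish.values(), default=0.0)

# hypothetical 4-task diamond workflow, runtimes in seconds
runtimes = {"a": 10, "b": 30, "c": 20, "d": 15}
preds = {"b": ["a"], "c": ["a"], "d": ["b", "c"]}
print(estimate_makespan(runtimes, preds, num_resources=2))  # 55.0
```

Repeating the estimate for different values of num_resources gives the kind of makespan-versus-resources curve such a prediction model produces.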
ACM Computing Surveys | 2016
Ilia Pietri; Rizos Sakellariou
Cloud computing enables users to provision resources on demand and execute applications in a way that meets their requirements by choosing virtual resources that fit their application resource needs. It then becomes the task of cloud resource providers to accommodate these virtual resources on physical resources. This problem is a fundamental challenge in cloud computing, as resource providers need to map virtual resources onto physical resources in a way that takes into account the providers’ optimization objectives. This article surveys the relevant body of literature that deals with this mapping problem and how it can be addressed in different scenarios and through different objectives and optimization techniques. The evaluation aspects of different solutions are also considered. The article aims at both identifying and classifying research done in the area, adopting a categorization that can enhance understanding of the problem.
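One concrete instance of the mapping problem surveyed here is consolidating virtual machines onto as few physical hosts as possible. The sketch below shows a first-fit-decreasing placement under that single objective; the VM demands and host capacity are hypothetical, and real providers typically optimize over several dimensions at once.

```python
def first_fit_decreasing(vm_demands, host_capacity):
    """Place VM demands onto hosts, opening a new host only when needed.
    Assumes each individual demand fits on an empty host."""
    hosts = []        # residual capacity of each active host
    placement = {}    # vm -> host index
    for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        for i, residual in enumerate(hosts):
            if demand <= residual:
                hosts[i] -= demand
                placement[vm] = i
                break
        else:
            hosts.append(host_capacity - demand)
            placement[vm] = len(hosts) - 1
    return placement, len(hosts)

# e.g. first_fit_decreasing({"vm1": 4, "vm2": 2, "vm3": 6}, host_capacity=8)
# places the VMs on two hosts instead of three.
```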
international conference on cloud and green computing | 2013
Ilia Pietri; Maciej Malawski; Gideon Juve; Ewa Deelman; Jarek Nabrzyski; Rizos Sakellariou
Large computational problems may often be modelled using multiple scientific workflows with similar structure. These workflows can be grouped into ensembles, which may be executed on distributed platforms such as the Cloud. In this paper, we focus on the provisioning of resources for scientific workflow ensembles and address the problem of meeting energy constraints along with either budget or deadline constraints. We propose and evaluate two energy-aware algorithms that can be used for resource provisioning and task scheduling. The experimental evaluation is based on simulations using synthetic data derived from the parameters of real scientific workflow applications. The results show that our proposed algorithms can meet the constraints and minimize energy consumption without compromising the number of completed workflows in an ensemble.
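A minimal sketch of the admission side of such provisioning, assuming caller-supplied per-workflow energy and cost estimators (this is an illustrative heuristic, not either of the algorithms proposed in the paper):

```python
def admit_workflows(ensemble, energy_budget, money_budget,
                    estimate_energy, estimate_cost):
    """Greedily admit workflows from an ensemble, highest priority first,
    as long as the estimated energy and budget constraints still hold.
    `estimate_energy` and `estimate_cost` are hypothetical estimators."""
    admitted, energy, cost = [], 0.0, 0.0
    for wf in ensemble:                      # ensemble assumed ordered by priority
        e, c = estimate_energy(wf), estimate_cost(wf)
        if energy + e <= energy_budget and cost + c <= money_budget:
            admitted.append(wf)
            energy += e
            cost += c
    return admitted
```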
Future Generation Computer Systems | 2017
Rafael Ferreira da Silva; Rosa Filgueira; Ilia Pietri; Ming Jiang; Rizos Sakellariou; Ewa Deelman
Automation of the execution of computational tasks is at the heart of improving scientific productivity. In recent years, scientific workflows have been established as an important abstraction that captures the data processing and computation of large and complex scientific applications. By allowing scientists to model and express entire data processing steps and their dependencies, workflow management systems relieve scientists of the details of an application and manage its execution on a computational infrastructure. As the resource requirements of today’s computational and data science applications, which process vast amounts of data, keep increasing, there is a compelling case for a new generation of advances in high-performance computing, commonly termed extreme-scale computing, which will bring forth multiple challenges for the design of workflow applications and management systems. This paper presents a novel characterization of workflow management systems using features commonly associated with extreme-scale computing applications. We classify 15 popular workflow management systems in terms of workflow execution models, heterogeneous computing environments, and data access methods. The paper also surveys workflow applications and identifies gaps for future research on the road to extreme-scale workflows and management systems.
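The three classification dimensions used in the paper could be captured in a record like the following; the comments give illustrative example values only and do not reproduce the paper's classification of any particular system.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class WMSProfile:
    """One row of a workflow-management-system classification."""
    name: str
    execution_model: str                # e.g. "task-based" or "service-oriented"
    computing_environments: List[str]   # e.g. ["HPC cluster", "cloud"]
    data_access_methods: List[str]      # e.g. ["shared filesystem", "object store"]
```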
international conference on parallel processing | 2014
Ilia Pietri; Rizos Sakellariou
Dynamic Voltage and Frequency Scaling (DVFS) is a power management technique used to decrease the processor frequency and minimize power consumption in modern computing systems. This may lead to substantial energy savings for large-scale computational problems, with scientific workflows comprising an important category of such applications. However, as frequency scaling may increase overall execution time, idle time on the processors may also increase, to such a degree that any gains in power are annulled; the extent to which this happens depends on the system and workflow characteristics. In this paper, we propose a scheduling algorithm that adopts frequency scaling to reduce the overall energy consumption of scientific workflows, given an allocation of tasks onto machines and a deadline to complete the execution. Based on the observation that using the lowest possible frequency may not necessarily be energy-efficient, the proposed algorithm works iteratively, scaling the frequency further and distributing any slack time only when overall energy consumption can be decreased. Synthetic data based on the parameters of real scientific workflows are used in the evaluation. The results show that the proposed algorithm can achieve energy savings, sometimes at the expense of execution time, by reducing processor idle time and hence overall energy consumption.
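A rough sketch of the underlying idea that the lowest frequency is not always the most energy-efficient, assuming a single processor, a cubic dynamic-power term, and a fixed idle power (simplified assumptions, not the paper's formulation):

```python
P_IDLE = 10.0   # W, assumed idle power of the processor
C_DYN = 2.0     # assumed dynamic-power coefficient: P_dyn = C_DYN * f**3

def total_energy(schedule, deadline):
    """schedule: {task: (freq, runtime_at_fmax, fmax)}; tasks run back to back
    on one processor, which then idles until the deadline."""
    busy = sum(rt * fmax / f for f, rt, fmax in schedule.values())
    dynamic = sum(C_DYN * f ** 3 * (rt * fmax / f) for f, rt, fmax in schedule.values())
    idle = P_IDLE * max(deadline - busy, 0.0)
    return dynamic + idle

def scale_down(schedule, freq_levels, deadline):
    """Lower one task's frequency a step at a time, keeping a change only if
    the deadline still holds and total energy actually decreases."""
    improved = True
    while improved:
        improved = False
        for task, (f, rt, fmax) in list(schedule.items()):
            lower = [x for x in freq_levels if x < f]
            if not lower:
                continue
            trial = dict(schedule)
            trial[task] = (max(lower), rt, fmax)
            busy = sum(r * fm / fr for fr, r, fm in trial.values())
            if busy <= deadline and total_energy(trial, deadline) < total_energy(schedule, deadline):
                schedule = trial
                improved = True
    return schedule
```

Because the idle term grows as tasks slow down, the sketch sometimes rejects further scaling even when the deadline would still be met, which is the behaviour the paper's observation points to.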
ieee acm international conference utility and cloud computing | 2015
Thiago A. L. Genez; Ilia Pietri; Rizos Sakellariou; Luiz F. Bittencourt; Edmundo Roberto Mauro Madeira
In this paper, we propose a procedure based on Particle Swarm Optimization (PSO) to guide the user in splitting an amount of CPU capacity (sum of frequencies) among a fixed number of resources in order to minimize the execution time (makespan) of the workflow. The proposed procedure was evaluated and compared with a naive approach, which selects only identical CPU frequency configurations for resources. Simulation results show that, by keeping the overall amount of provisioned CPU frequency constant, the proposed PSO-based approach was able to reduce the makespan of the workflow by carefully selecting different CPU frequencies for resources.
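A rough PSO sketch of this frequency-splitting idea is shown below; the particle encoding, renormalisation step, parameter values, and the caller-supplied simulate_makespan function are assumptions for illustration rather than the exact procedure of the paper.

```python
import random

def pso_split(total_freq, num_resources, simulate_makespan,
              particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Search for a per-resource frequency split (summing to total_freq)
    that minimises the makespan reported by simulate_makespan."""
    def normalise(x):
        s = sum(x)
        return [total_freq * v / s for v in x]   # keep the total capacity fixed

    swarm = [normalise([random.random() + 1e-6 for _ in range(num_resources)])
             for _ in range(particles)]
    vel = [[0.0] * num_resources for _ in range(particles)]
    pbest = [p[:] for p in swarm]
    pbest_fit = [simulate_makespan(p) for p in swarm]
    g = min(range(particles), key=lambda i: pbest_fit[i])
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]

    for _ in range(iters):
        for i, p in enumerate(swarm):
            for d in range(num_resources):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - p[d])
                             + c2 * r2 * (gbest[d] - p[d]))
                p[d] = max(p[d] + vel[i][d], 1e-6)   # keep frequencies positive
            swarm[i] = p = normalise(p)
            fit = simulate_makespan(p)
            if fit < pbest_fit[i]:
                pbest[i], pbest_fit[i] = p[:], fit
                if fit < gbest_fit:
                    gbest, gbest_fit = p[:], fit
    return gbest, gbest_fit
```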
grid economics and business models | 2015
Ilia Pietri; Rizos Sakellariou
Cloud providers now offer resources as combinations of CPU frequencies and prices, with faster resources (which operate at higher frequencies) charged at a higher monetary cost. With the emergence of this new pricing scheme, the problem of choosing cost-efficient configurations becomes even more challenging for users. The frequencies required to achieve cost-efficient configurations may vary in different scenarios, depending on both the provider’s pricing model and the application characteristics. In this paper, two cost-aware algorithms are presented that select low-cost CPU frequencies for each resource in order to complete a scientific workflow application within a deadline and at minimum cost. The proposed approaches are evaluated and compared through simulation using different pricing models that also charge for resource provisioning based on the CPU frequency.
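The pricing models in question charge more for higher provisioned CPU frequencies; the functions below are hypothetical examples of such schemes (the forms and coefficients are assumptions), used only to illustrate how the cost of a provisioning plan would be computed.

```python
def linear_price(freq_ghz, base=0.02, rate=0.03):
    """$/hour grows linearly with the provisioned CPU frequency."""
    return base + rate * freq_ghz

def superlinear_price(freq_ghz, base=0.02, rate=0.01, exp=2.0):
    """$/hour grows faster than linearly, penalising the fastest settings."""
    return base + rate * freq_ghz ** exp

def plan_cost(frequencies, hours, price=linear_price):
    """Total cost of a plan: one resource per entry, provisioned for `hours`."""
    return sum(price(f) * h for f, h in zip(frequencies, hours))
```

Which frequencies end up cost-efficient clearly depends on which of these curves the provider adopts, which is the comparison the paper's simulations explore.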
international conference on cloud computing | 2015
Dražen Lučanin; Ilia Pietri; Ivona Brandic; Rizos Sakellariou
New dynamic cloud pricing options are emerging with cloud providers offering resources as a wide range of CPU frequencies and matching prices that can be switched at runtime. On the other hand, cloud providers are facing the problem of growing operational energy costs. This raises a trade-off problem between energy savings and revenue loss when performing actions such as CPU frequency scaling. Although existing cloud controllers for managing cloud resources deploy frequency scaling, they only consider fixed virtual machine (VM) pricing. In this paper we propose a performance-based pricing model adapted for VMs with different CPU-boundedness properties. We present a cloud controller that scales CPU frequencies to achieve energy cost savings that exceed service revenue losses. We evaluate the approach in a simulation based on real VM workload, electricity price, and temperature traces, estimating energy cost savings of up to 32% in certain scenarios.
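The controller's core trade-off can be sketched as a simple decision rule, under simplified assumptions about power, electricity price, and per-VM revenue (not the paper's actual models):

```python
def should_scale_down(power_now_w, power_low_w, electricity_price_kwh,
                      interval_h, vms, perf_price):
    """Scale the host's CPU frequency down only when the estimated electricity
    saving over the next interval exceeds the revenue lost from slowing down
    CPU-bound VMs. vms: list of (cpu_boundedness in [0, 1], slowdown_fraction)."""
    energy_saving = ((power_now_w - power_low_w) / 1000.0
                     * interval_h * electricity_price_kwh)
    revenue_loss = sum(perf_price * boundedness * slowdown * interval_h
                       for boundedness, slowdown in vms)
    return energy_saving > revenue_loss
```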
ieee acm international conference utility and cloud computing | 2014
Ilia Pietri; Rizos Sakellariou
Cloud providers now offer resources that operate in a range of CPU frequencies, giving users a large number of resource configurations to choose from for their application needs. As higher CPU frequencies incur a higher monetary cost, users face the challenge of selecting the CPU frequencies that lead to cost-efficient configurations and strike a good balance between cost and performance. In this paper, an algorithm is presented that achieves cost-aware provisioning by selecting different CPU frequencies for each resource in order to execute scientific workflows within a deadline.
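A greedy sketch in the spirit of such cost-aware provisioning, with caller-supplied makespan and cost models; this is an illustrative heuristic, not necessarily the algorithm presented in the paper.

```python
def select_frequencies(freq_levels, num_resources, deadline,
                       estimate_makespan, cost):
    """Start all resources at the highest frequency and repeatedly lower the
    one whose next step down saves the most money, as long as the estimated
    makespan still meets the deadline. `estimate_makespan(freqs)` and
    `cost(freqs)` are hypothetical caller-supplied models."""
    freqs = [max(freq_levels)] * num_resources
    while True:
        best = None
        for i, f in enumerate(freqs):
            lower = [x for x in freq_levels if x < f]
            if not lower:
                continue
            trial = freqs[:]
            trial[i] = max(lower)
            if estimate_makespan(trial) <= deadline:
                saving = cost(freqs) - cost(trial)
                if best is None or saving > best[0]:
                    best = (saving, trial)
        if best is None or best[0] <= 0:
            return freqs
        freqs = best[1]
```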
IEEE Transactions on Cloud Computing | 2016
Drazen Lucanin; Ilia Pietri; Simon Holmbacka; Ivona Brandic; Johan Lilius; Rizos Sakellariou
New pricing policies are emerging where cloud providers charge for resource provisioning based on the allocated CPU frequencies. As a result, resources are offered to users as combinations of different performance levels and prices which can be configured at runtime. With such new pricing schemes and the increasing energy costs in data centres, balancing energy savings with performance and revenue losses is a challenging problem for cloud providers. CPU frequency scaling can be used to reduce power dissipation, but it also impacts virtual machine (VM) performance and therefore revenue. In this paper, we first propose a non-linear power model that estimates the power dissipation of a multi-core CPU physical machine (PM) and, second, a pricing model that adjusts prices based on the VMs’ CPU-boundedness characteristics. Finally, we present a cloud controller that uses these models to allocate VMs and scale the CPU frequencies of PMs in order to achieve energy cost savings that exceed service revenue losses. We evaluate the proposed approach using simulations with realistic VM workload, electricity price, and temperature traces, and estimate energy savings of up to 14.57 percent.
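Hedged sketches of the two modelling ingredients, a non-linear power estimate and a CPU-boundedness-aware price; the functional forms and coefficients are assumptions for illustration, not the fitted models of the paper.

```python
def pm_power_w(freq_ghz, active_cores, util, p_idle=80.0, a=6.0, b=2.5):
    """Non-linear multi-core power estimate: idle power plus a frequency- and
    utilisation-dependent dynamic term per active core (assumed exponent b)."""
    return p_idle + active_cores * util * a * freq_ghz ** b

def vm_price(base_price, freq_ghz, f_max_ghz, cpu_boundedness):
    """Performance-based price: CPU-bound VMs feel the full effect of frequency
    scaling on their price, while I/O-bound VMs are barely affected."""
    perf_factor = 1.0 - cpu_boundedness * (1.0 - freq_ghz / f_max_ghz)
    return base_price * perf_factor
```

A controller in the spirit of the paper would combine the two: pm_power_w feeds the energy-cost side of the trade-off, while vm_price determines how much revenue is lost when a PM's frequency is scaled down.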