Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Toni Mastelic is active.

Publication


Featured research published by Toni Mastelic.


ACM Computing Surveys | 2015

Cloud Computing: Survey on Energy Efficiency

Toni Mastelic; Ariel Oleksiak; Holger Claussen; Ivona Brandic; Jean-Marc Pierson; Athanasios V. Vasilakos

Cloud computing is today’s most emphasized Information and Communications Technology (ICT) paradigm that is directly or indirectly used by almost every online user. However, such great significance comes with the support of a great infrastructure that includes large data centers comprising thousands of server units and other supporting equipment. Their share in power consumption generates between 1.1% and 1.5% of the total electricity use worldwide and is projected to rise even more. Such alarming numbers demand rethinking the energy efficiency of such infrastructures. However, before making any changes to infrastructure, an analysis of the current status is required. In this article, we perform a comprehensive analysis of an infrastructure supporting the cloud computing paradigm with regard to energy efficiency. First, we define a systematic approach for analyzing the energy efficiency of the most important data center domains, including server and network equipment, as well as cloud management systems and appliances consisting of software utilized by end users. Second, we utilize this approach for analyzing available scientific and industrial literature on state-of-the-art practices in data centers and their equipment. Finally, we extract existing challenges and highlight future research directions.


IEEE Cloud Computing | 2015

Recent Trends in Energy-Efficient Cloud Computing

Toni Mastelic; Ivona Brandic

Almost every online user directly or indirectly uses cloud computing, which is the most promising information and communication technology (ICT) paradigm. However, cloud computing's ultrascale size requires large datacenters comprising thousands of servers and other supporting equipment. The power consumption share of such infrastructures reaches 1.1 percent to 1.5 percent of the total electricity use worldwide, and is projected to rise even more. In this article, the authors describe recent trends in cloud computing regarding the energy efficiency of its supporting infrastructure. They present state-of-the-art approaches found in literature and in practice covering servers, networking, cloud management systems, and appliances (user software). They also describe benefits and trade-offs when applying energy-efficiency techniques, and discuss existing challenges and future research directions.


World Congress on Services | 2015

Predicting Resource Allocation and Costs for Business Processes in the Cloud

Toni Mastelic; Walid Fdhila; Ivona Brandic; Stefanie Rinderle-Ma

By moving business processes into the cloud, business partners can benefit from lower costs, more flexibility and greater scalability in terms of resources offered by the cloud providers. In order to execute a process or a part of it, a business process owner selects and leases feasible resources while considering different constraints, e.g., optimizing resource requirements and minimizing their costs. In this context, utilizing information about the process models or the dependencies between tasks can help the owner to better manage leased resources. In this paper, we propose a novel resource allocation technique based on the execution path of the process, used to assist the business process owner in efficiently leasing computing resources. The technique comprises three phases, namely process execution prediction, resource allocation and cost estimation. The first exploits the business process model metrics and attributes in order to predict the process execution and the required resources, while the second utilizes this prediction for efficient allocation of the cloud resources. The final phase estimates and optimizes the costs of leased resources by combining different pricing models offered by the provider.
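The cost-estimation phase can be sketched as follows. This is an illustrative simplification, not the paper's actual technique: the rates, the flat-fee reserved plan, and the decision rule are all assumed for the example.

```python
# Illustrative cost estimation for leased cloud resources (hypothetical prices):
# given the predicted usage hours for a resource, pick the cheaper of an
# on-demand rate and a flat-fee reserved plan, mirroring the idea of
# combining different pricing models offered by the provider.

ON_DEMAND_RATE = 0.10   # $ per hour (assumed)
RESERVED_FLAT = 60.0    # $ per billing period, unlimited hours (assumed)

def estimate_cost(predicted_hours: float) -> tuple[str, float]:
    """Return the cheaper pricing model and its cost for the predicted usage."""
    on_demand = predicted_hours * ON_DEMAND_RATE
    if on_demand <= RESERVED_FLAT:
        return ("on-demand", on_demand)
    return ("reserved", RESERVED_FLAT)

# A short-running process is cheaper on demand; a long-running one favors reservation.
print(estimate_cost(100))   # ('on-demand', 10.0)
print(estimate_cost(900))   # ('reserved', 60.0)
```

In the paper this decision is driven by the predicted execution path of the process rather than a single usage estimate; the sketch only shows the pricing-model comparison at the end of the pipeline.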


Computer Software and Applications Conference | 2014

Towards Uniform Management of Cloud Services by Applying Model-Driven Development

Toni Mastelic; Ivona Brandic; Andres Garcia

The popularity of Cloud Computing gave birth to the Everything-as-a-Service (XaaS) concept, where each service can comprise a large variety of software and hardware elements. Although based on the same concept, each of these services represents a complex system that has to be deployed and managed by a provider using individual tools for almost every element. This usually leads to a combination of different deployment tools that are unable to interact with each other in order to provide a unified and automatic service deployment procedure. Therefore, the tools are usually used manually or specifically integrated for a single cloud service, which in turn requires changing the entire deployment procedure in case the service gets modified. In this paper we utilize the Model-Driven Development (MDD) approach for building and managing arbitrary cloud services. We define a metamodel of a cloud service called CoPS, which describes a cloud service as a composition of software and hardware elements by using three sequential models, namely Component, Product and Service. We also present an architecture of a Cloud Management System (CMS) that is able to manage such services by automatically transforming the service models from the abstract representation to the actual deployment. Finally, we validate our approach by realizing three real-world use cases using a prototype implementation of the proposed CMS architecture.


International Conference on Cloud Computing | 2013

TimeCap: Methodology for Comparing IT Infrastructures Based on Time and Capacity Metrics

Toni Mastelic; Ivona Brandic

The scientific community is one of the major driving forces for developing and utilizing IT technologies such as supercomputers and Grids. Although the main race has always been for bigger and faster infrastructures, easier access to such infrastructures in recent years has created a demand for more customizable and scalable environments. However, introducing new technologies and paradigms such as Cloud computing requires a comprehensive analysis of their benefits before the actual implementation. In this paper we introduce TimeCap, a methodology for comparing IT infrastructures based on time requirements and resource capacity wastage. We go beyond comparing just the execution time by introducing Time Complexity, a part of TimeCap used for comparing arbitrary time-related tasks required for completing a procedure, i.e., obtaining the scientific results. Moreover, resource capacity wastage is compared using Discrete Capacity, a second part of TimeCap used for analyzing resource assignment and utilization. We evaluate our methodology by comparing a traditional physical infrastructure and a Cloud infrastructure using our local IT resources. We use a real-world scientific application that calculates plasma instabilities to analyze the time and capacity required for computation.
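The two TimeCap ingredients can be sketched in miniature. This is an assumed structure, not the paper's exact formulation: total time over all tasks of a procedure (not just execution), and wasted capacity as the unused fraction of what was allocated. All task names and numbers below are hypothetical.

```python
# A minimal sketch of the TimeCap idea (assumed structure): compare
# infrastructures by the total time of all tasks in a procedure and by
# wasted capacity (allocated minus utilized resources).

def total_time(task_durations: dict[str, float]) -> float:
    """Sum the durations of all tasks required to obtain the results."""
    return sum(task_durations.values())

def capacity_wastage(allocated: float, utilized: float) -> float:
    """Fraction of allocated capacity that goes unused."""
    return (allocated - utilized) / allocated

# Hypothetical numbers (hours): setup, queueing and execution per infrastructure.
physical = {"setup": 48.0, "queue": 12.0, "execute": 4.0}
cloud = {"setup": 2.0, "queue": 0.5, "execute": 5.0}

print(total_time(physical))        # 64.0
print(total_time(cloud))           # 7.5
print(capacity_wastage(64, 48))    # 0.25
```

The point the example illustrates is the paper's motivation for going beyond execution time: the cloud run executes slightly slower here, yet wins once setup and queueing are counted.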


IEEE International Conference on Cloud Computing Technology and Science | 2012

Methodology for trade-off analysis when moving scientific applications to cloud

Toni Mastelic; Drazen Lucanin; Andreas Ipp; Ivona Brandic

Scientific applications have always been one of the major driving forces for the development and efficient utilization of large scale distributed systems - computational Grids represent one of the prominent examples. While these infrastructures, such as Grids or Clusters, are widely used for running most of the scientific applications, they still use bare physical machines with fixed configurations and very little customizability. Today, Clouds represent another step forward in advanced utilization of distributed computing. They provide a fully customizable and self-managing infrastructure with scalable on-demand resources. However, true benefits and trade-offs of running scientific applications on a cloud infrastructure are still obscure, due to the lack of decision making support, which would provide a systematic approach for comparing these infrastructures. In this paper we introduce a comprehensive methodology for comparing the costs of using both infrastructures based on resource and energy usage, as well as their performance. We also introduce a novel approach for comparing the complexity of setting up and administrating such an infrastructure.


arXiv: Distributed, Parallel, and Cluster Computing | 2012

Energy efficient service delivery in clouds in compliance with the Kyoto Protocol

Drazen Lucanin; Michael Maurer; Toni Mastelic; Ivona Brandic

Cloud computing is revolutionizing the ICT landscape by providing scalable and efficient computing resources on demand. The ICT industry, especially data centers, is responsible for considerable amounts of CO2 emissions and will very soon face legislative restrictions, such as the Kyoto Protocol, defining caps at different organizational levels (country, industry branch, etc.). A lot has been done around energy-efficient data centers, yet there is very little work on defining flexible models considering CO2. In this paper we present a first attempt at modeling data centers in compliance with the Kyoto Protocol. We discuss a novel approach for trading credits for emission reductions across data centers to comply with their constraints. CO2 caps can be integrated with Service Level Agreements and juxtaposed to other computing commodities (e.g., computational power, storage), setting a foundation for implementing next-generation schedulers and pricing models that support Kyoto-compliant CO2 trading schemes.
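The credit-trading idea can be sketched in its simplest form. This is an illustrative reduction, not the paper's model: each data center's balance is its cap minus its emissions, and a group complies jointly if pooled surpluses cover pooled deficits. All numbers are assumed.

```python
# Illustrative sketch of trading emission-reduction credits across data centers
# (a simplified view; caps and emissions in tonnes of CO2 are assumed):
# a center over its cap buys surplus credits from a center under its cap.

def net_balance(cap: float, emissions: float) -> float:
    """Credits available (positive) or needed (negative) for one data center."""
    return cap - emissions

def can_comply(centers: list[tuple[float, float]]) -> bool:
    """All centers comply jointly if pooled credits cover pooled deficits."""
    return sum(net_balance(cap, em) for cap, em in centers) >= 0

# Center A is 20 t over its cap, center B is 30 t under: trading lets both comply.
print(can_comply([(100.0, 120.0), (100.0, 70.0)]))   # True
print(can_comply([(100.0, 150.0), (100.0, 110.0)]))  # False
```

A real scheme would also price the traded credits and feed that price into the schedulers and SLAs the abstract mentions; the sketch only captures the feasibility check.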


Journal of Systems and Software | 2016

Towards uniform management of multi-layered cloud services by applying model-driven development

Toni Mastelic; Andres Garcia; Ivona Brandic

Highlights: XaaS comprises hardware/software elements from several layers. Cloud services can be modeled using Model-Driven Development. Modular Cloud services provide improved consolidation capabilities.

Cloud Computing started by renting computing infrastructures in the form of virtual machines, which include hardware resources such as memory and processors. However, due to its popularity it gave birth to the Everything-as-a-Service concept, where each service can comprise a large variety of software/hardware elements. Although based on the same concept, these services represent complex environments that have to be deployed and managed by a provider using individual tools. The tools are usually used manually or specifically integrated for a single service. This requires changing the entire deployment procedure in case the service gets modified, while additionally limiting consolidation capabilities due to tight service integration. In this paper, we utilize the Model-Driven Development approach for managing arbitrary Cloud services. We define a metamodel of a Cloud service called CoPS, which describes a service as a composition of software/hardware elements by using three sequential models, namely Component, Product and Service. We also present an architecture of a Cloud Management System (CMS) used for automatic service management, which transforms the models from an abstract representation to an actual deployment. The approach is validated by realizing four real-world use cases using a prototype implementation. Finally, we evaluate its consolidation capabilities by simulating resource consumption and deployment time.
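The Component/Product/Service layering can be sketched as plain data types. The three model names come from the paper; the fields, example service and the flattening helper are assumed for illustration only.

```python
# A sketch of the CoPS layering (model names from the paper; fields assumed):
# a Service composes Products, each Product composes Components, giving the
# three sequential models used to describe a cloud service.
from dataclasses import dataclass, field

@dataclass
class Component:          # a single software/hardware element
    name: str

@dataclass
class Product:            # a deployable unit built from components
    name: str
    components: list[Component] = field(default_factory=list)

@dataclass
class Service:            # the cloud service offered to users
    name: str
    products: list[Product] = field(default_factory=list)

    def all_components(self) -> list[str]:
        """Flatten the model down to its concrete elements."""
        return [c.name for p in self.products for c in p.components]

# Hypothetical service: a webshop composed of a web tier and a database tier.
web = Product("web-tier", [Component("nginx"), Component("vm-small")])
db = Product("db-tier", [Component("mysql"), Component("vm-large")])
shop = Service("webshop", [web, db])
print(shop.all_components())  # ['nginx', 'vm-small', 'mysql', 'vm-large']
```

The CMS described in the paper would transform such an abstract model into an actual deployment; the sketch only shows the composition structure.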


IEEE International Conference on Cloud Computing Technology and Science | 2015

Data Velocity Scaling via Dynamic Monitoring Frequency on Ultrascale Infrastructures

Toni Mastelic; Ivona Brandic

Monitoring ultrascale systems such as Clouds requires collecting enormous amounts of data by periodically reading metric values from a system. Current approaches tend to select a static frequency for sampling monitoring data. On one hand, over-sampling the data by collecting it at high frequencies results in data redundancy during steady runs of the system. On the other hand, under-sampling with low monitoring frequencies results in information loss during volatile behaviour of the system, as data is significantly diluted. Therefore, choosing an optimal monitoring frequency represents a challenging research issue. In this paper, we propose a dynamic monitoring frequency algorithm for collecting monitoring data from ultrascale systems such as Clouds. The algorithm deterministically reduces data velocity by self-adapting the monitoring frequency to the volatility of the data being collected. Consequently, it collects less data due to fewer readings, while retaining the same information value as the equivalent static monitoring frequency. The proposed approach is evaluated using Google traces, where it is able to reduce the velocity of monitoring data by up to 85% without diluting information quality.
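The self-adapting idea can be sketched with a simple back-off rule. The paper's algorithm is more elaborate; the thresholds, bounds and doubling rule here are assumed for illustration: lengthen the sampling interval while consecutive readings stay stable, and reset it when they change.

```python
# Illustrative self-adapting monitoring interval (assumed back-off rule, not
# the paper's exact algorithm): sample less often while the metric is steady,
# return to the full rate as soon as it turns volatile.

MIN_INTERVAL, MAX_INTERVAL = 1, 32   # seconds (assumed bounds)
TOLERANCE = 0.05                     # relative change treated as "stable"

def next_interval(current: int, prev_value: float, new_value: float) -> int:
    """Double the interval on stable data, reset it on volatile data."""
    change = abs(new_value - prev_value) / max(abs(prev_value), 1e-9)
    if change <= TOLERANCE:
        return min(current * 2, MAX_INTERVAL)   # steady: sample less often
    return MIN_INTERVAL                         # volatile: sample at full rate

# Two steady readings widen the interval; a sudden jump snaps it back.
interval = MIN_INTERVAL
for prev, new in [(10.0, 10.1), (10.1, 10.2), (10.2, 17.0)]:
    interval = next_interval(interval, prev, new)
print(interval)  # 1
```

The deterministic part of the idea is visible even in this toy version: given the same metric stream, the interval sequence, and hence the number of readings saved, is fully reproducible.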


IEEE International Conference on Cloud Computing Technology and Science | 2014

CPU Performance Coefficient (CPU-PC): A Novel Performance Metric Based on Real-Time CPU Resource Provisioning in Time-Shared Cloud Environments

Toni Mastelic; Ivona Brandic; Jasmina Jašarević

The Cloud represents an emerging paradigm that provides on-demand computing resources, such as CPU. The resources are customized in quantity through various virtual machine (VM) flavours, which are deployed on top of a time-shared infrastructure, where a single server can host several VMs. However, their Quality of Service (QoS) is limited and boils down to VM availability, which does not provide any performance guarantees for the shared underlying resources. Consequently, providers usually over-provision their resources trying to increase utilization, while customers can suffer from poor performance due to increased concurrency. In this paper, we introduce the CPU Performance Coefficient (CPU-PC), a novel performance metric used for measuring the real-time quality of CPU provisioning in virtualized environments. The metric isolates the impact of the provisioned CPU on the performance of the customer's application, hence allowing the provider to measure the quality of provisioned resources and manage them accordingly. Additionally, we provide a measurement of the proposed metric for the customer as well, thus enabling the latter to monitor the quality of rented resources. For evaluation, we utilize three real-world applications used in existing Cloud services, and correlate the CPU-PC metric with the response time of the applications. An R-squared correlation of over 0.9557 indicates the applicability of our approach in the real world.
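The flavor of such a coefficient can be sketched as follows. The exact CPU-PC definition is given in the paper; this illustrative version is an assumption: the ratio of CPU time actually delivered to a VM over the time it was ready to run, so contention from co-located VMs pushes the coefficient below 1.0.

```python
# Hedged sketch of a CPU-provisioning quality coefficient (illustrative
# definition, assumed rather than taken from the paper): 1.0 means the VM
# got the CPU whenever it needed it; lower values indicate contention from
# other VMs time-sharing the same server.

def cpu_pc(delivered_cpu_time: float, runnable_time: float) -> float:
    """Ratio of CPU time delivered to CPU time the VM was ready to run."""
    if runnable_time == 0:
        return 1.0   # VM never needed the CPU: provisioning was sufficient
    return delivered_cpu_time / runnable_time

print(cpu_pc(9.5, 10.0))   # 0.95: mild contention
print(cpu_pc(5.0, 10.0))   # 0.5: heavy contention from over-provisioning
```

Measuring such a ratio from inside the VM is what lets the customer, not only the provider, monitor the quality of rented resources, which is the dual-sided measurement the abstract highlights.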

Collaboration


Dive into Toni Mastelic's collaborations.

Top Co-Authors


Ivona Brandic

Vienna University of Technology


Ariel Oleksiak

Poznań University of Technology


Drazen Lucanin

Vienna University of Technology


Michael Maurer

Vienna University of Technology


Georgios L. Stavrinides

Aristotle University of Thessaloniki
