
Publications


Featured research published by Marco Aurelio Stelmar Netto.


Journal of Parallel and Distributed Computing | 2015

Big Data computing and clouds

Marcos Dias de Assunção; Rodrigo N. Calheiros; Silvia Cristina Sardela Bianchi; Marco Aurelio Stelmar Netto; Rajkumar Buyya

This paper discusses approaches and environments for carrying out analytics on Clouds for Big Data applications. It revolves around four important areas of analytics and Big Data, namely (i) data management and supporting architectures; (ii) model development and scoring; (iii) visualisation and user interaction; and (iv) business models. Through a detailed survey, we identify possible gaps in technology and provide recommendations for the research community on future directions for Cloud-supported Big Data computing and analytics solutions. Highlights: a survey of solutions for carrying out analytics and Big Data on Clouds; identification of gaps in technology for Cloud-based analytics; and recommendations of research directions for Cloud-based analytics and Big Data.


Future Generation Computer Systems | 2011

Server consolidation with migration control for virtualized data centers

Tiago C. Ferreto; Marco Aurelio Stelmar Netto; Rodrigo N. Calheiros; César A. F. De Rose

Virtualization has become a key technology for simplifying service management and reducing energy costs in data centers. One of the challenges faced by data centers is to decide when, how, and which virtual machines (VMs) have to be consolidated into a single physical server. Server consolidation involves VM migration, which has a direct impact on service response time. Most of the existing solutions for server consolidation rely on eager migrations, which try to minimize the number of physical servers running VMs. These solutions generate unnecessary migrations due to unpredictable workloads that require VM resizing. This paper proposes an LP formulation and heuristics to control VM migration, which prioritize virtual machines with steady capacity. We performed experiments using TU-Berlin and Google data center workloads to compare our migration control strategy against existing eager-migration-based solutions. We observed that avoiding migration of VMs with steady capacity reduces the number of migrations with minimal penalty in the number of physical servers.
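The migration-control idea in this abstract can be sketched as a small placement pass. The heuristic below is illustrative only (the paper proposes an LP formulation and its own heuristics; all names, the capacity unit, and the variability threshold here are assumptions): VMs with steady demand stay pinned to their current server, and only volatile VMs are repacked first-fit decreasing.

```python
# Illustrative sketch, not the paper's LP/heuristics: consolidate volatile
# VMs while pinning "steady" VMs to avoid unnecessary migrations.
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    demand: float          # current capacity demand (fraction of a server)
    variability: float     # observed demand variance; low => steady
    host: str              # server currently running the VM

def consolidate(vms, capacity=1.0, steady_threshold=0.05):
    """Return a placement {vm_name: server} that migrates only volatile VMs."""
    load = {}
    placement = {}
    # Steady VMs keep their hosts: migrating them yields little benefit
    # while still paying the migration penalty on response time.
    for vm in vms:
        if vm.variability <= steady_threshold:
            placement[vm.name] = vm.host
            load[vm.host] = load.get(vm.host, 0.0) + vm.demand
    # Volatile VMs are repacked first-fit decreasing onto loaded servers.
    volatile = sorted((v for v in vms if v.variability > steady_threshold),
                      key=lambda v: v.demand, reverse=True)
    for vm in volatile:
        for server, used in sorted(load.items()):
            if used + vm.demand <= capacity:
                placement[vm.name] = server
                load[server] = used + vm.demand
                break
        else:  # no server fits: keep the VM's current host as a new bin
            placement[vm.name] = vm.host
            load[vm.host] = load.get(vm.host, 0.0) + vm.demand
    return placement
```

With one steady VM on s1 and two volatile VMs elsewhere, the volatile pair is packed onto s1 and the steady VM never moves, mirroring the trade-off the paper measures: fewer migrations at a small cost in server count.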


Software - Practice and Experience | 2013

EMUSIM: an integrated emulation and simulation environment for modeling, evaluation, and validation of performance of Cloud computing applications

Rodrigo N. Calheiros; Marco Aurelio Stelmar Netto; César A. F. De Rose; Rajkumar Buyya

Cloud computing allows the deployment and delivery of application services for users worldwide. Software as a Service providers with a limited upfront budget can take advantage of Cloud computing and lease the required capacity on a pay-as-you-go basis, which also enables flexible and dynamic resource allocation according to service demand. One key challenge potential Cloud customers face before renting resources is to know how their services will behave on a given set of resources and the costs involved in growing and shrinking their resource pool. Most studies in this area rely on simulation-based experiments, which consider simplified models of applications and computing environments. To better predict service behavior on Cloud platforms, we developed an integrated architecture based on both simulation and emulation. The proposed architecture, named EMUSIM, automatically extracts information about application behavior via emulation and then uses this information to generate the corresponding simulation model. We performed experiments using an image processing application as a case study and found that EMUSIM was able to accurately model the application via emulation and use the model to supply information about its potential performance on a Cloud provider. We also discuss our experience using EMUSIM to deploy applications on a real public Cloud provider. EMUSIM is based on an open source software stack and can therefore be extended to analyze the behavior of several other applications.


International Conference on Service-Oriented Computing | 2007

SLA-Based Advance Reservations with Flexible and Adaptive Time QoS Parameters

Marco Aurelio Stelmar Netto; Kris Bubendorfer; Rajkumar Buyya

Utility computing enables the use of computational resources and services by consumers, with service obligations and expectations defined in Service Level Agreements (SLAs). Parallel applications and workflows can be executed across multiple sites to benefit from access to a wide range of resources and to respond to dynamic runtime requirements. A utility computing provider has the difficult role of ensuring that all current SLAs are provisioned while concurrently forming new SLAs and providing multiple services to numerous consumers. Scheduling to satisfy SLAs can result in a low return from a provider's resources due to trading off Quality of Service (QoS) guarantees against utilisation. One technique is to employ advance reservations so that an SLA-aware scheduler can properly manage and schedule its resources. To improve system utilisation, we exploit the principle that some consumers will be more flexible than others regarding starting or completion times, and that the execution schedule can be adjusted right up until each execution starts. In this paper we present a QoS scheduler that uses SLAs to efficiently schedule advance reservations for computation services based on their flexibility. In our SLA model, users can reduce or increase the flexibility of their QoS requirements over time according to their needs and resource provider policies. We introduce our scheduling algorithms and show experimentally that it is possible to use flexible advance reservations to meet specified QoS while improving resource utilisation.
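The core mechanism described above, a reservation that may slide within a consumer-declared flexibility window, can be sketched on a single resource timeline. This is a toy admission routine, not the paper's scheduler; the window model (earliest ready time and deadline) and all names are assumptions for illustration.

```python
# Hypothetical sketch of flexibility-aware advance reservation: a request
# may start anywhere in [ready, deadline - duration], and the scheduler
# slides it to the first slot that does not overlap accepted reservations.
def admit(reservations, ready, deadline, duration):
    """Return a start time honoring the window, or None to reject the SLA."""
    start = ready
    for (s, e) in sorted(reservations):
        if start + duration <= s:
            break                 # fits in the gap before reservation (s, e)
        start = max(start, e)     # slide past the conflicting reservation
    if start + duration <= deadline:
        reservations.append((start, start + duration))
        return start
    return None
```

A consumer with a wide window (a long gap between ready time and deadline) is easy to admit even on a busy timeline, which is exactly why flexibility improves utilisation: the provider can juggle such requests until their execution begins.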


Computer Software and Applications Conference | 2012

CASViD: Application Level Monitoring for SLA Violation Detection in Clouds

Vincent C. Emeakaroha; Tiago C. Ferreto; Marco Aurelio Stelmar Netto; Ivona Brandic; César A. F. De Rose

Cloud resources and services are offered based on Service Level Agreements (SLAs) that state usage terms and penalties in case of violations. Although there is a large body of work on SLA provisioning and monitoring at the infrastructure and platform layers, SLAs are usually assumed to be guaranteed at the application layer. However, application monitoring is a challenging task because metrics monitored at the platform or infrastructure layer cannot easily be mapped to the required metrics at the application layer. Sophisticated SLA monitoring across those layers, to avoid costly SLA penalties and maximize provider profit, remains an open research challenge. This paper proposes an application monitoring architecture named CASViD (Cloud Application SLA Violation Detection). CASViD monitors and detects SLA violations at the application layer, and includes tools for resource allocation, scheduling, and deployment. Unlike most existing monitoring architectures, CASViD focuses on application-level monitoring, which is relevant when multiple customers share the same resources in a Cloud environment. We evaluate our architecture in a real Cloud testbed using applications that exhibit heterogeneous behaviors, in order to investigate effective measurement intervals for efficient monitoring of different application types. The results show that our architecture, with a low intrusion level, is able to monitor applications, detect SLA violations, and suggest effective measurement intervals for various workloads.
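The detection step itself reduces to comparing sampled application metrics against SLA bounds at each measurement interval. The snippet below is a minimal sketch of that check, not CASViD's API; the sample layout and metric names are assumptions for illustration.

```python
# Toy SLA-violation check: each sample is (timestamp, {metric: value}),
# and the SLA maps each metric name to an allowed (low, high) range.
def detect_violations(samples, sla):
    """Return (timestamp, metric, value) tuples that breach SLA bounds."""
    violations = []
    for t, metrics in samples:
        for name, value in metrics.items():
            low, high = sla[name]
            if not (low <= value <= high):
                violations.append((t, name, value))
    return violations
```

The interesting question the paper studies is how often to take these samples: too short an interval is intrusive, too long an interval misses short-lived violations.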


Utility and Cloud Computing | 2012

Context-Aware Job Scheduling for Cloud Computing Environments

Marcos Dias de Assunção; Marco Aurelio Stelmar Netto; Fernando Koch; Silvia Cristina Sardela Bianchi

An increasingly instrumented society demands smarter services to help coordinate daily activities and exceptional situations. Applications become more sophisticated and context-aware as the pervasiveness of technology increases. To cope with the resource limitations of mobile environments, it is common practice to delegate processing-intensive components to a Cloud Computing infrastructure. In this scenario, the execution of server-based jobs still depends on local variations of the end-user context. We claim that there is a need for an advanced model for smarter services that combines context awareness with adaptive job scheduling. This model aims to rationalise resource utilisation in a Cloud Computing environment while significantly improving quality of service. In this paper, we introduce such a model and describe its performance benefits through a combination of social and service simulations. We analyse the results by demonstrating gains in performance and quality of service, a reduction in wasted jobs, and an improvement in overall end-user experience.


Modeling, Analysis, and Simulation of Computer and Telecommunication Systems | 2014

Evaluating Auto-scaling Strategies for Cloud Computing Environments

Marco Aurelio Stelmar Netto; Carlos Henrique Cardonha; Renato L. F. Cunha; Marcos Dias de Assunção

Auto-scaling is a key cloud feature responsible for adjusting the number of available resources to meet service demand. Resource pool modifications are necessary to keep performance indicators, such as utilisation level, between user-defined lower and upper bounds. Auto-scaling strategies that are not properly configured according to user workload characteristics may lead to unacceptable QoS and large resource waste. As a consequence, there is a need for a deeper understanding of auto-scaling strategies and how they should be configured to minimise these problems. In this work, we evaluate various auto-scaling strategies using log traces from a production Google data centre cluster comprising millions of jobs. Using utilisation level as the performance indicator, our results show that proper management of auto-scaling parameters reduces the difference between the target utilisation interval and the actual values, a difference we define as the Auto-scaling Demand Index. We also present a set of lessons from this study to help cloud providers build recommender systems for auto-scaling operations.
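A threshold-based strategy of the kind evaluated above can be sketched in a few lines. This is a toy model, not the paper's method: the server capacity, the band (0.5, 0.8), and the way the index accumulates distance from the band are all illustrative stand-ins for the Auto-scaling Demand Index as defined in the paper.

```python
# Toy threshold auto-scaler: scale out above the upper utilisation bound,
# scale in below the lower bound, and accumulate a demand-index-style
# measure of how far observed utilisation strays from the target band.
def autoscale(demands, servers=1, cap=100.0, low=0.5, high=0.8):
    index = 0.0
    trace = []
    for d in demands:
        util = d / (servers * cap)     # utilisation before any adjustment
        if util > high:
            servers += 1               # scale out
        elif util < low and servers > 1:
            servers -= 1               # scale in
        # distance of this observation from the target band [low, high]
        index += max(0.0, util - high) + max(0.0, low - util)
        trace.append((servers, round(util, 3)))
    return servers, index, trace
```

A lower accumulated index means the configuration kept utilisation inside the target band more of the time, which is the comparison criterion the abstract describes.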


Grid Computing | 2008

Rescheduling co-allocation requests based on flexible advance reservations and processor remapping

Marco Aurelio Stelmar Netto; Rajkumar Buyya

Large-scale computing environments, such as TeraGrid, the Distributed ASCI Supercomputer (DAS), and Grid'5000, have been using resource co-allocation to execute applications on multiple sites. Their schedulers work with requests that contain imprecise estimations provided by users. This lack of accuracy generates fragments inside the scheduling queues that can be filled by rescheduling both local and multi-site requests. Current resource co-allocation solutions rely on advance reservations to ensure that users can access all the resources at the same time. These co-allocation requests cannot be rescheduled if they are based on rigid advance reservations. In this work, we investigate the impact of rescheduling co-allocation requests based on flexible advance reservations and processor remapping. The metascheduler can modify the start time of each job component and remap the number of processors each component uses at each site. The experimental results show that local jobs may not fill all the fragments in the scheduling queues, and hence rescheduling co-allocation requests reduces the response time of both local and multi-site jobs. Moreover, we observed in some scenarios that processor remapping increases the chances of placing the tasks of multi-site jobs into a single cluster, thus eliminating the inter-cluster network overhead.


Future Generation Computer Systems | 2016

Optimising resource costs of cloud computing for education

Fernando Koch; Marcos Dias de Assunção; Carlos Henrique Cardonha; Marco Aurelio Stelmar Netto

There is growing interest in the use of cloud computing in education. As organisations involved in the area typically face severe budget restrictions, there is a need for cost optimisation mechanisms that explore unique features of digital learning environments. In this work, we introduce a method based on Maximum Likelihood Estimation that considers the heterogeneity of IT infrastructure in order to devise resource allocation plans that maximise platform utilisation for educational environments. We performed experiments using modelled datasets from real digital teaching solutions and obtained cost reductions of up to 30% compared with conservative resource allocation strategies. Highlights: a context-aware algorithm for allocating computing resources for classrooms; an experiment setup based on real-world school data; and an evaluation considering security margin, costs, and QoS.


IBM Journal of Research and Development | 2015

An architecture and algorithm for context-aware resource allocation for digital teaching platforms

Fernando Luiz Koch; Marcos Dias de Assunção; Carlos Henrique Cardonha; Marco Aurelio Stelmar Netto; Tiago Thompsen Primo

Digital Teaching Platforms (DTPs) aim to support the personalization of classroom education to help optimize the learning process. Research and development efforts focus on methods for analyzing multimodal data to infer how students interact with delivered content and to understand student behavior, academic performance, and the way teachers react to student engagement. Existing DTPs can deliver several types of insights, some of which teachers can use to adjust learning activities in real time. These technologies require a computing infrastructure capable of collecting and analyzing large volumes of data, and cloud computing is an ideal candidate for this role. Nonetheless, preliminary field tests with DTPs demonstrate that relying on fully remote services is prohibitive in scenarios with limited bandwidth and a constrained communication infrastructure. We therefore propose an architecture for DTPs and an algorithm that adjustably balances local and federated cloud resources. The solution decides where tasks should be executed based on resource availability and the quality of the insights they may provide to teachers during learning sessions. In this work, we detail the system architecture, describe a proof of concept, and discuss the viability of the proposed approach in practical scenarios.
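The local-versus-cloud decision described above can be illustrated with a simple placement rule. This sketch is not the paper's algorithm: the transfer-time criterion, the slot count, and all parameter names are assumptions chosen to show the shape of the trade-off.

```python
# Illustrative local-vs-cloud placement rule: run an analytics task
# remotely only when the link can ship its input before the deadline and
# the cloud has spare capacity; otherwise fall back to local execution,
# trading insight quality for timeliness.
def place_task(input_mb, deadline_s, bandwidth_mbps, cloud_free_slots):
    transfer_s = input_mb * 8 / bandwidth_mbps   # upload time estimate
    if cloud_free_slots > 0 and transfer_s < deadline_s:
        return "cloud"       # full-quality insight, remote execution
    return "local"           # degraded but timely insight on local hardware
```

On a well-provisioned school link the task goes to the cloud; on a constrained link, or when the cloud is saturated, the same task stays local, which matches the bandwidth-limited scenarios the field tests expose.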
