
Publications


Featured research published by Marcos Dias De Assuncao.


ACM Computing Surveys | 2014

A survey on techniques for improving the energy efficiency of large-scale distributed systems

Anne-Cécile Orgerie; Marcos Dias De Assuncao; Laurent Lefèvre

The large amounts of energy consumed by large-scale computing and network systems, such as data centers and supercomputers, have been a major source of concern in a society increasingly reliant on information technology. To tackle this issue, the research community and industry have proposed myriad techniques to curb the energy consumed by IT systems. This article surveys techniques and solutions that aim to improve the energy efficiency of computing and network resources. It discusses methods to evaluate and model the energy consumed by these resources, and describes techniques that operate at the distributed-system level to improve aspects such as resource allocation, scheduling, and network traffic management. This work reviews the state of the art on energy efficiency and aims to foster research on schemes that make network and computing resources more efficient.
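As a concrete illustration of the class of energy models this survey covers, the sketch below uses the linear utilisation-based power model common in the energy-efficiency literature; the idle and peak power values are hypothetical, not figures from the article.

```python
# Sketch of a linear power model widely used in the energy-efficiency
# literature; p_idle and p_max are hypothetical nameplate values.

def server_power(utilisation: float, p_idle: float = 100.0, p_max: float = 250.0) -> float:
    """Estimated power draw in watts of a server at a given CPU utilisation (0..1)."""
    utilisation = min(max(utilisation, 0.0), 1.0)
    return p_idle + (p_max - p_idle) * utilisation

# Example: energy (watt-hours) of a server running at 60% load for 3 hours.
print(f"{server_power(0.6) * 3:.0f} Wh")  # 570 Wh
```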


Utility and Cloud Computing | 2012

Context-Aware Job Scheduling for Cloud Computing Environments

Marcos Dias De Assuncao; Marco Aurelio Stelmar Netto; Fernando Koch; Silvia Cristina Sardela Bianchi

An increasingly instrumented society demands smarter services to help coordinate daily activities and exceptional situations. Applications become more sophisticated and context-aware as the pervasiveness of technology increases. To cope with the resource limitations of mobile environments, it is common practice to delegate processing-intensive components to a Cloud Computing infrastructure. In this scenario, executions of server-based jobs still depend on local variations of the end-user context. We claim that there is a need for an advanced model for smarter services that combines techniques of context awareness and adaptive job scheduling. This model aims at rationalising resource utilisation in a Cloud Computing environment while leading to significant improvements in quality of service. In this paper, we introduce such a model and describe its performance benefits through a combination of social and service simulations. We analyse the results by demonstrating gains in performance and quality of service, a reduction of wasted jobs, and an improvement of the overall end-user experience.
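The abstract does not spell out the scheduling model itself; purely as an illustration of the idea, and with invented context fields and weights, a cloud-side scheduler could order jobs by the end user's current context as sketched below.

```python
# Hypothetical sketch only, not the paper's model: weight cloud-side jobs
# by the end user's current context so results are less likely to be wasted.
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Job:
    priority: float              # lower value = served earlier
    name: str = field(compare=False)

def context_priority(battery_level: float, user_active: bool) -> float:
    # Jobs for active users on low battery are served first; their results
    # risk being wasted if the device runs out of power or the user leaves.
    score = battery_level        # 0.0 (empty) .. 1.0 (full)
    if not user_active:
        score += 1.0             # inactive users can tolerate delay
    return score

queue: list[Job] = []
heapq.heappush(queue, Job(context_priority(0.2, True), "render-route"))
heapq.heappush(queue, Job(context_priority(0.9, False), "sync-photos"))
print(heapq.heappop(queue).name)  # "render-route"
```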


Grid Economics and Business Models | 2012

A cost analysis of cloud computing for education

Fernando Koch; Marcos Dias De Assuncao; Marco Aurelio Stelmar Netto

Educational institutions have become highly dependent on information technology to support the delivery of personalised material, digital content, interactive classes, and other services. These institutions are progressively transitioning to Cloud Computing technology to shift costs from locally hosted services to a renting model, often with higher availability, elasticity, and resilience. However, to properly explore the cost benefits of the pay-as-you-go business model, there is a need for processes for resource allocation, monitoring, and self-adjustment that take advantage of characteristics of the application domain. In this paper, we perform a numerical analysis of three resource allocation methods that work by (i) pre-allocating resource capacity to handle peak demands; (ii) reactively allocating resource capacity based on current demand; and (iii) proactively allocating and releasing resources prior to load increases or decreases by exploring characteristics of the educational domain and more precise information about expected demand. The results show that there is an opportunity for both educational institutions and Cloud providers to collaborate in order to enhance the quality of services and reduce costs.
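As a back-of-the-envelope illustration of how the three methods differ in cost, the sketch below compares them on an invented hourly demand trace; it is not the paper's numerical analysis, and the numbers are hypothetical.

```python
# Toy comparison of the three allocation methods on an invented demand
# trace; cost is taken as proportional to allocated VM-hours.

demand = [2, 4, 6, 10, 16, 16, 8, 4]   # VMs needed per hour (hypothetical)

# (i) pre-allocate enough capacity for the peak, all the time
peak_vmh = max(demand) * len(demand)

# (ii) reactive: scale up when higher demand is observed, release capacity
#      one hour after demand drops
lagged = [demand[0]] + demand[:-1]
reactive_vmh = sum(max(prev, cur) for prev, cur in zip(lagged, demand))

# (iii) proactive: the class schedule gives expected demand in advance,
#       so allocated capacity matches demand exactly
proactive_vmh = sum(demand)

print(peak_vmh, reactive_vmh, proactive_vmh)  # 128 78 66
```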


International Conference on Service-Oriented Computing | 2013

Patience-Aware Scheduling for Cloud Services: Freeing Users from the Chains of Boredom

Carlos Henrique Cardonha; Marcos Dias De Assuncao; Marco Aurelio Stelmar Netto; Renato L. F. Cunha; Carlos Queiroz

Scheduling of service requests in Cloud computing has traditionally focused on the reduction of pre-service wait, generally termed waiting time. Under certain conditions such as peak load, however, it is not always possible to give reasonable response times to all users. This work explores the fact that different users may have their own levels of tolerance or patience with response delays. We introduce scheduling strategies that produce better assignment plans by prioritising requests from users who expect to receive results earlier and by postponing the servicing of jobs from those who are more tolerant of response delays. Our analytical results show that the behaviour of users' patience plays a key role in the evaluation of scheduling techniques, and our computational evaluation demonstrates that, under peak load, the new algorithms typically provide a better user experience than the traditional FIFO strategy.
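As a minimal illustration of the core idea (not the strategies introduced in the paper), the sketch below orders pending requests by the remaining patience of their users instead of by arrival order; the patience values are invented.

```python
# Minimal sketch of the idea, not the paper's algorithms: serve requests
# of the least patient users first. Patience values (seconds) are invented.
import heapq

requests = [("alice", 30.0), ("bob", 5.0), ("carol", 120.0)]  # arrival order

fifo_order = [user for user, _ in requests]

heap = [(patience, user) for user, patience in requests]
heapq.heapify(heap)
patience_order = [heapq.heappop(heap)[1] for _ in range(len(requests))]

print(fifo_order)      # ['alice', 'bob', 'carol']
print(patience_order)  # ['bob', 'alice', 'carol']
```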


Network Operations and Management Symposium | 2012

CloudAffinity: A framework for matching servers to cloudmates

Marcos Dias De Assuncao; Marco Aurelio Stelmar Netto; Brian Peterson; Lakshminarayanan Renganarayana; John J. Rofrano; Christopher Ward; Christopher C. Young

Increasingly, organizations are considering moving their workloads to clouds to take advantage of the anticipated benefits of a more cost-effective and agile IT infrastructure. A key component of a cloud service, as it is exposed to the consumer, is the published selection of instance resource configurations (CPU, memory, and disk). The number of instance configurations, as well as the specific values that characterize them, are important decisions for the cloud service provider. This paper explores these resource configurations; examines how well a traditional data center fits into the cloud model from a resource allocation perspective; and proposes a framework, named CloudAffinity, aimed at selecting an optimal number of configurations based on customer requirements.
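The abstract does not detail how CloudAffinity selects configurations; as a simplified illustration of the matching step only, the sketch below assigns a server to the smallest catalogue configuration that covers its resource usage, using a hypothetical catalogue.

```python
# Simplified sketch of the matching step with a hypothetical catalogue;
# CloudAffinity's selection of an optimal number of configurations is a
# richer optimisation not shown here.

catalogue = {                      # name: (vCPUs, memory GB, disk GB)
    "small":  (2, 4, 100),
    "medium": (4, 16, 250),
    "large":  (8, 32, 500),
}

def best_fit(cpu: int, mem: int, disk: int) -> str | None:
    """Smallest catalogue configuration that covers the server's usage, if any."""
    fitting = [(spec, name) for name, spec in catalogue.items()
               if spec[0] >= cpu and spec[1] >= mem and spec[2] >= disk]
    return min(fitting)[1] if fitting else None

print(best_fit(3, 8, 120))    # medium
print(best_fit(16, 64, 900))  # None -> this server does not fit the cloud model
```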


International Conference on Cloud Computing | 2014

Exploiting User Patience for Scaling Resource Capacity in Cloud Services

Renato L. F. Cunha; Marcos Dias De Assuncao; Carlos Henrique Cardonha; Marco Aurelio Stelmar Netto

An important feature of cloud computing is its elasticity, that is, the ability to have resource capacity dynamically modified according to the current system load. Auto-scaling is challenging because it must account for two conflicting objectives: minimising the system capacity made available to users and maximising QoS, which typically translates to short response times. Current auto-scaling techniques are based solely on load forecasts and ignore the perception that users have of cloud services. As a consequence, providers tend to provision a volume of resources that is significantly larger than necessary to keep users satisfied. In this article, we propose a scheduling algorithm and an auto-scaling triggering technique that explore user patience in order to identify critical times when auto-scaling is needed and the appropriate volume of capacity by which the cloud platform should be either extended or shrunk. The proposed technique assists service providers in reducing costs related to resource allocation while keeping the same QoS for users. Our experiments show that it is possible to reduce resource-hours by up to approximately 8% compared to auto-scaling based on system utilisation.
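As a rough sketch of the triggering idea under assumed inputs (the predicted waits and per-user patience values are invented, and this is not the authors' algorithm), a controller could scale out when too many queued users are predicted to wait longer than they are willing to.

```python
# Rough sketch under assumed inputs, not the authors' triggering technique:
# scale out when the share of queued users predicted to wait longer than
# their patience exceeds a tolerance threshold.

def scaling_decision(predicted_waits, patiences, tolerance=0.1):
    """predicted_waits[i]: expected wait (s) of request i; patiences[i]: that user's patience (s)."""
    if not predicted_waits:
        return "scale_in"                       # nothing queued
    impatient = sum(w > p for w, p in zip(predicted_waits, patiences))
    share = impatient / len(predicted_waits)
    return "scale_out" if share > tolerance else "hold"

print(scaling_decision([12, 30, 45], [60, 20, 40]))  # 2 of 3 users impatient -> scale_out
```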


Conference on Network and Service Management | 2013

Leveraging attention scarcity to improve the overall user experience of Cloud services

Marco Aurelio Stelmar Netto; Marcos Dias De Assuncao; Silvia Cristina Sardela Bianchi

Applications for mobile devices are increasingly relying on Cloud services to provide content and offload data processing tasks. Traditionally, web-based systems have been optimised to improve response time. Touch sensitive screens and the various sensors of mobile devices allow for better instrumentation, enabling providers to obtain more honest signals on how users utilise a service and learn about their behaviours. This work introduces an architecture that explores honest signals to determine a few user behaviours, e.g. the tendency to perform multiple tasks at a time, change focus, and expect fast response from a service. The architecture relies on a novel resource management strategy that considers such behaviours to prioritise service requests from users who demand fast response from a service. By comparing the proposed strategy with one that does not consider the signals, we show that the experience of users who demand faster response time can be improved without degrading the quality of service of those who often perform multiple activities. The proposed strategy also brings benefits to service providers as no additional resources are necessary to enhance overall user experience, which we modelled using Prospect Theory.
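The paper states that overall user experience was modelled using Prospect Theory; the sketch below shows the standard Kahneman and Tversky value function with its usual parameters, while the idea of applying it to response-time deviations from a user's reference point is only an illustrative assumption here.

```python
# Standard prospect-theory value function (Kahneman & Tversky); applying it
# to response-time deviations from a reference point is an illustrative
# assumption, not a detail given in the abstract.

def prospect_value(x: float, alpha: float = 0.88, beta: float = 0.88, lam: float = 2.25) -> float:
    """Perceived value of a gain (x > 0) or loss (x < 0) relative to a reference point."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

# A response 1 s faster than expected is felt less strongly than a 1 s delay hurts.
print(prospect_value(1.0), prospect_value(-1.0))  # 1.0 -2.25
```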


Winter Simulation Conference | 2012

Scheduling with preemption for incident management: when interrupting tasks is not such a bad idea

Marcos Dias De Assuncao; Victor Fernandes Cavalcante; Maira Athanazio de Cerqueira Gatti; Marco Aurelio Stelmar Netto; Claudio S. Pinhanez; Cleidson R. B. de Souza

Large IT service providers employ hundreds or even thousands of system administrators to handle customers' IT infrastructure. Incident Management Systems, part of the information systems that support decision making in this environment, usually provide human-resource assignment functionalities. The assignment, however, poses several challenges, such as assigning priorities to tasks and defining when and how tasks are allocated to available system administrators. This paper describes a set of incident dispatching policies and, using workloads from different departments of an IT service provider, evaluates the impact of task preemption on incident resolution and service level agreement attainment.
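Purely as an illustration of preemptive dispatching (the paper evaluates several policies, none of which is reproduced here), the sketch below interrupts the incident a system administrator is working on when a strictly more severe incident arrives; the severity levels and ticket identifiers are invented.

```python
# Hypothetical sketch of a preemptive dispatching rule, not one of the
# paper's policies: a newly arrived incident preempts the one being worked
# on when its severity is strictly higher (1 = most severe).

def dispatch(current, new):
    """Return (incident to work on next, incident sent back to the queue or None)."""
    if current is None:
        return new, None
    if new["severity"] < current["severity"]:
        return new, current            # preempt the current incident
    return current, new                # keep working, queue the new incident

working = {"id": "INC-1", "severity": 3}
active, queued = dispatch(working, {"id": "INC-2", "severity": 1})
print(active["id"], queued["id"])      # INC-2 INC-1
```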


Advances in Computers | 2012

State of the Art on Technology and Practices for Improving the Energy Efficiency of Data Storage

Marcos Dias De Assuncao; Laurent Lefèvre

Information is at the core of any business, but storing and making available all the information required to run today's businesses have become real challenges. While large enterprises currently face difficulties in providing sufficient power and cooling capacity for their data centers, midsize companies are challenged with finding enough floor space for their storage systems. Because data storage is responsible for a large part of the energy consumed by data centers, it is essential to make storage systems more energy efficient and to choose solutions appropriately when deploying infrastructure. This chapter presents the state of the art on technologies and best practices to improve the energy efficiency of the data storage infrastructure of enterprises and data centers. It describes techniques available for individual storage components, such as hard disks and tapes, and for composite storage solutions, such as those based on disk arrays and storage area networks.
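As an example of the single-disk techniques the chapter covers, the sketch below estimates the energy of a disk that spins down after an idle timeout and pays a wake-up penalty on the next access; all power and timing figures are hypothetical.

```python
# Illustrative spin-down policy for a single disk; all power and timing
# figures are hypothetical, not values from the chapter.

IDLE_TIMEOUT_S = 30          # spin down after this much idle time
P_ACTIVE_W = 8.0             # power while spinning
P_STANDBY_W = 1.0            # power while spun down
SPINUP_ENERGY_J = 60.0       # extra energy to spin back up for an access

def disk_energy(access_times, horizon_s):
    """Energy in joules over horizon_s seconds, given sorted access timestamps."""
    energy, last = 0.0, 0.0
    for t in list(access_times) + [horizon_s]:
        idle = t - last
        if idle > IDLE_TIMEOUT_S:
            energy += IDLE_TIMEOUT_S * P_ACTIVE_W            # waiting for the timeout
            energy += (idle - IDLE_TIMEOUT_S) * P_STANDBY_W  # spun down
            if t < horizon_s:
                energy += SPINUP_ENERGY_J                    # wake up for the access
        else:
            energy += idle * P_ACTIVE_W
        last = t
    return energy

print(disk_energy([10, 200, 210], 600))  # 1220.0 J, versus 4800.0 J if always spinning
```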


Archive | 2011

System, Method and Program Product for Optimizing Virtual Machine Placement and Configuration

Marcos Dias De Assuncao; Marco Aurelio Stelmar Netto
