Publication


Featured research published by Daniel Gmach.


Measurement and Modeling of Computer Systems | 2012

Renewable and cooling aware workload management for sustainable data centers

Zhenhua Liu; Yuan Chen; Cullen E. Bash; Adam Wierman; Daniel Gmach; Zhikui Wang; Manish Marwah; Chris D. Hyser

Recently, the demand for data center computing has surged, increasing the total energy footprint of data centers worldwide. Data centers typically comprise three subsystems: IT equipment provides services to customers; power infrastructure supports the IT and cooling equipment; and the cooling infrastructure removes heat generated by these subsystems. This work presents a novel approach to model the energy flows in a data center and optimize its operation. Traditionally, supply-side constraints such as energy or cooling availability were treated independently of IT workload management. This work reduces electricity cost and environmental impact using a holistic approach that integrates renewable supply, dynamic pricing, and cooling supply including chiller and outside air cooling, with IT workload planning to improve the overall sustainability of data center operations. Specifically, we first predict renewable energy as well as IT demand. Then we use these predictions to generate an IT workload management plan that schedules IT workload and allocates IT resources within a data center according to time-varying power supply and cooling efficiency. We have implemented and evaluated our approach using traces from real data centers and production systems. The results demonstrate that our approach can reduce both the recurring power costs and the use of non-renewable energy by as much as 60% compared to existing techniques, while still meeting Service Level Agreements.
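To make the planning step concrete, here is a minimal sketch of the idea of placing deferrable batch work into the hours with the most predicted renewable surplus and the best cooling efficiency. It is not the paper's optimizer; the hourly forecasts, the scoring rule, and all names and units below are illustrative assumptions.

```python
# Sketch: schedule flexible batch work into hours with high predicted
# renewable surplus and high cooling efficiency (assumed hourly forecasts).

def plan_batch_schedule(renewable_kw, cooling_cop, interactive_kw,
                        batch_kwh, capacity_kw):
    """Greedily place `batch_kwh` of deferrable work into the 'cheapest' hours.

    An hour is cheap when predicted renewable supply exceeds the interactive
    load and the cooling coefficient of performance (COP) is high, i.e. little
    extra grid energy is needed per unit of IT work. Returns per-hour batch
    power allocations (kW); with 1-hour slots, kW and kWh coincide.
    """
    hours = range(len(renewable_kw))
    score = {h: (renewable_kw[h] - interactive_kw[h]) * cooling_cop[h]
             for h in hours}
    plan = [0.0] * len(renewable_kw)
    remaining = batch_kwh
    for h in sorted(hours, key=lambda h: score[h], reverse=True):
        if remaining <= 0:
            break
        headroom = max(0.0, capacity_kw - interactive_kw[h])
        placed = min(headroom, remaining)
        plan[h] = placed
        remaining -= placed
    return plan

# Example: six hourly slots with a midday solar peak.
renewable   = [0, 20, 80, 90, 40, 5]          # kW of predicted renewable supply
cop         = [3.0, 3.2, 2.8, 2.5, 3.0, 3.4]  # predicted cooling efficiency
interactive = [30, 35, 50, 55, 40, 30]        # kW of predicted interactive load
print(plan_batch_schedule(renewable, cop, interactive,
                          batch_kwh=60, capacity_kw=100))
```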


Computer Networks | 2009

Resource pool management: Reactive versus proactive or let's be friends

Daniel Gmach; Jerry Rolia; Ludmila Cherkasova; Alfons Kemper

The consolidation of multiple workloads and servers enables the efficient use of server and power resources in shared resource pools. We employ a trace-based workload placement controller that uses historical information to periodically and proactively reassign workloads to servers subject to their quality of service objectives. A reactive migration controller is introduced that detects server overload and underload conditions. It initiates the migration of workloads when the demand for resources exceeds supply. Furthermore, it dynamically adds and removes servers to maintain a balance of supply and demand for capacity while minimizing power usage. A host load simulation environment is used to evaluate several different management policies for the controllers in a time effective manner. A case study involving three months of data for 138 SAP applications compares three integrated controller approaches with the use of each controller separately. The study considers trade-offs between: (i) required capacity and power usage, (ii) resource access quality of service for CPU and memory resources, and (iii) the number of migrations. Our study sheds light on the question of whether a reactive controller or proactive workload placement controller alone is adequate for resource pool management. The results show that the most tightly integrated controller approach offers the best results in terms of capacity and quality but requires more migrations per hour than the other strategies.
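As a rough illustration of how a periodic proactive placement pass and a reactive migration pass can complement each other, consider the sketch below. The paper's controllers are trace-driven and far more detailed; the thresholds, data structures, and demand figures here are assumptions.

```python
TARGET, OVERLOAD = 0.75, 0.85   # illustrative CPU utilization thresholds

def proactive_place(demands, host_capacity):
    """Periodic placement: first-fit decreasing on predicted peak CPU demand,
    packing hosts up to a target utilization."""
    limit = TARGET * host_capacity
    hosts = []   # each host: {'load': float, 'workloads': {name: demand}}
    for wl, d in sorted(demands.items(), key=lambda kv: kv[1], reverse=True):
        for host in hosts:
            if host['load'] + d <= limit:
                host['load'] += d
                host['workloads'][wl] = d
                break
        else:
            hosts.append({'load': d, 'workloads': {wl: d}})
    return hosts

def reactive_migrate(hosts, host_capacity):
    """Between placements: while a host is overloaded, move its smallest
    workload to the least-loaded host that still has room for it."""
    for src in hosts:
        while src['load'] > OVERLOAD * host_capacity and len(src['workloads']) > 1:
            wl, d = min(src['workloads'].items(), key=lambda kv: kv[1])
            dst = min((h for h in hosts
                       if h is not src and h['load'] + d <= host_capacity),
                      key=lambda h: h['load'], default=None)
            if dst is None:
                break
            src['workloads'].pop(wl); src['load'] -= d
            dst['workloads'][wl] = d;  dst['load'] += d
    return hosts

demands = {'erp1': 45, 'erp2': 30, 'crm': 25, 'web': 20, 'batch': 15}
hosts = proactive_place(demands, host_capacity=100)
hosts[0]['workloads']['erp1'] = 70   # simulate an unforeseen demand surge
hosts[0]['load'] += 25
hosts = reactive_migrate(hosts, host_capacity=100)
print([sorted(h['workloads']) for h in hosts])
```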


2011 International Green Computing Conference and Workshops | 2011

Minimizing data center SLA violations and power consumption via hybrid resource provisioning

Anshul Gandhi; Yuan Chen; Daniel Gmach; Martin F. Arlitt; Manish Marwah

This paper presents a novel approach to allocating resources in data centers such that SLA violations and energy consumption are minimized. Our approach first analyzes historical workload traces to identify long-term patterns that establish a “base” workload. It then employs two techniques to dynamically allocate capacity: predictive provisioning handles the estimated base workload at coarse time scales (e.g., hours or days) and reactive provisioning handles any excess workload at finer time scales (e.g., minutes). The combination of predictive and reactive provisioning achieves a significant improvement in meeting SLAs, conserving energy, and reducing provisioning costs. We implement and evaluate our approach using traces from four production systems. The results show that our approach can provide up to 35% savings in power consumption and reduce SLA violations by as much as 21% compared to existing techniques, while avoiding frequent power cycling of servers.
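A minimal sketch of the hybrid idea, under assumed numbers: predictive provisioning covers a smoothed "base" load at coarse time scales, while reactive provisioning adds servers only for the excess at fine time scales. The per-server capacity, averaging window, and traces are illustrative, not from the paper.

```python
import math

SERVER_CAPACITY = 100.0   # requests/s one server can handle (assumed)

def base_workload(history, window):
    """Estimate the long-term base load as a trailing average of the trace."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def predictive_servers(base):
    """Servers provisioned ahead of time for the base load."""
    return math.ceil(base / SERVER_CAPACITY)

def reactive_servers(current_load, base):
    """Extra servers brought up only when observed load exceeds the base."""
    excess = max(0.0, current_load - base)
    return math.ceil(excess / SERVER_CAPACITY)

history = [220, 260, 300, 280, 240, 310, 290, 270]   # requests/s samples
base = base_workload(history, window=8)
for load in (250, 320, 480):                          # fine-grained observations
    total = predictive_servers(base) + reactive_servers(load, base)
    print(f"load={load:>3} req/s -> base {base:.0f} req/s, {total} servers")
```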


Network Operations and Management Symposium | 2010

Integrated management of application performance, power and cooling in data centers

Yuan Chen; Daniel Gmach; Chris D. Hyser; Zhikui Wang; Cullen E. Bash; Christopher Hoover; Sharad Singhal

Data centers contain IT, power and cooling infrastructures, each of which is typically managed independently. In this paper, we propose a holistic approach that couples the management of IT, power and cooling infrastructures to improve the efficiency of data center operations. Our approach considers application performance management, dynamic workload migration/consolidation, and power and cooling control to “right-provision” computing, power and cooling resources for a given workload. We have implemented a prototype of this approach for virtualized environments and conducted experiments in a production data center. Our experimental results demonstrate that the integrated solution is practical and can reduce the energy consumption of servers by 35% and of cooling by 15%, without degrading application performance.
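A highly simplified sketch of one coordination pass is shown below: consolidate VMs for the current demand, let idle hosts be powered down, and scale a cooling signal with the remaining heat load. The paper's controllers operate on live performance, power, and thermal measurements; the power model and capacities here are assumptions.

```python
def consolidate(vm_cpu, host_capacity):
    """First-fit decreasing placement of VM CPU demands onto hosts."""
    hosts = []
    for vm, cpu in sorted(vm_cpu.items(), key=lambda kv: kv[1], reverse=True):
        for host in hosts:
            if sum(host.values()) + cpu <= host_capacity:
                host[vm] = cpu
                break
        else:
            hosts.append({vm: cpu})
    return hosts

def heat_load(active_hosts, idle_power_w=150, watts_per_cpu_unit=2.5):
    """Rough heat-proportional cooling signal for the active hosts (assumed
    linear power model); a real controller would use thermal sensors and
    CRAC models to set fan speeds and supply temperatures."""
    return sum(idle_power_w + watts_per_cpu_unit * sum(h.values())
               for h in active_hosts)

vms = {'web1': 30, 'web2': 25, 'db1': 55, 'batch1': 20}
hosts = consolidate(vms, host_capacity=80)
print(f"{len(hosts)} hosts active; the rest can be powered down")
print("heat load to remove (W):", heat_load(hosts))
```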


International Conference on Autonomic Computing | 2010

Probabilistic performance modeling of virtualized resource allocation

Brian J. Watson; Manish Marwah; Daniel Gmach; Yuan Chen; Martin F. Arlitt; Zhikui Wang

Virtualization technologies enable organizations to dynamically flex their IT resources based on workload fluctuations and changing business needs. However, only through a formal understanding of the relationship between application performance and virtualized resource allocation can over-provisioning or over-loading of physical IT resources be avoided. In this paper, we examine the probabilistic relationships between virtualized CPU allocation, CPU contention, and application response time, to enable autonomic controllers to satisfy service level objectives (SLOs) while more effectively utilizing IT resources. We show that with only minimal knowledge of application and system behaviors, our methodology can model the probability distribution of response time with a mean absolute error of less than 6% when compared with the measured response time distribution. We then demonstrate the usefulness of a probabilistic approach with case studies. We apply basic laws of probability to our model to investigate whether and how CPU allocation and contention affect application response time, correcting for their effects on CPU utilization. We find mean absolute differences of 8-10% between the modeled response time distributions of certain allocation states, and a similar difference when we add CPU contention. This methodology is general, and should also be applicable to non-CPU virtualized resources and other performance modeling problems.
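The sketch below illustrates only the core idea of the probabilistic view: bucket observations by CPU allocation, build the empirical response-time distribution per bucket, and compare buckets by the mean absolute difference of their distributions. The samples, bins, and allocation levels are synthetic; the paper's model also conditions on CPU contention and is validated against measured distributions.

```python
from collections import defaultdict

def empirical_distribution(samples, bins):
    """Return P(response time falls in each bin) from a list of observations."""
    counts = [0] * (len(bins) - 1)
    for rt in samples:
        for i in range(len(bins) - 1):
            if bins[i] <= rt < bins[i + 1]:
                counts[i] += 1
                break
    total = sum(counts) or 1
    return [c / total for c in counts]

def mean_abs_difference(p, q):
    return sum(abs(a - b) for a, b in zip(p, q)) / len(p)

# Synthetic monitoring samples: (CPU allocation share, response time in ms).
observations = [(0.5, 120), (0.5, 180), (0.5, 160), (0.5, 220),
                (1.0, 60), (1.0, 80), (1.0, 90), (1.0, 150)]
by_alloc = defaultdict(list)
for alloc, rt in observations:
    by_alloc[alloc].append(rt)

bins = [0, 50, 100, 150, 200, 250]   # response-time bins (ms)
dists = {a: empirical_distribution(rts, bins) for a, rts in by_alloc.items()}
print("P(rt | alloc=0.5):", dists[0.5])
print("P(rt | alloc=1.0):", dists[1.0])
print("mean abs difference:", mean_abs_difference(dists[0.5], dists[1.0]))
```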


Conference on Network and Service Management | 2010

Capacity planning and power management to exploit sustainable energy

Daniel Gmach; Jerry Rolia; Cullen E. Bash; Yuan Chen; Tom Christian; Amip J. Shah; Ratnesh Sharma; Zhikui Wang

This paper describes an approach for designing a power management plan that matches the supply of power with the demand for power in data centers. Power may come from the grid, from local renewable sources, and possibly from energy storage subsystems. The supply of renewable power is often time-varying in a manner that depends on the source that provides the power, the location of power generators, and the weather conditions. The demand for power is mainly determined by the time-varying workloads hosted in the data center and the power management policies implemented by the data center. A case study demonstrates how our approach can be used to design a plan for realistic and complex data center workloads. The study considers a data center's deployment in two geographic locations with different supplies of power. Our approach offers greater precision than other planning methods that do not take into account time-varying power supply and demand and data center power management policies.
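For intuition, here is a minimal supply/demand matching loop of the kind such planning involves: in each hour, demand is served first from renewable supply, then from a small battery, and only then from the grid. All inputs (hourly demand, solar forecast, battery size) are assumed; the paper's planning method is considerably richer.

```python
def plan_power(demand_kw, renewable_kw, battery_kwh, battery_cap_kwh):
    """Return the grid power (kW) needed in each 1-hour slot after using
    renewable supply and a simple lossless battery model."""
    grid = []
    for d, r in zip(demand_kw, renewable_kw):
        surplus = r - d
        if surplus >= 0:
            # Charge the battery with any excess renewable supply.
            battery_kwh = min(battery_cap_kwh, battery_kwh + surplus)
            grid.append(0.0)
        else:
            # Cover the deficit from the battery first, then the grid.
            deficit = -surplus
            from_battery = min(battery_kwh, deficit)
            battery_kwh -= from_battery
            grid.append(deficit - from_battery)
    return grid

demand    = [60, 70, 90, 95, 80, 65]   # kW per hour (data center load)
renewable = [10, 40, 100, 90, 30, 5]   # kW per hour (e.g., solar forecast)
print(plan_power(demand, renewable, battery_kwh=0, battery_cap_kwh=50))
```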


IEEE Transactions on Network and Service Management | 2009

AppRAISE: application-level performance management in virtualized server environments

Zhikui Wang; Yuan Chen; Daniel Gmach; Sharad Singhal; Brian J. Watson; Wilson Rivera; Xiaoyun Zhu; Chris D. Hyser

Managing application-level performance for multitier applications in virtualized server environments is challenging because the applications are distributed across multiple virtual machines, and workloads are dynamic in their intensity and transaction mix resulting in time-varying resource demands. In this paper, we present AppRAISE, a system that manages performance of multi-tier applications by dynamically resizing the virtual machines hosting the applications. We extend a traditional queuing model to represent application performance in virtualized server environments, where virtual machine capacity is dynamically tuned. Using this performance model, AppRAISE predicts the performance of the applications due to workload changes, and proactively resizes the virtual machines hosting the applications to meet performance thresholds. By integrating feedforward prediction and feedback reactive control, AppRAISE provides a robust and efficient performance management solution. We tested AppRAISE using Xen virtual machines and the RUBiS benchmark application. Our empirical results show that AppRAISE can effectively allocate CPU resources to application components of multiple applications to meet end-to-end mean response time targets in the presence of variable workloads, while maintaining reasonable trade-offs between application performance, resource efficiency, and transient behavior.
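For intuition only, the sketch below substitutes a plain M/M/1 approximation for the paper's extended queuing model: given a predicted request rate and an assumed per-request CPU demand, it sizes a VM's CPU cap so that the predicted mean response time stays under a target. The demand, target, and headroom factor are hypothetical.

```python
def predicted_response_time(arrival_rate, cpu_demand, cpu_cap):
    """M/M/1 mean response time with service rate cpu_cap / cpu_demand."""
    service_rate = cpu_cap / cpu_demand
    if arrival_rate >= service_rate:
        return float('inf')          # saturated: the target cannot be met
    return 1.0 / (service_rate - arrival_rate)

def required_cpu_cap(arrival_rate, cpu_demand, target_rt, headroom=1.1):
    """Smallest CPU cap (in cores) keeping the M/M/1 mean response time
    under target_rt, inflated by a safety headroom factor."""
    return cpu_demand * (arrival_rate + 1.0 / target_rt) * headroom

demand = 0.01        # seconds of CPU per request (assumed)
target = 0.2         # 200 ms mean response time target (assumed)
for rate in (40, 80, 150):            # predicted requests/s after a workload change
    cap = required_cpu_cap(rate, demand, target)
    print(f"{rate} req/s -> cap {cap:.2f} cores, "
          f"predicted RT {predicted_response_time(rate, demand, cap):.3f} s")
```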


Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronic Systems | 2012

Towards the design and operation of net-zero energy data centers

Martin F. Arlitt; Cullen E. Bash; Sergey Blagodurov; Yuan Chen; Tom Christian; Daniel Gmach; Chris D. Hyser; Niru Kumari; Zhenhua Liu; Manish Marwah; Alan McReynolds; Chandrakant D. Patel; Amip J. Shah; Zhikui Wang; Rongliang Zhou

Reduction of resource consumption in data centers is becoming a growing concern for data center designers, operators and users. Accordingly, interest in the use of renewable energy to provide some portion of a data center's overall energy usage is also growing. One key concern is that the amount of renewable energy necessary to satisfy a typical data center's power consumption can lead to prohibitively high capital costs for the power generation and delivery infrastructure, particularly if on-site renewables are used. In this paper, we introduce a method to operate a data center with renewable energy that minimizes dependence on grid power while minimizing capital cost. We achieve this by integrating data center demand with the availability of resource supplies during operation. We discuss results from the deployment of our method in a production data center.


International Conference on Autonomic Computing | 2012

Adaptive green hosting

Nan Deng; Christopher Stewart; Daniel Gmach; Martin F. Arlitt; Jaimie Kelley

The growing carbon footprint of Web hosting centers contributes to climate change and could harm the public's perception of Web hosts and Internet services. A pioneering cadre of Web hosts, called green hosts, lower their footprints by cutting into their profit margins to buy carbon offsets. This paper argues that an adaptive approach to buying carbon offsets can increase a green host's total profit by exploiting daily, bursty patterns in Internet service workloads. We make the case in three steps. First, we present a realistic, geographically distributed service that meets strict SLAs while using green hosts to lower its carbon footprint. We show that the service routes requests between competing hosts differently depending on its request arrival rate and on how many carbon offsets each host provides. Second, we use empirical traces of request arrivals to compute how many carbon offsets a host should provide to maximize its profit. We find that diurnal fluctuations and bursty surges interrupted long contiguous periods where the best carbon offset policy held steady, leading us to propose a reactive approach. For certain hosts, our approach can triple the profit compared to a fixed approach used in practice. Third, we simulate 9 services with diverse carbon footprint goals that distribute their workloads across 11 Web hosts worldwide. We use real data on the location of Web hosts and their provided carbon offset policies to show that adaptive green hosting can increase profit by 152% for one of today's larger green hosts.
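The sketch below only illustrates why the profit-maximizing offset level can change with the request arrival rate, which is the adaptivity the paper exploits. The prices, the routing model, and the candidate offset levels are all hypothetical and are not taken from the paper.

```python
OFFSET_LEVELS = [0.0, 0.25, 0.5, 0.75, 1.0]   # fraction of energy offset
REVENUE_PER_REQ = 0.002        # $ earned per request served (assumed)
OFFSET_COST_PER_REQ = 0.0015   # $ of offsets covering one request's energy (assumed)

def routed_share(offset_fraction, arrival_rate):
    """Hypothetical routing model: greener hosts attract more of a
    carbon-conscious service's traffic, but the advantage shrinks when the
    service is so loaded that it must spill onto every host anyway."""
    bonus = 0.6 * offset_fraction * max(0.0, 1.0 - arrival_rate / 1000.0)
    return min(1.0, 0.2 + bonus)

def best_offset(arrival_rate):
    """Offset level that maximizes this hour's profit for the host."""
    def profit(f):
        requests = routed_share(f, arrival_rate) * arrival_rate * 3600
        return requests * (REVENUE_PER_REQ - OFFSET_COST_PER_REQ * f)
    return max(OFFSET_LEVELS, key=profit)

for rate in (50, 500, 5000):   # service-wide request arrival rate (req/s)
    print(f"{rate:>4} req/s -> offset {best_offset(rate):.0%} of energy")
```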


IEEE International Symposium on Sustainable Systems and Technology | 2010

Profiling Sustainability of Data Centers

Daniel Gmach; Yuan Chen; Amip J. Shah; Jerry Rolia; Cullen E. Bash; Tom Christian; Ratnesh Sharma

Today's data centers consume vast amounts of energy, leading to high operational costs, excessive water consumption, and significant greenhouse gas emissions. With the emergence of micro grids, an opportunity exists to reduce the environmental impact and cost of power in data centers. To realize this, demand-side power consumption needs to be understood and co-managed from the perspectives of both supply and demand. We present an approach to achieve this via data center power profiling and demonstrate its applicability for an enterprise data center.
