Christopher Hoover
Hewlett-Packard
Publication
Featured research published by Christopher Hoover.
network operations and management symposium | 2010
Yuan Chen; Daniel Gmach; Chris D. Hyser; Zhikui Wang; Cullen E. Bash; Christopher Hoover; Sharad Singhal
Data centers contain IT, power and cooling infrastructures, each of which is typically managed independently. In this paper, we propose a holistic approach that couples the management of IT, power and cooling infrastructures to improve the efficiency of data center operations. Our approach considers application performance management, dynamic workload migration/consolidation, and power and cooling control to “right-provision” computing, power and cooling resources for a given workload. We have implemented a prototype of this approach for virtualized environments and conducted experiments in a production data center. Our experimental results demonstrate that the integrated solution is practical and can reduce the energy consumption of servers by 35% and cooling by 15%, without degrading application performance.
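As a rough illustration of the coordination idea above (not the authors' implementation), the sketch below periodically sizes the active server set to the measured request rate, consolidates workloads when demand drops, and requests just enough cooling for the resulting IT load. The per-server capacity, power figures, and cooling ratio are all invented.

# Illustrative coupled IT/power/cooling loop; all numbers below are hypothetical.
import math

def plan_capacity(demand_rps, per_server_rps=500, headroom=0.2):
    """Servers needed to serve the demand with a safety margin."""
    return math.ceil(demand_rps * (1 + headroom) / per_server_rps)

def control_step(demand_rps, active_servers):
    needed = plan_capacity(demand_rps)
    actions = []
    if needed > active_servers:
        actions.append(("power_on", needed - active_servers))
    elif needed < active_servers:
        # Consolidate VMs onto fewer hosts before switching the rest off.
        actions.append(("migrate_and_power_off", active_servers - needed))
    it_power_kw = needed * 0.3                              # assume 300 W per active server
    actions.append(("set_cooling_kw", it_power_kw * 0.5))   # cooling demand tracks IT load
    return needed, actions

servers = 40
for demand in (5000, 12000, 3000):   # fluctuating request rate (requests/s)
    servers, acts = control_step(demand, servers)
    print(demand, servers, acts)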
ASME 2009 International Mechanical Engineering Congress and Exposition, IMECE2009 | 2009
Zhikui Wang; Alan McReynolds; Carlos Felix; Cullen E. Bash; Christopher Hoover; Monem H. Beitelmal; Rocky Shih
In data centers with raised floor architecture, the floor tiles are typically perforated, delivering the cold air from the plenum to the inlets of equipment located in racks. The environment of these data centers is dynamic in that the workload and power dissipation fluctuate considerably over both short-term and long-term time scales. As such, airflow requirements vary continuously. However, due to labor costs and lack of expertise, the tiles are adjusted infrequently, and many data centers are grossly over-provisioned for airflow in general and/or lack sufficient airflow delivery in certain local areas. This wastes energy and reduces data center thermal capacity. We have previously introduced Kratos, an Adaptive Vent Tile (AVT) technology that addresses this problem by automatically adjusting mechanical louvers mounted to the tiles in response to the needs of nearby IT equipment. Our initial results were limited to a 3-tile test bed that allowed us to prove the concept but did not provide for scalability. This paper extends the previous work by expanding the size of the test bed to 28 tiles and 29 racks located in multiple thermal zones. We present experimental modeling results on the MIMO (Multi-Input Multi-Output) system and provide insights into the external behavior of the system through CFD (Computational Fluid Dynamics) analysis. We develop an MPC (Model-based Predictive Control) controller to maintain the temperatures of racks below their thresholds through vent tile tuning. Experimental results show that the controller can maintain the temperatures below the thresholds while reducing overall cooling air requirements.
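As a minimal sketch of model-predictive vent-tile control (not the paper's controller, and with an invented two-rack linear thermal model), the code below searches for vent openings over a short horizon that keep predicted inlet temperatures below threshold while minimizing total airflow; it assumes numpy and scipy are available.

# Toy MPC over a hypothetical linear thermal model: T_next = A @ T + B @ u + c.
import numpy as np
from scipy.optimize import minimize

A = np.array([[0.8, 0.05], [0.05, 0.8]])     # thermal coupling between two racks
B = np.array([[-4.0, -0.5], [-0.5, -4.0]])   # wider vent opening lowers inlet temperature
c = np.array([7.0, 7.0])                     # heat added per step by the IT load
T_MAX = np.array([27.0, 27.0])               # inlet temperature thresholds (deg C)
HORIZON = 3

def predict(T0, u_seq):
    """Roll the model forward over the horizon for a given vent-opening schedule."""
    T, traj = T0, []
    for u in u_seq:
        T = A @ T + B @ u + c
        traj.append(T)
    return np.array(traj)

def cost(flat_u, T0):
    u_seq = flat_u.reshape(HORIZON, 2)
    traj = predict(T0, u_seq)
    overshoot = np.clip(traj - T_MAX, 0.0, None).sum()   # penalize threshold violations
    return u_seq.sum() + 100.0 * overshoot               # minimize airflow, keep T below max

def mpc_step(T0):
    x0 = np.full(HORIZON * 2, 0.5)
    res = minimize(cost, x0, args=(T0,), bounds=[(0.0, 1.0)] * (HORIZON * 2))
    return res.x.reshape(HORIZON, 2)[0]   # apply only the first move, then re-plan

print(mpc_step(np.array([26.0, 28.0])))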
modeling, analysis, and simulation on computer and telecommunication systems | 2009
Eric Anderson; Christopher Hoover; Xiaozhou Li; Joseph Tucek
Distributed systems are notoriously difficult to implement and debug. One important tool for understanding the behavior of distributed systems is tracing. Unfortunately, effective tracing for modern distributed systems faces several challenges. First, many interesting behaviors in distributed systems only occur rarely, or at full production scale. Hence we need tracing mechanisms which impose minimal overhead, in order to allow always-on tracing of production instances. Second, for high-speed systems, messages can be delivered in significantly less time than the error of traditional time synchronization techniques such as the network time protocol (NTP), necessitating time adjustment techniques with much higher precision. Third, distributed systems today may generate millions of events per second systemwide, resulting in traces consisting of billions of events. Such large traces can overwhelm existing trace analysis tools. These challenges make effective tracing difficult. We present techniques that address these three challenges. Our contributions include 1) a low-overhead tracing mechanism, which allows tracing of large systems without impacting their behavior or performance (0.14 μs/event), 2) a post hoc technique for producing highly accurate time synchronization across hosts (within 10 μs, compared to between 100 μs and 2 ms for NTP), and 3) incremental data processing techniques which facilitate analyzing traces containing billions of trace points on desktop systems. We have successfully applied these techniques to two distributed systems, a cooperative caching system and a distributed storage system, and from our experience, we believe our techniques are applicable to other distributed systems.
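One common way to realize post hoc cross-host time alignment, shown here as a generic sketch rather than the paper's specific technique, uses matched send and receive timestamps of traced messages: each message bounds the receiver's clock offset from one side, and combining messages in both directions narrows the estimate well below typical NTP error. The timestamps below are made up.

# Generic post hoc clock-offset estimation from traced message timestamps.

def offset_bounds(a_to_b, b_to_a):
    """
    a_to_b: (send_ts_on_A, recv_ts_on_B) pairs for messages A -> B.
    b_to_a: (send_ts_on_B, recv_ts_on_A) pairs for messages B -> A.
    Returns (low, high) bounds on offset = clock_B - clock_A,
    assuming every one-way delay is positive.
    """
    # A -> B: recv_B = send_A + delay + offset  =>  offset < recv_B - send_A
    upper = min(r - s for s, r in a_to_b)
    # B -> A: recv_A = send_B + delay - offset  =>  offset > send_B - recv_A
    lower = max(s - r for s, r in b_to_a)
    return lower, upper

a_to_b = [(10.000, 10.012), (20.000, 20.009), (30.000, 30.015)]
b_to_a = [(11.005, 11.001), (21.004, 21.000), (31.006, 31.003)]
low, high = offset_bounds(a_to_b, b_to_a)
print("offset in [%.3f, %.3f] s, midpoint %.4f s" % (low, high, (low + high) / 2))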
conference on automation science and engineering | 2010
Zhikui Wang; Cullen E. Bash; Christopher Hoover; Alan McReynolds; Carlos Felix; Rocky Shih
To accommodate the dynamic environment within raised floor data centers, cooling capacity is tuned during operation through zonal control means, e.g., active management of air conditioning resources. However, due to the spatial variance of cooling efficiency and time-varying cooling demand within zones, zonal adjustments alone are not able to maximize the thermal capacity of data centers. Without making local adjustments to the physical structure, such as altering vent tile openings, a data center can suffer significant reductions in thermal capacity and cooling efficiency, and, in turn, facility lifespan. In this paper, we present active cooling technologies using both local and zonal actuators that improve overall cooling efficiency. Experimental evaluation in a data center shows that the integrated controller can adapt to changes to the system under control, significantly improve the controllability of the temperatures, and reduce the energy consumption of the cooling facility.
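The split between local and zonal actuation can be pictured as a two-rate loop: vent tiles react quickly to individual rack inlet temperatures, and the CRAC setpoint is retuned more slowly once the tiles saturate. The sketch below is purely illustrative; the gains, thresholds, and actuator interface are hypothetical and not taken from the paper.

# Illustrative two-layer cooling loop; all gains and thresholds are hypothetical.

T_REF = 25.0   # desired rack inlet temperature (deg C)

def local_step(vent_opening, inlet_temp, gain=0.05):
    """Fast loop: nudge a vent tile toward its rack's reference temperature."""
    vent_opening += gain * (inlet_temp - T_REF)
    return min(1.0, max(0.0, vent_opening))

def zonal_step(crac_supply_temp, vent_openings, low=0.3, high=0.8):
    """Slow loop: retune the CRAC once the local actuators saturate on average."""
    mean_open = sum(vent_openings) / len(vent_openings)
    if mean_open > high:     # tiles mostly wide open: zone needs colder supply air
        crac_supply_temp -= 0.5
    elif mean_open < low:    # tiles mostly closed: supply air can be warmer
        crac_supply_temp += 0.5
    return crac_supply_temp

vents, crac = [0.5, 0.5, 0.5], 18.0
for inlet_temps in ([26.5, 27.0, 25.5], [27.5, 28.0, 26.0], [25.0, 25.5, 24.5]):
    vents = [local_step(v, t) for v, t in zip(vents, inlet_temps)]
    crac = zonal_step(crac, vents)
    print([round(v, 2) for v in vents], crac)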
ieee international symposium on sustainable systems and technology | 2009
Brian J. Watson; Ratnesh Sharma; Susan K. Charles; Amip J. Shah; Chandrakant D. Patel; Manish Marwah; Christopher Hoover; Thomas W. Christian; Cullen E. Bash
In this paper, we describe an integrated design and management approach to creating a sustainable IT ecosystem: a physical infrastructure where information technology has been seamlessly interwoven to improve environmental efficiency while achieving lower cost. Specifically, we describe five principles to achieve such integration: ecosystem-scale life-cycle design; scalable and configurable resource microgrids; pervasive sensing; knowledge discovery and visualization; and autonomous control. Application of the approach is demonstrated for the case study of an urban water infrastructure, and we find that the proposed approach could potentially enable reduction of life-cycle energy use by over 15%.
ASME 2011 5th International Conference on Energy Sustainability, Parts A, B, and C | 2011
Christopher Hoover; Brian J. Watson; Ratnesh Sharma; Sue Charles; Amip J. Shah; Chandrakant D. Patel; Manish Marwah; Tom Christian; Cullen E. Bash
In this paper, we describe an integrated design and management approach for building next-generation cities. This approach leverages IT technology in both the design and operational phases to optimize sustainability over a broad set of metrics while lowering costs. We call this approach a Sustainable IT Ecosystem. Our approach is based on five principles: ecosystem-scale life-cycle design; scalable and configurable infrastructure building blocks; pervasive sensing; data analytics and visualization; and autonomous control. Application of the approach is demonstrated for two case studies: an urban water infrastructure and an urban power microgrid. We conclude by discussing future opportunities to co-design and integrate these independent infrastructures, gaining further efficiencies.
modeling, analysis, and simulation on computer and telecommunication systems | 2010
Eric Anderson; Christopher Hoover; Xiaozhou Li
We present two new cooperative caching algorithms that allow a cluster of file system clients to cache chunks of files instead of directly accessing them from origin file servers. The first algorithm, called C-LRU (Cooperative-LRU), is based on the simple D-LRU (Distributed-LRU) algorithm, but moves a chunk's position closer to the tail of its local LRU list when the number of copies of the chunk increases. The second algorithm, called RobinHood, is based on the N-Chance algorithm, but targets chunks cached at many clients for replacement when forwarding a singlet to a peer. We evaluate these algorithms on a variety of workloads, including several publicly available traces, and find that the new algorithms significantly outperform their predecessors.
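The C-LRU adjustment can be sketched as an ordinary LRU list whose entries are demoted toward the eviction end when a peer reports another copy of the same chunk. The code below is a simplified single-process illustration built on Python's OrderedDict (it demotes all the way to the eviction end rather than by a graded amount, and the peer-copy signal is assumed to arrive from outside); it is not the authors' implementation.

from collections import OrderedDict

class CLRUCache:
    """Simplified C-LRU sketch: chunks with many peer copies are evicted sooner."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.chunks = OrderedDict()   # oldest entry (next to evict) sits at the front

    def access(self, chunk_id, data=None):
        if chunk_id in self.chunks:
            self.chunks.move_to_end(chunk_id)      # mark as most recently used
            return self.chunks[chunk_id]
        if len(self.chunks) >= self.capacity:
            self.chunks.popitem(last=False)        # evict the least recently used chunk
        self.chunks[chunk_id] = data
        return data

    def peer_copy_added(self, chunk_id):
        """A peer now caches this chunk too: demote it toward the eviction end."""
        if chunk_id in self.chunks:
            self.chunks.move_to_end(chunk_id, last=False)

cache = CLRUCache(capacity=3)
for c in ("a", "b", "c"):
    cache.access(c)
cache.peer_copy_added("c")    # "c" is now widely replicated, so it becomes eviction-first
cache.access("d")             # evicts "c" instead of "a"
print(list(cache.chunks))     # ['a', 'b', 'd']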
ieee international symposium on sustainable systems and technology | 2010
Martin F. Arlitt; Sujata Banerjee; Cullen E. Bash; Yuan Chen; Daniel Gmach; Christopher Hoover; Priya Mahadevan; Dejan S. Milojicic; Eric Pelletier; Rn Vishwanath; Amip J. Shah; Puneet Sharma
Cloud computing is gaining in popularity [1]. At the same time, the global community is becoming increasingly conscious of sustainability [2]. While Cloud computing intuitively has the potential to be more sustainable than well-tuned data centers, thanks to resource consolidation and virtualization, economies of scale, delivery on demand, and so on, this is not guaranteed. To evaluate and understand Cloud sustainability, we propose the Cloud Sustainability Dashboard (CSD), which models and assesses the overall sustainability impact of services hosted in the Cloud. We employ our approach to empirically evaluate the sustainability of Open Cirrus, an open source Cloud computing environment.
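At its simplest, the kind of assessment such a dashboard performs can be imagined as folding a service's IT energy use into environmental and cost figures through facility-level factors such as PUE, carbon intensity, and electricity price. The sketch below is a notional example with made-up factors, not the CSD's actual model.

# Notional sustainability roll-up for a hosted service; all factors are made up.

def assess(service_kwh_it, pue=1.6, carbon_kg_per_kwh=0.5, usd_per_kwh=0.10,
           water_l_per_kwh=1.8):
    facility_kwh = service_kwh_it * pue   # IT energy plus cooling/power overhead
    return {
        "energy_kwh": facility_kwh,
        "carbon_kg": facility_kwh * carbon_kg_per_kwh,
        "water_l": facility_kwh * water_l_per_kwh,
        "cost_usd": facility_kwh * usd_per_kwh,
    }

# Compare the same monthly workload in a private data center vs. a cloud site.
private = assess(service_kwh_it=12000, pue=1.8, carbon_kg_per_kwh=0.6)
cloud = assess(service_kwh_it=10000, pue=1.3, carbon_kg_per_kwh=0.4)
for name, report in (("private", private), ("cloud", cloud)):
    print(name, {k: round(v, 1) for k, v in report.items()})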
ieee international symposium on sustainable systems and technology | 2009
Tom Christian; Yuan Chen; Rocky Shih; Ratnesh Sharma; Christopher Hoover; Manish Marwah; Amip J. Shah; Daniel Gmach
Next generation data centers must be designed to meet Service Level Agreements (SLAs) for application performance while reducing costs and environmental impact. Traditional design approaches are manually intensive and must integrate thousands of components at multiple granularities, often with conflicting goals. We propose an Automated Data Center Synthesizer to design Sustainable Data Centers that meet SLA goals, minimize carbon emissions and embedded exergy, are optimally efficient and deliver significantly reduced Total Cost of Ownership (TCO). The paper concludes with a use case study that employs the synthesizer process flow to design an optimal data center to deliver a set of services for a hypothetical city using state of the art sustainable technologies.
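The synthesizer's core search can be pictured as filtering candidate designs by SLA feasibility and then ranking the survivors on a weighted blend of TCO and carbon. The toy enumeration below uses invented candidate designs and weights and is not the synthesizer itself.

# Toy design-space search: keep SLA-feasible candidates, rank by blended cost.
candidates = [
    {"name": "air-cooled, grid power",    "latency_ms": 40, "tco_musd": 9.0,  "co2_kt": 30},
    {"name": "water-cooled, grid power",  "latency_ms": 35, "tco_musd": 10.0, "co2_kt": 22},
    {"name": "water-cooled, solar+grid",  "latency_ms": 35, "tco_musd": 11.5, "co2_kt": 12},
    {"name": "dense, undersized cooling", "latency_ms": 80, "tco_musd": 7.5,  "co2_kt": 28},
]

def synthesize(designs, sla_latency_ms=50, w_cost=1.0, w_carbon=0.3):
    feasible = [d for d in designs if d["latency_ms"] <= sla_latency_ms]
    return min(feasible, key=lambda d: w_cost * d["tco_musd"] + w_carbon * d["co2_kt"])

print(synthesize(candidates)["name"])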
ASME 2011 Pacific Rim Technical Conference and Exhibition on Packaging and Integration of Electronic and Photonic Systems, MEMS and NEMS: Volume 2 | 2011
Christopher Hoover; Niru Kumari; Carlos Felix; Cullen E. Bash
We present Avatar, a data center environmental advisory system for raised floor data centers. Using limited information such as inlet and reference temperatures for IT equipment and basic floor plan geometry, Avatar produces recommendations to adjust the operation of computer room air conditioners (CRACs) and the configuration of vent tiles in a data center so as to reduce excess provisioning of cooling and to remove hot spots. Avatar reduces operating expenses by cooling the same load with less energy. Avatar reduces capital expenses by recovering stranded cooling capacity that would otherwise have to be replaced.
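A stripped-down version of this kind of advice can be derived from per-rack inlet temperature margins: racks well below their reference indicate stranded cooling that a CRAC can give back, while racks above it call for more local airflow. The sketch below is illustrative only; the margin threshold and recommendation wording are hypothetical, not Avatar's.

# Illustrative advisory pass over rack inlet temperatures; thresholds are hypothetical.

def advise(racks, overcool_margin=3.0):
    """racks: list of (rack_id, inlet_temp_c, reference_temp_c)."""
    advice = []
    for rack_id, inlet, ref in racks:
        if inlet > ref:
            advice.append(f"{rack_id}: hot spot, open nearby vent tiles")
        elif ref - inlet > overcool_margin:
            advice.append(f"{rack_id}: overcooled by {ref - inlet:.1f} C")
    if advice and all("overcooled" in a for a in advice):
        advice.append("zone: raise CRAC supply temperature to recover stranded capacity")
    return advice

racks = [("R1", 21.5, 27.0), ("R2", 22.0, 27.0), ("R3", 23.0, 27.0)]
print("\n".join(advise(racks)))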