Cullen E. Bash
Hewlett-Packard
Publications
Featured research published by Cullen E. Bash.
2003 International Electronic Packaging Technical Conference and Exhibition, Volume 2 | 2003
Chandrakant D. Patel; Cullen E. Bash; Abdlmonem H. Beitelmal
A cooling system is configured to adjust cooling fluid flow to the various racks located throughout a data center based upon detected or anticipated temperatures at various locations in the data center. By substantially increasing the cooling fluid flow to racks dissipating greater amounts of heat and decreasing it to racks dissipating lesser amounts of heat, the energy required to operate the cooling system may be reduced. Thus, instead of operating the devices of the cooling system (e.g., compressors, fans, etc.) as though the racks were dissipating 100 percent of their anticipated heat load, those devices may be operated according to actual cooling needs. In addition, the racks may be positioned throughout the data center according to their anticipated heat loads, thereby enabling computer room air conditioning (CRAC) units located at various positions throughout the data center to operate more efficiently. The positioning of the racks may be determined through numerical modeling and metrology of the cooling fluid flow throughout the data center, and the numerical modeling may also be used to control the volume flow rate and velocity of the cooling fluid delivered through each of the vents.
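As a loose illustration of this idea (not the system described above), the sketch below apportions a fixed cooling airflow budget across racks in proportion to their measured heat loads instead of provisioning every rack for worst-case dissipation; the rack names, loads, and airflow figures are hypothetical.

```python
# Minimal illustration (not the authors' implementation) of load-proportional
# cooling provisioning: distribute a CRAC unit's cooling airflow across racks
# in proportion to each rack's measured heat load, instead of supplying every
# rack with worst-case airflow. All numbers below are hypothetical.

def allocate_airflow(rack_loads_kw: dict[str, float],
                     total_airflow_cfm: float,
                     min_airflow_cfm: float = 200.0) -> dict[str, float]:
    """Split the available airflow proportionally to rack heat load,
    with a small floor so lightly loaded racks still receive some air."""
    total_load = sum(rack_loads_kw.values())
    allocation = {}
    for rack, load in rack_loads_kw.items():
        share = load / total_load if total_load > 0 else 1 / len(rack_loads_kw)
        allocation[rack] = max(min_airflow_cfm, share * total_airflow_cfm)
    return allocation

# Example: three racks with uneven heat loads sharing 6,000 CFM of supply air.
racks = {"rack_A": 12.0, "rack_B": 6.0, "rack_C": 2.0}   # kW (hypothetical)
for rack, cfm in allocate_airflow(racks, total_airflow_cfm=6000).items():
    print(f"{rack}: {cfm:.0f} CFM")
```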
IEEE Internet Computing | 2005
Ratnesh Sharma; Cullen E. Bash; Chandrakant D. Patel; Richard J. Friedrich; Jeffrey S. Chase
Internet-based applications and their resulting multitier distributed architectures have changed the focus of design for large-scale Internet computing. Internet server applications execute in a horizontally scalable topology across hundreds or thousands of commodity servers in Internet data centers. Increasing scale and power density significantly impact the data center's thermal properties. Effective thermal management is essential to the robustness of mission-critical applications. Internet service architectures can address multisystem resource management as well as thermal management within data centers.
measurement and modeling of computer systems | 2012
Zhenhua Liu; Yuan Chen; Cullen E. Bash; Adam Wierman; Daniel Gmach; Zhikui Wang; Manish Marwah; Chris D. Hyser
Recently, the demand for data center computing has surged, increasing the total energy footprint of data centers worldwide. Data centers typically comprise three subsystems: IT equipment provides services to customers; power infrastructure supports the IT and cooling equipment; and the cooling infrastructure removes heat generated by these subsystems. This work presents a novel approach to model the energy flows in a data center and optimize its operation. Traditionally, supply-side constraints such as energy or cooling availability were treated independently from IT workload management. This work reduces electricity cost and environmental impact using a holistic approach that integrates renewable supply, dynamic pricing, and cooling supply including chiller and outside air cooling, with IT workload planning to improve the overall sustainability of data center operations. Specifically, we first predict renewable energy as well as IT demand. Then we use these predictions to generate an IT workload management plan that schedules IT workload and allocates IT resources within a data center according to time-varying power supply and cooling efficiency. We have implemented and evaluated our approach using traces from real data centers and production systems. The results demonstrate that our approach can reduce both the recurring power costs and the use of non-renewable energy by as much as 60% compared to existing techniques, while still meeting service-level agreements.
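As a rough sketch of the prediction-then-planning step (the paper solves a proper optimization problem; this is only a greedy stand-in), the example below shifts deferrable batch work into hours with predicted renewable surplus and cheaper cooling; all profiles and constants are made up.

```python
# Rough sketch of renewable- and cooling-aware batch scheduling. This is a
# simple greedy heuristic for illustration only; all profiles below are made up.

HOURS = list(range(24))
renewable_mw = [0, 0, 0, 0, 0, 1, 3, 5, 7, 8, 9, 9, 9, 8, 7, 5, 3, 1, 0, 0, 0, 0, 0, 0]
interactive_mw = [4] * 24                      # non-deferrable IT demand (predicted)
cooling_overhead = [0.5 if 9 <= h <= 17 else 0.3 for h in HOURS]  # cooling energy per IT unit
capacity_mw = 10                               # IT capacity per hour
batch_mwh = 30                                 # deferrable work to place

def schedule_batch():
    """Place deferrable work where renewable surplus is high and cooling is cheap."""
    plan = {h: 0.0 for h in HOURS}
    remaining = batch_mwh
    # Rank hours: most renewable surplus first, lowest cooling overhead as tie-breaker.
    ranked = sorted(HOURS, key=lambda h: (-(renewable_mw[h] - interactive_mw[h]),
                                          cooling_overhead[h]))
    for h in ranked:
        if remaining <= 0:
            break
        room = capacity_mw - interactive_mw[h]
        placed = min(room, remaining)
        plan[h] = placed
        remaining -= placed
    return plan

for hour, mw in schedule_batch().items():
    if mw > 0:
        print(f"hour {hour:02d}: run {mw:.1f} MW of batch work")
```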
intersociety conference on thermal and thermomechanical phenomena in electronic systems | 2006
Cullen E. Bash; Chandrakant D. Patel; Ratnesh Sharma
Increases in server power dissipation have placed significant pressure on traditional data center thermal management systems. Traditional systems utilize computer room air conditioning (CRAC) units to pressurize a raised floor plenum with cool air that is passed to equipment racks via ventilation tiles distributed throughout the raised floor. Temperature is typically controlled at the hot-air return of the CRAC units, away from the equipment racks. Due primarily to a lack of distributed environmental sensing, these CRAC systems are often operated conservatively, resulting in reduced computational density and added operational expense. This paper introduces a data center environmental control system that utilizes a distributed sensor network to manipulate conventional CRAC units within an air-cooled environment. The sensor network is attached to standard racks and provides a direct measurement of the environment in close proximity to the computational resources. A calibration routine is used to characterize the response of each sensor in the network to individual CRAC actuators. A cascaded control algorithm is used to evaluate the data from the sensor network and manipulate the supply air temperature and flow rate from individual CRACs to ensure thermal management while reducing operational expense. The combined controller and sensor network has been deployed in a production data center environment. Results from the algorithm will be presented that demonstrate the performance of the system and evaluate the energy savings compared with a conventional data center environmental control architecture.
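As a rough illustration of the calibration-based idea (not the controller from the paper), the sketch below adjusts each CRAC's supply air temperature from rack-inlet sensor readings using a hypothetical influence matrix obtained from a calibration step; all matrix values, gains, and setpoints are assumptions.

```python
import numpy as np

# Sketch of a sensor-network-driven CRAC adjustment step (illustrative only).
# influence[i][j] ~ how strongly CRAC j affects sensor i, as obtained from a
# calibration routine; values and gains below are hypothetical.

influence = np.array([[0.8, 0.1],
                      [0.6, 0.3],
                      [0.1, 0.9],
                      [0.2, 0.7]])          # 4 rack-inlet sensors x 2 CRAC units

setpoint_c = 25.0                            # target rack inlet temperature
gain = 0.5                                   # proportional gain on supply temperature

def crac_supply_update(inlet_temps_c, supply_temps_c):
    """Lower the supply temperature of CRACs whose influenced sensors run hot;
    raise it (saving energy) when all influenced sensors have margin."""
    errors = np.asarray(inlet_temps_c) - setpoint_c          # >0 means too hot
    # Weight each sensor's error by how strongly this CRAC influences it,
    # then react to the worst weighted error per CRAC.
    weighted = influence * errors[:, None]
    correction = gain * weighted.max(axis=0)
    return np.asarray(supply_temps_c) - correction

print(crac_supply_update([26.5, 25.2, 24.0, 23.5], [18.0, 20.0]))
```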
intersociety conference on thermal and thermomechanical phenomena in electronic systems | 2002
Chandrakant D. Patel; Ratnesh Sharma; Cullen E. Bash; Abdlmonem H. Beitelmal
A high compute density data center of today is characterized as one consisting of thousands of racks, each with multiple computing units. The computing units include multiple microprocessors, each dissipating approximately 250 W of power. The heat dissipation from a rack containing such computing units exceeds 10 kW. Today's data center, with 1,000 racks over 30,000 square feet, requires 10 MW of power for the computing infrastructure. A 100,000 square foot data center of tomorrow will require 50 MW of power for the computing infrastructure, and the energy required to dissipate this heat will be an additional 20 MW. A hundred thousand square foot planetary-scale data center, with five thousand 10 kW racks, would cost approximately $44 million per year (at $100/MWh) just to power the servers and $18 million per year to power the cooling infrastructure for the data center. Cooling design considerations, by virtue of proper layout of racks, can yield substantial savings in energy. This paper gives an overview of a data center cooling design and presents the results of a case study in which a layout change, guided by numerical modeling, enabled more efficient use of air conditioning resources.
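The annual cost figures quoted above follow from straightforward energy arithmetic; the sketch below reproduces them under the assumptions stated in the abstract plus continuous (24x7) operation.

```python
# Back-of-envelope reproduction of the annual power-cost figures quoted in the
# abstract above. Assumptions: 5,000 racks at 10 kW each, 20 MW of cooling load,
# continuous operation, and a flat electricity price of $100/MWh.

RACKS = 5_000
RACK_POWER_KW = 10
COOLING_POWER_MW = 20
PRICE_PER_MWH = 100          # USD
HOURS_PER_YEAR = 8_760

it_power_mw = RACKS * RACK_POWER_KW / 1_000          # 50 MW
it_cost = it_power_mw * HOURS_PER_YEAR * PRICE_PER_MWH
cooling_cost = COOLING_POWER_MW * HOURS_PER_YEAR * PRICE_PER_MWH

print(f"IT power cost:      ${it_cost / 1e6:.1f}M per year")       # ~ $44M
print(f"Cooling power cost: ${cooling_cost / 1e6:.1f}M per year")  # ~ $18M
```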
HVAC&R Research | 2003
Cullen E. Bash; Chandrakant D. Patel; Ratnesh Sharma
Power dissipation within computer rooms or data centers has been steadily increasing over the past decade. With the spread of CMOS technology into microprocessors and memory in the 1980s and 1990s, water-cooled mainframe systems were largely supplanted by lower power air-cooled systems. These systems were typically stacked into 2-m-high racks for efficient use of expensive data center floor space. Data center environmental cooling infrastructures correspondingly evolved into designs that recirculate hot exhaust air from the computer systems into air-conditioning units. The air-conditioning units remove the heat and return the cool air back into the room in a closed-loop fashion. These air-cooled infrastructures are largely open, nonducted environments where hot and cold airstreams are free to mix. The evolution of microprocessor fabrication technology has enabled the construction of high-power processors. The push by business, academia, and consumers for greater processing speed has motivated the design of computer systems that enable the greatest number of processors, and, thus, the greatest processing power, per rack volume. This increase in microprocessor density places a great deal of strain on current computer room environmental control technology. Furthermore, the rate of increase in power density in the data center is outpacing that of HVAC technology improvements. Because of this, computer manufacturers are faced with the choice of either limiting system performance in favor of reduced power consumption or of providing customers with higher performance products that are impractical to deploy. In this paper, we will highlight some of the primary challenges with cooling high-power density data centers. We will demonstrate that existing environmental infrastructures have inherent inefficiencies that can be very costly, and we will explore alternatives. Additionally, the use of numerical modeling to diagnose problems with data center design and layout will be demonstrated, and limitations to its effective use will be discussed. Finally, the high-power densities involved have increased the need for a theoretical treatment of data center thermophysics. We will discuss this need in detail and will suggest ways in which it might be addressed. Throughout the paper, focus will be placed on future directions with the hope of instilling enthusiasm for further research and development by academia and industry in this particular area of HVAC&R that will soon reach a critical point in its continuing evolution.
network operations and management symposium | 2010
Yuan Chen; Daniel Gmach; Chris D. Hyser; Zhikui Wang; Cullen E. Bash; Christopher Hoover; Sharad Singhal
Data centers contain IT, power and cooling infrastructures, each of which is typically managed independently. In this paper, we propose a holistic approach that couples the management of IT, power and cooling infrastructures to improve the efficiency of data center operations. Our approach considers application performance management, dynamic workload migration/consolidation, and power and cooling control to "right-provision" computing, power and cooling resources for a given workload. We have implemented a prototype of this approach for virtualized environments and conducted experiments in a production data center. Our experimental results demonstrate that the integrated solution is practical and can reduce the energy consumption of servers by 35% and of cooling by 15%, without degrading application performance.
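As a caricature of the right-provisioning idea (not the paper's controller), the sketch below sizes the active server pool to the current workload and scales cooling with the resulting IT heat load; the capacity, power, and cooling constants are hypothetical.

```python
import math

# Illustrative "right-provisioning" step (not the paper's controller): size the
# active server pool to current demand and scale cooling to the resulting heat
# load, instead of running all servers and cooling at full capacity.
# All constants are hypothetical.

SERVER_CAPACITY = 100        # work units one server hosts at target utilization
SERVER_POWER_W = 300         # average power of an active server
COOLING_FACTOR = 0.4         # cooling watts needed per IT watt at this facility

def right_provision(demand_units: float, total_servers: int) -> dict:
    active = min(total_servers, max(1, math.ceil(demand_units / SERVER_CAPACITY)))
    it_power_w = active * SERVER_POWER_W
    cooling_power_w = COOLING_FACTOR * it_power_w
    return {"active_servers": active,
            "idle_servers": total_servers - active,
            "it_power_kw": it_power_w / 1000,
            "cooling_power_kw": cooling_power_w / 1000}

print(right_provision(demand_units=2_350, total_servers=60))
```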
conference on network and service management | 2010
Daniel Gmach; Jerry Rolia; Cullen E. Bash; Yuan Chen; Tom Christian; Amip J. Shah; Ratnesh Sharma; Zhikui Wang
ASME 2003 International Mechanical Engineering Congress and Exposition | 2003
Chandrakant D. Patel; Ratnesh Sharma; Cullen E. Bash; Sven Graupner
Journal of Electronic Packaging | 2011
Thomas J. Breen; Ed Walsh; Jeff Punch; Amip J. Shah; Cullen E. Bash