Publication


Featured research published by Jeffrey A. Zitz.


IEEE Transactions on Components and Packaging Technologies | 2007

A Practical Implementation of Silicon Microchannel Coolers for High Power Chips

Evan G. Colgan; Bruce K. Furman; Michael A. Gaynes; William S. Graham; Nancy C. LaBianca; John Harold Magerlein; Robert J. Polastre; Mary Beth Rothwell; Raschid J. Bezama; Rehan Choudhary; Kenneth C. Marston; Hilton T. Toy; Jamil A. Wakil; Jeffrey A. Zitz; Roger R. Schmidt

This paper describes a practical implementation of a single-phase Si microchannel cooler designed for cooling very high power chips such as microprocessors. Through the use of multiple heat-exchanger zones and optimized cooler fin designs, a unit thermal resistance of 10.5 C-mm2/W from the cooler surface to the inlet water was demonstrated with a fluid pressure drop of less than 35 kPa. Further, cooling of a thermal test chip, with a microchannel cooler bonded to it and packaged in a single-chip module, was also demonstrated at a chip power density greater than 300 W/cm2. Coolers of this design should be able to cool chips with average power densities of 400 W/cm2 or more.
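As a rough illustration of what these figures imply, the sketch below multiplies the quoted unit thermal resistance by the demonstrated power density to estimate the cooler-surface-to-inlet-water temperature rise. This is a simplification that ignores heat spreading and the chip-to-cooler bond resistance, not a calculation from the paper.

```python
# Back-of-envelope use of the figures quoted above: temperature rise
# from cooler surface to inlet water = unit thermal resistance x heat
# flux. Ignores spreading and the chip-to-cooler bond resistance.

R_UNIT = 10.5        # C-mm^2/W, unit thermal resistance (from abstract)
Q_W_PER_CM2 = 300.0  # W/cm^2, demonstrated power density (from abstract)

q_w_per_mm2 = Q_W_PER_CM2 / 100.0   # 1 cm^2 = 100 mm^2
delta_t = R_UNIT * q_w_per_mm2      # C

print(f"Rise above inlet water at 300 W/cm^2: {delta_t:.1f} C")  # ~31.5 C
```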


IBM Journal of Research and Development | 2002

An advanced multichip module (MCM) for high-performance UNIX servers

John U. Knickerbocker; Frank L. Pompeo; Alice F. Tai; Donald L. Thomas; Roger D. Weekly; Michael G. Nealon; Harvey C. Hamel; Anand Haridass; James N. Humenik; Richard A. Shelleman; Srinivasa S. N. Reddy; Kevin M. Prettyman; Benjamin V. Fasano; Sudipta K. Ray; Thomas E. Lombardi; Kenneth C. Marston; Patrick A. Coico; Peter J. Brofman; Lewis S. Goldmann; David L. Edwards; Jeffrey A. Zitz; Sushumna Iruvanti; Subhash L. Shinde; Hai P. Longworth

In 2001, IBM delivered to the marketplace a high-performance UNIX®-class eServer based on a four-chip multichip module (MCM) code-named Regatta. This MCM supports four POWER4 chips, each with 170 million transistors, which utilize the IBM advanced copper back-end interconnect technology. Each chip is attached to the MCM through 7018 flip-chip solder connections. The MCM, fabricated using the IBM high-performance glass-ceramic technology, features 1.7 million internal copper vias and high-density top-surface contact pad arrays with 100-µm pads on 200-µm centers. Interconnections between chips on the MCM and interconnections to the board for power distribution and MCM-to-MCM communication are provided by 190 meters of co-sintered copper wiring. Additionally, the 5100 off-module connections on the bottom side of the MCM are fabricated at a 1-mm pitch and connected to the board through the use of a novel land grid array technology, enabling a compact 85-mm × 85-mm module footprint that supports 8- to 32-way systems with processors operating at 1.1 GHz or 1.3 GHz. The MCM also incorporates advanced thermal solutions that enable 156 W of cooling per chip. This paper presents a detailed overview of the fabrication, assembly, testing, and reliability qualification of this advanced MCM technology.


IEEE Journal of Solid-State Circuits | 2014

Circuit and Physical Design of the zEnterprise™ EC12 Microprocessor Chips and Multi-Chip Module

James D. Warnock; Yuen H. Chan; Hubert Harrer; Sean M. Carey; Gerard M. Salem; Doug Malone; Ruchir Puri; Jeffrey A. Zitz; Adam R. Jatkowski; Gerald Strevig; Ayan Datta; Anne E. Gattiker; Aditya Bansal; Guenter Mayer; Yiu-Hing Chan; Mark D. Mayo; David L. Rude; Leon J. Sigal; Thomas Strach; Howard H. Smith; Huajun Wen; Pak-Kin Mak; Chung-Lung Kevin Shum; Donald W. Plass; Charles F. Webb

This work describes the circuit and physical design implementation of the processor chip (CP), the level-4 cache chip (SC), and the multi-chip module at the heart of the EC12 system. The chips were implemented in IBM's high-performance 32nm high-k/metal-gate SOI technology. The CP chip contains 6 super-scalar, out-of-order processor cores running at 5.5 GHz, while the SC chip contains 192 MB of eDRAM cache. Six CP chips and two SC chips are mounted on a high-performance glass-ceramic substrate, which provides high-bandwidth, low-latency interconnections. Various aspects of the design are explored in detail, with most of the focus on the CP chip, including the circuit design implementation, clocking, thermal modeling, reliability, frequency tuning, and comparison to the previous design in 45nm technology.


international solid-state circuits conference | 2015

4.1 22nm Next-generation IBM System z microprocessor

James D. Warnock; Brian W. Curran; John Badar; Gregory J. Fredeman; Donald W. Plass; Yuen H. Chan; Sean M. Carey; Gerard M. Salem; Friedrich Schroeder; Frank Malgioglio; Guenter Mayer; Christopher J. Berry; Michael H. Wood; Yiu-Hing Chan; Mark D. Mayo; John Mack Isakson; Charudhattan Nagarajan; Tobias Werner; Leon J. Sigal; Ricardo H. Nigaglioni; Mark Cichanowski; Jeffrey A. Zitz; Matthew M. Ziegler; Tim Bronson; Gerald Strevig; Daniel M. Dreps; Ruchir Puri; Douglas J. Malone; Dieter Wendel; Pak-Kin Mak

The next-generation System z design introduces a new microprocessor chip (CP) and a system controller chip (SC) aimed at providing a substantial boost to maximum system capacity and performance compared to the previous zEC12 design in 32nm [1,2]. The CP chip includes 8 high-frequency processor cores, 64MB of eDRAM L3 cache, interface IOs (“XBUS”) to connect to two other processor chips and the L4 cache chip, along with memory interfaces, 2 PCIe Gen3 interfaces, and an I/O bus controller (GX). The design is implemented on a 678 mm2 die with 4.0 billion transistors and 17 levels of metal interconnect in IBM's high-performance 22nm high-k CMOS SOI technology [3]. The SC chip is also a 678 mm2 die, with 7.1 billion transistors, running at half the clock frequency of the CP chip, in the same 22nm technology, but with 15 levels of metal. It provides 480 MB of eDRAM L4 cache, an increase of more than 2× from zEC12 [1,2], and contains an 18 MB eDRAM L4 directory, along with multi-processor cache control/coherency logic to manage inter-processor and system-level communications. Both the CP and SC chips incorporate significant logical, physical, and electrical design innovations.


electronic components and technology conference | 2014

Bonding technologies for chip level and wafer level 3D integration

Katsuyuki Sakuma; Spyridon Skordas; Jeffrey A. Zitz; Eric D. Perfecto; William L. Guthrie; Luc Guerin; Richard Langlois; Hsichang Liu; Wei Lin; Kevin R. Winstel; Sayuri Kohara; Kuniaki Sueoka; Matthew Angyal; Troy L. Graves-Abe; Daniel George Berger; John U. Knickerbocker; Subramanian S. Iyer

This paper provides a comparison of bonding process technologies for chip- and wafer-level 3D integration (3Di). We discuss and compare reflow-furnace, thermo-compression, and Cavity Alignment Method (CALM) bonding for chip-level 3Di, and oxide bonding for 300 mm wafer-level 3Di. For chip 3Di, challenges related to maintaining thin-die and laminate co-planarity were overcome, and stacking of large thin Si die with 22 nm CMOS devices was achieved; the die size was more than 600 mm2. Also, 300 mm 3Di wafer stacking with 45 nm CMOS devices was demonstrated: wafers were thinned to 10 μm, and Cu through-silicon-via (TSV) interconnections were formed after bonding to another device wafer. For both chip- and wafer-level 3Di, test results show no loss of integrity due to the bonding technologies.


intersociety conference on thermal and thermomechanical phenomena in electronic systems | 2012

Thermal and mechanical analysis and design of the IBM Power 775 water cooled supercomputing central electronics complex

Gary F. Goth; Amilcar R. Arvelo; Jason R. Eagle; Michael J. Ellsworth; Kenneth C. Marston; Arvind K. Sinha; Jeffrey A. Zitz

In 2008, IBM reintroduced water-cooling technology into its high-performance computing platform, the Power 575 Supercomputing node/system. Water-cooled cold plates were used to cool the processor modules, which represented about half of the total system (rack) heat load. An air-to-liquid heat exchanger was also mounted in the rear door of the rack to remove a significant fraction of the other half of the rack heat load, the heat load to air. Water cooling enabled a compute node with 34% greater performance (Flops), resulted in processor temperatures 20-30°C lower than those typically provided with air cooling, and reduced the power consumed in the data center to transfer the IT heat to the outside ambient by as much as 45%. The next generation of this platform, the Power 775 Supercomputing node/system, is a significant leap forward in computing performance and energy efficiency. The compute node and system were designed from the start with water cooling in mind. The result is a system with greater than 95% of its heat load conducted directly to water; together with a rear-door heat exchanger, it rejects 100% of its heat load to water, with no requirement for room air conditioning. In addition to the processors, the memory, power-conversion, and I/O electronics conduct their heat to water. Included within the framework of the system is a disk storage unit (disk enclosure) containing an inter-board air-to-water heat exchanger. This paper details key thermal and mechanical design issues associated with the Power 775 server drawer, or central electronics complex (CEC). Topics addressed include processor and optical I/O Hub Module thermal design (including thermal interfaces); water-cooled memory design; module cold-plate designs; CEC-level water distribution; module-level structural analyses for thermal performance; module/board land grid array (LGA) load distribution; the effect of load distribution on module thermal interfaces; and the effect of cold-plate tubing on module (LGA) loading.
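For a sense of scale, a simple coolant energy balance (Q = ṁ·cp·ΔT) indicates the kind of water flow such a system implies. The heat load and coolant temperature rise below are hypothetical placeholders, not figures from the paper.

```python
# Minimal coolant energy balance: Q = m_dot * c_p * dT.
# The 30 kW heat load and 10 K coolant temperature rise are
# hypothetical values chosen for illustration only.

heat_load_w = 30_000.0   # W, assumed heat load carried by water
cp_water = 4186.0        # J/(kg*K), specific heat of water
delta_t = 10.0           # K, assumed inlet-to-outlet temperature rise

m_dot = heat_load_w / (cp_water * delta_t)   # kg/s
liters_per_min = m_dot * 60.0                # ~1 L of water per kg

print(f"Required flow: {m_dot:.2f} kg/s ({liters_per_min:.0f} L/min)")
```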


ASME 2005 Pacific Rim Technical Conference and Exhibition on Integration and Packaging of MEMS, NEMS, and Electronic Systems collocated with the ASME 2005 Heat Transfer Summer Conference | 2005

Internal Thermal Management of IBM P-Server Large Format Multi-Chip Modules Utilizing Small Gap Technology

Patrick A. Coico; Gaetano P. Messina; Steven P. Ostrander; Jeffrey A. Zitz; Wei Zou

The large Multi-Chip Modules (MCMs) used in the IBM p-Server computer systems, and their predecessors, have required rather unique cooling solutions and module hardware designs in order to meet the thermal, mechanical, and reliability requirements placed on the package. The module internal thermal solution has evolved from a spring-loaded metal-contact technology to a thermal-compound-based design using a novel gap-adjustment technology employing a soldered conduction component. The current MCM makes use of a novel technology called Small Gap Technology (SGT), which makes it possible to control thermal-compound interface thicknesses, or gaps, to a very tight tolerance from chip to chip and module to module. Heat fluxes handled vary from approximately 20 to 53 W/cm2, depending on the type of chip and the system performance level; even higher heat fluxes are projected for next-generation products. The hardware and processing techniques employed to manufacture these modules are quite unique. These products typically have a chip carrier on the order of 100 mm on a side (approximately 90 cm2 of carrier area), an overall module footprint of 140 mm on a side, and contain 8 chips and numerous discrete devices. The process fixturing and equipment must be able to handle the relatively large thermal mass of the components, and the sequence of processing steps must take into account limitations on the material properties of the various module components. This paper describes the SGT thermal management solution. The hardware and process employed to make the gap adjustments and the thermal interface material used in these high-heat-flux applications are discussed. In addition, supporting thermal/mechanical modelling, thermal performance data, and reliability data are presented.
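To see why tight gap control matters at these heat fluxes, a one-dimensional conduction estimate suffices: the temperature drop across the compound scales linearly with the gap. The 20 and 53 W/cm2 fluxes come from the abstract; the gap thicknesses and compound conductivity below are assumed for illustration.

```python
# 1-D conduction across the thermal-compound gap: dT = q * t / k.
# Heat fluxes of 20 and 53 W/cm^2 are from the abstract; the gap
# thicknesses and compound conductivity below are assumed values.

k_compound = 3.5  # W/(m*K), assumed thermal-compound conductivity

for q_w_cm2 in (20.0, 53.0):
    q = q_w_cm2 * 1e4  # convert to W/m^2
    for gap_um in (25, 50, 100):
        dt = q * gap_um * 1e-6 / k_compound
        print(f"q={q_w_cm2:>4} W/cm^2, gap={gap_um:>3} um: dT={dt:.1f} C")
```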


intersociety conference on thermal and thermomechanical phenomena in electronic systems | 2016

Thermal-mechanical Co-design of Cold Plate, Second Level Thermal Interface Material (TIM2) and Heat Spreaders for Optimal Thermal Performance for High-end Processor Cooling

Xiaojin Wei; Allan C. Vandeventer; S. Canfield; Y. Yu; John G. Torok; Peter W. Kelly; Don Porter; W. Kostenko; Jeffrey A. Zitz; Kamal K. Sikka

Cooling high-end system processors has become increasingly challenging due to the increase in both total power and peak power density in processor cores. Peak junction temperature at worst-case corner conditions often establishes the limits on the maximum supportable circuit speed as well as processor chip yield. While significant progress has been made in cooling technology (e.g., cold-plate design and thermal interface materials at the first and second package levels), a systematic approach is needed to optimize the entire thermal and mechanical stack to achieve the overall thermal performance objectives. The necessity and importance of this stems from the thermal and mechanical design interdependencies contained within the overall stack. This paper reports an in-depth study of the thermal-mechanical interactions associated with the cold plate, second-level thermal interface material (TIM2), and heat spreaders. Thermal test results are reported for different cold-plate designs and TIM2 pad sizes. Thermal and mechanical modeling results are provided to quantify TIM2 thermal performance as a function of the TIM2 mechanical stress, the TIM2 dimensions, and the cold-plate design. As both modeling and testing results show, the optimal TIM2 pad size emerges as a trade-off between heat transfer area for conduction and TIM2 compressive pressure. In addition, pressure-sensitive-film results are provided, revealing that heat-spreader design affects module-level and TIM2 thermal performance. Results from this work clearly demonstrate the close interactions between cooling hardware in the stack, and hence the importance of thermal-mechanical co-design, to achieve optimal thermal performance for high-end processors.
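The pad-size trade-off described above can be caricatured with a toy model: conduction resistance falls with pad area, while bond-line thickness grows as clamping pressure drops. The compliance law and every parameter value in the sketch below are hypothetical, chosen only to make the trade-off visible; the paper's actual characterization is experimental.

```python
# Toy model of the TIM2 pad-size trade-off described above: a larger
# pad lowers conduction resistance t/(k*A), but also lowers pressure
# F/A, which thickens the bond line. The t(P) law and all values here
# are hypothetical, for illustration only.

F = 400.0       # N, assumed fixed clamping load
K_TIM = 3.0     # W/(m*K), assumed TIM2 thermal conductivity
T_MIN = 50e-6   # m, assumed bond line at very high pressure
C = 3.125e6     # m*Pa^2, assumed compliance coefficient

def tim2_resistance(area_m2: float) -> float:
    """Return TIM2 thermal resistance (K/W) for a given pad area."""
    pressure = F / area_m2                 # Pa
    thickness = T_MIN + C / pressure**2    # m, assumed t(P) law
    return thickness / (K_TIM * area_m2)

for side_mm in (20, 30, 40, 50, 60):
    area = (side_mm * 1e-3) ** 2
    print(f"{side_mm} mm square pad: {tim2_resistance(area)*1e3:.1f} mK/W")
# Minimum near 40 mm: beyond that, the thicker bond line outweighs
# the extra conduction area.
```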


Archive | 2001

Area Array Leverages: Why and How to Choose a Package

David L. Thomas; Daniel O’Connor; Jeffrey A. Zitz

Performance is an attribute that defines how well a given requirement is satisfied; because it is predicated on priorities, it may depend upon one factor or many. Among those most often considered in microelectronic packaging are thermal and electrical characteristics, size, cost, and reliability. The details of array technology at the die, module, and board levels have been discussed in the preceding chapters. The purpose of this chapter is to provide comparative data and some guidance on how to select a technology, in consideration of the various options and strategies available, based upon the application.


intersociety conference on thermal and thermomechanical phenomena in electronic systems | 2017

A systematic experimental investigation of thermal degradation mechanisms in lidded flip-chip packages: Effects of thermal aging and cyclic loading

Tuhin Sinha; Jeffrey A. Zitz

This research effort is geared towards establishing a robust virtual-qualification methodology for the thermal performance of flip-chip packages. In the experimental analysis presented here, test vehicles were designed and tested for degradation of the module-level thermal interface material under high-temperature storage (at 100°C, 125°C, and 150°C) and deep thermal cycling (−40°C/+125°C) conditions. The experiments conducted in this study encompass a wide range of thermo-mechanical conditions that not only explore known JEDEC variables but also provide unique insights into the effects of indirect thermal-degradation drivers such as package assembly loads and chip-junction temperature variations during powered thermal readouts.
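Storage temperatures like those above are commonly compared via an Arrhenius acceleration factor. The sketch below is a generic reliability calculation, not the paper's methodology, and the activation energy and use temperature are assumed placeholders.

```python
import math

# Generic Arrhenius acceleration factor between a use temperature and
# the storage temperatures cited above. The 0.7 eV activation energy
# and 55 C use temperature are hypothetical placeholders, not values
# from the paper.

K_B = 8.617e-5   # eV/K, Boltzmann constant
EA = 0.7         # eV, assumed activation energy
T_USE_C = 55.0   # C, assumed use temperature

def accel_factor(t_stress_c: float) -> float:
    """Acceleration factor of stress temperature vs. use temperature."""
    t_s = t_stress_c + 273.15
    t_u = T_USE_C + 273.15
    return math.exp((EA / K_B) * (1.0 / t_u - 1.0 / t_s))

for t in (100.0, 125.0, 150.0):
    print(f"{t:.0f} C storage: AF = {accel_factor(t):.0f}x vs {T_USE_C:.0f} C use")
```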
