John J. Wuu
Advanced Micro Devices
Publications
Featured research published by John J. Wuu.
international electron devices meeting | 2010
Rajesh N. Gupta; Farid Nemati; Scott Robins; Kevin J. Yang; Vasudevan Gopalakrishnan; Joseph John Sundarraj; Rajesh Chopra; Rich Roy; Hyun-jin Cho; W. Maszara; Nihar R. Mohapatra; John J. Wuu; Don Weiss; Sam Nakib
Thyristor Random Access Memory (T-RAM) is an ideal candidate for embedded memory due to its substantially better density-versus-performance tradeoff and logic-process compatibility [1–3]. T-RAM memory embedded in a 32nm logic process, with read and write times of 1ns and a bit fail rate below 0.5ppm, is reported for the first time. A median memory-cell read current of 250µA/cell at 1.2V, with an Ion/Ioff current ratio of more than 10^8, is demonstrated at 105°C. Robust margins against dynamic disturb caused by access (read/write) of neighboring bits in the memory array have also been verified.
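As a back-of-the-envelope check on the figures in this abstract, the reported Ion/Ioff ratio bounds the per-cell off-state leakage. A minimal sketch (variable names are illustrative, not from the paper):

```python
# With a median read (on) current of 250 µA/cell and an Ion/Ioff ratio
# above 10^8, the off-state leakage per cell is bounded by Ion / 10^8.
I_on = 250e-6                  # median read current per cell, in amperes (250 µA)
ratio_min = 1e8                # reported minimum Ion/Ioff ratio
I_off_max = I_on / ratio_min   # upper bound on off-state leakage per cell

print(f"Max off-state leakage per cell: {I_off_max:.1e} A")  # 2.5e-12 A, i.e. 2.5 pA
```

This low leakage bound is one reason the paper can claim a density/performance tradeoff favorable for embedded use.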
international solid-state circuits conference | 2009
Anant Singh; Michael K. Ciraula; Don Weiss; John J. Wuu; Philippe Bauser; Paul de Champs; Hamid Daghighian; David Fisch; Philippe Graber; Michel Bron
To meet advancing market demands, microprocessor embedded memory applications require denser and faster memory arrays with each process generation. Recent work presented an 18.5ns 128Mb DRAM with a floating-body cell for conventional DRAM products [1] and a 4Mb memory macro using a memory cell built with two floating-body transistors [2]. This paper presents a floating-body Z-RAM® memory cell [3] used to fabricate a high-density, low-latency, high-bandwidth 4Mb memory macro building block targeted at the requirements of microprocessor caches. It uses a single transistor (1T), unlike traditional 1T1C DRAM [4] or six-transistor (6T) SRAM memory cells [5].
international solid-state circuits conference | 2011
Don Weiss; Michael Dreesen; Michael K. Ciraula; Carson Henrion; Chris Helt; Ryan Freese; Tommy Miles; Anita Karegar; Russell Schreiber; Bryan Schneller; John J. Wuu
High-performance multi-core processors require efficient multi-level cache hierarchies to meet high-bandwidth data requirements. Because the level-3 (L3) cache is typically the largest cache on the die, the drive to lower cost places pressure on density, yields, and test time. Performance-per-watt goals and total power constraints also compel a variety of circuit techniques to reduce power. The next-generation server processor codenamed “Orochi”, implemented on a 32nm high-k metal-gate SOI process with 11 metal layers, consists of four 2-core modules using AMD's next-generation architecture, codenamed “Bulldozer”, with 2MB of dedicated L2 cache per module and an 8MB shared L3 cache [1].
Archive | 2012
Donald R. Weiss; John J. Wuu
Archive | 2009
John J. Wuu; Samuel D. Naffziger; Donald R. Weiss
Archive | 2003
Blaine Stackhouse; John J. Wuu; Donald R. Weiss
Archive | 2012
John J. Wuu; Donald R. Weiss
Archive | 2001
Samuel D. Naffziger; Donald R. Weiss; John J. Wuu
Archive | 2011
John J. Wuu; Don Weiss; Kathryn E. Wilcox; Alex W. Schaefer; Kerrie V. Underhill
Archive | 2011
John J. Wuu; Donald R. Weiss