Nicholas P. Carter
Intel
Publication
Featured research published by Nicholas P. Carter.
high-performance computer architecture | 2013
Nicholas P. Carter; Aditya Agrawal; Shekhar Borkar; Romain Cledat; Howard S. David; Dave Dunning; Joshua B. Fryman; Ivan Ganev; Roger A. Golliver; Rob C. Knauerhase; Richard Lethin; Benoît Meister; Asit K. Mishra; Wilfred R. Pinfold; Justin Teller; Josep Torrellas; Nicolas Vasilache; Ganesh Venkatesh; Jianping Xu
DARPA's Ubiquitous High-Performance Computing (UHPC) program asked researchers to develop computing systems capable of achieving energy efficiencies of 50 GOPS/Watt, assuming 2018-era fabrication technologies. This paper describes Runnemede, the research architecture developed by the Intel-led UHPC team. Runnemede is being developed through a co-design process that considers the hardware, the runtime/OS, and applications simultaneously. Near-threshold voltage operation, fine-grained power and clock management, and separate execution units for runtime and application code are used to reduce energy consumption. Memory energy is minimized through application-managed on-chip memory and direct physical addressing. A hierarchical on-chip network reduces communication energy, and a codelet-based execution model supports extreme parallelism and fine-grained tasks. We present an initial evaluation of Runnemede that shows the design process for our on-chip network, demonstrates 2-4x improvements in memory energy from explicit control of on-chip memory, and illustrates the impact of hardware-software co-design on the energy consumption of a synthetic aperture radar algorithm on our architecture.
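The codelet-based execution model mentioned in the abstract is easiest to picture in miniature. The sketch below (plain C, illustrative names only, not the Runnemede runtime's actual interface) shows the core idea: small, non-blocking tasks that become runnable once all of their data dependences have been signaled.

    /* Minimal sketch of a codelet-style task graph; not the Runnemede runtime API. */
    #include <stdio.h>
    #include <stdatomic.h>

    typedef struct codelet {
        void (*fn)(void *arg);      /* non-blocking body: runs to completion once started */
        void *arg;                  /* inputs gathered before the codelet is scheduled     */
        atomic_int deps_remaining;  /* codelet becomes runnable when this reaches zero     */
        struct codelet *successor;  /* codelet to signal when this one finishes            */
    } codelet_t;

    /* Called by a producer when one of a codelet's inputs becomes ready. */
    static void codelet_signal(codelet_t *c)
    {
        if (atomic_fetch_sub(&c->deps_remaining, 1) == 1) {
            c->fn(c->arg);                    /* all dependences satisfied: run the body */
            if (c->successor)
                codelet_signal(c->successor); /* propagate completion downstream         */
        }
    }

    static void print_stage(void *arg) { printf("stage: %s\n", (const char *)arg); }

    int main(void)
    {
        codelet_t consume   = { print_stage, "consume",   2, NULL };     /* waits on two producers */
        codelet_t produce_a = { print_stage, "produce A", 1, &consume };
        codelet_t produce_b = { print_stage, "produce B", 1, &consume };

        codelet_signal(&produce_a);  /* first producer's input is ready          */
        codelet_signal(&produce_b);  /* second signal releases the consumer task */
        return 0;
    }

Because each codelet runs to completion without blocking, a scheduler only needs a ready queue of zero-dependence codelets, which is what makes very fine-grained tasks affordable.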
design, automation, and test in europe | 2010
Nicholas P. Carter; Helia Naeimi; Donald S. Gardner
Current electronic systems implement reliability using only a few layers of the system stack, which simplifies the design of other layers but is becoming increasingly expensive over time. In contrast, cross-layer resilient systems, which distribute the responsibility for tolerating errors, device variation, and aging across the system stack, have the potential to provide the resilience required to implement reliable, high-performance, low-power systems in future fabrication processes at significantly lower cost. These systems can implement less-frequent resilience tasks in software to save power and chip area, can tune their reliability guarantees to the needs of applications, and can use the information available at each level in the system stack to optimize performance and power consumption. In this paper, we outline an approach to cross-layer system design that describes resilience as a set of tasks that systems must perform in order to detect and tolerate errors and variation. We then present strawman examples of how this task-based design process could be used to implement general-purpose computing and SoC systems, drawing on previous work and identifying key areas for future research.
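To make the task-based framing concrete, the following sketch (hypothetical names and assignments, not the paper's actual taxonomy) lists resilience tasks alongside the layer chosen to implement each one and the reason for that choice; the point of the approach is that each task can be moved to whichever layer handles it most cheaply.

    /* Sketch of a task-based resilience description; names are illustrative only. */
    #include <stdio.h>

    typedef enum { LAYER_CIRCUIT, LAYER_ARCHITECTURE, LAYER_FIRMWARE, LAYER_SOFTWARE } layer_t;

    typedef struct {
        const char *task;       /* what the system must do to stay resilient    */
        layer_t     layer;      /* layer chosen to implement it in this design  */
        const char *rationale;  /* why that layer (cost, frequency, information) */
    } resilience_task_t;

    static const char *layer_name(layer_t l)
    {
        static const char *names[] = { "circuit", "architecture", "firmware", "software" };
        return names[l];
    }

    int main(void)
    {
        /* One possible assignment for a general-purpose system; a SoC design
         * might push the same tasks to different layers. */
        const resilience_task_t plan[] = {
            { "detect transient errors", LAYER_ARCHITECTURE, "needs cycle-level visibility" },
            { "diagnose failing units",  LAYER_FIRMWARE,     "infrequent, off the critical path" },
            { "adapt to device aging",   LAYER_SOFTWARE,     "rare; software saves area and power" },
        };

        for (unsigned i = 0; i < sizeof plan / sizeof plan[0]; i++)
            printf("%-26s -> %-12s (%s)\n",
                   plan[i].task, layer_name(plan[i].layer), plan[i].rationale);
        return 0;
    }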
design, automation, and test in europe | 2010
André DeHon; Heather Quinn; Nicholas P. Carter
We are rapidly approaching an inflection point where the conventional target of producing perfect, identical transistors that operate without upset can no longer be maintained while continuing to reduce the energy per operation. With power requirements already limiting chip performance, continuing to demand perfect, upset-free transistors would mean the end of scaling benefits. The big challenges in device variability and reliability are driven by uncommon tails in distributions, infrequent upsets, one-size-fits-all technology requirements, and a lack of information about the context of each operation. Solutions co-designed across traditional layer boundaries in our system stack can change the game, allowing architecture and software (a) to compensate for uncommon variation, environments, and events, (b) to pass down invariants and requirements for the computation, and (c) to monitor the health of collections of devices. Cross-layer codesign provides a path to continue extracting benefits from further scaled technologies despite the fact that they may be less predictable and more variable. While some limited multi-layer mitigation strategies do exist, moving forward requires redefining traditional layer abstractions and developing a framework that facilitates cross-layer collaboration.
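One way to picture the cross-layer interface the abstract calls for is sketched below; the functions and types are hypothetical, intended only to show software passing reliability requirements down per computation and reading device-health information back up.

    /* Hypothetical cross-layer interface sketch; not an existing hardware API. */
    #include <stdio.h>
    #include <stdbool.h>

    typedef enum { RELIABILITY_BEST_EFFORT, RELIABILITY_DETECT, RELIABILITY_CORRECT } reliability_req_t;

    /* Software passes down what each computation actually requires ... */
    static void set_region_requirement(const char *region, reliability_req_t req)
    {
        const char *names[] = { "best-effort", "detect-only", "detect-and-correct" };
        printf("region %-14s requires %s execution\n", region, names[req]);
    }

    /* ... and reads back health information collected by lower layers. */
    static bool core_health_ok(int core_id)
    {
        /* A real system would query error counters, sensors, or wear-out
         * monitors; this stub simply reports every core as healthy. */
        (void)core_id;
        return true;
    }

    int main(void)
    {
        set_region_requirement("video-decode", RELIABILITY_BEST_EFFORT); /* errors only degrade quality   */
        set_region_requirement("page-tables",  RELIABILITY_CORRECT);     /* errors here must be corrected */

        for (int core = 0; core < 4; core++)
            if (!core_health_ok(core))
                printf("migrating work away from core %d\n", core);
        return 0;
    }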
Application of Accelerators in Research and Industry: Twenty-First International Conference | 2011
Heather Quinn; Andrea Manuzzato; Paul S. Graham; André DeHon; Nicholas P. Carter
As computer automation continues to increase in our society, the need for greater radiation reliability grows. Critical infrastructure is already failing too frequently. In this paper, we introduce the Cross-Layer Reliability concept for designing more reliable computer systems.
international conference on parallel architectures and compilation techniques | 2011
Byn Choi; Rakesh Komuravelli; Hyojin Sung; Robert Smolinski; Nima Honarmand; Sarita V. Adve; Vikram S. Adve; Nicholas P. Carter; Ching Tsun Chou
Archive | 2012
Nicholas P. Carter; Joshua B. Fryman; Robert Knauerhase; Aditya Agrawal; Josep Torrellas
Archive | 2011
Nicholas P. Carter; Donald S. Gardner; Eric C. Hannah; Helia Naeimi; Shekhar Borkar; Matthew B. Haycock
Archive | 2011
Heather Quinn; André DeHon; Nicholas P. Carter
Archive | 2010
Heather Quinn; André DeHon; Nicholas P. Carter
Archive | 2012
Joshua B. Fryman; Nicholas P. Carter; Robert Knauerhase; Sebastian Schoenberg; Aditya Agrawal