Publication


Featured research published by Mark Gebhart.


International Symposium on Computer Architecture (ISCA) | 2011

Energy-efficient mechanisms for managing thread context in throughput processors

Mark Gebhart; Daniel R. Johnson; David Tarjan; Stephen W. Keckler; William J. Dally; Erik Lindholm; Kevin Skadron

Modern graphics processing units (GPUs) use a large number of hardware threads to hide both function unit and memory access latency. Extreme multithreading requires a complicated thread scheduler as well as a large register file, which is expensive to access both in terms of energy and latency. We present two complementary techniques for reducing energy on massively threaded processors such as GPUs. First, we examine register file caching to replace accesses to the large main register file with accesses to a smaller structure containing the immediate register working set of active threads. Second, we investigate a two-level thread scheduler that maintains a small set of active threads to hide ALU and local memory access latency and a larger set of pending threads to hide main memory latency. Combined with register file caching, a two-level thread scheduler provides a further reduction in energy by limiting the allocation of temporary register cache resources to only the currently active subset of threads. We show that on average, across a variety of real-world graphics and compute workloads, a 6-entry per-thread register file cache reduces the number of reads and writes to the main register file by 50% and 59%, respectively. We further show that the active thread count can be reduced by a factor of 4 with minimal impact on performance, resulting in a 36% reduction of register file energy.
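
A minimal sketch may help make the register file cache concrete. The model below is our own illustration, not the paper's simulator: it filters a toy per-thread access trace through a small write-allocating, LRU-evicted cache and counts the main register file (MRF) traffic that remains. The trace format, eviction policy, and write-back behavior are all assumptions.

```python
# Our illustration of per-thread register file caching; the 6-entry
# capacity matches the abstract, everything else is assumed.
from collections import OrderedDict

class RegisterFileCache:
    """LRU cache of one thread's register working set (hypothetical policy)."""
    def __init__(self, entries=6):
        self.entries = entries
        self.cache = OrderedDict()      # register id -> dirty flag
        self.mrf_reads = 0              # accesses that reach the main RF
        self.mrf_writes = 0

    def read(self, reg):
        if reg in self.cache:
            self.cache.move_to_end(reg) # hit: served from the small cache
        else:
            self.mrf_reads += 1         # miss: fetch from the main RF
            self._insert(reg, dirty=False)

    def write(self, reg):
        self._insert(reg, dirty=True)   # writes allocate in the cache

    def _insert(self, reg, dirty):
        if reg in self.cache:
            self.cache[reg] = self.cache[reg] or dirty
            self.cache.move_to_end(reg)
            return
        if len(self.cache) == self.entries:
            victim, was_dirty = self.cache.popitem(last=False)
            if was_dirty:
                self.mrf_writes += 1    # write back an evicted dirty value
        self.cache[reg] = dirty

# Toy trace of ('r'|'w', register id) events for one thread.
trace = [('w', 0), ('r', 0), ('r', 1), ('w', 0), ('r', 0), ('r', 2)]
rfc = RegisterFileCache()
for op, reg in trace:
    rfc.read(reg) if op == 'r' else rfc.write(reg)
print(rfc.mrf_reads, rfc.mrf_writes)    # main RF traffic after filtering
```

In this toy trace only the first reads of r1 and r2 reach the main register file; every other access hits in the cache, and the dirty r0 entry would be written back only on eviction. That filtering effect is what the 50%/59% read/write reductions quoted above measure at scale.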


International Symposium on Microarchitecture (MICRO) | 2012

Unifying Primary Cache, Scratch, and Register File Memories in a Throughput Processor

Mark Gebhart; Stephen W. Keckler; Brucek Khailany; Ronny Krashinsky; William J. Dally

Modern throughput processors such as GPUs employ thousands of threads to drive high-bandwidth, long-latency memory systems. These threads require substantial on-chip storage for registers, cache, and scratchpad memory. Existing designs hard-partition this local storage, fixing the capacities of these structures at design time. We evaluate modern GPU workloads and find that they have widely varying capacity needs across these different functions. Therefore, we propose a unified local memory which can dynamically change the partitioning among registers, cache, and scratchpad on a per-application basis. The tuning that this flexibility enables improves both performance and energy consumption, and broadens the scope of applications that can be efficiently executed on GPUs. Compared to a hard-partitioned design, we show that unified local memory provides a performance benefit as high as 71% along with an energy reduction up to 33%.
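
As a rough illustration of what per-application repartitioning buys, the sketch below sizes one unified on-chip SRAM at kernel launch. It is our own toy, not the paper's design: the total capacity, the launch-time decision point, and the leftover-capacity-to-cache policy are all assumptions.

```python
# Our sketch of a unified local memory split; numbers are illustrative.
TOTAL_KB = 256  # hypothetical unified on-chip capacity per SM

def partition(reg_kb, scratch_kb, total=TOTAL_KB):
    """Give registers and scratchpad what the kernel declares it needs
    (as a compiler/driver could at launch time) and let the primary
    cache grow into whatever capacity is left over."""
    reg = min(reg_kb, total)
    scratch = min(scratch_kb, total - reg)
    return {"registers": reg, "scratchpad": scratch,
            "cache": total - reg - scratch}

# A cache-heavy kernel and a register-heavy kernel get different splits;
# a hard-partitioned design would leave the unused structures idle.
print(partition(reg_kb=64, scratch_kb=16))    # cache gets 176 KB
print(partition(reg_kb=192, scratch_kb=48))   # cache gets 16 KB
```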


Architectural Support for Programming Languages and Operating Systems (ASPLOS) | 2009

An evaluation of the TRIPS computer system

Mark Gebhart; Bertrand A. Maher; Katherine E. Coons; Jeffrey R. Diamond; Paul V. Gratz; Mario Marino; Nitya Ranganathan; Behnam Robatmili; Aaron Smith; James H. Burrill; Stephen W. Keckler; Doug Burger; Kathryn S. McKinley

The TRIPS system employs a new instruction set architecture (ISA) called Explicit Data Graph Execution (EDGE) that renegotiates the boundary between hardware and software to expose and exploit concurrency. EDGE ISAs use a block-atomic execution model in which blocks are composed of dataflow instructions. The goal of the TRIPS design is to mine concurrency for high performance while tolerating emerging technology scaling challenges, such as increasing wire delays and power consumption. This paper evaluates how well TRIPS meets this goal through a detailed ISA and performance analysis. We compare performance, using cycle counts, to commercial processors. On SPEC CPU2000, the Intel Core 2 outperforms compiled TRIPS code in most cases, although TRIPS matches a Pentium 4. On simple benchmarks, compiled TRIPS code outperforms the Core 2 by 10% and hand-optimized TRIPS code outperforms it by a factor of 3. Compared to conventional ISAs, the block-atomic model provides a larger instruction window, increases concurrency at a cost of more instructions executed, and replaces register and memory accesses with more efficient direct instruction-to-instruction communication. Our analysis suggests ISA, microarchitecture, and compiler enhancements for addressing weaknesses in TRIPS and indicates that EDGE architectures have the potential to exploit greater concurrency in future technologies.
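
A toy interpreter can make the block-atomic, producer-to-consumer style concrete. This is our illustration of the general EDGE idea, not the TRIPS ISA itself: the encoding, opcodes, and firing rule are invented.

```python
# Our sketch of dataflow execution within one block: instructions name
# their consumers directly, so no register file mediates communication.
def run_block(block, injections):
    """block: instr id -> (op, consumer ids). An instruction fires once
    all of its operands arrive; block outputs commit together at the end."""
    operands = {i: [] for i in block}
    outputs = {}
    ready = list(injections)            # (target id, value) block inputs
    while ready:
        target, value = ready.pop()
        operands[target].append(value)
        op, consumers = block[target]
        if len(operands[target]) < (2 if op in ("add", "mul") else 1):
            continue                    # still waiting for an operand
        a = operands[target]
        result = (a[0] + a[1] if op == "add" else
                  a[0] * a[1] if op == "mul" else a[0])
        if op == "out":
            outputs[target] = result    # block output, committed atomically
        for c in consumers:
            ready.append((c, result))   # direct producer -> consumer send
    return outputs

# (x + y) * x as one dataflow block, with x = 3, y = 4.
block = {0: ("add", [1]), 1: ("mul", [2]), 2: ("out", [])}
print(run_block(block, [(0, 3), (0, 4), (1, 3)]))  # {2: 21}
```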


International Symposium on Microarchitecture (MICRO) | 2011

A compile-time managed multi-level register file hierarchy

Mark Gebhart; Stephen W. Keckler; William J. Dally

As processors increasingly become power limited, performance improvements will be achieved by rearchitecting systems with energy efficiency as the primary design constraint. While some of these optimizations will be hardware based, combined hardware and software techniques will likely be the most productive. This work redesigns the register file system of a modern throughput processor with a combined hardware and software solution that reduces register file energy without harming system performance. Throughput processors utilize a large number of threads to tolerate latency, requiring a large, energy-intensive register file to store thread context. Our results show that a compiler-controlled register file hierarchy can reduce register file energy by up to 54%, compared to a hardware-only caching approach that reduces register file energy by 34%. We explore register allocation algorithms that are specifically targeted to improve energy efficiency by sharing temporary register file resources across concurrently running threads, and we conduct a detailed limit study on the further potential to optimize operand delivery for throughput processors. Our efficiency gains represent a direct performance gain for power-limited systems, such as GPUs.
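
To make "compile-time managed" concrete, the toy pass below places each value in a register file level by how long it lives. It is our own sketch, not the paper's allocation algorithm: the two-level split, the capacity, and the lifetime threshold are assumptions.

```python
# Our sketch of a compiler placing values into a register file hierarchy.
def assign_levels(def_use, l0_capacity=6, lifetime_threshold=3):
    """def_use: value -> instruction indices where it is defined and used.
    Returns value -> 'L0' (small, cheap level) or 'MRF' (main RF)."""
    placement, l0_used = {}, 0
    # Shortest-lived values first; a real pass would model liveness
    # overlap rather than keep a simple running count.
    for value, sites in sorted(def_use.items(),
                               key=lambda kv: kv[1][-1] - kv[1][0]):
        lifetime = sites[-1] - sites[0]
        if lifetime <= lifetime_threshold and l0_used < l0_capacity:
            placement[value] = "L0"     # consumed soon: keep it close
            l0_used += 1
        else:
            placement[value] = "MRF"    # long-lived: main register file
    return placement

chains = {"t0": [0, 1], "t1": [1, 2], "t2": [3, 4], "x": [0, 9]}
print(assign_levels(chains))
# {'t0': 'L0', 't1': 'L0', 't2': 'L0', 'x': 'MRF'}
```

Because the compiler knows exactly when each value dies, it can also share the small level across threads; per the abstract, this is what lets compiler control reach 54% register file energy savings where a hardware-only cache reaches 34%.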


ACM Transactions on Computer Systems | 2012

A Hierarchical Thread Scheduler and Register File for Energy-Efficient Throughput Processors

Mark Gebhart; Daniel R. Johnson; David Tarjan; Stephen W. Keckler; William J. Dally; Erik Lindholm; Kevin Skadron

Modern graphics processing units (GPUs) employ a large number of hardware threads to hide both function unit and memory access latency. Extreme multithreading requires a complex thread scheduler as well as a large register file, which is expensive to access both in terms of energy and latency. We present two complementary techniques for reducing energy on massively threaded processors such as GPUs. First, we investigate a two-level thread scheduler that maintains a small set of active threads to hide ALU and local memory access latency and a larger set of pending threads to hide main memory latency. Reducing the number of threads that the scheduler must consider each cycle improves the scheduler’s energy efficiency. Second, we propose replacing the monolithic register file found on modern designs with a hierarchical register file. We explore various trade-offs for the hierarchy, including the number of levels in the hierarchy and the number of entries at each level. We consider both a hardware-managed caching scheme and a software-managed scheme, where the compiler is responsible for orchestrating all data movement within the register file hierarchy. Combined with a hierarchical register file, our two-level thread scheduler provides a further reduction in energy by only allocating entries in the upper levels of the register file hierarchy for active threads. Averaging across a variety of real-world graphics and compute workloads, the active thread count can be reduced by a factor of 4 with minimal impact on performance, and our most efficient three-level software-managed register file hierarchy reduces register file energy by 54%.
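
The two-level scheduler itself fits in a few lines. The sketch below is our reconstruction under invented policies (round-robin issue, demotion on a long-latency load, FIFO promotion), not the hardware described in the paper.

```python
# Our sketch of a two-level warp scheduler: per-cycle selection only
# examines the small active set, not all resident warps.
from collections import deque

class TwoLevelScheduler:
    def __init__(self, warps, active_size=8):
        self.active = deque(warps[:active_size])   # scanned every cycle
        self.pending = deque(warps[active_size:])  # hidden from issue logic

    def issue(self):
        """Round-robin over the active set only."""
        warp = self.active.popleft()
        self.active.append(warp)
        return warp

    def long_latency(self, warp):
        """Warp issued a main-memory load: demote it, promote a pending warp."""
        self.active.remove(warp)
        if self.pending:
            self.active.append(self.pending.popleft())
        self.pending.append(warp)

    def complete(self, warp):
        """The load returned: move the warp to the head of the promote queue."""
        self.pending.remove(warp)
        self.pending.appendleft(warp)

sched = TwoLevelScheduler(list(range(32)), active_size=8)
w = sched.issue()          # warp 0 issues a long-latency load...
sched.long_latency(w)      # ...and waits in pending while warp 8 takes its slot
print(list(sched.active))  # [1, 2, 3, 4, 5, 6, 7, 8]
```

Shrinking the set the selection logic scans each cycle is the scheduler-energy win; restricting upper register file levels to active warps is the register-file-energy win described above.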


ACM Transactions on Architecture and Code Optimization | 2018

Software-Directed Techniques for Improved GPU Register File Utilization

Dani Voitsechov; Arslan Zulfiqar; Mark Stephenson; Mark Gebhart; Stephen W. Keckler

Throughput architectures such as GPUs require substantial hardware resources to hold the state of a massive number of simultaneously executing threads. While GPU register files are already enormous, reaching capacities of 256KB per streaming multiprocessor (SM), we find that nearly half of the real-world applications we examined are register-bound and would benefit from a larger register file to enable more concurrent threads. This article seeks to increase the thread occupancy and improve the performance of these register-bound applications by making more efficient use of the existing register file capacity. Our first technique eagerly deallocates register resources during execution. We show that releasing register resources based on value liveness, as proposed in prior state-of-the-art work, leads to unreliable performance and undue design complexity. To address these deficiencies, this article presents a novel compiler-driven approach that identifies and exploits the last use of a register name (instead of the value contained within) to eagerly release register resources. Furthermore, while previous works have leveraged the “scalar” and “narrow” operand properties of a program for various optimizations, their impact on thread occupancy has been relatively unexplored. We evaluate the effectiveness of these techniques in improving thread occupancy and demonstrate that while any one approach alone may fail to free many registers, together they synergistically free enough registers to launch additional parallel work. An in-depth evaluation on a large suite of applications shows that our early register release technique alone outperforms previous work on dynamic register allocation, and that together these approaches provide, on average, a 12% performance speedup (23% higher thread occupancy) on register-bound applications that do not already saturate other GPU resources.
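
A toy version of the last-use analysis is easy to state. The sketch below is our reconstruction for straight-line code, not the paper's compiler: the instruction format and the release mechanism are invented.

```python
# Our sketch: find the last read of each register *name* so the
# hardware could release its physical resource before thread exit.
def mark_last_uses(instrs):
    """instrs: list of (dest, [source registers]).
    Returns, per instruction, the source names read for the last time."""
    last_read = {}
    for i, (_, srcs) in enumerate(instrs):
        for r in srcs:
            last_read[r] = i            # later reads overwrite earlier ones
    releases = [set() for _ in instrs]
    for r, i in last_read.items():
        releases[i].add(r)              # final read = eager release point
    return releases

prog = [("r2", ["r0", "r1"]),  # r2 = f(r0, r1)
        ("r3", ["r2", "r0"]),  # r3 = f(r2, r0): last reads of r0 and r2
        ("r4", ["r3", "r3"])]  # r4 = f(r3, r3): last read of r3
for i, rel in enumerate(mark_last_uses(prog)):
    print(i, sorted(rel))      # 0 ['r1']  1 ['r0', 'r2']  2 ['r3']
```

Tracking names rather than values is what makes this reliable: the release point is fixed at compile time instead of depending on which dynamic value happens to occupy the register.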


Archive | 2011

Two-Level Scheduler for Multi-Threaded Processing

William J. Dally; Stephen W. Keckler; David Tarjan; John Erik Lindholm; Mark Gebhart; Daniel R. Johnson


Archive | 2007

Software Infrastructure and Tools for the TRIPS Prototype

Bill Yoder; Jim Burrill; Robert McDonald; Kevin Bush; Katherine E. Coons; Mark Gebhart; Sibi Govindan; Bertrand A. Maher; Ramadas Nagarajan; Behnam Robatmili; Karthikeyan Sankaralingam; Sadia Sharif; Aaron Smith; Doug Burger; Stephen W. Keckler; Kathryn S. McKinley


Archive | 2013

Partitioned Register File

Brucek Khailany; Mark Gebhart


PESPMA 2010 - Workshop on Parallel Execution of Sequential Programs on Multi-core Architecture | 2010

ReFLEX: Block Atomic Execution on Conventional ISA Cores

Mark Gebhart; Stephen W. Keckler

Collaboration


Dive into Mark Gebhart's collaboration.

Top Co-Authors

Bertrand A. Maher

University of Texas at Austin

Katherine E. Coons

University of Texas at Austin
