Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Michael Mantor is active.

Publication


Featured research published by Michael Mantor.


International Parallel and Distributed Processing Symposium | 2014

A Case for a Flexible Scalar Unit in SIMT Architecture

Yi Yang; Ping Xiang; Michael Mantor; Norman Rubin; Lisa R. Hsu; Qunfeng Dong; Huiyang Zhou

The wide availability and the Single-Instruction Multiple-Thread (SIMT) programming model have made graphics processing units (GPUs) a promising choice for high-performance computing. However, because of the SIMT-style processing, an instruction is executed in every thread even if its operands are identical across all the threads. To overcome this inefficiency, AMD's latest Graphics Core Next (GCN) architecture integrates a scalar unit into a SIMT unit. In GCN, the SIMT unit and the scalar unit share a single SIMT-style instruction stream; depending on its type, an instruction is issued to either the scalar or the SIMT unit. In this paper, we propose to extend the scalar unit so that it can either share the instruction stream with the SIMT unit or execute a separate instruction stream. The program executed by the scalar unit is referred to as a scalar program, and its purpose is to assist SIMT-unit execution. Scalar programs are either generated from SIMT programs automatically by the compiler or developed manually by expert developers. We make a case for our proposed flexible scalar unit through three collaborative execution paradigms: data prefetching, control-divergence elimination, and scalar-workload extraction. Our experimental results show that significant performance gains can be achieved with our proposed approaches compared to state-of-the-art SIMT-style processing.
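The scalar-workload extraction idea above can be illustrated with a small sketch. This is not GCN code or the paper's implementation; it is a hypothetical Python model of the core decision: when an instruction's operands are identical (uniform) across all SIMT lanes, execute it once on a scalar unit and broadcast the result, instead of redundantly in every lane. The `execute` helper and `WAVEFRONT_SIZE` constant are illustrative names.

```python
# Hypothetical model of scalar vs. SIMT issue based on operand uniformity.
WAVEFRONT_SIZE = 64  # lanes per wavefront, as in AMD GCN

def execute(op, lane_operands):
    """Issue `op` to the scalar unit if all lanes hold the same operand,
    otherwise to the SIMT unit. Returns (unit_used, per_lane_results)."""
    uniform = all(v == lane_operands[0] for v in lane_operands)
    if uniform:
        # Scalar unit: one execution, result broadcast to every lane.
        result = op(lane_operands[0])
        return "scalar", [result] * len(lane_operands)
    # SIMT unit: one execution per lane.
    return "simt", [op(v) for v in lane_operands]

# A base-address computation is typically uniform across a wavefront...
unit, _ = execute(lambda v: v + 0x1000, [0x2000] * WAVEFRONT_SIZE)
assert unit == "scalar"
# ...while per-thread indexing is not.
unit, _ = execute(lambda v: v * 4, list(range(WAVEFRONT_SIZE)))
assert unit == "simt"
```

In real hardware the compiler proves uniformity statically (e.g., values derived only from block-level IDs) rather than checking at runtime; the runtime check here is only to make the idea concrete.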


ACM Transactions on Architecture and Code Optimization | 2018

GPU Performance vs. Thread-Level Parallelism: Scalability Analysis and a Novel Way to Improve TLP

Zhen Lin; Michael Mantor; Huiyang Zhou

Graphics Processing Units (GPUs) leverage massive thread-level parallelism (TLP) to achieve high computation throughput and hide long memory latency. However, recent studies have shown that GPU performance does not scale with GPU occupancy, or the degree of TLP that a GPU supports, especially for memory-intensive workloads. The current understanding attributes this to L1 D-cache contention or off-chip memory bandwidth. In this article, we perform a novel scalability analysis from the perspective of throughput utilization of various GPU components, including off-chip DRAM, multiple levels of caches, and the interconnect between the L1 D-caches and the L2 partitions. We show that the interconnect bandwidth is a critical bound on GPU performance scalability. For applications that do not saturate the throughput of any particular resource, performance scales well with increased TLP. To improve TLP for such applications efficiently, we propose a fast context-switching approach. When a warp/thread block (TB) is stalled by a long-latency operation, its context is spilled to spare on-chip resources so that a new warp/TB can be launched. The switched-out warp/TB is switched back in when another warp/TB completes or is switched out. With this fine-grain, fast context switching, higher TLP can be supported without increasing the sizes of critical resources such as the register file. Our experiments show that performance can be improved by up to 47%, with a geometric mean of 22%, for a set of applications with unsaturated throughput utilization. Compared to the state-of-the-art TLP-improvement scheme, our proposed scheme achieves 12% higher performance on average, and 16% for unsaturated benchmarks.
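The spill-and-relaunch mechanism described above can be sketched as a small scheduling model. This is not the paper's implementation; it is a hypothetical Python illustration of the policy: a stalled warp's context is parked in spare on-chip storage so a waiting warp can occupy its hardware slot, and the spilled warp is switched back in when a slot frees up. The class and method names are invented for the example.

```python
from collections import deque

class WarpScheduler:
    """Toy model of fine-grain context switching for stalled warps."""
    def __init__(self, hardware_slots):
        self.hardware_slots = hardware_slots  # warps resident in the register file
        self.active = deque()                 # warps currently occupying slots
        self.spilled = deque()                # contexts parked in spare on-chip storage
        self.pending = deque()                # warps not yet launched

    def launch(self, warp_id):
        if len(self.active) < self.hardware_slots:
            self.active.append(warp_id)
        else:
            self.pending.append(warp_id)

    def stall(self, warp_id):
        """Warp hits a long-latency op: spill it and launch a waiting warp."""
        self.active.remove(warp_id)
        self.spilled.append(warp_id)
        if self.pending:
            self.active.append(self.pending.popleft())

    def complete(self, warp_id):
        """Warp finishes: its freed slot can host a spilled warp switched back in."""
        self.active.remove(warp_id)
        if self.spilled:
            self.active.append(self.spilled.popleft())

sched = WarpScheduler(hardware_slots=2)
for w in ["w0", "w1", "w2"]:
    sched.launch(w)            # w2 waits: only 2 hardware slots
sched.stall("w0")              # w0 spilled; w2 launched in its place
assert list(sched.active) == ["w1", "w2"]
sched.complete("w1")           # slot frees; w0 switched back in
assert list(sched.active) == ["w2", "w0"]
```

The point of the mechanism is visible in the example: TLP rises above the two hardware slots (three warps make progress) without growing the register file, because a stalled warp's registers temporarily live in spare on-chip storage.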


Archive | 2001

Method and apparatus for executing a predefined instruction set

Ralph Clayton Taylor; Michael A. Mang; Michael Mantor


Archive | 1998

Method and apparatus for texture level of detail dithering

Thomas A. Piazza; Michael Mantor; Ralph Clayton Taylor; Steven Manno


Archive | 1998

Method and apparatus for effective level of detail selection

Thomas A. Piazza; Michael Mantor; Ralph Clayton Taylor; Val G. Cook


Archive | 2009

Local and global data share

Michael Mantor; Brian Emberling


Archive | 1998

Method and apparatus to efficiently interpolate polygon attributes in two dimensions at a prescribed clock rate

Thomas A. Piazza; R. Scott Hartog; Michael Mantor; Jeffrey D. Potter; Ralph Clayton Taylor; Michael A. Mang


Archive | 2010

Processing Unit that Enables Asynchronous Task Dispatch

Michael Mantor; Rex McCrary


Archive | 2010

Processing Unit with a Plurality of Shader Engines

Michael Mantor; Ralph Clay Taylor; Jeffrey T. Brady


IEEE Micro | 2014

Kabini: An AMD Accelerated Processing Unit System on a Chip

Dan Bouvier; Brad Cohen; Walter Fry; Sreekanth Godey; Michael Mantor

Collaboration


Dive into Michael Mantor's collaborations.

Top Co-Authors
Rex McCrary

Advanced Micro Devices
