Bastiaan Aarts
NVIDIA
Publication
Featured research published by Bastiaan Aarts.
Symposium on Code Generation and Optimization | 2010
John A. Stratton; Vinod Grover; Jaydeep Marathe; Bastiaan Aarts; Michael Murphy; Ziang Hu; Wen-mei W. Hwu
In this paper we describe techniques for compiling fine-grained SPMD-threaded programs, expressed in programming models such as OpenCL or CUDA, to multicore execution platforms. Programs developed for manycore processors typically express finer thread-level parallelism than is appropriate for multicore platforms. We describe options for implementing fine-grained threading in software, and find that reasonable restrictions on the synchronization model enable significant optimizations and performance improvements over a baseline approach. We evaluate these techniques in a production-level compiler and runtime for the CUDA programming model targeting modern CPUs. Applications tested with our tool often matched the performance of the compiled C version of the application in single-thread execution. With the modest coarse-grained multithreading typical of today's CPU architectures, an average 3.4x speedup on 4 processors was observed across the test applications.
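A minimal sketch of the core transformation the abstract describes: the fine-grained logical threads of one block become iterations of "thread loops" in a single CPU thread, and a barrier is realized by loop fission, splitting the thread loop at the synchronization point. The function name, `BLOCK_SIZE`, and the reversal computation are illustrative assumptions, not code from the paper.

```c
#define BLOCK_SIZE 4

/* Serialized form of an SPMD kernel for one block. In the original
   CUDA/OpenCL-style kernel, each logical thread `tid` would run:
       shared[tid] = in[tid];  barrier;  out[tid] = shared[BLOCK_SIZE-1-tid];
   Serialization turns the logical threads into loop iterations; the
   barrier becomes loop fission, so every logical thread finishes the
   first phase before any begins the second. */
void kernel_serialized(const int *in, int *out) {
    int shared[BLOCK_SIZE];                      /* block-shared memory */
    for (int tid = 0; tid < BLOCK_SIZE; tid++)   /* phase before barrier */
        shared[tid] = in[tid];
    /* barrier point: all logical threads have now written `shared` */
    for (int tid = 0; tid < BLOCK_SIZE; tid++)   /* phase after barrier */
        out[tid] = shared[BLOCK_SIZE - 1 - tid]; /* read another thread's slot */
}
```

Because each loop body is now an ordinary sequential loop, the compiler can apply standard scalar and vector optimizations across the logical threads, which is where the single-thread performance parity reported above comes from.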
International Conference on Computational Science | 2012
Gautam Chakrabarti; Vinod Grover; Bastiaan Aarts; Xiangyun Kong; Manjunath Kudlur; Yuan Lin; Jaydeep Marathe; Michael Murphy; Jian-Zhong Wang
Graphics processor units (GPUs) have evolved to handle throughput-oriented workloads where a large number of parallel threads must make progress. Such threads are organized around shared memory, making it possible to synchronize and cooperate on shared data. Current GPUs can run tens of thousands of hardware threads and have been optimized for graphics workloads. Several high-level languages have been developed to easily program GPUs for general-purpose computing problems. The use of high-level languages introduces the need for highly optimizing compilers that target the parallel GPU device. In this paper, we present our experiences in developing compilation techniques for a high-level language called CUDA C. We explain the CUDA architecture and programming model and provide insights into why certain optimizations are important for achieving high performance on a GPU. In addition to classical optimizations, we present optimizations developed specifically for the CUDA architecture. We evaluate these techniques, and present performance results that show significant improvements on hundreds of kernels as well as applications.
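As an illustration of one classical optimization mentioned above, a sketch of strength reduction on an address computation, written here as plain C for readability. The function names, the strided-sum workload, and the parameter names are hypothetical; they stand in for the address arithmetic a CUDA kernel performs when indexing global memory.

```c
/* Before: the index is recomputed with a multiply on every iteration. */
int sum_naive(const int *data, int n, int stride, int base) {
    int sum = 0;
    for (int i = 0; i < n; i++)
        sum += data[base + i * stride];   /* multiply each iteration */
    return sum;
}

/* After strength reduction: the multiply becomes an add carried across
   iterations. An optimizing compiler performs this rewrite automatically
   on induction-variable expressions such as thread-indexed addresses. */
int sum_reduced(const int *data, int n, int stride, int base) {
    int sum = 0;
    int idx = base;
    for (int i = 0; i < n; i++, idx += stride)
        sum += data[idx];                 /* add replaces the multiply */
    return sum;
}
```

Both functions compute the same result; only the cost of the per-iteration index arithmetic differs, which matters when the expression is evaluated once per thread across thousands of threads.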
Archive | 2006
Ian Buck; Bastiaan Aarts
Archive | 2009
Vinod Grover; Bastiaan Aarts; Michael Murphy
Archive | 2009
Vinod Grover; Bastiaan Aarts; Michael Murphy
Archive | 2009
Vinod Grover; Bastiaan Aarts; Michael Murphy; Boris Beylin; Jayant B. Kolhe; Douglas Saylor
Archive | 2009
Vinod Grover; Bastiaan Aarts; Michael Murphy; Jayant B. Kolhe; John Bryan Pormann; Douglas Saylor
Archive | 2013
Stephen Jones; Jaydeep Marathe; Vivek Kini; Bastiaan Aarts
Archive | 2009
Vinod Grover; Bastiaan Aarts; Michael Murphy
Archive | 2007
Julius VanderSpek; Nicholas Patrick Wilt; Jayant B. Kolhe; Ian Buck; Bastiaan Aarts