Publication


Featured research published by Richard M. Yoo.


IEEE International Conference on High Performance Computing, Data, and Analytics | 2013

Performance evaluation of Intel® transactional synchronization extensions for high-performance computing

Richard M. Yoo; Christopher J. Hughes; Konrad K. Lai; Ravi Rajwar

Intel has recently introduced Intel® Transactional Synchronization Extensions (Intel® TSX) in the 4th Generation Intel® Core™ processors. With Intel TSX, a processor can dynamically determine whether threads need to serialize through lock-protected critical sections. In this paper, we evaluate the first hardware implementation of Intel TSX using a set of high-performance computing (HPC) workloads, and demonstrate that applying Intel TSX to these workloads can provide significant performance improvements. On a set of real-world HPC workloads, applying Intel TSX yields an average speedup of 1.41x. When applied to a parallel user-level TCP/IP stack, Intel TSX provides a 1.31x average bandwidth improvement on network-intensive applications. We also demonstrate the ease with which we were able to apply Intel TSX to the various workloads.
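
As a concrete illustration of the programming model, the sketch below shows the lock-elision pattern that Intel TSX supports, written with the RTM intrinsics from <immintrin.h> (compiled with -mrtm). The elided spinlock and all names are illustrative assumptions, not code from the paper.

#include <immintrin.h>   /* _xbegin/_xend/_xabort/_mm_pause; needs -mrtm */
#include <stdatomic.h>

static atomic_int lock_word = 0;   /* 0 = free, 1 = held (illustrative) */

void elided_lock(void)
{
    unsigned status = _xbegin();
    if (status == _XBEGIN_STARTED) {
        /* Reading the lock word puts it in the transaction's read set:
           if another thread actually acquires the lock, we abort. */
        if (atomic_load(&lock_word) == 0)
            return;            /* run the critical section transactionally */
        _xabort(0xff);         /* lock already held: give up */
    }
    /* Fallback path: the transaction aborted (conflict, capacity, ...),
       so serialize through the real lock as before. */
    while (atomic_exchange(&lock_word, 1) != 0)
        while (atomic_load(&lock_word) != 0)
            _mm_pause();
}

void elided_unlock(void)
{
    if (atomic_load(&lock_word) == 0)
        _xend();               /* commit: the lock was never actually taken */
    else
        atomic_store(&lock_word, 0);
}

When critical sections rarely conflict, as in many HPC kernels, most executions commit without the lock ever being taken, which is the source of the speedups the paper measures.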


ACM Symposium on Parallel Algorithms and Architectures | 2013

Locality-aware task management for unstructured parallelism: a quantitative limit study

Richard M. Yoo; Christopher J. Hughes; Changkyu Kim; Yen-Kuang Chen; Christoforos E. Kozyrakis

As we increase the number of cores on a processor die, the on-chip cache hierarchies that support these cores are getting larger, deeper, and more complex. As a result, non-uniform memory access effects are now prevalent even on a single chip. To reduce execution time and energy consumption, data access locality should be exploited. This is especially important for task-based programming systems, where a scheduler decides when and where on the chip the code segments, i.e., tasks, should execute. Capturing locality for structured task parallelism has been done effectively, but the more difficult case, unstructured parallelism, remains largely unsolved: little quantitative analysis exists to demonstrate the potential of locality-aware scheduling and to guide future scheduler implementations in the most fruitful direction. This paper quantifies the potential of locality-aware scheduling for unstructured parallelism on three different many-core processors. Our simulation results of 32-core systems show that locality-aware scheduling can bring up to a 2.39x speedup over a randomized schedule, and a 2.05x speedup over a state-of-the-art baseline scheduling scheme. At the same time, a locality-aware schedule reduces average energy consumption by 55% and 47%, relative to the random and the baseline schedule, respectively. In addition, our 1024-core simulation results project that these benefits will only increase: compared to 32-core executions, we see up to 1.83x additional locality benefits. To capture this potential in a practical setting, we also perform a detailed scheduler design space exploration to quantify the impact of different scheduling decisions. We also highlight the importance of locality-aware stealing, and demonstrate that a stealing scheme can exploit significant locality while performing load balancing. Over randomized stealing, our proposed scheme shows up to a 2.0x speedup for stolen tasks.
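
The scheduler design space above turns largely on where idle workers steal from. Below is a toy sketch of locality-aware victim selection, assuming a hypothetical 32-core topology where cores i and i^1 share an L2 and each aligned group of eight cores shares an L3 slice; the topology and all names are assumptions for illustration, not the paper's scheduler.

#include <stdlib.h>

#define NWORKERS 32

/* Pick a steal victim for worker 'self' on its 'attempt'-th try:
   prefer the core sharing our L2, then rotate through our L3 group,
   and only then fall back to a random remote worker. */
int pick_victim(int self, int attempt)
{
    if (attempt == 0)
        return self ^ 1;                    /* L2 sibling */
    if (attempt < 8) {
        int base = self & ~7;               /* first core of our L3 group */
        int off  = self & 7;
        return base + (off + attempt) % 8;  /* neighbor within the L3 group */
    }
    int victim;
    do {
        victim = rand() % NWORKERS;         /* random remote fallback */
    } while (victim == self);
    return victim;
}

Tasks stolen from a nearby victim are more likely to find their working set already resident in a shared cache, which is how a stealing scheme can exploit locality while still balancing load.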


IEEE International Conference on High Performance Computing, Data, and Analytics | 2013

Location-aware cache management for many-core processors with deep cache hierarchy

Jongsoo Park; Richard M. Yoo; Daya Shanker Khudia; Christopher J. Hughes; Daehyun Kim

As cache hierarchies become deeper and the number of cores on a chip increases, managing caches becomes more important for performance and energy. However, current hardware cache management policies do not always adapt optimally to an application's behavior: e.g., caches may be polluted by data structures whose locality they cannot capture, and producer-consumer communication incurs multiple round trips of coherence messages per cache line transferred. We propose load and store instructions that carry hints regarding into which cache(s) the accessed data should be placed. Our instructions allow software to convey locality information to the hardware, while incurring minimal hardware cost and not affecting correctness. Our instructions provide a 1.07× speedup and a 1.24× energy efficiency improvement, on average, according to simulations on a 64-core system with private L1 and L2 caches. With a large shared L3 cache added, the benefits increase, providing a 1.33× energy reduction on average.
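
The proposed hint-carrying loads and stores are new hardware. As a rough present-day analogy, x86 already exposes one coarse software-controlled placement hint, the non-temporal hint; the sketch below uses that existing mechanism (not the paper's instructions) to keep low-reuse data from polluting the caches.

#include <immintrin.h>
#include <stddef.h>

/* Copy a buffer we do not expect to reuse: non-temporal prefetches hint
   that the source should not displace useful lines, and streaming
   stores let the destination bypass the cache hierarchy entirely. */
void copy_without_polluting(int *dst, const int *src, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        _mm_prefetch((const char *)(src + i + 16), _MM_HINT_NTA);
        _mm_stream_si32(dst + i, src[i]);
    }
    _mm_sfence();   /* order the streaming stores before later accesses */
}

The paper's instructions generalize this idea from "bypass everything" to placing data at a chosen level of a deep hierarchy.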


Archive | 2013

Object liveness tracking for use in processing device cache

Christopher J. Hughes; Daehyun Kim; Jongsoo Park; Richard M. Yoo; Ganesh Bikshandi


Archive | 2013

Method and apparatus for selecting cache locality for atomic operations

Christopher J. Hughes; Daehyun Kim; Camilo A. Moreno; Jongsoo Park; Richard M. Yoo


Archive | 2014

Monitoring Vector Lane Duty Cycle for Dynamic Optimization

Daehyun Kim; Jongsoo Park; Dong Hyuk Woo; Richard M. Yoo; Christopher J. Hughes


Archive | 2013

Automatic Transaction Coarsening

Christopher J. Hughes; Richard M. Yoo


Archive | 2014

Dynamic Home Tile Mapping

Christopher J. Hughes; Daehyun Kim; Jongsoo Park; Richard M. Yoo


Archive | 2014

Early Experience on Transactional

Richard M. Yoo; Sandhya Viswanathan; Vivek R. Deshpande; Christopher J. Hughes; Shirish Aundhe


Archive | 2013

Apparatus and method for implementing a scratchpad memory

Christopher J. Hughes; Daya Shankar Khudia; Daehyun Kim; Jongsoo Park; Richard M. Yoo
