Publication


Featured research published by George Z. Chrysos.


international symposium on microarchitecture | 1997

ProfileMe: hardware support for instruction-level profiling on out-of-order processors

Jeffrey Dean; James E. Hicks; Carl A. Waldspurger; William E. Weihl; George Z. Chrysos

Profile data is valuable for identifying performance bottlenecks and guiding optimizations. Periodic sampling of a processor's performance monitoring hardware is an effective, unobtrusive way to obtain detailed profiles. Unfortunately, existing hardware simply counts events, such as cache misses and branch mispredictions, and cannot accurately attribute these events to instructions, especially on out-of-order machines. We propose an alternative approach, called ProfileMe, that samples instructions. As a sampled instruction moves through the processor pipeline, a detailed record of all interesting events and pipeline stage latencies is collected. ProfileMe also supports paired sampling, which captures information about the interactions between concurrent instructions, revealing information about useful concurrency and the utilization of various pipeline stages while an instruction is in flight. We describe an inexpensive hardware implementation of ProfileMe, outline a variety of software techniques to extract useful profile information from the hardware, and explain several ways in which this information can provide valuable feedback for programmers and optimizers.
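The hardware side of ProfileMe delivers one record per sampled instruction rather than incrementing per-event counters; software then aggregates those records into per-instruction statistics. The C sketch below illustrates that aggregation step only. The record layout and field names are assumptions for illustration and do not correspond to the actual register formats described in the paper.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical layout of one ProfileMe sample; the real hardware registers
 * described in the paper use different names and encodings. */
typedef struct {
    uint64_t pc;              /* program counter of the sampled instruction */
    uint8_t  dcache_miss;     /* 1 if the sampled access missed in the D-cache */
    uint8_t  retired;         /* 1 if the instruction retired, 0 if it aborted */
    uint32_t fetch_to_retire; /* recorded pipeline latency in cycles */
} profileme_sample;

/* Because ProfileMe samples instructions rather than counting events, the
 * fraction of a static instruction's samples that carry the miss flag is a
 * direct estimate of that instruction's miss rate. */
double estimate_dcache_miss_rate(const profileme_sample *s, size_t n, uint64_t pc)
{
    size_t seen = 0, missed = 0;
    for (size_t i = 0; i < n; i++) {
        if (s[i].pc != pc)
            continue;
        seen++;
        if (s[i].dcache_miss)
            missed++;
    }
    return seen ? (double)missed / (double)seen : 0.0;
}
```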


international parallel and distributed processing symposium | 2013

Design and Implementation of the Linpack Benchmark for Single and Multi-node Systems Based on Intel® Xeon Phi Coprocessor

Alexander Heinecke; Karthikeyan Vaidyanathan; Mikhail Smelyanskiy; Alexander Kobotov; Roman Dubtsov; Greg Henry; Aniruddha G. Shet; George Z. Chrysos; Pradeep Dubey

Dense linear algebra has traditionally been used to evaluate the performance and efficiency of new architectures. This trend has continued for the past half decade with the advent of multi-core processors and hardware accelerators. In this paper we describe how several flavors of the Linpack benchmark are accelerated on Intel's recently released Intel® Xeon Phi™ co-processor (code-named Knights Corner) in both native and hybrid configurations. Our native DGEMM implementation takes full advantage of Knights Corner's salient architectural features and successfully utilizes close to 90% of its peak compute capability. Our native Linpack implementation running entirely on Knights Corner employs novel dynamic scheduling and achieves close to 80% efficiency, the highest published co-processor efficiency. Like the native version, our single-node hybrid implementation of Linpack also achieves nearly 80% efficiency. Using dynamic scheduling and an enhanced look-ahead scheme, this implementation scales well to a 100-node cluster, on which it achieves over 76% efficiency while delivering a total performance of 107 TFLOPS.
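Most of a Linpack run is spent in DGEMM, so a cache-blocked matrix multiply is the natural illustration of what the kernel must do well. The C sketch below shows only the generic blocking idea, not the paper's implementation: the actual Knights Corner DGEMM additionally relies on 512-bit SIMD, software prefetching, and the dynamic scheduling discussed above, and the block size here is an arbitrary assumption.

```c
#include <stddef.h>

/* Illustrative block size, assumed to fit the tile working set in cache. */
#define BLOCK 64

/* Cache-blocked DGEMM, C = C + A * B, for column-major n x n matrices.
 * Blocking improves temporal locality: each BLOCK x BLOCK tile of A and B
 * is reused many times while it is resident in cache. */
void dgemm_blocked(size_t n, const double *A, const double *B, double *C)
{
    for (size_t jj = 0; jj < n; jj += BLOCK)
        for (size_t kk = 0; kk < n; kk += BLOCK)
            for (size_t ii = 0; ii < n; ii += BLOCK)
                /* multiply one tile: C(ii.., jj..) += A(ii.., kk..) * B(kk.., jj..) */
                for (size_t j = jj; j < jj + BLOCK && j < n; j++)
                    for (size_t k = kk; k < kk + BLOCK && k < n; k++) {
                        double b = B[k + j * n];
                        for (size_t i = ii; i < ii + BLOCK && i < n; i++)
                            C[i + j * n] += A[i + k * n] * b;
                    }
}
```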


international symposium on microarchitecture | 2002

Compiler managed micro-cache bypassing for high performance EPIC processors

Youfeng Wu; Ryan N. Rakvic; Li-Ling Chen; Chyi-chang Miao; George Z. Chrysos; Jesse Fang

Advanced microprocessors have been increasing clock rates well beyond the gigahertz boundary. For such high-performance microprocessors, a small and fast data micro-cache (ucache) is important to overall performance, and proper management of it via load bypassing has a significant performance impact. In this paper, we propose and evaluate a hardware-software collaborative technique to manage ucache bypassing for EPIC processors. The hardware supports ucache bypassing with a flag in the load instruction format, and the compiler employs static analysis and profiling to identify loads that should bypass the ucache. The collaborative method achieves a significant improvement in performance for the SpecInt2000 benchmarks. On average, about 40%, 30%, 24%, and 22% of load references are identified to bypass 256 B, 1 K, 4 K, and 8 K sized ucaches, respectively. This reduces the ucache miss rates by 39%, 32%, 28%, and 26%. The number of pipeline stalls from loads to their uses is reduced by 13%, 9%, 6%, and 5%. Meanwhile, the L1 and L2 cache misses remain largely unchanged. For the 256 B ucache, bypassing improves overall performance on average by 5%.
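The key compiler decision is which loads receive the bypass flag. The C sketch below shows one plausible profile-driven heuristic: mark a load for bypassing when its profiled ucache hit rate is low, so the small ucache is reserved for loads with good temporal locality. The structure, field names, and the 25% threshold are assumptions for illustration; the paper's pass combines static analysis with profiling in ways not reproduced here.

```c
#include <stddef.h>

/* Per-static-load statistics that a profiling run might gather
 * (names are hypothetical). */
typedef struct {
    unsigned long executions;  /* dynamic execution count of the load */
    unsigned long ucache_hits; /* profiled hits in the micro-cache */
} load_profile;

/* Return 1 if the compiler should set the bypass flag on this load.
 * A load that rarely hits in the small ucache gains little from it and
 * only evicts data that better-behaved loads would reuse. */
int should_bypass_ucache(const load_profile *p)
{
    if (p->executions == 0)
        return 0; /* no profile data: keep the default (non-bypassing) behavior */
    double hit_rate = (double)p->ucache_hits / (double)p->executions;
    return hit_rate < 0.25; /* illustrative threshold, not the paper's */
}
```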


Archive | 1997

Method for scheduling threads in a multithreaded processor

George Z. Chrysos; James E. Hicks; Carl A. Waldspurger; William E. Weihl


Archive | 1997

Method for estimating statistics of properties of instructions processed by a processor pipeline

George Z. Chrysos; James E. Hicks; Carl A. Waldspurger; William E. Weihl


Archive | 1997

Apparatus for randomly sampling instructions in a processor pipeline

George Z. Chrysos; James E. Hicks; Daniel L. Leibholz; Edward J. Mclellan; Carl A. Waldspurger; William E. Weihl


Archive | 1997

Apparatus and method for monitoring a computer system to guide optimization

James E. Hicks; George Z. Chrysos; Carl A. Waldspurger; William E. Weihl


Archive | 2003

Method and apparatus for affinity-guided speculative helper threads in chip multiprocessors

Hong Wang; Perry H. Wang; Jeffery A. Brown; Per Hammarlund; George Z. Chrysos; Doron Orenstein; Steve Shih-wei Liao; John Paul Shen


Archive | 1997

Apparatus for sampling instruction execution information in a processor pipeline

George Z. Chrysos; Jeffrey Dean; James E. Hicks; Carl A. Waldspurger; William E. Weihl; Daniel L. Leibholz; Edward J. Mclellan


Archive | 1997

Method and apparatus for sampling multiple potentially concurrent instructions in a processor pipeline

George Z. Chrysos; Jeffrey Dean; James E. Hicks; Daniel L. Leibholz; Edward J. Mclellan; Carl A. Waldspurger; William E. Weihl
