Adi Yoaz
Intel
Publication
Featured research published by Adi Yoaz.
international symposium on computer architecture | 1999
Adi Yoaz; Mattan Erez; Ronny Ronen; Stephan J. Jourdan
State-of-the-art microprocessors achieve high performance by executing multiple instructions per cycle. In an out-of-order engine, the instruction scheduler is responsible for dispatching instructions to execution units based on dependencies, latencies, and resource availability. Most existing instruction schedulers do a less-than-optimal job of scheduling memory accesses and the instructions that depend on them, for the following reasons:

• Memory dependencies cannot be resolved prior to execution, so loads are not advanced ahead of preceding stores.
• The dynamic latencies of load instructions are unknown, so dependent instructions are scheduled with either an optimistic load-use delay (which may cause re-scheduling and re-execution) or a pessimistic delay (which creates unnecessary stalls).
• Memory pipelines are more expensive than other execution units and are therefore a scarce resource. Currently, an increase in memory execution bandwidth is usually achieved through multi-banked caches, where bank conflicts limit efficiency.

In this paper we present three techniques to address these scheduler limitations. The first improves the scheduling of load instructions by using a simple memory disambiguation mechanism. The second improves the scheduling of load-dependent instructions by employing a Data Cache Hit-Miss Predictor to predict dynamic load latencies. The third improves the efficiency of load scheduling in a multi-banked cache through Cache-Bank Prediction.
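A hit-miss predictor of the kind the abstract describes is often organized as a per-PC table of saturating counters; the sketch below assumes that common organization (the class and parameter names are ours, not the paper's). A "hit" prediction lets the scheduler issue dependent instructions with the short load-use latency; a "miss" prediction delays them.

```python
# Hedged sketch of a data-cache hit-miss predictor as a table of 2-bit
# saturating counters indexed by load PC. Organization and names are
# illustrative, not taken from the paper.

class HitMissPredictor:
    def __init__(self, entries=1024):
        self.entries = entries
        self.counters = [3] * entries   # start strongly predicting "hit"

    def _index(self, load_pc):
        return load_pc % self.entries   # direct-mapped, no tags

    def predict(self, load_pc):
        # Counter >= 2 means "predict hit": schedule dependents assuming
        # the short load-use latency; otherwise assume a cache miss.
        return self.counters[self._index(load_pc)] >= 2

    def update(self, load_pc, did_hit):
        i = self._index(load_pc)
        if did_hit:
            self.counters[i] = min(3, self.counters[i] + 1)
        else:
            self.counters[i] = max(0, self.counters[i] - 1)
```

Two-bit counters give hysteresis: a single miss at a normally-hitting load does not flip the prediction.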
international symposium on microarchitecture | 1998
Stephan J. Jourdan; Ronny Ronen; Michael Bekerman; Bishara Shomar; Adi Yoaz
Hardware renaming schemes provide multiple physical locations (register or memory) for each logical name. In current renaming schemes, a new physical location is allocated for each dispatched instruction regardless of its result value. However, these values exhibit a high level of temporal locality (result redundancy). This paper proposes:

1. Physical Register Reuse: reuse a physical location whenever an incoming result value is detected to match a previous one. This is performed during register renaming and requires some value-identity detection hardware. By mapping several logical registers holding the same value to the same physical register, Physical Register Reuse opens two opportunities:
• Sharing: exploit the high level of value redundancy in the register file to either reduce the file's size and complexity or effectively enlarge the active instruction window. Our results suggest reduction factors of 2 to 4 in some cases. Performance increases either through the enlarged instruction window or through the higher frequency enabled by a smaller register file requiring fewer ports.
• Result reuse and dependency redirection: move the responsibility for generating results (1) from the functional units to the register renamer, possibly eliminating processed instructions from the execution stream, and (2) from one instruction to an earlier one, possibly allowing instructions to be scheduled earlier. In this way, large performance speedups are achieved.

2. Unification: combine the memory renamer with the register renamer in order to extend the above sharing and result-reuse ideas to both registers and memory locations. This allows even greater hardware savings and performance improvements, and also simplifies the processing of store instructions.
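The core idea of reuse at rename time can be sketched in a few lines: alongside the usual logical-to-physical map, keep a value-to-physical map, and when an incoming result matches a known value, point the destination at the existing physical register instead of allocating a new one. This is only a minimal illustration under assumed interfaces (real hardware would bound both tables and reclaim registers on retirement); all names are ours.

```python
# Hedged sketch of physical-register reuse via value-identity detection.
# Several logical registers holding the same value share one physical
# register. Table management and reclamation are omitted for brevity.

class RenamerWithReuse:
    def __init__(self):
        self.map_table = {}        # logical reg -> physical reg
        self.value_to_phys = {}    # known result value -> physical reg
        self.next_phys = 0

    def rename_dest(self, logical, value):
        phys = self.value_to_phys.get(value)
        if phys is None:
            phys = self.next_phys          # allocate a fresh physical reg
            self.next_phys += 1
            self.value_to_phys[value] = phys
        self.map_table[logical] = phys     # sharing: many-to-one mapping
        return phys
```

For example, two logical registers receiving the same value map to the same physical register, so the second allocation is saved entirely.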
international symposium on computer architecture | 1999
Michael Bekerman; Stephan J. Jourdan; Ronny Ronen; Gilad Kirshenboim; Lihu Rappoport; Adi Yoaz; Uri C. Weiser
As microprocessors become faster, the relative performance cost of memory accesses increases. Bigger and faster caches significantly reduce the absolute load-to-use delay. However, the increase in processor operating frequencies worsens the relative load-to-use latency measured in processor cycles (e.g., from two cycles on the Pentium® processor to three cycles or more in current designs). Load-address prediction techniques were introduced to partially hide the load-to-use latency. This paper focuses on advanced address-prediction schemes to further shorten program execution time.

Existing address-prediction schemes can predict only simple address patterns, consisting mainly of constant addresses or stride-based addresses. This paper explores the characteristics of the remaining loads and suggests new techniques to improve prediction effectiveness:

• Context-based prediction to tackle part of the remaining, difficult-to-predict load instructions.
• New prediction algorithms that take advantage of global correlation among different static loads.
• New confidence mechanisms to increase the correct-prediction rate and to eliminate costly mispredictions.
• Mechanisms that prevent long or random address sequences from polluting the predictor's data structures while providing some hysteresis in the predictions.

Such an enhanced address predictor accurately predicts 67% of all loads while keeping the misprediction rate close to 1%. We further show that the proposed predictor works reasonably well in a deeply pipelined architecture, where the predict-to-update delay may significantly impair both prediction rate and accuracy.
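As a baseline for the enhanced schemes above, a stride predictor with a confidence counter can be sketched as follows; this is an assumed textbook-style organization (table layout, threshold, and names are ours), not the paper's exact design.

```python
# Hedged sketch of a stride address predictor with a confidence counter,
# the simple baseline that context-based schemes extend. A prediction is
# made only after the stride has repeated (confidence threshold), which
# is one way to trade prediction rate for accuracy.

class StridePredictor:
    def __init__(self):
        self.table = {}   # load PC -> (last_addr, stride, confidence)

    def predict(self, pc):
        entry = self.table.get(pc)
        if entry is None:
            return None
        last_addr, stride, conf = entry
        return last_addr + stride if conf >= 2 else None

    def update(self, pc, addr):
        last_addr, stride, conf = self.table.get(pc, (addr, 0, 0))
        new_stride = addr - last_addr
        if new_stride == stride:
            conf = min(3, conf + 1)        # stride repeated: gain confidence
        else:
            stride, conf = new_stride, 0   # stride changed: retrain
        self.table[pc] = (addr, stride, conf)
```

A load walking an array with 8-byte elements (addresses 100, 108, 116, 124) reaches the confidence threshold after a few repeats, after which the next address is predicted correctly.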
international symposium on computer architecture | 2000
Michael Bekerman; Adi Yoaz; Freddy Gabbay; Stephan J. Jourdan; Maxim Kalaev; Ronny Ronen
Higher microprocessor frequencies accentuate the performance cost of memory accesses. This is especially noticeable in Intel's IA-32 architecture, where the small register set results in an increased number of memory accesses. This paper presents a novel, non-speculative technique that partially hides the increasing load-to-use latency by allowing the early issue of load instructions. Early load-address resolution relies on register tracking to safely compute the addresses of memory references in the front-end part of the processor pipeline. Register tracking enables decode-time computation of register values by tracking simple operations of the form reg±immediate, and may be performed in any pipeline stage following instruction decode and prior to execution. Several tracking schemes are proposed in this paper:

• Stack-pointer tracking allows safe early resolution of stack references by keeping track of the value of the ESP register (the stack pointer). About 25% of all loads are stack loads, and 95% of these loads may be resolved in the front-end.
• Absolute-address tracking allows the early resolution of constant-address loads.
• Displacement-based tracking tackles all loads with addresses of the form reg±immediate by tracking the values of all general-purpose registers. This class corresponds to 82% of all loads, and about 65% of these loads can be safely resolved in the front-end pipeline.

The paper describes the tracking schemes, analyzes their performance potential in a deeply pipelined processor, and discusses the integration of tracking with memory disambiguation.
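The tracking idea can be illustrated with a small sketch: the front end keeps a table of register values it can prove, updates it only on simple reg±immediate operations, invalidates a register on anything it cannot track, and early-resolves a load whenever its base register is known. The instruction encoding and starting state here are invented for illustration.

```python
# Hedged sketch of decode-time register tracking. Instructions are
# (op, reg, imm) tuples; ESP is assumed known at a synchronization
# point (e.g. after a call/return), which is our illustrative premise.

def track(instructions):
    known = {'ESP': 0x1000}   # registers whose values are provable
    resolved = []             # load addresses computable at decode
    for op, reg, imm in instructions:
        if op in ('add', 'sub') and reg in known:
            known[reg] += imm if op == 'add' else -imm
        elif op == 'load':
            if reg in known:
                resolved.append(known[reg] + imm)  # early-issued load
        else:
            known.pop(reg, None)  # untrackable update: stop tracking reg
    return resolved
```

In the example below, a stack load after `sub esp, 8` is resolved in the front end, while a load whose base register was clobbered by a complex operation is not.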
international symposium on computer architecture | 2002
Robert S. Chappell; Francis Tseng; Adi Yoaz; Yale N. Patt
Branch misprediction penalties continue to increase as microprocessor cores become wider and deeper. Thus, improving branch prediction accuracy remains an important challenge. Simultaneous Subordinate Microthreading (SSMT) provides a means to improve branch prediction accuracy. SSMT machines run multiple, concurrent microthreads in support of the primary thread. We propose to dynamically construct microthreads that can speculatively and accurately pre-compute branch outcomes along frequently mispredicted paths. The mechanism is intended to be implemented entirely in hardware. We present the details for doing so. We show how to select the right paths, how to generate accurate predictions, and how to get this information in a timely way. We achieve an average gain of 8.4% (42% maximum) over a very aggressive baseline machine on the SPECint95 and SPECint2000 benchmark suites.
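The essence of branch precomputation is running the branch's backward slice ahead of the main thread, so the outcome is available before fetch reaches the branch. The sketch below only illustrates that idea: the slice representation, opcode set, and live-in interface are all invented here, not the paper's hardware encoding.

```python
# Hedged sketch of a precomputation microthread: execute a branch's
# backward slice on the current live-in register values to obtain the
# branch outcome early. The tuple-based "ISA" is illustrative only.

def microthread_predict(slice_ops, live_ins):
    """Run (op, dst, src_a, src_b) slice ops; return the branch outcome."""
    regs = dict(live_ins)
    for op, dst, a, b in slice_ops:
        if op == 'add':
            regs[dst] = regs[a] + regs[b]
        elif op == 'cmp_lt':
            regs[dst] = regs[a] < regs[b]
    return regs['branch_taken']

# A branch "if (i + step < limit)" reduced to its two-instruction slice:
slice_ops = [('add', 't0', 'i', 'step'),
             ('cmp_lt', 'branch_taken', 't0', 'limit')]
```

Because the slice is typically much shorter than the path leading to the branch, the microthread can finish before the main thread fetches the branch, replacing a prediction with a computed outcome.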
high performance computer architecture | 2000
Stephan J. Jourdan; Lihu Rappoport; Yoav Almog; Mattan Erez; Adi Yoaz; Ronny Ronen
This paper describes a new instruction-supply mechanism called the eXtended Block Cache (XBC). The goal of the XBC is to improve on the Trace Cache (TC) hit rate while providing the same bandwidth. The improved hit rate is achieved by making the XBC a nearly redundancy-free structure. The basic unit recorded in the XBC is the extended block (XB), a multiple-entry, single-exit instruction block. An XB is a sequence of instructions ending on a conditional or an indirect branch; unconditional direct jumps do not end an XB. To enable multiple entry points per XB, the XB index is derived from the IP of its ending instruction. Instructions within an XB are recorded in reverse order, enabling easy extension of XBs. The multiple entry points remove most of the redundancy. Since there is at most one conditional branch per XB, we can fetch up to n XBs per cycle by predicting n branches. This multiple fetch enables the XBC to match the TC bandwidth.
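The XB formation rule described above can be sketched directly: scan the decoded stream, cut a block at every conditional or indirect branch (but not at unconditional direct jumps), index the block by its ending IP, and store its instructions in reverse order. The stream encoding below is our own illustration.

```python
# Hedged sketch of extended-block (XB) formation. Each stream element is
# (ip, kind) with kind in {'alu', 'jmp_direct', 'cond_branch',
# 'indirect_branch'}; this encoding is invented for illustration.

def build_xbs(stream):
    xbs, current = {}, []
    for ip, kind in stream:
        current.append(ip)
        if kind in ('cond_branch', 'indirect_branch'):
            # Index by the ending instruction's IP, and record the
            # instructions in reverse order so the XB can later be
            # extended at the front when entered at an earlier IP.
            xbs[ip] = list(reversed(current))
            current = []
    return xbs
```

Note how the unconditional direct jump at IP 1 does not terminate its block, so IPs 0 through 3 land in a single XB indexed by its ending conditional branch.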
international symposium on microarchitecture | 2002
Robert S. Chappell; Francis Tseng; Adi Yoaz; Yale N. Patt
Research has shown that precomputation microthreads can be useful for improving branch prediction and prefetching. However, it is not obvious how to provide the necessary microarchitectural support, and few details have been given in the literature. By judiciously constraining microthreads, we can easily adapt a superscalar machine to support many simultaneous microthreads. The nature of precomputation microthreads also requires efficient usage of resources. Our proposed implementation addresses this issue by dynamically identifying and aborting useless microthreads.
Archive | 1997
Gershon Kedem; Ronny Ronen; Adi Yoaz
Archive | 1999
Adi Yoaz; Mattan Erez; Ronny Ronen
Archive | 1999
Stephan J. Jourdan; Ronny Ronen; Adi Yoaz