Publication


Featured research published by Roger C. Wood.


International Symposium on Microarchitecture | 1991

DISC: dynamic instruction stream computer

Mario D. Nemirovsky; Forrest Brewer; Roger C. Wood

The Dynamic Instruction Stream Computer (DISC) is a novel computer architecture that addresses many of the problems present in real-time systems. DISC allows multiple instruction streams (ISs), representing different processes, to run concurrently by interleaving their instructions in the pipeline, and its throughput can be partitioned in any way among the multiple ISs. Conventional architectures are more concerned with overall performance and throughput than with real-time response; in other words, they optimize the system for the functions that are most heavily used, without regard to responsiveness to individual requests. Applications abound where a high degree of responsiveness is required without too much sacrifice of overall efficiency. This is particularly true in real-time control applications, where it is important to optimize the critical loops and respond promptly to interrupts. DISC addresses this problem by dynamically partitioning processor throughput among multiple instruction streams based upon demand, so that different tasks and interrupt priorities can be assigned to guarantee their deadlines.
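
The throughput-partitioning idea can be illustrated with a small sketch (stream names and weights are hypothetical, and this is not the published DISC issue logic): a proportional-share rule that always issues from the stream furthest behind its guaranteed fraction of pipeline slots.

# Proportional-share interleaving sketch (illustrative only).
from collections import Counter

def interleave(streams, weights, cycles):
    """Each cycle, issue from the stream whose issued/weight ratio is
    lowest, i.e. the stream furthest behind its guaranteed share."""
    issued = Counter()
    order = []
    for _ in range(cycles):
        s = min(streams, key=lambda s: issued[s] / weights[s])
        issued[s] += 1
        order.append(s)
    return issued, order

# A real-time control stream gets half the slots; two background
# streams share the remainder.
slots, order = interleave(["ctrl", "bg1", "bg2"],
                          {"ctrl": 0.5, "bg1": 0.25, "bg2": 0.25}, 16)
print(slots)   # Counter({'ctrl': 8, 'bg1': 4, 'bg2': 4})

A guaranteed slot share translates directly into a bounded worst-case issue latency for each stream, which is what makes deadlines analyzable.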


Hawaii International Conference on System Sciences | 1994

Performance estimation of multistreamed, superscalar processors

Wayne Yamamoto; Mauricio J. Serrano; Adam R. Talcott; Roger C. Wood; M. Nemirovsky

Multistreamed processors can significantly improve processor throughput by allowing interleaved execution of instructions from multiple instruction streams. We present an analytical modeling technique to evaluate the effect of dynamically interleaving additional instruction streams within superscalar architectures. Using this technique, estimates of the instructions executed per cycle (IPC) for a processor architecture are quickly calculated given simple descriptions of the workload and hardware characteristics. To validate this technique, estimates of the SPEC89 benchmark suite obtained from the model are compared to results from a hardware simulator. Our results show that the technique produces accurate estimates with an average deviation of approximately 4% from the simulation results. Finally, we demonstrate that as the number of functional units increases, multistreaming is an effective technique to exploit these additional resources.
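
As a hedged illustration of the estimation target (the paper's analytical model itself is not reproduced here), the sketch below assumes each stream exposes a Poisson-distributed number of issue-ready instructions per cycle and estimates IPC = E[min(width, total ready)] by Monte Carlo:

import math
import random

def poisson(rng, lam):
    """Knuth's method; adequate for the small means used here."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def ipc_monte_carlo(streams, mean_ready, width, trials=100_000, seed=0):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        ready = sum(poisson(rng, mean_ready) for _ in range(streams))
        total += min(width, ready)   # dispatch is capped by the issue width
    return total / trials

for n in (1, 2, 4):
    print(n, "streams:", round(ipc_monte_carlo(n, mean_ready=1.5, width=4), 2), "IPC")

The paper's analytical model replaces the sampling loop with a direct computation from the workload description; a convolution-based version of that idea is sketched under the 1994 modelling-tools conference entry below.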


Journal of the ACM | 1966

Time-Shared Computer Operations With Both Interarrival and Service Times Exponential

B. Krishnamoorthi; Roger C. Wood

The concept of time-shared computer operations is briefly described and a model of a time-sharing system is proposed, based on the assumption that both interarrival and service times possess an exponential distribution. Although the process described by this model is non-Markovian, an imbedded Markov chain is analyzed by exploiting the fact that the instants of completion of a “quantum” of service are regeneration points. It is shown that user congestion possesses a limiting distribution, and the method of generating functions is used to derive this distribution. The concept of cycle time is discussed, and two measures of cycle time are developed for a scheduling discipline employing a single queue. Finally, a number of numerical examples are presented to illustrate the effect of the system parameters upon user congestion, system response time, and computer efficiency.
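
A toy discrete-event simulation (illustrative parameters, not the paper's derivation) makes the model concrete: Poisson arrivals, exponential service, and round-robin quanta from a single queue. Because exponential service is memoryless, the long-run mean congestion should agree with the M/M/1 value rho/(1 - rho), which the simulation lets us check:

import random
from collections import deque

def simulate(lam, mu, q, horizon, seed=1):
    rng = random.Random(seed)
    queue = deque()                  # remaining service times, head in service
    t, next_arrival = 0.0, rng.expovariate(lam)
    area = 0.0                       # integral of number-in-system over time
    while t < horizon:
        if queue:
            slice_end = t + min(q, queue[0])            # serve one quantum
            step_end = min(slice_end, next_arrival, horizon)
            area += len(queue) * (step_end - t)
            queue[0] -= step_end - t
            if step_end == next_arrival:
                queue.append(rng.expovariate(mu))       # new job joins
                next_arrival += rng.expovariate(lam)
            elif queue[0] <= 1e-12:
                queue.popleft()                         # job finished
            else:
                queue.rotate(-1)                        # quantum expired
            t = step_end
        else:
            t = next_arrival                            # idle: jump ahead
            queue.append(rng.expovariate(mu))
            next_arrival += rng.expovariate(lam)
    return area / horizon

lam, mu = 0.7, 1.0
rho = lam / mu
print("simulated mean congestion:", round(simulate(lam, mu, q=0.1, horizon=50_000), 2))
print("M/M/1 value rho/(1-rho):  ", round(rho / (1 - rho), 2))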


Hawaii International Conference on System Sciences | 1993

DISC: dynamic instruction stream computer-an evaluation of performance

Mauricio J. Serrano; Roger C. Wood; Mario Nemirovsky

DISC is a simple processor architecture targeted for real-time applications. The architecture is based on dynamic fine-grained multithreading, where the next instruction is fetched from one of several possible simultaneously active threads. The DISC architecture uses a combination of concepts, including a register stack file, a four-stage pipeline, up to four active threads, a dynamic scheduler, and special input/output (I/O) and interrupt constructs, to allow maximization of performance for real-time control applications. Previous stochastic results were very encouraging, so a synthetic benchmark was developed to allow more detailed testing. The benchmark was based on a Hughes Aircraft Company satellite control system and assembled with the DISC assembler. The model was designed and run in the Verilog simulation language.
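
The benefit of fine-grained interleaving on a short pipeline can be seen with a back-of-the-envelope toy (the four-stage depth and four-thread count come from the abstract; the worst-case dependence assumption is ours, not the paper's): if every instruction must wait for its same-thread predecessor to leave the pipeline, round-robin fetch from enough threads removes all bubbles.

def throughput(threads, stages=4):
    """Instructions per cycle under round-robin fetch when each
    instruction waits for its same-thread predecessor to drain the
    pipeline (worst-case dependences, no forwarding).  Each round
    issues `threads` instructions and must last long enough to cover
    both the round length and the pipeline depth."""
    return threads / max(threads, stages)

for n in (1, 2, 4):
    print(n, "active threads ->", throughput(n), "IPC")
# 1 thread  -> 0.25 IPC: three bubbles after every instruction
# 4 threads -> 1.0  IPC: the pipeline stays full every cycle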


International Symposium on Computer Architecture | 1994

The impact of unresolved branches on branch prediction scheme performance

Adam R. Talcott; Wayne Yamamoto; Mauricio J. Serrano; Roger C. Wood; Mario Nemirovsky

In this paper, we examine the benefits of the early resolution of branch instructions and the impact of unresolved branches on history-based branch prediction schemes by using two new metrics that are more revealing than branch prediction accuracy alone. We first briefly review a number of branch prediction schemes and introduce two new branch prediction scheme performance metrics. We then utilize these metrics to gauge the improvement in branch prediction scheme performance when only the outcomes of unresolved branches are predicted. Finally, we examine two approaches for handling multiple unresolved branches in history-based branch prediction schemes, and determine that prediction accuracy remains quite stable when older branch histories are used.
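
The effect of unresolved branches can be sketched with a toy predictor (synthetic branch stream and hypothetical parameters, not the paper's benchmarks or metrics): two-bit counters indexed by a global history register, where the history available at prediction time may be d branches stale because those branches are still in flight.

import random

def run(delay, n=200_000, hist_bits=8, seed=3):
    rng = random.Random(seed)
    counters = [2] * (1 << hist_bits)   # 2-bit counters, start weakly taken
    mask = (1 << hist_bits) - 1
    history, pending, correct = 0, [], 0
    for i in range(n):
        # Synthetic branch: a 5-iteration loop-exit pattern with 5% noise.
        taken = (i % 5 != 4) ^ (rng.random() < 0.05)
        # Outcomes older than `delay` branches have resolved; fold them in.
        while len(pending) > delay:
            history = ((history << 1) | pending.pop(0)) & mask
        idx = history
        correct += ((counters[idx] >= 2) == taken)
        # Counter update uses the true outcome (only the history is stale).
        counters[idx] = min(3, counters[idx] + 1) if taken else max(0, counters[idx] - 1)
        pending.append(int(taken))
    return correct / n

for d in (0, 1, 2, 4, 8):
    print(d, "unresolved branches ->", round(run(d), 3), "accuracy")

Sweeping d lets one measure how accuracy responds as more branches remain unresolved at prediction time, which is the question the paper's metrics are designed to expose.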


Proceedings of the 7th International Conference on Computer Performance Evaluation: Modelling Techniques and Tools | 1994

A model for performance estimation in a multistreamed superscalar processor

Mauricio J. Serrano; Wayne Yamamoto; Roger C. Wood; Mario Nemirovsky

The current trend is to integrate more hardware functional units within the superscalar processor. However, these functional units are not fully utilized, due to the inherent limit of instruction-level parallelism in a single instruction stream. Simultaneous execution of instructions from multiple streams, referred to as multistreaming, can increase the number of instructions dispatched per cycle by providing more ready-to-issue instructions. We present an analytical modeling technique to evaluate the effect of dynamically interleaving additional instruction streams within superscalar architectures. Estimates of the instructions executed per cycle (IPC) are calculated given simple descriptions of the workload and hardware. To validate this technique, estimates obtained from the model for several benchmarks are compared against results from a hardware simulator.
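
A minimal analytical sketch in this style (the paper's model is more detailed, and the per-stream distribution below is hypothetical): convolve n copies of a per-stream "ready instructions per cycle" distribution and cap the total at the dispatch width, giving an IPC estimate without any simulation.

def convolve(p, q):
    """Distribution of the sum of two independent nonnegative integer
    random variables given as probability lists."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def ipc_estimate(per_stream, streams, width):
    dist = [1.0]                     # point mass at zero ready instructions
    for _ in range(streams):
        dist = convolve(dist, per_stream)
    # IPC = E[min(width, total ready instructions)]
    return sum(p * min(k, width) for k, p in enumerate(dist))

# Hypothetical workload: a stream offers 0..3 ready instructions per
# cycle with these probabilities (mean 1.3).
per_stream = [0.25, 0.35, 0.25, 0.15]
for n in (1, 2, 4):
    print(n, "streams, 4-wide dispatch ->",
          round(ipc_estimate(per_stream, n, 4), 2), "IPC")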


IBM Journal of Research and Development | 1975

Performance analysis of a multiprogrammed computer system

Willy W. Chiu; Donald N. Dumont; Roger C. Wood

A combination of analytical modeling and measurement is employed for the performance analysis of a multiprogrammed computer system. First, a cyclic queue model is developed for the system under study. Then, model validation is attempted in both controlled and normal environments. The success of the model is demonstrated by its prediction of performance improvements from system reconfigurations. Reasonable correlation between the measured performance and the model predictions under various degrees of multiprogramming is observed. Finally, possible system reconfigurations are explored with the insight gained from the performance analysis.
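
As an illustrative stand-in for the cyclic queue model (a classical two-station closed network; the rates below are hypothetical, and the paper's calibrated model is richer): with n programs cycling between an exponential CPU and an exponential I/O station, the stationary probability of k programs at the CPU is proportional to (mu_io/mu_cpu)**k, and CPU utilization follows directly.

def cpu_utilization(n_programs, mu_cpu, mu_io):
    """CPU utilization of a two-station cyclic queue with exponential
    service: p(k at CPU) is proportional to (mu_io/mu_cpu)**k."""
    r = mu_io / mu_cpu
    weights = [r**k for k in range(n_programs + 1)]
    return 1 - weights[0] / sum(weights)

# Diminishing returns from raising the degree of multiprogramming:
for n in (1, 2, 4, 8):
    print(n, "programs ->", round(cpu_utilization(n, mu_cpu=1.0, mu_io=0.8), 3))

Curves of this shape are the kind of model prediction the paper validates against measurements under various degrees of multiprogramming.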


Communications of the ACM | 1974

Dynamic memory repacking

E. Balkovich; W. Chiu; Leon Presser; Roger C. Wood

A probabilistic model of a multiprogramming system is exercised in order to determine the conditions under which the dynamic repacking of main memory is beneficial. An expression is derived for the maximum interference that a repacking process may introduce before the original performance of the system is degraded. Alternative approaches to repacking are discussed, and the operating conditions that lead to improved system throughput through repacking are delineated.
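
The break-even condition has a simple back-of-the-envelope form (a sketch under our own assumptions, not the paper's derivation): if fragmentation limits memory to n_frag resident programs while repacking sustains n_packed but consumes a fraction f of the CPU, repacking pays off while (1 - f) * T(n_packed) > T(n_frag), so the maximum tolerable interference is f_max = 1 - T(n_frag)/T(n_packed).

def throughput(n, r=0.8):
    """Toy throughput curve with diminishing returns (the cyclic-queue
    utilization sketched for the 1975 IBM Journal entry above)."""
    weights = [r**k for k in range(n + 1)]
    return 1 - weights[0] / sum(weights)

def max_interference(n_frag, n_packed):
    """Largest CPU fraction repacking may consume before it hurts."""
    return 1 - throughput(n_frag) / throughput(n_packed)

# Fragmentation keeps only 3 of 5 otherwise-resident programs in memory:
print(round(max_interference(n_frag=3, n_packed=5), 3))   # about 0.09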


Annual Simulation Symposium | 1975

Sensitivity of predictive scheduling

Ka-Lai Leung; Roger C. Wood; Willy W. Chiu

A popular CPU (Central Processing Unit) scheduling strategy is to give high priority to those jobs with short CPU service times. There are several algorithms for predicting which jobs among all the resident jobs have short service times. No scheme, however, has proved to be 100% accurate in identifying the short jobs. This paper, utilizing results from both simulation and mathematical models, studies the sensitivity of CPU utilization to the accuracy of the predictive algorithm.
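
A toy experiment conveys the sensitivity question (synthetic workload and hypothetical parameters; mean response time stands in here for the paper's utilization measure): jobs are short or long, a predictor labels each job correctly with probability `accuracy`, and predicted-short jobs receive non-preemptive priority.

import random
from collections import deque

def simulate(accuracy, n_jobs=50_000, seed=7):
    rng = random.Random(seed)
    # 80% short jobs (mean service 0.25), 20% long (mean 2.0); Poisson arrivals.
    t, jobs = 0.0, []
    for _ in range(n_jobs):
        t += rng.expovariate(0.8)
        short = rng.random() < 0.8
        svc = rng.expovariate(4.0 if short else 0.5)
        pred_short = short if rng.random() < accuracy else not short
        jobs.append((t, svc, pred_short))
    high, low = deque(), deque()
    i, clock, total_resp, done = 0, 0.0, 0.0, 0
    while done < n_jobs:
        while i < n_jobs and jobs[i][0] <= clock:     # admit arrivals
            (high if jobs[i][2] else low).append(jobs[i])
            i += 1
        if not high and not low:
            clock = jobs[i][0]                        # idle: jump ahead
            continue
        arrived, svc, _ = (high or low).popleft()     # high priority first
        clock += svc
        total_resp += clock - arrived
        done += 1
    return total_resp / n_jobs

for acc in (0.5, 0.7, 0.9, 1.0):
    print(f"prediction accuracy {acc:.0%} -> mean response {simulate(acc):.2f}")

At 50% accuracy the predictor is uninformative; sweeping toward 100% shows how much of the short-jobs-first benefit survives imperfect prediction.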


Journal of the ACM | 1974

Comments on a Paper by Gaver

E. Balkovich; W. Chiu; Leon Presser; Roger C. Wood

In a 1967 publication, D. P. Gaver studied a probabilistic model of a multiprogramming computer system. His results have been utilized recently by a number of authors. However, we have observed that Gaver's results contain inconsistencies. These inconsistencies are discussed in detail, and a correction is suggested and verified through an independent derivation.

Collaboration


Top co-authors of Roger C. Wood:

Wayne Yamamoto (University of California)
E. Balkovich (University of California)
Leon Presser (University of California)
W. Chiu (University of California)