Publication


Featured research published by Marion G. Harmon.


IEEE Transactions on Computers | 1999

Bounding pipeline and instruction cache performance

Christopher A. Healy; Robert D. Arnold; Frank Mueller; David B. Whalley; Marion G. Harmon

Predicting the execution time of code segments in real-time systems is challenging. Most recently designed machines contain pipelines and caches. Pipeline hazards may result in multicycle delays. Instruction or data memory references may not be found in cache and these misses typically require several cycles to resolve. Whether an instruction will stall due to a pipeline hazard or a cache miss depends on the dynamic sequence of previous instructions executed and memory references performed. Furthermore, these penalties are not independent since delays due to pipeline stalls and cache miss penalties may overlap. This paper describes an approach for bounding the worst- and best-case performance of large code segments on machines that exploit both pipelining and instruction caching. First, a method is used to analyze a program's control flow to statically categorize the caching behavior of each instruction. Next, these categorizations are used in the pipeline analysis of sequences of instructions representing paths within the program. A timing analyzer uses the pipeline path analysis to estimate the worst- and best-case execution performance of each loop and function in the program. Finally, a graphical user interface is invoked that allows a user to request timing predictions on portions of the program. The results indicate that the timing analyzer efficiently produces tight predictions of worst- and best-case performance for pipelining and instruction caching.
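
The two-phase structure described above (static categorization of each instruction's caching behavior, then pipeline analysis of the paths through the program) can be illustrated with a deliberately simplified sketch. The category names, cycle counts, and miss penalty below are invented for illustration, and a real analyzer derives a closed-form bound per loop rather than iterating as this toy does.

#include <stdio.h>

/* Invented caching categories (illustrative names, not the paper's terms):
   ALWAYS_HIT  - the reference is guaranteed to be in cache,
   ALWAYS_MISS - the reference can never be guaranteed to hit,
   FIRST_MISS  - only the first execution inside the loop can miss. */
enum category { ALWAYS_HIT, ALWAYS_MISS, FIRST_MISS };

struct instr {
    int base_cycles;        /* pipeline cycles assuming a cache hit */
    enum category cat;      /* statically derived caching category */
};

#define MISS_PENALTY 9      /* assumed cycles to service a cache miss */

/* Worst-case cycle bound for a loop body executed `iters` times. */
static long wcet_loop(const struct instr *body, int n, int iters)
{
    long total = 0;
    for (int it = 0; it < iters; it++)
        for (int i = 0; i < n; i++) {
            total += body[i].base_cycles;
            if (body[i].cat == ALWAYS_MISS ||
                (body[i].cat == FIRST_MISS && it == 0))
                total += MISS_PENALTY;
        }
    return total;
}

int main(void)
{
    /* A hypothetical four-instruction loop body. */
    struct instr body[] = {
        { 1, FIRST_MISS }, { 1, ALWAYS_HIT },
        { 2, ALWAYS_HIT }, { 1, ALWAYS_MISS },
    };
    printf("worst-case cycles for 100 iterations: %ld\n",
           wcet_loop(body, 4, 100));
    return 0;
}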


real-time systems symposium | 1995

Integrating the timing analysis of pipelining and instruction caching

Christopher A. Healy; David B. Whalley; Marion G. Harmon

Recently designed machines contain pipelines and caches. While both features provide significant performance advantages, they also pose problems for predicting execution time of code segments in real-time systems. Pipeline hazards may result in multicycle delays. Instruction or data memory references may not be found in cache and these misses typically require several cycles to resolve. Whether an instruction will stall due to a pipeline hazard or a cache miss depends on the dynamic sequence of previous instructions executed and memory references performed. Furthermore, these penalties are not independent since delays due to pipeline stalls and cache miss penalties may overlap. This paper describes an approach for bounding the worst-case performance of large code segments on machines that exploit both pipelining and instruction caching. First, a method is used to analyze a program's control flow to statically categorize the caching behavior of each instruction. Next, these categorizations are used in the pipeline analysis of sequences of instructions representing paths within the program. A timing analyzer uses the pipeline path analysis to estimate the worst-case execution performance of each loop and function in the program. Finally, a graphical user interface is invoked that allows a user to request timing predictions on portions of the program.
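
The abstract's observation that pipeline stall and cache-miss penalties are not independent can be made concrete with a toy calculation; the cycle counts below are invented.

#include <stdio.h>

/* Toy illustration with invented numbers: an instruction incurs both a
   3-cycle pipeline hazard stall and a 9-cycle instruction-cache miss.
   If the two delays overlap in time, charging them separately
   overestimates the worst case. */
int main(void)
{
    int hazard_stall = 3, miss_penalty = 9;

    int charged_separately = hazard_stall + miss_penalty;        /* 12 */
    int fully_overlapped   = hazard_stall > miss_penalty
                               ? hazard_stall : miss_penalty;    /*  9 */

    printf("penalties charged separately: %d cycles\n", charged_separately);
    printf("penalties fully overlapped:   %d cycles\n", fully_overlapped);
    return 0;
}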


real time technology and applications symposium | 1997

Timing analysis for data caches and set-associative caches

Randall T. White; Frank Mueller; Christopher A. Healy; David B. Whalley; Marion G. Harmon

The contributions of this paper are twofold. First, an automatic tool-based approach is described to bound worst-case data cache performance. The approach works on fully optimized code, performs the analysis over the entire control flow of a program, detects and exploits both spatial and temporal locality within data references, produces results typically within a few seconds, and estimates, on average, 30% tighter WCET bounds than can be predicted without analyzing data cache behavior. Results obtained by running the system on representative programs are presented and indicate that timing analysis of data cache behavior can result in significantly tighter worst-case performance predictions. Second, a framework to bound worst-case instruction cache performance for set-associative caches is formally introduced and operationally described. Results of incorporating instruction cache predictions within pipeline simulation show that timing predictions for set-associative caches remain just as tight as predictions for direct-mapped caches. The cache simulation overhead scales linearly with increasing associativity.
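
A central cost in set-associative cache analysis is tracking replacement state per cache set. The sketch below (assumed 4-way associativity, LRU replacement, and an invented reference trace) shows why the per-access simulation work, and hence the overhead, grows only linearly with associativity.

#include <stdio.h>

#define ASSOC 4              /* assumed associativity */
#define EMPTY (-1)

/* One cache set kept in LRU order: ways[0] is the most recently used
   line.  Each access touches at most ASSOC entries, so the simulation
   cost per reference grows only linearly with associativity. */
struct cache_set { int ways[ASSOC]; };

/* Returns 1 on a hit, 0 on a miss; updates the LRU order either way. */
static int access_set(struct cache_set *s, int line)
{
    int prev = line;

    for (int i = 0; i < ASSOC; i++) {
        int cur = s->ways[i];
        s->ways[i] = prev;                  /* shift toward the LRU end */
        if (cur == line)
            return 1;                       /* found: it was a hit */
        prev = cur;
    }
    return 0;                               /* not found: LRU line evicted */
}

int main(void)
{
    struct cache_set set = { { EMPTY, EMPTY, EMPTY, EMPTY } };
    int trace[] = { 3, 7, 3, 9, 1, 7, 3 };  /* invented cache line numbers */
    int n = (int)(sizeof trace / sizeof trace[0]);
    int misses = 0;

    for (int i = 0; i < n; i++)
        if (!access_set(&set, trace[i]))
            misses++;
    printf("%d misses on %d references\n", misses, n);
    return 0;
}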


real-time systems symposium | 1992

A retargetable technique for predicting execution time

Marion G. Harmon; Theodore P. Baker; David B. Whalley

A novel technique for predicting point-to-point execution times on contemporary microprocessors is presented. It uses machine-description rules, similar to those that have proven useful for code generation and peephole optimization, to translate compiled object code into a sequence of very low-level instructions. The stream of micro-instructions is then analyzed for timing, via a three-level pattern matching scheme. The timing tool currently predicts the execution time of code segments targeted for the Motorola 68020 and Intel 80386 processors. The timing tool has been integrated with a version of the vpo C compiler and the ease environment. A prototype has been built and preliminary tests are very promising.
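
The translate-then-pattern-match idea can be sketched roughly as follows. The mnemonics, micro-instruction expansions, cycle counts, and the single "pattern" rule are all invented; they merely stand in for the machine-description rules and the three-level matching scheme the paper describes.

#include <stdio.h>
#include <string.h>

/* Hypothetical machine-description rule: one object instruction expands
   into a short sequence of micro-instructions, each with a cycle count. */
struct rule {
    const char *mnemonic;
    const char *micros[4];   /* NULL-terminated expansion */
    int cycles[4];
};

static const struct rule rules[] = {
    { "move", { "fetch", "read",  "write", NULL }, { 2, 1, 1 } },
    { "add",  { "fetch", "alu",    NULL },         { 2, 1 }    },
    { "bne",  { "fetch", "branch", NULL },         { 2, 3 }    },
};

/* Translate a compiled instruction stream into micro-instructions and sum
   their timings.  A toy pattern rule: a branch followed by any fetch costs
   one extra cycle (an invented example of a multi-micro-op pattern). */
static int micro_analyze(const char **code, int n)
{
    int total = 0, prev_branch = 0;

    for (int i = 0; i < n; i++)
        for (size_t r = 0; r < sizeof rules / sizeof rules[0]; r++) {
            if (strcmp(code[i], rules[r].mnemonic) != 0)
                continue;
            for (int m = 0; rules[r].micros[m]; m++) {
                total += rules[r].cycles[m];
                if (prev_branch && strcmp(rules[r].micros[m], "fetch") == 0)
                    total += 1;              /* pattern-matched penalty */
                prev_branch = strcmp(rules[r].micros[m], "branch") == 0;
            }
            break;
        }
    return total;
}

int main(void)
{
    const char *segment[] = { "move", "add", "bne", "move" };
    printf("predicted cycles: %d\n", micro_analyze(segment, 4));
    return 0;
}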


Real-time Systems | 1999

Timing Analysis for Data and Wrap-Around Fill Caches

Randall T. White; Frank Mueller; Christopher A. Healy; David B. Whalley; Marion G. Harmon

The contributions of this paper are twofold. First, an automatic tool-based approach is described to bound worst-case data cache performance. The approach works on fully optimized code, performs the analysis over the entire control flow of a program, detects and exploits both spatial and temporal locality within data references, and produces results typically within a few seconds. Results obtained by running the system on representative programs are presented and indicate that timing analysis of data cache behavior usually results in significantly tighter worst-case performance predictions. Second, a method to deal with realistic cache filling approaches, namely wrap-around-filling for cache misses, is presented as an extension to pipeline analysis. Results indicate that worst-case timing predictions become significantly tighter when wrap-around-fill analysis is performed. Overall, the contribution of this paper is a comprehensive report on methods and results of worst-case timing analysis for data caches and wrap-around caches. The approach taken is unique and provides a considerable step toward realistic worst-case execution time prediction of contemporary architectures and its use in schedulability analysis for hard real-time systems.
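
The wrap-around-fill extension can be illustrated with a small calculation of when each word of a missed cache line becomes available; the line size and latencies below are invented.

#include <stdio.h>

#define WORDS_PER_LINE   4
#define FIRST_WORD_DELAY 6   /* assumed cycles until the requested word arrives */
#define PER_WORD_DELAY   2   /* assumed cycles between subsequent words */

/* Wrap-around fill: on a miss the line is filled starting with the word
   that caused the miss and then wraps around the line boundary, so a
   later reference to another word of the same line only stalls until
   that particular word has arrived, not until the whole line is present. */
static int word_ready_time(int requested_word, int word)
{
    int position = (word - requested_word + WORDS_PER_LINE) % WORDS_PER_LINE;
    return FIRST_WORD_DELAY + position * PER_WORD_DELAY;
}

int main(void)
{
    int requested = 2;       /* the miss was caused by word 2 of the line */

    for (int w = 0; w < WORDS_PER_LINE; w++)
        printf("word %d available after %d cycles\n",
               w, word_ready_time(requested, w));
    return 0;
}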


Real-time Systems | 1994

A retargetable technique for predicting execution time of code segments

Marion G. Harmon; Theodore P. Baker; David B. Whalley

Predicting the execution times of straight-line code sequences is a fundamental problem in the design and evaluation of hard real-time systems. The reliability of system-level timings and schedulability analysis rests on the accuracy of execution time predictions for the basic schedulable units of work. Obtaining such predictions for contemporary microprocessors is difficult. This paper presents a new technique called micro-analysis for predicting point-to-point execution times on code segments. It uses machine-description rules, similar to those that have proven useful for code generation and peephole optimization, to translate compiled object code into a sequence of very low-level (micro) instructions. The stream of micro-instructions is then analyzed for timing, via a three-level pattern matching scheme. At this low level, the effect of advanced features such as instruction caching and overlap can be taken into account. This technique is compiler- and language-independent, and retargetable. This paper also describes a prototype system in which the micro-analysis technique is integrated with an existing C compiler. This system predicts the bounded execution time of statement ranges or simple (non-nested) C functions at compile time.


real time technology and applications symposium | 1996

Supporting the specification and analysis of timing constraints

Lo Ko; Christopher A. Healy; Emily Jane Ratliff; Robert D. Arnold; David B. Whalley; Marion G. Harmon

Real-time programmers have to deal with the problem of relating timing constraints associated with source code to sequences of machine instructions. This paper describes an environment to assist users in the specification and analysis of timing constraints. A user is allowed to specify timing constraints within the source code of a C program. A user interface for a timing analyzer was developed to depict whether these constraints were violated or met. In addition, the interface allows portions of programs to be quickly selected, with the corresponding bounded times, source code lines, and machine instructions automatically displayed. The result is a user-friendly environment that supports the user specification and analysis of timing constraints at a high (source code) level and retains the accuracy of low (machine code) level analysis.
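
As an illustration of the kind of source-level constraint such an environment supports, the fragment below attaches a worst-case bound to a loop. The annotation style, routine names, and the 500-microsecond figure are invented for illustration, not the syntax used by the paper's tool.

#include <stdio.h>

#define SAMPLES 64

/* Stand-in for an external input routine. */
static int read_sensor(int i) { return i % 8; }

static int average(void)
{
    int sum = 0;

    /* timing constraint (invented annotation style): the worst-case
       execution time of this loop must not exceed 500 microseconds;
       the analyzer relates this source-level range to the machine
       instructions generated for it and checks the bound. */
    for (int i = 0; i < SAMPLES; i++)
        sum += read_sensor(i);

    return sum / SAMPLES;
}

int main(void)
{
    printf("average = %d\n", average());
    return 0;
}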


Software - Practice and Experience | 1999

Timing constraint specification and analysis

Lo Ko; Naghan Al-Yaqoubi; Christopher A. Healy; Emily Jane Ratliff; Robert D. Arnold; David B. Whalley; Marion G. Harmon

Real-time programmers have to deal with the problem of relating timing constraints associated with source code to sequences of machine instructions. This paper describes an environment to assist users in the specification and analysis of timing constraints. A timing analyzer predicts the best and worst case bounds for these constrained portions of code. A user interface for this timing analyzer was developed to depict whether these constraints were violated or met. A user is allowed to specify timing constraints within the source code of a C program. The user interface also provides three different methods for interactively selecting portions of programs. After each selection the corresponding bounded times, source code lines, and machine instructions are automatically displayed. Users are prevented from selecting portions of the program for which timing bounds cannot be obtained. In addition, a technique is presented that allows the timing analysis to scale efficiently with complex functions and loops. The result is a user-friendly environment that supports the user specification and analysis of timing constraints at a high (source code) level and retains the accuracy of low (machine code) level analysis.


languages, compilers, and tools for embedded systems | 1995

Supporting user-friendly analysis of timing constraints

Lo Ko; David B. Whalley; Marion G. Harmon

Real-time programmers have to deal with the problem of relating timing constraints associated with source code lines to sequences of machine instructions. This paper describes an interface that was developed to assist users in this task. Portions of programs can be quickly selected and the corresponding bounded times, source code lines, and machine instructions are automatically displayed. In addition, users are restricted to only selecting portions of the program for which timing bounds can be obtained. The result is a user-friendly interface that assists programmers in the analysis of timing constraints within a program.


international parallel processing symposium | 1999

Transparent Real-Time Monitoring in MPI

Samuel H. Russ; Rashid Jean-Baptiste; Tangirala Shailendra Krishna Kumar; Marion G. Harmon

MPI has emerged as a popular way to write architecture-independent parallel programs. By modifying an MPI library and associated MPI run-time environment, transparent extraction of timestamped information is possible. The wall-clock time at which specific MPI communication events begin and end can be recorded, collected, and provided to a central scheduler. The infrastructure to create and collect these events has been implemented and tested, and a future architecture that can use this information is described.
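
The paper achieves transparency by modifying the MPI library and run-time environment themselves; a related, standard way to intercept communication events is MPI's profiling interface (PMPI), sketched below under that assumption. Forwarding to a central scheduler is only hinted at by a print statement.

#include <stdio.h>
#include <mpi.h>

/* Sketch of timestamping MPI communication events through the standard
   MPI profiling interface (PMPI).  This wrapper only illustrates the idea
   of recording when a communication event begins and ends. */
int MPI_Send(const void *buf, int count, MPI_Datatype type,
             int dest, int tag, MPI_Comm comm)
{
    double start = MPI_Wtime();                     /* wall-clock start */
    int rc = PMPI_Send(buf, count, type, dest, tag, comm);
    double end = MPI_Wtime();                       /* wall-clock end */

    /* A real monitor would collect these records and forward them to a
       central scheduler rather than printing them. */
    fprintf(stderr, "MPI_Send to %d: start=%.6f end=%.6f\n",
            dest, start, end);
    return rc;
}

/* Minimal test: run with at least two ranks. */
int main(int argc, char **argv)
{
    int rank, value = 42;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    MPI_Finalize();
    return 0;
}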

Collaboration


Dive into Marion G. Harmon's collaborations.

Top Co-Authors

Frank Mueller
North Carolina State University

Lo Ko
Florida State University