Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Gordon Lyon is active.

Publication


Featured research published by Gordon Lyon.


The Journal of Supercomputing | 1994

Synthetic-perturbation tuning of MIMD programs

Gordon Lyon; Robert Snelick; Raghu N. Kacker

Synthetic-perturbation tuning (SPT) is a novel technique for assaying and improving the performance of programs on MIMD systems. Conceptually, SPT brings the powerful, mathematical perspective of statistically designed experiments to the interdependent, sometimes refractory aspects of MIMD program tuning. Practically, synthetic perturbations provide a much needed quick-change mechanism for what otherwise would be ad hoc, hand-configured experiment setups. Overall, the technique identifies bottlenecks in programs directly as quantitative effects upon a measured response. SPT works on programs for both shared and distributed memory and it scales well with increasing system size.
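
The core mechanism can be sketched in a few lines. The following is a minimal Python illustration of the idea, not the authors' tool: a calibrated busy-wait delay is switched on or off inside each candidate code segment, and total run time is recorded for every on/off combination. The segment names, workloads, and delay length here are hypothetical; a segment whose delay moves run time the most is the stronger bottleneck candidate.

```python
import time

# Minimal sketch of the synthetic-perturbation idea (hypothetical workload,
# not the authors' tool): a calibrated artificial delay is switched on or off
# inside a candidate code segment, and run time is measured for each setting.

def synthetic_delay(active: bool, seconds: float = 0.01) -> None:
    """Busy-wait for a fixed interval when the perturbation is active."""
    if not active:
        return
    end = time.perf_counter() + seconds
    while time.perf_counter() < end:
        pass  # spin, so the delay consumes CPU the way extra work would

def run_program(perturb_segment_a: bool, perturb_segment_b: bool) -> float:
    """Time one run of a toy two-segment workload with optional perturbations."""
    start = time.perf_counter()
    # --- segment A: hypothetical compute phase ---
    total = sum(i * i for i in range(200_000))
    synthetic_delay(perturb_segment_a)
    # --- segment B: hypothetical reduction phase ---
    total += sum(range(200_000))
    synthetic_delay(perturb_segment_b)
    return time.perf_counter() - start

if __name__ == "__main__":
    # One run per on/off combination; a real SPT study would replicate these
    # and analyze them with a statistically designed experiment.
    for a in (False, True):
        for b in (False, True):
            print(f"A={a!s:5} B={b!s:5} time={run_program(a, b):.4f}s")
```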


Software - Practice and Experience | 1994

Synthetic-perturbation techniques for screening shared memory programs

Robert Snelick; Joseph JáJá; Raghu N. Kacker; Gordon Lyon

The synthetic‐perturbation screening (SPS) methodology is based on an empirical approach; SPS introduces artificial perturbations into the MIMD program and captures the effects of such perturbations by using the modern branch of statistics called design of experiments. SPS can provide the basis of a powerful tool for screening MIMD programs for performance bottlenecks. This technique is portable across machines and architectures, and scales extremely well on massively parallel processors. The purpose of this paper is to explain the general approach and to extend it to address specific features that are the main source of poor performance on the shared memory programming model. These include performance degradation due to load imbalance and insufficient parallelism, and overhead introduced by synchronizations and by accessing shared data structures. We illustrate the practicality of SPS by demonstrating its use on two very different case studies: a large image understanding benchmark and a parallel quicksort.
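
One shared-memory feature the paper targets, synchronization overhead, can be probed with a perturbation of this kind. Below is a hedged Python threading sketch (a toy stand-in, not the paper's SPS implementation): a synthetic delay is placed inside a lock-protected critical section, and the response of total run time to that delay hints at how strongly the synchronization gates progress. Thread counts, iteration counts, and the delay value are arbitrary choices for illustration.

```python
import threading
import time

# Hedged illustration (not the paper's SPS tool): perturb the time spent inside
# a lock-protected critical section and observe how the run time of a
# multithreaded loop responds. A strong response marks the synchronization
# as a bottleneck candidate.

LOCK = threading.Lock()
counter = 0

def worker(iterations: int, critical_delay: float) -> None:
    global counter
    for _ in range(iterations):
        with LOCK:
            counter += 1                        # shared-data update under the lock
            if critical_delay:
                time.sleep(critical_delay)      # synthetic perturbation in the critical section

def timed_run(num_threads: int, critical_delay: float) -> float:
    global counter
    counter = 0
    threads = [threading.Thread(target=worker, args=(200, critical_delay))
               for _ in range(num_threads)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start

if __name__ == "__main__":
    for delay in (0.0, 0.001):
        print(f"4 threads, critical-section delay {delay}: {timed_run(4, delay):.3f}s")
```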


Communications of The ACM | 1978

Packed scatter tables

Gordon Lyon

Scatter tables for open addressing benefit from recursive entry displacements, cutoffs for unsuccessful searches, and auxiliary cost functions. Compared with conventional methods, the new techniques provide substantially improved tables that resemble exact-solution optimal packings. The displacements are depth-limited approximations to an enumerative (exhaustive) optimization, although packing costs remain linear—O(n)—with table size n. The techniques are primarily suited for important fixed (but possibly quite large) tables for which reference frequencies may be known: op-code tables, spelling dictionaries, access arrays. Introduction of frequency weights further improves retrievals, but the enhancement may degrade cutoffs.
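
To make the displacement idea concrete, here is a simplified open-addressing table in Python. It is only loosely in the spirit of the paper: instead of Lyon's depth-limited enumerative packing with frequency weights, it uses a much simpler Robin Hood-style rule, displacing a resident entry whose probe distance from its home slot is shorter, which likewise tends to shorten and even out search lengths; the empty-slot test gives an early cutoff for unsuccessful searches.

```python
# Simplified open-addressed scatter table with entry displacement.
# This is a Robin Hood-style stand-in for illustration, NOT Lyon's
# depth-limited enumerative packing.

class ScatterTable:
    def __init__(self, size: int):
        self.size = size
        self.slots = [None] * size              # each slot holds (key, value) or None

    def _home(self, key) -> int:
        return hash(key) % self.size            # home slot of a key

    def insert(self, key, value) -> None:
        entry, dist = (key, value), 0
        for _ in range(self.size):
            i = (self._home(entry[0]) + dist) % self.size
            resident = self.slots[i]
            if resident is None:
                self.slots[i] = entry
                return
            resident_dist = (i - self._home(resident[0])) % self.size
            if resident_dist < dist:            # displace the better-placed resident
                self.slots[i], entry = entry, resident
                dist = resident_dist
            dist += 1
        raise RuntimeError("table full")

    def lookup(self, key):
        dist = 0
        while dist < self.size:
            i = (self._home(key) + dist) % self.size
            entry = self.slots[i]
            if entry is None:
                return None                     # cutoff for unsuccessful search
            if entry[0] == key:
                return entry[1]
            dist += 1
        return None

if __name__ == "__main__":
    t = ScatterTable(8)
    for k in (3, 11, 19, 5):                    # 3, 11, 19 collide at home slot 3
        t.insert(k, f"value-{k}")
    print(t.lookup(19), t.lookup(42))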


ACM StandardView | 1997

Metrology for information technology

Lisa J. Carnahan; Gary Carver; M M. Gray; Michael D. Hogan; Theodore Hopp; Jeffrey Horlick; Gordon Lyon; Elena R. Messina

In May 1996, NIST management requested a white paper on metrology for information technology (IT). A task group was formed to develop this white paper with representatives from the Manufacturing Engineering Laboratory (MEL), the Information Technology Laboratory (ITL), and Technology Services (TS). The task group members had a wide spectrum of experiences and perspectives on testing and measuring physical and IT quantities. The task group believed that its collective experience and knowledge were probably sufficient to investigate the underlying question of the nature of IT metrology. During the course of its work, the task group did not find any previous work addressing the overall subject of metrology for IT. The task group found it to be both exciting and challenging to possibly be first in what should be a continuing area of study. After some spirited deliberations, the task group was able to reach consensus on its white paper. Also, as a result of its deliberations, the task group decided that this white paper should suggest possible answers rather than assert definitive conclusions. In this spirit, the white paper suggests: a scope and a conceptual basis for IT metrology; a taxonomy for IT methods of testing; the status of IT testing and measurement; opportunities to advance IT metrology; overall roles for NIST; and a recapitulation of the importance of IT metrology to the U.S. The task group is very appreciative of having had the opportunity to produce this white paper. The task group hopes that this white paper will provide food for thought for our intended audience: NIST management and technical staff and our colleagues elsewhere who are involved in various aspects of testing and measuring IT.


Software - Practice and Experience | 1995

A scalability test for parallel code

Gordon Lyon; Raghu N. Kacker; Arnaud Linz

Code scalability, crucial on any parallel system, determines how well parallel code avoids becoming a bottleneck as its host computer is made larger. The scalability of computer code can be estimated by statistically designed experiments that empirically approximate a multivariate Taylor expansion of the code's execution response function. Each suspected code bottleneck corresponds to a first-order term in the expansion, the coefficient for that term indicating how sensitive execution is to changes in the suspect location. However, it is the expansion coefficients for second-order interactions between code segments and the number of processors that are fundamental to discovering which program elements impede parallel speedup. A new, unified view of these second-order coefficients yields an informal relative scalability test of high utility in code development. Discussion proceeds through actual examples, including a straightforward illustration of the test applied to SLALOM, a complex, multiphase benchmark. A quick graphical shortcut makes the scalability test readily accessible.
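
The expansion can be written out schematically as follows. The notation here is chosen for illustration and is not necessarily the paper's: T is measured run time, the d_i are synthetic delays inserted in code segments 1..k, and p is the processor count.

```latex
% Hedged notation sketch (symbols chosen here, not necessarily the paper's).
% T(d_1,...,d_k,p): measured run time as a function of the synthetic delays
% d_i inserted in code segments 1..k and the processor count p.
\[
T(d_1,\dots,d_k,p) \;\approx\; \beta_0
  + \sum_{i=1}^{k} \beta_i\, d_i
  + \gamma\, p
  + \sum_{i=1}^{k} \beta_{ip}\, d_i\, p
  + \cdots
\]
% The first-order coefficients \beta_i measure how sensitive run time is to
% extra work in segment i; the second-order interaction coefficients
% \beta_{ip} indicate whether that sensitivity grows with the number of
% processors, i.e. which segments impede parallel speedup.
```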


Theoretical Computer Science | 1989

Design factors for parallel processing benchmarks

Gordon Lyon

Performance benchmarks should be embedded in comprehensive frameworks that suitably set their context of use. One universal framework appears beyond reach, since distinct architectural clusters are emerging with separate emphases. Large application benchmarks are most successful when they run well on a machine, and thereby demonstrate the economic compatibility of job and architecture. The present value of smaller benchmarks is diagnostic, although sets of them would encourage the parametric study of architectures and applications; an extended example illustrates this last aspect.


Software - Practice and Experience | 1975

Simple transforms for instrumenting FORTRAN decks

Gordon Lyon; Rona B. Stillman

A recent revival of interest in measuring program execution behaviour has led to a number of distinct approaches. Arguments are given for a fairly simple method of modifying FORTRAN source code to collect frequency counts. No symbol table is necessary and only a single reserved name is introduced into the source.
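
The flavor of such a transform can be suggested with a short sketch. The Python script below is far cruder than the paper's actual rules and is purely illustrative: it inserts a frequency-count increment at the entry of each FORTRAN program unit, using a single reserved array name (the hypothetical identifier IZCNT), so no symbol table is needed.

```python
# Hedged sketch of a source-to-source instrumentation transform in the spirit
# of the paper (much cruder than its actual rules): insert a counter increment
# at the entry of every FORTRAN program unit, using one reserved array name
# (the hypothetical IZCNT) so that no symbol table is required.

RESERVED = "IZCNT"
UNIT_KEYWORDS = ("PROGRAM", "SUBROUTINE", "FUNCTION")

def instrument(fortran_lines):
    out, unit = [], 0
    for line in fortran_lines:
        out.append(line)
        body = line[6:].lstrip().upper()        # fixed form: statement text starts in column 7
        is_comment = line[:1].upper() in ("C", "*")
        if not is_comment and body.startswith(UNIT_KEYWORDS):
            unit += 1                           # new program unit: count each entry
            out.append(f"      {RESERVED}({unit}) = {RESERVED}({unit}) + 1")
    return out, unit

if __name__ == "__main__":
    sample = [
        "      PROGRAM DEMO",
        "      CALL WORK",
        "      END",
        "      SUBROUTINE WORK",
        "      RETURN",
        "      END",
    ]
    lines, n = instrument(sample)
    print("\n".join(lines))
    print(f"{n} counter(s) inserted")
```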


Information Processing Letters | 2002

Comparison of two code scalability tests

Gordon Lyon

When a computer system is expensive to use or is not often available, one may want to tune software for it via analytical models that run on more common, less costly machines. In contrast, if the host system is readily available, the attraction of analytical models is far less. One instead employs the actual system, testing and tuning its software empirically. Two examples of code scalability testing illustrate how these approaches differ in objectives and costs, and how they complement each other in usefulness. Concurrent computing requires scalable code [1,8,12]. Successes of a parallel application often fuel demands that it handle an expanded range. It should do this without undue waste of additional system resources. Definitions of scalability will vary according to circumstances: when looking for speedup, problem size is fixed and the host system grows; in another case, one evaluates an enlarged problem together with a larger host [3]. The discussion that follows assumes
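
For reference, the two scalability settings mentioned above correspond to standard textbook quantities; the notation below is generic and not necessarily the paper's. Fixed-size speedup and efficiency compare a growing machine against a single processor, while the scaled view tracks efficiency as problem size and machine size grow together.

```latex
% Standard definitions (generic notation, not necessarily the paper's).
% Fixed-size speedup and efficiency on p processors:
\[
S(p) = \frac{T(1)}{T(p)}, \qquad E(p) = \frac{S(p)}{p},
\]
% and for the scaled case, where the problem size n grows with the machine,
% one tracks efficiency as n and p increase together:
\[
E(n, p) = \frac{T(n, 1)}{\,p \, T(n, p)\,}.
\]
```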


Information Processing Letters | 1995

Using synthetic perturbations and statistical screening to assay shared-memory programs

Robert Snelick; Joseph JáJá; Raghu N. Kacker; Gordon Lyon

Synthetic-perturbation screening (SPS hereafter, for brevity) is a diagnostic technique employing artificial code (in the discussion to follow, delays) placed within segments of an MIMD program. These insertions simulate code changes in suspected program bottlenecks [6]. Screening techniques based upon statistical experimental design then flag those program segments that are most sensitive to perturbation (delay). A subset of program segments so flagged can be candidates for improvement [1,4]. The results are sensitivity analyses of specimen programs in terms of their questionable sections of code. This provides a portable, scalable and generic basis for assaying MIMD programs; the approach is quite powerful. Core ideas for SPS are developed in [6].
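
The screening step itself reduces to a simple computation, sketched below in Python with hypothetical numbers (this is not the authors' tool): given measured run times for every on/off combination of k synthetic delays, each segment's main effect is the difference between its average "delay on" and "delay off" run times, and the largest effects are flagged as bottleneck candidates. The 0.1-second flagging threshold is an arbitrary choice for the example; a real study would judge effects against replicate noise.

```python
from itertools import product
from statistics import mean

# Hedged sketch of the statistical screening step (hypothetical data, not the
# authors' tool): estimate each perturbed segment's main effect on run time
# from a two-level full factorial experiment, then flag the largest effects.

def main_effects(times: dict, k: int) -> list:
    """times maps a tuple of k on/off flags (0/1) to a measured run time."""
    effects = []
    for i in range(k):
        on = mean(t for flags, t in times.items() if flags[i] == 1)
        off = mean(t for flags, t in times.items() if flags[i] == 0)
        effects.append(on - off)                 # average response to this segment's delay
    return effects

if __name__ == "__main__":
    # Hypothetical measurements for k = 3 perturbed segments (seconds).
    k = 3
    measured = {flags: 1.0 + 0.40 * flags[0] + 0.05 * flags[1] + 0.02 * flags[2]
                for flags in product((0, 1), repeat=k)}
    for i, e in enumerate(main_effects(measured, k)):
        flag = "  <-- candidate bottleneck" if e > 0.1 else ""
        print(f"segment {i}: effect {e:+.3f} s{flag}")
```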


IEEE International Software Metrics Symposium | 1994

A simple scalability test for MIMD code

Gordon Lyon; Raghu N. Kacker

Code scalability, which is crucial in any parallel system, determines how well parallel code avoids becoming a bottleneck as its host computer is made larger. The scalability of computer code can be estimated by statistically designed experiments that empirically approximate a multivariate Taylor expansion of the code's execution response function. Each suspected code bottleneck corresponds to a first-order term in the expansion, the coefficient for that term indicating how sensitive execution is to changes in the suspect location. However, it is the coefficients for second-order interactions between code segments and the number of processors that are fundamental in discovering which program elements limit parallel speedup. Extending an earlier formulation, a new unified view via these second-order terms yields an informal scaling test of high utility in code development.
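
Informally, the segment-by-processors interaction can be estimated from just four timed runs, as in the back-of-envelope Python sketch below (hypothetical numbers, not the paper's formulas): switch a synthetic delay in one segment off and on at a small and a large processor count, and take the difference of differences. If adding work to the segment hurts more on the larger machine, the segment is a scaling bottleneck candidate.

```python
# Hedged back-of-envelope version of the informal scaling test (hypothetical
# run times, not the paper's formulas): a synthetic delay in one code segment
# is switched off/on at a small and a large processor count, and the
# difference of differences approximates the segment-by-processors interaction.

def interaction(t_off_small: float, t_on_small: float,
                t_off_large: float, t_on_large: float) -> float:
    sensitivity_small = t_on_small - t_off_small   # cost of the delay on the small machine
    sensitivity_large = t_on_large - t_off_large   # cost of the delay on the large machine
    return sensitivity_large - sensitivity_small

if __name__ == "__main__":
    # Hypothetical run times in seconds.
    ix = interaction(t_off_small=10.0, t_on_small=10.4,
                     t_off_large=3.0, t_on_large=3.9)
    verdict = "likely limits speedup" if ix > 0 else "scales acceptably"
    print(f"interaction estimate: {ix:+.2f} s -> segment {verdict}")
```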

Collaboration


Dive into Gordon Lyon's collaborations.

Top Co-Authors

Raghu N. Kacker (National Institute of Standards and Technology)
Robert Snelick (National Institute of Standards and Technology)
Lisa J. Carnahan (National Institute of Standards and Technology)
Elena R. Messina (National Institute of Standards and Technology)
Gary Carver (National Institute of Standards and Technology)
Jeffrey Horlick (National Institute of Standards and Technology)
M M. Gray (National Institute of Standards and Technology)
Michael D. Hogan (National Institute of Standards and Technology)
Arnaud Linz (National Institute of Standards and Technology)