Publication


Featured research published by Jon Mauney.


Software Engineering Symposium on Practical Software Development Environments | 1984

The Poe language-based editor project

Charles N. Fischer; Gregory F. Johnson; Jon Mauney; Anil Pal; Daniel L. Stock

Editor Allan Poe (Pascal Oriented Editor) is a full-screen language-based editor (LBE) that knows the syntactic and semantic rules of Pascal. It is the first step in the development of a comprehensive Pascal program development environment. Poe's design began in 1979; version 1 is currently operational on Vax 11s under Berkeley Unix and on HP 9800-series personal workstations. Poe is written in Pascal, and is designed to be readily transportable to new machines. An editor-generating system called Poegen is operational, and much of the language-specific character of Poe is table-driven and retargetable.
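
The retargetability claim above rests on keeping the editor core language-independent and driving it from per-language tables. The sketch below illustrates that idea only in spirit; the table format, construct names, and templates are hypothetical, not the tables Poegen actually generates.

```python
# Hypothetical illustration of a table-driven, language-based editor core.
# The construct names and templates below are invented for this sketch; they
# are not the table format produced by Poegen.

PASCAL_TABLE = {
    "if":     "if <condition> then\n  <statement>",
    "while":  "while <condition> do\n  <statement>",
    "assign": "<variable> := <expression>",
}

def expand_construct(language_table, construct):
    """Return the syntactic skeleton the editor inserts for a construct.

    Retargeting the editor to another language means swapping language_table,
    not changing the editor core.
    """
    return language_table[construct]

if __name__ == "__main__":
    print(expand_construct(PASCAL_TABLE, "while"))
```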


IEEE Transactions on Parallel and Distributed Systems | 1995

A scalable scheduling scheme for functional parallelism on distributed memory multiprocessor systems

Santosh Pande; Dharma P. Agrawal; Jon Mauney

We attempt a new variant of the scheduling problem by investigating the scalability of the schedule length with the required number of processors, performing scheduling partially at compile time and partially at run time. Assuming an infinite number of processors, the compile-time schedule is found using a new concept of the threshold of a task, which quantifies a trade-off between the schedule length and the degree of parallelism. The schedule is found to minimize either the schedule length or the number of required processors, and it satisfies: a feasibility condition, which guarantees that the schedule delay of a task from its earliest start time is below the threshold, and an optimality condition, which uses a merit function to decide the best task-processor match for a set of tasks competing for a given processor. At run time, the tasks are merged, producing a schedule for a smaller number of available processors. This allows the program to be scaled down to the processors actually available at run time. The usefulness of this scheduling heuristic has been demonstrated by incorporating the scheduler in the compiler back end for targeting Sisal (Streams and Iterations in a Single Assignment Language) on the iPSC/860.
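
The run-time step described above, in which a compile-time schedule built for unboundedly many processors is scaled down to the machine actually available, can be pictured with a simple folding heuristic. The sketch below greedily merges per-virtual-processor task lists onto the least-loaded physical processor; it is a minimal illustration under assumed costs, not the merge rule used in the paper.

```python
import heapq

def merge_schedule(virtual_schedules, num_physical):
    """Fold per-virtual-processor task lists onto num_physical real processors.

    virtual_schedules: list of task lists, one per virtual processor, where each
    task is a (name, cost) pair.  Greedy heuristic: heaviest virtual processor
    first, always onto the currently least-loaded physical processor.
    Illustrative only; not the paper's merge rule.
    """
    physical = [[] for _ in range(num_physical)]
    loads = [(0.0, p) for p in range(num_physical)]   # (current load, processor id)
    heapq.heapify(loads)
    for vs in sorted(virtual_schedules, key=lambda s: -sum(c for _, c in s)):
        load, p = heapq.heappop(loads)
        physical[p].extend(vs)
        heapq.heappush(loads, (load + sum(c for _, c in vs), p))
    return physical

# Example: a compile-time schedule for 4 virtual processors, run on 2 real ones.
virtual = [[("t1", 3.0)], [("t2", 5.0)], [("t3", 2.0), ("t4", 2.0)], [("t5", 1.0)]]
print(merge_schedule(virtual, 2))
```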


Journal of Parallel and Distributed Computing | 1994

A Threshold Scheduling Strategy for Sisal on Distributed-Memory Machines

Santoshkumar S. Pande; Dharma P. Agrawal; Jon Mauney

The problem of scheduling tasks on distributed memory machines is known to be NP-complete in the strong sense, ruling out the possibility of a pseudo-polynomial algorithm. This paper introduces a new heuristic algorithm for scheduling Sisal (Streams and Iterations In a Single Assignment Language) programs on a distributed memory machine, the Intel Touchstone i860. Our compile-time scheduling method works on IF-2, an intermediate form based on the dataflow parallelism in the program. We initially carry out a dependence analysis to bind the implicit dependencies across IF-2 graph boundaries, followed by a cost assignment based on Intel Touchstone i860 timings. The scheduler works in two phases. The first phase finds the earliest and latest completion times of each task, given by the shortest and longest paths from the root task to the given task, respectively. A threshold, defined as the difference between the latest and the earliest start times of the task, is found. The scheduler varies the value of the allowable threshold and determines the best value for minimal schedule length. In the second phase, we merge the processors to generate a schedule that matches the available number of processors. Scheduling results for several benchmark programs are included to demonstrate the effectiveness of our approach.
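
The per-task threshold described above can be computed with two passes over the task graph in topological order: a shortest-path pass for the earliest arrival from the root and a longest-path pass for the latest. The sketch below uses a toy task graph and made-up costs; it illustrates the threshold computation only, not the full two-phase scheduler.

```python
from collections import defaultdict

def task_thresholds(tasks, succs, cost, root):
    """Threshold of a task = longest-path arrival - shortest-path arrival from root.

    tasks must be given in topological order; succs maps a task to its successors;
    cost maps a task to its execution time.  Toy cost model, illustrative only.
    """
    INF = float("inf")
    earliest = defaultdict(lambda: INF)    # shortest-path completion time from root
    latest = defaultdict(lambda: -INF)     # longest-path completion time from root
    earliest[root] = latest[root] = cost[root]
    for t in tasks:                        # topological order: predecessors already final
        for s in succs.get(t, []):
            earliest[s] = min(earliest[s], earliest[t] + cost[s])
            latest[s] = max(latest[s], latest[t] + cost[s])
    return {t: latest[t] - earliest[t] for t in tasks}

# Diamond-shaped task graph: root forks into a and b, which join at sink.
tasks = ["root", "a", "b", "sink"]
succs = {"root": ["a", "b"], "a": ["sink"], "b": ["sink"]}
cost = {"root": 1, "a": 2, "b": 5, "sink": 1}
print(task_thresholds(tasks, succs, cost, "root"))   # only sink has nonzero slack
```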


Acta Informatica | 1992

A simple, fast, and effective LL(1) error repair algorithm

Charles N. Fischer; Jon Mauney

Validation and locally least-cost repair are two simple and effective techniques for dealing with syntax errors. We show how the two can be combined into an efficient and effective error-handler for use with LL(1) parsers. Repairs are computed using an extension of the FMQ algorithm. Tables are created as necessary, rather than precomputed, and possible repairs are kept in a priority queue. Empirical results show that the repairs chosen with this strategy are of very high quality and that speed is quite acceptable.
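
One way to picture the priority-queue part of this strategy: candidate repairs at the error point are ordered by cost, and the cheapest one is taken. The sketch below shows only that selection step, with hypothetical insertion and deletion costs; it is not the FMQ algorithm or the lazily built tables the paper describes.

```python
import heapq

def cheapest_repair(error_token, expected_terminals, insert_cost, delete_cost):
    """Pick a least-cost single-token repair at an LL(1) error point.

    Candidate repairs (delete the offending token, or insert a terminal the
    parser could accept here) are kept in a priority queue keyed by cost.
    Costs are hypothetical; real repairs may be multi-token strings.
    """
    candidates = [(delete_cost[error_token], ("delete", error_token))]
    for t in expected_terminals:
        candidates.append((insert_cost[t], ("insert", t)))
    heapq.heapify(candidates)
    return heapq.heappop(candidates)       # cheapest candidate wins

# Example: the parser expected 'then' or ';' but saw ')'.
insert_cost = {"then": 2, ";": 1}
delete_cost = {")": 3}
print(cheapest_repair(")", ["then", ";"], insert_cost, delete_cost))  # (1, ('insert', ';'))
```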


ACM Transactions on Programming Languages and Systems | 1988

Determining the extent of lookahead in syntactic error repair

Jon Mauney; Charles N. Fischer

Many syntactic error repair strategies examine several additional symbols of input to guide the choice of a repair; a problem is determining how many symbols to examine. The goal of gathering all relevant information is discussed and shown to be impractical; instead we can gather all information relevant to choosing among a set of “minimal repairs.” We show that finding symbols with the property “Moderate Phrase-Level Uniqueness” is sufficient to establish that all information relevant to these minimal repairs has been seen. Empirical results on the occurrence of such symbols in Pascal are presented.


IEEE Parallel & Distributed Technology: Systems & Applications | 1994

Compiling functional parallelism on distributed-memory systems

Santosh S. Pande; Dharma P. Agrawal; Jon Mauney

We have developed an automatic compilation method that combines data- and code-based approaches to schedule a program's functional parallelism onto distributed memory systems. Our method works with Sisal, a parallel functional language, and replaces the back end of the Optimizing Sisal Compiler so that it produces code for distributed memory systems. Our extensions allow the compiler to generate code for Intel's distributed-memory Touchstone iPSC/860 machines (Gamma, Delta, and Paragon). The modified compiler can generate a partition that minimizes program completion time (for systems with many processors) or the required number of processors (for systems with few processors). To accomplish this, we have developed a heuristic algorithm that uses the new concept of threshold to treat the problem of scheduling as a trade-off between schedule length and the number of required processors. Most compilers for distributed memory systems force the programmer to partition the data or the program code. This modified version of a Sisal compiler handles both tasks automatically in a unified framework, and lets the programmer compile for a chosen number of processors.


Static Analysis Symposium | 1994

From processor timing specifications to static instruction scheduling

Edwin A. Harcourt; Jon Mauney; Todd A. Cook

We show how to derive a static instruction scheduler from a formal specification of an instruction-level parallel processor. The mathematical formalism used is SCCS, a synchronous process algebra for specifying timed, concurrent systems. We illustrate the technique by specifying a hypothetical processor that shares many properties of commercial processors (such as the MIPS or SuperSparc) including delayed loads and branches, interlocked floating-point instructions, resource constraints, and multiple instruction issue.
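
The kind of scheduler one derives from such a timing specification is, in essence, a list scheduler driven by a table of latencies and issue constraints. The sketch below is a minimal greedy list scheduler over a hypothetical machine description (a two-issue machine with a delayed load); it stands in for the derived scheduler and is not the SCCS-based derivation itself.

```python
def list_schedule(instrs, deps, latency, issue_width=2):
    """Greedy cycle-by-cycle list scheduler driven by a latency table (illustrative).

    instrs: instruction names in program order; deps: instruction -> set of
    instructions whose results it reads; latency: instruction -> cycles until
    its result is ready.  Machine parameters are hypothetical, standing in for
    the timing information a formal processor specification would provide.
    """
    ready_at = {}                      # instruction -> cycle its result is available
    schedule = []                      # list of (cycle, instructions issued that cycle)
    pending = list(instrs)
    cycle = 0
    while pending:
        issued = []
        for i in list(pending):
            if len(issued) == issue_width:
                break
            # Issue only if every operand is ready by this cycle.
            if all(ready_at.get(d, float("inf")) <= cycle for d in deps.get(i, set())):
                issued.append(i)
                pending.remove(i)
                ready_at[i] = cycle + latency[i]
        schedule.append((cycle, issued))
        cycle += 1
    return schedule

# Example: a delayed load feeding an add; an independent sub can fill the delay slot.
instrs = ["load r1", "add r2", "sub r3"]
deps = {"add r2": {"load r1"}}
latency = {"load r1": 2, "add r2": 1, "sub r3": 1}
for cyc, ops in list_schedule(instrs, deps, latency):
    print(cyc, ops)
```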


Proceedings of the IEEE | 1989

Computational models and resource allocation for supercomputers

Jon Mauney; Dharma P. Agrawal; Y.K. Choe; E.A. Harcourt; S. Kim; W.J. Staats

There are several different architectures used in supercomputers, with differing computational models. These different models present a variety of resource allocation problems that must be solved. The computational needs of a program must be cast in terms of the computational model supported by the supercomputer, and this must be done in a way that makes effective use of the machine's resources. This is the resource allocation problem. The computational models of available supercomputers and the associated resource allocation techniques are surveyed. It is shown that many problems and solutions appear repeatedly in very different computing environments. Some case studies are presented, showing concrete computational models and the allocation strategies used.


International Parallel Processing Symposium | 1991

A message segmentation technique to minimize task completion time

Sukil Kim; Santoshkumar S. Pande; Dharma P. Agrawal; Jon Mauney

Optimal partitioning of multiprocessor programs is a trade-off: as the granularity of subtasks of a parallel task increases, the communication overhead decreases but so does the total parallelism. The authors propose a new technique to determine the optimal segment size of messages between a producer and a consumer to minimize the overall execution time, and apply it to allocation of DOACROSS loops.
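
The trade-off stated above can be made concrete with a toy pipelined producer-consumer timing model: small segments pay a per-message startup cost many times, while large segments reduce the overlap between transmission and consumption. The cost model and parameters below are assumptions for illustration, not the model or numbers used in the paper.

```python
import math

def completion_time(total_bytes, seg_size, startup, send_per_byte, consume_per_byte):
    """Toy two-stage pipeline: the producer transmits fixed-size segments in
    sequence; the consumer processes each segment after it arrives and after it
    finishes the previous one.  Assumed cost model, illustrative only.
    """
    k = math.ceil(total_bytes / seg_size)
    send = startup + seg_size * send_per_byte      # time to transmit one segment
    consume = seg_size * consume_per_byte          # time to process one segment
    arrive = done = 0.0
    for _ in range(k):
        arrive += send
        done = max(done, arrive) + consume
    return done

# Search a range of segment sizes for the one minimizing completion time.
best = min(range(64, 4097, 64),
           key=lambda m: completion_time(64 * 1024, m, startup=100.0,
                                         send_per_byte=0.1, consume_per_byte=0.5))
print("best segment size:", best)
```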


Hawaii International Conference on System Sciences | 1990

B-HIVE: hardware and software for an experimental multiprocessor

Dharma P. Agrawal; Winser E. Alexander; Edward F. Gehringer; Jon Mauney; Thomas K. Miller

B-HIVE, a 24-node experimental multiprocessor computer, is based on a generalized hypercube interconnection structure and features two processors at each node: one to perform application processing and the other to handle communication. The design of the B-HIVE hardware and software is intended to keep run-time overhead to a minimum, and thus provide high performance on a variety of problems, especially signal and image processing. Topics addressed include the B-HIVE multicomputer architecture and node structure, interprocessor communication, software support, and project status.

Collaboration


Dive into Jon Mauney's collaboration.

Top Co-Authors

Charles N. Fischer
University of Wisconsin-Madison

Santoshkumar S. Pande
North Carolina State University

Anil Pal
University of Wisconsin-Madison

Daniel L. Stock
University of Wisconsin-Madison

Edward F. Gehringer
North Carolina State University

Gregory F. Johnson
University of Wisconsin-Madison