Jerry L. Potter
Kent State University
Publications
Featured research published by Jerry L. Potter.
IEEE Computer | 1994
Jerry L. Potter; Johnnie W. Baker; Stephen L. Scott; Arvind K. Bansal; Chokchai Leangsuksun; Chandra R. Asthagiri
Today's increased computing speeds allow conventional sequential machines to effectively emulate associative computing techniques. We present a parallel programming paradigm called ASC (ASsociative Computing), designed for a wide range of computing engines. Our paradigm has an efficient associative-based, dynamic memory-allocation mechanism that does not use pointers. It incorporates data parallelism at the base level, so that programmers do not have to specify low-level sequential tasks such as sorting, looping and parallelization. Our paradigm supports all of the standard data-parallel and massively parallel computing algorithms. It combines numerical computation (such as convolution, matrix multiplication, and graphics) with nonnumerical computing (such as compilation, graph algorithms, rule-based systems, and language interpreters). This article focuses on the nonnumerical aspects of ASC.
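The core ASC idiom is "search, then respond": every cell matching a predicate answers at once, with no programmer-written loops or pointers. A minimal sketch of that style, emulated on a sequential machine with NumPy boolean masks (the field names `color` and `weight` are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical association: parallel fields, one entry per associative cell.
color = np.array(["red", "blue", "red", "green"])
weight = np.array([4, 7, 2, 9])

# Associative search: all cells matching the predicate respond at once;
# the programmer writes no explicit loop and follows no pointers.
responders = (color == "red") & (weight < 5)

# A data-parallel update applied only to the responders.
weight = np.where(responders, weight + 1, weight)
print(weight.tolist())  # [5, 7, 3, 9]
```

The mask plays the role of the responder register in an associative memory; the conditional update touches every cell in lockstep.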
IEEE Computer | 1977
Donald Rohrbacher; Jerry L. Potter
Although not designed with image processing specifically in mind, the Staran parallel processor has had a significant impact on that application area. The machine is fully programmable, yet when applied to image processing it has demonstrated speeds usually associated only with hardwired systems. The result is an ability to execute interactively (response time under 10 seconds) a number of sophisticated image processing algorithms and provide a significant improvement in throughput for batch processing systems.
Proceedings of the IEEE | 1989
Jerry L. Potter; W.C. Meilander
The authors describe the range of hardware variations of array processors, a form of SIMD (single instruction stream, multiple data stream) architecture, comparing and contrasting the significant differences among them and briefly illustrating the wide range of algorithms that can effectively utilize them. Three applications are reviewed. The first application, image convolution, represents the traditionally computation-intensive numerical application areas. SIMD array processors are sufficiently powerful to process digital imagery easily in real time. The second application, an example of real-time database management, is the air traffic control problem. The problem cannot be solved today by networks of computers that are successfully used in similar, less time-critical applications. With an array processor there is sufficient real time remaining after the present system tasks are accomplished to realize additional system enhancements. The third application area, graph algorithms, which is more theoretical, is representative of problems for which the simplicity of the array processor solution results in an execution time better than the best theoretical case for a conventional sequential implementation.
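Image convolution maps naturally onto a SIMD array: conceptually, each processing element holds one pixel, and the only sequential loop is over the (small) kernel. A sketch of that structure, emulated with NumPy shifted-slice arithmetic (kernel assumed symmetric, so correlation and convolution coincide; not the paper's actual code):

```python
import numpy as np

def convolve2d(image, kernel):
    """'Same'-size 2D convolution. On a SIMD array each PE holds one
    pixel; every multiply-accumulate below happens across the whole
    image in lockstep, so only the kernel loop is sequential."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)))  # zero border
    out = np.zeros_like(image, dtype=float)
    for i in range(kh):            # loop over the small kernel only;
        for j in range(kw):        # the per-pixel work is data-parallel
            out += kernel[i, j] * padded[i:i + image.shape[0],
                                         j:j + image.shape[1]]
    return out

img = np.eye(4)
blur = np.ones((3, 3)) / 9.0      # 3x3 box filter
print(convolve2d(img, blur).round(2))
```

On an actual array processor the kernel loop is the instruction stream broadcast by the control unit, which is why the running time depends on the kernel size rather than the image size.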
International Parallel and Distributed Processing Symposium | 2001
Robert A. Walker; Jerry L. Potter; Yanping Wang; Meiduo Wu
This paper describes an initial design of an associative processor for implementation using field-programmable logic devices (FPLDs). The processor is based loosely on earlier work on the STARAN computer, but updated to reflect modern design practices. We also draw on a large body of research at Kent State on the ASC and MASC models of associative processing, and take advantage of an existing compiler for the ASC model. The resulting design consists of an associative array of 8-bit RISC Processing Elements (PEs), operating in byte-serial fashion under the control of an Instruction Stream (IS) Control Unit that can execute assembly language code produced by a machine-specific back-end compiler.
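In a byte-serial design like the one described, the IS Control Unit broadcasts the comparand one byte per step, and each 8-bit PE compares that byte against its own memory, dropping out of the responder set on a mismatch. A toy emulation of that discipline (the function and data names are illustrative, not from the design):

```python
def associative_match(memories, comparand):
    """Byte-serial associative search over equal-length byte strings,
    one string per PE. The outer loop is the instruction stream; the
    inner loop stands in for all PEs acting in lockstep."""
    active = [True] * len(memories)            # responder mask
    for pos, cbyte in enumerate(comparand):    # IS broadcasts one byte/step
        for pe, word in enumerate(memories):   # conceptually in parallel
            if active[pe] and word[pos] != cbyte:
                active[pe] = False             # PE drops out of the search
    return active

pes = [b"STARAN", b"ASCEND", b"STARTS"]
print(associative_match(pes, b"STARAN"))  # [True, False, False]
```

The byte-serial loop is why an 8-bit PE suffices: search time scales with the field width, not with the number of PEs.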
Engineering Applications of Artificial Intelligence | 1992
Arvind K. Bansal; Jerry L. Potter
A model is presented which exploits the data-level massive parallelism present in associative computers for the efficient execution of logic programs with large knowledge bases. The exploitation of data parallelism in goal reduction efficiently prunes non-unifiable clauses, resulting in an effective reduction of shallow backtracking, and marks the potential bindings for single-occurrence variables in a manner that is independent of the number of clauses. During deep backtracking, bindings are released simultaneously using associative search, resulting in a significant reduction in the execution-time overhead of backtracking and garbage collection. A scheme for a logical data structure representation incorporating a direct interface between lists and vectors is described. This allows the efficient integration of symbolic computation and a large class of vectorizable numerical computation on associative supercomputers.
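The key step, pruning non-unifiable clauses in one data-parallel pass so they never feed shallow backtracking, can be sketched with clause heads stored as parallel fields and a single associative comparison per field. A minimal illustration under simplifying assumptions (heads indexed by functor, arity, and first argument only; `"_"` marks an unbound variable; not the paper's representation):

```python
import numpy as np

# Clause heads stored in parallel fields, one row per clause.
functor = np.array(["parent", "parent", "age", "parent"])
arity   = np.array([2, 2, 2, 2])
arg1    = np.array(["tom", "_", "tom", "ann"])   # "_" = unbound variable

def prune(goal_f, goal_n, goal_a1):
    """One associative pass marks every clause whose head could unify
    with the goal; the rest never enter the backtracking search. The
    cost is independent of the number of clauses."""
    return (functor == goal_f) & (arity == goal_n) & \
           ((arg1 == goal_a1) | (arg1 == "_"))

# Goal: parent(tom, X) — only the first two clauses are candidates.
print(prune("parent", 2, "tom").tolist())  # [True, True, False, False]
```

Full unification still runs on the survivors; the associative pass merely guarantees the sequential engine never visits a clause that could not match.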
International Parallel and Distributed Processing Symposium | 2003
Michael Scherger; Johnnie W. Baker; Jerry L. Potter
This paper describes a system software design for multiple instruction stream control in a massively parallel associative computing environment. The purpose of providing multiple instruction stream control is to increase throughput and reduce the amount of parallel slackness inherent in single instruction stream parallel programming constructs. The multiple associative computing (MASC) model is used to describe this technique and a brief introduction to the MASC model of parallel computation is presented. A simple parallel computing example is used to illustrate the techniques for multiple instruction stream control in a massively parallel runtime environment.
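The "parallel slackness" the paper targets is easiest to see at a data-dependent branch: under a single instruction stream, half the PEs idle while the other half executes each arm. Under MASC the active PE set can fork, with each partition following its own instruction stream concurrently. An illustrative emulation (the data and operations are invented for the example):

```python
import numpy as np

# Five PEs, one datum each; a data-dependent test splits the PE set.
data = np.array([3, 8, 1, 9, 4])
branch = data > 4

# Single-IS execution would serialize the two arms, idling each
# partition in turn. Under MASC, IS 1 takes the True partition and
# IS 2 the False partition, and both proceed concurrently.
result = np.empty_like(data)
result[branch] = data[branch] * 2        # work issued by IS 1
result[~branch] = data[~branch] + 100    # work issued by IS 2
print(result.tolist())  # [103, 16, 101, 18, 104]
```

The system-software problem the paper addresses is managing this fork/join of instruction streams at runtime rather than in the emulation above, where the "streams" are just two masked statements.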
Proceedings Heterogeneous Computing Workshop | 1994
Chokchai Leangsuksun; Jerry L. Potter
We have developed and studied various mapping heuristics for a heterogeneous processing (HP) environment. We modify the mapping mechanisms with different communication considerations, processor-selection policies, and mapping-refinement policies. We then compare the results from our heuristics with past efforts. Experiments indicate that our methods give a significant improvement over previous research.
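A common baseline in this family of heuristics assigns each task to the processor that would complete it soonest, given an estimated-time-to-compute (ETC) matrix. A minimal sketch of that greedy policy (this is a generic minimum-completion-time heuristic, not the paper's specific mechanisms, which also weigh communication and refinement):

```python
def map_tasks(etc, n_procs):
    """Greedy minimum-completion-time mapping.
    etc[t][p] = estimated time of task t on processor p."""
    ready = [0.0] * n_procs              # per-processor ready times
    mapping = []
    for costs in etc:
        # assign the task to the processor that finishes it soonest
        p = min(range(n_procs), key=lambda q: ready[q] + costs[q])
        ready[p] += costs[p]
        mapping.append(p)
    return mapping, max(ready)           # assignment and makespan

# Four tasks, two heterogeneous processors (hypothetical ETC values).
etc = [[2, 4], [3, 1], [2, 5], [4, 1]]
print(map_tasks(etc, 2))  # ([0, 1, 0, 1], 4.0)
```

Refinement policies of the kind the abstract mentions would then perturb this initial mapping (e.g. by moving or swapping tasks) whenever doing so lowers the makespan.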
Proceedings of the Workshop on Heterogeneous Processing | 1993
Jerry L. Potter
The principles of associative computing can be applied to heterogeneous networks to provide a high-level method of programming.
IEEE Transactions on Applications and Industry | 1990
Arvind K. Bansal; Jerry L. Potter
A model is presented which is designed to exploit the data parallelism present in associative computers for the efficient execution of logic programs with very large knowledge bases. A scheme is described for a logical data structure representation incorporating a direct interface between lists and vectors. This interface allows the partial integration of symbolic and numerical computation on existing associative supercomputers. A data-parallel goal reduction algorithm which is almost independent of the number of clauses is discussed. This associative goal reduction scheme performs parallel clause pruning and binding of variables with a single occurrence. The associative property of the model effectively reduces the cost of shallow backtracking, deep backtracking, and garbage collection.
International Parallel Processing Symposium | 1992
Chandra R. Asthagiri; Jerry L. Potter
Presents near-constant-time associative parallel lexing (APL) algorithms. The best time complexity claimed thus far is O(log n), where n denotes the number of input characters, for the parallel prefix lexing (PPL) algorithm. The linear state-recording step in the PPL algorithm, which needs to be done only once for each grammar, has been ignored in claiming the O(log n) time complexity for the PPL algorithm. Furthermore, the PPL algorithm does not consider recording line numbers for the tokens or distinguishing identifier tokens as keywords or user identifiers. The APL algorithms perform all of these functions. Thus, without considering the effort spent on these functions, the APL algorithm takes constant time, since every step depends on the length of the tokens, not on the length of the input. Generalizing and including these extra functions, the APL algorithm takes near-constant time.
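The flavor of associative lexing is that every character cell classifies itself and compares itself to its left neighbor simultaneously, so token boundaries emerge in a fixed number of data-parallel steps regardless of input length. A toy two-class sketch with NumPy shifts (alphanumeric vs. other; real lexers need more character classes, and this is not the APL algorithm itself):

```python
import numpy as np

# One character per cell, as in a cellular/associative memory.
src = np.array(list("x1 = 42;"))
is_space = (src == " ")
is_alnum = np.char.isalnum(src)

# A cell starts a token if it is non-space and either follows a space
# (or begins the input) or differs in class from its left neighbor.
left_alnum = np.roll(is_alnum, 1); left_alnum[0] = False
left_space = np.roll(is_space, 1); left_space[0] = True
starts = ~is_space & (left_space | (is_alnum != left_alnum))
print(starts.astype(int).tolist())  # [1, 0, 0, 1, 0, 1, 0, 1]
```

The four marked starts delimit the tokens `x1`, `=`, `42`, `;`; every step above is a constant number of whole-array operations, which is the property the near-constant-time claim rests on.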