
Publication


Featured research published by David A. Poplawski.


Workshop on Computer Architecture Education | 2007

A pedagogically targeted logic design and simulation tool

David A. Poplawski

JLS is a GUI-based digital logic simulation tool specifically designed for use in a wide range of digital logic and computer organization courses. It is comparable in features and functionality to commercial products, but includes many student- and instructor-friendly aspects not found in those products, such as state-machine and truth-table editors, extensive error checking, and multiple simulation-result views. Students quickly become proficient in its use, enabling them to concentrate on circuit design and debugging issues. The circuit drawing interface is convenient enough to allow instructors to use it for classroom presentations, and circuits can be modified and tested so quickly that it promotes exploring alternatives not prepared for in advance. Its non-interactive (batch) execution capability, with parameter settings, configuration files, and textual output, simplifies the grading of large numbers of student projects.


Journal of Computer and System Sciences | 1979

On LL-regular grammars

David A. Poplawski

Several results pertaining to LL-regular grammars are presented. The decidability of whether or not a grammar is LL-regular for a particular regular partition, which was first stated by Nijholt, and the undecidability of whether or not a regular partition exists for which a grammar is LL-regular are proved. An algorithm for converting an LL-regular grammar into a strong LL-regular grammar that generates the same language is presented, and the construction of a two-pass parser is described.


Journal of Parallel and Distributed Computing | 1991

Synthetic models of distributed-memory parallel programs

David A. Poplawski

This paper deals with the construction and use of simple synthetic programs that model the behavior of more complex, real parallel programs. Synthetic programs can be used in many ways: to construct an easily ported suite of benchmark programs, to experiment with alternate parallel implementations of a program without actually writing them, and to predict the behavior and performance of an algorithm on a new or hypothetical machine. Synthetic programs are constructed easily from scratch or from existing programs, and can even be constructed using nothing but information obtained from traces of the real program's execution.


Technical Symposium on Computer Science Education | 2008

JLS: a pedagogically targeted logic design and simulation tool

David A. Poplawski; Zachary Kurmas

JLS is a GUI-based digital logic simulation tool specifically designed for use in a wide range of digital logic and computer organization courses. It is comparable in features and functionality to commercial products, but includes many student- and instructor-friendly aspects not found in those products, such as state-machine and truth-table editors, extensive error checking, and multiple simulation-result views. Students quickly become proficient in its use, enabling them to concentrate on circuit design and debugging issues. The circuit drawing interface is convenient enough to allow instructors to use it for classroom presentations, and circuits can be modified and tested so quickly that it promotes exploring alternatives not prepared for in advance.


Distributed Memory Computing Conference | 1990

Visualizing the Performance of Parallel Matrix Algorithms

R.F. Paul; David A. Poplawski

We have constructed an animation tool called MaTRIX (Matrix TRace In X) for performance evaluation of parallel algorithms for dense matrix operations. It portrays the execution of a program in the context of the application by displaying the primary matrix and showing which parts of the matrix are being operated on, which processors are operating on those parts, and what operations are being performed. Colors and patterns are used to identify activity and differentiate between unique processors and various operations. The animation uses postprocessed trace files generated during the execution of a program, thereby enabling the display to be run at various speeds. Coupled with displays of processor activity and utilization, the animation provides application-oriented performance information that is useful in determining causes of poor performance. The tool uses the X Window System and the tracing facilities in the PICL library, and is thereby portable to a wide range of parallel architectures and visual display devices.


Communications of the ACM | 1973

A simple technique for structured variable lookup

Geoffrey W. Gates; David A. Poplawski

A simple technique for the symbol-table lookup of structured variables based on simple automata theory is presented. The technique offers a deterministic solution to a problem which is currently handled in a nondeterministic manner in PL/I and COBOL compilers.
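The lookup problem the abstract refers to can be illustrated with a short sketch: in PL/I and COBOL, a reference may be only partially qualified, and its components must appear, in order, within some declared fully qualified name. The paper's contribution is resolving such references deterministically with an automaton; the naive matcher below (names and table entries are invented for illustration) only shows the matching rule itself, not the paper's construction.

```python
def is_subsequence(ref, full):
    """True if the components of ref appear, in order, within full."""
    it = iter(full)
    return all(part in it for part in ref)

# Declared structured variables, fully qualified outer-to-inner (illustrative).
declared = {
    ("REC", "NAME", "FIRST"): "slot 0",
    ("REC", "ADDR", "CITY"): "slot 1",
}

def lookup(ref):
    """Return the unique declaration matched by a possibly partial
    qualified reference, e.g. ("REC", "FIRST"); an ambiguous or
    undeclared reference returns None."""
    hits = [entry for full, entry in declared.items()
            if is_subsequence(ref, full)]
    return hits[0] if len(hits) == 1 else None

assert lookup(("REC", "FIRST")) == "slot 0"   # partial qualification resolves
assert lookup(("NAME", "FIRST")) == "slot 0"
assert lookup(("REC",)) is None               # ambiguous: matches two entries
```

Scanning every declared name this way is the nondeterministic behavior the paper criticizes; the deterministic technique precomputes the matching into an automaton so each reference is resolved in a single pass.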


Parallel Computing | 1988

Mapping rings and grids onto the FPS T-series hypercube

David A. Poplawski

One of the desirable aspects of a hypercube is that many other interconnection topologies are contained within it. Two commonly used topologies are the ring and the two-dimensional grid. The mapping of these topologies onto the hypercube is straightforward, but in the FPS T-Series hypercube, algorithms using the standard mapping based on binary reflected Gray codes will not perform well. This is because the standard mapping requires the use of communication links that, in some of the nodes, cannot be communicated on simultaneously. In such nodes, a very time-consuming reset of the link configuration is necessary between every use of the conflicting links. For many algorithms, this results in a large overhead and degrades performance. In this paper it is shown how to configure the links in each node once to map a ring and grid onto the T-Series, thereby eliminating the overhead of resetting the links repeatedly during execution. The mappings are extended to a more general class of hypercube called modulus link-bounded hypercubes, and various properties of the mappings are presented.
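The standard binary reflected Gray code embedding mentioned in this abstract can be sketched as follows; this illustrates only the generic ring-to-hypercube mapping, not the T-Series-specific link configuration the paper develops.

```python
def gray(i: int) -> int:
    """Binary reflected Gray code of i: consecutive values differ in one bit."""
    return i ^ (i >> 1)

def ring_to_hypercube(d: int) -> list:
    """Map a ring of 2**d processes onto a d-dimensional hypercube.

    Ring position k is placed at hypercube node gray(k); ring neighbors
    then occupy node addresses differing in exactly one bit, i.e. nodes
    joined by a single hypercube link.
    """
    return [gray(k) for k in range(2 ** d)]

def hamming(a: int, b: int) -> int:
    """Number of bit positions in which a and b differ."""
    return bin(a ^ b).count("1")

nodes = ring_to_hypercube(3)   # → [0, 1, 3, 2, 6, 7, 5, 4]
# Every ring edge, including the wrap-around, crosses exactly one link.
assert all(hamming(nodes[k], nodes[(k + 1) % len(nodes)]) == 1
           for k in range(len(nodes)))
```

The grid embedding follows the same idea, using a Gray code independently along each dimension; the paper's point is that on the T-Series this mapping forces conflicting links in some nodes, which its one-time link configuration avoids.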


Applied Mathematics and Computation | 1986

Parallel computer architectures

David A. Poplawski

Parallel processing is becoming a dominant way of achieving very high performance in modern supercomputer systems. It is therefore increasingly important that scientists and engineers know how supercomputers achieve parallelism in order to take advantage of the computer's enormous problem-solving capability. A computer solution to a problem must often be expressed so that the parallelism provided by the machine is reflected in the implementation of the program; otherwise the system will perform at only a fraction of its potential speed, since the computing resources will not be used efficiently to solve the problem. In this paper, several ways in which computers are organized to achieve parallelism are described. The descriptions are primarily conceptual and would be a useful introduction for someone wishing to make effective use of a machine that uses one or more of the parallel processing techniques.


Proceedings of the 8th International Conference on Computing Education Research | 2008

JLS/JLSCircuitTester: a comprehensive logic design and simulation tool

David A. Poplawski; Zachary Kurmas

JLS and JLSCircuitTester are logic design, simulation, and testing tools that meet the needs of instructors and students in logic design and computer organization courses. They were designed and implemented by instructors of such courses expressly to lecture with, to build student projects with, and to grade those projects. They are free, portable, easy to install, and easy to learn and use, yet powerful enough to create and test circuits ranging from simple collections of gates to complete CPUs. They come with on-line tutorials, help, and pre-made circuits taken directly from the pages of several commonly used computer organization textbooks.


Hypercube Concurrent Computers and Applications | 1988

Virtual memory for a hypercube multiprocessor

J. M. Francioni; David A. Poplawski; S. Pahwa

Most hypercube programs are structured so that all nodes contain an identical copy of the node program, as well as a complete copy of the node operating system program. This is a tremendous waste of memory, which ends up limiting the size and complexity of hypercube application programs. One way around this problem is to implement a virtual memory on the hypercube, whereby one copy of the node and operating system program is distributed among all the nodes of the hypercube and each node performs demand paging for the pages not resident in that node. Since almost none of the existing hypercubes have the hardware to support a virtual memory, this must be done in software. In this paper, we discuss a feasibility study of a hypercube virtual memory for executable code based on an implementation which requires no hardware support. We also explain the general principles involved in this type of virtual memory, and discuss how the features and restrictions of a hypercube architecture affect the implementation. In particular we show, via simulation results from real hypercube applications, ways to reduce the paging traffic in the hypercube.
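The distributed-code demand-paging scheme described above can be sketched with a small simulation. The page placement, node count, and fault accounting below are invented for illustration and are not the paper's implementation; the sketch only shows the core idea that each node holds a subset of the code pages and faults in the rest on demand.

```python
NUM_NODES = 4
PAGES = list(range(16))                    # code pages of the node program
home = {p: p % NUM_NODES for p in PAGES}   # round-robin placement (illustrative)

class HypercubeNode:
    def __init__(self, nid: int):
        self.nid = nid
        # Each node initially holds only the pages it is home for,
        # so one copy of the code is distributed across all nodes.
        self.resident = {p for p in PAGES if home[p] == nid}
        self.faults = 0

    def touch(self, page: int) -> None:
        """Execute code on `page`; if it is not locally resident, take a
        software page fault and fetch a copy from the page's home node."""
        if page not in self.resident:
            self.faults += 1
            self.resident.add(page)   # stands in for a fetch message

node0 = HypercubeNode(0)
for page in [0, 5, 5, 9, 0]:
    node0.touch(page)
assert node0.faults == 2   # pages 5 and 9 fault once; page 0 is home-resident
```

Reducing the number of such fetch messages, as measured on traces of real hypercube applications, is the paging-traffic question the paper studies.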

Collaboration


Dive into David A. Poplawski's collaboration network.

Top Co-Authors

Zachary Kurmas, Grand Valley State University
J. M. Francioni, Michigan Technological University
R.F. Paul, Michigan Technological University
S. Pahwa, Michigan Technological University