David Klappholz
Stevens Institute of Technology
Publication
Featured research published by David Klappholz.
IEEE Transactions on Parallel and Distributed Systems | 1991
Xiangyun Kong; David Klappholz; Kleanthis Psarris
The I test is a subscript dependence test which extends both the range of applicability and the accuracy of the GCD and Banerjee tests (U. Banerjee, 1976), the standard subscript dependence tests used to determine whether loops may be parallelized/vectorized. It is shown that the I test is useful when, in the event that a positive result must be reported, a definitive positive is of more use than a tentative one, and when insufficient loop iteration information is known for the Banerjee test to apply.
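For context, the GCD test named above reduces to a single divisibility check: the linear dependence equation a1*x1 + ... + an*xn = c has an integer solution if and only if gcd(a1, ..., an) divides c. Below is a minimal sketch of that check (the function name and example subscripts are ours, for illustration only; the bounds-insensitive case is assumed):

```python
from functools import reduce
from math import gcd

def gcd_test(coeffs, const):
    """GCD dependence test for a1*x1 + ... + an*xn = const over the integers.
    Returns False only when no integer solution exists (accesses independent);
    True means "a dependence may exist"."""
    nonzero = [abs(a) for a in coeffs if a != 0]
    if not nonzero:
        return const == 0                      # degenerate equation 0 = const
    return const % reduce(gcd, nonzero) == 0

# Example: A(2*i) written and A(2*j + 1) read give 2*i - 2*j = 1;
# gcd(2, 2) = 2 does not divide 1, so the references can never overlap.
print(gcd_test([2, -2], 1))   # False -> provably independent
print(gcd_test([1, -1], 3))   # True  -> possible dependence
```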
IEEE Transactions on Parallel and Distributed Systems | 1993
Kleanthis Psarris; Xiangyun Kong; David Klappholz
The GCD and Banerjee tests are the standard data dependence tests used to determine whether a loop may be parallelized/vectorized. In an earlier work (1991), the authors presented a new data dependence test, the I test, which extends the accuracy of the GCD and the Banerjee tests. In the original presentation, only the case of general dependence was considered, i.e., the case of dependence with a direction vector of the form (*,*,...,*). In the present work, the authors generalize the I test to check for data dependence subject to an arbitrary direction vector.
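As a point of reference for what "dependence subject to a direction vector" means, the sketch below brute-forces the single-loop case: it asks whether the dependence equation has a solution in which the two iterations stand in a given relation ('<', '=', '>', or the unconstrained '*'). This enumeration is only a toy oracle for small bounds, not the I test; all names are ours.

```python
from itertools import product

def dependent_with_direction(a1, a2, c, lo, hi, direction):
    """Brute-force check: does a1*i1 + a2*i2 = c have a solution with
    lo <= i1, i2 <= hi and i1 <direction> i2?  ('*' means unconstrained.)"""
    cmp = {'<': lambda x, y: x < y, '=': lambda x, y: x == y,
           '>': lambda x, y: x > y, '*': lambda x, y: True}[direction]
    return any(a1 * i1 + a2 * i2 == c
               for i1, i2 in product(range(lo, hi + 1), repeat=2)
               if cmp(i1, i2))

# A(i1) and A(i2 + 1) name the same element exactly when i1 - i2 = 1:
print(dependent_with_direction(1, -1, 1, 1, 10, '<'))  # False: impossible if i1 < i2
print(dependent_with_direction(1, -1, 1, 1, 10, '>'))  # True: e.g. i1 = 2, i2 = 1
```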
Journal of Parallel and Distributed Computing | 1991
Kleanthis Psarris; David Klappholz; Xiangyun Kong
The Banerjee test is commonly considered to be the more accurate of the two major approximate tests used in automatic vectorization/parallelization of loops, the other being the GCD test. From its derivation, however, there is no simple explanation of why the Banerjee test should be nearly as accurate as it is given credit for. We prove a sufficient condition for the Banerjee test's accuracy and explain the test's perceived superiority by showing that under circumstances which occur extremely frequently in actual code, it is, in fact, not approximate, but perfectly accurate.
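The Banerjee test itself is a bounds check: over the loop ranges, the left-hand side of the dependence equation attains a computable minimum and maximum, and dependence can be ruled out whenever the constant term falls outside that interval. The following is a minimal sketch of the unconstrained (direction vector '*') case; names and examples are ours, not the paper's:

```python
def banerjee_test(coeffs, bounds, const):
    """Banerjee bounds test for sum(a_k * x_k) = const with L_k <= x_k <= U_k.
    Returns False only when const lies outside the attainable range of the
    left-hand side; True means "a dependence may exist"."""
    lo = hi = 0
    for a, (L, U) in zip(coeffs, bounds):
        lo += max(a, 0) * L - max(-a, 0) * U
        hi += max(a, 0) * U - max(-a, 0) * L
    return lo <= const <= hi

# Example: A(i + 10) written and A(i) read, with 1 <= i <= 5.
# The dependence equation i1 - i2 = -10 has left-hand-side range [-4, 4],
# so the constant -10 is unreachable and the references are independent.
print(banerjee_test([1, -1], [(1, 5), (1, 5)], -10))  # False -> independent
print(banerjee_test([1, -1], [(1, 5), (1, 5)], 2))    # True  -> possible dependence
```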
symposium on principles of programming languages | 1994
Lawrence Feigen; David Klappholz; Robert Casazza; Xing Xue
The notion that a definition of a variable is dead is used by optimizing compilers to delete code whose execution is useless. We extend the notion of deadness to that of partial deadness, and define a transformation, the revival transformation, which eliminates useless executions of a (partially dead) definition by tightening its execution conditions without changing the set of uses which it reaches or the conditions under which it reaches each of them.
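The paper's transformation operates on compiler intermediate code; the source-level Python sketch below (hypothetical names, our own example) only illustrates the idea: a definition that executes on every path but is used on only some paths is partially dead, and tightening its execution condition removes the useless executions without changing which uses it reaches.

```python
def expensive_computation(data):
    return sum(data)          # stand-in for a costly, side-effect-free computation

# Before: the definition of x is partially dead -- it executes on every path,
# but its value is used only when cond holds.
def before(cond, data):
    x = expensive_computation(data)   # wasted work whenever cond is False
    if cond:
        return x
    return 0

# After a revival-style transformation: the definition is guarded by the
# condition under which its use is reached, so it still reaches exactly the
# same use under exactly the same condition, with no useless executions.
def after(cond, data):
    if cond:
        x = expensive_computation(data)
        return x
    return 0
```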
international conference on supercomputing | 1990
David Klappholz; Kleanthis Psarris; Xiangyun Kong
The Banerjee test is commonly considered to be the more accurate of the two major approximate data dependence tests used in automatic vectorization/parallelization of loops, the other being the GCD test. From its derivation, however, there is no simple explanation of why the Banerjee test should be nearly as accurate as it is given credit for. We present a set of sufficient conditions for the Banerjee test's accuracy, and explain its perceived accuracy in actual practice by proving that under circumstances which occur extremely frequently in actual code, the Banerjee test is, in fact, not approximate, but perfectly accurate.
conference on software engineering education and training | 2003
David Klappholz; Lawrence Bernstein; Daniel Port
Software development is one of the most economically critical engineering activities. It is unsettling, therefore, that regularly published analyses reveal that the percentage of projects that fail, by coming in far over budget or far past schedule, or by being cancelled with significant financial loss, is considerably greater in software development than in any other branch of engineering. The reason is that successful software development requires expertise in both the state of the art (software technology) and the state of the practice (software development process). It is widely recognized that failure to follow best practice, rather than technological incompetence, is the cause of most failures. It is critically important, therefore, that (i) computer science departments be able to assess the quality of the software development process component of their curricula and (ii) industry be able to assess the efficacy of SPI (software process improvement) efforts. While assessment instruments/tools exist for knowledge of software technology, none exist for attitude toward, knowledge of, or ability to use, software development process. We have developed instruments for measuring attitude and knowledge, and are working on an instrument to measure ability to use. The current version of ATSE, the instrument for measuring attitude toward software engineering, is the result of repeated administrations to both students and software development professionals, post-administration focus groups, rewrites, and statistical reliability analyses. In this paper we discuss the development of ATSE; results, both expected and unexpected, of recent administrations of ATSE to students and professionals; the various uses to which ATSE is currently being put and to which it could be put; and ATSE's continuing development and improvement.
conference on software engineering education and training | 2004
Daniel Port; David Klappholz
We describe how empirical research performed in the context of a software engineering project course can provide results useful both to students in later offerings of the course and to industry. A secondary purpose is to encourage the formation of an effort to share ideas on classroom-based research, to perform research collaboratively, and to share results with both the academic and industrial software engineering communities.
conference on software engineering education and training | 2002
Lawrence Bernstein; David Klappholz; Catherine Kelley
If the level of adoption of software engineering best practice is to be increased in industry, then an appreciation of its importance must be conveyed to computer science students. Accomplishment of this goal is often severely hampered by the fact that many computer science faculty view the software process as intellectually shallow and that many computer science students come to the field with an aversion to the oppressive discipline which they perceive to be required to follow it. We have devised a method of forcing students to recognize the necessity of software engineering best practice by bringing them to the realization that without it they will fail, not in their course work, but in real-world software development projects. The method has been tested twice at Stevens Institute and is about to be used at a number of other universities. Evaluation of results is being done through the use of two standard instruments, the Felder Learning Styles Inventory and the Academic Locus of Control Scale, and of a novel Attitude Toward Software Engineering (ATSE) instrument designed by the authors.
languages and compilers for parallel computing | 1992
David Klappholz; Xiangyun Kong
The Banerjee-Wolfe test is one of the major data dependence tests used in automatic parallelization of sequential code. Though it is only an approximate test, its relatively high accuracy and relatively low cost account for its great popularity. Being an approximate test, the Banerjee-Wolfe test does, however, sometimes result in a loss of parallelism. One of its potential sources of failure is the fact that it does not traditionally take execution conditions into account. The purpose of the present paper is to show that the Banerjee-Wolfe test may be extended to handle simple execution conditions without significant additional cost.
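One way to picture the gain (our hypothetical example, not the paper's algorithm): an IF guard on a statement narrows the index ranges the accesses can actually take, and feeding the narrowed bounds to the Banerjee-style bounds computation can turn a "maybe dependent" answer into a proof of independence.

```python
def banerjee_range(coeffs, bounds):
    """Attainable [min, max] of sum(a_k * x_k) over L_k <= x_k <= U_k."""
    lo = hi = 0
    for a, (L, U) in zip(coeffs, bounds):
        lo += max(a, 0) * L - max(-a, 0) * U
        hi += max(a, 0) * U - max(-a, 0) * L
    return lo, hi

# Fortran-style loop with a simple execution condition:
#     DO i = 1, 100
#       IF (i .GT. 70) A(i) = A(i - 30)
# The write A(i1) and the read A(i2 - 30) collide exactly when i1 - i2 = -30.
# Ignoring the guard (i1, i2 in [1, 100]) the equation looks satisfiable:
print(banerjee_range([1, -1], [(1, 100), (1, 100)]))    # (-99, 99) contains -30
# Folding the guard in (both accesses execute only for i in [71, 100])
# excludes -30, so independence can be reported:
print(banerjee_range([1, -1], [(71, 100), (71, 100)]))  # (-49, 49) ... actually (-29, 29)
```

The last comment deserves care: with both iterations restricted to [71, 100] the attainable range is (-29, 29), which excludes -30, so the guarded statement carries no dependence.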
conference on high performance computing supercomputing | 1989
David Klappholz; Xiangyun Kong; Apostolos D. Kalis
Refined Languages (Refined Fortran, Refined C, etc.) are extensions of their parent languages in which it is possible to express parallelism, but impossible to create races or deadlocks. Where strictly deterministic behavior is desired, multiple executions of a Refined Fortran program with the same input data can be guaranteed to either compute the same results or terminate with the same run-time errors regardless of differences in scheduling. Where asynchronous behavior is desired, freedom from races can be guaranteed. The Refined Languages approach achieves its goal by extending sequential imperative programming languages with data- (rather than control-) oriented constructs, and by viewing the expression of parallelism in data- (rather than control-) oriented terms. Earlier versions of Refined Fortran are discussed in [1]-[2]; the present work supersedes and extends work reported in these earlier publications.