Publications


Featured research published by Stewart N. Weiss.


IEEE Transactions on Software Engineering | 1993

An experimental comparison of the effectiveness of branch testing and data flow testing

Phyllis G. Frankl; Stewart N. Weiss

An experiment comparing the effectiveness of the all-uses and all-edges test data adequacy criteria is discussed. The experiment was designed to overcome some of the deficiencies of previous software testing experiments. A large number of test sets was randomly generated for each of nine subject programs with subtle errors. For each test set, the percentages of executable edges and definition-use associations covered were measured, and it was determined whether the test set exposed an error. Hypothesis testing was used to investigate whether all-uses adequate test sets are more likely to expose errors than are all-edges adequate test sets. Logistic regression analysis was used to investigate whether the probability that a test set exposes an error increases as the percentage of definition-use associations or edges covered by it increases. Error-exposing ability was shown to be strongly positively correlated with the percentage of covered definition-use associations in only four of the nine subjects. Error-exposing ability was also shown to be positively correlated with the percentage of covered edges in four different subjects, but the relationship was weaker.
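
For readers unfamiliar with this style of analysis, a minimal sketch of the logistic-regression step, using scikit-learn and invented coverage/outcome data (the paper's actual measurements and statistical setup are not reproduced here):

    # Sketch of the logistic-regression analysis described above: model the
    # probability that a test set exposes an error as a function of the
    # percentage of definition-use associations it covers.
    # The coverage values and outcomes below are invented for illustration.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    coverage = np.array([[55.0], [62.5], [70.0], [78.0], [85.0], [91.0], [96.0], [100.0]])
    exposed = np.array([0, 0, 0, 1, 0, 1, 1, 1])  # 1 = test set exposed the error

    model = LogisticRegression().fit(coverage, exposed)
    # Estimated probability of exposing the error at 90% def-use coverage:
    print(model.predict_proba([[90.0]])[0, 1])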


Journal of Systems and Software | 1997

All-uses vs mutation testing: an experimental comparison of effectiveness

Phyllis G. Frankl; Stewart N. Weiss; Cang Hu

The effectiveness of a test data adequacy criterion for a given program and specification is the probability that a test set satisfying the criterion will expose a fault. Experiments were performed to compare the effectiveness of the mutation testing and all-uses test data adequacy criteria at various coverage levels, for randomly generated test sets. Large numbers of test sets were generated and executed, and for each, the proportion of mutants killed or def-use associations covered was measured. This data was used to estimate and compare the effectiveness of the criteria. The results were mixed: at the highest coverage levels considered, mutation was more effective than all-uses for five of the nine subjects, all-uses was more effective than mutation for two subjects, and there was no clear winner for two subjects. However, mutation testing was much more expensive than all-uses. The relationship between coverage and effectiveness for fixed-sized test sets was also explored and was found to be nonlinear and, in many cases, nonmonotonic.
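
A rough sketch of how such an effectiveness estimate can be computed by Monte Carlo sampling; the subject program, seeded fault, and adequacy predicate below are hypothetical stand-ins, not the experiment's apparatus:

    # Sketch of estimating effectiveness: the proportion of randomly generated,
    # criterion-adequate test sets that expose a fault. All names here are
    # hypothetical stand-ins for illustration.
    import random

    def run_program(x):        # subject program (stand-in)
        return x * x if x != 7 else 49.0001  # seeded "fault" at x == 7

    def run_oracle(x):         # specification / correct version (stand-in)
        return x * x

    def is_adequate(tests):    # adequacy predicate for the criterion (stand-in)
        return len(set(tests)) >= 5

    def exposes_fault(tests):
        return any(run_program(x) != run_oracle(x) for x in tests)

    trials, exposing = 0, 0
    while trials < 10_000:
        tests = [random.randint(0, 20) for _ in range(8)]
        if not is_adequate(tests):
            continue           # keep only test sets satisfying the criterion
        trials += 1
        exposing += exposes_fault(tests)

    print("estimated effectiveness:", exposing / trials)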


International Symposium on Software Testing and Analysis | 1991

Comparison of program testing strategies

Elaine J. Weyuker; Stewart N. Weiss; Dick Hamlet

A person testing a program has many methods to choose from, but little solid information about how these methods compare. Where analytic comparisons do exist, their significance is often in doubt. In this paper we examine various comparisons that have been used or proposed for test data selection and adequacy criteria. We characterize them by type and identify their strengths and weaknesses. We examine useful properties of comparisons and study the relationship between analytical and probabilistic comparisons. We find that analytical comparisons provide information of limited value, and that probabilistic comparisons overcome some of these limitations.


IEEE Transactions on Software Engineering | 1988

An extended domain-based model of software reliability

Stewart N. Weiss; Elaine J. Weyuker

A definition of software reliability is proposed in which reliability is treated as a generalization of the probability of correctness of the software in question. A tolerance function is introduced as a method of characterizing an acceptable level of correctness. This in turn is used, together with the probability function defining the operational input distribution, as a parameter of the definition of reliability. It is shown that the definition can be used to provide many natural models of reliability by varying the tolerance function and that it may be reasonably approximated using well-chosen test sets. It is also shown that there is an inherent limitation to the measurement of reliability using finite test sets.
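
A toy illustration of the definition, assuming a discrete operational input distribution and a simple numeric tolerance (all three ingredients below are invented for illustration, not the paper's formal apparatus):

    # Sketch of the extended domain-based definition: reliability as the
    # probability, under the operational input distribution, that the
    # program's output is acceptable according to a tolerance function.
    # The distribution, program, and tolerance are hypothetical stand-ins.
    def program(x):
        return x / 3.0

    def spec(x):
        return x * 0.3333

    def tolerance(actual, expected):      # acceptable-correctness predicate
        return abs(actual - expected) <= 1e-3

    operational_dist = {1: 0.5, 2: 0.3, 300: 0.2}   # P(input = x)

    reliability = sum(p for x, p in operational_dist.items()
                      if tolerance(program(x), spec(x)))
    print("reliability:", reliability)    # 0.8: the input 300 falls outside tolerance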


International Symposium on Software Testing and Analysis | 1991

An experimental comparison of the effectiveness of the all-uses and all-edges adequacy criteria

Phyllis G. Frankl; Stewart N. Weiss

An experimental comparison of the effectiveness of the all-uses and all-edges test data adequacy criteria was performed. A large number of test sets was randomly generated for each of nine subject programs with subtle errors. For each test set, the percentages of (executable) edges and definition-use associations covered were measured, and it was determined whether the test set exposed an error. Hypothesis testing was used to investigate whether all-uses adequate test sets are more likely to expose errors than are all-edges adequate test sets. All-uses was shown to be significantly more effective than all-edges for five of the subjects; moreover, for four of these, all-uses appeared to guarantee detection of the error. Further analysis showed that in four subjects, all-uses adequate test sets appeared to be more effective than all-edges adequate test sets of the same size. Logistic regression showed that in some, but not all, of the subjects there was a strong positive correlation between the percentage of definition-use associations covered by a test set and its error-exposing ability.
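
A minimal sketch of the hypothesis-testing step, cast as a one-sided two-proportion z-test; the counts are invented, and the paper's own statistical procedure may differ:

    # Sketch of the hypothesis test described above: do all-uses adequate test
    # sets expose the error more often than all-edges adequate ones?
    # One-sided two-proportion z-test; the counts are invented for illustration.
    from math import sqrt
    from statistics import NormalDist

    exposed_uses, n_uses = 180, 200     # all-uses adequate sets exposing the error
    exposed_edges, n_edges = 140, 200   # all-edges adequate sets exposing the error

    p1, p2 = exposed_uses / n_uses, exposed_edges / n_edges
    p_pool = (exposed_uses + exposed_edges) / (n_uses + n_edges)
    z = (p1 - p2) / sqrt(p_pool * (1 - p_pool) * (1 / n_uses + 1 / n_edges))
    p_value = 1 - NormalDist().cdf(z)   # H1: all-uses rate > all-edges rate

    print(f"z = {z:.2f}, one-sided p = {p_value:.4f}")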


Proceedings of the Second Workshop on Software Testing, Verification, and Analysis | 1988

A formal framework for the study of concurrent program testing

Stewart N. Weiss

The author has developed a formal theory for reasoning about concurrent program testing by representing such programs as sets of simulating sequential programs. He has shown that if such a representation exists for all programs in a concurrent language, then it serves as the basis for a solution to the reproducible testing problem of programs in that language. The author does not know under what circumstances such a representation must necessarily exist; that is an open question. However, he has shown that it exists for a simple concurrent language in the CSP family, denoted CL, and he believes that the model is applicable to other languages in the family. Because CL is currently unimplemented, no pragmatic studies have been done on the feasibility of applying the model.
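
One way to picture a "set of simulating sequential programs" is as the set of interleavings of the threads' atomic actions. The naive enumeration below is illustrative only and is not the paper's formal construction for CL:

    # Naive illustration: enumerate all interleavings of two threads' atomic
    # actions, each interleaving playing the role of one "simulating
    # sequential program". Illustrative only; the paper's construction is formal.
    def interleavings(a, b):
        if not a:
            yield list(b)
        elif not b:
            yield list(a)
        else:
            for rest in interleavings(a[1:], b):
                yield [a[0]] + rest
            for rest in interleavings(a, b[1:]):
                yield [b[0]] + rest

    thread1 = ["t1: read x", "t1: write x"]
    thread2 = ["t2: write x"]
    for schedule in interleavings(thread1, thread2):
        print(schedule)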


ACM Sigsoft Software Engineering Notes | 1989

Comparing test data adequacy criteria

Stewart N. Weiss

Test data adequacy criteria have been compared in a multitude of ways in the literature, ranging from the relative difficulty of satisfying them to the relative probability that test sets satisfying them will expose errors in programs. Each method of comparison gives rise to an ordering of criteria, many of which differ significantly from the others. We investigate the various methods of comparing criteria and show how the induced orderings are related. Because there are presently no methods of comparison based on the cost of using criteria, we propose a formal model of cost comparison of criteria. We categorize methods of comparison as being satisfiability-based, correctness-based, or complexity-based.


International Symposium on Software Testing and Analysis | 1993

Improved serial algorithms for mutation analysis

Stewart N. Weiss; Vladimir N. Fleyshgakker

Existing serial algorithms for mutation analysis are inefficient, and descriptions of parallel mutation systems presuppose that these serial algorithms are the best one can do serially. We present a universal mutation analysis data structure and new serial algorithms for both strong and weak mutation analysis that on average should perform much faster than existing ones and can never do worse. We describe these algorithms as well as the results of our analysis of their run time complexities. We believe that this is the first paper in which analytical methods have been applied to obtain the run time complexities of mutation analysis algorithms.
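
A toy illustration of the weak/strong distinction that such algorithms exploit (the mutant and intermediate-state probes below are stand-ins, not the paper's universal data structure):

    # Toy illustration of weak vs. strong mutation analysis. A mutant is
    # weakly killed if its internal state differs immediately after the
    # mutated expression, and strongly killed if its final output differs.
    # Stand-ins only; the paper's data structure and algorithms are not
    # reproduced here.
    def original(x):
        y = x + 1          # mutated expression lives here
        return y * 2

    def mutant(x):
        y = x - 1          # mutation: '+' replaced by '-'
        return y * 2

    def intermediate_original(x):
        return x + 1       # state immediately after the mutated expression

    def intermediate_mutant(x):
        return x - 1

    test = 0
    weakly_killed = intermediate_original(test) != intermediate_mutant(test)
    strongly_killed = original(test) != mutant(test)
    print(weakly_killed, strongly_killed)   # weak kill is necessary for strong kill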


International Symposium on Software Testing and Analysis | 1994

Efficient mutation analysis: a new approach

Vladimir N. Fleyshgakker; Stewart N. Weiss

In previously reported research we designed and analyzed algorithms that improved upon the run time complexity of all known weak and strong mutation analysis methods at the expense of increased space complexity. Here we describe a new serial strong mutation algorithm whose running time is on average much faster than the previous ones and that also uses significantly less space. Its space requirement is approximately the same as that of Mothra, a well-known and readily available mutation system. Moreover, while this algorithm can serve as the basis for a new mutation system, it is designed to be consistent with the Mothra architecture, in the sense that replacing certain modules of that system with new ones will yield a much faster system. Such a Mothra-based implementation of the new work is in progress. Like the previous algorithms, this one, which we call Lazy Mutant Analysis or LMA, tries to determine whether a mutant is strongly killed by a given test only if it is already known to be weakly killed by that test. Unlike those algorithms, LMA avoids executing many mutants by dynamically discovering classes of mutants that have the “same” behavior and executing representatives of those classes. The overhead it incurs is small in proportion to the time saved, and the algorithm has a very natural parallel implementation. In comparison to the fastest known algorithms for strong mutation analysis, in the best case LMA can improve the speed by a factor proportional to the average number of mutants per program statement. In the worst case there is no improvement in the running time, but such a case is hard to construct. This work enables us to apply mutation analysis to significantly larger programs than is currently possible.
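
A toy sketch of the class-based idea: among weakly killed mutants, group those with the same observed behavior and execute one representative per class. The signature function and mutants below are hypothetical stand-ins; LMA discovers its classes dynamically and far more carefully:

    # Toy illustration of the idea behind Lazy Mutant Analysis: among mutants
    # already known to be weakly killed, group those with the "same"
    # observable behavior and run only one representative per class.
    # The signature function and mutants are hypothetical stand-ins.
    def signature(mutant, test):
        # Stand-in behavior signature: the mutant's intermediate state on the test.
        return mutant["intermediate"](test)

    def strongly_killed(mutant, test, expected):
        return mutant["output"](test) != expected

    mutants = [
        {"name": "m1", "intermediate": lambda x: x - 1, "output": lambda x: (x - 1) * 2},
        {"name": "m2", "intermediate": lambda x: x - 1, "output": lambda x: (x - 1) * 2},
        {"name": "m3", "intermediate": lambda x: x * 0, "output": lambda x: 0},
    ]

    test, expected = 5, (5 + 1) * 2     # the original computes (x + 1) * 2

    classes = {}                        # behavior signature -> mutants in class
    for m in mutants:
        classes.setdefault(signature(m, test), []).append(m)

    for sig, members in classes.items():
        rep = members[0]                # execute one representative per class
        result = strongly_killed(rep, test, expected)
        for m in members:
            print(m["name"], "strongly killed:", result)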


Journal of Computational Chemistry | 2010

A generalized higher order kernel energy approximation method

Stewart N. Weiss; Lulu Huang; Lou Massa

We present a general mathematical model that can be used to improve almost all fragment-based methods for ab initio calculation of total molecular energy. Fragment-based methods of computing total molecular energy mathematically decompose a molecule into smaller fragments, quantum-mechanically compute the energies of single and multiple fragments, and then combine the computed fragment energies in some particular way to compute the total molecular energy. Because the kernel energy method (KEM) is a fragment-based method that has been used with much success on many biological molecules, our model is presented in the context of the KEM in particular. In this generalized model, the total energy is not based on sums of all possible double-, triple-, and quadruple-kernel interactions, but on the interactions of precisely those combinations of kernels that are connected in the mathematical graph that represents the fragmented molecule. This makes it possible to estimate total molecular energy with high accuracy and no superfluous computation and greatly extends the utility of the KEM and other fragment-based methods. We demonstrate the practicality and effectiveness of our model by presenting how it has been used on the yeast initiator tRNA molecule, ytRNA_i^Met (1YFG in the Protein Data Bank), with kernel computations using the Hartree-Fock equations with a limited basis of Gaussian STO-3G type.
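
For orientation, a hedged sketch of the double-kernel bookkeeping (notation ours, not necessarily the paper's): in the original KEM, summing all pairwise kernel energies overcounts each single-kernel energy n - 2 times; restricting the sum to kernel pairs adjacent in the fragmentation graph G plausibly changes the correction to one based on each kernel's degree d_i:

    % All-pairs double-kernel KEM over n kernels:
    E_{\mathrm{total}} \approx \sum_{i<j} E_{ij} - (n-2)\sum_{i=1}^{n} E_i
    % Graph-restricted variant (our notation; d_i = degree of kernel i in G):
    E_{\mathrm{total}} \approx \sum_{(i,j)\in E(G)} E_{ij} - \sum_{i=1}^{n} (d_i - 1)\, E_i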

Collaboration


Dive into Stewart N. Weiss's collaboration.

Top Co-Authors

Dick Hamlet, Portland State University

Joanna Klukowska, City University of New York

Lou Massa, City University of New York

Lulu Huang, United States Naval Research Laboratory