
Publications


Featured research published by Steven J. Zeil.


IEEE Transactions on Software Engineering | 1989

A formal evaluation of data flow path selection criteria

Lori A. Clarke; Andy Podgurski; Debra J. Richardson; Steven J. Zeil

The authors report on the results of their evaluation of path-selection criteria based on data-flow relationships. They show how these criteria relate to each other, thereby demonstrating some of their strengths and weaknesses. A subsumption hierarchy showing their relationship is presented. It is shown that one of the major weaknesses of all the criteria is that they are based solely on syntactic information and do not consider semantic issues such as infeasible paths. The authors discuss the infeasible-path problem as well as other issues that must be considered in order to evaluate these criteria more meaningfully and to formulate a more effective path-selection criterion.
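
To make the underlying notion concrete, here is a minimal sketch in Python (not from the paper) of the definition-use pairs that data-flow criteria such as all-defs and all-uses are built on. The toy handles only straight-line code, and, as the abstract notes, a purely syntactic enumeration like this cannot rule out infeasible paths.

    # Sketch: enumerate definition-use (du) pairs for a toy straight-line
    # program.  Data-flow path criteria require test paths that exercise
    # some or all of these pairs.

    def du_pairs(stmts):
        """stmts: list of (defs, uses) sets, one per statement, in order.
        Returns {(def_line, use_line, var)} where the definition reaches
        the use with no intervening redefinition."""
        pairs = set()
        for i, (defs_i, _) in enumerate(stmts):
            for v in defs_i:
                for j in range(i + 1, len(stmts)):
                    defs_j, uses_j = stmts[j]
                    if v in uses_j:
                        pairs.add((i, j, v))
                    if v in defs_j:   # redefined: def at i no longer reaches
                        break
        return pairs

    # x = input(); y = x + 1; x = 0; z = x + y
    program = [({"x"}, set()), ({"y"}, {"x"}),
               ({"x"}, set()), ({"z"}, {"x", "y"})]
    print(sorted(du_pairs(program)))
    # -> [(0, 1, 'x'), (1, 3, 'y'), (2, 3, 'x')]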


IEEE Transactions on Software Engineering | 1989

Perturbation techniques for detecting domain errors

Steven J. Zeil

Perturbation testing is an approach to software testing which focuses on faults within arithmetic expressions appearing throughout a program. This approach is expanded to permit analysis of individual test points rather than entire paths, and to concentrate on domain errors. Faults are modeled as perturbing functions drawn from a vector space of potential faults and added to the correct form of an arithmetic expression. Sensitivity measures are derived which limit the possible size of those faults that would go undetected after the execution of a given test set. These measures open up an interesting view of testing, in which attempts are made to reduce the volume of possible faults which, were they present in the program being tested, would have escaped detection on all tests performed so far. The combination of these measures with standard optimization techniques yields a novel test-data-generation method called arithmetic fault detection.
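
A minimal sketch of the central idea, assuming a fault space spanned by a handful of basis functions (the basis, test points, and tolerance below are illustrative choices, not the paper's): a perturbation escapes a test set if it evaluates to zero at every test point, so the undetected faults form the null space of the matrix of basis-function values.

    import numpy as np

    # Sketch: model candidate faults as linear combinations of basis
    # functions added to the correct expression.  Perturbations that
    # vanish on every test point go undetected; they form the null
    # space of A[i, k] = phi_k(test_i).

    def undetected_faults(basis, tests, tol=1e-9):
        A = np.array([[phi(t) for phi in basis] for t in tests], float)
        _, s, vt = np.linalg.svd(A)
        rank = int((s > tol).sum())
        return vt[rank:]            # rows span the escaping perturbations

    basis = [lambda t: 1.0, lambda t: t[0], lambda t: t[1]]  # c0 + c1*x + c2*y
    tests = [(0.0, 0.0), (1.0, 1.0)]
    print(undetected_faults(basis, tests))  # e.g. x - y vanishes on both tests

Adding a third point such as (1.0, 0.0) empties the null space for this basis, which is the sense in which the sensitivity measures shrink the volume of faults that could have escaped detection.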


Software Engineering Symposium on Practical Software Development Environments | 1989

TEAM: a support environment for testing, evaluation, and analysis

Lori A. Clarke; Debra J. Richardson; Steven J. Zeil

Current research indicates that software reliability needs to be achieved through the careful integration of a number of diverse testing and analysis techniques. To address this need, the TEAM environment has been designed to support the integration of, and experimentation with, an ever-growing number of software testing and analysis tools. To achieve this flexibility, we exploit three design principles: component technology so that common underlying functionality is recognized; generic realizations so that these common functions can be instantiated as diversely as possible; and language independence so that tools can work on multiple languages, even allowing some tools to be applicable to different phases of the software lifecycle. The result is an environment that contains building blocks for easily constructing and experimenting with new testing and analysis techniques. Although the first prototype has just recently been implemented, we feel it demonstrates how modularity, genericity, and language independence further extensibility and integration.
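
A toy sketch of the genericity principle only (the TEAM interfaces themselves are not given in the abstract; all names here are invented): a tool written against a language-neutral representation works unchanged for any front end that can produce that representation.

    from dataclasses import dataclass
    from typing import Protocol

    @dataclass
    class IRNode:                 # hypothetical language-neutral unit
        kind: str                 # e.g. "branch", "assignment"
        children: list

    class AnalysisTool(Protocol):
        def analyze(self, root: IRNode) -> dict: ...

    class BranchCounter:
        """A tiny 'tool': counts branch nodes, whatever the source language."""
        def analyze(self, root: IRNode) -> dict:
            n = (root.kind == "branch") + sum(
                self.analyze(c)["branches"] for c in root.children)
            return {"branches": n}

    # Any front end (Ada, C, ...) that lowers source to IRNode can reuse it.
    ir = IRNode("branch", [IRNode("assignment", []), IRNode("branch", [])])
    print(BranchCounter().analyze(ir))   # {'branches': 2}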


ACM Transactions on Software Engineering and Methodology | 1992

Detection of linear errors via domain testing

Steven J. Zeil; Faten H. Afifi; Lee J. White

Domain testing attempts to find errors in the numeric expressions affecting the flow of control through a program. Intuitively, domain testing provides a systematic form of boundary value testing for the conditional statements within a program. Several forms of domain testing have been proposed, all dealing with the detection of linear errors in linear functions. Perturbation analysis has been previously developed as a measure of the volume of faults, from within a selected space of possible faults, left undetected by a test set. It is adapted here to errors and error spaces. The adapted form is used to show that the different forms of domain testing are closer in error detection ability than had been supposed. They may all be considered effective for finding linear errors in linear predicate functions. A simple extension is proposed, which allows them to detect linear errors in nonlinear predicate functions using only a single additional test point.
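
A rough sketch of the boundary-value intuition, assuming a linear border a·x + b = 0 (the point construction is illustrative, not one of the strategies the paper compares): pick points on the border plus one point nudged just across it, so that a shifted or tilted border would misclassify at least one of them.

    import numpy as np

    # Sketch: for a linear border a.x + b = 0 in n dimensions, construct
    # n "on" points lying on the border plus one "off" point just across
    # it.  A linear error in the border moves at least one of these
    # points to the wrong side.

    def border_tests(a, b, eps=1e-3):
        a = np.asarray(a, float)
        p0 = -b * a / a.dot(a)                  # border point closest to origin
        ons = [p0]
        for d in np.eye(len(a)):
            t = d - a * a.dot(d) / a.dot(a)     # project each axis onto border
            if np.linalg.norm(t) > 1e-12:
                ons.append(p0 + t)
        off = p0 - eps * a / np.linalg.norm(a)  # nudge across the border
        return ons[:len(a)], off

    ons, off = border_tests([1.0, -2.0], b=3.0)  # border: x - 2y + 3 = 0
    for p in ons:
        print(p, "on border:", abs(p.dot([1.0, -2.0]) + 3.0) < 1e-9)
    print(off, "crossed:", off.dot([1.0, -2.0]) + 3.0 < 0)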


International Conference on Asian Digital Libraries | 2007

Automated template-based metadata extraction architecture

Paul Flynn; Li Zhou; Kurt Maly; Steven J. Zeil; Mohammad Zubair

This paper describes our efforts to develop a toolset and process for automated metadata extraction from large, diverse, and evolving document collections. A number of federal agencies, universities, laboratories, and companies are placing their collections online and making them searchable via metadata fields such as author, title, and publishing organization. Manually creating metadata for a large collection is an extremely time-consuming task, but is difficult to automate, particularly for collections consisting of documents with diverse layout and structure. Our automated process enables many more documents to be available online than would otherwise have been possible due to time and cost constraints. We describe our architecture and implementation and illustrate the effectiveness of the toolset by providing experimental results on two major collections: DTIC (Defense Technical Information Center) and NASA (National Aeronautics and Space Administration).
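
A minimal single-template sketch of the extraction step (the field names and patterns are invented; the real system selects among many layout-specific templates for documents with diverse structure):

    import re

    # Sketch: a "template" here is just a set of named regular
    # expressions matched against the text of a document's cover page.

    TEMPLATE = {
        "title":  re.compile(r"^Title:\s*(.+)$", re.M),
        "author": re.compile(r"^Author\(s\):\s*(.+)$", re.M),
        "org":    re.compile(r"^Performing Organization:\s*(.+)$", re.M),
    }

    def extract(text, template=TEMPLATE):
        meta = {}
        for field, pat in template.items():
            m = pat.search(text)
            if m:
                meta[field] = m.group(1).strip()
        return meta

    page = """Title: A Study of Something
    Author(s): J. Doe; R. Roe
    Performing Organization: Example Lab"""
    print(extract(page))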


International Conference on Software Engineering | 1996

A reliability model combining representative and directed testing

Brian Mitchell; Steven J. Zeil

Directed testing methods, such as functional or structural testing, have been criticized for a lack of quantifiable results. Representative testing permits reliability modeling, which provides the desired quantification. Over time, however, representative testing becomes inherently less effective as a means of improving the actual quality of the software under test. A model is presented which permits representative and directed testing to be used in conjunction. Representative testing can be used early, when the rate of fault revelation is high. Later results from directed testing can be used to update the reliability estimates conventionally associated with representative methods. The key to this combination is shifting the observed random variable from interfailure time to a post-mortem analysis of the debugged faults, using order statistics to combine the observed failure rates of faults no matter how those faults were detected.
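
A deliberately simplified sketch of the combination idea (the paper's estimator rests on order statistics over the observed per-fault failure rates; this toy does not reproduce it, only the shape of the accounting): once post-mortem analysis assigns each debugged fault an operational failure rate, faults found by either testing method feed the same reliability estimate.

    import math

    # Sketch: residual failure intensity = initial rate minus the rates
    # of the faults removed, however those faults were detected.

    def residual_intensity(total_rate, fixed_fault_rates):
        return max(0.0, total_rate - sum(fixed_fault_rates))

    def reliability(total_rate, fixed_fault_rates, t):
        lam = residual_intensity(total_rate, fixed_fault_rates)
        return math.exp(-lam * t)   # exponential interfailure assumption

    # Faults 1-2 found by representative testing, fault 3 by a directed
    # (branch-coverage) test; all three enter the same estimate.
    print(reliability(total_rate=0.05,
                      fixed_fault_rates=[0.02, 0.01, 0.015], t=10))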


International Conference on Software Engineering | 1992

Testing for linear errors in nonlinear computer programs

Faten H. Afifi; Lee J. White; Steven J. Zeil

This paper provides an approach to testing nonlinear functions in computer programs, whether the function is used for control flow, such as in a predicate inequality or equality constraint, or is given as an input-output relationship. This approach will obtain test data to detect linear errors in the given nonlinear function. An error-space criterion previously given by Zeil will be utilized, and a necessary and sufficient condition for the test data will be specified to guarantee the satisfaction of this criterion. This leads to a simple and efficient method to select test data which satisfy that condition; only (n+2) tests are required, where n is the number of input variables. An analysis will be given to show that this simple approach can be very effective in detecting nonlinear errors as well.
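
A small sketch of the test-size intuition, assuming the error space of affine functions over n inputs (the random construction and rank check are illustrative, not the paper's selection method): an affine error c0 + c·x has n+1 unknown coefficients, so a point set of full affine rank leaves no nonzero affine error undetected.

    import numpy as np

    # Sketch: draw n+2 points and check that no nonzero affine error
    # e(x) = c0 + c.x can vanish on all of them, i.e. the matrix whose
    # rows are [1, x1, ..., xn] has full column rank n+1.

    def test_points(n, rng=np.random.default_rng(0)):
        pts = rng.standard_normal((n + 2, n))
        A = np.hstack([np.ones((n + 2, 1)), pts])
        assert np.linalg.matrix_rank(A) == n + 1   # no affine error escapes
        return pts

    print(test_points(3).shape)   # (5, 3): five test points for n = 3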


Proceedings of the Second Workshop on Software Testing, Verification, and Analysis | 1988

Selectivity of data-flow and control-flow path criteria

Steven J. Zeil

A given path-selection criterion is more selective than another such criterion with respect to some testing goal if it never requires more, and sometimes requires fewer, test paths to achieve that goal. The author presents canonical forms of control-flow and data-flow path selection criteria and demonstrates that, for some simple testing goals, the data-flow criteria as a general class are more selective than the control-flow criteria. It is shown, however, that this result does not hold for general testing goals, a limitation that appears to stem directly from the practice of defining data-flow criteria on the computation history contributing to a single result.
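
The selectivity relation itself is easy to state in code; a minimal sketch with made-up programs and path counts:

    # Sketch: "A is more selective than B" means A never requires more
    # test paths than B and at least once requires strictly fewer.

    def more_selective(paths_a, paths_b):
        """paths_a, paths_b: dicts mapping program -> required path count."""
        never_more = all(paths_a[p] <= paths_b[p] for p in paths_b)
        sometimes_fewer = any(paths_a[p] < paths_b[p] for p in paths_b)
        return never_more and sometimes_fewer

    data_flow = {"p1": 3, "p2": 5}       # hypothetical counts
    control_flow = {"p1": 3, "p2": 8}
    print(more_selective(data_flow, control_flow))   # True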


Automated Software Engineering | 1993

A knowledge base for software test refinement

Steven J. Zeil; J. Christian Wild

Software testing criteria produce test descriptions that may be viewed as systems of constraints describing desired test cases. Refinement of test descriptions is possible by adding additional constraints to each test description, reducing the solution space and focusing attention upon tests that are more likely to reveal faults. This paper describes the structure of a knowledge base intended to capture potentially useful refinements, based either upon the expert knowledge of a tester or upon the software faults uncovered in prior, related projects.
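
A minimal sketch of constraint-based refinement (the constraint encoding and the sample rule are invented for illustration): a rule fires on a test description and adds constraints, shrinking the solution space toward fault-prone cases.

    # Sketch: a test description is a set of constraints on the inputs;
    # a knowledge-base rule adds constraints when its trigger matches.

    def refine(description, rules):
        for trigger, extra in rules:
            if trigger(description):
                description = description | extra
        return description

    # A test description for a buffer-handling routine...
    desc = frozenset({"len >= 0", "len <= capacity"})
    # ...and a rule recording that past faults clustered at the boundary.
    rules = [(lambda d: "len <= capacity" in d,
              frozenset({"len == capacity"}))]
    print(sorted(refine(desc, rules)))
    # ['len <= capacity', 'len == capacity', 'len >= 0']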


Software Testing, Verification & Reliability | 1992

Employing accumulated knowledge to refine test descriptions

J. Christian Wild; Steven J. Zeil; Gao Feng; Ji Chen

Most testing methods generate test descriptions which define the desired characteristics of the input data in a test case. This paper describes the use of accumulated knowledge about a problem domain to refine these test descriptions, with the goal of increasing the probability that the input data generated from the refined test descriptions will reveal faults in a software system. A knowledge base is introduced to hold information about object semantics and object class/subclass relationships. Knowledge accumulates with experience in a particular domain and can be focused on those objects and relationships in that domain which experience has shown to be error-prone. This paper also defines a knowledge-driven functional testing (KDFT) method which derives test descriptions from a formal specification and refines these descriptions using that knowledge base. A case study of the KDFT method using data from a previous study of the launch intercept control problem is described. These preliminary results indicate that knowledge-based refinement of test descriptions can dramatically improve their ability to detect certain classes of faults.
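
A toy sketch of the class/subclass lookup in such a knowledge base (the domain objects are invented, loosely echoing the radar tracks of a launch intercept control system): a test description mentioning a class is refined into one description per error-prone subclass.

    # Sketch: object classes carry error-prone subclasses recorded from
    # experience; refinement expands a description over those subclasses.

    SUBCLASSES = {   # hypothetical domain knowledge
        "track": ["new_track", "fading_track", "crossing_track"],
    }

    def refine_by_subclass(description, kb=SUBCLASSES):
        """description: dict field -> object class.  Returns one refined
        description per subclass recorded in the knowledge base."""
        refined = [description]
        for field, cls in description.items():
            refined = [dict(d, **{field: sub})
                       for d in refined
                       for sub in kb.get(cls, [cls])]
        return refined

    for d in refine_by_subclass({"input": "track"}):
        print(d)
    # {'input': 'new_track'} ... {'input': 'crossing_track'}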

Collaboration


Dive into Steven J. Zeil's collaborations.

Top Co-Authors

Kurt Maly
Old Dominion University

Hui Shi
University of Southern Indiana

Lori A. Clarke
University of Massachusetts Amherst

Andy Podgurski
Case Western Reserve University

Faten H. Afifi
Case Western Reserve University

Lee J. White
Case Western Reserve University