Publication


Featured research published by Debra J. Richardson.


International Conference on Software Engineering | 1992

Specification-based test oracles for reactive systems

Debra J. Richardson; Stephanie Leif Aha; T. Owen O'Malley

The testing process is typically systematic in test data selection and test execution. For the most part, however, the effective use of test oracles has been neglected, even though they are a critical component of the testing process. Test oracles prescribe acceptable behavior for test execution. In the absence of formal oracles for judging test results, testing does not achieve its goal of revealing failures or assuring correct behavior in a practical manner; manual result checking is neither reliable nor cost-effective. We argue that test oracles should be derived from specifications in conjunction with testing criteria, represented in a common form, and their use made integral to the testing process. For complex, reactive systems, oracles must reflect the multiparadigm nature of the required behavior. Such systems are often specified using multiple languages, each selected for its utility in specifying a particular computational paradigm. Thus, we are developing an approach for deriving and using oracles based on multiparadigm and multilingual specifications to enable the verification of test results for reactive systems as well as less complex systems.
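The kind of oracle the paper argues for can be pictured with a small sketch. The following Python fragment is a minimal, hypothetical illustration (the traffic-light state machine, its transition relation, and the oracle function are all invented for this sketch; the paper's actual approach derives oracles from multiparadigm formal specifications):

```python
# Minimal sketch of a specification-derived test oracle (hypothetical example,
# not the paper's formalism). The "specification" is a state-transition
# relation for a toy reactive system; the oracle judges an execution trace
# instead of relying on manual result checking.

ALLOWED = {            # spec: state -> set of legal successor states
    "red":    {"green"},
    "green":  {"yellow"},
    "yellow": {"red"},
}

def oracle(trace):
    """Return (verdict, explanation) for an observed sequence of states."""
    for prev, curr in zip(trace, trace[1:]):
        if curr not in ALLOWED.get(prev, set()):
            return False, f"illegal transition {prev} -> {curr}"
    return True, "trace conforms to the specification"

# A test execution produces a trace; the oracle verdict replaces eyeballing output.
print(oracle(["red", "green", "yellow", "red"]))   # (True, ...)
print(oracle(["red", "yellow"]))                   # (False, ...) failure revealed
```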


IEEE Transactions on Software Engineering | 1989

A formal evaluation of data flow path selection criteria

Lori A. Clarke; Andy Podgurski; Debra J. Richardson; Steven J. Zeil

The authors report on the results of their evaluation of path-selection criteria based on data-flow relationships. They show how these criteria relate to each other, thereby demonstrating some of their strengths and weaknesses. A subsumption hierarchy showing their relationship is presented. It is shown that one of the major weaknesses of all the criteria is that they are based solely on syntactic information and do not consider semantic issues such as infeasible paths. The authors discuss the infeasible-path problem as well as other issues that must be considered in order to evaluate these criteria more meaningfully and to formulate a more effective path-selection criterion.
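The infeasible-path weakness is easy to see in a small example. The sketch below is hypothetical (the function f and its branches are invented for illustration); a syntactic data-flow criterion such as all-uses would demand coverage of a def-use pair that no input can exercise:

```python
# Sketch of the infeasible-path weakness (the function below is invented for
# illustration). A purely syntactic data-flow criterion such as all-uses asks
# for test data covering the def-use pair (line A, line B), yet no input can
# execute that path.

def f(x):
    if x > 0:
        y = 1            # def of y on the positive branch   (line A)
    else:
        y = -1
    if x > 0 and y < 0:  # contradicts y = 1 from line A: path is infeasible
        return y         # use of y                          (line B)
    return 0

# Exhaustive checking over a small range confirms the pair is never exercised:
print(any(f(x) != 0 for x in range(-100, 101)))   # False
```

No input discharges that coverage obligation, so a criterion defined only over the syntax carries requirements that can never be satisfied.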


IEEE Transactions on Software Engineering | 1985

Partition Analysis: A Method Combining Testing and Verification

Debra J. Richardson; Lori A. Clarke

The partition analysis method compares a procedure's implementation to its specification, both to verify consistency between the two and to derive test data. Unlike most verification methods, partition analysis is applicable to a number of different types of specification languages, including both procedural and nonprocedural languages. It is thus applicable to high-level descriptions as well as to low-level designs. Partition analysis also improves upon existing testing criteria. These criteria usually consider only the implementation, but partition analysis selects test data that exercise both a procedure's intended behavior (as described in the specifications) and the structure of its implementation. To accomplish these goals, partition analysis divides or partitions a procedure's domain into subdomains in which all elements of each subdomain are treated uniformly by the specification and processed uniformly by the implementation. This partition divides the procedure's domain into more manageable units. Information related to each subdomain is used to guide the selection of test data and to verify consistency between the specification and the implementation. Moreover, the testing and verification processes are designed to enhance each other. Initial experimentation has shown that through the integration of testing and verification, as well as through the use of information derived from both the implementation and the specification, the partition analysis method is effective for evaluating program reliability. This paper describes the partition analysis method and reports the results obtained from an evaluation of its effectiveness.
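A toy illustration of the partition idea, assuming a hypothetical absolute-value routine whose implementation cases the input differently than the specification does (all names and values here are invented for the sketch, not the paper's algorithm):

```python
# Minimal sketch of partition analysis: intersect the subdomains induced by
# the specification with those induced by the implementation, then pick one
# test point per non-empty intersection.

def spec_class(x):            # specification of abs(x), described by cases
    return "neg" if x < 0 else "nonneg"

def impl_class(x):            # path classes of a (possibly buggy) implementation
    return "lt1" if x < 1 else "ge1"   # implementation tests x < 1, not x < 0

domain = range(-3, 4)
subdomains = {}
for x in domain:
    subdomains.setdefault((spec_class(x), impl_class(x)), []).append(x)

for (s, i), members in sorted(subdomains.items()):
    print(f"spec={s:6} impl={i:4} -> test point {members[0]} (of {members})")
# The (nonneg, lt1) subdomain, e.g. x = 0, is exactly where specification and
# implementation disagree on case structure -- a prime place to test.
```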


International Symposium on Software Testing and Analysis | 1994

TAOS: Testing with Analysis and Oracle Support

Debra J. Richardson

Few would question that software testing is a necessary activity for assuring software quality, yet the typical testing process is a human-intensive activity and, as such, unproductive, error-prone, and often inadequately done. Moreover, testing is seldom given a prominent place in software development or maintenance processes, nor is it an integral part of them. Major productivity and quality enhancements can be achieved by automating the testing process through tool development and use and by incorporating it effectively into development and maintenance processes. The TAOS toolkit, Testing with Analysis and Oracle Support, provides support for the testing process. It includes tools that automate many tasks in the testing process, including management and persistence of test artifacts and the relationships between those artifacts, test development, test execution, and test measurement. A unique aspect of TAOS is its support for test oracles and their use to verify the behavioral correctness of test executions. TAOS also supports structural/dependence coverage, by measuring the adequacy of test criteria coverage, and regression testing, by identifying tests associated with or dependent upon modified software artifacts. This is accomplished by integrating the ProDAG (Program Dependence Analysis Graph) toolset with TAOS, which supports the use of program dependence analysis in testing, debugging, and maintenance. This paper describes the TAOS toolkit and its capabilities, as well as testing, debugging, and maintenance processes based on program dependence analysis. We also describe our experience with the toolkit and discuss our future plans.
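The process the paper describes can be sketched, in miniature, as a harness that couples test execution with oracle verification and coverage recording. This is not TAOS itself, only an invented illustration of the process shape (the run_tests harness, the square program, and the trace probe are all hypothetical):

```python
# Sketch of the process TAOS automates (hypothetical harness, not the TAOS
# toolkit): run each test, check it with an oracle, and record coverage so
# adequacy can be measured afterwards.

def run_tests(tests, program, oracle, coverage):
    results = []
    for name, inputs in tests:
        covered = set()
        output = program(inputs, trace=covered.add)  # program reports covered items
        verdict = oracle(inputs, output)             # behavioral check, not manual
        coverage |= covered
        results.append((name, verdict))
    return results

def square(x, trace):
    trace("square")          # stand-in for dependence/branch coverage probes
    return x * x

covered = set()
report = run_tests([("t1", 3), ("t2", -2)], square, lambda i, o: o == i * i, covered)
print(report, "covered:", sorted(covered))
```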


IEEE Transactions on Software Engineering | 1982

A Close Look at Domain Testing

Lori A. Clarke; Johnette Hassell; Debra J. Richardson

White and Cohen have proposed the domain testing method, which attempts to uncover errors in a path domain by selecting test data on and near the boundary of the path domain. The goal of domain testing is to demonstrate that the boundary is correct within an acceptable error bound. Domain testing is intuitively appealing in that it provides a method for satisfying the often suggested guideline that boundary conditions should be tested.
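A minimal sketch of the on/near-boundary selection strategy, assuming a hypothetical branch whose path domain is bounded by x + y <= 10 (the boundary, test points, and error bound here are invented for illustration):

```python
# Sketch of White and Cohen's on/near-boundary idea (illustrative values only):
# pick points ON the boundary x + y == 10 and points just OFF it, so a shifted
# or tilted boundary in a faulty implementation classifies at least one point
# differently.

EPS = 1e-6

def branch(x, y):
    return "inside" if x + y <= 10 else "outside"

on_points  = [(4, 6), (10, 0)]                  # lie exactly on x + y = 10
off_points = [(4, 6 + EPS), (10, EPS)]          # just beyond the boundary

for p in on_points + off_points:
    print(p, "->", branch(*p))
# A correct boundary sends every on-point "inside" and every off-point
# "outside"; disagreement within the error bound signals a domain error.
```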


Joint Proceedings of the Second International Software Architecture Workshop (ISAW-2) and International Workshop on Multiple Perspectives in Software Development (Viewpoints '96) on SIGSOFT '96 Workshops | 1996

Software testing at the architectural level

Debra J. Richardson; Alexander L. Wolf



IEEE Transactions on Software Engineering | 1993

An analysis of test data selection criteria using the RELAY model of fault detection

Debra J. Richardson; Margaret C. Thompson

RELAY is a model of faults and failures that defines failure conditions, which describe test data for which execution will guarantee that a fault originates erroneous behavior that also transfers through computations and information flow until a failure is revealed. This model of fault detection provides a framework within which the capabilities of other testing criteria can be evaluated. Three test data selection criteria that detect faults in six fault classes are analyzed. This analysis shows that none of these criteria is capable of guaranteeing detection for these fault classes and points out two major weaknesses of the criteria. The first weakness is that the criteria do not consider the potential unsatisfiability of their rules: each criterion includes rules that are sufficient to cause potential failures for some fault classes, yet when such rules are unsatisfiable, many faults may remain undetected. The second weakness is a failure to integrate the proposed rules.
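The distinction the model enforces, that a fault must both originate an error and transfer it to output, shows up even in a tiny invented example (the correct/faulty pair and test cases below are hypothetical and are not drawn from the paper's six fault classes):

```python
# Illustrative sketch (not the paper's formal model): for a hypothetical
# operator fault, executing the faulty statement is not enough -- the test
# data must make the error originate AND transfer to observable output.

def correct(a, b): return max(a + b, 4)
def faulty(a, b):  return max(a * b, 4)   # '+' mistyped as '*'

cases = [(2, 2),   # executes the fault, but 2+2 == 2*2: no origination
         (1, 2),   # originates (3 != 2) but max(..., 4) masks it: no transfer
         (3, 4)]   # originates (7 != 12) and transfers: failure revealed

for a, b in cases:
    print((a, b), "correct:", correct(a, b), "faulty:", faulty(a, b))
```

Only the third case reveals the fault; a criterion whose rules stop at the first or second case leaves it undetected.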


ACM Sigsoft Software Engineering Notes | 1989

Approaches to specification-based testing

Debra J. Richardson; T. Owen O'Malley; C. Tittle

Current software testing practices focus, almost exclusively, on the implementation, despite widely acknowledged benefits of testing based on software specifications. We propose approaches to specification-based testing by extending a wide variety of implementation-based testing techniques to be applicable to formal specification languages. We demonstrate these approaches for the Anna and Larch specification languages.
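As a rough sketch of testing against a formal specification rather than the implementation, consider a pre/postcondition contract for an integer square root (the contract style here is invented for illustration and is not Anna or Larch syntax):

```python
# Sketch of specification-based testing (hypothetical contract, not the
# paper's Anna/Larch treatment): test cases are derived from the spec's
# case structure rather than from the code.

SPEC = {   # contract of isqrt(n)
    "pre":  lambda n: n >= 0,
    "post": lambda n, r: r * r <= n < (r + 1) * (r + 1),
}

def isqrt(n):                 # implementation under test
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    return r

# Spec-derived test points: perfect squares and their neighbors, i.e. the
# boundaries of the postcondition's inequality.
for n in [0, 1, 3, 4, 8, 9]:
    assert SPEC["pre"](n)
    r = isqrt(n)
    assert SPEC["post"](n, r), f"spec violated at n={n}: got {r}"
print("all specification-derived tests passed")
```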


Journal of Systems and Software | 1985

Applications of symbolic evaluation

Lori A. Clarke; Debra J. Richardson

Symbolic evaluation is a program analysis method that represents a program's computations and domain by symbolic expressions. In this paper a general functional model of a program is first presented. Then, three related methods of symbolic evaluation, which create this functional description from a program, are described: path-dependent symbolic evaluation provides a representation of a specified path; dynamic symbolic evaluation, which is more restrictive but less costly than path-dependent symbolic evaluation, is a data-dependent method; and global symbolic evaluation, which is the most general yet most costly method, captures the functional behavior of an entire program when successful. All three methods have been implemented in experimental systems. Some of the major implementation concerns, which include effectively representing loops, determining path feasibility, dealing with compound data structures, and handling routine invocations, are explained. The remainder of the paper surveys the range of applications to which symbolic evaluation techniques are being applied. The current and potential role of symbolic evaluation in verification, testing, debugging, optimization, and software development is explored.
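A toy version of path-dependent symbolic evaluation, far simpler than the systems described in the paper (the two-branch program and string-based symbolic expressions are invented for this sketch):

```python
# Toy path-dependent symbolic evaluation: "execute" one path of a program
# over symbolic inputs, accumulating each variable's symbolic expression and
# the path condition.

def symbolic_path(take_then_branch):
    env = {"x": "X", "y": "Y"}           # inputs bound to symbols
    pc = []                              # path condition (a conjunction)

    env["z"] = f'({env["x"]} + {env["y"]})'        # z = x + y
    if take_then_branch:                            # if z > 0:
        pc.append(f'{env["z"]} > 0')
        env["z"] = f'(2 * {env["z"]})'              #     z = 2 * z
    else:
        pc.append(f'not ({env["z"]} > 0)')
    return env["z"], pc

for branch in (True, False):
    value, pc = symbolic_path(branch)
    print("result =", value, " path condition:", " and ".join(pc))
# Feeding the path condition to a constraint solver would decide feasibility;
# an unsatisfiable condition exposes an infeasible path.
```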


Proceedings of the Second Workshop on Software Testing, Verification, and Analysis | 1988

The RELAY model of error detection and its application

Debra J. Richardson; Margaret C. Thompson

The authors report on a model of error detection called RELAY, which provides a fault-based criterion for test data selection. The RELAY model builds on the testing theory introduced by L.J. Morell (1981), where an error is created when an incorrect state is introduced at some fault location and is propagated if it persists to the output. The authors refine this theory by more precisely defining the notion of when an error is introduced and by differentiating between the persistence of an error through computations and its persistence through data-flow operations. They introduce the analogous concepts of origination and transfer, meaning the first erroneous evaluation and the persistence of that erroneous evaluation, respectively.
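To make the origination/transfer distinction concrete, here is a hypothetical failure-condition sketch for a single invented operator fault (the predicates below are illustrative only, not the model's formal notation):

```python
# RELAY-style failure conditions for one hypothetical fault: suppose the
# expression 'a + b' was written as 'a - b' inside 'c = (a + b) * k'.

def originates(a, b):            # the smallest erroneous evaluation occurs
    return (a + b) != (a - b)    # i.e. b != 0

def transfers(a, b, k):          # error persists through the enclosing computation
    return (a + b) * k != (a - b) * k   # fails to transfer when k == 0

def reveals_failure(a, b, k):    # conjunction: originate AND transfer
    return originates(a, b) and transfers(a, b, k)

print(reveals_failure(1, 2, 3))  # True:  b != 0 and k != 0 -> failure revealed
print(reveals_failure(1, 2, 0))  # False: originates, but k == 0 masks it
print(reveals_failure(1, 0, 3))  # False: never originates when b == 0
```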

Collaboration


Dive into Debra J. Richardson's collaborations.

Top Co-Authors

Hadar Ziv, University of California
Lori A. Clarke, University of Massachusetts Amherst
Marcio S. Dias, University of California
Chang Liu, University of California
Ankita Raturi, University of California
Bill Tomlinson, University of California