
Publication


Featured research published by Paul A. Strooper.


Automation of Software Test | 2007

Automated Generation of Test Cases Using Model-Driven Architecture

Abu Zafer Javed; Paul A. Strooper

In this paper, we demonstrate a method that uses the model transformation technology of MDA to generate unit test cases from a platform-independent model of the system. The method we propose is based on sequence diagrams. First we model the sequence diagram, and then this model is automatically transformed into a general unit test case model (an xUnit model that is independent of a particular unit testing framework), using model-to-model transformations. Then model-to-text transformations are applied to the xUnit model to generate platform-specific (JUnit, SUnit, etc.) test cases that are concrete and executable. We have implemented the transformations in a prototype tool based on the Tefkat transformation tool and MOFScript. The paper gives details of the tool and the transformations that we have developed. We have applied the method to a small example (ATM simulation).
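A hedged illustration of the kind of platform-specific test such a model-to-text transformation might emit. Python's unittest (an xUnit-family framework) stands in for JUnit/SUnit here, and the Atm class is a hypothetical stand-in for the paper's ATM simulation, which is not reproduced in the abstract.

```python
import unittest

# Hypothetical stand-in for the paper's ATM simulation (an assumption,
# not the authors' code).
class Atm:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        return self.balance

# The kind of concrete, executable xUnit test a model-to-text
# transformation might emit from a sequence-diagram interaction
# "withdraw(30) -> balance 70".
class AtmWithdrawTest(unittest.TestCase):
    def test_withdraw_reduces_balance(self):
        atm = Atm(balance=100)
        self.assertEqual(atm.withdraw(30), 70)

    def test_overdraw_is_rejected(self):
        atm = Atm(balance=10)
        with self.assertRaises(ValueError):
            atm.withdraw(50)

if __name__ == "__main__":
    unittest.main(exit=False)
```

In the paper's pipeline only the final model-to-text step differs per framework; the same intermediate xUnit model could equally be rendered as JUnit or SUnit code.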


IEEE Transactions on Software Engineering | 2003

Tool support for testing concurrent Java components

Brad Long; Daniel Hoffman; Paul A. Strooper

Concurrent programs are hard to test due to the inherent nondeterminism. This paper presents a method and tool support for testing concurrent Java components. Tool support is offered through ConAn (Concurrency Analyser), a tool for generating drivers for unit testing Java classes that are used in a multithreaded context. To obtain adequate controllability over the interactions between Java threads, the generated driver contains threads that are synchronized by a clock. The driver automatically executes the calls in the test sequence in the prescribed order and compares the outputs against the expected outputs specified in the test sequence. The method and tool are illustrated in detail on an asymmetric producer-consumer monitor. Their application to testing over 20 concurrent components, a number of which are sourced from industry and were found to contain faults, is presented and discussed.
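The clock-based synchronisation can be sketched as follows. ConAn generates Java drivers; this Python analogue (all names hypothetical) only illustrates how a logical clock gives the driver control over the interleaving of calls made by the test threads.

```python
import threading
import queue

class Clock:
    """Logical clock that threads wait on to run at their prescribed tick."""
    def __init__(self):
        self.tick = 0
        self.cond = threading.Condition()

    def await_tick(self, t):
        with self.cond:
            while self.tick < t:
                self.cond.wait()

    def advance(self):
        with self.cond:
            self.tick += 1
            self.cond.notify_all()

def run_test():
    clock = Clock()
    buf = queue.Queue(maxsize=1)   # stand-in for the component under test
    results = []

    def producer():                # prescribed to call put() at tick 1
        clock.await_tick(1)
        buf.put("item")

    def consumer():                # prescribed to call get() at tick 2
        clock.await_tick(2)
        results.append(buf.get())

    threads = [threading.Thread(target=producer),
               threading.Thread(target=consumer)]
    for t in threads:
        t.start()
    clock.advance()                # tick 1: release the producer
    clock.advance()                # tick 2: release the consumer
    for t in threads:
        t.join()
    return results
```

A real ConAn driver additionally compares each call's output at each tick against the expected output recorded in the test sequence.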


IEEE Transactions on Software Engineering | 1991

Automated module testing in Prolog

Daniel Hoffman; Paul A. Strooper

Tools and techniques for writing scripts in Prolog that automatically test modules implemented in C are presented. Both the input generation and the test oracle problems are addressed, focusing on a balance between the adequacy of the test inputs and the cost of developing the output oracle. The authors investigate automated input generation according to functional testing, random testing, and a novel approach based on trace invariants. For each input generation scheme, a mechanism for generating the expected outputs has been developed. The methods are described and illustrated in detail. Script development and maintenance costs appear to be reasonable, and run-time performance appears to be acceptable.
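The trace-invariant idea can be sketched as follows. The paper's scripts are written in Prolog and drive C modules, so this Python sketch is only an analogue: instead of precomputing expected outputs, the oracle checks a property of the call trace itself under randomly generated inputs.

```python
import random

# Hypothetical module under test; the paper's C modules are not shown.
class Stack:
    def __init__(self):
        self._items = []
    def push(self, x):
        self._items.append(x)
    def pop(self):
        return self._items.pop()
    def top(self):
        return self._items[-1]

def check_trace_invariant(trials=1000, seed=0):
    """Random testing with a trace invariant as the oracle:
    push(x) followed by top() observes x, and pop() then returns x."""
    rng = random.Random(seed)
    s = Stack()
    for _ in range(trials):
        x = rng.randint(0, 99)
        s.push(x)
        assert s.top() == x
        assert s.pop() == x
    return True
```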


Software - Practice and Experience | 1997

ClassBench: a framework for automated class testing

Daniel Hoffman; Paul A. Strooper

In contrast to the explosion of activity in object-oriented design and programming, little attention has been given to object testing. We present a novel approach to automated testing designed especially for collection classes. In the ClassBench methodology, a testgraph partially models the states and transitions of the Class-Under-Test (CUT) state/transition graph. To determine the expected behavior for the test cases generated from the testgraph, the tester develops an oracle class, providing essentially the same operations as the CUT but supporting only the testgraph states and transitions. Surprisingly thorough testing is achievable with simple testgraphs and oracles. The ClassBench framework supports the tester by providing a testgraph editor, automated testgraph traversal, and a variety of utility classes. Test suites can be easily configured for regression testing, where many test cases are run, and for debugging, where a few test cases are selected to isolate the bug. We present the ClassBench methodology and framework in detail, illustrated on both simple examples and on test suites from commercial collection class libraries.
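The testgraph-plus-oracle idea can be sketched as follows (names hypothetical): a small testgraph is traversed, driving both the class under test and an oracle class that supports only the testgraph's states, and their observations are compared after each transition.

```python
class IntSetCUT:
    """Stand-in for a collection class under test."""
    def __init__(self):
        self._s = set()
    def add(self, x):
        self._s.add(x)
    def remove(self, x):
        self._s.discard(x)
    def size(self):
        return len(self._s)

class IntSetOracle:
    """Oracle offering the same operations, but only for testgraph states."""
    def __init__(self):
        self._items = []
    def add(self, x):
        if x not in self._items:
            self._items.append(x)
    def remove(self, x):
        if x in self._items:
            self._items.remove(x)
    def size(self):
        return len(self._items)

# Testgraph arcs: EMPTY --add(1)--> {1} --add(2)--> {1,2} --remove(1)--> {2}
TESTGRAPH = [("add", 1), ("add", 2), ("remove", 1)]

def traverse():
    cut, oracle = IntSetCUT(), IntSetOracle()
    for op, arg in TESTGRAPH:
        getattr(cut, op)(arg)      # drive the class under test
        getattr(oracle, op)(arg)   # drive the oracle in lock-step
        assert cut.size() == oracle.size()
    return cut.size()
```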


Software Testing, Verification & Reliability | 2000

From Object-Z specifications to ClassBench test suites

David A. Carrington; Ian MacColl; Jason McDonald; Leesa Murray; Paul A. Strooper

This paper describes a method for specification-based class testing that incorporates test case generation, execution, and evaluation based on formal specifications. This work builds on previous achievements in the areas of specification-based testing and class testing by integrating the two within a single framework. The initial step of the method is to generate test templates for individual operations from a specification written in the Object-Z specification language. These test templates are combined to produce a finite state machine for the class that is used as the basis for test case execution using the ClassBench test execution framework. An oracle derived from the Object-Z specification is used to evaluate the outputs. The method is explained using a simple example and its application to a more substantial case study is also discussed.


Software Testing, Verification & Reliability | 1999

Boundary values and automated component testing

Daniel Hoffman; Paul A. Strooper; Lee J. White

Structural coverage approaches to software testing are mature, having been thoroughly studied for decades. Significant tool support, in the form of instrumentation for statement or branch coverage, is available in commercial compilers. While structural coverage is sensitive to which code structures are covered, it is insensitive to the values of the variables when those structures are executed. Data coverage approaches, e.g. boundary value coverage, are far less mature. They are known to practitioners mostly as a few useful heuristics with very little support for automation. Because of its sensitivity to variable values, data coverage has significant potential, especially when used in combination with structural coverage. This paper generalizes the traditional notion of boundary coverage, and formalizes it with two new data coverage measures. These measures are used to automatically generate test cases and, from these, sophisticated test suites for functions from the C++ Standard Template Library. Finally, the test suites are evaluated with respect to both structural coverage and discovery of seeded faults.
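The basic boundary-value heuristic that the paper generalizes can be sketched as follows; this is a deliberately minimal version of the formal coverage measures, with a hypothetical function under test: for each boundary b of the input domain, generate the values b-1, b, and b+1.

```python
def clamp(x, lo=0, hi=100):
    """Hypothetical function under test; its behaviour changes at lo and hi."""
    return max(lo, min(x, hi))

def boundary_inputs(boundaries):
    """Generate b-1, b, b+1 around each partition boundary."""
    pts = set()
    for b in boundaries:
        pts.update((b - 1, b, b + 1))
    return sorted(pts)

def run_boundary_tests():
    # clamp's partitions meet at 0 and 100, so test just around both.
    return {x: clamp(x) for x in boundary_inputs([0, 100])}
```

Structural coverage alone would be satisfied by almost any input reaching each branch; the boundary values are what make the suite sensitive to off-by-one faults in the comparisons.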


Asia Pacific Software Engineering Conference | 1997

Possum: an animator for the SUM specification language

Daniel Hazel; Paul A. Strooper; Owen Traynor

We present an overview of the Possum specification animation system, an addition to the Cogito methodology and toolset. Possum allows interpretation (or animation) of specifications written in SUM, which is the specification language used in Cogito. We give an account of the functionality of Possum, illustrated by some simple examples, and describe the way in which Possum is used in a typical Cogito development. The current capabilities and limitations of Possum are reviewed from a technical perspective and an overview of other systems that support the animation of formal specification languages is presented.


International Conference on Formal Engineering Methods | 1998

Translating Object-Z specifications to passive test oracles

Jason McDonald; Paul A. Strooper

A test oracle provides a means for determining whether an implementation functions according to its specification. A passive test oracle checks the behaviour of the implementation, but does not attempt to reproduce this behaviour. The paper describes the translation of formal specifications of container classes to passive test oracles. Specifically, we use Object-Z for specifications and C++ for oracles. We discuss several practical issues for the use of formal specifications in test oracle generation. We then present the translation process and illustrate it with an example based on an integer set class.
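The passive-oracle idea can be sketched as follows. The paper targets C++ oracles derived from Object-Z; this Python analogue (names hypothetical) only illustrates "check, don't reproduce": the oracle verifies the specification's postcondition against the implementation's observable state instead of computing the result itself.

```python
class IntSet:
    """Stand-in for the integer set implementation under test."""
    def __init__(self):
        self._s = set()
    def insert(self, x):
        self._s.add(x)
    def contents(self):
        return frozenset(self._s)

def checked_insert(s, x):
    """Passive oracle for insert: check the Object-Z-style postcondition
    s' = s U {x} rather than reproducing the implementation's behaviour."""
    before = s.contents()
    s.insert(x)
    after = s.contents()
    assert after == before | {x}, "postcondition violated"
    return after
```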


Proceedings of 9th Conference on Software Engineering Education | 1996

Teaching and testing

Daniel Hoffman; Paul A. Strooper; Peter Walsh

We present a novel approach to the use of testing in teaching software engineering, based on more than a decade of experience. We teach tools and techniques for automated testing to both undergraduate and graduate students. With the undergraduates we focus on fundamental principles, illustrated with test suites for C modules and systems. With the graduates we emphasize state-of-the-art methods, demonstrated on test suites for C++ class libraries. Throughout, a hands-on approach dominates; the students receive numerous complete test suites for study, execution, and modification. We also make extensive use of automated testing in grading, to reduce grading time and to allow graders to focus on issues such as code style. Even more important, automated grading reinforces key software engineering principles such as implementation to specification.


Concurrency and Computation: Practice and Experience | 2007

A method for verifying concurrent Java components based on an analysis of concurrency failures

Brad Long; Paul A. Strooper; Luke Wildman

The Java programming language supports concurrency. Concurrent programs are harder to verify than their sequential counterparts due to their inherent non-determinism and a number of specific concurrency problems, such as interference and deadlock. In previous work, we have developed the ConAn testing tool for the testing of concurrent Java components. ConAn has been found to be effective at testing a large number of components, but there are certain classes of failures that are hard to detect using ConAn. Although a variety of other verification tools and techniques have been proposed for the verification of concurrent software, they each have their strengths and weaknesses. In this paper, we propose a method for verifying concurrent Java components that includes ConAn and complements it with other static and dynamic verification tools and techniques. The proposal is based on an analysis of common concurrency problems and concurrency failures in Java components. As a starting point for determining the concurrency failures in Java components, a Petri-net model of Java concurrency is used. By systematically analysing the model, we come up with a complete classification of concurrency failures. The classification and analysis are then used to determine suitable tools and techniques for detecting each of the failures. Finally, we propose to combine these tools and techniques into a method for verifying concurrent Java components.

Collaboration

Top co-authors of Paul A. Strooper:

Ian J. Hayes (University of Queensland)
Leesa Murray (University of Queensland)
Robert Colvin (University of Queensland)
Brad Long (University of Queensland)
Jason McDonald (University of Queensland)
Luke Wildman (University of Queensland)
Tim Miller (University of Melbourne)
David Hemer (University of Queensland)