
Publication


Featured research published by Dave Towey.


IEEE Transactions on Software Engineering | 2014

How Effectively Does Metamorphic Testing Alleviate the Oracle Problem?

Huai Liu; Fei-Ching Kuo; Dave Towey; Tsong Yueh Chen

In software testing, something which can verify the correctness of test case execution results is called an oracle. The oracle problem occurs when either an oracle does not exist, or exists but is too expensive to be used. Metamorphic testing is a testing approach which uses metamorphic relations, properties of the software under test represented in the form of relations among inputs and outputs of multiple executions, to help verify the correctness of a program. This paper presents new empirical evidence to support this approach, which has been used to alleviate the oracle problem in various applications and to enhance several software analysis and testing techniques. It has been observed that identification of a sufficient number of appropriate metamorphic relations for testing, even by inexperienced testers, was possible with a very small amount of training. Furthermore, the cost-effectiveness of the approach could be enhanced through the use of more diverse metamorphic relations. The empirical studies presented in this paper clearly show that a small number of diverse metamorphic relations, even those identified in an ad hoc manner, had a similar fault-detection capability to a test oracle, and could thus effectively help alleviate the oracle problem.
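The core idea can be illustrated with a small sketch (our own illustration, not from the paper): a metamorphic relation for a sine implementation, sin(x) = sin(π - x), lets two executions check each other even when no oracle gives the expected output value. The names `sine_under_test` and `mr_holds` are hypothetical.

```python
import math

def sine_under_test(x):
    """The program under test; stands in for software whose correct
    output is hard or expensive to verify directly (the oracle problem)."""
    return math.sin(x)

def mr_holds(f, x, tol=1e-9):
    """Metamorphic relation sin(x) == sin(pi - x): rather than checking
    the output value itself, check a relation between the outputs of a
    source execution f(x) and a follow-up execution f(pi - x)."""
    return abs(f(x) - f(math.pi - x)) <= tol

# Any source input for which the relation fails reveals a fault,
# without an expected value ever being computed.
violations = [x for x in (0.1, 0.5, 1.0, 2.0) if not mr_holds(sine_under_test, x)]
```

A faulty implementation (for example, one returning x itself) violates the relation for most source inputs, which is how a small set of diverse relations can stand in for an oracle.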


Lecture Notes in Computer Science | 2002

Restricted Random Testing

Kwok-Ping Chan; Tsong Yueh Chen; Dave Towey

This paper presents a novel adaptation of traditional random testing, called Restricted Random Testing (RRT). RRT offers a significant improvement over random testing, as measured by the F-measure. It describes the rationale behind RRT and explains its algorithm. RRT's performance is examined using several experiments, the results of which are presented and discussed.


International Journal of Software Engineering and Knowledge Engineering | 2006

RESTRICTED RANDOM TESTING: ADAPTIVE RANDOM TESTING BY EXCLUSION

Kwok-Ping Chan; Tsong Yueh Chen; Dave Towey

Restricted Random Testing (RRT) is a new method of testing software that improves upon traditional Random Testing (RT) techniques. Research has indicated that failure patterns (portions of an input domain which, when executed, cause the program to fail or reveal an error) can influence the effectiveness of testing strategies. For certain types of failure patterns, it has been found that a widespread and even distribution of test cases in the input domain can be significantly more effective at detecting failure compared with ordinary RT. Testing methods based on RT, but which aim to achieve even and widespread distributions, have been called Adaptive Random Testing (ART) strategies. One implementation of ART is RRT. RRT uses exclusion zones around executed, but non-failure-causing, test cases to restrict the regions of the input domain from which subsequent test cases may be drawn. In this paper, we introduce the motivation behind RRT, explain the algorithm and detail some empirical analyses carried out to examine the effectiveness of the method. Two versions of RRT are presented: Ordinary RRT (ORRT) and Normalized RRT (NRRT). The two versions share the same fundamental algorithm, but differ in their treatment of non-homogeneous input domains. Investigations into the use of alternative exclusion shapes are outlined, and a simple technique for reducing the computational overheads of RRT, prompted by the alternative exclusion shape investigations, is also explained. The performance of RRT is compared with RT and another ART method based on maximized minimum test case separation (DART), showing excellent improvement over RT and a very favorable comparison with DART.
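The exclusion-zone mechanism can be sketched in 2-D as follows (a minimal illustration with a hypothetical `rrt_next` helper, assuming a unit-square input domain and circular zones whose total area equals a target exclusion rate; the paper's exact zone sizing and treatment of non-homogeneous domains differ):

```python
import math
import random

def rrt_next(executed, domain=(0.0, 1.0, 0.0, 1.0), target_rate=0.5, rng=random):
    """Pick the next test case for 2-D Restricted Random Testing.

    An exclusion circle is placed around each previously executed
    (non-failure-causing) test case; the circles' total area equals
    target_rate times the domain area, so each has radius
    sqrt(target_rate * area / (n * pi)). Random candidates falling
    inside any circle are discarded and redrawn.
    """
    xmin, xmax, ymin, ymax = domain
    area = (xmax - xmin) * (ymax - ymin)
    n = len(executed)
    radius = math.sqrt(target_rate * area / (n * math.pi)) if n else 0.0
    while True:
        cand = (rng.uniform(xmin, xmax), rng.uniform(ymin, ymax))
        if all(math.dist(cand, t) >= radius for t in executed):
            return cand

rng = random.Random(42)
tests = []
for _ in range(10):
    tests.append(rrt_next(tests, rng=rng))
```

Because each new test case is forced outside every exclusion circle, the resulting set spreads more evenly than plain random sampling, which is the property the F-measure comparisons exploit.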


International Conference on Reliable Software Technologies | 2003

Normalized restricted random testing

Kwok-Ping Chan; Tsong Yueh Chen; Dave Towey

Restricted Random Testing (RRT) is a new method of testing software that improves upon traditional random testing (RT) techniques. This paper presents new data in support of the efficiency of RRT, and presents a variation of the algorithm, Normalized Restricted Random Testing (NRRT). NRRT permits the tester to have better information about the target exclusion rate (R) of RRT, the main control parameter of the method. We examine the performance of the NRRT and Ordinary RRT (ORRT) methods using simulations and experiments, and offer some guidance for their use in practice.


Science in China Series F: Information Sciences | 2015

A revisit of three studies related to random testing

Tsong Yueh Chen; Fei-Ching Kuo; Dave Towey; Zhi Quan Zhou

Software testing is an approach that ensures the quality of software through execution, with a goal being to reveal failures and other problems as quickly as possible. Test case selection is a fundamental issue in software testing, and has generated a large body of research, especially with regards to the effectiveness of random testing (RT), where test cases are randomly selected from the software’s input domain. In this paper, we revisit three of our previous studies. The first study investigated a sufficient condition for partition testing (PT) to outperform RT, and was motivated by various controversial and conflicting results suggesting that sometimes PT performed better than RT, and sometimes the opposite. The second study aimed at enhancing RT itself, and was motivated by the fact that RT continues to be a fundamental and popular testing technique. This second study enhanced RT fault detection effectiveness by making use of the common observation that failure-causing inputs tend to cluster together, and resulted in a new family of RT techniques: adaptive random testing (ART), which is random testing with an even spread of test cases across the input domain. Following the successful use of failure-causing region contiguity insights to develop ART, we conducted a third study on how to make use of other characteristics of failure-causing inputs to develop more effective test case selection strategies. This third study revealed how best to approach testing strategies when certain characteristics of the failure-causing inputs are known, and produced some interesting and important results. In revisiting these three previous studies, we explore their unexpected commonalities, and identify diversity as a key concept underlying their effectiveness. This observation further prompted us to examine whether or not such a concept plays a role in other areas of software testing, and our conclusion is that, yes, diversity appears to be one of the most important concepts in the field of software testing.


Computer Software and Applications Conference | 2006

Forgetting Test Cases

Kwok-Ping Chan; Tsong Yueh Chen; Dave Towey

Adaptive random testing (ART) methods are software testing methods which are based on random testing, but which use additional mechanisms to ensure more even and widespread distributions of test cases over an input domain. Restricted random testing (RRT) is a version of ART which uses exclusion regions and restriction of test case generation to outside these regions. RRT has been found to perform very well, but incurs some additional computational cost in its restriction of the input domain. This paper presents a method of reducing overheads called forgetting, where the number of test cases used in the restriction algorithm can be limited, and thus the computational overheads reduced. The motivation for forgetting comes from its importance as a human strategy for learning. Several implementations are presented and examined using simulations. The results are very encouraging.
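The simplest forgetting scheme, a sliding window over recent test cases, can be sketched like this (an illustrative 2-D unit-square sketch with a hypothetical `rrt_with_forgetting` helper; the paper examines several forgetting schemes, of which this is only one):

```python
import math
import random

def rrt_with_forgetting(num_tests, memory=5, target_rate=0.5, seed=0):
    """Restricted Random Testing with 'forgetting': only the most recent
    `memory` test cases define exclusion zones, so the number of distance
    checks per candidate is capped at a constant, keeping the restriction
    overhead from growing with the size of the test set."""
    rng = random.Random(seed)
    tests = []
    for _ in range(num_tests):
        remembered = tests[-memory:]          # older test cases are forgotten
        n = len(remembered)
        radius = math.sqrt(target_rate / (n * math.pi)) if n else 0.0
        while True:
            cand = (rng.random(), rng.random())
            if all(math.dist(cand, t) >= radius for t in remembered):
                tests.append(cand)
                break
    return tests
```

The trade-off is between the even spread that full RRT restriction buys and the constant per-candidate cost that forgetting guarantees.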


Future Generation Computer Systems | 2015

Search-based QoS ranking prediction for web services in cloud environments

Chengying Mao; Jifu Chen; Dave Towey; Jinfu Chen; Xiaoyuan Xie

Unlike traditional quality of service (QoS) value prediction, QoS ranking prediction examines the order of services under consideration for a particular user. To address this NP-Complete problem, greedy strategy-based solutions, such as the CloudRank algorithm, have been widely adopted. However, they can only produce locally approximate solutions. In this paper, we propose a search-based prediction framework to address the QoS ranking problem. The traditional particle swarm optimization (PSO) algorithm has been adapted to optimize the order of services according to their QoS records. In real situations, QoS records for a given consumer are often incomplete, so the related data from close neighbour users is often used to determine preference relations among services. In order to filter the neighbours for a specific user, we present an improved method for measuring the similarity between two users by considering the occurrence probability of service pairs. Based on the similarity computation, the top-k neighbours are selected to provide QoS information support for evaluation of the service ranking. A fitness function for an ordered service sequence is defined to guide the search algorithm to find high-quality ranking results, and some additional strategies, such as initial solution selection and trap escaping, are also presented. To validate the effectiveness of our proposed solution, experimental studies have been performed on real-world QoS data, the results from which show that our PSO-based approach has a better ranking for services than that computed by the existing CloudRank algorithm, and that the improvement is statistically significant, in most cases. Highlights: (1) An improved similarity measurement for two ranked sequences is proposed. (2) A new solution for predicting QoS ranking is proposed by adopting the PSO algorithm. (3) The PSO-based QoS ranking prediction algorithm is better than CloudRank.
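The neighbour-filtering step rests on comparing how two users order the services they have both used. A minimal sketch of such a preference-pair similarity (a Kendall-style measure in the spirit of, but not identical to, the paper's improved metric; `ranking_similarity` is a hypothetical name):

```python
from itertools import combinations

def ranking_similarity(qos_a, qos_b):
    """Similarity between two users, measured as the fraction of commonly
    invoked service pairs on which they agree about which service is better.

    qos_a / qos_b map service id -> observed QoS value (higher is better).
    Returns a value in [0, 1]; 0.0 when the users share no usable pair.
    """
    common = set(qos_a) & set(qos_b)
    pairs = list(combinations(sorted(common), 2))
    if not pairs:
        return 0.0
    agree = sum(
        1 for s, t in pairs
        if (qos_a[s] - qos_a[t]) * (qos_b[s] - qos_b[t]) > 0  # same preference order
    )
    return agree / len(pairs)
```

Users scoring highest under such a measure form the top-k neighbourhood whose records supply the missing preference relations.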


International Journal of Software Engineering and Knowledge Engineering | 2013

Prioritization of combinatorial test cases by incremental interaction coverage

Rubing Huang; Xiaodong Xie; Dave Towey; Tsong Yueh Chen; Yansheng Lu; Jinfu Chen

Combinatorial testing is a well-recognized testing method, and has been widely applied in practice. To facilitate analysis, a common approach is to assume that all test cases in a combinatorial test suite have the same fault detection capability. However, when testing resources are limited, the order of executing the test cases is critical. To improve testing cost-effectiveness, prioritization of combinatorial test cases is employed. The most popular approach is based on interaction coverage, which prioritizes combinatorial test cases by repeatedly choosing an unexecuted test case that covers the largest number of uncovered parameter value combinations of a given strength (level of interaction among parameters). However, this approach suffers from some drawbacks. Based on previous observations that the majority of faults in practical systems can usually be triggered with parameter interactions of small strengths, we propose a new strategy of prioritizing combinatorial test cases by incrementally adjusting the strength values. Experimental results show that our method performs better than the random prioritization technique and the technique of prioritizing combinatorial test suites according to test case generation order, and has better performance than the interaction-coverage-based test prioritization technique in most cases.
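The incremental idea can be sketched as a greedy loop that exhausts strength-1 coverage before escalating to strength 2 (an illustrative sketch of the general strategy, not the paper's exact algorithm; `prioritize_incremental` and `interactions` are hypothetical names):

```python
from itertools import combinations

def interactions(tc, strength):
    """All (parameter indices, values) combinations of a given strength
    covered by one test case (a tuple of parameter values)."""
    return {(idx, tuple(tc[i] for i in idx))
            for idx in combinations(range(len(tc)), strength)}

def prioritize_incremental(suite, max_strength=2):
    """Greedily order test cases by interaction coverage, starting at
    strength 1 and moving to the next strength once no remaining test
    case covers anything new at the current strength."""
    remaining = list(suite)
    ordered = []
    for strength in range(1, max_strength + 1):
        covered = set()
        for tc in ordered:                    # combinations the prefix already covers
            covered |= interactions(tc, strength)
        while remaining:
            best = max(remaining, key=lambda tc: len(interactions(tc, strength) - covered))
            if not interactions(best, strength) - covered:
                break                         # strength exhausted; escalate
            covered |= interactions(best, strength)
            ordered.append(best)
            remaining.remove(best)
    ordered += remaining                      # leftovers keep original order
    return ordered
```

Because low strengths are exhausted first, the small-strength interactions that trigger most practical faults are covered early in the execution order.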


IEEE Computer | 2016

Metamorphic Testing for Cybersecurity

Tsong Yueh Chen; Fei-Ching Kuo; Wenjuan Ma; Willy Susilo; Dave Towey; Jeffrey M. Voas; Zhi Quan Zhou

Metamorphic testing (MT) can enhance security testing by providing an alternative to using a test oracle, which is often unavailable or impractical. The authors report how MT detected previously unknown bugs in real-world critical applications such as code obfuscators, giving evidence that software testing requires diverse perspectives to achieve greater cybersecurity.


International Conference on Reliable Software Technologies | 2004

Good Random Testing

Kwok-Ping Chan; Tsong Yueh Chen; Dave Towey

Software Testing is recognized as an essential part of the Software Development process. Random Testing (RT), the selection of test cases at random from the input domain, is a simple and efficient method of Software Testing. Previous research has indicated that, under certain circumstances, the performance of RT can be improved by enforcing a more even, well-spread distribution of test cases over the input domain. Test cases that contribute to this goal can be considered ‘good,’ and are more desirable when choosing potential test cases than those that do not contribute. Fuzzy Set Theory enables a calculation of the degree of membership of the set of ‘good’ test cases for any potential test case, in other words, a calculation of how ‘good’ the test case is. This paper presents research in the area of improving on the failure finding efficiency of RT using Fuzzy Set Theory. An approach is proposed and evaluated according to simulation results and comparison with other testing methods.
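One way to make the fuzzy notion concrete (our own illustrative membership function, not the paper's; `goodness` and `next_test` are hypothetical names) is to let a candidate's degree of membership in the set of 'good' test cases grow with its distance from the tests already executed:

```python
import math
import random

def goodness(candidate, executed, d_ref=0.3):
    """Fuzzy membership in the set of 'good' test cases: grows linearly
    with the distance to the nearest executed test case, saturating at 1.
    A candidate far from all previous tests contributes most to an even,
    widespread distribution."""
    if not executed:
        return 1.0
    nearest = min(math.dist(candidate, t) for t in executed)
    return min(1.0, nearest / d_ref)

def next_test(executed, candidates=10, threshold=0.8, rng=random):
    """Draw random candidates from the unit square and return the first
    one that is 'good enough', falling back to the best candidate seen."""
    best, best_mu = None, -1.0
    for _ in range(candidates):
        cand = (rng.random(), rng.random())
        mu = goodness(cand, executed)
        if mu >= threshold:
            return cand
        if mu > best_mu:
            best, best_mu = cand, mu
    return best
```

Grading candidates by degree of goodness, rather than accepting or rejecting them outright, is what distinguishes this fuzzy approach from hard exclusion zones.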

Collaboration


Dive into Dave Towey's collaboration.

Top Co-Authors

Tsong Yueh Chen (Swinburne University of Technology)
Fei-Ching Kuo (Swinburne University of Technology)
Zhi Quan Zhou (University of Wollongong)
Chang-ai Sun (University of Science and Technology Beijing)
Tianchong Wang (The Chinese University of Hong Kong)