Publications


Featured research published by Fei-Ching Kuo.


Journal of Systems and Software | 2010

Adaptive Random Testing: The ART of test case diversity

Tsong Yueh Chen; Fei-Ching Kuo; Robert G. Merkel; T. H. Tse

Random testing is not only a useful testing technique in itself, but also plays a core role in many other testing methods. Hence, any significant improvement to random testing has an impact throughout the software testing community. Recently, Adaptive Random Testing (ART) was proposed as an effective alternative to random testing. This paper presents a synthesis of the most important research results related to ART. In the course of our research and through further reflection, we have realised how the techniques and concepts of ART can be applied in a much broader context, which we present here. We believe such ideas can be applied in a variety of areas of software testing, and even beyond software testing. Amongst these ideas, we particularly note the fundamental role of diversity in test case selection strategies. We hope this paper serves to provoke further discussions and investigations of these ideas.
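
To make the diversity idea concrete, here is a minimal sketch of one well-known ART variant, fixed-size-candidate-set ART (FSCS-ART), which picks each new test case to be as far as possible from those already executed. The 1-D numeric domain, candidate-set size, and failure region are illustrative assumptions, not details from the paper.

```python
# A minimal sketch of FSCS-ART on a 1-D numeric input domain. The domain
# bounds, candidate-set size k, and the failure predicate are assumptions
# made for illustration only.
import random

def fscs_art_next(executed, lo=0.0, hi=1.0, k=10):
    """Pick, from k random candidates, the one farthest from all
    previously executed test cases (maximin distance)."""
    if not executed:
        return random.uniform(lo, hi)
    candidates = [random.uniform(lo, hi) for _ in range(k)]
    return max(candidates,
               key=lambda c: min(abs(c - e) for e in executed))

def run_until_failure(is_failure, max_tests=10_000):
    """Count how many ART-selected tests are needed to hit a failure."""
    executed = []
    for n in range(1, max_tests + 1):
        t = fscs_art_next(executed)
        executed.append(t)
        if is_failure(t):
            return n  # this count is the F-measure of the run
    return max_tests

# Example: a contiguous failure region [0.42, 0.45), i.e. a 3% failure rate.
print(run_until_failure(lambda x: 0.42 <= x < 0.45))
```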


Formal Methods | 2013

A theoretical analysis of the risk evaluation formulas for spectrum-based fault localization

Xiaoyuan Xie; Tsong Yueh Chen; Fei-Ching Kuo; Baowen Xu

An important research area of Spectrum-Based Fault Localization (SBFL) is the effectiveness of risk evaluation formulas. Most previous studies have adopted an empirical approach, which can hardly be considered as sufficiently comprehensive because of the huge number of combinations of various factors in SBFL. Though some studies aimed at overcoming the limitations of the empirical approach, none of them has provided a completely satisfactory solution. Therefore, we provide a theoretical investigation on the effectiveness of risk evaluation formulas. We define two types of relations between formulas, namely, equivalent and better. To identify the relations between formulas, we develop an innovative framework for the theoretical investigation. Our framework is based on the concept that the determinant for the effectiveness of a formula is the number of statements with risk values higher than the risk value of the faulty statement. We group all program statements into three disjoint sets with risk values higher than, equal to, and lower than the risk value of the faulty statement, respectively. For different formulas, the sizes of their sets are compared using the notion of subset. We use this framework to identify the maximal formulas which should be the only formulas to be used in SBFL.
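
The framework's central quantity can be made concrete with a small sketch: for a given formula, partition the statements into those scoring higher than, equal to, and lower than the faulty statement. Tarantula and Ochiai are used here only as two well-known example formulas, and the toy spectra are invented for illustration; the paper's subset-based comparison between formulas is not reproduced.

```python
# A minimal sketch of the set-based view: for a risk formula, what matters
# is how many statements score strictly higher than, equal to, or lower
# than the faulty statement.
import math

def tarantula(ef, ep, nf, np):
    total_f, total_p = ef + nf, ep + np
    f_rate = ef / total_f if total_f else 0.0
    p_rate = ep / total_p if total_p else 0.0
    return f_rate / (f_rate + p_rate) if (f_rate + p_rate) else 0.0

def ochiai(ef, ep, nf, np):
    denom = math.sqrt((ef + nf) * (ef + ep))
    return ef / denom if denom else 0.0

def partition(spectra, faulty, formula):
    """Split statements into the three sets relative to the faulty one."""
    risk = {s: formula(*counts) for s, counts in spectra.items()}
    r_fault = risk[faulty]
    higher = {s for s, r in risk.items() if r > r_fault}
    equal = {s for s, r in risk.items() if r == r_fault}
    lower = {s for s, r in risk.items() if r < r_fault}
    return higher, equal, lower

# Toy spectra: statement -> (ef, ep, nf, np) over 5 failing, 5 passing runs.
spectra = {"s1": (5, 1, 0, 4), "s2": (3, 3, 2, 2), "s3": (1, 5, 4, 0)}
for formula in (tarantula, ochiai):
    higher, equal, lower = partition(spectra, "s1", formula)
    print(formula.__name__, len(higher), len(equal), len(lower))
```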


IEEE Transactions on Software Engineering | 2014

How Effectively Does Metamorphic Testing Alleviate the Oracle Problem?

Huai Liu; Fei-Ching Kuo; Dave Towey; Tsong Yueh Chen

In software testing, something which can verify the correctness of test case execution results is called an oracle. The oracle problem occurs when either an oracle does not exist, or exists but is too expensive to be used. Metamorphic testing is a testing approach which uses metamorphic relations, properties of the software under test represented in the form of relations among inputs and outputs of multiple executions, to help verify the correctness of a program. This paper presents new empirical evidence to support this approach, which has been used to alleviate the oracle problem in various applications and to enhance several software analysis and testing techniques. It has been observed that identification of a sufficient number of appropriate metamorphic relations for testing, even by inexperienced testers, was possible with a very small amount of training. Furthermore, the cost-effectiveness of the approach could be enhanced through the use of more diverse metamorphic relations. The empirical studies presented in this paper clearly show that a small number of diverse metamorphic relations, even those identified in an ad hoc manner, had a similar fault-detection capability to a test oracle, and could thus effectively help alleviate the oracle problem.
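
As a concrete illustration of a metamorphic relation (an assumed example, not one of the paper's study objects), the identity sin(pi - x) = sin(x) can be checked across paired executions without ever knowing the expected output of either execution; a minimal sketch:

```python
# A minimal sketch of a metamorphic test: without knowing the expected
# value of sin(x) (no oracle), the relation sin(pi - x) == sin(x) must
# still hold between two executions. The input range and tolerance are
# assumptions for illustration.
import math
import random

def mr_sine_symmetry(runs=1000, tol=1e-9):
    for _ in range(runs):
        x = random.uniform(-100.0, 100.0)
        source = math.sin(x)               # source test case
        follow_up = math.sin(math.pi - x)  # follow-up test case
        assert abs(source - follow_up) <= tol, f"MR violated at x={x}"

mr_sine_symmetry()
print("metamorphic relation held on all sampled inputs")
```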


Eleventh Annual International Workshop on Software Technology and Engineering Practice | 2003

Metamorphic testing and beyond

Tsong Yueh Chen; Fei-Ching Kuo; T. H. Tse; Zhi Quan Zhou

When testing a program, correctly executed test cases are seldom explored further, even though they may carry useful information. Metamorphic testing proposes to generate follow-up test cases to check important properties of the target function. It does not need a human oracle for output prediction and comparison. In this paper, we highlight the basic concepts of metamorphic testing and some interesting extensions in the areas of program testing, proving, and debugging. Future research directions are also proposed.
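
A minimal sketch of the follow-up idea, using permutation invariance of sorting as an assumed example relation (not taken from the paper): a source test case is executed, a follow-up case is generated from it, and only the relation between the two outputs is checked.

```python
# A minimal sketch of follow-up test case generation: a source test case
# that executed without incident is reused to generate a follow-up case
# via a necessary property of the target function. Here the property of
# sorting is permutation invariance; the data sizes are assumptions.
import random

def sut_sort(xs):  # stand-in for the program under test
    return sorted(xs)

def metamorphic_round(n=100):
    source_input = [random.randint(-1000, 1000) for _ in range(n)]
    source_output = sut_sort(source_input)
    follow_up_input = random.sample(source_input, len(source_input))
    follow_up_output = sut_sort(follow_up_input)
    # No oracle predicts source_output; the MR only compares executions.
    assert source_output == follow_up_output, "permutation MR violated"

for _ in range(1000):
    metamorphic_round()
print("all follow-up executions satisfied the relation")
```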


Journal of Systems and Software | 2006

On the statistical properties of testing effectiveness measures

Tsong Yueh Chen; Fei-Ching Kuo; Robert G. Merkel

We examine the statistical variability of three commonly used software testing effectiveness measures: the E-measure (expected number of failures detected), the P-measure (probability of detecting at least one failure), and the F-measure (number of tests required to detect the first failure). We show that for random testing with replacement, the F-measure is distributed according to the geometric distribution. A simulation study examines the distributions of two adaptive random testing methods to investigate how closely their sampling distributions approximate the geometric distribution. One key observation is that, in the worst-case scenario, the sampling distribution of adaptive random testing is very similar to that of random testing. The E-measure and P-measure have normal sampling distributions but high variability, meaning that large sample sizes are required to obtain results with satisfactorily narrow confidence intervals. We illustrate this with a simulation study for the P-measure. Our results reinforce, from a perspective other than empirical analysis, that adaptive random testing is a more effective alternative to random testing with reference to the F-measure. We consider the implications of our findings for previous studies conducted in the area and make recommendations for future studies.
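
The geometric-distribution claim for random testing with replacement is easy to check empirically; the following sketch, with an assumed failure rate theta, is illustrative rather than the paper's simulation setup.

```python
# A minimal sketch checking the distributional claim for random testing
# with replacement: with failure rate theta, the F-measure should follow
# a geometric distribution with mean 1/theta. theta and the sample size
# are assumptions for illustration.
import random
from statistics import mean

theta = 0.01  # assumed failure rate of the toy program

def f_measure():
    n = 1
    while random.random() >= theta:  # each test fails with prob. theta
        n += 1
    return n

samples = [f_measure() for _ in range(100_000)]
print("observed mean F-measure:", mean(samples))  # expect about 1/theta = 100
print("observed P(F = 1):", sum(s == 1 for s in samples) / len(samples),
      "vs geometric pmf value theta =", theta)
```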


International Conference on Quality Software | 2004

On the statistical properties of the F-measure

Tsong Yueh Chen; Fei-Ching Kuo; Robert G. Merkel

The F-measure, defined as the number of distinct test cases required to detect the first program failure, is an effectiveness measure for debug testing strategies. We show that for random testing with replacement, the F-measure is distributed according to the geometric distribution. A simulation study examines the distributions of two adaptive random testing methods to study how closely their sampling distributions approximate the geometric distribution, revealing that in the worst-case scenario, the sampling distribution of adaptive random testing is very similar to that of random testing. Our results provide an answer to a conjecture that adaptive random testing is always a more effective alternative to random testing, with reference to the F-measure. We consider the implications of our findings for previous studies conducted in the area and make recommendations for future studies.
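
For reference, the geometric distribution named in the abstract can be written out explicitly, where theta denotes the failure rate of the program under test:

```latex
% Geometric distribution of the F-measure under random testing with
% replacement, where \theta is the program's failure rate.
\[
  \Pr(F = n) = (1 - \theta)^{n-1}\,\theta, \qquad n = 1, 2, \dots,
  \qquad \mathbb{E}[F] = \frac{1}{\theta}.
\]
```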


Software Testing, Verification & Reliability | 2012

Automated functional testing of online search services

Zhi Quan Zhou; Shujia Zhang; Markus Hagenbuchner; T. H. Tse; Fei-Ching Kuo; Tsong Yueh Chen

Search services are the main interface through which people discover information on the Internet. A fundamental challenge in testing search services is the lack of oracles: the sheer volume of data on the Internet prohibits testers from verifying the results. Furthermore, it is difficult to objectively assess ranking quality because different assessors can have very different opinions on the relevance of a Web page to a query. This paper presents a novel method for automatically testing search services without the need for a human oracle. The experimental findings reveal that some commonly used search engines, including Google, Yahoo!, and Live Search, are not as reliable as most users would expect. For example, they may fail to find pages that exist in their own repositories, or rank pages in a way that is logically inconsistent. Suggestions are made for search service providers to improve their service quality.
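
One way to check logical consistency without an oracle, in the spirit of the paper's method: a logically narrower query should never return more results than its broader base query. The specific relation and the API stub below are assumptions, not the paper's actual relations or any provider's API.

```python
# A minimal sketch of an oracle-free consistency check for a search
# service. search_count is a hypothetical stand-in for a call to a real
# search API; no specific provider's API is assumed here.
def search_count(query: str) -> int:
    """Hypothetical: return the engine's reported hit count for query."""
    raise NotImplementedError("plug in a real search API client here")

def check_narrowing_consistency(broad: str, extra_term: str) -> bool:
    """A conjunction narrows the query, so its hit count must not grow."""
    narrow = f"{broad} {extra_term}"
    return search_count(narrow) <= search_count(broad)

# Example usage, once a real client is wired in:
# assert check_narrowing_consistency("metamorphic testing", "oracle")
```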


Symposium on Search Based Software Engineering | 2013

Provably Optimal and Human-Competitive Results in SBSE for Spectrum Based Fault Localisation

Xiaoyuan Xie; Fei-Ching Kuo; Tsong Yueh Chen; Shin Yoo; Mark Harman

Fault localisation uses so-called risk evaluation formulae to guide the localisation process. For more than a decade, the design and improvement of these formulae has been conducted entirely manually, through iterative publication in the fault localisation literature. However, we recently demonstrated that SBSE can be used to design such formulae automatically, by recasting the task as a problem for Genetic Programming (GP). In this paper we prove that our GP has produced four previously unknown, globally optimal formulae. Though other human-competitive results have previously been reported in the SBSE literature, this is the first SBSE result, in any application domain, for which human competitiveness has been formally proved. We also show that some of these formulae exhibit counter-intuitive characteristics, making them less likely to have been found solely by further human effort.
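
A minimal sketch of the recasting, under assumed details: a candidate formula is an expression over the spectrum counts (ef, ep, nf, np), and fitness rewards ranking the known faulty statement highly on training data. Neither the expression shown nor the fitness definition reproduces the paper's actual GP setup or its evolved formulae.

```python
# A minimal sketch of formula design as a search problem: GP would evolve
# expression trees like `candidate`; fitness here is the average number
# of statements ranked above the known fault (lower is better). All
# details are illustrative assumptions.
def candidate(ef, ep, nf, np):
    return ef - ep / (ep + np + 1)  # one arbitrary candidate expression

def fitness(formula, training_set):
    """Average count of statements scoring above the faulty statement."""
    total = 0
    for spectra, faulty in training_set:
        risks = {s: formula(*counts) for s, counts in spectra.items()}
        total += sum(r > risks[faulty] for r in risks.values())
    return total / len(training_set)

# Toy training datum: spectra per statement plus the known faulty one.
toy = [({"s1": (5, 1, 0, 4), "s2": (3, 3, 2, 2), "s3": (1, 5, 4, 0)}, "s1")]
print(fitness(candidate, toy))
```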


Computer Software and Applications Conference | 2004

A revisit of adaptive random testing by restriction

Kwok-Ping Chan; Tsong Yueh Chen; Fei-Ching Kuo; Dave Towey

Adaptive random testing is a black-box testing method based on the intuition that the failure-finding efficiency of random testing can be improved, in certain situations, by ensuring a more widespread and even distribution of test cases over the input domain. One way of achieving this distribution is through the use of exclusion zones and restriction, resulting in a method called restricted random testing (RRT). Recent investigations into the RRT method have revealed several interesting and significant insights. A method of reducing the computational overheads of testing methods, by partitioning an input domain, applying the method to only one of the subdomains, and mapping the test cases to the other subdomains, has recently been introduced. This method, called mirroring, in addition to alleviating computational costs, has some properties which fit nicely with the insights into RRT, offering solutions to some of its possible shortcomings. In this paper we discuss the RRT method and these additional insights, explain mirroring, and detail applications of mirroring to RRT. The resulting mirror RRT method proves to be a very attractive variation of RRT.
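
A minimal sketch of the mirroring idea on a 1-D domain, with an assumed translation mirror function: test cases are generated in one subdomain only (by RRT or any other method) and mapped into the remaining subdomain, so the expensive distance and exclusion computations run on only half of the suite.

```python
# A minimal sketch of mirroring on the 1-D domain [0, 1): test cases are
# generated only in the left half, then mapped into the right half by a
# mirror function. The translation mirror used here is one of several
# possible mappings and is an illustrative choice.
def mirror_translate(t, width=0.5):
    """Map a test case from subdomain [0, width) to [width, 2*width)."""
    assert 0.0 <= t < width
    return t + width

source_tests = [0.08, 0.21, 0.37]       # e.g. produced by RRT in [0, 0.5)
mirrored = [mirror_translate(t) for t in source_tests]
print(sorted(source_tests + mirrored))  # full suite covers both halves
```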


IEEE Transactions on Reliability | 2013

Code Coverage of Adaptive Random Testing

Tsong Yueh Chen; Fei-Ching Kuo; Huai Liu; W. E. Wong

Random testing is a basic software testing technique that can be used to assess the software reliability as well as to detect software failures. Adaptive random testing has been proposed to enhance the failure-detection capability of random testing. Previous studies have shown that adaptive random testing can use fewer test cases than random testing to detect the first software failure. In this paper, we evaluate and compare the performance of adaptive random testing and random testing from another perspective, that of code coverage. As shown in various investigations, a higher code coverage not only brings a higher failure-detection capability, but also improves the effectiveness of software reliability estimation. We conduct a series of experiments based on two categories of code coverage criteria: structure-based coverage, and fault-based coverage. Adaptive random testing can achieve higher code coverage than random testing with the same number of test cases. Our experimental results imply that, in addition to having a better failure-detection capability than random testing, adaptive random testing also delivers a higher effectiveness in assessing software reliability, and a higher confidence in the reliability of the software under test even when no failure is detected.
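
The comparison can be illustrated with a toy proxy for structure-based coverage, where the input domain is partitioned into equal "branches" and a branch counts as covered once a test case lands in it. The setup below is an invented stand-in for the paper's experiments, not their actual coverage criteria or subject programs.

```python
# A minimal sketch comparing random testing (RT) with FSCS-style ART on
# an artificial coverage proxy: [0, 1) split into 100 cells, each cell
# "covered" when some test falls in it. Domain, cell model, and suite
# size are assumptions.
import random

def coverage(tests, branches=100):
    return len({int(t * branches) for t in tests}) / branches

def rt_suite(n):
    return [random.random() for _ in range(n)]

def art_suite(n, k=10):
    tests = []
    for _ in range(n):
        cands = [random.random() for _ in range(k)]
        tests.append(max(cands,
                         key=lambda c: min((abs(c - t) for t in tests),
                                           default=1.0)))
    return tests

n = 80
print("RT coverage :", coverage(rt_suite(n)))
print("ART coverage:", coverage(art_suite(n)))  # typically higher for ART
```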

Collaboration


Dive into Fei-Ching Kuo's collaboration.

Top Co-Authors

Tsong Yueh Chen

Swinburne University of Technology

Huai Liu

Swinburne University of Technology

Zhi Quan Zhou

University of Wollongong

Dave Towey

The University of Nottingham Ningbo China

Xiaoyuan Xie

Swinburne University of Technology

T. H. Tse

University of Hong Kong

Kwan Yong Sim

Swinburne University of Technology Sarawak Campus

Mark Harman

University College London
