Publication


Featured research published by Tsong Yueh Chen.


Journal of Systems and Software | 2013

An orchestrated survey of methodologies for automated software test case generation

Saswat Anand; Edmund K. Burke; Tsong Yueh Chen; John A. Clark; Myra B. Cohen; Wolfgang Grieskamp; Mark Harman; Mary Jean Harrold; Phil McMinn

Test case generation is among the most labour-intensive tasks in software testing. It also has a strong impact on the effectiveness and efficiency of software testing. For these reasons, it has been one of the most active research topics in software testing for several decades, resulting in many different approaches and tools. This paper presents an orchestrated survey of the most prominent techniques for automatic generation of software test cases, reviewed in self-standing sections. The techniques presented include: (a) structural testing using symbolic execution, (b) model-based testing, (c) combinatorial testing, (d) random testing and its variant of adaptive random testing, and (e) search-based testing. Each section is contributed by world-renowned active researchers on the technique, and briefly covers the basic ideas underlying the method, the current state of the art, a discussion of the open research problems, and a perspective of the future development of the approach. As a whole, the paper aims at giving an introductory, up-to-date and (relatively) short overview of research in automatic test case generation, while ensuring a comprehensive and authoritative treatment.


Lecture Notes in Computer Science | 2004

Adaptive random testing

Tsong Yueh Chen; Hing Leung; I. K. Mak

In this paper, we introduce an enhanced form of random testing called Adaptive Random Testing. Adaptive random testing seeks to distribute test cases more evenly within the input space. It is based on the intuition that, for non-point types of failure patterns, an even spread of test cases is more likely to detect failures using fewer test cases than ordinary random testing. Experiments are performed using published programs. Results show that adaptive random testing does outperform ordinary random testing significantly (by as much as 50%) for the set of programs under study. These results are very encouraging, providing evidence that our intuition is likely to be useful in improving the effectiveness of random testing.
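To make the even-spreading idea concrete, here is a minimal sketch of one common realization, fixed-size-candidate-set ART: each new test case is the candidate farthest, by max-min distance, from everything executed so far. The one-dimensional domain, Euclidean distance, and candidate-set size are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of fixed-size-candidate-set ART (one way to spread test
# cases evenly). Assumptions not taken from the paper: a one-dimensional
# numeric input domain, Euclidean distance, candidate-set size k = 10.
import random

def fscs_art(is_failure, lo=0.0, hi=1.0, k=10, max_tests=10_000, seed=0):
    """Run ART until the first failure; return the number of tests executed."""
    rng = random.Random(seed)
    executed = [rng.uniform(lo, hi)]     # the first test case is purely random
    if is_failure(executed[0]):
        return 1
    for n in range(2, max_tests + 1):
        # Generate k random candidates and execute the one farthest
        # (by max-min distance) from all previously executed test cases.
        candidates = [rng.uniform(lo, hi) for _ in range(k)]
        best = max(candidates, key=lambda c: min(abs(c - t) for t in executed))
        executed.append(best)
        if is_failure(best):
            return n
    return max_tests

# Example: a contiguous ("block") failure region occupying 1% of the domain.
print(fscs_art(lambda x: 0.42 <= x < 0.43))
```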


Journal of Systems and Software | 2010

Adaptive Random Testing: The ART of test case diversity

Tsong Yueh Chen; Fei-Ching Kuo; Robert G. Merkel; T. H. Tse

Random testing is not only a useful testing technique in itself, but also plays a core role in many other testing methods. Hence, any significant improvement to random testing has an impact throughout the software testing community. Recently, Adaptive Random Testing (ART) was proposed as an effective alternative to random testing. This paper presents a synthesis of the most important research results related to ART. In the course of our research and through further reflection, we have realised how the techniques and concepts of ART can be applied in a much broader context, which we present here. We believe such ideas can be applied in a variety of areas of software testing, and even beyond software testing. Amongst these ideas, we particularly note the fundamental role of diversity in test case selection strategies. We hope this paper serves to provoke further discussions and investigations of these ideas.


International Conference on Quality Software | 2008

Adaptive Random Testing

Tsong Yueh Chen

Summary form only given. Random testing is a basic testing technique. Motivated by the observation that neighboring inputs normally exhibit similar failure behavior, adaptive random testing has recently been proposed to enhance the fault detection capability of random testing. The intuition of adaptive random testing is to spread the randomly generated test cases evenly. Experimental results have shown that adaptive random testing can require as few as 50% of the test cases needed by random testing with replacement to detect the first failure. These results have a very significant impact on software testing, because random testing is a basic and popular technique. In view of such a significant improvement over random testing, it is natural to consider replacing random testing with adaptive random testing. Hence, much existing work involving random testing may be worth revisiting using adaptive random testing instead. Obviously, there are different approaches to evenly spreading random test cases. In this tutorial, we present several approaches and discuss their advantages and disadvantages. Furthermore, the favorable and unfavorable conditions for adaptive random testing are discussed. Most existing research on adaptive random testing involves only numeric programs. The recent success in applying adaptive random testing to non-numeric programs is also discussed.
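As a rough empirical illustration of the F-measure claim (the number of test cases needed to detect the first failure), the simulation below contrasts ordinary random testing with a simple max-min-distance ART. The one-dimensional domain, block failure pattern, failure rate, candidate-set size, and trial counts are all assumptions made for this sketch.

```python
# Rough simulation of the F-measure comparison. For a block failure pattern
# with failure rate 0.01, random testing with replacement needs about
# 1/0.01 = 100 test cases on average to reveal the first failure; a
# max-min-distance ART typically needs noticeably fewer.
import random
import statistics

def trial(use_art, theta=0.01, k=10, seed=None):
    rng = random.Random(seed)
    start = rng.uniform(0, 1 - theta)            # random block failure region
    def fails(x):
        return start <= x < start + theta
    executed = []
    for n in range(1, 100_000):
        if use_art and executed:
            # ART step: best of k random candidates by max-min distance.
            cands = [rng.random() for _ in range(k)]
            t = max(cands, key=lambda c: min(abs(c - e) for e in executed))
        else:
            t = rng.random()                     # pure random testing
        executed.append(t)
        if fails(t):
            return n
    return 100_000                               # cap (practically unreached)

rt_mean = statistics.mean(trial(False, seed=s) for s in range(200))
art_mean = statistics.mean(trial(True, seed=s) for s in range(200))
print(f"RT  mean F-measure: {rt_mean:.1f}")      # around 100
print(f"ART mean F-measure: {art_mean:.1f}")     # typically well below RT's
```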


ACM Transactions on Software Engineering and Methodology | 1998

In black and white: an integrated approach to class-level testing of object-oriented programs

Huo Yan Chen; T. H. Tse; F. T. Chan; Tsong Yueh Chen

Because of the growing importance of object-oriented programming, a number of testing strategies have been proposed. They are based either on pure black-box or white-box techniques. We propose in this article a methodology to integrate the black- and white-box techniques. The black-box technique is used to select test cases. The white-box technique is mainly applied to determine whether two objects resulting from the program execution of a test case are observationally equivalent. It is also used to select test cases in some situations. We define the concept of a fundamental pair as a pair of equivalent terms that are formed by replacing all the variables on both sides of an axiom by normal forms. We prove that an implementation is consistent with respect to all equivalent terms if and only if it is consistent with respect to all fundamental pairs. In other words, the testing coverage of fundamental pairs is as good as that of all possible term rewritings, and hence we need only concentrate on the testing of fundamental pairs. Our strategy is based on mathematical theorems. According to the strategy, we propose an algorithm for selecting a finite set of fundamental pairs as test cases. Given a pair of equivalent terms as a test case, we should then determine whether the objects that result from executing the implemented program are observationally equivalent. We prove, however, that the observational equivalence of objects cannot be determined using a finite set of observable contexts (which are operation sequences ending with an observer function) derived from any black-box technique. Hence we supplement our approach with a “relevant observable context” technique, which is a heuristic white-box technique to select a relevant finite subset of the set of observable contexts for determining the observational equivalence. The relevant observable contexts are constructed from a data member relevance graph (DRG), which is an abstraction of the given implementation for a given specification. A semiautomatic tool has been developed to support this technique.
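The following sketch illustrates the fundamental-pair idea for a small stack class under the axiom pop(push(S, x)) = S. The class, the chosen normal forms, and the handful of observable contexts are all hypothetical, chosen only to show the shape of the check, not taken from the paper.

```python
# Fundamental-pair sketch: instantiate the variables of the axiom
# pop(push(S, x)) = S with normal forms (S = push(push(new, 1), 2), x = 3),
# then approximate observational equivalence with a few observable contexts,
# i.e. operation sequences ending with an observer.
class IntStack:
    def __init__(self):
        self.items = []
    def push(self, x):
        self.items.append(x)
        return self
    def pop(self):
        if self.items:
            self.items.pop()
        return self
    # observers
    def top(self):
        return self.items[-1] if self.items else None
    def size(self):
        return len(self.items)

def term_lhs():   # pop(push(S, x))
    return IntStack().push(1).push(2).push(3).pop()

def term_rhs():   # S
    return IntStack().push(1).push(2)

# Each context applies the same operation sequence to a fresh object from
# each side of the pair, then compares the observer's result.
contexts = [
    lambda s: s.top(),
    lambda s: s.size(),
    lambda s: s.pop().top(),
    lambda s: s.push(7).top(),
]
for obs in contexts:
    assert obs(term_lhs()) == obs(term_rhs())
print("fundamental pair passed all observable contexts")
```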


Information & Software Technology | 1998

A new heuristic for test suite reduction

Tsong Yueh Chen; Man Fai Lau

A testing objective has to be defined in testing a program. A test suite is then constructed to satisfy the testing objective. The constructed test suite contains redundancy when some of its proper subsets can still satisfy the same testing objective. Since executing test cases and maintaining a test suite for regression testing may be expensive, the problem of test suite reduction arises. This paper proposes a heuristic towards the optimization of a test suite.
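As an illustration of the problem setting, here is a sketch of a classical greedy set-cover style reduction; it is not necessarily the heuristic proposed in the paper, and the coverage data is made up.

```python
# Greedy test suite reduction sketch: each test case covers a set of test
# requirements (e.g., branches); find a small subset of the suite that still
# covers every requirement covered by the full suite.
def reduce_suite(coverage):
    """coverage: dict mapping test-case id -> set of requirements it covers."""
    remaining = set().union(*coverage.values())  # requirements still uncovered
    selected = []
    pool = dict(coverage)
    while remaining:
        # Pick the test covering the most still-uncovered requirements.
        best = max(pool, key=lambda t: len(pool[t] & remaining))
        if not pool[best] & remaining:
            break                                # nothing left can help
        selected.append(best)
        remaining -= pool.pop(best)
    return selected

suite = {
    "t1": {"r1", "r2"},
    "t2": {"r2", "r3"},
    "t3": {"r1", "r3", "r4"},
    "t4": {"r4"},
}
print(reduce_suite(suite))   # ['t3', 't1'] covers r1..r4
```

Greedy selection is a standard approximation here because the underlying optimization problem is a form of set cover, which is NP-hard in general.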


ACM Transactions on Software Engineering and Methodology | 2001

TACCLE: a methodology for object-oriented software testing at the class and cluster levels

Huo Yan Chen; T. H. Tse; Tsong Yueh Chen

Object-oriented programming consists of several different levels of abstraction, namely, the algorithmic level, class level, cluster level, and system level. The testing of object-oriented software at the algorithmic and system levels is similar to conventional program testing. Testing at the class and cluster levels poses new challenges. Since methods and objects may interact with one another with unforeseen combinations and invocations, they are much more complex to simulate and test than the hierarchy of functional calls in conventional programs. In this paper, we propose a methodology for object-oriented software testing at the class and cluster levels. In class-level testing, it is essential to determine whether objects produced from the execution of implemented systems would preserve the properties defined by the specification, such as behavioral equivalence and nonequivalence. Our class-level testing methodology addresses both of these aspects. For the testing of behavioral equivalence, we propose to select fundamental pairs of equivalent ground terms as test cases using a black-box technique based on algebraic specifications, and then determine by means of a white-box technique whether the objects resulting from executing such test cases are observationally equivalent. To address the testing of behavioral nonequivalence, we have identified and analyzed several nontrivial problems in the current literature. We propose to classify term equivalence into four types, thereby setting up new concepts and deriving important properties. Based on these results, we propose an approach to deal with the problems in the generation of nonequivalent ground terms as test cases. Relatively little research has contributed to cluster-level testing. In this paper, we also discuss black-box testing at the cluster level. We illustrate the feasibility of using Contract, a formal specification language for the behavioral dependencies and interactions among cooperating objects of different classes in a given cluster. We propose an approach to test the interactions among different classes using every individual message-passing rule in the given Contract specification. We also present an approach to examine the interactions among composite message-passing sequences. We have developed four testing tools to support our methodology.


Formal Methods | 2013

A theoretical analysis of the risk evaluation formulas for spectrum-based fault localization

Xiaoyuan Xie; Tsong Yueh Chen; Fei-Ching Kuo; Baowen Xu

An important research area of Spectrum-Based Fault Localization (SBFL) is the effectiveness of risk evaluation formulas. Most previous studies have adopted an empirical approach, which can hardly be considered as sufficiently comprehensive because of the huge number of combinations of various factors in SBFL. Though some studies aimed at overcoming the limitations of the empirical approach, none of them has provided a completely satisfactory solution. Therefore, we provide a theoretical investigation on the effectiveness of risk evaluation formulas. We define two types of relations between formulas, namely, equivalent and better. To identify the relations between formulas, we develop an innovative framework for the theoretical investigation. Our framework is based on the concept that the determinant for the effectiveness of a formula is the number of statements with risk values higher than the risk value of the faulty statement. We group all program statements into three disjoint sets with risk values higher than, equal to, and lower than the risk value of the faulty statement, respectively. For different formulas, the sizes of their sets are compared using the notion of subset. We use this framework to identify the maximal formulas which should be the only formulas to be used in SBFL.
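To illustrate the ingredients of the framework, the sketch below computes two well-known SBFL risk evaluation formulas, Tarantula and Ochiai, from small made-up spectra, and then partitions the other statements into the sets with risk values higher than or equal to that of a designated faulty statement. The spectra and the choice of faulty statement are invented for the example.

```python
# Two standard SBFL risk formulas computed from program spectra, where for
# each statement ef/ep count the failing/passing runs that execute it and
# nf/np count those that do not.
import math

def tarantula(ef, ep, nf, np):
    fail = ef / (ef + nf) if ef + nf else 0.0
    succ = ep / (ep + np) if ep + np else 0.0
    return fail / (fail + succ) if fail + succ else 0.0

def ochiai(ef, ep, nf, np):
    denom = math.sqrt((ef + nf) * (ef + ep))
    return ef / denom if denom else 0.0

# Made-up spectra: statement -> (ef, ep), with 2 failing and 3 passing runs.
spectra = {"s1": (2, 1), "s2": (1, 3), "s3": (2, 3), "s4": (0, 2)}
F, P = 2, 3
faulty = "s3"

for name, formula in (("Tarantula", tarantula), ("Ochiai", ochiai)):
    risk = {s: formula(ef, ep, F - ef, P - ep) for s, (ef, ep) in spectra.items()}
    higher = {s for s in risk if risk[s] > risk[faulty]}
    equal = {s for s in risk if risk[s] == risk[faulty]} - {faulty}
    print(name, {s: round(r, 2) for s, r in risk.items()},
          "higher:", higher, "equal:", equal)
```

The sizes of these "higher" and "equal" sets are exactly what determines how far down a ranked list the faulty statement appears, which is why the framework compares formulas through them.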


IEEE Transactions on Software Engineering | 1996

On the expected number of failures detected by subdomain testing and random testing

Tsong Yueh Chen; Yuen-Tak Yu

We investigate the efficacy of subdomain testing and random testing using the expected number of failures detected (the E-measure) as a measure of effectiveness. Simple as it is, the E-measure does provide a great deal of useful information about the fault detecting capability of testing strategies. With the E-measure, we obtain new characterizations of subdomain testing, including several new conditions that determine whether subdomain testing is more or less effective than random testing. Previously, the efficacy of subdomain testing strategies had been analyzed using the probability of detecting at least one failure (the P-measure) for the special case of disjoint subdomains only. In contrast, our analysis makes use of the E-measure and also considers the general case in which subdomains may or may not overlap. Furthermore, we discover important relations between the two different measures. From these relations, we also derive corresponding characterizations of subdomain testing in terms of the P-measure.
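For concreteness, here is a small worked computation of the two measures under their usual definitions for disjoint subdomains (E-measure: expected number of failures detected; P-measure: probability of detecting at least one failure). The subdomain counts and failure rates are invented.

```python
# E- and P-measures for disjoint subdomains, where subdomain i receives n_i
# test cases and has failure rate theta_i:
#   E = sum(n_i * theta_i)
#   P = 1 - prod((1 - theta_i) ** n_i)
from math import prod

def e_measure(ns, thetas):
    return sum(n, ) if False else sum(n * t for n, t in zip(ns, thetas))

def p_measure(ns, thetas):
    return 1 - prod((1 - t) ** n for n, t in zip(ns, thetas))

# Subdomain testing: three equal-sized subdomains, one test each, with all
# failures concentrated in the first subdomain.
ns, thetas = [1, 1, 1], [0.09, 0.0, 0.0]
print(e_measure(ns, thetas), p_measure(ns, thetas))    # 0.09 and 0.09

# Random testing over the whole domain: three tests, overall rate 0.03.
print(e_measure([3], [0.03]), p_measure([3], [0.03]))  # 0.09 and ~0.0873
```

Here the two strategies have equal E-measures but different P-measures, which is the kind of relationship between the two measures that the paper characterizes in general.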


Information & Software Technology | 2003

Fault-based testing without the need of oracles

Tsong Yueh Chen; T. H. Tse; Zhiquan Zhou

There are two fundamental limitations in software testing, known as the reliable test set problem and the oracle problem. Fault-based testing is an attempt by Morell to alleviate the reliable test set problem. In this paper, we propose to enhance fault-based testing to alleviate the oracle problem as well. We present an integrated method that combines metamorphic testing with fault-based testing using real and symbolic inputs.
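The metamorphic half of the combined method can be sketched in a few lines: rather than requiring an oracle for the expected output of sin(x), one checks a necessary relation between outputs. The relation sin(x) = sin(π - x) used below is a classic textbook example, not one taken from the paper, and the tolerance and input range are illustrative.

```python
# Metamorphic testing sketch: no oracle for sin(x) is needed; instead we
# check that outputs satisfy the identity sin(x) = sin(pi - x).
import math
import random

def mr_sine_holds(sin_impl, x, tol=1e-12):
    return abs(sin_impl(x) - sin_impl(math.pi - x)) <= tol

rng = random.Random(1)
for _ in range(1000):
    x = rng.uniform(-10, 10)
    assert mr_sine_holds(math.sin, x), f"metamorphic relation violated at {x}"
print("no violation of the metamorphic relation found")
```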

Collaboration


Dive into Tsong Yueh Chen's collaborations.

Top Co-Authors

Fei-Ching Kuo (Swinburne University of Technology)
T. H. Tse (University of Hong Kong)
Dave Towey (The University of Nottingham Ningbo China)
Pak-Lok Poon (Hong Kong Polytechnic University)
Zhi Quan Zhou (University of Wollongong)
Yuen-Tak Yu (City University of Hong Kong)
Huai Liu (Swinburne University of Technology)
Xiaoyuan Xie (Swinburne University of Technology)
Man Fai Lau (University of Melbourne)