Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Yuen-Tak Yu is active.

Publication


Featured research published by Yuen-Tak Yu.


Journal of Systems and Software | 2006

A comparison of MC/DC, MUMCUT and several other coverage criteria for logical decisions

Yuen-Tak Yu; Man Fai Lau

Many testing criteria, including condition coverage and decision coverage, are inadequate for software characterised by complex logical decisions, such as those in safety-critical software. In the past decade, more sophisticated testing criteria have been advocated. In particular, compliance with the MC/DC criterion has been mandated in the commercial aviation industry for the approval of airborne software. Recently, the MUMCUT criterion has been proposed as it guarantees the detection of certain faults in logical decisions in disjunctive normal form in which no variable is redundant. This paper compares MC/DC, MUMCUT and several other related coverage criteria for logical decisions by both formal and empirical analysis, focusing on the fault-detecting ability of test sets satisfying these testing criteria. Our results show that MC/DC test sets are effective, but they may still miss some faults that can almost always be detected by test sets satisfying the MUMCUT criterion.
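The notion of fault-detecting ability compared in this paper can be illustrated with a toy mutation-style check. This is only a sketch: the `detects` helper and the example decision are illustrative, not taken from the paper. A test set detects a fault if at least one test point makes the original decision and the faulty decision evaluate differently.

```python
def detects(original, mutant, tests):
    """A test set detects a fault iff some test point makes the original
    decision and the faulty (mutant) decision evaluate differently."""
    return any(original(*t) != mutant(*t) for t in tests)

# A decision in disjunctive normal form: (a and b) or c
orig = lambda a, b, c: (a and b) or c
# A literal negation fault in the same decision: (a and (not b)) or c
bad = lambda a, b, c: (a and (not b)) or c

tests = [(True, True, False), (False, True, False), (True, False, True)]
print(detects(orig, bad, tests))  # True: (True, True, False) exposes the fault
```

A criterion such as MC/DC or MUMCUT is then judged by how reliably its test sets expose faults of each type across many such expressions.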


IEEE Transactions on Software Engineering | 1996

On the expected number of failures detected by subdomain testing and random testing

Tsong Yueh Chen; Yuen-Tak Yu

We investigate the efficacy of subdomain testing and random testing using the expected number of failures detected (the E-measure) as a measure of effectiveness. Simple as it is, the E-measure does provide a great deal of useful information about the fault detecting capability of testing strategies. With the E-measure, we obtain new characterizations of subdomain testing, including several new conditions that determine whether subdomain testing is more or less effective than random testing. Previously, the efficacy of subdomain testing strategies has been analyzed using the probability of detecting at least one failure (the P-measure) for the special case of disjoint subdomains only. On the contrary, our analysis makes use of the E-measure and considers also the general case in which subdomains may or may not overlap. Furthermore, we discover important relations between the two different measures. From these relations, we also derive corresponding characterizations of subdomain testing in terms of the P-measure.
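The two measures can be written down directly. The following is a minimal sketch for the disjoint-subdomain case; the function names, subdomain test counts and failure rates are made up for illustration.

```python
def e_measure(counts, rates):
    """Expected number of failures detected: sum over subdomains
    of n_i * theta_i (test count times failure rate)."""
    return sum(n * theta for n, theta in zip(counts, rates))

def p_measure(counts, rates):
    """Probability of detecting at least one failure:
    1 - product over subdomains of (1 - theta_i) ** n_i."""
    none_detected = 1.0
    for n, theta in zip(counts, rates):
        none_detected *= (1 - theta) ** n
    return 1 - none_detected

# Two disjoint subdomains with failure rates 0.1 and 0.0, two tests each.
print(e_measure([2, 2], [0.1, 0.0]))  # 0.2
print(p_measure([2, 2], [0.1, 0.0]))  # about 0.19
```

Random testing with the same total budget corresponds to a single "subdomain" with the pooled failure rate, which is what makes the two strategies directly comparable under either measure.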


ACM Transactions on Software Engineering and Methodology | 2005

An extended fault class hierarchy for specification-based testing

Man Fai Lau; Yuen-Tak Yu

Kuhn, and subsequently Tsuchiya and Kikuno, developed a hierarchy of relationships among several common types of faults (such as variable and expression faults) for specification-based testing by studying the corresponding fault detection conditions. Their analytical results can help explain the relative effectiveness of various fault-based testing techniques previously proposed in the literature. This article extends and complements their studies by analyzing the relationships between variable and literal faults, and among literal, operator, term, and expression faults. Our analysis is more comprehensive and produces a richer set of findings that interpret previous empirical results, can be applied to the design and evaluation of test methods, and inform the way that test cases should be prioritized for earlier detection of faults. Although this work originated from the detection of faults related to specifications, our results are equally applicable to program-based predicate testing that involves logic expressions.


Information & Software Technology | 1996

Proportional sampling strategy: guidelines for software testing practitioners

F. T. Chan; Tsong Yueh Chen; I. K. Mak; Yuen-Tak Yu

Recently, several sufficient conditions have been developed that guarantee partition testing to have a higher probability of detecting at least one failure than random testing. One of these conditions is that the number of test cases selected from each partition is proportional to the size of the partition. We call such a method of allocating test cases the proportional sampling strategy. Although this condition is not the most general one, it is the most easily and practically applicable one. In this paper, we discuss how the proportional sampling strategy can be applied effectively in practice. Some practical issues that need to be attended to are identified, and guidelines to deal with these issues are suggested.
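Allocating test cases in proportion to partition sizes can be sketched as follows. The function name is ours, and largest-remainder rounding is just one of several reasonable ways to handle fractional allocations; the paper's guidelines address such practical choices.

```python
def proportional_allocation(sizes, total_tests):
    """Allocate a test budget across partitions in proportion to their
    sizes, distributing leftover tests by largest fractional remainder."""
    total_size = sum(sizes)
    exact = [total_tests * s / total_size for s in sizes]
    alloc = [int(x) for x in exact]
    # Hand out the remaining tests to the largest fractional remainders.
    by_remainder = sorted(range(len(sizes)),
                          key=lambda i: exact[i] - alloc[i], reverse=True)
    for i in by_remainder[: total_tests - sum(alloc)]:
        alloc[i] += 1
    return alloc

# Partitions of 50, 30 and 20 inputs, with a budget of 10 test cases.
print(proportional_allocation([50, 30, 20], 10))  # [5, 3, 2]
```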


IEEE Transactions on Software Engineering | 1994

On the relationship between partition and random testing

Tsong Yueh Chen; Yuen-Tak Yu

Weyuker and Jeng (ibid., vol. SE-17, pp. 703-711, July 1991) have investigated the conditions that affect the performance of partition testing and have compared analytically the fault-detecting ability of partition testing and random testing. This paper extends and generalizes some of their results. We give more general ways of characterizing the worst case for partition testing, along with a precise characterization of when this worst case is as good as random testing. We also find that partition testing is guaranteed to perform at least as well as random testing so long as the number of test cases selected is in proportion to the size of the subdomains.


Journal of Systems and Software | 2001

Proportional sampling strategy: a compendium and some insights

Tsong Yueh Chen; T. H. Tse; Yuen-Tak Yu

There have been numerous studies on the effectiveness of partition and random testing. In particular, the proportional sampling strategy has been proved, under certain conditions, to be the only form of partition testing that outperforms random testing regardless of where the failure-causing inputs are. This paper provides an integrated synthesis and overview of our recent studies on the proportional sampling strategy and its related work. Through this synthesis, we offer a perspective that properly interprets the results obtained so far, and present some of the interesting issues involved and new insights obtained during the course of this research.


Journal of Systems and Software | 2006

Automatic generation of test cases from Boolean specifications using the MUMCUT strategy

Yuen-Tak Yu; Man Fai Lau; Tsong Yueh Chen

A recent theoretical study has proved that the MUMCUT testing strategy (1) is guaranteed to detect seven types of faults in Boolean specifications in irredundant disjunctive normal form, and (2) requires only a subset of the test sets that satisfy the previously proposed MAX-A and MAX-B strategies, which can detect the same types of faults. This paper complements previous work by investigating various methods for the automatic generation of test cases to satisfy the MUMCUT strategy. We evaluate these methods by using several sets of Boolean expressions, including those derived from real airborne software systems. Our results indicate that the greedy CUN and UCN methods are clearly better than others in consistently producing significantly smaller test sets, whose sizes exhibit linear correlation with the length of the Boolean expressions in irredundant disjunctive normal form. This study provides empirical evidence that the MUMCUT strategy is indeed cost-effective for detecting the faults considered in this paper.


asia pacific software engineering conference | 1999

MUMCUT: a fault-based strategy for testing Boolean specifications

T.Y. Chen; M. F. Lau; Yuen-Tak Yu

We study the MUMCUT strategy, which integrates the MUTP, MNFP and CUTPNFP strategies previously proposed separately for testing Boolean specifications. The MUMCUT strategy is guaranteed to detect seven types of faults found in Boolean expressions. We describe an implementation that generates test sets satisfying the MUMCUT strategy, and empirically evaluate its cost-effectiveness. With respect to a previously published set of Boolean expressions derived from a real specification, we find that on average the MUMCUT strategy requires only about one quarter of the size of an exhaustive test set. Moreover, the MUMCUT strategy proves to be a substantial improvement over the MAX-A and MAX-B strategies, which detect the same types of faults.


Journal of Systems and Software | 2011

Non-parametric statistical fault localization

Zhenyu Zhang; W. K. Chan; T. H. Tse; Yuen-Tak Yu; Peifeng Hu

Fault localization is a major activity in program debugging. To automate this time-consuming task, many existing fault-localization techniques compare passed executions and failed executions, and suggest suspicious program elements, such as predicates or statements, to facilitate the identification of faults. To do that, these techniques propose statistical models and use hypothesis testing methods to test the similarity or dissimilarity of proposed program features between passed and failed executions. Furthermore, when applying their models, these techniques presume that the feature spectra come from populations with specific distributions. The accuracy of using a model to describe feature spectra is related to and may be affected by the underlying distribution of the feature spectra, and the use of a (sound) model on inapplicable circumstances to describe real-life feature spectra may lower the effectiveness of these fault-localization techniques. In this paper, we make use of hypothesis testing methods as the core concept in developing a predicate-based fault-localization framework. We report a controlled experiment to compare, within our framework, the efficacy, scalability, and efficiency of applying three categories of hypothesis testing methods, namely, standard non-parametric hypothesis testing methods, standard parametric hypothesis testing methods, and debugging-specific parametric testing methods. We also conduct a case study to compare the effectiveness of the winner of these three categories with the effectiveness of 33 existing statement-level fault-localization techniques. The experimental results show that the use of non-parametric hypothesis testing methods in our proposed predicate-based fault-localization model is the most promising.
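As a rough illustration of the non-parametric idea (not the paper's exact model; the predicate names, spectra and pairwise U computation below are all illustrative), predicates can be ranked by a Mann-Whitney U statistic comparing their evaluation biases in failed versus passed runs, without assuming any particular distribution of the spectra.

```python
def mann_whitney_u(xs, ys):
    """U statistic for sample xs against sample ys, computed by pairwise
    comparison (adequate for small samples; ties count as one half)."""
    u = 0.0
    for x in xs:
        for y in ys:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Hypothetical predicate spectra: (biases in passed runs, biases in failed runs).
spectra = {
    "p1": ([0.1, 0.2], [0.9, 0.8, 1.0]),
    "p2": ([0.5, 0.5], [0.5, 0.5, 0.5]),
}

# A predicate is more suspicious when its failed-run biases dominate.
ranking = sorted(
    spectra,
    key=lambda p: mann_whitney_u(spectra[p][1], spectra[p][0]),
    reverse=True,
)
print(ranking)  # ['p1', 'p2']: p1's spectra differ most between run outcomes
```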


international conference on quality software | 2006

Experiences with PASS: Developing and Using a Programming Assignment aSsessment System

Yuen-Tak Yu; Chung Keung Poon; Marian Choy

Computer programming is a skill required in many study disciplines, but acquiring the skill has been known to be difficult for many beginners. With the primary aim of improving the teaching and learning of computer programming, we have developed a Web-based system, namely the Programming Assignment aSsessment System (PASS), for use in our courses. This paper presents our experiences with the development and usage of the system, and discusses its impact on our practices in teaching and learning.

Collaboration


Dive into Yuen-Tak Yu's collaborations.

Top Co-Authors

Tsong Yueh Chen
Swinburne University of Technology

W. K. Chan
City University of Hong Kong

Chung Man Tang
City University of Hong Kong

Pak-Lok Poon
Hong Kong Polytechnic University

Man Fai Lau
Swinburne University of Technology

Chung Keung Poon
Caritas Institute of Higher Education

T. H. Tse
University of Hong Kong

Eric Ying Kwong Chan
City University of Hong Kong

Changjiang Jia
City University of Hong Kong

Marian Choy
City University of Hong Kong