Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where W. Eric Wong is active.

Publication


Featured research published by W. Eric Wong.


International Conference on Software Engineering | 1995

Effect of test set minimization on fault detection effectiveness

W. Eric Wong; Joseph Robert Horgan; Saul London; Aditya P. Mathur

Size and code coverage are important attributes of a set of tests. When a program P is executed on elements of the test set T, we can observe the fault detecting capability of T for P. We can also observe the degree to which T induces code coverage on P according to some coverage criterion. We would like to know whether it is the size of T or the coverage of T on P which determines the fault detection effectiveness of T for P. To address this issue we ask the following question: While keeping coverage constant, what is the effect on fault detection of reducing the size of a test set? We report results from an empirical study using the block and all-uses criteria as the coverage measures.
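
The reduction under study lends itself to a short illustration. Below is a minimal sketch, assuming per-test coverage data is available as sets of covered entities (blocks or all-uses pairs); the greedy procedure and all names are illustrative, not the exact algorithm from the study.

    # Greedy, coverage-preserving test set reduction (illustrative sketch).
    def minimize(coverage: dict[str, set[str]]) -> list[str]:
        """Repeatedly pick the test covering the most still-uncovered
        entities until the original suite's coverage is preserved."""
        remaining = set().union(*coverage.values())
        unused = set(coverage)
        minimized = []
        while remaining:
            best = max(unused, key=lambda t: len(coverage[t] & remaining))
            minimized.append(best)
            remaining -= coverage[best]
            unused.discard(best)
        return minimized

    suite = {
        "t1": {"b1", "b2", "b3"},
        "t2": {"b2", "b3"},      # redundant once t1 is chosen
        "t3": {"b4"},
    }
    print(minimize(suite))       # ['t1', 't3'] -- same coverage, smaller set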


Journal of Systems and Software | 1995

Reducing the cost of mutation testing: an empirical study

W. Eric Wong; Aditya P. Mathur

Of the various testing strategies, mutation testing has been empirically found to be effective in detecting faults. However, mutation often imposes unacceptable demands on computing and human resources because of the large number of mutants that need to be compiled and executed on one or more test cases. In addition, the tester needs to examine many mutants and analyze these for possible equivalence with the program under test. For these reasons, mutation is generally regarded as too expensive to use. Because one significant component of the cost of mutation is the execution of mutants against test cases, we believe that this cost can be reduced dramatically by reducing the number of mutants that need to be examined. We report results from a case study designed to investigate two alternatives for reducing the cost of mutation. The alternatives considered are randomly selected x% mutation and constrained abs/ror mutation. We provide experimental data indicating that both alternatives lead to test sets that distinguish a significant number of nonequivalent mutants and provide high all-uses coverage.
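
As a rough illustration of the first alternative, "randomly selected x% mutation" can be simulated by sampling mutants before execution. The harness below is a hypothetical sketch (the names and kill data are made up), not the study's actual procedure.

    # Estimate the mutation score from a random x% sample of mutants.
    import random

    def sampled_mutation_score(mutants, is_killed, x_percent, seed=0):
        """is_killed(m) -> True if some test distinguishes mutant m."""
        rng = random.Random(seed)
        k = max(1, round(len(mutants) * x_percent / 100))
        sample = rng.sample(mutants, k)
        return sum(is_killed(m) for m in sample) / k

    mutants = [f"m{i}" for i in range(1000)]
    is_killed = lambda m: int(m[1:]) % 10 != 0   # stand-in: 90% killable
    print(f"{sampled_mutation_score(mutants, is_killed, 10):.2f}")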


Journal of Systems and Software | 2010

A family of code coverage-based heuristics for effective fault localization

W. Eric Wong; Vidroha Debroy; Byoungju Choi

Locating faults in a program can be very time-consuming and arduous, and therefore, there is an increased demand for automated techniques that can assist in the fault localization process. In this paper, a code coverage-based method with a family of heuristics is proposed to prioritize suspicious code according to its likelihood of containing program bugs. Highly suspicious code (i.e., code that is more likely to contain a bug) should be examined before code that is relatively less suspicious; in this manner programmers can identify and repair faulty code more efficiently and effectively. We also address two important issues: first, how can each additional failed test case aid in locating program faults; and second, how can each additional successful test case help in locating program faults. We propose that, with respect to a piece of code, the contribution of the first failed test case that executes it in computing its likelihood of containing a bug is larger than or equal to that of the second failed test case that executes it, which in turn is larger than or equal to that of the third failed test case that executes it, and so on. This principle is also applied to the contribution provided by successful test cases that execute the piece of code. A tool, χDebug, was implemented to automate the computation of the suspiciousness of the code and the subsequent prioritization of suspicious code for locating program faults. To validate our method, case studies were performed on six sets of programs: the Siemens suite, the Unix suite, space, grep, gzip, and make. Data collected from the studies support the above claim and also suggest that Heuristics III(a), (b), and (c) of our method can effectively reduce the effort spent on fault localization.
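
The diminishing-contribution principle is easy to sketch. The weights below decay geometrically, which is only a hypothetical scheme standing in for the heuristics actually defined in the paper; the point is that each later failed (or successful) test executing a statement adds no more than the previous one.

    # Suspiciousness with non-increasing per-test contributions (sketch).
    def suspiciousness(n_failed_exec, n_passed_exec, alpha=0.5, beta=0.5):
        fail_credit = sum(alpha**k for k in range(n_failed_exec))  # 1, a, a^2, ...
        pass_credit = sum(beta**k for k in range(n_passed_exec))
        return fail_credit - pass_credit   # failed coverage raises suspicion

    # (failed tests executing s, passed tests executing s)
    stmts = {"s1": (3, 1), "s2": (1, 5), "s3": (3, 0)}
    ranked = sorted(stmts, key=lambda s: -suspiciousness(*stmts[s]))
    print(ranked)    # ['s3', 's1', 's2'] -- examine s3 first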


International Conference on Software Testing, Verification, and Validation | 2010

Using Mutation to Automatically Suggest Fixes for Faulty Programs

Vidroha Debroy; W. Eric Wong

This paper proposes a strategy for automatically fixing faults in a program by combining the processes of mutation and fault localization. Statements that are ranked in order of their suspiciousness of containing faults can then be mutated in the same order to produce possible fixes for the faulty program. The proposed strategy is evaluated against the seven benchmark programs of the Siemens suite and the Ant program. Results indicate that the strategy is effective at automatically suggesting fixes for faults without any human intervention.
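
The overall loop is straightforward to sketch. Everything below (the mutation operators, the patch representation, the test harness) is a hypothetical stand-in for the machinery the paper builds on top of fault localization and mutation.

    # Try mutants of the most suspicious statements first; any mutant
    # that makes the whole test suite pass is reported as a candidate fix.
    def suggest_fixes(ranked_statements, mutate, run_tests):
        for stmt in ranked_statements:          # most suspicious first
            for mutant in mutate(stmt):
                if mutant != stmt and run_tests(stmt, mutant):
                    yield stmt, mutant          # plausible fix, pending review

    fixes = suggest_fixes(
        ranked_statements=["if x > 0:", "y = y - 1"],
        mutate=lambda s: [s.replace(">", ">="), s.replace("-", "+")],
        run_tests=lambda old, new: ">=" in new,  # pretend only this one passes
    )
    print(next(fixes))   # ('if x > 0:', 'if x >= 0:')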


IEEE Transactions on Software Engineering | 2016

A Survey on Software Fault Localization

W. Eric Wong; Ruizhi Gao; Yihao Li; Franz Wotawa

Software fault localization, the act of identifying the locations of faults in a program, is widely recognized to be one of the most tedious, time-consuming, and expensive, yet equally critical, activities in program debugging. Due to the increasing scale and complexity of software today, manually locating faults when failures occur is rapidly becoming infeasible, and consequently, there is a strong demand for techniques that can guide software developers to the locations of faults in a program with minimal human intervention. This demand in turn has fueled the proposal and development of a broad spectrum of fault localization techniques, each of which aims to streamline the fault localization process and make it more effective by attacking the problem in a unique way. In this article, we catalog and provide a comprehensive overview of such techniques and discuss key issues and concerns that are pertinent to software fault localization as a whole.


Software Testing, Verification & Reliability | 1994

An empirical comparison of data flow and mutation-based test adequacy criteria

Aditya P. Mathur; W. Eric Wong

Evaluation of the adequacy of a test set consisting of one or more test cases is a problem often encountered in software testing environments. Two test adequacy criteria are considered, namely the data flow based all-uses criterion and a mutation based criterion. An empirical study was conducted to compare the 'difficulty' of satisfying the two criteria and their costs. Similar studies conducted in the past are discussed in the light of this study. A discussion is also presented of how and why the results of this study, when viewed in conjunction with the results of earlier comparisons of testing methods, are useful to a software test team.
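
The two adequacy measurements being compared reduce to simple ratios. Here is a minimal sketch, assuming the def-use pairs, mutants, and equivalence information are already available (all identifiers below are illustrative):

    # All-uses adequacy: fraction of def-use associations exercised.
    def all_uses_adequacy(covered_pairs, all_pairs):
        return len(covered_pairs & all_pairs) / len(all_pairs)

    # Mutation adequacy: fraction of nonequivalent mutants killed.
    def mutation_adequacy(killed, mutants, equivalent):
        nonequivalent = mutants - equivalent
        return len(killed & nonequivalent) / len(nonequivalent)

    pairs = {("d1", "u1"), ("d1", "u2"), ("d2", "u3")}
    print(all_uses_adequacy({("d1", "u1"), ("d2", "u3")}, pairs))  # ~0.67
    print(mutation_adequacy({"m1"}, {"m1", "m2", "m3"}, {"m3"}))   # 0.5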


Journal of Systems and Software | 2000

Quantifying the closeness between program components and features

W. Eric Wong; Swapna S. Gokhale; Joseph Robert Horgan

One of the most important steps towards effective software maintenance of a large, complicated system is to understand how program features are spread over the entire system and how they interact with the program components. However, we must first be able to represent an abstract feature in terms of some concrete program components. In this paper, we use an execution slice-based technique to identify the basic blocks which are used to implement a program feature. Three metrics are then defined, based on this identification, to determine quantitatively the disparity between a program component and a feature, the concentration of a feature in a program component, and the dedication of a program component to a feature. The computation of these metrics is automated by incorporating them in a tool (χSuds), which makes the use of our metrics immediately applicable in real-life contexts. We demonstrate the effectiveness of our technique by experimenting with a reliability and performance evaluator. Results of our study suggest that these metrics can provide an indication of the closeness between a feature and a program component, which is very useful for software programmers and maintainers to better understand the system at hand.
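
In the same spirit as the three metrics, set-based ratios over basic blocks give the flavor of the idea; the paper's exact definitions are more refined, so treat this sketch as an approximation.

    # feature_blocks: blocks in the feature's execution slice;
    # component_blocks: blocks belonging to a program component.
    def concentration(feature_blocks, component_blocks):
        """How much of the feature lives inside this component."""
        return len(feature_blocks & component_blocks) / len(feature_blocks)

    def dedication(feature_blocks, component_blocks):
        """How much of the component serves this feature."""
        return len(feature_blocks & component_blocks) / len(component_blocks)

    def disparity(feature_blocks, component_blocks):
        """Near 1 when the feature and component barely overlap."""
        union = feature_blocks | component_blocks
        return 1 - len(feature_blocks & component_blocks) / len(union)

    feature, component = {"b1", "b2", "b3"}, {"b3", "b4"}
    print(concentration(feature, component),  # 1/3 of the feature is here
          dedication(feature, component),     # 1/2 of the component serves it
          disparity(feature, component))      # 0.75 -- mostly disjoint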


Performance Evaluation | 2004

An analytical approach to architecture-based software performance and reliability prediction

Swapna S. Gokhale; W. Eric Wong; Joseph Robert Horgan; Kishor S. Trivedi

Conventional approaches to analyzing the behavior of software applications are black box based; that is, the software application is treated as a whole and only its interactions with the outside world are modeled. Black box approaches ignore information about the internal structure of the application and the behavior of its individual parts, and hence they are inadequate to model the behavior of a realistic software application, which is likely to be made up of several interacting parts. Architecture-based analysis, which seeks to assess the behavior of a software application taking into consideration the behavior of its parts and the interactions among those parts, is thus essential. Most of the research in the area of architecture-based analysis has been devoted to developing analytical models, with very little, if any, effort devoted to how these models might be applied to real software applications. In order to apply these models to software applications, methods must be developed to extract the parameters of the analytical models from information collected during the execution of the application. In this paper, we present an experimental approach to extract the parameters of architecture-based models from code coverage measurements obtained during the execution of the application. To facilitate this, we use a coverage analysis tool called automatic test analyzer in C (ATAC), which is a part of the Telcordia Software Visualization and Analysis Toolsuite (TSVAT) developed at Telcordia Technologies. We demonstrate the approach by predicting the performance and reliability of an application called Symbolic Hierarchical Automated Reliability Predictor (SHARPE), which has been widely used to solve stochastic models of reliability, performance, and performability.
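
To make the parameter-extraction idea concrete, here is a hedged sketch of the kind of Cheung-style absorbing Markov model this line of work builds on. The transition probabilities and per-component reliabilities are made-up numbers standing in for values that would be extracted from ATAC-style coverage and trace data; the paper's own models are more involved.

    import numpy as np

    # Transition probabilities between components, estimated from traces;
    # missing row mass is the probability of terminating from that component.
    Q = np.array([[0.0, 0.7, 0.3],
                  [0.0, 0.0, 0.9],   # terminates from c1 with prob 0.1
                  [0.2, 0.0, 0.0]])  # terminates from c2 with prob 0.8
    R = np.array([0.999, 0.995, 0.99])  # per-component reliabilities

    # Expected visits to each component before termination, starting at c0:
    visits = np.linalg.solve(np.eye(3) - Q.T, np.eye(3)[0])
    reliability = float(np.prod(R ** visits))
    print(visits, f"predicted system reliability ~ {reliability:.4f}")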


IEEE Transactions on Reliability | 2014

The DStar Method for Effective Software Fault Localization

W. Eric Wong; Vidroha Debroy; Ruizhi Gao; Yihao Li

Effective debugging is crucial to producing reliable software. Manual debugging is becoming prohibitively expensive, especially due to the growing size and complexity of programs. Given that fault localization is one of the most expensive activities in program debugging, there has been a great demand for fault localization techniques that can help guide programmers to the locations of faults. In this paper, a technique named DStar (D*) is proposed which can suggest suspicious locations for fault localization automatically without requiring any prior information on program structure or semantics. D* is evaluated across 24 programs, and is compared to 38 different fault localization techniques. Both single-fault and multi-fault programs are used. Results indicate that D* is more effective at locating faults than all the other techniques it is compared to. An empirical evaluation is also conducted to illustrate how the effectiveness of D* increases as the exponent * grows, and then levels off when the exponent * exceeds a critical value. Discussions are presented to support such observations.
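
For context, the D* suspiciousness score is usually presented in the literature along the following lines; take the formula here as that common formulation rather than a verbatim transcription of the paper.

    # susp(s) = ncf**star / (ncs + (F - ncf)), where ncf/ncs count the
    # failed/passed tests covering statement s and F is the total number
    # of failed tests.
    def dstar(ncf, ncs, total_failed, star=2.0):
        denom = ncs + (total_failed - ncf)   # passed coverage + missed failures
        return float("inf") if denom == 0 else ncf**star / denom

    # Covered by every failed test and few passed tests -> highly suspicious:
    print(dstar(ncf=4, ncs=1, total_failed=4))          # 16.0
    print(dstar(ncf=1, ncs=6, total_failed=4, star=3))  # ~0.11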


Journal of Systems and Software | 1999

Test set size minimization and fault detection effectiveness: a case study in a space application

W. Eric Wong; Joseph Robert Horgan; Aditya P. Mathur; Alberto Pasquini

An important question in software testing is whether it is reasonable to apply coverage-based criteria as a filter to reduce the size of a test set. An empirical study was conducted using a test set minimization technique to explore the effect of reducing the size of a test set, while keeping block coverage constant, on the fault detection strength of the resulting minimized test set. Two types of test sets were examined. For those generated to a fixed size, no test case screening was conducted during generation, whereas for those generated to a fixed coverage, each subsequent test case had to improve the overall coverage in order to be included. The study reveals that regardless of how a test set is generated, with or without test case screening, block-minimized test sets have a size/effectiveness advantage over the original non-minimized test sets: a significant reduction in test set size with almost the same fault detection effectiveness.
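
The contrast between the two generation modes can be sketched with a hypothetical harness: a "fixed size" suite accepts every generated test, whereas a "fixed coverage" suite keeps a test only if it improves overall block coverage (the screening step below). Post-hoc minimization, as sketched earlier, would then be applied to either kind of suite.

    def generate_fixed_coverage(candidates, blocks_of):
        """Keep a candidate test only when it adds at least one new block."""
        kept, covered = [], set()
        for test in candidates:
            new_blocks = blocks_of(test) - covered
            if new_blocks:                    # the screening step
                kept.append(test)
                covered |= new_blocks
        return kept, covered

    blocks = {"t1": {"b1", "b2"}, "t2": {"b2"}, "t3": {"b3"}}
    kept, covered = generate_fixed_coverage(["t1", "t2", "t3"], blocks.get)
    print(kept)   # ['t1', 't3'] -- t2 adds no new block coverage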

Collaboration


Dive into W. Eric Wong's collaborations.

Top Co-Authors

Vidroha Debroy
University of Texas at Austin

Ruizhi Gao
University of Texas at Dallas

Kendra M. L. Cooper
University of Texas at Dallas

Yu Qi
University of Texas at Dallas

T. H. Tse
University of Hong Kong

João W. Cangussu
University of Texas at Dallas

Fevzi Belli
University of Paderborn