Chu-Ti Lin
National Chiayi University
Publications
Featured research published by Chu-Ti Lin.
IEEE Transactions on Reliability | 2006
Chin-Yu Huang; Chu-Ti Lin
Over the past 30 years, many software reliability growth models (SRGMs) have been proposed. When these mathematical models are developed, it is often assumed that detected faults are immediately corrected. This assumption may not be realistic in practice because the time to remove a detected fault depends on the complexity of the fault, the skill and experience of the personnel, the size of the debugging team, the technique(s) being used, and so on. During software testing, practical experience shows that mutually independent faults can be directly detected and removed, but mutually dependent faults can be removed only if the leading faults have been removed. That is, dependent faults may not be immediately removed, and the fault removal process lags behind the fault detection process. In this paper, we first give a review of fault detection and correction processes in software reliability modeling. We then illustrate, with several examples, the fact that detected faults cannot always be immediately corrected. We also discuss software fault dependency in detail, and study how to incorporate both fault dependency and debugging time lag into software reliability modeling. The proposed models are fairly general and cover a variety of known SRGMs under different conditions. Numerical examples are presented, and the results show that the proposed framework, which incorporates both fault dependency and debugging time lag, has better prediction capability. In addition, an optimal software release policy for the proposed models, based on a cost-reliability criterion, is proposed. The main purpose is to minimize the cost of software development when a desired reliability objective is given.
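The debugging-time-lag idea above can be sketched minimally as follows, pairing the classic Goel-Okumoto detection curve with a correction curve that trails it by a constant lag. The constant-lag form, the function names, and all parameter values are illustrative assumptions; the paper's models use more general, time-dependent delay functions.

```python
import math

def detected(a, b, t):
    """Goel-Okumoto mean value function: expected faults detected by time t,
    where a is the total fault content and b the per-fault detection rate."""
    return a * (1.0 - math.exp(-b * t))

def corrected(a, b, t, lag):
    """Expected faults corrected by time t when correction trails detection
    by a constant debugging time lag (a deliberately simple special case)."""
    return detected(a, b, t - lag) if t > lag else 0.0
```

With a positive lag, the correction curve is the detection curve shifted right, so the number of corrected faults always trails the number of detected faults.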
Journal of Systems and Software | 2008
Chu-Ti Lin; Chin-Yu Huang
Software testing is necessary to achieve highly reliable software systems. If the project manager can conduct well-planned testing activities, the consumption of the related testing resources will be cost-effective. Over the past 30 years, many software reliability growth models (SRGMs) have been proposed to estimate the reliability growth of software, and they are mostly applicable to the late stages of testing in software development. Thus far, it appears that most SRGMs do not take possible changes in the testing-effort consumption rate into consideration. However, in some cases, the policies of testing-resource allocation may be changed or adjusted. Thus, in this paper, we incorporate the important concept of multiple change-points into Weibull-type testing-effort functions. The applicability and performance of the proposed models are demonstrated through two real data sets. Experimental results show that the proposed models give a fairly accurate prediction capability. Finally, based on the proposed SRGM, constructive rules are developed for determining optimal software release times.
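A Weibull-type testing-effort function with one change-point can be sketched as below. The piecewise construction (parameters switch at the change-point, with an offset keeping the cumulative curve continuous) and all names and parameter values are illustrative assumptions; the paper's multiple-change-point formulation may differ.

```python
import math

def weibull_effort(t, N, beta, m):
    """Cumulative Weibull-type testing effort consumed by time t:
    N is the total effort, beta a scale parameter, m a shape parameter."""
    return N * (1.0 - math.exp(-beta * t ** m))

def effort_with_change_point(t, tau, p1, p2):
    """Cumulative effort when the consumption parameters switch from p1 to
    p2 at the change-point tau; the curve is shifted to stay continuous."""
    if t <= tau:
        return weibull_effort(t, *p1)
    return (weibull_effort(tau, *p1)
            + weibull_effort(t, *p2) - weibull_effort(tau, *p2))
```

Multiple change-points would repeat the same shift-and-switch step once per change-point.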
IEEE Transactions on Computers | 2010
Chin-Yu Huang; Chu-Ti Lin
This paper is an attempt to relax and improve the assumptions regarding software reliability modeling. To approximate reality much more closely, we take into account the concepts of the testing compression factor and the quantified ratio of faults to failures in the modeling. Numerical examples based on real failure data show that the proposed framework has a fairly good prediction capability. Furthermore, we address the optimal software release time problem and conduct a detailed sensitivity analysis through the proposed model.
international conference on genetic and evolutionary computing | 2012
Chu-Ti Lin; Kai-Wei Tang; Cheng-Ding Chen; Gregory M. Kapfhammer
Test suite reduction techniques decrease the cost of software testing by removing redundant test cases from the test suite, while still producing a reduced set of tests that yields the same level of code coverage as the original suite. Most existing approaches to reduction aim to decrease the size of the test suite. Yet, the differences in the execution costs of the tests are often significant, and it may be costly to use a test suite consisting of a few long-running test cases. Thus, this paper proposes an algorithm, based on the concept of test irreplaceability, that creates a reduced test suite with a decreased execution cost. Leveraging widely used benchmark programs, the empirical study shows that, in comparison to existing techniques, the presented algorithm is the most effective at reducing the cost of running a test suite.
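One plausible reading of cost-aware, irreplaceability-based greedy reduction is sketched below. The scoring rule (requirements covered by few other tests count more, and the score is divided by execution cost) is an assumption made for illustration; the paper's exact metric may differ.

```python
def reduce_suite(coverage, cost):
    """Greedy cost-aware reduction: repeatedly select the test with the
    highest ratio of irreplaceability to execution cost until every
    requirement covered by the original suite is covered.

    coverage: dict mapping test name -> set of covered requirements
    cost:     dict mapping test name -> execution cost
    """
    uncovered = set().union(*coverage.values()) if coverage else set()
    selected = []
    while uncovered:
        def score(t):
            reqs = coverage[t] & uncovered
            if not reqs:
                return 0.0
            # Irreplaceability proxy: a requirement reachable by few tests
            # contributes more to the test that covers it.
            irr = sum(1.0 / sum(r in coverage[u] for u in coverage)
                      for r in reqs)
            return irr / cost[t]
        best = max(coverage, key=score)
        if score(best) == 0.0:
            break
        selected.append(best)
        uncovered -= coverage[best]
    return selected
```

In the example below, the expensive test that covers everything is skipped in favor of two cheap tests that jointly cover the same requirements.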
IEEE Transactions on Reliability | 2009
Chu-Ti Lin; Chin-Yu Huang
Research in the field of software reliability, dedicated to the analysis of software failure processes, is quite diverse. In recent years, several attractive rate-based simulation approaches have been proposed. Thus far, it appears that most existing simulation approaches do not take into account the number of available debuggers (or developers). In practice, the number of debuggers will be carefully controlled. If all debuggers are busy, they may not address newly detected faults for some time. Furthermore, practical experience shows that fault-removal time is not negligible, and the number of removed faults generally lags behind the total number of detected faults, because fault detection activities continue as faults are being removed. Given these facts, we apply queueing theory to describe and explain possible debugging behavior during software development. Two simulation procedures are developed, based on the G/G/∞ and G/G/m queueing models, respectively. The proposed methods are illustrated using real software failure data. The analysis conducted through the proposed framework can help project managers assess the appropriate staffing level for the debugging team from the standpoint of performance and cost-effectiveness.
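The m-debugger correction queue can be sketched as a small discrete-event simulation: a detected fault waits until one of m debuggers is free, then occupies that debugger for its correction time. The FIFO discipline, the input format, and the function name are illustrative assumptions.

```python
import heapq

def simulate_corrections(detect_times, service_times, m):
    """Simulate an m-debugger (G/G/m-style) correction queue. detect_times
    must be in nondecreasing order; service_times[i] is the correction time
    of the i-th detected fault. Returns the correction-completion times."""
    free_at = [0.0] * m          # next time each debugger becomes available
    heapq.heapify(free_at)
    done = []
    for detect, service in zip(detect_times, service_times):
        # A fault starts correction once detected AND a debugger is free.
        start = max(detect, heapq.heappop(free_at))
        finish = start + service
        heapq.heappush(free_at, finish)
        done.append(finish)
    return done
```

With m large enough that no fault ever waits, the same code behaves like a G/G/∞ model: every completion time is simply detection time plus correction time.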
computer software and applications conference | 2004
Chin-Yu Huang; Chu-Ti Lin; Sy-Yen Kuo; Michael R. Lyu; Chuan Ching Sue
Software reliability is defined as the probability of failure-free software operation for a specified period of time in a specified environment. Over the past 30 years, many software reliability growth models (SRGMs) have been proposed, and most SRGMs assume that detected faults are immediately corrected. Actually, this assumption may not be realistic in practice. In this paper, we first give a review of fault detection and correction processes in software reliability modeling. Furthermore, we show how several existing SRGMs based on NHPP models can be derived by applying a time-dependent delay function. On the other hand, it is generally observed that mutually independent software faults lie on different program paths. Sometimes mutually dependent faults can be removed if and only if the leading faults have been removed. Therefore, here we incorporate the ideas of fault dependency and a time-dependent delay function into software reliability growth modeling. Some new SRGMs are proposed, and several numerical examples are included to illustrate the results. Experimental results show that the proposed framework, which incorporates both fault dependency and a time-dependent delay function into SRGMs, has a fairly accurate prediction capability.
Information & Software Technology | 2014
Chu-Ti Lin; Kai Wei Tang; Gregory M. Kapfhammer
Context: In software development and maintenance, a software system may frequently be updated to meet rapidly changing user requirements. New test cases will be designed to ensure the correctness of new or modified functions, thus gradually increasing the test suite's size. Test suite reduction techniques aim to decrease the cost of regression testing by removing redundant test cases from the test suite, thereby obtaining a representative set of test cases that still yields a high level of code coverage. Objective: Most of the existing reduction algorithms focus on decreasing the test suite's size. Yet, the differences in execution cost among test cases are usually significant, and it may take a lot of execution time to run a test suite consisting of a few long-running test cases. This paper presents and empirically evaluates cost-aware algorithms that can produce representative sets with lower execution costs. Method: We first use a cost-aware test case metric, called Irreplaceability, and its enhanced version, called EIrreplaceability, to evaluate the possibility that each test case can be replaced by others during test suite reduction. Furthermore, we construct a cost-aware framework that incorporates the concept of test irreplaceability into some well-known test suite reduction algorithms. Results: The effectiveness of the cost-aware framework is evaluated on the subject programs and test suites collected from the Software-artifact Infrastructure Repository, a frequently chosen benchmark for experimentally evaluating test suite reduction methods. The empirical results reveal that the presented algorithms produce representative sets that normally incur a low cost to yield a high level of test coverage. Conclusion: The presented techniques indeed enhance the capability of the traditional reduction algorithms to reduce the execution cost of a test suite. In particular, for the additional Greedy algorithm, the presented techniques decrease the costs of the representative sets by 8.10% to 46.57%.
pacific rim international symposium on dependable computing | 2005
Chin-Yu Huang; Chu-Ti Lin
In this paper, we investigate some techniques for reliability prediction and assessment of fielded software. We first review how several existing software reliability growth models based on non-homogeneous Poisson processes (NHPPs) can be readily derived from a unified theory for NHPP models. Furthermore, based on the unified theory, we can incorporate the concept of multiple change-points into software reliability modeling. Some models are proposed and discussed under both ideal and imperfect debugging conditions. A numerical example using real software failure data is presented in detail, and the results show that the proposed models provide fairly good capability to predict software operational reliability.
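A change-point in the fault detection rate can be sketched for the exponential (Goel-Okumoto) NHPP case, keeping the mean value function continuous at the change-point. This specific functional form is a common textbook construction, used here for illustration rather than as the paper's unified-theory derivation.

```python
import math

def mvf_change_point(t, a, b1, b2, tau):
    """NHPP mean value function with one change-point: the fault detection
    rate changes from b1 to b2 at time tau, and the accumulated exponent
    carries over so the curve stays continuous at tau."""
    if t <= tau:
        return a * (1.0 - math.exp(-b1 * t))
    return a * (1.0 - math.exp(-b1 * tau - b2 * (t - tau)))
```

When b1 == b2, or when the change-point lies beyond the horizon of interest, the model collapses back to the plain Goel-Okumoto form.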
international conference on engineering of complex computer systems | 2013
Chu-Ti Lin; Cheng-Ding Chen; Chang-Shi Tsai; Gregory M. Kapfhammer
Test case prioritization techniques schedule test cases in an order based on some specific criteria, so that the tests with better fault detection capability are executed earlier in the regression test suite. Many existing test case prioritization approaches are code-based, and they treat the testing of each software version as an independent process. Actually, the test results of preceding software versions may be useful for scheduling the test cases of later software versions. Some researchers have proposed history-based approaches to address this issue, but they assumed that the immediately preceding test results provide the same reference value for prioritizing the test cases of the successive software version across the entire lifetime of the software development process. Thus, this paper describes ongoing research that studies whether the reference value of the immediately preceding test results is version-aware, and proposes a test case prioritization approach based on our observations. The experimental results indicate that, in comparison to existing approaches, the presented one can schedule test cases more effectively.
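Blending historical and code-based signals with a version-aware weight might be sketched as follows. The linear blend, the score dictionaries, and the `alpha` parameter are all hypothetical illustrations rather than the paper's actual approach.

```python
def prioritize(tests, history_score, coverage_score, alpha):
    """Order tests by a priority that blends each test's fault-detection
    record from the preceding version (history_score) with a code-based
    signal for the current version (coverage_score). alpha is a
    hypothetical version-aware weight: instead of treating past results as
    equally informative across the whole lifetime, it can be tuned for
    each version pair (e.g., lowered after a large rewrite)."""
    def priority(t):
        return alpha * history_score[t] + (1.0 - alpha) * coverage_score[t]
    return sorted(tests, key=priority, reverse=True)
```

Raising `alpha` favors tests that exposed faults in the previous version; lowering it favors tests selected by the current version's coverage signal.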
Mathematical and Computer Modelling | 2011
Chu-Ti Lin
During a debugging operation, there is a high probability that an additional fault will be introduced into the program when removing an existing fault. Thus, perfect debugging is an ideal but impractical assumption when modeling software reliability. If the debugging of a software system is imperfect, more faults may be introduced and detected. In such cases, it may be necessary to add more staff to the debugging team to share the load and ensure the quality of the software. To investigate the effects of imperfect debugging, we simulate the fault detection and correction processes with a single-queue, multichannel queueing model with feedback. In this paper, two debugging procedures are discussed. The first, called Procedure_perfect_debugging, is based on a single-queue multichannel queueing model and deals with the case of perfect debugging. Then, we relax the restriction of perfect debugging and propose Procedure_imperfect_debugging, based on a queueing model with feedback, to address the case of imperfect debugging. We demonstrate the implementation of the procedures via two case studies in which we quantify the effects of imperfect debugging in terms of throughput, time consumption, and debugger utilization. Finally, based on the measurement results, we determine the most suitable staffing level (i.e., the number of debuggers required) for a debugging system under different degrees of imperfect debugging.
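Imperfect debugging as a queue with feedback can be sketched by extending an m-debugger simulation so that each correction may re-inject a new fault into the queue. The fixed service time, the introduction probability, and all names are illustrative assumptions, not the paper's procedures.

```python
import heapq
import random

def simulate_imperfect_debugging(detect_times, service, m, p_intro, seed=0):
    """Single-queue, m-debugger simulation with feedback: each correction
    takes `service` time and, with probability p_intro, introduces a new
    fault that re-enters the queue at the completion time. Returns the
    total number of corrections and the time the queue empties.
    (Illustrative only: fixed service time, faults served in arrival order.)"""
    rng = random.Random(seed)
    queue = list(detect_times)       # arrival times of faults awaiting correction
    heapq.heapify(queue)
    free_at = [0.0] * m              # next time each debugger becomes available
    heapq.heapify(free_at)
    corrections, finish = 0, 0.0
    while queue:
        arrival = heapq.heappop(queue)
        start = max(arrival, heapq.heappop(free_at))
        finish = start + service
        heapq.heappush(free_at, finish)
        corrections += 1
        if rng.random() < p_intro:   # imperfect debugging: a new fault appears
            heapq.heappush(queue, finish)
    return corrections, finish
```

With p_intro = 0 this reduces to perfect debugging; as p_intro grows, the extra feedback traffic inflates both the correction count and the time to drain the queue, which is the effect the staffing-level analysis measures.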