Nachimuthu Karunanithi
Colorado State University
Publication
Featured research published by Nachimuthu Karunanithi.
International Conference on Software Engineering | 1999
Siddhartha R. Dalal; Ashish Jain; Nachimuthu Karunanithi; J. M. Leaton; Christopher M. Lott; Gardner C. Patton; Bruce M. Horowitz
Model-based testing is a new and evolving technique for generating a suite of test cases from requirements. Testers using this approach concentrate on a data model and generation infrastructure instead of hand-crafting individual tests. Several relatively small studies have demonstrated how combinatorial test generation techniques allow testers to achieve broad coverage of the input domain with a small number of tests. We have conducted several relatively large projects in which we applied these techniques to systems with millions of lines of code. Given the complexity of testing, the model-based testing approach was used in conjunction with test automation harnesses. Since no large empirical study has been conducted to measure the efficacy of this new approach, we report on our experience with developing tools and methods in support of model-based testing. The four case studies presented here offer details and results of applying combinatorial test-generation techniques on a large scale to diverse applications. Based on the four projects, we offer our insights into what works in practice and our thoughts about obstacles to transferring this technology into testing organizations.
IEEE Software | 1992
Nachimuthu Karunanithi; Darrell Whitley; Yashwant K. Malaiya
It is shown that neural network reliability growth models have a significant advantage over analytic models in that they require only failure history as input, not assumptions about either the development environment or external parameters. Using the failure history, the neural-network model automatically develops its own internal model of the failure process and predicts future failures. Because it adjusts model complexity to match the complexity of the failure history, it can be more accurate than some commonly used analytic models. Results with actual testing and debugging data suggest that neural-network models are better at endpoint predictions than analytic models.
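As a rough illustration of the idea only (this is not the authors' implementation; the failure history, network size, and training schedule below are invented), a tiny feed-forward network can be fit directly to a cumulative-failure history and then queried for a next-step prediction:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Illustrative failure history: cumulative faults observed at the end of
# each week of testing (made-up numbers, not from the paper's data sets).
history = [5, 12, 18, 22, 25, 27, 28]

# Scale inputs (week index) and targets (cumulative faults) to [0, 1];
# max_faults is a rough a-priori bound on the total fault count.
n = len(history)
max_faults = 1.2 * history[-1]
xs = [(i + 1) / n for i in range(n)]
ys = [y / max_faults for y in history]

random.seed(0)
H = 3                                   # hidden sigmoid units
w1 = [random.uniform(-1, 1) for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0
lr = 0.5

def forward(x):
    h = [sigmoid(w1[j] * x + b1[j]) for j in range(H)]
    return h, sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)

# Plain per-sample gradient descent on squared error.
for _ in range(20000):
    for x, y in zip(xs, ys):
        h, out = forward(x)
        d_out = (out - y) * out * (1 - out)
        for j in range(H):
            d_h = d_out * w2[j] * h[j] * (1 - h[j])
            w2[j] -= lr * d_out * h[j]
            w1[j] -= lr * d_h * x
            b1[j] -= lr * d_h
        b2 -= lr * d_out

# Next-step prediction: faults expected after one more week of testing.
_, pred = forward((n + 1) / n)
print(round(pred * max_faults, 1))
```

Note that the model never sees an analytic growth curve: the shape of the fitted function comes entirely from the failure history itself.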
IEEE Transactions on Software Engineering | 1992
Nachimuthu Karunanithi; Darrell Whitley; Yashwant K. Malaiya
The usefulness of connectionist models for software reliability growth prediction is illustrated. The applicability of the connectionist approach is explored using various network models, training regimes, and data representation methods. An empirical comparison is made between this approach and five well-known software reliability growth models using actual data sets from several different software projects. The results suggest that connectionist models adapt well across different data sets and exhibit better predictive accuracy. The analysis shows that the connectionist approach is capable of developing models of varying complexity.
IEEE Transactions on Reliability | 1992
Yashwant K. Malaiya; Nachimuthu Karunanithi; Pradeep Verma
A two-component predictability measure that characterizes the long-term predictive capability of a model is presented. One component, average error, measures how well a model predicts throughout the testing phase. The other component, average bias, measures the general tendency to overestimate or underestimate the number of faults. Data sets for both large and small projects from diverse sources with various initial fault density ranges have been analyzed. The results show that: (i) the logarithmic model seems to predict well in most data sets; (ii) the inverse polynomial model can be used as the next alternative; and (iii) the delayed S-shaped model, which fit well in some data sets, generally performed poorly. The statistical analysis shows that these models have appreciably different predictive capabilities.
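The two components can be sketched in a few lines (the end-of-test fault count and the running predictions below are invented, and the paper's exact normalization may differ):

```python
# Hypothetical end-of-test fault count, and a model's running predictions
# of it made at successive points in the testing phase.
actual_total = 100
predictions = [70, 85, 92, 97, 101, 103]

# Average error: how far off the model is, on average, throughout testing
# (normalized by the actual total so it is comparable across projects).
avg_error = sum(abs(p - actual_total) for p in predictions) / (
    len(predictions) * actual_total)

# Average bias: the signed counterpart; a negative value means the model
# tends to underestimate the number of faults, positive means overestimate.
avg_bias = sum(p - actual_total for p in predictions) / (
    len(predictions) * actual_total)

print(f"average error = {avg_error:.3f}, average bias = {avg_bias:+.3f}")
```

The pair is more informative than either number alone: two models with the same average error can differ sharply in bias.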
International Symposium on Software Reliability Engineering | 1991
Nachimuthu Karunanithi; Yashwant K. Malaiya; Darrell Whitley
Software reliability growth models have achieved considerable importance in estimating the reliability of software products. The authors explore the use of feed-forward neural networks as a model for software reliability growth prediction. To empirically evaluate the predictive capability of this new approach, data sets from different software projects are used. The neural network approach exhibits consistent behavior in prediction, and its predictive performance is comparable to that of parametric models.
International Symposium on Software Reliability Engineering | 1998
Siddhartha R. Dalal; Ashish Jain; Nachimuthu Karunanithi; J. M. Leaton; Christopher M. Lott
The paradigm of model-based testing shifts the focus of testing from writing individual test cases to developing a model from which a test suite can be generated automatically. We report on our experience with model-based testing of a highly programmable system that implements intelligent telephony services in the US telephone network. Our approach used automatic test case generation technology to develop sets of self-checking test cases based on a machine-readable specification of the messages in the protocol under test. The AETG(TM) software system selected a minimal number of test data tuples that covered pairwise combinations of tuple elements. We found the combinatorial approach of covering pairwise interactions between input fields to be highly effective. Our tests revealed failures that would have been difficult to detect using traditional test designs. However, transferring this technology to the testing organization was difficult. Automatic generation of cases represents a significant departure from conventional testing practice due to the large number of tests and the amount of software development involved.
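A minimal sketch of greedy pairwise generation in the spirit of this approach (the field names and values are invented, and AETG itself is a proprietary system with its own heuristics):

```python
from itertools import combinations, product

# Hypothetical input model for a telephony message: three fields, each
# with a handful of valid values.
fields = {
    "call_type":  ["local", "long_distance", "international"],
    "billing":    ["prepaid", "postpaid"],
    "forwarding": ["on", "off", "busy_only"],
}
names = list(fields)

def pairs_of(test):
    """All (field, value) pairs covered by one complete test tuple."""
    return set(combinations(list(zip(names, test)), 2))

# Every field-value pair that must be covered at least once.
uncovered = set()
for t in product(*fields.values()):
    uncovered |= pairs_of(t)

# Greedy construction: repeatedly pick the candidate tuple covering the
# most still-uncovered pairs, until every pair appears in some test.
suite = []
while uncovered:
    best = max(product(*fields.values()),
               key=lambda t: len(pairs_of(t) & uncovered))
    suite.append(best)
    uncovered -= pairs_of(best)

print(len(suite), "tests instead of",
      len(list(product(*fields.values()))), "exhaustive combinations")
```

Even on this toy model the pairwise suite is smaller than the full cross product, and the gap widens rapidly as fields and values are added.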
International Symposium on Software Reliability Engineering | 1992
Nachimuthu Karunanithi; Yashwant K. Malaiya
Recently, neural networks have been applied to software reliability growth prediction. Although the predictive capability of neural network models is better than that of some well-known analytic models, the scaling problem has not yet been completely addressed. With the present neural network models, it is necessary to scale the cumulative faults to a 0.0 to 1.0 range, so the user has to estimate in advance a maximum value for the total number of faults to be detected at the end of the test phase. In practice, such an estimate may not be accurate, and use of an inaccurate value for scaling the cumulative faults can severely affect the predictive capability of neural network models. This paper presents a solution to the scaling problem that uses a clipped linear unit in the output layer. With a clipped linear output unit, the network can predict positive values in an unbounded range. The authors demonstrate the applicability of the proposed network structure with three data sets and compare its predictive accuracy with that of earlier models. Expressions for the failure rate process represented by the models of the proposed network structure are also derived.
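The contrast between a bounded sigmoid output and a clipped linear output can be shown in a few lines (a sketch of the general idea, not the paper's exact unit):

```python
import math

def sigmoid_output(x):
    # Bounded to (0, 1): cumulative fault counts must be pre-scaled by an
    # estimated maximum before a sigmoid output unit can represent them.
    return 1.0 / (1.0 + math.exp(-x))

def clipped_linear_output(x):
    # Non-negative and unbounded above: the network can emit raw fault
    # counts directly, with no a-priori maximum required.
    return max(0.0, x)

# A sigmoid unit can never express a count above the assumed maximum,
# however large its net input; a clipped linear unit can.
print(sigmoid_output(50.0))          # saturates just below 1.0
print(clipped_linear_output(50.0))   # 50.0
```

This is why an inaccurate guess at the total fault count hurts the sigmoid-output models: the guess is baked into the scaling, whereas the clipped linear unit removes the guess entirely.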
Computer Software and Applications Conference | 1990
Yashwant K. Malaiya; Nachimuthu Karunanithi; Pradeep Verma
A two-component predictability measure is presented that characterizes the long-term predictability of a software reliability growth model. The first component, average predictability, measures how well a model predicts throughout the testing phase. The second component, average bias, is a measure of the general tendency to overestimate or underestimate the number of faults. Data sets for both large and small projects from diverse sources have been analyzed. The results support the observation that the logarithmic model has good predictability in most cases. However, at very low fault densities, the exponential model may be slightly better. The delayed S-shaped model, which in some cases has been shown to fit well, generally performed poorly.
International Symposium on Software Reliability Engineering | 1993
Nachimuthu Karunanithi
One of the key assumptions made in most time-domain-based software reliability growth models is that the complete code for the system is available before testing starts and that the code remains frozen during testing. However, this assumption is often violated in large software projects, so the existing models may not provide an accurate description of the failure process in the presence of code churn. Dalal and McIntosh (1992) developed an extended stochastic model by incorporating continuous code churn into a standard Poisson process model and observed an improvement in the model's estimation accuracy. This paper demonstrates the applicability of the neural network approach to the problem of developing an extended software reliability growth model in the face of continuous code churn. In this preliminary study, two neural network models, one with and one without code churn information, are compared for goodness of fit and predictive quality using a data set from a large telecommunication system. The preliminary results suggest that the neural network model that incorporates code churn information provides more accurate predictions than the network without it.
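The modeling difference amounts to what the network is shown as input. A minimal sketch (the weekly records and field names below are invented for illustration):

```python
# Each weekly record holds the cumulative faults observed and the lines
# of code changed that week (illustrative numbers only).
weeks = [
    {"faults": 5,  "churn": 1200},
    {"faults": 12, "churn": 800},
    {"faults": 18, "churn": 2500},   # a burst of code churn mid-test
    {"faults": 22, "churn": 300},
    {"faults": 25, "churn": 0},      # code effectively frozen from here
]

# Without churn: the input is just elapsed test time, as in the earlier
# neural network reliability models.
without_churn = [((i + 1,), w["faults"]) for i, w in enumerate(weeks)]

# With churn: the week's churn becomes a second input component, letting
# the network associate fault discovery with recent code change.
with_churn = [((i + 1, w["churn"]), w["faults"]) for i, w in enumerate(weeks)]

print(without_churn[2])   # ((3,), 18)
print(with_churn[2])      # ((3, 2500), 18)
```

The network architecture itself is unchanged; only the input vector grows, which is what lets the same connectionist machinery absorb the extra information.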
Journal of Computing in Civil Engineering | 1994
Nachimuthu Karunanithi; William J. Grenney; Darrell Whitley; Ken Bovee