Publication


Featured research published by Siddhartha R. Dalal.


IEEE Transactions on Software Engineering | 1997

The AETG system: an approach to testing based on combinatorial design

David M. Cohen; Siddhartha R. Dalal; Michael L. Fredman; Gardner C. Patton

This paper describes a new approach to testing that uses combinatorial designs to generate tests that cover the pairwise, triple, or n-way combinations of a system's test parameters. These are the parameters that determine the system's test scenarios. Examples are system configuration parameters, user inputs and other external events. We implemented this new method in the AETG system. The AETG system uses new combinatorial algorithms to generate test sets that cover all valid n-way parameter combinations. The size of an AETG test set grows logarithmically in the number of test parameters. This allows testers to define test models with dozens of parameters. The AETG system is used in a variety of applications for unit, system, and interoperability testing. It has generated both high-level test plans and detailed test cases. In several applications, it greatly reduced the cost of test plan development.
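
The AETG algorithms themselves are not given in the abstract; purely as a rough illustration of pairwise covering, the sketch below greedily builds a test set that covers every two-way combination of parameter values. The parameter model is hypothetical, and exhaustive candidate enumeration is used only because the example is tiny; this is not the proprietary AETG algorithm.

```python
from itertools import combinations, product

def pairwise_tests(parameters):
    """Greedy pairwise test generation (illustrative sketch only): repeatedly
    pick the candidate test that covers the most not-yet-covered value pairs."""
    names = list(parameters)
    # All ((param, value), (param, value)) pairs that must appear in some test.
    uncovered = {
        ((a, va), (b, vb))
        for a, b in combinations(names, 2)
        for va, vb in product(parameters[a], parameters[b])
    }

    tests = []
    while uncovered:
        best, best_gain = None, -1
        # Enumerate all candidate tests and keep the one covering the most new pairs.
        for candidate in product(*(parameters[n] for n in names)):
            assignment = dict(zip(names, candidate))
            gain = sum(
                1
                for (a, va), (b, vb) in uncovered
                if assignment[a] == va and assignment[b] == vb
            )
            if gain > best_gain:
                best, best_gain = assignment, gain
        tests.append(best)
        uncovered = {
            ((a, va), (b, vb))
            for (a, va), (b, vb) in uncovered
            if not (best[a] == va and best[b] == vb)
        }
    return tests

# Hypothetical configuration model with three test parameters.
model = {
    "os": ["linux", "windows"],
    "browser": ["chrome", "firefox", "safari"],
    "protocol": ["http", "https"],
}
suite = pairwise_tests(model)
print(len(suite), "tests cover all pairwise combinations")
```

For a model like this, the greedy set is far smaller than the full cross product, which is the behaviour the logarithmic-growth claim above describes at scale.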


International Conference on Software Engineering | 1999

Model-based testing in practice

Siddhartha R. Dalal; Ashish Jain; Nachimuthu Karunanithi; J. M. Leaton; Christopher M. Lott; Gardner C. Patton; Bruce M. Horowitz

Model-based testing is a new and evolving technique for generating a suite of test cases from requirements. Testers using this approach concentrate on a data model and generation infrastructure instead of hand-crafting individual tests. Several relatively small studies have demonstrated how combinatorial test generation techniques allow testers to achieve broad coverage of the input domain with a small number of tests. We have conducted several relatively large projects in which we applied these techniques to systems with millions of lines of code. Given the complexity of testing, the model-based testing approach was used in conjunction with test automation harnesses. Since no large empirical study has been conducted to measure efficacy of this new approach, we report on our experience with developing tools and methods in support of model-based testing. The four case studies presented here offer details and results of applying combinatorial test-generation techniques on a large scale to diverse applications. Based on the four projects, we offer our insights into what works in practice and our thoughts about obstacles to transferring this technology into testing organizations.


IEEE Software | 1996

The combinatorial design approach to automatic test generation

David M. Cohen; Siddhartha R. Dalal; Jesse Parelius; Gardner C. Patton

The combinatorial design method substantially reduces testing costs. The authors describe an application in which the method reduced test plan development from one month to less than a week. In several experiments, the method demonstrated good code coverage and fault detection ability.


International Symposium on Software Reliability Engineering | 1994

The Automatic Efficient Test Generator (AETG) system

David M. Cohen; Siddhartha R. Dalal; A. Kajla; Gardner C. Patton

Software testing is expensive, tedious and time-consuming. Thus, the problem of making testing more efficient and mechanical, without losing its effectiveness, is very important. The Automatic Efficient Test Generator (AETG) is a new tool that mechanically generates efficient test sets from user-defined test requirements. It is based on algorithms that use ideas from statistical experimental design theory to minimize the number of tests needed for a specific level of test coverage of the input test space. The savings due to AETG are substantial when compared to exhaustive testing or other methods of testing. AETG has been used in Bellcore for screen testing, interoperability testing and for protocol conformance testing. The paper describes the current system and its constructs, and reports some preliminary results obtained during initial trials.


Technometrics | 1998

Factor-covering designs for testing software

Siddhartha R. Dalal; Colin L. Mallows

Testing is a critical component of modern software development. The problem of designing a suite of test cases is superficially similar to that of designing an experiment to estimate main effects and interactions, but there are crucial differences. Additive models are unhelpful, as are classical design criteria. We propose a new class of models and new measures of effectiveness. We compare several designs.


Journal of the American Statistical Association | 1989

Risk Analysis of the Space Shuttle: Pre-Challenger Prediction of Failure

Siddhartha R. Dalal; Edward B. Fowlkes; Bruce Hoadley

The Rogers Commission report on the space shuttle Challenger accident concluded that the accident was caused by a combustion gas leak through a joint in one of the booster rockets, which was sealed by a device called an O-ring. The commission further concluded that O-rings do not seal properly at low temperatures. In this article, data from the 23 preaccident launches of the space shuttle is used to predict O-ring performance under the Challenger launch conditions and relate it to the catastrophic failure of the shuttle. Analyses via binomial and binary logistic regression show that there is strong statistical evidence of a temperature effect on incidents of O-ring thermal distress. In addition, a probabilistic risk assessment at 31°F, the temperature at which Challenger was launched, yields at least a 13% probability of catastrophic field-joint O-ring failure. Postponement to 60°F would have reduced the probability to at least 2%. To assess uncertainty in estimates and for any future prediction ...
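
The core of this analysis can be retraced with an off-the-shelf binary logistic regression. The sketch below assumes the 23-launch data have been transcribed from the published paper into a CSV with columns temp_f and distress (names chosen here for illustration); it uses statsmodels rather than the authors' original code, and the data values are not reproduced here.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical layout of the pre-Challenger launch data: one row per launch,
# joint temperature in degrees Fahrenheit and an indicator of any field-joint
# O-ring thermal distress.  Values must be taken from the published paper.
launches = pd.read_csv("pre_challenger_launches.csv")  # columns: temp_f, distress

# Binary logistic regression of O-ring thermal distress on launch temperature.
fit = smf.logit("distress ~ temp_f", data=launches).fit()
print(fit.summary())

# Predicted probability of thermal distress at the Challenger launch
# temperature (31 F) and at a hypothetical postponement temperature (60 F).
print(fit.predict(pd.DataFrame({"temp_f": [31.0, 60.0]})))
```

Note that this gives the distress probability only; the paper's 13% figure for catastrophic field-joint failure comes from a further probabilistic risk assessment layered on top of the regression.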


IEEE Journal on Selected Areas in Communications | 1990

Some graphical aids for deciding when to stop testing software

Siddhartha R. Dalal; Colin L. Mallows

It is noted that the developers of large software systems must decide how much software should be tested before releasing it. An explicit tradeoff between the costs of testing and releasing is considered. The former may include the opportunity cost of continued testing, and the latter may include the cost of customer dissatisfaction and of fixing faults found in the field. Exact stopping rules were obtained by Dalal and Mallows (J. Amer. Statist. Assoc., vol. 83, p. 872, 1988), under the assumption that the distribution of the fault finding rate is known. Here, two important variants where the fault finding distribution is not completely known are considered. They are (i) the distribution is exponential with unknown mean and (ii) the distribution is locally exponential with the rate changing smoothly over time. New procedures for both cases are presented. In case (i) it is shown how to incorporate information from related projects and subjective inputs. Several novel graphical procedures which are easy to implement are proposed, and these are illustrated for data from a large telecommunications software system.
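
The exact stopping rules are in the cited 1988 paper; purely as an illustration of the cost tradeoff involved, the sketch below applies a crude economic check under an exponential fault-detection model, stopping once the expected value of the faults exposed by one more day of testing falls below the daily testing cost. The estimator and the cost figures are assumptions, not the authors' procedure.

```python
def should_stop(faults_found, test_days, total_fault_estimate,
                cost_per_field_fault, cost_per_test_day):
    """Crude economic stopping check: under an exponential detection model the
    discovery rate is proportional to the number of faults still latent, so one
    more day of testing is expected to expose roughly
    remaining * (faults_found / (total_fault_estimate * test_days)) faults.
    Illustrative approximation only, not the Dalal-Mallows stopping rule."""
    remaining = max(total_fault_estimate - faults_found, 0.0)
    detection_rate = faults_found / (total_fault_estimate * test_days)
    expected_faults_next_day = remaining * detection_rate
    marginal_benefit = expected_faults_next_day * cost_per_field_fault
    return marginal_benefit < cost_per_test_day

# Hypothetical numbers: 180 faults found in 90 days, an estimated 200 faults in
# total, and each escaped fault costing 40x one day of testing.
print(should_stop(faults_found=180, test_days=90, total_fault_estimate=200,
                  cost_per_field_fault=40_000, cost_per_test_day=1_000))
```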


IEEE Transactions on Software Engineering | 1994

When to stop testing for large software systems with changing code

Siddhartha R. Dalal; Allen A. McIntosh

Developers of large software systems must decide how long software should be tested before releasing it. A common and usually unwarranted assumption is that the code remains frozen during testing. We present a stochastic and economic framework to deal with systems that change as they are tested. The changes can occur because of the delivery of software as it is developed, the way software is tested, the addition of fixes, and so on. Specifically, we report the details of a real time trial of a large software system that had a substantial amount of code added during testing. We describe the methodology, give all of the relevant details, and discuss the results obtained. We pay particular attention to graphical methods that are easy to understand, and that provide effective summaries of the testing process. Some of the plots found useful by the software testers include: the Net Benefit Plot, which gives a running chart of the benefit; the Stopping Plot, which estimates the amount of additional time needed for testing; and diagnostic plots. To encourage other researchers to try out different models, all of the relevant data are provided.
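
To make the Net Benefit Plot idea concrete, the sketch below charts a running net benefit from a daily log of cumulative faults found. The file name, column names, and cost figures are placeholders for illustration, not values from the trial described above.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical daily log of the testing effort: one row per day of testing with
# the cumulative number of faults found so far.
log = pd.read_csv("test_log.csv")   # assumed columns: day, cum_faults

COST_PER_FIELD_FAULT = 40_000       # assumed cost of a fault escaping to the field
COST_PER_TEST_DAY = 5_000           # assumed daily cost of continued testing

# Running net benefit: value of the faults caught so far, net of testing spend.
net_benefit = (log["cum_faults"] * COST_PER_FIELD_FAULT
               - log["day"] * COST_PER_TEST_DAY)

plt.plot(log["day"], net_benefit)
plt.xlabel("Days of testing")
plt.ylabel("Net benefit")
plt.title("Running net benefit (illustrative)")
plt.show()
```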


International Symposium on Software Reliability Engineering | 1998

Model-based testing of a highly programmable system

Siddhartha R. Dalal; Ashish Jain; Nachimuthu Karunanithi; J. M. Leaton; Christopher M. Lott

The paradigm of model-based testing shifts the focus of testing from writing individual test cases to developing a model from which a test suite can be generated automatically. We report on our experience with model-based testing of a highly programmable system that implements intelligent telephony services in the US telephone network. Our approach used automatic test case generation technology to develop sets of self-checking test cases based on a machine-readable specification of the messages in the protocol under test. The AETG™ software system selected a minimal number of test data tuples that covered pairwise combinations of tuple elements. We found the combinatorial approach of covering pairwise interactions between input fields to be highly effective. Our tests revealed failures that would have been difficult to detect using traditional test designs. However, transferring this technology to the testing organization was difficult. Automatic generation of cases represents a significant departure from conventional testing practice due to the large number of tests and the amount of software development involved.
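
What "covering pairwise combinations of tuple elements" buys can be seen by measuring how many of the possible value pairs an existing suite exercises. The sketch below does this for a hypothetical set of protocol-message fields and hand-written test tuples; the field names and values are invented for the example.

```python
from itertools import combinations, product

def pairwise_coverage(tests, domains):
    """Fraction of all value pairs, across every pair of fields, that the given
    test tuples exercise at least once."""
    fields = list(domains)
    required = {
        ((a, va), (b, vb))
        for a, b in combinations(fields, 2)
        for va, vb in product(domains[a], domains[b])
    }
    covered = {
        ((a, t[a]), (b, t[b]))
        for t in tests
        for a, b in combinations(fields, 2)
    }
    return len(required & covered) / len(required)

# Hypothetical message fields and a hand-written suite of three test tuples.
domains = {"type": ["setup", "teardown"], "priority": [0, 1, 2], "route": ["A", "B"]}
suite = [
    {"type": "setup", "priority": 0, "route": "A"},
    {"type": "teardown", "priority": 1, "route": "B"},
    {"type": "setup", "priority": 2, "route": "B"},
]
print(f"pairwise coverage: {pairwise_coverage(suite, domains):.0%}")
```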


International Conference on Software Engineering | 1993

Reliable software and communication: software quality, reliability, and safety

Siddhartha R. Dalal; Joseph Robert Horgan; Jon R. Kettenring

The paper examines the software development process and suggests opportunities for improving the process by using a combination of statistical and other process control techniques. Each phase of the software process affects the ultimate quality, reliability, and safety of the software. Control of the process, supported by appropriate tools to collect and analyze data, is essential to improvement of the software product. Since the ability to observe, control, and improve software depends on the ability to measure and analyze data drawn from the software process, data collection is central to the approach. Detailed data about each of the subprocesses are needed, along with tools to measure and analyze the data. Statistical process control techniques, besides improving system reliability, can produce a substantial economic gain in the software development process. The views are based upon experiences with large telecommunications systems.
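
One statistical process control technique consistent with the data-collection theme, though not one the abstract singles out, is a c-chart on defect counts per inspection unit. The sketch below computes the usual 3-sigma limits under a Poisson assumption; the weekly counts are hypothetical.

```python
import math

def c_chart_limits(defect_counts):
    """Center line and 3-sigma control limits for a c-chart, assuming the
    defect count per inspection unit is approximately Poisson distributed."""
    c_bar = sum(defect_counts) / len(defect_counts)
    sigma = math.sqrt(c_bar)
    return c_bar, max(c_bar - 3 * sigma, 0.0), c_bar + 3 * sigma

# Hypothetical defects found per weekly build during integration testing.
counts = [12, 9, 15, 11, 8, 14, 30, 10]
center, lcl, ucl = c_chart_limits(counts)
flagged = [c for c in counts if c < lcl or c > ucl]
print(f"center={center:.1f}, limits=({lcl:.1f}, {ucl:.1f}), flagged={flagged}")
```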

Collaboration


Dive into Siddhartha R. Dalal's collaborations.

Top Co-Authors

Ashish Jain

Telcordia Technologies
