Publications
Featured research published by Padmanabhan Santhanam.
IBM Systems Journal | 2002
Brent Hailpern; Padmanabhan Santhanam
In commercial software development organizations, increased complexity of products, shortened development cycles, and higher customer expectations of quality have placed a major responsibility on the areas of software debugging, testing, and verification. As this issue of the IBM Systems Journal illustrates, there are exciting improvements in the underlying technology on all three fronts. However, we observe that due to the informal nature of software development as a whole, the prevalent practices in the industry are still immature, even in areas where improved technology exists. In addition, tools that incorporate the more advanced aspects of this technology are not ready for large-scale commercial use. Hence there is reason to hope for significant improvements in this area over the next several years.
International Symposium on Software Reliability Engineering | 1998
Shriram Biyani; Padmanabhan Santhanam
Traditional defect analyses of software modules have focused either on identifying error-prone modules or on predicting the number of faults in a module from a set of module attributes such as complexity, lines of code, etc. In contrast to these metrics-based modeling studies, this paper explores the relationship of the number of faults per module to the prior history of the module. Specifically, we examine the relationship between: (a) the faults discovered during development of a product release and those that escaped to the field; and (b) faults in the current release and faults in previous releases. Based on actual data from four releases of a commercial application product consisting of several thousand modules, we show that: modules with more defects in development have a higher probability of failure in the field; the relative quality of software releases can be assessed without detailed information on the exact release content or code size; and it is sufficient to consider just the previous release when predicting the number of defects in development or in the field. These results can be used to improve module-level quality prediction for future releases based on past history.
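The core observation above can be illustrated with a minimal sketch. The data, bucket threshold, and helper names below are hypothetical illustrations, not the paper's actual data or method: we estimate the rate at which modules fail in the field, grouped by how many defects they had during development.

```python
# Hedged sketch: module-level field-failure rate by development defect count.
# All data and the "high/low" threshold are illustrative assumptions.
from collections import defaultdict

# (development_defects, failed_in_field) per module -- hypothetical records.
modules = [(0, False), (0, False), (1, False), (1, True),
           (3, True), (4, True), (5, True), (2, False)]

def field_failure_rate_by_dev_defects(data):
    """Group modules by development defect count and compute the
    fraction of each group that failed in the field."""
    counts = defaultdict(lambda: [0, 0])  # bucket -> [failures, modules]
    for dev_defects, failed in data:
        bucket = "high" if dev_defects >= 2 else "low"
        counts[bucket][0] += int(failed)
        counts[bucket][1] += 1
    return {b: failures / total for b, (failures, total) in counts.items()}

rates = field_failure_rate_by_dev_defects(modules)
print(rates)
```

On this toy data the "high" bucket fails in the field more often than the "low" bucket, which is the qualitative relationship the paper reports at module level.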
IEEE Software | 1998
Kathryn A. Bassin; Theresa Kratschmer; Padmanabhan Santhanam
By employing the Orthogonal Defect Classification (ODC) scheme, the authors are able to give management a firm handle on technical decision making. Through the extensive capture and analysis of defect semantics, one can obtain information on project management, test effectiveness, reliability, quality, and customer usage. The article describes three real-life case studies and demonstrates the applicability of these techniques.
International Conference on Software Maintenance | 2001
Kathryn A. Bassin; Padmanabhan Santhanam
From the perspective of maintenance, software systems that include COTS software, legacy, ported, or outsourced code pose a major challenge. The dynamics of enhancing or adapting a product to address evolving customer usage, and the inadequate documentation of these changes over a period of time (and several generations), are just two of the factors which may have a debilitating effect on the maintenance effort. While many approaches and solutions have been offered to address the underlying problems, few offer methods which directly affect a team's ability to quickly identify and prioritize actions targeting the product which is already in front of them. The paper describes a method to analyze the information contained in defect data and arrive at technical actions addressing explicit product and process weaknesses which can feasibly be addressed in the current effort. The defects are classified using Orthogonal Defect Classification (ODC), and actual case studies are used to illustrate the key points.
IBM Systems Journal | 2002
Kathryn A. Bassin; Shriram Harikishan Biyani; Padmanabhan Santhanam
Various business considerations have led a growing number of organizations to rely on external vendors to develop software for their needs. Much of the day-to-day data from vendors are not available to the vendee, and typically the vendee organization ends up with its own system or acceptance test to validate the software. The 2000 Summer Olympics in Sydney was one such project in which IBM evaluated vendor-delivered code to ensure that all elements of a highly complex system could be integrated successfully. The readiness of the vendor-delivered code was evaluated based primarily on the actual test execution results. New metrics were derived to measure the degree of risk associated with a variety of test case failures such as functionality not enabled, bad fixes, and defects not fixed during successive iterations. The relationship of these metrics to the actual cause was validated through explicit communications with the vendor and the subsequent actions to improve the quality and completeness of the delivered code. This paper describes how these metrics can be derived from the execution data and used in a software project execution environment. Even though we have applied these metrics in a vendor-related project, the underlying concepts are useful to many software projects.
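The abstract above describes deriving risk metrics from raw test-execution results. The sketch below is a hedged illustration only: the record format, failure categories ("function not enabled", "bad fix", repeated failure across iterations), and weights are assumptions echoing the categories the abstract names, not the paper's actual metric definitions.

```python
# Hedged sketch: deriving simple risk indicators from test-execution records.
# Record format and category names are illustrative assumptions.

# Each record: (test_id, iteration, outcome)
executions = [
    ("T1", 1, "failed:function_not_enabled"),
    ("T1", 2, "passed"),
    ("T2", 1, "failed:defect"),
    ("T2", 2, "failed:defect"),       # same test still failing next iteration
    ("T3", 1, "passed"),
    ("T3", 2, "failed:bad_fix"),      # regression introduced by a fix
]

def risk_summary(records):
    """Count risk signals: functionality not enabled, bad fixes, and
    tests that failed in successive iterations (defect not fixed)."""
    summary = {"function_not_enabled": 0, "bad_fix": 0, "repeated_failure": 0}
    last_outcome = {}
    for test_id, _iteration, outcome in records:
        if outcome == "failed:function_not_enabled":
            summary["function_not_enabled"] += 1
        if outcome == "failed:bad_fix":
            summary["bad_fix"] += 1
        if outcome.startswith("failed") and last_outcome.get(test_id, "").startswith("failed"):
            summary["repeated_failure"] += 1
        last_outcome[test_id] = outcome
    return summary

print(risk_summary(executions))
```

A vendee with only execution results (no access to the vendor's in-process data) could track counts like these per delivery to gauge readiness, which is the situation the abstract describes.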
International Symposium on Software Reliability Engineering | 1997
Kathryn A. Bassin; Padmanabhan Santhanam
We have analyzed fault data comprising nearly 30,000 records (including in-process and field data) from two real products, A and B, over multiple releases, using Orthogonal Defect Classification (ODC). We exploit the information captured by ODC triggers to evaluate the development activities and identify specific actions for improvement in development. We illustrate the use of triggers to capture customer usage in a way directly meaningful to product development, show the complete trigger profiles by development activity for two releases of product A, and evaluate the effectiveness of product development activities in order to target specific areas of improvement and assess the results. We show how the trigger distribution during the development activities can be made to systematically approach the trigger distribution in the field over six releases, as validated by χ² tests. We discuss the field defect trigger distributions for product A over three releases, demonstrate the consistency of the profile over multiple releases, and discuss the origin of differences. Finally, we compare the field trigger profiles of products A and B and discuss the differences in their customer usage and environments.
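The χ² comparison mentioned above can be sketched as follows. The trigger names and counts are hypothetical, and this is a plain goodness-of-fit statistic against the field distribution, not the paper's exact procedure:

```python
# Hedged sketch: chi-squared comparison of an in-process ODC trigger
# distribution against the field trigger distribution.
# Trigger names and counts below are illustrative assumptions.

def chi_square_statistic(observed, field):
    """Chi-squared goodness-of-fit of observed trigger counts, using the
    field trigger distribution (normalized) as the expected proportions."""
    total_obs = sum(observed.values())
    total_field = sum(field.values())
    stat = 0.0
    for trigger, field_count in field.items():
        expected = total_obs * field_count / total_field
        diff = observed.get(trigger, 0) - expected
        stat += diff * diff / expected
    return stat

# Hypothetical trigger counts for one development activity vs. the field.
in_process = {"coverage": 120, "variation": 80, "interaction": 40, "workload": 10}
field = {"coverage": 90, "variation": 70, "interaction": 60, "workload": 30}

print(round(chi_square_statistic(in_process, field), 2))
```

A statistic near zero would indicate that in-process testing exercises the product the way customers do; tracking it release over release is one way to quantify the convergence the abstract reports.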
IBM Systems Journal | 2006
Avik Sinha; Clay Williams; Padmanabhan Santhanam
This paper presents a measurement framework for evaluating model-based test generation (MBTG) tools. The proposed framework is derived by using the Goal Question Metric methodology, which helps formulate the metrics of interest: complexity, ease of learning, effectiveness, efficiency, and scalability. We demonstrate the steps involved in evaluating MBTG tools by describing a case study designed for this purpose. This case study involves the use of four MBTG tools that differ in their modeling techniques, test specification techniques, and test generation algorithms.
IEEE Software | 2007
S. Chulani; Padmanabhan Santhanam; B. Hodges; K. Blacksten Anders
Commercial software product vendors such as Microsoft, IBM, and Oracle develop and manage a large portfolio of software products, which might include operating systems, middleware, firmware, and applications. Many institutions (such as banks, universities, and hospitals) also create and manage their own custom applications. Managers at these companies face an important problem: how can you manage investment, revenue, quality, and customer expectations across such a large portfolio? A heuristics-based product maturity framework can help companies effectively manage the development and maintenance of a portfolio of software products.
International Symposium on Software Reliability Engineering | 2001
Kathryn A. Bassin; Shriram Biyani; Padmanabhan Santhanam
The 2000 Summer Olympic Games event was a major information technology challenge. With a fixed deadline for completion, an inevitable dependency on software systems, and immense scope, the testing and verification effort was critical to its success. One way in which success was assured was the use of innovative ODC-based analysis techniques to evaluate planned and executed test activities. These techniques were used to verify that the plan was comprehensive yet efficient, and ensured that progress could be accurately measured. This paper describes some of these techniques and provides examples of the benefits derived. We also discuss the applicability of the techniques to other software projects.
Conference on Computers and Accessibility | 2011
Nithin Santhanam; Shari Trewin; Calvin Swart; Padmanabhan Santhanam
This study focuses on the use of web accessibility software by people with cerebral palsy performing three typical user tasks. We evaluate the customization options in the IBM accessibility Works add-on to the Mozilla Firefox browser, as used by ten users. While specific features provide significant benefit, we find that users tend to pick unnecessary options, resulting in a potentially negative user experience.