Publication


Featured research published by Nachiappan Nagappan.


International Conference on Software Engineering | 2006

Mining metrics to predict component failures

Nachiappan Nagappan; Thomas Ball; Andreas Zeller

What is it that makes software fail? In an empirical study of the post-release defect history of five Microsoft software systems, we found that failure-prone software entities are statistically correlated with code complexity measures. However, there is no single set of complexity metrics that could act as a universally best defect predictor. Using principal component analysis on the code metrics, we built regression models that accurately predict the likelihood of post-release defects for new entities. The approach can easily be generalized to arbitrary projects; in particular, predictors obtained from one project can also be significant for new, similar projects.
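
The pipeline this abstract describes, principal component analysis over correlated complexity metrics followed by a regression on the components, might look roughly as follows. This is a minimal sketch on invented data; the metric set, labels, and scikit-learn model choice are assumptions, not the study's actual setup.

```python
# Minimal sketch: PCA over correlated code metrics, then a regression
# model on the components. Data and model choice are illustrative.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Stand-in for per-entity complexity metrics (e.g. cyclomatic
# complexity, fan-in/fan-out, lines of code): 200 entities x 6 metrics.
X = rng.gamma(shape=2.0, scale=5.0, size=(200, 6))
# Invented post-release failure labels, loosely tied to complexity.
y = (X.sum(axis=1) + rng.normal(0, 10, 200) > X.sum(axis=1).mean()).astype(int)

# PCA collapses the correlated metrics into a few components,
# sidestepping multicollinearity before the regression step.
model = make_pipeline(PCA(n_components=3), LogisticRegression(max_iter=1000))
model.fit(X, y)
print("Predicted failure likelihood of first entity:",
      model.predict_proba(X[:1])[0, 1])
```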


International Conference on Software Engineering | 2005

Use of relative code churn measures to predict system defect density

Nachiappan Nagappan; Thomas Ball

Software systems evolve over time due to changes in requirements, optimization of code, fixes for security and reliability bugs, etc. Code churn, which measures the changes made to a component over a period of time, quantifies the extent of this change. We present a technique for early prediction of system defect density using a set of relative code churn measures that relate the amount of churn to other variables such as component size and the temporal extent of churn. Using statistical regression models, we show that while absolute measures of code churn are poor predictors of defect density, our set of relative measures of code churn is highly predictive of defect density. A case study performed on Windows Server 2003 indicates the validity of the relative code churn measures as early indicators of system defect density. Furthermore, our code churn metric suite is able to discriminate between fault-prone and non-fault-prone binaries with an accuracy of 89.0 percent.
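
To make "relative" concrete: a relative churn measure normalizes a raw churn count by a size measure of the same component. The sketch below shows three such ratios on invented numbers; the paper's actual metric suite is larger.

```python
# Sketch of relative code churn measures on invented per-component
# counts; the paper's metric suite is richer than these three ratios.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    total_loc: int      # size of the component in lines of code
    churned_loc: int    # lines added or modified in the period
    deleted_loc: int    # lines deleted in the period
    files_churned: int  # files touched at least once
    file_count: int     # total files in the component

def relative_churn(c: Component) -> dict:
    # Normalizing by size is what makes the measures "relative";
    # absolute churn alone was a poor predictor in the study.
    return {
        "churned_per_loc": c.churned_loc / c.total_loc,
        "deleted_per_loc": c.deleted_loc / c.total_loc,
        "files_churned_ratio": c.files_churned / c.file_count,
    }

print(relative_churn(Component("net.dll", 10_000, 1_200, 300, 40, 120)))
```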


International Conference on Software Engineering | 2008

Predicting defects using network analysis on dependency graphs

Thomas Zimmermann; Nachiappan Nagappan

In software development, resources for quality assurance are limited by time and by cost. In order to allocate resources effectively, managers need to rely on their experience backed by code complexity metrics. But often dependencies exist between various pieces of code over which managers may have little knowledge. These dependencies can be construed as a low-level graph of the entire system. In this paper, we propose to use network analysis on these dependency graphs. This allows managers to identify central program units that are more likely to face defects. In our evaluation on Windows Server 2003, we found that the recall for models built from network measures is 10 percentage points higher than for models built from complexity metrics. In addition, network measures could identify 60% of the binaries that the Windows developers considered critical, twice as many as identified by complexity metrics.
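
A minimal sketch of the idea, assuming the networkx library and an invented dependency graph: compute a centrality measure over the graph and rank program units by it, so that central units surface first for QA attention.

```python
# Rank program units by betweenness centrality on a (made-up)
# dependency graph; central units are the defect-prone candidates.
import networkx as nx

# Directed edges point from a program unit to a unit it depends on.
deps = nx.DiGraph([
    ("ui.dll", "core.dll"), ("net.dll", "core.dll"),
    ("core.dll", "mem.dll"), ("svc.exe", "net.dll"),
    ("svc.exe", "core.dll"),
])

for unit, score in sorted(nx.betweenness_centrality(deps).items(),
                          key=lambda kv: -kv[1]):
    print(f"{unit:10s} betweenness={score:.3f}")
```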


ACM Special Interest Group on Data Communication | 2011

Understanding network failures in data centers: measurement, analysis, and implications

Phillipa Gill; Navendu Jain; Nachiappan Nagappan

We present the first large-scale analysis of failures in a data center network. Through our analysis, we seek to answer several fundamental questions: which devices/links are most unreliable, what causes failures, how do failures impact network traffic, and how effective is network redundancy? We answer these questions using multiple data sources commonly collected by network operators. The key findings of our study are that (1) data center networks show high reliability, (2) commodity switches such as ToRs and AggS are highly reliable, (3) load balancers dominate in terms of failure occurrences with many short-lived software-related faults, (4) failures have the potential to cause loss of many small packets such as keep-alive messages and ACKs, and (5) network redundancy is only 40% effective in reducing the median impact of failure.
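
The flavor of the analysis, grouping failure events by device type and summarizing counts and downtime, can be sketched as below. The event records are invented stand-ins for the operator-collected data sources the paper uses.

```python
# Group invented failure events by device type; report failure counts
# and median downtime, echoing the load-balancer finding above.
from collections import defaultdict
from statistics import median

events = [  # (device_type, downtime_minutes)
    ("LoadBalancer", 2), ("LoadBalancer", 1), ("LoadBalancer", 3),
    ("ToR", 45), ("AggS", 30), ("LoadBalancer", 2), ("ToR", 60),
]

by_type = defaultdict(list)
for device, downtime in events:
    by_type[device].append(downtime)

for device, downtimes in sorted(by_type.items(), key=lambda kv: -len(kv[1])):
    print(f"{device:13s} failures={len(downtimes):2d} "
          f"median_downtime={median(downtimes):.0f} min")
```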


Symposium on Cloud Computing | 2010

Characterizing cloud computing hardware reliability

Kashi Venkatesh Vishwanath; Nachiappan Nagappan

Modern day datacenters host hundreds of thousands of servers that coordinate tasks in order to deliver highly available cloud computing services. These servers consist of multiple hard disks, memory modules, network cards, processors, etc., each of which, while carefully engineered, is capable of failing. While the probability of seeing any such failure in the lifetime (typically 3-5 years in industry) of a server can be somewhat small, these numbers get magnified across all devices hosted in a datacenter. At such a large scale, hardware component failure is the norm rather than an exception. Hardware failure can lead to a degradation in performance for end-users and can result in losses to the business. A sound understanding of the numbers as well as the causes behind these failures helps improve operational experience, not only by allowing us to be better equipped to tolerate failures but also by bringing down hardware costs through engineering, directly leading to savings for the company. To the best of our knowledge, this paper is the first attempt to study server failures and hardware repairs for large datacenters. We present a detailed analysis of failure characteristics as well as a preliminary analysis on failure predictors. We hope that the results presented in this paper will serve as motivation to foster further research in this area.
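
A back-of-the-envelope calculation makes the "norm rather than an exception" point concrete; the annual failure rate and fleet size below are assumptions for illustration, not the paper's measurements.

```python
# A small per-server failure probability, magnified across a large
# fleet: at datacenter scale, some failure is effectively certain.
p_fail_per_year = 0.02   # assumed 2% annual server failure rate
servers = 100_000        # "hundreds of thousands" of servers

expected_failures = p_fail_per_year * servers
p_any_failure = 1 - (1 - p_fail_per_year) ** servers
print(f"Expected yearly server failures: {expected_failures:.0f}")
print(f"P(at least one failure) = {p_any_failure:.6f}")
```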


Foundations of Software Engineering | 2009

Cross-project defect prediction: a large scale experiment on data vs. domain vs. process

Thomas Zimmermann; Nachiappan Nagappan; Harald C. Gall; Emanuel Giger; Brendan Murphy

Prediction of software defects works well within projects as long as there is a sufficient amount of data available to train any models. However, this is rarely the case for new software projects and for many companies. So far, only a few studies have focused on transferring prediction models from one project to another. In this paper, we study cross-project defect prediction models on a large scale. For 12 real-world applications, we ran 622 cross-project predictions. Our results indicate that cross-project prediction is a serious challenge, i.e., simply using models from projects in the same domain or with the same process does not lead to accurate predictions. To help software engineers choose models wisely, we identified factors that do influence the success of cross-project predictions. We also derived decision trees that can provide early estimates for precision, recall, and accuracy before a prediction is attempted.
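
The decision-tree idea, estimating up front how well a cross-project prediction will do from characteristics of the project pair, might be sketched as follows. The pair features, the generated data, and the scikit-learn regressor are assumptions for illustration.

```python
# Fit a shallow decision tree that estimates the precision a
# cross-project model would achieve, from invented pair features.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
# Features of a (training project, target project) pair:
# [same_domain, same_process, size_ratio]
X = np.column_stack([
    rng.integers(0, 2, 100),
    rng.integers(0, 2, 100),
    rng.uniform(0.1, 10.0, 100),
])
# Invented precision achieved by each cross-project prediction.
precision = 0.3 + 0.1 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(0, 0.05, 100)

tree = DecisionTreeRegressor(max_depth=3).fit(X, precision)
print("Estimated precision, same-domain same-process pair:",
      tree.predict([[1, 1, 1.0]])[0])
```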


International Conference on Software Engineering | 2008

The influence of organizational structure on software quality: an empirical case study

Nachiappan Nagappan; Brendan Murphy; Victor R. Basili

Often software systems are developed by organizations consisting of many teams of individuals working together. In The Mythical Man-Month, Brooks states that product quality is strongly affected by organization structure. Unfortunately, there has been little empirical evidence to date to substantiate this assertion. In this paper we present a metric scheme to quantify organizational complexity in relation to the product development process, and examine whether these metrics impact failure-proneness. In our case study, the organizational metrics, when applied to data from Windows Vista, were statistically significant predictors of failure-proneness. The precision and recall measures for identifying failure-prone binaries using the organizational metrics were significantly higher than those using traditional metrics like churn, complexity, coverage, dependencies, and pre-release bug measures that have been used to date to predict failure-proneness. Our results provide empirical evidence that the organizational metrics are related to, and are effective predictors of, failure-proneness.
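
As a hedged sketch of using such metrics as predictors: the feature names below loosely echo the paper's organizational measures, but the data, thresholds, and logistic-regression model choice are invented for illustration.

```python
# Predict failure-proneness of binaries from invented organizational
# metrics; only the spirit, not the data, follows the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
# Per-binary metrics: [engineers_touching, ex_engineers, org_depth]
X = np.column_stack([
    rng.integers(1, 50, 300),
    rng.integers(0, 20, 300),
    rng.integers(1, 6, 300),
])
# Invented label, loosely increasing with organizational complexity.
y = (X[:, 0] + 2 * X[:, 1] + rng.normal(0, 5, 300) > 35).astype(int)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("P(failure-prone) for a binary touched by 40 engineers:",
      clf.predict_proba([[40, 10, 3]])[0, 1])
```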


Technical Symposium on Computer Science Education | 2003

Improving the CS1 experience with pair programming

Nachiappan Nagappan; Laurie Williams; Miriam Ferzli; Eric N. Wiebe; Kai Yang; Carol Miller; Suzanne Balik

Pair programming is a practice in which two programmers work collaboratively at one computer on the same design, algorithm, or code. Prior research indicates that pair programmers produce higher quality code in essentially half the time taken by solo programmers. An experiment was run to assess the efficacy of pair programming in an introductory Computer Science course. Student pair programmers were more self-sufficient, generally performed better on projects and exams, and were more likely to complete the class with a grade of C or better than their solo counterparts. Results indicate that pair programming creates a laboratory environment conducive to more advanced, active learning than traditional labs; students and lab instructors report labs to be more productive and less frustrating.


International Conference on Software Engineering | 2005

Static analysis tools as early indicators of pre-release defect density

Nachiappan Nagappan; Thomas Ball

During software development it is helpful to obtain early estimates of the defect density of software components. Such estimates identify fault-prone areas of code requiring further testing. We present an empirical approach for the early prediction of pre-release defect density based on the defects found using static analysis tools. The defects identified by two different static analysis tools are used to fit and predict the actual pre-release defect density for Windows Server 2003. We show that there exists a strong positive correlation between the static analysis defect density and the pre-release defect density determined by testing. Further, the predicted pre-release defect density and the actual pre-release defect density are strongly correlated at a high degree of statistical significance. Discriminant analysis shows that the results of static analysis tools can be used to separate high and low quality components with an overall classification rate of 82.91%.
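
Both steps this abstract mentions, fitting a regression of pre-release defect density on static-analysis defect density and then separating high- and low-quality components with discriminant analysis, can be sketched on invented numbers:

```python
# Step 1: regress actual pre-release defect density on the density of
# defects found by static analysis. Step 2: linear discriminant
# analysis to separate components into high/low quality. Data invented.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
static_density = rng.uniform(0, 5, 150)  # defects/KLOC from the tools
prerelease_density = 2 * static_density + rng.normal(0, 1, 150)

reg = LinearRegression().fit(static_density.reshape(-1, 1), prerelease_density)
print("Fitted slope:", reg.coef_[0])

# Label components low quality when true density is above the median,
# then see how well static-analysis density alone separates them.
labels = (prerelease_density > np.median(prerelease_density)).astype(int)
lda = LinearDiscriminantAnalysis().fit(static_density.reshape(-1, 1), labels)
print("Overall classification rate:",
      lda.score(static_density.reshape(-1, 1), labels))
```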


IEEE Transactions on Software Engineering | 2006

On the value of static analysis for fault detection in software

Jiang Zheng; Laurie Williams; Nachiappan Nagappan; Will Snipes; John P. Hudepohl; Mladen A. Vouk

No single software fault-detection technique is capable of addressing all fault-detection concerns. Like software reviews and testing, static analysis tools (or automated static analysis) can be used to remove defects prior to release of a software product. To determine to what extent automated static analysis can help in the economic production of a high-quality product, we have analyzed static analysis faults and test and customer-reported failures for three large-scale industrial software systems developed at Nortel Networks. The data indicate that automated static analysis is an affordable means of software fault detection. Using the orthogonal defect classification scheme, we found that automated static analysis is effective at identifying assignment and checking faults, allowing the later software production phases to focus on more complex, functional, and algorithmic faults. A majority of the defects found by automated static analysis appear to be produced by a few key types of programmer errors, and some of these types have the potential to cause security vulnerabilities. Statistical analysis results indicate that the number of automated static analysis faults can be effective for identifying problem modules. Our results indicate static analysis tools are complementary to other fault-detection techniques for the economic production of a high-quality software product.
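
As a toy illustration of the orthogonal defect classification tally behind the assignment/checking finding, one can simply count static-analysis faults by ODC type; the fault list below is invented.

```python
# Count (invented) static-analysis faults by ODC defect type; in the
# study, assignment and checking faults dominated.
from collections import Counter

faults = ["assignment", "checking", "assignment", "algorithm",
          "checking", "assignment", "function", "checking"]
for odc_type, count in Counter(faults).most_common():
    print(f"{odc_type:10s} {count}")
```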

Collaboration


Dive into Nachiappan Nagappan's collaborations.

Top Co-Authors

Laurie Williams

North Carolina State University

Mladen A. Vouk

North Carolina State University