
Publications


Featured research published by Todd L. Graves.


IEEE Transactions on Software Engineering | 2000

Predicting fault incidence using software change history

Todd L. Graves; Alan F. Karr; J. S. Marron; Harvey P. Siy

This paper is an attempt to understand the processes by which software ages. We define code to be aged or decayed if its structure makes it unnecessarily difficult to understand or change and we measure the extent of decay by counting the number of faults in code in a period of time. Using change management data from a very large, long-lived software system, we explore the extent to which measurements from the change history are successful in predicting the distribution over modules of these incidences of faults. In general, process measures based on the change history are more useful in predicting fault rates than product metrics of the code: For instance, the number of times code has been changed is a better indication of how many faults it will contain than is its length. We also compare the fault rates of code of various ages, finding that if a module is, on the average, a year older than an otherwise similar module, the older module will have roughly a third fewer faults. Our most successful model measures the fault potential of a module as the sum of contributions from all of the times the module has been changed, with large, recent changes receiving the most weight.
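
The last sentence describes a weighted-sum model of fault potential. As a rough sketch only, the snippet below scores a module by summing its change sizes with an assumed exponential down-weighting of older changes; the decay rate, variable names, and data are illustrative, not the paper's exact formulation.

```python
import math

def fault_potential(changes, decay_rate=0.5):
    """Illustrative fault-potential score for one module.

    `changes` is a list of (age_in_years, lines_changed) pairs; larger and
    more recent changes contribute more.  The exponential down-weighting of
    older changes is an assumed form, not the paper's fitted model.
    """
    return sum(size * math.exp(-decay_rate * age) for age, size in changes)

# A module touched three times: the large, recent change dominates the score.
print(fault_potential([(2.0, 50), (1.0, 20), (0.1, 200)]))
```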


IEEE Transactions on Software Engineering | 2001

Does code decay? Assessing the evidence from change management data

Stephen G. Eick; Todd L. Graves; Alan F. Karr; J. S. Marron; Audris Mockus

A central feature of the evolution of large software systems is that change, which is necessary to add new functionality, accommodate new hardware, and repair faults, becomes increasingly difficult over time. We approach this phenomenon, which we term code decay, scientifically and statistically. We define code decay and propose a number of measurements (code decay indices) on software and on the organizations that produce it, that serve as symptoms, risk factors, and predictors of decay. Using an unusually rich data set (the fifteen-plus year change history of the millions of lines of software for a telephone switching system), we find mixed, but on the whole persuasive, statistical evidence of code decay, which is corroborated by developers of the code. Suggestive indications that perfective maintenance can retard code decay are also discussed.


ACM Transactions on Software Engineering and Methodology | 2001

An empirical study of regression test selection techniques

Todd L. Graves; Mary Jean Harrold; Jung-Min Kim; Adam A. Porter; Gregg Rothermel

Regression testing is the process of validating modified software to detect whether new errors have been introduced into previously tested code and to provide confidence that modifications are correct. Since regression testing is an expensive process, researchers have proposed regression test selection techniques as a way to reduce some of this expense. These techniques attempt to reduce costs by selecting and running only a subset of the test cases in a program's existing test suite. Although there have been some analytical and empirical evaluations of individual techniques, to our knowledge only one comparative study, focusing on one aspect of two of these techniques, has been reported in the literature. We conducted an experiment to examine the relative costs and benefits of several regression test selection techniques. The experiment examined five techniques for reusing test cases, focusing on their relative abilities to reduce regression testing effort and uncover faults in modified programs. Our results highlight several differences between the techniques, and expose essential trade-offs that should be considered when choosing a technique for practical application.
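
As a minimal illustration of the general idea behind coverage-based selection techniques (not a reimplementation of any technique studied in the paper), a test can be selected whenever it covers at least one changed code entity; the coverage map and entity names below are hypothetical.

```python
def select_tests(coverage, changed_entities):
    """Modification-based regression test selection (illustrative sketch).

    `coverage` maps each test name to the set of code entities (e.g.
    functions) it exercises; a test is selected if it covers at least one
    changed entity.
    """
    return {test for test, covered in coverage.items()
            if covered & changed_entities}

# Hypothetical coverage data for a three-test suite.
coverage = {
    "test_login":  {"auth.check", "db.read"},
    "test_report": {"report.render", "db.read"},
    "test_backup": {"backup.run"},
}
# Selects test_login and test_report, since both exercise the changed entity.
print(select_tests(coverage, changed_entities={"db.read"}))
```

Such techniques trade analysis cost against the reduction in tests run and the risk of omitting a fault-revealing test, which is exactly the kind of trade-off the experiment quantifies.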


IEEE Transactions on Software Engineering | 2002

Visualizing software changes

Stephen G. Eick; Todd L. Graves; Alan F. Karr; Audris Mockus; Paul Schuster

A key problem in software engineering is changing the code. We present a sequence of visualizations and visual metaphors designed to help engineers understand and manage the software change process. The principal metaphors are matrix views, cityscapes, bar and pie charts, data sheets and networks. Linked by selection mechanisms, multiple views are combined to form perspectives that both enable discovery of high-level structure in software change data and allow effective access to details of those data. Use of the views and perspectives is illustrated in two important contexts: understanding software change by exploration of software change data and management of software development. Our approach complements existing visualizations of software structure and software execution.
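
A matrix view, one of the metaphors mentioned above, can be approximated with a heat map of change counts by file and month; the matplotlib sketch below uses simulated data and is only a loose illustration, not the paper's system.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical change counts: rows are files, columns are months.
rng = np.random.default_rng(0)
counts = rng.poisson(lam=2.0, size=(8, 12))
files = [f"module_{i}.c" for i in range(8)]

fig, ax = plt.subplots()
im = ax.imshow(counts, aspect="auto", cmap="viridis")
ax.set_yticks(range(len(files)))
ax.set_yticklabels(files)
ax.set_xlabel("Month")
ax.set_title("Changes per file per month (matrix view)")
fig.colorbar(im, ax=ax, label="Number of changes")
plt.show()
```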


Reliability Engineering & System Safety | 2004

A fully Bayesian approach for combining multilevel failure information in fault tree quantification and optimal follow-on resource allocation

Michael S. Hamada; Harry F. Martz; C.S. Reese; Todd L. Graves; V. Johnson; Alyson G. Wilson

This paper presents a fully Bayesian approach that simultaneously combines non-overlapping (in time) basic event and higher-level event failure data in fault tree quantification. Such higher-level data often correspond to train, subsystem or system failure events. The fully Bayesian approach also automatically propagates the highest-level data to lower levels in the fault tree. A simple example illustrates our approach. The optimal allocation of resources for collecting additional data from a choice of different level events is also presented. The optimization is achieved using a genetic algorithm.
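
The snippet below is a minimal sketch of the basic idea of combining component-level and system-level binomial data in one posterior, for an assumed two-component fault tree whose top event occurs if either component fails; the data, priors, and grid approximation are illustrative, and the paper's genetic-algorithm resource allocation is not shown.

```python
import numpy as np

# Assumed structure: p_sys = 1 - (1 - p1) * (1 - p2)  (OR gate over two
# basic events).  Uniform Beta(1, 1) priors; posterior evaluated on a grid.

# Hypothetical, non-overlapping test data: (failures, trials).
comp1_data = (2, 50)
comp2_data = (1, 40)
sys_data = (3, 30)          # higher-level data that informs both p1 and p2

grid = np.linspace(1e-4, 0.5, 400)
p1, p2 = np.meshgrid(grid, grid, indexing="ij")
p_sys = 1.0 - (1.0 - p1) * (1.0 - p2)

def log_binom(failures, trials, p):
    return failures * np.log(p) + (trials - failures) * np.log1p(-p)

log_post = (log_binom(*comp1_data, p1)
            + log_binom(*comp2_data, p2)
            + log_binom(*sys_data, p_sys))   # uniform priors add a constant
post = np.exp(log_post - log_post.max())
post /= post.sum()

print("posterior mean p1:   ", (post * p1).sum())
print("posterior mean p2:   ", (post * p2).sum())
print("posterior mean p_sys:", (post * p_sys).sum())
```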


IEEE Transactions on Software Engineering | 2002

Using version control data to evaluate the impact of software tools: a case study of the Version Editor

David L. Atkins; Thomas Ball; Todd L. Graves; Audris Mockus

Software tools can improve the quality and maintainability of software, but are expensive to acquire, deploy, and maintain, especially in large organizations. We explore how to quantify the effects of a software tool once it has been deployed in a development environment. We present an effort-analysis method that derives tool usage statistics and developer actions from a project's change history (version control system) and uses a novel effort estimation algorithm to quantify the effort savings attributable to tool usage. We apply this method to assess the impact of a software tool called VE, a version-sensitive editor used in Bell Labs. VE aids software developers in coping with the rampant use of certain preprocessor directives (similar to #if/#endif in C source files). Our analysis found that developers were approximately 40 percent more productive when using VE than when using standard text editors.
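
A crude version of the underlying comparison can be sketched as follows; the data are hypothetical, and the 'effort' column stands in for the output of the paper's effort estimation algorithm, which is not reproduced here.

```python
import pandas as pd

# Hypothetical per-change records mined from a version control system.
changes = pd.DataFrame({
    "developer": ["a", "a", "b", "b", "c", "c", "d", "d"],
    "used_tool": [True, True, False, True, False, False, True, False],
    "effort":    [2.0, 1.5, 4.0, 2.5, 3.5, 4.5, 1.8, 3.0],
})

with_tool = changes.loc[changes["used_tool"], "effort"].mean()
without_tool = changes.loc[~changes["used_tool"], "effort"].mean()
print(f"mean effort with tool:     {with_tool:.2f}")
print(f"mean effort without tool:  {without_tool:.2f}")
print(f"apparent effort reduction: {1 - with_tool / without_tool:.0%}")
```

A raw comparison like this ignores confounding factors such as change size and developer experience; the conference version of this work below discusses how those are controlled.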


Statistical Science | 2006

Advances in Data Combination, Analysis and Collection for System Reliability Assessment

Alyson G. Wilson; Todd L. Graves; Michael S. Hamada; C. Shane Reese

The systems that statisticians are asked to assess, such as nuclear weapons, infrastructure networks, supercomputer codes and munitions, have become increasingly complex. It is often costly to conduct full system tests. As such, we present a review of methodology that has been proposed for addressing system reliability with limited full system testing. The first approaches presented in this paper are concerned with the combination of multiple sources of information to assess the reliability of a single component. The second general set of methodology addresses the combination of multiple levels of data to determine system reliability. We then present developments for complex systems beyond traditional series/parallel representations through the use of Bayesian networks and flowgraph models. We also include methodological contributions to resource allocation considerations for system reliability assessment. We illustrate each method with applications primarily encountered at Los Alamos National Laboratory.


International Conference on Software Engineering | 1999

Using version control data to evaluate the impact of software tools

David L. Atkins; Thomas Ball; Todd L. Graves; Audris Mockus

Software tools can improve the quality and maintainability of software, but are expensive to acquire, deploy and maintain, especially in large organizations. We explore how to quantify the effects of a software tool once it has been deployed in a development environment. We present a simple methodology for tool evaluation that correlates tool usage statistics with estimates of developer effort, as derived from a project's change history (version control system). Our work complements controlled experiments on software tools, which usually take place outside the industrial setting, and tool assessment studies that predict the impact of software tools before deployment. Our analysis is inexpensive, non-intrusive, and can be applied to an entire software project in its actual setting. A key part of our analysis is how to control confounding variables such as developer work-style and experience in order to accurately quantify the impact of a tool on developer effort. We demonstrate our method in a case study of a software tool called VE, a version-sensitive editor used in Bell Labs. VE aids software developers in coping with the rampant use of preprocessor directives (such as #if/#endif) in C source files. Our analysis found that developers were approximately 36% more productive when using VE than when using standard text editors.
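
One common way to control for such confounders, sketched here with simulated data rather than the paper's actual method or measurements, is to regress log effort on a tool-use indicator while including change size and per-developer terms in the model.

```python
import numpy as np

# Simulated (not real) data: log effort per change, a tool-use indicator,
# change size, and a developer id acting as a work-style/experience proxy.
rng = np.random.default_rng(1)
n = 200
developer = rng.integers(0, 5, size=n)
used_tool = rng.integers(0, 2, size=n)
size = rng.lognormal(mean=3.0, sigma=0.7, size=n)
dev_effect = np.array([0.2, -0.1, 0.0, 0.3, -0.2])
log_effort = (1.0 - 0.4 * used_tool + 0.5 * np.log(size)
              + dev_effect[developer] + rng.normal(0, 0.3, size=n))

# Design matrix: intercept, tool indicator, log size, developer dummies.
dev_dummies = (developer[:, None] == np.arange(1, 5)).astype(float)
X = np.column_stack([np.ones(n), used_tool, np.log(size), dev_dummies])
coef, *_ = np.linalg.lstsq(X, log_effort, rcond=None)

# exp(coef[1]) is the multiplicative effect of tool use on effort,
# holding change size and developer constant.
print(f"estimated effort ratio with tool: {np.exp(coef[1]):.2f}")
```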


Reliability Engineering & System Safety | 2007

A fully Bayesian approach for combining multi-level information in multi-state fault tree quantification

Todd L. Graves; Michael S. Hamada; Richard Klamann; A. C. Koehler; Harry F. Martz

This paper presents a fully Bayesian approach that simultaneously combines non-overlapping (in time) basic event and higher-level event failure data in fault tree quantification with multi-state events. Such higher-level data often correspond to train, subsystem or system failure events. The fully Bayesian approach also automatically propagates the highest-level data to lower levels in the fault tree. A simple example illustrates our approach.


Reliability Engineering & System Safety | 2008

Using simultaneous higher-level and partial lower-level data in reliability assessments

Todd L. Graves; Michael S. Hamada; Richard Klamann; A. C. Koehler; Harry F. Martz

When a system is tested, besides system data, some lower-level data may also become available, such as whether a particular subsystem or component succeeded or failed. Treating such simultaneous multi-level data as independent is a mistake because these outcomes are dependent. In this paper, we show how to handle simultaneous multi-level data correctly in a reliability assessment. We do this by determining what information the simultaneous data provide about the component reliabilities using generalized cut sets. We illustrate this methodology with an example of a low-pressure coolant injection system, using a Bayesian approach to make reliability assessments.
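
The point about dependence can be made concrete with a toy example. The sketch below, which assumes a three-component series system rather than the paper's coolant injection example, computes the joint probability of "component 0 worked and the system failed" by summing over the component states consistent with that observation, instead of multiplying two independent likelihoods.

```python
from itertools import product

def system_works(states):
    # Assumed series structure: the system works only if every component works.
    return all(states)

def observation_prob(p, known, system_failed):
    """P(observed component outcomes and system outcome | reliabilities p).

    `known` maps component index -> observed outcome (True = worked);
    unobserved components are summed out.
    """
    total = 0.0
    for states in product([True, False], repeat=len(p)):
        if any(states[i] != outcome for i, outcome in known.items()):
            continue                  # inconsistent with the observed components
        if system_works(states) == system_failed:
            continue                  # inconsistent with the system outcome
        prob = 1.0
        for works, reliability in zip(states, p):
            prob *= reliability if works else 1.0 - reliability
        total += prob
    return total

p = [0.95, 0.90, 0.85]                # hypothetical component reliabilities
# P(component 0 worked and the system failed) = 0.95 * (1 - 0.90 * 0.85)
print(observation_prob(p, known={0: True}, system_failed=True))
```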

Collaboration


Dive into Todd L. Graves's collaborations.

Top Co-Authors

Michael S. Hamada, Los Alamos National Laboratory
Alyson G. Wilson, North Carolina State University
C. Shane Reese, Brigham Young University
Harry F. Martz, Los Alamos National Laboratory
J. S. Marron, University of North Carolina at Chapel Hill
Richard Klamann, Los Alamos National Laboratory