Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where John C. Munson is active.

Publication


Featured research published by John C. Munson.


IEEE Transactions on Software Engineering | 1992

The detection of fault-prone programs

John C. Munson; Taghi M. Khoshgoftaar

The use of the statistical technique of discriminant analysis as a tool for the detection of fault-prone programs is explored. A principal-components procedure was employed to reduce simple multicollinear complexity metrics to uncorrelated measures on orthogonal complexity domains. These uncorrelated measures were then used to classify programs into alternate groups, depending on the metric values of the program. The criterion variable for group determination was a quality measure of faults or changes made to the programs. The discriminant analysis was conducted on two distinct data sets from large commercial systems. The basic discriminant model was constructed from deliberately biased data to magnify differences in metric values between the discriminant groups. The technique was successful in classifying programs with a relatively low error rate. While the use of linear regression models has produced models of limited value, this procedure shows great promise for use in the detection of program modules with potential for faults.
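
A minimal sketch of this two-step pipeline, assuming scikit-learn; the data, component count, and variable names below are illustrative placeholders, not the paper's data or exact procedure.

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Placeholder data: rows are programs, columns are complexity metrics,
# y marks programs known to be fault-prone (1) or not (0).
rng = np.random.default_rng(0)
X = rng.random((100, 12))
y = (rng.random(100) > 0.7).astype(int)

# Reduce multicollinear metrics to uncorrelated scores on orthogonal domains.
Z = StandardScaler().fit_transform(X)
scores = PCA(n_components=0.9, svd_solver="full").fit_transform(Z)

# Discriminant analysis classifies programs into the two groups.
clf = LinearDiscriminantAnalysis().fit(scores, y)
print("misclassification rate:", 1.0 - clf.score(scores, y))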


IEEE Journal on Selected Areas in Communications | 1990

Predicting software development errors using software complexity metrics

Taghi M. Khoshgoftaar; John C. Munson

Predictive models that incorporate a functional relationship of program error measures with software complexity metrics and metrics based on factor analysis of empirical data are developed. Specific techniques for assessing regression models are presented for analyzing these models. Within the framework of regression analysis, the authors examine two separate means of exploring the connection between complexity and errors. First, the regression models are formed from the raw complexity metrics. Essentially, these models confirm a known relationship between program lines of code and program errors. The second methodology involves the regression of complexity factor measures and measures of errors. These complexity factors are orthogonal measures of complexity from an underlying complexity domain model. From this more global perspective, it is believed that there is a relationship between program errors and complexity domains of program structure and size (volume). Further, the strength of this relationship suggests that predictive models are indeed possible for the determination of program errors from these orthogonal complexity domains.
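
The second modelling approach can be sketched, under assumptions, as a regression of error counts on orthogonal factor scores rather than on the raw metrics; the data and factor count below are placeholders.

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.random((80, 10))        # placeholder raw complexity metrics per module
errors = rng.poisson(3, 80)     # placeholder error counts per module

# Orthogonal complexity factor measures from an underlying domain model.
factors = FactorAnalysis(n_components=2).fit_transform(StandardScaler().fit_transform(X))

# Regress errors on the factor measures instead of the raw metrics.
model = LinearRegression().fit(factors, errors)
print("R^2 on factor scores:", model.score(factors, errors))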


international conference on software maintenance | 1998

Code churn: a measure for estimating the impact of code change

John C. Munson; Sebastian G. Elbaum

This study presents a methodology that will produce a viable fault surrogate. The focus of the effort is on the precise measurement of software development process and product outcomes. Tools and processes for the static measurement of the source code have been installed and made operational in a large embedded software system. Source code measurements have been gathered unobtrusively for each build in the software evolution process. The measurements are synthesized to obtain the fault surrogate. The complexity of sequential builds is compared and a new measure, code churn, is calculated. This paper demonstrates the effectiveness of code complexity churn by validating it against the testing problem reports.
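
As a rough illustration only (the paper's fault surrogate is synthesized from several static source-code measurements), code churn can be read as the accumulated change in a module-level measure between two sequential builds; the module names and values below are hypothetical.

# Hypothetical inputs: build_prev and build_next map module names to a
# complexity (fault-surrogate) value measured for one build.
def code_churn(build_prev, build_next):
    modules = set(build_prev) | set(build_next)
    return sum(abs(build_next.get(m, 0.0) - build_prev.get(m, 0.0)) for m in modules)

print(code_churn({"a.c": 4.2, "b.c": 7.1},
                 {"a.c": 5.0, "b.c": 7.1, "c.c": 2.3}))   # roughly 3.1 for this toy data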


international conference on software engineering | 1989

The Dimensionality Of Program Complexity

John C. Munson; Taghi M. Khoshgoftaar

Software complexity metrics attempt to define the unique characteristics of computer programs in an analytical way. Many such metrics have been developed to explain various perceived differences among programs. Many studies have been conducted to show the similarity among classes of these metrics. What is lacking in this body of literature is a technique which will aid in the establishment of the true dimensionality of the complexity problem space. The objective of this paper is to examine some recent investigations in the area of software complexity using factor analysis to begin an exploration of the actual dimensionality of the complexity metrics. This technique can expose the relationships of these many metrics, one to another. Some correlation coefficients from recent empirical studies on software metrics were factor analyzed, showing the probable existence of five complexity dimensions within thirty-five different complexity measures.
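
A small sketch of one common way to estimate such dimensionality, assuming the eigenvalue-greater-than-one (Kaiser) retention rule; this is an assumption for illustration, since the paper works from published correlation matrices rather than raw data.

import numpy as np

rng = np.random.default_rng(2)
X = rng.random((200, 35))                # placeholder: 35 complexity metrics

# Dimensionality estimate from the metrics' correlation matrix.
corr = np.corrcoef(X, rowvar=False)
eigenvalues = np.linalg.eigvalsh(corr)
n_dimensions = int(np.sum(eigenvalues > 1.0))
print("estimated complexity dimensions:", n_dimensions)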


IEEE Transactions on Software Engineering | 1992

Predictive modeling techniques of software quality from software measures

Taghi M. Khoshgoftaar; John C. Munson; Bibhuti B. Bhattacharya; Gary Richardson

The objective in the construction of models of software quality is to use measures that may be obtained relatively early in the software development life cycle to provide reasonable initial estimates of the quality of an evolving software system. Measures of software quality and software complexity to be used in this modeling process exhibit systematic departures from the normality assumptions of regression modeling. Two new estimation procedures are introduced, and their performances in the modeling of software quality from software complexity in terms of the predictive quality and the quality of fit are compared with those of the more traditional least squares and least absolute value estimation techniques. The two new estimation techniques did produce regression models with better quality of fit and predictive quality when applied to data obtained from two software development projects.
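
The abstract does not spell out the two new estimators, so the sketch below only sets up the traditional baselines it names, least squares and least absolute value regression, using statsmodels; the data are synthetic placeholders.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
X = sm.add_constant(rng.random((60, 3)))           # placeholder complexity metrics
quality = X @ np.array([1.0, 2.0, 0.5, 1.5]) + rng.standard_t(3, 60)

ols = sm.OLS(quality, X).fit()                     # least squares baseline
lav = sm.QuantReg(quality, X).fit(q=0.5)           # least absolute value baseline
print("least squares coefficients:", ols.params)
print("least absolute value coefficients:", lav.params)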


Journal of Systems and Software | 1990

Applications of a relative complexity metric for software project management

John C. Munson; Taghi M. Khoshgoftaar

The relationships among the many software complexity metrics have made the use of these metrics somewhat untenable as project management tools. In this article, we develop the notion of a single metric, called relative complexity, which assigns a single value to each program in a program set to order the programs by their complexity. For a test data set, relative program complexity was established for 27 programs. These relative complexity data were then examined in relation to the time each of the programs spent in the debugging phase. A significant relationship was found between relative complexity and programming-debugging time as a measure of effort. It is also clear that the relative complexity metric is stable throughout the design process. It may serve as a leading indicator as to the set of programs that will require large amounts of system resources during the development and maintenance phases.
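
A hedged sketch of collapsing many metrics into one value per program: here relative complexity is taken as a variance-weighted sum of principal component scores, which is an assumption about the weighting made for illustration, not a quotation of the article's exact definition.

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
X = rng.random((27, 8))                  # placeholder: 27 programs x 8 metrics

Z = StandardScaler().fit_transform(X)
pca = PCA().fit(Z)
scores = pca.transform(Z)

# One relative complexity value per program, used only for ordering.
relative_complexity = scores @ pca.explained_variance_ratio_
print("programs ordered most to least complex:",
      np.argsort(relative_complexity)[::-1])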


Information & Software Technology | 1990

Regression modelling of software quality: empirical investigation

John C. Munson; Taghi M. Khoshgoftaar

The use of software complexity metrics in the determination of software quality has met with limited success. Many metrics measure similar aspects of program differences. Some lack a sound theoretical foundation. Attempts to use these metrics in quantitative modelling scenarios have been frustrated by a lack of understanding of the precise nature of exactly what is being measured. This is particularly true in the application of these metrics to predictive models. The paper investigates some basic issues associated with the modelling process, including problems of shared variance among metrics and the possible relationship between complexity metrics and measures of program quality. The modelling techniques are applied to a sample data set to explore the differences between modelling techniques with raw complexity metrics and complexity metrics that have been simplified through factor analysis. The ultimate objective is to provide the foundation for the use of complexity metrics in predictive models. This, in turn, will permit the effective use of these measures in the management of complex software projects.


ieee international software metrics symposium | 2003

Developing fault predictors for evolving software systems

John C. Munson

Over the past several years, we have been developing methods of predicting the fault content of software systems based on measured characteristics of their structural evolution. In previous work, we have shown there is a significant linear relationship between code churn, a synthesized metric, and the rate at which faults are inserted into the system in terms of number of faults per unit change in code churn. We have begun a new investigation of this relationship with a flight software technology development effort at the Jet Propulsion Laboratory (JPL) and have progressed in resolving the limitations of the earlier work in two distinct steps. First, we have developed a standard for the enumeration of faults. Second, we have developed a practical framework for automating the measurement of these faults. We analyze the measurements of structural evolution and fault counts obtained from the JPL flight software technology development effort. Our results indicate that the measures of structural attributes of the evolving software system are suitable for forming predictors of the number of faults inserted into software modules during their development. The new fault standard also ensures that the model so developed has greater predictive validity.
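
The reported linear relationship can be sketched as a simple regression of fault counts on code churn; the per-build numbers below are invented placeholders, not JPL measurements.

import numpy as np

churn = np.array([12.0, 30.5, 8.2, 44.1, 19.7])   # code churn per build (placeholder)
faults = np.array([3, 9, 2, 13, 5])               # faults inserted per build (placeholder)

# The slope estimates faults inserted per unit change in code churn.
slope, intercept = np.polyfit(churn, faults, 1)
print(f"estimated fault insertion rate: {slope:.2f} faults per unit churn")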


Journal of Systems and Software | 1993

Measurement of data structure complexity

John C. Munson; Taghi M. Khoshgoftaar

A new measure of software complexity is introduced. This new complexity metric describes the data structure complexity of operands in a program from a functional point of view. The computational methodology for the metric is presented. An empirical study is included to show the behavior of the new metric in relation to an existing set of validated metric primitives. This study shows that the new metric measures a source of variation not accounted for in the set of metric primitives. It provides additional resolution in describing differences in data structures among program elements. As a result, a new complexity domain, that of data structure, may be added to an emerging complexity domain model.


Journal of Systems and Software | 2000

Software evolution: code delta and code churn

Gregory A. Hall; John C. Munson

As software modules evolve over time so do the measurements associated with them. Measuring a software system just once might give some idea of where the system is, but it gives no insight into where it has been or where it is going. In order to evaluate how a system has changed over successive iterations, it is necessary to establish a baseline. This baseline provides the ability to compare different versions of the system to determine how its complexity has changed. Two measurements based on this idea, code delta and code churn, can be used to assess the amount of change in the complexity of the system across successive software builds. The concepts of code delta and code churn are illustrated by measuring a real, industrial sized software system.
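
A minimal sketch of the distinction, assuming per-module complexity values for a baseline build and a later build; code delta keeps the sign of the change while code churn accumulates its magnitude. The module names and values are illustrative only.

# Hypothetical per-module complexity values against an established baseline.
baseline = {"a.c": 10.0, "b.c": 6.0}
build_k  = {"a.c": 7.5,  "b.c": 6.0, "c.c": 4.0}

def code_delta(old, new):
    return sum(new.get(m, 0.0) - old.get(m, 0.0) for m in set(old) | set(new))

def code_churn(old, new):
    return sum(abs(new.get(m, 0.0) - old.get(m, 0.0)) for m in set(old) | set(new))

print("code delta:", code_delta(baseline, build_k))   # 1.5: opposing changes can cancel
print("code churn:", code_churn(baseline, build_k))   # 6.5: magnitude of all change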

Collaboration


Dive into John C. Munson's collaboration.

Top Co-Authors

Sebastian G. Elbaum

University of Nebraska–Lincoln

Bibhuti B. Bhattacharya

North Carolina State University

Frank D. Anger

University of West Florida

Gary Richardson

University of Central Florida

Joseph S. Sherif

California State University
