
Publication


Featured research published by David L. Lanning.


Journal of Systems and Software | 1995

A neural network approach for early detection of program modules having high risk in the maintenance phase

Taghi M. Khoshgoftaar; David L. Lanning

A neural network model is developed to classify program modules as either high or low risk based on multiple criterion variables. The inputs to the model include a selection of software complexity metrics collected from a telecommunications system. Two criterion variables are used for class determination: the number of changes to enhance the program modules, and the number of changes required to remove faults from the modules. The data were deliberately biased to magnify differences in metric values between the discriminant groups. The technique displayed a low classification error rate. This success, and the absence of the data assumptions typical of statistical techniques, demonstrate the utility of neural networks in isolating high-risk modules where class determination is based on multiple quality metrics.
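The flavor of model the paper describes can be sketched with off-the-shelf tools. Below is a minimal illustration, assuming scikit-learn and synthetic stand-ins for the complexity metrics and change-based risk labels; the paper's telecommunications data and exact network architecture are not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Hypothetical stand-in data: rows are program modules, columns are
# complexity metrics (e.g., lines of code, cyclomatic complexity, ...).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
# Label a module "high risk" (1) when a noisy combination of its metrics
# is large, loosely mimicking enhancement and fault-fix change counts.
risk_score = X @ rng.normal(size=6) + rng.normal(scale=0.5, size=500)
y = (risk_score > np.quantile(risk_score, 0.8)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)

# A small feed-forward network, loosely analogous to the paper's model.
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(scaler.transform(X_train), y_train)
print("classification error:", 1 - clf.score(scaler.transform(X_test), y_test))
```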


IEEE Journal on Selected Areas in Communications | 1994

A comparative study of pattern recognition techniques for quality evaluation of telecommunications software

Taghi M. Khoshgoftaar; David L. Lanning; Abhijit S. Pandya

The extreme risks of software faults in the telecommunications environment justify the costs of data collection and modeling of software quality. Software quality models based on data drawn from past projects can identify key risk or problem areas in current similar development efforts. Once these problem areas are identified, the project management team can take actions to reduce the risks. Studies of several telecommunications systems have found that only 4-6% of the system modules were complex [LeGall et al. 1990]. Since complex modules are likely to contain a large proportion of a system's faults, the approach of focusing resources on high-risk modules seems especially relevant to telecommunications software development efforts. A number of researchers have recognized this, and have applied modeling techniques to isolate fault-prone or high-risk program modules. A classification model based upon discriminant analytic techniques has shown promise in performing this task. The authors introduce a neural network classification model for identifying high-risk program modules, and compare the quality of this model with that of a discriminant classification model fitted with the same data. They find that the neural network techniques provide a better management tool in software engineering environments. These techniques are simpler, produce more accurate models, and are easier to use.
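As a rough sketch of this kind of comparison, assuming scikit-learn and synthetic data in place of the telecommunications measurements, one can fit a linear discriminant model and a small neural network to the same observations and compare cross-validated accuracy:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for module metrics and high/low-risk labels.
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 5))
y = ((X[:, 0] * X[:, 1] + X[:, 2]) > 0).astype(int)  # mildly nonlinear

for name, model in [
    ("discriminant", LinearDiscriminantAnalysis()),
    ("neural net", MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                                 random_state=1)),
]:
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```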


Annals of Software Engineering | 1995

Application of neural networks for predicting program faults

Taghi M. Khoshgoftaar; Abhijit S. Pandya; David L. Lanning

Accurately predicting the number of faults in program modules is a major problem in the quality control of large software development efforts. Some software complexity metrics are closely related to the distribution of faults across program modules. Using these relationships, software engineers develop models that provide early estimates of quality metrics that do not become available until late in the development cycle. By considering these early estimates, software engineers can take actions to avoid or prepare for emerging quality problems. Most often, the predictive models are based upon multiple regression analysis. However, measures of software quality and complexity exhibit systematic departures from the assumptions of these analyses. With extreme violations of these assumptions, multiple regression models become unstable and lose most of their predictive quality. Since neural network models carry no data assumptions, these models could be more appropriate than regression models for modeling software faults. In this paper, we explore a neural network methodology for developing models that predict the number of faults in program modules. We apply this methodology to develop neural network models based upon data collected during the development of two commercial software systems. After developing neural network models, we apply multiple linear regression methods to develop regression models on the same data. For the data sets considered, the neural network methodology produced better predictive models in terms of both quality of fit and predictive quality.
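A hedged sketch of the comparison, assuming scikit-learn and synthetic fault data whose skewed, nonlinear character loosely mimics the assumption violations described above (not the commercial systems' data):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Hypothetical stand-in data: skewed complexity metrics per module and
# fault counts with a nonlinear dependence on the metrics.
rng = np.random.default_rng(2)
X = rng.lognormal(size=(600, 4))
faults = np.rint(2 * np.sqrt(X[:, 0]) + X[:, 1] + rng.exponential(size=600))

X_tr, X_te, y_tr, y_te = train_test_split(X, faults, random_state=2)
for name, model in [
    ("linear regression", make_pipeline(StandardScaler(), LinearRegression())),
    ("neural network", make_pipeline(StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=2))),
]:
    model.fit(X_tr, y_tr)
    print(name, "MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```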


IEEE Computer | 1994

Modeling the relationship between source code complexity and maintenance difficulty

David L. Lanning; Taghi M. Khoshgoftaar

Canonical correlation analysis can be a useful exploratory tool for software engineers who want to understand relationships that are not directly observable and who are interested in understanding influences affecting past development efforts. These influences could also affect current development efforts. In this paper, we restrict our findings to one particular development effort. We do not imply that either the weights or the loadings of the relations generalize to all software development efforts. Such generalization is untenable, since the model omitted many important influences on maintenance difficulty. Much work remains to specify subsets of indicators and development efforts for which the technique becomes useful as a predictive tool. Canonical correlation analysis is explained as a restricted form of soft modeling. We chose this approach not only because the terminology and graphical devices of soft modeling allow straightforward high-level explanations, but also because we are interested in the general method. The general method allows models involving many latent variables having interdependencies. It is intended for modeling complex interdisciplinary systems having many variables and little established theory. Further, it incorporates parameter estimation techniques relying on no distributional assumptions. Future research will focus on developing general soft models of the software development process for both exploratory analysis and prediction of future performance.
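For readers unfamiliar with the technique, a minimal canonical correlation example follows, assuming scikit-learn's CCA and synthetic complexity and maintenance-difficulty indicators driven by a hypothetical latent factor; it illustrates the method in general, not the paper's model:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Hypothetical stand-ins: complexity indicators (X) and maintenance-
# difficulty indicators (Y) for a set of program modules, correlated
# through an unobserved latent difficulty factor.
rng = np.random.default_rng(3)
latent = rng.normal(size=(300, 1))
X = latent @ rng.normal(size=(1, 4)) + rng.normal(scale=0.5, size=(300, 4))
Y = latent @ rng.normal(size=(1, 3)) + rng.normal(scale=0.5, size=(300, 3))

cca = CCA(n_components=1).fit(X, Y)
Xc, Yc = cca.transform(X, Y)
# The canonical correlation: correlation between the paired variates.
print("canonical correlation:", np.corrcoef(Xc[:, 0], Yc[:, 0])[0, 1])
print("X weights:", cca.x_weights_.ravel())
```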


international conference on software maintenance | 1993

A comparative study of predictive models for program changes during system testing and maintenance

Taghi M. Khoshgoftaar; John C. Munson; David L. Lanning

By modeling the relationship between software complexity attributes and software quality attributes, software engineers can take actions early in the development cycle to control the cost of the maintenance phase. The effectiveness of these model-based actions depends heavily on the predictive quality of the model. An enhanced modeling methodology that shows significant improvements in the predictive quality of regression models developed to predict software changes during maintenance is applied here. The methodology reduces software complexity data to domain metrics by applying principal components analysis. It then isolates clusters of similar program modules by applying cluster analysis to these derived domain metrics. Finally, the methodology develops individual regression models for each cluster. These within-cluster models have better predictive quality than a general model fitted to all of the observations.
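The three-step methodology can be sketched as follows, assuming scikit-learn and synthetic metric and change data in place of the study's measurements:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

# Hypothetical module metrics and change counts.
rng = np.random.default_rng(4)
X = rng.normal(size=(500, 8))
changes = X[:, 0] ** 2 + X[:, 1] + rng.normal(scale=0.3, size=500)

# Step 1: reduce the correlated metrics to orthogonal domain metrics.
domains = PCA(n_components=3).fit_transform(X)
# Step 2: isolate clusters of similar modules in the domain space.
labels = KMeans(n_clusters=3, n_init=10, random_state=4).fit_predict(domains)
# Step 3: fit a separate regression model within each cluster.
for k in np.unique(labels):
    mask = labels == k
    model = LinearRegression().fit(domains[mask], changes[mask])
    print(f"cluster {k}: R^2 = {model.score(domains[mask], changes[mask]):.3f}")
```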


Journal of Systems and Software | 1994

Alternative approaches for the use of metrics to order programs by complexity

Taghi M. Khoshgoftaar; John C. Munson; David L. Lanning

With many program complexity metrics available, it is difficult to rank programs by complexity: the different metrics can give different indications. There are two parts to this problem. First, because different metrics can measure the same program attribute, we need a method of evaluating a given program attribute based on the values of all metrics that measure this attribute. Second, because different metrics can measure distinct program attributes, we need a method of evaluating the overall program complexity based on the values of all program attributes. This article compares two methods of simultaneously detecting those aspects of software complexity measured by the Halstead metrics and the McCabe cyclomatic complexity number. The first method synthesizes a combined metric by weighting each Halstead metric to reflect program attributes measured by the cyclomatic complexity number. The second method uses principal components analysis. This method derives a relative complexity metric, which represents each complexity metric in proportion to the amount of unique variation that it contributes. A validation study establishes a useful statistical relationship between this relative complexity metric and faults for a military telecommunications development effort.
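A rough sketch of the principal-components route, assuming scikit-learn, synthetic metric data, and one plausible reading of the relative complexity metric (each component score weighted by its share of explained variance); the paper's exact construction may differ:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical standardized Halstead/McCabe-style metrics per module.
rng = np.random.default_rng(5)
Z = StandardScaler().fit_transform(rng.lognormal(size=(400, 5)))

pca = PCA().fit(Z)
scores = pca.transform(Z)
# Relative complexity: weight each component score by the fraction of
# variance it explains, so each metric contributes in proportion to
# the unique variation it adds.
rho = scores @ pca.explained_variance_ratio_
order = np.argsort(rho)[::-1]          # modules ranked by complexity
print("most complex modules:", order[:5])
```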


international symposium on software reliability engineering | 1993

A neural network modeling methodology for the detection of high-risk programs

Taghi M. Khoshgoftaar; David L. Lanning; Abhijit S. Pandya

The profitability of a software development effort is highly dependent on both timely market entry and the reliability of the released product. To get a highly reliable product to the market on schedule, software engineers must allocate resources appropriately across the development effort. Software quality models based upon data drawn from past projects can identify key risk or problem areas in current similar development efforts. Knowing the high-risk modules in a software design is a key to good design and staffing decisions. A number of researchers have recognized this, and have applied modeling techniques to isolate fault-prone or high-risk program modules early in the development cycle. Discriminant analytic classification models have shown promise in performing this task. We introduce a neural network classification model for identifying high-risk program modules, and we compare the quality of this model with that of a discriminant classification model fitted with the same data. We find that the neural network techniques provide a better management tool in software engineering environments.
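Since risk models of this kind are often judged by their two misclassification types rather than raw accuracy, a small illustration follows; the labels and predictions are hypothetical placeholders, and this evaluation style is a common convention rather than the paper's reported procedure:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical classifier output for eight modules; 1 = high risk.
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([0, 1, 1, 0, 0, 1, 0, 1])
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("Type I  (low-risk flagged high):", fp / (fp + tn))
print("Type II (high-risk missed):     ", fn / (fn + tp))
```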


Journal of Systems and Software | 1997

An information theory-based approach to quantifying the contribution of a software metric

Taghi M. Khoshgoftaar; Edward B. Allen; David L. Lanning

Competitive pressures are forcing many companies to aggressively pursue software quality improvement based on software complexity metrics. A metrics database is often the key to a successful ongoing software metrics program. Contel had a successful metrics program that involved project-level metrics databases and planned a corporate-level database. The U.S. Army has established a minimum set of metrics for Army software development and maintenance covering the development process, software quality, and software complexity. This program involves a central Army-wide metrics database and a validation program. In light of the importance of corporate metrics databases and the prevalence of multicollinear metrics, we define the contribution of any proposed metric in terms of the measured variation, irrespective of the metric's usefulness in quality models. This is of interest when full validation is not practical. We review two approaches to assessing the contribution of a new software complexity metric to a metrics database and present a new method based on information theory. The method is general and does not presume any particular set of metrics. We illustrate this method with three case studies, using data from full-scale operational software systems. The new method is less subjective than competing assessment methods.
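The paper's formulation is not reproduced here, but the information-theoretic idea can be illustrated by estimating how much of a proposed metric's variation is new relative to a metric already in the database. The sketch below assumes scikit-learn, SciPy, synthetic data, and simple histogram binning for entropy estimation:

```python
import numpy as np
from scipy.stats import entropy
from sklearn.metrics import mutual_info_score

def discretize(x, bins=8):
    """Bin a continuous metric so entropies can be estimated by counting."""
    return np.digitize(x, np.histogram_bin_edges(x, bins=bins)[1:-1])

# Hypothetical database metric plus one partly redundant proposed metric.
rng = np.random.default_rng(6)
existing = rng.lognormal(size=1000)
proposed = 0.7 * existing + 0.3 * rng.lognormal(size=1000)

a, b = discretize(existing), discretize(proposed)
H_b = entropy(np.bincount(b))          # total variation of the new metric
I_ab = mutual_info_score(a, b)         # variation shared with the database
# The difference approximates the conditional entropy H(proposed | existing),
# i.e., the information the proposed metric adds beyond what is captured.
print("new information (nats):", H_b - I_ab)
```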


ieee international software metrics symposium | 1994

Are the principal components of software complexity data stable across software products?

Taghi M. Khoshgoftaar; David L. Lanning

The current software market is not suitable for organizations that place competitive bids, set schedules, or control projects without regard to past performance. Software quality models based upon data collected from past projects can help engineers to estimate costs of future development efforts, and to control ongoing efforts. Application of principal components analysis can improve the stability and predictive quality of software quality models. However, models based upon principal components are only appropriate for application to products having similar principal components. We apply a statistical technique for quantifying the similarity of principal components. We find that distinct but similar products developed by the same organization can share similar principal components, and that distinct products developed by distinct organizations will likely have dissimilar principal components.
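The paper's specific statistical technique is not reproduced here; one common way to quantify the similarity of principal components, shown below as a sketch with synthetic data, is Tucker's congruence coefficient between corresponding loading vectors:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def congruence(u, v):
    """Tucker's congruence coefficient between two loading vectors."""
    return abs(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Hypothetical data from two products measured on the same complexity
# metrics; product B is a perturbed relative of product A.
rng = np.random.default_rng(7)
A = rng.normal(size=(300, 6)) @ rng.normal(size=(6, 6))
B = A[:250] + rng.normal(scale=0.4, size=(250, 6))

pca_a = PCA().fit(StandardScaler().fit_transform(A))
pca_b = PCA().fit(StandardScaler().fit_transform(B))
for k in range(3):
    c = congruence(pca_a.components_[k], pca_b.components_[k])
    print(f"component {k}: congruence = {c:.3f}")
```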


international symposium on software reliability engineering | 1994

On the impact of software product dissimilarity on software quality models

Taghi M. Khoshgoftaar; David L. Lanning

The current software market favors software development organizations that apply software quality models. Software engineers fit quality models to data collected from past projects. Predictions from these models provide guidance in setting schedules and allocating resources for new and ongoing development projects. To improve model stability and predictive quality, engineers select models from the orthogonal linear combinations produced using principal components analysis. However, recent research revealed that the principal components underlying source code measures are not necessarily stable across software products. Thus, the principal components underlying the product used to fit a regression model can vary from the principal components underlying the product for which we desire predictions. We investigate the impact of this principal components instability on the predictive quality of regression models. To achieve this, we apply an analytical technique for assessing the aptness of a given model to a particular application.
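A minimal sketch of the underlying concern, assuming scikit-learn and two synthetic products with different correlation structure: a principal-components regression fitted to one product can lose predictive quality when applied to the other.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical products with different correlation structure among the
# same metrics, so their principal components differ.
rng = np.random.default_rng(8)
A = rng.normal(size=(400, 5)) @ rng.normal(size=(5, 5))
B = rng.normal(size=(400, 5)) @ rng.normal(size=(5, 5))
y_a = A[:, 0] + 0.5 * A[:, 1] + rng.normal(scale=0.2, size=400)
y_b = B[:, 0] + 0.5 * B[:, 1] + rng.normal(scale=0.2, size=400)

# Fit a principal-components regression on product A ...
model = make_pipeline(StandardScaler(), PCA(n_components=2),
                      LinearRegression()).fit(A, y_a)
# ... then compare its fit on A against its quality on product B,
# whose principal components differ from A's.
print("R^2 on A:", model.score(A, y_a))
print("R^2 on B:", model.score(B, y_b))
```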

Collaboration


Dive into David L. Lanning's collaborations.

Top Co-Authors

Abhijit S. Pandya, Florida Atlantic University

Edward B. Allen, Mississippi State University