Publication


Featured research published by Lofton A. Bullard.


high assurance systems engineering | 1996

A tree-based classification model for analysis of a military software system

Taghi M. Khoshgoftaar; Edward B. Allen; Lofton A. Bullard; Robert Halstead; Gary P. Trio

Tactical military software is required to have high reliability. Each software function is often considered mission-critical, and the lives of military personnel often depend on mission success. The paper presents a tree-based modeling method for identifying fault-prone software modules, which has been applied to a subsystem of the Joint Surveillance Target Attack Radar System (JSTARS), a large tactical military system. We developed a decision tree model using software product metrics from one iteration of a spiral life cycle to predict whether or not each module in the next iteration would be considered fault-prone. Model results could be used to identify those modules that would probably benefit from extra reviews and testing, and thus reduce the risk of discovering faults later on. Identifying fault-prone modules early in development can lead to better reliability, and high reliability of each iteration translates into a highly reliable final product. A decision tree also facilitates interpretation of software product metrics to characterize the fault-prone class. The decision tree was constructed using the TREEDISC algorithm, a refinement of the CHAID algorithm. This algorithm partitions the ranges of independent variables based on chi-squared tests with the dependent variable. In contrast to algorithms used by previous tree-based studies of software metric data, there is no restriction to binary trees, and statistically significant relationships with the dependent variable are the basis for branching.
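The chi-squared partitioning idea behind CHAID-style trees can be sketched as follows. This is an illustrative toy, not the paper's TREEDISC implementation: it scores candidate splits of a single software metric against a fault-prone flag and keeps the split with the strongest chi-squared association. All data and thresholds are made up.

```python
# Toy CHAID-style split selection: choose the threshold on a metric whose
# 2x2 contingency with fault-proneness maximizes the chi-squared statistic.

def chi_squared(table):
    """Chi-squared statistic for a 2x2 contingency table [[a, b], [c, d]]."""
    n = sum(sum(row) for row in table)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = sum(table[i]) * sum(t[j] for t in table) / n
            stat += (obs - expected) ** 2 / expected
    return stat

def best_split(metric_values, fault_prone, thresholds):
    """Return the threshold with the strongest chi-squared association."""
    scored = []
    for t in thresholds:
        a = sum(1 for m, f in zip(metric_values, fault_prone) if m <= t and f)
        b = sum(1 for m, f in zip(metric_values, fault_prone) if m <= t and not f)
        c = sum(1 for m, f in zip(metric_values, fault_prone) if m > t and f)
        d = sum(1 for m, f in zip(metric_values, fault_prone) if m > t and not f)
        if 0 in (a + b, c + d, a + c, b + d):  # skip degenerate tables
            continue
        scored.append((chi_squared([[a, b], [c, d]]), t))
    return max(scored)[1]

# Toy data: modules with a size metric and a fault-prone flag.
loc = [10, 15, 20, 80, 90, 120, 30, 140]
fp = [False, False, False, True, True, True, False, True]
split = best_split(loc, fp, thresholds=[25, 50, 100])
```

Because branching is driven by the statistic rather than a fixed binary rule, the same scoring step can partition a range into more than two intervals, which is the property the abstract highlights.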


international conference on case-based reasoning | 2003

Detecting outliers using rule-based modeling for improving CBR-based software quality classification models

Taghi M. Khoshgoftaar; Lofton A. Bullard; Kehan Gao

Deploying a software product that is of high quality is a major concern for the project management team. Significant research has been dedicated toward developing methods for improving the quality of metrics-based software quality classification models. Several studies have shown that the accuracy of such models improves when outliers and data noise are removed from the training data set. This study presents a new approach called Rule-Based Modeling (RBM) for detecting and removing training data outliers in an effort to improve the accuracy of a Case-Based Reasoning (CBR) classification model. We chose to study CBR models because of their sensitivity to outliers in the training data set. Furthermore, we wanted to affirm the RBM technique as a viable outlier detector. We evaluate our approach by comparing the classification accuracy of CBR models built with and without removing outliers from the training data set. It is demonstrated that applying the RBM technique for eliminating outliers significantly improves the accuracy of CBR-based software quality classification models.
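The filtering idea can be sketched in miniature. This is a hypothetical stand-in, not the paper's RBM technique: a simple rule flags training instances whose label disagrees with it as outliers, and a 1-nearest-neighbor classifier (a minimal proxy for case-based reasoning) is then built on the cleaned set. The rule and data are made up.

```python
# Rule-based outlier filtering before nearest-neighbor (CBR-like) classification.

def rule(x):
    """Toy rule: modules with a metric value above 50 are fault-prone."""
    return x > 50

def filter_outliers(train):
    """Keep only instances whose label agrees with the rule's prediction."""
    return [(x, y) for x, y in train if rule(x) == y]

def one_nn(train, query):
    """1-nearest-neighbor over a single numeric feature."""
    return min(train, key=lambda p: abs(p[0] - query))[1]

train = [(10, False), (20, False), (90, True), (95, True),
         (15, True)]  # (15, True) is a mislabeled outlier
clean = filter_outliers(train)
# A query near the outlier is misled before filtering but not after.
```

This mirrors the abstract's point: nearest-neighbor methods memorize every stored case, so a single mislabeled case can flip nearby predictions until it is removed.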


high assurance systems engineering | 2002

Cost-sensitive boosting in software quality modeling

Taghi M. Khoshgoftaar; Erik Geleyn; Laurent A. Nguyen; Lofton A. Bullard

Early prediction of the quality of software modules prior to software testing and operations can yield great benefits to software development teams, especially those of high-assurance and mission-critical systems. Such an estimation allows effective use of testing resources to improve the modules of the software system that need it most and achieve high reliability. Several tools are available for achieving high reliability by means of predictive methods. Software classification models provide a prediction of the class of a module, i.e., fault-prone or not fault-prone. Recent advances in the data mining field make it possible to improve individual classifiers (models) by using the combined decision from multiple classifiers. This paper presents two algorithms based on combined classification, both of which provided useful models for software quality modeling. A comprehensive comparative evaluation of the boosting and cost-boosting algorithms is presented. We demonstrate how the use of boosting algorithms (original and cost-sensitive) meets many of the specific requirements of software quality modeling. C4.5 decision trees and decision stumps were used to evaluate these algorithms in two large-scale case studies of industrial software systems.
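Cost-sensitive boosting with decision stumps can be sketched as follows. This is a generic AdaBoost-style illustration under assumptions, not the paper's algorithm: initial instance weights are scaled by misclassification cost, so the ensemble is pushed to get the expensive (fault-prone) class right. Data and costs are made up.

```python
import math

def stump_predict(threshold, sign, x):
    """Decision stump on one feature: predict sign if x > threshold, else -sign."""
    return sign if x > threshold else -sign

def train_stump(data, weights):
    """Pick the (threshold, sign) pair minimizing weighted error."""
    best = None
    for t in sorted({x for x, _ in data}):
        for sign in (1, -1):
            err = sum(w for (x, y), w in zip(data, weights)
                      if stump_predict(t, sign, x) != y)
            if best is None or err < best[0]:
                best = (err, t, sign)
    return best

def boost(data, costs, rounds=5):
    """AdaBoost with stumps; initial weights scaled by misclassification cost."""
    weights = [c / sum(costs) for c in costs]
    ensemble = []
    for _ in range(rounds):
        err, t, sign = train_stump(data, weights)
        err = max(err, 1e-10)  # avoid log(0) on a perfect stump
        if err >= 0.5:
            break
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, t, sign))
        weights = [w * math.exp(-alpha * y * stump_predict(t, sign, x))
                   for (x, y), w in zip(data, weights)]
        total = sum(weights)
        weights = [w / total for w in weights]
    return ensemble

def predict(ensemble, x):
    score = sum(a * stump_predict(t, s, x) for a, t, s in ensemble)
    return 1 if score > 0 else -1

# Labels: +1 = fault-prone, -1 = not; missing a fault-prone module costs more.
data = [(5, -1), (10, -1), (20, -1), (60, 1), (70, 1), (80, 1)]
costs = [1, 1, 1, 3, 3, 3]
model = boost(data, costs)
```

The only cost-sensitive step here is the initial weighting; published cost-sensitive boosting variants also adjust the weight-update rule itself.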


International Journal of Reliability, Quality and Safety Engineering | 2009

ATTRIBUTE SELECTION USING ROUGH SETS IN SOFTWARE QUALITY CLASSIFICATION

Taghi M. Khoshgoftaar; Lofton A. Bullard; Kehan Gao

Finding techniques to reduce software development effort and produce highly reliable software is an extremely vital goal for software developers. One method that has proven quite useful is the application of software metrics-based classification models. Classification models can be constructed to identify faulty components in a software system with high accuracy. Significant research has been dedicated towards developing methods for improving the quality of software metrics-based classification models. It has been shown in several studies that the accuracy of these models improves when irrelevant attributes are identified and eliminated from the training data set. This study presents a rough set theory approach, based on classical set theory, for identifying and eliminating irrelevant attributes from a training data set. Rough set theory is used to find small groups of attributes, determined by the relationships that exist between the objects in a data set, with discernibility comparable to that of larger sets of attributes. This allows for the development of simpler classification models that are easy for analysts to understand and explain to others. We built case-based reasoning models in order to evaluate their classification performance on the smaller subsets of attributes selected using rough set theory. The empirical studies demonstrated that by applying a rough set approach to find small subsets of attributes we can build case-based reasoning models with an accuracy comparable to, and in some cases better than, a case-based reasoning model built with a complete set of attributes.
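The discernibility idea at the core of rough set attribute selection can be sketched as a toy exhaustive search: a subset of attributes is acceptable if every pair of objects with different class labels still differs on at least one chosen attribute, and we keep the smallest such subset (a reduct). This brute-force version is only illustrative; real rough set tooling uses far more efficient reduct algorithms, and the data are made up.

```python
from itertools import combinations

def discerns(objects, labels, attrs):
    """True if every differently-labeled pair differs on some chosen attribute."""
    for i in range(len(objects)):
        for j in range(i + 1, len(objects)):
            if labels[i] != labels[j]:
                if all(objects[i][a] == objects[j][a] for a in attrs):
                    return False
    return True

def smallest_reduct(objects, labels):
    """Exhaustively search for a minimal attribute subset with full discernibility."""
    n = len(objects[0])
    for size in range(1, n + 1):
        for attrs in combinations(range(n), size):
            if discerns(objects, labels, attrs):
                return attrs
    return tuple(range(n))

# Toy modules described by three metrics; the third is constant (irrelevant).
objects = [(0, 0, 1), (0, 1, 1), (1, 0, 1), (1, 1, 1)]
labels = [0, 1, 1, 1]
reduct = smallest_reduct(objects, labels)
```

A model trained only on the reduct attributes distinguishes exactly the same objects as one trained on all attributes, which is why classification accuracy can survive the reduction.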


world congress on computational intelligence | 2008

Software quality modeling: The impact of class noise on the random forest classifier

Andres Folleco; Taghi M. Khoshgoftaar; J. Van Hulse; Lofton A. Bullard

This study investigates the impact of increasing levels of simulated class noise on software quality classification. Class noise was injected into seven software engineering measurement datasets, and the performance of three learners, random forests, C4.5, and Naive Bayes, was analyzed. The random forest classifier was utilized for this study because of its strong performance relative to well-known and commonly-used classifiers such as C4.5 and Naive Bayes. Further, relatively little prior research in software quality classification has considered the random forest classifier. The experimental factors considered in this study were the level of class noise and the percent of minority instances injected with noise. The empirical results demonstrate that the random forest obtained the best and most consistent classification performance in all experiments.
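The noise-injection protocol described above (a chosen noise level applied to a chosen share of minority instances) can be sketched as follows. This is an illustrative procedure under assumptions, not the paper's exact experimental harness; the class distribution is made up.

```python
import random

def inject_class_noise(labels, minority_label, fraction, seed=0):
    """Flip the labels of a given fraction of minority-class instances."""
    rng = random.Random(seed)  # seeded for reproducibility
    idx = [i for i, y in enumerate(labels) if y == minority_label]
    noisy = list(labels)
    for i in rng.sample(idx, int(fraction * len(idx))):
        noisy[i] = 1 - noisy[i]
    return noisy

# 20% minority class (fault-prone = 1); corrupt half of the minority instances.
labels = [1] * 10 + [0] * 40
noisy = inject_class_noise(labels, minority_label=1, fraction=0.5)
flipped = sum(a != b for a, b in zip(labels, noisy))
```

Targeting the minority class is the severe case for software quality data: fault-prone modules are already rare, so each flipped label removes a large share of the signal the classifier must learn from.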


information reuse and integration | 2008

Identifying learners robust to low quality data

Andres Folleco; Taghi M. Khoshgoftaar; Jason Van Hulse; Lofton A. Bullard

Real world datasets commonly contain noise that is distributed in both the independent and dependent variables. Noise, which typically consists of erroneous variable values, has been shown to significantly affect the classification performance of learners. In this study, we identify learners with robust performance in the presence of low quality (noisy) measurement data. Noise was injected into five class imbalanced software engineering measurement datasets, initially relatively free of noise. The experimental factors considered included the learner used, the level of injected noise, the dataset used (each with unique properties), and the percentage of minority instances containing noise. No other related studies were found that have identified learners that are robust in the presence of low quality measurement data. Based on the results of this study, we recommend using the random forest learner for building classification models from noisy data.
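What "robust to low quality data" means can be illustrated with a deliberately small, deterministic example: a smoothing learner (3-nearest-neighbor, standing in loosely for an averaging ensemble such as random forest) versus a noise-sensitive learner (1-nearest-neighbor) on data with two flipped labels. The data and noise positions are made up; this is not the paper's experiment.

```python
# Robustness under injected class noise: 3-NN out-votes flipped labels
# that 1-NN memorizes.

def knn(train, query, k):
    """k-nearest-neighbor majority vote over one numeric feature."""
    nearest = sorted(train, key=lambda p: abs(p[0] - query))[:k]
    return 1 if sum(y for _, y in nearest) * 2 > k else 0

def accuracy(train, test, k):
    return sum(knn(train, q, k) == y for q, y in test) / len(test)

# True concept: fault-prone when the metric is 10 or more.
train = [(x, 1 if x >= 10 else 0) for x in range(20)]
train[3] = (3, 1)    # injected class noise
train[12] = (12, 0)  # injected class noise
test = [(x + 0.5, 1 if x + 0.5 > 9.5 else 0) for x in range(19)]

fragile = accuracy(train, test, k=1)  # memorizes the flipped labels
robust = accuracy(train, test, k=3)   # votes the noise away
```

Averaging over neighbors (or over trees, in a random forest) dilutes the influence of any single corrupted instance, which is one common explanation for the robustness the study reports.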


international conference on machine learning and applications | 2007

An application of a rule-based model in software quality classification

Lofton A. Bullard; Taghi M. Khoshgoftaar; Kehan Gao

A new rule-based classification model (RBCM) and rule-based model selection technique are presented. The RBCM utilizes rough set theory to significantly reduce the number of attributes, discretization to partition the domain of attribute values, and Boolean predicates to generate the decision rules that comprise the model. When the domain values of an attribute are continuous and relatively large, rough set theory requires that they be discretized. The subsequent discretized domain must have the same characteristics as the original domain values. However, this can lead to a large number of partitions of the attribute's domain space, which in turn leads to large rule sets. These rule sets tend to form models that over-fit. To address this issue, the proposed rule-based model adopts a new model selection strategy that minimizes over-fitting for the RBCM. Empirical validation of the RBCM is accomplished through a case study on a large legacy telecommunications system. The results demonstrate that the proposed RBCM and the model selection strategy are effective in identifying the classification model that minimizes over-fitting and high cost classification errors.
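The discretize-then-rule pipeline, and the tension between bin count and over-fitting, can be sketched in miniature. This is a hypothetical illustration, not the paper's RBCM or its selection strategy: each equal-width bin yields one "bin maps to majority class" rule, and a holdout set picks the coarsest binning that still classifies well. All data are made up.

```python
# Discretization + rule generation, with holdout-based model selection.

def discretize(x, lo, hi, bins):
    """Map a value to an equal-width bin index in [0, bins)."""
    return min(int((x - lo) / (hi - lo) * bins), bins - 1)

def learn_rules(train, lo, hi, bins):
    """One decision rule per bin: predict the bin's majority training label."""
    votes = {}
    for x, y in train:
        votes.setdefault(discretize(x, lo, hi, bins), []).append(y)
    return {b: max(set(ys), key=ys.count) for b, ys in votes.items()}

def accuracy(rules, data, lo, hi, bins, default=0):
    return sum(rules.get(discretize(x, lo, hi, bins), default) == y
               for x, y in data) / len(data)

train = [(5, 0), (12, 0), (25, 0), (60, 1), (75, 1), (88, 1), (33, 1)]
holdout = [(8, 0), (30, 0), (70, 1), (90, 1)]
# Model selection: among bin counts with the best holdout accuracy, prefer
# the fewest bins (the smallest rule set).
best_bins = max(range(2, 8),
                key=lambda b: (accuracy(learn_rules(train, 0, 100, b),
                                        holdout, 0, 100, b), -b))
rules = learn_rules(train, 0, 100, best_bins)
```

More bins always fit the training set at least as well, but each extra partition adds rules that may only encode noise; scoring on held-out data is one simple way to stop before that happens.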


information reuse and integration | 2006

Software Quality Imputation in the Presence of Noisy Data

Taghi M. Khoshgoftaar; Andres Folleco; Jason Van Hulse; Lofton A. Bullard

The detrimental effects of noise in a dependent variable on the accuracy of software quality imputation techniques were studied. The imputation techniques used in this work were Bayesian multiple imputation, mean imputation, instance-based learning, regression imputation, and the REPTree decision tree. These techniques were used to obtain software quality imputations for a large military command, control, and communications system dataset (CCCS). The underlying quality of data was a significant factor affecting the accuracy of the imputation techniques. Multiple imputation and regression imputation were top performers, while mean imputation was ineffective.
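Two of the named techniques, mean imputation and regression imputation, can be contrasted on a toy example. This is an illustration of why mean imputation underperforms when a correlated predictor exists, not a reproduction of the CCCS study; the numbers are made up.

```python
# Mean imputation versus simple linear-regression imputation of a missing
# fault count, using a correlated size metric as the predictor.

def mean_impute(known):
    """Fill the gap with the average of the observed values."""
    return sum(known) / len(known)

def regression_impute(xs, ys, x_missing):
    """Least-squares line through (xs, ys), evaluated at the missing point."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my + slope * (x_missing - mx)

# Size metric and observed fault counts for complete modules.
size = [10, 20, 30, 40]
faults = [1, 2, 3, 4]
# A module of size 50 has a missing fault count.
by_mean = mean_impute(faults)                     # ignores the predictor
by_regression = regression_impute(size, faults, 50)
```

Mean imputation pulls every imputed value toward the center regardless of the module's other metrics, while regression imputation exploits the metric-to-fault relationship, which is consistent with the ranking the abstract reports.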


International Journal of Reliability, Quality and Safety Engineering | 2011

A COMPARATIVE STUDY OF FILTER-BASED AND WRAPPER-BASED FEATURE RANKING TECHNIQUES FOR SOFTWARE QUALITY MODELING

Taghi M. Khoshgoftaar; Kehan Gao; Lofton A. Bullard

Data mining techniques have been effectively used for software defect prediction in the last decade. The general process is that a classifier is first trained on historical software data (software metrics and fault data) collected during the software development process, and then the classifier is used to classify new program modules (awaiting testing) as either fault-prone or not fault-prone. The performance of the classifier is influenced by two factors: the software metrics in the training dataset and the proportions of the fault-prone and not-fault-prone modules in that dataset. When a dataset contains too many software metrics and/or very skewed proportions of the two types of modules, several problems may arise, including extensive computation and a decline in predictive performance. In this paper, we use feature ranking and data sampling to deal with these problems. We investigate two types of feature ranking techniques (wrapper-based and filter-based), and compare their performance through two case studies on two groups of software measurement datasets. The empirical results demonstrate that filter-based ranking techniques not only show better classification performance but also have a lower computational cost.
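The filter-versus-wrapper distinction can be sketched as follows. This is a hypothetical illustration, not the paper's techniques: the filter scores each metric directly against the class labels (here, separation of per-class means), while the wrapper scores each metric by the training accuracy of a toy single-threshold classifier built on it. Data are made up.

```python
# Filter-based versus wrapper-based ranking of software metrics.

def filter_score(feature, labels):
    """Filter: distance between per-class means (no classifier involved)."""
    pos = [x for x, y in zip(feature, labels) if y]
    neg = [x for x, y in zip(feature, labels) if not y]
    return abs(sum(pos) / len(pos) - sum(neg) / len(neg))

def wrapper_score(feature, labels):
    """Wrapper: best accuracy of a single-threshold classifier on the feature."""
    best = 0.0
    for t in feature:
        acc = sum((x > t) == y for x, y in zip(feature, labels)) / len(labels)
        best = max(best, acc, 1 - acc)  # allow either direction of the rule
    return best

labels = [0, 0, 0, 1, 1, 1]
metrics = {
    "loc":      [10, 20, 15, 80, 90, 85],  # separates the classes
    "comments": [5, 6, 5, 6, 5, 6],        # irrelevant
}
filter_rank = sorted(metrics, key=lambda m: filter_score(metrics[m], labels),
                     reverse=True)
wrapper_rank = sorted(metrics, key=lambda m: wrapper_score(metrics[m], labels),
                      reverse=True)
```

The wrapper must train one model per candidate feature (and per threshold here), which is exactly the computational overhead the abstract weighs against any accuracy gain.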


International Journal of Reliability, Quality and Safety Engineering | 2016

Verifying the Security Characteristics of a Secure Physical Access Control Protocol

Clyde Carryl; Bassem Alhalabi; Taghi M. Khoshgoftaar; Lofton A. Bullard

Physical access control protocols provide a structured method of controlling the behavior of physical devices which in many cases are not only remotely located with respect to the accessing entity, but require the exchange of messages over one or more untrusted networks, such as the Internet. Therefore, if it is necessary to prevent unauthorized access to the controlled physical devices, it is essential that the physical access control protocol exhibit certain verifiable security properties. We studied the Universal Physical Access Control System (UPACS) and used the formal protocol verification tool ProVerif to verify that it possesses several key security properties. We also conducted a security analysis of the protocol and verified that it was resilient or otherwise invulnerable to several known forms of security attack, including Attacks on User Privacy and Anonymity, Session Key Security Attacks, Password Guessing Attacks, De-Synchronization Attacks, Replay Attacks, Eavesdropping Attacks, and Denial-of-Service Attacks.
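One of the listed properties, replay resistance, can be illustrated very loosely with nonce tracking: a verifier accepts each fresh nonce once and rejects any repeated message. This is a generic sketch, not the UPACS protocol or a ProVerif model; all names are hypothetical.

```python
import secrets

class Verifier:
    """Toy message verifier that rejects replayed nonces."""

    def __init__(self):
        self.seen = set()

    def issue_nonce(self):
        """Hand the client a fresh unpredictable challenge."""
        return secrets.token_hex(8)

    def accept(self, nonce):
        """Accept a message only if its nonce has not been used before."""
        if nonce in self.seen:
            return False
        self.seen.add(nonce)
        return True

v = Verifier()
n = v.issue_nonce()
first = v.accept(n)     # genuine message is accepted
replayed = v.accept(n)  # an attacker replaying the capture is rejected
```

Tools like ProVerif reason about such properties symbolically over all protocol traces rather than by running code, which is what makes the verification in the paper exhaustive rather than test-based.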

Collaboration


Dive into Lofton A. Bullard's collaboration.

Top Co-Authors

Andres Folleco (Florida Atlantic University)
Kehan Gao (Eastern Connecticut State University)
Jason Van Hulse (Florida Atlantic University)
Bassem Alhalabi (Florida Atlantic University)
Clyde Carryl (Florida Atlantic University)
Edward B. Allen (Florida Atlantic University)
Erik Geleyn (Florida Atlantic University)
Laurent A. Nguyen (Florida Atlantic University)