Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Kellie B. Keeling is active.

Publication


Featured research published by Kellie B. Keeling.


Computers & Operations Research | 2007

A hybrid grouping genetic algorithm for the cell formation problem

Tabitha L. James; Evelyn C. Brown; Kellie B. Keeling

The machine-part cell formation problem consists of constructing a set of machine cells and their corresponding product families with the objective of minimizing the inter-cell movement of the products while maximizing machine utilization. This paper presents a hybrid grouping genetic algorithm for the cell formation problem that combines a local search with a standard grouping genetic algorithm to form machine-part cells. Computational results using the grouping efficacy measure for a set of cell formation problems from the literature are presented. The hybrid grouping genetic algorithm is shown to outperform the standard grouping genetic algorithm by exceeding the solution quality on all test problems and by reducing the variability among the solutions found. The algorithm developed performs well on all test problems, exceeding or matching the solution quality of the results presented in previous literature for most problems.
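The grouping efficacy measure used above can be computed directly from a binary machine-part incidence matrix and a candidate cell assignment; a minimal sketch (the matrix and assignments below are illustrative, not taken from the paper's test problems):

```python
# Grouping efficacy = (e - e_out) / (e + e_void), where
#   e      = total 1s (operations) in the machine-part incidence matrix,
#   e_out  = 1s outside the diagonal cell blocks (exceptional elements),
#   e_void = 0s inside the diagonal cell blocks (voids).
def grouping_efficacy(matrix, machine_cells, part_families):
    e = e_out = e_void = 0
    for i, row in enumerate(matrix):
        for j, a in enumerate(row):
            inside = machine_cells[i] == part_families[j]
            if a == 1:
                e += 1
                if not inside:
                    e_out += 1
            elif inside:
                e_void += 1
    return (e - e_out) / (e + e_void)

# Toy 3-machine x 4-part instance with two cells (illustrative data):
A = [[1, 1, 0, 0],
     [1, 1, 0, 0],
     [0, 0, 1, 1]]
cells = [0, 0, 1]        # machine i -> cell
families = [0, 0, 1, 1]  # part j -> family
print(grouping_efficacy(A, cells, families))  # perfect block structure -> 1.0
```

A grouping genetic algorithm searches over such cell assignments, using this efficacy as the fitness to maximize.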


Computational Statistics & Data Analysis | 2007

A comparative study of the reliability of nine statistical software packages

Kellie B. Keeling; Robert Pavur

The reliabilities of nine software packages commonly used in performing statistical analysis are assessed and compared. The (American) National Institute of Standards and Technology (NIST) data sets are used to evaluate the performance of these software packages with regard to univariate summary statistics, one-way ANOVA, linear regression, and nonlinear regression. Previous research has examined various versions of these software packages using the NIST data sets, but typically with fewer software packages than used in this study. This study provides insight into a relative comparison of a wide variety of software packages including two free statistical software packages, basic and advanced statistical software packages, and the popular Excel package. Substantive improvements from previous software reliability assessments are noted. Plots of principal components of a measure of the correct number of significant digits reveal how these packages tend to cluster for ANOVA and nonlinear regression.
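Such NIST-based assessments typically score each computed statistic by its log relative error (LRE), an estimate of the number of correct significant digits relative to the certified value; a minimal sketch (the cap of 15 digits, reflecting double precision, is an assumption):

```python
import math

def lre(computed, certified):
    """Log relative error: roughly the number of correct significant
    digits in `computed` relative to the certified value."""
    if computed == certified:
        return 15.0  # cap at roughly double-precision accuracy
    if certified == 0:
        return -math.log10(abs(computed))  # fall back to log absolute error
    return -math.log10(abs(computed - certified) / abs(certified))

# A package returning 1.2345 against a certified 1.2346 gets about
# four correct significant digits:
print(round(lre(1.2345, 1.2346), 2))
```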


Multivariate Behavioral Research | 2000

A Regression Equation for Determining the Dimensionality of Data.

Kellie B. Keeling

Parallel analysis has received much support and attention as a criterion for using eigenvalues to determine the dimensionality of data. Parallel analysis compares sample eigenvalues to expected eigenvalues of a sample from a correlation matrix generated by independent normally distributed random variables. To make parallel analysis more accessible to researchers, several studies have proposed multiple regression equations for estimating the expected value of the eigenvalues of a sample correlation matrix assuming that the population correlation matrix is the identity matrix. A new regression equation to estimate the mean value of eigenvalues is presented in this article and a comparative study reveals favorable performance of this proposed equation to previously published regression equations. This proposed technique has the advantage that a table of coefficients, listing regression coefficients for each eigenvalue root, is not needed.
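The Monte Carlo baseline that such regression equations approximate can be sketched directly; the following is an illustrative NumPy computation of the mean eigenvalues from independent normal data, not the paper's proposed regression equation:

```python
import numpy as np

def random_eigenvalue_means(n, p, reps=200, seed=0):
    """Mean sorted eigenvalues of the correlation matrix of n x p
    independent standard normal data (identity population correlation),
    the comparison baseline used in parallel analysis."""
    rng = np.random.default_rng(seed)
    eigs = np.empty((reps, p))
    for r in range(reps):
        X = rng.standard_normal((n, p))
        R = np.corrcoef(X, rowvar=False)           # p x p sample correlation
        eigs[r] = np.sort(np.linalg.eigvalsh(R))[::-1]
    return eigs.mean(axis=0)

# Parallel analysis retains a component only while the sample
# eigenvalue exceeds the corresponding mean random eigenvalue.
means = random_eigenvalue_means(n=100, p=5)
print(means)  # the means sum to p; the first exceeds 1, the last falls below 1
```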


Computers & Industrial Engineering | 2008

Neural network-based simulation metamodels for predicting probability distributions

Christopher W. Zobel; Kellie B. Keeling

Simulation is an important tool for supporting decision-making under uncertainty, particularly when the system under consideration is too complex to evaluate analytically. The amount of time required to generate large numbers of simulation replications can be prohibitive, however, necessitating the use of a simulation metamodel in order to describe the behavior of the system under new conditions. The purpose of this study is to examine the use of neural network metamodels for representing output distributions from a stochastic simulation model. A series of tests on a well-known simulation problem demonstrate the ability of the neural networks to capture the behavior of the underlying systems and to represent the inherent uncertainty with a reasonable degree of accuracy.


The American Statistician | 2011

Statistical Accuracy of Spreadsheet Software

Kellie B. Keeling; Robert Pavur

As the use of spreadsheet packages for statistical analysis increases, so does the need for assessing the reliability of these packages. This study compares the accuracy of six spreadsheet packages: Excel, Google Docs, Gnumeric, Numbers, OpenOffice Calc, and Quattro Pro. The National Institute of Standards and Technology (NIST) compiled sets of data specifically to test for computational accuracy. Certified statistically accurate computations for standard statistical procedures accompany these datasets. This study analyzes the accuracy of summary statistics such as the mean, standard deviation, and autocorrelation, as well as the F statistics for a one-way ANOVA and the coefficients and R² statistics in regression analysis, using the Statistical Reference Datasets (StRD) provided by NIST. Wilkinson's Tests are also examined to document a package's ability to perform rounding, univariate statistics, scatterplots, and regression/correlation with particularly challenging data. The final analysis reports the accuracy of probability and percentile computations involving statistical distributions. The results suggest that Gnumeric is the most reliable both in performing statistical analysis and in calculations involving statistical distributions. Google Docs spreadsheet, while convenient, has deficiencies and should not be used for scientific statistical analysis. This article has supplementary material online.


Winter Simulation Conference | 2004

Numerical accuracy issues in using Excel for simulation studies

Kellie B. Keeling; Robert Pavur

Many researchers use Excel to perform simulations, but with each upgrade to Excel - Excel 97, Excel 2000, Excel XP, and Excel 2003 - numerical accuracy problems have been noted. In the latest version, Excel 2003, some substantial changes have been made to its algorithms, as noted on its Web site. This paper discusses generating random numbers in Excel, including uniform, normal, and Poisson variates. In addition, the study assesses how Excel's accuracy stacks up to other statistical software by using the NIST Statistical Reference Datasets tests as certified benchmarks of numerical accuracy. This paper reveals that Excel 2003 still has room for improvement.
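The spreadsheet approach under discussion generates non-uniform variates by inverse transform, e.g. NORMSINV(RAND()) for normal variates; a minimal Python analogue (illustrative of the technique, not Excel's internal algorithm):

```python
import random
from statistics import NormalDist

# Inverse-transform normal variates, analogous to NORMSINV(RAND()).
# Note the pitfall this approach inherits: the inverse CDF is
# undefined at 0, so a uniform draw of exactly 0 would fail.
random.seed(42)
std_normal = NormalDist(mu=0.0, sigma=1.0)
sample = [std_normal.inv_cdf(random.random()) for _ in range(10_000)]
mean = sum(sample) / len(sample)
print(round(mean, 2))  # near 0 for a large sample
```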


Computers & Security | 2013

Using network-based text analysis to analyze trends in Microsoft's security innovations

Tabitha L. James; Lara Khansa; Deborah F. Cook; Olga Bruyaka; Kellie B. Keeling

As the use of networked computers and digital data increases, so have reports of data compromise and malicious cyber-attacks. Increased use of and reliance on technologies complicate the process of providing information security. This expanding complexity in supplying data security requirements, coupled with the increased recognition of the value of information, has led to the need to quickly advance the information security area. In this paper, we examine the maturation of the information security area by analyzing the innovation activity of one of the largest and most ubiquitous information technology companies, Microsoft. We conduct a textual analysis of its patent application activity in the information security domain since the early 2000s using a novel text analysis approach based on concepts from social network analysis and algorithmic classification. We map our analysis to focal areas in information security and examine it against Microsoft's own history in order to determine the depth and breadth of Microsoft's innovations. Our analysis shows the relevance of using a network-based text analysis. Specifically, we find that Microsoft has increasingly emphasized topics that fall into the identity and access management area. We also show that Microsoft's innovations in information security showed tremendous growth after its Trustworthy Computing Initiative was announced. In addition, we are able to determine areas of focus that correspond to Microsoft's major vulnerabilities. These findings indicate that while Microsoft is still actively, albeit not always successfully, fighting vulnerabilities in its products, it is quite vigorously and broadly innovating in the information security area.
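A network-based text analysis of this kind rests on term co-occurrence: terms become nodes, and shared appearances in a patent title become weighted edges; a minimal sketch (the titles below are hypothetical stand-ins for the real corpus):

```python
from collections import Counter
from itertools import combinations

# Hypothetical patent titles standing in for the real corpus:
titles = [
    "identity and access management",
    "access control for network security",
    "network threat detection",
]
stopwords = {"and", "for"}

# Count how often each term pair shares a title; these counts are
# the edge weights of the term co-occurrence network.
edges = Counter()
for title in titles:
    terms = sorted(set(title.split()) - stopwords)
    for a, b in combinations(terms, 2):
        edges[(a, b)] += 1

print(edges[("access", "network")])  # -> 1 (they share one title)
```

Clustering or blockmodeling this weighted network is then what surfaces the focal topic areas.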


Expert Systems With Applications | 2015

A framework to explore innovation at SAP through bibliometric analysis of patent applications

Tabitha L. James; Deborah F. Cook; Sumali Conlon; Kellie B. Keeling; Stephane Collignon; Trevor White

Highlights: We provide an analysis of innovation at SAP using bibliometric analysis of patents. A rotation-and-sort procedure was applied to the text. A frequent itemset analysis was performed using the word occurrence matrix. A blockmodeling algorithm was applied to explore relatedness between terms. We integrate text and network analysis tools to perform bibliometric analysis.

Easily accessible patent databases and advances in technology have enabled the exploration of organizational innovation through the analysis of patent records. However, the textual content of patents presents obstacles to gleaning useful information. In this study, we develop an expert system framework that utilizes text and data mining procedures for analyzing innovation through textual patent data. Specifically, we use patent titles representing the innovation activity at one company (SAP) and perform a bibliometric analysis using our proposed framework. Enterprise software, of which SAP is a pioneering developer, must serve a wide assortment of functions for companies in many different industries. In addition, SAP's sole focus is on enterprise software, and it is a market leader in the category with substantial patent activity over the last decade. Using our framework to analyze SAP's patent activity demonstrates how our bibliometric analysis can summarize and identify trends in innovation at a large software company. Our results illustrate that SAP has a breadth of innovative activity spread over the three-tier software engineering architecture and a lack of topical repetition indicative of limited depth. SAP's innovation also emphasizes data management and quickly integrates emerging technologies.
Results of an analysis of any company following our framework could be used for a variety of purposes: to examine the scope and scale of an organization's innovation, to examine the influence of technological trends on businesses, or to gain insight into corporate strategy that could aid planning, investment, and purchasing decisions.


Archive | 2013

Using process capability analysis and simulation to improve patient flow

Kellie B. Keeling; Evelyn C. Brown; John F. Kros

This work investigates a regional hospital with an affiliated low-acuity emergency department (ED) facility that currently struggles to meet its service level goal (85% of its patients should be in a room in 60 minutes or less). A capability analysis using data from existing processes revealed that, with the current processes and current level of resources, the facility is not capable of meeting its existing service level goal. A simulation was developed to examine multiple alternatives that could improve patient flow at the facility. A set of scenarios was created that modified one or more resources, such as doctors, nurses, and rooms, by changing their schedules or quantities. The impact on the response variables related to the facility's service level goal was recorded for each scenario. Based on the results of the simulation, recommendations were given to the facility for alternative ways to schedule and allocate its resources in order to meet its current service level goal.
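The service-level side of such a capability analysis reduces to the share of door-to-room times falling under the 60-minute target; a minimal sketch with illustrative lognormal times (the distribution and its parameters are assumptions, not the hospital's data):

```python
import random

# Simulated door-to-room times in minutes (illustrative lognormal):
random.seed(1)
times = [random.lognormvariate(3.8, 0.5) for _ in range(10_000)]

# Share of patients roomed within the 60-minute target, to compare
# against the 85% service level goal:
service_level = sum(t <= 60 for t in times) / len(times)
print(round(service_level, 2))  # below the 0.85 goal for these parameters
```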


Communications in Statistics - Simulation and Computation | 2004

A Comparison of Methods for Approximating the Mean Eigenvalues of a Random Matrix

Kellie B. Keeling; Robert Pavur

This paper proposes several methods for approximating the expected value of an eigenvalue of a random matrix. A comparative study assesses the accuracy of these methods. The proposed methods consist of a regression approach using the dimensions of the data and the position of the eigenvalue as predictors, two methods using expected values of order statistics for the normal distribution, and another approach using percentiles of the normal distribution. These methods provide researchers who wish to readily determine the dimensionality of data using parallel analysis with easy-to-implement alternatives for calculating the mean eigenvalues of a random matrix.

Collaboration


Dive into Kellie B. Keeling's collaborations.

Top Co-Authors

Robert Pavur

University of North Texas


Alan H. Kvanli

University of North Texas


Darla Fent

Tarleton State University


John F. Kros

East Carolina University