Mark A. Gallagher
Air Force Institute of Technology
Publication
Featured research published by Mark A. Gallagher.
Winter Simulation Conference | 1992
Randall B. Howard; Mark A. Gallagher; Kenneth W. Bauer; Peter S. Maybeck
The construction of confidence intervals for discrete-event simulation parameters must account for the correlated nature of simulation output. Through the determination of system equations and application of the Kalman filter to simulation output data, a new confidence interval construction technique has been developed. The technique uses Multiple Model Adaptive Estimation (MMAE) to obtain a nonsymmetric confidence interval for the mean estimator of a univariate output sequence. A Monte Carlo analysis of data generated from simulations of M/M/1 queues was used to compare the performance of the proposed technique with other published techniques.

1 INTRODUCTION

This paper is organized in four sections. The first section discusses output analysis for discrete-event simulations, including a brief description of several published techniques for confidence interval construction. The next section develops a proposed Kalman filter confidence interval construction technique. Included are discussions on the Kalman filter, model formulation, Multiple Model Adaptive Estimation (MMAE), and the steps involved in the proposed confidence interval construction technique. The third section reports the results of a Monte Carlo analysis on data from simulations of M/M/1 queues. The final section provides a brief summary. The research presented in this paper draws heavily upon results found in Howard (1992) and Gallagher (1992).

1.1 Discrete-Event Simulations

In discrete-event simulation, a typical output analysis objective is to obtain estimates of various output parameters. Often, a confidence interval is constructed to give the analyst a better measure of an estimate's reliability. This research concentrates on constructing confidence intervals for parameters of discrete-event simulations. One class of discrete-event simulations is infinite-horizon simulations, in which one is interested in estimating steady-state parameters. Although it is possible that these parameters may be cyclic, we focus on output sequences that attain a stationary steady-state probability distribution. Often simulation output contains transient data, which may bias parameter estimates. In this research, the effect of the start-up problem was diminished by truncating a large part of the simulation output. Typically, the simulation output sequence is positively correlated, and classical statistics for independent observations do not apply. Several techniques, each of which deals with the correlation problem in a different way, have been proposed for estimating confidence intervals based on one long simulation run. The proposed Kalman filter technique offers a novel approach for addressing the correlation issue and constructing confidence intervals. Before discussing this approach, brief descriptions of four popular techniques are provided. These techniques will be used in …
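The MMAE weighting underlying this technique can be sketched compactly. The Python below is a minimal illustration, assuming independent Gaussian residuals and an arbitrary bank of candidate means; the paper's filters are built from system equations that also capture the output autocorrelation, which this toy version omits.

```python
import numpy as np

def mmae_mean_estimate(y, candidate_means, noise_var):
    """Bayesian weighting of a bank of fixed-mean models; the weighted
    average of the candidate means is the MMAE mean estimate."""
    candidate_means = np.asarray(candidate_means, dtype=float)
    log_w = np.zeros(candidate_means.size)       # log posterior weights
    for y_t in y:
        resid = y_t - candidate_means            # each model's innovation
        log_w += -0.5 * resid ** 2 / noise_var   # Gaussian log-likelihood
        log_w -= log_w.max()                     # rescale for stability
    w = np.exp(log_w)
    w /= w.sum()
    return float(w @ candidate_means), w

# Toy usage: mildly correlated output whose true mean is 5.0
rng = np.random.default_rng(0)
y = 5.0 + np.convolve(rng.normal(0, 1, 500), np.ones(5) / 5, mode="same")
est, weights = mmae_mean_estimate(y, [4.0, 4.5, 5.0, 5.5, 6.0], noise_var=1.0)
```

Because the posterior weights concentrate unevenly around the best candidate, interval endpoints built from them need not be symmetric about the point estimate, which is the nonsymmetric-interval property the abstract describes.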
The Journal of Cost Analysis | 1997
David A. Lee; Michael R. Hogue; Mark A. Gallagher
Abstract Often in developing budgets for programs with a development phase, analysts must determine a budget profile from a point estimate of the total development cost. Expenditures for Department of Defense (DoD) development programs, as recorded in Cost Performance Reports, are seen to fit a scaled Rayleigh cumulative distribution function reasonably well. Thus, given a point estimate of total development cost, a realistic expenditure profile can be determined using a Rayleigh model. Furthermore, these expenditures can be related to annual budget requirements through the DoD Comptroller's outlay rates. This paper describes a method for determining a budget profile from a point estimate of the total development cost.
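A hedged sketch of the core idea: spread the point estimate over time with a scaled Rayleigh CDF. The shape-parameter default below is an assumption for illustration; the paper estimates the profile from Cost Performance Report data and then applies the DoD Comptroller's outlay rates to convert expenditures into annual budgets.

```python
import numpy as np

def rayleigh_expenditure_profile(total_cost, duration_yrs, a=None):
    """Spread a total-cost point estimate over time with a scaled
    Rayleigh CDF, E(t) = C * (1 - exp(-a * t**2)). The default 'a' is an
    illustrative assumption (~95% spent by the final year), not the
    paper's estimated value."""
    t = np.arange(1, duration_yrs + 1, dtype=float)
    if a is None:
        a = 3.0 / duration_yrs ** 2
    cumulative = total_cost * (1.0 - np.exp(-a * t ** 2))
    return np.diff(cumulative, prepend=0.0)  # annual expenditures

profile = rayleigh_expenditure_profile(total_cost=100.0, duration_yrs=8)
```

Converting these expenditures to budget authority via outlay rates is a further step the paper describes but this sketch omits.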
The Journal of Cost Analysis | 2002
Thomas W. Brown; Edward D. White; Mark A. Gallagher
Abstract Norden (1970) demonstrates that the Rayleigh function can model manpower on research and development (R&D) programs. Several research efforts extend his work to modeling R&D program expenditures. The Rayleigh distribution, which is a special case of the Weibull distribution, suffers from two theoretical limitations that make the Weibull function a better model for R&D program expenditures. Using 128 completed R&D programs, we develop regression models to predict the requisite Weibull scale and shape parameters. To determine the Weibull model's budget-profile forecasting capability, we compare the completed R&D program budget profiles to Weibull-modeled budget profiles and report an average correlation of 0.607. To determine the significance of our results, we compare the same 128 completed program budget profiles to Rayleigh-modeled budget profiles. Using the Weibull in lieu of the Rayleigh model, we improve initial budget-profile projections by 60 percent on average.
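The added flexibility is easy to see in a least-squares fit. The sketch below uses invented cumulative-spend numbers, not the paper's 128-program data set; freeing the Weibull shape parameter (the Rayleigh fixes it at 2) lets the fitted profile ramp faster or slower than a Rayleigh can.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_cdf(t, scale, shape):
    """Two-parameter Weibull CDF; shape == 2 is the Rayleigh case."""
    return 1.0 - np.exp(-(t / scale) ** shape)

# Hypothetical completed-program profile: cumulative fraction of the
# budget spent each year (illustrative numbers only).
years = np.arange(1, 9, dtype=float)
spent = np.array([0.03, 0.12, 0.28, 0.47, 0.66, 0.82, 0.93, 0.99])

(scale, shape), _ = curve_fit(weibull_cdf, years, spent, p0=[5.0, 2.0])
# Comparing the fitted 'shape' with 2 shows how far this program's
# spending pattern departs from the Rayleigh assumption.
```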
The Journal of Cost Analysis | 2004
Eric J. Unger; Mark A. Gallagher; Edward D. White
Abstract In this article, we postulate and test that, if the initial budget supports Rayleigh-distributed expenditures, then less cost or schedule growth occurs. Norden (1970) shows that, if effort on a project is a function of linear improvement in skills and diminishing remaining tasks, then cumulative effort follows the Rayleigh cumulative distribution function, which is a special case of the Weibull distribution. Norden demonstrates that manpower on research and development (R&D) programs can be modeled with the Rayleigh distribution. Numerous researchers have successfully fit Rayleigh models to completed R&D program expenditures for a variety of technologies. Even if the initial program budget does not support the rapid increase and long tail of Rayleigh-distributed expenditures, the program may still finish with Rayleigh-distributed expenditures through budget increases or program extensions. In this article, we evaluate whether the cost and schedule growth of R&D programs can be predicted by how well the initial R&D program budget supports Rayleigh-distributed expenditures. We measure how well the expenditures from the initial budgets follow a Rayleigh distribution in two ways: by the values of the least-squares Weibull parameters and by several goodness-of-fit statistics. We regress these values for 37 completed R&D defense programs and find our models explain 53.4% of cost-overrun and 50.5% of schedule-slip variation. Considering that funding is only one of many factors that can result in program growth, we contend these results demonstrate the significant impact of the proposed budget on completing R&D programs on schedule and at their projected cost.
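Norden's result, which this article builds on, can be stated compactly: if the instantaneous effort rate is proportional to a linearly growing skill level times the work remaining, with K the total effort and a a learning constant, then

```latex
\frac{dE}{dt} = a\,t\,\bigl(K - E(t)\bigr)
\qquad\Longrightarrow\qquad
E(t) = K\left(1 - e^{-a t^{2}/2}\right),
```

which is a scaled Rayleigh CDF; equivalently, a Weibull CDF $1 - \exp[-(t/\lambda)^{k}]$ with the shape fixed at $k = 2$. Measuring how far a fitted shape parameter departs from 2 is one way to quantify how well an initial budget "supports" Rayleigh-distributed expenditures.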
The Journal of Cost Analysis | 2016
Bradley C. Boehmke; Alan W. Johnson; Edward D. White; Jeffery D. Weir; Mark A. Gallagher
Current constraints in the fiscal environment are forcing the Air Force, and its sister services, to assess force reduction considerations. With significant force reduction comes the need to model and assess the potential impact that these changes may have on support resources. Previous research has remained heavily focused on a ratio approach for linking the tooth and tail ends of the Air Force cost spectrum and, although recent research has augmented this literature stream by providing more statistical rigor behind tooth-to-tail relationships, an adequate decision support tool has yet to be explored to aid decision-makers. The authors of this research directly address this concern by introducing a systematic approach to perform tooth-to-tail policy impact analysis. First, multivariate linear regression is applied to identify relationships between the tooth and tail. Then, a novel decision support system with Bayesian networks is introduced to model the tooth-to-tail cost consequences while capturing the uncertainty that often comes with such policy considerations. Through scenario analysis, the authors illustrate how a Bayesian network can provide decision-makers with (i) the ability to model uncertainty in the decision environment, (ii) a visual illustration of cause-and-effect impacts, and (iii) the ability to perform multi-directional reasoning in light of new information available to decision-makers.
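To make the multi-directional reasoning concrete, here is a self-contained toy Bayesian network with invented nodes and probabilities (not the authors' model): a force cut influences flying hours, which influence maintenance cost, and observing the cost node updates belief about the cut node.

```python
from itertools import product

# Toy CPTs: force_cut -> low_flying_hours -> low_maintenance_cost
p_cut = {True: 0.5, False: 0.5}        # prior on a force cut
p_hours_low = {True: 0.8, False: 0.2}  # P(low hours | cut?)
p_cost_low = {True: 0.7, False: 0.1}   # P(low cost | low hours?)

def joint(cut, low_hours, low_cost):
    """Joint probability of one full assignment of the three nodes."""
    return (p_cut[cut]
            * (p_hours_low[cut] if low_hours else 1 - p_hours_low[cut])
            * (p_cost_low[low_hours] if low_cost else 1 - p_cost_low[low_hours]))

# Diagnostic direction: observe low maintenance cost, infer the cut.
num = sum(joint(True, h, True) for h in (True, False))
den = sum(joint(c, h, True) for c, h in product((True, False), repeat=2))
print(f"P(force cut | low maintenance cost) = {num / den:.3f}")
```

Inference by enumeration is exponential in the number of nodes, so a real decision support system would use a Bayesian network library, but the update logic is exactly this.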
The Engineering Economist | 2016
Bradley C. Boehmke; Alan W. Johnson; Edward D. White; Jeffery D. Weir; Mark A. Gallagher
ABSTRACT Indirect activities often represent an underemphasized, yet significant, contributing source of costs for organizations. In order to manage indirect costs, organizations must understand how these costs behave relative to changes in operational resources and activities. This is of particular interest to the Air Force and its sister services, because recent and projected reductions in defense spending are forcing reductions in their operational variables, and insufficient research exists to help them understand how this may influence indirect costs. Furthermore, although academic research on indirect costs has advanced the knowledge behind the modeling and behavior of indirect costs, significant gaps in the literature remain. Our research provides important and timely advances to the indirect cost literature. First, our research disaggregates the indirect cost pool and focuses on indirect personnel costs, which represent 33% of all Air Force indirect costs and are a leading source of indirect costs in many organizations. Second, we employ a multilevel modeling approach to capture the hierarchical nature of an enterprise, allowing us to assess the influence that each level of an organization has on indirect cost behavior and relationships. Third, we identify the operational variables that influence indirect personnel costs in the Air Force enterprise, providing Air Force decision-makers with evidence-based knowledge to inform decisions regarding budget reduction strategies.
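A minimal sketch of the multilevel approach, assuming hypothetical variable names (wing, flying_hours, indirect_personnel_cost) and synthetic data rather than the Air Force data set; the random intercept per group is what lets each organizational level carry its own effect.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic enterprise: 8 wings, 20 observations each, with a
# wing-level random effect layered on an operational driver.
rng = np.random.default_rng(0)
wings = np.repeat([f"wing_{i}" for i in range(8)], 20)
hours = rng.normal(100, 15, size=wings.size)
wing_effect = rng.normal(0, 5, size=8)[np.repeat(np.arange(8), 20)]
cost = 50 + 0.4 * hours + wing_effect + rng.normal(0, 3, size=wings.size)
df = pd.DataFrame({"wing": wings, "flying_hours": hours,
                   "indirect_personnel_cost": cost})

# Random intercept per wing captures the hierarchical structure; the
# fixed effect estimates the operational cost driver.
model = smf.mixedlm("indirect_personnel_cost ~ flying_hours",
                    df, groups=df["wing"])
print(model.fit().summary())
```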
The Journal of Cost Analysis | 2015
Bradley C. Boehmke; Alan W. Johnson; Edward D. White; Jeffery D. Weir; Mark A. Gallagher
“Bending the cost curve” has become the ambiguous jargon employed in recent years to emphasize the notion of changing unwanted cost trends. In response to the planned $1 trillion Department of Defense budget reduction over the next six years, the Air Force has launched its own Bending the Cost Curve initiative in an effort to reduce cost growth. A principal concern with Bending the Cost Curve initiatives and research to date is the central focus on aggregate cost trajectories, which can obscure the true underlying growth curves that require attention. In response, the authors apply a novel growth-curve clustering approach to identify underlying cost-curve behavior across the Air Force enterprise. They find that micro-level growth curves vary greatly from the aggregate cost curves. Furthermore, they illustrate how this approach can help decision-makers direct their focus, proposals, and policy actions toward the specific growth curves that must be “bent.”
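One way to implement growth-curve clustering, sketched with synthetic account histories; the authors' model form and clustering algorithm may differ. Each account's history is reduced to fitted growth-curve coefficients, and the coefficients, not the aggregate levels, are clustered.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
years = np.arange(10, dtype=float)
# 60 synthetic accounts: a mix of flat, rising, and "bent" cost curves
curves = np.stack([
    base + slope * years + curv * years ** 2 + rng.normal(0, 1, 10)
    for base, slope, curv in zip(rng.uniform(10, 20, 60),
                                 rng.uniform(-1, 2, 60),
                                 rng.uniform(-0.2, 0.2, 60))
])

# Fit a quadratic growth curve to each account, then cluster the
# fitted coefficients so accounts group by curve *shape*, not level.
coeffs = np.polyfit(years, curves.T, deg=2).T
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(coeffs)
```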
Military Operations Research | 2014
Mark A. Gallagher; David J. Caswell; Brian Hanlon; Justin M. Hill
For decades, analysts within the defense community have categorized their analytic models and simulations for decision support through a hierarchy expressed in terms of resolution. The hierarchy is usually depicted as a pyramid with levels of “engineering and physics,” “engagement,” and “mission,” up to the most aggregate level of “campaign” models. In this article, we accomplish three enhancements. First, we document the importance of applying a hierarchy of models and simulations, which some have questioned because of increased computer speed. Second, we list factors to consider in describing a model or simulation's resolution, which we hope aids interfacing model results and calibrating across levels. Third, we propose expanding the hierarchy to include two more levels beyond campaign: “defense enterprise” and “government, nongovernment, and coalition instruments of power.” We use these levels to categorize models constructed to aid decisions requiring evaluations beyond a single campaign's results. The growing emphasis on disparate coalition operations, along with the increasing interplay of broader government and nongovernmental capabilities, points to a need to extend this traditional hierarchy. We propose refining the hierarchy to depict both model and simulation breadth (scope) and depth (resolution), along with our recommended two additional levels of abstraction.
Annals of Operations Research | 1994
Mark A. Gallagher; Kenneth W. Bauer; Peter S. Maybeck
Data truncation is a commonly accepted method of dealing with initialization bias in discrete-event simulation. An algorithm for determining the appropriate initial-data truncation point for multivariate output is proposed. The technique entails averaging across independent replications and estimating a steady-state output model in a state-space framework. A Bayesian technique called Multiple Model Adaptive Estimation (MMAE) is applied to compute a time-varying estimate of the output's steady-state mean vector. This MMAE implementation features the use, in parallel, of a bank of Kalman filters. Each filter is constructed under a different assumption concerning the output's steady-state mean vector. One of the filters assumes that the steady-state mean vector is accurately reflected by an estimate, called the “assumed steady-state mean vector,” taken from the last half of the simulation data. As the filters process the output through the effective transient, this particular filter becomes more likely (in a Bayesian sense) to be the best filter to represent the data, and the MMAE mean estimator is influenced increasingly toward the assumed steady-state mean vector. The estimated truncation point is selected when a norm of the MMAE mean vector estimate is within a small tolerance of the assumed steady-state mean vector. A Monte Carlo analysis using data from simulations of open and closed queueing models is used to evaluate the technique. The evaluation criteria include the ability to construct accurate and reliable confidence regions for the mean response vector based on the truncated sequences.
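The stopping rule in this abstract reduces to a few lines. The sketch below substitutes a plain running mean for the MMAE bank of Kalman filters the paper actually uses, and the tolerance is illustrative.

```python
import numpy as np

def truncation_point(y, tol=0.05):
    """Select the truncation index where a running mean-vector estimate
    first comes within a relative tolerance (in norm) of the 'assumed
    steady-state mean vector' computed from the last half of the run."""
    y = np.asarray(y, dtype=float)
    if y.ndim == 1:
        y = y[:, None]                      # treat univariate as d = 1
    assumed = y[len(y) // 2:].mean(axis=0)  # assumed steady-state mean
    running = np.cumsum(y, axis=0) / np.arange(1, len(y) + 1)[:, None]
    dist = np.linalg.norm(running - assumed, axis=1)
    hits = np.flatnonzero(dist <= tol * np.linalg.norm(assumed))
    return int(hits[0]) if hits.size else len(y) - 1
```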
The Journal of Cost Analysis | 2004
Paul H. Porter; Mark A. Gallagher