Samuel E. Buttrey
Naval Postgraduate School
Publications
Featured research published by Samuel E. Buttrey.
Computational Statistics & Data Analysis | 2002
Samuel E. Buttrey; Ciril Karo
We construct a hybrid (composite) classifier by combining two classifiers in common use: classification trees and k-nearest-neighbor (k-NN). In our scheme we partition the feature space with a classification tree, and then classify test set items using the k-NN rule only among those training items in the same leaf as the test item. This somewhat reduces the computational load associated with k-NN, and it produces a classification rule that performs better than either trees or the usual k-NN on a number of well-known data sets.
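As a rough illustration of the scheme (not the authors' code), the Python sketch below fits a classification tree and then classifies each test point by k-NN restricted to the training points that fall in the same leaf; the data set, leaf size, and k are arbitrary choices for the example.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# 1. Partition the feature space with a tree (fairly large leaves).
tree = DecisionTreeClassifier(min_samples_leaf=30, random_state=0).fit(X_tr, y_tr)
train_leaf = tree.apply(X_tr)              # leaf index of each training point
test_leaf = tree.apply(X_te)               # leaf index of each test point

# 2. Classify each test point by k-NN among training points in its leaf.
preds = np.empty_like(y_te)
for i, leaf in enumerate(test_leaf):
    in_leaf = train_leaf == leaf
    k = int(min(5, in_leaf.sum()))         # guard against leaves smaller than k
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_tr[in_leaf], y_tr[in_leaf])
    preds[i] = knn.predict(X_te[i:i + 1])[0]

print("hybrid accuracy:", (preds == y_te).mean())
```

Restricting the neighbor search to a single leaf is what yields the computational savings: each test item is compared only against the training items in its leaf rather than the whole training set.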
Journal of Quantitative Analysis in Sports | 2011
Samuel E. Buttrey; Alan R. Washburn; Wilson L. Price
We propose a model to estimate the rates at which NHL teams score and yield goals. In the model, goals occur as if from a Poisson process whose rate depends on the two teams playing, the home-ice advantage, and the manpower (power-play, short-handed) situation. Data on all the games from the 2008-2009 season were downloaded and processed into a form suitable for the analysis. The model seems to perform adequately in prediction and should be useful for handicapping and for informing the decision of when to pull the goalie.
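A minimal sketch of this kind of model is shown below, using simulated data rather than the actual 2008-2009 season and an assumed log-linear form for the scoring rate (team attack and defense effects plus a home-ice term); the manpower-situation term is omitted, and all rates are invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
teams = ["ANA", "BOS", "CHI", "DET", "MTL", "TOR"]
attack = dict(zip(teams, rng.normal(0, 0.2, len(teams))))
defense = dict(zip(teams, rng.normal(0, 0.2, len(teams))))

rows = []
for _ in range(400):                                   # simulated schedule
    home, away = rng.choice(teams, size=2, replace=False)
    lam_home = np.exp(1.0 + attack[home] - defense[away] + 0.1)   # home-ice edge
    lam_away = np.exp(1.0 + attack[away] - defense[home])
    rows.append({"team": home, "opp": away, "home": 1, "goals": rng.poisson(lam_home)})
    rows.append({"team": away, "opp": home, "home": 0, "goals": rng.poisson(lam_away)})
games = pd.DataFrame(rows)

# Poisson regression recovers per-team attack/defense effects and the home-ice term.
fit = smf.glm("goals ~ C(team) + C(opp) + home",
              data=games, family=sm.families.Poisson()).fit()
print(fit.params.round(3))
```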
Journal of Safety Research | 2011
Paul O'Connor; Samuel E. Buttrey; Angela O'Dea; Quinn Kennedy
INTRODUCTION There are a variety of qualitative and quantitative tools for measuring safety climate. However, questionnaires are by far the most commonly used methodology. METHOD This paper reports the descriptive analysis of a large sample of safety climate survey data (n=110,014) collected over 10 years from U.S. Naval aircrew using the Command Safety Assessment Survey (CSAS). RESULTS The analysis demonstrated substantial non-random response bias in the data: the reverse-worded items had a unique pattern of responses, there was an increasing tendency over time to provide only a modal response, responses to the same item near the beginning and end of the questionnaire did not correlate as highly as might be expected, and the faster the questionnaire was completed the higher the frequency of modal responses. It is suggested that the non-random response bias was due to a number of factors that reduced participant motivation (questionnaire design, lack of belief in the importance of the response, participant fatigue, and questionnaire administration). CONCLUSIONS Researchers must consider the factors that increase the likelihood of non-random measurement error in safety climate survey data and cease to rely on data collected solely with a long and complex questionnaire. IMPACT ON INDUSTRY In the absence of valid and reliable data it will not be possible for organizations to take the measures required to improve safety climate.
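For illustration only, the snippet below reproduces two of the diagnostics described above on synthetic Likert-style responses (not the CSAS data): the per-respondent share of modal answers and its relationship to completion time, and the correlation between an item and a near-repeat of it placed later in the questionnaire. All quantities here are invented.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 1_000
completion_min = rng.gamma(shape=4, scale=3, size=n)            # minutes to finish
items = pd.DataFrame(rng.integers(1, 6, size=(n, 20)),          # 5-point Likert items
                     columns=[f"q{i}" for i in range(1, 21)])
# q20 mostly repeats q1, standing in for a repeated item
items["q20"] = np.where(rng.random(n) < 0.8, items["q1"], items["q20"])

# Share of each respondent's answers equal to that respondent's modal answer.
modal_share = items.apply(lambda row: (row == row.mode().iloc[0]).mean(), axis=1)

print("corr(modal share, completion time):",
      round(float(np.corrcoef(modal_share, completion_min)[0, 1]), 3))
print("corr(early item, repeated late item):",
      round(float(items["q1"].corr(items["q20"])), 3))
```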
Food and Chemical Toxicology | 2012
Josh M. Katz; Carl K. Winter; Samuel E. Buttrey; J.G. Fadel
Western and guideline-based diets were compared to determine whether dietary improvements resulting from following dietary guidelines reduce acrylamide intake. Acrylamide forms in heat-treated foods and is a human neurotoxin and animal carcinogen. Acrylamide intake from the Western diet was estimated with probabilistic techniques using teenage (13-19 years) National Health and Nutrition Examination Survey (NHANES) food consumption estimates combined with FDA data on the levels of acrylamide in a large number of foods. Guideline-based diets were derived from NHANES data using linear programming techniques to comport with recommendations from the Dietary Guidelines for Americans, 2005. Although the guideline-based diets were better balanced and richer in fruits, vegetables, and other dietary components than the Western diets, acrylamide intake (mean±SE) was significantly greater (P<0.001) from consumption of the guideline-based diets (0.508±0.003 μg/kg/day) than from consumption of the Western diets (0.441±0.003 μg/kg/day). Guideline-based diets contained less acrylamide contributed by French fries and potato chips than Western diets. Overall acrylamide intake, however, was higher in guideline-based diets as a result of more frequent breakfast cereal intake. This is believed to be the first example of a risk assessment that combines probabilistic techniques with linear programming, and the results demonstrate that linear programming techniques can be used to model specific diets for the assessment of toxicological and nutritional dietary components.
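The linear-programming idea can be sketched with a toy diet problem; the foods, nutrient values, acrylamide levels, costs, and constraints below are hypothetical and stand in for the NHANES/FDA inputs and guideline constraints used in the study.

```python
import numpy as np
from scipy.optimize import linprog

foods = ["cereal", "fries", "fruit", "vegetables"]
cost = np.array([0.6, 1.2, 0.9, 0.8])              # $/serving (hypothetical)
acrylamide = np.array([150.0, 390.0, 0.0, 10.0])   # ng/serving (hypothetical)
fiber = np.array([3.0, 2.0, 3.5, 4.0])             # g/serving (hypothetical)
produce = np.array([0.0, 0.0, 1.0, 1.0])           # counts toward fruit/veg servings

# Guideline-style constraints: at least 25 g fiber and 2.5 fruit/vegetable
# servings per day, at most 6 servings of any one food; minimize daily cost.
res = linprog(
    c=cost,
    A_ub=np.vstack([-fiber, -produce]),            # -fiber <= -25, -(fruit+veg) <= -2.5
    b_ub=[-25.0, -2.5],
    bounds=[(0, 6)] * len(foods),
)

servings = res.x
print(dict(zip(foods, np.round(servings, 2))))
# Acrylamide intake of the resulting diet is evaluated after the LP is solved.
print("acrylamide intake (ng/day):", round(float(acrylamide @ servings), 1))
```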
Computational Statistics & Data Analysis | 1998
Samuel E. Buttrey
A technique is presented for adapting nearest-neighbor classification to the case of categorical variables. The set of categories is mapped onto the real line in such a way as to maximize the ratio of total sum of squares to within-class sum of squares, aggregated over classes. The resulting real values then replace the categories, and nearest-neighbor classification proceeds with the Euclidean metric on these new values. Continuous variables can be included in this scheme with little added effort. This approach has been implemented in a computer program and tried on a number of data sets, with encouraging results. Nearest-neighbor classification is a well-known and effective classification technique. With this scheme, an unknown item's distances to all known items are measured, and the unknown class is estimated by the class of the nearest neighbor or by the class most often represented among a set of nearest neighbors. This has proven effective in many examples, but an appropriate distance normalization is required when variables are scaled differently. For categorical variables, "distance" is not even defined. In this paper categorical data values are replaced by real numbers in an optimal way; those real numbers are then used in nearest-neighbor classification.
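One way to implement the core idea is sketched below (an illustration, not the paper's program): for a single categorical predictor, indicator-code the categories, choose the scoring vector that maximizes the ratio of total to within-class sum of squares via a generalized eigenproblem, and use the resulting scores as an ordinary numeric feature for nearest-neighbor classification. The data and ridge term are made up for the example.

```python
import numpy as np
from scipy.linalg import eigh

def score_categories(cat, cls, ridge=1e-8):
    """Real-valued scores for the categories of one predictor, chosen to
    maximize (total SS) / (within-class SS) of the scored values."""
    cats = np.unique(cat)
    Z = (cat[:, None] == cats[None, :]).astype(float)   # indicator coding
    Zc = Z - Z.mean(axis=0)
    T = Zc.T @ Zc                                        # total SS matrix
    W = np.zeros_like(T)                                 # within-class SS matrix
    for g in np.unique(cls):
        Zg = Z[cls == g]
        Zg = Zg - Zg.mean(axis=0)
        W += Zg.T @ Zg
    W += ridge * np.eye(len(cats))                       # guard against singular W
    vals, vecs = eigh(T, W)                              # solves T v = lambda W v
    s = vecs[:, np.argmax(vals)]                         # scoring with largest ratio
    return dict(zip(cats, s))

# Toy usage (hypothetical data): the scored feature can be fed to k-NN.
cat = np.array(["red", "blue", "red", "green", "blue", "green"])
cls = np.array([0, 1, 0, 1, 1, 0])
scores = score_categories(cat, cls)
numeric_feature = np.array([scores[c] for c in cat])
print(scores)
```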
Annals of Internal Medicine | 2016
Andrew Anglemyer; Matthew L. Miller; Samuel E. Buttrey; Lyn R. Whitaker
Suicide rates have increased by 60% worldwide during the past 47 years, and suicide is a leading cause of death among 15- to 44-year-olds (1). In 2010, suicide was the 10th leading cause of death in the United States (2). The overall suicide rate in the U.S. military has also increased, almost doubling from 2001 to 2011 (3). Potential factors that have changed over time, such as deployments and mental health conditions, have helped clarify the reasons for the increased suicide rate in the U.S. military. A recent study of suicide risk among veterans found that deployment did not increase the risk for suicide (4), whereas other studies explored risk for suicide after psychiatric hospitalization (5) as well as psychosocial risk (6). Research and debate are ongoing regarding the various motivations for choosing a particular method of suicide (7, 8). Previous studies showed that men are more likely to use violent methods of suicide (for example, firearm related), whereas women are more likely to use nonviolent means (for example, poisoning) (8). Within the military, research suggests that the suicide risk is significantly greater among personnel whose occupations provide easy access to firearms than among those in other occupations (9). Some researchers have suggested that both psychological and biological differences exist between people who choose violent methods and those who use nonviolent ones (10, 11). Empirical evidence suggests that among military conscripts, previous problems in school may predict violent suicide attempts, which also may be a strong indicator of subsequent suicide (12). Indeed, violent suicide attempts have been linked very strongly to subsequent suicide completion (12). In fact, a person who attempts suicide by firearm has a risk for subsequent completed suicide about 5 times higher than that of people who attempt suicide by nonviolent means (hazard ratio [HR], 5.18 [95% CI, 1.27 to 21.24]) (12).
A common limitation in previous analyses evaluating suicide trends in the U.S. military was a lack of consolidated service data. Before 2008, risk factor analyses for U.S. Department of Defense (DOD) suicides were performed in relatively small populations, primarily at the military service level, by using service-unique databases (6). Moreover, the bulk of research has focused on the army (4, 5, 13-18), whereas studies including all services have been limited to survey-type data or have had limited follow-up (16-18). The objective of the present study is to evaluate suicide rates among active duty military personnel across years and to identify differences among branches. Further, with regard to completed suicides, we aim to identify the groups of active duty military personnel who are at greatest risk for firearm-specific suicide.
Methods
A joint endeavor by the Defense Suicide Prevention Office and the U.S. Department of Veterans Affairs resulted in development of the Suicide Data Repository (SDR), which has mitigated the problem of insufficient consolidated data. The SDR combines data from the Centers for Disease Control and Prevention (CDC) via the National Death Index (NDI), as well as the Military Mortality Database, to provide a collection of demographic and military-specific information on all service members and veterans who committed suicide and had served in the armed forces since 1974. The SDR was fully established in 2013 and to our knowledge is the most comprehensive source of demographic and military-specific data on suicides in the U.S. military.
The data for this study were provided by the Defense Manpower Data Center (DMDC).
Study Population
We used 2 data sets: the first, extracted from the SDR, contains demographic and military-specific data for each suicide; the second contains the monthly end strength, or personnel count on the last day of the month, for each demographic subpopulation. All subpopulation strata were combinations of the following: year, sex, age, race, marital status, education, age at enlistment, rank, and Armed Forces Qualification Test (AFQT) category (higher AFQT categories represent lower cognitive ability; for example, category IIIB or higher is equal to a percentile score <50). Data on active duty personnel were not fully available for suicides occurring before 2005 or for those occurring outside the United States. Therefore, our study population comprises all enlisted personnel (that is, nonofficers) of the U.S. military regular component (including the army, air force, navy, and marines) who committed suicide while on active duty stateside between 2005 and 2011. For the analysis of firearm-specific suicides, we attempted to exclude suicides among enlistees who did not have military service exposure and perhaps had an unrecognized predisposition to suicidality before entering the military; therefore, we included only those who had already completed training. The DOD Human Research Protection Program and the Naval Postgraduate School's Institutional Review Board approved the collection of the data for this study (NPS.2014.0073).
Statistical Analysis
Temporal Trends of Active Duty Military Suicide
Using the combined data set containing both suicides and personnel counts, we determined the branch-specific suicide rates. Service branches have different missions and recruit and attract different types of personnel; because we could not control for these differences with our available data, we analyzed data for each branch separately.
Predictors of Violent Methods Among Active Duty Military Suicides
Using the cohort of completed suicides in the SDR data set, we identified predictors of firearm-specific suicide. Predictor variables included factors previously identified in the literature and a priori hypothesized to affect both the service and the outcome. To evaluate the total effect of each military branch on firearm-specific suicide, we considered several covariates, including age at death, rank, sex, education, race, marital status, religion, length of service at the time of death, AFQT score category, and primary military occupation (that is, infantry/special operations). We identified covariates to adjust for in multivariable models using a directed acyclic graph (representing the relationships among service, suicide, and other variables) to determine minimally sufficient adjustment sets (19) (Appendix Figure). The minimally sufficient adjustment set identified comprised infantry/special operations job classification, age, sex, AFQT score category, and education. We restricted the multivariable models identifying predictors of firearm-specific suicide to men, because more than 95% of all suicides were committed by men (only 1 female marine and 9 female navy personnel committed suicide). Among the navy and air force suicides, we did not consider infantry/special operations job classification in multivariable models because only 2 such suicides were identified and this job classification is found more commonly in the army and marines.
Appendix Figure. Directed acyclic graph evaluating the relationship between branch of service and firearm-specific suicide and potential confounders. AFQT = Armed Forces Qualification Test; ops = operations.
Of 1416 suicides, 366 (25.8%) had missing data for firearm-specific suicide or covariates (Appendix Table 1). We used multiple imputation to address missing data, and we assumed data were missing at random. The variables included in the imputation models were method of suicide, AFQT score category, education, infantry/special operations job classification, sex, and age; imputations were run separately by branch. We specified conditional models and performed the imputations based on these conditional models. We generated multiple sets (m = 10) of imputed values, allowing us to account for the uncertainty inherent in using the imputed values in our models (20, 21). The imputed data sets were combined by applying Rubin's rules (22, 23), which appropriately adjust the estimated SEs, and thus CIs and P values, to account for the additional uncertainty associated with data missingness. Because the mechanism of these missing data is unknown and may not be consistent with the missing-at-random assumption, these results should be interpreted cautiously. In a secondary analysis, we compared the multiple imputation results with a complete case analysis approach that excluded observations with missing data. Adjusted odds ratios (aORs) are reported with 95% CIs. Data were analyzed using R version 3.1.2 (R Foundation for Statistical Computing) (24). The base package was used for the logistic models, and the mice (Multiple Imputation by Chained Equations) package (25) was used for the multiple imputation analyses.
Appendix Table 1. Selected Variables and Percentage With a Missing Cause of Suicide
Role of the Funding Source
This study was unfunded.
Results
A total of 1455 suicides occurred during 125 million person-months among the active duty, regular component enlisted personnel in the U.S. Army, Air Force, Marine Corps, and Navy from 2005 to 2011, with average end strengths for that period of 451,000, 268,000, 173,000, and 283,000, respectively. The highest suicide rates (per 100,000) from 2005 to 2011 were in the army in 2009 and 2010 (29.44 and 29.15 suicides, respectively) (Figure and Table 1), whereas the lowest suicide rates were in the air force and navy in 2005 (9.95 and 9.79 suicides, respectively). From 2006 to 2011, the rates were higher among army personnel (19.13 to 29.44 cases per 100,000) than among members of any other branch.
Figure. Suicide rates per 100,000 persons (2005 to 2011), by branch of service.
Table 1. Suicide Rates per 100,000 Persons for Active Duty, Regular Component Enlisted Personnel*
Characteristics
Of these suicides, 1416 occurred among nontrainees, comprising our suicide cohort (Table 2). Most (52.5%) were among army personnel, and approximately 95% occurred in men. The median age was 25 years (interquartile range
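The imputation-and-pooling step described above can be illustrated schematically. The sketch below is a toy-data Python illustration (the study itself used R's mice package): predictors with missing values are imputed m = 10 times, a logistic model is fit to each completed data set, and the estimates are pooled with Rubin's rules. All variable names, effect sizes, and missingness rates here are invented.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
n, m = 500, 10
age = rng.normal(25, 5, n)
afqt = rng.normal(50, 10, n)
logit = -1.0 + 0.03 * (age - 25) + 0.01 * (afqt - 50)   # invented true model
firearm = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([age, afqt])
X[rng.random(n) < 0.2, 1] = np.nan          # make ~20% of AFQT values missing

coefs, variances = [], []
for i in range(m):                           # one logistic fit per imputed data set
    imputer = IterativeImputer(sample_posterior=True, random_state=i)
    Xi = sm.add_constant(imputer.fit_transform(X))
    fit = sm.Logit(firearm, Xi).fit(disp=0)
    coefs.append(fit.params)
    variances.append(np.diag(fit.cov_params()))

coefs, variances = np.array(coefs), np.array(variances)
# Rubin's rules: average the estimates; total variance = within-imputation
# variance + (1 + 1/m) * between-imputation variance.
pooled = coefs.mean(axis=0)
total_var = variances.mean(axis=0) + (1 + 1 / m) * coefs.var(axis=0, ddof=1)
print("pooled log-odds:", pooled)
print("pooled SE:", np.sqrt(total_var))
```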
International Journal of Human Factors and Ergonomics | 2012
Paul O'Connor; Douglas W Jones; Michael E. McCauley; Samuel E. Buttrey
The US Navy's Crew Resource Management (CRM) training programme has not been evaluated within the last decade. Reactions were evaluated by analysing 51,570 responses to an item pertaining to CRM that is part of a safety climate survey. A total of 172 responses were obtained on a knowledge test. The attitudes of 553 naval aviators were assessed using an attitudes questionnaire. The CRM mishap rate from 1997 until 2007 was evaluated. It was found that naval aviators appear to think that CRM training is useful, are generally knowledgeable of, and display positive attitudes towards, the concepts addressed in the training. However, there is a lack of evidence to support the view that CRM training is having an effect on the mishap rate. As the next generation of highly automated aircraft becomes part of naval aviation, there is a need to ensure that CRM training evolves to meet this new challenge.
Journal of Quantitative Analysis in Sports | 2016
Samuel E. Buttrey
This article describes a method for predicting the outcome of National Hockey League (NHL) games. We combine a model for goal scoring and yielding with one for penalty commission, in a Markov-type computation and a simulation model that produce predicted probabilities of victory for each team. Where these differ substantially from the market probabilities, we make "bets" according to a simple strategy. Our return on investment is both positive and statistically significant.
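A greatly simplified sketch of the prediction-and-betting logic is shown below, with invented scoring rates and thresholds, and a plain Monte Carlo simulation standing in for the article's Markov-type computation and penalty model.

```python
import numpy as np

rng = np.random.default_rng(7)

def win_probability(lam_home, lam_away, n_sims=100_000):
    """Probability the home team wins, with a coin flip for ties as a
    stand-in for overtime/shootout."""
    home = rng.poisson(lam_home, n_sims)
    away = rng.poisson(lam_away, n_sims)
    ties = home == away
    return (home > away).mean() + 0.5 * ties.mean()

p_model = win_probability(3.1, 2.6)          # hypothetical per-game scoring rates
p_market = 0.52                              # implied by the money line (hypothetical)
edge = 0.05                                  # minimum discrepancy before betting

if p_model - p_market > edge:
    print(f"bet home team: model {p_model:.3f} vs market {p_market:.3f}")
elif p_market - p_model > edge:
    print(f"bet away team: model {p_model:.3f} vs market {p_market:.3f}")
else:
    print("no bet")
```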
Chance | 2018
Samuel E. Buttrey; Lyn R. Whitaker; Jonathan K. Alt
Recruiting is an expensive, ongoing challenge for all of the U.S. military services. More than a quarter of the U.S. Department of Defense (DoD) budget—around $150 billion per year—goes to fund the pay and benefits of active and retired members of the military. Even modest gains in recruiting efficiency can translate to large dollar savings.
Journal of Defense Analytics and Logistics | 2017
Adam Haupt; Jonathan K. Alt; Samuel E. Buttrey
Developments in the Statistical Modeling of Military Recruiting