
Publications


Featured research published by David M. Allen.


Technometrics | 1974

The Relationship Between Variable Selection and Data Agumentation and a Method for Prediction

David M. Allen

We show that data augmentation provides a rather general formulation for the study of biased prediction techniques using multiple linear regression. Variable selection is a limiting case, and Ridge regression is a special case of data augmentation. We propose a way to obtain predictors given a credible criterion of good prediction.
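
As a concrete illustration of the data-augmentation view described in this abstract, the sketch below shows that appending sqrt(k)·I rows to X and zeros to y makes ordinary least squares on the augmented data reproduce the ridge estimator. This is a standard identity with illustrative variable names, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 50, 3, 2.0            # k is the ridge (biasing) parameter

X = rng.normal(size=(n, p))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=n)

# Ridge estimator computed directly: (X'X + k I)^{-1} X'y
beta_ridge = np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

# Same estimator via data augmentation: append sqrt(k)*I rows to X and zeros to y,
# then run ordinary least squares on the augmented data.
X_aug = np.vstack([X, np.sqrt(k) * np.eye(p)])
y_aug = np.concatenate([y, np.zeros(p)])
beta_aug, *_ = np.linalg.lstsq(X_aug, y_aug, rcond=None)

print(np.allclose(beta_ridge, beta_aug))   # True: the two estimators coincide
```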


Technometrics | 1971

Mean Square Error of Prediction as a Criterion for Selecting Variables

David M. Allen

The mean square error of prediction is proposed as a criterion for selecting variables. This criterion utilizes the values of the predictor variables associated with the future observation and the magnitude of the estimated variance. Mean square error is believed to be more meaningful than the commonly used criterion, the residual sum of squares.
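
A hedged sketch of how such a criterion might be computed under standard linear-model assumptions: for each candidate subset, the residual mean square times one plus the prediction variance at the future point. The formula and names below are illustrative, ignore the bias term that arises when a subset model is misspecified, and are not taken from the paper.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n, p = 40, 4
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, 0.0, -1.5, 0.0])      # only the first and third predictors matter
y = X @ beta_true + rng.normal(size=n)
x_future = rng.normal(size=p)                    # predictor values of the future observation

def msep_estimate(cols):
    """Estimated mean square error of predicting a future observation
    from the submodel using the columns in `cols` (intercept omitted for brevity)."""
    Xs = X[:, list(cols)]
    xf = x_future[list(cols)]
    beta_hat, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    resid = y - Xs @ beta_hat
    s2 = resid @ resid / (n - len(cols))         # residual mean square
    pred_var = xf @ np.linalg.solve(Xs.T @ Xs, xf)
    return s2 * (1.0 + pred_var)                 # new-error variance + prediction variance

subsets = [c for r in range(1, p + 1) for c in combinations(range(p), r)]
best = min(subsets, key=msep_estimate)
print("subset minimizing estimated MSEP:", best)
```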


Computational Statistics & Data Analysis | 2001

Determining the number of components in mixtures of linear models

Dollena S. Hawkins; David M. Allen; Arnold J. Stromberg

Methods for determining the number of components in normal mixtures are extended to mixtures of linear regression models. This simulation study evaluates the influence of component separation and mixing proportions on the performance of 22 approximations of measures for determining the number of components in mixtures of linear regression models. Estimated measures based on the maximized log likelihood of the observed data are compared to estimated measures based on the maximized log likelihood of the complete data. Approximations of measures which previously required the convergence rate of the EM algorithm are presented which have no such restriction for their implementation. As an alternative to the EM algorithm, which is known to be sensitive to starting values, differential evolution was the implemented optimization algorithm. This study is further set apart in that the performances of the approximated component measures are explored without assuming the mixing proportions to be equal or assuming equal component variances. Based on the results of the k = 1 and 2 component model simulations, the minimum description length, MDL, is the recommended criterion for choosing between one and two component mixtures of linear regression models.
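
For orientation, the sketch below fits one- and two-component mixtures of linear regressions and compares an MDL-type criterion (negative observed-data log likelihood plus half the parameter count times log n). It uses EM for brevity, whereas the paper used differential evolution precisely because EM is sensitive to starting values, and the paper's approximated measures are more refined than this plain penalty; everything here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated data: two regression lines with unequal mixing proportions and variances
n = 300
x = rng.uniform(-2, 2, size=n)
z = rng.random(n) < 0.7
y = np.where(z, 1.0 + 2.0 * x, -1.0 - 1.5 * x) + rng.normal(scale=0.5, size=n)
X = np.column_stack([np.ones(n), x])

def component_densities(pi, beta, sigma2):
    r = y[:, None] - X @ beta.T                        # residuals, shape (n, K)
    return pi * np.exp(-0.5 * r**2 / sigma2) / np.sqrt(2 * np.pi * sigma2)

def mdl_for(K, n_iter=200):
    """Fit a K-component mixture of linear regressions by EM and return an
    MDL-type criterion based on the observed-data log likelihood."""
    pi = np.full(K, 1.0 / K)
    beta = rng.normal(size=(K, X.shape[1]))
    sigma2 = np.full(K, y.var())
    for _ in range(n_iter):
        dens = component_densities(pi, beta, sigma2)
        resp = dens / dens.sum(axis=1, keepdims=True)  # E-step responsibilities
        for k in range(K):                             # M-step: weighted least squares
            w = resp[:, k]
            beta[k] = np.linalg.solve((X * w[:, None]).T @ X, (X * w[:, None]).T @ y)
            sigma2[k] = (w * (y - X @ beta[k])**2).sum() / w.sum()
            pi[k] = w.mean()
    loglik = np.log(component_densities(pi, beta, sigma2).sum(axis=1)).sum()
    d = K * (X.shape[1] + 1) + (K - 1)                 # coefficients + variances + proportions
    return -loglik + 0.5 * d * np.log(n)               # MDL penalizes model complexity

for K in (1, 2):
    print(f"{K} component(s): MDL = {mdl_for(K):.1f}")
```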


Journal of Industrial Microbiology & Biotechnology | 1992

Efficacy of commercial inocula in enhancing biodegradation of weathered crude oil contaminating a Prince William Sound beach

Albert D. Venosa; John R. Haines; David M. Allen

In a laboratory study evaluating the effectiveness of 10 commercial products in stimulating enhanced biodegradation of Alaska North Slope crude oil, two of the products provided significantly greater alkane degradation in closed flasks than indigenous Alaskan bacterial populations supplied only with excess nutrients. These two products, which were microbial in nature, were then taken to a Prince William Sound beach to determine if similar enhancements were achievable in the field. A randomized complete block experiment was designed in which four small plots consisting of a no-nutrient control, a mineral nutrient plot, and two plots receiving mineral nutrients plus the two products were laid out in random order on a beach in Prince William Sound that had been contaminated 16 months earlier by the Exxon Valdez spill. These four plots comprised a ‘block’ of treatments, and each plot was monitored for changes in oil residue weight and alkane hydrocarbon profile. The results indicated no significant differences (P<0.05) among the four treatments in the 27-day time period of the experiment. A statistical power analysis, however, revealed that the variability in the data prevented a firm conclusion in this regard. Failure to detect significant differences was attributed not only to variability in the data but also to the highly weathered nature of the oil and the lack of sufficient time for biodegradation to take place.
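
To illustrate why the power analysis mattered, the hedged sketch below computes approximate power for a four-treatment comparison with a small number of plots, ignoring the blocking structure and using a one-way ANOVA approximation. The effect sizes and sample counts are hypothetical, not values from the study.

```python
from statsmodels.stats.power import FTestAnovaPower

# Hypothetical numbers for illustration only (not from the study):
# four treatments, a handful of sampling units per treatment, alpha = 0.05.
k_groups = 4
n_total = 16          # e.g. 4 replicate plots per treatment
alpha = 0.05

power_calc = FTestAnovaPower()
for f in (0.1, 0.25, 0.4):                 # Cohen's f: small, medium, large effect
    pw = power_calc.power(effect_size=f, nobs=n_total, alpha=alpha, k_groups=k_groups)
    print(f"effect size f = {f:.2f}: power = {pw:.2f}")
```

With so few experimental units, even a large treatment effect yields low power, which is consistent with the abstract's conclusion that the data's variability prevented a firm conclusion.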


Biometrics | 1983

Parameter Estimation for Nonlinear Models with Emphasis on Compartmental Models

David M. Allen

Techniques are described for the estimation of parameters in nonlinear models. When implemented in computer programs, these techniques will reduce programming effort, facilitate inference about implicit functions of parameters, and allow a more general variance-covariance structure. The techniques seem particularly useful for the analysis of data from stochastic compartmental models. They are illustrated by an example.
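
A minimal example of the kind of problem described, not the techniques of the paper: fitting a simple compartmental-style model by nonlinear least squares and obtaining a delta-method standard error for an implicit function of the parameters (the elimination half-life). Parameter names and values are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

# One-compartment model with first-order absorption and elimination
# (a common compartmental model; parameters are illustrative).
def conc(t, A, ka, ke):
    return A * (np.exp(-ke * t) - np.exp(-ka * t))

rng = np.random.default_rng(3)
t = np.linspace(0.25, 12, 20)
true = (10.0, 1.8, 0.25)
y = conc(t, *true) + rng.normal(scale=0.3, size=t.size)

# Nonlinear least squares fit
p_hat, p_cov = curve_fit(conc, t, y, p0=(5.0, 1.0, 0.1))
A, ka, ke = p_hat

# Inference on an implicit function of the parameters: elimination half-life
# t_half = ln 2 / ke, with a delta-method standard error.
t_half = np.log(2) / ke
grad = np.array([0.0, 0.0, -np.log(2) / ke**2])
se_t_half = np.sqrt(grad @ p_cov @ grad)
print(f"half-life = {t_half:.2f} +/- {se_t_half:.2f}")
```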


The American Statistician | 1985

Orthogonalization-Triangularization Methods in Statistical Computations

Del T. Scott; G. Rex Bryce; David M. Allen

Procedures are presented for reducing a data matrix to triangular form by using orthogonal transformations. It is shown how an analysis of variance can be constructed from the triangular reduction of the data matrix. Procedures for calculating sums of squares, degrees of freedom, and expected mean squares are presented. It is demonstrated that all statistics needed for inference on linear combinations of parameters of a linear model may be calculated from the triangular reduction of the data matrix.
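
A minimal numpy sketch of the idea, not the authors' procedures: reduce the design matrix to triangular form with an orthogonal (QR) decomposition, recover the coefficients by back-substitution, and read sequential sums of squares and the residual sum of squares off the rotated response.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 30
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
X = np.column_stack([np.ones(n), x1, x2])          # intercept, then two effects
y = 1.0 + 0.8 * x1 - 0.5 * x2 + rng.normal(scale=0.7, size=n)

# Orthogonal (Householder) reduction of the design matrix to triangular form
Q, R = np.linalg.qr(X, mode="complete")
qty = Q.T @ y                                      # rotated response

p = X.shape[1]
beta_hat = np.linalg.solve(R[:p, :p], qty[:p])     # coefficients by back-substitution

# Sequential sums of squares come from the squared elements of Q'y:
# one degree of freedom per column of X, residual SS from the remaining elements.
seq_ss = qty[:p] ** 2
rss = (qty[p:] ** 2).sum()
print("sequential SS:", np.round(seq_ss, 3))
print("residual SS :", round(rss, 3), "on", n - p, "df")
```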


Communications in Statistics - Simulation and Computation | 1980

Computation of the estimated parameters and Wald statistic for the generalized growth curve model

Neil C. Schwertman; John R. Huseby; David M. Allen

Multivariate analysis is difficult when there are missing observations in the response vectors. Kleinbaum (1973) proposed a Wald statistic useful in the analysis of incomplete multivariate data. SUBROUTINE COEF calculates the estimated parameter matrix ξ in the generalization of the Potthoff-Roy (1964) growth curve model proposed by Kleinbaum (1973). SUBROUTINE WALD calculates the Wald statistic for hypotheses of the form H0: HξD = 0 as proposed by Kleinbaum (1973).
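
For context, the sketch below works the complete-data Potthoff-Roy growth curve model: an unweighted estimator of ξ (the role of SUBROUTINE COEF) and a Wald statistic for H0: HξD = 0 (the role of SUBROUTINE WALD). It does not reproduce the Fortran subroutines or Kleinbaum's handling of incomplete response vectors, and the designs, H, and D are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Complete-data Potthoff-Roy growth curve model  Y = A xi P + E
# (a simplification: the generalization also handles incomplete response vectors).
n, q = 40, 4                                                       # subjects, time points
A = np.column_stack([np.ones(n), rng.integers(0, 2, n)])           # between-subject design (m = 2)
P = np.vstack([np.ones(q), np.arange(q)])                          # within-subject design (k = 2)
xi_true = np.array([[10.0, 1.0],
                    [ 2.0, 0.5]])                                  # m x k parameter matrix
Sigma = 0.5 * np.eye(q) + 0.5                                      # compound-symmetric errors
Y = A @ xi_true @ P + rng.multivariate_normal(np.zeros(q), Sigma, size=n)

# Unweighted estimator of xi
G = P.T @ np.linalg.inv(P @ P.T)                                   # q x k
xi_hat = np.linalg.solve(A.T @ A, A.T @ Y) @ G

# Wald statistic for H0: H xi D = 0; here H picks the group row and D the slope column.
H = np.array([[0.0, 1.0]])                                         # 1 x m
D = np.array([[0.0], [1.0]])                                       # k x 1
resid = Y - A @ xi_hat @ P
Sigma_hat = resid.T @ resid / (n - A.shape[1])

est = H @ xi_hat @ D                                               # estimated H xi D
cov = np.kron(D.T @ G.T @ Sigma_hat @ G @ D,
              H @ np.linalg.inv(A.T @ A) @ H.T)                    # cov of vec(H xi_hat D)
W = est.ravel(order="F") @ np.linalg.solve(cov, est.ravel(order="F"))
print(f"Wald statistic = {W:.2f} on {H.shape[0] * D.shape[1]} df")
```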


Communications in Statistics - Simulation and Computation | 1983

A computational procedure for combining data and prior information

David M. Allen; David C. Jordan

An algorithm for combining data information and prior information, for the prediction of specific values, is presented. Important features include the concept of the fraction of information provided by the data and a method to choose weights to achieve a desired fraction of information.
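
The construction below is only one plausible reading of the idea, assumed for illustration: prior information is encoded as weighted pseudo-observations, the "fraction of information provided by the data" is taken to be a trace ratio of information matrices, and the weight is chosen by root-finding to hit a target fraction. The paper's own algorithm may differ.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(6)
n, p = 25, 3
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, 0.5, -0.8]) + rng.normal(size=n)

# Prior information expressed as pseudo-observations (hypothetical values):
# a prior belief that the coefficients are near beta0, encoded row by row.
beta0 = np.array([0.8, 0.4, -1.0])
X0 = np.eye(p)
y0 = X0 @ beta0

def combined_estimate(w):
    """Weighted combination of data and prior pseudo-observations."""
    M = X.T @ X + w * X0.T @ X0
    return np.linalg.solve(M, X.T @ y + w * X0.T @ y0)

def data_fraction(w):
    """Fraction of the combined information attributable to the data
    (trace of the data share of the combined information matrix)."""
    M = X.T @ X + w * X0.T @ X0
    return np.trace(np.linalg.solve(M, X.T @ X)) / p

# Choose the weight so the data supply a desired fraction of the information.
target = 0.75
w_star = brentq(lambda w: data_fraction(w) - target, 1e-8, 1e6)
print("weight:", round(w_star, 3), "estimate:", np.round(combined_estimate(w_star), 3))
```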


Journal of Water Pollution Control Federation | 1984

Disinfection of secondary effluent with ozone/UV

Albert D. Venosa; Albert C. Petrasek; Donald Brown; Harold L. Sparks; David M. Allen


Biometrics | 1982

The use of prior information for prediction

David M. Allen; David C. Jordan

Collaboration


Dive into David M. Allen's collaborations.

Top Co-Authors

Albert D. Venosa, United States Environmental Protection Agency

Del T. Scott, Brigham Young University

G. Rex Bryce, Brigham Young University

John R. Haines, United States Environmental Protection Agency

John R. Huseby, California State University

Neil C. Schwertman, California State University