Publication


Featured research published by Brett Presnell.


Journal of Computational and Graphical Statistics | 2000

On the LASSO and its Dual

M. R. Osborne; Brett Presnell; Berwin A. Turlach

Proposed by Tibshirani, the least absolute shrinkage and selection operator (LASSO) estimates a vector of regression coefficients by minimizing the residual sum of squares subject to a constraint on the l1-norm of the coefficient vector. The LASSO estimator typically has one or more zero elements and thus shares characteristics of both shrinkage estimation and variable selection. In this article we treat the LASSO as a convex programming problem and derive its dual. Consideration of the primal and dual problems together leads to important new insights into the characteristics of the LASSO estimator and to an improved method for estimating its covariance matrix. Using these results we also develop an efficient algorithm for computing LASSO estimates which is usable even in cases where the number of regressors exceeds the number of observations. An S-Plus library based on this algorithm is available from StatLib.
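The paper's own algorithm works through the dual problem (and ships as an S-Plus library); as a rough illustration of the optimization any LASSO solver must perform, here is a minimal coordinate-descent sketch for the equivalent Lagrangian form. This is not the authors' dual method, and the function name and settings are illustrative.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate descent for the Lagrangian form of the LASSO:
    minimize 0.5 * ||y - X b||^2 + lam * ||b||_1.
    Equivalent to the constrained form in the abstract for a suitable
    correspondence between lam and the l1 bound."""
    n, p = X.shape
    b = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0)      # per-coordinate curvatures
    r = y - X @ b                      # current residual
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * b[j]        # remove coordinate j from the fit
            rho = X[:, j] @ r
            # soft-thresholding gives the exact one-dimensional minimizer
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_ss[j]
            r -= X[:, j] * b[j]
    return b

# toy usage: works even with p > n, as the abstract emphasizes
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 50))
beta = np.zeros(50)
beta[:3] = [2.0, -1.5, 1.0]
y = X @ beta + 0.1 * rng.standard_normal(20)
print(np.nonzero(lasso_cd(X, y, lam=5.0))[0])   # typically a sparse support
```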


Technometrics | 2001

A Functional Data-Analytic Approach to Signal Discrimination

Peter Hall; Donald Poskitt; Brett Presnell

Motivated by specific problems involving radar-range profiles, we suggest techniques for real-time discrimination in the context of signal analysis. The key to our approach is to regard the signals as curves in the continuum and employ a functional data-analytic (FDA) method for dimension reduction, based on the FDA technique for principal coordinates analysis. This has the advantage, relative to competing methods such as canonical variates analysis, of providing a signal approximation that is best possible, in an L2 sense, for a given dimension. As a result, it produces particularly good discrimination. We explore the use of both nonparametric and Gaussian-based discriminators applied to the dimension-reduced data.
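As a loose sketch of the pipeline the abstract describes (not the paper's exact FDA principal coordinates machinery): discretize the curves, take the L2-optimal basis from an SVD of the centered data, and apply a Gaussian discriminator to the reduced scores. All function names and the toy data are illustrative.

```python
import numpy as np

def fda_scores(curves, k):
    """Scores of discretized curves (rows) on their first k principal
    components: a discretized stand-in for the FDA dimension reduction
    in the abstract, L2-optimal for a given dimension."""
    mean = curves.mean(axis=0)
    _, _, vt = np.linalg.svd(curves - mean, full_matrices=False)
    return (curves - mean) @ vt[:k].T

def gaussian_classifier(scores, labels):
    """Fit one Gaussian per class in score space; classify a new score
    vector by the larger log-likelihood (a simple Gaussian-based rule)."""
    params = {c: (scores[labels == c].mean(axis=0),
                  np.cov(scores[labels == c].T))
              for c in np.unique(labels)}
    def loglik(s, c):
        m, S = params[c]
        d = s - m
        return -0.5 * (d @ np.linalg.solve(S, d) + np.log(np.linalg.det(S)))
    return lambda s: max(params, key=lambda c: loglik(s, c))

# illustrative two-class problem: same oscillation, different trend
rng = np.random.default_rng(9)
t = np.linspace(0, 1, 100)
labels = np.repeat([0, 1], 50)
curves = np.array([np.sin(2 * np.pi * t) + c * t
                   + 0.2 * rng.standard_normal(t.size) for c in labels])
scores = fda_scores(curves, k=3)
classify = gaussian_classifier(scores, labels)
print(classify(scores[0]), classify(scores[-1]))   # expect 0 and 1
```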


Journal of the Royal Statistical Society: Series B (Methodological) | 1999

Intentionally biased bootstrap methods

Peter Hall; Brett Presnell

A class of weighted bootstrap techniques, called biased bootstrap or b-bootstrap methods, is introduced. It is motivated by the need to adjust empirical methods, such as the ‘uniform’ bootstrap, in a surgical way to alter some of their features while leaving others unchanged. Depending on the nature of the adjustment, the b-bootstrap can be used to reduce bias, to reduce variance, or to render some characteristic equal to a predetermined quantity. Examples of the last application include a b-bootstrap approach to hypothesis testing in nonparametric contexts, where the b-bootstrap enables simulation ‘under the null hypothesis’, even when the hypothesis is false, and a b-bootstrap competitor to Tibshirani's variance stabilization method. An example of the bias reduction application is adjustment of Nadaraya–Watson kernel estimators to make them competitive with local linear smoothing. Other applications include density estimation under constraints, outlier trimming, sensitivity analysis, skewness or kurtosis reduction and shrinkage.
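To make this concrete, here is a minimal sketch of one application named above: tilting the resampling weights so the weighted mean equals a predetermined (null) value, with weights chosen to minimize Kullback-Leibler distance from the uniform bootstrap, which leads to exponential tilting. The paper's b-bootstrap is far more general; `tilted_weights` and the settings are illustrative.

```python
import numpy as np
from scipy.optimize import brentq

def tilted_weights(x, mu0):
    """Resampling weights p_i proportional to exp(t * x_i), with t chosen
    so that sum(p_i * x_i) = mu0.  These are the minimum Kullback-Leibler
    weights subject to the mean constraint, one instance of the
    b-bootstrap adjustment described in the abstract."""
    def tilted_mean(t):
        w = np.exp(t * (x - x.mean()))   # centered for numerical stability
        p = w / w.sum()
        return p @ x - mu0
    t = brentq(tilted_mean, -50, 50)
    w = np.exp(t * (x - x.mean()))
    return w / w.sum()

# b-bootstrap resample drawn 'under the null hypothesis' mean mu0 = 0
rng = np.random.default_rng(1)
x = rng.standard_normal(100) + 0.3       # sample whose mean is near 0.3
p = tilted_weights(x, mu0=0.0)
resample = rng.choice(x, size=x.size, p=p)
print(x.mean(), p @ x)                   # weighted mean is 0 by construction
```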


Journal of the American Statistical Association | 1998

Projected multivariate linear models for directional data

Brett Presnell; Scott P. Morrison; Ramon C. Littell

We introduce the spherically projected multivariate linear model for directional data. This model treats directional observations as projections onto the unit sphere of unobserved responses from a multivariate linear model. Focusing on the important case of circular data, we show that maximum likelihood estimates for the model are readily computed using iterative methods, in sharp contrast with competing approaches. Examples are given to demonstrate the resulting methodology in realistic applications.
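As a minimal sketch of the intercept-only circular case, assuming the standard projected-normal density for the angle of a bivariate normal with identity covariance (the paper's full model replaces the mean vector with a linear function of covariates, and its parameterization may differ in detail), the likelihood can be maximized by a generic iterative optimizer:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def pn_loglik(mu, theta):
    """Log-likelihood of angles theta under the projected normal
    distribution: the direction of X ~ N2(mu, I) projected onto the
    unit circle.  With t = mu . u(theta), the density is
    (1/(2*pi)) * exp(-|mu|^2/2) * (1 + t * Phi(t) / phi(t))."""
    u = np.column_stack([np.cos(theta), np.sin(theta)])
    t = u @ mu                               # signed projection of mu
    dens = (np.exp(-mu @ mu / 2) / (2 * np.pi)
            * (1 + t * norm.cdf(t) / norm.pdf(t)))
    return np.log(dens).sum()

# maximum likelihood fit via a generic iterative method
rng = np.random.default_rng(2)
xy = rng.standard_normal((200, 2)) + np.array([1.5, 0.5])
theta = np.arctan2(xy[:, 1], xy[:, 0])       # observed directions only
fit = minimize(lambda m: -pn_loglik(m, theta), x0=np.zeros(2))
print(fit.x)                                 # should be near (1.5, 0.5)
```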


Journal of Nonparametric Statistics | 1999

U-Statistics and Imperfect Ranking in Ranked Set Sampling

Brett Presnell; Lora L. Bohn

Ranked set sampling has attracted considerable attention as an efficient sampling design, particularly for environmental and ecological studies. A number of authors have noted a gain in efficiency over ordinary random sampling when specific estimators and tests of hypotheses are applied to ranked set sample data. We generalize such results by deriving the asymptotic distribution for random sample U-statistics when applied to ranked set sample data. Our results show that the ranked set sample procedure is asymptotically at least as efficient as the random sample procedure, regardless of the accuracy of judgement ranking. Some errors in the ranked set sampling literature are also revealed, and counterexamples provided. Finally, application of majorization theory to these results shows when perfect ranking can be expected to yield greater efficiency than imperfect ranking.
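A small simulation illustrates the claim for the simplest U-statistic, the sample mean: ranked set sampling remains at least as efficient as simple random sampling even when judgement ranking is imperfect. Here ranking errors are induced through a noisy concomitant; all settings are illustrative.

```python
import numpy as np

def rss_sample(rng, n_cycles, k, rank_noise=0.0):
    """One ranked set sample: in each cycle, draw k sets of k units,
    rank each set (imperfectly, via a noisy concomitant), and measure
    the unit with judgement rank r in the r-th set."""
    out = []
    for _ in range(n_cycles):
        for r in range(k):
            x = rng.standard_normal(k)
            noisy = x + rank_noise * rng.standard_normal(k)  # judgement ranking
            out.append(x[np.argsort(noisy)[r]])
    return np.array(out)

rng = np.random.default_rng(3)
k, n_cycles, reps = 3, 10, 4000
n = k * n_cycles
srs_means = [rng.standard_normal(n).mean() for _ in range(reps)]
rss_means = [rss_sample(rng, n_cycles, k, rank_noise=0.5).mean()
             for _ in range(reps)]
# RSS variance is no larger than SRS variance, even with ranking errors
print(np.var(srs_means), np.var(rss_means))
```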


Journal of Computational and Graphical Statistics | 1999

Density estimation under constraints

Peter Hall; Brett Presnell

We suggest a general method for tackling problems of density estimation under constraints. It is, in effect, a particular form of the weighted bootstrap, in which resampling weights are chosen so as to minimize distance from the empirical or uniform bootstrap distribution subject to the constraints being satisfied. A number of constraints are treated as examples. They include conditions on moments, quantiles, and entropy, the latter as a device for imposing qualitative conditions such as those of unimodality or “interestingness.” For example, without altering the data or the amount of smoothing, we may construct a density estimator that enjoys the same mean, median, and quartiles as the data. Different measures of distance give rise to slightly different results.
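Here is a minimal sketch of one such constraint: a weighted Gaussian kernel estimator whose mean is forced to a target value, with weights minimizing Kullback-Leibler distance from the uniform bootstrap weights (exponential tilting). Because the Gaussian kernel is symmetric, the estimator's mean is exactly the weighted sample mean, so the constraint is linear in the weights. The paper treats many more constraints and distance measures; names and settings here are illustrative.

```python
import numpy as np
from scipy.optimize import brentq

def tilted(x, t):
    w = np.exp(t * (x - x.mean()))          # centered for stability
    return w / w.sum()

def constrained_kde(x, h, mean0):
    """Weighted Gaussian KDE whose mean is exactly mean0: the weights
    minimize Kullback-Leibler distance from uniform subject to the mean
    constraint.  Same data, same bandwidth, only the weights change."""
    t = brentq(lambda t: tilted(x, t) @ x - mean0, -50, 50)
    p = tilted(x, t)
    def fhat(grid):
        z = (grid[:, None] - x[None, :]) / h
        return (p * np.exp(-0.5 * z ** 2)).sum(axis=1) / (h * np.sqrt(2 * np.pi))
    return fhat

rng = np.random.default_rng(4)
x = rng.standard_normal(200) + 0.2
fhat = constrained_kde(x, h=0.4, mean0=0.0)
grid = np.linspace(-4.5, 4.5, 901)
dg = grid[1] - grid[0]
print((dg * grid * fhat(grid)).sum())       # numerical mean, near 0.0
```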


Journal of the Royal Statistical Society: Series B (Statistical Methodology) | 1999

Biased Bootstrap Methods for Reducing the Effects of Contamination

Peter Hall; Brett Presnell

Contamination of a sampled distribution, for example by a heavy‐tailed distribution, can degrade the performance of a statistical estimator. We suggest a general approach to alleviating this problem, using a version of the weighted bootstrap. The idea is to ‘tilt’ away from the contaminated distribution by a given (but arbitrary) amount, in a direction that minimizes a measure of the new distribution's dispersion. This theoretical proposal has a simple empirical version, which results in each data value being assigned a weight according to an assessment of its influence on dispersion. Importantly, distance can be measured directly in terms of the likely level of contamination, without reference to an empirical measure of scale. This makes the procedure particularly attractive for use in multivariate problems. It has several forms, depending on the definitions taken for dispersion and for distance between distributions. Examples of dispersion measures include variance and generalizations based on high order moments. Practicable measures of the distance between distributions may be based on power divergence, which includes Hellinger and Kullback–Leibler distances. The resulting location estimator has a smooth, redescending influence curve and appears to avoid computational difficulties that are typically associated with redescending estimators. Its breakdown point can be located at any desired value ε ∈ (0, ½) simply by ‘trimming’ to a known distance (depending only on ε and the choice of distance measure) from the empirical distribution. The estimator has an affine equivariant multivariate form. Further, the general method is applicable to a range of statistical problems, including regression.
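A rough sketch of the empirical version, using variance as the dispersion measure, Kullback-Leibler distance as the power-divergence member, and a generic constrained optimizer in place of whatever special-purpose algorithm the paper develops: minimize the weighted variance subject to a fixed distance from the uniform weights, then report the weighted mean as the location estimate. Names and settings are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def tilted_location(x, rho):
    """Weighted-bootstrap location estimate: choose weights p minimizing
    the weighted variance subject to staying within Kullback-Leibler
    distance rho of the uniform weights (a fixed, scale-free amount of
    tilting, echoing the abstract).  KL(p, uniform) = sum p_i log(n p_i)."""
    n = x.size
    def wvar(p):
        m = p @ x
        return p @ (x - m) ** 2
    cons = [{'type': 'eq', 'fun': lambda p: p.sum() - 1},
            {'type': 'ineq', 'fun': lambda p: rho - p @ np.log(n * p + 1e-12)}]
    res = minimize(wvar, np.full(n, 1 / n), bounds=[(1e-9, 1)] * n,
                   constraints=cons, method='SLSQP')
    return res.x @ x, res.x

# heavy-tailed contamination pulls the mean; the tilted estimate resists it
rng = np.random.default_rng(5)
x = np.concatenate([rng.standard_normal(45), 10 + rng.standard_normal(5)])
est, p = tilted_location(x, rho=0.2)
print(x.mean(), est)    # contaminated mean vs. tilted location estimate
```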


Journal of the American Statistical Association | 2004

The IOS Test for Model Misspecification

Brett Presnell; Dennis D. Boos

A new test of model misspecification is proposed, based on the ratio of in-sample and out-of-sample likelihoods. The test is broadly applicable and, in simple problems, approximates well-known, intuitive methods. Using jackknife influence curve approximations, it is shown that the test statistic can be viewed asymptotically as a multiplicative contrast between two estimates of the information matrix, both of which are consistent under correct model specification. This approximation is used to show that the statistic is asymptotically normally distributed, although it is suggested that p values be computed using the parametric bootstrap. The resulting methodology is demonstrated with various examples and simulations involving both discrete and continuous data.
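A minimal sketch for a normal working model, assuming one natural form of the statistic (in-sample minus leave-one-out out-of-sample log likelihood; the paper's exact normalization and jackknife approximations may differ), with the parametric-bootstrap p-value the abstract recommends:

```python
import numpy as np
from scipy.stats import norm

def ios_stat(x):
    """In-sample minus out-of-sample log likelihood: each term compares
    log f(x_i) under the full-sample MLE with log f(x_i) under the MLE
    computed with x_i left out.  Misspecification inflates the statistic."""
    total = 0.0
    for i in range(x.size):
        xi, rest = x[i], np.delete(x, i)
        total += (norm.logpdf(xi, x.mean(), x.std())
                  - norm.logpdf(xi, rest.mean(), rest.std()))
    return total

def ios_pvalue(x, n_boot=500, rng=None):
    """Parametric-bootstrap p-value: simulate from the fitted normal
    model, as the abstract suggests, rather than relying on the
    asymptotic normal approximation."""
    if rng is None:
        rng = np.random.default_rng()
    obs = ios_stat(x)
    boot = [ios_stat(rng.normal(x.mean(), x.std(), x.size))
            for _ in range(n_boot)]
    return np.mean([b >= obs for b in boot])

rng = np.random.default_rng(6)
print(ios_pvalue(rng.standard_normal(50), rng=rng))    # well specified
print(ios_pvalue(rng.exponential(size=50), rng=rng))   # misspecified
```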


Journal of the American Statistical Association | 1994

Testing the Minimal Repair Assumption in an Imperfect Repair Model

Brett Presnell; Myles Hollander; Jayaram Sethuraman

Models assuming minimal repair specify that on repair, a failed system is returned to the working state, while the effective age of the system is held constant; that is, the distribution of the time until the next failure of the repaired system is the same as for a system of the same age that has not yet failed. These models are common in the literature of operations research and reliability, and many probabilistic results as well as inferential procedures depend on the minimal repair assumption. We propose two nonparametric tests of the assumption that imperfectly repaired systems are minimally repaired in some models. The large sample theory for these tests is derived from the asymptotic joint distribution of a survival function estimator and the ordinary empirical survival function based on the initial failure times of new or perfectly repaired systems. Simulation results are also provided for the null hypothesis case and under other alternatives.
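The minimal repair assumption itself is easy to encode in a simulation: after a failure at age a, draw the next failure age from the conditional (residual-life) distribution at age a, exactly as defined above. This sketch uses Weibull lifetimes and checks the standard consequence that minimal-repair failures form a nonhomogeneous Poisson process whose mean function is the Weibull cumulative hazard; it is an illustration of the model, not the paper's test procedure.

```python
import numpy as np

def minimal_repair_failures(shape, scale, horizon, rng):
    """Successive failure ages under minimal repair: after a failure at
    age a, the next failure age a' satisfies
    P(T > a' | T > a) = U, i.e. H(a') = H(a) - log(U)
    for the Weibull cumulative hazard H(t) = (t / scale) ** shape."""
    ages, a = [], 0.0
    while True:
        u = rng.uniform()
        a = scale * ((a / scale) ** shape - np.log(u)) ** (1 / shape)
        if a > horizon:
            return np.array(ages)
        ages.append(a)

# expected failure count by time t equals the cumulative hazard H(t)
rng = np.random.default_rng(7)
counts = [minimal_repair_failures(2.0, 1.0, horizon=3.0, rng=rng).size
          for _ in range(2000)]
print(np.mean(counts), (3.0 / 1.0) ** 2.0)    # both near 9
```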


Journal of Computational and Graphical Statistics | 1998

Allocation of Monte Carlo Resources for the Iterated Bootstrap

James G. Booth; Brett Presnell

Use of the iterated bootstrap is often recommended for calibration of bootstrap intervals, using either direct calibration of the nominal coverage probability (prepivoting) or additive correction of the interval endpoints. Monte Carlo resampling is a straightforward but computationally expensive way to approximate the endpoints of bootstrap intervals. Booth and Hall examined the case of coverage calibration of Efron's percentile interval and developed an asymptotic approximation for the error in the Monte Carlo approximation of the endpoints. Their results can be used to determine an approximately optimal allocation of resamples to the first and second levels of the bootstrap. An extension of this result to the case of the additively corrected percentile interval shows that the bias of the Monte Carlo approximation to the additively corrected endpoints is of smaller order than in the case of direct coverage calibration, and the asymptotic variance is the same. Because the asymptotic bias is con...
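For orientation, here is the Monte Carlo structure whose cost these allocation results govern: a double bootstrap estimating the true coverage of a nominal percentile interval, at a total cost of roughly B1 * B2 resamples. The asymptotically optimal split of B1 and B2 is not reproduced here; the sketch only shows what is being allocated, with illustrative names and settings.

```python
import numpy as np

def percentile_interval(boot_stats, alpha):
    return np.quantile(boot_stats, [alpha / 2, 1 - alpha / 2])

def double_bootstrap_coverage(x, stat, alpha, B1, B2, rng):
    """Estimate the true coverage of the nominal (1 - alpha) percentile
    interval: for each of B1 first-level resamples, build a percentile
    interval from B2 second-level resamples and check whether it covers
    stat(x).  Total Monte Carlo cost is about B1 * B2 resamples."""
    t0, n = stat(x), x.size
    hits = 0
    for _ in range(B1):
        x1 = rng.choice(x, n)                                 # first level
        inner = [stat(rng.choice(x1, n)) for _ in range(B2)]  # second level
        lo, hi = percentile_interval(np.array(inner), alpha)
        hits += (lo <= t0 <= hi)
    # calibration then adjusts alpha until this estimate matches 1 - alpha
    return hits / B1

rng = np.random.default_rng(8)
x = rng.exponential(size=30)
print(double_bootstrap_coverage(x, np.mean, alpha=0.10,
                                B1=200, B2=200, rng=rng))
```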

Collaboration


Dive into Brett Presnell's collaborations.

Top Co-Authors

Peter Hall
University of Melbourne

Berwin A. Turlach
University of Western Australia

M. R. Osborne
Australian National University

Dennis D. Boos
North Carolina State University

Pavlina Rumcheva
Centers for Disease Control and Prevention