Russell V. Lenth
University of Iowa
Publications
Featured research published by Russell V. Lenth.
The American Statistician | 2001
Russell V. Lenth
Sample size determination is often an important step in planning a statistical study—and it is usually a difficult one. Among the important hurdles to be surpassed, one must obtain an estimate of one or more error variances and specify an effect size of importance. There is the temptation to take some shortcuts. This article offers some suggestions for successful and meaningful sample size determination. Also discussed is the possibility that sample size may not be the main issue, that the real goal is to design a high-quality study. Finally, criticism is made of some ill-advised shortcuts relating to power and sample size.
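The two ingredients the abstract highlights, an estimate of the error variance and an effect size of scientific importance, are exactly what the standard normal-approximation formula for a two-sample comparison consumes. The sketch below is a generic illustration of that textbook calculation, not code from the article; the function name and defaults are ours.

```python
import math
from scipy.stats import norm

def two_sample_n(delta, sigma, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided two-sample comparison.

    delta : smallest effect size of scientific importance
    sigma : assumed error standard deviation (often the hard part to supply)
    """
    z_a = norm.ppf(1 - alpha / 2)   # critical value for the test
    z_b = norm.ppf(power)           # quantile for the power requirement
    n = 2 * ((z_a + z_b) * sigma / delta) ** 2
    return math.ceil(n)

# e.g. to detect a difference of 5 units when sigma is about 10:
# two_sample_n(5, 10) -> 63 per group
```

Note that the calculation is only as good as the inputs: guessing sigma or letting software pick a default effect size is precisely the kind of shortcut the article warns against.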
Technometrics | 1989
Russell V. Lenth
Box and Meyer (1986) introduced a method for assessing the sizes of contrasts in unreplicated factorial and fractional factorial designs. This is a useful technique, and an associated graphical display popularly known as a Bayes plot makes it even more effective. This article presents a competing technique that is also effective and is computationally simple. An advantage of the new method is that the results are given in terms of the original units of measurement. This direct association with the data may make the analysis easier to explain.
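The "computationally simple" technique is what is now usually called Lenth's pseudo standard error (PSE). A minimal sketch of the calculation as it is commonly presented in the literature follows; the 2.5 trimming constant, the d = m/3 degrees of freedom, and the function name reflect that convention rather than anything quoted in the abstract.

```python
import numpy as np
from scipy import stats

def lenth_method(contrasts, alpha=0.05):
    """Pseudo standard error and margin of error for unreplicated contrasts."""
    c = np.abs(np.asarray(contrasts, dtype=float))
    s0 = 1.5 * np.median(c)                   # initial robust scale estimate
    pse = 1.5 * np.median(c[c < 2.5 * s0])    # trim likely-active contrasts
    d = len(c) / 3.0                          # approximate degrees of freedom
    me = stats.t.ppf(1 - alpha / 2, d) * pse  # margin of error
    return pse, me  # contrasts exceeding me are flagged as active
```

Because the PSE and the margin of error are in the same units as the contrasts, effects can be judged directly against them, which is the "original units of measurement" advantage the abstract mentions.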
Technometrics | 1981
Russell V. Lenth
If an airplane crashes, an emergency locator transmitter (ELT) is activated. The crash site can then be located by taking bearings on the ELT signal. The statistical problem consists of estimating a two-dimensional location parameter, where the data consist of directional bearings observed, with some error, from several known positions. We develop methods for estimating location for several variations of the basic problem: unidirectional and bidirectional observations, biased observations, and finally, techniques that are insensitive to outliers. These methods can easily be implemented on a portable microcomputer, and seem to perform quite well when applied to some actual data.
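For orientation, here is the basic non-robust least-squares version of the problem: each observed bearing defines a line through the observing station, and the location estimate minimizes the sum of squared perpendicular distances to those lines. This is an illustrative sketch only; it assumes bearings measured counterclockwise from the x-axis and omits the bias-corrected and outlier-resistant variants the article develops.

```python
import numpy as np

def locate(stations, bearings):
    """Least-squares location fix from bearings (radians, from the x-axis).

    stations : (k, 2) array of known observer positions
    bearings : (k,) array of observed directions toward the source
    """
    stations = np.asarray(stations, dtype=float)
    bearings = np.asarray(bearings, dtype=float)
    s, c = np.sin(bearings), np.cos(bearings)
    # signed perpendicular distance of (x, y) from bearing line i:
    #   -s[i] * x + c[i] * y - (-s[i] * x_i + c[i] * y_i)
    A = np.column_stack([-s, c])
    b = -s * stations[:, 0] + c * stations[:, 1]
    xy, *_ = np.linalg.lstsq(A, b, rcond=None)
    return xy
```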
Academic Radiology | 1998
Donald D. Dorfman; Kevin S. Berbaum; Russell V. Lenth; Yeh-Fong Chen; Brenda A. Donaghy
RATIONALE AND OBJECTIVES: The authors conducted a series of null-case Monte Carlo simulations to evaluate the Dorfman-Berbaum-Metz (DBM) method for comparing modalities with multireader receiver operating characteristic (ROC) discrete rating data.
MATERIALS AND METHODS: Monte Carlo simulations were performed by using discrete ratings on fully crossed factorial designs with two modalities and three, five, and 10 hypothetical readers. The null hypothesis was true for all simulations. The population ROC areas, latent variable structures, case sample sizes, and normal/abnormal case sample ratios used in another study were used in these simulations.
RESULTS: For equal allocation ratios and small (Az = 0.702) and moderate (Az = 0.855) ROC areas, the empirical type I error rate closely matched the nominal alpha level. For very large ROC areas (Az = 0.961), however, the empirical type I error rate was somewhat smaller than the nominal alpha level. This conservatism increased with decreasing case sample size and with asymmetric normal/abnormal case allocation ratios. The empirical type I error rate was sometimes slightly larger than the nominal alpha level with many cases and few readers, where the residual variance was large, the treatment-by-case interaction relatively small, and the treatment-by-reader interaction relatively large.
CONCLUSION: The results suggest that the DBM method provides trustworthy alpha levels with discrete ratings when the ROC area is not too large and case and reader sample sizes are not too small. In other situations, the test tends to be somewhat conservative or slightly liberal.
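The logic of a null-case simulation like this one is easy to state in code: generate many data sets under a true null hypothesis, apply the test at nominal level alpha, and compare the rejection fraction to alpha. The sketch below illustrates only that logic, with an ordinary paired t-test standing in for the much more involved DBM analysis of pseudovalues; every name in it is a generic stand-in, not the authors' simulation code.

```python
import numpy as np
from scipy import stats

def empirical_type1(test, n_sims=10000, alpha=0.05, seed=None):
    """Fraction of null simulations rejected at nominal level alpha."""
    rng = np.random.default_rng(seed)
    rejections = sum(test(rng) < alpha for _ in range(n_sims))
    return rejections / n_sims

def paired_t_null(rng, readers=5):
    # two "modalities" with identical distributions, so the null is true
    a = rng.normal(size=readers)
    b = rng.normal(size=readers)
    return stats.ttest_rel(a, b).pvalue

# empirical_type1(paired_t_null) should land close to 0.05;
# systematic departures would indicate a conservative or liberal test
```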
Academic Radiology | 1997
Donald D. Dorfman; Kevin S. Berbaum; Charles E. Metz; Russell V. Lenth; James A. Hanley; Hatem Abu Dagga
RATIONALE AND OBJECTIVES: The standard binormal model is the most commonly used model for fitting receiver operating characteristic rating data; however, it sometimes produces inappropriate fits that cross the chance line with degenerate data sets. The authors proposed and evaluated a proper constant-shape bigamma model to handle binormal degeneracy.
METHODS: Monte Carlo samples were generated from both a standard binormal population model and a proper constant-shape bigamma model in a series of Monte Carlo studies.
RESULTS: The results confirm that the standard binormal model is robust in large samples with no degenerate data sets and that the standard binormal model is not robust in small samples because of degenerate data sets.
CONCLUSION: A proper constant-shape bigamma model seems to solve the problem of degeneracy without inappropriate chance line crossings. The bigamma fitting model outperformed the standard binormal fitting model in small samples and gave similar results in large samples.
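A constant-shape bigamma model assigns gamma distributions with a common shape and different scales to the normal and abnormal cases; because the likelihood ratio is then monotone, the fitted ROC curve is "proper" and cannot cross the chance line. The sketch below just traces that curve and its area for assumed parameter values; fitting the model to rating data is a maximum-likelihood problem not shown here.

```python
import numpy as np
from scipy import stats

def bigamma_roc(shape, scale_ratio, n=2001):
    """ROC curve for Gamma(shape, 1) normal vs Gamma(shape, scale_ratio) abnormal cases."""
    t_max = stats.gamma.ppf(0.9999, shape, scale=scale_ratio)
    t = np.linspace(0.0, t_max, n)                      # decision thresholds
    fpf = stats.gamma.sf(t, shape, scale=1.0)           # false positive fraction
    tpf = stats.gamma.sf(t, shape, scale=scale_ratio)   # true positive fraction
    az = np.trapz(tpf[::-1], fpf[::-1])                 # ROC area (Az)
    return fpf, tpf, az
```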
Technometrics | 1981
Russell V. Lenth
Established techniques for robust M estimation of a location parameter are adapted for use in directional data. In particular, a periodic version of any of the commonly used ψ functions can be used to define a comparable estimator of angular location. This technique is illustrated with a numerical example and a small simulation study. The proposed estimators appear to perform at efficiency levels similar to those of ordinary M estimators in the linear case.
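A sketch of the idea: wrap the residual angle into (-pi, pi], apply a periodic psi function, and iterate with IRLS-style weights. With psi = sin, the fixed point is the ordinary circular mean direction; a redescending periodic psi instead ignores observations far from the current estimate. The particular psi and tuning constant below are our illustrative choices, not the specific functions studied in the article.

```python
import numpy as np

def wrap(d):
    """Wrap angles into (-pi, pi]."""
    return (d + np.pi) % (2 * np.pi) - np.pi

def angular_m(theta, c=0.5, iters=100):
    """M estimate of angular location with a redescending periodic psi."""
    theta = np.asarray(theta, dtype=float)
    def psi(d):
        # Andrews-type periodic psi: sin(d / c) inside |d| <= c * pi, else 0
        return np.where(np.abs(d) <= c * np.pi, np.sin(d / c), 0.0)
    # start from the circular mean direction
    mu = np.arctan2(np.sin(theta).sum(), np.cos(theta).sum())
    for _ in range(iters):
        d = wrap(theta - mu)
        d_safe = np.where(d == 0.0, 1.0, d)
        w = np.where(d == 0.0, 1.0 / c, psi(d_safe) / d_safe)  # psi(d)/d -> 1/c at 0
        mu = wrap(mu + (w * d).sum() / w.sum())                # weighted update
    return mu
```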
Academic Radiology | 1995
Donald D. Dorfman; Kevin S. Berbaum; Russell V. Lenth
RATIONALE AND OBJECTIVES: We evaluated by bootstrapping the conclusions obtained by the Dorfman-Berbaum-Metz (DBM) receiver operating characteristic (ROC) method and by the Toledano-Gatsonis (TG) method on a well-known data set.
METHODS: We bootstrapped in two ways: resampling cases while holding readers fixed, and resampling both cases and readers.
RESULTS: When an analysis of variance of pseudovalues implies that reader variance and all random interactions with treatment are essentially zero, the case-resampling bootstrap and the DBM and TG methods should give the same results. The case-resampling bootstrap and the DBM and TG methods did give highly similar results for both individual readers and the averages over all readers. Both the case-resampling bootstrap and the reader-case resampling bootstrap gave smaller standard errors for group means than for individual reader means, thereby providing evidence for a trade-off of readers and cases with regard to precision and power in this data set.
CONCLUSION: The case-resampling bootstrap provides some justification for the DBM and TG methods.
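The first of the two schemes, resampling cases while holding readers fixed, looks roughly like the sketch below. The Mann-Whitney estimate of the ROC area and the array layout are our assumptions for illustration; the actual study bootstrapped the full DBM and TG analyses, not just a reader-averaged area.

```python
import numpy as np

def auc(pos, neg):
    """Mann-Whitney estimate of the ROC area from case ratings."""
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

def case_bootstrap_se(pos, neg, B=2000, seed=None):
    """SE of the reader-averaged ROC area, resampling cases only.

    pos : (readers, n_abnormal) ratings for abnormal cases
    neg : (readers, n_normal) ratings for normal cases
    """
    rng = np.random.default_rng(seed)
    R, n1 = pos.shape
    n0 = neg.shape[1]
    means = np.empty(B)
    for b in range(B):
        i1 = rng.integers(0, n1, n1)   # resample abnormal cases
        i0 = rng.integers(0, n0, n0)   # resample normal cases
        means[b] = np.mean([auc(pos[r, i1], neg[r, i0]) for r in range(R)])
    return means.std(ddof=1)
```

The reader-case scheme adds a second resampling draw over the reader dimension before the case draw.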
Biomedical Signal Processing and Control | 2010
Jordan Cannon; Pavlo A. Krokhmal; Russell V. Lenth; Robert Murphey
We consider the problem of on-the-fly detection of temporal changes in the cognitive state of human subjects due to varying levels of difficulty of performed tasks using real-time EEG and EOG data. We construct the Cognitive State Indicator (CSI) as a function that projects the multidimensional EEG/EOG signals onto the interval [0,1] by maximizing the Kullback–Leibler distance between distributions of the signals, and whose values change continuously with variations in cognitive load. During offline testing (i.e., when evolution in time is disregarded) it was demonstrated that the CSI can serve as a statistically significant discriminator between states of different cognitive loads. In the online setting, a trend detection heuristic (TDH) has been proposed to detect real-time changes in the cognitive state by monitoring trends in the CSI. Our results support the application of the CSI and the TDH in future closed-loop control systems with human supervision.
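As a toy reconstruction of the projection idea (not the authors' algorithm): fit normal distributions to the one-dimensional projections of low-load and high-load epochs, search for the direction that maximizes the symmetrized Kullback–Leibler distance between them, and squash the projected signal into [0, 1]. All names, the normality assumption, and the logistic squashing here are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def kl_normal(m1, v1, m2, v2):
    """KL divergence between two univariate normal distributions."""
    return 0.5 * (np.log(v2 / v1) + (v1 + (m1 - m2) ** 2) / v2 - 1.0)

def best_direction(X_low, X_high):
    """Direction whose projections separate the two conditions most, by symmetrized KL."""
    def neg_sym_kl(w):
        w = w / np.linalg.norm(w)
        a, b = X_low @ w, X_high @ w
        m1, v1 = a.mean(), a.var() + 1e-9
        m2, v2 = b.mean(), b.var() + 1e-9
        return -(kl_normal(m1, v1, m2, v2) + kl_normal(m2, v2, m1, v1))
    w0 = np.ones(X_low.shape[1])
    res = minimize(neg_sym_kl, w0)
    return res.x / np.linalg.norm(res.x)

def csi(X, w):
    """Project epochs onto w and squash onto [0, 1]."""
    z = X @ w
    return expit((z - z.mean()) / (z.std() + 1e-9))
```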
Technometrics | 1987
Charles E. Du Mond; Russell V. Lenth
Robust estimation procedures have been the subject of a large number of comparative studies. Less attention has been paid to confidence intervals and tests based on these procedures and to cases in which the underlying distribution is nonsymmetric. This article investigates the properties of just one interval M estimator of location. The estimator is quite easy to compute, and the associated point estimator of location has proven to be quite efficient in comparative studies. The results indicate that in most cases (including some skewed distributions) the confidence coefficient of the estimator holds quite close to its specified value.
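For readers unfamiliar with intervals of this kind, the sketch below shows the usual construction for a Huber-type M estimator: an iteratively reweighted fit, a sandwich-style standard error, and a t critical value. The tuning constant and the MAD scale estimate are standard textbook choices, not necessarily those examined in the article.

```python
import numpy as np
from scipy import stats

def huber_interval(x, c=1.345, conf=0.95, iters=100):
    """Confidence interval for location based on a Huber M estimator."""
    x = np.asarray(x, dtype=float)
    s = 1.4826 * np.median(np.abs(x - np.median(x)))   # MAD scale estimate
    psi = lambda r: np.clip(r, -c, c)                  # Huber psi
    mu = np.median(x)
    for _ in range(iters):                             # IRLS iterations
        r = (x - mu) / s
        w = np.where(r != 0, psi(r) / r, 1.0)
        mu = (w * x).sum() / w.sum()
    r = (x - mu) / s
    # sandwich standard error: s * sqrt(sum psi^2) / sum psi'
    se = s * np.sqrt((psi(r) ** 2).sum()) / (np.abs(r) < c).sum()
    half = stats.t.ppf((1 + conf) / 2, len(x) - 1) * se
    return mu - half, mu + half
```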
Statistics & Probability Letters | 1995
Marianthi Markatou; Joel L. Horowitz; Russell V. Lenth
A new estimator of the scale parameter σ of an absolutely continuous distribution F((x − μ)/σ) in a location-scale family is described. The estimator is based on the empirical characteristic function of the data. It is affine equivariant, strongly consistent, asymptotically normal and has desirable robustness properties.
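The general flavor of a characteristic-function scale estimator can be conveyed with a toy version, which is not the estimator constructed in the paper: the modulus of the empirical characteristic function is free of the location parameter, so for a given family it can be matched against the modulus of the standard member's characteristic function to back out a scale. The normal-family example below makes these assumptions explicit.

```python
import numpy as np

def ecf_modulus(x, t):
    """Modulus of the empirical characteristic function at frequency t."""
    return np.abs(np.exp(1j * t * np.asarray(x, dtype=float)).mean())

def ecf_scale_normal(x, t0=1.0):
    """Toy ECF-based scale estimate, assuming a normal family.

    For N(mu, sigma^2), |phi(t)| = exp(-t^2 sigma^2 / 2), so matching the
    ECF modulus at a fixed t0 gives sigma directly. The location mu drops
    out of the modulus, which is the invariance the construction exploits.
    """
    m = ecf_modulus(x, t0)
    return np.sqrt(-2.0 * np.log(m)) / t0
```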