Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Keith E. Muller is active.

Publication


Featured research published by Keith E. Muller.


International Journal of Computer Vision | 2003

Deformable M-Reps for 3D Medical Image Segmentation

Stephen M. Pizer; P. Thomas Fletcher; Sarang C. Joshi; Andrew Thall; James Z. Chen; Yonatan Fridman; Daniel S. Fritsch; A. Graham Gash; John M. Glotzer; Michael R. Jiroutek; Conglin Lu; Keith E. Muller; Gregg Tracton; Paul A. Yushkevich; Edward L. Chaney

M-reps (formerly called DSLs) are a multiscale medial means for modeling and rendering 3D solid geometry. They are particularly well suited to modeling anatomic objects, and in particular to capturing prior geometric information effectively in deformable model segmentation approaches. The representation is based on figural models, which define objects at coarse scale by a hierarchy of figures, each figure generally a slab representing a solid region and its boundary simultaneously. This paper focuses on the use of single-figure models to segment objects of relatively simple structure.

A single figure is a sheet of medial atoms, interpolated from the model formed by a net, i.e., a mesh or chain, of medial atoms (hence the name m-reps). Each atom models a solid region via not only a position and a width but also a local figural frame giving figural directions and an object angle between opposing, corresponding positions on the boundary implied by the m-rep. The special capability of an m-rep is to provide spatial and orientational correspondence between an object in two different states of deformation. This ability is central to effective measurement of both geometric typicality and geometry-to-image match, the two terms of the objective function optimized in segmentation by deformable models. The other ability of m-reps central to effective segmentation is their support for segmentation at multiple levels of scale, with successively finer precision. Objects modeled by single figures are segmented first by a similarity transform augmented by object elongation, then by adjustment of each medial atom, and finally by displacing a dense sampling of the m-rep implied boundary. While these models and approaches also exist in 2D, we focus on 3D objects.

The segmentation of the kidney from CT and the hippocampus from MRI serve as the major examples in this paper. The accuracy of segmentation as compared with manual, slice-by-slice segmentation is reported.
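
To make the medial-atom description above concrete, here is a toy 2D sketch. It is not the authors' implementation: all class and function names are hypothetical, and the spoke geometry is a simplified reading of the position/width/frame/object-angle description in the abstract.

```python
import numpy as np

def rotate(v, angle):
    """Rotate a 2D vector by the given angle (radians)."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([c * v[0] - s * v[1], s * v[0] + c * v[1]])

class MedialAtom2D:
    """Toy 2D medial atom: hub position, radius (width), local frame
    direction, and object angle between the two implied boundary spokes."""
    def __init__(self, position, radius, frame_angle, object_angle):
        self.position = np.asarray(position, dtype=float)
        self.radius = float(radius)
        self.frame_angle = frame_angle    # orientation of the local figural frame
        self.object_angle = object_angle  # half-angle between the two spokes

    def implied_boundary_points(self):
        """Return the two boundary points implied by the atom's spokes."""
        axis = np.array([np.cos(self.frame_angle), np.sin(self.frame_angle)])
        spoke_plus = rotate(axis, +self.object_angle)
        spoke_minus = rotate(axis, -self.object_angle)
        return (self.position + self.radius * spoke_plus,
                self.position + self.radius * spoke_minus)

atom = MedialAtom2D(position=(0.0, 0.0), radius=1.0,
                    frame_angle=np.pi / 2, object_angle=np.pi / 3)
print(atom.implied_boundary_points())
```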


Statistics in Medicine | 2008

An R2 statistic for fixed effects in the linear mixed model

Lloyd J. Edwards; Keith E. Muller; Russell D. Wolfinger; Bahjat F. Qaqish; Oliver Schabenberger

Statisticians most often use the linear mixed model to analyze Gaussian longitudinal data. The value and familiarity of the R2 statistic in the linear univariate model naturally creates great interest in extending it to the linear mixed model. We define and describe how to compute a model R2 statistic for the linear mixed model by using only a single model. The proposed R2 statistic measures multivariate association between the repeated outcomes and the fixed effects in the linear mixed model. The R2 statistic arises as a 1-1 function of an appropriate F statistic for testing all fixed effects (except typically the intercept) in a full model. The statistic compares the full model with a null model with all fixed effects deleted (except typically the intercept) while retaining exactly the same covariance structure. Furthermore, the R2 statistic leads immediately to a natural definition of a partial R2 statistic. A mixed model in which ethnicity gives a very small p-value as a longitudinal predictor of blood pressure (BP) compellingly illustrates the value of the statistic. In sharp contrast to the extreme p-value, a very small R2, a measure of statistical and scientific importance, indicates that ethnicity has an almost negligible association with the repeated BP outcomes for the study.
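
The 1-1 mapping between F and R2 mentioned above is easiest to see in the univariate linear model, where F = (R2/q) / ((1 - R2)/v) for q numerator and v denominator degrees of freedom; the paper's mixed-model statistic generalizes this idea. A minimal sketch of that familiar mapping (illustrative only, not the paper's code):

```python
def r2_from_f(f_stat, df_num, df_den):
    """Map an F statistic with (df_num, df_den) degrees of freedom to the
    corresponding R2 via the one-to-one relation
    R2 = (df_num * F) / (df_num * F + df_den)."""
    return (df_num * f_stat) / (df_num * f_stat + df_den)

def f_from_r2(r2, df_num, df_den):
    """Inverse mapping: F = (R2 / df_num) / ((1 - R2) / df_den)."""
    return (r2 / df_num) / ((1.0 - r2) / df_den)

# Round trip: an F of 5.0 on (3, 96) df corresponds to R2 of about 0.135.
r2 = r2_from_f(5.0, 3, 96)
assert abs(f_from_r2(r2, 3, 96) - 5.0) < 1e-12
print(round(r2, 3))
```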


Journal of Digital Imaging | 1998

Contrast Limited Adaptive Histogram Equalization image processing to improve the detection of simulated spiculations in dense mammograms

Etta D. Pisano; Shuquan Zong; Bradley M. Hemminger; Marla DeLuca; R. Eugene Johnston; Keith E. Muller; M. Patricia Braeuning; Stephen M. Pizer

The purpose of this project was to determine whether Contrast Limited Adaptive Histogram Equalization (CLAHE) improves detection of simulated spiculations in dense mammograms. Lines simulating the appearance of spiculations, a common marker of malignancy when visualized with masses, were embedded in dense mammograms digitized at 50 micron pixels, 12 bits deep. Film images with no CLAHE applied were compared to film images with nine different combinations of clip levels and region sizes applied. A simulated spiculation was embedded in a background of dense breast tissue, with the orientation of the spiculation varied. The key variables involved in each trial included the orientation of the spiculation, contrast level of the spiculation and the CLAHE settings applied to the image. Combining the 10 CLAHE conditions, 4 contrast levels and 4 orientations gave 160 combinations. The trials were constructed by pairing 160 combinations of key variables with 40 backgrounds. Twenty student observers were asked to detect the orientation of the spiculation in the image. There was a statistically significant improvement in detection performance for spiculations with CLAHE over unenhanced images when the region size was set at 32 with a clip level of 2, and when the region size was set at 32 with a clip level of 4. The selected CLAHE settings should be tested in the clinic with digital mammograms to determine whether detection of spiculations associated with masses detected at mammography can be improved.
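
For readers who want to try CLAHE themselves, a minimal sketch using scikit-image follows. This is not the study's implementation: skimage's clip_limit is a normalized fraction rather than the integer clip levels (2 and 4) reported above, and kernel_size plays the role of region size, so the settings shown are only loose analogues.

```python
from skimage import data, exposure

# Any grayscale image stands in for the digitized mammogram here.
image = data.camera()

# kernel_size is the rough analogue of the paper's "region size";
# clip_limit (a fraction in skimage) is the rough analogue of "clip level".
enhanced = exposure.equalize_adapthist(image, kernel_size=32, clip_limit=0.02)

print(image.shape, enhanced.min(), enhanced.max())  # output is rescaled to [0, 1]
```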


Journal of the American Statistical Association | 1992

Power calculations for general linear multivariate models including repeated measures applications

Keith E. Muller; Lisa M. LaVange; Sharon Landesman Ramey; Craig T. Ramey

Recently developed methods for power analysis expand the options available for study design. We demonstrate how easily the methods can be applied by (1) reviewing their formulation and (2) describing their application in the preparation of a particular grant proposal. The focus is a complex but ubiquitous setting: repeated measures in a longitudinal study. Describing the development of the research proposal allows us to demonstrate the steps needed to conduct an effective power analysis. Discussion of the example also highlights issues that typically must be considered in designing a study. First, we discuss the motivation for using detailed power calculations, focusing on multivariate methods in particular. Second, we survey available methods for the general linear multivariate model (GLMM) with Gaussian errors and recommend those based on F approximations. The treatment includes coverage of the multivariate and univariate approaches to repeated measures, MANOVA, ANOVA, multivariate regression, and univariate regression. Third, we describe the design of the power analysis for the example, a longitudinal study of a child's intellectual performance as a function of the mother's estimated verbal intelligence. Fourth, we present the results of the power calculations. Fifth, we evaluate the tradeoffs in using reduced designs and tests to simplify power calculations. Finally, we discuss the benefits and costs of power analysis in the practice of statistics. We make three recommendations: (1) align the design and hypothesis of the power analysis with the planned data analysis, as closely as practical; (2) embed any power analysis in a defensible sensitivity analysis; and (3) have the extent of the power analysis reflect the ethical, scientific, and monetary costs. We conclude that power analysis catalyzes the interaction of statisticians and subject matter specialists. Using the recent advances for power analysis in linear models can further invigorate the interaction.
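
In the simplest cases, the F-approximation methods recommended above amount to evaluating a noncentral F distribution at the critical value of the corresponding central F. A minimal sketch using scipy (my illustration, not the authors' software; the noncentrality value is a made-up input):

```python
from scipy import stats

def glmm_power_f(alpha, df_num, df_den, noncentrality):
    """Approximate power of an F test: P(F > f_crit) under the
    noncentral F with the given noncentrality parameter."""
    f_crit = stats.f.ppf(1.0 - alpha, df_num, df_den)
    return 1.0 - stats.ncf.cdf(f_crit, df_num, df_den, noncentrality)

# Example: 3 numerator df, 96 denominator df, noncentrality 12.
print(round(glmm_power_f(0.05, 3, 96, 12.0), 3))
```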


Journal of the American Statistical Association | 1989

Approximate Power for Repeated-Measures ANOVA Lacking Sphericity

Keith E. Muller; Curtis N. Barton

Violation of sphericity of covariance across repeated measures inflates Type I error rates in univariate repeated-measures analysis of variance. Hence the use of the Geisser-Greenhouse or Huynh-Feldt correction to provide an improved Type I error rate is now common. With nonsphericity, no method has been available to compute power. A convenient method is suggested for approximating power and test size under nonsphericity. New approximations are suggested for (a) a weighted sum of independent noncentral χ² variables, (b) the trace of a noncentral Wishart (or pseudo-Wishart) matrix, (c) the expected values of ε̂ and ε̃, and (d) the noncentral test statistic, whether corrected or uncorrected. The new approximations are extensions of the work of Box (1954a,b) and Satterthwaite (1941). The method performed well when evaluated against published and new simulations.
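
Approximation (a) descends from Satterthwaite's classical idea of matching the first two moments of a weighted sum of independent χ² variables with a scaled χ² distribution. A minimal sketch of that central-case moment match (illustrative only; the paper extends this to the noncentral setting):

```python
import numpy as np

def satterthwaite_match(weights, dfs):
    """Match sum_i w_i * chi2(df_i) to a * chi2(b) by equating the first
    two moments: mean = sum(w * df), variance = 2 * sum(w^2 * df)."""
    w = np.asarray(weights, dtype=float)
    df = np.asarray(dfs, dtype=float)
    mean = np.sum(w * df)
    var = 2.0 * np.sum(w ** 2 * df)
    a = var / (2.0 * mean)       # scale factor
    b = 2.0 * mean ** 2 / var    # effective degrees of freedom
    return a, b

# Example: 0.7*chi2(4) + 0.3*chi2(2) is approximated by a * chi2(b).
print(satterthwaite_match([0.7, 0.3], [4, 2]))
```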


Medical Image Analysis | 2002

Augmented reality guidance for needle biopsies: an initial randomized, controlled trial in phantoms.

Michael H. Rosenthal; Andrei State; Joohi Lee; Gentaro Hirota; Jeremy D. Ackerman; Kurtis Keller; Etta D. Pisano; Michael R. Jiroutek; Keith E. Muller; Henry Fuchs

We report the results of a randomized, controlled trial to compare the accuracy of standard ultrasound-guided needle biopsy to biopsies performed using a 3D Augmented Reality (AR) guidance system. A board-certified radiologist conducted 50 core biopsies of breast phantoms, with biopsies randomly assigned to one of the methods in blocks of five biopsies each. The raw ultrasound data from each biopsy was recorded. Another board-certified radiologist, blinded to the actual biopsy guidance mechanism, evaluated the ultrasound recordings and determined the distance of the biopsy from the ideal position. A repeated measures analysis of variance indicated that the head-mounted display method led to a statistically significantly smaller mean deviation from the desired target than did the standard display method (2.48 mm for control versus 1.62 mm for augmented reality, p<0.02). This result suggests that AR systems can offer improved accuracy over traditional biopsy guidance methods.


American Journal of Kidney Diseases | 1995

A review of therapeutic studies of idiopathic membranous glomerulopathy

Susan L. Hogan; Keith E. Muller; J. Charles Jennette; Ronald J. Falk

The treatment of idiopathic membranous glomerulopathy remains an enigma. We have reviewed many of the important clinical trials concerning membranous glomerulopathy using a meta-analysis and a secondary pooled analysis to test the effects of corticosteroid or alkylating therapy, compared with no treatment, on renal survival and complete remission of the nephrotic syndrome. A search was performed using MEDLINE (1968 through 1993) for articles on idiopathic membranous glomerulopathy and glomerulonephritis. Bibliographies of articles were reviewed for completeness. Sixty-nine articles were reviewed. Meta-analysis was performed for four trials that evaluated corticosteroids compared with no treatment and for three trials that evaluated alkylating therapy compared with no treatment. Pooled analysis was performed on randomized and prospective studies (10 studies) and then with 22 case series added. All studies evaluated renal biopsy-proven disease. Meta-analysis was performed on the relative chance of being in complete remission for each study. Renal survival could be evaluated by pooled analysis only. For pooled analyses, Cox's proportional hazards and logistic regression models were used to test the effect of therapy on renal survival and the nephrotic syndrome, respectively. Data concerning gender, nephrotic syndrome, and geographic region were used in all statistical models. Evaluation of renal survival revealed no differences by treatment group (P > 0.1). By meta-analysis, the relative chance of complete remission was not improved for corticosteroid-treated patients (1.55; 95% confidence interval, 0.99 to 2.44; P > 0.1), but was improved for patients treated with alkylating agents (4.8; 95% confidence interval, 1.44 to 15.96; P < 0.05) when compared with no treatment. Pooled analysis of randomized and prospective studies, as well as pooled analysis with all studies, supported the findings of the meta-analysis. Neither corticosteroids nor alkylating therapy improved renal survival in idiopathic membranous glomerulopathy. Complete remission of the nephrotic syndrome was observed more frequently with the use of alkylating agents.
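
As background for the relative-chance estimates and confidence intervals quoted above, the sketch below shows a generic inverse-variance, fixed-effect pooling of per-study log risk ratios with made-up counts; the paper's exact meta-analytic method may differ.

```python
import numpy as np

def pooled_risk_ratio(events_t, n_t, events_c, n_c):
    """Fixed-effect (inverse-variance) pooling of per-study log risk ratios.
    Returns the pooled risk ratio and its 95% confidence interval."""
    et, nt = np.asarray(events_t, float), np.asarray(n_t, float)
    ec, nc = np.asarray(events_c, float), np.asarray(n_c, float)
    log_rr = np.log((et / nt) / (ec / nc))
    var = 1.0 / et - 1.0 / nt + 1.0 / ec - 1.0 / nc  # delta-method variance
    w = 1.0 / var
    pooled = np.sum(w * log_rr) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return np.exp(pooled), np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se)

# Made-up remission counts for three hypothetical trials (treated vs. control).
print(pooled_risk_ratio([12, 9, 15], [30, 25, 40], [5, 4, 7], [28, 24, 38]))
```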


IEEE Visualization | 1990

Contrast-limited adaptive histogram equalization: speed and effectiveness

Stephen M. Pizer; R. E. Johnston; J.P. Ericksen; Bonnie C. Yankaskas; Keith E. Muller

An experiment intended to evaluate the clinical application of contrast-limited adaptive histogram equalization (CLAHE) to chest computed tomography (CT) images is reported. A machine especially designed to compute CLAHE in a few seconds is discussed. It is shown that CLAHE can be computed in 4 s, after a 5-s loading time, using the specially designed parallel engine made from a few thousand dollars' worth of off-the-shelf components. The processing appears to be useful for a wide range of medical images, but the limitations of observer calibration make it impossible to demonstrate such usefulness by agreement experiments.


Trials | 2012

Adaptive trial designs: a review of barriers and opportunities

John A. Kairalla; Christopher S. Coffey; Mitchell A. Thomann; Keith E. Muller

Adaptive designs allow planned modifications based on data accumulating within a study. The promise of greater flexibility and efficiency stimulates increasing interest in adaptive designs from clinical, academic, and regulatory parties. When adaptive designs are used properly, efficiencies can include a smaller sample size, a more efficient treatment development process, and an increased chance of correctly answering the clinical question of interest. However, improper adaptations can lead to biased studies. A broad definition of adaptive designs allows for countless variations, which creates confusion as to the statistical validity and practical feasibility of many designs. Determining properties of a particular adaptive design requires careful consideration of the scientific context and statistical assumptions. We first review several adaptive designs that garner the most current interest. We focus on the design principles and research issues that lead to particular designs being appealing or unappealing in particular applications. We separately discuss exploratory and confirmatory stage designs in order to account for the differences in regulatory concerns. We include adaptive seamless designs, which combine stages in a unified approach. We also highlight a number of applied areas, such as comparative effectiveness research, that would benefit from the use of adaptive designs. Finally, we describe a number of current barriers and provide initial suggestions for overcoming them in order to promote wider use of appropriate adaptive designs. Given the breadth of the coverage, all mathematical and most implementation details are omitted for the sake of brevity. However, the interested reader will find that we provide current references to focused reviews and original theoretical sources which lead to details of the current state of the art in theory and practice.


BMC Medical Research Methodology | 2013

Selecting a sample size for studies with repeated measures

Yi Guo; Henrietta L. Logan; Deborah H. Glueck; Keith E. Muller

Many researchers favor repeated measures designs because they allow the detection of within-person change over time and typically have higher statistical power than cross-sectional designs. However, the plethora of inputs needed for repeated measures designs can make sample size selection, a critical step in designing a successful study, difficult. Using a dental pain study as a driving example, we provide guidance for selecting an appropriate sample size for testing a time by treatment interaction for studies with repeated measures. We describe how to (1) gather the required inputs for the sample size calculation, (2) choose appropriate software to perform the calculation, and (3) address practical considerations such as missing data, multiple aims, and continuous covariates.
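
Step (2) above typically reduces to finding the smallest sample size whose approximate power reaches a target. A toy sketch follows (not the paper's recommended software; it assumes, simplistically, that the noncentrality parameter grows linearly with n):

```python
from scipy import stats

def smallest_n(target_power, alpha, df_num, nc_per_subject, n_max=1000):
    """Linear search for the smallest sample size whose approximate power,
    based on a noncentral F test, reaches the target. Assumes the
    noncentrality parameter grows linearly with n (a simplification)."""
    for n in range(df_num + 2, n_max):
        df_den = n - df_num - 1
        f_crit = stats.f.ppf(1.0 - alpha, df_num, df_den)
        power = 1.0 - stats.ncf.cdf(f_crit, df_num, df_den, nc_per_subject * n)
        if power >= target_power:
            return n, power
    return None

# Example: 3 numerator df, noncentrality 0.15 per subject, 90% power target.
print(smallest_n(0.90, 0.05, 3, 0.15))
```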

Collaboration


Dive into Keith E. Muller's collaborations.

Top Co-Authors

Paul W. Stewart
University of North Carolina at Chapel Hill

Etta D. Pisano
Medical University of South Carolina

Vernon A. Benignus
United States Environmental Protection Agency

Bradley M. Hemminger
University of North Carolina at Chapel Hill

Deborah H. Glueck
Colorado School of Public Health

Curtis N. Barton
University of North Carolina at Chapel Hill

Stephen M. Pizer
University of North Carolina at Chapel Hill

Yi Guo
University of Florida

Lloyd J. Edwards
University of North Carolina at Chapel Hill