William Mietlowski
Novartis
Publication
Featured research published by William Mietlowski.
Journal of Clinical Oncology | 2005
David A. Reardon; T. Cloughesy; Charles A. Conrad; M. Prados; J. Xia; William Mietlowski; Margaret Dugan; Paul S. Mischel; Henry S. Friedman; A. Yung
3063 Background: AEE788 is an oral inhibitor with potent activity against multiple tyrosine kinases, including EGFR, HER2, and VEGFR2. This phase I study was designed to assess the safety and pharmacokinetics (PK) of AEE788 and to define its maximum tolerated dose (MTD) and dose-limiting toxicity (DLT) in GBM patients (pts). Methods: Pts with GBM at 1st or 2nd recurrence were enrolled. Dose escalation followed an accelerated titration design: 1 pt/cohort initially, with expansion to 3–6 pts. Two dose escalations proceeded in parallel: one in pts receiving non-enzyme-inducing anticonvulsants (Gp A) and one in pts receiving enzyme-inducing anticonvulsants (Gp B). Cardiac assessments were performed. A 24-hr PK profile was obtained on days 1, 15, and 28. A cycle (cyc) is 28 days. Results: To date, 26 pts (21 at 1st recurrence, 5 at 2nd recurrence; 20 male, 6 female), median age 50.5 (range 24–68), have been treated with once-daily AEE788 at doses of 50 (2), 100 (6), 200 (1), 400 (3), 450 (1), or 550 (8) mg (Gp A), or 300 (2) or 600 (3) mg (Gp B). Two Gp...
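As a minimal illustration of how a 24-hr PK profile like the one described above is typically summarized, the sketch below computes AUC(0–24), Cmax, and Tmax with the linear trapezoidal rule in R; the sampling times and concentrations are hypothetical placeholders, not study data.

```r
# Minimal sketch: summarizing a single patient's 24-hr concentration-time
# profile with the linear trapezoidal rule. All values are hypothetical,
# not data from the AEE788 study.
time <- c(0, 0.5, 1, 2, 4, 8, 12, 24)           # sampling times (h)
conc <- c(0, 110, 240, 310, 280, 190, 120, 45)  # plasma concentration (ng/mL)

auc_trapezoid <- function(tt, cc) {
  # sum of trapezoid areas between successive sampling times
  sum(diff(tt) * (head(cc, -1) + tail(cc, -1)) / 2)
}

auc_0_24 <- auc_trapezoid(time, conc)
cmax     <- max(conc)
tmax     <- time[which.max(conc)]
cat(sprintf("AUC(0-24) = %.0f ng*h/mL, Cmax = %.0f ng/mL at Tmax = %.1f h\n",
            auc_0_24, cmax, tmax))
```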
European Urology | 2013
Andrew M. Stein; Joaquim Bellmunt; Bernard Escudier; D. Kim; Sotirios G. Stergiopoulos; William Mietlowski; Robert J. Motzer
BACKGROUND The phase 3 RECORD-1 study demonstrated clinical benefit of everolimus over placebo (median progression-free survival: 4.9 mo compared with 1.9 mo, p<0.001) in treatment-resistant patients with metastatic renal cell carcinoma (mRCC). However, the Response Evaluation Criteria in Solid Tumors (RECIST) objective response rate was low. OBJECTIVE To explore the potential role of tumor burden response to everolimus in predicting patient survival. DESIGN, SETTING, AND PARTICIPANTS RECORD-1 patients with at least two tumor assessments (baseline and weeks 2-14) were included (n=246). OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS A multivariate Cox proportional hazard model was used to assess the impact of various prognostic factors on overall survival (OS). Components of RECIST progression were explored using univariate Cox regression. RESULTS AND LIMITATIONS The baseline sum of longest tumor diameters (SLD) and progression at weeks 2-14 were prognostic factors of OS by multivariate analysis. Univariate analysis at weeks 2-14 demonstrated that growth of nontarget lesions and appearance of new lesions were predictive of OS (p<0.001). This retrospective analysis used data from one arm of one trial; patients in the placebo arm were excluded because of confounding effects when they crossed over to everolimus. CONCLUSIONS This analysis identified baseline SLD as a predictive factor of OS, and the appearance of a new lesion or progression of a nontarget lesion at first assessment after baseline also affects OS in patients with mRCC treated with everolimus.
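As a rough illustration of the modeling approach described above, the sketch below fits a multivariate Cox proportional hazards model of overall survival on baseline SLD and early-progression indicators using R's survival package; the simulated data frame mrcc and its column names are hypothetical placeholders, not the RECORD-1 data.

```r
# Minimal sketch of a multivariate Cox model of the kind described above.
# The simulated data are placeholders, not the RECORD-1 trial data.
library(survival)

set.seed(1)
n <- 200
mrcc <- data.frame(
  baseline_sld   = rlnorm(n, meanlog = 4, sdlog = 0.5),  # sum of longest diameters (mm)
  new_lesion     = rbinom(n, 1, 0.25),                    # new lesion at weeks 2-14
  nontarget_prog = rbinom(n, 1, 0.20)                     # nontarget progression at weeks 2-14
)
# simulate OS with hazard increasing in baseline SLD and early progression
lp <- 0.002 * mrcc$baseline_sld + 0.6 * mrcc$new_lesion + 0.4 * mrcc$nontarget_prog
mrcc$os_months <- rexp(n, rate = 0.05 * exp(lp))
mrcc$death     <- rbinom(n, 1, 0.8)                       # some censoring

fit <- coxph(Surv(os_months, death) ~ baseline_sld + new_lesion + nontarget_prog,
             data = mrcc)
summary(fit)   # hazard ratios with confidence intervals
```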
Contemporary Clinical Trials | 2011
Stephanie Kovalchik; William Mietlowski
For cytostatic cancer therapies, alternatives to traditional phase II endpoints are needed. Von Hoff (1998) proposed an intrapatient progression-free survival (PFS) ratio, the growth modulation index (GMI). Current practice in estimation of the GMI success rate is conservative and omits a measure of uncertainty. We investigated nonparametric and parametric methods to estimate the GMI success rate, including an approach using midranks for paired survival outcomes (Hudgens and Satten (2002)). Estimators were applied to a phase II GMI dataset (Bonetti et al. (2001)). From simulation studies, it was determined that a rank-based estimator had the most favorable statistical properties. Its point estimate bias was consistently within 1.5%; its bias and precision were robust over a range of effect and censoring scenarios. Using a proof of concept criterion of {P(GMI≥1)≥θ}, a simulation investigation found that a θ of 50%, for sample sizes between 20 and 30 patients, had type I error of ≤20% and a power to detect Von Hoff's 1.33 effect of ≥80%. When the amount of censoring was ≥20%, the midrank estimator had at least 14% greater power than the simple percentage estimator for the GMI success rate. Future investigations reporting the GMI should consider adopting the midrank methodology.
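For orientation, the sketch below computes the simple percentage estimator of the GMI success rate on simulated, uncensored data; the midrank estimator studied in the paper (Hudgens and Satten, 2002) additionally handles censored paired survival times and is not reproduced here.

```r
# Simplified sketch: the simple percentage estimator of the GMI success
# rate P(GMI >= 1.33), ignoring censoring. Data are simulated placeholders.
set.seed(2)
pfs_prior   <- rexp(25, rate = 1 / 4)   # PFS on prior therapy (months)
pfs_current <- rexp(25, rate = 1 / 5)   # PFS on current therapy (months)

gmi   <- pfs_current / pfs_prior        # intrapatient growth modulation index
p_hat <- mean(gmi >= 1.33)              # simple percentage estimator
se    <- sqrt(p_hat * (1 - p_hat) / length(gmi))
cat(sprintf("Estimated P(GMI >= 1.33) = %.2f (SE %.2f)\n", p_hat, se))
```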
Clinical Cancer Research | 2012
José Baselga; Alain C. Mita; Patrick Schöffski; Herlinde Dumez; Frederico Rojo; Josep Tabernero; Clifford DiLea; William Mietlowski; Christie Low; Jerry Huang; Margaret Dugan; Kathryn Parker; Eric Walk; Allan T. van Oosterom; Erika Martinelli; C. H. Takimoto
Purpose: In this first-in-human study of AEE788, a tyrosine kinase inhibitor of epidermal growth factor receptor (EGFR), HER-2, and VEGFR-2, a comprehensive pharmacodynamic program was implemented in addition to the evaluation of safety, pharmacokinetics, and preliminary efficacy of AEE788 in cancer patients. Experimental design: Patients with advanced, solid tumors received escalating doses of oral AEE788 once daily. Primary endpoints were to determine dose-limiting toxicities (DLTs) and maximum-tolerated dose (MTD). A nonlinear model (Emax model) was used to describe the relationship between AEE788 exposure and target-pathway modulation in skin and tumor tissues. Results: Overall, 111 patients were treated (25 to 550 mg/day). DLTs included rash and diarrhea; MTD was 450 mg/day. Effects on biomarkers correlated with serum AEE788 concentrations. The concentrations at 50% inhibition (IC50) of EGFR in skin (0.033 μmol/L) and tumor (0.0125 μmol/L) were similar to the in vitro IC50, suggesting that skin may be a surrogate tissue for estimating tumor EGFR inhibition. No inhibition of p-MAPK and Ki67 was observed in skin vessels at ≤MTD. Hence, AEE788 inhibited EGFR, but not VEGFR, at doses ≤MTD. A total of 16 of 96 evaluable patients showed a >10% shrinkage of tumor size; one partial response was observed. Conclusion: Our pharmacodynamic-based study showed effective inhibition of EGFR, but not of VEGFR, at tolerable AEE788 doses. Emax modeling integrated with biomarker data effectively guided real-time decision making in the early development of AEE788. Despite clinical activity, target inhibition of only EGFR led to discontinuation of further AEE788 development. Clin Cancer Res; 18(22); 6364–72. ©2012 AACR.
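As a minimal sketch of the kind of exposure–response analysis described above, the code below fits an inhibitory Emax model relating drug concentration to percent inhibition of a target biomarker with nls; the simulated concentrations and responses are placeholders, not the trial's pharmacodynamic data or the authors' actual model specification.

```r
# Minimal sketch of an inhibitory Emax model: inhibition = Imax * C / (IC50 + C).
# Simulated data stand in for the biomarker measurements; estimates recover
# the assumed Imax and IC50.
set.seed(3)
conc <- c(0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1)  # concentration (umol/L)
imax_true <- 95
ic50_true <- 0.03
inhibition <- imax_true * conc / (ic50_true + conc) + rnorm(length(conc), 0, 4)

fit <- nls(inhibition ~ Imax * conc / (IC50 + conc),
           start = list(Imax = 100, IC50 = 0.05))
coef(fit)   # estimated Imax and IC50
```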
Communications in Statistics-theory and Methods | 2003
Govind S. Mudholkar; Gregory E. Wilding; William Mietlowski
The classical Pitman–Morgan test is known to be optimal for testing equality of the variances of components of a bivariate normal vector. We first show that it is also optimal for a generalized model involving the matrix spherical distribution. Then we discuss and demonstrate, both analytically and empirically, that it is nonrobust, i.e., its type I error control is inexact both asymptotically and in moderate size bivariate random samples.
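Since the Pitman–Morgan test reduces to testing for zero correlation between the pairwise sums and differences of the two components, a minimal R sketch on simulated data is:

```r
# Minimal sketch of the Pitman-Morgan test: equality of the two component
# variances of a bivariate normal sample is equivalent to zero correlation
# between the pairwise sums and differences. Data are simulated placeholders.
set.seed(4)
n <- 30
x <- rnorm(n, sd = 1.0)
y <- 0.5 * x + rnorm(n, sd = 1.3)   # correlated pair with a larger variance

cor.test(x + y, x - y)              # small p-value suggests unequal variances
```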
Journal of Biopharmaceutical Statistics | 2011
William Mietlowski
Most survival analysis books have little, if any, coverage on the topic of frailty models. The only other book with a major emphasis on frailty models in survival analysis published after 2000 was written by Duchateau and Janssen (2008). The author mentions that this book has a broader coverage of univariate frailty models than other books in this area and also places a special emphasis on correlated frailty models as natural extensions of shared frailty models. The book has seven chapters. The first chapter gives an overview of the book and introduces seven data sets that are used throughout the book to illustrate key concepts. Chapter 2 provides a review of the basic concepts of survival analysis, including basic definitions, a summary of seven failure time distributions, regression models, and identifiability problems arising from dependence between the survival and censoring distributions. The author notes that chapter 2 may be skipped by readers familiar with survival analysis, but suggests that readers should at least review the notation used in the chapter. Chapter 3 deals with univariate frailty models. The advantages of using models that separate the variability in survival into observed heterogeneity measured using known covariates and unobserved heterogeneity using frailty are discussed. The author cites the importance of the Laplace transform in frailty models. He points out that the unconditional density and hazard functions as well as the mean and variance of the frailty distribution can be expressed in terms of the Laplace transform and its derivatives. Chapter 3 investigates discrete frailty distributions (two, three and four point). Four sections are spent on the gamma frailty model and its variations and extensions, and three sections are devoted to the lognormal frailty model. Eight less commonly used frailty distributions are discussed. Chapter 3 also presents univariate frailty cure models and discusses how frailty models can be used to reduce the bias from omitted covariates in proportional hazard models. The Halluca lung cancer data set is analyzed using 3 discrete and 15 continuous frailty models in chapter 3. In this data set, 1696 lung cancer patients were followed from diagnosis to death with 1349 deaths (79.5%) reported. A discrete four-point frailty model fit the data best. Professor Wienke points out that a small subgroup of the patients was diagnosed by autopsy resulting in a survival time of zero days—extremely high frailty. In addition to modeling unobserved heterogeneity in the univariate survival setting, frailty models can be used to induce correlation among the marginal survival
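As a minimal illustration of fitting a gamma frailty term of the kind discussed above, the sketch below adds a cluster-level frailty to a Cox model with R's survival package; the simulated data frame dat and its columns are hypothetical placeholders, not the Halluca lung cancer data.

```r
# Minimal sketch of a gamma frailty Cox model: a cluster-level frailty term
# captures unobserved heterogeneity beyond the measured covariates.
# Simulated data are placeholders only.
library(survival)

set.seed(5)
n_centers  <- 20
per_center <- 25
dat <- data.frame(
  center = rep(seq_len(n_centers), each = per_center),
  age    = rnorm(n_centers * per_center, mean = 62, sd = 9)
)
frail <- rgamma(n_centers, shape = 2, rate = 2)   # center-level gamma frailty
dat$time   <- rexp(nrow(dat),
                   rate = 0.02 * frail[dat$center] * exp(0.03 * (dat$age - 62)))
dat$status <- rbinom(nrow(dat), 1, 0.9)           # mostly events, some censoring

fit <- coxph(Surv(time, status) ~ age + frailty(center, distribution = "gamma"),
             data = dat)
summary(fit)   # covariate effect plus the estimated frailty variance
```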
Journal of Biopharmaceutical Statistics | 2008
William Mietlowski
In their preface, the editors cite the significant developments in biostatistical methodology over the past decades that have led to improvements in many areas of drug discovery and development. The objective of this book is to provide broad coverage of biostatistical methodology applied in the pharmaceutical industry to deal with practical problems. The focus is primarily on nonclinical and preclinical research and early development, although there is some discussion of nonstandard topics in later development as well. The target audience of the book includes primarily biostatisticians engaged in pharmaceutical research and development, but it could also benefit basic scientists such as biologists and chemists as well as pharmacologists and pharmacokineticists. The book is organized into 14 chapters written by 42 authors (26 from the pharmaceutical industry, 13 academic, 2 nonpharmaceutical industry, and 1 government). Chapter 1 provides an overview of the role of the statistician in the pharmaceutical industry, describing activities in nonclinical, early, and late clinical development. The emerging opportunities for statisticians to demonstrate leadership in the Food and Drug Administration’s Critical Path Initiative are also described. Chapter 2 describes two modern classification methods used in drug discovery, namely boosting and partial least squares. The authors note that drug discovery is typically characterized by overspecified data, i.e., too many variables relative to the number of observations and/or highly correlated descriptors. Recursive partitioning followed by boosting (implemented using two author-supplied macros) and partial least squares linear discrimination analysis (using PROC PLS) are illustrated using a permeability data set. In Chapter 3, the authors observe that a series of optimization processes takes place after a basic skeletal structure having the desired target inhibition and activity is discovered. These processes investigate modification of the chemical structure to improve the pharmaceutical properties (e.g., solubility, permeability, etc.) of the drug. Statistical modeling of the pharmaceutical properties as a function of the chemical structure is a driving factor of the optimization process. Model building and variable selection using multiple linear regression, ridge regression, lasso regression, logistic regression, and discriminant analysis are discussed and illustrated, using a solubility data set. Chapter 4 discusses analytical method validation related to biological and/or chemical assays. The authors use the proposed classification of assay data by Lee
Clinical Trials | 2015
H. M James Hung; Lurdes Y. T. Inoue; Eve H. Pickering; Jarcy Zee; Michael W. Kattan; Pamela A. Shaw; Herbert I Weisberg; Ying Huang; Stuart G. Baker; William Mietlowski; Songbai Wang; Susan S. Ellenberg; Jeremy M. G. Taylor
James Hung: Disclaimer: The comments given here reflect only my personal views and should not be construed to represent the view or the policy of the Food and Drug Administration. The four presentations were clearly and well delivered. I will try to pinpoint the essence of each talk and make some suggestions. Dr Zee’s presentation is very interesting in that a marker or markers yield a true endpoint with relatively good precision, whereas a clinical diagnosis generates a (surrogate) endpoint with measurement error. This scenario is rather different from the scenario we are often used to where, at least in a clinical trial setting, the primary endpoint is based on clinical diagnosis and the marker is often challenged in terms of prognostic or predictive nature. I wonder how good the cerebrospinal fluid (CSF) is as a marker for Alzheimer’s disease; its utility seems to remain unsettled. Regardless, she presented a very viable and good approach to reducing bias in estimation of covariate effect and improving relative efficiency. The approach combines the data of the patients who have surrogate data and true endpoint data (internal validation subsample) and the data of the patients who have only surrogate data (internal non-validation subsample), assuming that the patients in the non-validation set and the patients in the validation set are basically the same type of patients. In the clinical setting, we are often faced with the scenario that the treatment effect on the marker is most likely larger than the treatment effect on the clinical endpoint of interest. In Dr Zee’s article, the covariate effect (b) seems equal for the surrogate and the endpoint. But this is not really important because what is of ultimate interest is the covariate effect on the clinical endpoint. Besides, perhaps in most practical situations, the clinical endpoint is also marker based; for instance, the definition of myocardial infarction may be based on markers (e.g. creatine kinase-MB (CKMB), troponin), where the optimal cutoff values of the markers in determination of the disease or event are still far from certain. Thus, it will be interesting to link this scenario to that in her work. Regarding surrogacy, it can be discussed at a patient level or at a trial level. That is, the question can be how predictive the marker is of the clinical event of interest for each patient, or how well the treatment effect on the marker can translate into the treatment effect on the clinical benefit on average. Multiple trials are needed to address the latter issue. Perhaps this contrast needs to be thought about in further research. Dr Kattan argues that the value of adding a new marker to a model that contains an existing set of markers is primarily about whether the addition can improve prediction substantially. The new model may be quite different from the old models, even in the functional form. Ideally, the best new model with the new marker should be sought and compared to the best old model without the new marker. For the comparison, prediction accuracy is the most important parameter, and thus, it might not be useful to look at correlation, hazard ratio, or p value in multivariate analyses. These are all reasonable points. In practice, however, there is often no gold standard even for defining the disease or actual clinical outcome we care about. Many clinical outcomes are still defined using markers, unless they are clearly symptomatic. 
In multivariate analyses, the cutoffs used for new markers are very likely to be affected by what established markers are included in the model. People still use these approaches for variable selection in model building, in addition to the domain expert’s suggestion. To me, use of these statistics for variable selection can be very confusing and the domain expert’s suggestion
Journal of Biopharmaceutical Statistics | 2012
William Mietlowski
This book identifies 26 issues judged by the author to be controversial. It is the goal of the author that the book will provide a vehicle for regulatory agencies, clinical scientists, and biostatisticians to resolve/correct these issues and potentially enhance the level of good clinical and good statistical practice in clinical trials. The book has 27 chapters: an introductory chapter outlining the aim and structure of the book and providing a high-level overview of the 26 controversial issues, and a separate chapter for each of these issues. Professor Chow and co-authors Lan-Yan Yang and Ying Lu published a synopsis of eight of the chapters (Chapters 5–12) in a 2011 issue of the Drug Information Journal (Chow et al., 2011). This publication was reviewed at the November 9, 2011, meeting of the Virtual Journal Club of the Drug Information Association (DIA) Statistics Special Interest Area Community (SIAC). Robert O’Neill of the Food and Drug Administration (FDA) and Jerald Schindler of Merck were formal discussants. I start my review here with the eight chapters summarized in the DIA publication, incorporating key comments from participants at the Virtual Journal Club meeting to supplement my own comments. I then review the remaining 18 chapters and conclude with some comments about some errors in the book (typographical and missing references) and some other potentially controversial issues that might be addressed in subsequent editions of the book. Chapter 5 suggests that a trial designed to have adequate power to detect a clinically meaningful treatment effect based on the prespecified primary variable, which generally measures efficacy, may not have adequate power to evaluate treatment performance based on both efficacy and safety (the controversy). The author proposes testing a composite hypothesis that takes both efficacy and safety into consideration. The alternative hypothesis for each component may be superiority, noninferiority, or equivalence, leading to nine different types of alternative composite hypotheses. Comment from Robert O’Neill: Safety issues may be better addressed in prospective meta-analyses of clinical trials—for example, cardiovascular risk with antidiabetic drugs—than as a composite endpoint in a single clinical trial. Reviewer’s comment: It may be difficult to prospectively specify the safety risk from a therapeutic agent with a novel mechanism of action, and the acceptable safety margin in a life-threatening indication may depend on the level of efficacy observed. Chapter 6 discusses the instability of sample size estimates, particularly if based on a small pilot study. In the Gaussian location setting, the sample size
Journal of Biopharmaceutical Statistics | 2012
William Mietlowski
Although few textbooks have dealt with frailty models in survival analysis in the past, Chapman and Hall/CRC Press published two books devoted to this topic in 2011. I reviewed the book written by Wienke (2011) in a recent issue of the Journal of Biopharmaceutical Statistics and recommended it (Mietlowski, 2011). My objective in reviewing the second recent book on frailty models, written by David D. Hanagal, was to assess how it can be used in conjunction with Wienke’s book. A comparison of these two books is given at the end of this review. David Hanagal’s book has three main parts. The first part, “Basic Concepts in Survival Analysis,” encompasses the first three chapters of the book. Chapter 1 introduces eight datasets used to illustrate concepts throughout the book and reviews basic definitions and notations. Chapter 2 describes five commonly used parametric survival distributions, maximum likelihood estimation, and regression models for each of the five parametric survival distributions. The chapter concludes with two applications of parametric regression models (myeloma and kidney dialysis datasets described in chapter 1) with R-program code and corresponding output based on the R-function survreg. Chapter 3 describes nonparametric and semiparametric models for survival data. The chapter begins with a discussion of various graphical methods (probability, hazard, and cumulative hazard plotting as well as graphical estimation). Since the methods are primarily used to examine goodness of fit for the parametric models described in chapter 2, perhaps this material might be better suited to chapter 2. Nonparametric estimation of the survival function (Kaplan–Meier and Nelson–Aalen estimates) is discussed and the appropriate R-program commands are provided. The log-rank and Wilcoxon tests for comparing two survival distributions are discussed, as well as Cox’s proportional hazards model. The R-program code for analyzing the myeloma data with the Cox model and corresponding output based on the R-function coxph are also presented. Part II, “Univariate and Shared Frailty Models for Survival Data,” is presented in chapters 4–10. Chapter 4 presents basic concepts and definitions of frailty, especially shared frailty. The author cites several literature references studying the effects of ignoring heterogeneity in survival analysis. The uses of frailty as a model for omitted covariates, frailty as a model of stochastic hazard, and identifiability issues are briefly discussed.
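As a minimal illustration of the survreg and coxph fits that the book's chapter 2 and 3 examples rely on, the sketch below fits a Weibull accelerated failure time model and a Cox model; the simulated data frame myeloma and its covariates are hypothetical placeholders standing in for the book's datasets.

```r
# Minimal sketch of a parametric (Weibull AFT, via survreg) and a
# semiparametric (Cox, via coxph) survival regression. Simulated data
# are placeholders, not the book's myeloma dataset.
library(survival)

set.seed(6)
n <- 100
myeloma <- data.frame(
  age        = rnorm(n, 65, 8),
  hemoglobin = rnorm(n, 11, 2)
)
myeloma$time   <- rweibull(n, shape = 1.2,
                           scale = 30 * exp(-0.02 * (myeloma$age - 65)))
myeloma$status <- rbinom(n, 1, 0.75)   # event indicator with some censoring

# Weibull accelerated failure time model (parametric)
fit_weib <- survreg(Surv(time, status) ~ age + hemoglobin,
                    data = myeloma, dist = "weibull")
summary(fit_weib)

# Cox proportional hazards model for comparison (semiparametric)
fit_cox <- coxph(Surv(time, status) ~ age + hemoglobin, data = myeloma)
summary(fit_cox)
```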