Publications


Featured research published by Ekkehard Glimm.


Statistics in Medicine | 2009

Adaptive designs for confirmatory clinical trials

Frank Bretz; Franz Koenig; Werner Brannath; Ekkehard Glimm; Martin Posch

Adaptive designs play an increasingly important role in clinical drug development. Such designs use accumulating data of an ongoing trial to decide how to modify design aspects without undermining the validity and integrity of the trial. Adaptive designs thus allow for a number of possible adaptations at interim: early stopping for either futility or success, sample size reassessment, change of population, etc. A particularly appealing application is the use of adaptive designs in combined phase II/III studies with treatment selection at interim. The expectation has arisen that carefully planned and conducted studies based on adaptive designs increase the efficiency of the drug development process by making better use of the observed data, thus leading to a higher information value per patient. In this paper we focus on adaptive designs for confirmatory clinical trials. We review the adaptive design methodology for a single null hypothesis and how to perform adaptive designs with multiple hypotheses using closed test procedures. We report the results of an extensive simulation study to evaluate the operating characteristics of the various methods. A case study and related numerical examples are used to illustrate the key results. In addition, we provide a detailed discussion of current methods to calculate point estimates and confidence intervals for relevant parameters.
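
One building block behind many of the adaptive designs reviewed here is a combination test that merges stage-wise p-values with pre-specified weights. The following is a minimal Python sketch of the inverse normal combination function, assuming one-sided stage-wise p-values and equal pre-planned weights; the function name and numbers are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def inverse_normal_combination(p1, p2, w1=0.5, w2=0.5):
    """Combine stage-wise one-sided p-values p1, p2 with pre-specified
    weights (w1 + w2 = 1); returns the combined p-value."""
    z = np.sqrt(w1) * norm.isf(p1) + np.sqrt(w2) * norm.isf(p2)
    return norm.sf(z)

# Hypothetical example: stage-wise p-values 0.08 and 0.03, equal weights.
p_comb = inverse_normal_combination(0.08, 0.03)
print(f"combined p-value: {p_comb:.4f}")  # reject H0 at alpha = 0.025 if below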


Biometrical Journal | 2011

Graphical approaches for multiple comparison procedures using weighted Bonferroni, Simes, or parametric tests.

Frank Bretz; Martin Posch; Ekkehard Glimm; Florian Klinglmueller; Willi Maurer; Kornelius Rohmeyer

The confirmatory analysis of pre-specified multiple hypotheses has become common in pivotal clinical trials. In the recent past, multiple test procedures have been developed that reflect the relative importance of different study objectives, such as fixed sequence, fallback, and gatekeeping procedures. In addition, graphical approaches have been proposed that facilitate the visualization and communication of Bonferroni-based closed test procedures for common multiple test problems, such as comparing several treatments with a control, assessing the benefit of a new drug for more than one endpoint, combined non-inferiority and superiority testing, or testing a treatment at different dose levels in an overall population and a subpopulation. In this paper, we focus on extended graphical approaches by dissociating the underlying weighting strategy from the employed test procedure. This allows one to first derive suitable weighting strategies that reflect the given study objectives and subsequently apply appropriate test procedures, such as weighted Bonferroni tests, weighted parametric tests accounting for the correlation between the test statistics, or weighted Simes tests. We illustrate the extended graphical approaches with several examples. In addition, we briefly describe the gMCP package in R, which implements some of the methods described in this paper.
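
The sequentially rejective core of the graphical approach can be stated in a few lines: reject any hypothesis whose p-value falls below its weighted Bonferroni threshold, then redistribute its weight along the graph and update the transition matrix. The paper's reference implementation is the gMCP package in R; the sketch below is an illustrative Python rendering of that update rule (function name and p-values hypothetical), shown on a Holm-type graph.

```python
import numpy as np

def graphical_bonferroni(pvals, weights, G, alpha=0.025):
    """Sequentially rejective graphical procedure with weighted Bonferroni
    tests: reject H_j if p_j <= w_j * alpha, pass its weight along the
    transition matrix G, and update the graph (Bretz et al. update rule)."""
    p = np.asarray(pvals, float)
    w = np.asarray(weights, float).copy()
    G = np.asarray(G, float).copy()
    m = len(p)
    rejected = np.zeros(m, bool)
    while True:
        candidates = np.where(~rejected & (p <= w * alpha))[0]
        if candidates.size == 0:
            return rejected
        j = candidates[0]
        rejected[j] = True
        w = w + w[j] * G[j, :]          # pass weight along outgoing edges
        Gnew = np.zeros_like(G)
        for l in range(m):
            for k in range(m):
                if l == k or l == j or k == j:
                    continue
                denom = 1.0 - G[l, j] * G[j, l]
                Gnew[l, k] = (G[l, k] + G[l, j] * G[j, k]) / denom if denom > 0 else 0.0
        G = Gnew
        w[j] = 0.0

# Holm procedure for three hypotheses as a graph: equal weights, and each
# rejected hypothesis splits its weight equally among the remaining ones.
w0 = [1/3, 1/3, 1/3]
G0 = [[0, .5, .5], [.5, 0, .5], [.5, .5, 0]]
print(graphical_bonferroni([0.004, 0.011, 0.021], w0, G0))  # all rejected
```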


Biometrical Journal | 2008

Unbiased estimation of selected treatment means in two-stage trials.

Jack Bowden; Ekkehard Glimm

Straightforward estimation of a treatment's effect in an adaptive clinical trial can be severely hindered when the treatment has been chosen from a larger group of potential candidates. This is because selection mechanisms that condition on the rank order of treatment statistics introduce bias. Nevertheless, designs of this sort are seen as a practical and efficient way to fast-track the most promising compounds in drug development. In this paper we extend the method of Cohen and Sackrowitz (1989), who proposed a two-stage unbiased estimate for the best-performing treatment at interim. This enables their estimate to work for unequal stage-one and stage-two sample sizes, and also when the quantity of interest is the best, second-best, or j-th best treatment out of k. The implications of this new flexibility are explored via simulation.
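
The bias that motivates this work is easy to reproduce by simulation. The following Python sketch (all numbers hypothetical) draws stage-one means for K treatments with identical true effects and reports the naive estimate for the selected, best-performing arm, which is systematically too large.

```python
import numpy as np

# Illustrative simulation of selection bias: all K treatments share the
# same true mean, yet the naive stage-one estimate of the "best" arm is
# biased upwards because it conditions on the rank order.
rng = np.random.default_rng(1)
K, n, true_mean, sigma = 4, 50, 0.0, 1.0
n_sim = 20_000

selected_means = np.empty(n_sim)
for i in range(n_sim):
    stage1 = rng.normal(true_mean, sigma, size=(K, n)).mean(axis=1)
    selected_means[i] = stage1.max()   # pick the best-performing arm

print(f"true mean: {true_mean}, mean estimate of the selected arm: "
      f"{selected_means.mean():.3f}")  # clearly above 0
```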


Statistics in Medicine | 2014

Model‐based dose finding under model uncertainty using general parametric models

José Pinheiro; Björn Bornkamp; Ekkehard Glimm; Frank Bretz

The statistical methodology for the design and analysis of clinical Phase II dose-response studies, with related software implementation, is well developed for the case of a normally distributed, homoscedastic response considered at a single time point in parallel-group study designs. In practice, however, binary, count, or time-to-event endpoints are encountered, typically measured repeatedly over time and sometimes in more complex settings such as crossover study designs. In this paper, we develop an overarching methodology to perform efficient multiple comparisons and modeling for dose finding, under uncertainty about the dose-response shape, using general parametric models. The framework described here is quite broad and can be utilized in situations involving, for example, generalized nonlinear models, linear and nonlinear mixed-effects models, and Cox proportional hazards models, with the main restriction being that a univariate dose-response relationship is modeled, that is, both dose and response correspond to univariate measurements. In addition to the core framework, we also develop a general-purpose methodology to fit dose-response data in a computationally and statistically efficient way. Several examples illustrate the breadth of applicability of the results. For the analyses, we developed the R add-on package DoseFinding, which provides a convenient interface to the general approach adopted here.
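
As a concrete example of one candidate shape in such a dose-finding framework, consider the Emax model f(d) = E0 + Emax * d / (ED50 + d). The paper's own implementation is the R add-on package DoseFinding; the sketch below merely illustrates fitting this one model with a generic nonlinear least-squares routine in Python, using made-up dose-response means.

```python
import numpy as np
from scipy.optimize import curve_fit

# Emax dose-response model: E0 (placebo effect), Emax (maximum effect),
# ED50 (dose giving half of the maximum effect).
def emax(d, e0, emax_, ed50):
    return e0 + emax_ * d / (ed50 + d)

doses = np.array([0, 10, 25, 50, 100, 150], float)
resp  = np.array([0.05, 0.22, 0.40, 0.57, 0.69, 0.73])  # hypothetical means

params, _ = curve_fit(emax, doses, resp, p0=[0.0, 1.0, 25.0])
print(dict(zip(["E0", "Emax", "ED50"], params.round(3))))
```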


Statistics in Biopharmaceutical Research | 2011

Multiple and Repeated Testing of Primary, Coprimary, and Secondary Hypotheses

Willi Maurer; Ekkehard Glimm; Frank Bretz

In confirmatory clinical trials, the Type I error rate must be controlled for claims forming the basis for approval and labeling of a new drug. Strong control of the familywise error rate is usually needed for hypotheses related to the primary endpoint(s). For hypotheses related to secondary endpoint(s), which are only of interest if the corresponding “parent” primary null hypotheses have been rejected, less strict error rate control might be sufficient. We review and extend procedures for families of primary and secondary hypotheses when either at least one of the primary hypotheses or all coprimary hypotheses must be rejected to claim success for the trial. Such families of hypotheses arise naturally from comparing several treatments with a control, combined noninferiority and superiority testing for primary and secondary variables, the presence of multiple primary or secondary endpoints, or any combination thereof. We show that many of the procedures proposed in the literature follow a common underlying principle and in some cases can be improved. In addition, we present some general results on Type I error rates for the different families and subfamilies of hypotheses and their relation to group-sequential testing of multiple hypotheses.
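
A minimal illustration of the parent-child logic described above: split alpha across the primary hypotheses, and test each secondary hypothesis, at its parent's alpha share, only if that parent was rejected. This simplified Python sketch (hypothetical p-values and function name, no weight recycling between branches) omits the improvements the paper develops.

```python
# Hypothetical sketch: Bonferroni split across primary hypotheses, with
# each secondary hypothesis tested at its parent's alpha share only if
# the parent primary hypothesis was rejected first.
def gatekept_rejections(p_primary, p_secondary, parent, alpha=0.025):
    share = alpha / len(p_primary)
    rej_primary = [p <= share for p in p_primary]
    rej_secondary = [
        rej_primary[parent[j]] and p <= share
        for j, p in enumerate(p_secondary)
    ]
    return rej_primary, rej_secondary

# Two primaries, each with one secondary endpoint.
print(gatekept_rejections([0.010, 0.030], [0.005, 0.004], parent=[0, 1]))
# -> ([True, False], [True, False]): the second secondary endpoint is not
# tested because its parent primary hypothesis was not rejected.
```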


Biometrical Journal | 2009

High-dimensional data analysis: Selection of variables, data compression and graphics – Application to gene expression

Jürgen Läuter; Friedemann Horn; Maciej Rosolowski; Ekkehard Glimm

The paper presents effective and mathematically exact procedures for the selection of variables that are applicable in very high-dimensional settings such as gene expression analysis. Choosing sets of variables is an important way to increase the power of the statistical conclusions and to facilitate the biological interpretation. For the construction of sets, each single variable is considered as the centre of potential sets of variables. Testing for significance is carried out by means of the Westfall-Young principle based on resampling, or by the parametric method of spherical tests. The particular requirements for statistical stability are taken into account, and each kind of overfitting is avoided. Thus, high power is attained and the familywise Type I error rate can be controlled in spite of the large dimension. To obtain graphical representations by heat maps and curves, a specific data compression technique is applied. Gene expression data from B-cell lymphoma patients serve to demonstrate the procedures.
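
The Westfall-Young resampling principle mentioned above can be sketched compactly: permute the group labels, recompute all variable-wise statistics, and adjust each observed statistic against the permutation distribution of the maximum. The Python sketch below is an illustrative single-step max-T version on simulated data, not the authors' exact procedure; all names and data are hypothetical.

```python
import numpy as np

def westfall_young_maxt(X, y, n_perm=2000, seed=0):
    """Single-step Westfall-Young adjustment: compare each observed
    variable-wise |t|-statistic with the permutation distribution of the
    maximum statistic (controls the familywise error rate)."""
    rng = np.random.default_rng(seed)

    def tstats(labels):
        a, b = X[labels == 0], X[labels == 1]
        se = np.sqrt(a.var(0, ddof=1) / len(a) + b.var(0, ddof=1) / len(b))
        return np.abs(a.mean(0) - b.mean(0)) / se

    t_obs = tstats(y)
    max_null = np.array([tstats(rng.permutation(y)).max() for _ in range(n_perm)])
    return (1 + (max_null[:, None] >= t_obs).sum(0)) / (n_perm + 1)  # adjusted p

# Toy "expression matrix": 40 samples x 500 genes, two groups of 20.
rng = np.random.default_rng(42)
X = rng.normal(size=(40, 500))
X[20:, :3] += 1.5                     # three truly differential genes
y = np.repeat([0, 1], 20)
p_adj = westfall_young_maxt(X, y)
print(np.where(p_adj < 0.05)[0])      # ideally recovers genes 0, 1, 2
```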


Biometrical Journal | 2014

Conditionally unbiased and near unbiased estimation of the selected treatment mean for multistage drop-the-losers trials.

Jack Bowden; Ekkehard Glimm

The two-stage drop-the-loser design provides a framework for selecting the most promising of K experimental treatments in stage one, in order to test it against a control in a confirmatory analysis at stage two. The multistage drop-the-losers design is both a natural extension of the original two-stage design and a special case of the more general framework of Stallard & Friede (2008) (Stat. Med. 27, 6209–6227). It may be a useful strategy if deselecting all but the best-performing treatment after one interim analysis is thought to pose an unacceptable risk of dropping the truly best treatment. However, estimation has yet to be considered for this design. Building on the work of Cohen & Sackrowitz (1989) (Stat. Prob. Lett. 8, 273–278), we derive unbiased and near-unbiased estimates in the multistage setting. Complications caused by the multistage selection process are shown to hinder a simple identification of the multistage uniform minimum variance conditionally unbiased estimate (UMVCUE); two separate but related estimators are therefore proposed, each possessing some of the UMVCUE's theoretical characteristics. For a specific example of a three-stage drop-the-losers trial, we compare their performance against several alternative estimators in terms of bias, mean squared error, confidence interval width and coverage.


Statistics in Medicine | 2014

Empirical Bayes estimation of the selected treatment mean for two-stage drop-the-loser trials: a meta-analytic approach.

Jack Bowden; Werner Brannath; Ekkehard Glimm

Point estimation for the selected treatment in a two-stage drop-the-loser trial is not straightforward, because a substantial bias can be induced in the standard maximum likelihood estimate (MLE) through the first-stage selection process. Research has generally focused on alternative estimation strategies that apply a bias correction to the MLE; however, such estimators can have a large mean squared error. Carreras and Brannath (Stat. Med. 32:1677-90) have recently proposed using a special form of shrinkage estimation in this context. Given certain assumptions, their estimator is shown to dominate the MLE in terms of mean squared error loss, which provides a very powerful argument for its use in practice. In this paper, we suggest the use of a more general form of shrinkage estimation in drop-the-loser trials that has parallels with model fitting in the area of meta-analysis. Several estimators are identified and are shown to perform favourably compared with Carreras and Brannath's original estimator and the MLE. However, they necessitate either explicit estimation of an additional parameter measuring the heterogeneity between treatment effects, or a quite unnatural prior distribution for the treatment effects that can only be specified after the first-stage data have been observed. Shrinkage methods are a powerful tool for accurately quantifying treatment effects in multi-arm clinical trials, and further research is needed to understand how to maximise their utility.
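
The shrinkage idea can be illustrated with a simple empirical Bayes estimator that pulls each arm's stage-one mean toward the grand mean, with the shrinkage factor driven by a method-of-moments estimate of the between-arm heterogeneity. This Python sketch is a generic illustration of the principle with hypothetical data and names, not the exact estimators studied in the paper.

```python
import numpy as np

# Generic shrinkage sketch (not the Carreras & Brannath estimator): pull
# each arm's stage-one mean toward the grand mean, with the shrinkage
# factor estimated from the between-arm heterogeneity, then report the
# shrunken value for the selected (best-performing) arm.
def shrunken_selected_mean(arm_means, se):
    K = len(arm_means)
    grand = arm_means.mean()
    # method-of-moments estimate of the between-arm variance tau^2
    tau2 = max(((arm_means - grand) ** 2).sum() / (K - 1) - se ** 2, 0.0)
    shrink = tau2 / (tau2 + se ** 2)          # in [0, 1]; 0 = full pooling
    selected = arm_means.argmax()
    return grand + shrink * (arm_means[selected] - grand)

rng = np.random.default_rng(7)
true_effects = np.array([0.1, 0.2, 0.3, 0.4])
n = 30
stage1 = rng.normal(true_effects, 1 / np.sqrt(n))
print("naive MLE of the selected arm:", stage1.max().round(3))
print("shrunken estimate:            ",
      shrunken_selected_mean(stage1, 1 / np.sqrt(n)).round(3))
```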


Journal of Biopharmaceutical Statistics | 2010

Comments on the Draft Guidance on “Adaptive Design Clinical Trials for Drugs and Biologics” of the U.S. Food and Drug Administration

Werner Brannath; Hans Ulrich Burger; Ekkehard Glimm; Nigel Stallard; Marc Vandemeulebroecke; Gernot Wassmer

The U.S. FDA has published a draft guidance on “Adaptive Design Clinical Trials for Drugs and Biologics”, which gives regulatory guidance on methodological issues in exploratory and confirmatory clinical trials planned with an adaptive design. This comment summarizes the discussion within the joint working group “Adaptive Designs and Multiple Testing Procedures” of the Austro-Swiss and German regions of the International Biometric Society, held during the 90-day public comment period in spring 2010.


Statistics in Medicine | 2014

Connections between permutation and t‐tests: relevance to adaptive methods

Michael A. Proschan; Ekkehard Glimm; Martin Posch

A permutation test assigns a p-value by conditioning on the data and treating the different possible treatment assignments as random. The fact that the conditional type I error rate given the data is controlled at level α ensures validity of the test even if certain adaptations are made. We show the connection between permutation and t-tests, and use this connection to explain why certain adaptations are valid in a t-test setting as well. We illustrate this with an example of blinded sample size recalculation.
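
The following is a minimal Python sketch of the two-sample permutation test described above, compared against the corresponding t-test; the data, group sizes, and function name are hypothetical.

```python
import numpy as np
from scipy.stats import ttest_ind

# Two-sample permutation test: condition on the pooled data and treat the
# treatment assignment as random, as described in the abstract above.
def permutation_pvalue(x, y, n_perm=10_000, seed=0):
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    obs = x.mean() - y.mean()
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        count += (perm[: len(x)].mean() - perm[len(x):].mean()) >= obs
    return (count + 1) / (n_perm + 1)   # one-sided p-value

rng = np.random.default_rng(3)
treat, control = rng.normal(0.5, 1, 30), rng.normal(0.0, 1, 30)
print("permutation p:", permutation_pvalue(treat, control))
print("t-test p:     ", ttest_ind(treat, control, alternative="greater").pvalue)
```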

Collaboration


Dive into Ekkehard Glimm's collaborations.

Top Co-Authors

Jürgen Läuter (Otto-von-Guericke University Magdeburg)
Siegfried Kropf (Otto-von-Guericke University Magdeburg)
John B. Porter (University College London)
Martin Posch (Medical University of Vienna)