Publication


Featured research published by Thomas Jaki.


Statistics and Computing | 2010

Probabilistic relabelling strategies for the label switching problem in Bayesian mixture models

Matthew Sperrin; Thomas Jaki; Ernst Wit

The label switching problem is caused by the likelihood of a Bayesian mixture model being invariant to permutations of the labels. The permutation can change multiple times between Markov chain Monte Carlo (MCMC) iterations, making it difficult to infer component-specific parameters of the model. Various so-called ‘relabelling’ strategies exist with the goal of ‘undoing’ the label switches that have occurred to enable estimation of functions that depend on component-specific parameters. Existing deterministic relabelling algorithms rely upon specifying a loss function and relabelling by minimising its posterior expected loss. In this paper we develop probabilistic approaches to relabelling that allow for estimation and incorporation of the uncertainty in the relabelling process. Variants of the probabilistic relabelling algorithm are introduced and compared to existing deterministic relabelling algorithms. We demonstrate that the idea of probabilistic relabelling can be expressed in a rigorous framework based on the EM algorithm.
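
The label switching phenomenon, and the deterministic loss-based relabelling that the paper contrasts with, can be illustrated with a toy example. The sketch below fabricates MCMC draws for a two-component mixture whose labels flip between iterations, then relabels each draw with the permutation that minimises a squared-error loss against a reference draw. This is a minimal stand-in, not the probabilistic relabelling algorithm developed in the paper, and all numbers are made up.

```python
# Toy illustration of label switching and a simple loss-based relabelling.
# Not the paper's probabilistic algorithm: for every MCMC draw we pick the
# label permutation that minimises a squared-error loss against a reference.
import numpy as np
from itertools import permutations

rng = np.random.default_rng(1)

# Fake "MCMC" draws of two component means (truth: 0 and 3), with the
# labels randomly flipped on roughly half of the iterations.
n_iter = 2000
draws = np.column_stack([rng.normal(0.0, 0.1, n_iter),
                         rng.normal(3.0, 0.1, n_iter)])
flip = rng.random(n_iter) < 0.5
draws[flip] = draws[flip, ::-1]

print("raw posterior means (corrupted by switching):", draws.mean(axis=0))

# Relabel: for each draw choose the label permutation closest (squared loss)
# to a reference point, here simply the first draw.
reference = draws[0]
relabelled = np.empty_like(draws)
for t, theta in enumerate(draws):
    best = min(permutations(range(2)),
               key=lambda p: np.sum((theta[list(p)] - reference) ** 2))
    relabelled[t] = theta[list(best)]

print("posterior means after relabelling:", relabelled.mean(axis=0))
```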


Developmental Psychology | 2009

Assessing differential effects: applying regression mixture models to identify variations in the influence of family resources on academic achievement.

M. Lee Van Horn; Thomas Jaki; Katherine E. Masyn; Sharon Landesman Ramey; Jessalyn Smith; Susan P. Antaramian

Developmental scientists frequently seek to understand the effects of environmental contexts on development. Traditional analytic strategies assume similar environmental effects for all children, sometimes exploring possible moderating influences or exceptions (e.g., outliers) as a secondary step. These strategies are poorly matched to ecological models of human development that posit complex individual-by-environment interactions. An alternative conceptual framework is proposed that tests the hypothesis that the environment has differential (nonuniform) effects on children. A demonstration of the utility of this framework is provided by examining the effects of family resources on children's academic outcomes in a multisite study (N = 6,305). Three distinctive groups of children were identified, including one group particularly resilient to the influence of low levels of family resources. Predictors of group differences, including parenting and child demographics, are tested, the replicability of the results is examined, and findings are contrasted with those obtained with traditional regression interaction effects. This approach is proposed as a partial solution to advance theories of the environment, social ecological systems research, and behavioral genetics to create well-tailored environments for children.
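
The core idea of differential effects can be sketched with a two-component mixture of linear regressions fitted by EM, where the slope of a "resources" predictor is allowed to differ across latent classes. The code below is a simplified, single-level illustration (no multisite structure, no covariates), not the regression mixture models used in the study; all variable names and effect sizes are invented.

```python
# Minimal EM for a two-component mixture of linear regressions, illustrating
# class-specific (differential) effects of a predictor on an outcome.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
resources = rng.normal(size=n)                       # standardised family resources
z = rng.random(n) < 0.4                              # latent class membership (truth)
# class 0: strong effect of resources; class 1 ("resilient"): weak effect
achievement = np.where(z, 0.2, 1.0) * resources + rng.normal(0, 0.5, n)

X = np.column_stack([np.ones(n), resources])
beta = np.array([[0.0, 0.5], [0.0, 1.5]])            # initial class-specific coefficients
sigma = np.array([1.0, 1.0])
pi = np.array([0.5, 0.5])

def normal_pdf(y, mu, s):
    return np.exp(-0.5 * ((y - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

for _ in range(200):
    # E-step: class responsibilities for each child
    dens = np.column_stack([pi[k] * normal_pdf(achievement, X @ beta[k], sigma[k])
                            for k in range(2)])
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weighted least squares and residual SD per class
    for k in range(2):
        w = resp[:, k]
        Xw = X * w[:, None]
        beta[k] = np.linalg.solve(X.T @ Xw, Xw.T @ achievement)
        resid = achievement - X @ beta[k]
        sigma[k] = np.sqrt((w * resid ** 2).sum() / w.sum())
    pi = resp.mean(axis=0)

print("class weights:", pi)
print("class-specific slopes for resources:", beta[:, 1])
```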


Statistics in Medicine | 2012

Optimal design of multi‐arm multi‐stage trials

James Wason; Thomas Jaki

In drug development, there is often uncertainty about the most promising among a set of different treatments. Multi-arm multi-stage (MAMS) trials provide large gains in efficiency over separate randomised trials of each treatment. They allow a shared control group, dropping of ineffective treatments before the end of the trial and stopping the trial early if sufficient evidence of a treatment being superior to control is found. In this paper, we discuss optimal design of MAMS trials. An optimal design has the required type I error rate and power but minimises the expected sample size at some set of treatment effects. Finding an optimal design requires searching over stopping boundaries and sample size, potentially a large number of parameters. We propose a method that combines quick evaluation of specific designs and an efficient stochastic search to find the optimal design parameters. We compare various potential designs motivated by the design of a phase II MAMS trial. We also consider allocating more patients to the control group, as has been carried out in real MAMS studies. We show that the optimal allocation to the control group, although greater than a 1:1 ratio, is smaller than previously advocated and that the gain in efficiency is generally small.
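
The inner step of such an optimisation, evaluating the operating characteristics of one candidate design, can be sketched by Monte Carlo. The code below simulates a two-stage design with two experimental arms and a shared control under given futility and efficacy boundaries and reports the rejection probability and expected sample size. The paper combines an exact evaluation with a stochastic search over boundaries rather than plain simulation, so this is only an illustration, and the boundary values and sample sizes are made up, not optimal.

```python
# Monte Carlo evaluation of one candidate two-stage MAMS design
# (two experimental arms plus control, normal outcome with known SD).
import numpy as np

rng = np.random.default_rng(42)

def evaluate(n_per_stage, futility, efficacy, deltas, sd=1.0, n_sim=20000):
    """Return (probability any arm is declared superior, expected total sample size)."""
    k = len(deltas)
    se1 = sd * np.sqrt(2.0 / n_per_stage)        # SE of a stage-1 difference in means
    se2 = sd * np.sqrt(1.0 / n_per_stage)        # SE of the cumulative two-stage difference
    reject_any, total_n = 0, 0.0
    for _ in range(n_sim):
        # per-stage sample means: control (2 stages) and each experimental arm (k x 2)
        c = rng.normal(0.0, sd / np.sqrt(n_per_stage), size=2)
        x = rng.normal(np.asarray(deltas)[:, None], sd / np.sqrt(n_per_stage), size=(k, 2))
        n_used = (k + 1) * n_per_stage
        z1 = (x[:, 0] - c[0]) / se1
        if np.any(z1 > efficacy[0]):             # stop the whole trial at the first efficacy claim
            reject_any += 1
            total_n += n_used
            continue
        active = z1 > futility                   # drop futile arms after stage 1
        if active.any():
            n_used += (active.sum() + 1) * n_per_stage
            z2 = (x.mean(axis=1) - c.mean()) / se2   # cumulative z-statistics at stage 2
            reject_any += bool(np.any(z2[active] > efficacy[1]))
        total_n += n_used
    return reject_any / n_sim, total_n / n_sim

design = dict(n_per_stage=40, futility=0.0, efficacy=(2.5, 2.2))
print("FWER, E[N] under the null:", evaluate(deltas=[0.0, 0.0], **design))
print("power, E[N] under one effective arm:", evaluate(deltas=[0.5, 0.0], **design))
```

Wrapping `evaluate` in a search over `futility`, `efficacy` and `n_per_stage`, subject to type I error and power constraints, would mimic, very roughly, the kind of optimisation the paper describes.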


Statistical Methods in Medical Research | 2016

Some recommendations for multi-arm multi-stage trials

James Wason; Dominic Magirr; Martin Law; Thomas Jaki

Multi-arm multi-stage designs can improve the efficiency of the drug-development process by evaluating multiple experimental arms against a common control within one trial. This reduces the number of patients required compared to a series of trials testing each experimental arm separately against control. By allowing for multiple stages, experimental treatments can be eliminated early from the study if they are unlikely to be significantly better than control. Using the TAILoR trial as a motivating example, we explore a broad range of statistical issues related to multi-arm multi-stage trials, including: a comparison of different ways to power a multi-arm multi-stage trial; choosing the allocation ratio to the control group compared to other experimental arms; the consequences of adding additional experimental arms during a multi-arm multi-stage trial, and how one might control the type I error rate when this is necessary; and modifying the stopping boundaries of a multi-arm multi-stage design to account for unknown variance in the treatment outcome. Multi-arm multi-stage trials represent a large financial investment, so considering their design carefully is important to ensure efficiency and a good chance of success.
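
One ingredient in the allocation-ratio discussion is the correlation that the shared control group induces between the arm-versus-control test statistics. The sketch below computes the familywise type I error and a Dunnett-type critical value for a single-stage trial as a function of the control allocation ratio, assuming normal outcomes with known variance; it is a simplified illustration of that correlation structure, not the paper's analysis, and the ratios tried are arbitrary.

```python
# Familywise type I error for k experimental arms sharing one control, as a
# function of the control allocation ratio r (control receives r*n patients
# when each experimental arm receives n).  Under the global null the one-sided
# z-statistics are multivariate normal with pairwise correlation 1/(1+r).
import numpy as np
from scipy.stats import multivariate_normal, norm
from scipy.optimize import brentq

def fwer(c, k, r):
    """P(any of k arm-vs-control z-statistics exceeds c) under the global null."""
    rho = 1.0 / (1.0 + r)                         # correlation induced by the shared control
    cov = np.full((k, k), rho)
    np.fill_diagonal(cov, 1.0)
    return 1.0 - multivariate_normal(mean=np.zeros(k), cov=cov).cdf(np.full(k, c))

k = 3                                             # number of experimental arms
for r in (1.0, np.sqrt(k), 2.0):
    unadjusted = fwer(norm.ppf(0.975), k, r)      # error if each test is run at one-sided 2.5%
    dunnett_c = brentq(lambda c: fwer(c, k, r) - 0.025, 1.5, 4.0, xtol=1e-4)
    print(f"r = {r:.2f}: FWER at z = 1.96 is {unadjusted:.3f}, "
          f"Dunnett-type critical value for 2.5% FWER is {dunnett_c:.3f}")
```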


Multivariate Behavioral Research | 2008

Using Multilevel Mixtures to Evaluate Intervention Effects in Group Randomized Trials.

M. Lee Van Horn; Abigail A. Fagan; Thomas Jaki; Eric C. Brown; J. David Hawkins; Michael W. Arthur; Robert D. Abbott; Richard F. Catalano

There is evidence to suggest that the effects of behavioral interventions may be limited to specific types of individuals, but methods for evaluating such outcomes have not been fully developed. This study proposes the use of finite mixture models to evaluate whether interventions, specifically group randomized trials, impact participants with certain characteristics or levels of problem behaviors. This study uses latent classes defined by clustering of individuals based on the targeted behaviors and illustrates the model by testing whether a preventive intervention aimed at reducing problem behaviors affects experimental users of illicit substances differently than problematic substance users or those individuals engaged in more serious problem behaviors. An illustrative example is used to demonstrate the identification of latent classes, the specification of random effects in a multilevel mixture model, the independent validation of latent classes, and the estimation of power for the proposed models to detect intervention effects. This study proposes specific steps for the estimation of multilevel mixture models and their power and suggests that this model can be applied more broadly to understand the effectiveness of interventions.
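
A crude two-step version of the underlying question (do intervention effects differ across latent classes of baseline behaviour?) can be sketched as follows: classify individuals with a fitted Gaussian mixture, then compare treated and untreated outcomes within each class. The paper instead fits a joint multilevel mixture model with random effects for groups, so this sketch, which assumes scikit-learn is available and uses entirely simulated data, only conveys the motivating idea.

```python
# "Classify then analyse" sketch: form latent classes from baseline
# problem-behaviour measures, then check whether the intervention effect
# differs across classes.  Illustrative only; all numbers are made up.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)
n = 3000
treated = rng.random(n) < 0.5
# baseline behaviours: "experimental" users (class 0) vs heavier users (class 1)
heavy = rng.random(n) < 0.3
baseline = rng.normal(np.where(heavy, 2.0, 0.0), 1.0, size=(2, n)).T
# follow-up outcome: the intervention helps only the lighter-use class
outcome = 0.5 * baseline.mean(axis=1) - 0.4 * treated * (~heavy) + rng.normal(0, 1, n)

classes = GaussianMixture(n_components=2, random_state=0).fit_predict(baseline)
for k in range(2):
    sel = classes == k
    effect = outcome[sel & treated].mean() - outcome[sel & ~treated].mean()
    print(f"class {k} (n={sel.sum()}): estimated intervention effect {effect:+.2f}")
```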


Statistics in Medicine | 2009

One- and two-stage design proposals for a phase II trial comparing three active treatments with control using an ordered categorical endpoint.

John Whitehead; Thomas Jaki

Phase II clinical trials are performed to investigate whether a novel treatment shows sufficient promise of efficacy to justify its evaluation in a subsequent definitive phase III trial, and they are often also used to select the dose to take forward. In this paper we discuss different design proposals for a phase II trial in which three active treatment doses and a placebo control are to be compared in terms of a single ordered categorical endpoint. The sample size requirements for one-stage and two-stage designs are derived, based on an approach similar to that of Dunnett. Detailed computations are prepared for an illustrative example concerning a study in stroke. Allowance for early stopping for futility is made. Simulations are used to verify that the specified type I error and power requirements are valid, despite certain approximations used in the derivation of sample size. The advantages and disadvantages of the different designs are discussed, and the scope for extending the approach to different forms of endpoint is considered.
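
The operating characteristics of a single-stage version of such a design can be checked by simulation. The sketch below generates an ordered categorical endpoint under a proportional-odds shift for three doses versus control and compares each dose with control using a Wilcoxon-Mann-Whitney test at a Bonferroni-adjusted level; the paper instead derives sample sizes analytically with a Dunnett-style adjustment, so this is only a rough stand-in, with invented category probabilities and odds ratios.

```python
# Simulation of a single-stage comparison of three doses with control on an
# ordered categorical endpoint, using Bonferroni-adjusted rank tests.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(3)

def shift_proportional_odds(control_probs, log_odds_ratio):
    """Category probabilities after a proportional-odds shift towards higher
    (better) categories when the log odds ratio is positive."""
    cum = np.cumsum(control_probs)[:-1]                      # P(Y <= k) under control
    logit = np.log(cum / (1.0 - cum))
    cum_new = 1.0 / (1.0 + np.exp(-(logit - log_odds_ratio)))
    return np.diff(np.concatenate([[0.0], cum_new, [1.0]]))

control = np.array([0.2, 0.3, 0.3, 0.2])                     # four ordered outcome categories
doses = [shift_proportional_odds(control, lor) for lor in (0.0, 0.4, 0.8)]

def prob_any_dose_superior(n_per_arm=100, alpha=0.05, n_sim=2000):
    wins = 0
    for _ in range(n_sim):
        y0 = rng.choice(4, n_per_arm, p=control)
        rejected = False
        for p in doses:
            y = rng.choice(4, n_per_arm, p=p)
            # Bonferroni-adjusted one-sided comparison of each dose with control
            if mannwhitneyu(y, y0, alternative="greater").pvalue < alpha / len(doses):
                rejected = True
        wins += rejected
    return wins / n_sim

print("probability of declaring at least one dose superior:", prob_any_dose_superior())
```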


Journal of Pharmacokinetics and Pharmacodynamics | 2005

Estimation of AUC from 0 to Infinity in Serial Sacrifice Designs.

Martin J. Wolfsegger; Thomas Jaki

Nonclinical in vivo animal studies have to be completed before starting clinical studies of the pharmacokinetic behavior of a drug in humans. Drug exposure in animal studies is often measured by the area under the concentration-time curve (AUC). The classical complete data design, where each animal is sampled for analysis once per time point, is usually only applicable for large animals. In the case of rats and mice, where blood sampling is restricted, the batch design or the serial sacrifice design needs to be considered. In batch designs samples are taken more than once from each animal, but not at all time points. In serial sacrifice designs only one sample is taken from each animal. This paper presents an estimator for AUC from 0 to infinity in serial sacrifice designs, the corresponding variance and its asymptotic distribution.
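
The quantity being estimated can be illustrated with a simple plug-in calculation: in a serial sacrifice design each animal contributes a single concentration, so per-time-point means enter a trapezoidal rule, and the tail beyond the last sampling time is extrapolated from a log-linear fit to the terminal phase. The sketch below does exactly that on simulated data; it is not the paper's estimator and omits the variance and asymptotic results that are its main contribution.

```python
# Plug-in AUC(0-infinity) sketch for a serial sacrifice design.
import numpy as np

rng = np.random.default_rng(11)
times = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 24.0])       # sacrifice times (h)
animals_per_time = 5
true_conc = 10.0 * np.exp(-0.15 * times)                # underlying mono-exponential decay

# one measurement per animal: rows are animals, columns are sacrifice times
conc = true_conc * rng.lognormal(0.0, 0.2, size=(animals_per_time, len(times)))
mean_conc = conc.mean(axis=0)                           # plug-in concentration per time point

# trapezoidal AUC between the first and last sacrifice times
auc_obs = np.sum(np.diff(times) * (mean_conc[:-1] + mean_conc[1:]) / 2.0)

# log-linear fit to the last three time points to extrapolate the tail
slope, _ = np.polyfit(times[-3:], np.log(mean_conc[-3:]), 1)
lambda_z = -slope
auc_tail = mean_conc[-1] / lambda_z

print(f"AUC({times[0]}-{times[-1]} h) = {auc_obs:.1f}, extrapolated tail = {auc_tail:.1f}")
print(f"AUC to infinity (ignoring the pre-{times[0]} h segment) = {auc_obs + auc_tail:.1f}")
```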


Toxicology and Applied Pharmacology | 2009

A note on statistical analysis of organ weights in non-clinical toxicological studies.

Martin J. Wolfsegger; Thomas Jaki; Barbara Dietrich; Jackie Kunzler; Kerry Barker

Statistical comparisons of organ weights between treated and untreated animals have traditionally been used to predict potential toxicity for patients. The manner of presentation of organ weight data, and the value of statistical analyses, have been topics of discussion. Historically, a decision-tree approach has been applied for the statistical comparison of organ weights; this approach does not control the overall error rate and can lead to different statistical tests being used by chance in identical settings, causing confusion. This paper proposes a simple nonparametric approach for assessing treatment effects on organ weights in terms of ratios based on the Hodges-Lehmann estimator. This allows for simple interpretation of results and aids in the identification of potential target organs, as the evaluation is based on effect sizes rather than p-values, allowing a robust proof of effect as well as a robust proof of no effect. The proposed estimate and the corresponding nonparametric confidence interval applied to a rank-sum score can be used as a confirmatory test for difference and as a confirmatory test for equivalence. Exploratory analyses can be performed by calculating the proposed estimates for each organ separately and summarizing them graphically in a confidence interval plot.
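
A minimal version of this type of analysis is to apply the Hodges-Lehmann estimator to log-transformed organ weights and exponentiate, giving a treated-to-control ratio with a distribution-free confidence interval. The sketch below uses a normal approximation for the confidence-interval ranks and invented organ weights; the exact procedure and scoring used in the paper may differ.

```python
# Hodges-Lehmann ratio estimate for organ weights on the log scale, with an
# approximate distribution-free confidence interval from ordered pairwise
# differences.  Illustrative only.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
control = rng.lognormal(mean=np.log(1.20), sigma=0.10, size=10)   # organ weights (g)
treated = rng.lognormal(mean=np.log(1.08), sigma=0.10, size=10)   # ~10% lower on average

# all pairwise differences of log weights (treated minus control), sorted
diffs = np.sort((np.log(treated)[:, None] - np.log(control)[None, :]).ravel())
m, n = len(treated), len(control)

hl_ratio = np.exp(np.median(diffs))                               # Hodges-Lehmann ratio

# normal-approximation rank for a ~95% distribution-free CI
k = int(np.floor(m * n / 2 - norm.ppf(0.975) * np.sqrt(m * n * (m + n + 1) / 12.0)))
lower, upper = np.exp(diffs[k]), np.exp(diffs[m * n - 1 - k])

print(f"treated/control ratio estimate: {hl_ratio:.3f}")
print(f"approx. 95% CI: ({lower:.3f}, {upper:.3f})")
```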


Pharmaceutical Statistics | 2009

Confidence intervals for ratios of AUCs in the case of serial sampling: a comparison of seven methods.

Thomas Jaki; Martin J. Wolfsegger; Meinhard Ploner

Pharmacokinetic studies are commonly performed using the two-stage approach. The first stage involves estimation of pharmacokinetic parameters, such as the area under the concentration versus time curve (AUC), for each analysis subject separately, and the second stage uses the individual parameter estimates for statistical inference. This two-stage approach is not applicable in sparse sampling situations where only one sample is available per analysis subject, as in non-clinical in vivo studies. In a serial sampling design, only one sample is taken from each analysis subject. A simulation study was carried out to assess the coverage, power and type I error of seven methods for constructing two-sided 90% confidence intervals for ratios of two AUCs assessed in a serial sampling design, which can be used to assess bioequivalence in this parameter.
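
The flavour of the problem can be sketched as follows: estimate each AUC as a weighted sum of per-time-point mean concentrations (trapezoidal weights), obtain its variance from the per-time-point sample variances, and combine the two AUCs into a Fieller-type interval for their ratio. This is a generic construction in the spirit of the compared methods, not a reproduction of any particular one of the seven, and all concentrations below are simulated.

```python
# Fieller-type confidence interval for a ratio of two AUCs estimated from
# serial sampling data (each animal contributes one concentration).
import numpy as np
from scipy.stats import norm

def auc_and_var(conc, times):
    """conc: (animals, time points); trapezoidal AUC of the means and its variance."""
    w = np.zeros(len(times))
    dt = np.diff(times)
    w[:-1] += dt / 2.0
    w[1:] += dt / 2.0
    means, var = conc.mean(axis=0), conc.var(axis=0, ddof=1)
    n = conc.shape[0]
    return w @ means, w ** 2 @ (var / n)

def fieller_ratio_ci(a1, v1, a2, v2, level=0.90):
    """Fieller-type CI for a1/a2, assuming independent, approximately normal AUC estimates."""
    z = norm.ppf(0.5 + level / 2.0)
    qa = a2 ** 2 - z ** 2 * v2
    qb = -2.0 * a1 * a2
    qc = a1 ** 2 - z ** 2 * v1
    disc = np.sqrt(qb ** 2 - 4.0 * qa * qc)
    return ((-qb - disc) / (2.0 * qa), (-qb + disc) / (2.0 * qa))

rng = np.random.default_rng(8)
times = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
ref = 10.0 * np.exp(-0.2 * times) * rng.lognormal(0, 0.15, size=(6, 5))
test = 9.0 * np.exp(-0.2 * times) * rng.lognormal(0, 0.15, size=(6, 5))

a_ref, v_ref = auc_and_var(ref, times)
a_test, v_test = auc_and_var(test, times)
print("ratio of AUCs:", round(a_test / a_ref, 3),
      " 90% Fieller-type CI:",
      tuple(round(x, 3) for x in fieller_ratio_ci(a_test, v_test, a_ref, v_ref)))
```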


Clinical Trials | 2013

Uptake of novel statistical methods for early phase clinical studies in the UK public sector

Thomas Jaki

Background: In recent years, the success rate of confirmatory studies has been poor, resulting in more emphasis on the conduct of exploratory studies. As one possibility to improve decision-making during the early stages of development, adaptive and Bayesian methods have been recommended.

Purpose: To investigate current practice in designing early-phase studies in UK public sector research institutions, the use of adaptive and Bayesian methods in particular, and to determine factors that hinder the penetration of methodological advances into practice.

Methods: A questionnaire was sent to all UK clinical trials units (CTUs) to gauge their involvement in early-phase studies and to learn about the designs used in these studies. Follow-up visits to units conducting early-phase studies were undertaken, with round-table discussions of the methods used and the obstacles faced when using adaptive methods.

Results: More than half of the CTUs are involved in early-phase studies, but conservatism in the methods used in these studies is present. Reasons for novel methodology not being used include a lack of expertise, incompatible funding and unit structure, and a lack of software.

Limitations: Information is collected from UK CTUs, which undertake a large portion (but not all) of publicly funded trials.

Conclusions: The use of adaptive and Bayesian methods for early-phase clinical studies in the UK public sector is at present limited. However, various initiatives aim to support and facilitate the use of these methods, so that increased use can be anticipated in the future.
