Network


Latest external collaboration at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Ting-Li Su is active.

Publication


Featured research published by Ting-Li Su.


Statistics in Medicine | 2013

Designing exploratory cancer trials using change in tumour size as primary endpoint.

Thomas Jaki; Valérie André; Ting-Li Su; John Whitehead

In phase III cancer clinical trials, overall survival is commonly used as the definitive endpoint. In phase II clinical trials, however, more immediate endpoints such as incidence of complete or partial response within 1 or 2 months or progression-free survival (PFS) are generally used. Because of the limited ability to detect change in overall survival with response, the inherent variability of PFS, and the long wait for progression to be observed, more informative and immediate alternatives to overall survival are desirable in exploratory phase II trials. In this paper, we show how comparative trials can be designed and analysed using change in tumour size as the primary endpoint. The test developed is based on the framework of score statistics and formally incorporates information on whether patients survive until the time at which change in tumour size is assessed. Using an example in non-small cell lung cancer, we show that the sample size requirements for a trial based on change in tumour size are favourable compared with alternative randomized trials and demonstrate that these conclusions are robust to our assumptions.
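The abstract's sample-size claim can be made concrete with the standard two-sample normal-approximation formula for a continuous endpoint such as mean log tumour-size ratio. This is a minimal sketch, not the authors' score test; the effect size and standard deviation below are hypothetical.

```python
# Sketch: per-arm sample size for comparing mean log tumour-size ratios
# between two arms via a two-sided two-sample z-test. NOT the score test
# of Jaki et al.; delta and sigma are illustrative assumptions.
from scipy.stats import norm

def n_per_arm(delta, sigma, alpha=0.05, power=0.8):
    """Standard normal-approximation sample size per arm."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return 2 * ((z_a + z_b) * sigma / delta) ** 2

# e.g. detect a 0.2 difference in mean log tumour-size ratio, SD 0.5
print(round(n_per_arm(delta=0.2, sigma=0.5)))  # ~98 per arm
```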


Statistical Methods in Medical Research | 2018

A review of statistical updating methods for clinical prediction models

Ting-Li Su; Thomas Jaki; Graeme L. Hickey; Iain Buchan; Matthew Sperrin

A clinical prediction model is a tool for predicting healthcare outcomes, usually within a specific population and context. A common approach is to develop a new clinical prediction model for each population and context; however, this wastes potentially useful historical information. A better approach is to update or incorporate the existing clinical prediction models already developed for use in similar contexts or populations. In addition, clinical prediction models commonly become miscalibrated over time, and need replacing or updating. In this article, we review a range of approaches for re-using and updating clinical prediction models; these fall into three main categories: simple coefficient updating, combining multiple previous clinical prediction models in a meta-model, and dynamic updating of models. We evaluated the performance (discrimination and calibration) of the different strategies using data on mortality following cardiac surgery in the United Kingdom. We found that no single strategy performed sufficiently well to be used to the exclusion of the others. In conclusion, useful tools exist for updating existing clinical prediction models to a new population or context, and these, drawing on a breadth of complementary statistical methods, should be used in preference to developing a new clinical prediction model from scratch.
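The simplest of the review's three categories, coefficient updating, can be illustrated by logistic recalibration: refitting only the intercept and a calibration slope on the old model's linear predictor in new data. A minimal sketch follows, with simulated data standing in for a real cohort.

```python
# Sketch of logistic recalibration, the simplest coefficient-updating
# strategy: refit intercept and slope on the existing model's linear
# predictor, leaving the relative covariate weights unchanged.
# Data are simulated; no real CPM is used here.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
lp = rng.normal(size=500)  # linear predictor from the existing (old) CPM
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + 0.8 * lp))))  # new outcomes

fit = sm.Logit(y, sm.add_constant(lp)).fit(disp=0)
a, b = fit.params          # updated intercept and calibration slope
p_updated = 1 / (1 + np.exp(-(a + b * lp)))
print(f"intercept={a:.2f}, calibration slope={b:.2f}")
```

A slope near 1 and intercept near 0 would indicate the old model is still well calibrated; systematic departures motivate the fuller updating strategies the review compares.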


Journal of Dental Research | 2017

Longitudinal study of caries development from childhood to adolescence

Emma Hall-Scullin; Hilary Whitehead; K. M. Milsom; Martin Tickle; Ting-Li Su; Tanya Walsh

The World Health Organization (WHO) stated that globally, dental caries is the most important oral condition. To develop effective prevention strategies requires an understanding of how this condition develops and progresses over time, but there are few longitudinal studies of caries onset and progression in children. The aim of the study was to establish the pattern of caries development from childhood into adolescence and to explore the role of potential risk factors (age, sex, ethnicity, and social deprivation). Of particular interest was the disease trajectory of dentinal caries in the permanent teeth in groups defined by the presence or absence of dentinal caries in the primary teeth. Intraoral examinations to assess oral health were performed at 4 time points by trained and calibrated dentist examiners using a standardized, national diagnostic protocol. Clinical data were available from 6,651 children. Mean caries prevalence (% D3MFT > 0) was 16.7% at the first clinical examination (ages 7–9 y), increasing to 31.0%, 42.2%, and 45.7% at subsequent examinations. A population-averaged model (generalized estimating equations) was used to model the longitudinal data. Estimated mean values indicated a rising D3MFT count as pupils aged (consistent with new teeth emerging), which was significantly higher (4.49 times; 95% confidence interval, 3.90–5.16) in those pupils with caries in their primary dentition than in those without. This study is one of the few large longitudinal studies to report the development of dental caries from childhood into adolescence. Children who developed caries in their primary dentition had a very different caries trajectory in their permanent dentition compared to their caries-free contemporaries. In light of these results, caries-free and caries-active children should be considered as 2 separate populations, suggesting different prevention strategies are required to address their different risk profiles.
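The population-averaged model described above can be sketched with a generalized estimating equation for repeated caries counts per child. The data frame, column names, and effect sizes below are hypothetical, chosen only to mirror the study's structure (four examinations per child, a primary-dentition caries indicator).

```python
# Sketch of a GEE population-averaged model for repeated D3MFT counts,
# with an exchangeable working correlation for exams on the same child.
# All data are simulated; column names are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n, visits = 200, 4
df = pd.DataFrame({
    "child_id": np.repeat(np.arange(n), visits),
    "exam_age": np.tile([8, 10, 12, 14], n),
    "primary_caries": np.repeat(rng.binomial(1, 0.3, n), visits),
})
rate = np.exp(-3 + 0.2 * df.exam_age + 0.8 * df.primary_caries)
df["d3mft"] = rng.poisson(rate)

model = smf.gee("d3mft ~ exam_age + primary_caries", groups="child_id",
                data=df, family=sm.families.Poisson(),
                cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())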


Pharmaceutical Statistics | 2013

Investigation of the robustness of two models for assessing synergy in pre-clinical drug combination studies.

Anne Whitehead; Ting-Li Su; Helene Thygesen; Matthew Sperrin; Chris Harbron

Pre-clinical studies may be used to screen for synergistic combinations of drugs. The types of in vitro assays used for this purpose will depend upon the disease area of interest. In oncology, one frequently used study measures cell line viability: cells placed into wells on a plate are treated with doses of two compounds, and cell viability is assessed from an optical density measurement corrected for blank well values. These measurements are often transformed and analysed as cell survival relative to untreated wells. The monotherapies are assumed to follow the Hill equation with lower and upper asymptotes at 0 and 1, respectively. Additionally, a common variance about the dose-response curve may be assumed. In this paper, we consider two models for incorporating synergy parameters. We investigate the effect of different models of biological variation on the assessment of synergy from both of these models. We show that estimates of the synergy parameters appear to be robust, even when estimates of the other model parameters are biased. Using untransformed measurements provides better coverage of the 95% confidence intervals for the synergy parameters than using transformed measurements, and the requirement to fit the upper asymptote does not cause difficulties. Assuming homoscedastic variances appears to be robust. The added complexity of determining and fitting an appropriate heteroscedastic model does not seem to be justified.
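The monotherapy model assumed above, a Hill equation with asymptotes fixed at 0 and 1, can be fitted by nonlinear least squares. This sketch covers only the monotherapy step, not the paper's two synergy models; the doses and responses are simulated.

```python
# Sketch: fitting the monotherapy Hill equation with lower and upper
# asymptotes fixed at 0 and 1. Simulated data; not from the paper.
import numpy as np
from scipy.optimize import curve_fit

def hill(dose, ec50, slope):
    """Cell survival relative to untreated wells: 1 at dose 0, -> 0."""
    return 1.0 / (1.0 + (dose / ec50) ** slope)

rng = np.random.default_rng(1)
dose = np.logspace(-2, 2, 9)                       # hypothetical dose grid
obs = hill(dose, ec50=1.0, slope=1.5) + rng.normal(0, 0.03, dose.size)

(ec50_hat, slope_hat), _ = curve_fit(hill, dose, obs, p0=[1.0, 1.0])
print(f"EC50 = {ec50_hat:.2f}, Hill slope = {slope_hat:.2f}")
```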


Plastic and Reconstructive Surgery | 2016

Facial Aesthetic Outcomes of Cleft Surgery: Assessment of Discrete Lip and Nose Images Compared with Digital Symmetry Analysis

Ciara E. Deall; Nirvana S S Kornmann; Husam Bella; Katy Wallis; Joseph Hardwicke; Ting-Li Su; Bruce Richard

Background: High-quality aesthetic outcomes are of paramount importance to children growing up after cleft lip and palate surgery. Establishing a validated and reliable assessment tool for cleft professionals and families will enable cleft units, surgeons, techniques, and protocols to be audited and compared with greater confidence. This study used exemplar images across a five-point aesthetic scale, identified in a pilot project, to score lips and noses as separate units and compared these human scores with computer-based SymNose symmetry scores. Methods: Forty-five assessors (17 cleft surgeons nationally and 28 other cleft professionals from the UK South West Tri-centre units) scored 25 standardized photographs, uploaded in random order onto a Web-based platform, twice. Each photograph was shown in three forms: lip and nose together, and separately cropped images of nose only and lip only. The same images were analyzed using the SymNose software program. Results: Scoring lips gave the best intrarater and interrater reliabilities. Nose scores were more variable. Lip scoring associated most closely with the whole-image score. SymNose ranking of the lip images related highly to the same ranking by humans (p = 0.001). The exemplar images maintained their previously established ranking. Conclusions: Images illustrating the aesthetic outcome grades are confirmed. The lip score is reliable and seems to dominate the whole-image score. Noses are much harder to score reliably. It appears that SymNose can score lip images very effectively by symmetry. Further use of SymNose will be investigated, and families of children with cleft will trial the scoring system. Clinical Question/Level of Evidence: Therapeutic, III.
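SymNose's actual algorithm is not described in this abstract; as a purely illustrative stand-in, the idea of an automated symmetry score can be sketched as the difference between an image and its horizontal mirror.

```python
# Illustrative only: a naive left-right symmetry score in the spirit of
# an automated tool like SymNose. This is NOT SymNose's algorithm.
# Score is the mean absolute difference between an image and its
# horizontal mirror, so 0 means perfect symmetry about the midline.
import numpy as np

def symmetry_score(image: np.ndarray) -> float:
    """image: 2-D greyscale float array, midline at the centre column."""
    mirrored = image[:, ::-1]
    return float(np.mean(np.abs(image - mirrored)))

img = np.random.default_rng(2).random((64, 64))  # stand-in for a lip photo
print(symmetry_score(img))                       # near 0 only if symmetric
```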


BMC Medical Informatics and Decision Making | 2016

Understanding clinical prediction models as 'innovations': a mixed methods study in UK family practice

Benjamin Brown; Sudeh Cheraghi-Sohi; Thomas Jaki; Ting-Li Su; Iain Buchan; Matthew Sperrin

Background: Well-designed clinical prediction models (CPMs) often out-perform clinicians at estimating probabilities of clinical outcomes, though their adoption by family physicians is variable. How family physicians interact with CPMs is poorly understood; a better understanding, framed within a context-sensitive theoretical framework, may therefore improve CPM development and implementation. The aim of this study was to investigate why family physicians do or do not use CPMs, interpreting these findings within a theoretical framework to provide recommendations for the development and implementation of future CPMs. Methods: Mixed methods study in North West England comprising an online survey and focus groups. Results: One hundred thirty-eight respondents completed the survey, which found the main perceived advantages of using CPMs were that they guided appropriate treatment (weighted rank [r] = 299; maximum r = 414 throughout), justified treatment decisions (r = 217), and incorporated a large body of evidence (r = 156). The most commonly reported barriers to using CPMs were lack of time (r = 163), irrelevance to some patients (r = 161), and poor integration with electronic health records (r = 147). Eighteen clinicians participated in two focus groups (nine in each), which revealed 13 interdependent themes affecting CPM use under three overarching domains: clinician factors, CPM factors, and contextual factors. Themes were interdependent, indicating the tensions family physicians experience in providing evidence-based care for individual patients. Conclusions: The survey and focus groups showed that CPMs were valued when they supported clinical decision making and were robust. Barriers to their use related to their being time-consuming, difficult to use, and not always adding value. To be successful, CPMs should therefore offer a relative advantage over current working practice, be easy to implement, be supported by training, policy, and guidelines, and fit within the organisational culture.


Pharmaceutical Statistics | 2012

An evaluation of methods for testing hypotheses relating to two endpoints in a single clinical trial.

Ting-Li Su; Ekkehard Glimm; John Whitehead; Mike Branson

The issues and dangers involved in testing multiple hypotheses are well recognised within the pharmaceutical industry. In reporting clinical trials, strenuous efforts are taken to avoid the inflation of type I error, with procedures such as the Bonferroni adjustment and its many elaborations and refinements being widely employed. Typically, such methods are conservative. They tend to be accurate if the multiple test statistics involved are mutually independent and achieve less than the type I error rate specified if these statistics are positively correlated. An alternative approach is to estimate the correlations between the test statistics and to perform a test that is conditional on those estimates being the true correlations. In this paper, we begin by assuming that test statistics are normally distributed and that their correlations are known. Under these circumstances, we explore several approaches to multiple testing, adapt them so that type I error is preserved exactly and then compare their powers over a range of true parameter values. For simplicity, the explorations are confined to the bivariate case. Having described the relative strengths and weaknesses of the approaches under study, we use simulation to assess the accuracy of the approximate theory developed when the correlations are estimated from the study data rather than being known in advance and when data are binary so that test statistics are only approximately normally distributed.
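The paper's starting point, two normally distributed test statistics with known correlation, can be made concrete: find the common critical value that preserves the familywise type I error exactly, rather than conservatively as Bonferroni does. A sketch under those assumptions:

```python
# Sketch: with bivariate normal test statistics of known correlation
# rho, solve for the common critical value c giving exact familywise
# type I error alpha: P(max(|Z1|,|Z2|) > c) = alpha under the null.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import multivariate_normal

def exact_critical_value(rho, alpha=0.05):
    mvn = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])
    def excess(c):
        # P(|Z1| <= c, |Z2| <= c) by inclusion-exclusion on the CDF
        inside = (mvn.cdf([c, c]) - mvn.cdf([c, -c])
                  - mvn.cdf([-c, c]) + mvn.cdf([-c, -c]))
        return (1.0 - inside) - alpha
    return brentq(excess, 1.0, 4.0)

print(exact_critical_value(rho=0.0))  # ~2.24, the Sidak-type value
print(exact_critical_value(rho=0.8))  # smaller: correlation is exploited
```

As the correlation grows, the exact critical value shrinks below the Bonferroni value, which is precisely the power gain the conditional approach described above seeks to capture.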


Drug Information Journal | 2010

Determining an Adaptive Exclusion Procedure following Discovery of an Association between the Whole Genome and Adverse Drug Reactions

Ting-Li Su; Helene Thygesen; John Whitehead; Clive Bowman

This article concerns the identification of associations between the incidence of adverse drug reactions and features apparent from whole genome scans of patients together with the subsequent implementation of an adaptive exclusion procedure within a drug development program. Our context is not a retrospective assessment of a large and complete database: instead we are concerned with identifying such a relationship during a drug development program and the consequences for the future conduct of that program. In particular, we seek methods for identifying changes to the exclusion criteria that will prevent future patients at high risk of an adverse reaction from continuing to be recruited. We discuss the levels of evidence needed to amend an existing recruitment policy, how this can be done, and how to evaluate and revise the reformulated recruitment policy as the trials continue. The approach will be illustrated using clinical trial data to demonstrate its potential for making an immediate reduction in the incidence of adverse drug reactions.
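A hypothetical illustration of the screening step, testing each genetic marker for association with adverse drug reaction (ADR) incidence and flagging candidates for an amended exclusion criterion, is sketched below. The data, markers, and Bonferroni-style threshold are all assumptions; the paper's actual evidence thresholds and decision rules are more involved.

```python
# Hypothetical sketch of a whole-genome screen for markers associated
# with ADR incidence. Simulated data; not the paper's procedure.
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(3)
n_patients, n_markers = 400, 50
genotype = rng.binomial(1, 0.2, size=(n_patients, n_markers))  # carrier 0/1
risk = 0.05 + 0.25 * genotype[:, 7]   # marker 7 truly raises ADR risk
adr = rng.binomial(1, risk)

threshold = 0.05 / n_markers          # simple Bonferroni screen (assumed)
for j in range(n_markers):
    table = [[np.sum((genotype[:, j] == 1) & (adr == 1)),
              np.sum((genotype[:, j] == 1) & (adr == 0))],
             [np.sum((genotype[:, j] == 0) & (adr == 1)),
              np.sum((genotype[:, j] == 0) & (adr == 0))]]
    _, p = fisher_exact(table)
    if p < threshold:
        print(f"marker {j}: p = {p:.2g} -> candidate exclusion criterion")
```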


Communications in Statistics - Simulation and Computation | 2018

An evaluation of the bootstrap for model validation in mixture models

Thomas Jaki; Ting-Li Su; Minjung Kim; M. Lee Van Horn

Bootstrapping has been used as a diagnostic tool for validating model results for a wide array of statistical models. Here we evaluate the use of the non-parametric bootstrap for model validation in mixture models. We show that the bootstrap is problematic for validating the results of class enumeration and for demonstrating the stability of parameter estimates in both finite mixture and regression mixture models. In only 44% of simulations did bootstrapping detect the correct number of classes in at least 90% of the bootstrap samples for a finite mixture model without any model violations. For regression mixture models and cases with violated model assumptions, the performance was even worse. Consequently, we cannot recommend the non-parametric bootstrap for validating mixture models. The cause of the problem is that, when resampling is used, influential individual observations have a high likelihood of being sampled many times. The presence of multiple replications of even moderately extreme observations is shown to lead to additional latent classes being extracted. To verify that these replications cause the problems, we show that leave-k-out cross-validation, in which sub-samples are taken without replacement, does not suffer from the same problem.
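The validation exercise being evaluated can be sketched as follows: resample a finite-mixture data set with replacement and record how often information-criterion-based class enumeration recovers the true number of classes. The settings below (Gaussian mixture, BIC selection, two classes) are illustrative, not the paper's simulation design.

```python
# Sketch: bootstrap class enumeration for a finite mixture, selecting
# the number of classes by BIC. Illustrative settings only.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(0, 1, 150),
                    rng.normal(4, 1, 150)]).reshape(-1, 1)  # true k = 2

def best_k(sample, k_max=4):
    bics = [GaussianMixture(k, random_state=0).fit(sample).bic(sample)
            for k in range(1, k_max + 1)]
    return int(np.argmin(bics)) + 1

hits = sum(best_k(x[rng.integers(0, len(x), len(x))]) == 2
           for _ in range(100))
print(f"bootstrap samples recovering k = 2: {hits}/100")
```

Resampling with replacement duplicates individual observations, and per the abstract it is exactly those duplicated, moderately extreme points that can spuriously induce extra latent classes.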


Pharmaceutical Statistics | 2015

Experimental designs for detecting synergy and antagonism between two drugs in a pre-clinical study

Matthew Sperrin; Helene Thygesen; Ting-Li Su; Chris Harbron; Anne Whitehead

The identification of synergistic interactions between combinations of drugs is an important area within drug discovery and development. Pre-clinically, large numbers of screening studies to identify synergistic pairs of compounds can often be run, necessitating efficient and robust experimental designs. We consider experimental designs for detecting interaction between two drugs in a pre-clinical in vitro assay in the presence of uncertainty about the monotherapy response. The monotherapies are assumed to follow the Hill equation with common lower and upper asymptotes, and a common variance. The optimality criterion used is the variance of the interaction parameter. We focus on ray designs and investigate two algorithms for selecting the optimum set of dose combinations. The first is a forward algorithm in which design points are added sequentially. This is found to give useful solutions in simple cases but can lack robustness when knowledge about the monotherapy parameters is insufficient. The second algorithm is a more pragmatic approach in which the design points are constrained to be distributed log-normally along the rays and monotherapy doses. We find that the pragmatic algorithm is more stable than the forward algorithm, and even when the forward algorithm has converged, the pragmatic algorithm can still out-perform it. Practically, we find that good designs for detecting an interaction have equal numbers of points on monotherapies and combination therapies, with those points typically placed in positions where a 50% response is expected. More uncertainty in the monotherapy parameters leads to an optimal design whose points are more spread out.
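The ray-design idea can be sketched directly: dose pairs are placed along fixed-ratio rays through the dose space, with levels spread around the doses where roughly 50% response is expected, plus monotherapy points on each axis. The ED50 values and spacing below are hypothetical, and this generates a generic ray layout rather than either of the paper's optimisation algorithms.

```python
# Sketch: a generic ray design for a two-drug combination study.
# ED50s, mixture fractions, and spacing are illustrative assumptions.
import numpy as np

ed50_a, ed50_b = 1.0, 10.0              # assumed monotherapy ED50s
mix_fractions = [0.25, 0.5, 0.75]       # rays: fraction attributed to drug A
levels = np.logspace(-1, 1, 5)          # spread around the ~50% response point

design = []
for f in mix_fractions:
    for t in levels:
        # scale each drug relative to its own ED50 along the ray
        design.append((f * ed50_a * t, (1 - f) * ed50_b * t))
# monotherapy points on the axes
design += [(ed50_a * t, 0.0) for t in levels]
design += [(0.0, ed50_b * t) for t in levels]

for a, b in design[:5]:
    print(f"dose A = {a:.2f}, dose B = {b:.2f}")
```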

Collaboration


Dive into Ting-Li Su's collaboration.

Top Co-Authors

Matthew Sperrin
Manchester Academic Health Science Centre

Helene Thygesen
St James's University Hospital

Bruce Richard
Boston Children's Hospital

Iain Buchan
University of Manchester

H. Bella
Boston Children's Hospital