Publications


Featured research published by Joanna Thorn.


Applied Health Economics and Health Policy | 2013

Resource-Use Measurement Based on Patient Recall: Issues and Challenges for Economic Evaluation

Joanna Thorn; Joanna Coast; David Cohen; William Hollingworth; Martin Knapp; Sian Noble; Colin Ridyard; Sarah Wordsworth; Dyfrig A. Hughes

Accurate resource-use measurement is challenging within an economic evaluation, but is a fundamental requirement for estimating efficiency. Considerable research effort has been concentrated on the appropriate measurement of outcomes and the policy implications of economic evaluation, while methods for resource-use measurement have been relatively neglected. Recently, the Database of Instruments for Resource Use Measurement (DIRUM) was set up at http://www.dirum.org to provide a repository where researchers can share resource-use measures and methods. A workshop to discuss the issues was held at the University of Birmingham in October 2011. Based on material presented at the workshop, this article highlights the state of the art of UK instruments for resource-use data collection based on patient recall. We consider methodological issues in the design and analysis of resource-use instruments, and the challenges associated with designing new questionnaires. We suggest a method of developing a good practice guideline, and identify some areas for future research. Consensus amongst health economists has yet to be reached on many aspects of resource-use measurement. We argue that researchers should now afford costing methodologies the same attention as outcome measurement, and we hope that this Current Opinion article will stimulate a debate on methods of resource-use data collection and establish a research agenda to improve the precision and accuracy of resource-use estimates.


JAMA | 2018

Effect of a low-intensity PSA-based screening intervention on prostate cancer mortality: the CAP randomized clinical trial

Richard M. Martin; Jenny Donovan; Emma L Turner; Chris Metcalfe; Grace Young; Eleanor Walsh; J. Athene Lane; Sian Noble; Steven E. Oliver; Simon Evans; Jonathan A C Sterne; Peter Holding; Yoav Ben-Shlomo; Peter Brindle; Naomi J Williams; Elizabeth M Hill; Siaw Yein Ng; Jessica Toole; Marta K. Tazewell; Laura J Hughes; Charlotte F Davies; Joanna Thorn; Elizabeth Down; George Davey Smith; David E. Neal; Freddie C. Hamdy

Importance Prostate cancer screening remains controversial because potential mortality or quality-of-life benefits may be outweighed by harms from overdetection and overtreatment. Objective To evaluate the effect of a single prostate-specific antigen (PSA) screening intervention and standardized diagnostic pathway on prostate cancer–specific mortality. Design, Setting, and Participants The Cluster Randomized Trial of PSA Testing for Prostate Cancer (CAP) included 419 582 men aged 50 to 69 years and was conducted at 573 primary care practices across the United Kingdom. Randomization and recruitment of the practices occurred between 2001 and 2009; patient follow-up ended on March 31, 2016. Intervention An invitation to attend a PSA testing clinic and receive a single PSA test vs standard (unscreened) practice. Main Outcomes and Measures Primary outcome: prostate cancer–specific mortality at a median follow-up of 10 years. Prespecified secondary outcomes: diagnostic cancer stage and Gleason grade (range, 2-10; higher scores indicate a poorer prognosis) of prostate cancers identified, all-cause mortality, and an instrumental variable analysis estimating the causal effect of attending the PSA screening clinic. Results Among 415 357 randomized men (mean [SD] age, 59.0 [5.6] years), 189 386 in the intervention group and 219 439 in the control group were included in the analysis (n = 408 825; 98%). In the intervention group, 75 707 (40%) attended the PSA testing clinic and 67 313 (36%) underwent PSA testing. Of 64 436 with a valid PSA test result, 6857 (11%) had a PSA level between 3 ng/mL and 19.9 ng/mL, of whom 5850 (85%) had a prostate biopsy. After a median follow-up of 10 years, 549 (0.30 per 1000 person-years) died of prostate cancer in the intervention group vs 647 (0.31 per 1000 person-years) in the control group (rate difference, −0.013 per 1000 person-years [95% CI, −0.047 to 0.022]; rate ratio [RR], 0.96 [95% CI, 0.85 to 1.08]; P = .50). The number diagnosed with prostate cancer was higher in the intervention group (n = 8054; 4.3%) than in the control group (n = 7853; 3.6%) (RR, 1.19 [95% CI, 1.14 to 1.25]; P < .001). More prostate cancer tumors with a Gleason grade of 6 or lower were identified in the intervention group (n = 3263/189 386 [1.7%]) than in the control group (n = 2440/219 439 [1.1%]) (difference per 1000 men, 6.11 [95% CI, 5.38 to 6.84]; P < .001). In the analysis of all-cause mortality, there were 25 459 deaths in the intervention group vs 28 306 deaths in the control group (RR, 0.99 [95% CI, 0.94 to 1.03]; P = .49). In the instrumental variable analysis for prostate cancer mortality, the adherence-adjusted causal RR was 0.93 (95% CI, 0.67 to 1.29; P = .66). Conclusions and Relevance Among practices randomized to a single PSA screening intervention vs standard practice without screening, there was no significant difference in prostate cancer mortality after a median follow-up of 10 years but the detection of low-risk prostate cancer cases increased. Although longer-term follow-up is under way, the findings do not support single PSA testing for population-based screening. Trial Registration ISRCTN Identifier: ISRCTN92187251
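To make the headline effect estimates easier to follow, the sketch below reproduces the rate arithmetic from the counts quoted above. It is a minimal illustration only: the person-years are implied rather than reported, and the confidence interval uses a simple normal approximation, so the output differs slightly (by rounding) from the published figures (rate ratio 0.96, 95% CI 0.85 to 1.08).

```python
import math

# Prostate cancer deaths and rates per person-year reported in the CAP trial abstract.
deaths_int, rate_int = 549, 0.30 / 1000   # intervention arm
deaths_ctl, rate_ctl = 647, 0.31 / 1000   # control arm

# Person-years implied by deaths / rate (an approximation; the paper reports rates directly).
py_int, py_ctl = deaths_int / rate_int, deaths_ctl / rate_ctl

rate_difference = (rate_int - rate_ctl) * 1000       # per 1000 person-years
rate_ratio = rate_int / rate_ctl

# Normal-approximation 95% CI for the rate ratio, treating deaths as Poisson counts.
se_log_rr = math.sqrt(1 / deaths_int + 1 / deaths_ctl)
ci = [math.exp(math.log(rate_ratio) + z * 1.96 * se_log_rr) for z in (-1, 1)]

print(f"Implied person-years: {py_int:,.0f} (intervention) vs {py_ctl:,.0f} (control)")
print(f"Rate difference: {rate_difference:.3f} per 1000 person-years")
print(f"Rate ratio: {rate_ratio:.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f})")
```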


BMJ Open | 2016

Validating the use of Hospital Episode Statistics data and comparison of costing methodologies for economic evaluation: an end-of-life case study from the Cluster randomised triAl of PSA testing for Prostate cancer (CAP)

Joanna Thorn; Emma L Turner; Luke Hounsome; Eleanor Walsh; L Down; Julia Verne; Jenny Donovan; David E. Neal; Freddie C. Hamdy; Richard M. Martin; Sian Noble

Objectives To evaluate the accuracy of routine data for costing inpatient resource use in a large clinical trial and to investigate costing methodologies. Design Final-year inpatient cost profiles were derived using (1) data extracted from medical records mapped to the National Health Service (NHS) reference costs via service codes and (2) Hospital Episode Statistics (HES) data using NHS reference costs. Trust finance departments were consulted to obtain costs for comparison purposes. Setting 7 UK secondary care centres. Population A subsample of 292 men identified as having died at least a year after being diagnosed with prostate cancer in Cluster randomised triAl of PSA testing for Prostate cancer (CAP), a long-running trial to evaluate the effectiveness and cost-effectiveness of prostate-specific antigen (PSA) testing. Results Both inpatient cost profiles showed a rise in costs in the months leading up to death, and were broadly similar. The difference in mean inpatient costs was £899, with HES data yielding ∼8% lower costs than medical record data (differences compatible with chance, p=0.3). Events were missing from both data sets. 11 men (3.8%) had events identified in HES that were all missing from medical record review, while 7 men (2.4%) had events identified in medical record review that were all missing from HES. The response from finance departments to requests for cost data was poor: only 3 of 7 departments returned adequate data sets within 6 months. Conclusions Using HES routine data coupled with NHS reference costs resulted in mean annual inpatient costs that were very similar to those derived via medical record review; therefore, routinely available data can be used as the primary method of costing resource use in large clinical trials. Neither HES nor medical record review represent gold standards of data collection. Requesting cost data from finance departments is impractical for large clinical trials. Trial registration number ISRCTN92187251; Pre-results.
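For readers unfamiliar with the costing approaches compared above, a minimal sketch of the second method (HES episodes mapped to NHS reference costs) follows. The episode records, currency codes and unit costs are hypothetical placeholders rather than the actual HES schema or the tariffs used in the study; the point is only the shape of the mapping and the handling of unmatched events.

```python
from collections import defaultdict

# Hypothetical extract of HES inpatient episodes: (patient_id, currency_code, n_episodes).
# Currency codes here are illustrative HRG-style labels, not the study's actual codes.
hes_episodes = [
    ("patient_001", "LB21A", 2),   # e.g. bladder/prostate procedure episodes
    ("patient_001", "SC97Z", 1),   # e.g. same-day chemotherapy attendance
    ("patient_002", "LB21A", 1),
]

# Hypothetical NHS reference costs per episode for each currency code (GBP).
reference_costs = {"LB21A": 3150.0, "SC97Z": 420.0}

def cost_inpatient_episodes(episodes, unit_costs):
    """Sum unit cost x episode count per patient; unmatched codes are flagged for review."""
    totals = defaultdict(float)
    unmatched = []
    for patient, code, count in episodes:
        if code in unit_costs:
            totals[patient] += unit_costs[code] * count
        else:
            unmatched.append((patient, code))
    return dict(totals), unmatched

totals, unmatched = cost_inpatient_episodes(hes_episodes, reference_costs)
print(totals)      # per-patient final-year inpatient cost estimates
print(unmatched)   # episodes needing manual review, analogous to missing events
```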


Medical Decision Making | 2016

Interpretation of the Expected Value of Perfect Information and Research Recommendations: A Systematic Review and Empirical Investigation

Joanna Thorn; Joanna Coast; Lazaros Andronis

Background. Expected value of perfect information (EVPI) calculations are increasingly performed to guide and underpin research recommendations. An EVPI value that exceeds the estimated cost of research forms a necessary (although not sufficient) condition for further research to be considered worthwhile. However, it is unclear what factors affect researchers’ recommendations and whether there is a notional threshold of positive returns below which research is not recommended. The objectives of this study were to explore whether EVPI and other factors have a bearing on research recommendations and to assess whether there exists a threshold EVPI below which research is typically not recommended. Methods. A systematic literature review was undertaken to identify applied EVPI calculations in the health care field. Study characteristics were extracted, including funder, location, disease group, publication year, primary language, and outcome measure. Population EVPI values and willingness-to-pay thresholds were also extracted alongside verbatim text excerpts describing the authors’ research recommendations. Recommendations were classified according to whether further research was recommended (a positive recommendation) or not (negative). Factors affecting the likelihood of a positive recommendation were examined statistically using logistic regression and visually by plotting the results in graphs. Results and Conclusions. Eighty-six articles were included, of which 13 suggested no further research, 66 recommended further research, and 7 gave no recommendation. EVPI appears to be a key driver of researchers’ recommendations for further research. Disease area, funder, study location, publication year, and outcome may have a bearing on recommendations, although none of these factors reached statistical significance. A threshold EVPI value below which research is typically not recommended was found at around £1.48 million.
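As background to the EVPI values extracted in the review, the sketch below shows the standard calculation of per-person and population EVPI from probabilistic sensitivity analysis draws: the expected net benefit of choosing optimally under perfect information minus the expected net benefit of the best strategy under current information. The net-benefit distributions, willingness-to-pay threshold, beneficiary population and discount rate are illustrative assumptions, not values taken from any of the included studies.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sim = 10_000
wtp = 20_000.0  # willingness-to-pay per QALY (illustrative)

# Probabilistic sensitivity analysis draws for two strategies (illustrative distributions).
cost = np.column_stack([rng.normal(1_000, 200, n_sim), rng.normal(1_500, 300, n_sim)])
qaly = np.column_stack([rng.normal(0.70, 0.05, n_sim), rng.normal(0.72, 0.05, n_sim)])
net_benefit = wtp * qaly - cost                      # shape (n_sim, n_strategies)

# Per-person EVPI: E[max over strategies] - max over strategies of E[net benefit].
evpi_per_person = net_benefit.max(axis=1).mean() - net_benefit.mean(axis=0).max()

# Population EVPI: discounted sum over the population expected to benefit (illustrative).
annual_population, years, discount = 50_000, 10, 0.035
discounted_pop = sum(annual_population / (1 + discount) ** t for t in range(years))
population_evpi = evpi_per_person * discounted_pop

print(f"Per-person EVPI: £{evpi_per_person:,.0f}")
print(f"Population EVPI: £{population_evpi:,.0f}")
```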


Value in Health | 2017

Core items for a standardized resource-use measure (ISRUM): expert Delphi consensus survey

Joanna Thorn; Sara Brookes; Colin Ridyard; Ruth Riley; Dyfrig A. Hughes; Sarah Wordsworth; Sian Noble; Gail Thornton; William Hollingworth

Background Resource use measurement by patient recall is characterized by inconsistent methods and a lack of validation. A validated standardized resource use measure could increase data quality, improve comparability between studies, and reduce research burden. Objectives To identify a minimum set of core resource use items that should be included in a standardized adult instrument for UK health economic evaluation from a provider perspective. Methods Health economists with experience of UK-based economic evaluations were recruited to participate in an electronic Delphi survey. Respondents were asked to rate 60 resource use items (e.g., medication names) on a scale of 1 to 9 according to the importance of the item in a generic context. Items considered less important according to predefined consensus criteria were dropped and a second survey was developed. In the second round, respondents received the median score and their own score from round 1 for each item alongside summarized comments and were asked to rerate items. A final project team meeting was held to determine the recommended core set. Results Forty-five participants completed round 1. Twenty-six items were considered less important and were dropped, 34 items were retained for the second round, and no new items were added. Forty-two respondents (93.3%) completed round 2, and greater consensus was observed. After the final meeting, 10 core items were selected, with further items identified as suitable for “bolt-on” questionnaire modules. Conclusions The consensus on 10 items considered important in a generic context suggests that a standardized instrument for core resource use items is feasible.
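To make the round-1 filtering step concrete, the following sketch applies a drop rule of the kind described, removing items rated less important before re-rating in round 2. The specific rule shown (drop an item if its median falls below 4 and fewer than a third of respondents rate it 7 to 9) and the example ratings are hypothetical; the survey's actual predefined consensus criteria are not given in the abstract.

```python
import statistics

# Hypothetical round-1 ratings (1-9) for a few candidate resource-use items.
round1_ratings = {
    "medication names": [8, 9, 7, 8, 9, 6, 8],
    "number of GP visits": [7, 8, 9, 8, 7, 9, 8],
    "over-the-counter purchases": [3, 2, 4, 3, 5, 2, 3],
}

def keep_item(ratings, median_cutoff=4, high_band_share=1 / 3):
    """Illustrative consensus rule: retain an item unless its median is below the
    cutoff and fewer than a third of respondents rate it in the 7-9 band."""
    median = statistics.median(ratings)
    share_high = sum(r >= 7 for r in ratings) / len(ratings)
    return median >= median_cutoff or share_high >= high_band_share

retained = {item: scores for item, scores in round1_ratings.items() if keep_item(scores)}
dropped = sorted(set(round1_ratings) - set(retained))
print("Retained for round 2:", sorted(retained))
print("Dropped after round 1:", dropped)
```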


Value in Health | 2015

Identification of items for a standardised resource-use measure: review of current instruments

Joanna Thorn; Colin Ridyard; Ruth Riley; Sara Brookes; Dyfrig A. Hughes; Sarah Wordsworth; Sian Noble; William Hollingworth

Methods A single version of each instrument designed for use in a UK-based study was identified from the Database of Instruments for Resource-Use Measurement (http://www.dirum.org). Section headings (‘domains’) and questions (‘items’) were extracted verbatim according to a predefined schema. Information on the recall period, level of detail, use of skip logic and scope (disease-specific or total resource use) was also extracted. Items were scrutinised for overlap.


BMJ Open | 2015

Protocol for a randomised controlled trial for Reducing Arthritis Fatigue by clinical Teams (RAFT) using cognitive–behavioural approaches

Sarah Hewlett; N. Ambler; Celia Almeida; Peter S Blair; Ernest Choy; Emma Dures; Alison Hammond; William Hollingworth; John R. Kirwan; Zoe Plummer; C. Rooke; Joanna Thorn; Keeley Tomkinson; Jon Pollock

Introduction Rheumatoid arthritis (RA) fatigue is distressing, leading to unmanageable physical and cognitive exhaustion impacting on health, leisure and work. Group cognitive–behavioural (CB) therapy delivered by a clinical psychologist demonstrated large improvements in fatigue impact. However, few rheumatology teams include a clinical psychologist, therefore, this study aims to examine whether conventional rheumatology teams can reproduce similar results, potentially widening intervention availability. Methods and analysis This is a multicentre, randomised, controlled trial of a group CB intervention for RA fatigue self-management, delivered by local rheumatology clinical teams. 7 centres will each recruit 4 consecutive cohorts of 10–16 patients with RA (fatigue severity ≥6/10). After consenting, patients will have baseline assessments, then usual care (fatigue self-management booklet, discussed for 5–6 min), then be randomised into control (no action) or intervention arms. The intervention, Reducing Arthritis Fatigue by clinical Teams (RAFT) will be cofacilitated by two local rheumatology clinicians (eg, nurse/occupational therapist), who will have had brief training in CB approaches, a RAFT manual and materials, and delivered an observed practice course. Groups of 5–8 patients will attend 6×2 h sessions (weeks 1–6) and a 1 hr consolidation session (week 14) addressing different self-management topics and behaviours. The primary outcome is fatigue impact (26 weeks); secondary outcomes are fatigue severity, coping and multidimensional impact, quality of life, clinical and mood status (to week 104). Statistical and health economic analyses will follow a predetermined plan to establish whether the intervention is clinically and cost-effective. Effects of teaching CB skills to clinicians will be evaluated qualitatively. Ethics and dissemination Approval was given by an NHS Research Ethics Committee, and participants will provide written informed consent. The copyrighted RAFT package will be freely available. Findings will be submitted to the National Institute for Health and Care Excellence, Clinical Commissioning Groups and all UK rheumatology departments. Trial registration number ISRCTN: 52709998; Protocol v3 09.02.2015.


Expert Review of Pharmacoeconomics & Outcomes Research | 2014

Methodological developments in randomized controlled trial-based economic evaluations

Joanna Thorn; Sian Noble; William Hollingworth

Economic evaluation is a key contributor to decision making in health care, and it is important that it is carried out as effectively and reliably as possible. Studies carried out alongside randomised controlled trials are required to contribute real-world evidence to the decision-making process. However, the requirement to measure resource use as well as effectiveness data within a trial adds complexity for trialists, and there are a number of methodological areas in which improvement is needed. This article reviews the literature on methodological work carried out to inform economic evaluation studies conducted alongside randomised controlled trials. Recent advances in areas including overall trial design, measuring resource use, measuring outcomes and reporting economic evaluations are discussed.


PharmacoEconomics | 2018

Current UK practices on Health Economics Analysis Plans (HEAPs): are we using heaps of them?

Melina Dritsaki; Alastair Gray; Stavros Petrou; Susan Dutton; Sarah E Lamb; Joanna Thorn

Economic evaluation has increasingly become an integral component of randomised controlled trial (RCT) designs. UK organisations, such as the National Institute for Health Research’s Health Technology Assessment (NIHR HTA) Programme and the Medical Research Council (MRC), fund RCTs that try to address both clinical effectiveness and cost-effectiveness considerations. The proposed economic evaluation is outlined in the application, and once a proposal is funded, a section in the protocol may describe the intended analysis to be followed as part of the economic evaluation based on the RCT. Guidance on how to conduct economic evaluation alongside RCTs has been published elsewhere [1], together with considerations around methodological issues and the novel approaches that may be applied [2]. A guidance document [known as a standard operating procedure (SOP)] that outlines the predetermined steps and instructions to be followed as part of the economic as well as the statistical analysis of a trial is an important aspect of the quality management of any trial. In a way, it safeguards the transparency and consistency of the higher level steps that should be followed as part of any analysis. However, little is known to date about how to integrate health economics operating procedures and health economics analysis plans (HEAPs) into a study. Common questions arising are (1) Is a HEAP always needed? (2) What information should be included as standard? (3) Can a proposed HEAP be changed, and if so, in what circumstances? Before answering these very important questions, we took a step back to first identify current practice and opinions about the use of HEAPs and HEAP SOPs. We expected a priori that clinical trials units (CTUs) and individual health economists would follow specific instructions about who should write and approve a HEAP, for whom, and when, as well as the types of RCTs for which a HEAP is necessary.


4th International Clinical Trials Methodology Conference (ICTMC) and the 38th Annual Meeting of the Society for Clinical Trials | 2017

Core items for a standardised resource-use measure (ISRUM): expert Delphi consensus survey

Joanna Thorn; Colin Ridyard; Ruth Riley; Sara Brookes; Dyfrig A. Hughes; Sarah Wordsworth; Sian Noble; Gail Thornton; William Hollingworth


Collaboration


Dive into Joanna Thorn's collaborations.

Top Co-Authors

Sandra Hollinghurst

National Institute for Health Research


Celia Almeida

University of the West of England


Emma Dures

University of the West of England
