
Publications


Featured research published by Amanda Blatch-Jones.


Health Research Policy and Systems | 2017

The impact on healthcare, policy and practice from 36 multi-project research programmes: findings from two reviews.

Steve Hanney; Trisha Greenhalgh; Amanda Blatch-Jones; Matthew Glover; James Raftery

Background: We sought to analyse the impacts found, and the methods used, in a series of assessments of programmes and portfolios of health research consisting of multiple projects.

Methods: We analysed a sample of 36 impact studies of multi-project research programmes, selected from a wider sample of impact studies included in two narrative systematic reviews published in 2007 and 2016. We included impact studies in which the individual projects in a programme had been assessed for wider impact, especially on policy or practice, and where findings had been described in a way that allowed them to be collated and compared.

Results: Included programmes were highly diverse in terms of location (11 different countries plus two multi-country ones), number of component projects (8 to 178), nature of the programme, research field, mode of funding, time between completion and impact assessment, methods used to assess impact, and level of impact identified. Thirty-one studies reported on policy impact, 17 on clinician behaviour or informing clinical practice, three on a combined category such as policy and clinician impact, and 12 on wider elements of impact (health gain, patient benefit, improved care or other benefits to the healthcare system). In those multi-project programmes that assessed the respective categories, the percentage of projects reporting some impact was: policy 35% (range 5–100%), practice 32% (10–69%), combined category 64% (60–67%), and health gain/health services 27% (6–48%). Variations in the levels of impact achieved partly reflected differences in the types of programme, levels of collaboration with users, and the methods and timing of impact assessment. Most commonly, principal investigators were surveyed; some studies involved desk research, and some involved interviews with investigators and/or stakeholders. Most studies used a conceptual framework such as the Payback Framework. One study attempted to assess the monetary value of a research programme's health gain.

Conclusion: The widespread impact reported for some multi-project programmes, including needs-led and collaborative ones, could potentially be used to promote further research funding. Moves towards greater standardisation of assessment methods could address existing inconsistencies and better inform strategic decisions about research investment; however, unresolved issues about such moves remain.


Trials | 2018

Identifying trial recruitment uncertainties using a James Lind Alliance Priority Setting Partnership - The PRioRiTy (Prioritising Recruitment in Randomised Trials) Study

Patricia Healy; Sandra Galvin; Paula Williamson; Shaun Treweek; Caroline Whiting; Beccy Maeso; Christopher Bray; Peter Brocklehurst; Mary Clarke Moloney; Abdel Douiri; Carrol Gamble; Heidi Rebecca Gardner; Derick Mitchell; Derek Stewart; Joan Jordan; Martin O'Donnell; Mike Clarke; Sue Pavitt; Eleanor Woodford Guegan; Amanda Blatch-Jones; Valerie Smith; Hannah Reay; Declan Devane

Background: Despite the problem of inadequate recruitment to randomised trials, there is little evidence to guide researchers on decisions about how people are effectively recruited to take part in trials. The PRioRiTy study aimed to identify and prioritise important unanswered trial recruitment questions for research. The PRioRiTy study Priority Setting Partnership (PSP) included members of the public approached to take part in a randomised trial or who have represented participants on randomised trial steering committees, health professionals and research staff with experience of recruiting to randomised trials, people who have designed, conducted, analysed or reported on randomised trials, and people with experience of randomised trials methodology.

Methods: This partnership was aided by the James Lind Alliance and involved eight stages: (i) identifying a unique, relevant prioritisation area within trial methodology; (ii) establishing a steering group; (iii) identifying and engaging with partners and stakeholders; (iv) formulating an initial list of uncertainties; (v) collating the uncertainties into research questions; (vi) confirming that the questions for research are a current recruitment challenge; (vii) shortlisting questions; and (viii) final prioritisation through a face-to-face workshop.

Results: A total of 790 survey respondents yielded 1693 open-text answers to 6 questions, from which 1880 potential questions for research were identified. After merging duplicates, the number of questions was reduced to 496. Questions were combined further, and those that were submitted by fewer than 15 people and/or fewer than 6 of the 7 stakeholder groups were excluded from the next round of prioritisation, resulting in 31 unique questions for research. All 31 questions were confirmed as being unanswered after checking relevant, up-to-date research evidence. The 10 highest-priority questions were ranked at a face-to-face workshop. The number 1 ranked question was "How can randomised trials become part of routine care and best utilise current clinical care pathways?" The top 10 research questions can be viewed at www.priorityresearch.ie.

Conclusion: The prioritised questions call for a collective focus on normalising trials as part of clinical care, enhancing communication, addressing barriers, enablers and motivators around participation, and exploring greater public involvement in the research process.


Trials | 2018

National Institute for Health Research (NIHR) Health Technology Assessment (HTA) Programme research funding and UK burden of disease

Fay Chinnery; Gemma Bashevoy; Amanda Blatch-Jones; Lisa Douet; Sarah Puddicombe; James Raftery

Background: HTA Programme funding is governed by the need for evidence and scientific quality, reflecting funding of the National Institute for Health Research (NIHR) by the NHS. The need criterion incorporates covering the spectrum of diseases, while also taking account of research supported by other funders. This study compared the NIHR HTA Programme portfolio of research with the UK burden of disease as measured by Disability-adjusted Life Years (DALYs).

Methods: A retrospective cross-sectional study using a cohort of all funded primary research and evidence synthesis projects received by the HTA Programme from April 2011 to March 2016 (n = 363), to determine the proportion of spend by disease compared with the burden of disease in the UK, calculated using 2015 UK DALY data.

Results: The programme, costing just under £44 million, broadly reflected the UK DALY burden by disease. Spend was lower than disease burden for cancer, cardiovascular and musculoskeletal diseases, which may reflect the importance of other funders, notably medical charities, which concentrate on these diseases.

Conclusion: The HTA Programme spend, adjusted for other relevant funders, broadly matches disease burden in the UK; no diseases are being neglected.


BMJ Open | 2018

Role of feasibility and pilot studies in randomised controlled trials: a cross-sectional study

Amanda Blatch-Jones; Wei Pek; Emma Kirkpatrick; Martin Ashton-Key

Objectives: To assess the value of pilot and feasibility studies to randomised controlled trials (RCTs) funded by the National Institute for Health Research (NIHR) Health Technology Assessment (HTA) programme, and to explore the methodological components of pilot/feasibility studies and how they inform full RCTs.

Study design: Cross-sectional study.

Setting: Both groups included NIHR HTA programme funded studies in the period 1 January 2010–31 December 2014 (decision date). Group 1: stand-alone pilot/feasibility studies published in the HTA Journal or accepted for publication. Group 2: all RCT applications funded by the HTA programme, including reference to an internal and/or external pilot/feasibility study. The methodological components were assessed using an adapted framework from a previous study.

Main outcome measures: The proportion of stand-alone pilot and feasibility studies which recommended proceeding to full trial, and which study elements were assessed. The proportion of 'HTA funded' trials which used internal and external pilot and feasibility studies to inform the design of the trial.

Results: Group 1 identified 15 stand-alone pilot/feasibility studies. The study elements most commonly assessed were testing recruitment (100% in both groups), feasibility (83%, 100%) and suggestions for further study/investigation (83%, 100%). Group 2 identified 161 'HTA funded' applications: 59 cited an external pilot/feasibility study, where testing recruitment (50%, 73%) and feasibility (42%, 73%) were the most commonly reported study elements; 92 reported an internal pilot/feasibility study, where testing recruitment (93%, 100%) and feasibility (44%, 92%) were the most common study elements reported.

Conclusions: 'HTA funded' research which includes pilot and feasibility studies assesses a variety of study elements. Pilot and feasibility studies serve an important role when determining the most appropriate trial design. However, caution is needed when interpreting their findings and delivering a definitive trial, given variation in how and in what context they are reported.


Health Technology Assessment | 2016

Models and applications for measuring the impact of health research: Update of a systematic review for the health technology assessment programme

James Raftery; Steve Hanney; Trish Greenhalgh; Matthew Glover; Amanda Blatch-Jones


Archive | 2016

Updated systematic review

James Raftery; Steve Hanney; Trish Greenhalgh; Matthew Glover; Amanda Blatch-Jones


International Journal of Technology Assessment in Health Care | 2017

OP08 National Institute for Health Research Health Technology Assessment Programme Research Funding And United Kingdom Burden Of Disease

Fay Chinnery; Gemma Bashevoy; Amanda Blatch-Jones; Lisa Douet; Sarah Puddicombe; James Raftery


Archive | 2016

Towards a broader taxonomy of impact models

James Raftery; Steve Hanney; Trish Greenhalgh; Matthew Glover; Amanda Blatch-Jones


Archive | 2016

Data extraction sheet

James Raftery; Steve Hanney; Trish Greenhalgh; Matthew Glover; Amanda Blatch-Jones


Archive | 2016

List of interesting studies

James Raftery; Steve Hanney; Trish Greenhalgh; Matthew Glover; Amanda Blatch-Jones

Collaboration


Dive into Amanda Blatch-Jones's collaborations.

Top Co-Authors

James Raftery | University of Southampton
Matthew Glover | Brunel University London
Steve Hanney | Brunel University London
Fay Chinnery | University of Southampton
Gemma Bashevoy | National Institute for Health Research
Lisa Douet | National Institute for Health Research
Sarah Puddicombe | National Institute for Health Research
Beccy Maeso | University of Southampton