Thomas Keeley
University of Birmingham
Publications
Featured research published by Thomas Keeley.
PLOS ONE | 2013
Thomas Keeley; Hareth Al-Janabi; Paula Lorgelly; Joanna Coast
Purpose The ICECAP-A and EQ-5D-5L are two index measures appropriate for use in health research. Assessment of content validity allows understanding of whether a measure captures the most relevant and important aspects of a concept. This paper reports a qualitative assessment of the content validity and appropriateness for use of the EQ-5D-5L and ICECAP-A measures, using novel methodology. Methods In-depth semi-structured interviews were conducted with research professionals in the UK and Australia. Informants were purposively sampled based on their professional role. Data were analysed in an iterative, thematic and constant comparative manner. A two-stage investigation, the comparative direct approach, was developed to address the methodological challenges of content validity research and allow rigorous assessment. Results Informants viewed the ICECAP-A as an assessment of the broader determinants of quality of life, but lacking in assessment of health-related determinants. The EQ-5D-5L was viewed as offering good coverage of health determinants, but as lacking in assessment of these broader determinants. Informants held some concerns about the content or wording of the Self-care, Pain/Discomfort and Anxiety/Depression items (EQ-5D-5L) and the Enjoyment, Achievement and Attachment items (ICECAP-A). Conclusion Using rigorous qualitative methodology, the results suggest that the ICECAP-A and EQ-5D-5L hold acceptable levels of content validity and are appropriate for use in health research. This work adds expert opinion to the emerging body of research using patients and the public to validate these measures.
Trials | 2016
Thomas Keeley; Paula Williamson; Peter Callery; Laura Jones; Jonathan Mathers; Janet Jones; Bridget Young; Melanie Calvert
Background Core outcome sets (COS) help to minimise bias in trials and facilitate evidence synthesis. Delphi surveys are increasingly being used as part of a wider process to reach consensus about which outcomes should be included in a COS. Qualitative research can be used to inform the development of Delphi surveys. This is an advance in the field of COS development and one which is potentially valuable; however, little guidance exists for COS developers on how best to use qualitative methods and what the challenges are. This paper aims to provide early guidance on the potential role and contribution of qualitative research in this area. We hope the ideas we present will be challenged, critiqued and built upon by others exploring the role of qualitative research in COS development. This paper draws upon the experiences of using qualitative methods in the pre-Delphi stage of the development of three different COS. Using these studies as examples, we identify some of the ways that qualitative research might contribute to COS development, the challenges in using such methods and areas where future research is required. Results Qualitative research can help to identify which outcomes are important to stakeholders; facilitate understanding of why some outcomes may be more important than others; determine the scope of outcomes; identify appropriate language for use in the Delphi survey; and inform comparisons between stakeholder data and other sources, such as systematic reviews. Developers need to consider a number of methodological points when using qualitative research: specifically, which stakeholders to involve, how to sample participants, which data collection methods are most appropriate, how to consider outcomes with stakeholders and how to analyse these data. A number of areas for future research are identified. Conclusions Qualitative research has the potential to increase the research community's confidence in COS, although this will be dependent upon using rigorous and appropriate methodology. In this article we have begun to identify some issues for COS developers to consider when using qualitative methods to inform the development of Delphi surveys.
BMJ Open | 2012
Helen Michelle Kirkby; Melanie Calvert; Heather Draper; Thomas Keeley; Sue Wilson
Objective To establish the empirical evidence base for the information that participants want to know about medical research and to assess how this relates to current guidance from the National Research Ethics Service (NRES). Data sources Medline, Web of Science, Applied Social Sciences Index and Abstracts, Sociological Abstracts, Health Management Information Consortium, Cochrane Library, thesis indexes, grey literature databases, reference and cited article lists, key journals, Google Scholar and correspondence with expert authors. Study selection Original research studies published between 1950 and October 2010 that asked potential participants to indicate how much or what types of information they wanted to be told about a research study, or asked them to rate the importance of a specific piece of information, were included. Study appraisal and synthesis methods Studies were appraised based on the generalisability of results to the UK potential research participant population. A metadata analysis using basic thematic analysis was used to split results from papers into themes based on the sections of information that NRES recommends should be included in a participant information sheet. Results 14 studies were included. Of the 20 pieces of information that NRES recommends should be included in patient information sheets for research, pooled proportions could be calculated for seven themes. Results showed that potential participants wanted to be offered information about result dissemination (91% (95% CI 85% to 95%)), investigator conflicts of interest (48% (95% CI 27% to 69%)), the purpose of the study (76% (95% CI 27% to 100%)), voluntariness (39% (95% CI 2% to 100%)), how long the research would last (61% (95% CI 16% to 97%)), potential benefits (57% (95% CI 7% to 98%)) and confidentiality (44% (95% CI 10% to 82%)). The level of detail participants wanted to know was not explored comprehensively in the studies. There was no empirical evidence to support the level of information provision required by participants on the remaining items. Conclusions There is limited empirical evidence on what potential participants want to know about research. The existing empirical evidence suggests that individuals may have very different needs and a more tailored, evidence-based approach may be necessary.
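The abstract above reports proportions pooled across studies with 95% confidence intervals. It does not state which pooling method was used; the sketch below shows one common approach, fixed-effect inverse-variance pooling with Wald intervals, and the study counts are invented purely for illustration.

```python
from math import sqrt

def wald_ci(p, n, z=1.96):
    """Wald confidence interval for a single proportion p observed in n participants."""
    se = sqrt(p * (1 - p) / n)
    return max(0.0, p - z * se), min(1.0, p + z * se)

def pooled_proportion(studies, z=1.96):
    """Fixed-effect inverse-variance pooled proportion.

    studies: list of (events, sample_size) tuples.
    Returns (pooled estimate, CI lower bound, CI upper bound).
    """
    weights, estimates = [], []
    for events, n in studies:
        p = events / n
        var = p * (1 - p) / n   # binomial variance of the proportion
        weights.append(1 / var)  # weight = inverse variance
        estimates.append(p)
    pooled = sum(w * p for w, p in zip(weights, estimates)) / sum(weights)
    se = sqrt(1 / sum(weights))
    return pooled, pooled - z * se, pooled + z * se

# Hypothetical data: (participants wanting the information, total asked) per study
studies = [(85, 100), (47, 50), (120, 130)]
p, lo, hi = pooled_proportion(studies)
print(f"pooled = {p:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

Random-effects pooling, which the wide intervals in the abstract (e.g. 2% to 100%) hint may be more appropriate for heterogeneous studies, would add a between-study variance term to each weight.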
PLOS ONE | 2013
Derek Kyte; Jonathan Ives; Heather Draper; Thomas Keeley; Melanie Calvert
Background Patient-reported outcomes (PROs), such as health-related quality of life (HRQL), are increasingly used to evaluate treatment effectiveness in clinical trials, are valued by patients, and may inform important decisions in the clinical setting. It is of concern, therefore, that preliminary evidence, gained from group discussions at UK-wide Medical Research Council (MRC) quality of life training days, suggests there are inconsistent standards of HRQL data collection in trials and that appropriate training and education is often lacking. Our objective was to investigate these reports, to determine if they represented isolated experiences or were indicative of a potentially wider problem. Methods and Findings We undertook a qualitative study, conducting 26 semi-structured interviews with research nurses, data managers, trial coordinators and research facilitators involved in the collection and entry of HRQL data in clinical trials, across one primary care NHS trust, two secondary care NHS trusts and two clinical trials units in the UK. We used conventional content analysis to analyze and interpret our data. Our study participants reported (1) inconsistent standards in HRQL measurement, both between, and within, trials, which appeared to risk the introduction of bias; (2) difficulties in dealing with HRQL data that raised concern for the well-being of the trial participant, which in some instances led to the delivery of non-protocol-driven co-interventions; (3) a frequent lack of HRQL protocol content and appropriate training and education of trial staff; and (4) that HRQL data collection could be associated with emotional and/or ethical burden. Conclusions Our findings suggest there are inconsistencies in the standards of HRQL data collection in some trials, resulting from a general lack of HRQL-specific protocol content, training and education. These inconsistencies could lead to biased HRQL trial results. Future research should aim to develop HRQL guidelines and training programmes aimed at supporting researchers to carry out high-quality data collection.
Trials | 2015
Thomas Keeley; Humera Khan; Vanessa Pinfold; Paula Williamson; Jonathan Mathers; Linda Davies; Ruth Sayers; Elizabeth England; Siobhan Reilly; Richard Byng; Linda Gask; Michael Clark; Peter Huxley; Peter Lewis; M. Birchwood; Melanie Calvert
Background In the general population the prevalence of bipolar disorder and schizophrenia is 0.24% and 1.4% respectively. People with schizophrenia and bipolar disorder have a significantly reduced life expectancy, increased rates of unemployment and a fear of stigma leading to reduced self-confidence. A core outcome set is a standardised collection of items that should be reported in all controlled trials within a research area. There are currently no core outcome sets available for use in effectiveness trials involving bipolar or schizophrenia service users managed in a community setting. Methods A three-step approach is to be used to concurrently develop two core outcome sets, one for bipolar and one for schizophrenia. First, a comprehensive list of outcomes will be compiled through qualitative research and systematic searching of trial databases. Focus groups and one-to-one interviews will be completed with service users, carers and healthcare professionals. Second, a Delphi study will be used to reduce the lists to a core set. The three-round Delphi study will ask service users to score the outcome list for relevance. In round two, stakeholders will only see the results of their own group, while in round three stakeholders will see the results of all stakeholder groups, presented group by group. Third, a consensus meeting with stakeholders will be used to confirm outcomes to be included in the core set. Following the development of the core set, a systematic literature review of existing measures will allow recommendations for how the core outcomes should be measured, and a stated preference survey will explore the strength of people's preferences and estimate weights for the outcomes that comprise the core set. Discussion A core outcome set represents the minimum measurement requirement for a research area. We aim to develop core outcome sets for use in research involving service users with schizophrenia or bipolar disorder managed in a community setting. This will inform the wider PARTNERS2 study aims and objectives of developing an innovative primary care-based model of collaborative care for people with a diagnosis of bipolar or schizophrenia.
Quality of Life Research | 2015
Thomas Keeley; Hareth Al-Janabi; Elaine Nicholls; Nadine E. Foster; Sue Jowett; Joanna Coast
PurposeThe ICECAP-A is a simple measure of capability well-being for use with the adult population. The descriptive system is made up of five key attributes: Stability, Attachment, Autonomy, Achievement and Enjoyment. Studies have begun to assess the psychometric properties of the measure, including the construct and content validity and feasibility for use. This is the first study to use longitudinal data to assess the responsiveness of the measure.MethodsThis responsiveness study was completed alongside a randomised controlled trial comparing three physiotherapy-led exercise interventions for older adults with knee pain attributable to osteoarthritis. Anchor-based methodologies were used to explore the relationship between change over time in ICECAP-A score (the target measure) and change over time in another measure (the anchor). Analyses were completed using the non-value-weighted and value-weighted ICECAP-A scores. The EQ-5D-3L was used as a comparator measure to contextualise change in the ICECAP-A. Effect sizes, standardised response means and t tests were used to quantify responsiveness.ResultsSmall changes in the ICECAP-A scores were seen in response to underlying changes in patients’ health-related quality of life, anxiety and depression. Non-weighted scores were slightly more responsive than value-weighted scores. ICECAP-A change was of comparable size to change in the EQ-5D-3L reference measure.ConclusionThis first analysis of the responsiveness using longitudinal data provides some positive evidence for the responsiveness of the ICECAP-A measure. There is a need for further research in those with low health and capability, and experiencing larger underlying changes in quality of life.
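The responsiveness statistics named in this abstract have simple closed forms: the effect size divides mean change by the standard deviation of baseline scores, while the standardised response mean (SRM) divides mean change by the standard deviation of the change scores themselves. A minimal sketch, using hypothetical ICECAP-A tariff scores rather than the trial's data:

```python
from statistics import mean, stdev

def effect_size(baseline, follow_up):
    """Effect size: mean change divided by the SD of baseline scores."""
    change = [f - b for b, f in zip(baseline, follow_up)]
    return mean(change) / stdev(baseline)

def standardised_response_mean(baseline, follow_up):
    """SRM: mean change divided by the SD of the change scores."""
    change = [f - b for b, f in zip(baseline, follow_up)]
    return mean(change) / stdev(change)

# Hypothetical paired scores before and after an intervention
baseline  = [0.70, 0.65, 0.80, 0.55, 0.60, 0.75]
follow_up = [0.74, 0.70, 0.82, 0.60, 0.66, 0.78]
es = effect_size(baseline, follow_up)
srm = standardised_response_mean(baseline, follow_up)
print(f"effect size = {es:.2f}, SRM = {srm:.2f}")
```

Because the two statistics share a numerator but use different denominators, they can diverge noticeably when change scores are very consistent across patients, which is one reason responsiveness studies typically report both.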
PLOS ONE | 2017
Janet Jones; Laura Jones; Thomas Keeley; Melanie Calvert; Jonathan Mathers; Bridget Young
Background To be meaningful, a core outcome set (COS) should be relevant to all stakeholders, including patients and carers. This review aimed to explore the methods by which patients and carers have been included as participants in COS development exercises and, in particular, the use and reporting of qualitative methods. Methods In August 2015, a search of the Core Outcome Measures in Effectiveness Trials (COMET) database was undertaken to identify papers involving patients and carers in COS development. Data were extracted to identify the data collection methods used in COS development, the number of health professionals, patients and carers participating in these, and the reported details of qualitative research undertaken. Results Fifty-nine papers reporting patient and carer participation were included in the review, ten of which reported using qualitative methods. Although patients and carers participated in outcome elicitation for inclusion in COS processes, health professionals tended to dominate the prioritisation exercises. Of the ten qualitative papers, only three were reported as a clear pre-designed part of a COS process. Qualitative data were collected using interviews, focus groups or a combination of these. None of the qualitative papers reported an underpinning methodological framework, and details regarding data saturation, reflexivity and resource use associated with data collection were often poorly reported. Five papers reported difficulty in achieving a diverse sample of participants, and two reported that a large and varied range of outcomes were often identified by participants, making subsequent rating and ranking difficult. Conclusions Consideration of the best way to include patients and carers throughout the COS development process is needed. Additionally, further work is required to assess the potential role of qualitative methods in COS, to explore the knowledge produced by different qualitative data collection methods, and to evaluate the time and resources required to incorporate qualitative methods into COS development.
PLOS ONE | 2017
Olalekan Lee Aiyegbusi; Derek Kyte; Paul Cockwell; Tom Marshall; Adrian Gheorghe; Thomas Keeley; Anita Slade; Melanie Calvert
Background Patient-reported outcome measures (PROMs) can provide valuable information which may assist with the care of patients with chronic kidney disease (CKD). However, given the large number of measures available, it is unclear which PROMs are suitable for use in research or clinical practice. To address this we comprehensively evaluated studies that assessed the measurement properties of PROMs in adults with CKD. Methods Four databases were searched; reference list and citation searching of included studies was also conducted. The COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) checklist was used to appraise the methodological quality of the included studies and to inform a best evidence synthesis for each PROM. Results The search strategy retrieved 3,702 titles/abstracts. After 288 duplicates were removed, 3,414 abstracts were screened and 71 full-text articles were retrieved for further review. Of these, 24 full-text articles were excluded as they did not meet the eligibility criteria. Following reference list and citation searching, 19 articles were retrieved bringing the total number of papers included in the final analysis to 66. There was strong evidence supporting internal consistency and moderate evidence supporting construct validity for the Kidney Disease Quality of Life-36 (KDQOL-36) in pre-dialysis patients. In the dialysis population, the KDQOL-Short Form (KDQOL-SF) had strong evidence for internal consistency and structural validity and moderate evidence for test-retest reliability and construct validity while the KDQOL-36 had moderate evidence of internal consistency, test-retest reliability and construct validity. The End Stage Renal Disease-Symptom Checklist Transplantation Module (ESRD-SCLTM) demonstrated strong evidence for internal consistency and moderate evidence for test-retest reliability, structural and construct validity in renal transplant recipients. 
Conclusions We suggest considering the KDQOL-36 for use in pre-dialysis patients; the KDQOL-SF or KDQOL-36 for dialysis patients and the ESRD-SCLTM for use in transplant recipients. However, further research is required to evaluate the measurement error, structural validity, responsiveness and patient acceptability of PROMs used in CKD.
BMJ Open | 2016
Khaled Ahmed; Derek Kyte; Thomas Keeley; Fabio Efficace; Jo Armes; Julia Brown; Lynn Calman; Chris Copland; Anna Gavin; Adam Glaser; Diana Greenfield; Anne Lanceley; Rachel M. Taylor; Galina Velikova; Michael Brundage; Rebecca Mercieca-Bebber; Madeleine King; Melanie Calvert
Introduction Emerging evidence suggests that patient-reported outcome (PRO)-specific information may be omitted in trial protocols and that PRO results are poorly reported, limiting the use of PRO data to inform cancer care. This study aims to evaluate the standards of PRO-specific content in UK cancer trial protocols and their arising publications, and to highlight examples of best-practice PRO protocol content and reporting where they occur. The objective of this study is to determine if these early findings are generalisable to UK cancer trials, and if so, how best we can bring about future improvements in clinical trials methodology to enhance the way PROs are assessed, managed and reported. Hypothesis: Trials in which the primary end point is based on a PRO will have more complete PRO protocol and publication components than trials in which PROs are secondary end points. Methods and analysis Completed National Institute for Health Research (NIHR) Portfolio cancer clinical trials (all cancer specialities/age groups) will be included if they contain a primary/secondary PRO end point. The NIHR portfolio includes cancer trials, supported by a range of funders, adjudged as high-quality clinical research studies. The sample will be drawn from studies completed between 31 December 2000 and 1 March 2014 (n=1141) to allow sufficient time for completion of the final trial report and publication. Two reviewers will then review the protocols and arising publications of included trials to: (1) determine the completeness of their PRO-specific protocol content; (2) determine the proportion and completeness of PRO reporting in UK cancer trials; and (3) model factors associated with PRO protocol and reporting completeness and with PRO reporting proportion. Ethics and dissemination The study was approved by the ethics committee at the University of Birmingham (ERN_15-0311).
Trial findings will be disseminated via presentations at local, national and international conferences, peer-reviewed journals and social media including the CPROR twitter account and UOB departmental website (http://www.birmingham.ac.uk/cpro0r). Trial registration number PROSPERO CRD42016036533.
PLOS Medicine | 2017
Samantha Cruz Rivera; Derek Kyte; Olalekan Lee Aiyegbusi; Thomas Keeley; Melanie Calvert
Background Increasingly, researchers need to demonstrate the impact of their research to their sponsors, funders, and fellow academics. However, the most appropriate way of measuring the impact of healthcare research is subject to debate. We aimed to identify the existing methodological frameworks used to measure healthcare research impact and to summarise the common themes and metrics in an impact matrix. Methods and findings Two independent investigators systematically searched the Medical Literature Analysis and Retrieval System Online (MEDLINE), the Excerpta Medica Database (EMBASE), the Cumulative Index to Nursing and Allied Health Literature (CINAHL+), the Health Management Information Consortium, and the Journal of Research Evaluation from inception until May 2017 for publications that presented a methodological framework for research impact. We then summarised the common concepts and themes across methodological frameworks and identified the metrics used to evaluate differing forms of impact. Twenty-four unique methodological frameworks were identified, addressing 5 broad categories of impact: (1) ‘primary research-related impact’, (2) ‘influence on policy making’, (3) ‘health and health systems impact’, (4) ‘health-related and societal impact’, and (5) ‘broader economic impact’. These categories were subdivided into 16 common impact subgroups. Authors of the included publications proposed 80 different metrics aimed at measuring impact in these areas. The main limitation of the study was the potential exclusion of relevant articles, as a consequence of the poor indexing of the databases searched. Conclusions The measurement of research impact is an essential exercise to help direct the allocation of limited research resources, to maximise research benefit, and to help minimise research waste. 
This review provides a collective summary of existing methodological frameworks for research impact, which funders may use to inform the measurement of research impact and researchers may use to inform study design decisions aimed at maximising the short-, medium-, and long-term impact of their research.