Caroline Vass
University of Manchester
Publications
Featured research published by Caroline Vass.
The Patient: Patient-Centered Outcomes Research | 2014
Mark Harrison; Dan Rigby; Caroline Vass; Terry N. Flynn; Jordan J. Louviere; Katherine Payne
Background. Discrete choice experiments (DCEs) are used to elicit preferences of current and future patients and healthcare professionals about how they value different aspects of healthcare. Risk is an integral part of most healthcare decisions. Despite the use of risk attributes in DCEs consistently being highlighted as an area for further research, current methods of incorporating risk attributes in DCEs have not been reviewed explicitly. Objectives. This study aimed to systematically identify published healthcare DCEs that incorporated a risk attribute, to summarise and appraise the methods used to present and analyse risk attributes, and to recommend best practice for including, analysing and transparently reporting the methodology supporting risk attributes in future DCEs. Data Sources. The Web of Science, MEDLINE, EMBASE, PsycINFO and EconLit databases were searched on 18 April 2013 for DCEs published since 1995 that included a risk attribute, and on 23 April 2013 to identify studies assessing risk communication in the general (non-DCE) health literature. Study Eligibility Criteria. Healthcare-related DCEs with a risk attribute mentioned or suggested in the title/abstract were obtained and retained in the final review if they included a risk attribute meeting our definition. Study Appraisal and Synthesis Methods. Extracted data were tabulated and critically appraised to summarise the quality of reporting, and the format, presentation and interpretation of the risk attribute were summarised. Results. This review identified 117 healthcare DCEs that incorporated at least one risk attribute. Whilst there was some evidence of good practice in the presentation of risk attributes, little evidence was found that developing methods and recommendations from other disciplines about effective methods and validation of risk communication were systematically applied to DCEs. In general, the reviewed DCE studies did not thoroughly report the methodology supporting the explanation of risk in training materials, the impact of framing risk, or exploration of the validity of risk communication. Limitations. The primary limitation of this review was that the methods underlying the presentation, format and analysis of risk attributes could only be appraised to the extent that they were reported. Conclusions. Improvements in the reporting and transparency of risk presentation, from the conception to the analysis of DCEs, are needed. To define best practice, further research is needed to test how the process of communicating risk affects the way in which people value risk attributes in DCEs.
Medical Decision Making | 2017
Caroline Vass; Dan Rigby; Katherine Payne
Background. The use of qualitative research (QR) methods is recommended as good practice in discrete choice experiments (DCEs). This study investigated the use and reporting of QR to inform the design and/or interpretation of healthcare-related DCEs and explored the perceived usefulness of such methods. Methods. DCEs were identified from a systematic search of the MEDLINE database. Studies were classified by the quantity of QR reported (none, basic, or extensive). Authors (n = 91) of papers reporting the use of QR were invited to complete an online survey eliciting their views about using the methods. Results. A total of 254 healthcare DCEs were included in the review; of these, 111 (44%) did not report using any qualitative methods; 114 (45%) reported “basic” information; and 29 (11%) reported or cited “extensive” use of qualitative methods. Studies reporting the use of qualitative methods used them to select attributes and/or levels (n = 95; 66%) and/or pilot the DCE survey (n = 26; 18%). Popular qualitative methods included focus groups (n = 63; 44%) and interviews (n = 109; 76%). Forty-four studies (31%) reported the analytical approach, with content (n = 10; 7%) and framework analysis (n = 5; 4%) most commonly reported. The survey identified that all responding authors (n = 50; 100%) found that qualitative methods added value to their DCE study, but many (n = 22; 44%) reported that journals were uninterested in the reporting of QR results. Conclusions. Despite recommendations that QR methods be used alongside DCEs, the use of QR methods is not consistently reported. The lack of reporting risks the inference that QR methods are of little use in DCE research, contradicting practitioners’ assessments. Explicit guidelines would enable more clarity and consistency in reporting, and journals should facilitate such reporting via online supplementary materials.
Value in Health | 2017
Caroline Vass; Dan Rigby; Katherine Payne
BACKGROUND The relative benefits and risks of screening programs for breast cancer have been extensively debated. OBJECTIVES To quantify and investigate heterogeneity in women's preferences for the benefits and risks of a national breast screening program (NBSP) and to understand the effect of risk communication format on these preferences. METHODS An online discrete choice experiment survey was designed to elicit preferences from female members of the public for an NBSP described by three attributes (probability of detecting a cancer, risk of unnecessary follow-up, and out-of-pocket screening costs). Survey respondents were randomized to one of two surveys, presenting risk either as percentages only or as icon arrays and percentages. Respondents were required to choose between two hypothetical NBSPs or no screening in 11 choice sets generated using a Bayesian D-efficient design. The trade-offs women made were analyzed using heteroskedastic conditional logit and scale-adjusted latent class models. RESULTS A total of 1018 women completed the discrete choice experiment (percentages-only version = 507; icon arrays and percentages version = 511). The results of the heteroskedastic conditional logit model suggested that, on average, women were willing to accept 1.72 (confidence interval 1.47-1.97) additional unnecessary follow-ups and willing to pay £79.17 (confidence interval £66.98-£91.35) for an additional cancer detected per 100 women screened. Latent class analysis indicated substantial heterogeneity in preferences, with six latent classes and three scale classes providing the best fit. The risk communication format received was not a predictor of scale class or preference class membership. CONCLUSIONS Most women were willing to trade off the benefits and risks of screening, but decision makers seeking to improve uptake should consider the disparate needs of women when configuring services.
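In a conditional logit model, willingness-to-pay (WTP) estimates such as those above are typically computed as the negative ratio of an attribute's utility coefficient to the cost coefficient. The sketch below illustrates only that ratio; the coefficient values are invented for illustration and are not the estimates reported in the study.

```python
# Hypothetical sketch: marginal WTP from conditional logit coefficients.
# WTP_attribute = -beta_attribute / beta_cost
# All coefficient values below are invented, not the study's estimates.

def wtp(beta_attribute: float, beta_cost: float) -> float:
    """Marginal willingness-to-pay: -beta_attribute / beta_cost."""
    return -beta_attribute / beta_cost

# Invented utility coefficients:
beta_detection = 0.8   # utility per additional cancer detected per 100 screened
beta_cost = -0.01      # utility per £1 of out-of-pocket screening cost

print(wtp(beta_detection, beta_cost))  # WTP in £ per additional cancer detected
```

Because the cost coefficient is negative (higher cost lowers utility), the negative ratio yields a positive monetary value for a desirable attribute.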
PharmacoEconomics | 2017
Caroline Vass; Katherine Payne
There is emerging interest in the use of discrete choice experiments as a means of quantifying the perceived balance between benefits and risks (quantitative benefit-risk assessment) of new healthcare interventions, such as medicines, under assessment by regulatory agencies. For stated preference data on benefit-risk assessment to be used in regulatory decision making, the methods to generate these data must be valid, reliable and capable of producing meaningful estimates understood by decision makers. Some reporting guidelines exist for discrete choice experiments, and for related methods such as conjoint analysis. However, existing guidelines focus on reporting standards, are general in focus and do not consider the requirements for using discrete choice experiments specifically for quantifying benefit-risk assessments in the context of regulatory decision making. This opinion piece outlines the current state of play in using discrete choice experiments for benefit-risk assessment and proposes key areas needing to be addressed to demonstrate that discrete choice experiments are an appropriate and valid stated preference elicitation method in this context. Methodological research is required to establish: how robust the results of discrete choice experiments are to formats and methods of risk communication; how information in the discrete choice experiment can be presented effectively to respondents; whose preferences should be elicited; the correct underlying utility function and analytical model; the impact of heterogeneity in preferences; and the generalisability of the results. We believe these methodological issues should be addressed, alongside developing a ‘reference case’, before agencies can safely and confidently use discrete choice experiments for quantitative benefit-risk assessment in the context of regulatory decision making for new medicines and healthcare products.
The Patient: Patient-Centered Outcomes Research | 2018
Caroline Vass; Stuart Wright; Michael Burton; Katherine Payne
Discrete choice experiments (DCEs) are used to quantify the preferences of specified sample populations for different aspects of a good or service and are increasingly used to value interventions and services related to healthcare. Systematic reviews of healthcare DCEs have focussed on the trends over time of specific design issues and changes in the approach to analysis, with a more recent move towards consideration of a specific type of variation in preferences within the sample population, called taste heterogeneity, noting rises in the popularity of mixed logit and latent class models. Another type of variation, called scale heterogeneity, which relates to differences in the randomness of choice behaviour, may also account for some of the observed ‘differences’ in preference weights. The issue of scale heterogeneity becomes particularly important when comparing preferences across subgroups of the sample population as apparent differences in preferences could be due to taste and/or choice consistency. This primer aims to define and describe the relevance of scale heterogeneity in a healthcare context, and illustrate key points, with a simulated data set provided to readers in the Online appendix.
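The scale heterogeneity described above can be illustrated with the standard logit choice-probability formula, in which a scale parameter multiplies the deterministic utilities: a larger scale makes the highest-utility alternative chosen more consistently, while a scale near zero approaches random choice. The sketch below is an assumed illustration, not material from the paper's simulated data set; the utility values are invented.

```python
# Minimal sketch (assumed, not from the paper) of scale heterogeneity in a
# logit model. The scale parameter lam multiplies the deterministic
# utilities V, so higher lam implies more consistent (less random) choices.
import math

def logit_probs(V, lam):
    """Choice probabilities P_j = exp(lam*V_j) / sum_k exp(lam*V_k)."""
    exp_u = [math.exp(lam * v) for v in V]
    total = sum(exp_u)
    return [e / total for e in exp_u]

V = [1.0, 0.5, 0.0]  # invented utilities for three alternatives

low_scale = logit_probs(V, 0.5)   # noisy choices, probabilities near uniform
high_scale = logit_probs(V, 5.0)  # near-deterministic choice of the best option
print(low_scale)
print(high_scale)
```

Because scale and taste coefficients enter the model only as a product, apparent differences in estimated preference weights between subgroups can reflect differences in choice consistency rather than in tastes, which is why the two are confounded unless modelled jointly.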
The Patient: Patient-Centered Outcomes Research | 2018
Caroline Vass; Dan Rigby; Katherine Payne
Background. Risk is increasingly used as an attribute in discrete choice experiments (DCEs). However, risk and probabilities are complex concepts that can be open to misinterpretation, potentially undermining the robustness of DCEs as a valuation method. This study aimed to understand how respondents made benefit-risk trade-offs in a DCE and whether these were affected by the communication of the risk attributes. Methods. Female members of the public were recruited via local advertisements to participate in think-aloud interviews while completing a DCE eliciting their preferences for a hypothetical breast screening programme described by three attributes: probability of detecting a cancer; risk of unnecessary follow-up; and cost of screening. Women were randomised to receive risk information as either (1) percentages or (2) percentages and icon arrays. Interviews were digitally recorded and then transcribed to generate qualitative data for thematic analysis. Results. Nineteen women completed the interviews (icon arrays n = 9; percentages n = 10). Analysis revealed four key themes, in which women made references to (1) the nature of the task; (2) their feelings; (3) their experiences, for instance making analogies to similar risks; and (4) economic phenomena such as opportunity costs and discounting. Conclusion. Most women completed the DCE in line with economic theory; however, violations were identified. Women appeared to visualise risk whether they received icon arrays or percentages only. Providing clear instructions and graphics to aid the interpretation of risk, and qualitative piloting to verify understanding, are recommended. Further investigation is required to determine whether the process of verbalising thoughts changes the behaviour of respondents.
International Journal of Clinical Pharmacy | 2015
Caroline Vass; Ewan Gray; Katherine Payne
The Patient: Patient-Centered Outcomes Research | 2016
Ewan Gray; Martin Eden; Caroline Vass; Marion McAllister; Jordan J. Louviere; Katherine Payne
Medical Decision Making | 2014
Caroline Vass; Dan Rigby; Stephen Campbell; Kelly Tate; Andrew J. Stewart; Katherine Payne
The Patient: Patient-Centered Outcomes Research | 2018
Stuart Wright; Caroline Vass; Gene Sim; Michael Burton; Denzil G. Fiebig; Katherine Payne