Network


Carol Bennett's latest external collaborations at the country level.

Hotspot


Dive into the research topics where Carol Bennett is active.

Publication


Featured research published by Carol Bennett.


Canadian Medical Association Journal | 2011

Proportion of hospital readmissions deemed avoidable: a systematic review

Carl van Walraven; Carol Bennett; Alison Jennings; Peter C. Austin; Alan J. Forster

Background: Readmissions to hospital are increasingly being used as an indicator of quality of care. However, this approach is valid only when we know what proportion of readmissions are avoidable. We conducted a systematic review of studies that measured the proportion of readmissions deemed avoidable. We examined how such readmissions were measured and estimated their prevalence.

Methods: We searched the MEDLINE and EMBASE databases to identify all studies published from 1966 to July 2010 that reviewed hospital readmissions and that specified how many were classified as avoidable.

Results: Our search strategy identified 34 studies. Three of the studies used combinations of administrative diagnostic codes to determine whether readmissions were avoidable. Criteria used in the remaining studies were subjective. Most of the studies were conducted at single teaching hospitals, did not consider information from the community or treating physicians, and used only one reviewer to decide whether readmissions were avoidable. The median proportion of readmissions deemed avoidable was 27.1% but varied from 5% to 79%. Three study-level factors (teaching status of hospital, whether all diagnoses or only some were considered, and length of follow-up) were significantly associated with the proportion of admissions deemed to be avoidable and explained some, but not all, of the heterogeneity between the studies.

Interpretation: All but three of the studies used subjective criteria to determine whether readmissions were avoidable. Study methods had notable deficits and varied extensively, as did the proportion of readmissions deemed avoidable. The true proportion of hospital readmissions that are potentially avoidable remains unclear.


PLOS ONE | 2009

Assessing the Quality of Decision Support Technologies Using the International Patient Decision Aid Standards instrument (IPDASi)

Glyn Elwyn; Annette M. O'Connor; Carol Bennett; Robert G. Newcombe; Mary C. Politi; Marie-Anne Durand; Elizabeth Drake; Natalie Joseph-Williams; Sara Khangura; Anton Saarimaki; Stephanie Sivell; Mareike Stiel; Steven Bernstein; Nananda F. Col; Angela Coulter; Karen Eden; Martin Härter; Margaret Holmes Rovner; Nora Moumjid; Dawn Stacey; Richard Thomson; Timothy J. Whelan; Trudy van der Weijden; Adrian Edwards

Objectives: To describe the development, validation and inter-rater reliability of an instrument to measure the quality of patient decision support technologies (decision aids).

Design: Scale development study, involving construct, item and scale development, validation and reliability testing.

Setting: There has been increasing use of decision support technologies – adjuncts to the discussions clinicians have with patients about difficult decisions. A global interest in developing these interventions exists among both for-profit and not-for-profit organisations. It is therefore essential to have internationally accepted standards to assess the quality of their development, process, content, potential bias and method of field testing and evaluation.

Methods: Scale development study, involving construct, item and scale development, validation and reliability testing.

Participants: Twenty-five researcher-members of the International Patient Decision Aid Standards Collaboration worked together to develop the instrument (IPDASi). In the fourth stage (reliability study), eight raters assessed thirty randomly selected decision support technologies.

Results: IPDASi measures quality in 10 dimensions, using 47 items, and provides an overall quality score (scaled from 0 to 100) for each intervention. Overall IPDASi scores ranged from 33 to 82 across the decision support technologies sampled (n = 30), enabling discrimination. The inter-rater intraclass correlation for the overall quality score was 0.80. Correlations of dimension scores with the overall score were all positive (0.31 to 0.68). Cronbach's alpha values for the 8 raters ranged from 0.72 to 0.93. Cronbach's alphas based on the dimension means ranged from 0.50 to 0.81, indicating that the dimensions, although well correlated, measure different aspects of decision support technology quality. A short version (19 items) was also developed that had very similar mean scores to IPDASi and a high correlation between the short score and the overall score (0.87; CI 0.79 to 0.92).

Conclusions: This work demonstrates that IPDASi has the ability to assess the quality of decision support technologies. The existing IPDASi provides an assessment of the quality of a DST's components and will be used as a tool to provide formative advice to DST developers and summative assessments for those who want to compare their tools against an existing benchmark.
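The reliability figures quoted above (inter-rater intraclass correlation, Cronbach's alpha across eight raters) come from standard psychometric formulas. As a minimal, generic illustration, here is Cronbach's alpha computed in Python over simulated rater scores; the rater count, score range, and noise level are invented stand-ins, and this is not the IPDASi scoring code itself.

```python
import numpy as np

def cronbachs_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (subjects x raters) score matrix.

    alpha = k/(k-1) * (1 - sum of per-rater variances / variance of totals)
    """
    k = scores.shape[1]                          # number of raters (or items)
    rater_vars = scores.var(axis=0, ddof=1)      # variance of each rater's column
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of per-subject totals
    return (k / (k - 1)) * (1.0 - rater_vars.sum() / total_var)

# Invented example: 30 decision aids scored 0-100 by 8 raters
rng = np.random.default_rng(0)
latent = rng.uniform(33, 82, size=30)                       # latent quality per aid
ratings = latent[:, None] + rng.normal(0, 5, size=(30, 8))  # per-rater noise
print(f"alpha = {cronbachs_alpha(ratings):.2f}")
```

A high alpha means the raters order the decision aids consistently, which is what allows a single overall quality score to discriminate among them.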


PLOS Medicine | 2011

Reporting Guidelines for Survey Research: An Analysis of Published Guidance and Reporting Practices

Carol Bennett; Sara Khangura; Jamie C. Brehaut; Ian D. Graham; David Moher; Beth K. Potter; Jeremy Grimshaw

Carol Bennett and colleagues review the evidence and find that there is limited guidance and no consensus on the optimal reporting of survey research.


Medical Decision Making | 2014

Toward Minimum Standards for Certifying Patient Decision Aids: A Modified Delphi Consensus Process

Natalie Joseph-Williams; Robert G. Newcombe; Mary C. Politi; Marie-Anne Durand; Stephanie Sivell; Dawn Stacey; Annette M. O'Connor; Robert J. Volk; Adrian Edwards; Carol Bennett; Michael Pignone; Richard Thomson; Glyn Elwyn

Objective: The IPDAS Collaboration has developed a checklist and an instrument (IPDASi v3.0) to assess the quality of patient decision aids (PDAs) in terms of their development process and shared decision-making design components. Certification of PDAs is of growing interest in the US and elsewhere. We report a modified Delphi consensus process to agree on IPDASi (v3.0) items that should be considered as minimum standards for PDA certification, for inclusion in the refined IPDASi (v4.0).

Methods: A 2-stage Delphi voting process considered the inclusion of IPDASi (v3.0) items as minimum standards. Item scores and qualitative comments were analyzed, followed by expert group discussion.

Results: One hundred and one people voted in round 1; 87 in round 2. Forty-seven items were reduced to 44 items across 3 new categories: 1) qualifying criteria, which are required in order for an intervention to be considered a decision aid (6 items); 2) certification criteria, without which a decision aid is judged to have a high risk of harmful bias (10 items); and 3) quality criteria, believed to strengthen a decision aid but whose omission does not present a high risk of harmful bias (28 items).

Conclusions: This study provides preliminary certification criteria for PDAs. Scoring and rating processes need to be tested and finalized. However, the process of appraising the quality of the clinical evidence reported by the PDA should be used to complement these criteria; the proposed standards are designed to rate the quality of the development process and shared decision-making design elements, not the quality of the PDA's clinical content.


BMJ | 2011

Impact of CONSORT extension for cluster randomised trials on quality of reporting and study methodology: review of random sample of 300 trials, 2000-8

Noah Ivers; Monica Taljaard; Stephanie N. Dixon; Carol Bennett; Andrew D McRae; Julia Taleban; Zoe Skea; Jamie C. Brehaut; Robert F. Boruch; Martin P Eccles; Jeremy Grimshaw; Charles Weijer; Merrick Zwarenstein; Allan Donner

Objective: To assess the impact of the 2004 extension of the CONSORT guidelines on the reporting and methodological quality of cluster randomised trials.

Design: Methodological review of 300 randomly sampled cluster randomised trials. Two reviewers independently abstracted 14 criteria related to quality of reporting and four methodological criteria specific to cluster randomised trials. We compared manuscripts published before CONSORT (2000-4) with those published after CONSORT (2005-8). We also investigated differences by journal impact factor, type of journal, and trial setting.

Data sources: A validated Medline search strategy.

Eligibility criteria for selecting studies: Cluster randomised trials published in English language journals, 2000-8.

Results: There were significant improvements in five of 14 reporting criteria: identification as cluster randomised; justification for cluster randomisation; reporting whether outcome assessments were blind; reporting the number of clusters randomised; and reporting the number of clusters lost to follow-up. No significant improvements were found in adherence to methodological criteria. Trials conducted in clinical rather than non-clinical settings, and studies published in general medical journals or journals with a higher impact factor, were more likely to adhere to recommended reporting and methodological criteria overall; however, there was no evidence that improvements after publication of the CONSORT extension for cluster trials were more likely in trials conducted in clinical settings or in trials published in general medical or higher impact factor journals.

Conclusion: The quality of reporting of cluster randomised trials has improved in only a few aspects since the publication of the extension of CONSORT for cluster randomised trials, and no improvements at all were observed in essential methodological features. Overall, adherence to reporting and methodological guidelines for cluster randomised trials remains suboptimal, and further efforts are needed to improve both reporting and methodology.
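Comparisons like "significant improvements in five of 14 reporting criteria" boil down to testing a difference in proportions between trials published in 2000-4 and in 2005-8. The sketch below shows one generic way to run such a test in Python; the counts are invented, and the paper's actual analysis may well have used a different method.

```python
from scipy.stats import chi2_contingency

# Invented counts: trials meeting one reporting criterion
# (e.g. "identified as cluster randomised") in each period.
met_pre, n_pre = 60, 150     # sample published 2000-4
met_post, n_post = 90, 150   # sample published 2005-8

table = [[met_pre, n_pre - met_pre],
         [met_post, n_post - met_post]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"pre: {met_pre / n_pre:.0%}  post: {met_post / n_post:.0%}  p = {p:.4f}")
```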


BMC Public Health | 2013

Ascertainment of chronic diseases using population health data: a comparison of health administrative data and patient self-report

Elizabeth Muggah; Erin Graves; Carol Bennett; Douglas G. Manuel

Background: Health administrative data is increasingly being used for chronic disease surveillance. This study explored agreement between administrative and survey data for ascertainment of seven key chronic diseases, using individually linked data from a large population of individuals in Ontario, Canada.

Methods: All adults who completed any one of three cycles of the Canadian Community Health Survey (2001, 2003 or 2005) and agreed to have their responses linked to provincial health administrative data were included. The sample population included 85,549 persons. Previously validated case definitions for myocardial infarction, asthma, diabetes, chronic lung disease, stroke, hypertension and congestive heart failure based on hospital and physician billing codes were used to identify cases in health administrative data, and these were compared with self-report of each disease from the survey. Concordance was measured using the kappa statistic, percent positive and negative agreement, and prevalence estimates.

Results: Agreement using the kappa statistic was good or very good (kappa range: 0.66-0.80) for diabetes and hypertension, moderate for myocardial infarction and asthma, and poor or fair (kappa range: 0.29-0.36) for stroke, congestive heart failure and COPD. Prevalence was higher in health administrative data for all diseases except stroke and myocardial infarction. Health Utilities Index scores were higher for cases identified by health administrative data than for those identified by self-report for some chronic diseases (acute myocardial infarction, stroke, heart failure), suggesting that administrative data may pick up less severe cases.

Conclusions: In the general population, discordance between self-report and administrative data was large for many chronic diseases, particularly diseases with low prevalence, and the differences were not easily explained by individual and disease characteristics.
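The concordance measures named in this abstract (the kappa statistic plus percent positive and negative agreement) can all be derived from the 2x2 cross-classification of the two data sources. A minimal sketch, assuming one binary case indicator per person per source; the example arrays are invented:

```python
import numpy as np

def concordance(admin: np.ndarray, survey: np.ndarray) -> dict:
    """Cohen's kappa and percent positive/negative agreement for two
    binary disease indicators (1 = case, 0 = non-case)."""
    a = np.sum((admin == 1) & (survey == 1))   # positive in both sources
    b = np.sum((admin == 1) & (survey == 0))
    c = np.sum((admin == 0) & (survey == 1))
    d = np.sum((admin == 0) & (survey == 0))   # negative in both sources
    n = a + b + c + d
    p_obs = (a + d) / n                                     # observed agreement
    p_exp = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2  # chance agreement
    return {
        "kappa": (p_obs - p_exp) / (1 - p_exp),
        "positive_agreement": 2 * a / (2 * a + b + c),
        "negative_agreement": 2 * d / (2 * d + b + c),
    }

admin = np.array([1, 1, 0, 0, 1, 0, 0, 1, 0, 0])
survey = np.array([1, 0, 0, 0, 1, 0, 1, 1, 0, 0])
print(concordance(admin, survey))   # kappa ~ 0.58 for this toy data
```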


Medical Decision Making | 2012

Decision Coaching to Prepare Patients for Making Health Decisions: A Systematic Review of Decision Coaching in Trials of Patient Decision Aids

Dawn Stacey; Jennifer Kryworuchko; Carol Bennett; Mary Ann Murray; Sarah Mullan

Background: Decision coaching is individualized, nondirective facilitation of patient preparation for shared decision making.

Purpose: To explore characteristics and effectiveness of decision coaching evaluated within trials of patient decision aids (PtDAs) for health decisions.

Data Sources: A subanalysis of trials included in the 2011 Cochrane Review of PtDAs.

Study Selection: Eligible trials allowed the effectiveness of decision coaching to be compared with another intervention and/or usual care.

Data Extraction: Two reviewers independently screened 86 trials, extracted data, and appraised quality.

Data Synthesis: Ten trials were eligible. Decision coaching was provided by genetic counselors, nurses, pharmacists, physicians, psychologists, or health educators. Coaching compared with usual care (n = 1 trial) improved knowledge. Coaching plus PtDA compared with usual care (n = 4) improved knowledge and participation in decision making without reported dissatisfaction. Coaching compared with PtDA alone (n = 4) increased values-choice agreement and improved satisfaction with the decision-making process without any difference in knowledge or participation in decision making. Coaching plus PtDA compared with PtDA alone (n = 4) showed no difference in knowledge, values-choice agreement, participation in decision making, or satisfaction with the process. Decision coaching plus PtDA was more cost-effective than PtDA alone or usual care (n = 1).

Limitations: Methodological quality, the number of trials, and the description of decision coaching.

Conclusions: Compared with usual care, decision coaching improved knowledge. However, the improvement in knowledge was similar when coaching was compared with PtDA alone. Outcomes for other comparisons are more variable, with some trials showing positive effects and other trials reporting no difference. Given the small number of trials and the variability in results, further research is required to determine the effectiveness of decision coaching.


Journal of Clinical Epidemiology | 2011

Administrative database research infrequently used validated diagnostic or procedural codes

Carl van Walraven; Carol Bennett; Alan J. Forster

OBJECTIVE: Administrative database research (ADR) frequently uses codes to identify diagnoses or procedures. The association of these codes with the conditions they represent must be measured to gauge misclassification in a study. We measured the proportion of ADR studies using diagnostic or procedural codes that measured or referenced code accuracy.

STUDY DESIGN AND SETTING: Random sample of 150 MEDLINE-cited ADR studies stratified by year of publication. We measured the proportion of ADR studies using codes to define patient cohorts, exposures, or outcomes that measured or referenced code accuracy, along with Bayesian estimates of the probability of disease given the codes' operating characteristics.

RESULTS: One hundred fifteen ADR studies (76.7% [95% confidence interval (CI), 69.3-82.8]) used codes. Of these studies, only 14 (12.1% [7.3-19.5]) measured or referenced the association of the code with the entity it supposedly represented. This proportion did not vary by year of publication but was significantly higher in journals with greater impact factors. Of five studies reporting code sensitivity and specificity, the estimated probability of the code-related condition in code-positive patients was less than 50% in two.

CONCLUSION: In ADR, diagnostic and procedural codes are commonly used but infrequently validated. People with a code frequently do not have the condition it represents.
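The "Bayesian estimates of the probability of disease given the codes' operating characteristics" described here are, in effect, positive predictive values obtained from Bayes' theorem: the chance that a code-positive patient truly has the condition depends on the code's sensitivity and specificity and on the condition's prevalence. A worked illustration with made-up numbers shows how this probability can fall below 50%, as it did in two of the five studies:

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """P(condition | positive code) by Bayes' theorem."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Invented operating characteristics: even a code with 90% sensitivity
# and 95% specificity performs poorly when prevalence is low.
print(f"{ppv(0.90, 0.95, 0.02):.1%}")   # ~26.9% of code-positive patients
```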


Patient Education and Counseling | 2010

Validation of a Preparation for Decision Making scale

Carol Bennett; Ian D. Graham; Elizabeth Kristjansson; Stephen Kearing; Kate F. Clay; Annette M. O’Connor

OBJECTIVE: The Preparation for Decision Making (PrepDM) scale was developed to evaluate decision processes relating to the preparation of patients for decision making and for dialoguing with their practitioners. The objective of this study was to evaluate the scale's psychometric properties.

METHODS: From July 2005 to March 2006, after viewing a decision aid prescribed during routine clinical care, patients completed a questionnaire including: demographic information, treatment intention, decisional conflict, decision aid acceptability, and the PrepDM scale.

RESULTS: Four hundred orthopaedic patients completed the questionnaire. The PrepDM scale showed significant correlation with the informed (r=-0.21, p<0.01) and support (r=-0.13, p=0.01) subscales of the Decisional Conflict Scale (DCS), and discriminated significantly between patients who did and did not find the decision aid helpful (p<0.0001). Alpha coefficients for internal consistency ranged from 0.92 to 0.96. The scale is strongly unidimensional (principal components analysis), and Item Response Theory analyses demonstrated that all ten scale items function very well.

CONCLUSION: The psychometric properties of the PrepDM scale are very good.

PRACTICE IMPLICATIONS: The scale could allow more comprehensive evaluation of interventions designed to prepare patients for shared decision-making encounters regarding complex health care decisions.


Patient Education and Counseling | 2008

Appraisal of primary outcome measures used in trials of patient decision support.

Jennifer Kryworuchko; Dawn Stacey; Carol Bennett; Ian D. Graham

OBJECTIVE: To appraise instruments used as primary outcome measures in trials measuring the effectiveness of patient decision support interventions.

METHODS: Primary outcome measures were identified in trials of patient decision aids included in the 2003 Cochrane Review. Instruments were appraised for: use in calculating sample size, appropriateness, reliability, validity, responsiveness, precision, interpretability, acceptability, and feasibility.

RESULTS: Of the 35 trials, there were 35 unique primary outcome measures, and 8 instruments were appraised. Actual or preferred choice was the primary outcome measure in 18 trials. Two instruments met at least 6 of 8 appraisal criteria: the Control Preference Scale (n=2 trials) and the Decisional Conflict Scale (n=5 trials). The Decisional Conflict Scale was used to calculate sample size in 4 trials.

CONCLUSION: Choice was the most consistently used outcome measure. Most publications provided inadequate detail for appraising the instruments. Four instruments (Decisional Conflict, Control Preferences, Genetic Testing Knowledge Questionnaire, and McBride's Satisfaction with Decision) measured one or more of the International Patient Decision Aid Standards criteria for evaluating effectiveness.

PRACTICE IMPLICATIONS: Selecting relevant and high-quality outcome measures remains challenging and is an important area for further research in the field of shared decision making.

Collaboration


Dive into Carol Bennett's collaborations.

Top Co-Authors

Douglas G. Manuel
Ottawa Hospital Research Institute

Dawn Stacey
Women's College Hospital

Peter Tanuseputro
Ottawa Hospital Research Institute

Meltem Tuna
Ottawa Hospital Research Institute

Monica Taljaard
Ottawa Hospital Research Institute

Deirdre Hennessy
Ottawa Hospital Research Institute

Richard Perez
Ottawa Hospital Research Institute

Carl van Walraven
Ottawa Hospital Research Institute

Alan J. Forster
Ottawa Hospital Research Institute