David A. Feldstein
University of Wisconsin-Madison
Publication
Featured research published by David A. Feldstein.
Annals of Internal Medicine | 2014
Daniella A. Zipkin; Craig A. Umscheid; Nancy L. Keating; Elizabeth Allen; KoKo Aung; Rebecca J. Beyth; Scott Kaatz; Devin M. Mann; Jeremy B. Sussman; Deborah Korenstein; Connie Schardt; Avishek Nagi; Richard Sloane; David A. Feldstein
Shared decision making is a collaborative process that allows patients and medical professionals to consider the best scientific evidence available, along with patients' values and preferences, to make health care decisions (1). A recent Institute of Medicine report concluded that although people desire a patient experience that includes deep engagement in shared decision making, there are gaps between what patients want and what they get (2). For patients to get the experience they want, providers must effectively communicate evidence about benefits and harms. To improve the decision-making process, the Institute of Medicine recommended development and dissemination of high-quality communication tools (2). New tools, however, must match patients' numerical abilities, which are often limited. For example, in one study, as many as 40% of high school graduates could not perform basic numerical operations, such as converting 1% of 1000 to 10 of 1000. This collective statistical illiteracy is a major barrier to the interpretation of health statistics (3). Physicians may also find statistical information difficult to interpret and explain (4).
Existing literature about methods of communicating benefits and harms is broad. One review, based on 19 studies, concluded that the choice of a specific graphic is not as important as whether the graphic frames the frequency of an event with a visual representation of the total population in which it occurs (5). Another review, involving a limited literature search, found that comprehension improved when using frequencies (such as 1 in 5) instead of event rates (such as 20%) and when using absolute risk reductions (ARRs) instead of relative risk reductions (RRRs) (6). That review did not assess affective outcomes, such as patient satisfaction, or behavioral outcomes, such as changes in decision making. Yet another review identified strong evidence that patients misinterpret RRRs and supported the effectiveness of graphs in communicating harms (7); however, it did not examine the comparative effectiveness of such approaches. More narrowly focused Cochrane reviews examined the communication of risk specific to screening tests (8, 9); numerical presentations, such as ARRs, RRRs, and numbers needed to treat (NNTs) (10); and effects of decision aids (11). An expert commentary about effective risk communication recommended using plain language, icon arrays, and absolute risks and providing time intervals with risk information (12). A group of experts identified 11 key components of risk communication, including presenting numerical estimates in context with evaluative labels, conveying uncertainty, and tailoring estimates (13).
The aim of this systematic review is to comprehensively examine the comparative effectiveness of all methods of communicating probabilistic information about benefits and harms to patients to maximize their understanding, satisfaction, and decision-making ability.
Methods
We developed and followed a plan for the review that included several searches and dual abstraction of study data using standardized abstraction forms.
Data Sources and Study Selection
We searched PubMed (1966 to March 2014), CINAHL, EMBASE, and the Cochrane Central Register of Controlled Trials (1966 to December 2011) using keywords and structured terms related to the concepts of patients; communication; risk-benefit; and outcomes, such as understanding or comprehension, preferences or satisfaction, and decision making. Supplement 1 shows the detailed search strategy.
Supplement 1. Search Strategies
We included cross-sectional or prospective, longitudinal trials that were published in English, had an active control group, recruited patients or healthy volunteers, and compared any method of communicating probabilistic information with another method. We focused on different methods of communicating the same specific probabilities to eliminate any independent effects that could result from different probabilities being studied (for example, different magnitudes or directions of effect). Studies of personalized risks, which may vary from person to person, were included when participants were randomly assigned. When studies of personalized risks were not randomized, the risks were considered to differ between the groups and the studies were excluded. No limits were placed on study size, location, or duration or on the nature of the communication method. When needed, we reviewed sources specified in the articles, such as Web sites, to directly review the interventions and determine whether probabilistic information was addressed. Studies of medical students, health professionals, and public health or mass media campaigns were excluded.
One independent reviewer screened each title and abstract and excluded citations that were not original studies or were unrelated to probabilistic information. Two independent reviewers screened the full text of the remaining citations to identify eligible articles. Disagreements between the 2 reviewers were resolved by consensus, with a third reviewer arbitrating any unresolved disagreements.
Data Extraction and Quality Assessment
Two reviewers independently abstracted detailed information about the study population, interventions, primary outcomes, and risk of bias from each included study using a standardized abstraction form, which was developed a priori (Supplement 2). A third reviewer resolved any disagreements. We categorized outcomes in 1 of 3 domains: cognitive (understanding, such as accuracy in answering questions related to probabilistic information, or general comprehension of the probabilistic information), affective (such as preferences for or satisfaction with the method of communicating probabilistic information), and behavioral (such as real or theoretical decision making).
Supplement 2. Abstraction Form
Risk of bias in randomized, controlled trials was assessed on the basis of adequacy of randomization, allocation concealment, similarity of study groups at baseline, blinding, equal treatment of groups throughout the study, completeness of follow-up, and intention to treat (participants analyzed in the groups to which they were randomly assigned) (14). Risk of bias in observational studies was assessed with a modified set of criteria adapted from the Newcastle-Ottawa Scale (15).
Data Synthesis and Analysis
Data were tabulated, and the frequency of all head-to-head comparisons in studies was assessed to identify clusters of comparisons. In many instances, several interventions were bundled in a single study group (such as event rate plus icon array, or event rate plus natural frequencies plus ARRs). Bundles were not separated or combined with similar interventions because it could not be determined which component of the bundle drove the effect. Descriptive statistics were used. We decided a priori not to do meta-analysis because of study heterogeneity. We emphasized findings from randomized studies, as well as findings from nonrandomized studies when they were supported by more than 1 study.
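As a rough illustration of the data-synthesis step described above (not the authors' actual analysis code), the following Python sketch tallies head-to-head comparisons of study groups into a frequency table of the kind summarized in the review's heat map, keeping bundled interventions intact as single comparators. The study records and labels are hypothetical.

```python
from collections import Counter
from itertools import combinations

# Hypothetical study records (not taken from the review): each study group is a
# "bundle" of one or more presentation formats, kept intact rather than split.
studies = [
    {"id": "A", "groups": [("event rate",), ("natural frequency",)]},
    {"id": "B", "groups": [("event rate", "icon array"), ("event rate",)]},
    {"id": "C", "groups": [("ARR",), ("RRR",), ("NNT",)]},
]

# Tally every head-to-head comparison of study groups; a study with three
# groups contributes three pairwise comparisons to the frequency table.
tally = Counter()
for study in studies:
    labels = [" + ".join(group) for group in study["groups"]]
    for a, b in combinations(labels, 2):
        tally[tuple(sorted((a, b)))] += 1

for (fmt_a, fmt_b), n in tally.most_common():
    print(f"{fmt_a}  vs  {fmt_b}: {n} comparison(s)")
```

Because one study may contribute several comparisons, the counts describe comparisons rather than studies, which matches how the review's heat map is described.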
Role of the Funding Source
No funding supported this study. The authors participated within their roles on the Evidence-Based Medicine Task Force of the Society of General Internal Medicine.
Results
The initial search through December 2011 retrieved 22,103 citations (16,661 from PubMed, 1194 from CINAHL, 2861 from the Cochrane Central Register of Controlled Trials, and 1387 from EMBASE), and 20,076 remained after removing duplicates. We updated the PubMed search through 30 March 2014, yielding 6529 additional citations; 5970 remained after removing duplicates, for a total of 26,046 citations for review. A total of 630 articles were selected for full-text review, and 84 were included, representing 91 unique studies (16–99). Reasons for exclusion are noted in Figure 1, and study details are provided in Supplement 3.
Figure 1. Summary of evidence search and selection.
Supplement 3. Details of All Included Studies
Seventy-four (81.3%) of the 91 included studies were randomized trials, most with cross-sectional designs. The median number of participants in randomized trials was 268 (range, 31 to 4685), and the median in all studies was 268 (range, 24 to 16,133). Thirty-three studies (36.3%) included patients at specific risk for the target condition of interest. Forty-eight studies (52.7%) presented probabilistic data about benefits of a therapy or intervention (with 7 [14.6%] also presenting harms), 21 (23.1%) presented data only on harms, and 9 (10%) involved screening tests. Forty-nine studies (54.4%) delivered interventions on paper and 39 (42.9%) on a computer, typically over the Internet. The characteristics of study participants are presented in Tables 1 and 2.
Table 1. Characteristics of Study Participants
Table 2. Proportion of Studies Including Participants at Risk Versus Not at Risk for Target Condition
Risk of bias for the included randomized trials was moderate (Figure 2). Randomization was adequate in 32 trials (42.7%), inadequate in 3 (4.0%), and unclear in 40 (53.3%). Allocation concealment was not stated in 55 trials (73.3%). Similarity of groups at baseline was adequate in 37 trials (49.3%) and unclear in 32 (42.7%). Blinding, equal treatment, and intention-to-treat items were similarly difficult to assess from reported information.
Figure 2. Risk of bias for randomized, controlled trials (n = 74). Adapted from reference 100.
Study Interventions and Comparators
A frequency table (heat map) of all study intervention comparisons was created to identify clusters of comparisons (Supplement 4). The heat map represents study group comparisons, so one study may contribute several comparisons. The most commonly studied numerical presentations of data were natural frequencies, defined as the numbers of persons with events juxtaposed with a baseline denominator of persons (for example, 4 out of 100 persons had the outcome); and event rates, defined as the proportions of persons with events …
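The numerical presentation formats compared throughout this review follow simple arithmetic relationships. The sketch below is a minimal Python illustration of those definitions using a worked example of a 10% versus 8% event rate; the function names and example numbers are illustrative and are not drawn from the paper.

```python
def natural_frequency(event_rate: float, denominator: int = 100) -> str:
    """Express an event rate (e.g., 0.08) as 'x out of N persons'."""
    return f"{event_rate * denominator:g} out of {denominator} persons"

def absolute_risk_reduction(control_rate: float, treated_rate: float) -> float:
    """ARR: the difference in event rates between control and treated groups."""
    return control_rate - treated_rate

def relative_risk_reduction(control_rate: float, treated_rate: float) -> float:
    """RRR: the ARR expressed relative to the control-group event rate."""
    return (control_rate - treated_rate) / control_rate

def number_needed_to_treat(control_rate: float, treated_rate: float) -> float:
    """NNT: the reciprocal of the ARR."""
    return 1 / absolute_risk_reduction(control_rate, treated_rate)

# The same treatment effect stated four ways (control 10%, treated 8%).
control, treated = 0.10, 0.08
print(natural_frequency(treated))                                   # 8 out of 100 persons
print(f"ARR: {absolute_risk_reduction(control, treated):.2f}")       # 0.02 (2 percentage points)
print(f"RRR: {relative_risk_reduction(control, treated):.2f}")       # 0.20 (a 20% relative reduction)
print(f"NNT: {number_needed_to_treat(control, treated):.0f}")        # 50
```

The example makes the review's concern concrete: a "20% relative reduction" and a "2 in 100 absolute reduction" describe the identical effect, yet the studies compared here repeatedly found that audiences understand and weigh them differently.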
Depression and Anxiety | 2013
Amrit Kanwar; Shaista Malik; Larry J. Prokop; Leslie A. Sim; David A. Feldstein; Zhen Wang; M. Hassan Murad
Although anxiety has been proposed as a potentially modifiable risk factor for suicide, research examining the relationship between anxiety and suicidal behaviors has demonstrated mixed results. Therefore, we aimed to test the hypothesis that anxiety disorders are associated with suicidal behaviors and to evaluate the magnitude and quality of the supporting evidence.
Implementation Science | 2017
David A. Feldstein; Rachel Hess; Thomas McGinn; Rebecca G. Mishuris; Lauren McCullagh; Paul D. Smith; Michael Flynn; Joseph Palmisano; Gheorghe Doros; Devin Mann
Background: Clinical prediction rules (CPRs) represent a method of determining individual patient risk to help providers make more accurate decisions at the point of care. Well-validated CPRs are underutilized but may decrease antibiotic overuse for acute respiratory infections. The integrated clinical prediction rules (iCPR) study builds on a previous single-clinic study to integrate two CPRs into the electronic health record and assess their impact on practice. This article discusses the study design and implementation of a multicenter cluster randomized controlled trial of the iCPR clinical decision support system, including the tool adaptation, usability testing, staff training, and implementation study to disseminate iCPR at multiple clinical sites across two health care systems.
Methods: The iCPR tool is based on two well-validated CPRs, one for strep pharyngitis and one for pneumonia. The iCPR tool uses the reason for visit to trigger a risk calculator. Provider completion of the risk calculator produces a risk score, which is linked to an order set. Order sets guide evidence-based care and include progress note documentation, tests, prescription medications, and patient instructions. The iCPR tool was refined based on interviews with providers, medical assistants, and clinic managers, and two rounds of usability testing. “Near live” usability testing with simulated patients was used to ensure that iCPR fit into providers’ clinical workflows. Thirty-three Family Medicine and General Internal Medicine primary care clinics were recruited at two institutions. Clinics were randomized to academic detailing about strep pharyngitis and pneumonia diagnosis and treatment (control) or academic detailing plus use of the iCPR tool (intervention). The primary outcome is the difference in antibiotic prescribing rates between the intervention and control groups, with secondary outcomes of differences in rapid strep and chest x-ray ordering. Use of the components of the iCPR tool will also be assessed.
Discussion: The iCPR study uses a strong user-centered design and builds on the previous initial study to assess whether CPRs integrated in the electronic health record can change provider behavior and improve evidence-based care in a broad range of primary care clinics.
Trial registration: Clinicaltrials.gov (NCT02534987)
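To make the risk-calculator-to-order-set workflow concrete, the sketch below shows how a clinical prediction rule can map patient findings to a risk score and a suggested order set. It uses the widely published modified Centor (McIsaac) criteria for strep pharyngitis as a stand-in; whether the iCPR tool uses exactly these items, weights, thresholds, or order-set wording is an assumption, and all names here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class PharyngitisFindings:
    """Findings a provider might enter into a risk calculator (illustrative fields)."""
    age: int
    fever_over_38c: bool
    no_cough: bool
    tender_cervical_nodes: bool
    tonsillar_exudate: bool

def modified_centor_score(f: PharyngitisFindings) -> int:
    """Modified Centor (McIsaac) score: one point per clinical criterion, adjusted for age."""
    score = sum([f.fever_over_38c, f.no_cough,
                 f.tender_cervical_nodes, f.tonsillar_exudate])
    if f.age < 15:
        score += 1
    elif f.age >= 45:
        score -= 1
    return score

def suggested_order_set(score: int) -> str:
    """Map the risk score to a hypothetical order-set bundle (not the iCPR tool's wording)."""
    if score <= 1:
        return "low risk: symptomatic care, no testing or antibiotics"
    if score <= 3:
        return "intermediate risk: rapid strep test, treat if positive"
    return "high risk: rapid strep test and/or treatment per local guidance"

findings = PharyngitisFindings(age=30, fever_over_38c=True, no_cough=True,
                               tender_cervical_nodes=False, tonsillar_exudate=True)
score = modified_centor_score(findings)
print(score, "->", suggested_order_set(score))  # 3 -> intermediate risk: ...
```

In the trial described above, an analogous calculation is triggered by the reason for visit in the electronic health record, and the resulting score links to order sets covering documentation, testing, prescriptions, and patient instructions.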
Medical Teacher | 2016
Craig A. Umscheid; Matthew J. Maenner; Nikhil Mull; Angela Veesenmeyer; John T. Farrar; Stanley Goldfarb; Gail Morrison; Mark A. Albanese; John G. Frohna; David A. Feldstein
Purpose: To evaluate the feasibility and impact of evidence-based medicine (EBM) educational prescriptions (EPs) in medical student clerkships.
Methods: Students answered clinical questions during clerkships using EPs, which guide learners through the “four As” of EBM. Epidemiology fellows graded EPs using a rubric. Feasibility was assessed using descriptive statistics and student and fellow end-of-study questionnaires, which also measured impact. In addition, for each EP, students reported patient impact. Impact on EBM skills was assessed by change in EP scores over time and by scores on an EBM objective structured clinical exam (OSCE), which were compared to controls from the prior year.
Results: 117 students completed 402 EPs evaluated by 24 fellows. The average score was 7.34/9.00 (SD 1.58). 69 students (59%) and 21 fellows (88%) completed questionnaires. Most students thought EPs improved “Acquiring” and “Appraising”; almost half thought EPs improved “Asking” and “Applying”. Fellows did not value grading EPs. For 18% of EPs, students reported a “change” or “potential change” in treatment; 56% “confirmed” treatment. EP scores increased by 1.27 (95% CI: 0.81–1.72). There were no differences in OSCE scores between cohorts.
Conclusions: Integrating EPs into clerkships is feasible and has impact, yet OSCE scores were unchanged and research fellows had limitations as evaluators.
Journal of Graduate Medical Education | 2017
Jeremy Smith; Elizabeth Jacobs; Zhanhai Li; Bennett Vogelman; Yingqi Zhao; David A. Feldstein
BACKGROUND: Direct observation of clinical skills is a cornerstone of competency-based education and training. Ensuring direct observation in a consistent fashion has been a significant challenge for residency programs.
OBJECTIVE: The purpose of this study was to evaluate the effects of a novel evaluation system designed to achieve ongoing direct observation of residents, examine changes in resident observation practices, and understand faculty attitudes toward direct observation and the evaluation system.
METHODS: Internal medicine residents on an ambulatory block rotation participated in a new evaluation system, which replaced a single end-of-rotation summative evaluation with 9 formative evaluations based on direct observation. Faculty received training in direct observation and use of the forms, and residents were given responsibility to collect 9 observations per rotation. Faculty members contacted residents at the beginning and middle of the rotation to ensure completion of the observations. Residents and faculty also completed postrotation surveys to gauge the impact of the new system.
RESULTS: A total of 507 patient encounters were directly observed, and 52 of 57 (91%) residents completed all 9 observations. Residents reported considerably more direct observation than prior to the intervention, and most reported changes to their clinical skills based on faculty feedback. Faculty reported improvements in their attitudes, increased their use of direct observation, and preferred the new system to the old one.
CONCLUSIONS: A novel evaluation system replacing summative evaluations with multiple formative evaluations based on direct observation was successful in achieving high rates of observation and in improving faculty attitudes toward direct observation.
Journal of Cancer Education | 2017
SarahMaria Donohue; James Edward Haine; Zhanhai Li; David A. Feldstein; Mark A. Micek; Elizabeth Trowbridge; Sandra Kamnetz; James M. Sosman; Lee G. Wilke; Mary E. Sesto; Amye Tevaarwerk
Every cancer survivor and his/her primary care provider should receive an individualized survivorship care plan (SCP) following curative treatment. Little is known regarding point-of-care utilization at primary care visits, so we assessed SCP utilization in the clinical context of primary care visits. Primary care physicians and advanced practice providers (APPs) who had seen survivors following provision of an SCP were identified. Eligible primary care physicians and APPs were sent an online survey evaluating SCP utilization and influence on decision-making at the point of care, accompanied by copies of the survivor’s SCP and the clinic note. Eighty-eight primary care physicians and APPs were surveyed in November 2016, with 40 (45%) responding. Most respondents (60%) reported discussing cancer or related issues during the visit. Information needed included treatments received (66%) and which follow-up visits were the responsibility of the cancer team (58%) versus primary care (58%). Respondents acquired this information by asking the patient (79%), checking oncology notes (75%), using the SCP (17%), or using online resources (8%). Barriers to SCP use included being unaware of the SCP (73%), difficulty locating it (30%), and finding needed information faster via another mechanism (15%). Although most respondents did not use the SCP for the visit (90%), most (61%) believed one would be quite or very helpful for future visits. Most primary care visits included discussion of cancer or cancer-related issues. SCPs may provide the information necessary to deliver optimal survivor care, but efforts are needed to reduce barriers and to design SCPs for primary care use.
JAMA | 2006
Terrence M. Shaneyfelt; Karyn D. Baum; Douglas S. Bell; David A. Feldstein; Thomas K. Houston; Scott Kaatz; Chad T. Whelan; Michael L. Green
Archive | 2006
Terrence M. Shaneyfelt; Karyn D. Baum; Douglas S. Bell; David A. Feldstein; Thomas K. Houston; Chad T. Whelan; Michael L. Green
Medical Teacher | 2010
Randall S. Edson; Thomas J. Beckman; Colin P. West; Paul Aronowitz; Robert G. Badgett; David A. Feldstein; Mark C. Henderson; Joseph C. Kolars; Furman S. McDonald