Sheri A. Keitz
University of Miami
Publications
Featured research published by Sheri A. Keitz.
BMC Medical Informatics and Decision Making | 2007
Connie Schardt; Martha B. Adams; Thomas Owens; Sheri A. Keitz; Paul A. Fontelo
Background: Supporting 21st century health care and the practice of evidence-based medicine (EBM) requires ubiquitous access to clinical information and to knowledge-based resources to answer clinical questions. Many questions go unanswered, however, due to lack of skills in formulating questions, crafting effective search strategies, and accessing databases to identify best levels of evidence.
Methods: This randomized trial was designed as a pilot study to measure the relevancy of search results using three different interfaces for the PubMed search system. Two of the search interfaces utilized a specific framework called PICO, which was designed to focus clinical questions and to prompt for publication type or type of question asked. The third interface was the standard PubMed interface readily available on the Web. Study subjects were recruited from interns and residents on an inpatient general medicine rotation at an academic medical center in the US. Thirty-one subjects were randomized to one of the three interfaces, given 3 clinical questions, and asked to search PubMed for a set of relevant articles that would provide an answer for each question. The success of the search results was determined by a precision score, which compared the number of relevant or gold-standard articles retrieved in a result set to the total number of articles retrieved in that set.
Results: Participants using the PICO templates (Protocol A or Protocol B) had higher precision scores for each question than the participants who used Protocol C, the standard PubMed Web interface (Question 1: A = 35%, B = 28%, C = 20%; Question 2: A = 5%, B = 6%, C = 4%; Question 3: A = 1%, B = 0%, C = 0%). 95% confidence intervals were calculated for the precision for each question using a lower boundary of zero. However, the 95% confidence limits were overlapping, suggesting no statistical difference between the groups.
Conclusion: Due to the small number of searches for each arm, this pilot study could not demonstrate a statistically significant difference between the search protocols. However, there was a trend towards higher precision that needs to be investigated in a larger study to determine whether PICO can improve the relevancy of search results.
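The precision score described above is the standard information-retrieval notion of precision: the number of relevant (gold-standard) articles in a result set divided by the total number of articles retrieved. A minimal sketch of the calculation; the PubMed IDs below are hypothetical, not from the study:

```python
def precision(retrieved, gold_standard):
    """Fraction of retrieved articles that appear in the gold-standard set."""
    if not retrieved:
        return 0.0
    relevant = set(retrieved) & set(gold_standard)
    return len(relevant) / len(retrieved)

# Hypothetical result set for one clinical question
retrieved = ["PMID1", "PMID2", "PMID3", "PMID4", "PMID5"]
gold = ["PMID2", "PMID5", "PMID9"]

print(precision(retrieved, gold))  # 2 of 5 retrieved are relevant -> 0.4
```

Note that precision says nothing about recall: a search that retrieves only one gold-standard article scores 100% precision even if it misses the rest of the relevant literature.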
Canadian Medical Association Journal | 2004
Alexandra Barratt; Peter C. Wyer; Rose Hatala; Thomas McGinn; Antonio L. Dans; Sheri A. Keitz; Virginia A. Moyer; Gordon Guyatt
Physicians, patients and policy-makers are influenced not only by the results of studies but also by how authors present the results.[1],[2],[3],[4] Depending on which measures of effect authors choose, the impact of an intervention may appear very large or quite small, even though the
Canadian Medical Association Journal | 2005
Rose Hatala; Sheri A. Keitz; Peter C. Wyer; Gordon H. Guyatt
Clinicians wishing to quickly answer a clinical question may seek a systematic review, rather than searching for primary articles. Such a review is also called a meta-analysis when the investigators have used statistical techniques to combine results across studies. Databases useful for this purpose
Journal of General Internal Medicine | 2006
Rose Hatala; Sheri A. Keitz; Mark C. Wilson, MD, MPH; Gordon H. Guyatt
Incorporating evidence-based medicine (EBM) into clinical practice is an important competency that residency training must address. Residency program directors, and the clinical educators who work with them, should develop curricula to enhance residents’ capacity for independent evidence-based practice. In this article, the authors argue that residency programs must move beyond journal club formats to promote the practice of EBM by trainees. The authors highlight the limitations of journal club, and suggest additional curricular approaches for an integrated EBM curriculum. Helping residents become effective evidence users will require a sustained effort on the part of residents, faculty, and their educational institutions.
Journal of General Internal Medicine | 2001
Christopher H. Cabell; Connie Schardt; Linda L. Sanders; G. Ralph Corey; Sheri A. Keitz
OBJECTIVE: To determine if a simple educational intervention can increase resident physician literature search activity.
DESIGN: Randomized controlled trial.
SETTING: University hospital-based internal medicine training program.
PATIENTS/PARTICIPANTS: Forty-eight medical residents rotating on the general internal medicine service.
INTERVENTIONS: One-hour didactic session, the use of well-built clinical question cards, and practical sessions in clinical question building.
MEASUREMENTS AND MAIN RESULTS: Objective data from the library information system that included the number of log-ons to MEDLINE, searching volume, abstracts viewed, full-text articles viewed, and time spent searching. Median search activity as measured per person per week (control vs intervention): number of log-ons to MEDLINE (2.1 vs 4.4, P<.001); total number of search sets (24.0 vs 74.2, P<.001); abstracts viewed (5.8 vs 17.7, P=.001); articles viewed (1.0 vs 2.6, P=.005); and hours spent searching (0.8 vs 2.4, P<.001).
CONCLUSIONS: A simple educational intervention can markedly increase resident searching activity.
Canadian Medical Association Journal | 2005
Victor M. Montori; Peter C. Wyer; Thomas B. Newman; Sheri A. Keitz; Gordon H. Guyatt
For clinicians to use a diagnostic test in clinical practice, they need to know how well the test distinguishes between those who have the suspected disease or condition and those who do not. If investigators choose clinically inappropriate populations for their study of a diagnostic test and
Canadian Medical Association Journal | 2004
Peter C. Wyer; Sheri A. Keitz; Rose Hatala; Robert Hayward; Alexandra Barratt; Victor M. Montori; Eric Wooltorton; Gordon H. Guyatt
Medical educators have embraced evidence-based medicine (EBM) since its introduction as an innovative approach to medical practice and education in the early 1990s.[1],[2] The Royal College of Physicians and Surgeons of Canada, the College of Family Physicians of Canada and the US
Academic Medicine | 2003
Sheri A. Keitz; Gloria J. Holland; Evert H. Melander; Hayden B. Bosworth; Stephanie H. Pincus
Purpose: The U.S. Department of Veterans Affairs (VA) supports 8,700 resident positions nationally to enhance quality of care for veterans and to educate physicians. This study sought to establish a yearly quality indicator to identify and follow strengths and opportunities for improvement in VA clinical training programs.
Method: In March 2001, the VA Learners’ Perceptions Survey, a validated 57-item questionnaire, was mailed to 3,338 residents registered at 130 VA facilities. They were asked to rate their overall satisfaction with the VA clinical training experience and their satisfaction in four domains (faculty/preceptor, learning, working, and physical environments) using a five-point Likert scale. Questionnaires were received from 1,775 residents (53.2%). A full analysis was conducted using 1,436 of these questionnaires, whose respondents were categorized in four training programs: medicine (n = 706), surgery (n = 291), subspecialty (n = 266), and psychiatry (n = 173).
Results: On a scale of 0 to 100, residents gave their clinical training experience an average score of 79. Eighty-four percent would have recommended VA training to peers, and 81% would have chosen VA training again. Overall, 87% were satisfied with their faculty/preceptors, 78% with the learning environment, and 67% with the working and physical environments. The survey was sensitive to differences in satisfaction among the trainee groups, with residents in internal medicine (IM) the least satisfied.
Conclusion: The VA Learners’ Perceptions Survey is the first validated survey to address comprehensive satisfaction issues in clinical training. The survey highlights strengths and opportunities for improvement in VA clinical training and is the first step toward improving education.
Journal of General Internal Medicine | 2008
Kameshwar Prasad; Roman Jaeschke; Peter C. Wyer; Sheri A. Keitz; Gordon H. Guyatt
Odds ratios (OR) commonly appear in the medical literature summarizing the comparative effects of interventions and exposures in observational studies, randomized trials, and meta-analyses. Clinicians find it difficult to understand odds and odds ratios as measures of association, although they may be comfortable with the parallel concepts of risk and risk ratios. Probably, no one (with the possible exception of certain statisticians) intuitively understands a ratio of odds.1,2 Nevertheless, odds ratios are frequently encountered in research reports as the principal measure of association.3–5 Odds and risk constitute parallel statistical metrics for measuring frequency and ratios of frequency. Their relationship might be compared to the use of different scales such as Fahrenheit and Centigrade to report absolute values of and relationships between different temperatures. Until recently, the choice between the odds and risk metrics was determined largely on the basis of their statistical properties rather than on the basis of their usefulness as means of communicating the results of research to clinicians and their patients. In a previous article, we demonstrated approaches to helping teachers and learners master the concepts of risk, relative risk, and risk reduction as the preferred framework for the purposes of clinical application.6 The content of that article may be regarded as prerequisite knowledge for the purposes of the present discussion and demonstration. This article presents an approach to helping clinicians understand what odds and odds ratios mean and when risk and risk ratios can numerically substitute for odds and odds ratios. Odds ratios may be chosen as the measure of association by authors of studies conforming to a variety of designs, only some of which mandate their use in preference to risk ratios. These include randomized trials, systematic reviews, case-control studies, and studies involving the use of logistic regression.
In teaching clinical learners who are not familiar with this measure of outcome, we have found it important to concentrate initially on the conceptual understanding of odds and odds ratios and to avoid combining this with discussion of study design issues. We have correspondingly not included discussion of these issues in this article. We will present interactive approaches that educators have developed to overcome ‘stumbling blocks’ among learners. To help the reader envision these approaches, we present sequenced advice for teachers and characteristic learner responses. The “tips” in this article are adapted from approaches developed by educators with experience in teaching evidence-based medicine skills to clinicians. We present a full description of the development of the tips in this series and pertinent background information elsewhere.7 Each tip includes sections on “when to use the tip,” “the script,” the “bottom line,” and a “summary card.” The first tip helps learners understand the relationship between odds and probability (risk) and define circumstances when odds and risks are similar. The second tip builds on the first and helps learners understand what an odds ratio is and when it is similar to the risk ratio.
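The two relationships at the heart of these tips are that odds = risk / (1 − risk), and that the odds ratio converges to the risk ratio as the outcome becomes rare. A brief numerical sketch; the event rates below are illustrative, not taken from the article:

```python
def odds(risk):
    """Convert a probability (risk) to odds: p / (1 - p)."""
    return risk / (1 - risk)

def odds_ratio(risk_exposed, risk_control):
    """Ratio of odds in the exposed group to odds in the control group."""
    return odds(risk_exposed) / odds(risk_control)

def risk_ratio(risk_exposed, risk_control):
    """Ratio of risks (relative risk)."""
    return risk_exposed / risk_control

# Common outcome: the odds ratio overstates the risk ratio
print(odds_ratio(0.40, 0.20), risk_ratio(0.40, 0.20))    # ~2.67 vs 2.0

# Rare outcome: the odds ratio closely approximates the risk ratio
print(odds_ratio(0.004, 0.002), risk_ratio(0.004, 0.002))  # ~2.004 vs 2.0
```

This is why the "rare disease assumption" lets clinicians read an odds ratio from a case-control study roughly as a relative risk, and why the same shortcut misleads when the outcome is common.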
Academic Medicine | 2008
Grant W. Cannon; Sheri A. Keitz; Gloria J. Holland; Barbara K. Chang; John M. Byrne; Anne Tomolo; David C. Aron; Annie Wicker; T. Michael Kashner
Purpose: To compare medical students’ and physician residents’ satisfaction with Veterans Affairs (VA) training to determine the factors that were most strongly associated with trainee satisfaction ratings.
Method: Each year from 2001 to 2006, all medical students and residents in VA teaching facilities were invited to complete the Learners’ Perceptions Survey. Participants rated their overall training satisfaction on a 100-point scale and ranked specific satisfaction in four separate educational domains (learning environment, clinical faculty, working environment, and physical environment) on a five-point Likert scale. Each domain was composed of unique items.
Results: A total of 6,527 medical students and 16,583 physician residents responded to the survey. The overall training satisfaction scores for medical students and physician residents were 84 and 79, respectively (P < .001), with significant differences in satisfaction reported across the training continuum. For both medical students and residents, the rating of each of the four educational domains was statistically significantly associated with the overall training satisfaction score (P < .001). The learning environment domain had the strongest association with overall training satisfaction score, followed by the clinical preceptor, working environment, and physical environment domains; no significant differences were found between medical students and physician residents in the rank order. Satisfaction with quality of care and faculty teaching contributed significantly to training satisfaction.
Conclusions: Factors that influence training satisfaction were similar for residents and medical students. The domain with the highest association was the learning environment; quality of care was a key item within this domain.