Publications


Featured research published by John H. Littlefield.


Academic Medicine | 2000

What evidence supports teaching evidence-based medicine?

Alison Dobbie; F. David Schneider; Anthony D. Anderson; John H. Littlefield

Evidence-based teaching and learning are “hot topics” in medical education. Teaching critical thinking and appraisal skills to learners should give them a current knowledge base, a constantly questioning attitude, and the tools for lifelong learning. However, what is the evidence that teaching evidence-based medicine (EBM) actually changes learners’ behaviors and that such changes eventually translate into better patient care and outcomes? Ironically, few published studies have evaluated the teaching of evidence-based medicine, and two recent critical reviews of EBM curricula offer disappointing conclusions as to the effectiveness of such programs. Norman and Shannon found evidence that teaching critical appraisal skills can increase students’, but not residents’, knowledge of epidemiology. Green reviewed published reports on 18 teaching programs and found that most of the studies had poor teaching methods and inadequate evaluation methods. He concluded that, “in those studies that were methodologically rigorous, the curricula’s effectiveness in improving knowledge and skills was modest.” There are thus few good tools for measuring short-term outcomes of evidence-based teaching and learning (for example, how well learners acquire the basic knowledge and skills of EBM), and fewer yet to measure whether learners’ behaviors change or are maintained over time. It is even more difficult to determine whether teaching EBM techniques benefits patients in terms of reduced morbidity and mortality. In fact, some authors have postulated that using EBM can adversely affect patient care, by devaluing the “non-evidentiary aspects of medical practice,” such as clinical judgment and expert opinion. We conducted a small-group discussion session entitled “How Can We Best Evaluate the Teaching of Evidence-Based Medicine?” at the 1999 annual meeting of the Association of American Medical Colleges. Approximately 40 MD and non-MD faculty at all levels of seniority participated, representing many different specialties and medical schools across the country. Most were currently involved in the development and administration of EBM teaching programs for students and residents at their home institutions. The discussion centered on the following four questions: (1) What is the evidence that teaching evidence-based learning techniques changes learners’ behaviors? (2) How can we collaborate to produce this evidence? (3) What new tools and strategies can we invent as a group? (4) Is there any valid and reliable way to measure the effect of EBM teaching and learning on patient outcomes? This paper presents a summary of the group’s discussion, with suggestions for future work needed to evaluate the outcomes of EBM teaching programs.


Academic Medicine | 2003

Do peer chart audits improve residents' performance in providing preventive care?

Judy L. Paukert; Heidi S. Chumley-Jones; John H. Littlefield

Purpose. One recommended method to evaluate residents’ competence in practice-based learning and improvement is chart audit. This study determined whether residents improved in providing preventive care after a peer chart audit program was initiated. Method. Residents audited 1,005 charts and scored their peers on 12 clinical preventive services. The mean total chart audit scores were compared across five time blocks of the 45-month study. Results. Residents’ performance in providing preventive care initially improved significantly but declined in the last ten months. However, their performance remained significantly higher than at the beginning. Conclusions. By auditing their peers’ charts, residents improved their own performance in providing preventive care. The diffusion of innovations theory may explain the prolonged implementation phase and problems in maintaining a chart audit program.
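
As a rough illustration of the comparison the abstract describes, here is a minimal Python sketch on entirely synthetic data; the scoring convention (count of documented services per chart) and the simulated trend are assumptions for illustration, not the study's data or analysis code.

```python
import numpy as np
import pandas as pd
from scipy.stats import f_oneway

rng = np.random.default_rng(5)

# Synthetic stand-in for the 1,005 audited charts: each chart is scored
# on 12 clinical preventive services (1 = documented, 0 = not) and falls
# into one of five time blocks of the 45-month study.
n_charts = 1005
blocks = rng.integers(1, 6, size=n_charts)
p_doc = 0.45 + 0.08 * blocks.astype(float)   # simulated early improvement
p_doc[blocks == 5] -= 0.10                   # simulated late decline
services = rng.random((n_charts, 12)) < p_doc[:, None]

charts = pd.DataFrame({"block": blocks, "total_score": services.sum(axis=1)})

# Mean total audit score per time block, then a one-way ANOVA testing
# whether mean scores differ across the five blocks.
print(charts.groupby("block")["total_score"].mean().round(2))
groups = [g["total_score"].to_numpy() for _, g in charts.groupby("block")]
stat, p = f_oneway(*groups)
print(f"F = {stat:.2f}, p = {p:.3g}")
```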


Academic Medicine | 2005

Improving resident performance assessment data: Numeric precision and narrative specificity

John H. Littlefield; Debra A. DaRosa; Judy L. Paukert; Reed G. Williams; Debra L. Klamen; John Schoolfield

Purpose To evaluate the use of a systems approach for diagnosing performance assessment problems in surgery residencies, and intervene to improve the numeric precision of global rating scores and the behavioral specificity of narrative comments. Method Faculty and residents at two surgery programs participated in parallel before-and-after trials. During the baseline year, quality assurance data were gathered and problems were identified. During two subsequent intervention years, an educational specialist at each program intervened with an organizational change strategy to improve information feedback loops. Three quality-assurance measures were analyzed: (1) percentage return rate of forms, (2) generalizability coefficients and 95% confidence intervals of scores, and (3) percentage of forms with behaviorally specific narrative comments. Results Median return rates of forms increased significantly from baseline to intervention Year 1 at Site A (71% to 100%) and Site B (75% to 100%), and then remained stable during Year 2. Generalizability coefficients increased between baseline and intervention Year 1 at Site A (0.65 to 0.85) and Site B (0.58 to 0.79), and then remained stable. The 95% confidence interval around resident mean scores improved at Site A from baseline to intervention Year 1 (0.78 to 0.58) and then remained stable; at Site B, it remained constant throughout (0.55 to 0.56). The median percentage of forms with behaviorally specific narrative comments at Site A increased significantly from baseline to intervention Years 1 and 2 (50%, 57%, 82%); at Site B, the percentage increased significantly in intervention Year 1, and then remained constant (50%, 60%, 67%). Conclusions Diagnosing performance assessment system problems and improving information feedback loops improved the quality of resident performance assessment data at both programs.
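
For context on the generalizability coefficients reported above, the sketch below shows one standard estimator built from two-way ANOVA variance components. It uses synthetic data, and the fully crossed residents-by-raters design (every resident rated by the same raters) is an assumption for illustration, not the study's actual measurement model.

```python
import numpy as np

def g_coefficient(scores: np.ndarray) -> float:
    """Relative generalizability coefficient for a fully crossed
    persons (rows) x raters (columns) design with one score per cell:
    G = var_person / (var_person + var_residual / n_raters)."""
    n_p, n_r = scores.shape
    grand = scores.mean()
    ss_p = n_r * ((scores.mean(axis=1) - grand) ** 2).sum()
    ss_r = n_p * ((scores.mean(axis=0) - grand) ** 2).sum()
    ss_res = ((scores - grand) ** 2).sum() - ss_p - ss_r  # interaction + error
    ms_p = ss_p / (n_p - 1)
    ms_res = ss_res / ((n_p - 1) * (n_r - 1))
    var_p = max((ms_p - ms_res) / n_r, 0.0)  # person variance component
    return var_p / (var_p + ms_res / n_r)

# Synthetic example: 20 residents each rated by the same 5 faculty (1-9 scale).
rng = np.random.default_rng(0)
true_ability = rng.normal(6, 1, size=(20, 1))
ratings = np.clip(true_ability + rng.normal(0, 1, size=(20, 5)), 1, 9)
print(f"G coefficient: {g_coefficient(ratings):.2f}")
```

On this reading, the jump from roughly 0.6 to roughly 0.8 reported above means the person variance component grew relative to rating noise, so a resident's mean score became a more dependable signal.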


The Journal of Urology | 2001

Third year medical student attitudes toward learning urology.

Joel M.H. Teichman; Manoj Monga; John H. Littlefield

PURPOSE We studied the learning environments in which third year medical students perceive that they acquire urological knowledge and skills, and whether medical students interested in urology as a career have different perceived learning needs than those interested in other specialties. MATERIALS AND METHODS A survey instrument was pilot tested and revised. The instrument elicited student perceptions of how they best learned urological diagnosis and skills. Student attitudes toward the third year urology rotation and career motivation toward urology were assessed. Consecutive students were surveyed after completing the third year urology rotation. RESULTS Most students perceived that they learned to manage most urological problems by seeing patients in outpatient clinics and that they learned to perform physical examination and urinalysis interpretation by seeing patients. The overall usefulness of various learning environments was highest for seeing patients in clinic, followed by resident teaching, following inpatients, independent reading, watching open surgery, formal conferences, watching endoscopic surgery and routine menial work. Students interested in urology as a career choice were equally motivated by seeing patients in clinic, the subject matter and seeing surgery. CONCLUSIONS Third year medical students perceive that the most important urological learning environment is outpatient evaluation of patients. The urological learning needs of third year medical students do not differ between those who are interested in urology as a career and those who are not.


Medical Teacher | 2012

The design and utility of institutional teaching awards: a literature review.

Kathryn N. Huggett; Ruth B. Greenberg; Deepa Rao; Boyd F. Richards; Sheila W. Chauvin; Tracy B. Fulton; Summers Kalishman; John H. Littlefield; Linda Perkowski; Lynne Robins; Deborah Simpson

Background: Institutional teaching awards have been used widely in higher education since the 1970s. Nevertheless, a comprehensive review of the literature on such awards has not been published since 1997. Aim: We conducted a literature review to learn as much as possible about the design (e.g., formats, selection processes) and utility (e.g., impact on individuals and institutions) of teaching awards in order to provide information for use in designing, implementing, or evaluating award programs. Methods: We searched electronic databases for English-language publications on awards for exemplary teaching. Targeted publications included descriptions and/or investigations of award programs, their impact, and theoretical or conceptual models for award programs. Screening was conducted by dual review; a third reviewer was assigned for disagreements. Data were analyzed qualitatively. Results were summarized descriptively. Results: We identified 1302 publications for initial relevancy screening by title and abstract. We identified an additional 23 publications in a follow-up search. The full text of 126 publications was reviewed for further relevance. A total of 62 publications were identified as relevant, and of these 43 met our criteria for inclusion. Of the 43, 19 described the design features of 24 awards; 20 reports discussed award utility. Nomination and selection processes and benefits (e.g., plaques) varied, as did perceived impact on individuals and institutions. Conclusion: Limited evidence exists regarding the design and utility of teaching awards. Awards are perceived as having potential for positive impact, including promotions, but may also have unintended negative consequences. Future research should investigate the impact of awards on personal and professional development, and how promotion and tenure committees perceive awards.


Journal of Drug Education | 2003

Effectiveness of Addiction Science Presentations to Treatment Professionals, Using a Modified Solomon Study Design.

Carlton K. Erickson; Richard E. Wilcox; Gary W. Miller; John H. Littlefield; Kenneth A. Lawson

Objectives: Knowledge of addiction research findings is critical for healthcare professionals who treat addicted patients. However, there is little information available about the instructional effectiveness of lecture-slide presentations in changing knowledge vs. beliefs of such professionals. Design: A modified Solomon four-group experimental design was used to assess the instructional effectiveness (knowledge gain vs. belief changes) of three-hour addiction science workshops presented to healthcare professionals by neurobiologically trained academic researchers. Effectiveness of the workshops was assessed by a 28-item questionnaire on participant versus control group knowledge/beliefs on addiction. Six-month follow-up questionnaires measured “retention” of knowledge and belief changes. Results: The workshop participants showed significant knowledge gain and belief changes, whereas the two control groups showed no change in knowledge or beliefs. After six months, knowledge gains decreased, but were still higher than pre-test scores. In contrast, belief changes on three subscales persisted over six months in 40 to 52 percent of the subjects. Conclusions: These results illustrate a successful continuing education model by which academic researchers who are skilled teachers present a three-hour lecture-slide workshop with extensive question-and-answer sessions on addictions. We conclude that motivated healthcare professionals can experience important knowledge gains and belief changes by participating in such workshops. In contrast to the transient retention of knowledge, belief changes persisted surprisingly well for at least six months in about half the subjects. These results suggest that long-term changes in the professional orientation of these healthcare workers are possible.
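
The logic of a Solomon four-group design is that crossing "pretested vs. not" with "treated vs. not" lets the analyst separate the workshop effect from pretest sensitization. Below is a minimal Python sketch of that analysis on synthetic scores for the 28-item questionnaire mentioned above; the group sizes and effect sizes are invented, and this is not the paper's actual dataset or code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
n = 50  # hypothetical participants per cell

rows = []
for pretested in (0, 1):
    for treated in (0, 1):
        # Synthetic posttest scores on a 28-item scale: the workshop is
        # simulated to add ~4 points; no pretest sensitization is simulated.
        scores = rng.normal(14 + 4 * treated, 3, size=n)
        rows.append(pd.DataFrame(
            {"posttest": scores, "pretested": pretested, "treated": treated}))
data = pd.concat(rows, ignore_index=True)

# 2x2 factorial ANOVA on posttest scores: a significant main effect of
# 'treated' indicates the workshop worked, while a significant
# pretested:treated interaction would indicate pretest sensitization.
model = smf.ols("posttest ~ C(pretested) * C(treated)", data=data).fit()
print(anova_lm(model, typ=2))
```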


Academic Medicine | 2001

Quality assurance data for residents' global performance ratings.

John H. Littlefield; Judy L. Paukert; John Schoolfield

The Accreditation Council for Graduate Medical Education (ACGME) has defined six areas of competency that are expected of a new practitioner. Residency programs must require their residents to master competencies in the six areas, but little guidance exists about how to assess these competencies. To assist residency programs, the ACGME has developed the Toolbox of Assessment Methods. The Toolbox recommends global ratings as a potentially applicable assessment method for selected competencies in two areas (patient care and practice-based learning and improvement). Global performance ratings are the most widely used measure of residents’ performances. A literature review of global ratings in residency education concluded that they provide relatively reliable and consistent evaluations. When used in an OSCE, global ratings have been shown to be reliable and valid. Global ratings used over time in actual clinical settings, however, have been described as “... little more than a popularity contest ...” due to the unreliability of raters’ memories and the paucity of observation. Other problems with global performance ratings in actual clinics (GPRACs) are leniency error, which also contributes to the low numerical precision of individual ratings, and the lack of behaviorally specific written comments. Many criticisms of the GPRAC as a measurement method would be less serious if the GPRAC were viewed as a low-cost, noninvasive epidemiologic screening test for marginal performance, analogous to a blood pressure screening. Prescriptive screening aims to detect, in presumptively healthy individuals, disease that can be better managed when detected early. Continuing this screening analogy, a substandard (below-average) GPRAC for John Q Resident (i.e., a positive screening test) would indicate possible marginal performance. However, in order to diagnose and treat the problem, additional data must be obtained, such as by assigning John Q to a rotation with intensive observation by several attending faculty. If the GPRAC were viewed as a prescriptive screening test, then measurement problems such as the low numerical precision of a single observation and the paucity of observation would be less serious because additional diagnostic performance data would precede the initiation of definitive action. This paper proposes a method to generate quality assurance data for the GPRAC. The completed rating forms are only one of four parts (context surrounding the ratings, faculty ability and motivation to judge performance, completed rating forms, and the program’s willingness to take administrative action regarding a marginal resident) that comprise the GPRAC system. The four parts of a GPRAC system function interdependently as a unified whole. A GPRAC system can be improved by making changes to one or more of the system’s parts. Quality assurance addresses the question of how GPRAC quality can be ensured and managed. Internal quality monitoring provides quantitative data to assess how well a system is functioning. This study proposes an internal quality monitoring system for the GPRAC based on four prerequisite conditions that should exist in a residency program. First, GPRAC forms must be routinely completed and returned to the residency program director. Second, attending faculty must write candid appraisals when a resident’s performance is marginal. (A national survey of residency directors indicated that about 7% of residents have problems in one or more areas of competency.) Third, the program must be willing to act by diagnosing and treating a performance problem that is identified by a small number of GPRACs. Fourth, the program should have a systematic GPRAC database to support more severe administrative actions when educational diagnosis and treatment fail. Internal data for quality monitoring can be collected for each prerequisite condition described above, specifically:


Academic Medicine | 1992

An Instrument to Evaluate Alcohol-Abuse Interviewing and Intervention Skills.

J. P. Seale; Nancy Amodei; John H. Littlefield; E. Ortiz; M. Bedolla; C. H. Yuan

At the University of Texas Health Science Center at San Antonio from 1988 through 1990, the authors developed the Alcoholism Intervention Performance Evaluation (AIPE), a rating instrument for the evaluation of alcohol-abuse interviewing and intervention skills. Factor analysis of 51 rating items identified seven factors that accounted for most of the variability among the items; 35 were retained and assigned to the factor with which they correlated most highly, thus resulting in a seven-factor instrument with 35 items. The AIPE overall score had an interrater reliability of .73 (for four raters each rating approximately 30 videotaped simulated-patient interviews) and a test-retest reliability of .89 (for one rater rescoring 20 interviews after one month). The authors suggest that the individual scores for the seven factors can be used to provide instructional feedback to trainees and that the overall score can be used to certify interviewer proficiency.
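
As a rough sketch of the item-retention step described above, the Python below extracts seven factors and assigns each of 51 items to the factor on which it loads most heavily, then computes a test-retest Pearson correlation. The data are synthetic, and the 0.40 loading cutoff and simulated factor structure are assumptions, since the abstract does not report those details.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)

# Synthetic stand-in for the development data: 120 rated interviews x
# 51 candidate rating items, generated with a latent 7-factor structure.
latent = rng.normal(size=(120, 7))
weights = rng.normal(scale=0.8, size=(7, 51))
ratings = latent @ weights + rng.normal(scale=0.5, size=(120, 51))

# Extract seven factors and assign each item to the factor on which it
# loads most heavily, mirroring the item-assignment step in the abstract.
fa = FactorAnalysis(n_components=7, random_state=0).fit(ratings)
loadings = fa.components_.T               # shape: (51 items, 7 factors)
best_factor = np.abs(loadings).argmax(axis=1)
best_loading = np.abs(loadings).max(axis=1)

# Retain items whose strongest loading clears a cutoff (0.40 is a common
# convention; the paper does not state the threshold it used).
retained = np.flatnonzero(best_loading >= 0.40)
print(f"{retained.size} items retained across 7 factors")

# Test-retest reliability of the overall score: one rater rescoring the
# same 20 interviews after an interval, summarized as a Pearson r.
score_t1 = rng.normal(100, 10, size=20)
score_t2 = score_t1 + rng.normal(0, 4, size=20)
r, _ = pearsonr(score_t1, score_t2)
print(f"test-retest r = {r:.2f}")
```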


Substance Use & Misuse | 2004

Educating Treatment Professionals About Addiction Science Research: Demographics of Knowledge and Belief Changes

Kenneth A. Lawson; Richard E. Wilcox; John H. Littlefield; Keenan A. Pituch; Carlton K. Erickson

Communication of accurate, objective, and timely scientific information to treatment professionals is important, especially in the “drug abuse” and addiction field, where misinformation and a lack of exposure to new information are common. The purpose of this study was to assess knowledge and belief changes that accompanied educational workshops (3 or 6 hours long) on addiction science targeted to treatment professionals (N = 1403), given in the United States and Puerto Rico between July 2000 and August 2001. Each workshop covered three main concepts: (1) terms and definitions; (2) basic neurochemistry of addiction; and (3) how new neurobiological knowledge will affect the treatment of addictions in the future. Analysis of variance was used to compare mean pretest to posttest change scores among levels of four independent variables: gender, age, occupation/position, and race/ethnicity. Workshop participants achieved a significant improvement in knowledge about addiction, with younger groups achieving greater gains. Participants’ beliefs shifted in the desired direction. Significant differences in belief shifts occurred among occupational and gender groups, but not among race/ethnicity or age groups. There was also a consistent change in the policy belief subscale related to how strongly the audience members believed research on addiction was important. We conclude that addiction science education provided to treatment professionals can increase their knowledge and change their beliefs about the causes of addictions. In addition, the workshop participants form a base of constituents who are likely to support greater addiction research funding.
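
A minimal sketch of the kind of comparison described above: a one-way ANOVA on pretest-to-posttest change scores across the levels of one demographic variable. The group labels, sizes, and means below are invented for illustration, not the study's data.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(3)

# Hypothetical change scores (posttest minus pretest) for three age
# groups, with larger simulated gains for younger participants.
younger = rng.normal(5.0, 2.0, size=200)
middle = rng.normal(4.2, 2.0, size=200)
older = rng.normal(3.5, 2.0, size=200)

# One-way ANOVA comparing mean change scores across the group levels,
# run once per demographic variable in an analysis like the one above.
stat, p = f_oneway(younger, middle, older)
print(f"F = {stat:.2f}, p = {p:.3g}")
```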


Medical Education | 2009

Nursing staff assessment of residents' professionalism and communication skills.

Gary Sutkin; John H. Littlefield; Douglas W Laube

Context and setting Significantly positive or negative professionalism and behaviours related to interpersonal and communication skills (ICS) occur relatively infrequently in resident training. Meaningful in-training assessment can come from observation of resident behaviour in the actual work environment. Despite calls for 360-degree evaluation of residents by the Accreditation Council for Graduate Medical Education (ACGME), few programmes routinely use resident performance assessment (RPA) by nursing staff. Why the idea was necessary We believe nursing staff observe numerous resident–patient and resident–staff interactions and can identify residents who are performing seriously below or above standards. Our objective was to involve nurses in a collaborative effort to create and pilot-test a professionalism and ICS rating form. What was done A list of 17 global rating items was distributed to 185 registered nurses, licensed vocational nurses and nursing technical staff in four clinical sites (operating room, outpatient clinic, surgical and postpartum floor, and labour and delivery area). Forty-seven nursing staff (25%) returned the list, which was condensed to a 10-item RPA form with questions such as: ‘Does the resident listen to and consider what you have to say?’ and ‘Is the resident courteous to patients and their families?’ The final 10-item RPA form was distributed to all 185 nursing staff so they could evaluate all 12 obstetrics and gynaecology residents on two separate occasions (test–retest), 2 weeks apart. Evaluation of results and impact Individual residents were evaluated by a mean of 36 nurses on each occasion. Although return rates were only 30% and 25%, many nurses told us they appreciated being part of the assessment process. Pearson correlations, measuring test–retest reliability over a 2-week period, were very stable (0.65–0.85) for nine of the 10 items. The generalisability of individual resident mean scores across 28 nurses was moderate (r = 0.39). One-way ANOVA and Tukey HSD (honestly significant differences) testing performed on scores for 10 items from 55 forms identified Resident 3 as a low-score outlier on nine of 10 items and Resident 12 as a high-score outlier on eight of 10 items. The status of these two outliers was not surprising to the clinical faculty. Disturbingly, when asked ‘Do you believe that the nursing staff are appropriate evaluators of resident professionalism and ICS?’, 67% of residents answered that they did not. We received numerous written comments from the nurses, which proved to be informative for both the residents and ourselves. Examples included: ‘He is excellent, caring, honest, respectful;’ ‘At times he treats us disrespectful[ly]. Does not listen even when [the nurse is] proven right;’ ‘I find her sometimes to be brash with patients,’ and ‘He sometimes refers to the patients as “sweetie” or “honey”.’ Narrative comments can serve as an aid in the counselling of residents who might require corrective action or positive recognition. We were pleased that the nursing staff were willing to participate in the resident assessment process. It is vital that problems with professionalism and ICS are documented and addressed during training, and we believe that nursing staff can provide useful assessments in a variety of residency training settings.
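
To illustrate the outlier-flagging analysis described above, here is a minimal Python sketch using Tukey HSD on synthetic nurse ratings; the rating scale, group sizes, and simulated outliers (Residents 3 and 12, echoing the abstract) are assumptions, not the study's data.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(4)

# Synthetic nurse ratings (1-5 scale) for 12 residents on one RPA item;
# Resident 3 is simulated as a low outlier and Resident 12 as a high one.
rows = []
for resident in range(1, 13):
    mean = {3: 2.5, 12: 4.7}.get(resident, 3.8)
    for score in rng.normal(mean, 0.5, size=30):
        rows.append({"resident": f"R{resident:02d}",
                     "score": float(np.clip(score, 1, 5))})
data = pd.DataFrame(rows)

# Tukey HSD compares every pair of residents while controlling the
# family-wise error rate; one resident appearing in many rejected pairs
# flags that resident as an outlier, as in the analysis described above.
tukey = pairwise_tukeyhsd(endog=data["score"], groups=data["resident"],
                          alpha=0.05)
print(tukey.summary())
```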

Collaboration


John H. Littlefield's top co-authors and their affiliations:

Carlton K. Erickson, University of Texas at Austin
Richard E. Wilcox, University of Texas at Austin
Anne Cale Jones, University of Texas Health Science Center at San Antonio
C. Alex McMahan, University of Texas Health Science Center at San Antonio
Judy L. Paukert, Houston Methodist Hospital
Kenneth A. Lawson, University of Texas at Austin
R. Neal Pinckard, University of Texas Health Science Center at San Antonio
Thomas J. Prihoda, University of Texas Health Science Center at San Antonio