
Publication


Featured research published by Donna M. Waechter.


Academic Medicine | 2012

Achievement Goal Structures and Self-Regulated Learning: Relationships and Changes in Medical School

Anthony R. Artino; Ting Dong; Kent J. DeZee; William R. Gilliland; Donna M. Waechter; David F. Cruess; Steven J. Durning

Purpose: Practicing physicians have a societal obligation to maintain their competence. Unfortunately, the self-regulated learning skills likely required for lifelong learning are not explicitly addressed in most medical schools. The authors examined how medical students’ perceptions of the learning environment relate to their self-regulated learning behaviors. They also explored how students’ perceptions and behaviors correlate with performance and change across medical school.

Method: The authors collected survey data from 304 students at different phases of medical school training. The survey items assessed students’ perceptions of the learning environment, as well as their metacognition, procrastination, and avoidance-of-help-seeking behaviors. The authors operationalized achievement as cumulative medical school grade point average (GPA) and, for third- and fourth-year students, collected clerkship outcomes.

Results: Students’ perceptions of the learning environment were associated with their metacognition, procrastination, and help-avoidance behaviors. These behaviors were also related to academic outcomes. Specifically, avoidance of help seeking was negatively correlated with cumulative medical school GPA (r = −0.23, P < .01) as well as exam (r = −0.22, P < .05) and clinical performance (r = −0.34, P < .01) in the internal medicine clerkship; these help-avoidance behaviors were also positively correlated with students’ presentation at a grade adjudication committee (r = 0.20, P < .05). Additionally, students’ perceptions of the learning environment varied as a function of their phase of training.

Conclusions: Medical students’ perceptions of the learning environment are related, in predictable ways, to their use of self-regulated learning behaviors; these perceptions seem to change across medical school.
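
The correlations reported above (e.g., r = −0.23, P < .01) are ordinary Pearson correlations with two-tailed significance tests. As a minimal sketch of how such a figure is computed, here is a Python example using scipy.stats.pearsonr on synthetic data; the variable names, sample values, and effect size are assumptions for illustration, not the study’s data.

```python
# Illustrative only: synthetic data standing in for the study's survey scores.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Hypothetical per-student measures: avoidance of help seeking (Likert-style
# mean, 1-7) and cumulative GPA, generated with a mild negative relationship.
avoidance = rng.uniform(1, 7, size=304)          # 304 students, as in the study
gpa = 3.5 - 0.05 * avoidance + rng.normal(0, 0.3, size=304)

r, p = pearsonr(avoidance, gpa)  # Pearson r and two-tailed P value
print(f"r = {r:.2f}, P = {p:.3g}")
```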


Academic Medicine | 2005

The feasibility, reliability, and validity of a program director's (supervisor's) evaluation form for medical school graduates.

Steven J. Durning; Louis N. Pangaro; Linda L. Lawrence; Donna M. Waechter; John E. McManigle; Jeffrey L. Jackson

Purpose: To determine the feasibility, reliability, and validity of the supervisor’s evaluation form for first-year residents as an outcome measure for programmatic evaluation.

Method: Prospective feedback has been sought from supervisors for Uniformed Services University of the Health Sciences (USUHS) graduates during their internship year. Supervisors are sent yearly evaluation forms with up to three additional mailings. Using a six-point scale, supervisors rate residents on 18 items. The authors used evaluation data from 1993 to 2002. Feasibility was estimated by response rate. Internal consistency was assessed by calculating Cronbach’s alpha and analyzing scores on a year-to-year and interrater basis. Validity was determined by exploratory factor analysis with oblique rotations, by comparing ratings with end-of-medical school GPA and United States Medical Licensing Examination (USMLE) Step 1 and Step 2 scores (Pearson correlations), and by analyzing the range of scores, including the percentage of scores below an acceptable level.

Results: A total of 1,247 evaluations were collected for the 1,559 USUHS graduates (80%). Cronbach’s alpha was .96, with no significant difference in scores by supervisor specialty or year. Factor analysis found that the evaluation form collapsed into two domains accounting for 68% of the variance: professionalism and expertise. End-of-medical school GPA and USMLE Step 1 and 2 scores correlated with expertise but not with professionalism. Mean scores across items were 3.5–4.31, with a median of 4.0 for all items (SD .80–1.21). Four percent of graduates received less-than-satisfactory ratings.

Conclusions: This evaluation form has high feasibility and internal consistency. Factor analysis revealed two complementary domains supporting its validity. Correlation with end-of-medical school measurements and analysis of the range of scores support the form’s validity.
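
The internal-consistency figure above (Cronbach’s alpha of .96) is a direct function of the item and total-score variances on the 18-item form. Below is a minimal Python sketch of the computation on a hypothetical ratings matrix (rows are residents, columns are the 18 items); the data are simulated with a shared component so the items intercorrelate, and nothing here reflects the actual evaluations.

```python
# Illustrative only: simulated ratings stand in for real supervisor evaluations.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical matrix: 100 residents rated on 18 items, six-point scale.
# A shared "overall impression" component makes the items intercorrelate.
overall = rng.normal(0.0, 1.0, size=(100, 1))
ratings = np.clip(np.rint(3.5 + 1.2 * overall + rng.normal(0, 0.8, (100, 18))), 1, 6)

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item variance
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

print(f"alpha = {cronbach_alpha(ratings):.2f}")
```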


Military Medicine | 2010

Using Qualitative Data From a Program Director's Evaluation Form as an Outcome Measurement for Medical School

Steven J. Durning; Janice L. Hanson; William R. Gilliland; John M. McManigle; Donna M. Waechter; Louis N. Pangaro

Background: Medical education programs need outcome measurements to promote curriculum improvement and to help meet accreditation standards.

Purpose: To determine the added value of qualitative comments written by program directors (PDs) in response to a survey concerning first postgraduate year (PGY-1) graduates. We hypothesized that these comments would serve as an additional outcome measurement for our graduates, adding information not readily captured in numeric data.

Methods: PD evaluation form surveys from 1993 to 2002 were reviewed. All qualitative comments offered in response to free-text questions were coded and compared with numeric ratings.

Results: A total of 1,247 surveys were included (80% response rate). Comments about specific graduates were coded as positive, negative, or neutral and were categorized into themes. Inter-rater reliability was high (kappa = 0.82). Compared with 4% of graduates who received one or more numeric ratings of less than satisfactory, 7% had one or more qualitative phrases classified as negative.

Conclusions: Qualitative comments can serve as a useful outcome measurement.
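
The inter-rater reliability reported above (kappa = 0.82) is Cohen’s kappa, which corrects the raw agreement between two coders for the agreement expected by chance. A minimal sketch with scikit-learn, using invented codes for ten comments rather than the study’s actual coding:

```python
# Illustrative only: invented positive/negative/neutral codes from two raters.
from sklearn.metrics import cohen_kappa_score

rater_a = ["pos", "pos", "neg", "neu", "pos", "neg", "neg", "pos", "neu", "pos"]
rater_b = ["pos", "pos", "neg", "neu", "pos", "neg", "pos", "pos", "neu", "pos"]

# kappa = (observed agreement - chance agreement) / (1 - chance agreement)
print(f"kappa = {cohen_kappa_score(rater_a, rater_b):.2f}")
```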


Medical Education | 2012

Does self-reported clinical experience predict performance in medical school and internship?

Anthony R. Artino; William R. Gilliland; Donna M. Waechter; David F. Cruess; Margaret Calloway; Steven J. Durning

Medical Education 2012; 46: 172–178.


Academic Medicine | 1993

A preliminary study of the validity of scores and pass/fail standards for USMLE steps 1 and 2.

David B. Swanson; Susan M. Case; Donna M. Waechter; J. Jon Veloski; Carol Hasbrouck; Miriam Friedman; Jan D. Carline; Carol MacLaren

No abstract available.


Military Medicine | 2012

The Long-Term Career Outcome Study (LTCOS): What Have We Learned From 40 Years of Military Medical Education and Where Should We Go?

Steven J. Durning; Anthony R. Artino; Ting Dong; David F. Cruess; William R. Gilliland; Kent J. DeZee; Aaron Saguil; Donna M. Waechter; John E. McManigle

The work of the Long-Term Career Outcome Study (LTCOS) at the F. Edward Hébert School of Medicine, Uniformed Services University of the Health Sciences (USU), has been a multidisciplinary effort spanning more than 5 years. Borrowing from the established program evaluation and quality assurance literature, the LTCOS team has organized its evaluation and research efforts into three phases: before medical school, during medical school, and after medical school. The purpose of this commentary is to summarize the research articles presented in this special issue and to answer two fundamental questions: (1) what has been learned from LTCOS research conducted to date, and (2) where should the LTCOS team take its evaluation and research efforts in the future? Answers to these questions are relevant to USU, and they also can inform other medical education institutions and policy makers. What is more, answers to these questions will help to ensure USU meets its societal obligation to provide the highest quality health care to military members, their families, and society at large.


Medical Reference Services Quarterly | 2010

Institutional support for handheld computing: clinical and educational lessons learned.

Mark B. Stephens; Donna M. Waechter; Pamela M. Williams; Alan L. Williams; Kenneth S. Yew; Scott M. Strayer

Handheld computing devices, or personal digital assistants (PDAs), are often used in the health care setting. They provide a convenient way to store and carry either personal or reference information and can be used to accomplish other tasks associated with patient care. This article reports clinical and educational lessons learned from a longitudinal institutional initiative designed to provide medical students with PDAs to facilitate patient care and assist with clinical learning.


Academic Medicine | 2016

The Variables That Lead to Severe Action Decisions by the Liaison Committee on Medical Education.

Dan Hunt; Michael Migdal; Donna M. Waechter; Barbara Barzansky; Robert F. Sabalis

Purpose: To identify the variables associated with severe action decisions (SADs) (unspecified accreditation term, warning status, probation status) by the Liaison Committee on Medical Education (LCME) regarding the accreditation status of established MD-granting medical education programs in the United States and Canada.

Method: The authors reviewed all LCME decisions made on full survey reports between October 2004 and June 2012 to test whether SADs were associated with an insufficient response in the data collection instrument/self-study, chronic noncompliance with one or more accreditation standards, noncompliance with specific standards, and noncompliance with a large number of standards.

Results: The LCME issued 103 nonsevere action decisions and 40 SADs. SADs were significantly associated with an insufficient response in the data collection instrument/self-study (odds ratio [OR] = 7.30; 95% confidence interval [CI] = 2.38–22.46); chronic noncompliance with one or more standards (OR = 12.18; 95% CI = 1.91–77.55); noncompliance with standards related to the educational program for the MD degree (ED): ED-8 (OR = 6.73; 95% CI = 2.32–19.47) and ED-33 (OR = 5.40; 95% CI = 1.98–14.76); and noncompliance with a large number of standards (point-biserial r = 0.62; P < .001).

Conclusions: These findings provide insight into the LCME’s pattern of decision making. Noncompliance with two standards was strongly associated with SADs: lack of evidence of comparability across instructional sites (ED-8) and the absence of strong central management of the curriculum (ED-33). These results can help medical school staff as they prepare for an LCME full survey visit.
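
Each odds ratio and 95% confidence interval above summarizes a 2×2 table (SAD vs. non-SAD decision, crossed with noncompliant vs. compliant on a given standard). A minimal sketch of the textbook computation, using an invented table rather than the study’s counts, and the standard Wald interval on the log-odds scale (which may differ from the authors’ exact method):

```python
# Illustrative only: invented 2x2 counts, not the study's data.
import math

# Rows: noncompliant / compliant with a standard; columns: SAD / non-SAD.
a, b = 20, 15   # noncompliant programs: SADs, non-SADs
c, d = 20, 88   # compliant programs:    SADs, non-SADs

odds_ratio = (a * d) / (b * c)

# Wald 95% CI: exp(ln(OR) +/- 1.96 * sqrt(1/a + 1/b + 1/c + 1/d))
se = math.sqrt(1/a + 1/b + 1/c + 1/d)
low = math.exp(math.log(odds_ratio) - 1.96 * se)
high = math.exp(math.log(odds_ratio) + 1.96 * se)
print(f"OR = {odds_ratio:.2f}, 95% CI = {low:.2f}-{high:.2f}")
```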


Military Medicine | 2012

40 Years of Military Medical Education: An Overview of the Long-Term Career Outcome Study (LTCOS)

Steven J. Durning; Anthony R. Artino; Ting Dong; David F. Cruess; William R. Gilliland; Kent J. DeZee; Aaron Saguil; Donna M. Waechter; John E. McManigle

In 2005, the Long-Term Career Outcome Study (LTCOS) was established by the Dean of the F. Edward Hébert School of Medicine, Uniformed Services University of the Health Sciences (USU). The original charge to the LTCOS team was to establish an electronic database of current and past students at USU. Since its inception, however, the LTCOS team has broadened its mission and started collecting and analyzing data on a continuous basis for the purposes of program evaluation and, in some cases, research. The purpose of this commentary is to review the history of the LTCOS, including details about USU, a brief review of prior LTCOS work, and progress made since our last essay on LTCOS efforts. This commentary also provides an introduction to the special issue, which is arranged as a series of articles that span the medical education continuum (i.e., before, during, and after medical school). The relative balance of articles in each phase of training represents the LTCOS team’s efforts to address the entire continuum of medical education.


Clinical Pharmacology & Therapeutics | 2012

Broadening Our Understanding of Clinical Quality: From Attribution Error to Situated Cognition

Anthony R. Artino; Steven J. Durning; Donna M. Waechter; K L Leary; William R. Gilliland

The tendency to overestimate the influence of personal characteristics on outcomes, and to underestimate the influence of situational factors, is known as the fundamental attribution error. We argue that medical‐education researchers and policy makers may be guilty of this error in their quest to understand clinical quality. We suggest that to truly understand clinical quality, they must examine situational factors, which often have a strong influence on the quality of clinical encounters.

Collaboration


Donna M. Waechter’s top co-authors, all at the Uniformed Services University of the Health Sciences:

Steven J. Durning
William R. Gilliland
Anthony R. Artino
David F. Cruess
Ting Dong
Kent J. DeZee
John E. McManigle
Aaron Saguil
Louis N. Pangaro
Margaret Calloway