Publication


Featured research published by Mary C. Schuller.


Medical Education | 2011

Applying multimedia design principles enhances learning in medical education.

Nabil Issa; Mary C. Schuller; Susan Santacaterina; Michael B. Shapiro; Richard E. Mayer; Debra A. DaRosa

Medical Education 2011: 45: 818–826


Journal of Surgical Education | 2014

Reliability, Validity, and Feasibility of the Zwisch Scale for the Assessment of Intraoperative Performance

Brian C. George; Ezra N. Teitelbaum; Shari L. Meyerson; Mary C. Schuller; Debra A. DaRosa; Emil R. Petrusa; Lucia C. Petito; Jonathan P. Fryer

PURPOSE The existing methods for evaluating resident operative performance interrupt the workflow of the attending physician, are resource intensive, and are often completed well after the end of the procedure in question. These limitations lead to low faculty compliance and potentially significant recall bias. In this study, we deployed a smartphone-based system, the Procedural Autonomy and Supervision System, to facilitate assessment of resident performance according to the Zwisch scale with minimal workflow disruption. We aimed to demonstrate that this is a reliable, valid, and feasible method of measuring resident operative autonomy. METHODS Before implementation, general surgery residents and faculty underwent frame-of-reference training to the Zwisch scale. Immediately after any operation in which a resident participated, the system automatically sent a text message prompting the attending physician to rate the resident's level of operative autonomy according to the 4-level Zwisch scale. Of these procedures, 8 were videotaped and independently rated by 2 additional surgeons. The Zwisch ratings of the 3 raters were compared using an intraclass correlation coefficient. Videotaped procedures were also scored using 2 alternative operating room (OR) performance assessment instruments (Operative Performance Rating System and Ottawa Surgical Competency OR Evaluation), against which the item correlations were calculated. RESULTS Between December 2012 and June 2013, 27 faculty used the smartphone system to complete 1490 operative performance assessments on 31 residents. During this period, faculty completed evaluations for 92% of all operations performed with general surgery residents. The Zwisch scores were shown to correlate with postgraduate year (PGY) levels based on sequential pairwise chi-squared tests: PGY 1 vs PGY 2 (χ² = 106.9, df = 3, p < 0.001); PGY 2 vs PGY 3 (χ² = 22.2, df = 3, p < 0.001); and PGY 3 vs PGY 4 (χ² = 56.4, df = 3, p < 0.001). Comparison of PGY 4 to PGY 5 scores showed no significant difference (χ² = 4.5, df = 3, p = 0.21). For the 8 operations reviewed for interrater reliability, the intraclass correlation coefficient was 0.90 (95% CI: 0.72-0.98, p < 0.01). Correlation of Procedural Autonomy and Supervision System ratings with both Operative Performance Rating System items (each r > 0.90, all p < 0.01) and Ottawa Surgical Competency OR Evaluation items (each r > 0.86, all p < 0.01) was high. CONCLUSIONS The Zwisch scale can be used to make reliable and valid measurements of faculty guidance and resident autonomy. Our data also suggest that Zwisch ratings may be used to infer resident operative performance. Deployed on an automated smartphone-based system, it can be used to feasibly record evaluations for most operations performed by residents. This information can be used to counsel individual residents, modify programmatic curricula, and potentially inform national training guidelines.
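
The pairwise χ² comparisons of Zwisch rating distributions across adjacent PGY levels described above can be illustrated with standard contingency-table tests. The sketch below is a minimal example, not the authors' analysis code: the rating counts and the scipy.stats workflow are assumptions for illustration only.

```python
# Minimal sketch (hypothetical counts): pairwise chi-squared tests comparing the
# distribution of 4-level Zwisch ratings between adjacent PGY levels, analogous
# to the analysis described in the abstract. Not the authors' data or code.
import numpy as np
from scipy.stats import chi2_contingency

# Rows indexed by PGY level; columns are counts at each of the 4 Zwisch levels
# (ordered from least to most resident autonomy).
ratings_by_pgy = {
    1: np.array([120, 80, 30, 5]),
    2: np.array([70, 110, 60, 15]),
    3: np.array([40, 90, 100, 35]),
    4: np.array([20, 60, 120, 70]),
    5: np.array([15, 55, 115, 80]),
}

for lower, upper in [(1, 2), (2, 3), (3, 4), (4, 5)]:
    table = np.vstack([ratings_by_pgy[lower], ratings_by_pgy[upper]])  # 2 x 4 table
    chi2, p, dof, _ = chi2_contingency(table)  # df = (2 - 1) * (4 - 1) = 3
    print(f"PGY{lower} vs PGY{upper}: chi2 = {chi2:.1f}, df = {dof}, p = {p:.3g}")
```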


Medical Education | 2013

Teaching for understanding in medical classrooms using multimedia design principles

Nabil Issa; Richard E. Mayer; Mary C. Schuller; Michael B. Shapiro; Debra A. DaRosa

Objectives In line with a recent report entitled Effective Use of Educational Technology in Medical Education from the Association of American Medical Colleges Institute for Improving Medical Education (AAMC-IME), this study examined whether revising a medical lecture based on evidence-based principles of multimedia design would lead to improved long-term transfer and retention in Year 3 medical students. A previous study yielded positive effects on an immediate retention test, but did not investigate long-term effects.


Journal of Surgical Education | 2013

Duration of Faculty Training Needed to Ensure Reliable OR Performance Ratings

Brian C. George; Ezra N. Teitelbaum; Debra A. DaRosa; Eric S. Hungness; Shari L. Meyerson; Jonathan P. Fryer; Mary C. Schuller; Joseph B. Zwischenberger

OBJECTIVES The American Board of Surgery has mandated intraoperative assessment of general surgery residents, yet the time required to train faculty to accurately and reliably complete operating room performance evaluation forms is unknown. Outside of surgical education, frame-of-reference (FOR) training has been shown to be an effective training modality to teach raters the specific performance indicators associated with each point on a rating scale. Little is known, however, about what form and duration of FOR training are needed to accomplish reliable ratings among surgical faculty. DESIGN Two groups of surgical faculty separately underwent either an accelerated 1-hour (n = 10) or an immersive 4-hour (n = 34) FOR faculty development program. Both programs included a formal presentation and a facilitated discussion of sample behaviors for each point on the Zwisch operating room performance rating scale (see DaRosa et al.). The immersive group additionally participated in a small group exercise that included additional practice. After training, both groups were tested using 10 video clips of trainees at various levels. Responses were scored against expert consensus ratings. A 2-sided Mann-Whitney U test was used to compare between-group means. SETTING AND PARTICIPANTS All participants were faculty members in the Department of Surgery of a large midwestern private medical school. RESULTS Faculty undergoing the 1-hour FOR training program did not have a statistically different mean correct response rate on the video test when compared with those undergoing the 4-hour training program (88% vs 80%; p = 0.07). CONCLUSIONS One-hour FOR training sessions are likely sufficient to train surgical faculty to reliably use a simple evaluation instrument for the assessment of intraoperative performance. Additional research is needed to determine how these results generalize to different assessment instruments.
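
As a rough illustration of the group comparison reported above, the following sketch applies a two-sided Mann-Whitney U test to hypothetical per-rater video-test scores for a 1-hour group (n = 10) and a 4-hour group (n = 34); the score values are simulated for illustration and are not the study data.

```python
# Minimal sketch (hypothetical scores): two-sided Mann-Whitney U test comparing
# video-test correct-response rates between the 1-hour (n = 10) and 4-hour
# (n = 34) FOR training groups. Values are simulated, not the study data.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
# Proportion of the 10 video clips each rater scored in agreement with the
# expert consensus (simulated).
one_hour_scores = rng.choice([0.7, 0.8, 0.9, 1.0], size=10, p=[0.1, 0.2, 0.4, 0.3])
four_hour_scores = rng.choice([0.6, 0.7, 0.8, 0.9], size=34, p=[0.1, 0.3, 0.4, 0.2])

u_stat, p_value = mannwhitneyu(one_hour_scores, four_hour_scores,
                               alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.3f}")
```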


Journal of Surgical Education | 2016

The Feasibility of Real-Time Intraoperative Performance Assessment With SIMPL (System for Improving and Measuring Procedural Learning): Early Experience From a Multi-institutional Trial

Jordan D. Bohnen; Brian C. George; Reed G. Williams; Mary C. Schuller; Debra A. DaRosa; Laura Torbeck; John T. Mullen; Shari L. Meyerson; Edward D. Auyang; Jeffrey G. Chipman; Jennifer N. Choi; Michael A. Choti; Eric D. Endean; Eugene F. Foley; Samuel P. Mandell; Andreas H. Meier; Douglas S. Smink; Kyla P. Terhune; Paul E. Wise; Nathaniel J. Soper; Joseph B. Zwischenberger; Keith D. Lillemoe; Gary L. Dunnington; Jonathan P. Fryer

PURPOSE Intraoperative performance assessment of residents is of growing interest to trainees, faculty, and accreditors. Current approaches to collect such assessments are limited by low participation rates and long delays between procedure and evaluation. We deployed an innovative, smartphone-based tool, SIMPL (System for Improving and Measuring Procedural Learning), to make real-time intraoperative performance assessment feasible for every case in which surgical trainees participate, and hypothesized that SIMPL could be feasibly integrated into surgical training programs. METHODS Between September 1, 2015 and February 29, 2016, 15 U.S. general surgery residency programs were enrolled in an institutional review board-approved trial. SIMPL was made available after 70% of faculty and residents completed a 1-hour training session. Descriptive and univariate statistics analyzed multiple dimensions of feasibility, including training rates, volume of assessments, response rates/times, and dictation rates. The 20 most active residents and attendings were evaluated in greater detail. RESULTS A total of 90% of eligible users (1267/1412) completed training, and 13/15 programs began using SIMPL. In total, 6024 assessments were completed by 254 categorical general surgery residents (n = 3555 assessments) and 259 attendings (n = 2469 assessments), and 3762 unique operations were assessed. There was significant heterogeneity in participation within and between programs. The mean percentages (ranges) of users who completed ≥1, ≥5, and ≥20 assessments were 62% (21%-96%), 34% (5%-75%), and 10% (0%-32%) across all programs, and 96%, 75%, and 32% in the most active program. Overall, the response rate was 70%, the dictation rate was 24%, and the mean response time was 12 hours. Assessments increased from 357 (September 2015) to 1146 (February 2016). The 20 most active residents each received a mean of 46 assessments from 10 attendings for 20 different procedures. CONCLUSIONS SIMPL can be feasibly integrated into surgical training programs to enhance the frequency and timeliness of intraoperative performance assessment. We believe SIMPL could help facilitate a national competency-based surgical training system, although local and systemic challenges still need to be addressed.


Academic Medicine | 2015

Using just-in-time teaching and peer instruction in a residency program's core curriculum: Enhancing satisfaction, engagement, and retention

Mary C. Schuller; Debra A. DaRosa; Marie Crandall

Purpose To assess use of the combined just-in-time teaching (JiTT) and peer instruction (PI) instructional strategy in a residency program’s core curriculum. Method In 2010–2011, JiTT/PI was piloted in 31 core curriculum sessions taught by 22 faculty in the Northwestern University Feinberg School of Medicine’s general surgery residency program. JiTT/PI required preliminary and categorical residents (n = 31) to complete Web-based study questions before weekly specialty topic sessions. Responses were examined by faculty members “just in time” to tailor session content to residents’ learning needs. In the sessions, residents answered multiple-choice questions (MCQs) using clickers and engaged in PI. Participants completed surveys assessing their perceptions of JiTT/PI. Videos were coded to assess resident engagement time in JiTT/PI sessions versus prior lecture-based sessions. Responses to topic session MCQs repeated in review sessions were evaluated to study retention. Results More than 70% of resident survey respondents indicated that JiTT/PI aided in the learning of key points. At least 90% of faculty survey respondents reported positive perceptions of aspects of the JiTT/PI strategy. Resident engagement time for JiTT/PI sessions was significantly greater than for prior lecture-based sessions (z = –2.4, P = .016). Significantly more review session MCQ responses were correct for residents who had attended corresponding JiTT/PI sessions than for residents who had not (chi-square = 13.7; df = 1; P < .001). Conclusions JiTT/PI increased learner participation, learner retention, and the amount of learner-centered time. JiTT/PI represents an effective approach for meaningful and active learning in core curriculum sessions.
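
The retention comparison above (chi-square = 13.7, df = 1) is a test of independence on a 2 × 2 table of review-session MCQ responses by session attendance. A minimal sketch with hypothetical counts (not the study data) follows.

```python
# Minimal sketch (hypothetical counts): chi-squared test of independence between
# JiTT/PI session attendance and correct review-session MCQ responses, a 2 x 2
# table with df = 1. Counts are illustrative, not the study data.
import numpy as np
from scipy.stats import chi2_contingency

#                    correct  incorrect
table = np.array([[180,  60],    # responses from residents who attended the JiTT/PI session
                  [120, 100]])   # responses from residents who had not attended

chi2, p, dof, _ = chi2_contingency(table, correction=False)
print(f"chi-square = {chi2:.1f}, df = {dof}, p = {p:.3g}")
```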


Annals of Surgery | 2017

Readiness of US General Surgery Residents for Independent Practice

Brian C. George; Jordan D. Bohnen; Reed G. Williams; Shari L. Meyerson; Mary C. Schuller; Michael Clark; Andreas H. Meier; Laura Torbeck; Samuel P. Mandell; John T. Mullen; Douglas S. Smink; Rebecca E. Scully; Jeffrey G. Chipman; Edward D. Auyang; Kyla P. Terhune; Paul E. Wise; Jennifer N. Choi; Eugene F. Foley; Justin B. Dimick; Michael A. Choti; Nathaniel J. Soper; Keith D. Lillemoe; Joseph B. Zwischenberger; Gary L. Dunnington; Debra A. DaRosa; Jonathan P. Fryer

Objective: This study evaluates the current state of the General Surgery (GS) residency training model by investigating resident operative performance and autonomy. Background: The American Board of Surgery has designated 132 procedures as being “Core” to the practice of GS. GS residents are expected to be able to safely and independently perform those procedures by the time they graduate. There is growing concern that not all residents achieve that standard. Lack of operative autonomy may play a role. Methods: Attendings in 14 General Surgery programs were trained to use a) the 5-level System for Improving and Measuring Procedural Learning (SIMPL) Performance scale to assess resident readiness for independent practice and b) the 4-level Zwisch scale to assess the level of guidance (ie, autonomy) they provided to residents during specific procedures. Ratings were collected immediately after cases that involved a categorical GS resident. Data were analyzed using descriptive statistics and supplemented with Bayesian ordinal model-based estimation. Results: A total of 444 attending surgeons rated 536 categorical residents after 10,130 procedures. Performance: from the first to the last year of training, the proportion of Performance ratings for Core procedures (n = 6931) at “Practice Ready” or above increased from 12.3% to 77.1%. The predicted probability that a typical trainee would be rated as Competent after performing an average Core procedure on an average complexity patient during the last week of residency training is 90.5% (95% CI: 85.7%–94%). This falls to 84.6% for more complex patients and to less than 80% for more difficult Core procedures. Autonomy: for all procedures, the proportion of Zwisch ratings indicating meaningful autonomy (“Passive Help” or “Supervision Only”) increased from 15.1% to 65.7% from the first to the last year of training. For the Core procedures performed by residents in their final 6 months of training (cholecystectomy, inguinal/femoral hernia repair, appendectomy, ventral hernia repair, and partial colectomy), the proportion of Zwisch ratings (n = 357) indicating near-independence (“Supervision Only”) was 33.3%. Conclusions: US General Surgery residents are not universally ready to independently perform Core procedures by the time they complete residency training. Progressive resident autonomy is also limited. It is unknown if the amount of autonomy residents do achieve is sufficient to ensure readiness for the entire spectrum of independent practice.


Journal of The American College of Surgeons | 2013

Development and Participant Assessment of a Practical Quality Improvement Educational Initiative for Surgical Residents

Morgan M. Sellers; Kristi Hanson; Mary C. Schuller; Karen L. Sherman; Rachel R. Kelz; Jonathan P. Fryer; Debra A. DaRosa; Karl Y. Bilimoria

BACKGROUND As patient-safety and quality efforts spread throughout health care, the need for physician involvement is critical, yet structured training programs during surgical residency are still uncommon. Our objective was to develop an extended quality-improvement curriculum for surgical residents that included formal didactics and structured practical experience. METHODS Surgical trainees completed an 8-hour didactic program in quality-improvement methodology at the start of PGY3. Small teams developed practical quality-improvement projects based on needs identified during clinical experience. With the assistance of the hospital's process-improvement team and surgical faculty, residents worked through their selected projects during the following year. Residents were anonymously surveyed after their participation to assess the experience. RESULTS During the first 3 years of the program, 17 residents participated, with 100% survey completion. Seven quality-improvement projects were developed, with 57% completing all DMAIC (Define, Measure, Analyze, Improve, Control) phases. Initial projects involved issues of clinical efficiency, and later projects increasingly focused on clinical care questions. Residents found the experience educationally important (65%) and believed they were well equipped to lead similar initiatives in the future (70%). Based on feedback, the timeline was expanded from 12 to 24 months and changed to start in PGY2. CONCLUSIONS Developing an extended curriculum using both didactic sessions and applied projects to teach residents the theory and implementation of quality improvement is possible and effective. It addresses the ACGME competencies of practice-based learning and improvement and systems-based practice. Our iterative experience during the past 3 years can serve as a guide for other programs.


Journal of Surgical Education | 2017

Effect of Ongoing Assessment of Resident Operative Autonomy on the Operating Room Environment

Jonathan P. Fryer; Ezra N. Teitelbaum; Brian C. George; Mary C. Schuller; Shari L. Meyerson; Christina M. Theodorou; Joseph Kang; Amy Yang; Lihui Zhao; Debra A. DaRosa

OBJECTIVE We have previously demonstrated the feasibility and validity of a smartphone-based system called the Procedural Autonomy and Supervision System (PASS), which uses the Zwisch autonomy scale to facilitate assessment of the operative performances of surgical residents and promote progressive autonomy. To determine whether the use of PASS in a general surgery residency program is associated with any negative consequences, we tested the null hypothesis that PASS implementation at our institution would not negatively affect resident or faculty satisfaction in the operating room (OR) or increase mean OR times for cases performed together by residents and faculty. METHODS Mean OR times were obtained from the electronic medical record at Northwestern Memorial Hospital for the 20 procedures most commonly performed by faculty members with residents before and after PASS implementation. OR times were compared via two-sample t-test. The OR Educational Environment Measure tool was used to assess OR satisfaction among all clinically active general surgery residents (n = 31) and full-time general surgery faculty members (n = 27) before and after PASS implementation. Results were compared using the Mann-Whitney rank sum test. RESULTS A significant prolongation in mean OR time between the control and study periods was found for only 1 of the 20 operative procedures performed at least 20 times by participating faculty members with residents. Based on the overall survey score, no significant differences were found between resident and faculty responses to the OR Educational Environment Measure survey before and after PASS implementation. When individual survey items were compared, no differences were found in resident responses, but differences were noted in faculty responses for 7 of the 35 items; after Bonferroni correction, none of these differences remained significant. CONCLUSIONS Our data suggest that PASS does not increase mean OR times for the most commonly performed procedures. Resident OR satisfaction did not significantly change during PASS implementation, whereas some changes in faculty satisfaction were noted, suggesting that PASS implementation may have had some negative effect on faculty. Although the effect on faculty satisfaction clearly requires further investigation, our findings indicate that use of an autonomy-based OR performance assessment system such as PASS does not appear to have a major negative influence on either OR times or OR satisfaction.
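
The multiple-comparison step mentioned in the results (7 of 35 survey items nominally different, none significant after Bonferroni correction) can be sketched as follows; the p-values are invented for illustration and are not the study data.

```python
# Minimal sketch (hypothetical p-values): Bonferroni correction across 35 survey
# items compared before and after PASS implementation. With 35 tests the
# per-item threshold is 0.05 / 35 ≈ 0.0014, so items that are nominally
# significant at p < 0.05 can fail to remain significant after correction.
import numpy as np

raw_p = np.array([0.031, 0.042, 0.018, 0.047, 0.011, 0.035, 0.044] + [0.50] * 28)
n_tests = len(raw_p)  # 35 items

bonferroni_p = np.minimum(raw_p * n_tests, 1.0)  # adjusted p-values
print(f"nominally significant: {(raw_p < 0.05).sum()} of {n_tests}")
print(f"significant after Bonferroni: {(bonferroni_p < 0.05).sum()} of {n_tests}")
```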


Journal of Surgical Education | 2015

Beta Test of Web-Based Virtual Patient Decision-Making Exercises for Residents Demonstrates Discriminant Validity and Learning

Anne Close; Amy J. Goldberg; Irene B. Helenowski; Mary C. Schuller; Debra A. DaRosa; Jonathan P. Fryer

INTRODUCTION Correct clinical decision-making is a key competency of surgical trainees. The purpose of this study was to assess validity and effect on resident decision-making accuracy of web-based virtual patient case scenarios in general surgery training. MATERIAL AND METHODS During the 2013-2014 academic year, the use of web-based virtual patient scenarios for teaching and assessment of resident critical thinking and decision-making was assessed in 2 urban university-based residency programs. In all, 71 residents (PGY [postgraduate year] 1 = 21, PGY2 = 11, PGY3 = 14, PGY4 = 13, and PGY5 = 12) took the cases over the course of the academic year. Cases were made available to the residents online 1 week before a scheduled debriefing conference with a faculty facilitator and were completed by residents individually on their own schedule. At the completion of each case attempt, residents were given a computer-generated score and feedback. Residents were allowed to repeat the cases before the debriefing if they wished. Cases were required to be completed by 48 hours before the conference, at which time a faculty report was computer generated that measured group and individual performance and identified the frequency of errors in decision-making. This report was reviewed with the residents in the faculty debriefing, and teaching focused on the knowledge gaps identified in the reports. RESULTS The mean percentage of assigned cases completed by categorical residents was 85.7%. Mean score (maximum possible = 100) on the cases increased by resident year (PGY1 = 45.3, PGY2 = 49.3, PGY3 = 53.6, PGY4 = 57.5, and PGY5 = 61.8), a 25% increase between PGY1 and PGY5 (p < 0.001 by analysis of variance). In all, 45 (63%) residents chose to repeat at least 1 case before the debriefing. The number of repetitions of individuals on the same case varied from a minimum of 1 to a maximum of 5. On repeated cases, mean scores rose (attempt 1 = 22.6, attempt 2 = 69.3, attempt 3 = 72.1, attempt 4 = 77.5, attempt 5 = 100, p < 0.0001 by analysis of variance). Paired t-tests on case repetition using each resident as his or her own control showed that scores rose by 46 points between attempt 1 and attempt 2 (p < 0.001). CONCLUSIONS (1) In a beta test of web-based scenarios that teach and assess clinical decision-making, resident scores improved by 25% from PGY1 to PGY5 in a stepwise and statistically significant manner, suggesting that such exercises could serve as milestones for competency assessment. Additional studies are needed to acquire evidence for other forms of validity. (2) Repetition of cases after feedback led to highly significant increases in performance, suggesting that requiring repeated training to reach defined levels of competence is practical.
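
The analyses described above (one-way ANOVA of case scores across training years and paired t-tests on repeated case attempts) can be sketched with hypothetical scores as follows; the values are illustrative only and not the study data.

```python
# Minimal sketch (hypothetical scores): one-way ANOVA of virtual-patient case
# scores across PGY levels, and a paired t-test comparing each resident's first
# and second attempt on the same case. Values are illustrative, not study data.
from scipy.stats import f_oneway, ttest_rel

# Case scores grouped by training year (hypothetical).
scores_by_pgy = {
    1: [42, 48, 45, 44, 47],
    2: [47, 50, 49, 52, 48],
    3: [52, 55, 53, 56, 51],
    4: [56, 58, 59, 57, 55],
    5: [60, 63, 61, 62, 64],
}
f_stat, p_anova = f_oneway(*scores_by_pgy.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3g}")

# First- and second-attempt scores for residents who repeated a case
# (hypothetical), using each resident as his or her own control.
attempt_1 = [20, 25, 18, 30, 22, 15, 28, 24]
attempt_2 = [68, 72, 65, 75, 70, 60, 74, 71]
t_stat, p_paired = ttest_rel(attempt_2, attempt_1)
print(f"paired t = {t_stat:.2f}, p = {p_paired:.3g}")
```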

Collaboration


Dive into Mary C. Schuller's collaborations.

Top Co-Authors

Reed G. Williams

Southern Illinois University School of Medicine

Andreas H. Meier

State University of New York Upstate Medical University

Kyla P. Terhune

Vanderbilt University Medical Center
