Network


Latest external collaborations at the country level.

Hotspot


Research topics in which Brian C. George is active.

Publication


Featured research published by Brian C. George.


Journal of Surgical Education | 2014

Reliability, Validity, and Feasibility of the Zwisch Scale for the Assessment of Intraoperative Performance

Brian C. George; Ezra N. Teitelbaum; Shari L. Meyerson; Mary C. Schuller; Debra A. DaRosa; Emil R. Petrusa; Lucia C. Petito; Jonathan P. Fryer

PURPOSE The existing methods for evaluating resident operative performance interrupt the workflow of the attending physician, are resource intensive, and are often completed well after the end of the procedure in question. These limitations lead to low faculty compliance and potentially significant recall bias. In this study, we deployed a smartphone-based system, the Procedural Autonomy and Supervision System, to facilitate assessment of resident performance according to the Zwisch scale with minimal workflow disruption. We aimed to demonstrate that this is a reliable, valid, and feasible method of measuring resident operative autonomy. METHODS Before implementation, general surgery residents and faculty underwent frame-of-reference training to the Zwisch scale. Immediately after any operation in which a resident participated, the system automatically sent a text message prompting the attending physician to rate the resident's level of operative autonomy according to the 4-level Zwisch scale. Of these operations, 8 were videotaped and independently rated by 2 additional surgeons. The Zwisch ratings of the 3 raters were compared using an intraclass correlation coefficient. Videotaped procedures were also scored using 2 alternative operating room (OR) performance assessment instruments (Operative Performance Rating System and Ottawa Surgical Competency OR Evaluation), against which the item correlations were calculated. RESULTS Between December 2012 and June 2013, 27 faculty used the smartphone system to complete 1490 operative performance assessments on 31 residents. During this period, faculty completed evaluations for 92% of all operations performed with general surgery residents. The Zwisch scores were shown to correlate with postgraduate year (PGY) levels based on sequential pairwise chi-squared tests: PGY 1 vs PGY 2 (χ² = 106.9, df = 3, p < 0.001); PGY 2 vs PGY 3 (χ² = 22.2, df = 3, p < 0.001); and PGY 3 vs PGY 4 (χ² = 56.4, df = 3, p < 0.001). PGY 4 and PGY 5 scores were not significantly different (χ² = 4.5, df = 3, p = 0.21). For the 8 operations reviewed for interrater reliability, the intraclass correlation coefficient was 0.90 (95% CI: 0.72-0.98, p < 0.01). Correlation of Procedural Autonomy and Supervision System ratings with both Operative Performance Rating System items (each r > 0.90, all ps < 0.01) and Ottawa Surgical Competency OR Evaluation items (each r > 0.86, all ps < 0.01) was high. CONCLUSIONS The Zwisch scale can be used to make reliable and valid measurements of faculty guidance and resident autonomy. Our data also suggest that Zwisch ratings may be used to infer resident operative performance. Deployed on an automated smartphone-based system, it can be used to feasibly record evaluations for most operations performed by residents. This information can be used to counsel individual residents, modify programmatic curricula, and potentially inform national training guidelines.
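
The analysis described above combines pairwise chi-squared tests across PGY levels with an intraclass correlation coefficient for the videotaped cases. As a rough, purely illustrative sketch (the study's rating data are not reproduced here, so the counts and ratings below are invented), such statistics could be computed with scipy and pingouin:

```python
# Illustrative sketch only: a pairwise chi-squared test across two PGY levels and
# an intraclass correlation coefficient (ICC), in the spirit of the analyses the
# abstract describes. All counts and ratings below are made up, not study data.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
import pingouin as pg

# Hypothetical 4-level Zwisch rating counts for two adjacent PGY levels;
# rows = PGY level, columns = Zwisch level.
counts_pgy1 = [40, 30, 20, 10]
counts_pgy2 = [20, 30, 30, 20]
chi2, p, df, _ = chi2_contingency([counts_pgy1, counts_pgy2])
print(f"PGY1 vs PGY2: chi2 = {chi2:.1f}, df = {df}, p = {p:.4f}")

# Hypothetical ICC for 8 videotaped cases, each scored by 3 raters.
ratings = pd.DataFrame({
    "case":  np.repeat(range(8), 3),
    "rater": list("ABC") * 8,
    "score": np.random.default_rng(0).integers(1, 5, size=24),
})
icc = pg.intraclass_corr(data=ratings, targets="case", raters="rater", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])
```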


Journal of Surgical Education | 2013

Duration of Faculty Training Needed to Ensure Reliable OR Performance Ratings

Brian C. George; Ezra N. Teitelbaum; Debra A. DaRosa; Eric S. Hungness; Shari L. Meyerson; Jonathan P. Fryer; Mary C. Schuller; Joseph B. Zwischenberger

OBJECTIVES The American Board of Surgery has mandated intraoperative assessment of general surgery residents, yet the time required to train faculty to accurately and reliably complete operating room performance evaluation forms is unknown. Outside of surgical education, frame-of-reference (FOR) training has been shown to be an effective training modality to teach raters the specific performance indicators associated with each point on a rating scale. Little is known, however, about what form and duration of FOR training are needed to accomplish reliable ratings among surgical faculty. DESIGN Two groups of surgical faculty separately underwent either an accelerated 1-hour (n = 10) or an immersive 4-hour (n = 34) FOR faculty development program. Both programs included a formal presentation and a facilitated discussion of sample behaviors for each point on the Zwisch operating room performance rating scale (see DaRosa et al.(8)). The immersive group additionally participated in a small-group exercise that included additional practice. After training, both groups were tested using 10 video clips of trainees at various levels. Responses were scored against expert consensus ratings. A 2-sided Mann-Whitney U test was used to compare between-group means. SETTING AND PARTICIPANTS All participants were faculty members in the Department of Surgery of a large midwestern private medical school. RESULTS Faculty undergoing the 1-hour FOR training program did not have a statistically different mean correct response rate on the video test when compared with those undergoing the 4-hour training program (88% vs 80%; p = 0.07). CONCLUSIONS One-hour FOR training sessions are likely sufficient to train surgical faculty to reliably use a simple evaluation instrument for the assessment of intraoperative performance. Additional research is needed to determine how these results generalize to different assessment instruments.
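
The between-group comparison described above is a standard two-sided Mann-Whitney U test applied to per-rater scores. A minimal sketch, assuming invented per-rater correct-response rates rather than the study's data:

```python
# Illustrative sketch: comparing per-rater correct-response rates between a
# 1-hour and a 4-hour frame-of-reference training group with a two-sided
# Mann-Whitney U test. The scores below are fabricated for demonstration.
from scipy.stats import mannwhitneyu

# Hypothetical proportion of the 10 video clips each faculty member rated
# in agreement with the expert consensus.
one_hour_scores  = [0.9, 0.8, 1.0, 0.9, 0.8, 0.9, 0.9, 0.8, 1.0, 0.8]   # n = 10
four_hour_scores = [0.8, 0.7, 0.9, 0.8, 0.8, 0.7, 0.9, 0.8, 0.7, 0.8]   # subset of n = 34

u_stat, p_value = mannwhitneyu(one_hour_scores, four_hour_scores, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.3f}")
```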


Journal of Surgical Education | 2016

The Feasibility of Real-Time Intraoperative Performance Assessment With SIMPL (System for Improving and Measuring Procedural Learning): Early Experience From a Multi-institutional Trial

Jordan D. Bohnen; Brian C. George; Reed G. Williams; Mary C. Schuller; Debra A. DaRosa; Laura Torbeck; John T. Mullen; Shari L. Meyerson; Edward D. Auyang; Jeffrey G. Chipman; Jennifer N. Choi; Michael A. Choti; Eric D. Endean; Eugene F. Foley; Samuel P. Mandell; Andreas H. Meier; Douglas S. Smink; Kyla P. Terhune; Paul E. Wise; Nathaniel J. Soper; Joseph B. Zwischenberger; Keith D. Lillemoe; Gary L. Dunnington; Jonathan P. Fryer

PURPOSE Intraoperative performance assessment of residents is of growing interest to trainees, faculty, and accreditors. Current approaches to collect such assessments are limited by low participation rates and long delays between procedure and evaluation. We deployed an innovative, smartphone-based tool, SIMPL (System for Improving and Measuring Procedural Learning), to make real-time intraoperative performance assessment feasible for every case in which surgical trainees participate, and hypothesized that SIMPL could be feasibly integrated into surgical training programs. METHODS Between September 1, 2015 and February 29, 2016, 15 U.S. general surgery residency programs were enrolled in an institutional review board-approved trial. SIMPL was made available after 70% of faculty and residents completed a 1-hour training session. Descriptive and univariate statistics were used to analyze multiple dimensions of feasibility, including training rates, volume of assessments, response rates/times, and dictation rates. The 20 most active residents and attendings were evaluated in greater detail. RESULTS A total of 90% of eligible users (1267/1412) completed training. Further, 13/15 programs began using SIMPL. In total, 6024 assessments were completed by 254 categorical general surgery residents (n = 3555 assessments) and 259 attendings (n = 2469 assessments), and 3762 unique operations were assessed. There was significant heterogeneity in participation within and between programs. The mean percentages (ranges) of users who completed ≥1, ≥5, and ≥20 assessments were 62% (21%-96%), 34% (5%-75%), and 10% (0%-32%) across all programs, and 96%, 75%, and 32% in the most active program. Overall, the response rate was 70%, the dictation rate was 24%, and the mean response time was 12 hours. Assessments increased from 357 (September 2015) to 1146 (February 2016). The 20 most active residents each received a mean of 46 assessments from 10 attendings across 20 different procedures. CONCLUSIONS SIMPL can be feasibly integrated into surgical training programs to enhance the frequency and timeliness of intraoperative performance assessment. We believe SIMPL could help facilitate a national competency-based surgical training system, although local and systemic challenges still need to be addressed.
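
The feasibility metrics reported above (share of users reaching assessment thresholds, overall response rate) are simple descriptive summaries. A hypothetical sketch of how such per-program summaries might be tabulated with pandas, using fabricated records rather than study data:

```python
# Illustrative sketch: the kinds of per-program feasibility summaries the
# abstract describes (share of users with >=1, >=5, >=20 completed assessments,
# and the overall response rate). The records below are fabricated examples.
import pandas as pd

assessments = pd.DataFrame({
    "program":   ["A", "A", "A", "B", "B", "C"],
    "user_id":   [1, 1, 2, 3, 3, 4],
    "completed": [True, True, False, True, True, True],
})

# Completed assessments per user, then the share of users at or above each threshold.
per_user = assessments.groupby(["program", "user_id"])["completed"].sum()
for threshold in (1, 5, 20):
    share = (per_user >= threshold).groupby("program").mean()
    print(f">= {threshold} assessments:\n{share}\n")

# Overall response rate: completed assessments / all assessment prompts.
print("response rate:", assessments["completed"].mean())
```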


Journal of Surgical Education | 2012

Does resident ranking during recruitment accurately predict subsequent performance as a surgical resident?

Jonathan P. Fryer; Noreen Corcoran; Brian C. George; Ed Wang; Debra A. DaRosa

BACKGROUND While the primary goal of ranking applicants for surgical residency training positions is to identify the candidates who will subsequently perform best as surgical residents, the effectiveness of the ranking process has not been adequately studied. METHODS We evaluated our general surgery resident recruitment process between 2001 and 2011 inclusive, to determine if our recruitment ranking parameters effectively predicted subsequent resident performance. We identified 3 candidate ranking parameters (United States Medical Licensing Examination [USMLE] Step 1 score, unadjusted ranking score [URS], and final adjusted ranking [FAR]), and 4 resident performance parameters (American Board of Surgery In-Training Examination [ABSITE] score, PGY1 resident evaluation grade [REG], overall REG, and independent faculty rating ranking [IFRR]), and assessed whether the former were predictive of the latter. Analyses utilized the Spearman correlation coefficient. RESULTS We found that the URS, which is based on objective and criterion-based parameters, was a better predictor of subsequent performance than the FAR, which is a modification of the URS based on subsequent determinations of the resident selection committee. USMLE score was a reliable predictor of ABSITE scores only. However, when we compared our worst resident performances with the performances of the other residents in this evaluation, the data did not produce convincing evidence that poor resident performances could be reliably predicted by any of the recruitment ranking parameters. Finally, stratifying candidates based on their rank range did not effectively define a ranking cut-off beyond which resident performance would drop off. CONCLUSIONS Based on these findings, we suggest that surgery programs may be better served by utilizing a more structured resident ranking process and that subsequent adjustments to the rank list generated by this process should be undertaken with caution.
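
The prediction analysis above rests on the Spearman rank correlation between each ranking parameter and each performance parameter. A minimal illustrative sketch with invented paired values (not the study's data):

```python
# Illustrative sketch: Spearman rank correlation between a recruitment ranking
# parameter and a later performance parameter, as in the analysis described above.
from scipy.stats import spearmanr

usmle_step1   = [235, 250, 228, 242, 260, 231, 248, 255]   # hypothetical ranking parameter
absite_scores = [60, 75, 45, 70, 85, 50, 72, 80]            # hypothetical performance parameter

rho, p_value = spearmanr(usmle_step1, absite_scores)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```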


Annals of Surgery | 2017

Readiness of US General Surgery Residents for Independent Practice

Brian C. George; Jordan D. Bohnen; Reed G. Williams; Shari L. Meyerson; Mary C. Schuller; Michael Clark; Andreas H. Meier; Laura Torbeck; Samuel P. Mandell; John T. Mullen; Douglas S. Smink; Rebecca E. Scully; Jeffrey G. Chipman; Edward D. Auyang; Kyla P. Terhune; Paul E. Wise; Jennifer N. Choi; Eugene F. Foley; Justin B. Dimick; Michael A. Choti; Nathaniel J. Soper; Keith D. Lillemoe; Joseph B. Zwischenberger; Gary L. Dunnington; Debra A. DaRosa; Jonathan P. Fryer

Objective: This study evaluates the current state of the General Surgery (GS) residency training model by investigating resident operative performance and autonomy. Background: The American Board of Surgery has designated 132 procedures as being “Core” to the practice of GS. GS residents are expected to be able to safely and independently perform those procedures by the time they graduate. There is growing concern that not all residents achieve that standard. Lack of operative autonomy may play a role. Methods: Attendings in 14 General Surgery programs were trained to use (a) the 5-level System for Improving and Measuring Procedural Learning (SIMPL) Performance scale to assess resident readiness for independent practice and (b) the 4-level Zwisch scale to assess the level of guidance (i.e., autonomy) they provided to residents during specific procedures. Ratings were collected immediately after cases that involved a categorical GS resident. Data were analyzed using descriptive statistics and supplemented with Bayesian ordinal model-based estimation. Results: A total of 444 attending surgeons rated 536 categorical residents after 10,130 procedures. Performance: from the first to the last year of training, the proportion of Performance ratings for Core procedures (n = 6931) at “Practice Ready” or above increased from 12.3% to 77.1%. The predicted probability that a typical trainee would be rated as Competent after performing an average Core procedure on an average-complexity patient during the last week of residency training is 90.5% (95% CI: 85.7%–94%). This falls to 84.6% for more complex patients and to less than 80% for more difficult Core procedures. Autonomy: for all procedures, the proportion of Zwisch ratings indicating meaningful autonomy (“Passive Help” or “Supervision Only”) increased from 15.1% to 65.7% from the first to the last year of training. For the Core procedures performed by residents in their final 6 months of training (cholecystectomy, inguinal/femoral hernia repair, appendectomy, ventral hernia repair, and partial colectomy), the proportion of Zwisch ratings (n = 357) indicating near-independence (“Supervision Only”) was 33.3%. Conclusions: US General Surgery residents are not universally ready to independently perform Core procedures by the time they complete residency training. Progressive resident autonomy is also limited. It is unknown if the amount of autonomy residents do achieve is sufficient to ensure readiness for the entire spectrum of independent practice.
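
The predicted probabilities reported above come from Bayesian ordinal model-based estimation, which is not reproduced here. As a loose frequentist stand-in, the sketch below fits a proportional-odds (ordered logit) model with statsmodels on fabricated ratings and training years, then predicts the probability of a top-level rating for a final-year trainee; all labels, data, and effect sizes are assumptions:

```python
# Illustrative sketch only: a proportional-odds (ordered logit) model of an
# ordinal performance rating on training year, used here as a stand-in for the
# paper's Bayesian ordinal estimation. All data and labels are fabricated.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)
n = 500
pgy = rng.integers(1, 6, size=n)                        # training year, 1-5
latent = 0.8 * pgy + rng.logistic(size=n)               # higher year -> higher rating
rating = pd.Series(pd.cut(latent, bins=5,
                          labels=[f"level_{i}" for i in range(1, 6)]))

model = OrderedModel(rating, pd.DataFrame({"pgy": pgy}), distr="logit")
result = model.fit(method="bfgs", disp=False)

# Predicted probability that a final-year (PGY 5) trainee is rated at one of
# the top two levels (a placeholder analogue of "Practice Ready" or above).
probs = np.asarray(result.predict(pd.DataFrame({"pgy": [5]})))[0]
print("P(top two levels | PGY 5):", probs[-2:].sum().round(3))
```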


Plastic and Reconstructive Surgery | 2016

Uniting Evidence-based Evaluation with the ACGME Plastic Surgery Milestones: A Simple and Reliable Assessment of Resident Operative Performance.

Edward M. Kobraei; Jordan D. Bohnen; Brian C. George; John T. Mullen; Keith D. Lillemoe; William G. Austen; Eric Liao

Background: Milestones evaluations in plastic surgery reflect a shift toward competency-based training but have created a number of challenges. The authors have incorporated the smartphone-based evaluation tool System for Improving and Measuring Procedural Learning (SIMPL), which was recently developed by a multi-institutional research collaborative. In this pilot study, the authors hypothesize that SIMPL can improve resident evaluation and also collect granular performance data to simplify compliance with the plastic surgery Milestones. Methods: SIMPL was prospectively piloted with a plastic surgery resident and a faculty surgeon at Massachusetts General Hospital in this institutional review board–approved study. The study period was a 2-month interval corresponding to the resident’s rotation. Results: The resident-faculty combination performed 20 cases together. All cases were evaluated with SIMPL. SIMPL evaluations uniformly took under 1 minute to submit. The average time from surgery completion to completed evaluation was 5 hours (range: <0.5-12 hours). Concrete, objective, and specific data about resident performance were collected for every case and presented in a concise format. Conclusions: SIMPL is an innovative, evidence-based evaluation system that makes performance assessment feasible for every procedure in which a plastic surgery resident participates. SIMPL’s competency-based framework can be easily scaled to facilitate data collection and reporting of mandatory Milestones evaluations at the program and national levels. This technology will support a shared vocabulary between residents and faculty to enhance intraoperative education.


Journal of Surgical Education | 2017

Effect of Ongoing Assessment of Resident Operative Autonomy on the Operating Room Environment

Jonathan P. Fryer; Ezra N. Teitelbaum; Brian C. George; Mary C. Schuller; Shari L. Meyerson; Christina M. Theodorou; Joseph Kang; Amy Yang; Lihui Zhao; Debra A. DaRosa

OBJECTIVE We have previously demonstrated the feasibility and validity of a smartphone-based system called the Procedural Autonomy and Supervision System (PASS), which uses the Zwisch autonomy scale to facilitate assessment of the operative performances of surgical residents and promote progressive autonomy. To determine whether the use of PASS in a general surgery residency program is associated with any negative consequences, we tested the null hypothesis that PASS implementation at our institution would not negatively affect resident or faculty satisfaction in the operating room (OR) or increase mean OR times for cases performed together by residents and faculty. METHODS Mean OR times were obtained from the electronic medical record at Northwestern Memorial Hospital for the 20 procedures most commonly performed by faculty members with residents before and after PASS implementation. OR times were compared via a two-sample t-test. The OR Educational Environment Measure tool was used to assess OR satisfaction among all clinically active general surgery residents (n = 31) and full-time general surgery faculty members (n = 27) before and after PASS implementation. Results were compared using the Mann-Whitney rank sum test. RESULTS A significant prolongation in mean OR time between the control and study periods was found for only 1 of the 20 operative procedures performed at least 20 times by participating faculty members with residents. Based on the overall survey score, no significant differences were found between resident and faculty responses to the OR Educational Environment Measure survey before and after PASS implementation. When individual survey items were compared, no differences were found in resident responses, whereas differences were noted in faculty responses for 7 of the 35 items, although after Bonferroni correction none of these differences remained significant. CONCLUSIONS Our data suggest that PASS does not increase mean OR times for the most commonly performed procedures. Resident OR satisfaction did not significantly change during PASS implementation, whereas some changes in faculty satisfaction were noted, suggesting that PASS implementation may have had some negative effect on faculty. Although the effect on faculty satisfaction clearly requires further investigation, our findings support that use of an autonomy-based OR performance assessment system such as PASS does not appear to have a major negative influence on OR times or OR satisfaction.
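
The OR-time comparison above is a two-sample t-test, and the item-level survey comparisons are corrected with Bonferroni's method. A brief sketch with invented numbers (not the study's measurements):

```python
# Illustrative sketch: a two-sample t-test on mean OR times before vs after
# implementation, plus a Bonferroni correction over 35 survey-item comparisons,
# mirroring the analyses described above. All values are fabricated.
from scipy.stats import ttest_ind

or_times_before = [92, 105, 88, 110, 97, 101, 95, 99]   # hypothetical case durations, minutes
or_times_after  = [96, 108, 90, 115, 99, 104, 98, 102]

t_stat, p_value = ttest_ind(or_times_before, or_times_after)
print(f"OR time: t = {t_stat:.2f}, p = {p_value:.3f}")

# Bonferroni correction for 35 item-level comparisons: each raw p-value must
# fall below alpha / 35 to remain significant.
alpha, n_items = 0.05, 35
raw_p = 0.03                                             # hypothetical item-level p-value
print("significant after Bonferroni:", raw_p < alpha / n_items)
```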


Journal of Surgical Education | 2018

Identifying and Addressing High Priority Issues in General Surgery Training and Education

Jonathan P. Fryer; Mary C. Schuller; Greg Wnuk; Shari L. Meyerson; Joseph B. Zwischenberger; Andreas H. Meier; Reed G. Williams; Brian C. George

BACKGROUND Complex problems are often easier to address when multiple entities collaborate. The Procedural Learning and Safety Collaborative (PLSC) was established to address complex problems in general surgery residency training by collectively engaging multiple residency programs in addressing progressive research questions. STUDY DESIGN Recently, PLSC members held a national symposium that included leadership from several leading surgical societies to reach consensus on the most critical issues in general surgery education. RESULTS This paper describes the process used and the end result of that process.


Annals of Surgery | 2017

Trainee Autonomy and Patient Safety

Brian C. George; Gary L. Dunnington; Debra A. DaRosa

The ultimate goal of surgical residency must be to train surgeons who are safe and independent. There is accumulating evidence that many graduating residents have not achieved that goal. Although the precise reasons for these findings remain unknown, elements such as the 80-hour workweek, increased faculty-to-resident ratios, and reduced experience with open operations have all been hypothesized to play a role. Recently, the erosion of resident autonomy has received increased attention. Patient expectations and the health care system have both undergone dramatic change over the past 30 years, change that has put pressure on the traditional Halstedian model of progressive resident independence. For example, graduated independence has been diminished by changes designed to enhance patient safety, including limits on concurrent surgery and rules mandating specific supervisory behaviors by attendings. These rules are often made because patients, administrators, and payers assume that patient outcomes are universally improved with less resident autonomy, despite the lack of any evidence that appropriately supervised residents are unsafe. The impact of these changes is compounded by growing clinical productivity expectations for academic surgeons. If a supervising surgeon must increase their productivity but is not permitted to delegate responsibility, the expected outcome is less resident autonomy. As a result, senior residents today usually perform the critical portion of procedures with the attending scrubbed and guiding the operation. The potential long-term consequences of these changes are troubling. Patient safety is a genuinely vital concern of the present, but new policies must also account for the role that graduated responsibility plays in achieving that same goal in the future. A myopic pursuit of safety at the expense of learning threatens to produce future surgeons who are less competent. While the short-term priorities of our current health care system are important, if they undermine the training process, then these priorities are unsustainable. To realize these goals over the long term, we must critically assess the amount and impact of progressive autonomy granted to residents.


Journal of Surgical Education | 2013

A theory-based model for teaching and assessing residents in the operating room

Debra A. DaRosa; Joseph B. Zwischenberger; Shari L. Meyerson; Brian C. George; Ezra N. Teitelbaum; Nathaniel J. Soper; Jonathan P. Fryer

Collaboration


Dive into Brian C. George's collaborations.

Top Co-Authors

Reed G. Williams

Southern Illinois University School of Medicine


Andreas H. Meier

State University of New York Upstate Medical University
