Peter Szasz
University of Toronto
Publication
Featured research published by Peter Szasz.
Annals of Surgery | 2015
Peter Szasz; Marisa Louridas; Kenneth A. Harris; Rajesh Aggarwal; Teodor P. Grantcharov
OBJECTIVE To systematically examine the literature describing the methods by which technical competence is assessed in surgical trainees. BACKGROUND The last decade has witnessed an evolution away from time-based surgical education. In response, governing bodies worldwide have implemented competency-based education paradigms. The definition of competence, however, remains elusive, and the impact of these education initiatives in terms of assessment methods remains unclear. METHODS A systematic review examining the methods by which technical competence is assessed was conducted by searching MEDLINE, EMBASE, PsycINFO, and the Cochrane database of systematic reviews. Abstracts of retrieved studies were reviewed and those meeting inclusion criteria were selected for full review. Data were retrieved in a systematic manner, the validity and reliability of the assessment methods were evaluated, and quality was assessed using the Grading of Recommendations Assessment, Development and Evaluation classification. RESULTS Of the 6814 studies identified, 85 studies involving 2369 surgical residents were included in this review. The methods used to assess technical competence were categorized into 5 groups: Likert scales (37), benchmarks (31), binary outcomes (11), novel tools (4), and surrogate outcomes (2). Their validity and reliability were mostly previously established. The overall Grading of Recommendations Assessment, Development and Evaluation rating was high for randomized controlled trials and low for the observational studies. CONCLUSIONS The definition of technical competence continues to be debated within the medical literature. The methods used to evaluate technical competence predominantly include instruments that were originally created to assess technical skill. Very few studies identify standard setting approaches that differentiate competent versus noncompetent performers; subsequently, this has been identified as an area with great research potential.
Annals of Surgery | 2017
Andras B. Fecso; Peter Szasz; Georgi Kerezov; Teodor P. Grantcharov
Objective: Systematic review of the effect of intraoperative technical performance on patient outcomes. Background: The operating room is a high-stakes, high-risk environment. As a result, the quality of surgical interventions affecting patient outcomes has been the subject of discussion and research for years. Methods: MEDLINE, EMBASE, PsycINFO, and Cochrane databases were searched. All surgical specialties were eligible for inclusion. Data were reviewed with regard to the methods by which technical performance was measured, what patient outcomes were assessed, and how intraoperative technical performance affected patient outcomes. Quality of evidence was assessed using the Medical Education Research Study Quality Instrument (MERSQI). Results: Of the 12,758 studies initially identified, 24 articles (7775 total participants) were ultimately included in this review. Seventeen studies assessed the performance of the faculty alone, 2 assessed both the faculty and trainees, 1 assessed trainees alone, and in 4 studies, the level of the operating surgeon was not specified. In 18 studies, a performance assessment tool was used. Patient outcomes were evaluated using intraoperative complications, short-term morbidity, long-term morbidity, short-term mortality, and long-term mortality. The average MERSQI score was 11.67 (range 9.5–14.5). Twenty-one studies demonstrated that superior technical performance was related to improved patient outcomes. Conclusions: The results of this systematic review demonstrated that superior technical performance positively affects patient outcomes. Despite this initial evidence, more robust research is needed to directly assess intraoperative technical performance and its effect on postoperative patient outcomes using meaningful assessment instruments and reliable processes.
Annals of Surgery | 2016
Nicolas J. Dedy; Andras B. Fecso; Peter Szasz; Esther M. Bonrath; Teodor P. Grantcharov
Objective: To evaluate the effectiveness of debriefing and feedback on intraoperative nontechnical performance as an instructional strategy in surgical training. Background: Regulatory authorities for accreditation in North America have included nontechnical skills such as communication and teamwork in the competencies to be acquired by surgical residents before graduation. Concrete recommendations regarding the training and assessment of these competencies, however, are lacking. Methods: Nonrandomized, single-blinded study using an interrupted time-series design. Eleven senior surgical residents were observed during routine cases in the operating room (OR) at baseline and post-training. The Non-Technical Skills for Surgeons (NOTSS) rating system was used. Observers were trained in NOTSS and blinded to the study purpose. Independent of the blinded observations, a surgeon educator conducted intraoperative observations, which served as the basis for the structured debriefing and feedback intervention. The intervention was administered to participants after a set of (blinded) baseline observations had been completed. Primary outcome was nontechnical performance in the OR as measured by the NOTSS system. Secondary outcome was perceived utility as assessed by a post-training questionnaire. Results: Twelve senior trainees were recruited; 11 completed the study. Average NOTSS scores improved significantly from 3.2 (SD 0.37) at baseline to 3.5 (SD 0.43) post-training [t(10) = −2.55, P = 0.029]. All participants felt the intervention was useful, and the majority thought that debriefing and feedback on nontechnical skills should be integrated in surgical training. Conclusions: Debriefing and feedback in the OR may represent an effective strategy to ensure development of nontechnical skills in competency-based education.
British Journal of Surgery | 2017
Mitchell G. Goldenberg; Alaina Garbens; Peter Szasz; Tyler M. Hauer; Teodor P. Grantcharov
Standard setting allows educators to create benchmarks that distinguish between those who pass and those who fail an assessment. It can also be used to create standards in clinical and simulated procedural skill. The objective of this study was to systematically review the literature on the use of absolute standard-setting methodology to create benchmarks in technical performance.
British Journal of Surgery | 2016
Peter Szasz; Marisa Louridas; S. de Montbrun; Kenneth A. Harris; Teodor P. Grantcharov
Surgical education is becoming competency‐based with the implementation of in‐training milestones. Training guidelines should reflect these changes and determine the specific procedures for such milestone assessments. This study aimed to develop a consensus view regarding operative procedures and tasks considered appropriate for junior and senior trainees, and the procedures that can be used as technical milestone assessments for trainee progression in general surgery.
Surgical Endoscopy and Other Interventional Techniques | 2017
Marisa Louridas; Peter Szasz; Andras B. Fecso; Michael G. Zywiel; Parisa Lak; Ayse Basar Bener; Kenneth A. Harris; Teodor P. Grantcharov
Background: It is hypothesized that not all surgical trainees are able to reach technical competence despite ongoing practice. The objectives of this study were to assess trainees' ability to reach technical competence by assessing learning patterns in the acquisition of surgical skills, and to determine whether individuals' learning patterns were consistent across a range of open and laparoscopic tasks of variable difficulty. Methods: Sixty-five preclinical medical students participated in a training curriculum with standardized feedback over forty repetitions of the following laparoscopic and open technical tasks: peg transfer (PT), circle cutting (CC), intracorporeal knot tie (IKT), one-handed tie, and simulated laparotomy closure. Data mining techniques were used to analyze the prospectively collected data and stratify the students into four learning clusters. Performance was compared between groups, and learning curve characteristics unique to trainees who have difficulty reaching technical competence were quantified. Results: Top performers (22–35%) and high performers (32–42%) reached proficiency in all tasks. Moderate performers (25–37%) reached proficiency in all open tasks but not all laparoscopic tasks. Low performers (8–15%) failed to reach proficiency in four of five tasks, including all laparoscopic tasks (PT 7.8%; CC 9.4%; IKT 15.6%). Participants in lower performance clusters demonstrated a sustained performance disadvantage across tasks, with widely variable learning curves and no evidence of progression towards a plateau phase. Conclusions: Most students reached proficiency across a range of surgical tasks, but low-performing trainees failed to reach competence in laparoscopic tasks. With the increasing use of laparoscopy in surgical practice, screening potential candidates to identify the lowest performers may be beneficial.
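The cluster-based stratification described in this abstract can be illustrated with a toy sketch: a plain k-means over two learning-curve summary features (plateau level and average improvement per repetition). The features, the invented trainee curves, and the choice of k = 2 are assumptions for illustration only, not the study's actual data-mining method.

```python
import random

def curve_features(scores):
    """Summarize a learning curve as (plateau, slope): plateau is the
    mean of the last 5 repetitions, slope the average improvement per
    repetition. Both features are illustrative assumptions."""
    plateau = sum(scores[-5:]) / 5
    slope = (scores[-1] - scores[0]) / (len(scores) - 1)
    return (plateau, slope)

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=100, seed=0):
    """Plain k-means; returns (centroids, groups of points)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: dist2(p, centroids[i]))
            groups[nearest].append(p)
        # Recompute each centroid; keep the old one if a group emptied.
        centroids = [
            tuple(sum(col) / len(g) for col in zip(*g)) if g else centroids[i]
            for i, g in enumerate(groups)
        ]
    return centroids, groups

# Invented 40-repetition score curves: two trainees who plateau high
# (A, B) and two whose curves stay flat (C, D).
curves = {
    "A": [50 + 40 * (1 - 0.9 ** r) for r in range(40)],
    "B": [45 + 35 * (1 - 0.9 ** r) for r in range(40)],
    "C": [40 + 10 * (1 - 0.98 ** r) for r in range(40)],
    "D": [38 + 8 * (1 - 0.98 ** r) for r in range(40)],
}
points = [curve_features(c) for c in curves.values()]
centroids, groups = kmeans(points, k=2)
```

With these invented curves the two plateauing trainees land in one cluster and the two flat learners in the other; the study itself stratified sixty-five students into four clusters across five tasks.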
Annals of Surgery | 2017
Peter Szasz; Esther M. Bonrath; Marisa Louridas; Andras B. Fecso; Brett L. Howe; Adam Fehr; Michael Ott; Lloyd A. Mack; Kenneth A. Harris; Teodor P. Grantcharov
Objectives: The objectives of this study were to (1) create a technical and nontechnical performance standard for the laparoscopic cholecystectomy, (2) assess the classification accuracy and (3) credibility of these standards, (4) determine trainees’ ability to meet both standards concurrently, and (5) delineate factors that predict standard acquisition. Background: Scores on performance assessments are difficult to interpret in the absence of established standards. Methods: Trained raters observed General Surgery residents performing laparoscopic cholecystectomies using the Objective Structured Assessment of Technical Skill (OSATS) and the Objective Structured Assessment of Non-Technical Skills (OSANTS) instruments, while also providing a global competent/noncompetent decision for each performance. The global decision was used to divide the trainees into 2 contrasting groups and the OSATS or OSANTS scores were graphed per group to determine the performance standard. Parametric statistics were used to determine classification accuracy and concurrent standard acquisition, and receiver operator characteristic (ROC) curves were used to delineate predictive factors. Results: Thirty-six trainees were observed 101 times. The technical standard was an OSATS of 21.04/35.00 and the nontechnical standard an OSANTS of 22.49/35.00. Applying these standards, competent/noncompetent trainees could be discriminated in 94% of technical and 95% of nontechnical performances (P < 0.001). A 21% discordance between technically and nontechnically competent trainees was identified (P < 0.001). ROC analysis demonstrated case experience and trainee level were both able to predict achieving the standards with an area under the curve (AUC) between 0.83 and 0.96 (P < 0.001). Conclusions: The present study presents defensible standards for technical and nontechnical performance. Such standards are imperative to implementing summative assessments into surgical training.
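The predictive result in this abstract rests on the AUC, which has a simple rank-sum (Mann–Whitney) interpretation: the probability that a trainee who met the standard outranks one who did not on the predictor. A minimal sketch, using invented case-experience counts rather than the study's data:

```python
def auc(pos, neg):
    """Empirical AUC: the fraction of (positive, negative) pairs in
    which the positive scores higher; ties count as half a win."""
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Invented prior case counts for trainees who met vs. missed the
# technical standard.
met = [60, 85, 40, 120, 75]
missed = [10, 25, 15, 30, 45]
print(auc(met, missed))  # 0.96 with these invented numbers
```

An AUC of 0.5 would mean case experience carries no information about meeting the standard; values approaching 1.0, as in the 0.83–0.96 range reported above, indicate a strong predictor.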
American Journal of Surgery | 2016
Marisa Louridas; Peter Szasz; Sandra de Montbrun; Kenneth A. Harris; Teodor P. Grantcharov
BACKGROUND The objectives of this study were to assemble an international perspective on (1) current, and (2) ideal technical performance assessment methods, and (3) barriers to their adoption during: selection, in-training, and certification. METHODS A questionnaire was distributed to international educational directorates. RESULTS Eight of 10 jurisdictions responded. Currently, aptitude tests or simulated tasks are used during selection, observational rating scales during training, and nothing at certification. Ideally, innate ability should be assessed during selection; in-training evaluation reports and global rating scales should be used during training; and global and procedure-specific rating scales at the time of certification. Barriers include lack of predictive evidence for use in selection, financial limitations during training, and a combination with respect to certification. CONCLUSIONS Identifying current and ideal evaluation methods will prove beneficial to ensure the best assessments of technical performance are chosen for each training time point.
Journal of Surgical Education | 2017
Sandra de Montbrun; Marisa Louridas; Peter Szasz; Kenneth A. Harris; Teodor P. Grantcharov
INTRODUCTION There is a recognized need to develop high-stakes technical skills assessments for decisions of certification and resident promotion. High-stakes examinations require a rigorous approach to accruing validity evidence throughout the developmental process. One of the first steps in development is the creation of a blueprint that outlines the potential content of the examination. The purpose of this validation study was to develop an examination blueprint for a Canadian General Surgery assessment of technical skill certifying examination. METHODS A Delphi methodology was used to gain consensus among Canadian General Surgery program directors as to the content (tasks or procedures) that could be included in a certifying Canadian General Surgery examination. Consensus was defined a priori as a Cronbach's α ≥ 0.70. All procedures or tasks reaching a positive consensus (defined as ≥80% of program directors rating items as ≥4 on the 5-point Likert scale) were then included in the final examination blueprint. RESULTS Two Delphi rounds were needed to reach consensus. Of the 17 General Surgery program directors across the country, 14 (82.4%) and 10 (58.8%) responded to the first and second round, respectively. A total of 59 items and procedures reached positive consensus and were included in the final examination blueprint. CONCLUSIONS The present study has outlined the development of an examination blueprint for a General Surgery certifying examination using a consensus-based methodology. This validation study will serve as the foundational work from which simulated models will be developed, pilot tested, and evaluated.
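The consensus criterion above rests on Cronbach's α, which can be computed directly from a rater-by-item matrix: α = (k/(k−1))·(1 − Σ item variances / variance of totals). The ratings below are invented, and treating program directors as rows and candidate procedures as Likert-rated columns is an assumption about the setup, not the study's data.

```python
def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(ratings):
    """ratings: one row per rater, one column per item (procedure),
    each value a 5-point Likert rating."""
    k = len(ratings[0])                                  # number of items
    item_vars = sum(variance(col) for col in zip(*ratings))
    total_var = variance([sum(row) for row in ratings])
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Invented ratings: 4 program directors x 4 candidate procedures.
ratings = [
    [5, 5, 4, 5],
    [4, 4, 4, 4],
    [3, 4, 3, 3],
    [5, 4, 5, 5],
]
alpha = cronbach_alpha(ratings)
print(round(alpha, 3))  # 0.889 here, above the 0.70 consensus threshold
```

The ≥80%-rating-≥4 rule for including an individual item is a separate, simpler count applied per column once the overall α threshold is met.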
Journal of Surgical Education | 2017
Marisa Louridas; Peter Szasz; Sandra de Montbrun; Kenneth A. Harris; Teodor P. Grantcharov