Publications


Featured research published by Todd P. Chang.


Pediatrics | 2014

Designing and Conducting Simulation-Based Research

Adam Cheng; Marc Auerbach; Elizabeth A. Hunt; Todd P. Chang; Martin Pusic; Vinay Nadkarni; David Kessler

As simulation is increasingly used to study questions pertaining to pediatrics, it is important that investigators use rigorous methods to conduct their research. In this article, we discuss several important aspects of conducting simulation-based research in pediatrics. First, we describe, from a pediatric perspective, the 2 main types of simulation-based research: (1) studies that assess the efficacy of simulation as a training methodology and (2) studies where simulation is used as an investigative methodology. We provide a framework to help structure research questions for each type of research and describe illustrative examples of published research in pediatrics using these 2 frameworks. Second, we highlight the benefits of simulation-based research and how these apply to pediatrics. Third, we describe simulation-specific confounding variables that serve as threats to the internal validity of simulation studies and offer strategies to mitigate these confounders. Finally, we discuss the various types of outcome measures available for simulation research and offer a list of validated pediatric assessment tools that can be used in future simulation-based studies.


Advances in Simulation | 2016

Reporting guidelines for health care simulation research: Extensions to the CONSORT and STROBE statements

Adam Cheng; David Kessler; Ralph MacKinnon; Todd P. Chang; Vinay Nadkarni; Elizabeth A. Hunt; Jordan Duval-Arnould; Yiqun Lin; David A. Cook; Martin Pusic; Joshua Hui; David Moher; Matthias Egger; Marc Auerbach

Background Simulation-based research (SBR) is rapidly expanding, but the quality of reporting needs improvement. For a reader to critically assess a study, the elements of the study need to be clearly reported. Our objective was to develop reporting guidelines for SBR by creating extensions to the Consolidated Standards of Reporting Trials (CONSORT) and Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Statements. Methods An iterative multistep consensus-building process was used on the basis of the recommended steps for developing reporting guidelines. The consensus process involved the following: (1) developing a steering committee, (2) defining the scope of the reporting guidelines, (3) identifying a consensus panel, (4) generating a list of items for discussion via online premeeting survey, (5) conducting a consensus meeting, and (6) drafting reporting guidelines with an explanation and elaboration document. Results The following 11 extensions were recommended for CONSORT: item 1 (title/abstract), item 2 (background), item 5 (interventions), item 6 (outcomes), item 11 (blinding), item 12 (statistical methods), item 15 (baseline data), item 17 (outcomes/estimation), item 20 (limitations), item 21 (generalizability), and item 25 (funding). The following 10 extensions were recommended for STROBE: item 1 (title/abstract), item 2 (background/rationale), item 7 (variables), item 8 (data sources/measurement), item 12 (statistical methods), item 14 (descriptive data), item 16 (main results), item 19 (limitations), item 21 (generalizability), and item 22 (funding). An elaboration document was created to provide examples and explanation for each extension. Conclusions We have developed extensions for the CONSORT and STROBE Statements that can help improve the quality of reporting for SBR.


Academic Medicine | 2015

Learn, see, practice, prove, do, maintain: an evidence-based pedagogical framework for procedural skill training in medicine.

Taylor Sawyer; Marjorie Lee White; Pavan Zaveri; Todd P. Chang; Anne Ades; Heather French; JoDee M. Anderson; Marc Auerbach; Lindsay Johnston; David Kessler

Acquisition of competency in procedural skills is a fundamental goal of medical training. In this Perspective, the authors propose an evidence-based pedagogical framework for procedural skill training. The framework was developed based on a review of the literature using a critical synthesis approach and builds on earlier models of procedural skill training in medicine. The authors begin by describing the fundamentals of procedural skill development. Then, a six-step pedagogical framework for procedural skills training is presented: Learn, See, Practice, Prove, Do, and Maintain. In this framework, procedural skill training begins with the learner acquiring requisite cognitive knowledge through didactic education (Learn) and observation of the procedure (See). The learner then progresses to the stage of psychomotor skill acquisition and is allowed to deliberately practice the procedure on a simulator (Practice). Simulation-based mastery learning is employed to allow the trainee to prove competency prior to performing the procedure on a patient (Prove). Once competency is demonstrated on a simulator, the trainee is allowed to perform the procedure on patients with direct supervision, until he or she can be entrusted to perform the procedure independently (Do). Maintenance of the skill is ensured through continued clinical practice, supplemented by simulation-based training as needed (Maintain). Evidence in support of each component of the framework is presented. Implementation of the proposed framework presents a paradigm shift in procedural skill training. However, the authors believe that adoption of the framework will improve procedural skill training and patient safety.


Pediatrics | 2013

Interns' Success With Clinical Procedures in Infants After Simulation Training

David Kessler; Grace M. Arteaga; Kevin Ching; Laura Haubner; Gunjan Kamdar; Amanda Krantz; Julie B. Lindower; Michael E. Miller; Matei Petrescu; Martin Pusic; Joshua Rocker; Nikhil Shah; Christopher Strother; Lindsey Tilt; Eric Weinberg; Todd P. Chang; Daniel M. Fein; Marc Auerbach

BACKGROUND AND OBJECTIVE: Simulation-based medical education (SBME) is used to teach residents. However, few studies have evaluated its clinical impact. The goal of this study was to evaluate the impact of an SBME session on pediatric interns’ clinical procedural success. METHODS: This randomized trial was conducted at 10 academic medical centers. Interns were surveyed on infant lumbar puncture (ILP) and child intravenous line placement (CIV) knowledge and watched audiovisual expert modeling of both procedures. Participants were randomized to SBME mastery learning for ILP or CIV and for 6 succeeding months reported clinical performance for both procedures. ILP success was defined as obtaining a sample on the first attempt with <1000 red blood cells per high-power field or fluid described as clear. CIV success was defined as placement of a functioning catheter on the first try. Each group served as the control group for the procedure for which they did not receive the intervention. RESULTS: Two-hundred interns participated (104 in the ILP group and 96 in the CIV group). Together, they reported 409 procedures. ILP success rates were 34% (31 of 91) for interns who received ILP mastery learning and 34% (25 of 73) for controls (difference: 0.2% [95% confidence interval: –0.1 to 0.1]). The CIV success rate was 54% (62 of 115) for interns who received CIV mastery learning compared with 50% (58 of 115) for controls (difference: 3% [95% confidence interval: –10 to 17]). CONCLUSIONS: Participation in a single SBME mastery learning session was insufficient to affect pediatric interns’ subsequent procedural success.


Simulation in Healthcare | 2016

Reporting Guidelines for Health Care Simulation Research

Adam Cheng; David Kessler; Ralph MacKinnon; Todd P. Chang; Vinay Nadkarni; Elizabeth A. Hunt; Jordan Duval-Arnould; Yiqun Lin; David A. Cook; Martin Pusic; Joshua Hui; David Moher; Matthias Egger; Marc Auerbach

Introduction Simulation-based research (SBR) is rapidly expanding but the quality of reporting needs improvement. For a reader to critically assess a study, the elements of the study need to be clearly reported. Our objective was to develop reporting guidelines for SBR by creating extensions to the Consolidated Standards of Reporting Trials (CONSORT) and Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Statements. Methods An iterative multistep consensus-building process was used on the basis of the recommended steps for developing reporting guidelines. The consensus process involved the following: (1) developing a steering committee, (2) defining the scope of the reporting guidelines, (3) identifying a consensus panel, (4) generating a list of items for discussion via online premeeting survey, (5) conducting a consensus meeting, and (6) drafting reporting guidelines with an explanation and elaboration document. Results The following 11 extensions were recommended for CONSORT: item 1 (title/abstract), item 2 (background), item 5 (interventions), item 6 (outcomes), item 11 (blinding), item 12 (statistical methods), item 15 (baseline data), item 17 (outcomes/estimation), item 20 (limitations), item 21 (generalizability), and item 25 (funding). The following 10 extensions were recommended for STROBE: item 1 (title/abstract), item 2 (background/rationale), item 7 (variables), item 8 (data sources/measurement), item 12 (statistical methods), item 14 (descriptive data), item 16 (main results), item 19 (limitations), item 21 (generalizability), and item 22 (funding). An elaboration document was created to provide examples and explanation for each extension. Conclusions We have developed extensions for the CONSORT and STROBE Statements that can help improve the quality of reporting for SBR.


Pediatrics | 2015

Impact of Just-in-Time and Just-in-Place Simulation on Intern Success With Infant Lumbar Puncture.

David Kessler; Martin Pusic; Todd P. Chang; Daniel M. Fein; Devin Grossman; Renuka Mehta; Marjorie Lee White; Jaewon Jang; Travis Whitfill; Marc Auerbach; Michael Holder; Glenn R. Stryjewski; Kathleen Ostrom; Lara Kothari; Pavan Zaveri; Berry Seelbach; Dewesh Agrawal; Joshua Rocker; Kiran Hebbar; Maybelle Kou; Julie B. Lindower; Glenda K. Rabe; Audrey Z. Paul; Christopher Strother; Eric Weinberg; Nikhil Shah; Kevin Ching; Kelly Cleary; Noel S. Zuckerbraun; Brett McAninch

BACKGROUND AND OBJECTIVE: Simulation-based skill trainings are common; however, optimal instructional designs that improve outcomes are not well specified. We explored the impact of just-in-time and just-in-place training (JIPT) on interns’ infant lumbar puncture (LP) success. METHODS: This prospective study enrolled pediatric and emergency medicine interns from 2009 to 2012 at 34 centers. Two distinct instructional design strategies were compared. Cohort A (2009–2010) completed simulation-based training at commencement of internship, receiving individually coached practice on the LP simulator until achieving a predefined mastery performance standard. Cohort B (2010–2012) had the same training plus JIPT sessions immediately before their first clinical LP. Main outcome was LP success, defined as obtaining fluid with first needle insertion and <1000 red blood cells per high-power field. Process measures included use of analgesia, early stylet removal, and overall attempts. RESULTS: A total of 436 first infant LPs were analyzed. The LP success rate in cohort A was 35% (13/37), compared with 38% (152/399) in cohort B (95% confidence interval for difference [CI diff], −15% to +18%). Cohort B exhibited greater analgesia use (68% vs 19%; 95% CI diff, 33% to 59%), early stylet removal (69% vs 54%; 95% CI diff, 0% to 32%), and lower mean number of attempts (1.4 ± 0.6 vs 2.1 ± 1.6, P < .01) compared with cohort A. CONCLUSIONS: Across multiple institutions, intern success rates with infant LP are poor. Despite improving process measures, adding JIPT to training bundles did not improve success rate. More research is needed on optimal instructional design strategies for infant LP.


Pediatric Emergency Care | 2011

Iron poisoning: a literature-based review of epidemiology, diagnosis, and management.

Todd P. Chang; Cyrus Rangan

Although seen less frequently than acetaminophen or salicylate poisoning, acute iron poisoning remains a dangerous threat, particularly to pediatric patients. Multiple factors, including legal and manufacturing practices, have changed the landscape of iron poisoning over the decades. Despite these changes, diagnosis and management of iron poisoning have evolved only minimally, and the current evidence for iron poisoning is still based principally on case series, expert consensus, animal studies, and adult volunteer studies. This review article describes in detail the epidemiology of acute iron poisoning as it relates to the pediatric patient, as well as the historical and current array of literature on diagnosis and management.


Simulation in Healthcare | 2013

Qualitative Evaluation of Just-in-time Simulation-based Learning: The Learners’ Perspective

Gunjan Kamdar; David Kessler; Lindsey Tilt; Geetanjali Srivastava; Kajal Khanna; Todd P. Chang; Dorene F. Balmer; Marc Auerbach

Introduction Just-in-time training (JITT) is an educational strategy where training occurs in close temporal proximity to a clinical encounter. A multicenter study evaluated the impact of simulation-based JITT on interns’ infant lumbar puncture (LP) success rates. Concurrent with this multicenter study, we conducted a qualitative evaluation to describe learner perceptions of this modality of skills training. Methods Eleven interns from a single institution participated in a face-to-face semistructured interview exploring their JITT experience. Interviews were audio-recorded and transcribed. Two investigators reviewed the transcripts, assigned codes to the data, and categorized the codes. Categories were modified by 4 emergency physicians. As a means of data triangulation, we performed focus groups at a second institution. Results Benefits of JITT included review of anatomic landmarks, procedural rehearsal, and an opportunity to ask questions. These perceived benefits improved confidence with infant LP. Deficits of the training included lack of mannequin fidelity and unrealistic context when compared with an actual LP. An unexpected category, which emerged from our analysis, was that of barriers to JITT performance. Barriers included lack of time in a busy clinical setting and various instructor factors. The focus group findings confirmed and elaborated the benefits and deficits of JITT and the barriers to JITT performance. Conclusions Just-in-time training improved procedural confidence with infant LP, but workplace busyness and lack of instructor support or awareness were barriers to JITT performance. Optimal LP JITT would occur with improved contextual fidelity. More research is needed to determine optimal training strategies that are effective for the learner and maximize clinical outcomes for the patient.


Academic Emergency Medicine | 2014

Pediatric emergency medicine asynchronous e-learning: a multicenter randomized controlled Solomon four-group study

Todd P. Chang; Phung K. Pham; Brad Sobolewski; Cara Doughty; Nazreen Jamal; Karen Y. Kwan; Kim Little; Timothy E. Brenkert; David J. Mathison

OBJECTIVES Asynchronous e-learning allows for targeted teaching, particularly advantageous when bedside and didactic education is insufficient. An asynchronous e-learning curriculum has not been studied across multiple centers in the context of a clinical rotation. We hypothesize that an asynchronous e-learning curriculum during the pediatric emergency medicine (EM) rotation improves medical knowledge among residents and students across multiple participating centers. METHODS Trainees on pediatric EM rotations at four large pediatric centers from 2012 to 2013 were randomized in a Solomon four-group design. The experimental arms received an asynchronous e-learning curriculum consisting of nine Web-based, interactive, peer-reviewed Flash/HTML5 modules. Postrotation testing and in-training examination (ITE) scores quantified improvements in knowledge. A 2 × 2 analysis of covariance (ANCOVA) tested interaction and main effects, and Pearson's correlation tested associations between module usage, scores, and ITE scores. RESULTS A total of 256 of 458 participants completed all study elements; 104 had access to asynchronous e-learning modules, and 152 were controls who used the current education standards. No pretest sensitization was found (p = 0.75). Use of asynchronous e-learning modules was associated with an improvement in posttest scores (p < 0.001), from a mean score of 18.45 (95% confidence interval [CI] = 17.92 to 18.98) to 21.30 (95% CI = 20.69 to 21.91), a large effect (partial η² = 0.19). Posttest scores correlated with ITE scores (r² = 0.14, p < 0.001) among pediatric residents. CONCLUSIONS Asynchronous e-learning is an effective educational tool to improve knowledge in a clinical rotation. Web-based asynchronous e-learning is a promising modality to standardize education among multiple institutions with common curricula, particularly in clinical rotations where scheduling difficulties, seasonality, and variable experiences limit in-hospital learning.


Pediatric Emergency Care | 2013

Are pediatric interns prepared to perform infant lumbar punctures? A multi-institutional descriptive study.

Marc Auerbach; Todd P. Chang; Jennifer Reid; Casandra Quinones; Amanda Krantz; Amanda Pratt; James M. Gerard; Renuka Mehta; Martin Pusic; David Kessler

Background There are few data describing pediatric interns’ experiences, knowledge, attitudes, and skills related to common procedures. This information would help guide supervisors’ decisions about interns’ preparedness and training needs. Objectives This study aimed to describe pediatric interns’ medical school experiences, knowledge, attitudes, and skills with regard to infant lumbar punctures (LPs) and to describe the impact of these factors on interns’ infant LP skills. Methods This prospective cross-sectional descriptive study was conducted at 21 academic medical centers participating during 2010. Participants answered 8 knowledge questions, 3 attitude questions, and 6 experience questions online. Skills were assessed on an infant LP simulator using a 15-item subcomponent checklist and a 4-point global assessment. Results Eligible interns numbered 493, with 422 (86%) completing surveys and 362 (73%) completing skills assessments. The majority, 287/422 (68%), had never performed an infant LP; however, 306 (73%) had observed an infant LP during medical school. The mean (SD) knowledge score was 63% (±21%). The mean (SD) subcomponent skills checklist score was 73% (±21%). On the global skills assessment, 225 (62%) interns were rated as beginner, and 137 (38%) were rated as competent, proficient, or expert. Independent predictors of an above-beginner simulator performance included infant LP experience on a patient (odds ratio [OR], 2.2; 95% confidence interval [CI], 1.4–3.5), a knowledge score greater than 65% (OR, 2.4; 95% CI, 1.5–3.7), or self-reported confidence (OR, 3.5; 95% CI, 1.9–6.4). Conclusions At the start of residency, the majority of pediatric interns have little experience, poor knowledge, and low confidence and are not prepared to perform infant LPs.

Collaboration


Top co-authors of Todd P. Chang:

Daniel M. Fein, Albert Einstein College of Medicine

Pavan Zaveri, Children's National Medical Center

Ralph MacKinnon, Boston Children's Hospital

Adam Cheng, Alberta Children's Hospital

Vinay Nadkarni, Children's Hospital of Philadelphia

Renuka Mehta, Georgia Regents University