Daniel M. Wong
St. Vincent's Health System
Publications
Featured research published by Daniel M. Wong.
Regional Anesthesia and Pain Medicine | 2012
Michael J. Barrington; Daniel M. Wong; Ben Slater; Jason J. Ivanusic; Matthew Ovens
Background and Objectives Ultrasound needle visualization is a fundamental skill required for competency in ultrasound-guided regional anesthesia. The primary objective of this study, using a cadaver model, was to quantify the number of procedures that novices need to perform before competency in ultrasound needle visualization skills, defined using a predefined dynamic scoring system, was achieved. Methods Fifteen trainees, novices to ultrasound-guided regional anesthesia, performed 30 simulated sciatic nerve blocks in cadavers. After each procedure, a supervisor provided feedback regarding quality-compromising behaviors. Learning curves were constructed for each individual trainee by calculating cusum statistics. Trainees were categorized into those who were proficient, not proficient, and undetermined. A mathematical model predicted the number of procedures required before an acceptable success rate would be attained. Logistic regression was used to identify factors associated with success. Results There was wide variability in individual cusum curves. The mean number of trials required to achieve competency in this cohort was 28. Trainees were categorized as proficient (n = 6), not proficient (n = 5), and undetermined (n = 4). With each subsequent procedure, there was a significant increase in the likelihood of success for trainees categorized as not proficient (P = 0.023) or undetermined (P = 0.024) but not for trainees categorized as proficient (P = 0.076). Participants recruited later in the study had an increased likelihood of success (P < 0.001). Conclusions Trainees became competent in ultrasound needle visualization at a variable rate. This study estimates that novices would require approximately 28 supervised trials with feedback before competency in ultrasound needle visualization is achieved.
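The cusum learning curves described in this abstract can be sketched in a few lines. The version below is the simple cumulative-sum form common in the learning-curve literature (each failure raises the score, each success lowers it, relative to an acceptable failure rate); the acceptable rate and the trial outcomes are illustrative assumptions, not the study's own parameters or data.

```python
import numpy as np

def cusum_curve(outcomes, acceptable_failure_rate=0.2):
    """Cumulative sum learning curve.

    outcomes: sequence of 0 (success) / 1 (failure), one per trial.
    Each failure adds (1 - p0) to the running score and each success
    subtracts p0, so a sustained downward trend indicates performance
    better than the acceptable failure rate p0.
    """
    x = np.asarray(outcomes, dtype=float)
    increments = x - acceptable_failure_rate
    return np.cumsum(increments)

# Hypothetical trainee: fails early trials, then improves.
trials = [1, 1, 0, 1, 0, 0, 0, 0, 0, 0]
curve = cusum_curve(trials, acceptable_failure_rate=0.2)
```

Plotting `curve` against the trial number gives the individual learning curve; competency is typically inferred when the curve crosses a predefined decision boundary.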
Regional Anesthesia and Pain Medicine | 2014
Daniel M. Wong; Mathew J. Watson; Roman Kluger; A. Chuan; Michael D. Herrick; Irene Ng; Damian J. Castanelli; Lisa C. Lin; Andrew Lansdown; Michael J. Barrington
Background and Objectives Checklists and global rating scales (GRSs) are used for assessment of anesthesia procedural skills. The purpose of this study was to evaluate the reliability and validity of a recently proposed assessment tool comprising a checklist and GRS specific for ultrasound-guided regional anesthesia. Methods In this prospective, fully crossed study, we videotaped 30 single-target nerve block procedures performed by anesthesia trainees. Following pilot assessment and observer training, videos were assessed in random order by 6 blinded, expert observers. Interrater reliability was evaluated with intraclass correlation coefficients (ICCs) based on a 2-way random-effects model that took into account both agreement and correlation between observer results. Construct validity and feasibility were also evaluated. Results The ICC between assessors’ total scores was 0.44 (95% confidence interval, 0.27–0.62). All 6 observers scored “experienced trainees” higher than “inexperienced trainees” (median total score 76.7 vs 54.2, P = 0.01), supporting the test’s construct validity. The median time to assess the videos was 4 minutes 29 seconds. Conclusions This is the first study to evaluate the reliability and validity of a combined checklist and GRS for ultrasound-guided regional anesthesia using multiple observers and taking into account both absolute agreement and correlation in determining the ICC of 0.44 for interrater reliability. There was evidence to support construct validity.
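The interrater reliability statistic named in this abstract, an ICC from a two-way random-effects model accounting for both agreement and correlation, corresponds to ICC(A,1) in the McGraw and Wong conventions. The sketch below is a textbook implementation of that form from the two-way ANOVA mean squares; it is not the study's own analysis code, and the ratings matrix layout (subjects by raters, no missing data) is an assumption.

```python
import numpy as np

def icc_a1(ratings):
    """ICC(A,1): two-way random effects, absolute agreement, single rater.

    ratings: (n_subjects, k_raters) array of scores.
    """
    Y = np.asarray(ratings, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    row_means = Y.mean(axis=1)
    col_means = Y.mean(axis=0)
    # Mean squares from the two-way ANOVA decomposition
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # raters
    sse = np.sum((Y - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                        # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Because the absolute-agreement form penalizes systematic rater offsets, two raters who differ by a constant score less than 1 even though their rankings correlate perfectly.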
Anaesthesia | 2015
A. Chuan; Petra L. Graham; Daniel M. Wong; Michael J. Barrington; D. B. Auyong; A. J. D. Cameron; Y. C. Lim; L. Pope; B Germanoska; Kirsty Forrest; Colin Royse
The aim of this study was to create and evaluate the validity, reliability and feasibility of the Regional Anaesthesia Procedural Skills (RAPS) tool, designed for the assessment of all peripheral and neuraxial blocks using all nerve localisation techniques. The first phase was construction of a 25‐item checklist by five regional anaesthesia experts using a Delphi process. This checklist was combined with a global rating scale to create the tool. In the second phase, initial validation by 10 independent anaesthetists using a test–retest methodology was successful (Cohen kappa ≥ 0.70 for inter‐rater agreement; no difference in scores between test and retest, paired t‐test, p > 0.12). In the third phase, 70 clinical videos of trainees were scored by three blinded international assessors. The RAPS tool exhibited face validity (p < 0.026), construct validity (p < 0.001), feasibility (mean time to score < 3.9 min), and overall reliability (intraclass correlation coefficient 0.80 (95% CI 0.67–0.88)). The RAPS tool used in this study is a valid and reliable assessment tool to score the performance of trainees for regional anaesthesia.
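The Cohen kappa criterion used for the test–retest phase above can be illustrated with a minimal implementation. This is a generic sketch of the statistic, chance-corrected agreement between two sets of categorical ratings; the data in the example are hypothetical, not the study's.

```python
import numpy as np

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: chance-corrected agreement between two raters
    who each classify the same set of items."""
    a = np.asarray(rater1)
    b = np.asarray(rater2)
    categories = np.union1d(a, b)
    p_observed = np.mean(a == b)
    # Expected agreement if raters assigned categories independently,
    # following their own marginal frequencies
    p_expected = sum(np.mean(a == c) * np.mean(b == c) for c in categories)
    return (p_observed - p_expected) / (1 - p_expected)
```

Kappa of 1 indicates perfect agreement, 0 indicates agreement no better than chance, so a threshold of ≥ 0.70 is a demanding inter-rater criterion.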
Anaesthesia | 2014
M. J. Watson; Daniel M. Wong; Roman Kluger; A. Chuan; M. D. Herrick; I. Ng; Damian J. Castanelli; L. Lin; Andrew Lansdown; Michael J. Barrington
Assessment tools must be investigated for reliability, validity and feasibility before being implemented. In 2013, the Australian and New Zealand College of Anaesthetists introduced workplace‐based assessments, including a direct observation of procedural skills assessment tool. The objective of this study was to evaluate the psychometric properties of this assessment tool for ultrasound‐guided regional anaesthesia. Six experts assessed 30 video‐recorded trainee performances of ultrasound‐guided regional anaesthesia. Inter‐rater reliability, assessed using absolute agreement intraclass correlation coefficients, varied from 0.10 to 0.49 for the nine individual nine‐point scale items, and was 0.25 for a ‘total score’ of all items. Internal consistency was measured by correlation between the ‘total score’ and the ‘overall performance’ scale item (r = 0.68, p < 0.001). Construct validity was demonstrated by the ‘total score’ correlating with trainee experience (r = 0.51, p = 0.004). The mean time taken to complete assessments was 6 min 35 s.
Regional Anesthesia and Pain Medicine | 2016
Michael J. Barrington; Laura P. Viero; Roman Kluger; Alexander L. Clarke; Jason J. Ivanusic; Daniel M. Wong
Background and Objectives The objectives of this study were to determine the learning curve for capturing sonograms and identifying anatomical structures relevant to ultrasound-guided axillary brachial plexus block and to determine if massed was superior to distributed practice for this core sonographic skill. Methods Ten University of Melbourne, third- or fourth-year Doctor of Medicine students were randomized to massed or distributed practice. Participants performed 15 supervised learning sessions comprising scanning followed by feedback. A “sonographic proficiency score” was calculated by summing parameters in acquiring and interpreting the sonogram, and identifying relevant anatomical structures. Results Between the 1st and 10th sessions, the proficiency scores increased (P = 0.043). Except for one, all participants had relatively rapid increases in their “sonographic proficiency scores.” There was no difference in proficiency scores between the 15th and 10th sessions (P > 0.05). There was no difference in scores between groups for the first session (P = 0.40), the 15th session (P = 0.10), or at any other time. There was no difference in the slope of the increase in “sonographic proficiency score” over the first 10 scanning sessions between groups (massed, 1.1 [0.32]; distributed, 0.90 [0.15]; P = 0.22; presented as mean [SD]). The 95% confidence interval for the difference in slopes between massed and distributed groups was −0.15 to 0.56. Conclusions The proficiency of participants in capturing sonograms and identifying anatomical structures improved significantly over 8 to 10 learning sessions. Because of sample size issues, we cannot make a firm conclusion regarding massed versus distributed practice for this core sonographic skill.
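The slope-of-learning comparison reported above can be sketched as two steps: fit a least-squares slope per participant, then form a confidence interval for the difference in mean slopes between groups. The pooled-variance form, the critical t value, and all data below are illustrative assumptions, not the study's analysis.

```python
import numpy as np

def learning_slope(scores):
    """Least-squares slope of proficiency score vs. session number."""
    sessions = np.arange(1, len(scores) + 1)
    slope, _intercept = np.polyfit(sessions, scores, 1)
    return slope

def slope_difference_ci(slopes_a, slopes_b, t_crit=2.306):
    """Pooled-variance CI for the difference in mean slopes between
    two groups; t_crit is a hypothetical critical value chosen for
    the illustrative sample sizes."""
    a = np.asarray(slopes_a, dtype=float)
    b = np.asarray(slopes_b, dtype=float)
    diff = a.mean() - b.mean()
    sp2 = ((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1)) \
          / (len(a) + len(b) - 2)
    se = np.sqrt(sp2 * (1 / len(a) + 1 / len(b)))
    return diff - t_crit * se, diff + t_crit * se

# Hypothetical per-participant slopes for two practice groups
massed = [1.0, 1.2, 0.8]
distributed = [0.9, 1.1, 1.0]
lo, hi = slope_difference_ci(massed, distributed)
```

As in the study's result, an interval that straddles zero leaves the massed-versus-distributed question unresolved rather than demonstrating equivalence.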
Regional Anesthesia and Pain Medicine | 2016
Michael J. Barrington; Samuel Gledhill; Roman Kluger; Alexander L. Clarke; Daniel M. Wong; Henry Davidson; R. Thomas
Background Ultrasound-guided techniques improve outcomes in regional anesthesia when compared with traditional techniques; however, this assertion has not been studied with novices. The primary objective of this study was to compare sensory and motor block after axillary brachial plexus block when performed by novice trainees allocated to an ultrasound- or nerve-stimulator-guided group. A secondary objective was to compare the rates of skill acquisition between the 2 groups. Methods This study was a prospective, randomized, observer-blinded, 2-arm controlled trial. Anesthesia trainees participating in this trial were novices to axillary brachial plexus block and sonography. All trainee participants underwent a standardized training program. The primary outcome was combined sensory and motor block in the relevant territories 30 minutes after completion of block. A global rating scale was used to assess trainee block performance. Results The study was stopped after 12 trainees had completed 153 blocks. There was no difference between groups in combined motor-sensory score (P = 0.28) or as a function of block number (P = 0.38). There was no difference in onset between groups (P = 0.38). In both groups, there was an increase in the global rating scale score (P < 0.0001) and reduced preblock survey and block performance times (P = 0.001) with experience. Conclusions We were unable to demonstrate a difference in the efficacy of axillary brachial plexus block performed by novices when ultrasound guidance was compared with a nerve stimulator technique. There was evidence of similarly improved clinical performance of novices in both groups.
Anaesthesia | 2016
A. Chuan; Petra L. Graham; Kirsty Forrest; Michael J. Barrington; Colin Royse; Daniel M. Wong; A. J. D. Cameron; Y. C. Lim; D. B. Auyong
Regional Anesthesia and Pain Medicine | 2013
Daniel M. Wong; Michael J. Barrington
We read with interest the publication by Cheung et al, consisting of a consensus-backed checklist and modified global rating scale. This "objective assessment tool" may be the best of its kind to date and has been designed to be highly specific for the assessment of ultrasound-guided regional anesthesia. However, we suggest improvements in the following areas: (1) As outlined in a previous review on the subject, we believe that the options for the checklist should be dichotomous to be more objective. Each item is currently scored using 3 options: "not performed," "performed poorly," and "performed well." We suggest that the scoring options be made dichotomous by combining "not performed" together with "performed poorly." For example, consider item 3, "Choice of correct transducer," and item 10, "Maintenance of needle tip during advancement of needle." To make the distinction between "not performed" and "performed poorly" seems either irrelevant or not applicable. (2) The needle in-plane skills (items 9–11) are difficult to rate objectively without further definition. Where do you cross the line from "performed poorly" to "performed well" in regard to item 9, "Appropriate needle alignment"; item 10, "Maintenance of needle tip during advancement"; and item 11, "Efficiency of regaining needle tip"? We have previously explored the usefulness of incorporating quality-compromising behaviors into an assessment of needling skills, and there may be a role for incorporating a similar or shortened version of these quality-compromising behaviors into the needle alignment items. For example, some reference could be made to the lack of obvious errors (eg, "infrequently" advancing the needle without the tip seen). Otherwise, these subjective items may poorly discriminate between trainees.
(3) Items 19 and 20 (needle tip positioning during injection) are possibly redundant and almost identical to items 10 and 11 (needle tip positioning during advancement). There may also be some redundancy among the global rating scale items: "Time and motion" versus "Instrument handling," and "Flow of procedure" versus "Knowledge of procedure." These items may be difficult to separate because of their wording. (4) "Overall should the candidate pass or fail" appears subjective. Perhaps the wording could be: "Would you be happy (or would it be safe) to let the candidate perform the next procedure unsupervised (or with minimal supervision)?"
Anaesthesia and Intensive Care | 2010
Lee Th; Michael J. Barrington; T.M.N. Tran; Daniel M. Wong; P. Hebbard
Regional Anesthesia and Pain Medicine | 2009
Daniel M. Wong; Sam Gledhill; R. Thomas; Michael J. Barrington