
Publications


Featured research published by Gary Cole.


Medical Education | 2008

CanMEDS evaluation in Canadian postgraduate training programmes: tools used and programme director satisfaction

Sophia Chou; Gary Cole; Kevin McLaughlin; Jocelyn Lockyer

Context: The Royal College of Physicians and Surgeons of Canada (RCPSC) CanMEDS framework is being incorporated into specialty education worldwide. However, the literature on how to evaluate trainees in the CanMEDS competencies remains sparse.


Medical Teacher | 2013

Simulation in healthcare: A taxonomy and a conceptual framework for instructional design and media selection

Gilles Chiniara; Gary Cole; Ken Brisbin; Dan Huffman; Betty Cragg; Mike Lamacchia; Dianne Norman

Background: Simulation in healthcare lacks a dedicated framework and supporting taxonomy for instructional design (ID) to assist educators in creating appropriate simulation learning experiences. Aims: This article aims to fill the identified gap. It provides a conceptual framework for ID of healthcare simulation. Methods: The work is based on published literature and authors’ experience with simulation-based education. Results: The framework for ID itself presents four progressive levels describing the educational intervention. Medium is the mode of delivery of instruction. Simulation modality is the broad description of the simulation experience and includes four modalities (computer-based simulation, simulated patient (SP), simulated clinical immersion, and procedural simulation) in addition to mixed, hybrid simulations. Instructional method describes the techniques used for learning. Presentation describes the detailed characteristics of the intervention. The choice of simulation as a learning medium is based on a matrix of simulation relating acuity (severity) to opportunity (frequency) of events, with a corresponding zone of simulation. An accompanying chart assists in the selection of appropriate media and simulation modalities based on learning outcomes. Conclusion: This framework should help educators incorporate simulation in their ID efforts. It also provides a taxonomy to streamline future research and ID efforts in simulation.


Academic Medicine | 2005

Incorporating simulation technology in a Canadian internal medicine specialty examination: a descriptive report.

Rose Hatala; Barry O. Kassen; James Nishikawa; Gary Cole; S. Barry Issenberg

High-stakes assessment of clinical performance through the use of standardized patients (SPs) is limited by the SPs' lack of real physical abnormalities. The authors report on the development and implementation of physical examination stations that combine simulation technology in the form of digitized cardiac auscultation videos with an SP assessment for the 2003 Royal College of Physicians and Surgeons of Canada's Comprehensive Objective Examination in Internal Medicine. The authors assessed candidates on both the traditional stations and the stations that combined the traditional SP examination with the digitized cardiac auscultation video. For the combined stations, candidates first completed a physical examination of the SP, watched and listened to a computer simulation, and then described their auscultatory findings. The candidates' mean scores for both types of stations were similar, as were the mean discrimination indices, suggesting that the combined stations were of a testing standard similar to the traditional stations. Combining an SP with simulation technology may be one approach to the assessment of clinical competence in high-stakes testing situations.


Medical Education | 2008

Assessing cardiac physical examination skills using simulation technology and real patients: A comparison study

Rose Hatala; S. Barry Issenberg; Barry O. Kassen; Gary Cole; C Maria Bacchus; Ross J. Scalese

Objective: High‐stakes assessments of doctors' physical examination skills often employ standardised patients (SPs) who lack physical abnormalities. Simulation technology provides additional opportunities to assess these skills by mimicking physical abnormalities. The current study examined the relationship between internists' cardiac physical examination competence as assessed with simulation technology compared with that assessed with real patients (RPs).


Medical Education | 2016

Do OSCE progress test scores predict performance in a national high-stakes examination?

Debra Pugh; Farhan Bhanji; Gary Cole; Jonathan Dupre; Rose Hatala; Susan Humphrey-Murto; Claire Touchie; Timothy J. Wood

Progress tests, in which learners are repeatedly assessed on equivalent content at different times in their training and provided with feedback, would seem to lend themselves well to a competency‐based framework, which requires more frequent formative assessments. The objective structured clinical examination (OSCE) progress test is a relatively new form of assessment that is used to assess the progression of clinical skills. The purpose of this study was to establish further evidence for the use of an OSCE progress test by demonstrating an association between scores from this assessment method and those from a national high‐stakes examination.


Medical Teacher | 2011

Validating objectives and training in Canadian paediatrics residency training programmes

Harish Amin; Nalini Singhal; Gary Cole

Background: Changing health care systems and learning environments with reduction in resident work hours raises the question: “Are we adequately training our paediatricians?” Aims: (1) Identify clinical competencies to be acquired during paediatric residency training to enable graduates to practise as consultant paediatricians; (2) Identify gaps in preparedness during training and; (3) Review and validate competencies contained in the Royal College of Physicians and Surgeons of Canada (RCPSC) objectives of training (OTR) for paediatrics. Methods: A questionnaire with 19 classification domains containing 92 clinical competencies was administered to RCPSC certified paediatricians who completed residency training in Canada from June 2004 to June 2008. For each competency, paediatricians were asked to indicate the importance and their degree of preparedness upon entering practice. Gap scores (GSs) between importance and preparedness were calculated. Results: Response rate was 43% (187/435); 91.3% (84/92) of competencies in the RCPSC OTR were identified as important. Paediatricians felt less than adequately prepared for 25% (23/92) of competencies; 40 competencies had GSs >10%. Conclusions: The unique approach used in this study is useful in validating OTR as well as the preparation of residents in relation to OTR. The results indicate a potential need for additional training in specific competencies.


Academic Medicine | 2007

Assessing the relationship between cardiac physical examination technique and accurate bedside diagnosis during an objective structured clinical examination (OSCE)

Rose Hatala; S. Barry Issenberg; Barry O. Kassen; Gary Cole; C Maria Bacchus; Ross J. Scalese

Background: Many standardized patient (SP) encounters employ SPs without physical findings and, thus, assess physical examination technique. The relationship between technique, accurate bedside diagnosis, and global competence in physical examination remains unclear. Method: Twenty-eight internists undertook a cardiac physical examination objective structured clinical examination, using three modalities: real cardiac patients (RP), "normal" SPs combined with related cardiac audio–video simulations, and a cardiology patient simulator (CPS). Two examiners assessed physical examination technique and global bedside competence. Accuracy of cardiac diagnosis was scored separately. Results: The correlation coefficients between participants' physical examination technique and diagnostic accuracy were 0.39 for RP (P < .05), 0.29 for SP, and 0.30 for CPS. Patient modality affected the relative weighting of technique and diagnostic accuracy in the determination of global competence. Conclusions: Assessments of physical examination competence should evaluate both technique and diagnostic accuracy. Patient modality affects the relative contributions of each outcome towards a global rating.


Simulation in Healthcare: Journal of the Society for Simulation in Healthcare | 2009

Development and validation of a cardiac findings checklist for use with simulator-based assessments of cardiac physical examination competence.

Rose Hatala; Ross J. Scalese; Gary Cole; Maria Bacchus; Barry O. Kassen; S. Barry Issenberg

Introduction: Objective outcome measures for use with simulator-based assessments of cardiac physical examination competence are lacking. The current study describes the development and validation of an approach to scoring performance using a cardiac findings checklist. Methods: A cardiac findings checklist was developed and implemented for use with a simulator-based assessment of cardiac physical examination competence at a Canadian national specialty examination in internal medicine. Candidate performance as measured using the checklist was compared with global ratings of clinical performance on the cardiac patient simulator and with overall examination performance. Results: Interrater reliability for scoring the checklist ranged from 0.95 for scoring correct findings to 0.72 for scoring incorrect findings. A summary checklist score had a Pearson correlation of 0.60 with overall candidate performance on the simulator-based station. Conclusion: Use of a cardiac findings checklist provides one objective measure of cardiac physical examination competence that may be used with simulator-based assessments.


Medical Teacher | 2007

Does physical examination competence correlate with bedside diagnostic acumen? An observational study.

Rose Hatala; Gary Cole; Barry O. Kassen; C Maria Bacchus; S. Barry Issenberg

Aim: To examine the relationship between a physician's ability to examine a standardized patient (SP) and their ability to correctly identify related clinical findings created with simulation technology. Method: The authors conducted an observational study of 347 candidates during a Canadian national specialty examination at the end of postgraduate internal medicine training. Stations were created that combined physical examination of an SP with evaluation of a related audio-video simulation of a patient abnormality, in the domains of cardiology and neurology. Examiners evaluated a candidate's competence at performing a physical examination of an SP and their accuracy in diagnosing a related audio-video simulation. Results: For the cardiology stations, the correlation between the physical examination scores and recognition of simulation abnormalities was 0.31 (p < 0.01). For the neurology stations, the correlation was 0.27 (p < 0.01). Addition of the simulations identified 18% of 197 passing candidates on the cardiology stations and 17% of 240 passing candidates on the neurology stations who were competent in their physical examination technique but did not achieve the passing score for diagnostic skills. Conclusions: Assessments incorporating SPs without physical findings may need to include other methodologies to assess bedside diagnostic acumen.


Medical Teacher | 2016

Constraints on reducing the costs of high-stakes OSCEs

Gary Cole; Jonathan Dupre

It was with particular interest that we read your commentary on misconceptions and the OSCE (Harden 2015). At the Royal College of Physicians and Surgeons of Canada, we administer 11 OSCE examinations involving standardized patients, so we agree that OSCEs are highly desirable for the reasons you mentioned: they can be reliable and valid, they measure application of knowledge, and they can be used to provide feedback. We have compared the reactions of both candidates and exam developers to the written format of exams versus the OSCE, and both certainly prefer the OSCE. However, we wish to point out that the circumstances for the use of any assessment tool can vary considerably, affecting its cost. OSCEs are not all created equal. In particular, there is a difference between the use of OSCEs for formative and summative assessment. In a high-stakes setting, where validity and reliability are absolutely essential, the OSCE must be rigorously standardized. This can be costly (which is not to say that the cost is unjustified). The original OSCE publication (Harden et al. 1975) proposed including real patients, but this would seriously detract from the standardization necessary for a high-stakes exam. A high-stakes examination must use highly standardized scenarios with highly trained standardized patients, and backup standardized patients must be available. The development and training for these scenarios are expensive, and a minimum number of stations is needed to achieve reliability. The use of OSCEs in a formative context, where standardization is less of a concern, is less subject to these constraints. Measures to reduce costs, such as sequential testing (candidates object to this process for high-stakes examinations) or the use of real patients (who cannot be truly standardized), are not necessarily feasible for high-stakes exams.
The point is simply that OSCE costs are higher for high-stakes exams, and measures to reduce these costs that are feasible for formative exams are not necessarily feasible for high-stakes exams.

Collaboration


Dive into Gary Cole's collaborations.

Top Co-Authors

Rose Hatala
University of British Columbia

Barry O. Kassen
University of British Columbia

Dan Huffman
Alberta Health Services

Dianne Norman
McMaster Children's Hospital