Publications


Featured research published by Tyrone Donnon.


Academic Medicine | 2007

The predictive validity of the MCAT for medical school performance and medical board licensing examinations: a meta-analysis of the published research.

Tyrone Donnon; Elizabeth Oddone Paolucci; Claudio Violato

Purpose: To conduct a meta-analysis of published studies to determine the predictive validity of the MCAT on medical school performance and medical board licensing examinations. Method: The authors included all peer-reviewed published studies reporting empirical data on the relationship between MCAT scores and medical school performance or medical board licensing exam measures. Moderator variables, participant characteristics, and medical school performance/medical board licensing exam measures were extracted and reviewed separately by three reviewers using a standardized protocol. Results: Medical school performance measures from 11 studies and medical board licensing examinations from 18 studies, for a total of 23 studies, were selected. A random-effects model meta-analysis of weighted effect sizes (r) resulted in (1) a predictive validity coefficient for the MCAT in the preclinical years of r = 0.39 (95% confidence interval [CI], 0.21–0.54) and on the USMLE Step 1 of r = 0.60 (95% CI, 0.50–0.67); and (2) the biological sciences subtest as the best predictor of medical school performance in the preclinical years (r = 0.32; 95% CI, 0.21–0.42) and on the USMLE Step 1 (r = 0.48; 95% CI, 0.41–0.54). Conclusions: The predictive validity of the MCAT ranges from small to medium for both medical school performance and medical board licensing exam measures. The medical profession is challenged to develop screening and selection criteria with improved validity that can supplement the MCAT as an important criterion for admission to medical schools.
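
The abstract summarizes the pooling procedure but not the computation. As a rough illustration of how study-level correlation coefficients are typically combined under a random-effects model (Fisher z transformation with a DerSimonian-Laird estimate of between-study variance), here is a minimal Python sketch; the study correlations and sample sizes in it are invented, not data from this meta-analysis.

```python
import numpy as np

def pool_correlations_random_effects(r, n):
    """Pool correlation coefficients with a DerSimonian-Laird random-effects model.

    r : per-study correlations, n : per-study sample sizes (illustrative values only).
    """
    r, n = np.asarray(r, float), np.asarray(n, float)
    z = np.arctanh(r)              # Fisher z transform of each correlation
    v = 1.0 / (n - 3)              # within-study variance of z
    w = 1.0 / v                    # fixed-effect weights
    z_fixed = np.sum(w * z) / np.sum(w)
    q = np.sum(w * (z - z_fixed) ** 2)           # Cochran's Q heterogeneity statistic
    df = len(r) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                # between-study variance estimate
    w_re = 1.0 / (v + tau2)                      # random-effects weights
    z_re = np.sum(w_re * z) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    ci = np.tanh([z_re - 1.96 * se, z_re + 1.96 * se])  # back-transform 95% CI
    return np.tanh(z_re), ci

# Hypothetical study-level data, not values from the published meta-analysis.
r_pooled, ci = pool_correlations_random_effects(r=[0.35, 0.42, 0.55], n=[120, 90, 200])
print(f"pooled r = {r_pooled:.2f}, 95% CI = [{ci[0]:.2f}, {ci[1]:.2f}]")
```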


Academic Medicine | 2014

The Reliability, Validity, and Feasibility of Multisource Feedback Physician Assessment: A Systematic Review

Tyrone Donnon; Ahmed Al Ansari; Samah Al Alawi; Claudio Violato

Purpose: The use of multisource feedback (MSF), or 360-degree evaluation, has become a recognized method of assessing physician performance in practice. The purpose of the present systematic review was to investigate the reliability, generalizability, validity, and feasibility of MSF for the assessment of physicians. Method: The authors searched the EMBASE, PsycINFO, MEDLINE, PubMed, and CINAHL databases for peer-reviewed, English-language articles published from 1975 to January 2013. Studies were included if they met the following inclusion criteria: used one or more MSF instruments to assess physician performance in practice; reported psychometric evidence for the instrument(s) in the form of reliability or generalizability coefficients and construct or criterion-related validity; and provided information regarding the administration or feasibility of the process of collecting the feedback data. Results: Of the 96 full-text articles assessed for eligibility, 43 articles were included. The use of MSF has been shown to be an effective method for providing feedback to physicians from a multitude of specialties about their clinical and nonclinical (i.e., professionalism, communication, interpersonal relationships, management) performance. In general, assessment of physician performance was based on the completion of the MSF instruments by 8 medical colleagues, 8 coworkers, and 25 patients to achieve adequate reliability and generalizability coefficients of α ≥ 0.90 and Ep² ≥ 0.80, respectively. Conclusions: The use of MSF employing medical colleagues, coworkers, and patients as a method to assess physicians in practice has been shown to have high reliability, validity, and feasibility.
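
The rater numbers reported in the results (8 medical colleagues, 8 coworkers, and 25 patients) reflect the general principle that the reliability of an averaged rating rises with the number of raters. A minimal sketch of that arithmetic, using the Spearman-Brown prophecy formula with assumed single-rater reliabilities (the 0.50, 0.45, and 0.15 values are hypothetical and are not reported in the review):

```python
def spearman_brown(single_rater_reliability: float, n_raters: int) -> float:
    """Projected reliability of the mean of n_raters ratings (Spearman-Brown)."""
    r = single_rater_reliability
    return (n_raters * r) / (1 + (n_raters - 1) * r)

# Hypothetical single-rater reliabilities; the review does not report these values.
for label, r1, n in [("medical colleagues", 0.50, 8), ("coworkers", 0.45, 8), ("patients", 0.15, 25)]:
    print(f"{label:18s} n={n:2d}  projected reliability = {spearman_brown(r1, n):.2f}")
```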


Simulation in Healthcare: Journal of the Society for Simulation in Healthcare | 2012

Undergraduate students' perceptions of and attitudes toward a simulation-based interprofessional curriculum: the KidSIM ATTITUDES questionnaire.

Elaine Sigalet; Tyrone Donnon; Vincent Grant

Introduction: Existing attitude scales on interprofessional education (IPE) focus on students' attitudes toward the concepts of teamwork and opportunities for IPE but fail to examine student perceptions of the learning modality, which also plays an important role in the teaching and learning process. The purpose of the present study was to test the psychometric characteristics of the KidSIM Attitude Towards Teamwork in Training Undergoing Designed Educational Simulation (ATTITUDES) questionnaire, developed to measure student perceptions of and attitudes toward IPE, teamwork, and simulation as a learning modality. Methods: A total of 196 medical, nursing, and respiratory therapy students received a 3-hour IPE curriculum module that focused on 2 simulation-based team training scenarios in emergency and intensive care unit settings. Each multiprofessional group of students completed the 30-item ATTITUDES questionnaire before participating in the IPE curriculum and the same questionnaire again as a posttest on completion of the high-fidelity simulation, team-based learning sessions. Results: The internal reliability of the ATTITUDES questionnaire was α = 0.95. The factor analysis supports a 5-factor solution accounting for 61.6% of the variance: communication (8 items), relevance of IPE (7 items), relevance of simulation (5 items), roles and responsibilities (6 items), and situation awareness (4 items). Aggregated and profession-specific analysis of students' responses using paired sample t tests showed significant differences from the pretest to the posttest for all questionnaire items and subscale measures (P < 0.001). Conclusions: The KidSIM ATTITUDES questionnaire provides a reliable and construct-valid measure of student perceptions of and attitudes toward IPE, teamwork, and simulation as a learning modality.
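
As a rough illustration of the two statistics reported in the results (internal consistency via Cronbach's alpha and pretest-to-posttest change via a paired-samples t test), the following sketch runs both on simulated 30-item Likert data for 196 respondents; the data and resulting values are invented and only show the form of the calculation.

```python
import numpy as np
from scipy import stats

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
# Simulated 5-point Likert responses: 196 students x 30 items, pre and post (illustrative only).
latent = rng.normal(size=(196, 1))
pre = np.clip(np.round(3 + latent + rng.normal(scale=0.7, size=(196, 30))), 1, 5)
post = np.clip(pre + rng.normal(loc=0.4, scale=0.5, size=pre.shape), 1, 5)

print(f"Cronbach's alpha (pretest): {cronbach_alpha(pre):.2f}")
t, p = stats.ttest_rel(post.mean(axis=1), pre.mean(axis=1))  # paired-samples t test on scale means
print(f"paired t = {t:.2f}, p = {p:.3g}")
```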


European Journal of Radiology | 2012

A meta-analysis of common risk factors associated with the diagnosis of developmental dysplasia of the hip in newborns

Clara L. Ortiz-Neira; Elizabeth Oddone Paolucci; Tyrone Donnon

Background: Although there is no clear consensus about the process of screening for developmental dysplasia of the hip (DDH), there are six common risk factors associated with DDH in patients less than or equal to 6 months of age (breech presentation, sex, family history, first-born status, side of hip, and mode of delivery). Methods: A meta-analysis of published studies was conducted to identify the relative risk ratio for each of the six commonly known risk factors. A total of 31 primary studies consisting of 20,196 DDH patients met the following inclusion criteria: (1) contained empirical data on at least one common risk factor, (2) were peer reviewed and published in an English-language scientific journal, (3) included patients less than or equal to 6 months of age, and (4) identified the method of diagnosis (e.g., ultrasound, radiographs, or clinical examination). Results: Fixed effect and random effects models with 95% confidence intervals were calculated for each of the six risk factors. The reported relative risk ratio (RR) for each factor in newborns was: breech presentation 3.75 (95% CI: 2.25–6.24), females 2.54 (95% CI: 2.11–3.05), left hip side 1.54 (95% CI: 1.25–1.90), first born 1.44 (95% CI: 1.12–1.86), and family history 1.39 (95% CI: 1.23–1.57). A non-significant RR value of 1.22 (95% CI: 0.46–3.23) was found for mode of delivery. Conclusion: The results suggest that ultrasound and radiology screening methods be used to confirm DDH in newborns who present with one or a combination of the following common risk factors: breech presentation, female sex, left hip affected, first-born status, and family history of DDH.
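
The abstract reports pooled relative risk ratios with 95% confidence intervals. A minimal sketch of the standard approach, inverse-variance weighting of log relative risks (shown here as a fixed-effect pool), using hypothetical 2x2 counts rather than data from the 31 primary studies:

```python
import numpy as np

def pool_log_rr_fixed(events_exposed, n_exposed, events_control, n_control):
    """Fixed-effect (inverse-variance) pooling of relative risks on the log scale."""
    a, n1 = np.asarray(events_exposed, float), np.asarray(n_exposed, float)
    c, n0 = np.asarray(events_control, float), np.asarray(n_control, float)
    log_rr = np.log((a / n1) / (c / n0))
    var = 1 / a - 1 / n1 + 1 / c - 1 / n0        # variance of log RR per study
    w = 1 / var
    pooled = np.sum(w * log_rr) / np.sum(w)
    se = np.sqrt(1 / np.sum(w))
    lo, hi = np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se)
    return np.exp(pooled), (lo, hi)

# Hypothetical 2x2 counts for three studies of one risk factor (not the published data).
rr, (lo, hi) = pool_log_rr_fixed([12, 30, 8], [400, 900, 250], [20, 45, 15], [1600, 2700, 800])
print(f"pooled RR = {rr:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```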


Psychological Reports | 2007

A psychometric assessment of the self-reported Youth Resiliency: Assessing Developmental Strengths questionnaire.

Tyrone Donnon; Wayne Hammond

As opposed to the problem-based approach of dealing with specific at-risk behaviors, the objective of the self-reported Youth Resiliency: Assessing Developmental Strengths questionnaire is to provide a statistically sound and research-based approach to understanding the factors that contribute to the development of adolescent resiliency. The study of protective factors, or the more recent attempts at conceptualizing the phenomenon of individual resiliency, has been prevalent in social and health sciences research for decades. In this study, the psychometric characteristics of the Youth Resiliency questionnaire, based on a large urban sample of adolescents in Grades 7 to 9 (N = 2,291), are presented. The findings from this study present a potential framework for understanding the construct and function of resiliency as it pertains to both the extrinsic and intrinsic factors of adolescent development.


Journal of Veterinary Medical Education | 2009

Assessment of Applicants to the Veterinary Curriculum Using a Multiple Mini-Interview Method

Kent G. Hecker; Tyrone Donnon; Carmen Fuentealba; David Hall; Oscar Illanes; Doug W. Morck; Christoph Muelling

This study describes the development, implementation, and psychometric assessment of the multiple mini-interview (MMI) for the inaugural class of veterinary medicine applicants at the University of Calgary Faculty of Veterinary Medicine (UCVM). The MMI is a series of approximately five to twelve 10-minute interviews built around situational events. Applicants are given a scenario and asked to work through an issue, or are asked behavioral-type questions, each meant to assess one attribute (e.g., empathy) at a time. This structure allows for multiple assessments of the applicants by trained interviewers on the same questions. MMI scenario development was based on a review of the noncognitive attributes currently assessed by the 31 veterinary schools across Canada and the United States and on the goals and objectives of UCVM. The noncognitive attributes of applicants (N = 110) were assessed at five stations, by two interviewers within each station, on three items using a standardized rating form with an anchored 1–5 scale. The method was determined to be reliable (G-coefficient = 0.88) and demonstrated evidence of validity. The MMI score did not correlate with grade-point average (r = 0.12, p = 0.22). While neither the applicants nor the interviewers had participated in an MMI format before, both groups reported the process to be acceptable in a post-interview questionnaire. This analysis provides preliminary evidence of the reliability, validity, and acceptability of the MMI in assessing the noncognitive attributes of applicants for veterinary medical school admissions.


Academic Medicine | 2013

Development of a team performance scale to assess undergraduate health professionals.

Sigalet E; Tyrone Donnon; Adam Cheng; Cooke S; Robinson T; Bissett W; Grant

Purpose: Interprofessional simulation-based team training is strongly endorsed as a potential solution for improving teamwork in health care delivery. Unfortunately, there are few teamwork evaluation instruments. The present study developed and tested the psychometric characteristics of the newly developed KidSIM Team Performance Scale checklist. Method: A quasi-experimental research design engaging a convenience sample of 196 undergraduate medical, nursing, and respiratory therapy students was completed in the 2010–2011 academic year. Multidisciplinary student teams participated in a simulation-based curriculum that included the completion of two acute illness management scenarios, resulting in 282 independent reviews by evaluators from medicine, nursing, and respiratory therapy. The authors investigated the underlying factors of the performance checklist and examined the performance scores of an experimental and a control team-training-curriculum group. Results: Participation in the supplemental team training curriculum was related to higher team performance scores (P < .001). All teams at Time 2 achieved higher scores than at Time 1 (P < .05). The reliability coefficient for the total performance scale was α = 0.90. Factor analysis supported a three-factor solution (accounting for 67.9% of the variance) with an emphasis on roles and responsibilities (five items) and communication (six items) subscale factors. Conclusions: When simulation is used in acute illness management training, the KidSIM Team Performance Scale provides reliable, valid score interpretation of undergraduates' team process based on communication effectiveness and identification of roles and responsibilities. Implementation of a supplementary team training curriculum significantly enhances students' performance in multidisciplinary simulation-based scenarios at the undergraduate level.


Academic Medicine | 2013

The Construct and Criterion Validity of the Mini-CEX: A Meta-Analysis of the Published Research

Ahmed Al Ansari; Syeda Kauser Ali; Tyrone Donnon

Purpose: To conduct a meta-analysis of published studies to determine the construct and criterion validity of the mini-clinical evaluation exercise (mini-CEX) as a measure of clinical performance. Method: The authors included all peer-reviewed studies published from 1995 to 2012 that reported the relationship between participants' performance on the mini-CEX and on other standardized academic and clinical performance measures. Moderator variables and performance and standardized exam measures were extracted and reviewed independently using a standardized coding protocol. Results: Performance measures from 11 studies were identified. A random-effects model of weighted mean effect size differences (d) resulted in: (1) construct validity coefficients for the mini-CEX on trainees' performance across different residency year levels ranging from d = 0.25 (95% confidence interval [CI]: 0.04–0.46) to d = 0.50 (95% CI: 0.31–0.70), and (2) concurrent validity coefficients for the mini-CEX based on personnel ratings ranging from d = 0.23 (95% CI: 0.04–0.50) to d = 0.50 (95% CI: 0.34–0.65). Also, a random-effects model of weighted correlation effect size differences (r) resulted in predictive validity coefficients for the mini-CEX on trainees' performance across different standardized measures ranging from r = 0.26 (95% CI: 0.16–0.35) to r = 0.85 (95% CI: 0.47–0.96). Conclusions: The construct and criterion validity of the mini-CEX were supported by small to large effect size differences between measures of trainees' achievement and clinical skills performance, indicating that it is an important instrument for the direct observation of trainees' clinical performance.
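
The abstract expresses its results as standardized mean differences (d) with 95% confidence intervals. As a rough illustration of how a single study-level d and its interval are typically computed before pooling (the group means, standard deviations, and sample sizes below are hypothetical, not taken from the included studies):

```python
import math

def cohens_d_with_ci(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d between two groups with an approximate 95% confidence interval."""
    sd_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / sd_pooled
    # Large-sample approximation to the variance of d.
    var_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    se = math.sqrt(var_d)
    return d, (d - 1.96 * se, d + 1.96 * se)

# Hypothetical mini-CEX-style ratings for senior vs. junior trainees (not published data).
d, (lo, hi) = cohens_d_with_ci(mean1=7.1, sd1=1.0, n1=60, mean2=6.6, sd2=1.1, n2=55)
print(f"d = {d:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```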


BMC Medical Education | 2008

The need for national medical licensing examination in Saudi Arabia

Sohail Bajammal; Rania Zaini; Wesam Abuznadah; Mohammad Al-Rukban; Syed Moyn Aly; Abdulaziz Boker; Abdulmohsen Al-Zalabani; Mohammad Al-Omran; Amro Al-Habib; Mona Hmoud AlSheikh; Mohammad Al-Sultan; Nadia M. Fida; Khalid Alzahrani; Bashir Hamad; Mohammad Yahya Al Shehri; Khalid A. Bin Abdulrahman; Saleh Al-Damegh; Mansour M. Al-Nozha; Tyrone Donnon

Background: Medical education in Saudi Arabia is facing multiple challenges, including the rapid increase in the number of medical schools over a short period of time, the influx of foreign medical graduates to work in Saudi Arabia, the award of scholarships to hundreds of students to study medicine in various countries, and the absence of published national guidelines for the minimal acceptable competencies of a medical graduate. Discussion: We argue for the need for a Saudi national medical licensing examination that consists of two parts: Part I (Written), which tests basic science and clinical knowledge, and Part II (Objective Structured Clinical Examination), which tests clinical skills and attitudes. We propose that this examination be mandated as a licensure requirement for practicing medicine in Saudi Arabia. Conclusion: The driving and hindering forces, as well as the strengths and weaknesses of implementing the licensing examination, are discussed in detail in this debate.


Medical Teacher | 2010

Student and teaching characteristics related to ratings of instruction in medical sciences graduate programs

Tyrone Donnon; Hilary Delver; Tanya N. Beran

Background: Although the validity of students' ratings of instruction has been documented, several student and course characteristics may be related to the ratings students give their instructors. Aims: The purpose of this study was to examine student ratings obtained from the Universal Student Ratings of Instruction (USRI) instrument. These responses were compared to various student characteristics, and the teaching characteristics most closely associated with the ratings were determined. Method: A total of 1738 USRI forms were completed by graduate students enrolled in medical science courses from 1999 to 2006 in the Faculty of Medicine at a Canadian university. Results: Between-group comparisons showed that negative student perceptions about the course (i.e., not having had the freedom to select it), perceiving the course workload as high, and holding low grade expectations were related to negative student ratings of the overall quality of instruction. In terms of the student and teaching characteristics, the organization of course material and students' perceptions of whether they learned a lot in the course were most closely related to global ratings of instructional quality. Conclusion: Implications for teaching focus on improving the organization and delivery of course content to meet the learning objectives of graduate students in the medical sciences.
