
Publication


Featured research published by Harold I. Reiter.


Medical Education | 2004

An admissions OSCE: the multiple mini‐interview

Kevin W. Eva; Jack Rosenfeld; Harold I. Reiter; Geoffrey R. Norman

Context  Although health sciences programmes continue to value non‐cognitive variables such as interpersonal skills and professionalism, it is not clear that current admissions tools like the personal interview are capable of assessing ability in these domains. Hypothesising that many of the problems with the personal interview might be explained, at least in part, by it being yet another measurement tool that is plagued by context specificity, we have attempted to develop a multiple sample approach to the personal interview.


Medical Education | 2009

Predictive validity of the multiple mini-interview for selecting medical trainees.

Kevin W. Eva; Harold I. Reiter; Kien Trinh; Parveen Wasi; Jack Rosenfeld; Geoffrey R. Norman

Introduction  In this paper we report on further tests of the validity of the multiple mini‐interview (MMI) selection process, comparing MMI scores with those achieved on a national high‐stakes clinical skills examination. We also continue to explore the stability of candidate performance and the extent to which so‐called ‘cognitive’ and ‘non‐cognitive’ qualities should be deemed independent of one another.


Academic Medicine | 2004

The ability of the multiple mini-interview to predict preclerkship performance in medical school.

Kevin W. Eva; Harold I. Reiter; Jack Rosenfeld; Geoffrey R. Norman

Problem Statement and Background. One of the greatest challenges continuing to face medical educators is the development of an admissions protocol that provides valid information pertaining to the noncognitive qualities candidates possess. An innovative protocol, the Multiple Mini-Interview (MMI), has recently been shown to be feasible, acceptable, and reliable. This article presents a first assessment of the technique's validity. Method. Forty-five candidates to the Undergraduate MD program at McMaster University participated in an MMI in Spring 2002 and enrolled in the program the following autumn. Performance on this tool and on the traditional protocol was compared to performance on preclerkship evaluation exercises. Results. The MMI was the best predictor of objective structured clinical examination performance, and grade point average was the most consistent predictor of performance on multiple-choice question examinations of medical knowledge. Conclusions. While further validity testing is required, the MMI appears better able to predict preclerkship performance than traditional tools designed to assess the noncognitive qualities of applicants.


Medical Education | 2007

Multiple mini‐interviews predict clerkship and licensing examination performance

Harold I. Reiter; Kevin W. Eva; Jack Rosenfeld; Geoffrey R. Norman

Objective  The Multiple Mini‐Interview (MMI) has previously been shown to have a positive correlation with early medical school performance. Data have matured to allow comparison with clerkship evaluations and national licensing examinations.


Academic Medicine | 2004

The relationship between interviewers' characteristics and ratings assigned during a multiple mini-interview.

Kevin W. Eva; Harold I. Reiter; Jack Rosenfeld; Geoffrey R. Norman

Purpose. To assess the consistency of ratings assigned by health sciences faculty members relative to community members during an innovative admissions protocol called the Multiple Mini-Interview (MMI). Method. A nine-station MMI was created and 54 candidates to an undergraduate MD program participated in the exercise in Spring 2003. Three stations were staffed with a pair of faculty members, three with a pair of community members, and three with one member of each group. Raters completed a four-item evaluation form. All participants completed post-MMI questionnaires. Generalizability Theory was used to examine the consistency of the ratings provided within each of these three subgroups. Results. The overall test reliability was found to be .78, and a Decision Study suggested that admissions committees should distribute their resources by increasing the number of interviews to which candidates are exposed rather than increasing the number of interviewers within each interview. Divergence of ratings was greatest within mixed pairings of a community member with a faculty member and least within pairings of two community members. Participants responded positively to the MMI. Conclusion. The MMI provides a reliable protocol for assessing the personal qualities of candidates by accounting for context specificity with a multiple sampling approach. Increasing the heterogeneity of interviewers may increase the heterogeneity of the accepted group of candidates. Further work will determine the extent to which different groups of raters provide equally valid (albeit different) judgments.
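The Decision Study's recommendation (more stations rather than more raters per station) reflects how reliability grows when more independent samples of behaviour are averaged. A minimal sketch of the standard Spearman-Brown projection, with a purely hypothetical single-station reliability rather than the study's actual variance components:

```python
def spearman_brown(rel_one, k):
    """Projected reliability when the number of independent
    measurements (e.g. MMI stations) is multiplied by k."""
    return k * rel_one / (1 + (k - 1) * rel_one)

# Hypothetical single-station reliability of 0.28:
# nine stations project to roughly 0.78.
nine = spearman_brown(0.28, 9)
```

Adding raters within a station averages away rater disagreement but samples only one context, so under context specificity the station facet dominates and extra stations pay off more.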


Medical Education | 2006

The effect of defined violations of test security on admissions outcomes using multiple mini‐interviews

Harold I. Reiter; Penny Salvatori; Jack Rosenfeld; Kien Trinh; Kevin W. Eva

Introduction  Heterogeneous results exist regarding the impact of security violations on student performances in objective structured clinical examinations (OSCEs). Three separate studies investigate whether anticipated security violations result in undesirable enhancement of MMI performance ratings.


Advances in Health Sciences Education | 2010

Non-association between Neo-5 personality tests and multiple mini-interview

Kulamakan Kulasegaram; Harold I. Reiter; Willi H. Wiesner; Rick D. Hackett; Geoffrey R. Norman

Most medical schools attempt to select applicants on the basis of cognitive and non-cognitive skills. Typically, interpersonal skills are assessed by interview, though relatively few applicants make it to interview. Thus, an efficient paper-and-pencil test of non-cognitive skills is needed. One possibility is personality tests. Tests of the five-factor model of personality, and in particular the factor of conscientiousness, have proven effective in predicting future job performance. Can they serve as a screen for admissions interviews? In particular, correlation with the multiple mini-interview (MMI) is of interest since the latter is a well-validated test of non-cognitive skills. A total of 152 applicants to the Michael G. DeGroote School of Medicine at McMaster completed the Neo-5 personality test voluntarily in advance of their admissions interviews. Correlations were calculated between personality factors and grade point average (GPA), the Medical College Admission Test (MCAT), and the MMI. No statistically significant correlation was found between personality factors and cognitive (GPA, MCAT) measures. More surprisingly, no statistically significant correlation was found between personality factors, including conscientiousness, and the MMI. Personality testing is not a useful screening test for the MMI.


Academic Medicine | 2006

Medical school admissions: enhancing the reliability and validity of an autobiographical screening tool.

Kelly L. Dore; Mark D. Hanson; Harold I. Reiter; Melanie Blanchard; Karen Deeth; Kevin W. Eva

Background Most medical school applicants are screened out preinterview. Some cognitive scores available preinterview and some noncognitive scores available at interview demonstrate reasonable reliability and predictive validity. A reliable preinterview noncognitive measure would relax dependence upon screening based entirely on cognitive measures. Method In 2005, applicants interviewing at McMaster University's Michael G. DeGroote School of Medicine completed an offsite, noninvigilated Autobiographical Submission (ABS) before the interview and a second onsite, invigilated ABS at the interview. Traditional and new ABS scoring methods were compared, with raters either evaluating all ABS questions for each candidate in turn (vertical scoring, the traditional method) or evaluating all candidates for each question in turn (horizontal scoring, the new method). Results The new scoring method revealed lower internal consistency and higher interrater reliability relative to the traditional method. More importantly, the new scoring method correlated better with the Multiple Mini-Interview (MMI) than the traditional method. Conclusions The new ABS scoring method revealed greater interrater reliability and predictive capacity, thus increasing its potential as a screen for noncognitive characteristics.


Academic Medicine | 2009

Extending the interview to all medical school candidates--Computer-Based Multiple Sample Evaluation of Noncognitive Skills (CMSENS).

Kelly L. Dore; Harold I. Reiter; Kevin W. Eva; Sharyn Krueger; Edward Scriven; Eric Siu; Shannon Hilsden; Jennifer Thomas; Geoffrey R. Norman

Background Most medical school candidates are excluded without benefit of noncognitive skills assessment. Is development of a noncognitive preinterview screening test that correlates with the well-validated Multiple Mini-Interview (MMI) possible? Method Study 1: 110 medical school candidates completed the MMI and the Computer-based Multiple Sample Evaluation of Noncognitive Skills (CMSENS)—eight 1-minute video-based scenarios and four self-descriptive questions, with short-answer-response format. Seventy-eight responses were audiotaped and 32 typewritten; all were scored by two independent raters. Study 2: 167 candidates completed the CMSENS—eight videos, six self-descriptive questions, typewritten responses only, scored by two raters; 88 of the 167 underwent the MMI. Results Results for overall test generalizability, interrater reliability, and correlation with the MMI, respectively, were, for Study 1, audio-responders: 0.86, 0.82, 0.15; typewritten-responders: 0.72, 0.81, 0.51; and for Study 2, 0.83, 0.95, 0.46 (the disattenuated correlation was 0.60). Conclusions Strong psychometric properties, including MMI correlation, of the CMSENS warrant investigation into future widespread implementation as a preinterview noncognitive screening test.
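The disattenuated figure reported above reflects the classical correction for attenuation, which divides an observed correlation by the geometric mean of the two measures' reliabilities. A sketch with illustrative inputs (the reliabilities below are placeholders, not the study's exact values):

```python
def disattenuate(r_observed, rel_x, rel_y):
    """Classical correction for attenuation: estimated correlation
    between true scores, given each measure's reliability."""
    return r_observed / (rel_x * rel_y) ** 0.5

# Placeholder reliabilities chosen for illustration only.
r_true = disattenuate(0.46, 0.83, 0.71)  # roughly 0.60
```

The corrected value is always at least as large as the observed one, since measurement error can only dilute an observed correlation.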


Academic Medicine | 2015

The Effect of Differential Weighting of Academics, Experiences, and Competencies Measured by Multiple Mini Interview (MMI) on Race and Ethnicity of Cohorts Accepted to One Medical School

Carol A. Terregino; Meghan McConnell; Harold I. Reiter

Purpose To examine whether academic scores, experience scores, and Multiple Mini Interview (MMI) core personal competencies scores vary across applicants’ self-reported ethnicities, and whether changes in weighting of scores would alter the proportion of ethnicities underrepresented in medicine (URIM) in the entering class composition. Method This study analyzed retrospective data from 1,339 applicants to the Rutgers Robert Wood Johnson Medical School interviewed for entering classes 2011–2013. Data analyzed included two academic scores—grade point average (GPA) and Medical College Admission Test (MCAT)—service/clinical/research (SCR) scores, and MMI scores. Independent-samples t tests evaluated whether URIM ethnicities differed from non-URIM across GPA, MCAT, SCR, and MMI scores. A series of “what-if” analyses were conducted to determine whether alternative weighting methods would have changed final admissions decisions and entering class composition. Results URIM applicants had significantly lower GPAs (P < .001), MCATs (P < .001), and SCR scores (P < .001). However, this pattern was not found with MMI score (non-URIM 10.4 [1.6], URIM 10.4 [1.3], P = .55). Alternative weighting analyses showed that including academic/experiential scores affects the percentage of URIM acceptances: the URIM acceptance rate declined from 57% (100% MMI) to 43% (10% GPA/10% MCAT/10% SCR/70% MMI), to 39% (30% GPA/70% MMI), and to as low as 22% (50% MCAT/50% MMI). Conclusions Sole reliance on the MMI for final admissions decisions, after a threshold of academic/experiential preparation is met, promotes diversity within the accepted applicant pool; weighting of “the numbers” or of what is written in the application may decrease the acceptance of URIM applicants.
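The "what-if" analyses amount to re-ranking the same applicant pool under different weighted composites of standardized scores. A minimal sketch of that procedure (function names, weights, and the tiny pool below are illustrative, not the school's actual formula or data):

```python
from statistics import mean, pstdev

def standardize(xs):
    """z-score a list so differently scaled measures are comparable."""
    m, s = mean(xs), pstdev(xs)
    return [(x - m) / s for x in xs]

def rerank(applicants, weights):
    """Rank applicant indices (best first) by a weighted composite.

    applicants: dict mapping measure name ('gpa', 'mcat', 'scr', 'mmi')
                to a list of raw scores, one entry per applicant.
    weights:    dict mapping a subset of those names to weights.
    """
    z = {name: standardize(col) for name, col in applicants.items()}
    n = len(next(iter(applicants.values())))
    composite = [sum(weights[name] * z[name][i] for name in weights)
                 for i in range(n)]
    return sorted(range(n), key=lambda i: composite[i], reverse=True)

# Illustrative pool of four applicants.
pool = {
    "gpa":  [3.9, 3.5, 3.7, 3.2],
    "mcat": [36, 30, 33, 28],
    "scr":  [2.0, 3.0, 2.5, 3.5],
    "mmi":  [9.0, 11.0, 10.0, 12.0],
}
mmi_only = rerank(pool, {"mmi": 1.0})               # → [3, 1, 2, 0]
blended  = rerank(pool, {"mcat": 0.5, "mmi": 0.5})  # → [3, 0, 2, 1]
```

In this toy pool the equal MCAT/MMI weighting changes who ranks second even though the top applicant is unchanged; applied to a real pool where academic scores differ by group, such shifts are what alter entering-class composition.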

Collaboration


Dive into Harold I. Reiter's collaborations.

Top Co-Authors


Kevin W. Eva

University of British Columbia


B. Strang

Juravinski Cancer Centre
