
Publication


Featured research published by Rick D. Axelson.


Medical Education | 2009

Rater and occasion impacts on the reliability of pre-admission assessments

Rick D. Axelson; Clarence D. Kreiter

Context: Some medical schools have recently replaced the medical school pre-admission interview (MSPI) with the multiple mini-interview (MMI), which utilises objective structured clinical examination (OSCE)-style measurement techniques. Their motivation for doing so stems from the superior reliabilities obtained with the OSCE-style measures. Other institutions, however, are hesitant to embrace the MMI format because of the time and costs involved in restructuring recruitment and admission procedures.


Evaluation & the Health Professions | 2010

Assessing implicit gender bias in Medical Student Performance Evaluations.

Rick D. Axelson; Catherine Solow; Kristi J. Ferguson; Michael B. Cohen

For medical schools, the increasing presence of women makes it especially important that potential sources of gender bias be identified and removed from student evaluation methods. Our study looked for patterns of gender bias in adjective data used to inform our Medical Student Performance Evaluations (MSPEs). Multigroup Confirmatory Factor Analysis (CFA) was used to model the latent structure of the adjectives attributed to students (n = 657) and to test for systematic scoring errors by gender. Gender bias was evident in two areas: (a) women were more likely than comparable men to be described as "compassionate," "sensitive," and "enthusiastic," and (b) men were more likely than comparable women to be seen as "quick learners." The gender gap in "quick learner" attribution grows with increasing student proficiency; men's rate of increase is over twice that of women's. Technical and nontechnical approaches for ameliorating the impact of gender bias on student recommendations are suggested.
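
For readers curious what "testing for systematic scoring errors by gender" looks like computationally, here is a deliberately simplified sketch, not the authors' model: it fits the same one-factor CFA separately to each gender group and prints the estimated loadings for comparison. It assumes the semopy package, and the latent factor name and adjective column names are hypothetical; a true multigroup CFA would additionally impose cross-group equality constraints to test measurement invariance formally.

```python
# Simplified per-group CFA sketch (hypothetical data and column names).
# Requires: pip install semopy pandas numpy
import numpy as np
import pandas as pd
from semopy import Model

# One-factor measurement model: a latent trait loads on four adjective
# indicators. "=~" is the measurement-model operator in semopy syntax.
MODEL_DESC = """
praise =~ compassionate + sensitive + enthusiastic + quick_learner
"""

def compare_groups(data: pd.DataFrame) -> None:
    """Fit the CFA separately to each gender group and print estimates.

    Marked loading differences between groups are the kind of pattern a
    formal multigroup CFA would test by constraining loadings to be equal.
    """
    for gender, group in data.groupby("gender"):
        model = Model(MODEL_DESC)
        model.fit(group.drop(columns="gender"))
        print(f"--- gender = {gender} ---")
        print(model.inspect())  # loadings, variances, standard errors

# Illustration only: simulate adjective ratings driven by one latent trait.
rng = np.random.default_rng(1)
n = 657
latent = rng.normal(size=n)
df = pd.DataFrame({
    "compassionate": latent + rng.normal(scale=0.8, size=n),
    "sensitive": latent + rng.normal(scale=0.8, size=n),
    "enthusiastic": latent + rng.normal(scale=0.8, size=n),
    "quick_learner": latent + rng.normal(scale=0.8, size=n),
    "gender": rng.choice(["F", "M"], size=n),
})
compare_groups(df)
```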


Teaching and Learning in Medicine | 2013

A Perspective on Medical School Admission Research and Practice Over the Last 25 Years

Clarence D. Kreiter; Rick D. Axelson

Over the last 25 years, a large body of research has investigated how best to select applicants to study medicine. Although these studies have inspired little actual change in admission practice, the implications of this research are substantial. Five areas of inquiry are discussed: (1) the interview and related techniques, (2) admission tests, (3) other measures of personal competencies, (4) the decision process, and (5) defining and measuring the criterion. In each of these areas we summarize consequential developments and discuss their implications for improving practice. (1) The traditional interview has been shown to lack both reliability and validity; alternatives have been developed that display promising measurement characteristics. (2) Admission test scores have been shown to predict academic and clinical performance and are generally the most useful measures obtained about an applicant. (3) Due to the high-stakes nature of the admission decision, it is difficult to support a logical validity argument for the use of personality tests; although standardized letters of recommendation appear to offer some promise, more research is needed. (4) The methods used to make the selection decision should be responsive to validity research on how best to utilize applicant information. (5) Few resources have been invested in obtaining valid criterion measures; future research might profitably focus on composite scores as a method for generating a measure of a physician's career success. A number of social and organizational factors resist evidence-based change. Nevertheless, research over the last 25 years presents important findings that could be used to improve the admission process.


Teaching and Learning in Medicine | 2010

Medical School Preadmission Interviews: Are Structured Interviews More Reliable Than Unstructured Interviews?

Rick D. Axelson; Clarence D. Kreiter; Kristi J. Ferguson; Catherine Solow; Kathi Huebner

Background: The medical education research literature consistently recommends a structured format for the medical school preadmission interview. There is, however, little direct evidence to support this recommendation. Purpose: To shed further light on this issue, the present study examines the respective reliability contributions of the structured and unstructured interview components at the University of Iowa. Methods: We conducted three univariate G studies on ratings from 3,043 interviews and one multivariate G study using responses from 168 applicants who interviewed twice. Results: When we examined both interrater and test–retest reliability, the unstructured format proved more reliable on both counts. Yet combining measures from the two interview formats yielded a more reliable score than using either alone. Conclusions: At least from a reliability perspective, the popular advice regarding interview structure may need to be reconsidered. Issues related to validity, fairness, and reliability should be carefully weighed when designing the interview process.
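
As a companion to the abstract above, the following is a minimal, hypothetical sketch of the kind of single-facet G study computation involved, not the authors' actual analysis: for a balanced rater-nested-within-person (r : p) design, it estimates the person and rater-within-person variance components from the ANOVA mean squares and reports the generalizability (G) coefficient of a score averaged across n' raters. All data and numbers below are simulated.

```python
# Minimal sketch of a single-facet G study for a balanced
# rater-nested-within-person (r : p) design; illustrative only.
# Each of n_p persons is scored by k different raters.
import numpy as np

def g_study_r_nested_p(scores: np.ndarray, n_prime: int = 1) -> dict:
    """Estimate variance components and the G coefficient.

    scores: (n_persons, k) array; row i holds the k ratings of person i.
    n_prime: number of raters averaged over in the decision (D) study.
    """
    n_p, k = scores.shape
    person_means = scores.mean(axis=1)
    grand_mean = scores.mean()

    # ANOVA mean squares for the nested design.
    ms_between = k * np.sum((person_means - grand_mean) ** 2) / (n_p - 1)
    ms_within = np.sum((scores - person_means[:, None]) ** 2) / (n_p * (k - 1))

    # Expected mean squares: E[MS_within] = var_r:p,
    # E[MS_between] = var_r:p + k * var_p.
    var_rp = ms_within
    var_p = max((ms_between - ms_within) / k, 0.0)

    # Generalizability of an average over n_prime raters.
    g_coef = var_p / (var_p + var_rp / n_prime)
    return {"var_person": var_p, "var_rater_in_person": var_rp, "g": g_coef}

# Example: 100 simulated applicants, each rated by 2 raters.
rng = np.random.default_rng(0)
true_ability = rng.normal(0, 1, size=(100, 1))
ratings = true_ability + rng.normal(0, 1, size=(100, 2))
print(g_study_r_nested_p(ratings, n_prime=2))
```

Averaging over more raters (a larger n_prime) shrinks the error term in the denominator, which is one reason a composite of the structured and unstructured scores can be more reliable than either component alone.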


BMC Medical Education | 2016

'I wish someone watched me interview:' medical student insight into observation and feedback as a method for teaching communication skills during the clinical years

Heather Schopper; Marcy E. Rosenbaum; Rick D. Axelson

Background: Experts suggest observation and feedback is a useful tool for teaching and evaluating medical student communication skills during the clinical years. Failing to do this effectively risks contributing to deterioration of students' communication skills during the very educational period in which they are most important. While educators have been queried about their thoughts on this issue, little is known about what this process is like for learners and whether they feel they get educational value from being observed. This study explored student perspectives regarding their experiences with clinical observation and feedback on communication skills. Methods: A total of 125 senior medical students at a U.S. medical school were interviewed about their experiences with observation and feedback. Thematic analysis of interview data identified common themes among student responses. Results: The majority of students reported rarely being observed interviewing, and they reported receiving feedback even less frequently. Students valued having communication skills observed and became more comfortable with observation the more it occurred. Student-identified challenges included supervisor time constraints and grading based on observation. Most feedback focused on information gathering and was commonly delayed until well after the observed encounter. Conclusions: Eliciting students' perspectives on the effect of observation and feedback on the development of their communication skills brings to light many student-identified obstacles and opportunities to maximize the educational value of observation and feedback for teaching communication, including increasing the number of observations, dissociating observation from numerically scored evaluation, training faculty to give meaningful feedback, and timing observation and feedback earlier in clerkships.


Teaching and Learning in Medicine | 2012

Do Preceptors With More Rating Experience Provide More Reliable Assessments of Medical Student Performance?

Kristi J. Ferguson; Clarence D. Kreiter; Rick D. Axelson

Background: Although the existing psychometric literature provides guidance on the best method for acquiring a reliable clinical evaluation form (CEF)-based score, it also shows that a single CEF rating has very low reliability. Purpose: This study examines whether experience with rating students might act as a form of rater training and hence improve the quality of CEF ratings. Methods: Preceptors were divided into two groups based on rating experience. The univariate and multivariate G study designs used were simple rater (r)-nested-within-person (p) [r : p and r○ : p•] models, and the univariate analysis was applied separately to CEFs completed by more experienced and less experienced raters. Results: The more experienced rater group yielded substantially higher observed reliability in both the univariate and multivariate analyses. Conclusions: These results support the hypothesis that more experienced raters produce more reliable ratings of student performance, and they suggest methods for improving CEF ratings.


Academic Medicine | 2015

Assessing effective teaching: what medical students value when developing evaluation instruments.

Jeffrey E. Pettit; Rick D. Axelson; Kristi J. Ferguson; Marcy E. Rosenbaum

Purpose: To investigate what criteria medical students would value and use in assessing teaching skills. Method: Fourth-year medical students at the University of Iowa Carver College of Medicine enrolled in a teaching elective course are required to design and use an evaluation instrument to assess effective teaching. Each class uses a similar process in developing its instrument. Since the first class in spring 2007, 193 medical students have created 36 different instruments. Three faculty evaluation experts conducted a thematic analysis of the instruments and coded the information according to what was being evaluated and what types of ratings were indicated. The data were submitted to a fourth faculty reviewer, who synthesized the information and adjusted the codes to better capture the data. Common themes and categories were detected. Results: Four themes were identified: content (instructor knowledgeable, teaches at level of learner, practical information), learning environment, teacher personal attributes, and teaching methods. Thirty-two descriptors were distinguished across the 36 instruments; thirteen were present in 50% or more of the instruments. The most common rating systems were Likert scales and open comments. Conclusions: Fourth-year medical students can offer an eclectic resource for evaluating teaching in the classroom and the clinic. Using the descriptors identified in more than 50% of the evaluation instruments will provide effective measures that can be incorporated into medical teacher evaluation instruments.


Proceedings in Obstetrics and Gynecology | 2012

Formative feedback on a patient-based assessment: comparing student perceptions of two feedback methods

Marygrace Elson; Rick D. Axelson

Introduction: Although formative feedback is widely recognized as an essential aid to student learning, there is little evidence regarding effective ways of providing formative feedback on structured clinical exams. This study compares students' perceptions of immediate, face-to-face feedback with delayed, written online faculty feedback on their Obstetrics and Gynecology medical student clerkship patient-based assessment (PBA) at the University of Iowa.


Change: The Magazine of Higher Learning | 2010

Defining Student Engagement.

Rick D. Axelson; Arend Flick


Patient Education and Counseling | 2013

Curricular disconnects in learning communication skills: What and how students learn about communication during clinical clerkships

Marcy E. Rosenbaum; Rick D. Axelson

Collaboration


Dive into Rick D. Axelson's collaborations.

Top Co-Authors (all affiliated with the Roy J. and Lucille A. Carver College of Medicine):

Kristi J. Ferguson
Marcy E. Rosenbaum
Catherine Solow
Donald W. Black
Heather Schopper
Jeffrey E. Pettit
Kathi Huebner