Publications


Featured research published by Kris G. Thomas.


Journal of General Internal Medicine | 2009

Effect of Rater Training on Reliability and Accuracy of Mini-CEX Scores: A Randomized, Controlled Trial

David A. Cook; Denise M. Dupras; Thomas J. Beckman; Kris G. Thomas; V. Shane Pankratz

Background: Mini-CEX scores assess resident competence. Rater training might improve mini-CEX score interrater reliability, but evidence is lacking.

Objective: Evaluate a rater training workshop using interrater reliability and accuracy.

Design: Randomized trial (immediate versus delayed workshop) and single-group pre/post study (randomized groups combined).

Setting: Academic medical center.

Participants: Fifty-two internal medicine clinic preceptors (31 randomized and 21 additional workshop attendees).

Intervention: The workshop included rater error training, performance dimension training, behavioral observation training, and frame of reference training using lecture, video, and facilitated discussion. Delayed group received no intervention until after posttest.

Measurements: Mini-CEX ratings at baseline (just before workshop for workshop group), and four weeks later using videotaped resident–patient encounters; mini-CEX ratings of live resident–patient encounters one year preceding and one year following the workshop; rater confidence using mini-CEX.

Results: Among 31 randomized participants, interrater reliabilities in the delayed group (baseline intraclass correlation coefficient [ICC] 0.43, follow-up 0.53) and workshop group (baseline 0.40, follow-up 0.43) were not significantly different (p = 0.19). Mean ratings were similar at baseline (delayed 4.9 [95% confidence interval 4.6–5.2], workshop 4.8 [4.5–5.1]) and follow-up (delayed 5.4 [5.0–5.7], workshop 5.3 [5.0–5.6]; p = 0.88 for interaction). For the entire cohort, rater confidence (1 = not confident, 6 = very confident) improved from mean (SD) 3.8 (1.4) to 4.4 (1.0), p = 0.018. Interrater reliability for ratings of live encounters (entire cohort) was higher after the workshop (ICC 0.34) than before (ICC 0.18), but the standard error of measurement was similar for both periods.

Conclusions: Rater training did not improve interrater reliability or accuracy of mini-CEX scores.

Clinical trials registration: clinicaltrials.gov identifier NCT00667940.
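One way to square the last two results (a back-of-the-envelope note, not part of the abstract): the standard error of measurement relates to reliability as SEM = SD × sqrt(1 − ICC). With the ICC rising from 0.18 to 0.34, the SEM can only stay similar if the spread of observed ratings also grew, by roughly sqrt(0.82/0.66) ≈ 1.11, that is, about 11%.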


Academic Medicine | 2006

Impact of Self-Assessment Questions and Learning Styles in Web-Based Learning: A Randomized, Controlled, Crossover Trial

David A. Cook; Warren G. Thompson; Kris G. Thomas; Matthew R. Thomas; V. Shane Pankratz

Purpose: To determine the effect of self-assessment questions on learners’ knowledge and format preference in a Web-based course, and investigate associations between learning styles and outcomes. Method: The authors conducted a randomized, controlled, crossover trial in the continuity clinics of the Mayo-Rochester internal medicine residency program during the 2003–04 academic year. Case-based self-assessment questions were added to Web-based modules covering topics in ambulatory internal medicine. Participants completed two modules with questions and two modules without questions, with sequence randomly assigned. Outcomes included knowledge assessed after each module, format preference, and learning style assessed using the Index of Learning Styles. Results: A total of 121 of 146 residents (83%) consented. Residents had higher test scores when using the question format (mean ± standard error, 78.9% ± 1.0) than when using the standard format (76.2% ± 1.0, p = .006). Residents preferring the question format scored higher (79.7% ± 1.1) than those preferring standard (69.5% ± 2.3, p < .001). Learning styles did not affect scores except that visual-verbal “intermediate” learners (80.6% ± 1.4) and visual learners (77.5% ± 1.3) did better than verbal learners (70.9% ± 3.0, p = .003 and p = .033, respectively). Sixty-five of 78 residents (83.3%, 95% CI 73.2–90.8%) preferred the question format. Learning styles were not associated with preference (p > .384). Although the question format took longer than the standard format (60.4 ± 3.6 versus 44.3 ± 3.3 minutes, p < .001), 55 of 77 residents (71.4%, 60.0–81.2%) reported that it was more efficient. Conclusions: Instructional methods that actively engage learners improve learning outcomes. These findings hold implications for both Web-based learning and “traditional” educational activities. Future research, in both Web-based learning and other teaching modalities, should focus on further defining the effectiveness of selected instructional methods in specific learning contexts.


The Prostate | 1997

Effects of the calciotrophic peptides calcitonin and parathyroid hormone on prostate cancer growth and chemotaxis.

Candace K. Ritchie; Kris G. Thomas; Laura R. Andrews; Donald J. Tindall; Lorraine A. Fitzpatrick

The skeleton is the most common site of metastasis in prostate cancer, occurring in 70–80% of patients with prostate carcinoma. Calciotrophic peptides are important in the growth and development of normal bone matrix.


Academic Medicine | 2013

There is no "i" in teamwork in the patient-centered medical home: defining teamwork competencies for academic practice.

Emily Leasure; Ronald R. Jones; Lauren Meade; Marla I. Sanger; Kris G. Thomas; Virginia P. Tilden; Judith L. Bowen; Eric J. Warm

Evidence suggests that teamwork is essential for safe, reliable practice. Creating health care teams able to function effectively in patient-centered medical homes (PCMHs), practices that organize care around the patient and demonstrate achievement of defined quality care standards, remains challenging. Preparing trainees for practice in interprofessional teams is particularly challenging in academic health centers where health professions curricula are largely siloed. Here, the authors review a well-delineated set of teamwork competencies that are important for high-functioning teams and suggest how these competencies might be useful for interprofessional team training and achievement of PCMH standards. The five competencies are (1) team leadership, the ability to coordinate team members’ activities, ensure appropriate task distribution, evaluate effectiveness, and inspire high-level performance; (2) mutual performance monitoring, the ability to develop a shared understanding among team members regarding intentions, roles, and responsibilities so as to accurately monitor one another’s performance for collective success; (3) backup behavior, the ability to anticipate the needs of other team members and shift responsibilities during times of variable workload; (4) adaptability, the capability of team members to adjust their strategy for completing tasks on the basis of feedback from the work environment; and (5) team orientation, the tendency to prioritize team goals over individual goals, encourage alternative perspectives, and show respect and regard for each team member. Relating each competency to a vignette from an academic primary care clinic, the authors describe potential strategies for improving teamwork learning and applying the teamwork competencies to academic PCMH practices.


Journal of General Internal Medicine | 2007

Validation of a method for assessing resident physicians' quality improvement proposals.

James L. Leenstra; Thomas J. Beckman; Darcy A. Reed; William C. Mundell; Kris G. Thomas; Bryan J. Krajicek; Stephen S. Cha; Joseph C. Kolars; Furman S. McDonald

Background: Residency programs involve trainees in quality improvement (QI) projects to evaluate competency in systems-based practice and practice-based learning and improvement. Valid approaches to assess QI proposals are lacking.

Objective: We developed an instrument for assessing resident QI proposals—the Quality Improvement Proposal Assessment Tool (QIPAT-7)—and determined its validity and reliability.

Design: QIPAT-7 content was initially obtained from a national panel of QI experts. Through an iterative process, the instrument was refined, pilot-tested, and revised.

Participants: Seven raters used the instrument to assess 45 resident QI proposals.

Measurements: Principal factor analysis was used to explore the dimensionality of instrument scores. Cronbach’s alpha and intraclass correlations were calculated to determine internal consistency and interrater reliability, respectively.

Results: QIPAT-7 items comprised a single factor (eigenvalue = 3.4), suggesting a single assessment dimension. Interrater reliability for each item (range 0.79 to 0.93) and internal consistency reliability among the items (Cronbach’s alpha = 0.87) were high.

Conclusions: This method for assessing resident physician QI proposals is supported by content and internal structure validity evidence. QIPAT-7 is a useful tool for assessing resident QI proposals. Future research should determine the reliability of QIPAT-7 scores in other residency and fellowship training programs. Correlations should also be made between assessment scores and criteria for QI proposal success such as implementation of QI proposals, resident scholarly productivity, and improved patient outcomes.
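For readers who want to reproduce the internal-consistency figure reported above, Cronbach's alpha for k items is alpha = k/(k − 1) × (1 − sum of per-item variances / variance of the total score). The sketch below is illustrative only (the function name and the random stand-in data are hypothetical, not from the paper); it assumes ratings arranged as a proposals-by-items matrix:

    import numpy as np

    def cronbach_alpha(ratings):
        # ratings: (n_proposals, n_items) array of item scores
        ratings = np.asarray(ratings, dtype=float)
        k = ratings.shape[1]                          # number of items (7 for QIPAT-7)
        item_var = ratings.var(axis=0, ddof=1)        # sample variance of each item
        total_var = ratings.sum(axis=1).var(ddof=1)   # variance of the summed score
        return (k / (k - 1)) * (1.0 - item_var.sum() / total_var)

    # Illustration with random stand-in data: 45 proposals, 7 items
    rng = np.random.default_rng(0)
    print(cronbach_alpha(rng.integers(1, 6, size=(45, 7))))

Note that the interrater reliabilities quoted in the abstract (intraclass correlations) are a different statistic, computed across raters rather than across items.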


Academic Medicine | 2009

Measuring motivational characteristics of courses: applying Keller's instructional materials motivation survey to a web-based course.

David A. Cook; Thomas J. Beckman; Kris G. Thomas; Warren G. Thompson

Purpose: The Instructional Materials Motivation Survey (IMMS) purports to assess the motivational characteristics of instructional materials or courses using the Attention, Relevance, Confidence, and Satisfaction (ARCS) model of motivation. The IMMS has received little use or study in medical education. The authors sought to evaluate the validity of IMMS scores and compare scores between standard and adaptive Web-based learning modules. Method: During the 2005–2006 academic year, 124 internal medicine residents at the Mayo School of Graduate Medical Education (Rochester, Minnesota) were asked to complete the IMMS for two Web-based learning modules. Participants were randomly assigned to use one module that adapted to their prior knowledge of the topic, and one module using a nonadaptive design. IMMS internal structure was evaluated using Cronbach alpha and interdimension score correlations. Relations to other variables were explored through correlation with global module satisfaction and regression with knowledge scores. Results: Of the 124 eligible participants, 79 (64%) completed the IMMS at least once. Cronbach alpha was ≥0.75 for scores from all IMMS dimensions. Interdimension score correlations ranged from 0.40 to 0.80, whereas correlations between IMMS scores and global satisfaction ratings ranged from 0.40 to 0.63 (P < .001). Knowledge scores were associated with Attention and Relevance subscores (P = .033 and .01, respectively) but not with other IMMS dimensions (P ≥ .07). IMMS scores were similar between module designs (on a five-point scale, differences ranged from 0.0 to 0.15, P ≥ .33). Conclusions: These limited data generally support the validity of IMMS scores. Adaptive and standard Web-based instructional designs were similarly motivating. Cautious use and further study of the IMMS are warranted.


Academic Medicine | 2009

Case-based or non-case-based questions for teaching postgraduate physicians: a randomized crossover trial.

David A. Cook; Warren G. Thompson; Kris G. Thomas

Purpose: The comparative efficacy of case-based (CB) and non-CB self-assessment questions in Web-based instruction is unknown. The authors sought to compare CB and non-CB questions. Method: The authors conducted a randomized crossover trial in the continuity clinics of two academic residency programs. Four Web-based modules on ambulatory medicine were developed in both CB (periodic questions based on patient scenarios) and non-CB (questions matched for content but lacking patient scenarios) formats. Participants completed two modules in each format (sequence randomly assigned). Participants also completed a pretest of applied knowledge for two modules (randomly assigned). Results: For the 130 participating internal medicine and family medicine residents, knowledge scores improved significantly (P < .0001) from pretest (mean: 53.5; SE: 1.1) to posttest (75.1; SE: 0.7). Posttest knowledge scores were similar in CB (75.0; SE: 0.1) and non-CB formats (74.7; SE: 1.1); the 95% CI was −1.6, 2.2 (P = .76). A nearly significant (P = .062) interaction between format and the presence or absence of pretest suggested a differential effect of question format, depending on pretest. Overall, those taking pretests had higher posttest knowledge scores (76.7; SE: 1.1) than did those not taking pretests (73.0; SE: 1.1; 95% CI: 1.7, 5.6; P = .0003). Learners preferred the CB format. Time required was similar (CB: 42.5; SE: 1.8 minutes, non-CB: 40.9; SE: 1.8 minutes; P = .22). Conclusions: Our findings suggest that, among postgraduate physicians, CB and non-CB questions have similar effects on knowledge scores, but learners prefer CB questions. Pretests influence posttest scores.


Medical Education | 2010

Validation of a method to measure resident doctors' reflections on quality improvement

Christopher M. Wittich; Thomas J. Beckman; Monica M. Drefahl; Jayawant N. Mandrekar; Darcy A. Reed; Bryan J. Krajicek; Rudy M. Haddad; Furman S. McDonald; Joseph C. Kolars; Kris G. Thomas

Medical Education 2010: 44: 248–255


Journal of General Internal Medicine | 2009

Alternative Approaches to Ambulatory Training: Internal Medicine Residents’ and Program Directors’ Perspectives

Kris G. Thomas; Colin P. West; Carol Popkave; Lisa M. Bellini; Steven E. Weinberger; Joseph C. Kolars; Jennifer R. Kogan

Background: Internal medicine ambulatory training redesign, including recommendations to increase ambulatory training, is a focus of national discussion. Residents’ and program directors’ perceptions about ambulatory training models are unknown.

Objective: To describe internal medicine residents’ and program directors’ perceptions regarding ambulatory training duration, alternative ambulatory training models, and factors important for ambulatory education.

Design: National cohort study.

Participants: Internal medicine residents (N = 14,941) and program directors (N = 222) who completed the 2007 Internal Medicine In-Training Examination (IM-ITE) Residents Questionnaire or Program Directors Survey, representing 389 US residency programs.

Results: A total of 58.4% of program directors and 43.7% of residents preferred one-third or more training time in outpatient settings. Resident preferences for one-third or more outpatient training increased with higher levels of training (48.3% PGY3), female sex (52.7%), primary care program enrollment (64.8%), and anticipated outpatient-focused career, such as geriatrics. Most program directors (77.3%) and residents (58.4%) preferred training models containing weekly clinic. Although residents and program directors reported problems with competing inpatient-outpatient responsibilities (74.9% and 88.1%, respectively) and felt that absence of conflict with inpatient responsibilities is important for good outpatient training (69.4% and 74.2%, respectively), only 41.6% of residents and 22.7% of program directors supported models eliminating ambulatory sessions during inpatient rotations.

Conclusions: Residents’ and program directors’ preferences for outpatient training differ from recommendations for increased ambulatory training. Discordance was observed between reported problems with conflicting inpatient-outpatient responsibilities and preferences for models maintaining longitudinal clinic during inpatient rotations. Further study regarding benefits and barriers of ambulatory redesign is needed.


Medical Education | 2011

The Motivated Strategies for Learning Questionnaire: score validity among medicine residents

David A. Cook; Warren G. Thompson; Kris G. Thomas

Medical Education 2011: 45: 1230–1240

Collaboration


Dive into Kris G. Thomas's collaborations.

Top Co-Authors

Furman S. McDonald, American Board of Internal Medicine

Eric J. Warm, University of Cincinnati Academic Health Center