
Publications


Featured research published by Michael J. Peeters.


The American Journal of Pharmaceutical Education | 2011

An active-learning strategies primer for achieving ability-based educational outcomes

Brenda L. Gleason; Michael J. Peeters; Beth H. Resman-Targoff; Samantha Karr; Sarah McBane; Kristi W. Kelley; Tyan Thomas; Tina Harrach Denetclaw

Active learning is an important component of pharmacy education. Engaging students in the learning process helps them better apply the knowledge they gain. This paper describes evidence supporting the use of active-learning strategies in pharmacy education and offers strategies for implementing active learning in pharmacy curricula, both in the classroom and during pharmacy practice experiences.


Annals of Noninvasive Electrocardiology | 2006

Effects of three fluoroquinolones on QT analysis after standard treatment courses

James P. Tsikouris; Michael J. Peeters; Craig D. Cox; Gary Meyerrose; Charles F. Seifert

Background: Fluoroquinolone (FQ) agents have been speculated to influence the risk of Torsades de pointes (TdP). Methods of evaluating this risk are varied and unsystematic. Prolongation of the rate-corrected QT interval (QTc) is the most commonly used marker of TdP, but it has questionable utility. QT dispersion (QTd) may be a more selective marker of TdP. No assessment of QTd for FQs has been reported. The current study evaluates the effects of three commonly prescribed FQs by comprehensive QT analysis.
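
The abstract does not say which conventions the study used, but a common approach is Bazett's rate correction (QTc = QT/sqrt(RR)) and QT dispersion as the longest minus the shortest measurable QT across the 12 ECG leads. A minimal Python sketch under those assumptions, with hypothetical measurements:

# Hypothetical QT metrics; Bazett's correction and max-min dispersion are
# common conventions, not necessarily the ones used in this study.
def qtc_bazett(qt_ms, rr_s):
    """Rate-corrected QT (ms) by Bazett's formula: QTc = QT / sqrt(RR)."""
    return qt_ms / (rr_s ** 0.5)

def qt_dispersion(qt_by_lead_ms):
    """QT dispersion (ms): longest minus shortest QT across ECG leads."""
    return max(qt_by_lead_ms) - min(qt_by_lead_ms)

# Hypothetical 12-lead QT measurements (ms) at an RR interval of 0.8 s:
leads = [396, 402, 410, 388, 394, 400, 405, 391, 398, 403, 386, 399]
print(round(qtc_bazett(max(leads), 0.8)))   # QTc from the longest QT, ~458 ms
print(qt_dispersion(leads))                 # QTd = 24 ms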


The American Journal of Pharmaceutical Education | 2013

Educational testing and validity of conclusions in the scholarship of teaching and learning

Michael J. Peeters; Svetlana A. Beltyukova; Beth A. Martin

Validity, together with its integral evidence of reliability, is fundamental to educational and psychological measurement and to the standards of educational testing. Herein, we describe these standards, along with their subtypes, including internal consistency, inter-rater reliability, and inter-rater agreement. Next, the related issues of measurement error and effect size are discussed. This article concludes with a call for future authors to improve the reporting of psychometrics and practical significance with educational testing in the pharmacy education literature. By increasing the scientific rigor of educational research and reporting, the overall quality and meaningfulness of the scholarship of teaching and learning (SoTL) will be improved.
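
As a concrete illustration of one reliability subtype named above, internal consistency is most often reported as Cronbach's alpha, computed from the item variances and the variance of examinees' total scores. A minimal sketch with hypothetical item scores (not data from the article):

import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha: (k/(k-1)) * (1 - sum of item variances / total-score variance)."""
    scores = np.asarray(scores, dtype=float)   # rows = examinees, columns = items
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical scores: 5 examinees x 4 items
x = [[3, 4, 3, 4], [2, 2, 3, 2], [4, 4, 4, 5], [3, 3, 2, 3], [4, 5, 4, 4]]
print(round(cronbach_alpha(x), 2))   # -> 0.92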


The American Journal of Pharmaceutical Education | 2010

A standardized rubric to evaluate student presentations

Michael J. Peeters; Eric G. Sahloff; Gregory E. Stone

Objective. To design, implement, and assess a rubric to evaluate student presentations in a capstone doctor of pharmacy (PharmD) course. Design. A 20-item rubric was designed and used to evaluate student presentations in a capstone fourth-year course in 2007–2008, and then revised and expanded to 25 items and used to evaluate student presentations for the same course in 2008–2009. Two faculty members evaluated each presentation. Assessment. The Many-Facets Rasch Model (MFRM) was used to determine the rubric's reliability, quantify the contribution of evaluator harshness/leniency in scoring, and assess grading validity by comparing the current grading method with a criterion-referenced grading scheme. In 2007–2008, rubric reliability was 0.98, with a separation of 7.1 and 4 rating scale categories. In 2008–2009, MFRM analysis suggested 2 of 98 grades be adjusted to eliminate evaluator leniency, while a further criterion-referenced MFRM analysis suggested 10 of 98 grades should be adjusted. Conclusion. The evaluation rubric was reliable and evaluator leniency appeared minimal. However, a criterion-referenced re-analysis suggested a need for further revisions to the rubric and evaluation process.
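
In Rasch measurement, reliability and the separation index are linked by a standard identity, R = G^2 / (1 + G^2), so the two figures reported above can be cross-checked. A one-line Python check (the identity is standard Rasch psychometrics, not taken from the article itself):

# Standard Rasch identity: reliability R = G^2 / (1 + G^2), where G is separation.
def reliability_from_separation(g):
    return g ** 2 / (1 + g ** 2)

print(round(reliability_from_separation(7.1), 2))   # -> 0.98, matching the 2007-2008 rubric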


The American Journal of Pharmaceutical Education | 2013

Improving reliability of a residency interview process

Michael J. Peeters; Michelle L. Serres; Todd E. Gundrum

Objective. To improve the reliability and discrimination of a pharmacy resident interview evaluation form, and thereby improve the reliability of the interview process. Methods. In phase 1 of the study, the authors used a Many-Facet Rasch Measurement model to optimize an existing evaluation form for reliability and discrimination. In phase 2, interviewer pairs used the modified evaluation form within 4 separate interview stations. In phase 3, 8 interviewers individually evaluated each candidate in one-on-one interviews. Results. In phase 1, the evaluation form had a reliability of 0.98 with a person separation of 6.56; reproducibly, the form separated applicants into 6 distinct groups. Using that form in phases 2 and 3, the largest source of variation was candidates, with content specificity the next largest. The phase 2 g-coefficient was 0.787, while the confirmatory phase 3 g-coefficient was 0.922. Process reliability improved with more stations despite fewer interviewers per station; the impact of content specificity was greatly reduced with more interview stations. Conclusion. A more reliable, discriminating evaluation form was developed to evaluate candidates during resident interviews, and a process was designed that reduced the impact of content specificity.


Journal of Hospital Medicine | 2009

Assessing the impact of an educational program on decreasing prescribing errors at a university hospital

Michael J. Peeters; Sharrel Pinto

BACKGROUND Several complex and costly interventions reduce medication errors. Little evidence exists on the effectiveness of providing education and feedback to institutional clinicians as a means of reducing errors. OBJECTIVE To determine the impact of a pharmacist-led educational intervention on prescribing errors. DESIGN Prospective, interrupted time series study. SETTING This study was conducted among internal medicine residents at the 320-bed University of Toledo Medical Center. INTERVENTION The educational intervention was conducted during a 6-month period beginning in November 2006. The intervention included an initial hour-long lecture followed by biweekly and then monthly discussions that used timely, institution-specific examples of prescribing errors. MEASUREMENTS Data were collected at 5 time points: month 0 (preintervention period); months 1, 3, and 6 (intervention period); and month 7 (postintervention period). Errors were identified, transcribed, coded, and entered into a database. The primary outcome was the frequency of prescribing errors during each period. A Bonferroni-adjusted chi-square analysis was conducted with an a priori experiment-wise alpha of 0.05. RESULTS A 33% reduction in prescribing errors following the first intervention month and a mean 26% reduction during the study period were observed (P<0.0025). The frequencies of preintervention and postintervention errors did not differ significantly. CONCLUSIONS A straightforward educational intervention reduced prescribing errors during the period of active intervention, but this effect was not sustained. Ongoing communication and education about institution-specific medication errors appear warranted.
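
For readers unfamiliar with the correction, a Bonferroni-adjusted analysis divides the experiment-wise alpha by the number of comparisons made. A minimal Python sketch with hypothetical error counts (the article's raw counts and exact comparison set are not reproduced here):

from scipy.stats import chi2_contingency

# Hypothetical (orders with error, orders without error) per period:
periods = {
    "month 0": (120, 880),   # preintervention
    "month 1": (80, 920),    # first intervention month
    "month 7": (110, 890),   # postintervention
}

comparisons = [("month 0", "month 1"), ("month 0", "month 7")]
alpha = 0.05 / len(comparisons)   # Bonferroni: hold experiment-wise alpha at 0.05
for a, b in comparisons:
    chi2, p, dof, expected = chi2_contingency([periods[a], periods[b]])
    verdict = "significant" if p < alpha else "not significant"
    print(f"{a} vs {b}: chi2 = {chi2:.2f}, p = {p:.4f} ({verdict} at {alpha:.3f})")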


The American Journal of Pharmaceutical Education | 2016

A Mixed-Methods Analysis in Assessing Students’ Professional Development by Applying an Assessment for Learning Approach

Michael J. Peeters; Varun Vaidya

Objective. To describe an approach for assessing the Accreditation Council for Pharmacy Education's (ACPE) doctor of pharmacy (PharmD) Standard 4.4, which focuses on students' professional development. Methods. This investigation used mixed methods with triangulation of qualitative and quantitative data to assess professional development. Qualitative data came from an electronic developmental portfolio of professionalism and ethics, completed by PharmD students during their didactic studies. Quantitative confirmation came from the Defining Issues Test (DIT), an assessment of pharmacists' professional development. Results. Qualitatively, students' reflections described growth through this course series. Quantitatively, the 2015 PharmD class's DIT N2-scores showed positive development overall; the lower 50% had a large initial improvement compared with the upper 50%. The 2016 PharmD class subsequently confirmed these initial improvements and showed further substantial development thereafter. Conclusion. Applying an assessment-for-learning approach, triangulation of qualitative and quantitative assessments confirmed that PharmD students developed professionally during this course series.
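
A sketch of the quantitative piece as described: split a class at its median baseline N2 score and compare mean gains in the lower and upper halves. The scores below are hypothetical, not the classes' data:

import numpy as np

# Hypothetical paired DIT N2 scores (pre, post) for one PharmD class:
pre  = np.array([28, 31, 35, 38, 40, 44, 47, 51, 55, 58], dtype=float)
post = np.array([39, 41, 44, 46, 47, 49, 50, 53, 57, 60], dtype=float)

lower = pre < np.median(pre)   # lower 50% at baseline
gains = post - pre
print("lower-half mean gain:", gains[lower].mean())    # 9.0 -> larger initial improvement
print("upper-half mean gain:", gains[~lower].mean())   # 2.8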


The American Journal of Pharmaceutical Education | 2012

Overcoming Content Specificity in Admission Interviews: The Next Generation?

Michelle L. Serres; Michael J. Peeters

We read with interest the recent articles related to admissions interviews and appreciate that this has been an area of study and publication.1,2 We would, however, like to discuss the differences between the 2 most recent publications on this subject. To our understanding, the multiple mini-interview (MMI) described by Cameron and colleagues represents the next generation of interviewing and an evolution from more traditional interview formats like the one described by Kelsch and colleagues. While not stated directly in the Cameron article, content specificity is an important concern for interviewers, and one the MMI format was designed to address. A single-occurrence traditional interview does not address this concern, even with multiple interviewers. (Admittedly, our college's current interview process is similar to that described by Kelsch.)

Content specificity has been found across assessment types throughout education and is known to limit reliability.3,4 The literature suggests that little can be done to keep it from confounding results. It is, however, a key concept behind the improved reliability of the objective structured clinical examination (OSCE) format over yesteryear's oral clinical examinations; in fact, the MMI is simply an admissions OSCE.5 Because larger numbers of MMI stations generally reduce the unreliability attributable to content specificity, incorporating an MMI appears to be a current best practice for controlling and minimizing this score variability.

We found the same in a recent analysis of our PGY1 program interview candidates. Using generalizability theory, much as others in medical education have, we partitioned the variability of our interview process among its facets. We established 4 separate panels/stations, each consisting of 2 interviewers, and interviewed 24 residency candidates. Candidates accounted for 74% of the variation (ie, the true variance that we want), interview stations for 3.4%, interviewers for 2.5% (ie, inter-rater reliability), and candidate-station interaction (ie, content specificity) for 13.5%, while residual error was 6.6%. Notably, our reliability (g-coefficient) was 0.787 and could have improved to 0.847 with only 1 interviewer and 8 separate interview stations. For comparison with Kelsch, our intraclass correlation was 0.832 and Cronbach alpha was 0.868; that is, we had slightly less inter-rater divergence, though it contributed only minimal variance compared with other sources. Sinking substantial resources into alleviating concerns about inter-rater reliability (ie, training) should be kept in perspective; we did not train our interviewers to use our interview rubric for this event, and others have been more condemning of training.6

While inter-rater congruency and reliability are important and often highly focused-upon areas, the literature shows that content specificity plays a larger role in decreasing the reliability of candidate or participant performance assessment. Along with our own college, we greatly encourage our colleagues across the country to progress toward the MMI; content specificity and true reliability may depend on it.
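
For readers unfamiliar with generalizability theory, the g-coefficient is the ratio of true (candidate) variance to true variance plus relative error, where each error component is divided by the number of conditions it is averaged over. The Python sketch below illustrates the mechanics of such a decision study for a crossed candidates x stations x raters design; it loosely echoes the percentages above but assumes a fully crossed design and treats the percentages as raw variance components, so it will not reproduce the reported 0.787 and 0.847 exactly. It does reproduce the direction of the projection: adding stations helps more than adding raters when content specificity dominates.

# Schematic g-coefficient for a crossed candidates x stations x raters design.
# Variance components below are illustrative, loosely echoing the percentages
# reported above; they are not the authors' raw variance estimates.
var_candidate    = 74.0   # true variance of interest
var_cand_station = 13.5   # candidate x station interaction (content specificity)
var_cand_rater   = 2.5    # candidate x rater interaction (rater disagreement)
var_residual     = 6.6    # unexplained error

def g_coefficient(n_stations, n_raters):
    """Relative error shrinks as each facet is averaged over more conditions."""
    rel_error = (var_cand_station / n_stations
                 + var_cand_rater / n_raters
                 + var_residual / (n_stations * n_raters))
    return var_candidate / (var_candidate + rel_error)

print(round(g_coefficient(4, 2), 3))   # observed design: 4 stations, 2 raters each
print(round(g_coefficient(8, 1), 3))   # projection: 8 stations, 1 rater each (higher g)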


Currents in Pharmacy Teaching and Learning | 2016

Assessing development in critical thinking: One institution’s experience

Michael J. Peeters; Sai H.S. Boddu

OBJECTIVE Enhancing critical and moral thinking are goals of higher education. We sought to examine thinking development within a Doctor of Pharmacy (Pharm.D.) program. METHODS The California Critical Thinking Skills Test (CCTST), the Health Sciences Reasoning Test (HSRT), and the Defining Issues Test (DIT2) were administered to Pharm.D. students over four sessions throughout their didactic studies: P1 Fall, P1 Spring, P2 Spring, and P3 Spring. While the CCTST and HSRT are similar measures of foundational critical thinking, the DIT2 assesses complex moral thinking. Each thinking test was correlated with academic success as measured by undergraduate and graduate grade-point averages (GPAs). RESULTS The CCTST was administered in P1 Fall (20.1 ± 5.0). HSRT means ± S.D. were 22.7 ± 3.5 (P1 Spring), 22.6 ± 4.8 (P2 Spring), and 23.8 ± 4.5 (P3 Spring). After converting P1-CCTST and P2-HSRT scores using user-manual interpretations, there was no difference on paired comparison (P = 0.22, Cohen's d = 0.1). There was a small difference between P1-HSRT and P3-HSRT (P < 0.01, Cohen's d = 0.2). The DIT2, administered at each session, was 40.4 ± 12.6 (P1 Fall), 36.3 ± 13.7 (P1 Spring), 44.9 ± 13.6 (P2 Spring), and 43.4 ± 15.4 (P3 Spring). For the DIT2, both the P1 Fall to P2 Spring and the P1 Spring to P3 Spring comparisons were significant, with small and medium effect sizes (both P < 0.01; Cohen's d = 0.4 and 0.5, respectively). Importantly, multiple HSRT and DIT2 assessments correlated with undergraduate and graduate GPAs. CONCLUSIONS During a Pharm.D. program of study, students developed substantially in moral reasoning though minimally in foundational critical thinking. Both foundational and moral reasoning correlated with academic success. Showing responsiveness to change, the DIT2 appears helpful as a measure of cognitive development for pharmacy education.
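
The paired comparisons above pair each student's scores across sessions; one common effect-size convention for paired data (d_z) divides the mean difference by the standard deviation of the differences. A minimal Python sketch with hypothetical scores (not the study data):

import numpy as np
from scipy.stats import ttest_rel

# Hypothetical paired DIT2 scores for 8 students at two sessions:
t1 = np.array([40.4, 33.0, 52.1, 28.7, 45.5, 38.2, 41.9, 36.6])
t2 = np.array([44.9, 38.5, 55.0, 36.2, 47.1, 43.8, 46.0, 40.3])

t_stat, p = ttest_rel(t2, t1)
diffs = t2 - t1
d = diffs.mean() / diffs.std(ddof=1)   # paired-data convention (d_z); others exist
print(f"paired t = {t_stat:.2f}, p = {p:.4f}, d = {d:.2f}")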


Cardiovascular Drugs and Therapy | 2007

Pharmacogenomics of Renin Angiotensin System Inhibitors in Coronary Artery Disease

James P. Tsikouris; Michael J. Peeters

Renin Angiotensin System (RAS) inhibitors are among the most commonly used medications in coronary artery disease (CAD) and its related syndromes. Unfortunately, significant inter-patient variability in response to these agents seems likely, and the influence of genetic determinants on that variability is of interest. This review summarizes the available RAS inhibitor pharmacogenomic studies that have evaluated RAS polymorphisms, whether to elucidate mechanism via surrogate endpoint measurements or to predict efficacy via clinical outcomes in CAD-related syndromes. Regardless of the endpoint, none of the RAS genotypes conclusively predicts efficacy of RAS inhibitors. In fact, the results of the pharmacogenomic studies were often in direct conflict with one another. The varied results appear due to methodological limitations (e.g., inadequate study power, genotyping error, methods of endpoint measurement), study conceptualization (e.g., overestimating the contribution of a polymorphism to disease, lack of a haplotype approach), and differences between studies (e.g., genotype frequency, study subject characteristics, and the specific medication and dose used). Thus, investigators should consider these methodological limitations to improve upon the current approach to RAS inhibitor pharmacogenomic research in the vast CAD population.

Collaboration


Dive into Michael J. Peeters's collaborations.

Top Co-Authors

Charles F. Seifert

Texas Tech University Health Sciences Center

Craig D. Cox

Texas Tech University Health Sciences Center

Beth A. Martin

University of Wisconsin-Madison
