
Publication


Featured research published by Wendy C. Cox.


North Carolina Medical Journal | 2014

A Renaissance in Pharmacy Education at the University of North Carolina at Chapel Hill

Mary T. Roth; Russell J. Mumper; Scott F. Singleton; Craig R. Lee; Philip T. Rodgers; Wendy C. Cox; Jacqueline E. McLaughlin; Pam Joyner; Robert A. Blouin

The UNC Eshelman School of Pharmacy is transforming its doctor of pharmacy program to emphasize active engagement of students in the classroom, foster scientific inquiry and innovation, and immerse students in patient care early in their education. The admissions process is also being reengineered.


The American Journal of Pharmaceutical Education | 2013

Correlation of the Health Sciences Reasoning Test With Student Admission Variables

Wendy C. Cox; Adam M. Persky; Susan J. Blalock

Objectives. To assess the association between scores on the Health Sciences Reasoning Test (HSRT) and pharmacy student admission variables. Methods. During the student admissions process, cognitive data, including undergraduate grade point average and Pharmacy College Admission Test (PCAT) scores, were collected from matriculating doctor of pharmacy (PharmD) students. Between 2007 and 2009, the HSRT was administered to 329 first-year PharmD students. Correlations between HSRT scores and cognitive data, previous degree, and gender were examined. Results. After controlling for other predictors, 3 variables were significantly associated with HSRT scores: percentile rank on the reading comprehension (p<0.001), verbal (p<0.001), and quantitative (p<0.001) subsections of the PCAT. Conclusions. Scores on the reading comprehension, verbal, and quantitative sections of the PCAT were significantly associated with HSRT scores. Some elements of critical thinking may be measured by these PCAT subsections. However, the HSRT offers information absent in standard cognitive admission criteria.
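The abstract's "after controlling for other predictors" describes a regression-style adjustment, where each coefficient is estimated jointly with the others. A minimal sketch of that idea using `np.linalg.lstsq` on synthetic, invented data (none of these numbers come from the study):

```python
import numpy as np

# Hypothetical stand-ins for PCAT percentile ranks; all values invented.
rng = np.random.default_rng(0)
n = 100
pcat_reading = rng.normal(50, 10, n)
pcat_verbal = rng.normal(50, 10, n)

# Simulated outcome with a known linear relationship (noise-free for clarity)
hsrt = 2.0 + 0.5 * pcat_reading - 0.3 * pcat_verbal

# Design matrix with an intercept column; lstsq estimates all coefficients
# jointly, so each one is adjusted for ("controls for") the others.
X = np.column_stack([np.ones(n), pcat_reading, pcat_verbal])
beta, *_ = np.linalg.lstsq(X, hsrt, rcond=None)
print(beta)  # recovers [2.0, 0.5, -0.3] on this noise-free data
```

Because the simulated outcome is exactly linear, the fitted coefficients match the true ones; with real, noisy data they would only be estimates.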


The American Journal of Pharmaceutical Education | 2014

Association of Health Sciences Reasoning Test scores with academic and experiential performance.

Wendy C. Cox; Jacqueline E. McLaughlin

Objectives. To assess the association of scores on the Health Sciences Reasoning Test (HSRT) with academic and experiential performance in a doctor of pharmacy (PharmD) curriculum. Methods. The HSRT was administered to 329 first-year (P1) PharmD students. Performance on the HSRT and its subscales was compared with academic performance in 29 courses throughout the curriculum and with performance in advanced pharmacy practice experiences (APPEs). Results. Significant positive correlations were found between course grades in 8 courses and HSRT overall scores. All significant correlations were accounted for by pharmaceutical care laboratory courses, therapeutics courses, and a law and ethics course. Conclusion. There was a lack of moderate to strong correlation between HSRT scores and academic and experiential performance. The usefulness of the HSRT as a tool for predicting student success may be limited.


The American Journal of Pharmaceutical Education | 2015

Development and Assessment of the Multiple Mini-Interview in a School of Pharmacy Admissions Model.

Wendy C. Cox; Jacqueline E. McLaughlin; David Singer; Margaret Lewis; Melissa M. Dinkins

Objective. To describe the development, implementation, and evaluation of the multiple mini-interview (MMI) within a doctor of pharmacy (PharmD) admissions model. Methods. Demographic data and academic indicators were collected for all candidates who participated in Candidates’ Day (n=253), along with scores on each criterion for the 7 MMI stations. A survey was administered to all candidates who completed the MMI, and another survey was administered to all interviewers to examine perceptions of the MMI. Results. Analyses suggest that MMI stations assessed different attributes as designed, with Cronbach alpha for each station ranging from 0.90 to 0.95. All correlations between MMI station scores and academic indicators were negligible. No significant differences in average station scores were found based on age, gender, or race. Conclusion. This study provides additional support for the use of the MMI as an admissions tool in pharmacy education.
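The per-station Cronbach alpha the study reports (0.90 to 0.95) is an internal-consistency statistic computed from a respondents-by-criteria rating matrix. A minimal sketch with invented ratings, not the study's data:

```python
import numpy as np

def cronbach_alpha(ratings):
    """Cronbach's alpha for a (respondents x items) rating matrix."""
    k = ratings.shape[1]                          # number of rating criteria
    item_vars = ratings.var(axis=0, ddof=1)       # variance of each criterion
    total_var = ratings.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented example: 5 candidates rated on 3 criteria at one MMI station.
ratings = np.array([
    [4, 5, 4],
    [3, 3, 3],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 5],
])
print(round(cronbach_alpha(ratings), 3))  # 0.944
```

Values near 1 indicate that the criteria at a station move together across candidates, i.e. they measure a common attribute.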


The American Journal of Pharmaceutical Education | 2012

Development of a Course Review Process

Adam M. Persky; Pamela U. Joyner; Wendy C. Cox

Objective. To describe and assess a course review process designed to enhance course quality. Design. A course review process led by the curriculum and assessment committees was designed for all required courses in the doctor of pharmacy (PharmD) program at a school of pharmacy. A rubric was used by the review team to address 5 areas: course layout and integration, learning outcomes, assessment, resources and materials, and learner interaction. Assessment. One hundred percent of targeted courses, or 97% of all required courses, were reviewed from January to August 2010 (n=30). Approximately 3.5 recommendations per course were made, resulting in improvement in course evaluation items related to learning outcomes. Ninety-five percent of reviewers and 85% of course directors agreed that the process was objective and the course review process was important. Conclusion. The course review process was objective and effective in improving course quality. Future work will explore the effectiveness of an integrated, continual course review process in improving the quality of pharmacy education.


The American Journal of Pharmaceutical Education | 2015

Limited Predictive Utility of Admissions Scores and Objective Structured Clinical Examinations for APPE Performance

Jacqueline E. McLaughlin; Julia Khanova; Kelly L. Scolaro; Philip T. Rodgers; Wendy C. Cox

Objective. To examine the relationship between admissions, objective structured clinical examination (OSCE), and advanced pharmacy practice experience (APPE) scores. Methods. Admissions, OSCE, and APPE scores were collected for students who graduated from the doctor of pharmacy (PharmD) program in spring of 2012 and spring of 2013 (n=289). Pearson correlation was used to examine relationships between variables, and independent t test was used to compare mean scores between groups. Results. All relationships among admissions data (undergraduate grade point average, composite PCAT scores, and interview scores) and OSCE and APPE scores were weak, with the strongest association found between the final OSCE and ambulatory care APPEs. Students with low scores on the final OSCE performed lower than others on the acute care, ambulatory care, and community APPEs. Conclusion. This study highlights the complexities of assessing student development of noncognitive professional skills over the course of a curriculum.
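The Pearson correlations described here quantify linear association between paired scores. A minimal sketch with `np.corrcoef` on invented admission and outcome scores (illustrative only, not the study's data):

```python
import numpy as np

# Invented GPAs and downstream scores for 5 hypothetical students.
gpa = np.array([3.0, 3.2, 3.5, 3.7, 3.9])
osce = 10 * gpa + 5                       # perfectly linear in GPA: r = 1.0
appe = np.array([78, 85, 74, 90, 80])     # essentially unrelated to GPA

r_strong = np.corrcoef(gpa, osce)[0, 1]
r_weak = np.corrcoef(gpa, appe)[0, 1]
print(round(r_strong, 2), round(r_weak, 2))  # 1.0 0.2
```

Correlations near 0, like the "weak" relationships the study found among admissions, OSCE, and APPE scores, mean one score carries little linear information about the other.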


The American Journal of Pharmaceutical Education | 2014

Rational and experiential decision-making preferences of third-year student pharmacists.

Jacqueline E. McLaughlin; Wendy C. Cox; Charlene R. Williams; Greene Shepherd

Objective. To examine the rational (systematic and rule-based) and experiential (fast and intuitive) decision-making preferences of student pharmacists, and to compare these preferences with those of other health professionals and student populations. Methods. The Rational-Experiential Inventory (REI-40), a validated psychometric tool, was administered electronically to 114 third-year (P3) student pharmacists. Student demographics and preadmission data were collected. The REI-40 results were compared with student demographics and admissions data to identify possible correlations between these factors. Results. Mean REI-40 rational scores were higher than experiential scores. Rational scores for younger students were significantly higher than for students aged 30 years and older (p<0.05). No significant differences were found based on gender, race, or the presence of a prior degree. All correlations between REI-40 scores and incoming grade point average (GPA) and Pharmacy College Admission Test (PCAT) scores were weak. Conclusion. Student pharmacists favored rational decision making over experiential decision making, consistent with findings from studies of other health professions.
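The age-group comparison reported here is a two-sample mean comparison; Welch's t statistic (which does not assume equal variances) is one common way to compute it. A minimal sketch with invented REI-40 rational scores, not the study's data:

```python
import numpy as np

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

# Invented rational-scale scores for two hypothetical age groups.
younger = np.array([32, 35, 33, 36, 34])
older = np.array([29, 30, 28, 31, 30])

t = welch_t(younger, older)
print(round(t, 2))  # 5.05
```

A large |t| relative to the reference distribution corresponds to a small p-value, i.e. a group difference unlikely to arise by chance alone.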


The American Journal of Pharmaceutical Education | 2017

Computer-Assisted Decision Support for Student Admissions Based on Their Predicted Academic Performance

Eugene N. Muratov; Margaret Lewis; Denis Fourches; Alexander Tropsha; Wendy C. Cox

Objective. To develop predictive computational models forecasting the academic performance of students in the didactic-rich portion of a doctor of pharmacy (PharmD) curriculum as admission-assisting tools. Methods. All PharmD candidates over three admission cycles were divided into two groups: those who completed the PharmD program with a GPA ≥ 3, and the remaining candidates. The Random Forest machine learning technique was used to develop a binary classification model based on 11 pre-admission parameters. Results. Robust and externally predictive models were developed, with an overall accuracy of 77% in distinguishing candidates with high versus low academic performance. These multivariate models predicted these groups more accurately than models based on undergraduate GPA and composite PCAT scores only. Conclusion. The models developed in this study can be used to improve the admission process as preliminary filters and thus quickly identify candidates who are likely to be successful in the PharmD curriculum.
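A Random Forest binary classifier over 11 pre-admission parameters, as the study describes, can be sketched with scikit-learn. Everything below is invented for illustration: the features are random stand-ins and the label rule is not the study's actual outcome definition.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-ins for 11 pre-admission parameters (200 candidates);
# the label mimics "completed the program with GPA >= 3" via a made-up rule.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 11))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# An ensemble of decision trees votes on each candidate's class.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)

# Score a new (invented) candidate profile.
candidate = rng.normal(size=(1, 11))
print(clf.predict(candidate), clf.predict_proba(candidate))
```

In practice the model would be validated on held-out candidates, as the paper's "externally predictive" claim implies, rather than judged on training data.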


Teaching and Learning in Medicine | 2017

Candidate Evaluation Using Targeted Construct Assessment in the Multiple Mini-Interview: A Multifaceted Rasch Model Analysis

Jacqueline E. McLaughlin; David Singer; Wendy C. Cox

Construct: A 7-station multiple mini-interview (MMI) circuit was implemented and assessed for 214 candidates rated by 37 interviewers (N = 1,498 ratings). The MMI stations were designed to assess 6 specific constructs (adaptability, empathy, integrity, critical thinking, teamwork [receiving instruction], teamwork [giving instruction]) and one open station about the candidate's interest in the school. Background: Despite the apparent benefits of the MMI, construct-irrelevant variance continues to be a topic of study. Refining the MMI to more effectively measure candidate ability is critical to improving our ability to identify and select candidates who are equipped for success within health professions education and the workforce. Approach: Each station assessed a single construct and was rated by a single interviewer who was provided only the name of the candidate and no additional information about the candidate's background, application, or prior academic performance. All interviewers received online and in-person training in the fall prior to the MMI and the morning of the MMI. A 3-facet multifaceted Rasch measurement analysis was completed to determine interviewer severity, candidate ability, and MMI station difficulty and examine how the model performed overall (e.g., rating scale). Results: Altogether, the Rasch measures explained 62.84% of the variance in the ratings. Differences in candidate ability explained 45.28% of the variance in the data, whereas differences in interviewer severity explained 16.09% of the variance in the data. None of the interviewers had Infit or Outfit mean-square scores greater than 1.7, and only 2 (5.4%) had mean-square scores less than 0.5. Conclusions: The data demonstrated acceptable fit to the multifaceted Rasch measurement model. This work is the first of its kind in pharmacy and provides insight into the development of an MMI that provides useful and meaningful candidate assessment ratings for institutional decision making.


The American Journal of Pharmaceutical Education | 2016

The Multiple Mini-Interview as an Admission Tool for a PharmD Program Satellite Campus

David Singer; Jacqueline E. McLaughlin; Wendy C. Cox

Objective. To assess the multiple mini-interview (MMI) as an admission tool for a satellite campus. Methods. In 2013, the MMI was implemented as part of a new admissions model at the UNC Eshelman School of Pharmacy. From fall 2013 to spring 2015, 73 candidates were interviewed by 15 raters on the satellite campus in Asheville, North Carolina. A many-facet Rasch measurement (MFRM) with three facets was used to determine the variance in candidate ratings attributable to rater severity, candidate ability, and station difficulty. Candidates were surveyed to explore their perceptions of the MMI. Results. Rasch measures accounted for 48.3% of total variance in candidate scores. Rater severity accounted for 9.1% of the variance, and candidate ability accounted for 36.2% of the variance. Eighty percent of survey respondents agreed or strongly agreed that interviewers got to know them based on the questions they answered. Conclusion. This study suggests that the MMI is a useful and valid tool for candidate selection at a satellite campus.

Collaboration


Dive into Wendy C. Cox's collaborations.

Top Co-Authors

Jacqueline E. McLaughlin (University of North Carolina at Chapel Hill)
Adam M. Persky (University of North Carolina at Chapel Hill)
Pamela U. Joyner (University of North Carolina at Chapel Hill)
Jacqueline M. Zeeman (University of North Carolina at Chapel Hill)
David Singer (University of North Carolina at Chapel Hill)
Philip T. Rodgers (University of North Carolina at Chapel Hill)
Bethanne Brown (University of Illinois at Chicago)
Bradford Wingo (University of North Carolina at Chapel Hill)
Cecilia M. Plaza (American Association of Colleges of Pharmacy)
Charlene R. Williams (University of North Carolina at Chapel Hill)