Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Sharon L.K. McDonough is active.

Publication


Featured research published by Sharon L.K. McDonough.


The American Journal of Pharmaceutical Education | 2011

A quality improvement course review of advanced pharmacy practice experiences.

T. Lynn Stevenson; Lori B. Hornsby; Haley M. Phillippe; Kristi W. Kelley; Sharon L.K. McDonough

Objectives. To determine strengths of and quality improvements needed in advanced pharmacy practice experiences (APPE) through a systematic course review process. Design. Following the “developing a curriculum” (DACUM) format, course materials and assessments were reviewed by the curricular subcommittee responsible for experiential education and by key stakeholders. Course sequence overview and data were presented and discussed. A course review worksheet was completed, outlining strengths and areas for improvement. Assessment. Student feedback was positive. Strengths and areas for improvement were identified. The committee found reviewing the sequence of 8 APPE courses to be challenging. Conclusions. Course reviews are a necessary process in curricular quality improvement but can be difficult to accomplish. We found overall feedback about APPEs was positive and student performance was high. Areas identified as needing improvement will be the focus of continuous quality improvement of the APPE sequence.


The American Journal of Pharmaceutical Education | 2016

Assessing the Value of Online Learning and Social Media in Pharmacy Education

Leslie A. Hamilton; Andrea S. Franks; R. Eric Heidel; Sharon L.K. McDonough; Katie J. Suda

Objective. To assess student preferences regarding online learning and technology and to evaluate student pharmacists’ social media use for educational purposes. Methods. An anonymous 36-question online survey was administered to third-year student pharmacists enrolled in the Drug Information and Clinical Literature Evaluation course. Results. Four hundred thirty-one students completed the survey, yielding a 96% response rate. The majority of students used technology for academic activities, with 90% using smartphones and 91% using laptop computers. Fifty-eight percent of students also used social networking websites to communicate with classmates. Conclusion. Pharmacy students frequently use social media and some online learning methods, which could be a valuable avenue for delivering or supplementing pharmacy curricula. The potential role of social media and online learning in pharmacy education needs to be further explored.


Research in Social & Administrative Pharmacy | 2016

Co-creation of market expansion in point-of-care testing in the United States: Industry leadership perspectives on the community pharmacy segment

Kenneth C. Hohmeier; Sharon L.K. McDonough; Junling Wang

Background: Point-of-care testing (POCT) is a specialty of laboratory medicine that occurs at the bedside or near the patient receiving health services. Despite its established clinical utility and consumer demand in the community pharmacy, the implementation of POCT within this setting has remained modest for a variety of reasons. One possible solution to this problem is the concept of co-creation – the partnership between consumer and manufacturer in the development of value for a service or device. Objective: Using the theoretical underpinning of co-creation, this study aimed to investigate the perceptions of POCT industry leadership on the community pharmacy market in the United States to uncover reasons for limited implementation within community pharmacies. Methods: Participants were recruited for this study through the use of snowball sampling. A series of semi-structured interviews was conducted with the participants via telephone. Interviews were recorded, transcribed, and entered into a qualitative analysis software program to summarize the data. Results: Five key themes were uncovered: gaps in understanding, areas of positive impact, barriers to implementation, facilitators of implementation, and community pharmacy – a potential major player. Conclusions: By uncovering gaps in perceptions, it may be possible to leverage the U.S. pharmacy industry's size, potential for scalability, and ease of patient access to further patient care.


The American Journal of Pharmaceutical Education | 2017

Examining the Association Between the NAPLEX, Pre-NAPLEX, and Pre- and Post-admission Factors

Marie A. Chisholm-Burns; Christina A. Spivey; Debbie C. Byrd; Sharon L.K. McDonough; Stephanie J. Phelps

Objective. To examine the relationship between the NAPLEX and Pre-NAPLEX among pharmacy graduates, as well as determine effects of pre-pharmacy, pharmacy school, and demographic variables on NAPLEX performance. Methods. A retrospective review of pharmacy graduates’ NAPLEX scores, Pre-NAPLEX scores, demographics, pre-pharmacy academic performance factors, and pharmacy school academic performance factors was performed. Bivariate (eg, ANOVA, independent samples t-test) and correlational analyses were conducted, as was stepwise linear regression to examine the significance of Pre-NAPLEX score and other factors as related to NAPLEX score. Results. One hundred fifty graduates were included, with the majority being female (60.7%) and white (72%). Mean NAPLEX score was 104.7. Mean Pre-NAPLEX score was 68.6. White students had significantly higher NAPLEX scores compared to Black/African American students. NAPLEX score was correlated to Pre-NAPLEX score, race/ethnicity, PCAT composite and section scores, undergraduate overall and science GPAs, pharmacy GPA, and on-time graduation. The regression model included pharmacy GPA and Pre-NAPLEX score. Conclusion. The findings provide evidence that, although pharmacy GPA is the most critical determinant, the Pre-NAPLEX score is also a significant predictor of NAPLEX score.
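
The stepwise regression reported above retained pharmacy GPA and Pre-NAPLEX score as predictors. As a rough illustration only, here is a minimal forward-selection sketch in Python using statsmodels; the column names and synthetic data are assumptions for the example, not the authors' dataset or code.

    # Forward stepwise selection sketch (illustrative; not the authors' analysis).
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 150  # matches the cohort size reported above; the values are synthetic
    df = pd.DataFrame({
        "pharm_gpa": rng.normal(3.2, 0.4, n),       # hypothetical predictor
        "pre_naplex": rng.normal(68.6, 14.5, n),    # hypothetical predictor
        "pcat_composite": rng.normal(70, 15, n),    # hypothetical predictor
    })
    # Synthetic outcome loosely centered on the reported mean NAPLEX score of 104.7.
    df["naplex"] = 60 + 10 * df["pharm_gpa"] + 0.2 * df["pre_naplex"] + rng.normal(0, 8, n)

    def forward_stepwise(data, outcome, candidates, alpha=0.05):
        # Repeatedly add the candidate with the smallest p-value until none clears alpha.
        selected = []
        while candidates:
            pvals = {}
            for c in candidates:
                X = sm.add_constant(data[selected + [c]])
                pvals[c] = sm.OLS(data[outcome], X).fit().pvalues[c]
            best = min(pvals, key=pvals.get)
            if pvals[best] >= alpha:
                break
            selected.append(best)
            candidates = [c for c in candidates if c != best]
        return selected

    print(forward_stepwise(df, "naplex", ["pharm_gpa", "pre_naplex", "pcat_composite"]))
    # With these synthetic data, pharm_gpa and pre_naplex should be selected.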


The American Journal of Pharmaceutical Education | 2016

Student Pharmacists' Perceptions of a Composite Examination in Their First Professional Year.

Sharon L.K. McDonough; Elizabeth L. Alford; Shannon W. Finks; Robert B. Parker; Marie A. Chisholm-Burns; Stephanie J. Phelps

Objective. To assess first-year (P1) pharmacy students’ studying behaviors and perceptions after implementation of a new computerized “composite examination” (CE) testing procedure. Methods. Student surveys were conducted to assess studying behavior and perceptions about the CE before and after its implementation. Results. Surveys were completed by 149 P1 students (92% response rate). Significant changes between survey results before and after the CE included an increase in students’ concerns about the limited number of questions per course on each examination and decreased concerns about the time allotted and the inability to write on the CEs. Significant changes in study habits included a decrease in cramming (studying shortly before the test) and an increase in priority studying (spending more time on one course than another). Conclusion. The CE positively changed assessment practice at the college. It helped overcome logistic challenges in computerized testing and drove positive changes in study habits.
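
The abstract reports significant pre/post changes but does not name the statistical test used. For paired yes/no survey items of this kind, McNemar's test is one standard choice; the sketch below shows how such a comparison could be run, with invented counts sized to the 149 respondents.

    # Hypothetical pre/post comparison; the test choice and all counts are
    # assumptions, not taken from the paper. McNemar's test suits paired
    # yes/no responses from the same students before and after the CE.
    import numpy as np
    from statsmodels.stats.contingency_tables import mcnemar

    # Paired responses to an item such as "I cram shortly before the test":
    # rows = pre (yes, no), columns = post (yes, no).
    table = np.array([[40, 35],   # 35 students switched from yes to no
                      [ 8, 66]])  # 8 switched from no to yes; totals sum to 149

    result = mcnemar(table, exact=True)
    print(f"statistic = {result.statistic}, p = {result.pvalue:.4g}")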


The American Journal of Pharmaceutical Education | 2014

Evaluation of student factors associated with pre-NAPLEX scores.

Marie A. Chisholm-Burns; Christina A. Spivey; Sharon L.K. McDonough; Stephanie J. Phelps; Debbie C. Byrd

Objective: To examine relationships among students’ Pre-NAPLEX scores and prepharmacy, pharmacy school, and demographic variables to better understand factors that may contribute to Pre-NAPLEX performance. Methods: A retrospective review of pharmacy students’ Pre-NAPLEX scores, demographics, prepharmacy factors, and pharmacy school factors was performed. Bivariate (eg, ANOVA) and correlational analyses and stepwise linear regression were conducted to examine the significance of various factors and their relationship to Pre-NAPLEX score. Results: One hundred sixty-eight students were included, with the majority being female (60.7%) and White (72%). Mean Pre-NAPLEX score was 68.95 ± 14.5. Non-Hispanic White students had significantly higher Pre-NAPLEX scores compared to minority students (p<0.001). Pre-NAPLEX score was correlated (p<0.001) to race/ethnicity (r=-0.341), PCAT score (r=0.272), and pharmacy school GPA (r=0.346). The regression model (adjusted R²=0.216; p<0.001) included pharmacy school GPA, academic probation, academic remediation, and PCAT composite. Conclusion: This study highlighted that select demographic, prepharmacy, and pharmacy school factors were associated with Pre-NAPLEX outcomes. Such factors may assist colleges/schools of pharmacy in identifying students who may be at risk for poorer NAPLEX performance and may need greater preparation.
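
For context on the reported correlation coefficients (eg, r = 0.346 between pharmacy school GPA and Pre-NAPLEX score), a Pearson correlation of this kind can be computed as follows; the variable names and data are synthetic stand-ins, not the study's records.

    # Pearson correlation sketch (illustrative; synthetic data only).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n = 168  # matches the reported sample size; the values are invented
    gpa = rng.normal(3.3, 0.4, n)
    pre_naplex = 20 * gpa + rng.normal(0, 12, n)  # induce a moderate positive association

    r, p = stats.pearsonr(gpa, pre_naplex)
    print(f"r = {r:.3f}, p = {p:.4g}")  # compare with the paper's r = 0.346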


Currents in Pharmacy Teaching and Learning | 2017

Assessing self-assessment practices: A survey of U.S. colleges and schools of pharmacy

James S. Wheeler; Sharon L.K. McDonough; Tracy M. Hagemann

OBJECTIVE This study quantifies and describes student self-assessment approaches in colleges of pharmacy across the United States. METHODS Faculty members identified as assessment directors from college websites at U.S. colleges of pharmacy were electronically surveyed. Prior to distribution, feedback and question validation were sought from select assessment directors. Surveys were distributed and recorded via Qualtrics® survey software and analyzed in Microsoft Excel®. RESULTS Responses were received from 49 colleges of pharmacy (n = 49/134, 37% response rate). The most commonly used strategies were reflective essays (n = 44/49, 90%), portfolios (n = 40/49, 82%), student self-evaluations (n = 35/49, 71%) and questionnaires/surveys/checklists (n = 29/49, 59%). Out of 49 submitted surveys, 35 programs noted students received feedback on self-assessment. Feedback came most commonly from faculty (n = 31/35, 88%). Thirty-four programs responded regarding self-assessment integration, including 15 colleges (n = 15/34, 44%) that integrated self-assessment both into the curriculum and co-curricular activities, while 14 (n = 14/34, 41%) integrated self-assessment exclusively into the curriculum, and 5 (n = 5/34, 15%) used self-assessment exclusively in co-curricular activities. DISCUSSION AND CONCLUSIONS Student self-assessment is a critical first step of the Continuing Professional Development (CPD) process. Colleges and schools of pharmacy use a wide variety of methods to develop this skill in preparing future practitioners.


The American Journal of Pharmaceutical Education | 2013

Throwing a Curveball: The Impact of Instructor Decisions Regarding Poorly Performing Test Questions

Stephanie J. Phelps; Sharon L.K. McDonough; Robert B. Parker; Shannon W. Finks

To the Editor. Skilled pitchers in the game of baseball often throw a curveball to trick an unsuspecting batter into swinging and missing. Much like the batter who is thrown off balance by a tricky pitch, a student may be misled by a tricky question on a multiple-choice test, regardless of whether the item writer intended to “pitch a curveball.” Additionally, decisions regarding what to do with test questions that perform poorly on item analysis can affect scores, and subsequently course grades, in unintended ways. Item analysis of multiple-choice questions is an important tool to ensure test questions fairly assess learning instead of tricking or deceiving students. In addition, students’ test scores can significantly change depending on how we handle poorly performing questions – a situation that can be easily overlooked by course directors. As a result, it is the course director or instructor who is unintentionally deceived. Although there are some generally accepted guidelines for statistically defining a poorly performing test item, how instructors respond to item analysis information may not follow a consistent pattern throughout a curriculum.

At the University of Tennessee Health Science Center College of Pharmacy, we have encountered substantial variability among course directors in dealing with contested and poorly performing test questions. Course directors may choose to score examinations by keeping poorly performing questions, counting multiple answers correct, or by eliminating the entire question from the test, with subsequent readjustment of student scores. General agreement exists among our faculty members that when item analysis reports a negative point biserial, these questions should be reviewed and adjustments made. However, there may be situations when the item analysis indicates a valid question (eg, point biserial value of 0.45), yet the course director at his/her discretion omits the question or counts multiple answers as correct, thereby giving all students credit. We have found that such changes can have a substantial, highly variable, and sometimes unpredictable impact on students’ test scores. Consider the following scenarios that we have encountered where instructor decisions throw curveballs in relation to students’ test scores.

Example 1: Omitting questions from a test. An examination has 25 questions total. Item analysis reveals 3 relatively poorly performing questions (40%-60% of students responded correctly), yet each question has a high point biserial value, indicating that all are strong assessments of student learning. Based on personal preference, the course director removes these 3 questions from the test, reducing the total possible points to 22. Student 1 got 21 of the 25 questions correct, including all 3 of the omitted questions. Thus, this student’s initial test score was 21/25, or 84%. His adjusted score, after omitting the 3 questions, is 18/22, or 82%. Student 2 got 21 of 25 questions correct yet missed all 3 of the omitted questions. Her initial score was 21/25, or 84%, while her adjusted score is 21/22, or 95%. In this case, student 1, who selected the correct answers to the omitted questions, experienced a reduction in his score. However, student 2, who selected the incorrect answers, received a final score that was significantly improved after the omission of the questions.

Example 2: Re-keying answers to give credit. A test has 25 questions. Three questions, all with high point biserials and item discrimination, are contested by students. After reviewing the item analysis, the course director decides to re-key the answers to give all students credit for those 3 questions. Because student 1 got 21 of the 25 questions correct, including all 3 of the questions of concern, her initial and adjusted examination scores are the same: 21/25, or 84%. Student 2 got 21 of 25 questions correct, but on initial scoring missed all 3 of the questions of concern. His initial score was 21/25, or 84%, while his adjusted score is now 24/25, or 96%. In this case, only the student who initially selected the incorrect answer receives an improved score.

Example 3: Awarding bonus points. Using the example above, instead of altering the 3 questions of concern, the questions are kept, and the examination is scored using the students’ current answers. However, the course director decides to give all students 3 bonus points. The initial examination scores for both students 1 and 2 are 21/25, or 84%, while adjusted scores for both with the additional 3 points are 24/25, or 96%. The awarding of 3 bonus points in this scenario provides a greater relative improvement in examination score for students who performed poorly on the examination compared to those who did well. For example, a student’s initial score is 10/25 (40%), but with the 3 bonus points, his score increases to 13/25 (52%, a relative change of 30%). For the student who got 21/25 initially correct, the score increases from 84% to 96%, a relative increase of 14%. Although all students are treated equally in the awarding of bonus points, some students’ performance may not have merited this adjustment. Another approach would be to give bonus questions, where students have extra chances to perform well but are only given credit for those questions answered correctly.

As a result of course directors’ variable approaches to what they perceive as poorly performing test questions, student pharmacists’ frustration over these inconsistencies in examination grading has become apparent at our college. Unlike baseball, where a curveball is intentionally thrown to deceive the player and thereby influence the outcome of the game, the ultimate goal for instructors and course directors should be to fairly and accurately assess student learning in a manner that is equitable to all. Throwing out poorly performing examination questions must be carefully considered, lest we impact student scores in an unintended way and the curveball is instead thrown back at us. Although faculty members may be well intentioned in resolving test item performance issues, scoring adjustment is more complex than some of us realize. Colleges should be encouraged to create mechanisms to monitor, guide, and support faculty in this area.
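
The arithmetic behind the three scenarios is easy to verify mechanically. Here is a minimal sketch in Python, assuming simple right/wrong scoring; the function names are illustrative, not part of the letter.

    # The three score-adjustment policies described above (illustrative).
    def omit(correct, total, omitted_correct, n_omitted):
        # Drop n_omitted questions; students lose any of them they answered correctly.
        return (correct - omitted_correct) / (total - n_omitted)

    def rekey(correct, total, flagged_correct, n_flagged):
        # Give everyone credit for the flagged questions; the denominator is unchanged.
        return (correct + n_flagged - flagged_correct) / total

    def bonus(correct, total, n_bonus):
        # Add flat bonus points without changing the denominator.
        return min(correct + n_bonus, total) / total

    # Example 1: both students answered 21/25; student 1 got all 3 omitted items right.
    print(f"{omit(21, 25, 3, 3):.0%}")   # student 1: 82% (down from 84%)
    print(f"{omit(21, 25, 0, 3):.0%}")   # student 2: 95% (up from 84%)

    # Example 2: re-keying helps only the student who missed the flagged items.
    print(f"{rekey(21, 25, 3, 3):.0%}")  # student 1: 84% (unchanged)
    print(f"{rekey(21, 25, 0, 3):.0%}")  # student 2: 96%

    # Example 3: flat bonus points give weaker scores a larger relative gain.
    print(f"{bonus(10, 25, 3):.0%}")     # 40% -> 52%, a 30% relative gain
    print(f"{bonus(21, 25, 3):.0%}")     # 84% -> 96%, a 14% relative gain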


The American Journal of Pharmaceutical Education | 2009

Factors influencing pharmacy students' attendance decisions in large lectures.

Salisa C. Westrick; Kristen L. Helms; Sharon L.K. McDonough; Michelle L. Breland


Currents in Pharmacy Teaching and Learning | 2015

Going "high stakes" with a pharmacy OSCE: Lessons learned in the transition

Sharon L.K. McDonough; Erika L. Kleppinger; Amy R. Donaldson; Kristen L. Helms

Collaboration


Dive into Sharon L.K. McDonough's collaborations.

Top Co-Authors

Stephanie J. Phelps
University of Tennessee Health Science Center

Marie A. Chisholm-Burns
University of Tennessee System

Christina A. Spivey
University of Tennessee System

Robert B. Parker
University of Tennessee Health Science Center