Kim A. Stewart
University of Denver
Publications
Featured research published by Kim A. Stewart.
Journal of Management Education | 1999
Donald R. Bacon; Kim A. Stewart; William S. Silver
This study empirically identifies which teacher-controlled (contextual) variables have the greatest impact on whether the student will have a great team experience or a miserable one. The results indicate that the clarity of instructions to the team, the longevity of the team experience, and self-selection of teammates all positively affect team experiences. The level of management education, the team size, and the percentage of the course grade associated with team performance did not differ across best and worst team experiences. Contrary to previous empirical findings and conventional wisdom, the use of peer evaluations was negatively associated with good team experiences. Further insights from the data and implications for the use of student teams are discussed.
Journal of Marketing Education | 1998
Donald R. Bacon; Kim A. Stewart; Sue Stewart-Belle
In a study of 49 graduate and 172 undergraduate marketing project teams, the average of the individual abilities on the team was found to predict student team performance. Team size had little effect, and gender diversity had no effect on team performance. Among graduate teams, those with a moderate amount of nationality diversity outperformed teams with high or no nationality diversity. The implications of these and other findings for course administration and team assignment are discussed.
Journal of Marketing Education | 2006
Donald R. Bacon; Kim A. Stewart
The retention curve for knowledge acquired in a consumer behavior course is explored in a longitudinal study, tracking individual students from 8 to 101 weeks following course completion. Rasch measurement is used to link tests and to achieve intervally scaled measures of knowledge. The findings indicate that most of the knowledge gained in the course is lost within 2 years. Evidence is provided that knowledge acquired at a deep level of understanding is more likely to be retained than knowledge acquired at a surface level of understanding, and knowledge tested more than once during a course is more likely to be retained than knowledge tested only once. No significant differences in retention were observed related to material covered in a project. Implications are discussed.
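The dichotomous Rasch model used to link the tests has a simple closed form. As an illustrative sketch (θ for person ability and b for item difficulty are standard Rasch notation, not symbols taken from the article):

```python
import math

def rasch_probability(theta: float, b: float) -> float:
    """Probability that a person with ability theta answers an item
    of difficulty b correctly, under the dichotomous Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# A person whose ability equals the item difficulty has a 50% chance;
# ability well above difficulty pushes the probability toward 1.
print(rasch_probability(0.5, 0.5))  # 0.5
print(rasch_probability(2.0, 0.0))
```

Because ability and difficulty enter the model only through their difference on the logit scale, estimates from different tests can be placed on a single interval scale once the tests are linked, which is what makes the longitudinal comparison above possible.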
Simulation & Gaming | 2001
Donald R. Bacon; Kim A. Stewart; Elizabeth Scott Anderson
Many simulations, games, and other experiential exercises require participants to function in teams. The authors review the advantages and disadvantages of various methods of assigning participants to teams, including random assignment, self-selection, and facilitator assignment and then introduce and discuss computer-aided methods of team assignment. Guidelines are provided for how to choose an appropriate method of team assignment.
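One family of computer-aided assignment methods balances measured ability across teams. The greedy heuristic below is a generic illustration of that idea, not necessarily the algorithm the authors discuss:

```python
def balanced_teams(scores, n_teams):
    """Greedy balancing heuristic: take participants from highest
    ability score to lowest, always assigning the next person to the
    team whose current total score is smallest."""
    teams = [[] for _ in range(n_teams)]
    totals = [0.0] * n_teams
    for score in sorted(scores, reverse=True):
        i = min(range(n_teams), key=totals.__getitem__)
        teams[i].append(score)
        totals[i] += score
    return teams

# Six participants split into two teams with equal total ability.
print(balanced_teams([90, 85, 80, 70, 60, 55], 2))
```

Unlike random assignment or self-selection, a heuristic like this guarantees roughly equal average ability across teams, at the cost of requiring an ability measure up front.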
AORN Journal | 2015
Donald R. Bacon; Kim A. Stewart
AORN conducted its 13th annual compensation survey for perioperative nurses in June and July 2015. A multiple regression model was used to examine how a number of variables, including job title, education level, certification, experience, and geographic region, affect nurse compensation. Comparisons between the 2015 data and data from previous years are presented. The effects of other forms of compensation (eg, on-call compensation, overtime, bonuses, shift differentials, benefits) on base compensation rates also are examined. Additional analyses explore the effect of the economic downturn on the perioperative work environment.
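A multiple regression of this kind can be sketched with synthetic data. The predictors and coefficients below are hypothetical placeholders, not the survey's actual variables or results:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Hypothetical predictors: years of experience, certification (0/1),
# and holding a bachelor's degree or higher (0/1).
experience = rng.uniform(0, 30, n)
certified = rng.integers(0, 2, n).astype(float)
degree = rng.integers(0, 2, n).astype(float)
# Simulated base salary (in $1,000s) with known true coefficients.
salary = 25 + 0.4 * experience + 3.0 * certified + 2.0 * degree \
    + rng.normal(0, 1.5, n)

# Ordinary least squares fit via np.linalg.lstsq.
X = np.column_stack([np.ones(n), experience, certified, degree])
coefs, *_ = np.linalg.lstsq(X, salary, rcond=None)
print(np.round(coefs, 2))  # estimates near the true values 25, 0.4, 3, 2
```

Each estimated coefficient is the expected change in compensation per unit change in that predictor, holding the others fixed, which is how the survey isolates the effect of, say, certification from experience.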
Journal of Marketing Education | 2012
Donald R. Bacon; Pallab Paul; Kim A. Stewart; Kausiki Mukhopadhyay
Much has been written about the evaluation of faculty research productivity in promotion and tenure decisions, including many articles that seek to determine the rank of various marketing journals. Yet how faculty evaluators combine journal quality, quantity, and author contribution to form judgments of a scholar’s performance is unclear. A mathematical model of faculty judgment is presented that estimates a scholar’s research productivity that is surprisingly consistent with actual faculty evaluations. The model does not replace human judgment in evaluating a scholar’s research performance, but the model enhances clarity and objectivity in the evaluation process. The method is demonstrated with marketing faculty at one university.
Journal of Management Education | 2017
Donald R. Bacon; Kim A. Stewart
On the long and arduous journey toward effective educational assessment, business schools have progressed in their ability to clearly state measurable learning goals and use direct measures of student learning. However, many schools are wrestling with the last stages of the journey—measuring present learning outcomes, implementing curricular/pedagogical changes, and then measuring postchange outcomes to determine if the implemented changes produced the desired effect. These last steps are particularly troublesome for a reason unrecognized in the assessment literature—inadequate statistical power caused primarily by the use of small student samples. Analyses presented here demonstrate that assessment efforts by smaller schools may never provide the statistical power required to obtain valid results in a reasonable time frame. Consequently, decisions on curricular and pedagogical change are too often based on inaccurate research findings. Rather than waste time and resources toward what essentially is a statistical dead end, an alternate approach is recommended: Schools should examine published pedagogical studies that use direct measures of learning with sufficient statistical power and utilize the findings to improve student learning.
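The power problem can be illustrated with a standard sample-size formula for a two-group comparison (a generic textbook calculation, not one reproduced from the article): detecting even a modest pedagogical effect at conventional error rates requires far more students than a small program's cohort typically supplies.

```python
import math
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05,
                power: float = 0.80) -> int:
    """Approximate sample size per group for a two-group comparison,
    via the normal approximation n = 2 * ((z_{1-a/2} + z_power) / d)^2,
    where d is Cohen's standardized effect size."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_power = z.inv_cdf(power)
    return math.ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)

# A small-to-moderate effect (d = 0.3) needs ~175 students per group;
# even a moderate effect (d = 0.5) needs ~63 per group.
print(n_per_group(0.3), n_per_group(0.5))
```

With a pre-change and post-change cohort of 30 students each, power to detect d = 0.3 is far below the conventional 80%, which is the statistical dead end the article describes.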
Journal of Management Education | 2001
Donald R. Bacon; Kim A. Stewart
The topic of quality management is so expansive that it raises the issue of how to teach the subject in a way that students find engaging, meaningful, and useful. The Personal Data Analysis Exercise is designed to achieve these objectives by having students apply the quality management process to their everyday lives. In the exercise, students develop valued self-improvement goals that can be accomplished within the semester. They take actions to achieve the goals, collect data to monitor their progress, identify “defects” in their behaviors, and analyze their behaviors using quality tools to find new ways to eliminate the defects and accomplish the goals. The article explains the exercise and includes a teaching guide and instructional aids.
Marketing Education Review | 2016
Donald R. Bacon; Carol J. Johnson; Kim A. Stewart
Response rates in student evaluations of teaching (SET) surveys are often low, especially when conducted online. These lower response rates raise the question of nonresponse bias. This article examines a data set comprising student evaluations of 6,754 business courses occurring over an 11-year period to investigate whether response rate is related to SET scores and distributions. The authors found that an increased response rate was associated with lower average SET scores for high-SET teachers, and associated with higher average SET scores for low-SET teachers. Furthermore, the authors found that low response rates are associated with lower variance in SET scores. Implications are discussed.
Marketing Education Review | 2016
Donald R. Bacon; Yilong (Eric) Zheng; Kim A. Stewart; Carol J. Johnson; Pallab Paul
Although widely used, student evaluations of teaching (SET) do not address several factors that should be considered in evaluating teaching performance, such as new course preparations, teaching larger classes, and inconvenient class times. Consequently, an incentive exists to avoid certain teaching assignments to achieve high SET scores while minimizing workload. This hinders curriculum innovation and development and potentially creates dysfunction in the college. The authors demonstrate how conjoint analysis can be applied to create a model of teaching evaluation that simultaneously considers many aspects of teaching performance and increases fairness in the appraisal process. Findings from a pilot implementation are discussed.
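The core of a conjoint analysis is estimating a part-worth for each attribute from ratings of multi-attribute profiles, typically via dummy-coded regression. The sketch below uses invented attributes and ratings purely for illustration; the article's actual study design may differ:

```python
import numpy as np

# Hypothetical teaching-assignment profiles rated by evaluators on a
# 1-10 scale. Columns: new_prep (0/1), large_class (0/1),
# inconvenient_time (0/1) -- illustrative attributes only.
profiles = np.array([
    [0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1],
    [1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1],
])
ratings = np.array([5.0, 7.0, 6.5, 6.0, 8.5, 8.0, 7.5, 9.5])

# Part-worths via ordinary least squares on the dummy-coded attributes:
# the intercept is the baseline assignment, and each coefficient is the
# extra evaluation credit awarded for taking on that burden.
X = np.column_stack([np.ones(len(profiles)), profiles])
partworths, *_ = np.linalg.lstsq(X, ratings, rcond=None)
print(np.round(partworths, 2))  # [5.  2.  1.5 1. ]
```

Scoring every faculty member's actual assignment portfolio against such part-worths is what lets the evaluation model weigh a new prep in a large, inconveniently timed section differently from a repeat of a small daytime course.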