
Publication


Featured research published by Aaron S. Richmond.


Assessment & Evaluation in Higher Education | 2008

Curriculum-Embedded Performance Assessment in Higher Education: Maximum Efficiency and Minimum Disruption.

Rhoda Cummings; Cleborne D. Maddux; Aaron S. Richmond

Increasingly, institutions of higher education are required to evaluate student progress and programme effectiveness through implementation of performance assessment practices. Faculty members frequently resist performance assessment because of concerns that assessment activities will increase workloads, reduce time for scholarly activities, eliminate professional autonomy, and reduce faculty work into component parts or discrete technical competences. This paper describes how curriculum-embedded performance assessment can be used to evaluate student and programme effectiveness without placing an undue burden on faculty. Examples of the use of curriculum-embedded performance assessment strategies in a graduate-level educational psychology programme are provided.


Teaching of Psychology | 2011

Promoting Higher Level Thinking in Psychology: Is Active Learning the Answer?

Aaron S. Richmond; Lisa Kindelberger Hagan

The goal of this study was to investigate which common instructional methods (active vs. direct) best promote higher level thinking in a psychology course. Over a 5-week period, 71 undergraduates were taught psychology using both active learning and direct instruction. Pre- and post-course assessments were coded as either higher or lower level questions based on Krathwohl’s updated Taxonomy of Educational Objectives. Results indicated an interaction effect where higher level thinking was significantly higher in active learning than in direct instruction. In contrast, lower level thinking was not influenced by instructional method. Based on these results, if psychology professors are interested in promoting higher level learning, active learning instruction may be a valuable tool.


Psychology Learning and Teaching | 2011

Got Neurons? Teaching Neuroscience Mnemonically Promotes Retention and Higher-Order Thinking

Aaron S. Richmond; Russell N. Carney; Joel R. Levin

The purpose of this study was to determine whether introductory psychology students could make effective use of the mnemonic keyword method in: (a) initially acquiring 26 neuroscience terms; (b) retaining this information over time; and (c) applying what they learned to a task requiring some degree of higher-order thinking. In two separate classes, 70 participants were trained either to use the keyword method or their own best method to study the neuroscience terms. After a 5-day delay, students returned to complete an unannounced assessment of the neuroscience terms. Based on a reduced sample of 58 ‘eligible’ participants, results indicated that students using the keyword method outperformed their own-best-method counterparts on immediate, delayed, and higher-order thinking assessments. The findings support the literature on the utility and power of the keyword method in actual psychology classroom learning contexts.


Journal of Moral Education | 2004

In support of the cognitive-developmental approach to moral education: a response to David Carr

Aaron S. Richmond; Rhoda Cummings

David Carr (2002) has argued against the use of developmental theories as a basis for curriculum development in moral education. Although we find common ground with some aspects of Carr's arguments, we disagree with several of his criticisms of the cognitive-developmental approach to moral education. He confuses romantic ideology (as espoused by Rousseau and others) with progressive ideology (as espoused by Dewey and others); he assumes that developmental theories have no endpoint or final goal from which to structure moral education; and he argues against the use of psychological inquiry to validate a philosophical 'ought'. This paper is an attempt to clarify Carr's arguments and propose a justification for the developmental approach to moral education.


Teaching of Psychology | 2014

Aspirational Model Teaching Criteria for Psychology

Aaron S. Richmond; Guy A. Boysen; Regan A. R. Gurung; Yvette N. Tazeau; Steven A. Meyers; Mark J. Sciutto

In 2011, the Society for the Teaching of Psychology commissioned a presidential task force to document teaching criteria for model psychology teachers in undergraduate education. The resulting list of criteria reflects activities related to face-to-face course interaction and online teaching, training, and education; course design; implementation of learning experiences; and the assessment process. Specifically, the model encompasses six broad areas, namely training, instructional methods, assessment process, syllabi, content, and student evaluations of teaching. As a developmental tool, the model can serve as a self-guided course for self-assessment of educational practices and can help identify areas of potential development. It can prompt reflection about teaching strengths and weaknesses. The model can also be useful as a guiding structure for tenure and promotion.


Teaching of Psychology | 2015

a + (b1) Professor–Student Rapport + (b2) Humor + (b3) Student Engagement = (Ŷ) Student Ratings of Instructors:

Aaron S. Richmond; Majken B. Berglund; Vadim B. Epelbaum; Eric M. Klein

Teaching effectiveness is often evaluated through student ratings of instruction (SRI). Research suggests that many factors can predict students' perceptions of teaching effectiveness, such as professor–student rapport, student engagement, and perceived humor of the instructor. Therefore, we sought to assess whether undergraduate students' perceptions of professor–student rapport, student engagement, and humor predict scores on retrospective SRIs. The findings suggest that professor–student rapport is the largest predictor (54% of variance) of scores on SRIs, while humor and student engagement were significant but minor predictors of SRIs. Since SRIs are often used by administrators to determine hiring, promotion, and tenure decisions for faculty, understanding what predicts SRIs is important to both faculty and administrators in higher education.


Teaching of Psychology | 2015

Who Are We Studying? Sample Diversity in Teaching of Psychology Research.

Aaron S. Richmond; Kristin A. Broussard; Jillian L. Sterns; Kristina K. Sanders; Justin C. Shardy

The purpose of the current study was to examine the sample diversity of empirical articles published in four premier teaching of psychology journals from 2008 to 2013. We investigated which demographic information was commonly reported and whether samples were representative in ethnicity and gender compared to national Department of Education data. Descriptive statistics showed that, in many cases, ethnicity was not reported in teaching of psychology journal articles. When ethnicity was reported, samples were predominantly Caucasian and not representative of Department of Education data. Additionally, results indicated an overrepresentation of female students in these samples, which is typical of psychology students. Based on the results, teaching of psychology research is accurately sampling its intended population; however, the psychology student population is not representative of the greater college student population. Accordingly, generalizations from teaching of psychology research to teaching in other academic domains should be made with caution.


Teaching of Psychology | 2015

The Effect of Immersion Scheduling on Academic Performance and Students' Ratings of Instructors.

Aaron S. Richmond; Bridget C. Murphy; Layton S. Curl; Kristin A. Broussard

During the past decades, little research has investigated the effects of immersion scheduling on the psychology classroom. Therefore, we sought to compare academic performance of students in 2-week immersion psychology courses to that of students in traditional 16-week courses. In Study 1, students who received instruction in a 2-week immersion course significantly outperformed their cohorts in a traditional 16-week course. In order to address potential limitations in the first study, in Study 2, we controlled for individual differences variables (e.g., cumulative grade point average), and results indicated significantly higher academic performance for students in the 2-week immersion course. In both studies, students in the immersion courses consistently evaluated the courses and their instructors significantly higher than those students in the 16-week courses. In light of our results and in contrast to critics, immersion courses may be useful and effective when teaching psychology.


Psychology Learning and Teaching | 2015

A Primer for Creating a Flipped Psychology Course.

Heather D. Hussey; Aaron S. Richmond; Bethany K. B. Fleck

Instructional design for psychology courses is ever changing. Recently, there has been an explosion of scholarly literature related to flipped classroom pedagogy in higher education. This essentially entails inverting a course so that lectures are viewed outside of class, and class time is devoted to active learning through activities such as demonstrations and group work. Although beneficial to student learning, implementing the flipped course design into a psychology class can be difficult, time consuming, and daunting. As such, we provide a primer for successful implementation of the flipped design. Based on the literature, we describe several teaching tips (e.g., what content to deliver in class versus online) that may aid in the implementation process. Additionally, we describe several common pitfalls to avoid (e.g., apprehension about learning new technology) when implementing the flipped classroom.


Computers in the Schools | 2007

An Examination of Publication Bias in an International Journal of Information Technology in Education

Leping Liu; Suzanne Aberasturi; Kulwadee Axtell; Aaron S. Richmond

Publication bias refers to a tendency to publish articles with significant results over publications with nonsignificant results. In this article we first review the literature of publication bias, focusing on the three major determinants (file drawer significance, file drawer effect size, and file drawer sample size) and two interrelated sources (the editors and researchers) of publication bias. We then present a study that examined the publication bias of an international peer-reviewed journal in the field of information technology in education. All original submissions from that journal over a six-year period were examined. Comparisons were performed and no significant difference was found between accepted and rejected manuscripts on their significance, effect size, and sample size. Findings did not provide any evidence of editorial publication bias. Suggestions for further studies are discussed.

Collaboration


Dive into Aaron S. Richmond's collaborations.

Top Co-Authors

Bethany K. B. Fleck
Metropolitan State University of Denver

Regan A. R. Gurung
Metropolitan State University of Denver

Bridget C. Murphy
Metropolitan State University of Denver