Dena A. Pastor
James Madison University
Publications
Featured research published by Dena A. Pastor.
Applied Psychological Measurement | 2006
Dena A. Pastor; S. Natasha Beretvas
The present study illustrates an extension of Kamata's (2001) restricted form of the hierarchical generalized linear model that provides a multilevel longitudinal Rasch measurement model appropriate for use with polytomous responses. This extension can be used to assess average and interindividual change in the latent trait of interest, concurrently with an assessment of the invariance of item locations over time. Responses of 1,353 college students to three subscales (namely, measures of depression/anxiety, stress, and well-being) of the Outcome Questionnaire were used for demonstration purposes. Interpretation of the results is provided, and the benefits of using a multilevel item response theory model with longitudinal data are discussed.
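As an illustrative sketch only (not the authors' estimation code), the polytomous Rasch family referenced above can be made concrete with partial credit model category probabilities. The trait value and step difficulties below are hypothetical.

```python
import math

def pcm_probabilities(theta, thresholds):
    """Partial credit model: probability of each response category
    0..m for a person with latent trait `theta` on an item whose
    step difficulties are given in `thresholds`."""
    logits = [0.0]  # category 0 has an empty sum of step terms
    running = 0.0
    for delta in thresholds:
        running += theta - delta  # cumulate (theta - delta_j) over steps
        logits.append(running)
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical 5-category item (4 step difficulties), average-trait person
probs = pcm_probabilities(theta=0.0, thresholds=[-1.5, -0.5, 0.5, 1.5])
```

With symmetric step difficulties and an average trait, the middle category is the most probable and the category probabilities sum to one.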
Educational and Psychological Measurement | 2003
S. Natasha Beretvas; Dena A. Pastor
Traditionally, reliability generalization (RG) studies have used some form of the regression model to summarize score reliability of a measure across samples and examine conditions under which the reliability varies. Oftentimes, the assumptions underlying the use of multiple regression are not satisfied in RG studies. This article describes how the assumptions have been violated and introduces a more sophisticated technique, mixed-effects modeling, that can overcome many of the shortcomings of traditional approaches. A nontechnical introduction of mixed-effects models in the context of RG studies is provided along with an example using internal consistency reliability coefficients from the Beck Depression Inventory that compares results under the mixed- versus the fixed-effects models.
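A minimal sketch of the random-effects pooling idea behind mixed-effects RG models, using the standard DerSimonian-Laird method-of-moments estimator of between-study variance. The reliability coefficients and sampling variances below are hypothetical, and applied RG work would often transform coefficients before pooling.

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooling via the DerSimonian-Laird
    method-of-moments estimate of between-study variance (tau^2)."""
    w = [1.0 / v for v in variances]          # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sw
    # Q statistic and the DL moment estimator of tau^2
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    # Re-weight by total (sampling + between-study) variance
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se, tau2

# Hypothetical internal-consistency estimates from five samples
alphas = [0.88, 0.91, 0.84, 0.90, 0.86]
variances = [0.0004, 0.0003, 0.0006, 0.0002, 0.0005]
pooled, se, tau2 = dersimonian_laird(alphas, variances)
```

The random-effects weights shrink toward equality as tau^2 grows, which is exactly the sense in which the mixed-effects model relaxes the fixed-effects assumption of a single common reliability.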
Applied Measurement in Education | 2009
Steven L. Wise; Dena A. Pastor; Xiaojing J. Kong
Previous research has shown that rapid-guessing behavior can degrade the validity of test scores from low-stakes proficiency tests. This study examined, using hierarchical generalized linear modeling, examinee and item characteristics for predicting rapid-guessing behavior. Several item characteristics were found to be significant: items with more text or those occurring later in the test were related to increased rapid guessing, while the inclusion of a graphic in an item was related to decreased rapid guessing. The sole significant examinee predictor was SAT total score. Implications of these results for measurement professionals developing low-stakes tests are discussed.
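The item-level part of such a model can be sketched as a logistic prediction of rapid guessing from item features. The coefficients below are hypothetical, signed only to match the reported directions of effect, and the sketch omits the examinee-level (hierarchical) structure of the actual HGLM.

```python
import math

def rapid_guess_prob(text_length, position, has_graphic, coefs):
    """Predicted probability of a rapid guess on an item from a
    logistic model on item features (all coefficients hypothetical)."""
    b0, b_text, b_pos, b_graphic = coefs
    logit = (b0 + b_text * text_length + b_pos * position
             + b_graphic * has_graphic)
    return 1.0 / (1.0 + math.exp(-logit))

# Hypothetical coefficients: more text and a later position raise the
# probability; the presence of a graphic lowers it.
coefs = (-3.0, 0.02, 0.05, -0.8)
short_early = rapid_guess_prob(text_length=20, position=1, has_graphic=1, coefs=coefs)
long_late = rapid_guess_prob(text_length=120, position=50, has_graphic=0, coefs=coefs)
```

Under these illustrative coefficients, a long, late, graphic-free item yields a much higher rapid-guessing probability than a short, early item with a graphic.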
Applied Measurement in Education | 2003
Dena A. Pastor
Embedding item response theory (IRT) models within a multilevel modeling framework has been shown by many authors to allow better estimation of the relationships between predictor variables and IRT latent traits (Adams, Wilson, & Wu, 1997). A multilevel IRT model recently proposed by Kamata (1998, 2001) yields the additional benefit of being able to accommodate data that are collected in hierarchical settings. This expansion of multilevel IRT models to three levels allows not only the dependency typically found in hierarchical data to be accommodated, but also the estimation of (a) latent traits at different levels and (b) the relationships between predictor variables and latent traits at different levels. The purpose of this article is to provide both a description and application of Kamata's 3-level IRT model. The advantages and disadvantages of using multilevel IRT models in applied research are discussed and directions for future research are given.
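A two-level sketch of the logic (the dichotomous case, not Kamata's full 3-level specification): the person's latent trait is modeled with a predictor at the person level, and that trait feeds the Rasch logit at the response level. The coefficient values are hypothetical, and the sketch computes probabilities rather than estimating the model.

```python
import math

def p_correct(item_difficulty, gamma0, gamma1, x_j, u_j):
    """Multilevel Rasch sketch: the latent trait is modeled at the
    person level as gamma0 + gamma1 * x_j + u_j (predictor plus
    person residual); the response level is the Rasch logit
    theta - b_i. All coefficient values here are hypothetical."""
    theta = gamma0 + gamma1 * x_j + u_j   # person-level model of the trait
    logit = theta - item_difficulty       # response-level Rasch model
    return 1.0 / (1.0 + math.exp(-logit))

# Two hypothetical persons differing by one unit on predictor x
p_low = p_correct(item_difficulty=0.5, gamma0=0.0, gamma1=0.6, x_j=0.0, u_j=0.0)
p_high = p_correct(item_difficulty=0.5, gamma0=0.0, gamma1=0.6, x_j=1.0, u_j=0.0)
```

The positive person-level coefficient (gamma1) translates directly into a higher probability of a correct response, which is the kind of predictor-trait relationship the multilevel formulation estimates.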
Multivariate Behavioral Research | 2018
Dena A. Pastor; Rory A. Lazowski
The term “multilevel meta-analysis” is encountered not only in applied research studies, but in multilevel resources comparing traditional meta-analysis to multilevel meta-analysis. In this tutorial, we argue that the term “multilevel meta-analysis” is redundant, since any meta-analysis can be formulated as a special kind of multilevel model. To clarify the multilevel nature of meta-analysis, the four standard meta-analytic models are presented using multilevel equations and fit to an example data set using four software programs: two specific to meta-analysis (metafor in R and SPSS macros) and two specific to multilevel modeling (PROC MIXED in SAS and HLM). The same parameter estimates are obtained across programs, underscoring that all meta-analyses are multilevel in nature. Despite the equivalent results, not all software programs are alike, and differences are noted in the output provided and the estimators available. This tutorial also recasts distinctions made in the literature between traditional and multilevel meta-analysis as differences between meta-analytic choices, not between meta-analytic models, and provides guidance to inform choices in estimators, significance tests, moderator analyses, and modeling sequence. The extent to which the software programs allow flexibility with respect to these decisions is noted, with metafor emerging as the most favorable program reviewed.
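The simplest of the standard meta-analytic models, the common-effect model, illustrates the multilevel framing: with no between-study variance it reduces to an intercept-only model whose estimate is the inverse-variance weighted mean of the study effects. The effect sizes and sampling variances below are hypothetical.

```python
def fixed_effect_pool(effects, variances):
    """Common-effect meta-analysis: the intercept-only model's estimate
    is the inverse-variance weighted mean of the study effects."""
    w = [1.0 / v for v in variances]              # precision weights
    est = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    se = (1.0 / sum(w)) ** 0.5                    # standard error of estimate
    return est, se

# Hypothetical effect sizes and sampling variances from four studies
est, se = fixed_effect_pool([0.30, 0.45, 0.25, 0.40], [0.01, 0.02, 0.015, 0.01])
```

Any of the four programs compared in the tutorial would return this same weighted mean for the common-effect specification, which is the equivalence the abstract describes.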
Applied Measurement in Education | 2013
Melinda Ann Taylor; Dena A. Pastor
Although federal regulations require testing students with severe cognitive disabilities, there is little guidance regarding how technical quality should be established. Documenting the reliability of scores for alternate assessments is a known challenge: typical measures of reliability do little to model the multiple sources of error that are characteristic of alternate assessments. Generalizability theory (G-theory), by contrast, allows researchers to identify sources of error and analyze the relative contribution of each source. This study demonstrates an application of G-theory to examine reliability for an alternate assessment. A G-study with the facets rater type, assessment attempts, and tasks was conducted to determine the relative contribution of each facet to observed score variance. Results were used to determine the reliability of scores. The assessment design was then modified to examine how changes might impact reliability. As a final step, designs that were deemed satisfactory were evaluated regarding the feasibility of adapting them into a statewide standardized assessment and accountability program.
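A one-facet (persons crossed with tasks) G-study sketch, far simpler than the multi-facet design in the study: variance components are estimated from mean squares via the expected mean square equations, and a generalizability coefficient for relative decisions follows. The score matrix is hypothetical.

```python
def g_study(scores):
    """One-facet G-study (persons x tasks): estimate variance
    components from mean squares, then the generalizability
    coefficient for relative decisions over n_t tasks."""
    n_p, n_t = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n_p * n_t)
    p_means = [sum(row) / n_t for row in scores]
    t_means = [sum(scores[i][j] for i in range(n_p)) / n_p for j in range(n_t)]
    ss_p = n_t * sum((m - grand) ** 2 for m in p_means)
    ss_t = n_p * sum((m - grand) ** 2 for m in t_means)
    ss_tot = sum((scores[i][j] - grand) ** 2
                 for i in range(n_p) for j in range(n_t))
    ss_pt = ss_tot - ss_p - ss_t                 # residual sum of squares
    ms_p = ss_p / (n_p - 1)
    ms_t = ss_t / (n_t - 1)
    ms_pt = ss_pt / ((n_p - 1) * (n_t - 1))
    var_pt = ms_pt                               # residual (p x t) component
    var_p = max(0.0, (ms_p - ms_pt) / n_t)       # person component
    var_t = max(0.0, (ms_t - ms_pt) / n_p)       # task component
    g_coef = var_p / (var_p + var_pt / n_t)      # relative G coefficient
    return var_p, var_t, var_pt, g_coef

# Hypothetical scores: 4 examinees by 3 tasks
scores = [[3, 4, 3], [5, 5, 4], [2, 3, 2], [4, 5, 5]]
var_p, var_t, var_pt, g = g_study(scores)
```

Increasing the number of tasks shrinks the error term var_pt / n_t, which is exactly how a D-study explores design modifications like those described in the abstract.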
Contemporary Educational Psychology | 2007
Dena A. Pastor; Kenneth E. Barron; B.J. Miller; Susan L. Davis
School Psychology Review | 2007
Steven W. Evans; Zewalanji N. Serpell; Brandon K. Schultz; Dena A. Pastor
Educational and Psychological Measurement | 2007
Melinda A. Taylor; Dena A. Pastor
Aggression and Violent Behavior | 2008
Allen B. Grove; Steven W. Evans; Dena A. Pastor; Samantha D. Mack