
Publication


Featured research published by André A. Rupp.


Measurement: Interdisciplinary Research & Perspective | 2008

Unique Characteristics of Diagnostic Classification Models: A Comprehensive Review of the Current State-of-the-Art

André A. Rupp; Jonathan Templin

Diagnostic classification models (DCM) are frequently promoted by psychometricians as important modeling alternatives for analyzing response data in situations where multivariate classifications of respondents are made on the basis of multiple postulated latent skills. In this review paper, a definitional boundary of the space of DCM is developed, core DCM within this space are reviewed, and their defining features are compared and contrasted with those of other latent variable models. The models to which DCM are compared include unrestricted latent class models, multidimensional factor analysis models, and multidimensional item response theory models. Attention is paid both to statistical considerations of model structure and to substantive considerations of model use.


Structural Equation Modeling | 2004

To Bayes or Not to Bayes, From Whether to When: Applications of Bayesian Methodology to Modeling.

André A. Rupp; Dipak K. Dey; Bruno D. Zumbo

This article presents relevant research on Bayesian methods and their major applications to modeling in an effort to lay out differences between the frequentist and Bayesian paradigms and to look at the practical implications of these differences. Before research is reviewed, basic tenets and methods of the Bayesian approach to modeling are presented and contrasted with basic estimation results from a frequentist perspective. It is argued that Bayesian methods have become a viable alternative to traditional maximum likelihood-based estimation techniques and may be the only solution for more complex psychometric data structures. Hence, neither the applied nor the theoretical measurement community can afford to neglect the exciting new possibilities that have opened up on the psychometric horizon.
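The frequentist–Bayesian contrast described in this abstract can be made concrete with a minimal sketch (illustrative only, not from the article): estimating a binomial proportion by maximum likelihood versus by a posterior mean under a conjugate Beta prior.

```python
# Illustrative sketch: frequentist vs. Bayesian estimation of a
# binomial proportion. Function names and the Beta(1, 1) default
# prior are assumptions for this example, not the article's code.

def mle_proportion(successes, trials):
    # Frequentist maximum likelihood estimate: the observed proportion.
    return successes / trials

def posterior_mean_proportion(successes, trials, a=1.0, b=1.0):
    # Bayesian estimate under a Beta(a, b) prior: the posterior is
    # Beta(a + successes, b + trials - successes), whose mean shrinks
    # the observed proportion toward the prior mean a / (a + b).
    return (a + successes) / (a + b + trials)
```

For 7 successes in 10 trials, the MLE is 0.7, while the posterior mean under a uniform Beta(1, 1) prior is 8/12 ≈ 0.667, showing the shrinkage toward the prior that distinguishes the two paradigms.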


Structural Equation Modeling | 2011

Analysis of Multivariate Social Science Data (2nd ed.)

André A. Rupp; Shauna J. Sweet

Analysis of Multivariate Social Science Data is written by four well-respected scholars in the social sciences and is designed “to give students and social researchers with limited mathematical and statistical knowledge a basic understanding of some of the main multivariate methods and the knowledge to carry them out” (p. ix). Using practical examples and minimizing the use of formal mathematics, the authors aim to provide practitioners with an accessible common-language introduction to what they consider the most important multivariate methods used in social scientific research. We reviewed the second edition of this book; for an alternative review of this edition see Mair (2009), and for a review of the first edition see Austin (2003). In this second edition, the authors have maintained the use of practical examples that were cited by reviewers as a clear strength of the book. This has resulted in an emphasis on example-driven interpretation of statistical results that serve to explain, illustrate, and even to compare and contrast the different methods. The first chapter is designed to orient the reader by providing a brief history of how the book was developed from course notes; it includes detailed explanations of the authors’ nonmathematical approach to instruction, provides a review of basic notation and terminology, and discusses the focal role of examples throughout the book. Each of the subsequent 11 chapters is devoted to exploring a particular analytic method following a progression that the authors describe as moving “from simpler to more complex” (p. x). The first four of these chapters are devoted to methods that the authors collect under the terms descriptive methods, data reduction methods, and methods for data summarization: cluster analysis (Chapter 2), multidimensional scaling (Chapter 3), correspondence analysis (Chapter 4), and principal components analysis (Chapter 5). 
Simple linear regression, multiple linear regression, logistic regression, and path analysis are all covered in Chapter 6 (new to this edition), which serves as a pivotal transitional chapter between the first and second main sections of the book. In these later chapters,


Educational and Psychological Measurement | 2009

Impact of Missing Data on the Detection of Differential Item Functioning: The Case of Mantel-Haenszel and Logistic Regression Analysis

Alexander Robitzsch; André A. Rupp

This article describes the results of a simulation study to investigate the impact of missing data on the detection of differential item functioning (DIF). Specifically, it investigates how four methods for dealing with missing data (listwise deletion, zero imputation, two-way imputation, response function imputation) interact with two methods of DIF detection (Mantel-Haenszel statistic, logistic regression analysis) under three mechanisms of missingness (data missing completely at random, data missing at random, and data missing not at random) to produce over- or underestimates of the DIF effect sizes and detection rates. Results show that the interaction effects between missingness mechanism, treatment, and rate are most influential for explaining variation in bias, root mean square errors, and rejection rates. An incorrect treatment of missing data can thus lead to severe increases of Type I and Type II error rates. However, the choice between the two DIF detection methods investigated in this study is not important.
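For readers unfamiliar with the first of the two DIF detection methods named in this abstract, the Mantel-Haenszel common odds ratio can be sketched as follows; the function and data are illustrative and are not the study's code.

```python
# Hypothetical sketch of the Mantel-Haenszel common odds ratio.
# Each stratum is a 2x2 table (a, b, c, d) = (reference correct,
# reference incorrect, focal correct, focal incorrect) at one
# matching-score level. A ratio near 1 suggests no DIF.

def mantel_haenszel_odds_ratio(strata):
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den
```

For example, a single stratum (30, 10, 15, 25) yields (30·25) / (10·15) = 5.0, indicating the item favors the reference group at that score level; pooling over strata weights each table by its size.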


Educational and Psychological Measurement | 2011

Performance of the S − χ² Statistic for Full-Information Bifactor Models

Ying Li; André A. Rupp

This study investigated the Type I error rate and power of the multivariate extension of the S − χ² statistic using unidimensional and multidimensional item response theory (UIRT and MIRT, respectively) models as well as full-information bifactor (FI-bifactor) models through simulation. Manipulated factors included test length, sample size, latent trait characteristics such as discrimination pattern and intertrait correlations, and model type misspecification. The nominal Type I error rates were observed under all conditions. The power of the S − χ² statistic for UIRT models was high for MIRT and FI-bifactor models that were structurally most distinct from the UIRT models but was low otherwise. The power of the S − χ² statistic to detect misfitting between MIRT and FI-bifactor models was low across all conditions because of the structural similarity of these two models. Finally, information-based indices of relative model–data fit and latent variable correlations were obtained, and these showed expected patterns across conditions.


International Journal of Testing | 2017

Incremental Validity of Multidimensional Proficiency Scores from Diagnostic Classification Models: An Illustration for Elementary School Mathematics.

Olga Kunina-Habenicht; André A. Rupp; Oliver Wilhelm

Diagnostic classification models (DCMs) hold great potential for applications in summative and formative assessment by providing discrete multivariate proficiency scores that yield statistically driven classifications of students. Using data from a newly developed diagnostic arithmetic assessment that was administered to 2032 fourth-grade students in Germany, we evaluated whether the multidimensional proficiency scores from the best-fitting DCM have an added value, over and above the unidimensional proficiency score from a simpler unidimensional item response theory model, in explaining variance in external (a) school grades in mathematics and (b) unidimensional proficiency scores from a standards-based large-scale assessment of mathematics. Results revealed high classification reliabilities as well as interpretable parameter estimates for items and students for the best-fitting DCM. However, while DCM scores were moderately correlated with both external criteria, only a negligible incremental validity of the multivariate attribute scores was found.
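The incremental validity question in this abstract amounts to comparing explained variance in an external criterion with and without the multivariate attribute scores. A minimal sketch (illustrative, not the study's analysis code) using ordinary least squares:

```python
# Hypothetical sketch: incremental validity as the gain in R^2 when
# multivariate attribute scores are added to a baseline regression on
# a unidimensional proficiency score. All names are illustrative.
import numpy as np

def r_squared(X, y):
    # Ordinary least squares with an intercept column prepended.
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

def incremental_r_squared(uni_score, attr_scores, criterion):
    # R^2 gain of the full model (unidimensional + attribute scores)
    # over the baseline model (unidimensional score only).
    base = r_squared(uni_score.reshape(-1, 1), criterion)
    full = r_squared(
        np.column_stack([uni_score.reshape(-1, 1), attr_scores]), criterion
    )
    return full - base
```

A near-zero gain, as the study reports, means the attribute scores add little explanatory power once the unidimensional score is in the model.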


Archive | 2012

Standardized Diagnostic Assessment Design and Analysis: Key Ideas from Modern Measurement Theory

Hye-Jeong Choi; André A. Rupp; Min Pan

As a response to the ever-increasing demand for diagnostic assessments that can provide more informative feedback about students’ knowledge state, assessment design frameworks are needed that can help designers incorporate relevant cognitive theories into the development, implementation, and analysis process. In this chapter, we describe one prominent framework for principled diagnostic assessment design called evidence-centered design (ECD) (e.g., Mislevy et al. A brief introduction to evidence-centered design. CSE Technical Report 632. Los Angeles: The National Center for Research on Evaluation, Standards, Student Testing (CRESST), Center for Studies in Education, UCLA, 2004) as well as a class of statistical models called diagnostic classification models (DCMs) (e.g., Rupp et al. Diagnostic assessment methods: theory and application. The Guilford Press, New York, 2010) that can make inferences about student profiles within this framework. With respect to DCMs we describe key terminology, concepts, and a unified estimation framework known as the log-linear cognitive diagnosis model (LCDM) (Henson et al. Psychometrika 74(2):191–210, 2009). We present three examples to illustrate how particular DCMs can be specified to reflect different cognitive theories of knowledge processing. At the end of the chapter, we use a real data set on arithmetic ability in elementary school to show the kinds of diagnostic inferences that can be made about students’ attribute profiles.


Measurement: Interdisciplinary Research & Perspective | 2012

Building Coherent Validation Arguments for the Measurement of Latent Constructs With Unified Statistical Frameworks

André A. Rupp

In the focus article of this issue, von Davier, Naemi, and Roberts essentially coupled (a) a short methodological review of structural similarities of latent variable models with discrete and continuous latent variables and (b) two short empirical case studies that show how these models can be applied to real, rather than simulated, large-scale data sets. My reading of their article left me with a few conflicting impressions that I would like to share in this commentary. These impressions concern the alignment between what I felt the authors set out to do rhetorically, on the one hand, and how they went about accomplishing their goal, on the other hand. In short, I argue that both aspects of their work are valuable in isolation but that a stronger evidentiary basis, grounded in richer validation arguments (e.g., Kane, 2006, 2011), needs to be brought to bear to flesh out the nuances of their targeted rhetoric.


Measurement: Interdisciplinary Research & Perspective | 2008

Lost in Translation? Meaning and Decision Making in Actual and Possible Worlds.

André A. Rupp



International Journal of Learning and Media | 2009

Epistemic Network Analysis: A Prototype for 21st-Century Assessment of Learning

David Williamson Shaffer; David Hatfield; Gina Navoa Svarovsky; Padraig Nash; Aran Nulty; Elizabeth Bagley; Kenneth A. Frank; André A. Rupp; Robert J. Mislevy

Collaboration


Top co-authors of André A. Rupp:

Bruno D. Zumbo, University of British Columbia
Hans Anand Pant, Free University of Berlin
David Williamson Shaffer, University of Wisconsin-Madison
Rebecca Nugent, Carnegie Mellon University