Keith A. Markus
John Jay College of Criminal Justice
Publications
Featured research published by Keith A. Markus.
Structural Equation Modeling | 2012
Keith A. Markus
Kline has provided an expanded and partially restructured update to his popular introductory structural equation modeling (SEM) textbook. The 13-chapter book is divided into three sections: “Concepts and Tools” (four chapters), “Core Techniques” (six chapters), and “Advanced Techniques, Avoiding Mistakes” (three chapters). Chapters include summaries, recommended reading, and exercises. I first summarize what the book covers and then close with some evaluative comments.
Structural Equation Modeling | 2002
Keith A. Markus
Statistically equivalent models produce the same range of moment matrices over the domain of their parameter spaces. Raykov and Marcoulides (2001) proposed a proof leading to the conclusion that all structural equation (SE) models with certain minimal components have infinitely many statistically equivalent models; a variation on their proof covers an even broader class of models. This conclusion has important implications for the application of at least one notion of eliminative induction to structural equation modeling (SEM). Normally, assertions of statistical equivalence imply that the models differ in meaning, which is what gives statistical equivalence its interest; statistically equivalent models that do not differ in meaning are not genuinely distinct alternatives. On these grounds, a particular complex causal structure provides a counterexample to the proposed proof. This counterexample suggests that a successful proof may require more detailed attention to the concept of semantic equivalence as characterized by different substantive implications. A formal account of semantic equivalence rests on translation between SE models and a model-neutral descriptive language.
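As a minimal numerical sketch of statistical equivalence (not an example taken from the paper), the simple path models X → Y and Y → X imply the same set of two-variable moment matrices; the covariance matrix and parameter values below are invented for illustration.

```python
import numpy as np

# Hypothetical observed covariance matrix for two variables, X and Y.
S = np.array([[2.0, 0.8],
              [0.8, 1.5]])

# Model A: X -> Y, i.e. Y = b*X + e, with Var(X) = vx and Var(e) = ve.
b_a = S[0, 1] / S[0, 0]
vx_a = S[0, 0]
ve_a = S[1, 1] - b_a**2 * S[0, 0]
implied_a = np.array([[vx_a, b_a * vx_a],
                      [b_a * vx_a, b_a**2 * vx_a + ve_a]])

# Model B: Y -> X, i.e. X = c*Y + d, with Var(Y) = vy and Var(d) = vd.
c_b = S[0, 1] / S[1, 1]
vy_b = S[1, 1]
vd_b = S[0, 0] - c_b**2 * S[1, 1]
implied_b = np.array([[c_b**2 * vy_b + vd_b, c_b * vy_b],
                      [c_b * vy_b, vy_b]])

# Both causally distinct models reproduce the same moment matrix exactly.
print(np.allclose(implied_a, S), np.allclose(implied_b, S))  # True True
```

Both models reproduce any positive-definite two-variable covariance matrix exactly, so the data alone cannot distinguish them; whether such equivalent models also differ in meaning is the question the abstract raises.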
Social Indicators Research | 1998
Keith A. Markus
Messick's (1989) theory of test validity is profoundly influential (Hubley and Zumbo, 1996; Angoff, 1988), in part because it brings together disparate contributions into a unified framework for building validity arguments. At the heart of Messick's theory lies a synthesis of realism and constructivism with respect to both scientific facts and measurement. Within this synthesis there remains a tension between the evidential basis and the consequential basis for test interpretation and use. This tension cannot be sidestepped simply by limiting the evidential basis to test interpretation and the consequential basis to test use: Interpretation and use are not so easily held separate. The roles of constructivism and context in Messick's theory underline the inherent link between facts and values, but the assumption that facts are objective and values are subjective goes unquestioned in Messick's theory. The inherent link between facts and values combines with this assumption to produce the unresolved tension in Messick's theory. This suggests that a unified theory of test validity requires a theory of value justification.
Structural Equation Modeling | 2010
Keith A. Markus
One common application of structural equation modeling (SEM) involves expressing and empirically investigating causal explanations. Nonetheless, several aspects of causal explanation that have an impact on behavioral science methodology remain poorly understood. It remains unclear whether applications of SEM should attempt to provide complete explanations or partial explanations. Moreover, it remains unclear what sorts of things researchers can best take as causes and effects. Finally, the meaning of causal assertions itself remains poorly understood. Attempting to clarify the use of structural equations as causal explanations by addressing these issues has implications for behavioral science methodology because applications of SEM typically remain vague about causation and thus about their substantive conclusions. Research aimed at clarifying these issues can lead to a sharper and more refined use of SEM for causal explanation, and by extension, clarify behavioral science methodology more generally.
Theory & Psychology | 2012
Keith A. Markus; Denny Borsboom
The possibility or impossibility of quantitative measurement in psychology has important ramifications for the nature of psychology as a discipline. Trendler’s (2009) argument for the impossibility of psychological measurement suggests a general and potentially fruitful strategy for further research on this question. However, the specific argument offered by Trendler appears flawed in several respects. It seems to conflate what must hold true with what one must know and also equivocate on the necessary evidence. Moreover, if the argument supported its conclusion, it would rule out qualitative discourse on psychology as well as psychological measurement. Taking Trendler’s argument as an example, one can formulate a general structure to arguments adopting the same basic strategy. An overview of the requirements that such arguments should meet provides a metatheoretical perspective that can assist authors in constructing such arguments and readers in critically evaluating them.
Measurement: Interdisciplinary Research & Perspective | 2008
Keith A. Markus
A theoretical variable such as integrity, conscientiousness, or academic honesty may correspond to either a construct or a concept, but the standard idiom does not distinguish the two. One can describe the difference between constructs and concepts in terms of set theory: constructs extend over actual cases, whereas concepts extend over both actual and possible cases. As such, theoretical claims made about, say, integrity as a construct differ from claims about integrity as a concept. The restriction of constructs to a specified population plays a central role in test validation and in psychometric analyses aimed at distinguishing constructs from one another. The extension of concepts over possible populations plays a central role in the adoption of nonactual possibilities as goals in efforts toward systemic change and in the comparison of constructs across populations. Because the standard idiom conflates constructs with concepts, it fails to provide a vocabulary that captures both the population-dependent and the population-independent aspects of variables; this failure recommends modifying the idiom to distinguish constructs from concepts. The distinction suggests various changes in practice, such as including the intended population in the names of constructs but not concepts.
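As a small illustrative sketch (the cases and names are invented, not drawn from the article), the set-theoretic relation described above amounts to a subset relation between a construct's extension, restricted to actual cases in a specified population, and the corresponding concept's extension, which also covers merely possible cases.

```python
# Hypothetical illustration: a construct extends over actual cases only,
# while the corresponding concept extends over actual and possible cases.
actual_cases = {"respondent_1", "respondent_2", "respondent_3"}
possible_cases = {"counterfactual_respondent_A", "counterfactual_respondent_B"}

construct_extension = set(actual_cases)             # population-dependent
concept_extension = actual_cases | possible_cases   # population-independent

# Every case covered by the construct is covered by the concept, not vice versa.
print(construct_extension <= concept_extension)  # True
print(concept_extension <= construct_extension)  # False
```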
Archive | 2004
Keith A. Markus
Structural equation (SE) models provide statistical models that summarize a multivariate probability distribution in terms of linear equations. When researchers interpret an SE model in terms of causal relationships, the SE model serves as a causal model. Researchers also give causal interpretations to models not currently expressible as SE models. Recognizing that both sets of models have expanding membership, the present chapter focuses primarily on the intersection of the two sets: causally interpreted SE models.
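Purely as an illustration (the model and parameters are hypothetical, not taken from the chapter), a recursive two-equation SE model for observed variables x, y1, and y2 shows how linear equations summarize a distribution's covariance structure.

```latex
% Hypothetical recursive SE model (illustrative, not from the chapter)
\begin{align*}
  y_1 &= \gamma x + \zeta_1,\\
  y_2 &= \beta y_1 + \zeta_2.
\end{align*}
% With Var(x) = \phi, Var(\zeta_1) = \psi_1, Var(\zeta_2) = \psi_2, and
% uncorrelated disturbances, the model-implied covariance matrix of
% (x, y_1, y_2) is
\[
  \Sigma(\theta) =
  \begin{pmatrix}
    \phi & \gamma\phi & \beta\gamma\phi\\
    \gamma\phi & \gamma^{2}\phi + \psi_1 & \beta\bigl(\gamma^{2}\phi + \psi_1\bigr)\\
    \beta\gamma\phi & \beta\bigl(\gamma^{2}\phi + \psi_1\bigr) & \beta^{2}\bigl(\gamma^{2}\phi + \psi_1\bigr) + \psi_2
  \end{pmatrix}.
\]
```

Fitting such a model means choosing parameter values so that Σ(θ) reproduces the observed covariance matrix as closely as possible; the causal interpretation discussed in the chapter is an additional layer of meaning placed on these same equations.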
Journal of Police and Criminal Psychology | 2000
David E. Brandt; Keith A. Markus
The attitudes toward the police (ATP) of a group of young inner-city adolescents were investigated within the context of a program designed to teach dispute resolution skills and promote a dialogue with local police. ATP were measured using a 23-item questionnaire. The results indicated that while ATP were generally positive, girls held more positive ATP than boys, and adolescents who reported negative experiences with the police had less favorable ATP. A confirmatory factor analysis of the questionnaire yielded three factors: attitudes toward police behavior, attitudes toward interaction with the police, and attitudes toward interaction with other adults. The results are in general agreement with earlier studies of other populations and have implications for programs designed to improve adolescent relationships with the police.
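As a sketch of the kind of three-factor confirmatory factor model the abstract describes (the item counts, loadings, and factor correlations below are invented, not the study's estimates), the model-implied item covariance matrix follows the standard CFA form Sigma = Lambda Phi Lambda' + Theta.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical three-factor CFA structure loosely mirroring the factors named
# above (police behavior, interaction with police, interaction with other
# adults); all numbers are invented for illustration.
n_items, n_factors = 9, 3
Lambda = np.zeros((n_items, n_factors))
Lambda[0:3, 0] = [0.7, 0.6, 0.8]   # items loading on factor 1
Lambda[3:6, 1] = [0.5, 0.7, 0.6]   # items loading on factor 2
Lambda[6:9, 2] = [0.6, 0.6, 0.7]   # items loading on factor 3

Phi = np.array([[1.0, 0.4, 0.3],   # factor correlation matrix
                [0.4, 1.0, 0.2],
                [0.3, 0.2, 1.0]])
Theta = np.diag(rng.uniform(0.3, 0.6, n_items))  # unique item variances

# Model-implied item covariance matrix under the CFA.
Sigma = Lambda @ Phi @ Lambda.T + Theta
print(Sigma.round(2))
```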
Multivariate Behavioral Research | 2008
Keith A. Markus
One can distinguish statistical models used in causal modeling from the causal interpretations that align them with substantive hypotheses. Causal modeling typically assumes an efficient causal interpretation of the statistical model. Causal modeling can also make use of mereological causal interpretations in which the state of the parts determines the state of the whole. This interpretation shares several properties with efficient causal interpretations but also differs in terms of other important properties. The availability of alternative causal interpretations of the same statistical models has implications for hypothesis specification, research design, causal inference, data analysis, and the interpretation of research results.
Educational and Psychological Measurement | 2001
Abe Fenster; Keith A. Markus; Carl F. Wiedemann; Marc A. Brackett; John Fernandez
The present study examined the use of the Graduate Record Examination (GRE-Verbal and GRE-Quantitative) and undergraduate grade point average (UGPA) to predict long-term performance in an MA program in forensic psychology. The criterion measures were graduate grade point average (GGPA) and time to completion (TTC). Data were available for 206 graduates. Regression analysis indicated that a linear combination of GRE-V, GRE-Q, and UGPA correlated 0.63 with GGPA. Predictive efficiency was reduced by only 2% of the variance when the GRE subscores were combined into a total score. The correlation with TTC was smaller (R = 0.31) but nonetheless translated into meaningful differences in student performance. Most noteworthy, GRE scores and UGPA appear to predict performance better in forensic psychology than in the social sciences in general.
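As a hedged sketch of the kind of multiple regression analysis the abstract reports (all scores below are simulated; only the sample size of 206 is taken from the study), the multiple correlation R is the correlation between the fitted linear combination of GRE-V, GRE-Q, and UGPA and the observed GGPA.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 206  # sample size from the study; all scores below are simulated

# Simulated predictors: GRE-Verbal, GRE-Quantitative, undergraduate GPA.
gre_v = rng.normal(550, 80, n)
gre_q = rng.normal(560, 90, n)
ugpa = rng.normal(3.2, 0.4, n)

# Simulated criterion: graduate GPA as a noisy linear function of predictors.
ggpa = 1.0 + 0.002 * gre_v + 0.001 * gre_q + 0.3 * ugpa + rng.normal(0, 0.25, n)

# Ordinary least squares regression of GGPA on the three predictors.
X = np.column_stack([np.ones(n), gre_v, gre_q, ugpa])
beta, *_ = np.linalg.lstsq(X, ggpa, rcond=None)
predicted = X @ beta

# Multiple correlation R: correlation of the fitted combination with GGPA.
R = np.corrcoef(predicted, ggpa)[0, 1]
print(f"R = {R:.2f}")
```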