Robert C. MacCallum
University of North Carolina at Chapel Hill
Publications
Featured research published by Robert C. MacCallum.
Psychological Methods | 2005
Kristopher J. Preacher; Derek D. Rucker; Robert C. MacCallum; W. Alan Nicewander
Analysis of continuous variables sometimes proceeds by selecting individuals on the basis of extreme scores of a sample distribution and submitting only those extreme scores to further analysis. This sampling method is known as the extreme groups approach (EGA). EGA is often used to achieve greater statistical power in subsequent hypothesis tests. However, there are several largely unrecognized costs associated with EGA that must be considered. The authors illustrate the effects EGA can have on power, standardized effect size, reliability, model specification, and the interpretability of results. Finally, the authors discuss alternative procedures, as well as possible legitimate uses of EGA. The authors urge researchers, editors, reviewers, and consumers to carefully assess the extent to which EGA is an appropriate tool in their own research and in that of others.
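To make the cost concrete, here is a minimal simulation sketch (not from the article) of one effect the authors describe: selecting only extreme groups on a predictor inflates the standardized effect size relative to the full sample. The sample size, true correlation, and quartile cutoffs are illustrative assumptions.

```python
# Minimal sketch: the extreme groups approach (EGA) inflates a sample correlation
# relative to analyzing the full sample. All values below are assumed for illustration.
import numpy as np

rng = np.random.default_rng(0)
n, rho = 5000, 0.30                      # assumed sample size and true correlation

# Bivariate normal data with correlation rho
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)

full_r = np.corrcoef(x, y)[0, 1]

# EGA: keep only cases in the lower and upper quartiles of x
lo, hi = np.quantile(x, [0.25, 0.75])
keep = (x <= lo) | (x >= hi)
ega_r = np.corrcoef(x[keep], y[keep])[0, 1]

print(f"full-sample r     = {full_r:.3f}")   # close to the true 0.30
print(f"extreme-groups r  = {ega_r:.3f}")    # noticeably larger: inflated effect size
```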
Multivariate Behavioral Research | 2005
Donna L. Coffman; Robert C. MacCallum
The biasing effects of measurement error in path analysis models can be overcome by the use of latent variable models. In cases where path analysis is used in practice, it is often possible to use parcels as indicators of a latent variable. The purpose of the current study was to compare latent variable models in which parcels were used as indicators of the latent variables, path analysis models of the aggregated variables, and models in which reliability estimates were used to correct for measurement error in path analysis models. Results showed that point estimates of path coefficients were smallest for the path analysis models and largest for the latent variable models. It is concluded that, whenever possible, it is better to use a latent variable model in which parcels are used as indicators than a path analysis model using total scale scores.
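A brief sketch of the reliability-based correction mentioned in the abstract, reduced to the classical disattenuation formula for a single observed path; the correlation and reliability values are made up for illustration.

```python
# Sketch of correcting a single observed path for measurement error using
# reliability estimates (classical disattenuation); all values are assumed.
import math

r_xy  = 0.42   # observed correlation between total scale scores X and Y
rel_x = 0.80   # reliability estimate for X (e.g., coefficient alpha)
rel_y = 0.75   # reliability estimate for Y

# Estimated correlation between the underlying true scores
r_true = r_xy / math.sqrt(rel_x * rel_y)
print(f"observed r = {r_xy:.2f}, disattenuated r = {r_true:.2f}")  # about 0.54
```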
Psychological Methods | 2006
Robert C. MacCallum; Michael W. Browne; Li Cai
For comparing nested covariance structure models, the standard procedure is the likelihood ratio test of the difference in fit, where the null hypothesis is that the models fit identically in the population. A procedure for determining the statistical power of this test is presented in which effect size is based on a specified difference in overall fit of the models. A modification of the standard null hypothesis of zero difference in fit is proposed, allowing for testing an interval hypothesis that the difference in fit between models is small, rather than zero. These developments are combined, yielding a procedure for estimating the power of a test of a null hypothesis of small difference in fit versus an alternative hypothesis of larger difference.
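A sketch of this kind of power computation as I read the abstract, using noncentral chi-square distributions with the effect size expressed through each model's RMSEA. The specific RMSEA values, degrees of freedom, sample size, and alpha are illustrative assumptions, not values from the article.

```python
# Sketch of power for the nested-model difference test: null of zero difference
# vs. null of a small (nonzero) difference in fit. Values below are assumed.
from scipy.stats import chi2, ncx2

N, alpha = 200, 0.05
df_A, df_B = 24, 22                          # Model A nested within Model B, so df_A > df_B
df_diff = df_A - df_B

def fit(df, rmsea):                          # overall-fit effect size F = df * rmsea^2
    return df * rmsea**2

F_diff_null = fit(df_A, 0.06) - fit(df_B, 0.05)   # "small" difference under the null
F_diff_alt  = fit(df_A, 0.08) - fit(df_B, 0.05)   # larger true difference

lam0 = (N - 1) * F_diff_null                 # noncentrality under the small-difference null
lam1 = (N - 1) * F_diff_alt                  # noncentrality under the alternative

crit_zero  = chi2.ppf(1 - alpha, df_diff)             # usual zero-difference null
crit_small = ncx2.ppf(1 - alpha, df_diff, lam0)       # interval (small-difference) null

power_zero  = 1 - ncx2.cdf(crit_zero,  df_diff, lam1)
power_small = 1 - ncx2.cdf(crit_small, df_diff, lam1)  # lower, stricter null
print(f"power (zero-difference null):  {power_zero:.3f}")
print(f"power (small-difference null): {power_small:.3f}")
```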
Multivariate Behavioral Research | 2010
Sonya K. Sterba; Robert C. MacCallum
Different random or purposive allocations of items to parcels within a single sample are thought not to alter structural parameter estimates as long as items are unidimensional and congeneric. If, additionally, the numbers of items per parcel and parcels per factor are held fixed across allocations, different allocations of items to parcels within a single sample are thought not to meaningfully alter model fit, at least when items are normally distributed. We show analytically that, although these statements hold in the population, they do not necessarily hold in the sample. We show via a simulation that, even under these conservative conditions, the magnitude of within-sample item-to-parcel-allocation variability in structural parameter estimates and model fit can alter substantive conclusions when sampling error is high (e.g., low N, low item communalities, few items per few parcels). We supply a software tool that facilitates reporting and ameliorating the consequences of item-to-parcel-allocation variability. The tool's utility is demonstrated on an empirical example involving the Neuroticism-Extroversion-Openness (NEO) Personality Inventory and the Computer Assisted Panel Study data set.
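A small simulation sketch of the phenomenon, not the authors' software tool: within one small sample, different random item-to-parcel allocations shift a parcel-based estimate. For brevity, a disattenuated composite correlation stands in for the structural parameter a parcel-level CFA would estimate, and the loadings, sample size, and number of allocations are assumptions.

```python
# Sketch (not the authors' tool): within a single low-N sample, random allocations
# of items to parcels produce a range of "latent correlation" estimates.
import numpy as np

rng = np.random.default_rng(1)
N, n_items, n_parcels = 80, 9, 3            # low N, few items and parcels (assumed)

# Two factors correlated 0.5, each with 9 items of modest communality
f = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 1.0]], size=N)
lam = 0.5
x = lam * f[:, [0]] + np.sqrt(1 - lam**2) * rng.standard_normal((N, n_items))
y = lam * f[:, [1]] + np.sqrt(1 - lam**2) * rng.standard_normal((N, n_items))

def alpha(parcels):                          # coefficient alpha computed from parcel scores
    k = parcels.shape[1]
    return k / (k - 1) * (1 - parcels.var(axis=0, ddof=1).sum()
                          / parcels.sum(axis=1).var(ddof=1))

r_obs = np.corrcoef(x.sum(axis=1), y.sum(axis=1))[0, 1]   # fixed across allocations

estimates = []
for _ in range(200):                         # 200 random allocations of items to parcels
    gx = np.array_split(rng.permutation(n_items), n_parcels)
    gy = np.array_split(rng.permutation(n_items), n_parcels)
    px = np.column_stack([x[:, g].mean(axis=1) for g in gx])
    py = np.column_stack([y[:, g].mean(axis=1) for g in gy])
    estimates.append(r_obs / np.sqrt(alpha(px) * alpha(py)))  # parcel-based estimate

print(f"estimate range across allocations: {min(estimates):.3f} to {max(estimates):.3f}")
```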
Structural Equation Modeling | 2010
Robert C. MacCallum; Taehun Lee; Michael W. Browne
Two general frameworks have been proposed for evaluating the statistical power of tests of model fit in structural equation modeling (SEM). Under the Satorra–Saris (1985) approach, the power of the test of fit of Model A is evaluated by specifying a Model B, within which A is nested, as the alternative hypothesis and treating B as the true model. We then determine the power of the test of fit of A when B is true. Under the MacCallum–Browne–Sugawara (1996) approach, power is evaluated with respect to the test of fit of Model A against an alternative hypothesis specifying a true degree of model misfit. We then determine the power of the test of fit of A when a specified degree of misfit is assumed to exist as the alternative hypothesis. In both approaches the phenomenon of isopower is present, meaning that different alternative hypotheses (in the Satorra–Saris approach) or combinations of alternative hypotheses and other factors (in the MacCallum–Browne–Sugawara approach) yield the same level of power. We show how these isopower alternatives can be defined and identified in both frameworks, and we discuss implications of isopower for understanding the results of power analysis in applications of SEM.
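A numerical sketch of isopower in the second framework as I understand it: two different combinations of assumed misfit (RMSEA) and sample size that imply the same noncentrality parameter yield exactly the same power. The degrees of freedom, alpha, and RMSEA/N pairs are illustrative assumptions.

```python
# Sketch of isopower: distinct (N, RMSEA) alternatives implying the same
# noncentrality give identical power for the overall test of fit. Values assumed.
from scipy.stats import chi2, ncx2

df, alpha = 20, 0.05
crit = chi2.ppf(1 - alpha, df)

def power(N, rmsea):
    lam = (N - 1) * df * rmsea**2            # noncentrality implied by N and misfit
    return 1 - ncx2.cdf(crit, df, lam)

# Two different alternatives, same noncentrality (N - 1) * df * rmsea^2 = 20
print(power(401, 0.05))                      # lam = 400 * 20 * 0.0025 = 20
print(power(101, 0.10))                      # lam = 100 * 20 * 0.0100 = 20 -> same power
```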
Multivariate Behavioral Research | 2011
Jolynn Pek; Robert C. MacCallum
The detection of outliers and influential observations is routine practice in linear regression. Despite ongoing extensions and development of case diagnostics in structural equation models (SEM), their application has received limited attention and understanding in practice. The use of case diagnostics informs analysts of the uncertainty of model estimates under different subsets of the data and highlights unusual and important characteristics of certain cases. We present several measures of case influence applicable in SEM and illustrate their implementation, presentation, and interpretation with two empirical examples: (a) a common factor model on verbal and visual ability (Holzinger & Swineford, 1939) and (b) a general structural equation model assessing the effect of industrialization on democracy in a mediating model using country-level data (Bollen, 1989; Bollen & Arminger, 1991). Throughout these examples, three issues are emphasized. First, cases may impact different aspects of results, as identified by different measures of influence. Second, the important distinction between outliers and influential cases is highlighted. Third, the concept of good and bad cases is introduced: these are influential cases that improve or worsen overall model fit by their presence in the sample. We conclude with a discussion of the utility of detecting influential cases in SEM and present recommendations for the use of measures of case influence.
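Since the abstract takes linear regression as the familiar baseline, here is a short numpy sketch of leave-one-out case influence (Cook's distance) in that setting. It is a stand-in for the SEM-specific measures discussed in the article, and the data, including the planted influential case, are simulated for illustration.

```python
# Sketch of case influence in linear regression (Cook's distance via case deletion),
# the familiar analog of the SEM case-influence measures. Data are simulated.
import numpy as np

rng = np.random.default_rng(2)
n = 50
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
y = X @ np.array([1.0, 0.5]) + rng.standard_normal(n)
X[-1, 1], y[-1] = 6.0, -4.0                  # plant a high-leverage, discrepant case

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_full = ols(X, y)
resid = y - X @ b_full
p = X.shape[1]
s2 = resid @ resid / (n - p)

cooks = []
for i in range(n):                           # refit with case i deleted
    keep = np.arange(n) != i
    b_i = ols(X[keep], y[keep])
    d = b_full - b_i
    cooks.append(d @ X.T @ X @ d / (p * s2))

print(f"largest Cook's distance: case {int(np.argmax(cooks))}, D = {max(cooks):.2f}")
```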
Psychological Methods | 2012
Robert C. MacCallum; Michael C. Edwards; Li Cai
Muthén and Asparouhov (2012) have proposed and demonstrated an approach to model specification and estimation in structural equation modeling (SEM) using Bayesian methods. Their contribution builds on previous work in this area by (a) focusing on the translation of conventional SEM models into a Bayesian framework, wherein parameters fixed at zero in a conventional model can be respecified using small-variance priors, and (b) implementing their approach in software that is widely accessible. We recognize potential benefits for applied researchers as discussed by Muthén and Asparouhov, but we also see a tradeoff: effective use of the proposed approach places greater demands on users' expertise to navigate new complexities in model specification, parameter estimation, and evaluation of results. We also raise cautions regarding the issues of model modification and model fit. Although we see significant potential value in the use of Bayesian SEM, we also believe that effective use will require an awareness of these complexities.
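A conceptual sketch of the mechanics behind a small-variance prior, reduced to the conjugate normal case rather than the Muthén and Asparouhov implementation: a parameter that a conventional SEM would fix at zero (e.g., a cross-loading) instead receives a prior with small variance, which shrinks its estimate toward zero without forcing it to be exactly zero. The true value, sample size, and prior variances are assumptions.

```python
# Conceptual sketch (not the authors' implementation): shrinkage produced by a
# small-variance prior on a parameter a conventional model would fix at zero.
import numpy as np

rng = np.random.default_rng(3)
n, true_value, noise_sd = 200, 0.15, 1.0     # assumed small nonzero "cross-loading"
data = true_value + noise_sd * rng.standard_normal(n)

def posterior_mean(xbar, n, sigma2, tau2, prior_mean=0.0):
    # Normal likelihood with known variance sigma2, N(prior_mean, tau2) prior
    w = (n / sigma2) / (n / sigma2 + 1 / tau2)
    return w * xbar + (1 - w) * prior_mean

xbar = data.mean()
print(f"unconstrained estimate:            {xbar:.3f}")
print(f"small-variance prior (tau2=0.01):  {posterior_mean(xbar, n, 1.0, 0.01):.3f}")
print(f"diffuse prior (tau2=100):          {posterior_mean(xbar, n, 1.0, 100.0):.3f}")
```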
Structural Equation Modeling | 2015
Taehun Lee; Robert C. MacCallum
In applications of structural equation modeling (SEM), investigators obtain and interpret parameter estimates that are computed so as to produce optimal model fit. The obtained parameter estimates are optimal in the sense that model fit would deteriorate to some degree if any of those estimates were changed. If a small change of a parameter estimate has large influence on model fit, such a parameter can be called highly influential, whereas if a substantial perturbation of a parameter estimate has negligible influence on model fit, that parameter can be called uninfluential. This is the idea of parameter influence. This article covers 2 approaches to quantifying parameter influence. One existing approach determines the direction vector of parameter perturbation causing maximum deterioration in model fit. In this article, we propose a new approach for quantifying the influence of individual parameters on model fit. In this new approach, the influence of individual parameters is quantified as the degree of perturbation required to produce a prespecified value of change in model fit. Using empirical examples, we illustrate how these 2 methods can be effectively employed, both complementing each other and serving as a complement to conventional approaches to the interpretation of parameter estimates obtained in empirical data analyses.
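A sketch of the core idea of the new approach, with an ordinary least-squares sum of squares standing in for the SEM discrepancy function: for each parameter, find by root finding how far it must be moved from its optimum to worsen fit by a prespecified amount; the smaller the required perturbation, the more influential the parameter. The data, the target change in fit, and the search interval are assumptions.

```python
# Sketch: parameter influence as the perturbation needed to produce a prespecified
# change in fit. An OLS fit function stands in for the SEM discrepancy function.
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(4)
n = 200
X = np.column_stack([np.ones(n), rng.standard_normal(n), 3.0 * rng.standard_normal(n)])
y = X @ np.array([1.0, 0.8, 0.1]) + rng.standard_normal(n)

theta_hat = np.linalg.lstsq(X, y, rcond=None)[0]

def F(theta):                                # discrepancy (fit) function
    r = y - X @ theta
    return r @ r / n

F_min, target = F(theta_hat), 0.05           # prespecified change in fit (assumed)

for j in range(len(theta_hat)):
    def g(delta, j=j):                       # change in fit when parameter j moves by delta
        th = theta_hat.copy()
        th[j] += delta
        return F(th) - F_min - target
    d = brentq(g, 0.0, 10.0)                 # positive perturbation hitting the target
    print(f"parameter {j}: perturbation needed = {d:.3f} (smaller = more influential)")
```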
Psychological Methods | 2017
Taehun Lee; Robert C. MacCallum; Michael W. Browne
Extending work by Waller (2008) on fungible regression coefficients, we propose a method for computation of fungible parameter estimates in structural equation modeling. Such estimates are defined as distinct alternative solutions for parameter estimates, where all fungible solutions yield identical model fit that is only slightly worse than the fit provided by optimal estimates. When such alternative estimates are found to be highly discrepant from optimal estimates, then substantive interpretation based on optimal estimates is called into question. We present a computational method and 3 illustrations showing the potential impact of this approach in applied research, and we discuss implications and issues for further research.
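A sketch in the spirit of Waller's fungible regression weights, the case this article extends to SEM: construct alternative coefficient vectors whose fit is identical and only slightly worse than OLS, then see how far they stray from the optimal estimates. The construction below (random directions scaled to a fixed loss in fit) is a simple device of my own for the regression case, not the authors' SEM procedure, and all data values are simulated.

```python
# Sketch of fungible regression weights: alternative coefficient vectors with
# identical, slightly-worse-than-optimal fit (here, a 1% increase in SSE).
import numpy as np

rng = np.random.default_rng(5)
n, p = 300, 3
X = rng.standard_normal((n, p))
y = X @ np.array([0.4, 0.3, 0.2]) + rng.standard_normal(n)

b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
sse_ols = np.sum((y - X @ b_ols) ** 2)
loss = 0.01 * sse_ols                        # allowed increase in SSE (assumed)

# Since SSE(b_ols + t*d) = SSE(b_ols) + t^2 * d'X'X d, scaling each random
# direction d so that t^2 * d'X'X d = loss puts every alternative on the
# constant-loss ellipse: identical fit, slightly worse than optimal.
alternatives = []
for _ in range(1000):
    d = rng.standard_normal(p)
    t = np.sqrt(loss / (d @ X.T @ X @ d))
    alternatives.append(b_ols + t * d)
alternatives = np.array(alternatives)

print("OLS estimates:              ", np.round(b_ols, 3))
print("fungible weights, min/max:")
print(np.round(alternatives.min(axis=0), 3))
print(np.round(alternatives.max(axis=0), 3))
```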
Psychometrika | 2015
Robert C. MacCallum; Anthony O’Hagan
Wu and Browne (Psychometrika, 79, 2015) have proposed an innovative approach to modeling discrepancy between a covariance structure model and the population that the model is intended to represent. Their contribution is related to ongoing developments in the field of Uncertainty Quantification (UQ) on modeling and quantifying effects of model discrepancy. We provide an overview of basic principles of UQ and some relevant developments, and we examine the Wu–Browne work in that context. We view the Wu–Browne contribution as a seminal development providing a foundation for further work on the critical problem of model discrepancy in statistical modeling in psychological research.