Timothy Hayes
University of Southern California
Publications
Featured research published by Timothy Hayes.
Psychology and Aging | 2015
Timothy Hayes; Satoshi Usami; Ross Jacobucci; John J. McArdle
In this article, we describe a recent development in the analysis of attrition: using classification and regression trees (CART) and random forest methods to generate inverse sampling weights. These flexible machine learning techniques have the potential to capture complex nonlinear, interactive selection models, yet to our knowledge, their performance in the missing data analysis context has never been evaluated. To assess the potential benefits of these methods, we compare their performance with commonly employed multiple imputation and complete case techniques in 2 simulations. These initial results suggest that weights computed from pruned CART analyses performed well in terms of both bias and efficiency when compared with other methods. We discuss the implications of these findings for applied researchers.
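A minimal sketch of the weighting idea this abstract describes, using scikit-learn's CART and random forest classifiers as stand-ins for the authors' implementations; the variable names and the selection model below are illustrative assumptions, not details from the paper.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 500
aux = rng.normal(size=(n, 3))  # auxiliary covariates measured before dropout

# Hypothetical nonlinear, interactive selection model for staying in the study
p_stay = 1 / (1 + np.exp(-(aux[:, 0] * aux[:, 1] + np.abs(aux[:, 2]))))
observed = rng.random(n) < p_stay  # True = case completed the study

# "Pruned CART": a shallow tree limits overfitting of the selection model
cart = DecisionTreeClassifier(max_depth=3, min_samples_leaf=20).fit(aux, observed)
forest = RandomForestClassifier(n_estimators=500, min_samples_leaf=20).fit(aux, observed)

p_cart = cart.predict_proba(aux)[:, 1]    # estimated probability of remaining
p_forest = forest.predict_proba(aux)[:, 1]

# Inverse probability weights for the completers; clipping avoids extreme
# weights from very small estimated probabilities
weights = 1.0 / np.clip(p_cart[observed], 0.05, 1.0)
# `weights` would then be supplied to a weighted analysis of the completers.
```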
Structural Equation Modeling | 2016
Satoshi Usami; Timothy Hayes; John J. McArdle
This research focuses on the problem of model selection between the latent change score (LCS) model and the autoregressive cross-lagged (ARCL) model when the goal is to infer the longitudinal relationship between variables. We conducted a large-scale simulation study to (a) investigate the conditions under which these models return statistically (and substantively) different results concerning the presence of bivariate longitudinal relationships, and (b) ascertain the relative performance of an array of model selection procedures when such different results arise. The simulation results show that the primary sources of differences in parameter estimates across models are model parameters related to the slope factor scores in the LCS model (specifically, the correlation between the intercept factor and the slope factor scores) as well as the size of the data (specifically, the number of time points and sample size). Among several model selection procedures, correct selection rates were higher when using model fit indexes (i.e., comparative fit index, root mean square error of approximation) than when using a likelihood ratio test or any of several information criteria (i.e., Akaike’s information criterion, Bayesian information criterion, consistent AIC, and sample-size-adjusted BIC).
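For reference, the information criteria named in this abstract written out as formulas in a short helper; the function names are ours, and `loglik`, `k`, and `n` denote the maximized log-likelihood, the number of free parameters, and the sample size.

```python
import math

def aic(loglik, k):       # Akaike's information criterion
    return 2 * k - 2 * loglik

def bic(loglik, k, n):    # Bayesian information criterion
    return k * math.log(n) - 2 * loglik

def caic(loglik, k, n):   # consistent AIC
    return k * (math.log(n) + 1) - 2 * loglik

def sabic(loglik, k, n):  # sample-size-adjusted BIC: n replaced by (n + 2) / 24
    return k * math.log((n + 2) / 24) - 2 * loglik
```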
Multivariate Behavioral Research | 2015
Satoshi Usami; Timothy Hayes; John J. McArdle
The present paper focuses on the relationship between latent change score (LCS) and autoregressive cross-lagged (ARCL) factor models in longitudinal designs. These models originated from different theoretical traditions for different analytic purposes, yet they share similar mathematical forms. In this paper, we elucidate the mathematical relationship between these models and show that the LCS model is reduced to the ARCL model when fixed effects are assumed in the slope factor scores. Additionally, we provide an applied example using height and weight data from a gerontological study. Throughout the example, we emphasize caution in choosing which model (ARCL or LCS) to apply due to the risk of obtaining misleading results concerning the presence and direction of causal precedence between two variables. We suggest approaching model specification not only by comparing estimates and fit indices between the LCS and ARCL models (as well as other models) but also by giving appropriate weight to substantive and theoretical considerations, such as assessing the justifiability of the assumption of random effects in the slope factor scores.
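The reduction the abstract refers to can be sketched in one line. The notation below is ours rather than the paper's, and only one equation of the bivariate system is shown.

```latex
% One equation of a bivariate LCS model (notation ours, not the paper's):
\Delta y_t = y_t - y_{t-1} = s_y + \beta_y\, y_{t-1} + \gamma_y\, x_{t-1}
\quad\Longrightarrow\quad
y_t = s_y + (1 + \beta_y)\, y_{t-1} + \gamma_y\, x_{t-1}.
```

When the slope factor score s_y is a fixed constant rather than a random effect, the right-hand equation is exactly an ARCL regression with autoregressive coefficient 1 + \beta_y and cross-lagged coefficient \gamma_y.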
Structural Equation Modeling | 2017
Satoshi Usami; Timothy Hayes; John J. McArdle
When conducting longitudinal research, the investigation of between-individual differences in patterns of within-individual change can provide important insights. In this article, we use simulation methods to investigate the performance of a model-based exploratory data mining technique, structural equation model trees (SEM trees; Brandmaier, von Oertzen, McArdle, & Lindenberger, 2013), as a tool for detecting population heterogeneity. We use a latent change score model as a data generation model and manipulate the precision of the information provided by a covariate about the true latent profile, as well as other factors including sample size, under possible model misspecifications. Simulation results show that, compared with latent growth curve mixture models, SEM trees might be very sensitive to model misspecification in estimating the number of classes. This can be attributed to lower statistical power to identify classes, resulting from smaller between-class differences in the parameters prescribed by the template model.
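A heavily simplified sketch of the split search at the heart of SEM trees: refit a template model in the two subgroups defined by each candidate cutpoint of a covariate, and accept the best cutpoint only if the likelihood-ratio improvement is significant. A univariate Gaussian stands in for the structural equation model here, and all names are ours.

```python
import numpy as np
from scipy import stats

def gauss_loglik(y):
    # Maximized log-likelihood of a normal model for y (our stand-in for the SEM)
    return stats.norm.logpdf(y, y.mean(), y.std()).sum()

def best_split(covariate, y, alpha=0.05, df_diff=2, min_leaf=20):
    # Search cutpoints of one covariate; return (cutpoint, LR) if significant
    parent_ll = gauss_loglik(y)
    best = None
    for cut in np.unique(covariate)[:-1]:
        left, right = y[covariate <= cut], y[covariate > cut]
        if len(left) < min_leaf or len(right) < min_leaf:
            continue
        lr = 2 * (gauss_loglik(left) + gauss_loglik(right) - parent_ll)
        if best is None or lr > best[1]:
            best = (cut, lr)
    if best is not None and stats.chi2.sf(best[1], df_diff) < alpha:
        return best
    return None  # no significant heterogeneity along this covariate
```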
Multivariate Behavioral Research | 2017
Timothy Hayes; John J. McArdle
This simulation study investigated two families of classification and regression tree (CART) and random forest approaches for addressing missing data in small samples. The first approach uses CART and random forest analyses to model the probability of dropout and create inverse probability weights (Hayes, Usami, Jacobucci, & McArdle, 2015). The second addresses missing data by using CART and random forest analyses to generate multiple imputations (Doove, van Buuren, & Dusseldorp, 2014). We simulated data from a five-timepoint latent growth curve model with small and moderate sample sizes (N = 60 or 200) and correlated auxiliary variables. We then generated longitudinal dropout using a simple but deleterious missing not at random (MNAR) mechanism in which 30% of participants dropped out at the second timepoint according to either their scores on the main dependent variable (Y) or their true latent slopes. In each condition, we analyzed (a) the full data sets (complete data) as a benchmark, then analyzed the data sets with missing data. Standard approaches included (b) complete cases (listwise deletion) and (c) full information maximum likelihood (FIML). As a benchmark for the weighting methods, we used (d) inverse probability weights generated from the true missing data selection model, followed by inverse probability weights from predicted probabilities estimated by (e) logistic regression, (f) CART, and (g) random forest analyses predicting nondropout. Finally, we used (h) Bayesian regression, (i) CART, and (j) random forest approaches to multiple imputation via the mice package in R (van Buuren & Groothuis-Oudshoorn, 2011). MNAR dropout caused substantial bias under listwise deletion (b) and FIML estimation (c). The weighting methods (e, f, g) outperformed the multiple imputation methods (h, i, j).
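A minimal sketch of the MNAR dropout mechanism the abstract describes, under the assumption (ours, for illustration) that the 30% of participants with the lowest Y scores at the second timepoint drop out:

```python
import numpy as np

rng = np.random.default_rng(1)
n, t = 200, 5
y = rng.normal(size=(n, t)).cumsum(axis=1)  # toy trajectories over 5 timepoints

cutoff = np.quantile(y[:, 1], 0.30)  # 30% of participants drop out...
dropped = y[:, 1] <= cutoff          # ...based on Y itself at timepoint 2 (MNAR)
y_obs = y.copy()
y_obs[dropped, 1:] = np.nan          # dropouts are missing from timepoint 2 onward
```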
Automation in Construction | 2015
Arsalan Heydarian; Joao P. Carneiro; David Jason Gerber; Burcin Becerik-Gerber; Timothy Hayes; Wendy Wood
Journal of Consumer Psychology | 2012
Wendy Wood; Timothy Hayes
Autonomous Agents and Multi-Agent Systems (AAMAS) | 2012
Jun-young Kwak; Pradeep Varakantham; Rajiv T. Maheswaran; Milind Tambe; Farrokh Jazizadeh; Geoffrey Kavulya; Laura Klein; Burcin Becerik-Gerber; Timothy Hayes; Wendy Wood
Building and Environment | 2015
Saba Khashe; Arsalan Heydarian; David Jason Gerber; Burcin Becerik-Gerber; Timothy Hayes; Wendy Wood
Rethinking Comprehensive Design: Speculative Counterculture, Proceedings of the 19th International Conference on Computer-Aided Architectural Design Research in Asia (CAADRIA 2014), Kyoto, 14–16 May 2014, pp. 729–738 | 2014
Arsalan Heydarian; Joao P. Carneiro; David Jason Gerber; Burcin Becerik-Gerber; Timothy Hayes; Wendy Wood