Daniel W. Heck
University of Mannheim
Publication
Featured research published by Daniel W. Heck.
Journal of Mathematical Psychology | 2014
Daniel W. Heck; Morten Moshagen; Edgar Erdfelder
The Fisher information approximation (FIA) is an implementation of the minimum description length principle for model selection. Unlike information criteria such as AIC or BIC, it has the advantage of taking the functional form of a model into account. Unfortunately, FIA can be misleading in finite samples, resulting in an inversion of the correct rank order of complexity terms for competing models in the worst case. As a remedy, we propose a lower bound N′ for the sample size that suffices to preclude such errors. We illustrate the approach using three examples from the family of multinomial processing tree models.
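To make the finite-sample problem concrete, here is a minimal sketch (using simple binomial models chosen for illustration, not the paper's MPT examples). The FIA complexity term is (k/2)·ln(N/(2π)) plus the log of the integrated root-determinant of the Fisher information. Because the first term is negative for N < 2π, a model with more parameters can receive a *smaller* complexity penalty at small N, inverting the correct rank order:

```python
import math

def fia_complexity(k, n, log_geom):
    """FIA complexity term: (k/2) * ln(n / (2*pi)) + ln ∫ sqrt(det I(θ)) dθ."""
    return 0.5 * k * math.log(n / (2 * math.pi)) + log_geom

# Model A: one binomial parameter θ ∈ [0, 1];  ∫ dθ / sqrt(θ(1-θ)) = π
log_geom_a = math.log(math.pi)
# Model B: two independent binomial parameters, the second restricted to
# [0, 0.5]; the geometric term factorizes to π * (π / 2)
log_geom_b = math.log(math.pi) + math.log(math.pi / 2)

for n in (2, 4, 16, 64):
    ca = fia_complexity(1, n, log_geom_a)
    cb = fia_complexity(2, n, log_geom_b)
    status = "inverted" if cb < ca else "correct order"
    print(f"n={n:3d}  A={ca:6.3f}  B={cb:6.3f}  {status}")
```

At n = 2 the two-parameter model B is assigned the smaller complexity (the inversion the paper warns about); from roughly n = 3 onward the ordering is correct, which is the kind of threshold the proposed lower bound N′ formalizes.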
Memory & Cognition | 2014
Christine Platzer; Arndt Bröder; Daniel W. Heck
Decision situations are typically characterized by uncertainty: Individuals do not know the values of different options on a criterion dimension. For example, consumers do not know which is the healthiest of several products. To make a decision, individuals can use information about cues that are probabilistically related to the criterion dimension, such as sugar content or the concentration of natural vitamins. In two experiments, we investigated how the accessibility of cue information in memory affects which decision strategy individuals rely on. The accessibility of cue information was manipulated by means of a newly developed paradigm, the spatial-memory-cueing paradigm, which is based on a combination of the looking-at-nothing phenomenon and the spatial-cueing paradigm. The results indicated that people use different decision strategies, depending on the validity of easily accessible information. If the easily accessible information is valid, people stop information search and decide according to a simple take-the-best heuristic. If, however, information that comes to mind easily has a low predictive validity, people are more likely to integrate all available cue information in a compensatory manner.
Comprehensive Results in Social Psychology | 2017
Quentin Frederik Gronau; Sara van Erp; Daniel W. Heck; Joseph Cesario; Kai J. Jonas; Eric-Jan Wagenmakers
Earlier work found that – compared to participants who adopted constrictive body postures – participants who adopted expansive body postures reported feeling more powerful, showed an increase in testosterone and a decrease in cortisol, and displayed an increased tolerance for risk. However, these power pose effects have recently come under considerable scrutiny. Here, we present a Bayesian meta-analysis of six preregistered studies from this special issue, focusing on the effect of power posing on felt power. Our analysis improves on standard classical meta-analyses in several ways. First and foremost, we considered only preregistered studies, eliminating concerns about publication bias. Second, the Bayesian approach enables us to quantify evidence for both the alternative and the null hypothesis. Third, we use Bayesian model-averaging to account for the uncertainty with respect to the choice for a fixed-effect model or a random-effect model. Fourth, based on a literature review, we obtained an em...
Behavior Research Methods | 2018
Daniel W. Heck; Nina R. Arnold; Denis Arnold
Multinomial processing tree (MPT) models are a class of measurement models that account for categorical data by assuming a finite number of underlying cognitive processes. Traditionally, data are aggregated across participants and analyzed under the assumption of independently and identically distributed observations. Hierarchical Bayesian extensions of MPT models explicitly account for participant heterogeneity by assuming that the individual parameters follow a continuous hierarchical distribution. We provide an accessible introduction to hierarchical MPT modeling and present the user-friendly and comprehensive R package TreeBUGS, which implements the two most important hierarchical MPT approaches for participant heterogeneity—the beta-MPT approach (Smith & Batchelder, Journal of Mathematical Psychology 54:167-183, 2010) and the latent-trait MPT approach (Klauer, Psychometrika 75:70-98, 2010). TreeBUGS reads standard MPT model files and obtains Markov-chain Monte Carlo samples that approximate the posterior distribution. The functionality and output are tailored to the specific needs of MPT modelers and provide tests for the homogeneity of items and participants, individual and group parameter estimates, fit statistics, and within- and between-subjects comparisons, as well as goodness-of-fit and summary plots. We also propose and implement novel statistical extensions to include continuous and discrete predictors (as either fixed or random effects) in the latent-trait MPT model.
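TreeBUGS itself is an R package, so as a language-agnostic illustration of the beta-MPT data-generating assumption described above, the sketch below simulates heterogeneous participants for a one-high-threshold recognition model (the model choice and all hyperparameter values are illustrative, not taken from the paper):

```python
import random

random.seed(42)

# Beta-MPT assumption sketched for a one-high-threshold recognition model:
# P(hit) = d + (1 - d) * g,  P(false alarm) = g.
# Person-level parameters d and g are drawn from Beta hyperdistributions,
# which is what "participant heterogeneity" means in the beta-MPT approach.
def simulate_person(n_old, n_new, a_d=4.0, b_d=2.0, a_g=2.0, b_g=2.0):
    d = random.betavariate(a_d, b_d)  # this person's detection probability
    g = random.betavariate(a_g, b_g)  # this person's guessing probability
    hits = sum(random.random() < d + (1 - d) * g for _ in range(n_old))
    false_alarms = sum(random.random() < g for _ in range(n_new))
    return hits, false_alarms

data = [simulate_person(n_old=50, n_new=50) for _ in range(30)]
print(len(data), data[0])
```

Hierarchical MPT software such as TreeBUGS inverts exactly this generative scheme: given the per-person category counts, it recovers posterior distributions for the individual parameters and the Beta hyperparameters via MCMC.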
Psychological Review | 2017
Daniel W. Heck; Edgar Erdfelder
When making inferences about pairs of objects, one of which is recognized and the other is not, the recognition heuristic states that participants choose the recognized object in a noncompensatory way without considering any further knowledge. In contrast, information-integration theories such as parallel constraint satisfaction (PCS) assume that recognition is merely one of many cues that is integrated with further knowledge in a compensatory way. To test both process models against each other without manipulating recognition or further knowledge, we include response times in the r-model, a popular multinomial processing tree model for memory-based decisions. Essentially, this response-time-extended r-model allows us to test a crucial prediction of PCS, namely, that the integration of recognition-congruent knowledge leads to faster decisions compared to the consideration of recognition only—even though more information is processed. In contrast, decisions due to recognition-heuristic use are predicted to be faster than decisions affected by any further knowledge. Using the classical German-cities example, simulations show that the novel measurement model discriminates between both process models based on choices, decision times, and recognition judgments only. In a reanalysis of 29 data sets including more than 400,000 individual trials, noncompensatory choices of the recognized option were estimated to be slower than choices due to recognition-congruent knowledge. This corroborates the parallel information-integration account of memory-based decisions, according to which decisions become faster when the coherence of the available information increases.
Statistics & Probability Letters | 2015
Daniel W. Heck; Eric-Jan Wagenmakers; Richard D. Morey
We compared Bayes factors to normalized maximum likelihood for the simple case of selecting between an order-constrained versus a full binomial model. This comparison revealed two qualitative differences in testing order constraints regarding data dependence and model preference.
Journal of Mathematical Psychology | 2016
Daniel W. Heck; Eric-Jan Wagenmakers
Many psychological theories that are instantiated as statistical models imply order constraints on the model parameters. To fit and test such restrictions, order constraints of the form θi ≤ θj can be reparameterized with auxiliary parameters η ∈ [0, 1] to replace the original parameters by θi = η · θj. This approach is especially common in multinomial processing tree (MPT) modeling because the reparameterized, less complex model also belongs to the MPT class. Here, we discuss the importance of adjusting the prior distributions for the auxiliary parameters of a reparameterized model. This adjustment is important for computing the Bayes factor, a model selection criterion that measures the evidence in favor of an order constraint by trading off model fit and complexity. We show that uniform priors for the auxiliary parameters result in a Bayes factor that differs from the one that is obtained using a multivariate uniform prior on the order-constrained original parameters. As a remedy, we derive the adjusted priors for the auxiliary parameters of the reparameterized model. The practical relevance of the problem is underscored with a concrete example using the multi-trial pair-clustering model.
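The target quantity in both of the order-constraint abstracts above can be sketched with the standard encompassing-prior identity: the Bayes factor for θ1 ≤ θ2 versus the unconstrained model equals the posterior probability of the constraint divided by its prior probability. A minimal Monte Carlo sketch (the data values are hypothetical; independent uniform priors give Beta posteriors):

```python
import random

random.seed(1)

# Hypothetical data: successes out of trials in two conditions
k1, n1 = 12, 40   # condition 1
k2, n2 = 25, 40   # condition 2

draws = 100_000
post_hits = 0
for _ in range(draws):
    # Independent uniform priors => Beta(k+1, n-k+1) posteriors
    t1 = random.betavariate(k1 + 1, n1 - k1 + 1)
    t2 = random.betavariate(k2 + 1, n2 - k2 + 1)
    post_hits += (t1 <= t2)

prior_prob = 0.5                   # P(θ1 ≤ θ2) under the encompassing prior
post_prob = post_hits / draws
bf_restricted_vs_full = post_prob / prior_prob
print(round(bf_restricted_vs_full, 2))
```

Because the constraint region occupies half of the prior, the Bayes factor in favor of the restricted model is capped at 2 here; with the data above the posterior mass almost entirely satisfies θ1 ≤ θ2, so the estimate lands near that cap. The paper's point is that this answer is only reproduced under the η-reparameterization if the priors on the auxiliary parameters are adjusted accordingly.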
Applied Psychological Measurement | 2018
Robert Miller; Stefan Scherbaum; Daniel W. Heck; Thomas Goschke; Soeren Enge
Inferring processes or constructs from performance data is a major hallmark of cognitive psychometrics. Particularly, diffusion modeling of response times (RTs) from correct and erroneous responses using the Wiener distribution has become a popular measurement tool because it provides a set of psychologically interpretable parameters. However, an important precondition to identify all of these parameters is a sufficient number of RTs from erroneous responses. In the present article, we show by simulation that the parameters of the Wiener distribution can be recovered from tasks yielding very high or even perfect response accuracies using the shifted Wald distribution. Specifically, we argue that error RTs can be modeled as correct RTs that have undergone censoring by using techniques from parametric survival analysis. We illustrate our reasoning by fitting the Wiener and (censored) shifted Wald distribution to RTs from six participants who completed a Go/No-go task. In accordance with our simulations, diffusion modeling using the Wiener and the shifted Wald distribution yielded identical parameter estimates when the number of erroneous responses was predicted to be low. Moreover, the modeling of error RTs as censored correct RTs substantially improved the recovery of these diffusion parameters when premature trial timeout was introduced to increase the number of omission errors. Thus, the censored shifted Wald distribution provides a suitable means for diffusion modeling in situations when the Wiener distribution cannot be fitted without parametric constraints.
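The censoring idea can be sketched with a likelihood that uses the shifted-Wald density for observed RTs and the survival function for trials cut off by a deadline. The sketch below is illustrative only (simulated data, made-up parameter values); it uses SciPy's `invgauss`, whose location-shifted form is the shifted Wald:

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(7)

# Simulate shifted-Wald RTs: non-decision time plus an inverse-Gaussian part
shift_true, mu_true, scale_true = 0.2, 0.4, 1.5
rts = shift_true + stats.invgauss.rvs(mu=mu_true, scale=scale_true,
                                      size=3000, random_state=rng)

# Impose a response deadline: trials slower than 1.2 s are censored
deadline = 1.2
observed = rts[rts <= deadline]
n_censored = int(np.sum(rts > deadline))

def neg_loglik(params):
    shift, mu, scale = params
    if shift <= 0 or mu <= 0 or scale <= 0 or shift >= observed.min():
        return np.inf
    # Density for observed RTs, survival function for censored trials
    ll = stats.invgauss.logpdf(observed, mu, loc=shift, scale=scale).sum()
    ll += n_censored * stats.invgauss.logsf(deadline, mu, loc=shift, scale=scale)
    return -ll

res = minimize(neg_loglik, x0=[0.1, 0.5, 1.0], method="Nelder-Mead")
shift_hat, mu_hat, scale_hat = res.x
```

Ignoring the censored trials would bias all three estimates downward; adding the survival-function term is the standard survival-analysis correction the abstract refers to.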
Cognitive Psychology | 2017
Daniel W. Heck; Benjamin E. Hilbig; Morten Moshagen
Decision strategies explain how people integrate multiple sources of information to make probabilistic inferences. In the past decade, increasingly sophisticated methods have been developed to determine which strategy explains decision behavior best. We extend these efforts to test psychologically more plausible models (i.e., strategies), including a new, probabilistic version of the take-the-best (TTB) heuristic that implements a rank order of error probabilities based on sequential processing. Within a coherent statistical framework, deterministic and probabilistic versions of TTB and other strategies can directly be compared using model selection by minimum description length or the Bayes factor. In an experiment with inferences from given information, only three of 104 participants were best described by the psychologically plausible, probabilistic version of TTB. As in previous studies, most participants were classified as users of weighted-additive, a strategy that integrates all available information and approximates rational decisions.
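For readers unfamiliar with the heuristic being tested, here is a minimal sketch of deterministic TTB (the probabilistic version in the paper additionally assigns rank-ordered error probabilities to each cue; the cue profiles below are hypothetical):

```python
def take_the_best(cues_a, cues_b, validities):
    """Deterministic take-the-best: inspect cues in descending order of
    validity and decide at the first cue that discriminates."""
    order = sorted(range(len(validities)), key=lambda i: -validities[i])
    for i in order:
        if cues_a[i] != cues_b[i]:
            return "A" if cues_a[i] > cues_b[i] else "B"
    return "guess"  # no cue discriminates

# Binary cue profiles for two hypothetical options (1 = cue present)
print(take_the_best([1, 0, 1], [1, 1, 0], validities=[0.9, 0.8, 0.7]))
```

Weighted-additive, by contrast, would sum validity-weighted cue values for each option and pick the larger sum, which is why the two strategies can disagree on items where a highly valid cue opposes several less valid ones.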
Frontiers in Psychology | 2015
Edgar Erdfelder; Marta Castela; Martha Michalkiewicz; Daniel W. Heck
Mantonakis et al. (2009) suggested the pairwise-competition model (PCM) of preference construction. The basic idea is that humans form preferences among choice objects (e.g., types of wines) sequentially, such that the preferred object is the ultimate winner of a sequence of pairwise competitions: The object sampled first is compared to the second, the winner of this competition is then compared to the third object, resulting in a winner that is compared to the fourth, and so on. For a set of m objects, this recursive process terminates after m-1 pairwise competitions once the final object has been compared against the previous favorite. Canic and Pachur (2014) recently proposed a formal specification and elaboration of the PCM. Moreover, using computer simulations, they showed how variations in the model's parameters affect primacy and recency effects in sequential choice and how the model can account for various patterns of serial position effects documented in the literature. We appreciate the contributions of Mantonakis and collaborators as well as Canic and Pachur. Clearly, innovative theoretical ideas and their formal clarification are key elements of progress in science. However, we argue that making use of the full spectrum of formal modeling advantages necessarily involves fitting the model to empirical data. By narrowing down model-based analyses to computer simulations—as Canic and Pachur (2014) did—important questions remain unanswered and misleading conclusions cannot be ruled out. First, simulating a model and showing that it predicts serial-position effects similar to those found in empirical data does not imply that the model fits these data. A successful model should account for the entire distribution of observed data, not just for selected qualitative aspects such as primacy and recency effects. Formal goodness-of-fit tests provide the necessary information but simulations do not.
Second, if there are several nested versions of a model, as in the case of the PCM, simulations cannot tell us which model version provides the best account of the data taking model complexity into account (as influenced, among others, by the number of estimated parameters). The model fitting toolbox, in contrast, includes appropriate model selection measures, enabling us to identify the most parsimonious model version that is consistent with the data. Third, model fitting involves parameter estimation, that is, identification of the parameter values that best account for an existing set of data. It also provides statistical tests for parameters, enabling statistically sound evaluations of parameter differences between groups or conditions. Simulations, in contrast, do not provide such an elaborated framework. Because they ignore sampling variability in empirical data, they may mistakenly suggest parameter differences when in fact there are none. To illustrate the advantages of model fitting, we apply Canic and Pachur's (2014) formalization of the PCM to the data of Mantonakis and collaborators.
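The serial-position predictions of the sequential-tournament idea can be made explicit for a simplified special case (equal-strength objects with a single incumbent-challenge probability q; this is an illustrative reduction, not Canic and Pachur's full parameterization). The object in position 1 must survive all m-1 duels, while the object in position j must win its own duel and survive the remaining m-j:

```python
def pcm_win_probs(m, q):
    """Probability that the object at each serial position 1..m ends up as
    the overall winner, when a newly sampled object beats the current
    favorite with probability q (equal-strength special case)."""
    probs = [(1 - q) ** (m - 1)]              # position 1 survives all m-1 duels
    for j in range(2, m + 1):
        probs.append(q * (1 - q) ** (m - j))  # wins its duel, survives the rest
    return probs

p = pcm_win_probs(5, 0.3)
print([round(x, 3) for x in p])
```

With q = 0.3 this already yields the characteristic J-shaped serial-position curve (elevated first position, rising toward the last position). Fitting, as the commentary argues, goes further: one would estimate q (and any additional parameters) from observed choice frequencies and test whether the predicted distribution actually accounts for the data.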