Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics in which David Kellen is active.

Publication


Featured research published by David Kellen.


Behavior Research Methods | 2013

MPTinR: Analysis of multinomial processing tree models in R

Henrik Singmann; David Kellen

We introduce MPTinR, a software package developed for the analysis of multinomial processing tree (MPT) models. MPT models represent a prominent class of cognitive measurement models for categorical data with applications in a wide variety of fields. MPTinR is the first software for the analysis of MPT models in the statistical programming language R, providing a modeling framework that is more flexible than standalone software packages. MPTinR also introduces important features such as (1) the ability to calculate the Fisher information approximation measure of model complexity for MPT models, (2) the ability to fit models for categorical data outside the MPT model class, such as signal detection models, (3) a function for model selection across a set of nested and nonnested candidate models (using several model selection indices), and (4) multicore fitting. MPTinR is available from the Comprehensive R Archive Network at http://cran.r-project.org/web/packages/MPTinR/.
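
To illustrate the kind of model MPTinR is built to handle, the sketch below fits a simple two-high-threshold MPT model to old/new recognition counts by maximum likelihood. It is a minimal illustration in Python rather than R, it does not use MPTinR's actual interface, and the response counts and parameter names (D for detection, g for guessing "old") are invented for this example.

import numpy as np
from scipy.optimize import minimize

# Hypothetical counts: [hits, misses] for studied items,
# [false alarms, correct rejections] for new items.
old_counts = np.array([75, 25])
new_counts = np.array([20, 80])

def neg_log_likelihood(params):
    D, g = params  # D: probability of detection, g: probability of guessing "old"
    p_old = np.array([D + (1 - D) * g, (1 - D) * (1 - g)])  # hit, miss
    p_new = np.array([(1 - D) * g, D + (1 - D) * (1 - g)])  # false alarm, correct rejection
    eps = 1e-10  # keep the log finite at the parameter bounds
    return -(old_counts @ np.log(p_old + eps) + new_counts @ np.log(p_new + eps))

fit = minimize(neg_log_likelihood, x0=[0.5, 0.5],
               bounds=[(0.0, 1.0), (0.0, 1.0)], method="L-BFGS-B")
print(dict(zip(["D", "g"], fit.x.round(3))))  # approximately D = 0.55, g = 0.44

MPTinR wraps this kind of multinomial maximum-likelihood fitting (plus model selection and multicore fitting) behind model files specified in R, so the code above should be read only as a sketch of the underlying computation.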


Psychonomic Bulletin & Review | 2010

Toward a complete decision model of item and source recognition: A discrete-state approach

Karl Christoph Klauer; David Kellen

In source-monitoring experiments, participants study items from two sources (A and B). At test, they are presented with Source A items, Source B items, and new items. They are asked to decide whether a test item is old or new (item memory) and whether it is a Source A or a Source B item (source memory). Hautus, Macmillan, and Rotello (2008) developed models, couched in a bivariate signal detection framework, that account for item and source memory across several data sets collected in a confidence-rating response format. The present article enlarges the set of candidate models with a discrete-state model. The model is a straightforward extension of Bayen, Murnane, and Erdfelder's (1996) multinomial model of source discrimination to confidence ratings. On the basis of the evaluation criteria adopted by Hautus et al., it provides a better account of the data than do Hautus et al.'s models.


Psychonomic Bulletin & Review | 2013

Recognition memory models and binary-response ROCs: A comparison by minimum description length

David Kellen; Karl Christoph Klauer; Arndt Bröder

Model comparison in recognition memory has frequently relied on receiver operating characteristics (ROC) data. We present a meta-analysis of binary-response ROC data that builds on previous such meta-analyses and extends them in several ways. Specifically, we include more data and consider a much more comprehensive set of candidate models. Moreover, we bring to bear modern developments in model selection on the current selection problem. The new methods are based on the minimum description length framework, leading to the normalized maximum likelihood (NML) index for assessing model performance, taking into account differences between the models in flexibility due to functional form. Overall, NML results for individual ROC data indicate a preference for a discrete-state model that assumes a mixture of detection and guessing states.
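
For reference, the NML criterion has the standard general form (a sketch of the definition, not the paper's specific computation):

\mathrm{NML}(x) = -\ln p\bigl(x \mid \hat{\theta}(x)\bigr) + \ln \sum_{y} p\bigl(y \mid \hat{\theta}(y)\bigr),

where \hat{\theta}(x) is the maximum-likelihood estimate for the observed data x and the sum runs over all data sets the model could have produced. The second term is a complexity penalty that reflects flexibility due to functional form, and the model with the smallest NML value is preferred.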


Memory | 2013

Validating a two-high-threshold measurement model for confidence rating data in recognition.

Arndt Bröder; David Kellen; Julia Schütz; Constanze Rohrmeier

Signal Detection models as well as the Two-High-Threshold model (2HTM) have been used successfully as measurement models in recognition tasks to disentangle memory performance and response biases. A popular method in recognition memory is to elicit confidence judgements about the presumed old/new status of an item, allowing for the easy construction of ROCs. Since the 2HTM assumes fewer latent memory states than there are response options in confidence ratings, the 2HTM has to be extended by a mapping function which models individual rating scale usage. Unpublished data from 2 experiments in Bröder and Schütz (2009) validate the core memory parameters of the model, and 3 new experiments show that the response mapping parameters are selectively affected by manipulations intended to affect rating scale use, independently of overall old/new bias. Comparisons with SDT show that both models behave similarly, a case that highlights the notion that both modelling approaches can be valuable (and complementary) elements in a researcher's toolbox.
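
A simplified way to write the extension (a schematic sketch, not the paper's exact parameterization): for an old item and confidence categories k = 1, ..., K,

P(k \mid \text{old}) = D_o \, m_k^{(d)} + (1 - D_o)\bigl[g \, m_k^{(o)} + (1 - g)\, m_k^{(n)}\bigr],

where D_o is the probability of detecting the item as old, g is the probability of guessing "old" in the uncertainty state, and the m_k terms are state-to-rating mapping probabilities (each set summing to 1) that distribute the detect, guess-"old", and guess-"new" states over the confidence categories. On this account, manipulations of rating-scale use should shift only the mapping parameters, while the memory parameters such as D_o remain unaffected.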


PLOS ONE | 2014

Intuitive Logic Revisited: New Data and a Bayesian Mixed Model Meta-Analysis

Henrik Singmann; Karl Christoph Klauer; David Kellen

Recent research on syllogistic reasoning suggests that the logical status (valid vs. invalid) of even difficult syllogisms can be intuitively detected via differences in conceptual fluency between logically valid and invalid syllogisms when participants are asked to rate how much they like a conclusion following from a syllogism (Morsanyi & Handley, 2012). These claims of an intuitive logic are at odds with most theories of syllogistic reasoning, which posit that detecting the logical status of difficult syllogisms requires effortful and deliberate cognitive processes. We present new data replicating the effects reported by Morsanyi and Handley, but show that this effect is eliminated when controlling for a possible confound in terms of conclusion content. Additionally, we reanalyze three studies without this confound with a Bayesian mixed model meta-analysis (i.e., controlling for participant and item effects), which provides evidence for the null hypothesis and against Morsanyi and Handley's claim.
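
To make "controlling for participant and item effects" concrete, a mixed model of this kind can be sketched as (a generic formulation, not the exact specification used in the paper):

y_{pi} = \beta_0 + \beta_1\,\mathrm{validity}_{pi} + u_p + w_i + \varepsilon_{pi}, \qquad u_p \sim \mathcal{N}(0, \sigma_u^2), \; w_i \sim \mathcal{N}(0, \sigma_w^2),

where y_{pi} is participant p's liking rating for conclusion i, \beta_1 is the intuitive-logic effect of interest, and the crossed random effects u_p and w_i absorb participant and item variability. The Bayesian meta-analysis then quantifies the evidence for \beta_1 = 0 against \beta_1 \neq 0 across studies.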


Journal of Experimental Psychology: Learning, Memory and Cognition | 2014

Discrete-state and continuous models of recognition memory: Testing core properties under minimal assumptions

David Kellen; Karl Christoph Klauer

A classic discussion in the recognition-memory literature concerns the question of whether recognition judgments are better described by continuous or discrete processes. These two hypotheses are instantiated by the signal detection theory model (SDT) and the 2-high-threshold model, respectively. Their comparison has almost invariably relied on receiver operating characteristic data. A new model-comparison approach based on ranking judgments is proposed here. This approach has several advantages: It does not rely on particular distributional assumptions for the models, and it does not require costly experimental manipulations. These features permit the comparison of the models by means of simple paired-comparison tests instead of goodness-of-fit results and complex model-selection methods that are predicated on many auxiliary assumptions. Empirical results from 2 experiments are consistent with a continuous memory process such as the one assumed by SDT.


Psychonomic Bulletin & Review | 2014

The impact of subjective recognition experiences on recognition heuristic use: A multinomial processing tree approach

Marta Castela; David Kellen; Edgar Erdfelder; Benjamin E. Hilbig

The recognition heuristic (RH) theory states that, in comparative judgments (e.g., Which of two cities has more inhabitants?), individuals infer that recognized objects score higher on the criterion (e.g., population) than unrecognized objects. Indeed, it has often been shown that recognized options are judged to outscore unrecognized ones (e.g., recognized cities are judged as larger than unrecognized ones), although different accounts of this general finding have been proposed. According to the RH theory, this pattern occurs because the binary recognition judgment determines the inference and no other information will reverse this. An alternative account posits that recognized objects are chosen because knowledge beyond mere recognition typically points to the recognized object. A third account can be derived from the memory-state heuristic framework. According to this framework, underlying memory states of objects (rather than recognition judgments) determine the extent of RH use: When two objects are compared, the one associated with a “higher” memory state is preferred, and reliance on recognition increases with the “distance” between their memory states. The three accounts make different predictions about the impact of subjective recognition experiences—whether an object is merely recognized or recognized with further knowledge—on RH use. We estimated RH use for different recognition experiences across 16 published data sets, using a multinomial processing tree model. Results supported the memory-state heuristic in showing that RH use increases when recognition is accompanied by further knowledge.


Psychological Review | 2015

Signal detection and threshold modeling of confidence-rating ROCs: A critical test with minimal assumptions.

David Kellen; Karl Christoph Klauer

An ongoing discussion in the recognition-memory literature concerns the question of whether recognition judgments reflect a direct mapping of graded memory representations (a notion that is instantiated by signal detection theory) or whether they are mediated by a discrete-state representation with the possibility of complete information loss (a notion that is instantiated by threshold models). These 2 accounts are usually evaluated by comparing their (penalized) fits to receiver operating characteristic data, a procedure that is predicated on substantial auxiliary assumptions, which if violated can invalidate results. We show that the 2 accounts can be compared on the basis of critical tests that invoke only minimal assumptions. Using previously published receiver operating characteristic data, we show that confidence-rating judgments are consistent with a discrete-state account.


Psychological Research-psychologische Forschung | 2014

Analyzing distributional properties of interference effects across modalities: Chances and challenges

Kerstin Dittrich; David Kellen; Christoph Stahl

In research investigating Stroop or Simon effects, data are typically analyzed at the level of mean response time (RT), with results showing faster responses for compatible than for incompatible trials. However, this analysis provides only limited information as it glosses over the shape of the RT distributions and how they may differ across tasks and experimental conditions. These limitations have encouraged the analysis of RT distributions using delta plots. In the present review, we aim to bring together research on distributional properties of auditory and visual interference effects. Extending previous reviews on distributional properties of the Simon effect, we additionally review studies reporting distributional analyses of Stroop effects. We show that distributional analyses of sequential effects (i.e., taking into account congruency of the previous trial) capture important similarities and differences of interference effects across tasks (Simon, Stroop) as well as across sensory modalities, despite some challenges associated with this approach.
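
To make the delta-plot construction concrete, the sketch below computes a quantile-based delta plot from two hypothetical RT samples. The quantile levels and simulated data are arbitrary choices for illustration, not values from the reviewed studies.

import numpy as np

def delta_plot(rt_compatible, rt_incompatible, probs=(0.1, 0.3, 0.5, 0.7, 0.9)):
    # Quantiles of each RT distribution at the requested probability levels.
    q_c = np.quantile(rt_compatible, probs)
    q_i = np.quantile(rt_incompatible, probs)
    mean_rt = (q_c + q_i) / 2   # x-axis: average RT per quantile
    effect = q_i - q_c          # y-axis: interference effect per quantile
    return mean_rt, effect

rng = np.random.default_rng(0)
rt_c = rng.gamma(shape=6, scale=70, size=400) + 200  # fake compatible RTs in ms
rt_i = rng.gamma(shape=6, scale=75, size=400) + 220  # fake incompatible RTs in ms
print(delta_plot(rt_c, rt_i))

In practice these quantile points are computed per participant (and, for sequential effects, per previous-trial congruency) and then averaged before the interference effect is plotted against mean RT.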


Frontiers in Psychology | 2014

Concerns with the SDT approach to causal conditional reasoning: a comment on Trippas, Handley, Verde, Roser, McNair, and Evans (2014).

Henrik Singmann; David Kellen

Signal Detection Theory (SDT; Wickens, 2002) is a prominent measurement model that characterizes observed classification responses in terms of discriminability and response bias. In recent years, SDT has been increasingly applied within the psychology of reasoning (Rotello and Heit, 2009; Dube et al., 2010; Heit and Rotello, 2010, 2014; Trippas et al., 2013). SDT assumes that different stimulus types (e.g., valid and invalid syllogisms) are associated with different (presumably Gaussian) evidence or argument-strength distributions. Responses (e.g., “Valid” and “Invalid”) are produced by comparing the argument strength of each syllogism with a set of established response criteria (Figure 1A). The response profile associated with each stimulus type can be represented as a Receiver Operating Characteristic (ROC) function by plotting performance pairs (i.e., hits and false alarms) across different response criteria, which Gaussian SDT predicts to be curvilinear (Figure 1B).

[Figure 1: (A) A graphical representation of the SDT model for a syllogistic reasoning task. (B) ROC curve representing the cumulative probabilities for hypothetical pairs of hits and false alarms (“valid” responses to valid and invalid syllogisms).]

Trippas et al. (2014; henceforth THVRME) applied SDT to causal-conditional reasoning and made two points: (1) that SDT provides an informative characterization of data from a reasoning experiment with two orthogonal factors such as believability and argument validity; (2) that an inspection of the shape of causal-conditional ROCs provides insights into the suitability of normative theories, with the consequence that affirmation and denial problems should be considered separately. The goal of this comment is to make two counterarguments: First, that the SDT model is often unable to provide an informative characterization of data in designs such as those discussed by THVRME, because it fails to unambiguously separate argument strength and response bias; THVRME's conclusion that “believability had no effect on accuracy […] but seemed to affect response bias” (p. 4) hinges solely on arbitrary assumptions. Second, that THVRME's reliance on ROC shape to justify a separation between affirmation and denial problems is unnecessary and misguided.
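
For concreteness, under an equal-variance Gaussian SDT model (a textbook sketch, not THVRME's specific parameterization), the hit and false-alarm rates at a response criterion c are

H = \Phi(d' - c), \qquad F = \Phi(-c),

so that z(H) = z(F) + d': the ROC is a straight line with unit slope in z-coordinates and a symmetric curvilinear function in probability coordinates. Relaxing the equal-variance assumption changes the slope, which is one reason conclusions about “accuracy” versus “response bias” in such designs depend on distributional assumptions rather than on the data alone.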

Collaboration


Dive into David Kellen's collaborations.

Top Co-Authors

Chad Dubé

University of South Florida
