Daniel R. Cavagnaro
California State University, Fullerton
Publications
Featured research published by Daniel R. Cavagnaro.
Neural Computation | 2010
Daniel R. Cavagnaro; Jay I. Myung; Mark A. Pitt; Janne V. Kujala
Discriminating among competing statistical models is a pressing issue for many experimentalists in the field of cognitive science. Resolving this issue begins with designing maximally informative experiments. To this end, the problem to be solved in adaptive design optimization is identifying experimental designs under which one can infer the underlying model in the fewest possible steps. When the models under consideration are nonlinear, as is often the case in cognitive science, this problem can be impossible to solve analytically without simplifying assumptions. However, as we show in this letter, a full solution can be found numerically with the help of a Bayesian computational trick derived from the statistics literature, which recasts the problem as a probability density simulation in which the optimal design is the mode of the density. We use a utility function based on mutual information and give three intuitive interpretations of the utility function in terms of Bayesian posterior estimates. As a proof of concept, we offer a simple example application to an experiment on memory retention.
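For reference, the mutual-information utility described above can be written compactly. The notation below is assumed here for illustration (m indexes models, d a design, y an outcome, and p(y | m, d) the model's marginal likelihood at design d); it is not copied from the letter.

```latex
U(d) \;=\; I(M; Y \mid d)
     \;=\; \sum_{m} p(m) \int p(y \mid m, d)\,
           \log \frac{p(m \mid y, d)}{p(m)} \, dy ,
\qquad
d^{*} \;=\; \arg\max_{d} U(d).
```

Read this way, the utility of a design is the expected gain in information about the model indicator, i.e., the expected divergence from the prior to the posterior over models, which is one of the posterior-based interpretations the letter refers to.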
Psychonomic Bulletin & Review | 2011
Daniel R. Cavagnaro; Mark A. Pitt; Jay I. Myung
An ideal experiment is one in which data collection is efficient and the results are maximally informative. This standard can be difficult to achieve because of uncertainties about the consequences of design decisions. We demonstrate the success of a Bayesian adaptive method (adaptive design optimization, ADO) in optimizing design decisions when comparing models of the time course of forgetting. Across a series of testing stages, ADO intelligently adapts the retention interval in order to maximally discriminate power and exponential models. Compared with two different control (non-adaptive) methods, ADO distinguishes the models decisively, with the results unambiguously favoring the power model. Analyses suggest that ADO’s success is due in part to its flexibility in adjusting to individual differences. This implementation of ADO serves as an important first step in assessing its applicability and usefulness to psychology.
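A simplified sketch of the adaptive loop is given below. It is an illustration, not the authors' implementation: after each stage the posterior over the two forgetting models is updated and the next retention interval is chosen to maximize the expected information gain. The parameter values, candidate intervals, and simulated subject are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_recall_power(t):        # power model of forgetting (illustrative parameters)
    return 0.9 * (1 + 0.2 * t) ** -0.7

def p_recall_exponential(t):  # exponential model of forgetting
    return 0.9 * np.exp(-0.05 * t)

def info_gain(post, preds):
    """Mutual information between the model indicator and one Bernoulli outcome."""
    gain = 0.0
    for y in (0, 1):
        lik = preds if y == 1 else 1 - preds
        joint = post * lik
        gain += np.sum(joint * np.log(lik / joint.sum()))
    return gain

intervals = np.array([1, 2, 5, 10, 20, 50, 100])   # candidate retention intervals
posterior = np.array([0.5, 0.5])                   # P(power), P(exponential)

for stage in range(10):
    utilities = [info_gain(posterior, np.array([p_recall_power(t), p_recall_exponential(t)]))
                 for t in intervals]
    t = intervals[int(np.argmax(utilities))]
    y = rng.random() < p_recall_power(t)  # simulate a subject who truly forgets by a power law
    likelihood = np.array([p_recall_power(t), p_recall_exponential(t)])
    likelihood = likelihood if y else 1 - likelihood
    posterior = posterior * likelihood
    posterior /= posterior.sum()

print("posterior over (power, exponential):", posterior.round(3))
```

In this toy version the posterior feeds back into the design choice at every stage, which is the sense in which the procedure adapts to the individual being tested.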
Philosophical Transactions of the Royal Society B | 2009
Michel Regenwetter; Bernard Grofman; Anna Popova; William Messner; Daniel R. Cavagnaro
Behavioural social choice has been proposed as a social choice parallel to seminal developments in other decision sciences, such as behavioural decision theory, behavioural economics, behavioural finance and behavioural game theory. Behavioural paradigms compare how rational actors should make certain types of decisions with how real decision makers behave empirically. We highlight that important theoretical predictions in social choice theory change dramatically under even minute violations of standard assumptions. Empirical data violate those critical assumptions. We argue that the nature of preference distributions in electorates is ultimately an empirical question, which social choice theory has often neglected. We also emphasize important insights for research on decision making by individuals. When researchers aggregate individual choice behaviour in laboratory experiments to report summary statistics, they are implicitly applying social choice rules. Thus, they should be aware of the potential for aggregation paradoxes. We hypothesize that such problems may substantially mar the conclusions of a number of (sometimes seminal) papers in behavioural decision research.
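The aggregation concern can be made concrete with a classic toy example (my own illustration, not taken from the paper): every individual ranking below is transitive, yet pairwise majority aggregation produces a cycle.

```python
from itertools import combinations

# 4 voters rank A > B > C, 3 rank B > C > A, 3 rank C > A > B
rankings = [("A", "B", "C")] * 4 + [("B", "C", "A")] * 3 + [("C", "A", "B")] * 3

def majority_prefers(x, y):
    wins = sum(r.index(x) < r.index(y) for r in rankings)
    return wins > len(rankings) / 2

for x, y in combinations("ABC", 2):
    if majority_prefers(x, y):
        print(f"majority prefers {x} over {y}")
    else:
        print(f"majority prefers {y} over {x}")
```

The output shows A beating B, B beating C, and C beating A: a Condorcet cycle arising purely from aggregation, which is the kind of paradox that summary statistics over pooled individual choices can hide.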
Journal of Risk and Uncertainty | 2016
Daniel R. Cavagnaro; Gabriel J. Aranovich; Samuel M. McClure; Mark A. Pitt; Jay I. Myung
The tendency to discount the value of future rewards has become one of the best-studied constructs in the behavioral sciences. Although hyperbolic discounting remains the dominant quantitative characterization of this phenomenon, a variety of models have been proposed and consensus around the one that most accurately describes behavior has been elusive. To help bring some clarity to this issue, we propose an Adaptive Design Optimization (ADO) method for fitting and comparing models of temporal discounting. We then conduct an ADO experiment aimed at discriminating among six popular models of temporal discounting. Rather than supporting a single underlying model, our results show that each model is inadequate in some way to describe the full range of behavior exhibited across subjects. The precision of the results provided by ADO further identifies specific properties of models, such as accommodating both increasing and decreasing impatience, that are required to describe temporal discounting broadly.
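For concreteness, here is a minimal sketch of how discounting models of this kind are typically specified and fit to binary choices. The functional forms are standard in the literature, but the parameterization, the logistic choice rule, and the data below are illustrative assumptions, not the six models compared in the paper.

```python
import numpy as np

def exponential(amount, delay, k):
    return amount * np.exp(-k * delay)

def hyperbolic(amount, delay, k):          # Mazur-style hyperbolic discounting
    return amount / (1 + k * delay)

def hyperboloid(amount, delay, k, s):      # generalized hyperbolic form
    return amount / (1 + k * delay) ** s

def choice_loglik(params, data, discount):
    """Log-likelihood of the observed choices under a logistic choice rule."""
    *theta, temp = params                  # discounting parameters, then temperature
    ll = 0.0
    for ss_amt, ss_del, ll_amt, ll_del, chose_ll in data:
        v_ss = discount(ss_amt, ss_del, *theta)
        v_ll = discount(ll_amt, ll_del, *theta)
        p_ll = 1.0 / (1.0 + np.exp(-temp * (v_ll - v_ss)))
        ll += np.log(p_ll if chose_ll else 1.0 - p_ll)
    return ll

# hypothetical trials: (smaller-sooner amount, delay, larger-later amount, delay, chose later?)
data = [(20, 0, 50, 30, 1), (40, 0, 50, 30, 0), (20, 0, 50, 90, 0)]
print(choice_loglik([0.05, 0.5], data, hyperbolic))
```

Comparing candidate models then amounts to maximizing (or integrating) this likelihood under each discounting function and scoring the results with a model selection criterion.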
Decision | 2017
Michel Regenwetter; Daniel R. Cavagnaro; Anna Popova; Ying Guo; Chris E. Zwilling; Shiau Hong Lim; Jeffrey R. Stevens
Behavioral theories of intertemporal choice involve many moving parts. Most descriptive theories model how time delays and rewards are perceived, compared, and/or combined into preferences or utilities. Most behavioral studies neglect to spell out how such constructs translate into heterogeneous observable choices. We consider several broad models of transitive intertemporal preference and combine these with several mathematically formal, yet very general, models of heterogeneity. We evaluate 20 probabilistic models of intertemporal choice using binary choice data from two large-scale experiments. Our analysis documents the interplay between heterogeneity and parsimony in accounting for empirical data: We find evidence for heterogeneity across individuals and across stimulus sets that can be accommodated with transitive models of varying complexity. We do not find systematic violations of transitivity in our data. Future work should continue to tackle the complex trade-off between parsimony and heterogeneity.
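One of the simplest ways to combine a deterministic transitive preference with a formal error theory is sketched below purely for illustration. The constant-error ("tremble") specification, the ranking, and the choice counts are my assumptions, not one of the twenty models evaluated in the paper.

```python
from math import comb, log

def tremble_loglik(ranking, counts, epsilon):
    """Constant-error model: each binary choice agrees with the ranking
    with probability 1 - epsilon.
    counts[(x, y)] = (times x was chosen over y, total presentations of {x, y})."""
    ll = 0.0
    for (x, y), (k, n) in counts.items():
        p = 1 - epsilon if ranking.index(x) < ranking.index(y) else epsilon
        ll += log(comb(n, k)) + k * log(p) + (n - k) * log(1 - p)
    return ll

counts = {("A", "B"): (18, 20), ("B", "C"): (16, 20), ("A", "C"): (19, 20)}
print(tremble_loglik(("A", "B", "C"), counts, epsilon=0.1))
```

Richer heterogeneity specifications (e.g., mixtures over rankings or random preference models) replace the single fixed ranking with a distribution over preference states, at the cost of additional parameters.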
Psychological Methods | 2018
Daniel R. Cavagnaro
Within modern psychology, computational and statistical models play an important role in describing a wide variety of human behavior. Model selection analyses are typically used to classify individuals according to the model(s) that best describe their behavior. These classifications are inherently probabilistic, which presents challenges for performing group-level analyses, such as quantifying the effect of an experimental manipulation. We answer this challenge by presenting a method for quantifying treatment effects in terms of distributional changes in model-based (i.e., probabilistic) classifications across treatment conditions. The method uses hierarchical Bayesian mixture modeling to incorporate classification uncertainty at the individual level into the test for a treatment effect at the group level. We illustrate the method with several worked examples, including a reanalysis of the data from Kellen, Mata, and Davis-Stober (2017), and analyze its performance more generally through simulation studies. Our simulations show that the method is both more powerful and less prone to Type I errors than Fisher's exact test when classifications are uncertain. In the special case where classifications are deterministic, we find a near-perfect power-law relationship between the Bayes factor, derived from our method, and the p value obtained from Fisher's exact test. We provide code in an online supplement that allows researchers to apply the method to their own data.
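The motivation for incorporating classification uncertainty can be illustrated with a toy Monte Carlo (this is not the hierarchical Bayesian mixture model of the paper): when individual classifications are probabilistic, the contingency table fed to Fisher's exact test, and hence its p value, is itself uncertain. The classification probabilities below are invented for illustration.

```python
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(1)

# P(classified as "model A") for each individual, by treatment condition
p_model_a = {
    "control":   np.array([0.9, 0.8, 0.7, 0.85, 0.6, 0.75]),
    "treatment": np.array([0.4, 0.3, 0.55, 0.2, 0.45, 0.35]),
}

p_values = []
for _ in range(1000):
    table = []
    for cond in ("control", "treatment"):
        draws = rng.random(p_model_a[cond].size) < p_model_a[cond]
        table.append([draws.sum(), (~draws).sum()])
    p_values.append(fisher_exact(table)[1])

print(f"Fisher p value across classification draws: "
      f"median={np.median(p_values):.3f}, IQR={np.percentile(p_values, [25, 75]).round(3)}")
```

A hierarchical mixture model handles this by treating each individual's class membership as a latent variable rather than fixing it at its most probable value.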
Psychological Methods | 2018
Michel Regenwetter; Daniel R. Cavagnaro
Statistical analyses of data often add some constraints to a theory and leave out others, so as to convert the theory into a testable hypothesis. In the case of binary data, such as yes/no responses or the presence/absence of a symptom or behavior, theories often predict that certain response probabilities change monotonically in a specific direction and/or that certain response probabilities are bounded from above or below in specific ways. A regression analysis is often not faithful to such a theory: it may leave out parsimonious constraints, and extraneous assumptions such as linearity or log-linearity, or even the existence of a functional relationship, are dictated by the method rather than the theory. That mismatch may bias the results of empirical analyses and jeopardize attempts at meaningful replication of psychological research. This tutorial shows how contemporary order-constrained methods can shed more light on such questions, using far weaker auxiliary assumptions, while also formulating more detailed, nuanced, and concise hypotheses and allowing for quantitative model selection.
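Below is a minimal sketch of one standard order-constrained analysis: an encompassing-prior Bayes factor for a monotonicity constraint on binomial response probabilities. The data and the Beta(1, 1) priors are illustrative assumptions, and this is one common approach rather than necessarily the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

# hypothetical yes/no data for three ordered conditions: (successes, trials)
data = [(12, 30), (18, 30), (25, 30)]

n_draws = 200_000
prior = rng.beta(1, 1, size=(n_draws, 3))
posterior = np.column_stack([rng.beta(1 + k, 1 + n - k, size=n_draws) for k, n in data])

def prop_ordered(samples):
    """Proportion of Monte Carlo draws satisfying p1 <= p2 <= p3."""
    return np.mean(np.all(np.diff(samples, axis=1) >= 0, axis=1))

bf_order_vs_unconstrained = prop_ordered(posterior) / prop_ordered(prior)
print(f"Bayes factor for p1 <= p2 <= p3 vs. unconstrained: {bf_order_vs_unconstrained:.2f}")
```

The prior proportion satisfying the order is about 1/6, so the Bayes factor simply measures how much the data concentrate posterior mass inside the order-constrained region, without any linearity assumption.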
Decision | 2018
Denis M. McCarthy; Daniel R. Cavagnaro; Mason H. Price; Nicholas Brown; Sanghyuk Park
Alcohol intoxication is well known to impair a number of cognitive abilities required for sound decision making. We tested whether an intoxicating dose of alcohol altered whether individuals satisfied a basic property of rational decision making, transitivity of preference. Our study was within-subjects in design and our analysis teased apart stable, yet error-prone, preferences from variable, error-free preferences. We find that alcohol intoxication does not appear to play a major role in determining whether subjects violate transitivity. For a minority of individuals, we find that alcohol intoxication does impact how they select among and/or perceive lotteries with similar attribute values. This, in turn, can cause them to alter various aspects of their preference structure.
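For illustration, here is a minimal check of weak stochastic transitivity on binary choice proportions. This is a much simpler criterion than the error-theoretic analysis used in the paper, and the proportions below are invented.

```python
from itertools import permutations

# proportion of trials on which the first lottery was chosen over the second
p = {("a", "b"): 0.65, ("b", "c"): 0.60, ("a", "c"): 0.40,
     ("b", "a"): 0.35, ("c", "b"): 0.40, ("c", "a"): 0.60}

violations = []
for x, y, z in permutations("abc"):
    # weak stochastic transitivity: if P(x > y) >= .5 and P(y > z) >= .5,
    # then P(x > z) should also be >= .5
    if p[(x, y)] >= 0.5 and p[(y, z)] >= 0.5 and p[(x, z)] < 0.5:
        violations.append((x, y, z))

print("weak stochastic transitivity violations:", violations or "none")
```

Distinguishing genuine intransitivity from response error, as the paper does, requires a probabilistic model of the choice process rather than a deterministic check like this one.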
Social Science Research Network | 2016
Daniel R. Cavagnaro; Berk A. Sensoy; Yingdi Wang; Michael S. Weisbach
Using a large sample of institutional investors’ investments in private equity funds raised between 1991 and 2011, we estimate the extent to which investors’ skill affects their returns. Bootstrap analyses show that the variance of actual performance is higher than would be expected by chance, suggesting that some investors consistently outperform. Extending the Bayesian approach of Korteweg and Sorensen (2017), we estimate that a one standard deviation increase in skill leads to an increase in annual returns of between one and two percentage points. These results are stronger in the earlier part of the sample period and for venture funds.
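A minimal sketch of the bootstrap logic on simulated data (my illustration, not the authors' code): compare the dispersion of investors' average fund returns with the dispersion that arises when funds are randomly reassigned to investors, which is what "no differential skill" would imply.

```python
import numpy as np

rng = np.random.default_rng(3)

n_investors, funds_per_investor = 100, 10
# simulate fund returns with a small persistent investor "skill" component
skill = rng.normal(0, 0.02, size=n_investors)
returns = skill[:, None] + rng.normal(0.10, 0.15, size=(n_investors, funds_per_investor))

actual_dispersion = returns.mean(axis=1).std()

null_dispersion = []
flat = returns.ravel()
for _ in range(2000):
    shuffled = rng.permutation(flat).reshape(n_investors, funds_per_investor)
    null_dispersion.append(shuffled.mean(axis=1).std())

p_value = np.mean(np.array(null_dispersion) >= actual_dispersion)
print(f"actual dispersion {actual_dispersion:.4f}, "
      f"bootstrap p value against 'no skill' {p_value:.3f}")
```

If investors differ in skill, the cross-sectional spread of their average returns exceeds what random assignment of funds would produce, which is the pattern the paper reports.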
International Encyclopedia of the Social & Behavioral Sciences (Second Edition) | 2015
Daniel R. Cavagnaro
This article is a revision of the previous edition article by I. J. Myung, volume 4, pp. 2453–2457.