Philip M. Fernbach
University of Colorado Boulder
Publication
Featured research published by Philip M. Fernbach.
Psychological Science | 2013
Philip M. Fernbach; Todd Rogers; Craig R. Fox; Steven A. Sloman
People often hold extreme political attitudes about complex policies. We hypothesized that people typically know less about such policies than they think they do (the illusion of explanatory depth) and that polarized attitudes are enabled by simplistic causal models. Asking people to explain policies in detail both undermined the illusion of explanatory depth and led to attitudes that were more moderate (Experiments 1 and 2). Although these effects occurred when people were asked to generate a mechanistic explanation, they did not occur when people were instead asked to enumerate reasons for their policy preferences (Experiment 2). Finally, generating mechanistic explanations reduced donations to relevant political advocacy groups (Experiment 3). The evidence suggests that people’s mistaken sense that they understand the causal processes underlying policies contributes to political polarization.
Journal of Experimental Psychology: Learning, Memory, and Cognition | 2009
Philip M. Fernbach; Steven A. Sloman
The authors proposed and tested a psychological theory of causal structure learning based on local computations. Local computations simplify complex learning problems via cues available on individual trials to update a single causal structure hypothesis. Structural inferences from local computations make minimal demands on memory, require relatively small amounts of data, and need not respect normative prescriptions, as inferences that are principled locally may violate those principles when combined. Over a series of 3 experiments, the authors found (a) systematic inferences from small amounts of data; (b) systematic inference of extraneous causal links; (c) influence of data presentation order on inferences; and (d) error reduction through pretraining. Without pretraining, a model based on local computations fit the data better than a Bayesian structural inference model. The data suggest that local computations serve as a heuristic for learning causal structure.
Journal of Experimental Psychology: General | 2011
Philip M. Fernbach; Adam Darlow; Steven A. Sloman
In this article, we address the apparent discrepancy between causal Bayes net theories of cognition, which posit that judgments of uncertainty are generated from causal beliefs in a way that respects the norms of probability, and evidence that probability judgments based on causal beliefs are systematically in error. One purported source of bias is the ease of reasoning forward from cause to effect (predictive reasoning) versus backward from effect to cause (diagnostic reasoning). Using causal Bayes nets, we developed a normative formulation of how predictive and diagnostic probability judgments should vary with the strength of alternative causes, causal power, and prior probability. This model was tested through two experiments that elicited predictive and diagnostic judgments as well as judgments of the causal parameters for a variety of scenarios that were designed to differ in strength of alternatives. Model predictions fit the diagnostic judgments closely, but predictive judgments displayed systematic neglect of alternative causes, yielding a relatively poor fit. Three additional experiments provided more evidence of the neglect of alternative causes in predictive reasoning and ruled out pragmatic explanations. We conclude that people use causal structure to generate probability judgments in a sophisticated but not entirely veridical way.
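The normative relationship between predictive and diagnostic judgments described above can be sketched with a noisy-OR parameterization of a common-effect causal Bayes net. This is an illustrative sketch, not the paper's exact model; the function and parameter names (`power`, `alternatives`, `prior`) are assumptions chosen to mirror the causal parameters the abstract mentions.

```python
def predictive_judgment(power, alternatives):
    """P(effect | cause) in a noisy-OR common-effect network.

    `power` is the causal power of the focal cause; `alternatives` is
    the aggregate probability that some alternative cause produces the
    effect on its own. Either route suffices, so the effect is absent
    only if both routes fail.
    """
    return 1 - (1 - power) * (1 - alternatives)

def diagnostic_judgment(power, alternatives, prior):
    """P(cause | effect) by Bayes' rule over the same network."""
    p_e_cause = predictive_judgment(power, alternatives)
    p_e_no_cause = alternatives  # without the cause, only alternatives act
    joint = p_e_cause * prior
    return joint / (joint + p_e_no_cause * (1 - prior))
```

On this formulation, the neglect of alternative causes that the experiments found in predictive reasoning corresponds to evaluating `predictive_judgment` as if `alternatives` were 0, which understates P(effect | cause) whenever alternatives are actually present.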
Psychological Science | 2010
Philip M. Fernbach; Adam Darlow; Steven A. Sloman
People are renowned for their failure to consider alternative hypotheses. We compare neglect of alternative causes when people make predictive versus diagnostic probability judgments. One study with medical professionals reasoning about psychopathology and two with undergraduates reasoning about goals and actions or about causal transmission yielded the same results: neglect of alternative causes when reasoning from cause to effect but not when reasoning from effect to cause. The findings suggest that framing a problem as a diagnostic-likelihood judgment can reduce bias.
Cognition | 2010
Steven A. Sloman; Philip M. Fernbach; York Hagmayer
The paper sets out to reveal conditions enabling diagnostic self-deception, people's tendency to deceive themselves about the diagnostic value of their own actions. We characterize different types of self-deception in terms of the distinction between intervention and observation in causal reasoning. One type arises when people intervene but choose to view their actions as observations in order to find support for a self-serving diagnosis. We hypothesized that such self-deception depends on imprecision in the environment that allows leeway to represent one's own actions as either observations or interventions. Four experiments tested this idea using a dot-tracking task. Participants were told to go as quickly as they could and that going fast indicated either above-average or below-average intelligence. Precision was manipulated by varying the vagueness in feedback about performance. As predicted, self-deception was observed only when feedback on the task used vague terms rather than precise values. The diagnosticity of the feedback did not matter.
Journal of Experimental Psychology: Learning, Memory, and Cognition | 2013
Philip M. Fernbach; Christopher D. Erb
The authors propose and test a causal model theory of reasoning about conditional arguments with causal content. According to the theory, the acceptability of modus ponens (MP) and affirming the consequent (AC) reflect the conditional likelihood of causes and effects based on a probabilistic causal model of the scenario being judged. Acceptability of MP is a judgment of causal power, the probability that the antecedent cause is efficacious in bringing about the consequent effect. Acceptability of AC is a judgment of diagnostic strength, the probability of the antecedent cause given the consequent effect. The model proposes that acceptability judgments are derived from a causal Bayesian network with a common effect structure in which the probability of the consequent effect is a function of the antecedent cause, alternative causes, and disabling conditions. In 2 experiments, the model was tested by collecting judgments of the causal parameters of conditionals and using them to derive predictions for MP and AC acceptability using 0 free parameters. To assess the validity of the model, its predictions were fit to the acceptability ratings and compared to the fits of 3 versions of Mental Models Theory. The fits of the causal model theory were superior. Experiment 3 provides direct evidence that people engage in a causal analysis and not a direct calculation of conditional probability when assessing causal conditionals. The causal model theory represents a synthesis across the disparate literatures on deductive, probabilistic, and causal reasoning.
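A minimal sketch of how MP and AC acceptability could be derived from the three causal parameters the abstract names, assuming a noisy-OR common-effect network with an aggregated disabler probability. The parameterization and names are illustrative assumptions, not the paper's exact equations.

```python
def mp_acceptability(power, disabler):
    # Modus ponens as causal power: the antecedent cause brings about
    # the consequent effect iff it is efficacious and no disabling
    # condition intervenes.
    return power * (1 - disabler)

def ac_acceptability(power, disabler, alternatives, prior):
    # Affirming the consequent as diagnostic strength: the probability
    # of the antecedent cause given the effect, by Bayes' rule over
    # the common-effect network.
    effective = mp_acceptability(power, disabler)
    p_e_cause = 1 - (1 - effective) * (1 - alternatives)
    p_e_no_cause = alternatives  # effect must come from alternatives
    joint = p_e_cause * prior
    return joint / (joint + p_e_no_cause * (1 - prior))
```

Note that on this sketch, stronger alternative causes leave MP untouched but lower AC, since the effect becomes weaker evidence for the antecedent cause in particular.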
Journal of Consumer Research | 2015
Philip M. Fernbach; Christina Kan; John G. Lynch
When consumers perceive that a resource is limited and may be insufficient to accomplish goals, they recruit and enact plans to cope with the shortage. We distinguish two common strategies: efficiency planning yields savings by stretching the resource, whereas priority planning does so by sacrificing less important goals. Using a variety of methods to explore both financial and time planning, we investigate how the two types of planning differ, how they vary with constraint, and how they interrelate. Relative to efficiency planning, priority planning is perceived as yielding larger one-time savings, but it feels more costly because it requires within-resource trade-offs (e.g., money for money) rather than cross-resource ones (e.g., time for money). As constraint increases and greater resource savings are required, prioritization becomes more likely. However, the shift to prioritization is often insufficient, and consumers tend to react to insufficient prioritization dysfunctionally, making a bad situation worse. Budgeting helps consumers behave more adaptively. Budgeters respond to constraint with more priority planning than nonbudgeters, and they report fewer dysfunctional behaviors, like overspending and impulsive shopping.
Argument & Computation | 2013
Philip M. Fernbach; Bob Rehder
The paper explores the idea that causality-based probability judgments are determined by two competing drives: one towards veridicality and one towards effort reduction. Participants were taught the causal structure of novel categories and asked to make predictive and diagnostic probability judgments about the features of category exemplars. We found that participants violated the predictions of a normative causal Bayesian network model because they ignored relevant variables (Experiments 1–3) and because they failed to integrate over hidden variables (Experiment 2). When the task was made easier by stating whether alternative causes were present or absent as opposed to uncertain, judgments approximated the normative predictions (Experiment 3). We conclude that augmenting the popular causal Bayes net computational framework with cognitive shortcuts that reduce processing demands can provide a more complete account of causal inference.
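The two failure modes reported here can be made concrete: a normative judge marginalizes over the hidden alternative cause, weighting each state by its base rate, while the shortcut ignores it. The noisy-OR form and parameter names below are illustrative assumptions, not the paper's fitted model.

```python
def marginalized_predictive(power, alt_power, p_alt):
    # Normative: P(effect | cause) averages over the two states of the
    # hidden alternative cause, weighted by its base rate p_alt.
    p_e_alt_present = 1 - (1 - power) * (1 - alt_power)
    p_e_alt_absent = power
    return p_alt * p_e_alt_present + (1 - p_alt) * p_e_alt_absent

def shortcut_predictive(power):
    # Heuristic: neglect the hidden alternative entirely, so the
    # judgment collapses to the focal cause's power alone.
    return power
```

When the alternative's presence or absence is stated outright rather than left uncertain (as in Experiment 3), the marginalization step drops out, which is consistent with judgments approaching the normative predictions in that condition.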
Psychology of Learning and Motivation | 2009
Steven A. Sloman; Philip M. Fernbach; Scott Ewing
This chapter has three objectives. First, we formulate a coarse model of the process of moral judgment to locate the role of causal analysis. We propose that causal analysis occurs in the very earliest stages of interpreting an event and that early moral appraisals depend on it as do emotional responses and deliberative reasoning. Second, we argue that causal models offer the best representation for formulating psychological principles of moral appraisal. Causal models directly represent causes, consequences, and the structural relations among them. In other words, they represent mechanisms. Finally, we speculate that moral appraisals reflect the similarity between an idealized causal model of moral behavior and a causal model of the event being judged.
Information Knowledge Systems Management | 2011
Steven A. Sloman; Philip M. Fernbach
We characterize what is known about how people represent, reason about, and predict the behavior of complex systems. People tend to simplify complex systems in three ways: First, people resort to heuristics that are selective in the information they consider. These heuristics often yield satisfactory results though they can lead to systematic error. Second, when people do try to take more information into account, they often use a model that has a simple linear form that ignores most of the interactions and sources of unpredictability in the system. Finally, when going beyond heuristics and simple linear combinations, people tend to build a mental causal model that reflects the causal structure of the system by representing qualitative structure relating the mechanisms that lead from causes to effects. The bulk of the chapter concerns the nature of causal models. Although people excel at representing how individual mechanisms work and how they are linked to each other, they tend to neglect cycles of causation, often fail to reason quantitatively, and sometimes ignore relevant variables.