Björn Meder
Max Planck Society
Publications
Featured research published by Björn Meder.
Psychonomic Bulletin & Review | 2008
Björn Meder; York Hagmayer; Michael R. Waldmann
Previous research has shown that people are capable of deriving correct predictions for previously unseen actions from passive observations of causal systems (Waldmann & Hagmayer, 2005). However, these studies were limited, since the learning data were presented only in tabulated form, which may have turned the task into a reasoning task rather than a learning task. In two experiments, we therefore presented learners with trial-by-trial observational learning input referring to a complex causal model consisting of four events. To test the robustness of the capacity to derive correct observational and interventional inferences, we pitted causal order against the temporal order of learning events. The results show that people are, in principle, capable of deriving correct predictions after purely observational trial-by-trial learning, even with relatively complex causal models. However, conflicting temporal information can impair performance, particularly when the inferences require taking alternative causal pathways into account.
Medical Decision Making | 2014
Nicolai Bodemer; Björn Meder; Gerd Gigerenzer
Background. Treatment benefits and harms are often communicated as relative risk reductions and increases, which are frequently misunderstood by doctors and patients. One suggestion for improving understanding of such risk information is to also communicate the baseline risk. We investigated 1) whether the presentation format of the baseline risk influences understanding of relative risk changes and 2) the mediating role of people’s numeracy skills. Method. We presented laypeople (N = 1234) with a hypothetical scenario about a treatment that decreased (Experiments 1a, 2a) or increased (Experiments 1b, 2b) the risk of heart disease. Baseline risk was provided as a percentage or a frequency. In a forced-choice paradigm, the participants’ task was to judge the risk in the treatment group given the relative risk reduction (or increase) and the baseline risk. Numeracy was assessed using the Lipkus 11-item scale. Results. Communicating baseline risk in a frequency format facilitated correct understanding of a treatment’s benefits and harms, whereas a percentage format often impeded understanding. For example, many participants misinterpreted a relative risk reduction as referring to an absolute risk reduction. Participants with higher numeracy generally performed better than those with lower numeracy, but all participants benefitted from a frequency format. Limitations are that we used a hypothetical medical scenario and a nonrepresentative sample. Conclusions. Presenting baseline risk in a frequency format improves understanding of relative risk information, whereas a percentage format is likely to lead to misunderstandings. People’s numeracy skills play an important role in correctly understanding medical information. Overall, communicating treatment benefits and harms in the form of relative risk changes remains problematic, even when the baseline risk is explicitly provided.
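The contrast between the two formats can be made concrete with a small sketch. The numbers below are illustrative assumptions, not values from the study: with a baseline risk of 10 in 100 and a 50% relative risk reduction, the treated risk is 5 in 100, not "10% minus 50 percentage points".

```python
def risk_after_treatment(baseline_risk, relative_change):
    """Absolute risk after treatment, given a relative change:
    relative_change = -0.5 encodes a 50% relative risk reduction,
    +0.5 a 50% relative risk increase."""
    return baseline_risk * (1 + relative_change)

# Percentage format: a 50% relative reduction of a 10% baseline risk
# yields 5%, not 10% - 50% (the common misreading as an absolute change).
assert abs(risk_after_treatment(0.10, -0.5) - 0.05) < 1e-12

# Frequency format: 10 out of 100 people develop heart disease without
# treatment; with treatment, 5 out of 100 do.
at_risk, total = 10, 100
treated = at_risk * (1 - 0.5)
print(f"{treated:.0f} out of {total} instead of {at_risk} out of {total}")
```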
Trends in Cognitive Sciences | 2013
Björn Meder; Fabrice Le Lec; Magda Osman
Economic crises bring to the fore deep issues for the economics profession and its models. Given that cognitive science shares with economics many theoretical frameworks and research tools designed to understand decision-making behavior, should economists be the only ones re-examining their conceptual ideas and empirical methods? We argue that economic crises demonstrate different forms of uncertainty, which remind cognitive scientists of a pervasive problem: how best to conceptualize and study decision making under uncertainty.
Cognitive Processing | 2010
Michael R. Waldmann; Björn Meder; Momme von Sydow; York Hagmayer
The main goal of the present research was to demonstrate the interaction between category and causal induction in causal model learning. We used a two-phase learning procedure in which learners were presented with learning input referring to two interconnected causal relations forming a causal chain (Experiment 1) or a common-cause model (Experiments 2a, b). One of the three events (i.e., the intermediate event of the chain, or the common cause) was presented as a set of uncategorized exemplars. Although participants were not provided with any feedback about category labels, they tended to induce categories in the first phase that maximized the predictability of their causes or effects. In the second causal learning phase, participants had the choice between transferring the newly learned categories from the first phase at the cost of suboptimal predictions, or they could induce a new set of optimally predictive categories for the second causal relation, but at the cost of proliferating different category schemes for the same set of events. It turned out that in all three experiments learners tended to transfer the categories entailed by the first causal relation to the second causal relation.
Archive | 2014
Björn Meder; Gerd Gigerenzer
Is the mind an “intuitive statistician”? Or are humans biased and error-prone when it comes to probabilistic thinking? While researchers in the 1950s and 1960s suggested that people reason approximately in accordance with the laws of probability theory, research conducted in the heuristics-and-biases program during the 1970s and 1980s concluded the opposite. To overcome this striking contradiction, psychologists more recently began to identify and characterize the circumstances under which people—both children and adults—are capable of sound probabilistic thinking. One important insight from this line of research is the power of representation formats. For instance, information presented by means of natural frequencies, numerical or pictorial, fosters the understanding of statistical information and improves probabilistic reasoning, whereas conditional probabilities tend to impede understanding. We review this research and show how its findings have been used to design effective tools and teaching methods for helping people—be it children or adults, laypeople or experts—to reason better with statistical information. For example, using natural frequencies to convey statistical information helps people to perform better in Bayesian reasoning tasks, such as understanding the implications of diagnostic test results, or assessing the potential benefits and harms of medical treatments. Teaching statistical thinking should be an integral part of comprehensive education, to provide children and adults with the risk literacy needed to make better decisions in a changing and uncertain world.
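The diagnostic-test example in the abstract can be sketched in a few lines. The parameter values below are hypothetical (a screening test with 1% prevalence, 80% sensitivity, and a 9.6% false-positive rate), chosen only to show that the conditional-probability and natural-frequency formulations compute the same posterior:

```python
def posterior_prob(prevalence, sensitivity, false_positive_rate):
    """Bayes' rule stated with conditional probabilities."""
    p_pos = prevalence * sensitivity + (1 - prevalence) * false_positive_rate
    return prevalence * sensitivity / p_pos

def posterior_freq(n, prevalence, sensitivity, false_positive_rate):
    """The same computation phrased as natural frequencies."""
    sick = n * prevalence                         # e.g. 10 of 1,000 people
    true_pos = sick * sensitivity                 # 8 of them test positive
    false_pos = (n - sick) * false_positive_rate  # ~95 healthy positives
    return true_pos / (true_pos + false_pos)

# Hypothetical test: 1% prevalence, 80% sensitivity, 9.6% false positives.
p = posterior_prob(0.01, 0.80, 0.096)
print(round(p, 3))  # 0.078: a positive result implies ~8%, not 80%
```

The frequency version makes the answer almost readable off the intermediate quantities (8 true positives out of roughly 103 positives in total), which is one way to understand why this format helps.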
Cognitive Science | 2011
York Hagmayer; Björn Meder; Momme von Sydow; Michael R. Waldmann
The goal of the present set of studies is to explore the boundary conditions of category transfer in causal learning. Previous research has shown that people are capable of inducing categories based on causal learning input, and they often transfer these categories to new causal learning tasks. However, occasionally learners abandon the learned categories and induce new ones. Whereas previously it has been argued that transfer is only observed with essentialist categories in which the hidden properties are causally relevant for the target effect in the transfer relation, we here propose an alternative explanation, the unbroken mechanism hypothesis. This hypothesis claims that categories are transferred from a previously learned causal relation to a new causal relation when learners assume a causal mechanism linking the two relations that is continuous and unbroken. The findings of two causal learning experiments support the unbroken mechanism hypothesis.
Memory & Cognition | 2016
Momme von Sydow; York Hagmayer; Björn Meder
A probabilistic causal chain A→B→C may intuitively appear to be transitive: if A probabilistically causes B, and B probabilistically causes C, then A probabilistically causes C. However, probabilistic causal relations can be guaranteed to be transitive only if the so-called Markov condition holds. In two experiments, we examined how people make probabilistic judgments about indirect relationships A→C in causal chains A→B→C that violate the Markov condition. We hypothesized that participants would make transitive inferences in accordance with the Markov condition although they were presented with counterevidence showing intransitive data. For instance, participants were successively presented with data entailing positive dependencies A→B and B→C. At the same time, the data entailed that A and C were statistically independent. The results of two experiments show that transitive reasoning via a mediating event B influenced and distorted the induction of the indirect relation between A and C. Participants’ judgments were affected by an interaction of transitive, causal-model-based inferences and the observed data. Our findings support the idea that people tend to chain individual causal relations into mental causal chains that obey the Markov condition and thus allow for transitive reasoning, even if the observed data entail that such inferences are not warranted.
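The transitive inference described above has a simple closed form when the Markov condition is assumed. The parameter values below are hypothetical, chosen only to illustrate the prediction; a chain with these positive links must yield a positive indirect relation, which is exactly what data showing A and C to be independent contradict:

```python
def p_c_given_a(p_b_given_a, p_c_given_b, p_c_given_not_b):
    """Transitive prediction assuming the chain A -> B -> C satisfies the
    Markov condition: P(C|A) = P(C|B) P(B|A) + P(C|not B) (1 - P(B|A))."""
    return p_c_given_b * p_b_given_a + p_c_given_not_b * (1 - p_b_given_a)

# Two positive links (illustrative parameters): A raises B, B raises C.
prediction = p_c_given_a(p_b_given_a=0.8, p_c_given_b=0.8, p_c_given_not_b=0.2)
print(round(prediction, 2))  # 0.68: a Markov chain cannot make A and C independent here
```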
The Open Psychology Journal | 2010
Björn Meder; Tobias Gerstenberg; York Hagmayer; Michael R. Waldmann
Recently, a number of rational theories have been put forward which provide a coherent formal framework for modeling different types of causal inferences, such as prediction, diagnosis, and action planning. A hallmark of these theories is their capacity to simultaneously express probability distributions under observational and interventional scenarios, thereby rendering it possible to derive precise predictions about interventions (“doing”) from passive observations (“seeing”). In Part 1 of the paper we discuss different modeling approaches for formally representing interventions and review the empirical evidence on how humans draw causal inferences based on observations or interventions. We contrast deterministic interventions with imperfect actions yielding unreliable or unknown outcomes. In Part 2, we discuss alternative strategies for making interventional decisions when the causal structure is unknown to the agent. A Bayesian approach to rational causal inference, which aims to infer the structure and its parameters from the available data, provides the benchmark model. This account is contrasted with a heuristic approach which knows categories of causes and effects but neglects further structural information. The results of computer simulations show that, despite its computational parsimony, the heuristic approach achieves very good performance compared to the Bayesian model.
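The seeing/doing distinction can be illustrated with a minimal common-cause model, Z causing both X and Y. All parameter values below are assumptions for the sketch, not from the paper: observing X=1 is diagnostic evidence about Z and so raises the probability of Y, whereas an intervention do(X=1) severs the link from Z to X and leaves Y at its base rate.

```python
# Illustrative common-cause model: Z causes both X and Y.
p_z1 = 0.5
p_x_given_z = {1: 0.9, 0: 0.1}  # P(X=1 | Z=z)
p_y_given_z = {1: 0.9, 0: 0.1}  # P(Y=1 | Z=z)

def p_z(z):
    return p_z1 if z == 1 else 1 - p_z1

# Seeing: P(Y=1 | X=1) via Bayes; observing X is evidence about Z.
p_x1 = sum(p_z(z) * p_x_given_z[z] for z in (0, 1))
post_z1 = p_z(1) * p_x_given_z[1] / p_x1
p_y1_seeing = post_z1 * p_y_given_z[1] + (1 - post_z1) * p_y_given_z[0]

# Doing: do(X=1) cuts the arrow Z -> X, so Z keeps its prior
# distribution and Y is unaffected by the intervention on X.
p_y1_doing = sum(p_z(z) * p_y_given_z[z] for z in (0, 1))

print(round(p_y1_seeing, 2), round(p_y1_doing, 2))  # 0.82 0.5
```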
Cognitive Science | 2017
Charley M. Wu; Eric Schulz; Maarten Speekenbrink; Jonathan D. Nelson; Björn Meder
We introduce the spatially correlated multi-armed bandit as a task coupling function learning with exploration-exploitation. Participants interact with bivariate reward functions on a two-dimensional grid, with the goal of either gaining the largest average score or finding the largest payoff. By providing an opportunity to learn the underlying reward function through spatial correlations, we model to what extent people form beliefs about unexplored payoffs and how those beliefs guide search behavior. Participants adapted to the assigned payoff conditions, performed better in smooth than in rough environments, and, surprisingly, sometimes seemed to perform equally well with short and long search horizons. Our modeling results indicate a tendency toward local search options which, when accounted for, suggests that participants were best described as forming only very local inferences about unexplored regions, combined with a search strategy that directly trades off between exploiting high expected rewards and exploring to reduce uncertainty.
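The exploit-explore trade-off mentioned at the end is commonly formalized as an upper-confidence-bound rule, where each option's value is its expected reward plus a bonus for uncertainty. This is a generic sketch with made-up posterior beliefs and an assumed weight beta, not the paper's actual (richer) model:

```python
def ucb(mean, std, beta=0.5):
    """Upper confidence bound: expected reward (exploitation) plus an
    uncertainty bonus (exploration), weighted by beta."""
    return mean + beta * std

# Hypothetical posterior beliefs (mean, std) over three grid options.
options = {"A": (0.6, 0.05), "B": (0.5, 0.40), "C": (0.3, 0.10)}
choice = max(options, key=lambda name: ucb(*options[name]))
print(choice)  # B: a lower mean than A, but its high uncertainty wins
```

Raising beta makes the rule more exploratory; beta = 0 reduces it to pure exploitation of the current best estimate.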
Cognitive Science | 2018
Vincenzo Crupi; Jonathan D. Nelson; Björn Meder; Gustavo Cevolani; Katya Tentori
Searching for information is critical in many situations. In medicine, for instance, careful choice of a diagnostic test can help narrow down the range of plausible diseases that the patient might have. In a probabilistic framework, test selection is often modeled by assuming that people's goal is to reduce uncertainty about possible states of the world. In cognitive science, psychology, and medical decision making, Shannon entropy is the most prominent and most widely used model to formalize probabilistic uncertainty and the reduction thereof. However, a variety of alternative entropy metrics (Hartley, Quadratic, Tsallis, Rényi, and more) are popular in the social and the natural sciences, computer science, and philosophy of science. Particular entropy measures have been predominant in particular research areas, and it is often an open issue whether these divergences emerge from different theoretical and practical goals or are merely due to historical accident. Cutting across disciplinary boundaries, we show that several entropy and entropy reduction measures arise as special cases in a unified formalism, the Sharma-Mittal framework. Using mathematical results, computer simulations, and analyses of published behavioral data, we discuss four key questions: How do various entropy models relate to each other? What insights can be obtained by considering diverse entropy models within a unified framework? What is the psychological plausibility of different entropy models? What new questions and insights for research on human information acquisition follow? Our work provides several new pathways for theoretical and empirical research, reconciling apparently conflicting approaches and empirical findings within a comprehensive and unified information-theoretic formalism.
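The unification can be sketched numerically. The standard two-parameter Sharma-Mittal form is H(q, r) = [(Σᵢ pᵢ^q)^((1-r)/(1-q)) − 1] / (1 − r); the distribution and the limit approximations below are my own illustrative choices, showing that r = q recovers Tsallis entropy, r → 1 recovers Rényi entropy, and q, r → 1 recovers Shannon entropy:

```python
import math

def sharma_mittal(p, q, r):
    """Sharma-Mittal entropy of order q and degree r (q, r != 1)."""
    s = sum(pi ** q for pi in p if pi > 0)
    return (s ** ((1 - r) / (1 - q)) - 1) / (1 - r)

def shannon(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def renyi(p, q):
    return math.log(sum(pi ** q for pi in p)) / (1 - q)

def tsallis(p, q):
    return (1 - sum(pi ** q for pi in p)) / (q - 1)

dist = [0.5, 0.25, 0.25]
eps = 1e-6  # numerical stand-in for the analytic limits at 1
assert abs(sharma_mittal(dist, 2, 2) - tsallis(dist, 2)) < 1e-9
assert abs(sharma_mittal(dist, 2, 1 + eps) - renyi(dist, 2)) < 1e-4
assert abs(sharma_mittal(dist, 1 + eps, 1 + eps) - shannon(dist)) < 1e-4
```

Seen this way, choosing an entropy measure amounts to fixing the two parameters, which is what makes cross-disciplinary comparisons within one formalism possible.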