Laura Martignon
Max Planck Society
Publications
Featured research published by Laura Martignon.
Theory and Decision | 2002
Laura Martignon; Ulrich Hoffrage
This article provides an overview of recent results on lexicographic, linear, and Bayesian models for paired comparison from a cognitive psychology perspective. Within each class, we distinguish subclasses according to the computational complexity required for parameter setting. We identify the optimal model in each class, where optimality is defined with respect to performance when fitting known data. Although not optimal when fitting data, simple models can be astonishingly accurate when generalizing to new data. A simple heuristic belonging to the class of lexicographic models is Take The Best (Gigerenzer & Goldstein (1996) Psychol. Rev. 102: 684). It is more robust than other lexicographic strategies which use complex procedures to establish a cue hierarchy. In fact, it is robust due to its simplicity, not despite it. Similarly, Take The Best looks up only a fraction of the information that linear and Bayesian models require; yet it achieves performance comparable to that of models which integrate information. Due to its simplicity, frugality, and accuracy, Take The Best is a plausible candidate for a psychological model in the tradition of bounded rationality. We review empirical evidence showing the descriptive validity of fast and frugal heuristics.
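The one-reason decision making described above can be sketched in a few lines: search cues in order of validity, stop at the first discriminating cue, and decide. The cue names, validities, and city objects below are invented for illustration, not taken from the article.

```python
# Minimal sketch of the Take The Best heuristic for paired comparison.
# Cues, validities, and objects are hypothetical examples.

def take_the_best(obj_a, obj_b, cues):
    """Compare two objects on binary cues ordered by validity.

    `cues` is a list of (cue_name, validity) pairs; objects are dicts
    mapping cue names to 1 (positive) or 0 (negative).
    Returns 'a', 'b', or 'guess'.
    """
    # Search cues in descending order of validity; stop at the first
    # cue that discriminates (one-reason decision making).
    for name, _validity in sorted(cues, key=lambda c: -c[1]):
        va, vb = obj_a[name], obj_b[name]
        if va != vb:
            return 'a' if va > vb else 'b'
    return 'guess'  # no cue discriminates

# Hypothetical city-size cues in the spirit of the classic demonstration:
cues = [('capital', 0.9), ('soccer_team', 0.8), ('airport', 0.7)]
city_x = {'capital': 1, 'soccer_team': 1, 'airport': 1}
city_y = {'capital': 0, 'soccer_team': 1, 'airport': 1}
print(take_the_best(city_x, city_y, cues))  # 'a': the top cue already discriminates
```

Note that only the first discriminating cue is ever looked up, which is exactly the frugality the abstract emphasizes.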
Neural Computation | 2000
Laura Martignon; Gustavo Deco; Kathryn Blackmond Laskey; Mathew E. Diamond; Winrich A. Freiwald; Eilon Vaadia
Recent advances in the technology of multiunit recordings make it possible to test Hebb's hypothesis that neurons do not function in isolation but are organized in assemblies. This has created the need for statistical approaches to detecting the presence of spatiotemporal patterns of more than two neurons in neuron spike train data. We mention three possible measures for the presence of higher-order patterns of neural activation (coefficients of log-linear models, connected cumulants, and redundancies) and present arguments in favor of the coefficients of log-linear models. We present test statistics for detecting the presence of higher-order interactions in spike train data by parameterizing these interactions in terms of coefficients of log-linear models. We also present a Bayesian approach for inferring the existence or absence of interactions and estimating their strength. The two methods, the frequentist and the Bayesian, are shown to be consistent in the sense that interactions that are detected by either method also tend to be detected by the other. A heuristic for the analysis of temporal patterns is also proposed. Finally, a Bayesian test is presented that establishes stochastic differences between recorded segments of data. The methods are applied to experimental data and synthetic data drawn from our statistical models. Our experimental data are drawn from multiunit recordings in the prefrontal cortex of behaving monkeys, the somatosensory cortex of anesthetized rats, and multiunit recordings in the visual cortex of behaving monkeys.
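For the pairwise case, the log-linear coefficient mentioned above is the log odds ratio theta_12 = log(p11 * p00 / (p10 * p01)), which is zero for independent units. The following sketch estimates it from binned binary spike trains; the simulation, spike probabilities, and the 0.5 smoothing constant are assumptions for illustration, not the article's methods or data.

```python
import math
import random

# Sketch: estimating the pairwise log-linear interaction coefficient
# theta_12 = log(p11 * p00 / (p10 * p01)) from binned binary spike data.

def pairwise_interaction(spikes_a, spikes_b):
    n = len(spikes_a)
    counts = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 0}
    for a, b in zip(spikes_a, spikes_b):
        counts[(a, b)] += 1
    # Add 0.5 to each cell, a common small-sample correction (an assumption here)
    p = {k: (v + 0.5) / (n + 2.0) for k, v in counts.items()}
    return math.log(p[(1, 1)] * p[(0, 0)] / (p[(1, 0)] * p[(0, 1)]))

random.seed(0)
# Two independent Bernoulli spike trains: the estimate should be near 0
a = [int(random.random() < 0.2) for _ in range(10000)]
b = [int(random.random() < 0.2) for _ in range(10000)]
print(round(pairwise_interaction(a, b), 2))
```

A positive estimate would indicate excess synchrony beyond what the single-unit firing rates predict.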
Annals of the New York Academy of Sciences | 2008
Elke Kurz-Milcke; Gerd Gigerenzer; Laura Martignon
Why is it that the public can read and write but only a few understand statistical information? Why are elementary distinctions, such as that between absolute and relative risks, not better known? In the absence of statistical literacy, key democratic ideals, such as informed consent and shared decision making in health care, will remain science fiction. In this chapter, we deal with tools for transparency in risk communication. The focus is on graphical and analog representations of risk. Analog representations use a separate icon or sign for each individual in a population. Like numerical representations, some graphical forms are transparent, whereas others indiscernibly mislead the reader. We review cases of (1) tree diagrams for representing natural versus relative frequency, (2) decision trees for the representation of fast and frugal decision making, (3) bar graphs for representing absolute versus relative risk, (4) population diagrams for the analog representation of risk, and (5) a format of representation that employs colored tinker cubes for the encoding of information about individuals in a population. Graphs have long enjoyed the status of being “worth a thousand words” and hence of being more readily accessible to human understanding than long‐winded symbolic representations. This is both true and false. Graphical tools can be just as well employed for transparent and nontransparent risk communications.
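The contrast between relative and absolute risk in point (3), and the counts needed for a population diagram as in point (4), can be sketched numerically. The baseline risk, reduction figure, and population size below are invented for illustration, not taken from the chapter.

```python
# Sketch: translating a relative risk reduction into transparent formats,
# i.e., the absolute risk reduction and the icon counts for a population
# diagram. All numbers are hypothetical examples.

def risk_formats(baseline_risk, relative_reduction, population=1000):
    treated_risk = baseline_risk * (1 - relative_reduction)
    arr = baseline_risk - treated_risk  # absolute risk reduction
    return {
        'relative_reduction': relative_reduction,             # e.g. "25% fewer events"
        'absolute_reduction': arr,                            # e.g. 1 in 1000
        'events_without': round(baseline_risk * population),  # affected icons, no treatment
        'events_with': round(treated_risk * population),      # affected icons, with treatment
    }

# A "25% relative reduction" of a 4-in-1000 baseline risk amounts to
# 4 versus 3 affected icons in a population diagram of 1000:
print(risk_formats(0.004, 0.25))
```

The same intervention sounds dramatic as a relative reduction and modest as an absolute one, which is exactly the transparency issue the chapter discusses.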
Archive | 1999
Gerd Gigerenzer; Jean Czerlinski; Laura Martignon
Rationality and optimality are the guiding concepts of the probabilistic approach to cognition, but they are not the only reasonable guiding concepts. Two concepts from the other end of the spectrum, simplicity and frugality, have also inspired models of cognition. These fast and frugal models are justified by their psychological plausibility and adaptedness to natural environments. For example, the real world provides only scarce information, forces us to rush when gathering and processing that information, and does not cut itself up into variables whose errors are conveniently independent and normally distributed, as many optimal models assume.
Frontiers in Psychology | 2011
Korbinian Moeller; Laura Martignon; Silvia Wessolowski; Joachim Engel; Hans-Christoph Nuerk
Children typically learn basic numerical and arithmetic principles using finger-based representations. However, whether reliance on finger-based representations is beneficial or detrimental is the subject of an ongoing debate between researchers in neurocognition and mathematics education. From the neurocognitive perspective, finger counting provides multisensory input, which conveys both cardinal and ordinal aspects of numbers. Recent data indicate that children with good finger-based numerical representations show better arithmetic skills and that training finger gnosis, or “finger sense,” enhances mathematical skills. Therefore, neurocognitive researchers conclude that elaborate finger-based numerical representations are beneficial for later numerical development. However, research in mathematics education recommends fostering mentally based numerical representations so as to induce children to abandon finger counting. More precisely, mathematics education recommends first using finger counting, then concrete structured representations, and, finally, mental representations of numbers to perform numerical operations. Taken together, these results reveal an important debate between neurocognitive and mathematics education research concerning the benefits and detriments of finger-based strategies for numerical development. In the present review, the rationale of both lines of evidence will be discussed.
Minds and Machines | 1999
Laura Martignon; Michael Schmitt
Intractability and optimality are two sides of one coin: optimal models are often intractable, that is, they tend to be excessively complex, or NP-hard. We explain the meaning of NP-hardness in detail and discuss how modern computer science circumvents intractability by introducing heuristics and shortcuts to optimality, often replacing optimality with sufficient sub-optimality. Since the principles of decision theory dictate balancing the cost of computation against gain in accuracy, statistical inference is currently being reshaped by a vigorous new trend: the science of simplicity. Simple models, as we show for specific cases, are not just tractable, they also tend to be robust. Robustness is the ability of a model to extract relevant information from data, disregarding noise. Recently, Gigerenzer, Todd and the ABC Research Group (1999) have put forward a collection of fast and frugal heuristics as simple, boundedly rational inference strategies used by the unaided mind in real-world inference problems. This collection of heuristics has suggestively been called the adaptive toolbox. In this paper we will focus on a comparison task in order to illustrate the simplicity and robustness of some of the heuristics in the adaptive toolbox, in contrast to the intractability and fragility of optimal solutions. We will concentrate on three important classes of models for comparison-based inference and, in each of these classes, identify models to be used as benchmarks for evaluating the performance of fast and frugal heuristics: lexicographic trees, linear models, and Bayesian networks. Lexicographic trees are interesting because they are particularly simple models that humans have used throughout the centuries. Linear models have traditionally been used by cognitive psychologists as models of human inference, while Bayesian networks have only recently been introduced in statistics and computer science.
Yet it is the Bayesian networks that provide the best benchmarks for evaluating the fast and frugal heuristics, as we will show in this paper.
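A rough sketch of how the first two model classes can disagree on the same cue profiles: a lexicographic strategy lets the top cue decide outright, whereas a compensatory linear model lets lower cues add up against it. The cue profiles and weights below are invented for illustration.

```python
# Sketch contrasting a lexicographic strategy with a weighted linear
# model on one paired comparison. Profiles and weights are hypothetical.

def lexicographic(profile_a, profile_b):
    # Cues are assumed pre-ordered by importance; the first difference decides.
    for va, vb in zip(profile_a, profile_b):
        if va != vb:
            return 'a' if va > vb else 'b'
    return 'tie'

def linear(profile_a, profile_b, weights):
    sa = sum(w * v for w, v in zip(weights, profile_a))
    sb = sum(w * v for w, v in zip(weights, profile_b))
    return 'a' if sa > sb else 'b' if sb > sa else 'tie'

a, b = (1, 0, 0), (0, 1, 1)
print(lexicographic(a, b))                # 'a': the top cue wins outright
print(linear(a, b, (0.4, 0.35, 0.25)))   # 'b': the lower cues compensate
print(linear(a, b, (0.6, 0.25, 0.15)))   # 'a': noncompensatory weights agree
                                          # with the lexicographic choice
```

With noncompensatory weights (each weight exceeding the sum of all later ones), the linear model reproduces the lexicographic decision, which is one reason the two classes make natural benchmarks for each other.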
Sonderforschungsbereich 504 Publications | 1999
Stefan Krauss; Laura Martignon; Ulrich Hoffrage
We present empirical evidence that human reasoning follows the rules of probability theory if information is presented in “natural formats”. Human reasoning has often been evaluated in terms of humans' ability to deal with probabilities. Yet in nature we do not observe probabilities; rather, we count samples and their subsets. Our concept of Markov frequencies generalizes Gigerenzer & Hoffrage's “natural frequencies”, which are known to foster insight in Bayesian situations with one cue. Markov frequencies make it possible to visualize Bayesian inference problems with an arbitrary number of cues.
Frontiers in Psychology | 2015
Ulrich Hoffrage; Stefan Krauss; Laura Martignon; Gerd Gigerenzer
Representing statistical information in terms of natural frequencies rather than probabilities improves performance in Bayesian inference tasks. This beneficial effect of natural frequencies has been demonstrated in a variety of applied domains such as medicine, law, and education. Yet all the research and applications so far have been limited to situations where one dichotomous cue is used to infer which of two hypotheses is true. Real-life applications, however, often involve situations where cues (e.g., medical tests) have more than one value, where more than two hypotheses (e.g., diseases) are considered, or where more than one cue is available. In Study 1, we show that natural frequencies, compared to information stated in terms of probabilities, consistently increase the proportion of Bayesian inferences made by medical students in four conditions—three cue values, three hypotheses, two cues, or three cues—by an average of 37 percentage points. In Study 2, we show that teaching natural frequencies for simple tasks with one dichotomous cue and two hypotheses leads to a transfer of learning to complex tasks with three cue values and two cues, with proportions of 40% and 81% correct inferences, respectively. Thus, natural frequencies facilitate Bayesian reasoning in a much broader class of situations than previously thought.
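The core computation behind the two formats can be sketched side by side: Bayes' rule on single-event probabilities versus simple counting in a hypothetical sample, as natural frequencies do. The prevalence, sensitivity, and false-alarm rate below are invented for illustration, not the tasks used in the studies.

```python
from fractions import Fraction

# Sketch: the same Bayesian inference stated as conditional probabilities
# and as natural frequencies. All numbers are hypothetical examples.

def posterior_from_probabilities(prior, sensitivity, false_alarm):
    # Bayes' rule on single-event probabilities
    p_positive = prior * sensitivity + (1 - prior) * false_alarm
    return prior * sensitivity / p_positive

def posterior_from_frequencies(n, prior, sensitivity, false_alarm):
    # Count a hypothetical sample of n people, as natural frequencies do
    sick = round(n * prior)                         # e.g. 10 of 1000
    sick_pos = round(sick * sensitivity)            # e.g. 8 of those 10
    healthy_pos = round((n - sick) * false_alarm)   # e.g. 99 of the 990
    return Fraction(sick_pos, sick_pos + healthy_pos)

p = posterior_from_probabilities(0.01, 0.8, 0.1)
f = posterior_from_frequencies(1000, 0.01, 0.8, 0.1)
print(f)                                  # 8 of the 107 positives are sick: 8/107
print(round(p, 3), round(float(f), 3))    # both formats give the same posterior
```

The frequency version reduces the task to one division over two counts, which is why it fosters insight.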
Journal of Experimental Child Psychology | 2016
Mirjam Wasner; Hans-Christoph Nuerk; Laura Martignon; Stephanie Roesch; Korbinian Moeller
Recent studies indicated that finger gnosis (i.e., the ability to perceive and differentiate one's own fingers) is reliably associated with basic numerical competencies. In this study, we aimed at examining whether finger gnosis is also a unique predictor of initial arithmetic competencies at the beginning of first grade, and thus before formal math instruction starts. Therefore, we controlled for influences of domain-specific numerical precursor competencies, domain-general cognitive ability, and natural variables such as gender and age. Results from 321 German first-graders revealed that finger gnosis indeed predicted a unique and relevant but nevertheless small part of the variance in initial arithmetic performance (∼1%-2%), as compared with influences of general cognitive ability and numerical precursor competencies. Taken together, these results substantiate the notion of a unique association between finger gnosis and arithmetic and further corroborate the theoretical idea of finger-based representations contributing to numerical cognition. However, the small part of variance explained by finger gnosis seems to limit its relevance for diagnostic purposes.
Decision | 2017
Jan K. Woike; Ulrich Hoffrage; Laura Martignon
This article relates natural frequency representations of cue-criterion relationships to fast-and-frugal heuristics for inferences based on multiple cues. In the conceptual part of this work, three approaches to classification are compared to one another: The first uses a natural Bayesian classification scheme, based on profile memorization and natural frequencies. The second is based on naïve Bayes, a heuristic that assumes conditional independence between cues (given the criterion). The third approach is to construct fast-and-frugal classification trees, which can be conceived as pruned versions of diagnostic natural frequency trees. Fast-and-frugal trees can be described as lexicographic classifiers but can also be related to another fundamental class of models, namely linear models. Linear classifiers with fixed thresholds and noncompensatory weights coincide with fast-and-frugal trees—not as processes but in their output. Various heuristic principles for tree construction are proposed. In the second, empirical part of this article, the classification performance of the three approaches when making inferences under uncertainty (i.e., out of sample) is evaluated in 11 medical data sets in terms of Receiver Operating Characteristics (ROC) diagrams and predictive accuracy. Results show that the two heuristic approaches, naïve Bayes and fast-and-frugal trees, generally outperform the model that is normative when fitting known data, namely classification based on natural frequencies (or, equivalently, profile memorization). The success of fast-and-frugal trees is grounded in their ecological rationality: Their construction principles can exploit the structure of information in the data sets. Finally, implications, applications, limitations, and possible extensions of this work are discussed.
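A minimal sketch of the fast-and-frugal tree structure described above: every cue but the last offers one immediate exit, and the last cue exits in both directions. The medical cue names, cue order, and exit directions below are invented for illustration, not taken from the article's data sets.

```python
# Sketch of a fast-and-frugal classification tree: each cue either exits
# with a decision or passes the case on; the final cue exits both ways.
# Cues and exit structure are hypothetical examples.

def fast_and_frugal_tree(patient):
    # (cue, exit_value, exit_label): if the cue takes exit_value, stop.
    tree = [
        ('st_elevation',  1, 'high risk'),
        ('chest_pain',    0, 'low risk'),
        ('other_symptom', 1, 'high risk'),
    ]
    for cue, exit_value, label in tree[:-1]:
        if patient[cue] == exit_value:
            return label
    # The final cue exits on both branches
    cue, exit_value, label = tree[-1]
    return label if patient[cue] == exit_value else 'low risk'

print(fast_and_frugal_tree(
    {'st_elevation': 0, 'chest_pain': 1, 'other_symptom': 0}))  # 'low risk'
```

Because each cue can decide on its own, the tree behaves as a lexicographic classifier, and with suitably noncompensatory weights a linear classifier would produce the same outputs, as the abstract notes.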