Marie Juanchich
Kingston University
Publications
Featured research published by Marie Juanchich.
Psychonomic Bulletin & Review | 2014
Miroslav Sirota; Marie Juanchich; York Hagmayer
The presentation of a Bayesian inference problem in terms of natural frequencies rather than probabilities has been shown to enhance performance. The effect of individual differences in cognitive processing on Bayesian reasoning has rarely been studied, despite enabling us to test process-oriented variants of the two main accounts of the facilitative effect of natural frequencies: The ecological rationality account (ERA), which postulates an evolutionarily shaped ease of natural frequency automatic processing, and the nested sets account (NSA), which posits analytical processing of nested sets. In two experiments, we found that cognitive reflection abilities predicted normative performance equally well in tasks featuring whole and arbitrarily parsed objects (Experiment 1) and that cognitive abilities and thinking dispositions (analytical vs. intuitive) predicted performance with single-event probabilities, as well as natural frequencies (Experiment 2). Since these individual differences indicate that analytical processing improves Bayesian reasoning, our findings provide stronger support for the NSA than for the ERA.
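The two presentation formats contrasted in this abstract can be made concrete with a minimal sketch. The numbers below are the classic illustrative disease-screening values often used in this literature, not the study's own materials: a 1% base rate, an 80% hit rate, and a 9.6% false-alarm rate, which in natural frequencies becomes "out of 1,000 people, 10 are ill (8 test positive) and 990 are healthy (about 95 test positive)."

```python
def posterior_from_probabilities(p_h, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem in the single-event probability format."""
    p_e = p_h * p_e_given_h + (1 - p_h) * p_e_given_not_h
    return p_h * p_e_given_h / p_e

def posterior_from_frequencies(true_positives, false_positives):
    """Natural-frequency format: the answer reduces to a ratio of nested counts."""
    return true_positives / (true_positives + false_positives)

# Probability format: base rate 1%, hit rate 80%, false-alarm rate 9.6%.
p = posterior_from_probabilities(0.01, 0.80, 0.096)
# Natural-frequency format: 8 true positives among 103 positives overall.
f = posterior_from_frequencies(8, 95)
print(round(p, 3), round(f, 3))  # both ≈ 0.078
```

The frequency version requires no explicit multiplication by the base rate, which is why the nested sets account attributes the facilitation to the transparent subset structure rather than to frequencies per se.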
Medical Decision Making | 2014
Miroslav Sirota; Marie Juanchich; Olga Kostopoulou; Róbert Hanák
Accurate perception of medical probabilities communicated to patients is a cornerstone of informed decision making. People, however, are prone to biases in probability perception. Recently, Pighin and others extended the list of such biases with evidence that “1-in-X” ratios (e.g., “1 in 12”) led to greater perceived probability and worry about health outcomes than “N-in-X*N” ratios (e.g., “10 in 120”). Subsequently, the recommendation was to avoid using “1-in-X” ratios when communicating probabilistic information to patients. To warrant such a recommendation, we conducted 5 well-powered replications and synthesized the available data. We found that 3 out of the 5 replications yielded statistically nonsignificant findings. In addition, our results showed that the “1-in-X” effect was not moderated by numeracy, cognitive reflection, age, or gender. To quantify the evidence for the effect, we conducted a Bayes factor meta-analysis and a traditional meta-analysis of our 5 studies and those of Pighin and others (11 comparisons, N = 1131). The meta-analytical Bayes factor, which allowed assessment of the evidence for the null hypothesis, was very low, providing decisive evidence to support the existence of the “1-in-X” effect. The traditional meta-analysis showed that the overall effect was significant (Hedges’ g = 0.42, 95% CI 0.29–0.54). Overall, we provide decisive evidence for the existence of the “1-in-X” effect but suggest that it is smaller than previously estimated. Theoretical and practical implications are discussed.
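Two points in this abstract lend themselves to a short numerical sketch: the "1-in-X" and "N-in-X*N" formats are objectively identical probabilities, and the overall effect was pooled by meta-analysis. The pooling function below is a generic fixed-effect inverse-variance sketch with hypothetical per-study values, not the authors' actual data or method.

```python
from fractions import Fraction

# The two formats from the abstract are numerically identical; any
# difference in perceived probability is a pure framing effect.
one_in_x = Fraction(1, 12)
n_in_xn = Fraction(10, 120)
assert one_in_x == n_in_xn  # same objective probability, ~8.3%

def pooled_effect(gs, ses):
    """Fixed-effect inverse-variance pooling of per-study effect sizes
    (e.g., Hedges' g). Returns the pooled estimate and its 95% CI."""
    weights = [1 / se**2 for se in ses]
    g = sum(w * g_i for w, g_i in zip(weights, gs)) / sum(weights)
    se = (1 / sum(weights)) ** 0.5
    return g, (g - 1.96 * se, g + 1.96 * se)

# Hypothetical per-study Hedges' g values and standard errors.
g, ci = pooled_effect([0.5, 0.3, 0.4], [0.1, 0.2, 0.15])
print(round(g, 3))  # ≈ 0.444
```

More precise studies (smaller standard errors) receive larger weights, which is how a synthesis can shrink an effect estimate relative to early, noisier studies.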
Science | 2012
Miroslav Sirota; Marie Juanchich
THE SCIENTIFIC COMMUNITY INCREASINGLY RECOGNIZES THE IMPORTANCE OF COMMUNICATING effectively and responsibly with the public (1), but questions remain about whether and how to “frame” scientific information about controversial issues such as climate change, evolution, and embryonic stem cell research (2, 3). Recent discussions about the biological determinants of behavior in voles provide an opportunity to reflect on how scientists can frame information in ways that are both illuminating and responsible. By modulating the density and distribution of vasopressin receptors in specific regions of the brain, scientists can get ordinarily “promiscuous” montane voles to behave more like “monogamous” prairie voles (4). By the time this research was reported in the popular media, it had become a story about the discovery of a “gene for” “monogamy,” “fidelity,” “promiscuity,” or “divorce” in humans (5). Consider the major frames we identified in the media coverage of this research: (i) “genetic determinism,” the idea that a single gene controls even complex social behaviors such as sexual monogamy (6); (ii) “triumph for reductionism,” the suggestion that soon we will understand love in terms that refer exclusively to physics and chemistry (7); (iii) “humans are like voles,” a parallel allowing wide-ranging extrapolation (8); (iv) “happiness drug,” the idea that applying lessons learned from this research to biotechnology efforts could save a relationship or marriage (9); and (v) “dangers of social manipulation,” which has led to stories about trust sprays of potential use to the military, department stores, politicians, and stalkers (10). These frames, albeit crude and oversimplified, can help members of the public understand how research relates to broader social trends, issues, and debates.
By paying close attention to the dominant frames used in highly publicized cases like this one, scientists can take advantage of these strengths while preemptively highlighting their potential weaknesses. For example, to correct a common source of misunderstanding in the “humans are like voles” frame, experts could emphasize that ordinary usage of terms such as “monogamy” can differ substantially from their technical applications in biology. (Spending 56% of the time with one’s spouse, 19% of the time alone, and 25% of the time copulating with strangers would not qualify as monogamous by ordinary human standards.) They could also suggest alternative frames that counteract weaknesses in existing ones. Instead of focusing on “happiness drugs,” for example, scientists could contextualize this research as a potential treatment for autism, thereby focusing attention on the more realistic relevance of this work. By recognizing the benefits of particular frames while preempting and mitigating common misperceptions, scientists can work with the media to develop frames that promote public interest in scientific advances without contributing to sensationalism and diminished scientific credibility. DANIEL J. MCKAUGHAN* AND KEVIN C. ELLIOTT Department of Philosophy, Boston College, Chestnut Hill, MA 02467, USA. Department of Philosophy, University of South Carolina, Columbia, SC 29208, USA.
Thinking & Reasoning | 2013
Marie Juanchich; Miroslav Sirota; Tzur M. Karelitz; Gaëlle Villejoubert
Verbal probabilities are a common means for communicating risk and uncertainties in many decision-making settings (e.g., finance, medicine, military). They are considered directional because they elicit a focus on either the outcome occurrence (e.g., there is a chance) or on its non-occurrence (e.g., it is unlikely). According to a quantitative perspective, directionality depends on the vague probabilistic meaning conveyed by verbal probabilities—e.g., p(outcome) > .50 => focus on outcome occurrence. In contrast, a more qualitative perspective suggests that directionality depends on contextual factors. The present study tested whether the directionality of verbal probabilities was determined by their vague probabilistic meaning, by contextually manipulated variables (i.e., representativeness and base rate), or by a combination of both. Participants provided their own expressions to describe the guilt of a suspect and then assessed the vague probabilistic meaning and directionality associated with those expressions. Results showed that directionality was mainly determined by the vague probabilistic meaning but also by the base rate of guilt. Although attention focus on the occurrence or the non-occurrence of the target outcome depends on vague probabilistic meaning, it cannot be fully accounted for by it.
Journal of Risk Research | 2016
Marie Juanchich; Miroslav Sirota
Most research into uncertainty focuses on how people estimate probability magnitude. By contrast, this paper focuses on how people interpret the concept of probability and why they often misinterpret it. In a weather forecast context, we hypothesised that the absence of an explicit reference class and the polysemy of the percentage format cause incorrect probability interpretations, and we tested two interventions to help people make better probability interpretations. In two studies (N = 1337), we demonstrate that most people from the UK and the US do not interpret probabilities of precipitation correctly. The explicit mention of the reference class helped people to interpret probabilities of precipitation better when the target area was explicit, but not when it was left unspecified. Furthermore, the polysemy of the percentage format is not likely to cause these misinterpretations, since a non-polysemous format (e.g. verbal probability) did not facilitate a correct probability interpretation in our studies. A Bayes factor analysis supported both of these conclusions. We discuss theoretical and applied implications of our findings.
Frontiers in Psychology | 2015
Gaëlle Vallée-Tourangeau; Miroslav Sirota; Marie Juanchich; Frédéric Vallée-Tourangeau
Price (in Bayes, 1958) introduced Bayes’s theorem as a precise and accurate method for measuring the strength of an inductive argument. He contrasted Bayesian reasoning with common sense, which, he argued, is imbued with vagueness and often erroneous. Nearly two centuries later, Price’s claim was put to the test by psychologists who examined how people revise their opinions in light of new evidence (e.g., Phillips and Edwards, 1966; Kahneman and Tversky, 1973). For the past four decades, scholars have debated whether common sense can or cannot approximate Bayesian reasoning.
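The opinion-revision experiments cited above typically compared participants' updated judgments against the Bayesian benchmark, which is simplest to state in odds form: posterior odds = prior odds × likelihood ratio. The sketch below uses illustrative numbers in the style of the classic bookbag-and-poker-chip paradigm, not any specific study's stimuli.

```python
def update_odds(prior_odds, likelihood_ratio):
    """One Bayesian belief revision step in odds form."""
    return prior_odds * likelihood_ratio

def odds_to_prob(odds):
    """Convert odds back to a probability."""
    return odds / (1 + odds)

# Two equally likely hypotheses (prior odds 1:1); each diagnostic
# observation favours hypothesis A with likelihood ratio 0.7 / 0.3.
odds = 1.0
for _ in range(3):  # three independent diagnostic observations
    odds = update_odds(odds, 0.7 / 0.3)
print(round(odds_to_prob(odds), 3))  # ≈ 0.927
```

A recurring empirical finding in this paradigm was "conservatism": people's revised probabilities moved in the Bayesian direction but fell short of the normative value computed this way.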
Journal of Experimental Psychology: Applied | 2017
Marie Juanchich; Miroslav Sirota
We tested whether people focus on extreme outcomes to predict climate change and assessed the gap between the frequency of the predicted outcome and its perceived probability while controlling for climate change beliefs. We also tested 2 cost-effective interventions to reduce the preference for extreme outcomes and the frequency–probability gap by manipulating the probabilistic format: numerical or dual-verbal-numerical. In 4 experiments, participants read a scenario featuring a distribution of sea level rises, selected a sea rise to complete a prediction (e.g., “It is ‘unlikely’ that the sea level will rise . . . inches”) and judged the likelihood of this sea rise occurring. Results showed that people have a preference for predicting extreme climate change outcomes in verbal predictions (59% in Experiments 1–4) and that this preference was not predicted by climate change beliefs. Results also showed an important gap between the predicted outcome frequency and participants’ perception of the probability that it would occur. The dual-format reduced the preference for extreme outcomes for low and medium probability predictions but not for high ones, and none of the formats consistently reduced the frequency–probability gap.
Journal of Experimental Psychology: Applied | 2018
Miroslav Sirota; Marie Juanchich; Jean-François Bonnefon
According to the “1-in-X” effect, “1-in-X” ratios (e.g., 1 in 12) trigger a higher subjective probability than do numerically equivalent “N-in-X*N” ratios (e.g., 3 in 36). Here we tested the following: (a) the effect on objective measures, (b) its consequences for decision-making, (c) whether this effect is a form of bias by measuring probability accuracy, and (d) its amplification in people with lower health literacy and numeracy. In parallel-designed experiments, 975 participants from the general adult population participated in 1 of 5 experiments following a 2(format: “1-in-X” or “N-in-X*N”) × 4(scenarios) mixed design. Participants assessed the risk of contracting a disease on either a verbal probability scale (Experiment 1) or a numerical probability/frequency scale with immediate (Experiments 2–3) or delayed presentation (Experiments 4–5). Participants also made a health-related decision and completed a health literacy and numeracy scale. The “1-in-X” ratios yielded higher probability perceptions than did the “N-in-X*N” ratios and affected relevant decisions. Critically, the “1-in-X” ratios led to a larger objective overestimation of numerical probabilities than did the “N-in-X*N” ratios. People with lower levels of health literacy and numeracy were not more sensitive to the bias. Health professionals should use “1-in-X” ratios with great caution when communicating to patients, because they overestimate health risks.
Behavior Research Methods | 2018
Miroslav Sirota; Marie Juanchich
The Cognitive Reflection Test, measuring intuition inhibition and cognitive reflection, has become extremely popular because it reliably predicts reasoning performance, decision-making, and beliefs. Across studies, the response format of CRT items sometimes differs, based on the assumed construct equivalence of tests with open-ended versus multiple-choice items (the equivalence hypothesis). Evidence and theoretical reasons, however, suggest that the cognitive processes measured by these response formats and their associated performances might differ (the nonequivalence hypothesis). We tested the two hypotheses experimentally by assessing the performance in tests with different response formats and by comparing their predictive and construct validity. In a between-subjects experiment (n = 452), participants answered stem-equivalent CRT items in an open-ended, a two-option, or a four-option response format and then completed tasks on belief bias, denominator neglect, and paranormal beliefs (benchmark indicators of predictive validity), as well as on actively open-minded thinking and numeracy (benchmark indicators of construct validity). We found no significant differences between the three response formats in the numbers of correct responses, the numbers of intuitive responses (with the exception of the two-option version, which had a higher number than the other tests), and the correlational patterns of the indicators of predictive and construct validity. All three test versions were similarly reliable, but the multiple-choice formats were completed more quickly. We speculate that the specific nature of the CRT items helps build construct equivalence among the different response formats. We recommend using the validated multiple-choice version of the CRT presented here, particularly the four-option CRT, for practical and methodological reasons. Supplementary materials and data are available at https://osf.io/mzhyc/.
Medical Decision Making | 2017
Miroslav Sirota; Marie Juanchich; Dafina Petrova; Rocio Garcia-Retamero; Lukasz Walasek; Sudeep Bhatia
Background. Previous research has shown that format effects, such as the “1-in-X” effect—whereby “1-in-X” ratios lead to a higher perceived probability than “N-in-X*N” ratios—alter perceptions of medical probabilities. We do not know, however, how prevalent this effect is in practice; i.e., how often health professionals use the “1-in-X” ratio. Methods. We assembled 4 different sources of evidence, involving experimental work and corpus studies, to examine the use of “1-in-X” and other numerical formats quantifying probability. Results. Our results revealed that the use of the “1-in-X” ratio is prevalent and that health professionals prefer this format compared with other numerical formats (i.e., the “N-in-X*N”, %, and decimal formats). In Study 1, UK family physicians preferred to communicate prenatal risk using a “1-in-X” ratio (80.4%, n = 131) across different risk levels and regardless of patients’ numeracy levels. In Study 2, a sample from the UK adult population (n = 203) reported that most GPs (60.6%) preferred to use “1-in-X” ratios compared with other formats. In Study 3, “1-in-X” ratios were the most commonly used format in a set of randomly sampled drug leaflets describing the risk of side effects (100%, n = 94). In Study 4, the “1-in-X” format was the most commonly used numerical expression of medical probabilities or frequencies on the UK’s NHS website (45.7%, n = 2,469 sentences). Conclusions. The prevalent use of “1-in-X” ratios magnifies the chances of increased subjective probability. Further research should establish the clinical significance of the “1-in-X” effect.