Publication


Featured research published by Anders Winman.


Attention Perception & Psychophysics | 1993

Realism of confidence in sensory discrimination: The underconfidence phenomenon

Mats Björkman; Peter Juslin; Anders Winman

This paper documents a very pervasive underconfidence bias in the area of sensory discrimination. In order to account for this phenomenon, a subjective distance theory of confidence in sensory discrimination is proposed. This theory, based on the law of comparative judgment and the assumption of confidence as an increasing function of the perceived distance between stimuli, predicts underconfidence, that is, that people should perform better than they express in their confidence assessments. Due to the fixed sensitivity of the sensory system, this underconfidence bias is practically impossible to avoid. The results of Experiment 1 confirmed the prediction of underconfidence with the help of present-day calibration methods and indicated a good quantitative fit of the theory. The results of Experiment 2 showed that prolonged experience of outcome feedback (160 trials) had no effect on underconfidence. It is concluded that the subjective distance theory provides a better explanation of the underconfidence phenomenon than do previous accounts in terms of subconscious processes.
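The theory's starting point, Thurstone's law of comparative judgment, fixes the relation between subjective distance and accuracy. Below is a minimal sketch of that building block only, assuming Case V (equal, uncorrelated discriminal dispersions); the mapping from perceived distance to confidence that yields the underconfidence prediction follows the paper's full model and is not reproduced here.

import numpy as np
from scipy.stats import norm

def proportion_correct(subjective_distance, sigma=1.0):
    # Law of comparative judgment, Case V: with a fixed discriminal dispersion sigma,
    # the probability of choosing the objectively greater stimulus depends only on the
    # subjective distance between the two stimuli.
    return norm.cdf(subjective_distance / (sigma * np.sqrt(2)))

for d in (0.25, 0.5, 1.0, 2.0):
    print(f"distance {d}: p(correct) = {proportion_correct(d):.3f}")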


Psychological Review | 2007

The Naïve intuitive statistician: A naïve sampling model of intuitive confidence intervals

Peter Juslin; Anders Winman; Patrik Hansson

The perspective of the naïve intuitive statistician is outlined and applied to explain overconfidence when people produce intuitive confidence intervals and why this format leads to more overconfidence than other formally equivalent formats. The naïve sampling model implies that people accurately describe the sample information they have but are naïve in the sense that they uncritically take sample properties as estimates of population properties. A review demonstrates that the naïve sampling model accounts for the robust and important findings in previous research as well as provides novel predictions that are confirmed, including a way to minimize the overconfidence with interval production. The authors discuss the naïve sampling model as a representative of models inspired by the naïve intuitive statistician.
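A minimal sketch (not from the paper) of the core mechanism: an intuitive 80% interval is read off a small sample that is described accurately but naively treated as if its dispersion were the population dispersion. The sample size, the normal distribution, and the percentile rule are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_trials, n_sample = 20_000, 6
hits = 0
for _ in range(n_trials):
    sample = rng.normal(0.0, 1.0, n_sample)   # small sample retrieved from memory
    target = rng.normal(0.0, 1.0)             # the uncertain quantity being judged
    lo, hi = np.percentile(sample, [10, 90])  # interval covering 80% of the sample
    hits += lo <= target <= hi
print("hit rate of the produced 80% intervals:", hits / n_trials)  # well below 0.80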


Journal of Experimental Psychology: Learning, Memory and Cognition | 2003

Cue abstraction and exemplar memory in categorization

Peter Juslin; Sari Jones; Henrik Olsson; Anders Winman

In this article, the authors compare 3 generic models of the cognitive processes in a categorization task. The cue abstraction model implies abstraction in training of explicit cue-criterion relations that are mentally integrated to form a judgment, the lexicographic heuristic uses only the most valid cue, and the exemplar-based model relies on retrieval of exemplars. The results from 2 experiments showed that, in lieu of the lexicographic heuristic, most participants spontaneously integrate cues. In contrast to single-system views, exemplar memory appeared to dominate when the feedback was poor, but when the feedback was rich enough to allow the participants to discern the task structure, it was exploited for abstraction of explicit cue-criterion relations.
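A minimal sketch (illustrative cue weights and exemplar values, not the article's data) of the three generic processes contrasted here: additive integration of abstracted cue-criterion relations, similarity-weighted retrieval of stored exemplars, and the lexicographic use of a single cue.

import numpy as np

def cue_abstraction(probe, weights, intercept):
    # Explicit cue-criterion relations integrated additively into a judgment.
    return intercept + float(np.dot(weights, probe))

def exemplar_judgment(probe, exemplars, criteria, s=0.3):
    # Similarity-weighted average of stored exemplars' criterion values; similarity
    # shrinks multiplicatively (by s) for every mismatching binary cue.
    sim = s ** np.sum(exemplars != probe, axis=1)
    return float(np.sum(sim * criteria) / np.sum(sim))

def lexicographic(probe, best_cue=0, values=(65.0, 35.0)):
    # Uses only the single most valid cue and ignores the rest.
    return values[0] if probe[best_cue] == 1 else values[1]

# Hypothetical training exemplars with four binary cues and a continuous criterion.
exemplars = np.array([[1, 1, 0, 0], [1, 0, 1, 0], [0, 1, 1, 1], [0, 0, 0, 1]])
criteria = np.array([70.0, 60.0, 40.0, 30.0])
probe = np.array([1, 1, 1, 0])

print(cue_abstraction(probe, weights=np.array([20.0, 10.0, 5.0, -5.0]), intercept=35.0))
print(exemplar_judgment(probe, exemplars, criteria))
print(lexicographic(probe))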


Journal of Experimental Psychology: Learning, Memory and Cognition | 2004

Subjective Probability Intervals: How to Reduce Overconfidence by Interval Evaluation.

Anders Winman; Patrik Hansson; Peter Juslin

Format dependence implies that assessment of the same subjective probability distribution produces different conclusions about over- or underconfidence depending on the assessment format. In 2 experiments, the authors demonstrate that the overconfidence bias that occurs when participants produce intervals for an uncertain quantity is almost abolished when they evaluate the probability that the same intervals include the quantity. The authors successfully apply a method for adaptive adjustment of probability intervals as a debiasing tool and discuss a tentative explanation in terms of a naive sampling model. According to this view, people report their experiences accurately, but they are naive in that they treat both sample proportion and sample dispersion as unbiased estimators, yielding small bias in probability evaluation but strong bias in interval production.
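Continuing the naive sampling sketch above: the same kind of small sample yields a nearly unbiased judgment when a given interval is evaluated, because a sample proportion is an unbiased estimator of the corresponding population proportion. The distribution and the interval endpoints are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)
n_trials, n_sample = 20_000, 6
lo, hi = -1.2816, 1.2816            # true 80% interval of a standard normal
bias = []
for _ in range(n_trials):
    sample = rng.normal(0.0, 1.0, n_sample)
    judged_p = np.mean((sample >= lo) & (sample <= hi))  # proportion of the sample inside
    bias.append(judged_p - 0.80)
print("mean bias of evaluated interval probabilities:", np.mean(bias))  # close to zero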


Psychological Review | 2009

Probability theory: Not the very guide of life

Peter Juslin; Håkan Nilsson; Anders Winman

Probability theory has long been taken as the self-evident norm against which to evaluate inductive reasoning, and classical demonstrations of violations of this norm include the conjunction error and base-rate neglect. Many of these phenomena require multiplicative probability integration, whereas people seem more inclined to linear additive integration, in part, at least, because of well-known capacity constraints on controlled thought. In this article, the authors show with computer simulations that when based on approximate knowledge of probabilities, as is routinely the case in natural environments, linear additive integration can yield as accurate estimates, and as good average decision returns, as estimates based on probability theory. It is proposed that in natural environments people have little opportunity or incentive to induce the normative rules of probability theory and, given their cognitive constraints, linear additive integration may often offer superior bounded rationality.


Journal of Experimental Psychology: General | 2009

Linda is not a bearded lady: Configural weighting and adding as the cause of extension errors

Håkan Nilsson; Anders Winman; Peter Juslin; Göran Hansson

This article explores the configural weighted average (CWA) hypothesis, which suggests that extension biases, such as conjunction and disjunction errors, occur because people estimate compound probabilities by taking a CWA of the constituent probabilities. The hypothesis suggests a process consistent with well-known cognitive constraints that nonetheless achieves high robustness and bounded rationality in noisy real-life environments. The CWA hypothesis predicts that in error-free data, conjunction and disjunction errors should be the rule rather than the exception when pairs of statements are randomly sampled from an environment; that the rate of extension errors should increase when noise in the data is decreased; and that adding a likely component should increase the judged probability of a conjunction. Four experiments generally verify these predictions, demonstrating that extension errors are frequent even when tasks are selected according to a representative design.
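A minimal sketch (illustrative weight and constituent probabilities, not the article's parameter estimates) of how a configural weighted average produces a conjunction error and why adding a likely component raises the judged conjunction.

def cwa_conjunction(p_less_likely, p_more_likely, w=0.7):
    # Configural weighted average: the less likely constituent gets the larger weight.
    return w * p_less_likely + (1.0 - w) * p_more_likely

# Hypothetical constituent judgments in a Linda-style problem.
p_bank_teller, p_feminist = 0.1, 0.9
p_conj = cwa_conjunction(p_bank_teller, p_feminist)

# Any such average exceeds the less likely constituent, which normatively can never
# happen for a conjunction: a conjunction error.
print(p_conj, p_conj > p_bank_teller)                  # 0.34 True

# Adding a more likely component raises the judged conjunction probability.
print(cwa_conjunction(p_bank_teller, 0.99) > p_conj)   # True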


Journal of Experimental Psychology: Learning, Memory and Cognition | 2008

The Role of Short-Term Memory Capacity and Task Experience for Overconfidence in Judgment Under Uncertainty

Patrik Hansson; Peter Juslin; Anders Winman

Research with general knowledge items demonstrates extreme overconfidence when people estimate confidence intervals for unknown quantities, but close to zero overconfidence when the same intervals are assessed by probability judgment. In 3 experiments, the authors investigated if the overconfidence specific to confidence intervals derives from limited task experience or from short-term memory limitations. As predicted by the naive sampling model (P. Juslin, A. Winman, & P. Hansson, 2007), overconfidence with probability judgment is rapidly reduced by additional task experience, whereas overconfidence with intuitive confidence intervals is minimally affected even by extensive task experience. In contrast to the minor bias with probability judgment, the extreme overconfidence bias with intuitive confidence intervals is correlated with short-term memory capacity. The proposed interpretation is that increased task experience is not sufficient to cure the overconfidence with confidence intervals because it stems from short-term memory limitations.


Attention Perception & Psychophysics | 1996

Underconfidence in sensory discrimination: the interaction between experimental setting and response strategies.

Henrik Olsson; Anders Winman

In a recent issue of this journal, Baranski and Petrusic (1994) presented empirical data revealing overconfidence in sensory discrimination. In this paper, we propose an explanation of Baranski and Petrusic’s results, based on an idiosyncrasy in the experimental setting that misleads subjects who are using an unwarranted symmetry assumption. Experiment 1 showed that when this hypothesis is controlled for, a large underconfidence bias is obtained with Baranski and Petrusic’s procedure. The results of Experiment 2 confirmed that overconfidence is difficult to obtain in subject-controlled sensory discrimination tasks, even for a very low proportion of correct responses. The different results obtained in sensory and cognitive tasks suggest that one should not uncritically draw parallels between confidence in sensory and cognitive judgments.


Frontiers in Psychology | 2013

Measuring acuity of the approximate number system reliably and validly: the evaluation of an adaptive test procedure.

Marcus Lindskog; Anders Winman; Peter Juslin; Leo Poom

Two studies investigated the reliability and predictive validity of commonly used measures and models of Approximate Number System (ANS) acuity. Study 1 investigated reliability by both an empirical approach and a simulation of maximum obtainable reliability under ideal conditions. Results showed that common measures of the Weber fraction (w) are reliable only when using a substantial number of trials, even under ideal conditions. Study 2 compared different purported measures of ANS acuity with respect to convergent and predictive validity in a within-subjects design and evaluated an adaptive test using the ZEST algorithm. Results showed that the adaptive measure can reduce the number of trials needed to reach acceptable reliability. Only direct tests with non-symbolic numerosity discriminations of stimuli presented simultaneously were related to arithmetic fluency. This correlation remained when controlling for general cognitive ability and perceptual speed. Further, the purported indirect measure of ANS acuity in terms of the Numeric Distance Effect (NDE) was not reliable and showed no sign of predictive validity. The non-symbolic NDE for reaction time was significantly related to direct w estimates, in the direction opposite to that expected. Easier stimuli were found to be more reliable, but only harder (7:8 ratio) stimuli contributed to predictive validity.
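A minimal sketch (assumed parameter values and trial structure, using the standard psychometric model rather than the ZEST procedure itself) of the model whose Weber fraction w such measures target: each numerosity is represented with noise proportional to its magnitude, and w is recovered from discrimination accuracy by maximum likelihood.

import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

def p_correct(n1, n2, w):
    # Standard ANS model: Gaussian representations with SD proportional to magnitude
    # (scalar variability), so accuracy depends on the ratio of the two numerosities.
    return norm.cdf(abs(n2 - n1) / (w * np.sqrt(n1**2 + n2**2)))

# Hypothetical discrimination trials at four ratios, 50 repetitions each.
rng = np.random.default_rng(1)
true_w = 0.25
pairs = np.array([(7, 8), (9, 12), (10, 15), (8, 16)] * 50)
acc = rng.random(len(pairs)) < p_correct(pairs[:, 0], pairs[:, 1], true_w)

def neg_log_lik(w):
    p = np.clip(p_correct(pairs[:, 0], pairs[:, 1], w), 1e-6, 1 - 1e-6)
    return -np.sum(np.where(acc, np.log(p), np.log(1 - p)))

fit = minimize_scalar(neg_log_lik, bounds=(0.05, 1.0), method="bounded")
print("recovered w:", fit.x)   # should land near 0.25 given enough trials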


Journal of Experimental Psychology: Learning, Memory and Cognition | 2001

High-level reasoning and base-rate use: Do we need cue-competition to explain the inverse base-rate effect?

Peter Juslin; Pia Wennerholm; Anders Winman

Previous accounts of the inverse base-rate effect (D. L. Medin & S. M. Edelson, 1988) have revolved around the concept of cue-competition. In this article, the authors propose that high-level reasoning in the form of an eliminative inference mechanism may contribute to the effect. A quantitative implementation of this idea demonstrates that it has the power by itself to produce the pattern of base-rate effects in the Medin and Edelson (1988) design. Four predictions are derived that contradict the predictions by attention to distinctive input (ADIT; J. K. Kruschke, 1996), to date the most successful account of the inverse base-rate effect. Results from 3 experiments disconfirm the predictions by ADIT and demonstrate the importance of high-level reasoning in designs of the Medin and Edelson kind. Implications for the interpretation of the inverse base-rate effect and the attention-shifting mechanisms presumed by ADIT are discussed.
