
Publication


Featured research published by Peter Ayton.


Memory & Cognition | 2004

The hot hand fallacy and the gambler’s fallacy: Two faces of subjective randomness?

Peter Ayton; Ilan Fischer

The representativeness heuristic has been invoked to explain two opposing expectations—that random sequences will exhibit positive recency (the hot hand fallacy) and that they will exhibit negative recency (the gambler’s fallacy). We propose alternative accounts for these two expectations: (1) The hot hand fallacy arises from the experience of characteristic positive recency in serial fluctuations in human performance. (2) The gambler’s fallacy results from the experience of characteristic negative recency in sequences of natural events, akin to sampling without replacement. Experiment 1 demonstrates negative recency in subjects’ expectations for random binary outcomes from a roulette game, simultaneously with positive recency in expectations for another statistically identical sequence—the successes and failures of their predictions for the random outcomes. These findings fit our proposal but are problematic for the representativeness account. Experiment 2 demonstrates that sequence recency influences attributions that human performance or chance generated the sequence.


Journal of Experimental Psychology: Learning, Memory, and Cognition | 2009

Exaggerated risk: prospect theory and probability weighting in risky choice

Petko Kusev; Paul van Schaik; Peter Ayton; John Dent; Nick Chater

In 5 experiments, we studied precautionary decisions in which participants decided whether or not to buy insurance with specified cost against an undesirable event with specified probability and cost. We compared the risks taken for precautionary decisions with those taken for equivalent monetary gambles. Fitting these data to Tversky and Kahneman's (1992) prospect theory, we found that the weighting function required to model precautionary decisions differed from that required for monetary gambles. This result indicates a failure of the descriptive invariance axiom of expected utility theory. For precautionary decisions, people overweighted small, medium-sized, and moderately large probabilities: they exaggerated risks. This effect is not anticipated by prospect theory or experience-based decision research (Hertwig, Barron, Weber, & Erev, 2004). We found evidence that exaggerated risk is caused by the accessibility of events in memory: the weighting function varies as a function of the accessibility of events. This suggests that people's experiences of events leak into decisions even when risk information is explicitly provided. Our findings highlight a need to investigate how variation in decision content produces variation in preferences for risk.
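The weighting function the abstract fits is the one-parameter form from Tversky and Kahneman (1992). A minimal sketch, with the gain-domain parameter value they reported used purely as an illustrative default (the precautionary-decision fits in this paper would yield different parameters):

```python
def tk_weight(p, gamma=0.61):
    """Tversky-Kahneman (1992) probability weighting function:
    w(p) = p^gamma / (p^gamma + (1 - p)^gamma)^(1/gamma).
    gamma = 0.61 is their gain-domain estimate, used here only
    as an illustrative default."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

# The characteristic inverse-S shape: small probabilities are
# overweighted, large probabilities underweighted.
print(tk_weight(0.05))  # greater than 0.05
print(tk_weight(0.95))  # less than 0.95
```

With gamma = 1 the function reduces to w(p) = p, i.e. the linear weighting of expected utility; the paper's finding is that precautionary decisions demanded a differently shaped curve than monetary gambles.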


Futures | 1986

The psychology of forecasting

George Wright; Peter Ayton

This article reviews research on the elicitation of consistent, coherent and valid probability forecasts. From this a general model of the judgemental forecasting process is developed showing the interdependence of consistency, coherence and validity. The authors describe a forecasting aid which elicits consistent and coherent forecasts and argue that research should now focus on the validity of judgemental forecasting by attempting to identify the characteristics of those individuals who can produce valid judgements.


Archive | 1992

On the Competence and Incompetence of Experts

Peter Ayton

The widespread and unexceptional use of the term “expert” suggests that there is general public acceptance of the validity of the concept of an expert. For example, in news reports of particular “specialist” areas such as foreign politics, economics, and transport disasters, it is quite routine for particular individuals, presented as experts, to be explicitly consulted, and asked for their analyses, judgments, and opinions, which are quoted and duly accorded some weight and prominence.


Organizational Behavior and Human Decision Processes | 1992

Judgmental probability forecasting in the immediate and medium term

George Wright; Peter Ayton

This study investigates judgmental probability forecasting of nonpersonal events in the immediate and medium term. Forecasts for desirable events were found to be better calibrated and less overconfident in the immediate term than the medium term. Implications for decision analysis practice are discussed. In addition, forecasting responses and performance showed strong relative individual consistency across forecasting periods, indicating that it may well be possible to select good all-round forecasters. Finally, the relationship of coherence in forecasting response to subsequent forecasting performance is analyzed and discussed.
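"Calibration" and "overconfidence" here refer to the match between stated probabilities and observed outcome frequencies. A minimal sketch of the simplest such overconfidence measure, using hypothetical forecasts (the paper's own scoring procedure may differ):

```python
def overconfidence(probs, outcomes):
    """Mean stated probability minus observed proportion of events
    that occurred. Positive values indicate overconfidence, zero
    indicates calibration-in-the-large, negative underconfidence."""
    assert len(probs) == len(outcomes) and probs
    mean_confidence = sum(probs) / len(probs)
    hit_rate = sum(outcomes) / len(outcomes)
    return mean_confidence - hit_rate

# Hypothetical forecaster: average confidence 0.8, but only 3 of 5
# forecast events actually occurred.
print(overconfidence([0.9, 0.8, 0.7, 0.8, 0.8], [1, 1, 0, 1, 0]))  # ~0.2
```

A finer-grained analysis would bin forecasts by stated probability and compare each bin's hit rate to its midpoint, which is what calibration curves in this literature typically show.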


Medical Decision Making | 2013

How to Discriminate between Computer-Aided and Computer-Hindered Decisions: A Case Study in Mammography

Andrey Povyakalo; Eugenio Alberdi; Lorenzo Strigini; Peter Ayton

Background. Computer aids can affect decisions in complex ways, potentially even making them worse; common assessment methods may miss these effects. We developed a method for estimating the quality of decisions, as well as how computer aids affect it, and applied it to computer-aided detection (CAD) of cancer, reanalyzing data from a published study where 50 professionals ("readers") interpreted 180 mammograms, both with and without computer support.

Method. We used stepwise regression to estimate how CAD affected the probability of a reader making a correct screening decision on a patient with cancer (sensitivity), thereby taking into account the effects of the difficulty of the cancer (proportion of readers who missed it) and the reader's discriminating ability (Youden's determinant). Using regression estimates, we obtained thresholds for classifying a posteriori the cases (by difficulty) and the readers (by discriminating ability).

Results. Use of CAD was associated with a 0.016 increase in sensitivity (95% confidence interval [CI], 0.003–0.028) for the 44 least discriminating radiologists for 45 relatively easy, mostly CAD-detected cancers. However, for the 6 most discriminating radiologists, with CAD, sensitivity decreased by 0.145 (95% CI, 0.034–0.257) for the 15 relatively difficult cancers.

Conclusions. Our exploratory analysis method reveals unexpected effects. It indicates that, despite the original study detecting no significant average effect, CAD helped the less discriminating readers but hindered the more discriminating readers. Such differential effects, although subtle, may be clinically significant and important for improving both computer algorithms and protocols for their use. They should be assessed when evaluating CAD and similar warning systems.
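The two reader-level quantities the abstract relies on, sensitivity and a Youden-style summary of discriminating ability, can be sketched as follows. The counts are hypothetical, not drawn from the study's data:

```python
def sensitivity(tp, fn):
    """True positive rate: proportion of cancer cases correctly recalled."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True negative rate: proportion of normal cases correctly cleared."""
    return tn / (tn + fp)

def youden_j(tp, fn, tn, fp):
    """Youden's J = sensitivity + specificity - 1: a single-number summary
    of a reader's discriminating ability, 0 at chance and 1 when perfect."""
    return sensitivity(tp, fn) + specificity(tn, fp) - 1

# Hypothetical reader on a 180-case set: 40 of 45 cancers detected,
# 120 of 135 normal mammograms correctly cleared.
print(youden_j(tp=40, fn=5, tn=120, fp=15))
```

The study's contribution is not these standard measures themselves but stratifying the CAD effect by them, which is how a null average effect can conceal opposite effects in subgroups.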


International Journal of Forecasting | 1989

Judgemental probability forecasts for personal and impersonal events

George Wright; Peter Ayton

This paper reports the results of empirical tests of psychological hypotheses concerning the influence of perceived controllability and desirability on response and performance measures of judgemental forecasts. Subjects were required to estimate, for the imminent four-week period, the likelihoods of occurrence of personal events for themselves and their peers and general world events. We find that the two factors studied have varying, but predictable, influences on the three types of forecast and that individual differences in forecasting behaviour and reliability show some evidence of consistency.


Expert Systems | 1997

Arguments for Qualitative Risk Assessment: The StAR Risk Adviser

David K. Hardman; Peter Ayton

Calculating risk is relatively straightforward when there is reliable statistical evidence on which to base a judgment. However, novel technologies are often characterised by a lack of such historical data, which creates a problem for risk assessment. In fact, numerical risk assessments can be positively misleading in such situations. We describe a decision support system – StAR – that gives quantitative assessments where appropriate, but which is also able to provide qualitative risk assessments based on arguments for and against the presence of risk. The user is presented with a summary statement of risk, together with the arguments that underlie this assessment. Furthermore, the user is able to search beyond these top-level arguments in order to discover more about the available evidence. Here we suggest that this approach is well-suited to the way in which people naturally make decisions, and we show how the StAR approach has been implemented in the domain of toxicological risk assessment.


Knowledge Engineering Review | 1995

Bias in human judgement under uncertainty

Peter Ayton; Eva Pascoe

The claim is frequently made that human judgement and reasoning are vulnerable to cognitive biases. Such biases are assumed to be inherent in that they are attributed to the nature of the mental processes that produce judgement. In this paper, we review the psychological evidence for this claim in the context of the debate concerning human judgemental competence under uncertainty. We consider recent counter-arguments which suggest that the evidence for cognitive biases may be dependent on observations of performance on inappropriate tasks and by comparisons with inappropriate normative standards. We also consider the practical implications for the design of decision support systems.


Memory & Cognition | 1988

Decision time, subjective probability, and task difficulty

George Wright; Peter Ayton

This study analyzed the relationships between decision time, subjective probability, and task difficulty in the context of a probability assessment task involving memory search. The results indicate that decision time and subjective probability do not yield identical functions. Also, decision time increases as subjective task difficulty increases. A similar relationship obtains between decision time and a measure of objective task difficulty. These latter two findings are inconsistent with Hogarth’s (1975) prediction of a nonmonotonic relationship between decision time and task difficulty.

Collaboration


Dive into Peter Ayton's collaborations.

Top Co-Authors
George Wright

University of Strathclyde
