
Publication


Featured research published by Fergus Bolger.


Technological Forecasting and Social Change | 1991

Delphi: A reevaluation of research and theory

Gene Rowe; George Wright; Fergus Bolger

Abstract This paper examines critically the Delphi technique to determine whether it succeeds in alleviating the “process loss” typical of interacting groups. After briefly reviewing the technique, we consider problems with Delphi from two perspectives. First, we examine methodological and technical difficulties and the problems these have brought about in experimental applications. We suggest that important differences exist between the typical laboratory Delphi and the original concept of Delphi. These differences, reflecting a lack of control of important group characteristics/factors (such as the relative level of panelist expertise), make comparisons between Delphi studies unrealistic, as are generalizations from laboratory studies to the ideal of Delphi. This conclusion diminishes the power of those former Delphi critiques that have largely dismissed the procedure because of the variability of laboratory study results. Second, having noted the limited usefulness of the majority of studies for answering questions on the effectiveness of Delphi, we look at the technique from a theoretical/mechanical perspective. That is, by drawing upon ideas/findings from other areas of research, we attempt to discern whether the structure of the Delphi procedure itself might reasonably be expected to function as intended. We conclude that inadequacies in the nature of feedback typically supplied in applications of Delphi tend to ensure that any small gains in the resolution of “process loss” are offset by the removal of any opportunity for group “process gain”. Some solutions to this dilemma are advocated; they are based on an analysis of the process of judgment change within groups and a consideration of factors that increase the validity of statistical/nominal groups over their constituent individual components.


Decision Support Systems | 1994

Assessing the quality of expert judgment: issues and analysis

Fergus Bolger; George Wright

Abstract Frequently the same biases have been manifest in experts as in students in the laboratory, but expertise studies are often no more ecologically valid than laboratory studies because the methods used in both are similar. Further, real-world tasks vary in their learnability, or the availability of the outcome feedback necessary for a judge to improve performance with experience. We propose that good performance will be manifest when both ecological validity and learnability are high, but that performance will be poor when either of these is low. Finally, we suggest how researchers and practitioners might use these task-analytic constructs in order to identify true expertise for the formulation of decision support.


Quarterly Journal of Experimental Psychology | 1993

Context-sensitive heuristics in statistical reasoning

Fergus Bolger; Nigel Harvey

Previous work has shown that people use anchor-and-adjust heuristics to forecast future data points from previous ones in the same series. We report three experiments that show that they use different versions of this heuristic for different types of series. To forecast an untrended series, our subjects always took a weighted average of the long-term mean of the series and the last data point. In contrast, the way that they forecast a trended series depended on the serial dependences in it. When these were low, people forecast by adding a proportion of the last difference in the series to the last data point. When stronger serial dependences made this difference less similar to the next one, they used a version of the averaging heuristic that they employed for untrended series. This could take serial dependences into account and included a separate component for trend. These results suggest that people use a form of the heuristic that is well adapted to the nature of the series that they are forecasting. However, we also found that the size of their adjustments tended to be suboptimal. They overestimated the degree of serial dependence in the data but underestimated trends. This biased their forecasts.
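The two heuristic variants described above can be sketched in code. This is a minimal illustration of the mechanisms reported, not the authors' model: the weights `w` and `p` are hypothetical parameters chosen for clarity, and `p < 1` merely mimics the trend underestimation the abstract mentions.

```python
def forecast_untrended(series, w=0.6):
    """Forecast an untrended series as a weighted average of the
    last data point and the series' long-term mean.
    w is an illustrative weight, not a value estimated in the paper."""
    long_term_mean = sum(series) / len(series)
    return w * series[-1] + (1 - w) * long_term_mean

def forecast_trended(series, p=0.8):
    """Forecast a trended series by adding a proportion of the last
    difference to the last data point. p < 1 models the damping
    (trend underestimation) described in the abstract."""
    last_difference = series[-1] - series[-2]
    return series[-1] + p * last_difference
```

With `p = 1` the trended heuristic reduces to a pure random-walk-with-drift step; shrinking `p` toward zero reproduces the conservative adjustments the experiments report.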


International Journal of Forecasting | 1996

Graphs versus tables: Effects of data presentation format on judgemental forecasting

Nigel Harvey; Fergus Bolger

Abstract We report two experiments designed to study the effect of data presentation format on the accuracy of judgemental forecasts. In the first one, people studied 44 different 20-point time series and forecast the 21st and 22nd points of each one. Half the series were presented graphically and half were in tabular form. Root mean square error (RMSE) in forecasts was decomposed into constant error (to measure bias) and variable error (to measure inconsistency). For untrended data, RMSE was somewhat higher with graphical presentation: inconsistency and an overforecasting bias were both greater with this format. For trended data, RMSE was higher with tabular presentation. This was because underestimation of trends with this format was so much greater than with graphical presentation that it overwhelmed the smaller but opposing effects that were observed with untrended series. In the second experiment, series were more variable but very similar results were obtained.
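The decomposition used here is a standard one in judgemental forecasting research; a minimal sketch, assuming the usual definitions in which constant error is the mean forecast error and variable error is the standard deviation of the errors, so that RMSE² = CE² + VE² (the paper's exact computation may differ in detail):

```python
import math

def decompose_rmse(forecasts, actuals):
    """Split RMSE into constant error (CE, bias) and variable error
    (VE, inconsistency), satisfying RMSE**2 == CE**2 + VE**2."""
    errors = [f - a for f, a in zip(forecasts, actuals)]
    n = len(errors)
    ce = sum(errors) / n                                      # mean error = bias
    ve = math.sqrt(sum((e - ce) ** 2 for e in errors) / n)    # spread of errors
    rmse = math.sqrt(sum(e ** 2 for e in errors) / n)
    return rmse, ce, ve
```

The identity lets the two presentation formats be compared separately on bias (e.g. systematic overforecasting) and on inconsistency, as in the abstract.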


International Journal of Forecasting | 2004

The effects of feedback on judgmental interval predictions

Fergus Bolger

Abstract The majority of studies of probability judgment have found that judgments tend to be overconfident and that the degree of overconfidence is greater the more difficult the task. Further, these effects have been resistant to attempts to ‘debias’ via feedback. We propose that under favourable conditions, provision of appropriate feedback should lead to significant improvements in calibration, and the current study aims to demonstrate this effect. To this end, participants first specified ranges within which the true values of time series would fall with a given probability. After receiving feedback, forecasters constructed intervals for new series, changing their probability values if desired. The series varied systematically in terms of their characteristics including amount of noise, presentation scale, and existence of trend. Results show that forecasts were initially overconfident but improved significantly after feedback. Further, this improvement was not simply due to ‘hedging’, i.e. shifting to very high probability estimates and extremely wide intervals; rather, it seems that calibration improvement was chiefly obtained by forecasters learning to evaluate the extent of the noise in the series.
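Calibration of interval forecasts of this kind is conventionally assessed by comparing the hit rate, the fraction of true values that fall inside the stated intervals, against the stated probability; a hit rate below the stated probability indicates overconfidence (intervals too narrow). A minimal sketch of that comparison (the function name is ours, not from the paper):

```python
def hit_rate(intervals, actuals):
    """Fraction of actual values falling inside their prediction
    intervals, given as (low, high) pairs. Compared with the stated
    coverage probability, a lower hit rate indicates overconfidence."""
    hits = sum(low <= a <= high for (low, high), a in zip(intervals, actuals))
    return hits / len(actuals)
```

On this measure, "hedging" would raise the hit rate trivially by widening every interval, which is why the study checks that the post-feedback improvement was not achieved that way.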


Archive | 1992

Reliability and Validity in Expert Judgment

Fergus Bolger; George Wright

As the world of human affairs becomes increasingly more complex, our reliance upon expert judgment grows correspondingly. Technological, economic, legal, and political developments—to name but a few—place ever-larger information-processing demands upon us, thereby forcing specialization. A single person can no longer be a master of his or her whole field and, consequently, knowledge becomes distributed among a number of specialist experts.


Experimental Psychology | 2008

Market Entry Decisions

Fergus Bolger; Briony D. Pulford; Andrew M. Colman

In a market entry game, the number of entrants usually approaches game-theoretic equilibrium quickly, but in real-world markets business start-ups typically exceed market capacity, resulting in chronically high failure rates and suboptimal industry profits. Excessive entry has been attributed to overconfidence arising when expected payoffs depend partly on skill. In an experimental test of this hypothesis, 96 participants played 24 rounds of a market entry game, with expected payoffs dependent partly on skill on half the rounds, after their confidence was manipulated and measured. The results provide direct support for the hypothesis that high levels of confidence are largely responsible for excessive entry, and they suggest that absolute confidence, independent of interpersonal comparison, rather than confidence about one's abilities relative to others, drives excessive entry decisions when skill is involved.


Knowledge Engineering Review | 1995

Cognitive expertise research and knowledge engineering

Fergus Bolger

This paper is a review of research into cognitive expertise. The review is organized in terms of a simple model of the knowledge and cognitive processes that might be expected to be enhanced in experts relative to non-experts. This focus on cognitive competence underlying expert performance permits the identification of skills and knowledge that we might wish to capture and model in expert systems. The competence perspective also indicates areas of weakness in human experts. In these areas, we might wish to support or replace the expert with, for example, a normative system rather than attempting to model his or her knowledge.


Quarterly Journal of Experimental Psychology | 2001

Collecting information: Optimizing outcomes, screening options, or facilitating discrimination?

Nigel Harvey; Fergus Bolger

Collection of information prior to a decision may be integrated into a compensatory choice process; if it is, the information packet that is collected should be the one that produces the highest net gain. Alternatively, information may be collected in order to screen out options that fail to meet minimum standards; if this is the case, people should not choose options on which they have not collected available information. We tested these and other predictions from the two approaches in four experiments. Participants were given specific information about three attributes of each choice option but only probabilistic information about a fourth one. They rated attractiveness of options, decided whether to collect specific information about the fourth attribute of each one, rated options again, and then selected one of them. Data were consistent with neither of the above approaches. Instead they suggested that people collect information in order to facilitate their ability to discriminate between the attractiveness of options.


Proceedings of the National Academy of Sciences of the United States of America | 2014

Delphi: Somewhere between Scylla and Charybdis?

Fergus Bolger; Gene Rowe

We strongly concur with most of Morgan’s points regarding expert knowledge elicitation (EKE) for policy making (1); however, we contest the somewhat idealistic tone of Morgan’s paper, which we believe counters the goal of better policy making otherwise promoted. Furthermore, we disagree on two specific issues raised by Morgan that can be seen as located within this general criticism: the role of combination and consensus, and the value of the Delphi method.

Collaboration


Dive into Fergus Bolger's collaborations.

Top Co-Authors

George Wright
University of Strathclyde

Nigel Harvey
University College London

M. Sinan Gönül
Middle East Technical University

Mustafa Bayindir
Middle East Technical University

Andrew Stranieri
Federation University Australia