Publications


Featured research published by George Wright.


International Journal of Forecasting | 1999

The Delphi technique as a forecasting tool: issues and analysis

Gene Rowe; George Wright

Abstract: This paper systematically reviews empirical studies looking at the effectiveness of the Delphi technique, and provides a critique of this research. Findings suggest that Delphi groups outperform statistical groups (by 12 studies to two with two ‘ties’) and standard interacting groups (by five studies to one with two ‘ties’), although there is no consistent evidence that the technique outperforms other structured group procedures. However, important differences exist between the typical laboratory version of the technique and the original concept of Delphi, which make generalisations about ‘Delphi’ per se difficult. These differences derive from a lack of control of important group, task, and technique characteristics (such as the relative level of panellist expertise and the nature of feedback used). Indeed, there are theoretical and empirical reasons to believe that a Delphi conducted according to ‘ideal’ specifications might perform better than the standard laboratory interpretations. It is concluded that a different focus of research is required to answer questions on Delphi effectiveness, focusing on an analysis of the process of judgment change within nominal groups.


Technological Forecasting and Social Change | 1991

Delphi: A reevaluation of research and theory

Gene Rowe; George Wright; Fergus Bolger

Abstract: This paper examines critically the Delphi technique to determine whether it succeeds in alleviating the “process loss” typical of interacting groups. After briefly reviewing the technique, we consider problems with Delphi from two perspectives. First, we examine methodological and technical difficulties and the problems these have brought about in experimental applications. We suggest that important differences exist between the typical laboratory Delphi and the original concept of Delphi. These differences, reflecting a lack of control of important group characteristics/factors (such as the relative level of panelist expertise), make comparisons between Delphi studies unrealistic, as are generalizations from laboratory studies to the ideal of Delphi. This conclusion diminishes the power of those former Delphi critiques that have largely dismissed the procedure because of the variability of laboratory study results. Second, having noted the limited usefulness of the majority of studies for answering questions on the effectiveness of Delphi, we look at the technique from a theoretical/mechanical perspective. That is, by drawing upon ideas/findings from other areas of research, we attempt to discern whether the structure of the Delphi procedure itself might reasonably be expected to function as intended. We conclude that inadequacies in the nature of feedback typically supplied in applications of Delphi tend to ensure that any small gains in the resolution of “process loss” are offset by the removal of any opportunity for group “process gain”. Some solutions to this dilemma are advocated; they are based on an analysis of the process of judgment change within groups and a consideration of factors that increase the validity of statistical/nominal groups over their constituent individual components.


Archive | 2001

Expert Opinions in Forecasting: The Role of the Delphi Technique

Gene Rowe; George Wright

Expert opinion is often necessary in forecasting tasks because of a lack of appropriate or available information for using statistical procedures. But how does one get the best forecast from experts? One solution is to use a structured group technique, such as Delphi, for eliciting and combining expert judgments. In using the Delphi technique, one controls the exchange of information between anonymous panelists over a number of rounds (iterations), taking the average of the estimates on the final round as the group judgment. A number of principles are developed here to indicate how to conduct structured groups to obtain good expert judgments. These principles, applied to the conduct of Delphi groups, indicate how many and what type of experts to use (five to 20 experts with disparate domain knowledge); how many rounds to use (generally two or three); what type of feedback to employ (average estimates plus justifications from each expert); how to summarize the final forecast (weight all experts’ estimates equally); how to word questions (in a balanced way with succinct definitions free of emotive terms and irrelevant information); and what response modes to use (frequencies rather than probabilities or odds, with coherence checks when feasible). Delphi groups are substantially more accurate than individual experts and traditional groups and somewhat more accurate than statistical groups (which are made up of noninteracting individuals whose judgments are aggregated). Studies support the advantage of Delphi groups over traditional groups by five to one with one tie, and their advantage over statistical groups by 12 to two with two ties. We anticipate that by following these principles, forecasters may be able to use structured groups to effectively harness expert opinion.
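The core aggregation rule described above (equal-weight averaging of the final round, with the panel average fed back between rounds) can be sketched in a few lines of Python. The panel size, round count, and all estimates below are invented for illustration; the written justifications that Rowe and Wright recommend accompany the feedback are omitted.

```python
# Minimal sketch of Delphi-style aggregation (illustrative numbers only).

def round_feedback(estimates):
    """Statistical feedback supplied between rounds: the panel mean."""
    return sum(estimates) / len(estimates)

def delphi_group_judgment(rounds):
    """Group judgment: equal-weight average of the final round's estimates."""
    final_round = rounds[-1]
    return sum(final_round) / len(final_round)

# Two rounds from a hypothetical five-expert panel forecasting a quantity.
rounds = [
    [100.0, 140.0, 90.0, 120.0, 160.0],   # round 1: independent estimates
    [110.0, 130.0, 105.0, 120.0, 140.0],  # round 2: revised after feedback
]

print(round_feedback(rounds[0]))      # mean fed back after round 1: 122.0
print(delphi_group_judgment(rounds))  # final group judgment: 121.0
```

Note that equal weighting of the final-round estimates is exactly the "weight all experts' estimates equally" principle from the abstract; no expert's revision carries extra influence.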


Risk Analysis | 2001

Differences in Expert and Lay Judgments of Risk: Myth or Reality?

Gene Rowe; George Wright

This article evaluates the nine empirical studies that have been conducted on expert versus lay judgments of risk. Contrary to received wisdom, this study finds that there is little empirical evidence for the propositions (1) that experts judge risk differently from members of the public or (2) that experts are more veridical in their risk assessments. Methodological weaknesses in the early research are documented, and it is shown that the results of more recent studies are confounded by social and demographic factors that have been found to correlate with judgments of risk. Using a task-analysis taxonomy, a template is provided for the documentation of future studies of expert-lay differences/similarities that will facilitate analytic comparison.


Organization Studies | 2002

Confronting Strategic Inertia in a Top Management Team: Learning from Failure

Gerard P. Hodgkinson; George Wright

Recently there has been a growing interest in the use of scenario-planning techniques and related procedures such as cognitive mapping as a basis for facilitating organizational learning and strategic renewal. The overwhelming impression conveyed within the popular management literature is that the application of these techniques invariably leads to successful outcomes. To the extent that this is not the case, the absence of documented accounts of instances where these techniques have failed may mislead would-be users into embarking on inappropriate courses of action, unaware of their fundamental limitations. In keeping with a number of recent calls to make organizational research and management theory more relevant to the world of practice, we present a reflective account of our own (largely unsuccessful) attempt to apply these potentially powerful methods of intervention in the context of a private sector organization. Drawing on the rich seam of qualitative data gathered over the course of our work with the senior management team of the organization concerned, we explore the reasons why our attempts to utilize these methods did not yield the benefits anticipated. The data are analyzed using Janis and Mann's (1977) Conflict Theory of Decision Making. It is argued that the primary reason why our process intervention failed is that the participants adopted a series of defensive avoidance strategies, amplified by a series of psychodynamic processes initiated by the Chief Executive Officer (CEO). We contend that these defensive avoidance strategies served as a means of coping with the unacceptably high levels of decisional stress, which arose as a result of having to confront a variety of alternatives, each with potentially threatening consequences for the long-term well-being of the organization.


Journal of Management Studies | 2001

Enhancing Strategy Evaluation in Scenario Planning: A Role for Decision Analysis

Paul Goodwin; George Wright

Scenario planning can be a useful and attractive tool in strategic management. In a rapidly changing environment it can avoid the pitfalls of more traditional methods. Moreover, it provides a means of addressing uncertainty without recourse to the use of subjective probabilities, which can suffer from serious cognitive biases. However, one underdeveloped element of scenario planning is the evaluation of alternative strategies across the range of scenarios. If this is carried out informally then inferior strategies may be selected, while those formal evaluation procedures that have been suggested in relation to scenario planning are unlikely to be practical in most contexts. This paper demonstrates how decision analysis can be used to structure the strategy evaluation process in a way which avoids the problems associated with earlier proposals. The method is flexible, versatile and transparent and leads to a clear and documented rationale for the selection of a particular strategy.
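As a purely hypothetical illustration of what evaluating strategies across scenarios can look like (not the paper's own decision-analytic procedure, which is more elaborate), one can tabulate a performance score for each candidate strategy under each scenario and then compare, say, worst-case and average performance, without assigning probabilities to the scenarios. All strategy names, scenario names, and scores below are invented.

```python
# Illustrative strategy-by-scenario evaluation table (invented data).
# performance[strategy][scenario]: a 0-100 score for how well each
# candidate strategy fares if that scenario unfolds.
performance = {
    "expand":      {"boom": 90, "stagnation": 40, "collapse": 10},
    "consolidate": {"boom": 60, "stagnation": 65, "collapse": 50},
    "diversify":   {"boom": 70, "stagnation": 60, "collapse": 45},
}

for strategy, scores in performance.items():
    worst = min(scores.values())                # robustness: worst scenario
    mean = sum(scores.values()) / len(scores)   # unweighted scenario average
    print(f"{strategy}: worst={worst}, mean={mean:.1f}")
```

An informal evaluation might favour "expand" on its best case alone; making the full table explicit shows it is also the most fragile under the unfavourable scenarios, which is the kind of transparency a structured evaluation provides.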


Decision Support Systems | 1994

Assessing the quality of expert judgment: issues and analysis

Fergus Bolger; George Wright

Abstract: Experts frequently manifest the same biases as students do in the laboratory, yet expertise studies are often no more ecologically valid than laboratory studies because the methods used in both are similar. Further, real-world tasks vary in their learnability, that is, the availability of the outcome feedback necessary for a judge to improve performance with experience. We propose that good performance will be manifest when both ecological validity and learnability are high, but that performance will be poor when either of these is low. Finally, we suggest how researchers and practitioners might use these task-analytic constructs in order to identify true expertise for the formulation of decision support.


International Journal of Forecasting | 1993

Improving judgmental time series forecasting: A review of the guidance provided by research

Paul Goodwin; George Wright

Abstract: This study reviews the research literature on judgmental time series forecasting in order to assess: (i) the quality of inferences about judgmental forecasting in practice which can be drawn from this research; (ii) what is currently known about the processes employed by people when producing judgmental forecasts; (iii) the current evidence that certain strategies can lead to more accurate judgmental forecasts. A key focus of the paper is the identification of areas where further research is needed.


Journal of Multi-criteria Decision Analysis | 1999

Future‐focussed thinking: combining scenario planning with decision analysis

George Wright; Paul Goodwin

This paper first describes current practice in decision analysis and argues that nothing in the technique's application is likely to challenge the strategic decision maker's current worldview of the course of future events that are modelled in the decision tree. By contrast, a scenario planning intervention in an organization has the potential to increase perceived threat and thus lead to a step change in strategic decision making. Strategic decisions are made against a backcloth of psychological processes that act, it is argued, to reduce the perceived level of environmental threat and result in strategic inertia. For this reason, it is recommended that scenario planning be adopted as a standard procedure because of its ability to challenge individual and organizational worldviews. The use of scenario planning prior to conventional decision analysis is termed ‘future-focussed thinking’, and parallels are drawn between the approach advocated here and Keeney's value-focused thinking. Both serve to prompt the creation of enhanced options for subsequent evaluation by conventional decision analytic techniques.


International Journal of Forecasting | 1996

The role and validity of judgment in forecasting

George Wright; Michael Lawrence; Fred Collopy

Abstract: All forecasting methods involve judgment, but forecasting techniques are often dichotomised as judgmental or statistical. Most forecasting research has focused on the development and testing of statistical techniques. However, in practice, human reasoning and judgment play a primary role. Even when statistical methods are used, results are often adjusted in accord with expert judgment (Bunn and Wright 1991). This editorial introduces the papers included in this special issue of the International Journal of Forecasting and places them within a broader research context. The discussion of this context is structured in three sections: judgmental probability forecasting; judgmental time series forecasting; and interaction of judgment and statistical models.

Collaboration

Top co-authors of George Wright:

George Cairns | Queensland University of Technology
Peter Ayton | City University London
Ron Bradfield | University of Strathclyde
George Burt | University of Strathclyde
Keith Fletcher | University of East Anglia