Publications


Featured research published by Matthew Welsh.


Australian Journal of Psychology | 2006

Investigating the construct validity associated with microworld research: A comparison of performance under different management structures across expert and non-expert naturalistic decision-making groups

Taryn Chapman; Ted Nettelbeck; Matthew Welsh; Vanessa Mills

The use of computer-simulated microworlds has become increasingly popular to test concepts related to naturalistic decision-making (NDM) in a controlled laboratory environment. However, the construct validity for such methods is unclear. The current study followed previous microworld-based studies that compared indirect (macromanagement) methods of management with direct methods (micromanagement). To explore the construct validity of microworld research, the current study compared performance scores generated by participants with experience in a prototypical NDM environment, with those without such experience. Using a networked computer simulation for firefighting, 10 Army officers and 10 civilians played the role of Fire Chief within three-person command and control teams. The two subordinates were confederates. Comparison of management structures supported previous results indicating that indirect control produces significantly better NDM performance. However, no difference was found between th...


SPE Hydrocarbon Economics and Evaluation Symposium | 2014

Uncertainty vs. variability: what's the difference and why is it important?

Steve Begg; Reidar Brumer Bratvold; Matthew Welsh

Technical professionals are often asked to estimate “ranges” for uncertain quantities. It is important that they distinguish whether they are being asked for variability ranges or uncertainty ranges. Likewise, it is important for modelers to know whether they are building models of variability or of uncertainty, and what the relationship between the two is, if any. We discuss and clarify the distinction between uncertainty and variability through strict definitions, illustrative analogies and numerical examples. Uncertainty means we do not know the value (or outcome) of some quantity, e.g. the average porosity of a specific reservoir (or the porosity of a core-sized piece of rock at some point within the reservoir). Variability refers to the multiple values a quantity takes at different locations, times or instances, e.g. the average porosities of a collection of different reservoirs (or the range of core-plug porosities at different locations within a specific reservoir). Uncertainty is quantified by a probability distribution that depends upon our state of information about the likely single, true value of the uncertain quantity. Variability is quantified by a distribution of the frequencies of multiple instances of the quantity, derived from observed data. That both are represented by ‘distributions’ is a major source of confusion, which can lead to the uncritical adoption of frequency distributions to represent uncertainty, and thus to erroneous risk assessments and bad decisions. For example, the variability of natural phenomena is sometimes well approximated by normal or log-normal distributions, but such distributions may not be appropriate to represent the uncertainty in the outcome of a single occurrence. We show there is no objectively ‘right’ probability distribution for quantifying the uncertainty of an unknown event; it can only be ‘right’ in that it is consistent with the assessor’s information.
Thus, different people (or teams or companies) can legitimately hold different probabilities for the same event. Only in very restrictive, arguably unrealistic, situations can we choose to use a frequency distribution derived from variability data as a probability distribution representing our uncertainty in an event’s outcome. Our experience as educators of students and oil & gas industry personnel suggests that significant confusion exists in their understanding of the distinction between variability and uncertainty. This paper thus provides a resource for technical professionals and teachers to clarify the distinction between the two, or to correct it where it has been wrongly taught, and thereby help to improve decision-making.
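The distinction the abstract draws can be sketched in a few lines of code. This is an illustrative example, not from the paper: the porosity numbers and assessor beliefs are invented for demonstration.

```python
import statistics

# Variability: observed average porosities of many DIFFERENT reservoirs.
# This is a frequency distribution derived from data (values are invented).
observed_porosities = [0.12, 0.18, 0.15, 0.21, 0.09, 0.17, 0.14, 0.19]
variability_mean = statistics.mean(observed_porosities)
variability_spread = statistics.stdev(observed_porosities)

# Uncertainty: two assessors quantify their belief about the porosity of
# ONE specific, as-yet-unmeasured reservoir. Each distribution reflects a
# state of information, so the two can legitimately differ.
assessor_a = {"mean": 0.15, "stdev": 0.04}  # knows only the regional data
assessor_b = {"mean": 0.20, "stdev": 0.02}  # has also seen nearby well logs

# Only under restrictive assumptions (the new reservoir is exchangeable
# with the observed ones) would reusing the variability distribution as
# the uncertainty distribution be justified.
print(f"variability: mean={variability_mean:.3f}, sd={variability_spread:.3f}")
print(f"assessor A belief: {assessor_a}, assessor B belief: {assessor_b}")
```

Note that both quantities above are “distributions”, which is exactly the source of confusion the paper warns about: one summarizes data across instances, the other encodes an assessor's information about a single instance.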


SPE Asia Pacific Oil & Gas Conference and Exhibition | 2006

An Oil and Gas Decision-Making Taxonomy

S.I. Mackie; Matthew Welsh; Michael D. Lee

Business under-performance in the upstream oil and gas industry, and the failure of many decisions to return expected results, has led to a growing interest over the past few years in understanding the impacts of current decision-making tools and processes and their relationship with decision outcomes. Improving oil and gas decision-making is thus, increasingly, seen as reliant on an understanding of what types of decisions are involved, how they should be made in order to be optimal, and how they actually are made in the “real world”. There has been significant work carried out within the discipline of cognitive psychology, observing how people actually make decisions. However, little is known as to whether these general observations apply to decision-making in the upstream oil and gas industry. Nor has there been work on how the results might be used to improve decision-making in the industry. This paper documents the development of a theoretical Oil and Gas Decision Making Taxonomy (OGDMT) that seeks to lay a “level playing field” decision space within which to judge the processes and tools of optimal decision-making as the first step in this research. The OGDMT builds on established ideas in the human decision-making literature, but is itself novel, and involves four different dimensions: level of investigation; task constraint; value function; and the information structure of the environment. It is concluded that decision scenarios at different places in the taxonomy will likely involve different decision-making tools, data and processes for the achievement of optimal decision-making. The results of this work can be applied, for example, to the question of whether decisions about reserves should be made using deterministic or probabilistic tools, data and processes. 
Introduction

“The last 10 years might be called ‘a decade of unprofitable growth’ for many upstream companies” [1] were the words chosen by Ed Merrow to describe the results of a study that looked at over one thousand upstream oil and gas projects undertaken during the 1990s and the early part of the new century. This conclusion is one reflection of the underperformance of the industry. The search for ways to improve decision-making in the oil and gas industry has evolved in response to this concern [2, 3]. Within the discipline of cognitive psychology, much work has been carried out in observing how people actually make decisions [4-7]. However, little is known as to how these observations can be applied within the industry, despite recent work beginning to move beyond simply pointing out the ways in which people make decisions and starting to show the applicability of these tendencies to oil and gas decisions [8, 9]. In commenting on decision research, Cooksey made the following pertinent comment: “In decision research, we should not be thinking ‘either-or’ but ‘which, when and why’ with respect to philosophical, theoretical, and methodological stances and with respect to learning from a wide range of disciplines” [10, p. 362]. Essentially, the argument is that discussions or arguments about which decision-making school of thought to follow are not helpful. The real focus should be on which decision-making methodology to use, as well as when and why it would be the best choice. The primary premise is that there are optimal processes and tools (Cooksey’s “which”) for certain types of decision-making (Cooksey’s “when”). The secondary premise is that Cooksey’s “why” can be answered by showing, from both laboratory and real-world experiments, that when decision-making tools and processes are tailored to the type of decision, optimal decision-making will result.
But in order to look at the “which, when and why” of oil and gas decision-making, it is first necessary to determine the decision type. This means there is a need to categorize or classify decisions, something not yet covered in the upstream oil and gas decision-making literature. A previous effort to develop a framework or classification of oil and gas decision-making was made by the United Kingdom Offshore Operators Association (UKOOA). In 1999 the association published a set of industry guidelines [11], which were designed to assist operators with a more open, transparent, soundly-based and context-appropriate decision-making process as it related to offshore health, safety and environment (HSE). Called the Decision Support Framework for Major Accident Hazard Safety, it looked at decision context as the basis for making decisions, and even made some recommendations for optimal methodologies. This framework or taxonomy, however, addresses only the more surface, engineering-based decisions, or what may be termed ...
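The four OGDMT dimensions named in the abstract can be pictured as a simple record type. This is a hypothetical sketch for illustration only: the dimension values and the tool mapping below are invented, not the authors' implementation.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class DecisionScenario:
    """A position in the (hypothetical) four-dimensional OGDMT space."""
    level_of_investigation: str   # illustrative values: "screening" / "detailed"
    task_constraint: str          # illustrative values: "time-pressured" / "unconstrained"
    value_function: str           # illustrative values: "single-objective" / "multi-objective"
    information_structure: str    # illustrative values: "deterministic" / "probabilistic"


def suggested_tooling(scenario: DecisionScenario) -> str:
    # Invented mapping, illustrating the paper's conclusion that scenarios
    # at different places in the taxonomy call for different tools.
    if scenario.information_structure == "probabilistic":
        return "probabilistic tools (e.g. Monte Carlo reserve estimates)"
    return "deterministic tools (e.g. single-value reserve estimates)"


reserves_decision = DecisionScenario(
    level_of_investigation="detailed",
    task_constraint="unconstrained",
    value_function="single-objective",
    information_structure="probabilistic",
)
print(suggested_tooling(reserves_decision))
```

The point of the sketch is only that classifying a decision along the four dimensions comes first, and the choice of tools, data and processes follows from where the decision sits in the taxonomy.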


The APPEA Journal | 2018

Why are decisions for oil and gas projects not always made the way they ‘should’ be?

David Newman; Steve Begg; Matthew Welsh

The outcomes of many business decisions do not live up to expectations or possibilities. A literature review of neuroscience and psychological factors that affect decision making has been undertaken, highlighting many reasons why it is hard for people to be good decision makers, particularly in complex and uncertain situations such as oil and gas projects. One way to diminish the impact of these human factors is to use the structured methodology and tools of Decision Analysis, which have been developed and used over 50 years, for making good decisions. Interviews with senior personnel from oil and gas operating companies, followed up by a larger-scale survey, were conducted to determine whether or how Decision Analysis and Decision Quality are used and why they are used in particular ways. The results showed that Decision Analysis and Decision Quality are not used as often as the participants think they should be; some 90% of respondents believed that they should be used for key project decisions, but only ~50% said that they are used. Six propositions were tested for why Decision Analysis and Decision Quality are not used more, and the following three were deemed to be supported:

• Decision Analysis and Decision Quality are not well understood.
• There is reliance on experience and judgment for decision-making.
• Projects are schedule-driven.

Further research is proposed to determine the underlying causes, and tackle those, with the aim being to improve business outcomes by determining how to influence decision makers to use Decision Analysis and Decision Quality more effectively.


Cognitive Science | 2005

An empirical evaluation of models of text document similarity

Michael D. Lee; Brandon Pincombe; Matthew Welsh


Cognitive Science | 2013

The cognitive reflection test: how much more than numerical ability?

Matthew Welsh; Nicholas R. Burns; Paul Delfabbro


Organizational Behavior and Human Decision Processes | 2012

Seeing is believing: Priors, trust, and base rate neglect

Matthew Welsh; Daniel J. Navarro


SPE Annual Technical Conference and Exhibition | 2007

Modeling the Economic Impact of Cognitive Biases on Oil and Gas Decisions

Matthew Welsh; S.H. Begg; Reidar Brumer Bratvold


SPE Annual Technical Conference and Exhibition | 2005

Cognitive Biases in the Petroleum Industry: Impact and Remediation

Matthew Welsh; Reidar Brumer Bratvold; S.H. Begg


SPE Annual Technical Conference and Exhibition | 2004

Problems with the elicitation of uncertainty

Matthew Welsh; S.H. Begg; Reidar Brumer Bratvold; Michael D. Lee

Collaboration


Top co-authors of Matthew Welsh:

S.H. Begg
University of Adelaide

Steve Begg
University of Adelaide

Michael D. Lee
University of California

Chris Smith
University of Adelaide