
Publications

Featured research published by F. Hutton Barron.


Acta Psychologica | 1992

Selecting a best multiattribute alternative with partial information about attribute weights

F. Hutton Barron

Abstract Use of approximate weights would greatly simplify decision analysis under certainty since detailed weight elicitation could be avoided. This paper examines the degree to which rank order information about weights can be used to identify a best alternative, or, failing uniqueness, prescribes an easily implemented rule for selecting a ‘best’ alternative. The prescribed rule uses as weights the centroid of the feasible region defined by the rank order information. In conjunction with the rule, the value of the rank order information can be determined using an ‘expected gain from weight precision’ (EGWP) measure, analogous to ‘expected value of perfect information’ in decision analysis under uncertainty.
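The centroid rule described in the abstract has a well-known closed form: for n attributes whose weights are only known to satisfy w1 ≥ w2 ≥ … ≥ wn ≥ 0 with Σ wk = 1, the centroid of that region is w_k = (1/n) Σ_{i=k}^{n} 1/i, the "rank order centroid" weights. A minimal sketch of that formula (illustrative only, not code from the paper):

```python
# Centroid ("rank order centroid", ROC) weights for n rank-ordered
# attributes: the centroid of {w : w1 >= ... >= wn >= 0, sum(w) = 1}
# has the closed form w_k = (1/n) * sum_{i=k}^{n} 1/i.

def centroid_weights(n: int) -> list[float]:
    """ROC weights for n attributes, most important first."""
    return [sum(1.0 / i for i in range(k, n + 1)) / n for k in range(1, n + 1)]

# For n = 4: approximately [0.5208, 0.2708, 0.1458, 0.0625], summing to 1.
print(centroid_weights(4))
```

Note that the weights decrease with rank and always sum to one, so they can be dropped directly into an additive value model.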


Acta Psychologica | 1996

The efficacy of SMARTER — Simple Multi-Attribute Rating Technique Extended to Ranking

F. Hutton Barron; Bruce E. Barrett

Abstract A key component in the development of an additive multiattribute value model for selecting the best alternative is obtaining the attribute weights. In this paper, we assume the decision maker's weight information set consists of ranked swing weights, that is, a ranking of the importance of the attribute ranges, and in this context use ‘surrogate weights’ derived from this ranking. The particular surrogate weights are called ROC, for rank order centroid weights. The paper presents three sets of results: (1) a summary of the efficacy of using ROC weights to select a best alternative; (2) an extension of the method of analysis underlying the efficacy studies to assess the applicability of ROC weights for the analyst's specific value matrix; (3) methods for sensitivity analyses of a specific value matrix. A comprehensive example illustrates all analyses.
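The SMARTER selection step is just an additive value model with ROC surrogate weights. A sketch, using a small illustrative value matrix (alternatives × attributes, single-attribute values already scaled 0-100, attributes listed in decreasing order of swing-weight importance; the data are invented, not from the paper):

```python
# ROC weights: w_k = (1/n) * sum_{i=k}^{n} 1/i for n ranked attributes.
def roc_weights(n: int) -> list[float]:
    return [sum(1.0 / i for i in range(k, n + 1)) / n for k in range(1, n + 1)]

def best_alternative(value_matrix):
    """Index and scores of the row maximizing the ROC-weighted additive value."""
    n = len(value_matrix[0])
    w = roc_weights(n)
    scores = [sum(wk * v for wk, v in zip(w, row)) for row in value_matrix]
    return max(range(len(scores)), key=scores.__getitem__), scores

# Hypothetical value matrix: three alternatives, three attributes.
V = [
    [90, 40, 70],   # alternative A
    [60, 80, 50],   # alternative B
    [70, 70, 90],   # alternative C
]
idx, scores = best_alternative(V)
# idx == 0: alternative A has the highest ROC-weighted value (~73.9).
```

The detailed swing-weight elicitation is replaced by a single ranking judgment; only the ordering of the attribute ranges enters the computation.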


Organizational Behavior and Human Performance | 1983

Evaluating credit applications: A validation of multiattribute utility weight elicitation techniques

William G. Stillwell; F. Hutton Barron; Ward Edwards

Abstract Multiattribute Utility Measurement (MAUM) provides a set of tools and procedures that are designed to aid the decision maker who is faced with decision problems of such complexity and ambiguity that unaided, intuitive judgment is likely to lead to the selection of suboptimal alternatives. Attempts to validate MAUM procedures have been primarily of three types: (1) behavioral tests of axiom systems derived from assumptions about what constitutes reasonable behavior; (2) convergent validation, in which the results of different procedures or even different subjects are compared; and (3) criterion validation, in which judgments and their resultant decisions are compared with some external criterion. From a behavioral point of view, the last of these, criterion validity, is by far the strongest. Past efforts at criterion validation of MAUM have suffered from three limitations: the subjects were not experts, alternative weight elicitation procedures were not compared, and the strength of the criterion used in each case is open to question. The purpose of this experiment was to provide an empirical comparison of a number of alternative MAUM weight elicitation procedures in a situation that offered a meaningful external criterion along with subjects expert in its use. High quality decisions resulted from weight judgments provided in response to all weight elicitation procedures as long as single dimensions were first individually scaled and then weighted for aggregation. A procedure in which alternatives were rated holistically and weights and single dimension utility functions derived statistically showed poorer quality decisions. Thus, the “divide and conquer” theme of MAUM was upheld.


Acta Psychologica | 1984

Empirical and theoretical relationships between value and utility functions

F. Hutton Barron; Detlof von Winterfeldt; Gregory W. Fischer

Abstract Two fundamentally different measurement approaches are used to model multiattribute preferences. The first, based on expected utility theory, uses preferences among gambles to construct a utility function, u, over multiattribute outcomes. The second, founded on difference measurement, asks for judgments about strength of preference to derive a value function, v. Our purposes are threefold: to clarify possible theoretical relationships between u and v; to demonstrate behavioral differences between u and v; and to discuss the usefulness of a u, v distinction for interpretation and application. Both the theory of functional equations and measurement theory uniqueness theorems provide closed form functional relationships among the four (additive and/or multiplicative) decomposed utility and value functions. Fischer (1977) provides the data to empirically examine relationships between u and v.
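One closed-form relationship of the kind referenced in the abstract, stated here as it commonly appears in the value/utility literature (not quoted from the paper):

```latex
% If u and v are both additive over the same attributes, u must be a
% positive linear transform of v. If v is additive and u is
% multiplicative, u is an exponential transform of v:
u(x) \;=\; \frac{e^{c\,v(x)} - 1}{e^{c} - 1}, \qquad c \neq 0,
% recovering u(x) = v(x) in the limiting case c \to 0.
```

The sign of the constant c captures the decision maker's risk attitude relative to strength of preference, which is one source of the behavioral differences between u and v the paper examines.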


Acta Psychologica | 1983

Validation and error in multiplicative utility functions

F. Hutton Barron

Abstract An approach to the concept of error in utility assessment is proposed. Four kinds of errors are considered and each kind is related to four separate elicitation methods - all in the context of a general multiplicative multiattribute utility model. The methods are a Keeney-Raiffa (1976) procedure; SMART, for Simple Multi-Attribute Rating Technique (Edwards 1977); SJT, for a Social Judgment Theory based regression model (Hammond et al. 1975); and HOPE, for Holistic Orthogonal Parameter Estimation (Barron and Person 1979). The individual judgments elicited are either holistic - in which the entity to be evaluated is considered as a whole - or decomposed - in which attention is directed to one or two aspects at a time. If a general multiplicative model can be assumed to be an appropriate representation of the decision maker's basic preference structure, error can occur in the direct estimation of the scaling constants and univariate utility functions for decomposition methods (Keeney-Raiffa and SMART), or in the holistic assessments for holistic methods (SJT and HOPE). Individual estimates may be merely subject to random noise or may be substantially incorrect. The utility model may be incorrectly specified; finally, all four methods may be subject to systematic error. The four assessment methods are considered in conjunction with errors of each kind.


Archive | 1999

Linear Inequalities and the Analysis of Multi-Attribute Value Matrices

F. Hutton Barron; Bruce E. Barrett

Barron and Barrett, 1996(b) demonstrate empirically that a surrogate weight vector, rank order centroid (ROC) weights, based only on ranked swing weights, is surprisingly efficacious in general in selecting a best multi-attribute alternative. An Excel-based simulation, EMAR, allows one to assess the applicability of the general result to any particular value matrix. In this paper we extend EMAR to partial information sets of weights other than a strict ranking. We also apply these procedures to examine the effect of reducing the number of attributes.
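EMAR itself is an Excel-based tool, but the underlying idea of checking ROC applicability for a specific value matrix can be sketched in a few lines: sample "true" weight vectors uniformly from the region consistent with the ranking (sorting a uniform simplex sample in decreasing order gives a uniform sample of the ordered region) and estimate how often the ROC-weighted choice coincides with the best alternative under the sampled weights. A hypothetical Python analogue (the function names and matrix are invented for illustration):

```python
import random

def roc_weights(n):
    """Rank order centroid weights: w_k = (1/n) * sum_{i=k}^{n} 1/i."""
    return [sum(1.0 / i for i in range(k, n + 1)) / n for k in range(1, n + 1)]

def random_ranked_weights(n, rng):
    """Uniform draw from {w : w1 >= ... >= wn >= 0, sum(w) = 1}:
    cut [0, 1] at n-1 uniform points, then sort the pieces descending."""
    cuts = sorted(rng.random() for _ in range(n - 1))
    parts = [b - a for a, b in zip([0.0] + cuts, cuts + [1.0])]
    return sorted(parts, reverse=True)

def hit_rate(value_matrix, trials=5000, seed=0):
    """Fraction of rank-consistent 'true' weight vectors under which the
    ROC-weight choice is also the truly best alternative."""
    rng = random.Random(seed)
    n = len(value_matrix[0])

    def pick(w):
        return max(range(len(value_matrix)),
                   key=lambda a: sum(wk * v for wk, v in zip(w, value_matrix[a])))

    roc_choice = pick(roc_weights(n))
    hits = sum(pick(random_ranked_weights(n, rng)) == roc_choice
               for _ in range(trials))
    return hits / trials
```

A hit rate near 1 for a given matrix suggests the ranking alone is effectively sufficient; restricting the sampler further would correspond to the tighter partial information sets the paper considers.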


Organizational Behavior and Human Decision Processes | 1994

SMARTS and SMARTER: Improved Simple Methods for Multiattribute Utility Measurement

Ward Edwards; F. Hutton Barron


Management Science | 1996

Decision quality using ranked attribute weights

F. Hutton Barron; Bruce E. Barrett


Decision Sciences | 1987

Influence of missing attributes on selecting a best multiattributed alternative

F. Hutton Barron


Interfaces | 1985

Payoff Matrices Pay Off at Hallmark

F. Hutton Barron

Collaboration


Dive into F. Hutton Barron's collaboration.

Top Co-Authors

Ward Edwards
University of Southern California

Detlof von Winterfeldt
University of Southern California

William G. Stillwell
University of Southern California