Sarah Lichtenstein
Oregon Research Institute
Publications
Featured research published by Sarah Lichtenstein.
Policy Sciences | 1978
Baruch Fischhoff; Paul Slovic; Sarah Lichtenstein; Stephen J. Read; Barbara Combs
One of the fundamental questions addressed by risk-benefit analysis is “How safe is safe enough?” Chauncey Starr has proposed that economic data be used to reveal patterns of acceptable risk-benefit tradeoffs. The present study investigates an alternative technique, in which psychometric procedures were used to elicit quantitative judgments of perceived risk, acceptable risk, and perceived benefit for each of 30 activities and technologies. The participants were seventy-six members of the League of Women Voters. The results indicated little systematic relationship between perceived existing risks and benefits of the 30 risk items. Current risk levels were generally viewed as unacceptably high. When current risk levels were adjusted to what would be considered acceptable risk levels, however, risk was found to correlate with benefit. Nine descriptive attributes of risk were also studied. These nine attributes seemed to tap two basic dimensions of risk. These dimensions proved to be effective predictors of the tradeoff between acceptable risk and perceived benefit. The limitations of the present study and the relationship between this technique and Starr's technique are discussed, along with the implications of the findings for policy decisions.
Organizational Behavior and Human Performance | 1971
Paul Slovic; Sarah Lichtenstein
Abstract In recent years there have been several hundred studies within the rather narrowly-defined topic of information utilization in judgment and decision making. Much of this work has been accomplished within two basic schools of research, which we have labeled the “regression” and the “Bayesian” approaches. Each has its characteristic tasks and characteristic information that must be processed to accomplish these tasks. For the most part, researchers have tended to work strictly within a single approach and there has been minimal communication between the resultant subgroups of workers. Our objective here is to present a review and comparative analysis of these two approaches. Within each, we examine (a) the models that have been developed for describing and prescribing the use of information in decision making; (b) the major experimental paradigms, including the types of judgment, prediction, and decision tasks and the kinds of information that have been available to the decision maker in these tasks; (c) the key independent variables that have been manipulated in experimental studies; and (d) the major empirical results and conclusions. In comparing these approaches, we seek the answers to two basic questions. First, do the specific models and methods characteristic of different paradigms direct the researcher's attention to certain problems and cause him to neglect others that may be equally important? Second, can a researcher studying a particular substantive problem increase his understanding by employing diverse models and diverse experimental methods?
Archive | 1982
Sarah Lichtenstein; Baruch Fischhoff; Lawrence D. Phillips
From the subjectivist point of view (de Finetti, 1937/1964), a probability is a degree of belief in a proposition. It expresses a purely internal state; there is no “right,” “correct,” or “objective” probability residing somewhere “in reality” against which one's degree of belief can be compared. In many circumstances, however, it may become possible to verify the truth or falsity of the proposition to which a probability was attached. Today, one assesses the probability of the proposition “it will rain tomorrow.” Tomorrow, one looks at the rain gauge to see whether or not it has rained. When possible, such verification can be used to determine the adequacy of probability assessments. Winkler and Murphy (1968b) have identified two kinds of “goodness” in probability assessments: normative goodness, which reflects the degree to which assessments express the assessor's true beliefs and conform to the axioms of probability theory, and substantive goodness, which reflects the amount of knowledge of the topic area contained in the assessments. This chapter reviews the literature concerning yet another aspect of goodness, called calibration. If a person assesses the probability of a proposition being true as .7 and later finds that the proposition is false, that in itself does not invalidate the assessment. However, if a judge assigns .7 to 10,000 independent propositions, only 25 of which subsequently are found to be true, there is something wrong with these assessments.
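The calibration idea in the abstract above can be made concrete with a short sketch: a judge is well calibrated if, among all propositions assigned probability p, the proportion that turn out to be true is close to p. The function below is purely illustrative (the name `calibration_table` and the data layout are assumptions, not anything from the chapter itself):

```python
# Minimal sketch of a calibration check, assuming assessments are
# recorded as (stated_probability, outcome) pairs, where outcome is
# True if the proposition was later verified to be true.
from collections import defaultdict

def calibration_table(assessments):
    """Group assessments by stated probability and report, for each
    probability level, the observed proportion of true propositions."""
    buckets = defaultdict(list)
    for p, outcome in assessments:
        buckets[p].append(outcome)
    return {p: sum(o) / len(o) for p, o in sorted(buckets.items())}

# A judge who says ".7" for 10 propositions, 7 of which prove true,
# is perfectly calibrated at that level.
judgments = [(0.7, True)] * 7 + [(0.7, False)] * 3
print(calibration_table(judgments))  # {0.7: 0.7}
```

On this kind of tally, the abstract's hypothetical judge who assigns .7 to 10,000 propositions of which almost none prove true would show a large gap between stated probability and observed hit rate, which is exactly the miscalibration the chapter reviews.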
Journal of Experimental Psychology: Human Learning & Memory | 1978
Sarah Lichtenstein
A series of 5 experiments with 660 adult Ss studied how people judge the frequency of death from various causes. The judgments exhibited a highly consistent but systematically biased subjective scale of frequency. Two kinds of bias were identified: (a) a tendency to overestimate small frequencies and underestimate larger ones; and (b) a tendency to exaggerate the frequency of some specific causes and to underestimate the frequency of others, at any given level of objective frequency. These biases were traced to a number of possible sources, including disproportionate exposure, memorability, or imaginability of various events. Ss were unable to correct for these sources of bias when specifically instructed to avoid them. Comparisons with previous laboratory studies are discussed, along with methods for improving frequency judgments and the implications of the present findings for the management of societal hazards.
Policy and practice in health and safety | 2005
Paul Slovic; Baruch Fischhoff; Sarah Lichtenstein
Subjective judgments, whether by experts or lay people, are a major component in any risk assessment. If such judgments are faulty, efforts at public and environmental protection are likely to be misdirected. The present paper begins with an analysis of biases exhibited by lay people and experts when they make judgments about risk. Next, the similarities and differences between lay and expert evaluations are examined in the context of a specific set of activities and technologies. Finally, some special issues are discussed, including the difficulty of reconciling divergent opinions about risk, the possible irrelevance of voluntariness as a determinant of acceptable risk, the importance of catastrophic potential in determining perceptions and triggering social conflict, and the need to facilitate public participation in the management of hazards.
Environment | 1979
Paul Slovic; Baruch Fischhoff; Sarah Lichtenstein
People respond to the hazards they perceive. If their perceptions are faulty, efforts at public and environmental protection are likely to be misdirected. In order to improve hazard management, a risk assessment industry has developed over the last decade which combines the efforts of physical, biological, and social scientists in an attempt to identify hazards and measure the frequency and magnitude of their consequences.
Journal of Risk and Uncertainty | 1993
Robin Gregory; Sarah Lichtenstein; Paul Slovic
The use of contingent valuation (CV) methods for estimating the economic value of environmental improvements and damages has increased significantly. However, doubts exist regarding the validity of the usual willingness-to-pay CV methods. In this article, we examine the CV approach in light of recent findings from behavioral decision research regarding the constructive nature of human preferences. We argue that a principal source of problems with conventional CV methods is that they impose unrealistic cognitive demands upon respondents. We propose a new CV approach, based on the value-structuring capabilities of multiattribute utility theory and decision analysis, and discuss its advantages and disadvantages.
Archive | 1977
Sarah Lichtenstein; Baruch Fischhoff; Lawrence D. Phillips
From the subjectivist point of view (de Finetti, 1937), a probability is a degree of belief in a proposition whose truth has not been ascertained. A probability expresses a purely internal state; there is no “right” or “correct” probability that resides somewhere “in reality” against which it can be compared. However, in many circumstances, it may become possible to verify the truth or falsity of the proposition to which a probability was attached. Today, we assess the probability of the proposition “it will rain tomorrow.” Tomorrow, we go outside and look at the rain gauge to see whether or not it has rained. When verification is possible, we can use it to gauge the adequacy of our probability assessments.
Organizational Behavior and Human Performance | 1980
Sarah Lichtenstein; Baruch Fischhoff
Two experiments attempted to improve the quality of people's probability assessments through intensive training. The first involved 11 sessions of 200 assessments each, followed by comprehensive feedback. It produced considerable learning, almost all of which was accomplished after receipt of the first feedback. There was modest generalization to several related probability assessment tasks, but no generalization at all to two others. The second experiment reduced the training to three sessions. It revealed the same pattern of learning and limited generalization. About one-third of all subjects appeared to use probabilities quite appropriately on some tasks before training began. Further research is needed to understand why the training worked as well as it did, why that training did not always generalize, and why some individuals seemed to need no training at all.
Archive | 1988
Baruch Fischhoff; Paul Slovic; Sarah Lichtenstein
An article of faith among students of value, choice, and attitude judgments is that people have reasonably well-defined opinions regarding the desirability of various events. Although these opinions may not be intuitively formulated in numerical (or even verbal) form, careful questioning can elicit judgments representing people's underlying values. From this stance, elicitation procedures are neutral tools, bias-free channels that translate subjective feelings into scientifically usable expressions. They impose no views on respondents beyond focusing attention on those value issues of interest to the investigator. What happens, however, in cases where people do not know, or have difficulty appraising, what they want? Under such circumstances, elicitation procedures may become major forces in shaping the values expressed, or apparently expressed, in the judgments they require. They can induce random error (by confusing the respondent), systematic error (by hinting at what the “correct” response is), or unduly extreme judgments (by suggesting clarity and coherence of opinion that are not warranted). In such cases, the method becomes the message. If elicited values are used as guides for future behavior, they may lead to decisions not in the decision maker's best interest, to action when caution is desirable (or the opposite), or to the obfuscation of poorly formulated views needing careful development and clarification. The topic of this chapter is the confrontation between those who hold (possibly inchoate) values and those who elicit values.