Network


Latest external collaborations at the country level.

Hotspot


Research topics in which Michael E. Doherty is active.

Publications


Featured research published by Michael E. Doherty.


Quarterly Journal of Experimental Psychology | 1977

Confirmation bias in a simulated research environment: An experimental study of scientific inference

Clifford R. Mynatt; Michael E. Doherty; Ryan D. Tweney

Numerous authors (e.g., Popper, 1959) argue that scientists should try to falsify rather than confirm theories. However, recent empirical work (Wason and Johnson-Laird, 1972) suggests the existence of a confirmation bias, at least on abstract problems. Using a more realistic, computer-controlled environment modeled after a real research setting, subjects in this study first formulated hypotheses about the laws governing events occurring in the environment. They then chose between pairs of environments in which they could (1) make observations which would probably confirm these hypotheses, or (2) test alternative hypotheses. Strong evidence was found for a confirmation bias involving failure to choose environments allowing tests of alternative hypotheses. However, when subjects did obtain explicit falsifying information, they used this information to reject incorrect hypotheses.


Quarterly Journal of Experimental Psychology | 1978

Consequences of confirmation and disconfirmation in a simulated research environment

Clifford R. Mynatt; Michael E. Doherty; Ryan D. Tweney

Advanced undergraduate science majors attempted, for approximately 10 hours each, to discover the laws governing a dynamic system. The system included 27 fixed objects, some of which influenced the direction of a moving particle. At any given time, any one screen of a nine-screen matrix could be observed on a plasma display. Confirmatory strategies were the rule, even though half the subjects had been carefully instructed in strong inference. Falsification was counterproductive for some subjects. It seems that a firm base of inductive generalizations, supported by confirmatory research, is a prerequisite to the useful implementation of a falsification strategy.


Quarterly Journal of Experimental Psychology | 1980

Strategies of rule discovery in an inference task

Ryan D. Tweney; Michael E. Doherty; Winifred J. Worner; Daniel B. Pliske; Clifford R. Mynatt; Kimberly A. Gross; Daniel L. Arkkelin

It has long been known that subjects in certain inference tasks will seek evidence which can confirm their present hypotheses, even in situations where disconfirmatory evidence could be more informative. We sought to alter this tendency in a series of experiments which employed a rule discovery task, the 2-4-6 problem first described by Wason. The first experiment instructionally modified subjects' confirmatory tendencies. While a disconfirmatory strategy was easily induced, it did not lead to greater efficiency in discovering the rule. The second experiment introduced subjects to the possibility of disconfirmation only after they had developed a strongly held hypothesis through the use of confirmatory evidence. This manipulation also failed to alter the efficiency of rule discovery. In the third experiment, subjects were taught to use multiple hypotheses at each step, in the manner of Platt's "Strong Inference". This operation actually worsened performance. Finally, in the fourth experiment, the structure of the problem was altered slightly by asking subjects to seek two interrelated rules. A dramatic increase in performance resulted, perhaps because information which in previous tasks was seen as merely erroneous could now be related to an alternative rule. The four studies have broad implications for the psychological study of inference processes in general, and for the study of scientific inference in particular.
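The 2-4-6 problem can be made concrete with a small simulation. The sketch below is illustrative only: the over-specific hypothesis and the test triples are assumptions made for the example, not materials from the paper, although the hidden rule (any ascending sequence) is Wason's original. It shows why confirmatory triples cannot expose an over-narrow hypothesis, while a single disconfirmatory triple can.

    # Toy illustration of Wason's 2-4-6 rule discovery task.
    # Hidden rule (Wason's original): any strictly ascending triple.
    def hidden_rule(triple):
        a, b, c = triple
        return a < b < c

    # A typical over-specific hypothesis: "numbers increasing by two".
    def hypothesis(triple):
        a, b, c = triple
        return b - a == 2 and c - b == 2

    confirmatory = [(2, 4, 6), (10, 12, 14), (1, 3, 5)]   # predicted "yes" by the hypothesis
    disconfirmatory = [(1, 2, 10)]                        # predicted "no" by the hypothesis

    for t in confirmatory + disconfirmatory:
        print(t, "hypothesis:", hypothesis(t), "rule:", hidden_rule(t))

    # Every confirmatory triple receives "yes" from both the hypothesis and the
    # hidden rule, so such tests can never reveal that the hypothesis is too
    # narrow; only the disconfirmatory triple (hypothesis "no", rule "yes")
    # exposes the error.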


Organizational Behavior and Human Performance | 1978

Can we learn anything about interviewing real people from “interviews” of paper people? Two studies of the external validity of a paradigm

Charles D. Gorman; William H. Clover; Michael E. Doherty

Two investigations of the external validity of the paper-people analog are presented. One, a laboratory study, had advanced graduate students in industrial psychology make predictions about undergraduates based on test data plus an interview and also make predictions based on the test data (i.e., paper people) alone. Complete criterion data allowed traditional validity analyses to be carried out. The second study had highly experienced interviewers rate the paper credentials of people whom they had interviewed in the past. Judgments based on interviewees were compared with judgments made on paper people in the two studies. Sufficient data are presented to allow readers to draw their own conclusions concerning the representativeness of the paper-people paradigm. Our own conclusion is that the answer to the question raised in the title is "No".


Thinking & Reasoning | 1996

Social Judgement Theory

Michael E. Doherty; Elke M. Kurz

This paper first explores a number of themes in the psychological system developed by the Austrian-American psychologist Egon Brunswik, focusing on those that had a formative influence on Social Judgement Theory. We show that while perception was a recurring ground for Brunswik's empirical and theoretical work, his psychology was a psychology of cognition in the broadest sense. Next, two major themes in Social Judgement Theory, functionalism and probabilism, are described, and the elegant formulation known as Brunswik's Lens Model is introduced. Some methodological and theoretical implications of these themes are presented. The paper concludes with Hammond's Cognitive Continuum Theory (CCT), a theory describing modes of cognition and how those modes are influenced by task characteristics.
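The abstract mentions Brunswik's Lens Model without stating it formally. As background (not text from the paper), the relation usually called the lens model equation decomposes a judge's achievement as:

    r_a = G\,R_e\,R_s + C\,\sqrt{1 - R_e^{2}}\,\sqrt{1 - R_s^{2}}

where r_a is the correlation between judgments and the criterion (achievement), R_e is the multiple correlation of the criterion with the cues (environmental predictability), R_s is the multiple correlation of the judgments with the cues (consistency), G is the correlation between the predictions of the two linear models (knowledge), and C is the correlation between their residuals.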


Organizational Behavior and Human Decision Processes | 1989

A note on the assessment of self-insight in judgment research

Barbara A. Reilly; Michael E. Doherty

A widely accepted finding in research on human judgment is that people have relatively poor insight into the weighting schemes they use when they make holistic judgments. The empirical research supporting this generalization rests on indices of self-insight produced directly by subjects. This note replicates the usual finding, but reports an alternative method of assessing self-insight, i.e., the proportion of subjects who recognize their own policies when those policies are represented in terms of Usefulness Indices. Forty college senior accounting majors made holistic judgments of 160 hypothetical job offers, each offer being composed of 19 attributes. For all 40 students, predicted judgments using regression weights correlated more highly with actual judgments than did the predicted judgments using subjective weights, i.e., the direct ratings of the importance of the attributes. Eleven of the students subsequently attempted to identify their own policies. The probability of a correct identification was .025, yet 7 of the 11 students selected their own policies (p = 1.84 × 10⁻⁹). This suggests that people have far better self-insight than hitherto believed, but that they cannot adequately express that insight by the subjective weighting procedures that have been commonly used.
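The reported significance level can be checked with a simple binomial tail calculation: if each of the 11 students had only a .025 chance of picking out their own policy by guessing, the probability of 7 or more correct identifications is on the order of 10⁻⁹. A minimal sketch (standard library only; the pure-guessing model is an assumption made here for illustration):

    from math import comb

    n, p = 11, 0.025   # 11 students; chance probability of a correct pick = .025
    # Probability of 7 or more correct identifications under pure guessing
    tail = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(7, n + 1))
    print(f"P(X >= 7) = {tail:.3g}")   # about 1.84e-09, matching the reported value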


Quarterly Journal of Experimental Psychology | 1993

Information relevance, working memory, and the consideration of alternatives

Clifford R. Mynatt; Michael E. Doherty; William Dragan

People routinely focus on one hypothesis and avoid consideration of alternative hypotheses on problems requiring decisions between possible states of the world; for example, on the "pseudodiagnosticity" task (Doherty, Mynatt, Tweney, & Schiavo, 1979). In order to account for behaviour on such "inference" problems, it is proposed that people can hold in working memory, and operate upon, but one alternative at a time, and that they have a bias to test the hypothesis they think true. In addition to being an ex post facto explanation of data selection in inference tasks, this conceptualization predicts that there are situations in which people will consider alternatives. These are: (1) "action" problems, where the alternatives are possible courses of action; and (2) "inference" problems, in which evidence favours an alternative hypothesis. Experiment 1 tested the first prediction. Subjects were given action or inference problems, each with two alternatives and two items of data relevant to each alternative. They received probabilistic information about the relation between one datum and one alternative and picked one value from among the other three possible pairs of such relations. Two findings emerged: (1) a strong tendency to select information about only one alternative with inferences; and (2) a strong tendency, compared to inferences, to select information about both alternatives with actions. Experiment 2 tested the second prediction. It was predicted that data suggesting that one alternative was incorrect would lead many subjects to consider, and select information about, the other alternative. For actions, it was predicted that this manipulation would have no effect. Again the data were as predicted.
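The contrast between diagnostic and pseudodiagnostic information selection is easy to see with a likelihood ratio, a standard way of expressing how strongly a datum favours one hypothesis over another. The probabilities below are invented for illustration and are not from the experiments:

    # Hypothetical datum D and two candidate hypotheses H1 and H2.
    p_d_given_h1 = 0.80   # assumed P(D | H1)
    p_d_given_h2 = 0.75   # assumed P(D | H2)

    # The diagnostic value of D depends on BOTH conditional probabilities.
    likelihood_ratio = p_d_given_h1 / p_d_given_h2
    print(f"LR = {likelihood_ratio:.2f}")   # about 1.07: D barely favours H1

    # Pseudodiagnostic selection: noting only that P(D | H1) = .80 and concluding
    # that D strongly supports H1, without ever asking for P(D | H2).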


Journal of Applied Psychology | 1988

Importance ratings in job analysis: Note on the misinterpretation of factor analyses.

C. J. Cranny; Michael E. Doherty



Memory & Cognition | 1996

On people's understanding of the diagnostic implications of probabilistic data

Michael E. Doherty; Randall Chadwick; Hugh Garavan; David Barr; Clifford R. Mynatt

Two lines of prior research into the conditions under which people seek information are examined in light of two statistical definitions of diagnosticity. Five experiments are reported. In two, subjects selected information in order to test a hypothesis. In the remaining three, they selected information in order to convince someone else of the truth of a known hypothesis. A total of 567 university students served as subjects. The two primary conclusions were as follows: (1) When the task is highly structured by the environment, subjects select information diagnostically, and (2) when the task is less structured, so that subjects must seek relevant information not manifest, they select information pseudodiagnostically. Possible relations to other laboratory inference tasks and to clinical judgment are discussed.


Organizational Behavior and Human Decision Processes | 1986

Social desirability response bias as one source of the discrepancy between subjective weights and regression weights

Kelly J Brookhouse; Robert M. Guion; Michael E. Doherty

This study of the judgment process investigated whether social desirability response bias is more closely associated with directly elicited subjective weights than with less directly assessed regression weights. Graduating college students indicated their preferences for 11 job characteristics in four different tasks: (a) a statistical weighting task completed honestly, (b) a statistical weighting task completed to "look good" to a recruiter, (c) a subjective weighting task completed honestly, and (d) a subjective weighting task completed to "look good" to a recruiter. Predicted judgments were calculated from the weights obtained in each condition. Pearson correlation coefficients using these predicted judgments were computed between conditions; the relationship between the honest and positive-impression predicted judgments derived from subjective weights was significantly greater than the relationship between the honest and positive-impression predicted judgments derived from regression weights for 19 of 23 participants. It was concluded that social desirability response bias was more closely related to subjective weights than to regression weights.
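Both this study and the Reilly and Doherty note above rest on the policy-capturing logic: fit a linear model to a judge's holistic judgments, then compare how well predictions from the statistical (regression) weights and from the judge's self-reported (subjective) weights reproduce those judgments. A minimal sketch with simulated data; the attribute count, weights, and noise level are invented for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    n_offers, n_attrs = 160, 5                 # hypothetical job offers and attributes
    X = rng.normal(size=(n_offers, n_attrs))   # attribute profiles of the offers

    true_w = np.array([0.6, 0.2, 0.1, 0.05, 0.05])   # the judge's tacit policy
    judgments = X @ true_w + rng.normal(scale=0.3, size=n_offers)

    # Regression weights: least-squares fit of the judgments on the attributes.
    reg_w, *_ = np.linalg.lstsq(X, judgments, rcond=None)

    # Subjective weights: direct importance ratings (deliberately misstated here).
    subj_w = np.array([0.3, 0.3, 0.2, 0.1, 0.1])

    for label, w in [("regression weights", reg_w), ("subjective weights", subj_w)]:
        r = np.corrcoef(X @ w, judgments)[0, 1]
        print(f"{label}: r(predicted, actual) = {r:.3f}")

    # The regression-weight predictions typically correlate more highly with the
    # actual judgments, the pattern reported in both studies.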

Collaboration


Dive into Michael E. Doherty's collaborations.

Top Co-Authors

Clifford R. Mynatt (Bowling Green State University)
Ryan D. Tweney (Bowling Green State University)
Stuart M. Keeley (Bowling Green State University)
Richard B. Anderson (Bowling Green State University)
R. James Holzworth (Bowling Green State University)
Gregory L. Brake (Bowling Green State University)
Kenneth M. Shemberg (Bowling Green State University)
Raymond O'Connor Jr. (Bowling Green State University)
William K. Balzer (Bowling Green State University)