Timothy M. Franz
St. John Fisher College
Publications
Featured research published by Timothy M. Franz.
Journal of Personality and Social Psychology | 1998
James R. Larson; Caryn Christensen; Timothy M. Franz; Ann S. Abbott
The impact of group discussion on the decision-making effectiveness of medical teams was examined. Three-person teams of physicians diagnosed 2 hypothetical medical cases. Some of the information about each case was given to all team members prior to discussion (shared information), whereas the rest was divided among them (unshared information). Compared with unshared information, shared information was more likely to be pooled during discussion and was pooled earlier. In addition, team leaders were consistently more likely than other members to ask questions and to repeat shared information and, over time, also became more likely than others to repeat unshared information. Finally, pooling unshared (but not shared) information improved the overall accuracy of the team diagnoses, whereas repeating both shared and unshared information affected bias (but not accuracy) in the diagnoses.
Personality and Social Psychology Bulletin | 1998
James R. Larson; Pennie G. Foster-Fishman; Timothy M. Franz
This study found that during group decision-making discussions, shared information (i.e., information held by all group members) was brought into discussion earlier, and was more likely to be mentioned overall, than was unshared information (i.e., unique information held by just one member or another). These results are consistent with a dynamic information sampling model of group discussion. It also was found that groups with a participative leader discussed more information (both shared and unshared) than groups with a directive leader, but that directive leaders were more likely to repeat information (especially unshared) than participative leaders. Finally, it was found that leadership style and the information held by the leader prior to discussion interacted to influence group decision quality. The relevance of these findings for existing contingency theories of leadership is discussed.
Journal of General Internal Medicine | 2005
Charles P. Friedman; Guido G. Gatti; Timothy M. Franz; Gwendolyn Murphy; Fredric M. Wolf; Paul S. Heckerling; Paul L. Fine; Thomas M. Miller; Arthur S. Elstein
OBJECTIVE: This study explores the alignment between physicians’ confidence in their diagnoses and the “correctness” of these diagnoses, as a function of clinical experience, and whether subjects were prone to over- or underconfidence. DESIGN: Prospective, counterbalanced experimental design. SETTING: Laboratory study conducted under controlled conditions at three academic medical centers. PARTICIPANTS: Seventy-two senior medical students, 72 senior medical residents, and 72 faculty internists. INTERVENTION: We created highly detailed, 2- to 4-page synopses of 36 diagnostically challenging medical cases, each with a definitive correct diagnosis. Subjects generated a differential diagnosis for each of 9 assigned cases, and indicated their level of confidence in each diagnosis. MEASUREMENTS AND MAIN RESULTS: A differential was considered “correct” if the clinically true diagnosis was listed in that subject’s hypothesis list. To assess confidence, subjects rated the likelihood that they would, at the time they generated the differential, seek assistance in reaching a diagnosis. Subjects’ confidence and correctness were “mildly” aligned (k=.314 for all subjects, .285 for faculty, .227 for residents, and .349 for students). Residents were overconfident in 41% of cases where their confidence and correctness were not aligned, whereas faculty were overconfident in 36% of such cases and students in 25%. CONCLUSIONS: Even experienced clinicians may be unaware of the correctness of their diagnoses at the time they make them. Medical decision support systems, and other interventions designed to reduce medical errors, cannot rely exclusively on clinicians’ perceptions of their needs for such support.
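The alignment values reported in this abstract are kappa coefficients, which correct observed agreement for agreement expected by chance. As an illustration only (not the authors' analysis code), Cohen's kappa for paired binary confidence/correctness judgments can be sketched as:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two equal-length label sequences.

    Here labels_a might encode confidence (1 = confident, 0 = not)
    and labels_b correctness (1 = correct diagnosis listed, 0 = not).
    """
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed proportion of cases where the two judgments agree
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement from each sequence's marginal label frequencies
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum(counts_a[k] * counts_b[k]
                   for k in set(labels_a) | set(labels_b)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical data: 8 cases, confidence vs. correctness
confident = [1, 1, 0, 0, 1, 0, 1, 0]
correct   = [1, 0, 0, 1, 1, 0, 1, 0]
kappa = cohens_kappa(confident, correct)  # → 0.5
```

A kappa near .3, as in the study, indicates only weak alignment: clinicians' confidence predicts correctness somewhat better than chance, but far from reliably.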
Small Group Research | 2002
Timothy M. Franz; James R. Larson
Experts were predicted to impact information sharing during discussion in two ways. First, expert members were expected to contribute more information themselves during discussion; and second, they were expected to cause nonexpert members to contribute more. Furthermore, it was predicted that identification of the expert and the task type would accentuate these differences. These predictions were tested in a study where one third of the groups had an identified expert, one third had an unidentified expert, and one third had no expert. Half the groups were asked to identify a correct answer, whereas the other half were asked to give their opinion. Results provided support for experts’ contributing more information to group discussion; however, no support was found for their increasing other members’ contributions. Identification of expertise and task type both accentuated information sharing by experts. These results are discussed in terms of the implications of expertise for information sampling in decision-making groups.
Basic and Applied Social Psychology | 1999
Timothy A. Lavery; Timothy M. Franz; Jennifer Winquist; James R. Larson
This study was conducted to examine whether the amount of unshared information (i.e., information that only one group member or another possesses prior to discussion) exchanged within groups is related to group-judgment accuracy when the correct response is not apparent to the members prior to discussion. Thirty-nine 3-person groups were asked to make a series of 36 judgments regarding the probability that hypothetical high school dropouts would return to school. These judgments were based on a set of information, part of which was given to all group members prior to discussion (shared information) and part of which was divided among them (unshared information). Moreover, this information was distributed to the members in such a way that their individual prediscussion preferences would tend to be either inaccurate (hidden profiles) or accurate (manifest profiles), relative to the optimal group judgment based on all of the information that was given to the group as a whole (i.e., both shared and unshared information).
Journal of Social Psychology | 2013
Andrea R. French; Timothy M. Franz; Laura L. Phelan; Bruce E. Blaine
This study replicated and extended Olson and Fazio (2006) by testing whether evaluative conditioning is a means to reduce negative stereotypes about Muslim and other Arab persons. Specifically, evaluative conditioning was hypothesized to lower implicit biases against Muslim and Arab persons. The FreeIAT was used to measure implicit biases. Participants in the evaluative conditioning group showed a significant lowering in implicit biases. Explicit measures of bias were not affected by the conditioning procedure.
Studies in health technology and informatics | 1998
Charles P. Friedman; Arthur S. Elstein; Fredric M. Wolf; Gwendolyn C. Murphy; Timothy M. Franz; Paul L. Fine; Paul S. Heckerling; Thomas M. Miller
Within medical informatics there is widespread interest in computer-based decision support and the evaluation of its impact. It is widely recognized that the measurement of dependent variables, or outcomes, represents the most challenging aspect of this work. This paper describes and reports the reliability and validity of an outcome metric for studies of diagnostic decision support. The results of this study will guide the analytic methods used in our ongoing multi-site study of the effects of decision support on diagnostic reasoning. Our measurement approach conceptualizes the quality of a diagnostic hypothesis set as having two components summed to generate a composite index: a Plausibility Component derived from ratings of each hypothesis in the set, whether correct or incorrect; and a Location Component derived from the location of the correct diagnosis if it appears in the set. The reliability of this metric is determined by the extent of interrater agreement on the plausibility of diagnostic hypotheses. Validity is determined by the extent to which the index generates scores that make sense on inspection (face validity), as well as the extent to which the component scores are non-redundant and discriminate the performance of novices and experts (construct validity). Using data from the pilot and main phases of our ongoing study (n = 124 subjects working 1116 cases), the reliability of our diagnostic quality metric was found to be 0.85-0.88. The metric was found to generate, on inspection, no clearly counterintuitive scores. Using data from the pilot phase of our study (n = 12 subjects working 108 cases), the component scores were moderately correlated (r = 0.68). The composite index, computed by equally weighting both components, was found to discriminate the hypotheses of medical students and attending physicians by 0.97 standard deviation units. Based on these findings, we have adopted this metric for use in our further research exploring the impact of decision support systems on diagnostic reasoning and will make it available to the informatics research community.
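The composite index described above sums an equally weighted Plausibility Component and Location Component. A minimal sketch of that structure follows; the specific scaling choices (mean plausibility on a 0-1 scale, linearly decreasing location credit) are illustrative assumptions, not the paper's exact scoring rules:

```python
def composite_index(plausibility_ratings, correct_position):
    """Composite diagnostic-quality score for one hypothesis set.

    plausibility_ratings: rated plausibility of every hypothesis in the
        set, correct or incorrect (assumed here to lie on a 0-1 scale).
    correct_position: 0-based position of the correct diagnosis in the
        list, or None if it does not appear.
    """
    n = len(plausibility_ratings)
    # Plausibility Component: mean rated plausibility across the set
    plausibility = sum(plausibility_ratings) / n
    # Location Component: credit for listing the correct diagnosis,
    # decreasing linearly with its position; zero if it is absent
    if correct_position is None:
        location = 0.0
    else:
        location = (n - correct_position) / n
    # Equal weighting of the two components, as in the paper
    return 0.5 * plausibility + 0.5 * location

# Hypothetical differential: three hypotheses, correct one listed first
score = composite_index([0.8, 0.4, 0.2], correct_position=0)
```

In this sketch a set that never lists the correct diagnosis can still earn partial credit through plausibility, while the location term rewards ranking the correct diagnosis early, matching the two-component logic the abstract describes.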
Journal of Personality and Social Psychology | 1996
James R. Larson; Caryn Christensen; Ann S. Abbott; Timothy M. Franz
JAMA | 1999
Charles P. Friedman; Arthur S. Elstein; Fredric M. Wolf; Gwendolyn Murphy; Timothy M. Franz; Paul S. Heckerling; Paul L. Fine; Thomas M. Miller; Vijoy Abraham
Medical Decision Making | 2000
Caryn Christensen; James R. Larson; Ann S. Abbott; Anthony Ardolino; Timothy M. Franz; Carol A. Pfeiffer