Network


Latest external collaborations at the country level.

Hotspot


Research topics in which Robert Rosenthal is active.

Publication


Featured research published by Robert Rosenthal.


American Sociological Review | 1965

Organizational stress : studies in role conflict and ambiguity

Robert L. Kahn; Donald Wolfe; Robert P. Quinn; J. Diedrick Snoek; Robert Rosenthal

Review by Harry Levinson, Administrative Science Quarterly, Vol. 10, No. 1, Special Issue on Professionals in Organizations (June 1965), pp. 125-129. Published by Sage Publications, Inc. on behalf of the Johnson Graduate School of Management, Cornell University. Stable URL: http://www.jstor.org/stable/2391654


Psychological Bulletin | 1992

Comparing correlated correlation coefficients

Xiao-Li Meng; Robert Rosenthal; Donald B. Rubin

The purpose of this article is to provide simple but accurate methods for comparing correlation coefficients between a dependent variable and a set of independent variables. The methods are simple extensions of Dunn and Clark's (1969) work using the Fisher z transformation and include a test and confidence interval for comparing two correlated correlations, a test for heterogeneity, and a test and confidence interval for a contrast among k (>2) correlated correlations. Also briefly discussed is why the traditional Hotelling's t test for comparing correlated correlations is generally not appropriate in practice.
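
The test for two correlated correlations can be stated compactly. Below is a minimal Python sketch of the Meng, Rosenthal, and Rubin z test for two correlations that share a dependent variable, written from the published formulas; the function name and the example numbers are ours, not from the article.

```python
import math
from scipy.stats import norm

def compare_correlated_correlations(r_y1, r_y2, r_12, n):
    """Meng, Rosenthal, and Rubin (1992) z test for two correlations that
    share the dependent variable y: r_y1 = corr(y, x1), r_y2 = corr(y, x2),
    r_12 = corr(x1, x2), n = sample size."""
    z1 = math.atanh(r_y1)  # Fisher z transformation of each correlation
    z2 = math.atanh(r_y2)
    r_sq_bar = (r_y1 ** 2 + r_y2 ** 2) / 2.0
    f = min((1.0 - r_12) / (2.0 * (1.0 - r_sq_bar)), 1.0)  # f is capped at 1
    h = (1.0 - f * r_sq_bar) / (1.0 - r_sq_bar)
    z = (z1 - z2) * math.sqrt((n - 3) / (2.0 * (1.0 - r_12) * h))
    p = 2.0 * (1.0 - norm.cdf(abs(z)))  # two-tailed p value
    return z, p

# Hypothetical example: do x1 and x2 correlate differently with y (N = 100)?
z, p = compare_correlated_correlations(r_y1=0.50, r_y2=0.30, r_12=0.40, n=100)
print(f"z = {z:.3f}, p = {p:.4f}")
```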


The New England Journal of Medicine | 2008

Selective Publication of Antidepressant Trials and Its Influence on Apparent Efficacy

Erick H. Turner; Annette M. Matthews; Eftihia Linardatos; Robert A. Tell; Robert Rosenthal

BACKGROUND Evidence-based medicine is valuable to the extent that the evidence base is complete and unbiased. Selective publication of clinical trials--and the outcomes within those trials--can lead to unrealistic estimates of drug effectiveness and alter the apparent risk-benefit ratio. METHODS We obtained reviews from the Food and Drug Administration (FDA) for studies of 12 antidepressant agents involving 12,564 patients. We conducted a systematic literature search to identify matching publications. For trials that were reported in the literature, we compared the published outcomes with the FDA outcomes. We also compared the effect size derived from the published reports with the effect size derived from the entire FDA data set. RESULTS Among 74 FDA-registered studies, 31%, accounting for 3449 study participants, were not published. Whether and how the studies were published were associated with the study outcome. A total of 37 studies viewed by the FDA as having positive results were published; 1 study viewed as positive was not published. Studies viewed by the FDA as having negative or questionable results were, with 3 exceptions, either not published (22 studies) or published in a way that, in our opinion, conveyed a positive outcome (11 studies). According to the published literature, it appeared that 94% of the trials conducted were positive. By contrast, the FDA analysis showed that 51% were positive. Separate meta-analyses of the FDA and journal data sets showed that the increase in effect size ranged from 11 to 69% for individual drugs and was 32% overall. CONCLUSIONS We cannot determine whether the bias observed resulted from a failure to submit manuscripts on the part of authors and sponsors, from decisions by journal editors and reviewers not to publish, or both. Selective reporting of clinical trial results may have adverse consequences for researchers, study participants, health care professionals, and patients.


Behavioral and Brain Sciences | 1978

Interpersonal expectancy effects: the first 345 studies

Robert Rosenthal; Donald B. Rubin

The research area of interpersonal expectancy effects originally derived from a general consideration of the effects of experimenters on the results of their research. One of these is the expectancy effect, the tendency for experimenters to obtain results they expect, not simply because they have correctly anticipated natures response but rather because they have helped to shape that response through their expectations. When behavioral researchers expect certain results from their human (or animal) subjects they appear unwittingly to treat them in such a way as to increase the probability that they will respond as expected. In the first few years of research on this problem of the interpersonal (or interorganism) self-fulfilling prophecy, the “prophet” was always an experimenter and the affected phenomenon was always the behavior of an experimental subject. In more recent years, however, the research has been extended from experimenters to teachers, employers, and therapists whose expectations for their pupils, employees, and patients might also come to serve as interpersonal self-fulfilling prophecies. Our general purpose is to summarize the results of 345 experiments investigating interpersonal expectancy effects. These studies fall into eight broad categories of research: reaction time, inkblot tests, animal learning, laboratory interviews, psychophysical judgments, learning and ability, person perception, and everyday life situations. For the entire sample of studies, as well as for each specific research area, we (1) determine the overall probability that interpersonal expectancy effects do in fact occur, (2) estimate their average magnitude so as to evaluate their substantive and methodological importance, and (3) illustrate some methods that may be useful to others wishing to summarize quantitatively entire bodies of research (a practice that is, happily, on the increase).
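
The abstract gives no formulas, but one combining method Rosenthal used widely in this line of work is the Stouffer-style method of adding Z scores. The sketch below, with made-up study Z scores, is only an illustration of that general technique, not a reproduction of the paper's analysis.

```python
import math
from scipy.stats import norm

def combine_z_scores(z_scores):
    """Stouffer-style method of adding Zs: the combined Z is the sum of the
    per-study standard normal deviates divided by the square root of the
    number of studies."""
    k = len(z_scores)
    z_combined = sum(z_scores) / math.sqrt(k)
    p_combined = 1.0 - norm.cdf(z_combined)  # one-tailed combined p value
    return z_combined, p_combined

# Hypothetical Z scores from five expectancy-effect studies
z, p = combine_z_scores([1.2, 0.8, 2.1, 1.6, 0.4])
print(f"combined Z = {z:.2f}, one-tailed p = {p:.4f}")
```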


Psychological Bulletin | 1995

Writing Meta-Analytic Reviews

Robert Rosenthal

This article describes what should typically be included in the introduction, method, results, and discussion sections of a meta-analytic review. Method sections include information on literature searches, criteria for inclusion of studies, and a listing of the characteristics recorded for each study. Results sections include information describing the distribution of obtained effect sizes, central tendencies, variability, tests of significance, confidence intervals, tests for heterogeneity, and contrasts (univariate or multivariate). The interpretation of meta-analytic results is often facilitated by the inclusion of the binomial effect size display procedure, the coefficient of robustness, file drawer analysis, and, where overall results are not significant, the counternull value of the obtained effect size and power analysis.
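
Two of the displays mentioned above have simple closed forms: the binomial effect size display re-expresses a correlation r as hypothetical success rates of 0.5 + r/2 and 0.5 - r/2, and the counternull value of an effect size with a symmetric sampling distribution (such as d under a null of zero) is twice the obtained effect size. A minimal sketch with made-up numbers:

```python
def besd(r):
    """Binomial effect size display: re-express a correlation r as hypothetical
    treatment and control 'success rates' of 0.5 + r/2 and 0.5 - r/2."""
    return 0.5 + r / 2.0, 0.5 - r / 2.0

def counternull(effect_size, null_value=0.0):
    """Counternull value of an effect size with a symmetric sampling
    distribution (e.g., Cohen's d): the non-null magnitude that is exactly as
    well supported by the data as the null value."""
    return 2.0 * effect_size - null_value

treatment_rate, control_rate = besd(0.32)  # e.g., an obtained r of .32
print(f"BESD: treatment {treatment_rate:.2f} vs. control {control_rate:.2f}")
print(f"counternull of d = 0.40: {counternull(0.40):.2f}")
```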


American Psychologist | 1989

Statistical Procedures and the Justification of Knowledge in Psychological Science

Ralph L. Rosnow; Robert Rosenthal

Justification, in the vernacular language of philosophy of science, refers to the evaluation, defense, and confirmation of claims of truth. In this article, we examine some aspects of the rhetoric of justification, which in part draws on statistical data analysis to shore up facts and inductive inferences. There are a number of problems of methodological spirit and substance that in the past have been resistant to attempts to correct them. The major problems are discussed, and readers are reminded of ways to clear away these obstacles to justification.


Child Development | 2000

Effect Size, Practical Importance, and Social Policy for Children

Kathleen McCartney; Robert Rosenthal

Real decisions for real children are influenced by the papers developmentalists write, regardless of whether we ever intended our papers to be used in the policy arena. Yet most social scientists seldom analyze data in ways that are most useful to policymakers. The primary purpose of this paper is to share three ideas concerning how to evaluate the practical importance of a finding or set of findings. First, for research to be most useful not only in the policy arena but also more generally, significance tests need to be accompanied by effect size estimates. The practical importance of an effect size depends on the scientific context (i.e., measurement, design, and method) as well as the empirical literature context. Second, researchers need to use all existing data when weighing in on a policy debate; here, meta-analyses are particularly useful. Finally, researchers need to be careful about embracing null or small findings, because effects may well be small due to measurement problems alone, particularly early in the history of a research domain.


Psychological Science | 2000

Contrasts and Correlations in Effect-Size Estimation

Ralph L. Rosnow; Robert Rosenthal; Donald B. Rubin

This article describes procedures for presenting standardized measures of effect size when contrasts are used to ask focused questions of data. The simplest contrasts consist of comparisons of two samples (e.g., based on the independent t statistic). Useful effect-size indices in this situation are members of the g family (e.g., Hedges's g and Cohen's d) and the Pearson r. We review expressions for calculating these measures and for transforming them back and forth, and describe how to adjust formulas for obtaining g or d from t, or r from g, when the sample sizes are unequal. The real-life implications of d or g calculated from t become problematic when there are more than two groups, but the correlational approach is adaptable and interpretable, although more complex than in the case of two groups. We describe a family of four conceptually related correlation indices: the alerting correlation, the contrast correlation, the effect-size correlation, and the BESD (binomial effect-size display) correlation. These last three correlations are identical in the simple setting of only two groups, but differ when there are more than two groups.
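
For the two-group case, the conversions the abstract alludes to are commonly written as d = 2t/sqrt(df) for equal group sizes and r = sqrt(t^2 / (t^2 + df)). The sketch below implements these along with the usual unequal-n adjustment for d; treat it as an illustration of the standard formulas rather than a transcription of the article's.

```python
import math

def d_from_t(t, n1, n2):
    """Cohen's d from an independent-samples t with the usual adjustment for
    unequal group sizes; reduces to d = 2t/sqrt(df) when n1 == n2."""
    df = n1 + n2 - 2
    return t * (n1 + n2) / (math.sqrt(df) * math.sqrt(n1 * n2))

def r_from_t(t, df):
    """Effect-size correlation r from t: r = sqrt(t^2 / (t^2 + df))."""
    return math.sqrt(t ** 2 / (t ** 2 + df))

# Hypothetical two-group result: t = 2.50 with n1 = 30 and n2 = 20
t_obs, n1, n2 = 2.50, 30, 20
print(f"d = {d_from_t(t_obs, n1, n2):.3f}")
print(f"r = {r_from_t(t_obs, n1 + n2 - 2):.3f}")
```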


Journal of Personality and Social Psychology | 2003

Quantifying Construct Validity: Two Simple Measures

Drew Westen; Robert Rosenthal

Construct validity is one of the most central concepts in psychology. Researchers generally establish the construct validity of a measure by correlating it with a number of other measures and arguing from the pattern of correlations that the measure is associated with these variables in theoretically predictable ways. This article presents 2 simple metrics for quantifying construct validity that provide effect size estimates indicating the extent to which the observed pattern of correlations in a convergent-discriminant validity matrix matches the theoretically predicted pattern of correlations. Both measures, based on contrast analysis, provide simple estimates of validity that can be compared across studies, constructs, and measures meta-analytically, and can be implemented without the use of complex statistical procedures that may limit their accessibility.
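
As we read the abstract, one of the two metrics amounts to correlating the theoretically predicted pattern of correlations with the observed convergent-discriminant pattern. The sketch below is a hypothetical illustration of that idea; the function name, the data, and the simplification to an ordinary Pearson correlation are our assumptions, not the article's exact procedure.

```python
import numpy as np

def pattern_match_r(predicted, observed):
    """Correlate a theoretically predicted pattern of correlations with the
    observed convergent-discriminant correlations (a simplified stand-in for
    the Westen-Rosenthal alerting-type index)."""
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return float(np.corrcoef(predicted, observed)[0, 1])

# Hypothetical predictions and observations for five criterion measures
predicted = [0.60, 0.40, 0.10, -0.20, -0.40]  # theory-driven pattern
observed = [0.50, 0.30, 0.00, -0.10, -0.30]   # observed correlations
print(f"pattern match r = {pattern_match_r(predicted, observed):.2f}")
```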


Psychological Methods | 2003

r equivalent: A Simple Effect Size Indicator

Robert Rosenthal; Donald B. Rubin

The purpose of this article is to propose a simple effect size estimate (obtained from the sample size, N, and a p value) that can be used (a) in meta-analytic research where only sample sizes and p values have been reported by the original investigator, (b) where no generally accepted effect size estimate exists, or (c) where directly computed effect size estimates are likely to be misleading. This effect size estimate is called r(equivalent) because it equals the sample point-biserial correlation between the treatment indicator and an exactly normally distributed outcome in a two-treatment experiment with N/2 units in each group and the obtained p value. As part of placing r(equivalent) into a broader context, the authors also address limitations of r(equivalent).
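
Following the definition above, r(equivalent) can be recovered from N and a one-tailed p value by finding the matching t on N - 2 degrees of freedom and converting it to a point-biserial correlation. A minimal sketch, with a made-up example and our own function name:

```python
from math import sqrt
from scipy.stats import t as t_dist

def r_equivalent(p_one_tailed, n):
    """r(equivalent) from a one-tailed p value and total sample size N: find
    the t with N - 2 degrees of freedom matching p, then convert it to the
    equivalent point-biserial correlation r = t / sqrt(t^2 + df)."""
    df = n - 2
    t_value = t_dist.ppf(1.0 - p_one_tailed, df)
    return t_value / sqrt(t_value ** 2 + df)

# Hypothetical report giving only N = 36 and a one-tailed p of .02
print(f"r(equivalent) = {r_equivalent(0.02, 36):.2f}")
```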

Collaboration


Dive into Robert Rosenthal's collaborations.

Top Co-Authors

Jinni A. Harrigan

California State University


Elisha Babad

Hebrew University of Jerusalem
