Publication


Featured research published by Hannah R. Rothstein.


Archive | 2006

Publication bias in meta-analysis: prevention, assessment and adjustments

Hannah R. Rothstein; Alex J. Sutton; Michael Borenstein

Preface. Acknowledgements. Notes on Contributors.
Chapter 1: Publication Bias in Meta-Analysis (Hannah R. Rothstein, Alexander J. Sutton and Michael Borenstein).
Part A: Publication bias in context.
Chapter 2: Publication Bias: Recognizing the Problem, Understanding Its Origins and Scope, and Preventing Harm (Kay Dickersin).
Chapter 3: Preventing Publication Bias: Registries and Prospective Meta-Analysis (Jesse A. Berlin and Davina Ghersi).
Chapter 4: Grey Literature and Systematic Reviews (Sally Hopewell, Mike Clarke and Sue Mallett).
Part B: Statistical methods for assessing publication bias.
Chapter 5: The Funnel Plot (Jonathan A.C. Sterne, Betsy Jane Becker and Matthias Egger).
Chapter 6: Regression Methods to Detect Publication and Other Bias in Meta-Analysis (Jonathan A.C. Sterne and Matthias Egger).
Chapter 7: Failsafe N or File-Drawer Number (Betsy Jane Becker).
Chapter 8: The Trim and Fill Method (Sue Duval).
Chapter 9: Selection Method Approaches (Larry V. Hedges and Jack Vevea).
Chapter 10: Evidence Concerning the Consequences of Publication and Related Biases (Alexander J. Sutton).
Chapter 11: Software for Publication Bias (Michael Borenstein).
Part C: Advanced and emerging approaches.
Chapter 12: Bias in Meta-Analysis Induced by Incompletely Reported Studies (Alexander J. Sutton and Therese D. Pigott).
Chapter 13: Assessing the Evolution of Effect Sizes over Time (Thomas A. Trikalinos and John P.A. Ioannidis).
Chapter 14: Do Systematic Reviews Based on Individual Patient Data Offer a Means of Circumventing Biases Associated with Trial Publications? (Lesley Stewart, Jayne Tierney and Sarah Burdett).
Chapter 15: Differentiating Biases from Genuine Heterogeneity: Distinguishing Artifactual from Substantive Effects (John P.A. Ioannidis).
Chapter 16: Beyond Conventional Publication Bias: Other Determinants of Data Suppression (Scott D. Halpern and Jesse A. Berlin).
Appendices. Appendix A: Data Sets. Appendix B: Annotated Bibliography (Hannah R. Rothstein and Ashley Busing). Glossary. Index.


Psychological Bulletin | 2010

Violent video game effects on aggression, empathy, and prosocial behavior in Eastern and Western countries: A meta-analytic review.

Craig A. Anderson; Akiko Shibuya; Nobuko Ihori; Edward L. Swing; Brad J. Bushman; Akira Sakamoto; Hannah R. Rothstein; Muniba Saleem

Meta-analytic procedures were used to test the effects of violent video games on aggressive behavior, aggressive cognition, aggressive affect, physiological arousal, empathy/desensitization, and prosocial behavior. Unique features of this meta-analytic review include (a) more restrictive methodological quality inclusion criteria than in past meta-analyses; (b) cross-cultural comparisons; (c) longitudinal studies for all outcomes except physiological arousal; (d) conservative statistical controls; (e) multiple moderator analyses; and (f) sensitivity analyses. Social-cognitive models and cultural differences between Japan and Western countries were used to generate theory-based predictions. Meta-analyses yielded significant effects for all 6 outcome variables. The pattern of results for different outcomes and research designs (experimental, cross-sectional, longitudinal) fit theoretical predictions well. The evidence strongly suggests that exposure to violent video games is a causal risk factor for increased aggressive behavior, aggressive cognition, and aggressive affect and for decreased empathy and prosocial behavior. Moderator analyses revealed significant research design effects, weak evidence of cultural differences in susceptibility and type of measurement effects, and no evidence of sex differences in susceptibility. Results of various sensitivity analyses revealed these effects to be robust, with little evidence of selection (publication) bias.


Research Synthesis Methods | 2010

A basic introduction to fixed‐effect and random‐effects models for meta‐analysis

Michael Borenstein; Larry V. Hedges; Julian P. T. Higgins; Hannah R. Rothstein

There are two popular statistical models for meta-analysis, the fixed-effect model and the random-effects model. The fact that these two models employ similar sets of formulas to compute statistics, and sometimes yield similar estimates for the various parameters, may lead people to believe that the models are interchangeable. In fact, though, the models represent fundamentally different assumptions about the data. The selection of the appropriate model is important to ensure that the various statistics are estimated correctly. Additionally, and more fundamentally, the model serves to place the analysis in context. It provides a framework for the goals of the analysis as well as for the interpretation of the statistics. In this paper we explain the key assumptions of each model, and then outline the differences between the models. We conclude with a discussion of factors to consider when choosing between the two models.
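
To make the contrast concrete, here is a minimal Python sketch of inverse-variance pooling under both models, using the DerSimonian-Laird moment estimator for the between-study variance. It illustrates the general approach rather than the authors' own code, and the effect sizes and variances are invented.

```python
import numpy as np

# Hypothetical study-level effect sizes and within-study variances.
y = np.array([0.30, 0.45, 0.10, 0.60, 0.25])
v = np.array([0.04, 0.02, 0.05, 0.03, 0.06])

# Fixed-effect model: one common true effect, weights are 1 / v_i.
w_fe = 1.0 / v
mu_fe = np.sum(w_fe * y) / np.sum(w_fe)
se_fe = np.sqrt(1.0 / np.sum(w_fe))

# Random-effects model: true effects vary; estimate the between-study
# variance tau^2 with the DerSimonian-Laird method of moments.
q = np.sum(w_fe * (y - mu_fe) ** 2)                  # Cochran's Q
df = len(y) - 1
c = np.sum(w_fe) - np.sum(w_fe ** 2) / np.sum(w_fe)
tau2 = max(0.0, (q - df) / c)

w_re = 1.0 / (v + tau2)                              # weights add tau^2
mu_re = np.sum(w_re * y) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))

print(f"fixed-effect:   {mu_fe:.3f} (SE {se_fe:.3f})")
print(f"random-effects: {mu_re:.3f} (SE {se_re:.3f})")
```

When the estimated between-study variance is zero the two summaries coincide; as heterogeneity grows, the random-effects weights become more nearly equal and the summary estimate and its standard error diverge from the fixed-effect results.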


Journal of Occupational Health Psychology | 2008

Effects of Occupational Stress Management Intervention Programs: A Meta-analysis

Katherine M. Richardson; Hannah R. Rothstein

A meta-analysis was conducted to determine the effectiveness of stress management interventions in occupational settings. Thirty-six experimental studies were included, representing 55 interventions. Total sample size was 2,847. Of the participants, 59% were female, mean age was 35.4, and average length of intervention was 7.4 weeks. The overall weighted effect size (Cohen's d) for all studies was 0.526 (95% confidence interval = 0.364, 0.687), a significant medium to large effect. Interventions were coded as cognitive-behavioral, relaxation, organizational, multimodal, or alternative. Analyses based on these subgroups suggested that intervention type played a moderating role. Cognitive-behavioral programs consistently produced larger effects than other types of interventions, but when additional treatment components were added, the effect was reduced. Within the sample of studies, relaxation interventions were most frequently used, and organizational interventions continued to be scarce. Effects were based mainly on psychological outcome variables, as opposed to physiological or organizational measures. The examination of additional moderators such as treatment length, outcome variable, and occupation did not reveal significant variations in effect size by intervention type.
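
As a small worked check on the reported summary, the sketch below back-calculates the standard error implied by the published point estimate and 95% confidence interval; the figures come from the abstract, while the normal-approximation arithmetic is generic rather than the authors' analysis.

```python
from math import sqrt

d = 0.526                  # overall weighted Cohen's d from the abstract
ci_low, ci_high = 0.364, 0.687

# Under a normal approximation, a 95% CI spans about 2 * 1.96 standard errors.
se = (ci_high - ci_low) / (2 * 1.96)
z = d / se                 # test statistic for H0: d = 0

print(f"implied SE ≈ {se:.3f}, z ≈ {z:.2f}")   # roughly SE 0.082, z 6.4
```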


BMJ | 2016

ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions

Jonathan A C Sterne; Miguel A. Hernán; Barnaby C Reeves; Jelena Savovic; Nancy D Berkman; Meera Viswanathan; David Henry; Douglas G. Altman; Mohammed T Ansari; Isabelle Boutron; James Carpenter; An-Wen Chan; Rachel Churchill; Jonathan J Deeks; Asbjørn Hróbjartsson; Jamie Kirkham; Peter Jüni; Yoon K. Loke; Theresa D Pigott; Craig Ramsay; Deborah Regidor; Hannah R. Rothstein; Lakhbir Sandhu; Pasqualina Santaguida; Holger J. Schunemann; B. Shea; Ian Shrier; Peter Tugwell; Lucy Turner; Jeffrey C. Valentine

Non-randomised studies of the effects of interventions are critical to many areas of healthcare evaluation, but their results may be biased. It is therefore important to understand and appraise their strengths and weaknesses. We developed ROBINS-I (“Risk Of Bias In Non-randomised Studies - of Interventions”), a new tool for evaluating risk of bias in estimates of the comparative effectiveness (harm or benefit) of interventions from studies that did not use randomisation to allocate units (individuals or clusters of individuals) to comparison groups. The tool will be particularly useful to those undertaking systematic reviews that include non-randomised studies.
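
As a rough, unofficial sketch of how such domain-based judgements can be represented, the snippet below lists the tool's seven bias domains and applies a simplified roll-up rule (the overall judgement is taken to be no less severe than the worst domain judgement); the function and data structure are illustrative, not part of the published tool.

```python
# Severity-ordered judgement levels (simplified; "No information" omitted).
LEVELS = ["Low", "Moderate", "Serious", "Critical"]

# The seven bias domains assessed by ROBINS-I.
DOMAINS = [
    "confounding",
    "selection of participants into the study",
    "classification of interventions",
    "deviations from intended interventions",
    "missing data",
    "measurement of outcomes",
    "selection of the reported result",
]

def overall_judgement(judgements: dict) -> str:
    """Simplified roll-up: the overall judgement is at least as severe
    as the most severe domain-level judgement."""
    return max((judgements[d] for d in DOMAINS), key=LEVELS.index)

example = {d: "Low" for d in DOMAINS}
example["confounding"] = "Serious"
print(overall_judgement(example))  # -> Serious
```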


Journal of Educational and Behavioral Statistics | 2010

How Many Studies Do You Need?: A Primer on Statistical Power for Meta-Analysis

Jeffrey C. Valentine; Therese D. Pigott; Hannah R. Rothstein

In this article, the authors outline methods for using fixed and random effects power analysis in the context of meta-analysis. Like statistical power analysis for primary studies, power analysis for meta-analysis can be done either prospectively or retrospectively and requires assumptions about parameters that are unknown. The authors provide some suggestions for thinking about these parameters, in particular for the random effects variance component. The authors also show how the typically uninformative retrospective power analysis can be made more informative. The authors then discuss the value of confidence intervals, show how they could be used in addition to or instead of retrospective power analysis, and also demonstrate that confidence intervals can convey information more effectively in some situations than power analyses alone. Finally, the authors take up the question “How many studies do you need to do a meta-analysis?” and show that, given the need for a conclusion, the answer is “two studies,” because all other synthesis techniques are less transparent and/or are less likely to be valid. For systematic reviewers who choose not to conduct a quantitative synthesis, the authors provide suggestions for both highlighting the current limitations in the research base and for displaying the characteristics and results of studies that were found to meet inclusion criteria.
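
A hedged sketch of the kind of prospective calculation the article describes, under a fixed-effect model with equal two-group studies; the helper name and the numeric inputs are illustrative, not taken from the article.

```python
from math import sqrt
from scipy.stats import norm

def fixed_effect_power(delta, k, n_per_group, alpha=0.05):
    """Approximate power to detect a standardized mean difference `delta`
    in a fixed-effect meta-analysis of k two-group studies, each with
    n_per_group participants per arm."""
    # Large-sample variance of a standardized mean difference.
    v = 2.0 / n_per_group + delta ** 2 / (4.0 * n_per_group)
    se_pooled = sqrt(v / k)          # k equally precise studies
    lam = delta / se_pooled          # noncentrality of the summary test
    z_crit = norm.ppf(1 - alpha / 2)
    return (1 - norm.cdf(z_crit - lam)) + norm.cdf(-z_crit - lam)

# e.g. power to detect d = 0.3 with 20 participants per arm per study
for k in (2, 5, 10, 20):
    print(k, round(fixed_effect_power(0.3, k, 20), 2))
```

A random-effects version would add an assumed between-study variance to each study's within-study variance before pooling, which is exactly where the authors' suggestions about the variance component come into play.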


BMJ | 2008

Reasons or excuses for avoiding meta-analysis in forest plots.

John P. A. Ioannidis; Nikolaos A. Patsopoulos; Hannah R. Rothstein

Heterogeneous data are a common problem in meta-analysis. John Ioannidis, Nikolaos Patsopoulos, and Hannah Rothstein show that final synthesis is possible and desirable in most cases.
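
Since the argument turns on heterogeneity, a minimal sketch of how it is usually quantified may help; the formulas for Cochran's Q and I² are generic, not the authors' code, and the data are invented.

```python
import numpy as np

# Hypothetical study effects and within-study variances from a forest plot.
y = np.array([0.8, 0.2, 1.1, -0.1, 0.5])
v = np.array([0.10, 0.05, 0.20, 0.08, 0.12])

w = 1.0 / v
mu = np.sum(w * y) / np.sum(w)          # fixed-effect summary
q = np.sum(w * (y - mu) ** 2)           # Cochran's Q
df = len(y) - 1
i2 = max(0.0, (q - df) / q) * 100       # I^2: % of variation beyond chance

print(f"Q = {q:.2f} on {df} df, I^2 = {i2:.0f}%")
```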


Journal of Applied Psychology | 1990

Interrater reliability of job performance ratings: Growth to asymptote level with increasing opportunity to observe.

Hannah R. Rothstein

The interrater reliabilities of ratings of 9,975 ratees from 79 organizations were examined as a function of length of exposure to the ratee. It was found that there was a strong, nonlinear relationship between months of exposure and interrater reliability. The correlation between a logarithmic transformation of months of experience and reliability was .73 for one type of rating and .65 for another type. The relationship was strongest during the first 12 months on the job. Implications for estimating reliabilities in individual and meta-analytic studies and for performance appraisal were presented, and possible explanations of the reliability-variance relationship were advanced.
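
A minimal sketch of the transformation-and-correlation step described here, with invented data (the study's own data are not reproduced):

```python
import numpy as np

# Invented exposure times (months) and interrater reliabilities.
months = np.array([1, 3, 6, 9, 12, 18, 24, 36, 48])
reliability = np.array([0.25, 0.35, 0.42, 0.46, 0.50, 0.52, 0.53, 0.54, 0.55])

# Correlating reliability with log(months) captures the nonlinear,
# asymptotic growth pattern better than raw months does.
r_raw = np.corrcoef(months, reliability)[0, 1]
r_log = np.corrcoef(np.log(months), reliability)[0, 1]
print(f"raw r = {r_raw:.2f}, log-transformed r = {r_log:.2f}")
```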


Journal of Applied Psychology | 1990

Biographical data in employment selection: Can validities be made generalizable?

Hannah R. Rothstein; Frank L. Schmidt; Frank Erwin; William A. Owens; C. Paul Sparks

The hypothesis was examined that organizational specificity of biodata validity results from the methods typically used to select and key items. In this study, items were initially screened for job relevance, keying was based on large samples from multiple organizations, and items were retained only if they showed validity across organizations. Cross-validation was performed on approximately 11,000 first-line supervisors in 79 organizations. The resulting validities were meta-analyzed across organizations, age levels, sex, and levels of education, supervisory experience, and company tenure. In all cases, validities were generalizable. Validities were also stable across time and did not appear to stem from measurement of knowledge, skills, or abilities acquired through job experience. Finally, these results provide additional evidence against the hypothesis of situational specificity of validities, the first large-sample evidence in a noncognitive domain. Substantial evidence now indicates that the two most valid predictors of job performance are cognitive ability tests and biodata instruments. The quantitative review of the literature by Hunter and Hunter (1984) estimated the average validity of tests of general cognitive ability against supervisory ratings of overall job performance as .47.


Journal of Applied Psychology | 1993

Refinements in Validity Generalization Methods: Implications for the Situational Specificity Hypothesis

Frank L. Schmidt; Kenneth Law; John E. Hunter; Hannah R. Rothstein; Kenneth Pearlman; Michael A. McDaniel

Using a large database, this study examined three refinements of validity generalization procedures: (a) a more accurate procedure for correcting the residual SD for range restriction to estimate SDρ, (b) use of the mean observed correlation (r̄) instead of study-observed rs in the formula for sampling error variance, and (c) removal of non-Pearson rs. The first procedure does not affect the amount of variance accounted for by artifacts. The addition of the second and third procedures increased the mean percentage of validity variance accounted for by artifacts from 70% to 82%, a 17% increase. The cumulative addition of all three procedures decreased the mean SDρ estimate from .150 to .106, a 29% decrease. Six additional variance-producing artifacts were identified that could not be corrected for. In light of these, we concluded that the obtained estimates of mean SDρ and mean validity variance accounted for were consistent with the hypothesis that the true mean SDρ value is close to zero. These findings provide further evidence against the situational specificity hypothesis. The first published validity generalization research study (Schmidt & Hunter, 1977) hypothesized that if all sources of artifactual variance in cognitive test validities could be controlled methodologically through study design (e.g., construct validity of tests and criterion measures, computational errors) or corrected for (e.g., sampling error, measurement error), there might be no remaining variance in validities across settings. That is, not only would validity be generalizable based on 90% credibility values in the estimated true validity distributions, but all observed variance in validities would be shown to be artifactual and the situational specificity hypothesis would be shown to be false even in its limited form. However, subsequent validity generalization research (e.g., Pearlman, Schmidt, & Hunter, 1980; Schmidt, Gast-Rosenberg, & Hunter, 1980; Schmidt, Hunter, Pearlman, & Shane, 1979) was based on data drawn from the general published and unpublished research literature, and therefore it was not possible to control or correct for the sources of artifactual variance that can generally be controlled for only through study design and execution (e.g., computational and typographical errors, study differences in criterion contamination). Not unexpectedly, many of these meta-analyses accounted for less than 100% of observed validity variance, and the average across studies was also less than 100% (e.g., see Pearlman et al., 1980; Schmidt et al., 1979). The conclusion that the validity of cognitive abilities tests in employment is generalizable is now widely accepted.
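
A simplified sketch of the sampling-error decomposition these refinements concern, using the mean observed correlation r̄ in the sampling error variance formula; this is a bare-bones illustration of the general approach (no range restriction or measurement error corrections), not the authors' full procedure, and the data are invented.

```python
import numpy as np

# Invented observed validities and sample sizes from a set of studies.
r = np.array([0.22, 0.30, 0.15, 0.35, 0.27, 0.18])
n = np.array([ 68,  120,   45,  210,   90,   75])

r_bar = np.average(r, weights=n)                   # mean observed validity
var_obs = np.average((r - r_bar) ** 2, weights=n)  # observed variance

# Sampling error variance computed from r_bar (not each study's own r),
# one of the refinements discussed: (1 - r_bar^2)^2 / (n_i - 1).
var_e = np.average((1 - r_bar ** 2) ** 2 / (n - 1), weights=n)

var_res = max(0.0, var_obs - var_e)                # residual variance
pct_artifact = 100 * min(1.0, var_e / var_obs)     # % due to sampling error

print(f"r_bar = {r_bar:.3f}, residual SD = {np.sqrt(var_res):.3f}, "
      f"{pct_artifact:.0f}% of variance accounted for by sampling error")
```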

Collaboration


Dive into Hannah R. Rothstein's collaborations.

Top Co-Authors

Michael Borenstein (Long Island Jewish Medical Center)

Michael A. McDaniel (Virginia Commonwealth University)