Publication


Featured research published by Felix Thoemmes.


Multivariate Behavioral Research | 2011

A Systematic Review of Propensity Score Methods in the Social Sciences.

Felix Thoemmes; Eun Sook Kim

The use of propensity scores in psychological and educational research has been steadily increasing in the last 2 to 3 years. However, there are some common misconceptions about the use of different estimation techniques and conditioning choices in the context of propensity score analysis. In addition, reporting practices for propensity score analyses often lack important details that allow other researchers to confidently judge the appropriateness of reported analyses and potentially to replicate published findings. In this article we conduct a systematic literature review of a large number of published articles in major areas of social science that used propensity scores up until the fall of 2009. We identify common errors in estimation, conditioning, and reporting of propensity score analyses and suggest possible solutions.


Perspectives on Psychological Science | 2014

Continuously Cumulating Meta-Analysis and Replicability

Sanford L. Braver; Felix Thoemmes; Robert Rosenthal

The current crisis in scientific psychology about whether our findings are irreproducible was presaged years ago by Tversky and Kahneman (1971), who noted that even sophisticated researchers believe in the fallacious Law of Small Numbers—erroneous intuitions about how imprecisely sample data reflect population phenomena. Combined with the low power of most current work, this often leads to the use of misleading criteria about whether an effect has replicated. Rosenthal (1990) suggested more appropriate criteria, here labeled the continuously cumulating meta-analytic (CCMA) approach. For example, a CCMA analysis on a replication attempt that does not reach significance might nonetheless provide more, not less, evidence that the effect is real. Alternatively, measures of heterogeneity might show that two studies that differ in whether they are significant might have only trivially different effect sizes. We present a nontechnical introduction to the CCMA framework (referencing relevant software), and then explain how it can be used to address aspects of replicability or more generally to assess quantitative evidence from numerous studies. We then present some examples and simulation results using the CCMA approach that show how the combination of evidence can yield improved results over the consideration of single studies.
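
To make the CCMA logic concrete, here is a minimal sketch of a fixed-effect (inverse-variance) combination of two studies. The effect sizes, standard errors, and helper function are hypothetical illustrations, not values taken from the article.

```python
# Minimal sketch of a fixed-effect (inverse-variance) meta-analytic
# combination in the spirit of the CCMA approach. All numbers below are
# hypothetical illustration values, not data from the article.
import numpy as np
from scipy import stats

def pooled_effect(effects, ses):
    """Inverse-variance weighted mean effect, its SE, z, and p-value."""
    effects, ses = np.asarray(effects), np.asarray(ses)
    w = 1.0 / ses**2
    est = np.sum(w * effects) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    z = est / se
    return est, se, z, 2 * stats.norm.sf(abs(z))

# Original study: significant (d = 0.50, SE = 0.20, p ~ .012).
# Replication: same direction but not significant (d = 0.30, SE = 0.20, p ~ .13).
print(pooled_effect([0.50], [0.20]))
print(pooled_effect([0.50, 0.30], [0.20, 0.20]))
# The pooled estimate (d = 0.40, SE ~ 0.14, p ~ .005) is *more*
# significant than the original study alone: the "failed" replication
# added, rather than subtracted, evidence for the effect.
```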


Psychological Science | 2012

Military Training and Personality Trait Development: Does the Military Make the Man, or Does the Man Make the Military?

Joshua J. Jackson; Felix Thoemmes; Kathrin Jonkmann; Oliver Lüdtke; Ulrich Trautwein

Military experience is an important turning point in a person’s life and, consequently, is associated with important life outcomes. Using a large longitudinal sample of German males, we examined whether personality traits played a role during this period. Results indicated that personality traits prospectively predicted the decision to enter the military. People lower in agreeableness, neuroticism, and openness to experience during high school were more likely to enter the military after graduation. In addition, military training was associated with changes in personality. Compared with a control group, military recruits had lower levels of agreeableness after training. These levels persisted 5 years after training, even after participants entered college or the labor market. This study is one of the first to identify life experiences associated with changes in personality traits. Moreover, our results suggest that military experiences may have a long-lasting influence on individual-level characteristics.


Psychological Methods | 2010

Campbell's and Rubin's Perspectives on Causal Inference

Stephen G. West; Felix Thoemmes

Donald Campbell's approach to causal inference (D. T. Campbell, 1957; W. R. Shadish, T. D. Cook, & D. T. Campbell, 2002) is widely used in psychology and education, whereas Donald Rubin's causal model (P. W. Holland, 1986; D. B. Rubin, 1974, 2005) is widely used in economics, statistics, medicine, and public health. Campbell's approach focuses on the identification of threats to validity and the inclusion of design features that may prevent those threats from occurring or render them implausible. Rubin's approach focuses on the precise specification of both the possible outcomes for each participant and assumptions that are mathematically sufficient to estimate the causal effect. In this article, the authors compare the perspectives provided by the 2 approaches on randomized experiments, broken randomized experiments in which treatment nonadherence or attrition occurs, and observational studies in which participants are assigned to treatments on an unknown basis. The authors highlight dimensions on which the 2 approaches have different emphases, including the roles of constructs versus operations, threats to validity versus assumptions, methods of addressing threats to internal validity and violations of assumptions, direction versus magnitude of causal effects, role of measurement, and causal generalization. The authors conclude that investigators can benefit from drawing on the strengths of both approaches in designing research.
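
For reference, Rubin's "precise specification of the possible outcomes" is usually written in the following standard potential-outcomes notation (a textbook formulation, not quoted from the article itself):

```latex
% Standard potential-outcomes notation for the Rubin causal model
% (textbook formulation; not reproduced from the article). Each unit i
% has two potential outcomes, only one of which is ever observed:
% Y_i(1) under treatment and Y_i(0) under control.
\tau_i = Y_i(1) - Y_i(0)                       % unit-level causal effect
\tau_{\mathrm{ATE}} = \mathbb{E}\bigl[Y(1) - Y(0)\bigr]  % average treatment effect
% Identification in observational studies requires strong ignorability
% of the treatment assignment Z given observed covariates X:
\bigl(Y(1), Y(0)\bigr) \;\perp\!\!\!\perp\; Z \mid X,
\qquad 0 < \Pr(Z = 1 \mid X) < 1
```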


Structural Equation Modeling | 2010

Power Analysis for Complex Mediational Designs Using Monte Carlo Methods

Felix Thoemmes; David P. MacKinnon; Mark Reiser

Applied researchers often include mediation effects in applications of advanced methods such as latent variable models and linear growth curve models. Guidance on how to estimate statistical power to detect mediation for these models has not yet been addressed in the literature. We describe a general framework for power analyses for complex mediational models. The approach is based on the well-known technique of generating a large number of samples in a Monte Carlo study, and estimating power as the percentage of cases in which an estimate of interest is significantly different from zero. Examples of power calculation for commonly used mediational models are provided. Power analyses for the single mediator, multiple mediators, 3-path mediation, mediation with latent variables, moderated mediation, and mediation in longitudinal designs are described. Annotated sample syntax for Mplus is appended and tabled values of required sample sizes are shown for some models.
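
The article's appended syntax is for Mplus; as a hypothetical illustration of the same Monte Carlo logic in the simplest case, the sketch below estimates power for a single-mediator model using the joint significance of the a- and b-paths. The path values and sample size are made up for illustration.

```python
# Monte Carlo power estimation for a single-mediator model X -> M -> Y,
# illustrating the general logic described in the abstract. Path values
# and sample size are hypothetical; the article's own syntax is Mplus.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def mediation_power(n, a, b, n_reps=2000, alpha=0.05):
    """Power = proportion of simulated datasets in which both the
    a-path (X -> M) and the b-path (M -> Y given X) are significant."""
    hits = 0
    for _ in range(n_reps):
        x = rng.normal(size=n)
        m = a * x + rng.normal(size=n)          # a-path
        y = b * m + rng.normal(size=n)          # b-path
        p_a = stats.linregress(x, m).pvalue     # test of a
        # b-path: OLS of Y on [1, M, X], t-test on the M coefficient
        X = np.column_stack([np.ones(n), m, x])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        cov = (resid @ resid / (n - 3)) * np.linalg.inv(X.T @ X)
        t_b = beta[1] / np.sqrt(cov[1, 1])
        p_b = 2 * stats.t.sf(abs(t_b), df=n - 3)
        hits += (p_a < alpha) and (p_b < alpha)
    return hits / n_reps

print(mediation_power(n=100, a=0.30, b=0.30))  # roughly 0.7 here
```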


American Journal of Health Behavior | 2010

Long-term effects of a worksite health promotion program for firefighters.

David P. MacKinnon; Diane L. Elliot; Felix Thoemmes; Kerry S. Kuehl; Esther L. Moe; Linn Goldberg; Ginger Lockhart Burrell; Krista W. Ranby

OBJECTIVE: To describe effects of 2 worksite health promotion programs for firefighters, both immediate outcomes and the long-term consequences for 4 years following the interventions.
METHODS: At baseline, 599 firefighters were assessed, randomized by fire station to control and 2 different intervention conditions, and reevaluated with 6 annual follow-up measurements.
RESULTS: Both a team-centered peer-taught curriculum and an individual motivational interviewing intervention demonstrated positive effects on BMI, with team effects on nutrition behavior and physical activity at one year. Most differences between intervention and control groups dissipated at later annual assessments. However, the trajectory of behaviors across time generally was positive for all groups, consistent with lasting effects and diffusion of program benefits across experimental groups within the worksites.
CONCLUSIONS: Although one-year programmatic effects did not remain over time, the long-term pattern of behaviors suggested these worksites as a whole were healthier more than 3 years following the interventions.


Basic and Applied Social Psychology | 2015

Reversing Arrows in Mediation Models Does Not Distinguish Plausible Models

Felix Thoemmes

Reversing arrows in the classic tri-variate X-M-Y mediation models as a test to check whether one mediation model is superior to another is inadmissible. Presenting evidence that one tri-variate mediation model yields a significant indirect effect, whereas one with some reversed arrows does not, is not proof or even evidence that one model should be preferred. In fact, the significance of the indirect or any other effect can never be used to infer whether one model should be preferred over another, if the models are in the same so-called equivalence class. The practice of running several mediation models with reversed arrows to decide which model to prefer should be abandoned. The only way to choose among equivalent models is through assumptions that are either fulfilled by design features or invoked based on theory. Similar arguments about reversing arrows in mediation models have been made before, but this current work is the first to derive this result analytically for the complete (Markovian) equivalence class of the tri-variate mediation model.
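
A small simulation makes the point concrete: data generated from X -> M -> Y also produce "significant" paths when the arrows are reversed, so significance alone cannot distinguish the two orderings. The sketch below is a hypothetical illustration using plain OLS, not an analysis from the article.

```python
# Hypothetical demonstration of the equivalence-class point: data
# generated from X -> M -> Y also yield "significant" a- and b-paths
# when the arrows are reversed to Y -> M -> X, so significance cannot
# tell the two models apart. All values are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 5000
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)   # true a-path
y = 0.5 * m + rng.normal(size=n)   # true b-path

def path_pvalues(pred, med, out):
    """p-values of the two regressions composing pred -> med -> out."""
    p_a = stats.linregress(pred, med).pvalue         # pred -> med
    X = np.column_stack([np.ones(n), med, pred])     # out on med, pred
    beta, *_ = np.linalg.lstsq(X, out, rcond=None)
    resid = out - X @ beta
    cov = (resid @ resid / (n - 3)) * np.linalg.inv(X.T @ X)
    t_b = beta[1] / np.sqrt(cov[1, 1])
    p_b = 2 * stats.t.sf(abs(t_b), df=n - 3)
    return p_a, p_b

print(path_pvalues(x, m, y))  # fitted as generated: both p-values tiny
print(path_pvalues(y, m, x))  # arrows reversed: both p-values tiny too
```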


Journal of Personality and Social Psychology | 2008

Two Ways to Be Complex and Why They Matter: Implications for Attitude Strength and Lying

Lucian Gideon Conway; Felix Thoemmes; Amy M. Allison; Kirsten Hands Towgood; Michael J. Wagner; Kathleen Davey; Amanda Salcido; Amanda Nicole Stovall; Daniel P. Dodds; Kate Bongard; Kathrene Conway

Integrative complexity broadly measures the structural complexity of statements. This breadth, although beneficial in multiple ways, can potentially hamper the development of specific theories. In response, the authors developed a model of complex thinking, focusing on 2 different ways that people can be complex within the integrative complexity system and subsequently developed measurements of each of these 2 routes: Dialectical complexity focuses on a dialectical tension between 2 or more competing perspectives, whereas elaborative complexity focuses on complexly elaborating on 1 singular perspective. The authors posit that many variables have different effects on these 2 forms of complexity and subsequently test this idea in 2 different theoretical domains. In Studies 1a, 1b, and 2, the authors demonstrate that variables related to attitude strength (e.g., domain importance, extremism, domain accessibility) decrease dialectical complexity but increase elaborative complexity. In Study 3, the authors show that counterattitudinal lying decreases dialectical complexity but increases elaborative complexity, implicating a strategic (as opposed to a cognitive strain) view of the lying-complexity relationship. The authors argue that this dual demonstration across 2 different theoretical domains helps establish the utility of the new model and measurements as well as offer the potential to reconcile apparent conflicts in the area of cognitive complexity.


Multivariate Behavioral Research | 2011

The Use of Propensity Scores for Nonrandomized Designs With Clustered Data

Felix Thoemmes; Stephen G. West

In this article we propose several modeling choices to extend propensity score analysis to clustered data. We describe different possible model specifications for estimation of the propensity score: single-level model, fixed effects model, and two random effects models. We also consider both conditioning within clusters and conditioning across clusters. We examine the underlying assumptions of these modeling choices and the type of randomized experiment approximated by each approach. Using a simulation study, we compare the relative performance of these modeling and conditioning choices in reducing bias due to confounding variables at both the person and cluster levels. An applied example based on a study by Hughes, Chen, Thoemmes, and Kwok (2010) is provided in which the effect of retention in Grade 1 on passing an achievement test in Grade 3 is evaluated. We find that models that consider the clustered nature of the data both in estimation of the propensity score and conditioning on the propensity score performed best in our simulation study; however, other modeling choices also performed well. The applied example illustrates practical limitations of these models when cluster sizes are small.
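
As a rough, hypothetical sketch of the fixed effects specification mentioned in the abstract, the code below estimates a propensity score from a logistic regression with a dummy indicator per cluster. The data are simulated and the variable names and effect sizes are illustrative only.

```python
# Hypothetical sketch of a fixed-effects propensity score model for
# clustered data: a logistic regression with one dummy per cluster.
# Simulated data; names and effects are illustrative, not the article's.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_clusters, n_per = 20, 50
cluster = np.repeat(np.arange(n_clusters), n_per)
cluster_eff = rng.normal(scale=0.5, size=n_clusters)[cluster]
x = rng.normal(size=n_clusters * n_per)      # person-level covariate
logit = -0.5 + 0.8 * x + cluster_eff         # selection into treatment
treat = rng.random(n_clusters * n_per) < 1 / (1 + np.exp(-logit))

df = pd.DataFrame({"treat": treat.astype(int), "x": x, "cluster": cluster})

# Fixed effects specification: one intercept per cluster via C(cluster).
ps_model = smf.logit("treat ~ x + C(cluster)", data=df).fit(disp=0)
df["pscore"] = ps_model.predict(df)

# Conditioning within clusters would then compare treated and control
# units on the propensity score separately inside each cluster.
print(df.groupby("cluster")["pscore"].describe().head())
```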


Journal of Consulting and Clinical Psychology | 2014

Propensity scores as a basis for equating groups: basic principles and application in clinical treatment outcome research.

Stephen G. West; Heining Cham; Felix Thoemmes; Babette Renneberg; Julian Schulze; Matthias Weiler

A propensity score is the probability that a participant is assigned to the treatment group based on a set of baseline covariates. Propensity scores provide an excellent basis for equating treatment groups on a large set of covariates when randomization is not possible. This article provides a nontechnical introduction to propensity scores for clinical researchers. If all important covariates are measured, then methods that equate on propensity scores can achieve balance on a large set of covariates that mimics that achieved by a randomized experiment. We present an illustration of the steps in the construction and checking of propensity scores in a study of the effectiveness of a health coach versus treatment as usual on the well-being of seriously ill individuals. We then consider alternative methods of equating groups on propensity scores and estimating treatment effects including matching, stratification, weighting, and analysis of covariance. We illustrate a sensitivity analysis that can probe for the potential effects of omitted covariates on the estimate of the causal effect. Finally, we briefly consider several practical and theoretical issues in the use of propensity scores in applied settings. Propensity score methods have advantages over alternative approaches to equating groups particularly when the treatment and control groups do not fully overlap, and there are nonlinear relationships between covariates and the outcome.
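
A hypothetical sketch of the basic workflow described above: estimate the propensity score, form inverse-probability weights, and check covariate balance. The data are simulated, and the covariates and the "coach" label are illustrative stand-ins, not the study's data.

```python
# Hypothetical sketch of the basic propensity score workflow: estimate
# the score, form inverse-probability-of-treatment weights, and check
# covariate balance. Simulated data; illustrative only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
age = rng.normal(50, 10, n)
severity = rng.normal(0, 1, n)
# Nonrandom assignment: sicker, older patients more often get the coach.
p_true = 1 / (1 + np.exp(-(-0.2 + 0.03 * (age - 50) + 0.8 * severity)))
coach = (rng.random(n) < p_true).astype(int)

# Step 1: propensity score from a logistic regression on the covariates.
X = sm.add_constant(np.column_stack([age, severity]))
ps = sm.Logit(coach, X).fit(disp=0).predict(X)

# Step 2: ATE-style inverse probability weights.
w = np.where(coach == 1, 1 / ps, 1 / (1 - ps))

def weighted_mean(v, w):
    return np.sum(v * w) / np.sum(w)

# Step 3: balance check; weighted group means should be close.
for name, v in [("age", age), ("severity", severity)]:
    m1 = weighted_mean(v[coach == 1], w[coach == 1])
    m0 = weighted_mean(v[coach == 0], w[coach == 0])
    print(f"{name}: treated={m1:.2f}  control={m0:.2f}")
```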

Collaboration


Dive into Felix Thoemmes's collaborations.

Top Co-Authors

Ingo Zettler (University of Copenhagen)
Peter M. Steiner (University of Wisconsin-Madison)