
Publication


Featured research published by Rebecca M. Kuiper.


Psychological Methods | 2015

A critique of the cross-lagged panel model.

Ellen L. Hamaker; Rebecca M. Kuiper; Raoul P. P. P. Grasman

The cross-lagged panel model (CLPM) is believed by many to overcome the problems associated with the use of cross-lagged correlations as a way to study causal influences in longitudinal panel data. The current article, however, shows that if stability of constructs is to some extent of a trait-like, time-invariant nature, the autoregressive relationships of the CLPM fail to adequately account for this. As a result, the lagged parameters that are obtained with the CLPM do not represent the actual within-person relationships over time, and this may lead to erroneous conclusions regarding the presence, predominance, and sign of causal influences. In this article we present an alternative model that separates the within-person process from stable between-person differences through the inclusion of random intercepts, and we discuss how this model is related to existing structural equation models that include cross-lagged relationships. We derive the analytical relationship between the cross-lagged parameters from the CLPM and the alternative model, and use simulations to demonstrate the spurious results that may arise when using the CLPM to analyze data that include stable, trait-like individual differences. We also present a modeling strategy to avoid this pitfall and illustrate this using an empirical data set. The implications for both existing and future cross-lagged panel research are discussed.
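The pitfall the article describes can be reproduced with a small simulation (a hypothetical sketch in NumPy, not the authors' code): two constructs whose stable trait-like means are correlated, but which have no within-person cross-lagged effect in either direction, still yield a clearly nonzero cross-lagged estimate in a pooled CLPM-style regression.

```python
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_waves = 2000, 10

# Correlated, stable trait-like means for the two constructs; within persons
# there is NO cross-lagged effect in either direction.
traits = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.8], [0.8, 1.0]], size=n_persons)
mu_x, mu_y = traits[:, 0], traits[:, 1]

phi = 0.3  # within-person autoregression, identical for both constructs
x = np.empty((n_persons, n_waves))
y = np.empty((n_persons, n_waves))
x[:, 0] = mu_x + rng.normal(size=n_persons)
y[:, 0] = mu_y + rng.normal(size=n_persons)
for t in range(1, n_waves):
    x[:, t] = mu_x + phi * (x[:, t - 1] - mu_x) + rng.normal(size=n_persons)
    y[:, t] = mu_y + phi * (y[:, t - 1] - mu_y) + rng.normal(size=n_persons)

# CLPM-style pooled regression of y_t on y_{t-1} and x_{t-1}.
outcome = y[:, 1:].ravel()
design = np.column_stack([np.ones(outcome.size),
                          y[:, :-1].ravel(),
                          x[:, :-1].ravel()])
beta, *_ = np.linalg.lstsq(design, outcome, rcond=None)
cross_lagged = beta[2]  # zero within persons, yet estimated well above zero
print(f"estimated cross-lagged effect of x on y: {cross_lagged:.3f}")
```

Because the lagged predictors are only noisy proxies for the omitted stable means, the trait correlation leaks into the cross-lagged coefficient, which is exactly the spurious result the random-intercept model is designed to avoid.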


Psychological Methods | 2010

Comparisons of Means Using Exploratory and Confirmatory Approaches

Rebecca M. Kuiper; Herbert Hoijtink

This article discusses comparisons of means using exploratory and confirmatory approaches. Three methods are discussed: hypothesis testing, model selection based on information criteria, and Bayesian model selection. Throughout the article, an example is used to illustrate and evaluate the two approaches and the three methods. We demonstrate that confirmatory hypothesis testing techniques have more power-that is, have a higher probability of rejecting a false null hypothesis-and confirmatory model selection techniques have a higher probability of choosing the correct or the best hypothesis than their exploratory counterparts. Furthermore, we show that if more than one hypothesis has to be evaluated, model selection has advantages over hypothesis testing. Another, more elaborate example is used to further illustrate confirmatory model selection. The article concludes with recommendations: When a researcher is able to specify reasonable expectations and hypotheses, confirmatory model selection should be used; otherwise, exploratory model selection should be used.
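The information-criterion route mentioned above can be sketched in a few lines (hypothetical data and plain AIC, as a simplified stand-in for the criteria discussed in the article): a null hypothesis of equal group means is compared with a model in which all means are free.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical data: three groups whose true means actually differ.
groups = [rng.normal(loc=m, scale=1.0, size=50) for m in (0.0, 0.4, 0.8)]

def aic(groups, means, n_params):
    """AIC of a normal model with fixed group means and a common ML variance."""
    resid = np.concatenate([g - m for g, m in zip(groups, means)])
    sigma2 = np.mean(resid ** 2)
    loglik = -0.5 * resid.size * (np.log(2 * np.pi * sigma2) + 1.0)
    return -2.0 * loglik + 2.0 * n_params

grand_mean = np.concatenate(groups).mean()
aic_equal = aic(groups, [grand_mean] * 3, n_params=2)           # H0: all means equal
aic_free = aic(groups, [g.mean() for g in groups], n_params=4)  # Ha: all means free
print(f"AIC equal means: {aic_equal:.1f}, AIC free means: {aic_free:.1f}")
```

Because the true means differ, the unconstrained model attains the lower AIC despite its larger penalty; with (order-restricted) confirmatory hypotheses in the candidate set, the same comparison logic applies hypothesis by hypothesis.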


Bayesian Evaluation of Informative Hypotheses | 2008

An Evaluation of Bayesian Inequality Constrained Analysis of Variance

Herbert Hoijtink; Rafaele J. C. Huntjens; Albert Reijntjes; Rebecca M. Kuiper; Paul A. Boelen

In Chapters 2, 3, and 4 inequality constrained analysis of variance was introduced and illustrated. This chapter contains an evaluation of inequality constrained analysis of variance. Section 5.2 contains an evaluation from the perspective of psychologists on the use of inequality constrained analysis of variance. The questions raised will be discussed in Sections 5.3 and 5.4. Among other things, the interpretation of posterior model probabilities and the sensitivity of Bayesian model selection with respect to the choice of the prior distribution will be discussed.
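In the encompassing-prior framework used in this line of work, the Bayes factor of an inequality-constrained model against the unconstrained (encompassing) model equals the ratio of the posterior and prior probabilities that the constraint holds under the encompassing model. A minimal Monte Carlo sketch, with hypothetical group means and standard errors:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical summary data for three groups: observed means and standard errors.
ybar = np.array([0.1, 0.5, 0.9])
se = np.array([0.15, 0.15, 0.15])

def satisfies(draws):
    """Indicator for the inequality constraint mu1 < mu2 < mu3."""
    return (draws[:, 0] < draws[:, 1]) & (draws[:, 1] < draws[:, 2])

n_draws = 100_000
# Posterior of the means under a vague prior is approximately N(ybar, se^2).
posterior = rng.normal(ybar, se, size=(n_draws, 3))
# Draws from one common vague encompassing prior for all three means.
prior = rng.normal(0.0, 10.0, size=(n_draws, 3))

fit = satisfies(posterior).mean()     # posterior mass agreeing with the constraint
complexity = satisfies(prior).mean()  # prior mass agreeing (about 1/6 by symmetry)
bayes_factor = fit / complexity       # constrained vs. encompassing model
print(f"fit = {fit:.3f}, complexity = {complexity:.3f}, BF = {bayes_factor:.1f}")
```

The "complexity" term makes the sensitivity to the prior concrete: under an exchangeable vague prior, one specific ordering of three means has prior mass near 1/6, so a constrained model is rewarded only insofar as the data concentrate posterior mass inside that region.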


Statistics in Biopharmaceutical Research | 2014

Identification of the Minimum Effective Dose for Normally Distributed Endpoints Using a Model Selection Approach

Rebecca M. Kuiper; Daniel Gerhard; Ludwig A. Hothorn

When identifying the minimum effective dose (MED) or the lowest observed adverse event level (LOAEL), researchers usually employ multiple comparison procedures (MCPs), most often the Dunnett-type difference-to-placebo and ratio-to-control tests. In this article, we instead use a model selection criterion, namely the generalized order-restricted information criterion (GORIC). The GORIC evaluates a set of hypotheses about the response means directly and simultaneously, where each hypothesis posits a different dose as the MED or LOAEL. It accommodates different patterns of response increasing with dose without pooling response means, as MCPs with order restrictions do. The GORIC selects the best hypothesis in the set, which identifies the MED and, depending on the set, a specific pattern of response means. We show by simulation that, beyond its theoretical advantages, the GORIC has practical advantages in identifying the MED or LOAEL. Supplementary materials for this article are available online.
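The core idea of evaluating a set of MED hypotheses directly can be illustrated with a deliberately simplified sketch (hypothetical data; plain AIC with equality constraints only, whereas the actual GORIC uses order-restricted estimates and a chi-bar-square penalty): each hypothesis states that doses below the candidate MED are indistinguishable from placebo.

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical dose-finding data: placebo plus four doses; the true MED is dose 2.
true_means = [0.0, 0.0, 0.8, 1.0, 1.2]  # dose 1 is inactive
data = [rng.normal(m, 1.0, size=40) for m in true_means]

def aic_med(groups, med):
    """AIC of the hypothesis 'dose `med` is the MED': placebo and all doses
    below `med` share one mean; dose `med` and higher each get their own."""
    pooled = np.concatenate(groups[:med])
    means = [pooled.mean()] * med + [g.mean() for g in groups[med:]]
    resid = np.concatenate([g - m for g, m in zip(groups, means)])
    sigma2 = np.mean(resid ** 2)
    loglik = -0.5 * resid.size * (np.log(2 * np.pi * sigma2) + 1.0)
    n_params = 1 + (len(groups) - med) + 1  # pooled mean + free means + variance
    return -2.0 * loglik + 2.0 * n_params

aics = {med: aic_med(data, med) for med in (1, 2, 3, 4)}
best = min(aics, key=aics.get)
print(f"selected MED: dose {best}")
```

Pooling a truly effective dose with placebo inflates the residual variance far more than the two-point parameter saving is worth, so hypotheses that place the MED too high are penalized, which mirrors how the GORIC lets the data pick the best member of the hypothesis set.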


Sociological Methods & Research | 2013

Combining Statistical Evidence from Several Studies: A Method Using Bayesian Updating and an Example from Research on Trust Problems in Social and Economic Exchange

Rebecca M. Kuiper; Vincent Buskens; Werner Raub; Herbert Hoijtink

The effect of an independent variable on a dependent variable is often evaluated with hypothesis testing. Sometimes, multiple studies are available that test the same hypothesis. In such studies, the dependent variable and the main predictors might differ, while they do measure the same theoretical concepts. In this article, we present a Bayesian updating method that can be used to quantify the joint evidence in multiple studies regarding the effect of one variable of interest. We apply our method to four studies on how trust in social and economic exchange depends on experience from previous exchange with the same partner. In addition, we examine five hypothetical situations in which the results from the separate studies are less clear-cut than in our trust example.
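The updating step itself is simple odds arithmetic: each study contributes a Bayes factor, and the posterior odds after one study serve as the prior odds for the next. A minimal sketch with made-up per-study Bayes factors (the numbers are illustrative, not from the four trust studies):

```python
# Hypothetical per-study Bayes factors of H1 (positive effect of past
# experience on trust) versus H0 (no effect) -- the values are made up.
study_bfs = [3.2, 1.8, 5.0, 2.4]

prior_h1 = 0.5            # equal prior probabilities for H0 and H1
posterior_h1 = prior_h1
for bf in study_bfs:
    odds = posterior_h1 / (1 - posterior_h1) * bf  # update the odds per study
    posterior_h1 = odds / (1 + odds)               # back to a probability
    print(f"after BF = {bf:>4}: P(H1 | data so far) = {posterior_h1:.3f}")
```

Sequential updating is equivalent to multiplying the Bayes factors, which is what allows studies with different dependent variables and predictors, as long as they bear on the same theoretical hypothesis, to be combined into one quantified body of evidence.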


Structural Equation Modeling | 2018

Drawing Conclusions from Cross-Lagged Relationships: Re-Considering the Role of the Time-Interval

Rebecca M. Kuiper; Oisín Ryan

The cross-lagged panel model (CLPM), a discrete-time (DT) SEM model, is frequently used to gather evidence for (reciprocal) Granger-causal relationships when lacking an experimental design. However, it is well known that CLPMs can lead to different parameter estimates depending on the time-interval of observation. Consequently, this can lead to researchers drawing conflicting conclusions regarding the sign and/or dominance of relationships. Multiple authors have suggested the use of continuous-time models to address this issue. In this article, we demonstrate the exact circumstances under which such conflicting conclusions occur. Specifically, we show that such conflicts are only avoided in general in the case of bivariate, stable, nonoscillating, first-order systems, when comparing models with uniform time-intervals between observations. In addition, we provide a range of tools, proofs, and guidelines regarding the comparison of discrete- and continuous-time parameter estimates.
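The interval dependence is easy to demonstrate numerically: for a continuous-time first-order system with drift matrix A, the discrete-time lagged-effects matrix implied by time-interval dt is the matrix exponential expm(A * dt). With a hypothetical oscillating drift matrix (chosen here for illustration, not taken from the article), the sign of a cross-lagged parameter flips as the interval grows:

```python
import numpy as np
from scipy.linalg import expm

# A hypothetical stable, oscillating continuous-time drift matrix
# (eigenvalues -1 +/- 2i), chosen to make the interval dependence visible.
A = np.array([[-1.0,  2.0],
              [-2.0, -1.0]])

# Discrete-time lagged-effects matrix implied by interval dt: Phi(dt) = expm(A * dt).
cross_lagged = {dt: expm(A * dt)[0, 1] for dt in (0.5, 1.0, 2.0)}
for dt, c in cross_lagged.items():
    print(f"interval {dt}: cross-lagged effect of x2 on x1 = {c:+.3f}")
```

For this A the off-diagonal element is exp(-dt) * sin(2 * dt) in closed form, so two researchers observing the same process at intervals 1 and 2 would report cross-lagged effects of opposite sign, precisely the kind of conflicting conclusion the article characterizes.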


Archive | 2018

A Continuous-Time Approach to Intensive Longitudinal Data: What, Why, and How?

Oisín Ryan; Rebecca M. Kuiper; Ellen L. Hamaker

The aim of this chapter is to (a) provide a broad didactical treatment of the first-order stochastic differential equation model—also known as the continuous-time (CT) first-order vector autoregressive (VAR(1)) model—and (b) argue for and illustrate the potential of this model for the study of psychological processes using intensive longitudinal data. We begin by describing what the CT-VAR(1) model is and how it relates to the more commonly used discrete-time VAR(1) model. Assuming no prior knowledge on the part of the reader, we introduce important concepts for the analysis of dynamic systems, such as stability and fixed points. In addition we examine why applied researchers should take a continuous-time approach to psychological phenomena, focusing on both the practical and conceptual benefits of this approach. Finally, we elucidate how researchers can interpret CT models, describing the direct interpretation of CT model parameters as well as tools such as impulse response functions, vector fields, and lagged parameter plots. To illustrate this methodology, we reanalyze a single-subject experience-sampling dataset with the R package ctsem; for didactical purposes, R code for this analysis is included, and the dataset itself is publicly available.
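Two of the concepts the chapter introduces, stability and fixed points, together with the impulse response function, can be sketched for the deterministic part of a CT-VAR(1) model (the drift matrix and fixed point below are hypothetical values for illustration; the chapter itself uses the R package ctsem):

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical drift matrix and fixed point of a bivariate CT-VAR(1) process,
# dx/dt = A (x(t) - mu) + noise; only the deterministic part is shown.
A = np.array([[-0.5,  0.2],
              [ 0.3, -0.4]])
mu = np.array([2.0, 1.0])  # fixed point of the deterministic system

# Stability: the process returns to mu iff every eigenvalue of A
# has a negative real part.
eigvals = np.linalg.eigvals(A)
stable = bool(np.all(eigvals.real < 0))
print(f"eigenvalues: {np.round(eigvals, 3)}, stable: {stable}")

# Impulse response: expected trajectory after perturbing the first variable.
x0 = mu + np.array([1.0, 0.0])  # a unit impulse on the first variable
for t in (0.0, 1.0, 5.0, 20.0):
    xt = mu + expm(A * t) @ (x0 - mu)
    print(f"t = {t:>4}: E[x(t)] = {np.round(xt, 3)}")
```

Because both eigenvalues have negative real parts, the impulse decays and the trajectory returns to the fixed point; an eigenvalue with a positive real part would instead send the system away from mu, which is the diagnostic the chapter's stability discussion turns on.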


British Journal of Mathematical and Statistical Psychology | 2015

Properties of hypothesis testing techniques and (Bayesian) model selection for exploration-based and theory-based (order-restricted) hypotheses.

Rebecca M. Kuiper; Tim Nederhoff; Irene Klugkist

In this paper, the performance of six types of techniques for comparisons of means is examined. These six emerge from the distinction between the method employed (hypothesis testing, model selection using information criteria, or Bayesian model selection) and the set of hypotheses that is investigated (a classical, exploration-based set of hypotheses containing equality constraints on the means, or a theory-based limited set of hypotheses with equality and/or order restrictions). A simulation study is conducted to examine the performance of these techniques. We demonstrate that, if one has specific, a priori specified hypotheses, confirmation (i.e., investigating theory-based hypotheses) has advantages over exploration (i.e., examining all possible equality-constrained hypotheses). Furthermore, examining reasonable order-restricted hypotheses has more power to detect the true effect/non-null hypothesis than evaluating only equality restrictions. Additionally, when investigating more than one theory-based hypothesis, model selection is preferred over hypothesis testing. Because of the first two results, we further examine the techniques that are able to evaluate order restrictions in a confirmatory fashion by examining their performance when the homogeneity of variance assumption is violated. Results show that the techniques are robust to heterogeneity when the sample sizes are equal. When the sample sizes are unequal, the performance is affected by heterogeneity. The size and direction of the deviations from the baseline, where there is no heterogeneity, depend on the effect size (of the means) and on the trend in the group variances with respect to the ordering of the group sizes. Importantly, the deviations are less pronounced when the group variances and sizes exhibit the same trend (e.g., are both increasing with group number).


JGZ Tijdschrift voor jeugdgezondheidszorg | 2013

Effects of the 'Levensvaardigheden' (Life Skills) curriculum on the health and behaviour of secondary school students

P Kocken; F. Pannebakker; M. Fekkes; Rebecca M. Kuiper; C Gravesteijn; R Diekstra

Methods: Based on an analysis of bottlenecks, key questions were formulated. To answer these questions, international guidelines were studied, supplemented with a systematic literature review. The methodological quality of the studies found was assessed using GRADE. From this, recommendations were formulated, which were subsequently tested in practice. The recommendations on the diagnosis of cow's milk allergy in the JGZ guideline are part of the guideline Diagnostiek van koemelkallergie bij kinderen in Nederland (Diagnosis of Cow's Milk Allergy in Children in the Netherlands, 2012) of the Nederlandse Vereniging voor Kindergeneeskunde (Dutch Paediatric Society).


Biometrika | 2011

An Akaike-type information criterion for model selection under inequality constraints

Rebecca M. Kuiper; Herbert Hoijtink; Mervyn J. Silvapulle

Collaboration


Dive into Rebecca M. Kuiper's collaborations.

Top Co-Authors

Daniel Gerhard

University of Canterbury
