
Publication


Featured research published by Kristynn J. Sullivan.


Behavior Research Methods | 2011

Characteristics of Single-Case Designs Used to Assess Intervention Effects in 2008.

William R. Shadish; Kristynn J. Sullivan

This article reports the results of a study that located, digitized, and coded all 809 single-case designs appearing in 113 studies in the year 2008 in 21 journals in a variety of fields in psychology and education. Coded variables included the specific kind of design, the number of cases per study, the number of outcomes, data points, and phases per case, and the autocorrelation for each case. Although studies of the effects of interventions are a minority in these journals, within that category, single-case designs are used more frequently than randomized or nonrandomized experiments. The modal study uses a multiple-baseline design with 20 data points for each of three or four cases, where the aim of the intervention is to increase the frequency of a desired behavior, although these characteristics vary widely across studies. The average autocorrelation is close to, but significantly different from, zero, and autocorrelations are significantly heterogeneous across studies. The results have implications for the contributions of single-case designs to evidence-based practice and suggest a number of future research directions.
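The autocorrelation coding mentioned in this abstract can be illustrated with a short sketch. This is not the study's own code; a minimal lag-1 moment estimator in Python, with the series assumed to be a plain numeric sequence:

```python
import numpy as np

def lag1_autocorrelation(y):
    """Lag-1 autocorrelation of a short time series, using the
    conventional (biased) moment estimator: the sum of products of
    adjacent deviations over the sum of squared deviations."""
    y = np.asarray(y, dtype=float)
    d = y - y.mean()
    return float(np.sum(d[:-1] * d[1:]) / np.sum(d * d))
```

On a trending series the estimate is strongly positive; on exchangeable noise it hovers near zero. With the small numbers of time points typical of single-case designs, this estimator also carries substantial sampling error, which is the issue the Bayesian work below addresses.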


Journal of School Psychology | 2014

Using generalized additive (mixed) models to analyze single case designs.

William R. Shadish; Alain F. Zuur; Kristynn J. Sullivan

This article shows how to apply generalized additive models and generalized additive mixed models to single-case design data. These models excel at detecting the functional form between two variables (often called trend), that is, whether trend exists, and if it does, what its shape is (e.g., linear or nonlinear). In many respects, however, these models are also an ideal vehicle for analyzing single-case designs because they can model the level, trend, variability, overlap, immediacy of effect, and phase consistency that single-case design researchers examine when interpreting a functional relation. We show how these models can be implemented in a wide variety of ways to test whether treatment is effective, whether cases differ from each other, whether treatment effects vary over cases, and whether trend varies over cases. We illustrate diagnostic statistics and graphs, and we discuss overdispersion of data in detail, with examples of quasibinomial models for overdispersed data, including how to compute dispersion and quasi-AIC fit indices in generalized additive models. We show how generalized additive mixed models can be used to estimate autoregressive models and random effects, and discuss the limitations of the mixed models compared to generalized additive models. We provide extensive annotated syntax for doing all these analyses in the free computer program R.
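The dispersion computation described in this abstract can be sketched generically. The article's own annotated syntax is in R; this Python fragment, with the hypothetical names `pearson_dispersion` and `edf` (effective degrees of freedom of the fitted model), shows only the Pearson-residual arithmetic for binomial-type counts:

```python
import numpy as np

def pearson_dispersion(successes, trials, fitted_prob, edf):
    """Pearson dispersion estimate for binomial-type data:
    the sum of squared Pearson residuals divided by the residual
    degrees of freedom. A value substantially above 1 suggests
    overdispersion, i.e. a quasibinomial model may be preferable."""
    y = np.asarray(successes, dtype=float)
    m = np.asarray(trials, dtype=float)
    mu = np.asarray(fitted_prob, dtype=float)
    # Pearson residuals: (observed - expected) / sqrt(binomial variance)
    resid = (y - m * mu) / np.sqrt(m * mu * (1.0 - mu))
    return float(np.sum(resid ** 2) / (len(y) - edf))
```

The fitted probabilities are assumed to come from whatever model was fit; a quasi-AIC then divides the model's log-likelihood contribution by this dispersion estimate before applying the usual complexity penalty.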


Neuropsychological Rehabilitation | 2014

A d-statistic for single-case designs that is equivalent to the usual between-groups d-statistic

William R. Shadish; Larry V. Hedges; James E. Pustejovsky; Jonathan G. Boyajian; Kristynn J. Sullivan; Alma Andrade; Jeannette L. Barrientos

We describe a standardised mean difference statistic (d) for single-case designs that is equivalent to the usual d in between-groups experiments. We show how it can be used to summarise treatment effects over cases within a study, to do power analyses in planning new studies and grant proposals, and to meta-analyse effects across studies of the same question. We discuss limitations of this d-statistic, and possible remedies to them. Even so, this d-statistic is better founded statistically than other effect size measures for single-case design, and unlike many general linear model approaches such as multilevel modelling or generalised additive models, it produces a standardised effect size that can be integrated over studies with different outcome measures. SPSS macros for both effect size computation and power analysis are available.
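The published d involves corrections for autocorrelation and between-case variance, so the following is deliberately naive: a bare phase-contrast standardized mean difference in Python, illustrating only the "usual between-groups d" that the statistic is calibrated against, not the authors' estimator:

```python
import numpy as np

def naive_phase_d(baseline, treatment):
    """Naive standardized mean difference between treatment and
    baseline phases: (mean_B - mean_A) / pooled within-phase SD.
    Ignores autocorrelation and between-case variance, so it is
    an illustration of the between-groups d only."""
    a = np.asarray(baseline, dtype=float)
    b = np.asarray(treatment, dtype=float)
    pooled_var = (((len(a) - 1) * a.var(ddof=1)
                   + (len(b) - 1) * b.var(ddof=1))
                  / (len(a) + len(b) - 2))
    return float((b.mean() - a.mean()) / np.sqrt(pooled_var))
```

Because a case's observations are serially dependent, this naive version generally mis-states the sampling variance; the corrections in the published d are what make the effect size comparable across studies with different designs.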


Psychological Methods | 2015

An introduction to modeling longitudinal data with generalized additive models: applications to single-case designs.

Kristynn J. Sullivan; William R. Shadish; Peter M. Steiner

Single-case designs (SCDs) are short time series that assess intervention effects by measuring units repeatedly over time in both the presence and absence of treatment. This article introduces a statistical technique for analyzing SCD data that has not been much used in psychological and educational research: generalized additive models (GAMs). In parametric regression, the researcher must choose a functional form to impose on the data, for example, that trend over time is linear. GAMs reverse this process by letting the data inform the choice of functional form. In this article we review the problem that trend poses in SCDs, discuss how current SCD analytic methods approach trend, describe GAMs as a possible solution, suggest a GAM model testing procedure for examining the presence of trend in SCDs, present a small simulation to show the statistical properties of GAMs, and illustrate the procedure on 3 examples of different lengths. Results suggest that GAMs may be very useful both as a form of sensitivity analysis for checking the plausibility of assumptions about trend and as a primary data analysis strategy for testing treatment effects. We conclude with a discussion of some problems with GAMs and some future directions for research on the application of GAMs to SCDs.
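A GAM lets the data choose the smoothness of trend via a penalized spline basis; that data-driven spirit can be imitated crudely in Python by selecting a polynomial degree with AIC. This is an illustration of the model-selection idea only, not the article's GAM procedure, and the function name is an assumption:

```python
import numpy as np

def choose_trend_degree(y, max_degree=4):
    """Let the data pick a polynomial degree of trend by AIC,
    a crude stand-in for a GAM's data-driven smooth (where a
    spline basis and a smoothness penalty play this role)."""
    y = np.asarray(y, dtype=float)
    t = np.arange(len(y), dtype=float)
    best_deg, best_aic = 0, np.inf
    for deg in range(max_degree + 1):
        coef = np.polyfit(t, y, deg)
        # Guard against log(0) when a polynomial fits exactly.
        rss = max(float(np.sum((y - np.polyval(coef, t)) ** 2)), 1e-12)
        k = deg + 2  # polynomial coefficients plus error variance
        aic = len(y) * np.log(rss / len(y)) + 2 * k
        if aic < best_aic:
            best_deg, best_aic = deg, aic
    return best_deg
```

A flat or linear series should keep the selected degree low, while a clearly curved series should push it above 1, mirroring the article's point that the presence and shape of trend can be tested rather than assumed.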


Behavior Research Methods | 2013

Bayesian estimates of autocorrelations in single-case designs

William R. Shadish; David Rindskopf; Larry V. Hedges; Kristynn J. Sullivan

Researchers in the single-case design tradition have debated the size and importance of the observed autocorrelations in those designs. All of the past estimates of the autocorrelation in that literature have taken the observed autocorrelation estimates as the data to be used in the debate. However, estimates of the autocorrelation are subject to great sampling error when the design has a small number of time points, as is typically the situation in single-case designs. Thus, a given observed autocorrelation may greatly over- or underestimate the corresponding population parameter. This article presents Bayesian estimates of the autocorrelation that greatly reduce the role of sampling error, as compared to past estimators. Simpler empirical Bayes estimates are presented first, in order to illustrate the fundamental notions of autocorrelation sampling error and shrinkage, followed by fully Bayesian estimates, and the difference between the two is explained. Scripts to do the analyses are available as supplemental materials. The analyses are illustrated using two examples from the single-case design literature. Bayesian estimation warrants wider use, not only in debates about the size of autocorrelations, but also in statistical methods that require an independent estimate of the autocorrelation to analyze the data.
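The shrinkage idea behind these empirical Bayes estimates can be sketched as follows. This is a generic method-of-moments version in Python, not the article's estimator; `v` is assumed to hold known sampling variances for each observed autocorrelation:

```python
import numpy as np

def eb_shrink(r, v):
    """Empirical Bayes shrinkage of observed autocorrelations r with
    sampling variances v: each estimate is pulled toward the
    precision-weighted mean, more strongly when its own sampling
    variance is large (i.e., when the series is short). The
    between-case variance tau^2 is estimated by method of moments."""
    r = np.asarray(r, dtype=float)
    v = np.asarray(v, dtype=float)
    w = 1.0 / v
    r_bar = np.sum(w * r) / np.sum(w)        # precision-weighted mean
    q = np.sum(w * (r - r_bar) ** 2)         # heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(r) - 1)) / c)  # method-of-moments tau^2
    shrink = tau2 / (tau2 + v)               # 0 means full pooling
    return shrink * r + (1.0 - shrink) * r_bar
```

When the observed spread is no larger than sampling error alone would produce, tau^2 is estimated as zero and the estimates pool completely, which is the sense in which shrinkage "greatly reduces the role of sampling error" in the debate over autocorrelation size.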


Multivariate Behavioral Research | 2013

Abstract: Modeling Single-Case Designs With Generalized Additive Models

Kristynn J. Sullivan; William R. Shadish

Single case designs (SCDs) are short time series that assess intervention effects by measuring units repeatedly over time both in the presence and absence of treatment. Interest in the statistical analysis and meta-analysis of these designs has been growing in recent years (Shadish, Rindskopf, & Hedges, 2008). When statistically analyzing SCD data, one must take into account both (a) trend, or systematic, nonzero change over time that is not dependent on the implementation of a treatment, and (b) autocorrelation, or the serial dependence of observations nested within the same person. Various methods have been proposed for the statistical analysis of SCDs, but all of them have flaws when it comes to dealing with trend, autocorrelation, or both. This article proposes modeling SCD data with generalized additive models (GAMs), a semiparametric method that estimates the functional form of trend directly from the data, arguably capturing the true functional form better than linear regression methods in which the researcher must decide which functional form to impose on the data. In addition, autocorrelation estimates can be inflated when trend is not modeled properly, so this article also shows how to calculate the autocorrelation from residuals extracted from GAM models, which tends to result in shrunken estimates. Four different GAM models are implemented on each SCD data set to assess potential nonlinearities, using the mgcv package in R (Wood, 2010), and are compared with the generalized linear model. To illustrate the implementation of these models, example time series data sets were selected from a larger data set of SCDs (Shadish & Sullivan, 2011).

For each of the example cases, the model that best fit the data had either the trend or the trend-by-treatment interaction term smoothed, indicating at least slight nonlinearity for those parameters. Though this is only a small subset of the available data, these findings cast doubt on the typical assumption of SCD researchers that trend, if it exists, is linear. In addition, extracting the residuals from the best-fitting model resulted in a shrunken autocorrelation in most of the examples.

References

Shadish, W. R., Rindskopf, D. M., & Hedges, L. V. (2008). The state of the science in the meta-analysis of single-case experimental designs. Evidence-Based Communication Assessment and Intervention, 3, 188–196.

Shadish, W. R., & Sullivan, K. J. (2011). Characteristics of single case designs used to assess intervention effects in 2008. Behavior Research Methods, 43, 971–980.

Wood, S. (2010). mgcv: GAMs with GCV/AIC/REML smoothness estimation and GAMMs by PQL. Retrieved from http://cran.r-project.org/package=mgcv

Correspondence concerning this abstract should be addressed to Kristynn J. Sullivan, 5200 N. Lake Road, Merced, CA 95343. E-mail: [email protected]
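The residual-based autocorrelation calculation described in this abstract can be imitated in Python with an ordinary polynomial standing in for the GAM smooth; the function name and the default degree are illustrative assumptions:

```python
import numpy as np

def residual_autocorrelation(y, degree=2):
    """Lag-1 autocorrelation computed from residuals after removing a
    polynomial trend (a stand-in for a GAM smooth). When the trend is
    modeled first, the residual autocorrelation is typically smaller
    than the one computed on the raw series."""
    y = np.asarray(y, dtype=float)
    t = np.arange(len(y), dtype=float)
    resid = y - np.polyval(np.polyfit(t, y, degree), t)
    d = resid - resid.mean()
    return float(np.sum(d[:-1] * d[1:]) / np.sum(d * d))
```

Applied to a trending series, the raw lag-1 autocorrelation is inflated by the trend itself, while the detrended version reflects only the serial dependence of the noise, which is the shrinkage effect the abstract reports.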


Journal of Orthopaedic Surgery and Research | 2016

Demographic factors in hip fracture incidence and mortality rates in California, 2000–2011

Kristynn J. Sullivan; Lisa Husak; Maria Altebarmakian; W. Timothy Brox


Archive | 2012

Theories of Causation in Psychological Science

William R. Shadish; Kristynn J. Sullivan


Archive | 2014

Analyzing Single-Case Designs: d, G, Hierarchical Models, Bayesian Estimators, Generalized Additive Models, and the Hopes and Fears of Researchers About Analyses

William R. Shadish; Larry V. Hedges; James E. Pustejovsky; David Rindskopf; Jonathan G. Boyajian; Kristynn J. Sullivan


Behavior Research Methods | 2014

Erratum to: Characteristics of single-case designs used to assess intervention effects in 2008

William R. Shadish; Kristynn J. Sullivan

Collaboration


Dive into Kristynn J. Sullivan's collaborations.

Top Co-Authors

David Rindskopf
City University of New York

James E. Pustejovsky
University of Texas at Austin

Alma Andrade
University of California

Lisa Husak
University of California

Peter M. Steiner
University of Wisconsin-Madison