David Rindskopf
City University of New York
Publications
Featured research published by David Rindskopf.
Multivariate Behavioral Research | 1988
David Rindskopf; Tedd Rose
This paper shows how confirmatory factor analysis can be used to test second- (and higher-) order factor models in the areas of the structure of abilities, allometry, and the separation of specific and error variance estimates. In the latter area, an idea of Jöreskog's is extended to include a new conceptualization of the estimation of validity. Second-order models are placed in a hierarchy of factor analysis models, which shows how the fit of various models can be compared. The concept of discriminability is introduced to describe a situation in which two models may both be identified, and yet the goodness-of-fit for both will be the same. This problem can usually be avoided by careful design of a study. Several examples are discussed.
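As a minimal numerical sketch of the structure such models impose (all loading values below are invented for illustration, not taken from the paper), the covariance matrix implied by a second-order factor model can be built directly: six observed variables load on two first-order factors, which in turn load on one second-order factor.

```python
import numpy as np

# Hypothetical second-order model; all numeric values are illustrative.
Lambda1 = np.array([[0.8, 0.0],
                    [0.7, 0.0],
                    [0.6, 0.0],
                    [0.0, 0.8],
                    [0.0, 0.7],
                    [0.0, 0.6]])              # first-order loadings
Lambda2 = np.array([[0.9],
                    [0.8]])                   # second-order loadings
Psi2 = np.diag([1 - 0.9**2, 1 - 0.8**2])      # first-order disturbance variances
Theta = np.diag(1 - (Lambda1**2).sum(axis=1)) # unique (error) variances

# Covariance of the first-order factors implied by the second-order structure
# (second-order factor variance fixed at 1 for identification):
factor_cov = Lambda2 @ Lambda2.T + Psi2
Sigma = Lambda1 @ factor_cov @ Lambda1.T + Theta  # implied correlation matrix
```

With unit factor variances and error variances chosen as one minus the communality, the implied matrix is a proper correlation matrix, and the cross-block correlations are products of first-order loadings with the second-order factor covariance.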
Sociological Methods & Research | 1984
David Rindskopf
Heywood cases represent the most common form of a series of related problems in confirmatory factor analysis and structural equation modeling. Other problems include factor loadings and factor correlations outside the usual range, large variances of parameter estimates, and high correlations between parameter estimates. The concept of empirical underidentification is used here to show how these problems can arise, and under what conditions they can be controlled. The discussion is centered around examples showing how small factor loadings, factor correlations near zero, and factor correlations near one can lead to empirical underidentification.
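The small-loadings case can be demonstrated numerically (model values below are invented for illustration): when both loadings on one factor are near zero, very different factor correlations imply nearly identical covariance matrices, so the data carry almost no information about that correlation.

```python
import numpy as np

def implied_cov(loadings, phi):
    """Implied covariance of a two-factor model with unit factor variances.

    Items 1-2 load on factor 1, items 3-4 on factor 2; phi is the
    factor correlation. All values are illustrative assumptions.
    """
    Lambda = np.array([[loadings[0], 0.0], [loadings[1], 0.0],
                       [0.0, loadings[2]], [0.0, loadings[3]]])
    Phi = np.array([[1.0, phi], [phi, 1.0]])
    Theta = np.diag(1 - (Lambda**2).sum(axis=1))
    return Lambda @ Phi @ Lambda.T + Theta

# Loadings on factor 2 near zero: phi = 0.1 and phi = 0.9 are nearly
# indistinguishable (empirical underidentification).
gap_small = np.abs(implied_cov([0.8, 0.7, 0.05, 0.04], 0.1)
                   - implied_cov([0.8, 0.7, 0.05, 0.04], 0.9)).max()

# Sizeable loadings on both factors: the same change in phi moves the
# implied covariances substantially, so phi is well identified.
gap_large = np.abs(implied_cov([0.8, 0.7, 0.75, 0.70], 0.1)
                   - implied_cov([0.8, 0.7, 0.75, 0.70], 0.9)).max()
```

A fitting algorithm faced with the first configuration sees an almost flat likelihood surface in the direction of the factor correlation, which is exactly the mechanism behind the inflated variances and out-of-range estimates the paper describes.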
Remedial and Special Education | 2013
Thomas R. Kratochwill; John H. Hitchcock; Robert H. Horner; Joel R. Levin; Samuel L. Odom; David Rindskopf; William R. Shadish
In an effort to responsibly incorporate evidence based on single-case designs (SCDs) into the What Works Clearinghouse (WWC) evidence base, the WWC assembled a panel of individuals with expertise in quantitative methods and SCD methodology to draft SCD standards. In this article, the panel provides an overview of the SCD standards recommended by the panel (henceforth referred to as the Standards) and adopted in Version 1.0 of the WWC’s official pilot standards. The Standards are sequentially applied to research studies that incorporate SCDs. The design standards focus on the methodological soundness of SCDs, whereby reviewers assign the categories of Meets Standards, Meets Standards With Reservations, and Does Not Meet Standards to each study. Evidence criteria focus on the credibility of the reported evidence, whereby the outcome measures that meet the design standards (with or without reservations) are examined by reviewers trained in visual analysis and categorized as demonstrating Strong Evidence, Moderate Evidence, or No Evidence. An illustration of an actual research application of the Standards is provided. Issues that the panel did not address are presented as priorities for future consideration. Implications for research and the evidence-based practice movement in psychology and education are discussed. The WWC’s Version 1.0 SCD standards are currently being piloted in systematic reviews conducted by the WWC. This document reflects the initial standards recommended by the authors as well as the underlying rationale for those standards. It should be noted that the WWC may revise the Version 1.0 standards based on the results of the pilot; future versions of the WWC standards can be found at http://www.whatworks.ed.gov.
Psychological Methods | 1998
C. Keith Haddock; David Rindskopf; William R. Shadish
Many meta-analysts incorrectly use correlations or standardized mean difference statistics to compute effect sizes on dichotomous data. Odds ratios and their logarithms should almost always be preferred for such data. This article reviews the issues and shows how to use odds ratios in meta-analytic data, both alone and in combination with other effect size estimators. Examples illustrate procedures for estimating the weighted average of such effect sizes and methods for computing variance estimates, confidence intervals, and homogeneity tests. Descriptions of fixed- and random-effects models help determine whether effect sizes are functions of study characteristics, and a random-effects regression model, previously unused for odds ratio data, is described. Although all but the latter of these procedures are already widely known in areas such as medicine and epidemiology, the absence of their use in psychology suggests a need for this description.
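The standard inverse-variance machinery for log odds ratios can be sketched in a few lines (the 2x2 cell counts below are invented example data, not from the article): each study contributes a log odds ratio with variance equal to the sum of reciprocal cell counts, and the fixed-effect pooled estimate weights by inverse variance.

```python
import math

def log_odds_ratio(a, b, c, d):
    """Log odds ratio and its variance from a 2x2 table:
    a, b = events / non-events in group 1; c, d = in group 2."""
    lor = math.log((a * d) / (b * c))
    var = 1 / a + 1 / b + 1 / c + 1 / d   # Woolf's variance estimate
    return lor, var

# Three hypothetical studies (cell counts are illustrative assumptions)
studies = [(15, 85, 10, 90), (30, 70, 20, 80), (8, 42, 5, 45)]
lors, variances = zip(*(log_odds_ratio(*s) for s in studies))

# Fixed-effect pooled log odds ratio, 95% CI, and homogeneity (Q) statistic
weights = [1 / v for v in variances]
pooled = sum(w * l for w, l in zip(weights, lors)) / sum(weights)
se = (1 / sum(weights)) ** 0.5
ci = (pooled - 1.96 * se, pooled + 1.96 * se)
Q = sum(w * (l - pooled) ** 2 for w, l in zip(weights, lors))
```

Exponentiating `pooled` and the CI endpoints returns the result to the odds ratio scale; a Q statistic large relative to a chi-square with (number of studies - 1) degrees of freedom signals heterogeneity and motivates the random-effects extension.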
Psychometrika | 1984
David Rindskopf
The most widely used computer programs for analyzing structural equation models are the LISREL series of Jöreskog and Sörbom. The only types of constraints that may be imposed directly are fixing parameters at a constant value and constraining parameters to be equal. Rindskopf (1983) showed how these simple properties could be used to represent models with more complicated constraints, namely inequality constraints on unique variances. In this paper, two new concepts are introduced which enable a much wider variety of constraints to be made. The concepts, “phantom” and “imaginary” latent variables, allow fairly general equality and inequality constraints on factor loadings and structural model coefficients.
Evidence-based Communication Assessment and Intervention | 2008
William R. Shadish; David Rindskopf; Larry V. Hedges
The articles in the previous special issue of Evidence-Based Communication Assessment and Intervention provided an excellent review of the meta-analysis of single-case designs. This article weaves commentary about those articles into a larger narrative about two major lines of attack on this problem: the use of parametric approaches like regression and multilevel modeling, and the development of parametric and nonparametric effect-size estimators. On each of these two topics, we describe an agenda of research topics that need to be addressed; and we also introduce a new effect-size estimator that may prove to be comparable to the usual standardized mean difference statistics (d) widely used in between-groups analysis. The article ends with observations about ways in which developments in the meta-analysis of single-case designs may have far wider implications than previously appreciated. Source of funding: U.S. Department of Education, Institute of Education Science, Grant # H324U050001-06
Biological Psychiatry | 2008
Joel R. Sneed; David Rindskopf; David C. Steffens; K. Ranga Rama Krishnan; Steven P. Roose
BACKGROUND Vascular depression has been proposed as a unique diagnostic subtype in late life, yet no study has evaluated whether the specified clinical features associated with the illness are jointly indicative of an underlying diagnostic class. METHODS We applied latent class analysis to two independent clinical samples: the prospective cohort Neurocognitive Outcomes of Depression in the Elderly (NCODE) study and the 8-week, multicenter, double-blind, placebo-controlled Old-Old study. RESULTS A two-class model consisting of vascular and nonvascular depressed patients provided an excellent fit to the data in both studies, χ²(6) = 2.02, p = .90 in the NCODE study and χ²(6) = 7.02, p = .32 in the Old-Old study. Although all of the proposed features of vascular depression were useful in identifying the illness, deep white matter lesion burden emerged with perfect sensitivity (1.00) and near-perfect specificity (.95), making it the only indicator necessary to determine class membership. CONCLUSIONS These findings, replicated across two independent clinical samples, provide the first support for the internal validity of vascular depression as a subtype of late-life depression.
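The latent class machinery behind such an analysis can be sketched with a toy EM algorithm (all parameter values and data below are simulated illustrations, not the NCODE or Old-Old estimates): subjects' binary indicators are assumed to be independent within class, and EM alternates between posterior class probabilities and parameter updates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate binary clinical indicators under an assumed two-class structure
n, J = 400, 4
true_pi = 0.4                                   # prevalence of class 0
true_p = np.array([[0.90, 0.80, 0.85, 0.70],    # item-endorsement probs, class 0
                   [0.20, 0.30, 0.15, 0.25]])   # item-endorsement probs, class 1
z = (rng.random(n) >= true_pi).astype(int)      # true class labels (0 or 1)
Y = (rng.random((n, J)) < true_p[z]).astype(float)

# EM for a two-class latent class model
pi = 0.5
p = np.array([[0.6] * J, [0.4] * J])
for _ in range(200):
    # E-step: posterior probability of class 0 given the response pattern
    L = np.prod(p[:, None, :] ** Y * (1 - p[:, None, :]) ** (1 - Y), axis=2)
    post0 = pi * L[0] / (pi * L[0] + (1 - pi) * L[1])
    # M-step: update prevalence and item-endorsement probabilities
    pi = post0.mean()
    p[0] = post0 @ Y / post0.sum()
    p[1] = (1 - post0) @ Y / (1 - post0).sum()

# Modal assignment recovers the simulated classes when items separate well
accuracy = np.mean((post0 < 0.5) == z.astype(bool))
```

An indicator's sensitivity and specificity for class membership, as reported in the abstract for lesion burden, can then be read off by cross-tabulating each item against the modal class assignment.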
Psychometrika | 1983
David Rindskopf
Current computer programs for analyzing linear structural models will apparently handle only two types of constraints: fixed parameters, and equality of parameters. An important constraint not handled is inequality; this is particularly crucial for preventing negative variance estimates. In this paper, a method is described for imposing several kinds of inequality constraints in models, without the necessity for having computer programs which explicitly allow such constraints. The examples discussed include the prevention of Heywood cases, constraining parameters to be greater than a specified value, and imposing ordered inequalities among parameters.
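The core trick can be sketched outside any SEM program (the parameter names t1, t2, t3 and the bound c are illustrative assumptions): each constrained quantity is written as a function of a squared free parameter, so an unconstrained optimizer automatically respects the inequality.

```python
import numpy as np

def reparameterized(t1, t2, t3, c=0.5):
    """Encode inequality constraints through squared free parameters
    (illustrative sketch of the reparameterization idea)."""
    variance = t1 ** 2             # variance >= 0: no Heywood case possible
    param = c + t2 ** 2            # parameter >= a specified value c
    a, b = param, param + t3 ** 2  # ordered inequality: a <= b
    return variance, param, a, b

# Whatever real values an unconstrained optimizer tries, the bounds hold:
for t in np.random.default_rng(1).normal(size=(100, 3)):
    variance, param, a, b = reparameterized(*t)
    assert variance >= 0 and param >= 0.5 and a <= b
```

In the paper's SEM setting, the squaring is achieved structurally, by routing a parameter through an extra latent variable so the program only ever sees products of free parameters, but the arithmetic is the same.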
Psychological Methods | 2013
William R. Shadish; Eden Nagler Kyse; David Rindskopf
Several authors have proposed the use of multilevel models to analyze data from single-case designs. This article extends that work in 2 ways. First, examples are given of how to estimate these models when the single-case designs have features that have not been considered by past authors. These include the use of polynomial coefficients to model nonlinear change, the modeling of counts (Poisson distributed) or proportions (binomially distributed) as outcomes, the use of 2 different ways of modeling treatment effects in ABAB designs, and applications of these models to alternating treatment and changing criterion designs. Second, issues that arise when multilevel models are used for the analysis of single-case designs are discussed; such issues can form part of an agenda for future research on this topic. These include statistical power and assumptions, applications to more complex single-case designs, the role of exploratory data analyses, extensions to other kinds of outcome variables and sampling distributions, and other statistical programs that can be used to do such analyses.
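One piece of this, modeling counts as outcomes, can be sketched for a single hypothetical AB-phase case (the session counts below are simulated; a full multilevel analysis would add random effects across cases): a Poisson regression with a log link and a phase indicator, fit by Newton-Raphson.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated AB single-case data: 20 baseline + 20 treatment sessions,
# with the treatment assumed to halve the behavior rate (illustrative values).
phase = np.repeat([0.0, 1.0], 20)
y = rng.poisson(np.exp(np.log(8.0) + np.log(0.5) * phase))

# Poisson regression with log link: log(mu) = b0 + b1 * phase
X = np.column_stack([np.ones_like(phase), phase])
beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)
    score = X.T @ (y - mu)                # gradient of the log-likelihood
    info = X.T @ (X * mu[:, None])        # Fisher information
    beta += np.linalg.solve(info, score)  # Newton-Raphson step

rate_ratio = np.exp(beta[1])  # multiplicative treatment effect on the rate
```

For this saturated two-parameter design the fitted rates equal the phase means, so the exponentiated coefficients have a direct interpretation: `exp(b0)` is the baseline rate and `rate_ratio` is the treatment/baseline ratio.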
Behavior Research Methods | 2009
William R. Shadish; Isabel C. C. Brasil; David A. Illingworth; Kristen D. White; Rodolfo Galindo; Eden D. Nagler; David Rindskopf
Certain research tasks require extracting data points from graphs and charts. Using 91 graphs that presented results from single-case designs, we investigated whether pairs of coders extract similar data from the same graphs (reliability), and whether the extracted data match numerical descriptions of the graph that the original author may have presented in tables or text (validity). Coders extracted data using the UnGraph computer program. Extraction proved highly reliable over several different kinds of analyses. Coders nearly always extracted identical numbers of data points, and the values they assigned to those data points were nearly identical. Extraction also proved highly valid, with the means of extracted data correlating nearly perfectly with means reported in tables or text and with very few discrepancies in any single case. These results suggest that researchers can use extracted data with a high degree of confidence that they are nearly identical to the original data.
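The validity check described above amounts to correlating extracted values with the values the original authors reported; a minimal sketch with invented numbers (not the study's 91 graphs):

```python
import numpy as np

# Hypothetical reported vs. extracted means for five graphs (invented values)
reported = np.array([12.4, 3.1, 7.8, 22.0, 5.5])
extracted = np.array([12.3, 3.2, 7.8, 21.8, 5.6])

validity_r = np.corrcoef(reported, extracted)[0, 1]    # validity correlation
max_discrepancy = np.abs(reported - extracted).max()   # worst-case error
```

A correlation near 1 together with a small maximum discrepancy is the pattern the study reports as evidence that extracted data can stand in for the original values.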