
Publication

Featured research published by Michael C. Edwards.


Psychological Methods | 2007

Item Factor Analysis: Current Approaches and Future Directions

R. J. Wirth; Michael C. Edwards

The rationale underlying factor analysis applies to continuous and categorical variables alike; however, the models and estimation methods for continuous (i.e., interval or ratio scale) data are not appropriate for item-level data that are categorical in nature. The authors provide a targeted review and synthesis of the item factor analysis (IFA) estimation literature for ordered-categorical data (e.g., Likert-type response scales) with specific attention paid to the problems of estimating models with many items and many factors. Popular IFA models and estimation methods found in the structural equation modeling and item response theory literatures are presented. Following this presentation, recent developments in the estimation of IFA parameters (e.g., Markov chain Monte Carlo) are discussed. The authors conclude with considerations for future research on IFA, simulated examples, and advice for applied researchers.
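Among the popular IRT models for ordered-categorical items that such reviews cover is Samejima's graded response model. A minimal sketch of its category probabilities (illustrative only, not code from the article; the parameter values are assumptions):

```python
import numpy as np

def graded_response_probs(theta, a, b):
    """Category response probabilities for one item under Samejima's
    graded response model, a standard IFA/IRT model for ordered
    categories. `a` is the discrimination, `b` the ordered thresholds."""
    b = np.asarray(b, dtype=float)
    # P(X >= k | theta) for k = 1..K-1: logistic boundary curves
    cum = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    # Pad with P(X >= 0) = 1 and P(X >= K) = 0, then difference
    upper = np.concatenate(([1.0], cum))
    lower = np.concatenate((cum, [0.0]))
    return upper - lower

# A 5-point Likert-type item: one probability per category
p = graded_response_probs(theta=0.5, a=1.2, b=[-1.5, -0.5, 0.5, 1.5])
```

Each category's probability is the difference between adjacent cumulative curves, so the probabilities are positive and sum to 1 at any value of the latent trait.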


Journal of Abnormal Psychology | 2007

Externalizing symptoms among children of alcoholic parents: Entry points for an antisocial pathway to alcoholism.

Andrea M. Hussong; R. J. Wirth; Michael C. Edwards; Patrick J. Curran; Laurie Chassin; Robert A. Zucker

The authors examined heterogeneity in risk for externalizing symptoms in children of alcoholic parents, as it may inform the search for entry points into an antisocial pathway to alcoholism. That is, they tested whether the number of alcoholic parents in a family, the comorbid subtype of parental alcoholism, and the gender of the child predicted trajectories of externalizing symptoms over the early life course, as assessed in high-risk samples of children of alcoholic parents and matched controls. Through integrative analyses of 2 independent, longitudinal studies, they showed that children with either an antisocial alcoholic parent or 2 alcoholic parents were at greatest risk for externalizing symptoms. Moreover, children with a depressed alcoholic parent did not differ from those with an antisocial alcoholic parent in reported symptoms. These findings were generally consistent across mother, father, and adolescent reports of symptoms; child gender and child age (ages 2 through 17); and the 2 independent studies examined. Multialcoholic and comorbid-alcoholic families may thus convey a genetic susceptibility to dysregulation along with environments that both exacerbate this susceptibility and provide few supports to offset it.


Medical Care | 2007

Practical issues in the application of item response theory: A demonstration using items from the Pediatric Quality of Life Inventory (PedsQL) 4.0 generic core scales

Cheryl D. Hill; Michael C. Edwards; David Thissen; Michelle M. Langer; R. J. Wirth; Tasha M. Burwinkle; James W. Varni

Background: Item response theory (IRT) is increasingly being applied to health-related quality of life instrument development and refinement. This article discusses results obtained using categorical confirmatory factor analysis (CCFA) to check IRT model assumptions and the application of IRT in item analysis and scale evaluation. Objectives: To demonstrate the value of CCFA and IRT in examining a health-related quality of life measure in children and adolescents. Methods: This illustration uses data from 10,241 children and their parents on items from the 4 subscales of the PedsQL 4.0 Generic Core Scales. CCFA was applied to confirm domain dimensionality and identify possible locally dependent items. IRT was used to assess the strength of the relationship between the items and the constructs of interest and the information available across the latent construct. Results: CCFA showed generally strong support for 1-factor models for each domain; however, several items exhibited evidence of local dependence. IRT revealed that the items generally exhibit favorable characteristics and are related to the same construct within a given domain. We discuss the lessons that can be learned by comparing alternate forms of the same scale, and we assess the potential impact of local dependence on the item parameter estimates. Conclusions: This article describes CCFA methods for checking IRT model assumptions and provides suggestions for using these methods in practice. It offers insight into ways information gained through IRT can be applied to evaluate items and aid in scale construction.
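The "information available across the latent construct" comes from item information functions. As a simplified illustration (the two-parameter logistic case shown here is an assumption for clarity; the article works with polytomous items):

```python
import numpy as np

def item_information_2pl(theta, a, b):
    """Fisher information of a two-parameter logistic (2PL) item at
    ability theta: I(theta) = a^2 * P * (1 - P). An item is most
    informative near its difficulty b, scaled by discrimination a."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

# Information across the latent trait for a hypothetical item
grid = np.linspace(-3, 3, 601)
info = item_information_2pl(grid, a=1.5, b=0.5)
```

Summing such curves across items shows where on the latent continuum a scale measures precisely and where it does not, which is exactly the kind of evidence used to evaluate items and alternate forms.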


Psychological Assessment | 2010

A Reexamination of the Factor Structure of the Center for Epidemiologic Studies Depression Scale: Is a One-Factor Model Plausible?

Michael C. Edwards; Jennifer S. Cheavens; Jane E. Heiy; Kelly C. Cukrowicz

The Center for Epidemiologic Studies Depression Scale (CES-D) is one of the most widely used measures of depressive symptoms in research today. The original psychometric work in support of the CES-D (Radloff, 1977) described a 4-factor model underlying the 20 items on the scale. Despite a long history of evidence supporting this structure, researchers routinely report single-number summaries from the CES-D. The research described in this article examines the plausibility of a 1-factor model using an initial sample of 595 subjects and a cross-validation sample of 661. After comparing a series of models found in the literature or suggested by analyses, we determined that the good fit of the 4-factor model is mostly due to its ability to model excess covariance associated with the 4 reverse-scored items. A 2-factor model that included a general depression factor and a positive wording method factor loading only on those 4 items had fit that was nearly as good as the original 4-factor model. We conclude that although a 1-factor model may not be the best model for the full 20-item CES-D, it is at least plausible. If a unidimensional set of items is required (e.g., for a unidimensional item response theory analysis), by dropping 5 items, we were able to find a 1-factor model that had very similar fit to the 4-factor model with the original 20 items.


Journal of Child Psychology and Psychiatry | 2009

Deconstructing the PDD clinical phenotype: internal validity of the DSM‐IV

Luc Lecavalier; Kenneth D. Gadow; Carla J. DeVincent; Carrie R. Houts; Michael C. Edwards

BACKGROUND Empirical studies of the structure of autism symptoms have challenged the three-domain model of impairment currently characterizing pervasive developmental disorders (PDD). The objective of this study was to assess the internal validity of the DSM as a conceptual model for describing PDD, while paying particular attention to certain subject characteristics. METHODS Parents and teachers completed a DSM-IV-referenced rating scale for 3- to 12-year-old clinic referrals with a PDD (n = 730). Ratings were submitted to confirmatory factor analysis and different models were assessed for fit. RESULTS Measures of fit indicated that the three-factor solution based on the DSM was superior to other models. Most indices of fit were acceptable, but showed room for improvement. Fit indices varied according to the rater (parent or teacher), child's age (preschool versus school-aged), PDD subtype (autism, Asperger's, pervasive developmental disorder not otherwise specified (PDDNOS)), and IQ. CONCLUSIONS More research needs to be done before discarding current classification systems. Subject characteristics, modality of assessment, and procedural variations in statistical analyses impact conclusions about the structure of PDD symptoms.


Research in Human Development | 2009

Measurement and the Study of Change

Michael C. Edwards; R. J. Wirth

Many constructs developmental scientists study cannot be directly observed. In such cases, scales are created that reflect the construct of interest. Observed behaviors are taken as manifestations of an unobserved common cause. As crucial as measurement is to understanding many psychological phenomena, it is perhaps even more important when the goal of research is to understand how a construct changes over time. In this article we review several approaches to measurement, note features of latent variable measurement models which are ideally suited to the study of change, describe a hypothetical example, and conclude with a discussion of measurement and development.


Structural Equation Modeling | 2008

Incorporating Measurement Nonequivalence in a Cross-Study Latent Growth Curve Analysis

David B. Flora; Patrick J. Curran; Andrea M. Hussong; Michael C. Edwards

A large literature emphasizes the importance of testing for measurement equivalence in scales that may be used as observed variables in structural equation modeling applications. When the same construct is measured across more than one developmental period, as in a longitudinal study, it can be especially critical to establish measurement equivalence, or invariance, across the developmental periods. Similarly, when data from more than one study are combined into a single analysis, it is again important to assess measurement equivalence across the data sources. Yet, how to incorporate nonequivalence when it is discovered is not well described for applied researchers. Here, we present an item response theory approach that can be used to create scale scores from measures while explicitly accounting for nonequivalence. We demonstrate these methods in the context of a latent curve analysis in which data from two separate studies are combined to estimate a single longitudinal model spanning several developmental periods.


Multivariate Behavioral Research | 2012

Ordinary Least Squares Estimation of Parameters in Exploratory Factor Analysis With Ordinal Data

Chun-Ting Lee; Guangjian Zhang; Michael C. Edwards

Exploratory factor analysis (EFA) is often conducted with ordinal data (e.g., items with 5-point responses) in the social and behavioral sciences. These ordinal variables are often treated as if they were continuous in practice. An alternative strategy is to assume that a normally distributed continuous variable underlies each ordinal variable. The EFA model is specified for these underlying continuous variables rather than the observed ordinal variables. Although these underlying continuous variables are not observed directly, their correlations can be estimated from the ordinal variables. These correlations are referred to as polychoric correlations. This article is concerned with ordinary least squares (OLS) estimation of parameters in EFA with polychoric correlations. Standard errors and confidence intervals for rotated factor loadings and factor correlations are presented. OLS estimates and the associated standard error estimates and confidence intervals are illustrated using personality trait ratings from 228 college students. Statistical properties of the proposed procedure are explored using a Monte Carlo study. The empirical illustration and the Monte Carlo study showed that (a) OLS estimation of EFA is feasible with large models, (b) point estimates of rotated factor loadings are unbiased, (c) point estimates of factor correlations are slightly negatively biased with small samples, and (d) standard error estimates and confidence intervals perform satisfactorily at moderately large samples.
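As a hedged sketch of the OLS idea (not the authors' implementation, and using an arbitrary toy correlation matrix as a stand-in for a polychoric matrix): unrotated loadings can be obtained by iterated principal axes, which minimizes the least squares discrepancy between the model-implied and observed correlations in the off-diagonal elements.

```python
import numpy as np

def ols_efa(R, n_factors, n_iter=100):
    """Unrotated factor loadings for a correlation matrix R via
    iterated principal axes: alternate between placing current
    communality estimates on the diagonal and eigendecomposing
    the reduced matrix."""
    R = np.asarray(R, dtype=float)
    h2 = 1.0 - 1.0 / np.diag(np.linalg.inv(R))  # initial communalities (SMCs)
    for _ in range(n_iter):
        Rr = R.copy()
        np.fill_diagonal(Rr, h2)                 # reduced correlation matrix
        vals, vecs = np.linalg.eigh(Rr)
        idx = np.argsort(vals)[::-1][:n_factors]
        lam = np.sqrt(np.clip(vals[idx], 0, None)) * vecs[:, idx]
        h2 = np.sum(lam**2, axis=1)              # updated communalities
    return lam

# Toy 3-variable, 1-factor example (assumed correlations)
R = np.array([[1.0, 0.6, 0.5],
              [0.6, 1.0, 0.4],
              [0.5, 0.4, 1.0]])
loadings = ols_efa(R, n_factors=1)
```

In practice the input would be the polychoric correlation matrix estimated from the ordinal items, and the unrotated loadings would then be rotated; the article's contribution includes standard errors and confidence intervals for those rotated solutions, which this sketch does not cover.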


Psychological Assessment | 2006

The dimensions of change in therapeutic community treatment instrument.

Maria Orlando; Suzanne L. Wenzel; Pat Ebener; Michael C. Edwards; Wallace Mandell; Kirsten Becker

In this article, the authors describe the refinement and preliminary evaluation of the Dimensions of Change in Therapeutic Community Treatment Instrument (DCI), a measure of treatment process. In Study 1, a 99-item DCI, administered to a cross-sectional sample of substance abuse clients (N = 990), was shortened to 54 items on the basis of results from confirmatory factor analyses and item response theory invariance tests. In Study 2, confirmatory factor analyses of the 54-item DCI, completed by a longitudinal cohort of 993 clients, established and validated an 8-factor solution across 2 subpopulations (adults and adolescents) and 2 time points (treatment entry and 30 days postentry). The results of the 2 studies are encouraging and support use of the 54-item DCI as a tool to measure treatment process.


Journal of Educational and Behavioral Statistics | 2006

An Empirical Bayes Approach to Subscore Augmentation: How Much Strength Can We Borrow?

Michael C. Edwards; Jack L. Vevea

This article examines a subscore augmentation procedure. The approach uses empirical Bayes adjustments and is intended to improve the overall accuracy of measurement when information is scant. Simulations examined the impact of the method on subscale scores in a variety of realistic conditions. The authors focused on two popular scoring methods: summed scores and item response theory scale scores for summed scores. Simulation conditions included number of subscales, length (hence, reliability) of subscales, and the underlying correlations between scales. To examine the relative performance of the augmented scales, the authors computed root mean square error, reliability, percentage correctly identified as falling within specific proficiency ranges, and the percentage of simulated individuals for whom the augmented score was closer to the true score than was the nonaugmented score. The general findings and limitations of the study are discussed and areas for future research are suggested.
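The core of an empirical Bayes adjustment is shrinking a noisy observed score toward a prior mean in proportion to its unreliability. A minimal univariate sketch using Kelley's formula (the article's augmentation procedure also borrows strength from correlated subscales, which this simplification omits; the numbers are made up):

```python
def eb_augmented_subscore(observed, reliability, group_mean):
    """Kelley-style empirical Bayes shrinkage: the regressed estimate
    of a true score weights the observed subscore by its reliability
    and the group mean by the remaining (1 - reliability)."""
    return reliability * observed + (1.0 - reliability) * group_mean

# A short, modest-reliability subscale is pulled toward the group mean:
# 0.6 * 30 + 0.4 * 25, i.e. about 28
score = eb_augmented_subscore(observed=30.0, reliability=0.6, group_mean=25.0)
```

The lower the subscale's reliability (e.g., because it has few items), the more strength is borrowed and the stronger the shrinkage, which is exactly the trade-off the simulations above quantify.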

Collaboration


Dive into Michael C. Edwards's collaborations.

Top Co-Authors

R. J. Wirth
University of North Carolina at Chapel Hill

Leigh F. Callahan
University of North Carolina at Chapel Hill

Alan W. Stacy
Claremont Graduate University

Andrea M. Hussong
University of North Carolina at Chapel Hill

David Thissen
University of North Carolina at Chapel Hill