
Publications


Featured research published by Ken Kelley.


Psychological Methods | 2011

Effect size measures for mediation models: Quantitative strategies for communicating indirect effects.

Kristopher J. Preacher; Ken Kelley

The statistical analysis of mediation effects has become an indispensable tool for helping scientists investigate processes thought to be causal. Yet, in spite of many recent advances in the estimation and testing of mediation effects, little attention has been given to methods for communicating effect size and the practical importance of those effect sizes. Our goals in this article are to (a) outline some general desiderata for effect size measures, (b) describe current methods of expressing effect size and practical importance for mediation, (c) use the desiderata to evaluate these methods, and (d) develop new methods to communicate effect size in the context of mediation analysis. The first new effect size index we describe is a residual-based index that quantifies the amount of variance explained in both the mediator and the outcome. The second new effect size index quantifies the indirect effect as the proportion of the maximum possible indirect effect that could have been obtained, given the scales of the variables involved. We supplement our discussion by offering easy-to-use R tools for the numerical and visual communication of effect size for mediation effects.


Psychological Methods | 2012

On Effect Size.

Ken Kelley; Kristopher J. Preacher

The call for researchers to report and interpret effect sizes and their corresponding confidence intervals has never been stronger. However, there is confusion in the literature on the definition of effect size, and consequently the term is used inconsistently. We propose a definition for effect size, discuss 3 facets of effect size (dimension, measure/index, and value), outline 10 corollaries that follow from our definition, and review ideal qualities of effect sizes. Our definition of effect size is general and subsumes many existing definitions of effect size. We define effect size as a quantitative reflection of the magnitude of some phenomenon that is used for the purpose of addressing a question of interest. Our definition of effect size is purposely more inclusive than the way many have defined and conceptualized effect size, and it is unique with regard to linking effect size to a question of interest. Additionally, we review some important developments in the effect size literature and discuss the importance of accompanying an effect size with an interval estimate that acknowledges the uncertainty with which the population value of the effect size has been estimated. We hope that this article will facilitate discussion and improve the practice of reporting and interpreting effect sizes.


Psychological Methods | 2003

Sample Size for Multiple Regression: Obtaining Regression Coefficients That Are Accurate, Not Simply Significant

Ken Kelley; Scott E. Maxwell

An approach to sample size planning for multiple regression is presented that emphasizes accuracy in parameter estimation (AIPE). The AIPE approach yields precise estimates of population parameters by providing necessary sample sizes in order for the likely widths of confidence intervals to be sufficiently narrow. One AIPE method yields a sample size such that the expected width of the confidence interval around the standardized population regression coefficient is equal to the width specified. An enhanced formulation ensures, with some stipulated probability, that the width of the confidence interval will be no larger than the width specified. Issues involving standardized regression coefficients and random predictors are discussed, as are the philosophical differences between AIPE and the power analytic approaches to sample size planning.
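The AIPE logic can be sketched for the simplest possible case, a single mean with known σ and a normal (z) critical value; the paper itself handles standardized regression coefficients, where the interval width involves noncentral distributions and an iterative solution. The function name and example numbers below are hypothetical.

```python
import math
from statistics import NormalDist

def aipe_n_for_mean(sigma, omega, conf=0.95):
    """Smallest n whose (z-approximate) confidence interval for a mean has
    expected full width no larger than omega, given known sigma.  A toy
    version of the AIPE idea; the 2003 paper develops the exact machinery
    for standardized regression coefficients."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    # width = 2 * z * sigma / sqrt(n)  <=  omega   =>   n >= (2 z sigma / omega)^2
    return math.ceil((2 * z * sigma / omega) ** 2)

# e.g., IQ-scaled scores (sigma = 15) and a desired full CI width of 5 points
print(aipe_n_for_mean(sigma=15, omega=5))
```

Note how the required n grows with the square of the desired narrowness, which is why AIPE sample sizes can exceed those from power analysis when precise estimates are wanted.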


Child Development | 2008

Infant Temperament Moderates Relations Between Maternal Parenting in Early Childhood and Children's Adjustment in First Grade

Anne Dopkins Stright; Kathleen Cranley Gallagher; Ken Kelley

A differential susceptibility hypothesis proposes that children may differ in the degree to which parenting qualities affect aspects of child development. Infants with difficult temperaments may be more susceptible to the effects of parenting than infants with less difficult temperaments. Using latent change curve analyses to analyze data from the National Institute of Child Health and Human Development Study of Early Child Care, the current study found that temperament moderated associations between maternal parenting styles during early childhood and children's first-grade academic competence, social skills, and relationships with teachers and peers. Relations between parenting and first-grade outcomes were stronger for difficult than for less difficult infants. Infants with difficult temperaments had better adjustment than less difficult infants when parenting quality was high and poorer adjustment when parenting quality was lower.


Journal of Clinical Child and Adolescent Psychology | 2003

Analytic Methods for Questions Pertaining to a Randomized Pretest, Posttest, Follow-Up Design

Joseph R. Rausch; Scott E. Maxwell; Ken Kelley

Delineates 5 questions regarding group differences that are likely to be of interest to researchers within the framework of a randomized pretest, posttest, follow-up (PPF) design. These 5 questions are examined from a methodological perspective by comparing and discussing analysis of variance (ANOVA) and analysis of covariance (ANCOVA) methods and briefly discussing hierarchical linear modeling (HLM) for these questions. This article demonstrates that the pretest should be utilized as a covariate in the model rather than as a level of the time factor or as part of the dependent variable within the analysis of group differences. It is also demonstrated that how the posttest and the follow-up are utilized in the analysis of group differences is determined by the specific question asked by the researcher.


Behavior Research Methods | 2007

Methods for the Behavioral, Educational, and Social Sciences: An R package

Ken Kelley

Methods for the Behavioral, Educational, and Social Sciences (MBESS; Kelley, 2007b) is an open source package for R (R Development Core Team, 2007b), an open source statistical programming language and environment. MBESS implements methods that are not widely available elsewhere, yet are especially helpful for the idiosyncratic techniques used within the behavioral, educational, and social sciences. The major categories of functions are those that relate to confidence interval formation for noncentral t, F, and χ2 parameters, confidence intervals for standardized effect sizes (which require noncentral distributions), and sample size planning issues from the power analytic and accuracy in parameter estimation perspectives. In addition, MBESS contains collections of other functions that should be helpful to substantive researchers and methodologists. MBESS is a long-term project that will continue to be updated and expanded so that important methods can continue to be made available to researchers in the behavioral, educational, and social sciences.


Behavior Research Methods | 2007

Sample size planning for the coefficient of variation from the accuracy in parameter estimation approach

Ken Kelley

The accuracy in parameter estimation approach to sample size planning is developed for the coefficient of variation, where the goal of the method is to obtain an accurate parameter estimate by achieving a sufficiently narrow confidence interval. The first method allows researchers to plan sample size so that the expected width of the confidence interval for the population coefficient of variation is sufficiently narrow. A modification allows a desired degree of assurance to be incorporated into the method, so that the obtained confidence interval will be sufficiently narrow with some specified probability (e.g., 85% assurance that the 95% confidence interval width will be no wider than ω units). Tables of necessary sample size are provided for a variety of scenarios, helping researchers who plan a study in which the coefficient of variation is of interest to choose a sample size that yields a sufficiently narrow confidence interval, optionally with some specified assurance that the interval will be sufficiently narrow. Freely available computer routines have been developed that allow researchers to easily implement all of the methods discussed in the article.
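A rough version of the expected-width method can be sketched using the delta-method standard error SE(cv) ≈ cv·√(1/2 + cv²)/√n for normally distributed data; the paper's exact method is based on the noncentral t distribution, so treat this purely as an approximation. The function name and the ω value are hypothetical.

```python
import math
from statistics import NormalDist

def aipe_n_for_cv(cv, omega, conf=0.95):
    """Smallest n whose approximate CI for the coefficient of variation has
    expected full width <= omega.  Uses the delta-method approximation
    SE(cv) ~= cv * sqrt(1/2 + cv**2) / sqrt(n) for normal data; the article's
    exact method uses the noncentral t distribution instead."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    se1 = cv * math.sqrt(0.5 + cv ** 2)      # standard error at n = 1
    # width = 2 * z * se1 / sqrt(n)  <=  omega
    return math.ceil((2 * z * se1 / omega) ** 2)

# planning value cv = 0.20, desired full 95% CI width omega = 0.05
print(aipe_n_for_cv(cv=0.20, omega=0.05))
```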


Educational and Psychological Measurement | 2005

The Effects of Nonnormal Distributions on Confidence Intervals Around the Standardized Mean Difference: Bootstrap and Parametric Confidence Intervals

Ken Kelley

The standardized group mean difference, Cohen’s d, is among the most commonly used and intuitively appealing effect sizes for group comparisons. However, reporting this point estimate alone does not reflect the extent to which sampling error may have led to an obtained value. A confidence interval expresses the uncertainty that exists between d and the population value, δ, it represents. A set of Monte Carlo simulations was conducted to examine the integrity of a noncentral approach analogous to that given by Steiger and Fouladi, as well as two bootstrap approaches in situations in which the normality assumption is violated. Because d is positively biased, a procedure given by Hedges and Olkin is outlined, such that an unbiased estimate of δ can be obtained. The bias-corrected and accelerated bootstrap confidence interval using the unbiased estimate of δ is proposed and recommended for general use, especially in cases in which the assumption of normality may be violated.
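A minimal sketch of the bootstrap idea: compute the Hedges and Olkin bias-corrected standardized mean difference, then bootstrap it by resampling within each group. The percentile interval shown here is a simpler cousin of the bias-corrected and accelerated (BCa) interval the article recommends; the data are simulated with a hypothetical true difference of 0.5.

```python
import math
import random

random.seed(3)

def hedges_g(x, y):
    """Bias-corrected standardized mean difference (Hedges & Olkin correction)."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    sp = math.sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    d = (mx - my) / sp
    j = 1 - 3 / (4 * (nx + ny - 2) - 1)   # small-sample bias correction factor
    return j * d

# Two simulated groups with a true standardized difference of 0.5.
a = [random.gauss(0.5, 1) for _ in range(60)]
b = [random.gauss(0.0, 1) for _ in range(60)]

# Percentile bootstrap: resample within each group, recompute g each time.
boots = []
for _ in range(2000):
    ra = [random.choice(a) for _ in a]
    rb = [random.choice(b) for _ in b]
    boots.append(hedges_g(ra, rb))
boots.sort()
lo, hi = boots[int(0.025 * 2000)], boots[int(0.975 * 2000) - 1]
print(f"g = {hedges_g(a, b):.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

Because resampling makes no normality assumption, intervals like this remain usable in the nonnormal conditions the article's simulations examine.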


American Journal of Nephrology | 2008

Analyzing Change: A Primer on Multilevel Models with Applications to Nephrology

Jocelyn E. Holden; Ken Kelley; Rajiv Agarwal

The analysis of change is central to kidney research. In the past 25 years, newer and more sophisticated methods for the analysis of change have been developed; as of yet, however, these newer methods are underutilized in the field of kidney research. Repeated measures ANOVA is the traditional model that is easy to understand and simple to interpret, but it may not be valid in complex real-world situations. In the repeated measures ANOVA context, problems are often encountered with the assumption of sphericity, the unit of analysis, missing data, and the lack of consideration for different types of change. Multilevel modeling, a newer and more sophisticated method for the analysis of change, overcomes these limitations and provides a better framework for understanding the true nature of change. The present article provides a primer on the use of multilevel modeling to study change. An example from a clinical study is detailed, and the method for implementation in SAS is provided.


Clinical Journal of The American Society of Nephrology | 2008

Diagnosing Hypertension by Intradialytic Blood Pressure Recordings

Rajiv Agarwal; Tesfamariam Metiku; Getachew G. Tegegne; Robert P. Light; Zerihun Bunaye; Dagim M. Bekele; Ken Kelley

BACKGROUND AND OBJECTIVES: The diagnosis of hypertension among hemodialysis patients by predialysis or postdialysis blood pressure (BP) recordings is imprecise and biased and has poor test-retest reliability. The value of intradialytic BP measurements for diagnosing hypertension is unknown.

DESIGN, SETTING, PARTICIPANTS, & MEASUREMENTS: A diagnostic-test study was done with interdialytic ambulatory BP as the reference standard. The index BP recordings tested were: predialysis (method 1), postdialysis (method 2), intradialytic (method 3), intradialytic including predialysis and postdialysis (method 4), and the average of predialysis and postdialysis (method 5). Each index BP was recorded over six consecutive dialysis treatments.

RESULTS: There were differences among the index BP measurements in reproducibility, bias, precision, and accuracy. Method 4 was the most reproducible (intraclass correlation coefficient = 0.70 for systolic and diastolic BP). All 5 measurement methods overestimated 44-h ambulatory systolic BP; methods 2, 3, and 4 did so by only a small amount. Method 4 was the most precise and accurate. For the diagnosis of hypertension, a BP cut-point of 135/75 mmHg by method 4 had a sensitivity of 90.4% and a specificity of 75.9% for systolic BP (area under the ROC curve 0.90). A median cut-off systolic BP of 140 mmHg from a single dialysis provides approximately 80% sensitivity and 80% specificity in diagnosing systolic hypertension; a median cut-off diastolic BP of 80 mmHg provides approximately 75% sensitivity and 75% specificity in diagnosing diastolic hypertension.

CONCLUSIONS: Consideration of intradialytic BP measurements together with predialysis and postdialysis BP measurements improves the reproducibility, bias, precision, and accuracy of BP measurement compared with predialysis or postdialysis measurements.
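Sensitivity and specificity at a fixed cut-point, as reported for method 4's 135/75 mmHg threshold, reduce to counting the four cells of a 2×2 table. The sketch below uses made-up readings and diagnoses, not the study's data.

```python
def sens_spec(readings, truth, cut):
    """Sensitivity and specificity of the rule 'reading >= cut' against a
    reference diagnosis (True = hypertensive by the reference standard)."""
    tp = sum(1 for r, t in zip(readings, truth) if t and r >= cut)
    fn = sum(1 for r, t in zip(readings, truth) if t and r < cut)
    tn = sum(1 for r, t in zip(readings, truth) if not t and r < cut)
    fp = sum(1 for r, t in zip(readings, truth) if not t and r >= cut)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical systolic index BPs and reference diagnoses for six patients.
bp = [150, 120, 138, 160, 128, 142]
hyp = [True, False, True, True, False, False]
sens, spec = sens_spec(bp, hyp, cut=135)
print(f"sensitivity {sens:.2f}, specificity {spec:.2f}")  # 1.00 and 0.67
```

Sweeping the cut-point and plotting sensitivity against 1 − specificity traces the ROC curve whose area the study reports.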

Collaboration


Dive into Ken Kelley's collaborations.

Top Co-Authors

Keke Lai

University of Notre Dame


Bhargab Chattopadhyay

University of Texas at Dallas


Corey M. Angst

Mendoza College of Business


Jocelyn E. Holden

Indiana University Bloomington
