Publication


Featured research published by Nianbo Dong.


Developmental Psychology | 2014

Longitudinal associations between executive functioning and academic skills across content areas.

Mary Wagner Fuhs; Kimberly Turner Nesbitt; Dale C. Farran; Nianbo Dong

This study assessed 562 four-year-old children at the beginning and end of their prekindergarten (pre-k) year and followed them to the end of kindergarten. At each time point children were assessed on 6 measures of executive function (EF) and 5 subtests of the Woodcock-Johnson III academic achievement battery. Exploratory factor analyses yielded EF and achievement factor scores. We examined the longitudinal bidirectional associations between these domains as well as the bidirectional associations among the separate content areas and the EF factor. In the pre-k year, strong bidirectional associations were found between EF skills and mathematics and oral comprehension skills, but not literacy skills. After controlling for pre-k gains in both EF and achievement, EF skills continued to be strong predictors of mathematics gains in kindergarten and more moderate predictors of kindergarten language gains. These results provide important information on the interrelationship of the developmental domains of EF and achievement, as well as support for efforts to identify effective pre-k activities and/or curricula that can improve children's EF skills. They also suggest that mathematics activities may be a possible avenue for improving EF skills in young children.


Journal of Research on Educational Effectiveness | 2013

PowerUp!: A Tool for Calculating Minimum Detectable Effect Sizes and Minimum Required Sample Sizes for Experimental and Quasi-Experimental Design Studies

Rebecca A. Maynard; Nianbo Dong

This paper and the accompanying tool are intended to complement existing power analysis tools by offering one based on the framework of minimum detectable effect size (MDES) formulae that can be used to determine sample size requirements and to estimate minimum detectable effect sizes for a range of individual- and group-random assignment design studies and for common quasi-experimental design studies. The paper and accompanying tool cover computation of minimum detectable effect sizes under the following study designs: individual random assignment designs, hierarchical random assignment designs (2-4 levels), block random assignment designs (2-4 levels), regression discontinuity designs (6 types), and short interrupted time-series designs. In each case, the discussion and accompanying tool consider the key factors associated with statistical power and minimum detectable effect sizes, including the level at which treatment occurs and the statistical models (e.g., fixed effects and random effects) used in the analysis. The tool also includes a module that estimates, for one- and two-level random assignment design studies, the minimum sample sizes required to attain user-defined minimum detectable effect sizes.
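
As a hedged illustration of the MDES framework, the sketch below implements the standard minimum detectable effect size formula for a two-level cluster random assignment design (treatment at the cluster level) in R. The function name and defaults are illustrative, not the tool's interface.

```r
# Minimal sketch of a two-level cluster random assignment MDES calculation,
# assuming treatment at the cluster level: J clusters of n individuals,
# proportion P treated, intraclass correlation rho, R2_1/R2_2 the proportions
# of variance explained by covariates at levels 1 and 2, and g cluster-level
# covariates. Names and defaults are illustrative, not the tool's interface.
mdes_2level_crt <- function(rho, J, n, P = 0.5, R2_1 = 0, R2_2 = 0,
                            g = 0, alpha = 0.05, power = 0.80) {
  df <- J - g - 2                                   # degrees of freedom
  M  <- qt(1 - alpha / 2, df) + qt(power, df)       # two-tailed multiplier
  M * sqrt(rho * (1 - R2_2) / (P * (1 - P) * J) +
           (1 - rho) * (1 - R2_1) / (P * (1 - P) * J * n))
}

mdes_2level_crt(rho = 0.20, J = 40, n = 20)  # e.g., 40 clusters of 20
```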


Journal of Educational Psychology | 2017

Learning-related cognitive self-regulation measures for prekindergarten children: A comparative evaluation of the educational relevance of selected measures

Mark W. Lipsey; Kimberly Turner Nesbitt; Dale C. Farran; Nianbo Dong; Mary Wagner Fuhs; Sandra Jo Wilson

Many cognitive self-regulation (CSR) measures are related to the academic achievement of prekindergarten children and are thus of potential interest for school readiness screening and as outcome variables in intervention research aimed at improving those skills in order to facilitate learning. The objective of this study was to identify learning-related CSR measures especially suitable for such purposes by comparing the performance of promising candidates on criteria designed to assess their educational relevance for pre-K settings. A diverse set of 12 easily administered measures was selected from among those represented in research on attention, effortful control, and executive function, and applied to a large sample of pre-K children. Those measures were then compared on their ability to predict achievement and achievement gain, responsiveness to developmental change, and concurrence with teacher ratings of CSR-related classroom behavior. Four measures performed well on all those criteria: Peg Tapping, Head-Toes-Knees-Shoulders, the Kansas Reflection-Impulsivity Scale for Preschoolers, and Copy Design. Two others, Dimensional Change Card Sort and Backwards Digit Span, performed well on most of the criteria. Cross-validation with a new sample of children confirmed the initial evaluation of these measures and provided estimates of test–retest reliability.


Journal of Educational and Behavioral Statistics | 2016

Power for Detecting Treatment by Moderator Effects in Two- and Three-Level Cluster Randomized Trials

Jessaca Spybrook; Benjamin Kelcey; Nianbo Dong

Recently, there has been an increase in the number of cluster randomized trials (CRTs) evaluating the impact of educational programs and interventions. These studies are often powered for the main effect of treatment to address the “what works” question. However, program effects may vary by individual characteristics or by context, making it important to also consider power to detect moderator effects. This article presents a framework for calculating statistical power for moderator effects at all levels for two- and three-level CRTs. Annotated R code is included to make the calculations accessible to researchers and to increase the regularity with which a priori power analyses for moderator effects in CRTs are conducted.
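
The article's annotated R code is not reproduced here, but the final step of the calculation can be sketched generically: once a design-specific standard error for the moderator effect is in hand, power follows from the noncentral t distribution. The function below is a sketch of that step only, with illustrative names and inputs.

```r
# Two-tailed power for an effect tested with a t statistic, given the
# standardized effect size, its standard error, and degrees of freedom.
# The design-specific standard errors and df for each moderator type and
# level come from the article's framework; they are inputs here.
power_t <- function(es, se, df, alpha = 0.05) {
  ncp    <- es / se                    # noncentrality parameter
  t_crit <- qt(1 - alpha / 2, df)      # critical value
  1 - pt(t_crit, df, ncp = ncp) + pt(-t_crit, df, ncp = ncp)
}

power_t(es = 0.25, se = 0.08, df = 38)  # illustrative inputs
```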


Evaluation Review | 2018

Can Propensity Score Analysis Approximate Randomized Experiments Using Pretest and Demographic Information in Pre-K Intervention Research?

Nianbo Dong; Mark W. Lipsey

Background: It is unclear whether propensity score analysis (PSA) based on pretest and demographic covariates will meet the ignorability assumption needed to replicate the results of randomized experiments. Purpose: This study applies within-study comparisons to assess whether pre-kindergarten (pre-K) treatment effects on achievement outcomes estimated using PSA based on a pretest and demographic covariates can approximate those found in a randomized experiment. Methods: Four studies with samples of pre-K children each provided data on two math achievement outcome measures, with baseline pretests and child demographic variables that included race, gender, age, language spoken at home, and mother's highest education. Research Design and Data Analysis: A randomized study of a pre-K math curriculum provided benchmark estimates of effects on achievement measures. Comparison samples from other pre-K studies were then substituted for the original randomized control group, and the effects were reestimated using PSA. Correspondence was evaluated against multiple criteria. Results and Conclusions: The effect estimates using PSA were in the same direction as the benchmark estimates, had similar but not identical statistical significance, and did not differ from the benchmarks at statistically significant levels. However, the magnitudes of the effect sizes differed, displaying absolute and relative bias larger than required to show statistical equivalence with formal tests, although those results were not definitive because of limited statistical power. We conclude that treatment effect estimates based on a single pretest and demographic covariates in PSA correspond to those from a randomized experiment on the most general criteria for equivalence.
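
For readers unfamiliar with the mechanics, a minimal PSA sketch in R using logistic regression and ATT weighting is shown below; the data frame and variable names are hypothetical, and the study's actual estimator may have differed (e.g., matching rather than weighting).

```r
# Hypothetical sketch of PSA with a pretest and demographic covariates.
# The data frame d and all variable names are illustrative; the study's
# actual implementation may have used matching or another estimator.
ps_model <- glm(treat ~ pretest + race + gender + age + home_language +
                  mom_educ, family = binomial, data = d)
d$ps <- fitted(ps_model)                             # estimated propensities
d$w  <- ifelse(d$treat == 1, 1, d$ps / (1 - d$ps))   # ATT weights
summary(lm(posttest ~ treat, data = d, weights = w)) # weighted effect estimate
```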


Evaluation Review | 2016

Meaningful Effect Sizes, Intraclass Correlations, and Proportions of Variance Explained by Covariates for Planning Two- and Three-Level Cluster Randomized Trials of Social and Behavioral Outcomes

Nianbo Dong; Wendy M. Reinke; Keith C. Herman; Catherine P. Bradshaw; Desiree W. Murray

Background: There is a need for greater guidance regarding design parameters and empirical benchmarks for social and behavioral outcomes to inform assumptions in the design and interpretation of cluster randomized trials (CRTs). Objectives: We calculated empirical reference values for critical research design parameters associated with statistical power for children's social and behavioral outcomes, including effect sizes, intraclass correlations (ICCs), and proportions of variance explained by a covariate at different levels (R²). Subjects: Children from kindergarten to Grade 5 in the samples from four large CRTs evaluating the effectiveness of two classroom-level and two school-level preventive interventions. Measures: Teacher ratings of students' social and behavioral outcomes using the Teacher Observation of Classroom Adaptation–Checklist and the Social Competence Scale–Teacher. Research design: Two types of effect size benchmarks were calculated: (1) normative expectations for change and (2) policy-relevant demographic performance gaps. The ICCs and R² were calculated using two-level hierarchical linear modeling (HLM), where students were nested within schools, and three-level HLM, where students were nested within classrooms and classrooms were nested within schools. Results and Conclusions: Comprehensive tables of benchmarks and ICC values are provided to help prevention researchers interpret the effect sizes of interventions and conduct power analyses for designing CRTs of children's social and behavioral outcomes. The discussion also demonstrates how to use the parameter reference values provided in this article to calculate the sample size for two- and three-level CRT designs.
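
As a sketch of how such ICCs are typically obtained (the paper's exact modeling code is not shown in the abstract), unconditional two- and three-level models fit with R's lme4 yield the variance components directly; the data frame and variable names are illustrative.

```r
library(lme4)

# Two-level unconditional model: students nested within schools
m2  <- lmer(outcome ~ 1 + (1 | school), data = d)
vc2 <- as.data.frame(VarCorr(m2))
icc_school <- vc2$vcov[vc2$grp == "school"] / sum(vc2$vcov)

# Three-level unconditional model: students within classrooms within schools
m3   <- lmer(outcome ~ 1 + (1 | school/classroom), data = d)
vc3  <- as.data.frame(VarCorr(m3))
iccs <- vc3$vcov / sum(vc3$vcov)  # one ICC per variance component

# R2 at a given level can be computed by refitting with a covariate and
# taking the proportional reduction in that level's variance component.
```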


Multivariate Behavioral Research | 2017

Experimental Power for Indirect Effects in Group-randomized Studies with Group-level Mediators

Ben Kelcey; Nianbo Dong; Jessaca Spybrook; Zuchao Shen

Mediation analyses have provided a critical platform for assessing the validity of theories of action across a wide range of disciplines. Despite widespread interest in and development of these analyses, literature guiding the design of mediation studies has been largely unavailable. As in studies focused on the detection of a total or main effect, an important design consideration is the statistical power to detect indirect effects if they exist. Understanding the sensitivity to detect indirect effects is exceptionally important because it directly influences the scale of data collection and ultimately governs the types of evidence group-randomized studies can bring to bear on theories of action. However, unlike studies concerned with the detection of total effects, the literature has not established power formulas for detecting multilevel indirect effects in group-randomized designs. In this study, we develop closed-form expressions to estimate the variance of, and the power to detect, indirect effects in group-randomized studies with a group-level mediator using two-level linear models (i.e., 2-2-1 mediation). The results suggest that, when carefully planned, group-randomized designs may frequently be well positioned to detect mediation effects with typical sample sizes. The resulting power formulas are implemented in the R package PowerUpR and the PowerUp!-Mediator software (causalevaluation.org).
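
The paper's closed-form expressions are design-specific, but the generic starting point can be sketched: the first-order delta-method (Sobel) approximation for the standard error of a product of path coefficients a and b. This is an illustration of the general idea only, not the paper's 2-2-1 formulas.

```r
# First-order delta-method (Sobel) standard error for an indirect effect a*b;
# a generic illustration, not the paper's design-specific 2-2-1 expressions.
sobel_se <- function(a, se_a, b, se_b) {
  sqrt(a^2 * se_b^2 + b^2 * se_a^2)
}

# z statistic for an indirect effect, with illustrative inputs
a <- 0.40; se_a <- 0.10; b <- 0.30; se_b <- 0.12
(a * b) / sobel_se(a, se_a, b, se_b)
```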


Prevention Science | 2018

Sample Size Planning for Cluster-Randomized Interventions Probing Multilevel Mediation

Ben Kelcey; Jessaca Spybrook; Nianbo Dong

Multilevel mediation analyses play an essential role in helping researchers develop, probe, and refine theories of action underlying interventions and document how interventions impact outcomes. However, little is known about how to plan studies with sufficient power to detect such multilevel mediation effects. In this study, we describe how to prospectively estimate power and identify sufficient sample sizes for experiments intended to detect multilevel mediation effects. We outline a simple approach to estimate the power to detect mediation effects with individual- or cluster-level mediators using summary statistics easily obtained from empirical literature and the anticipated magnitude of the mediation effect. We draw on a running example to illustrate several different types of mediation and provide an accessible introduction to the design of multilevel mediation studies. The power formulas are implemented in the R package PowerUpR and the PowerUp software (causalevaluation.org).
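
A hedged sketch of the summary-statistics idea, extending the delta-method approximation above: treating the resulting z statistic as approximately standard normal gives an approximate two-tailed power for the indirect effect. The paper's formulas additionally account for the multilevel design; this is the generic skeleton only.

```r
# Approximate two-tailed power for an indirect effect a*b from anticipated
# path coefficients and standard errors, treating the Sobel z as roughly
# standard normal. Illustrative only; the paper handles the multilevel
# structure explicitly.
power_sobel <- function(a, se_a, b, se_b, alpha = 0.05) {
  z      <- (a * b) / sqrt(a^2 * se_b^2 + b^2 * se_a^2)
  z_crit <- qnorm(1 - alpha / 2)
  pnorm(-z_crit - z) + 1 - pnorm(z_crit - z)
}

power_sobel(a = 0.40, se_a = 0.10, b = 0.30, se_b = 0.12)
```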


Journal of Experimental Education | 2018

Power Analyses for Moderator Effects in Three-Level Cluster Randomized Trials

Nianbo Dong; Benjamin Kelcey; Jessaca Spybrook

Researchers are often interested in whether the effects of an intervention differ conditional on individual- or group-level moderator variables such as children's characteristics (e.g., gender), teachers' backgrounds (e.g., years of teaching), and schools' characteristics (e.g., urbanicity); that is, researchers seek to examine for whom and under what circumstances an intervention works. Researchers are also interested in understanding and interpreting variability in treatment effects through moderation analysis as an approach to exploring the sources of that variability. This study develops formulas for power analyses to detect moderator effects when designing three-level cluster randomized trials (CRTs). We develop statistical formulas for calculating statistical power, the minimum detectable effect size difference, and 95% confidence intervals for cluster or cross-level moderation, nonrandomly varying or random slopes, binary or continuous moderators, and designs with or without covariates. We demonstrate how the calculations can be used in the planning phase of three-level CRTs using the software PowerUp!-Moderator.
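
As a minimal sketch of the relation between power and the minimum detectable effect size difference (MDESD) named in the abstract: given the standard error of the moderated effect and the degrees of freedom, both of which come from the paper's design-specific formulas and are treated as known inputs here, the MDESD is the usual t-based multiplier times that standard error.

```r
# MDESD for a moderator effect, given its standard error and degrees of
# freedom (design-specific quantities from the paper; inputs here).
mdesd <- function(se, df, alpha = 0.05, power = 0.80) {
  (qt(1 - alpha / 2, df) + qt(power, df)) * se
}

mdesd(se = 0.08, df = 38)  # illustrative inputs
```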


Prevention Science | 2018

The Incredible Years Teacher Classroom Management Program: Outcomes from a Group Randomized Trial

Wendy M. Reinke; Keith C. Herman; Nianbo Dong

This group randomized controlled trial (RCT) evaluated the efficacy of the Incredible Years Teacher Classroom Management Program (IY TCM) on students' social, behavioral, and academic outcomes in a large, diverse sample within an urban context. Participants included 105 teachers and 1,817 students in kindergarten through third grade. Three-level hierarchical linear models (HLM) were used to examine the overall treatment effects on teacher-reported student behavior and academic outcomes. In addition, multilevel moderation analyses were conducted to examine whether the treatment effects on student outcomes differed by demographic variables and by pretest measures of social-emotional functioning, disruptive behavior, and academics. Findings indicate that IY TCM reduced student emotional dysregulation (d = −0.14) and increased prosocial behavior (d = 0.13) and social competence (d = 0.13). In addition, students initially lower on measures of social and academic competence demonstrated significant improvements on those measures at outcome compared with similar peers in control classrooms. The practical significance of the findings and implications for schools and policy makers are discussed.

Collaboration


Dive into Nianbo Dong's collaborations.

Top Co-Authors
Jessaca Spybrook

Western Michigan University

Ben Kelcey

University of Cincinnati
