
Publication


Featured research published by John M. Ferron.


Educational and Psychological Measurement | 2005

The Quality of Factor Solutions in Exploratory Factor Analysis: The Influence of Sample Size, Communality, and Overdetermination.

Kristine Y. Hogarty; Constance V. Hines; Jeffrey D. Kromrey; John M. Ferron; Karen R. Mumford

The purpose of this study was to investigate the relationship between sample size and the quality of factor solutions obtained from exploratory factor analysis. This research expanded upon the range of conditions previously examined, employing a broad selection of criteria for the evaluation of the quality of sample factor solutions. Results showed that when communalities are high, sample size tended to have less influence on the quality of factor solutions than when communalities are low. Overdetermination of factors was also shown to improve the factor analysis solution. Finally, decisions about the quality of the factor solution depended upon which criteria were examined.
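The abstract's simulation logic can be sketched with a small Monte Carlo of its own. This is a minimal illustration, not the authors' design: it uses a one-factor population model, a principal-component estimate of the loadings in place of a full factor extraction, and Tucker's congruence coefficient as the single quality criterion; the function names, sample sizes, and communality values are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def congruence(a, b):
    """Tucker's congruence coefficient between two loading vectors."""
    return abs(a @ b) / np.sqrt((a @ a) * (b @ b))

def recovery(n, communality, p=8, reps=200):
    """Average congruence between true and estimated loadings for a
    one-factor model, estimating loadings from the first principal
    component of the sample correlation matrix (a simple FA proxy)."""
    loading = np.sqrt(communality)   # one factor: communality = loading**2
    true = np.full(p, loading)
    phis = []
    for _ in range(reps):
        f = rng.standard_normal((n, 1))
        e = rng.standard_normal((n, p)) * np.sqrt(1 - communality)
        X = f @ true[None, :] + e
        R = np.corrcoef(X, rowvar=False)
        vals, vecs = np.linalg.eigh(R)           # ascending eigenvalues
        est = vecs[:, -1] * np.sqrt(vals[-1])    # largest component
        phis.append(congruence(est, true))
    return float(np.mean(phis))

for n in (60, 200, 1000):
    for h in (0.2, 0.7):   # low vs. high communality
        print(f"n={n:5d}  h={h}  congruence={recovery(n, h):.3f}")
```

Running the grid shows the pattern the study reports: with high communalities, recovery is good even at small n, while low communalities make recovery much more sensitive to sample size.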


Journal of School Violence | 2002

Emerging Risks of Violence in the Digital Age: Lessons for Educators from an Online Study of Adolescent Girls in the United States.

Ilene R. Berson; Michael J. Berson; John M. Ferron

This research focuses on the evolving area of cyberviolence and draws on a pioneering study to discuss benefits and risks of online interaction among adolescent girls. This new area of inquiry introduces educators to the social and cultural communities of the Internet, a virtual venue with unique perspectives on power, identity, and gender for children and youth.


The Journal of Positive Psychology | 2011

Longitudinal academic outcomes predicted by early adolescents’ subjective well-being, psychopathology, and mental health status yielded from a dual factor model

Shannon M. Suldo; Amanda Thalji; John M. Ferron

This longitudinal investigation examined the utility of subjective well-being (SWB) and psychopathology in predicting subsequent academic achievement and in-school behavior in 300 middle school students. Initial SWB predicted students’ grade point averages (GPAs) 1 year later, initial internalizing psychopathology predicted absences 1 year later, and initial externalizing psychopathology predicted grades, absences, and discipline problems 1 year later. Students’ grades and attendance across time varied as a function of mental health group yielded from a dual factor model. Specifically, students in the troubled mental health group declined at a significantly faster rate on GPAs than youth without psychopathology. In contrast, students in the symptomatic but content group were not significantly different from peers with low psychopathology. At Time 2, the best attendance, grades, and math skills were found among students who had both average/high SWB and low psychopathology 1 year earlier, supporting the long-term utility of complete mental health.
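The dual factor model's grouping step, crossing a well-being screen with a psychopathology screen to yield four mental health groups, can be illustrated in a few lines. This is a hypothetical sketch, not the study's instruments: the simulated scores, the median split on SWB, and the T >= 60 psychopathology cutoff are all stand-ins.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Hypothetical screening scores for 300 students; the scales and cutoffs
# below are illustrative stand-ins, not the study's measures.
df = pd.DataFrame({
    "swb": rng.normal(4.0, 1.0, 300),    # subjective well-being composite
    "psych": rng.normal(50, 10, 300),    # psychopathology T-score
})

high_swb = df["swb"] >= df["swb"].median()
high_psych = df["psych"] >= 60           # e.g., 1 SD above the normative mean

# The dual factor model crosses the two screens into four groups.
df["group"] = np.select(
    [high_swb & ~high_psych, ~high_swb & ~high_psych,
     high_swb & high_psych, ~high_swb & high_psych],
    ["complete mental health", "vulnerable",
     "symptomatic but content", "troubled"])
print(df["group"].value_counts())
```

The study's longitudinal comparisons then treat this group label as a predictor of later GPA, attendance, and discipline outcomes.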


Review of Educational Research | 2009

Multilevel Modeling: A Review of Methodological Issues and Applications

Robert F. Dedrick; John M. Ferron; Melinda R. Hess; Kristine Y. Hogarty; Jeffrey D. Kromrey; Thomas R. Lang; John D. Niles; Reginald S. Lee

This study analyzed the reporting of multilevel modeling applications of a sample of 99 articles from 13 peer-reviewed journals in education and the social sciences. A checklist, derived from the methodological literature on multilevel modeling and focusing on the issues of model development and specification, data considerations, estimation, and inference, was used to analyze the articles. The most common applications were two-level models where individuals were nested within contexts. Most studies were non-experimental and used nonprobability samples. The amount of data at each level varied widely across studies, as did the number of models examined. Analyses of reporting practices indicated some clear problems, with many articles not reporting enough information for a reader to critique the reported analyses. For example, in many articles, one could not determine how many models were estimated, what covariance structure was assumed, what type of centering if any was used, whether the data were consistent with assumptions, whether outliers were present, or how the models were estimated. Guidelines for researchers reporting multilevel analyses are provided.


Behavior Research Methods | 2009

Making treatment effect inferences from multiple-baseline data: The utility of multilevel modeling approaches

John M. Ferron; Bethany A. Bell; Melinda R. Hess; Gianna Rendina-Gobioff; Susan T. Hibbard

Multiple-baseline studies are prevalent in behavioral research, but questions remain about how to best analyze the resulting data. Monte Carlo methods were used to examine the utility of multilevel models for multiple-baseline data under conditions that varied in the number of participants, number of repeated observations per participant, variance in baseline levels, variance in treatment effects, and amount of autocorrelation in the Level 1 errors. Interval estimates of the average treatment effect were examined for two specifications of the Level 1 error structure (σ²I and first-order autoregressive) and for five different methods of estimating the degrees of freedom (containment, residual, between-within, Satterthwaite, and Kenward-Roger). When the Satterthwaite or Kenward-Roger method was used and an autoregressive Level 1 error structure was specified, the interval estimates of the average treatment effect were relatively accurate. Conversely, the interval estimates of the treatment effect variance were inaccurate, and the corresponding point estimates were biased.
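A minimal version of the data-generating and fitting setup the abstract describes might look like the following sketch, assuming statsmodels' MixedLM. It implements only the σ²I Level-1 structure (MixedLM does not expose an autoregressive Level-1 error or the Satterthwaite/Kenward-Roger degrees-of-freedom methods), and the number of cases, intervention points, and variance values are illustrative choices, not the paper's conditions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Simulate a 4-case multiple-baseline study: staggered intervention starts,
# case-specific baseline levels, and case-specific treatment effects.
rows = []
for case, start in enumerate([6, 9, 12, 15]):
    level = rng.normal(10, 1)      # baseline level varies across cases
    effect = rng.normal(2, 0.5)    # treatment effect varies across cases
    for t in range(20):
        phase = int(t >= start)
        rows.append({"case": case, "t": t, "phase": phase,
                     "y": level + effect * phase + rng.normal(0, 1)})
df = pd.DataFrame(rows)

# Two-level model: random intercept and random treatment effect per case,
# with independent (sigma^2 I) Level-1 errors.
m = smf.mixedlm("y ~ phase", df, groups=df["case"], re_formula="~phase").fit()
print(m.summary())
```

With only four cases the random-effect variances are estimated from very little Level-2 information, which is exactly why the paper's interval estimates for the treatment effect variance were inaccurate.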


Multivariate Behavioral Research | 2002

Effects of Misspecifying the First-Level Error Structure in Two-Level Models of Change.

John M. Ferron; Ron Dailey; Qing Yi

Computer simulation methods were used to examine the sensitivity of model fit criteria to misspecification of the first-level error structure in two-level models of change, and then to examine the impact of misspecification on estimates of the variance parameters, estimates of the fixed effects, and tests of the fixed effects. Fit criteria frequently failed to identify the correct model when series lengths were short. Misspecification led to substantially biased estimates of variance parameters. The estimates of the fixed effects, however, remained unbiased for most conditions, and the tests of fixed effects were robust to misspecification for most conditions. The problems in the fixed effects occurred when nonlinear growth trajectories were coupled with data that were unequally spaced by different amounts for different individuals.


Journal of Experimental Education | 2002

Statistical Power of Randomization Tests Used With Multiple-Baseline Designs

John M. Ferron; Chris Sentovich

Statistical power was estimated for 3 randomization tests used with multiple-baseline designs. In 1 test, participants were randomly assigned to baseline conditions; in the 2nd, intervention points were randomly assigned; and in the 3rd, the authors used both forms of random assignment. Power was studied for several series lengths (N = 10, 20, 30), several effect sizes (d = 0, 0.5, 1.0, 1.5, 2.0), and several levels of autocorrelation among the errors (ρ1 = 0, .1, .2, .3, .4, and .5). Power was found to be similar among the 3 tests. Power was low for effect sizes of 0.5 and 1.0 but was often adequate (> .80) for effect sizes of 1.5 and 2.0.
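One of the three tests, the version in which each participant's intervention point is randomly assigned from a set of candidate points, can be sketched as a Monte Carlo power study. This is an illustrative reconstruction, not the authors' code: the candidate points, series length, number of cases, and independent N(0,1) errors are assumptions.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(4)

def stat(data, points):
    # Mean across cases of (post-intervention mean - baseline mean).
    return np.mean([d[p:].mean() - d[:p].mean() for d, p in zip(data, points)])

def randomization_test(data, actual, candidates):
    """One-sided p-value: observed shift compared with every possible
    combination of candidate intervention points."""
    obs = stat(data, actual)
    ref = [stat(data, pts) for pts in product(candidates, repeat=len(data))]
    return np.mean([s >= obs for s in ref])

def power(d, n_cases=3, length=20, candidates=(8, 10, 12, 14),
          reps=500, alpha=.05):
    """Proportion of simulated experiments (level shift of d after the
    intervention, independent N(0,1) errors) with p <= alpha."""
    hits = 0
    for _ in range(reps):
        actual = rng.choice(candidates, size=n_cases)
        data = []
        for p in actual:
            y = rng.standard_normal(length)
            y[p:] += d                     # shift in level after intervention
            data.append(y)
        hits += randomization_test(data, actual, candidates) <= alpha
    return hits / reps

print("power at d=0:  ", power(0.0))
print("power at d=1.5:", power(1.5))
```

With 3 cases and 4 candidate points there are only 64 equally likely assignments, so the smallest attainable p-value is 1/64; the test is valid but its power depends heavily on the effect size, as the abstract reports.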


Journal of Experimental Child Psychology | 2003

Capacity, strategies, and metamemory: Tests of a three-factor model of memory development

Darlene DeMarie; John M. Ferron

Multiple measures of three of the factors (capacity, strategies, and metamemory) hypothesized to cause improvements in memory with age were obtained from 179 children in kindergarten to second grade (younger: ages 5-8) or third and fourth grade (older: ages 8-11) during nine sessions of testing. Confirmatory factor analysis was conducted separately for each age group. Results suggested that the fit of the three-factor model was statistically significantly better than that of a one-factor, general memory model for both age groups. However, the fit indices were borderline, and there was not sufficient evidence for a metamemory factor for younger children. The factors that influence memory performance may differ with age.


Journal of Experimental Education | 1996

The Power of Randomization Tests for Single-Case Phase Designs

John M. Ferron; Patrick Onghena

Monte Carlo methods were used to estimate the power of randomization tests used with single-case designs involving the random assignment of treatments to phases. The design studied involved 2 treatments and 6 phases. The power was studied for 6 standardized effect sizes (0, .2, .5, .8, 1.1, and 1.4), 4 levels of autocorrelation (1st order autocorrelation coefficients of -.3, 0, .3, and .6), and 5 different phase lengths (4, 5, 6, 7, and 8 observations). Power was estimated for each condition by simulating 10,000 experiments. The results showed an adequate level of power (> .80) when effect sizes were large (1.1 and 1.4), phase lengths exceeded 5, and autocorrelation was not negative.
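The design lends itself to a direct sketch: with 2 treatments and 6 phases (3 phases per treatment) there are C(6,3) = 20 equally likely assignments, so the randomization test compares the observed mean difference against all 20. The following minimal power simulation assumes a phase length of 5 and a one-sided shift alternative; it is an illustration of the test's logic, not the authors' simulation.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(5)

PHASES, LEN = 6, 5                                  # 6 phases, 5 obs each
ASSIGNMENTS = list(combinations(range(PHASES), 3))  # 20 ways to pick B phases

def stat(y, b_phases):
    b = np.repeat([i in b_phases for i in range(PHASES)], LEN)
    return y[b].mean() - y[~b].mean()

def one_p_value(d, rho):
    """Simulate one experiment (AR(1) errors, level shift d in the B
    phases) and return the one-sided randomization p-value."""
    e = np.zeros(PHASES * LEN)
    for i in range(1, e.size):
        e[i] = rho * e[i - 1] + rng.normal(0, 1)
    actual = ASSIGNMENTS[rng.integers(len(ASSIGNMENTS))]
    y = e + d * np.repeat([i in actual for i in range(PHASES)], LEN)
    ref = [stat(y, a) for a in ASSIGNMENTS]
    return np.mean([s >= stat(y, actual) for s in ref])

def power(d, rho=0.0, reps=400):
    return float(np.mean([one_p_value(d, rho) <= .05 for _ in range(reps)]))

print("power at d=0:  ", power(0.0))
print("power at d=1.4:", power(1.4))
```

Because the smallest attainable p-value over 20 assignments is exactly 1/20 = .05, the test rejects only when the actual assignment produces the largest statistic; at d = 0 this happens about 5% of the time, preserving the nominal Type I error rate.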


Journal of Experimental Education | 1995

Analyzing Single-Case Data: The Power of Randomization Tests

John M. Ferron; William B. Ware

Randomization tests have been proposed as a valid method for analyzing the data of single-case designs. In this study, the power of randomization tests was systematically examined for typical designs that rely on the random assignment of interventions within the sequence of observations. A 30-observation AB design, a 32-observation ABAB design, and a multiple-baseline AB design with 15 observations on each of four individuals were studied. Four levels of autocorrelation were considered, as well as six effect sizes, ranging from 0.0 to 1.4. For each combination of design, autocorrelation, and effect size, power was estimated by generating data for 1,000 experiments. The power estimates were generally found to be low.

Collaboration


John M. Ferron's top co-authors.

Top Co-Authors

Mariola Moeyaert (State University of New York System)
S. Natasha Beretvas (University of Texas at Austin)
Wim Van Den Noortgate (Katholieke Universiteit Leuven)
Jeffrey D. Kromrey (University of South Florida)
Maaike Ugille (Katholieke Universiteit Leuven)
Robert F. Dedrick (University of South Florida)
Bethany A. Bell (University of South Carolina)
Melinda R. Hess (University of South Florida)