Publication


Featured research published by James Algina.


Psychological Methods | 2003

Generalized Eta and Omega Squared Statistics: Measures of Effect Size for Some Common Research Designs

Stephen Olejnik; James Algina

The editorial policies of several prominent educational and psychological journals require that researchers report some measure of effect size along with tests for statistical significance. In analysis of variance contexts, this requirement might be met by using eta squared or omega squared statistics. Current procedures for computing these measures of effect often do not consider the effect that design features of the study have on the size of these statistics. Because research-design features can have a large effect on the estimated proportion of explained variance, the use of partial eta or omega squared can be misleading. The present article provides formulas for computing generalized eta and omega squared statistics, which provide estimates of effect size that are comparable across a variety of research designs.
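The contrast the abstract draws can be seen in the classical formulas. The sketch below is not the paper's generalized statistics (those require classifying factors as manipulated or measured); it only shows, with hypothetical sums of squares, how ordinary eta squared and partial eta squared diverge once a design includes an extra factor.

```python
# Classical vs. partial eta squared from ANOVA sums of squares.
# A minimal sketch, not the paper's generalized formulas.

def eta_squared(ss_effect, ss_total):
    """Proportion of total variance attributed to the effect."""
    return ss_effect / ss_total

def partial_eta_squared(ss_effect, ss_error):
    """Effect variance relative to effect-plus-error only."""
    return ss_effect / (ss_effect + ss_error)

# Hypothetical sums of squares from a two-factor design:
ss_a, ss_b, ss_error = 20.0, 60.0, 20.0
ss_total = ss_a + ss_b + ss_error

print(eta_squared(ss_a, ss_total))          # 0.2
print(partial_eta_squared(ss_a, ss_error))  # 0.5
```

Because partial eta squared drops the second factor's sum of squares from the denominator, the same effect looks much larger in the two-factor design, which is exactly the cross-design incomparability the generalized statistics are meant to remove.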


Review of Educational Research | 1978

Criterion-Referenced Testing and Measurement: A Review of Technical Issues and Developments

Ronald K. Hambleton; Hariharan Swaminathan; James Algina; Douglas Bill Coulson

Glaser (1963) and Popham and Husek (1969) were the first to introduce and to popularize the field of criterion-referenced testing. Their motive was to provide the kind of test score information needed to make a variety of individual and programmatic decisions arising in objectives-based instructional programs. Norm-referenced tests were seen as less than ideal for providing the desired kind of test score information. At present, students at all levels of education are taking criterion-


Journal of Educational and Behavioral Statistics | 2004

An Empirical Comparison of Statistical Models for Value-Added Assessment of School Performance

Carmen D. Tekwe; Randy L. Carter; Chang-Xing Ma; James Algina; Maurice E. Lucas; Jeffrey Roth; Mario Ariet; Thomas Fisher; Michael B. Resnick

Hierarchical Linear Models (HLM) have been used extensively for value-added analysis, adjusting for important student- and school-level covariates such as socioeconomic status. A recently proposed alternative, the Layered Mixed Effects Model (LMEM), also analyzes learning gains but ignores sociodemographic factors. Other features of LMEM, such as its ability to apportion credit for learning gains among multiple schools and its use of incomplete observations, make it appealing. A third model, appealing for its simplicity, is the Simple Fixed Effects Model (SFEM). Statistical and computing specifications are given for each of these models. The models were fitted to obtain value-added measures of school performance by grade and subject area, using a common data set with two years of test scores. We investigate the practical impact of differences among these models by comparing their value-added measures. The value-added measures obtained from the SFEM were highly correlated with those from the LMEM; thus, due to its simplicity, the SFEM is recommended over the LMEM. Results of comparisons of the SFEM with HLM were equivocal. Inclusion of student-level variables such as minority status and poverty leads to results that differ from those of the SFEM. Whether to adjust for such variables is perhaps the most important issue faced when developing a school accountability system: either inclusion or exclusion is likely to lead to a biased system, and which bias is most tolerable may depend on whether the system is to be a high-stakes one.
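The fixed-effects idea behind the simplest of the three models can be sketched in a few lines. This is a toy illustration, not the paper's specification or data: regress test-score gains on school indicator variables and read a value-added measure for each school off the coefficients (all names and numbers below are hypothetical).

```python
# Toy sketch of a simple fixed-effects value-added model: regress
# score gains on school dummies; the coefficients estimate each
# school's mean gain. Hypothetical data, not the study's.
import numpy as np

rng = np.random.default_rng(0)

# Three hypothetical schools, 50 students each, with true mean gains.
true_gain = {"A": 10.0, "B": 15.0, "C": 12.0}
schools = np.repeat(list(true_gain), 50)
gains = np.array([true_gain[s] for s in schools]) + rng.normal(0, 1, 150)

# Cell-means coding: one dummy column per school, no intercept,
# solved by ordinary least squares.
X = np.column_stack([(schools == s).astype(float) for s in true_gain])
beta, *_ = np.linalg.lstsq(X, gains, rcond=None)

for school, b in zip(true_gain, beta):
    print(school, round(b, 1))
```

The estimated coefficients recover the schools' mean gains; an accountability comparison would then contrast them, with the paper's central question being whether covariates such as poverty should enter the design matrix at all.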


Communications in Statistics - Simulation and Computation | 1998

A comparison of two approaches for selecting covariance structures in the analysis of repeated measurements

H. J. Keselman; James Algina; Rhonda K. Kowalchuk; Russell D. Wolfinger

The mixed model approach to the analysis of repeated measurements allows users to model the covariance structure of their data. That is, rather than using a univariate or a multivariate test statistic for analyzing effects (tests that assume a particular form for the covariance structure), the mixed model approach allows the data to determine the appropriate structure. Using the appropriate covariance structure should result in more powerful tests of the repeated measures effects, according to advocates of the mixed model approach. SAS's mixed model program, PROC MIXED (SAS Institute, 1996), provides users with two information criteria for selecting the ‘best’ covariance structure, those of Akaike (1974) and Schwarz (1978). Our study compared these likelihood-based criteria to see how effective they would be for detecting various population covariance structures. In particular, the criteria were compared in nonspherical repeated measures designs having equal/unequal group sizes and covariance matrices when data were both ...
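The two criteria the abstract refers to have simple closed forms. The sketch below (with hypothetical log-likelihoods, not values from the study) shows how each candidate covariance structure's maximized log-likelihood and parameter count combine, with the smallest criterion value indicating the preferred structure.

```python
# Akaike's (AIC) and Schwarz's (BIC) information criteria, the two
# selection rules compared in the study. Smaller is better.
import math

def aic(log_lik, n_params):
    return -2.0 * log_lik + 2.0 * n_params

def bic(log_lik, n_params, n_obs):
    return -2.0 * log_lik + n_params * math.log(n_obs)

# Hypothetical fits for n = 100 subjects: an unstructured matrix
# (many parameters) vs. compound symmetry (few parameters).
fits = {"unstructured": (-480.0, 10), "compound symmetry": (-484.0, 2)}
for name, (ll, k) in fits.items():
    print(name, aic(ll, k), round(bic(ll, k, 100), 2))
```

BIC's log(n) penalty punishes extra covariance parameters more heavily than AIC's constant 2, which is why the two criteria can disagree and why the study asks how often each one recovers the true population structure.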


Behavior Modification | 2006

Predicting Outcome in Parent-Child Interaction Therapy Success and Attrition

Branlyn E. Werba; Sheila M. Eyberg; Stephen R. Boggs; James Algina

This study explored predictors of treatment response and attrition in Parent-Child Interaction Therapy (PCIT). Participants were 99 families of 3- to 6-year-old children with disruptive behavior disorders. Multiple logistic regression was used to identify those pretreatment child, family, and accessibility factors that were predictive of success or attrition. For all study participants, waitlist group assignment and maternal age were the significant predictors of outcome. For treatment participants (study participants excluding those who dropped out after the initial evaluation but before treatment began), only maternal ratings of parenting stress and maternal inappropriate behavior during parent-child interactions were significant predictors of treatment outcome. These results suggest that for treatment studies of disruptive preschoolers, the benefits of using a waitlist control group may be outweighed by the disproportionate number of dropouts from this group. Once families begin PCIT, however, parent-related variables become salient in predicting treatment outcome.


Structural Equation Modeling | 2001

A Note on Estimating the Joreskog-Yang Model for Latent Variable Interaction Using LISREL 8.3.

James Algina; Bradley C. Moulder

Kenny and Judd (1984) developed a latent variable interaction model for observed variables centered around their population means. They estimated the model by using a covariance matrix calculated from sample-mean-centered variables and products of these variables. Subsequently, Jöreskog and Yang (1996) identified the need to include intercepts for the measurement and structural equations and estimated the model by using a covariance matrix calculated from noncentered observed variables and products of these variables, and means of the observed variables and of the products of noncentered variables. Evidence is presented that the Jöreskog-Yang procedure for estimating the Kenny-Judd interaction model is subject to severe convergence problems when implemented in LISREL 8.3 and means for the indicators of the latent exogenous variables are nonzero. An alternative procedure is presented that solves the convergence problem and provides consistent estimators of the parameters.


Structural Equation Modeling | 2002

Comparison of Methods for Estimating and Testing Latent Variable Interactions

Bradley C. Moulder; James Algina

Structural equation modeling methods for estimating and testing hypotheses about an interaction between continuous variables were investigated. The methods were (a) Bollen's (1996) 2-stage least squares (TSLS) method, Ping's (1996) 2-step maximum likelihood (ML) method, and Jaccard and Wan's (1995) ML method for the Kenny-Judd model (Kenny & Judd, 1984); (b) a 2-step ML procedure and ML estimation of the Jöreskog-Yang model (Jöreskog & Yang, 1996); and (c) ML estimation of a revised Jöreskog-Yang model. The TSLS procedure exhibited more bias and lower power than the other methods. Under ML estimation of the Jöreskog-Yang model, Type I error rates were not well controlled when robust standard errors were used. Among the remaining procedures, the Jaccard-Wan procedure and ML estimation of the revised Jöreskog-Yang procedure were most effective, with the latter having some small advantages over the former.


Psychological Methods | 2008

A generally robust approach for testing hypotheses and setting confidence intervals for effect sizes.

H. J. Keselman; James Algina; Lisa M. Lix; Rand R. Wilcox; Kathleen N. Deering

Standard least squares analysis of variance methods suffer from poor power under arbitrarily small departures from normality and fail to control the probability of a Type I error when standard assumptions are violated. This article describes a framework for robust estimation and testing that uses trimmed means with an approximate degrees of freedom heteroscedastic statistic for independent and correlated groups designs in order to achieve robustness to the biasing effects of nonnormality and variance heterogeneity. The authors describe a nonparametric bootstrap methodology that can provide improved Type I error control. In addition, the authors indicate how researchers can set robust confidence intervals around a robust effect size parameter estimate. In an online supplement, the authors use several examples to illustrate the application of an SAS program to implement these statistical methods.
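The building blocks of the robust approach described above are the 20% trimmed mean and the 20% Winsorized variance. The sketch below implements the usual textbook definitions with NumPy; it is not the article's SAS program, and the data are invented to show why these estimators resist a gross outlier.

```python
# 20% trimmed mean and 20% Winsorized variance: the robust
# location and scale estimators used in place of the ordinary
# mean and variance. A sketch under the standard definitions.
import numpy as np

def trimmed_mean(x, prop=0.2):
    """Mean after dropping the lowest and highest prop of values."""
    x = np.sort(np.asarray(x, dtype=float))   # sorted copy
    g = int(prop * len(x))                    # trimmed per tail
    return x[g:len(x) - g].mean()

def winsorized_variance(x, prop=0.2):
    """Sample variance after pulling each tail in to the cutoffs."""
    x = np.sort(np.asarray(x, dtype=float))   # sorted copy
    g = int(prop * len(x))
    x[:g] = x[g]                              # replace low tail
    x[len(x) - g:] = x[len(x) - g - 1]        # replace high tail
    return x.var(ddof=1)

data = [2, 3, 3, 4, 4, 5, 5, 6, 6, 40]       # one gross outlier
print(trimmed_mean(data))                     # 4.5
print(np.mean(data))                          # 7.8: raw mean is dragged up
```

The heteroscedastic (Welch-type) statistics in the article then standardize differences between trimmed means by standard errors built from these Winsorized variances, which is what buys robustness to nonnormality and variance heterogeneity.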


Psychological Methods | 2005

An Alternative to Cohen's Standardized Mean Difference Effect Size: A Robust Parameter and Confidence Interval in the Two Independent Groups Case.

James Algina; H. J. Keselman; Randall D. Penfield

The authors argue that a robust version of Cohen's effect size constructed by replacing population means with 20% trimmed means and the population standard deviation with the square root of a 20% Winsorized variance is a better measure of population separation than is Cohen's effect size. The authors investigated coverage probability for confidence intervals for the new effect size measure. The confidence intervals were constructed by using the noncentral t distribution and the percentile bootstrap. Over the range of distributions and effect sizes investigated in the study, coverage probability was better for the percentile bootstrap confidence interval.
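The general idea can be sketched as follows: standardize the difference in 20% trimmed means by a pooled 20% Winsorized standard deviation, and attach a percentile-bootstrap interval. Note this is only an illustration of the construction, with invented data; the published measure additionally rescales by a constant so that it equals Cohen's d under normality, a detail omitted here.

```python
# Sketch of a robust standardized mean difference (trimmed means
# over a pooled Winsorized SD) with a percentile-bootstrap CI.
# Illustrative only; omits the rescaling used in the published measure.
import numpy as np

def trim_mean(x, prop=0.2):
    x = np.sort(np.asarray(x, dtype=float))
    g = int(prop * len(x))
    return x[g:len(x) - g].mean()

def winsor_var(x, prop=0.2):
    x = np.sort(np.asarray(x, dtype=float))   # sorted copy
    g = int(prop * len(x))
    x[:g] = x[g]
    x[len(x) - g:] = x[len(x) - g - 1]
    return x.var(ddof=1)

def robust_d(x, y):
    pooled = (len(x) - 1) * winsor_var(x) + (len(y) - 1) * winsor_var(y)
    pooled /= len(x) + len(y) - 2
    return (trim_mean(x) - trim_mean(y)) / np.sqrt(pooled)

rng = np.random.default_rng(1)
x = rng.normal(0.8, 1.0, 60)   # hypothetical treatment scores
y = rng.normal(0.0, 1.0, 60)   # hypothetical control scores

# Percentile bootstrap: resample each group with replacement,
# recompute, and take the middle 95% of the bootstrap values.
boot = [robust_d(rng.choice(x, len(x)), rng.choice(y, len(y)))
        for _ in range(2000)]
lo, hi = np.quantile(boot, [0.025, 0.975])
print(round(robust_d(x, y), 2), round(lo, 2), round(hi, 2))
```

The percentile interval needs no normality assumption for the sampling distribution of the estimate, which is consistent with the study's finding that it held its coverage better than the noncentral-t interval across nonnormal distributions.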


Multivariate Behavioral Research | 2003

Sample Size Tables for Correlation Analysis with Applications in Partial Correlation and Multiple Regression Analysis.

James Algina; Stephen Olejnik

Tables for selecting sample size in correlation studies are presented. Some of the tables allow selection of sample size so that r (or r², depending on the statistic the researcher plans to interpret) will be within a target interval around the population parameter with probability .95. The intervals are ±.05, ±.10, ±.15, and ±.20 around the population parameter. Other tables allow selection of sample size to meet a target for power when conducting a .05 test of the null hypothesis that a correlation coefficient is zero. Applications of the tables in partial correlation and multiple regression analyses are discussed. SAS and SPSS computer programs are made available to permit researchers to select sample size for levels of accuracy, probabilities, and parameter values and for Type I error rates other than those used in constructing the tables.
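For the power-based entries, the standard Fisher-z approximation gives a closed-form sample size; the sketch below shows that textbook formula (it is not the article's table-construction code, and the accuracy-based entries require the sampling distribution of r itself rather than this shortcut).

```python
# Fisher-z approximation for the sample size needed so that a
# two-sided test of rho = 0 at level alpha reaches a target power
# against a population correlation r. atanh(r) is Fisher's z,
# whose sampling variance is approximately 1/(n - 3).
import math
from statistics import NormalDist

def n_for_correlation(r, alpha=0.05, power=0.80):
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)     # two-sided critical value
    z_b = z.inv_cdf(power)             # power quantile
    return math.ceil(((z_a + z_b) / math.atanh(r)) ** 2 + 3)

print(n_for_correlation(0.3))   # 85
print(n_for_correlation(0.5))   # 30
```

Smaller population correlations demand sharply larger samples because the required n grows with the inverse square of atanh(r); the article's SAS and SPSS programs extend this to accuracy targets, nonzero nulls, and other alpha levels.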

Collaboration


James Algina's top co-authors include:

Rand R. Wilcox, University of Southern California

Kevin S. Sutherland, Virginia Commonwealth University

T. C. Oshima, Georgia State University