Hariharan Swaminathan
University of Connecticut
Publications
Featured research published by Hariharan Swaminathan.
Applied Psychological Measurement | 1993
H. Jane Rogers; Hariharan Swaminathan
The Mantel-Haenszel (MH) procedure is sensitive to only one type of differential item functioning (DIF): it is not designed to detect DIF that has a nonuniform effect across trait levels. By generalizing the model underlying the MH procedure, a more general DIF detection procedure was developed (Swaminathan & Rogers, 1990). This study compared the performance of this procedure, the logistic regression (LR) procedure, to that of the MH procedure in detecting uniform and nonuniform DIF, using a simulation study that examined the distributional properties of the LR and MH test statistics and the relative power of the two procedures. Both the LR and MH test statistics followed their expected distributions under nearly all conditions; the LR test statistic departed from its expected distribution only for very difficult, highly discriminating items. The LR procedure was more powerful than the MH procedure for detecting nonuniform DIF and as powerful for detecting uniform DIF.
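The LR procedure described above fits nested logistic models for each item (ability only; plus a group term; plus a group-by-ability interaction) and compares them with likelihood-ratio tests: the group term captures uniform DIF, the interaction captures nonuniform DIF. A minimal sketch in Python; the simulated data, function names, and plain maximum-likelihood fitting are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

def neg_loglik(beta, X, y):
    # negative log-likelihood of a logistic regression model
    z = X @ beta
    return np.sum(np.logaddexp(0.0, z) - y * z)

def deviance(X, y):
    # fit by maximum likelihood and return -2 * log-likelihood
    res = minimize(neg_loglik, np.zeros(X.shape[1]), args=(X, y), method="BFGS")
    return 2.0 * res.fun

def lr_dif_test(u, theta, group):
    """Likelihood-ratio DIF tests for one item: uniform (group main effect)
    and nonuniform (group-by-ability interaction)."""
    ones = np.ones_like(theta)
    d0 = deviance(np.column_stack([ones, theta]), u)
    d1 = deviance(np.column_stack([ones, theta, group]), u)
    d2 = deviance(np.column_stack([ones, theta, group, theta * group]), u)
    return {"uniform": chi2.sf(d0 - d1, 1), "nonuniform": chi2.sf(d1 - d2, 1)}

# illustrative data: an item whose discrimination differs between groups
rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n).astype(float)   # 0 = reference, 1 = focal
theta = rng.normal(0.0, 1.0, n)               # trait level
a = np.where(group == 0, 1.5, 0.6)            # group-specific slope -> nonuniform DIF
u = (rng.random(n) < 1.0 / (1.0 + np.exp(-a * theta))).astype(float)
pvals = lr_dif_test(u, theta, group)
```

Because the models are nested, each test statistic is referred to a chi-square distribution with one degree of freedom; a small interaction p-value flags nonuniform DIF.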
Review of Educational Research | 1978
Ronald K. Hambleton; Hariharan Swaminathan; James Algina; Douglas Bill Coulson
Glaser (1963) and Popham and Husek (1969) were the first to introduce and popularize the field of criterion-referenced testing. Their motive was to provide the kind of test score information needed to make the variety of individual and programmatic decisions arising in objectives-based instructional programs. Norm-referenced tests were seen as less than ideal for providing the desired kind of test score information. At present, students at all levels of education are taking criterion-referenced tests.
Psychometrika | 1985
Hariharan Swaminathan; Janice A. Gifford
A Bayesian procedure is developed for the estimation of parameters in the two-parameter logistic item response model. Joint modal estimates of the parameters are obtained and procedures for the specification of prior information are described. Through simulation studies it is shown that Bayesian estimates of the parameters are superior to maximum likelihood estimates in the sense that they are (a) more meaningful since they do not drift out of range, and (b) more accurate in that they result in smaller mean squared differences between estimates and true values.
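As an illustration of why joint modal (MAP) estimates stay in range while maximum likelihood estimates can drift, here is a sketch for a single two-parameter logistic item with abilities treated as known. The normal priors, data, and function names are illustrative assumptions, not the paper's exact specification:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_posterior(params, theta, u, bayes=True):
    log_a, b = params                            # work with log a so a > 0
    a = np.exp(log_a)
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))   # 2PL item response function
    eps = 1e-9
    ll = np.sum(u * np.log(p + eps) + (1 - u) * np.log(1 - p + eps))
    if bayes:
        # illustrative priors: log a ~ N(0, 0.5^2), b ~ N(0, 2^2)
        ll += -0.5 * (log_a / 0.5) ** 2 - 0.5 * (b / 2.0) ** 2
    return -ll

def estimate(theta, u, bayes=True):
    # joint modal estimate: mode of the (log) posterior over (log a, b)
    res = minimize(neg_log_posterior, x0=np.zeros(2), args=(theta, u, bayes),
                   method="Nelder-Mead")
    return np.exp(res.x[0]), res.x[1]            # (a, b)

# simulate responses from a 2PL item with a = 1.2, b = 0.5
rng = np.random.default_rng(1)
theta = rng.normal(0.0, 1.0, 500)
p_true = 1.0 / (1.0 + np.exp(-1.2 * (theta - 0.5)))
u = (rng.random(500) < p_true).astype(float)
a_map, b_map = estimate(theta, u, bayes=True)
```

The prior terms penalize extreme slopes and difficulties, which is what keeps the modal estimates from wandering out of range when the data for an item are sparse or degenerate.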
Applied Psychological Measurement | 1996
Pankaja Narayanan; Hariharan Swaminathan
This study compared three procedures, the Mantel-Haenszel (MH), the simultaneous item bias (SIB), and the logistic regression (LR) procedures, with respect to their Type I error rates and power to detect nonuniform differential item functioning (DIF). Data were simulated to reflect a variety of conditions: the factors manipulated included sample size, ability distribution differences between the focal and reference groups, proportion of DIF items in the test, DIF effect sizes, and type of item, for a total of 384 conditions. The SIB and LR procedures were equally powerful in detecting nonuniform DIF under most conditions. The MH procedure was not very effective in identifying nonuniform DIF items with disordinal interactions. Type I error rates were within the expected limits for the MH procedure but higher than expected for the SIB and LR procedures; the SIB results showed an overall increase of approximately 1% over the LR results. Index terms: differential item functioning, logistic regression statistic, Mantel-Haenszel statistic, nondirectional DIF, simultaneous item bias statistic, SIBTEST, Type I error rate, unidirectional DIF.
Applied Psychological Measurement | 1994
Pankaja Narayanan; Hariharan Swaminathan
Two nonparametric procedures for detecting differential item functioning (DIF), the Mantel-Haenszel (MH) procedure and the simultaneous item bias (SIB) procedure, were compared with respect to their Type I error rates and power. Data were simulated to reflect conditions varying in sample size, ability distribution differences between the focal and reference groups, proportion of DIF items in the test, DIF effect sizes, and type of item; 1,296 conditions were studied. The SIB and MH procedures were equally powerful in detecting uniform DIF for equal ability distributions, and the SIB procedure was more powerful than the MH procedure for unequal ability distributions. Both procedures had sufficient power to detect DIF with a sample size of 300 in each group. Ability distribution differences did not have a significant effect on the SIB procedure but did affect the MH procedure, which matters because ability distribution differences between groups are often found in practice. Type I error rates for the MH statistic were well within the nominal limits, whereas they were slightly higher than expected for the SIB statistic. Detection rates of the two procedures were also compared with respect to the various factors. Index terms: differential item functioning, Mantel-Haenszel statistic, power, simultaneous item bias statistic, SIBTEST, Type I error rates.
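The MH procedure studied here forms a 2x2 table (group by correct/incorrect) at each total-score level and pools the tables into a single continuity-corrected chi-square statistic. A compact sketch; the simulated Rasch-type data and function name are illustrative assumptions:

```python
import numpy as np
from scipy.stats import chi2

def mantel_haenszel_dif(u, group, score):
    """Mantel-Haenszel DIF chi-square (continuity-corrected), stratifying on
    total score; group: 0 = reference, 1 = focal."""
    A = EA = VA = 0.0
    for k in np.unique(score):
        s = score == k
        ref, foc = s & (group == 0), s & (group == 1)
        nR, nF = ref.sum(), foc.sum()
        n = nR + nF
        if n < 2 or nR == 0 or nF == 0:
            continue                      # stratum carries no information
        m1 = u[s].sum()                   # correct responses in the stratum
        m0 = n - m1
        A += u[ref].sum()                 # correct in the reference group
        EA += nR * m1 / n
        VA += nR * nF * m1 * m0 / (n ** 2 * (n - 1))
    stat = (abs(A - EA) - 0.5) ** 2 / VA
    return stat, chi2.sf(stat, 1)

# illustrative data: Rasch-type responses; the studied item is 0.8 logits
# harder for the focal group (uniform DIF)
rng = np.random.default_rng(2)
n = 1500
group = rng.integers(0, 2, n)
theta = rng.normal(0.0, 1.0, n)
b = rng.normal(0.0, 1.0, 19)
anchor = (rng.random((n, 19)) < 1.0 / (1.0 + np.exp(-(theta[:, None] - b)))).astype(int)
b_item = np.where(group == 0, 0.0, 0.8)
u = (rng.random(n) < 1.0 / (1.0 + np.exp(-(theta - b_item)))).astype(int)
score = anchor.sum(axis=1) + u            # stratify on total score incl. studied item
stat, p = mantel_haenszel_dif(u, group, score)
```

Because the statistic conditions on the observed total score rather than on a latent trait estimate, it is nonparametric in the sense the abstract uses, and it is referred to a chi-square distribution with one degree of freedom.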
Education and Treatment of Children | 2012
Robert H. Horner; Hariharan Swaminathan; George Sugai; Keith Smolkowski
Single-case research designs provide a rigorous research methodology for documenting experimental control. If single-case methods are to gain wider application, however, a need exists to define more clearly (a) the logic of single-case designs, (b) the process and decision rules for visual analysis, and (c) an accepted process for integrating visual analysis and statistical analysis. Considerations for meeting these three needs are discussed.
Journal of School Psychology | 2011
Daniel M. Maggin; Hariharan Swaminathan; Helen J. Rogers; Breda V. O'Keeffe; George Sugai; Robert H. Horner
A new method for deriving effect sizes from single-case designs is proposed. The strategy is applicable to small-sample time-series data with autoregressive errors. The method uses Generalized Least Squares (GLS) to model the autocorrelation of the data and estimate regression parameters to produce an effect size that represents the magnitude of treatment effect from baseline to treatment phases in standard deviation units. In this paper, the method is applied to two published examples using common single case designs (i.e., withdrawal and multiple-baseline). The results from these studies are described, and the method is compared to ten desirable criteria for single-case effect sizes. Based on the results of this application, we conclude with observations about the use of GLS as a support to visual analysis, provide recommendations for future research, and describe implications for practice.
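The GLS idea can be sketched with a Cochrane-Orcutt style AR(1) correction: estimate the lag-1 autocorrelation from OLS residuals, quasi-difference the series, refit, and scale the phase coefficient by the implied level standard deviation. This is an illustrative simplification of the approach, not the exact estimator of the paper:

```python
import numpy as np

def ar1_gls_effect_size(y, phase):
    """AR(1) GLS (Cochrane-Orcutt style) effect size for an AB design:
    treatment coefficient divided by the implied level SD."""
    X = np.column_stack([np.ones_like(y), phase.astype(float)])
    beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ beta_ols
    rho = np.sum(e[1:] * e[:-1]) / np.sum(e[:-1] ** 2)     # lag-1 autocorrelation
    y_s, X_s = y[1:] - rho * y[:-1], X[1:] - rho * X[:-1]  # quasi-difference
    beta = np.linalg.lstsq(X_s, y_s, rcond=None)[0]
    resid = y_s - X_s @ beta
    sigma_e = np.sqrt(np.sum(resid ** 2) / (len(y_s) - X_s.shape[1]))
    level_sd = sigma_e / np.sqrt(1.0 - rho ** 2)           # stationary AR(1) SD
    return beta[1] / level_sd, rho

# illustrative AB data: baseline mean 2, treatment mean 5, AR(1) errors
rng = np.random.default_rng(3)
phase = np.r_[np.zeros(12), np.ones(12)]
e = np.zeros(24)
for t in range(1, 24):
    e[t] = 0.3 * e[t - 1] + rng.normal(0.0, 1.0)
y = 2.0 + 3.0 * phase + e
es, rho_hat = ar1_gls_effect_size(y, phase)
```

Modeling the autocorrelation before standardizing is the point: ignoring it would misstate the standard deviation unit and, in inferential use, the standard error of the phase effect.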
Applied Psychological Measurement | 1996
John Hattie; Krzysztof Krakowski; H. Jane Rogers; Hariharan Swaminathan
A simulation study was conducted to evaluate the dependability of Stout's T index of unidimensionality as used in his DIMTEST procedure. DIMTEST was found to provide dependable indications of unidimensionality, to be reasonably robust, and to allow a practical demarcation between one and many dimensions. The procedure was not affected by the method used to identify the initial subset of unidimensional items. It was, however, sensitive to whether the multidimensional data arose from a compensatory model or a partially compensatory model. DIMTEST failed when the matrix of tetrachoric correlations was non-Gramian and hence is not appropriate in such cases.
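The failure case noted above arises because a matrix assembled from pairwise tetrachoric estimates need not be Gramian (positive semidefinite). A quick eigenvalue check illustrates the condition; the example matrix is an illustrative assumption, not data from the study:

```python
import numpy as np

def is_gramian(R, tol=1e-8):
    """A symmetric correlation matrix is Gramian iff all eigenvalues are >= 0
    (up to a small numerical tolerance)."""
    return bool(np.all(np.linalg.eigvalsh(R) >= -tol))

# entries that are plausible pairwise but jointly impossible -> non-Gramian
R_bad = np.array([[ 1.0,  0.9, -0.9],
                  [ 0.9,  1.0,  0.9],
                  [-0.9,  0.9,  1.0]])
```

A non-Gramian matrix corresponds to no real data set, which is why factoring-based procedures break down on it.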
Journal of Educational and Behavioral Statistics | 1982
Hariharan Swaminathan; Janice A. Gifford
Bayesian estimation procedures based on a hierarchical model for estimating parameters in the Rasch model are described. Through simulation studies it is shown that the Bayesian procedure is superior to the maximum likelihood procedure in that the estimates are (a) more accurate, at least in small samples; and (b) meaningful in that parameters corresponding to perfect item and ability responses can be estimated.
New Horizons in Testing: Latent Trait Test Theory and Computerized Adaptive Testing | 1983
Hariharan Swaminathan; Janice A. Gifford
This chapter discusses a study to investigate the efficiency of the Urry procedure and the maximum likelihood procedure to estimate parameters in the three-parameter model, to study the properties of the estimators, and to provide some guidelines regarding the conditions under which they should be employed. In particular, the issues investigated were (1) the accuracy of the two estimation procedures; (2) the relations among the number of items, examinees, and the accuracy of estimation; (3) the effect of the distribution of ability on the estimates of item and ability parameters; and (4) the statistical properties, such as bias and consistency, of the estimators. To investigate the issues mentioned above, artificial data were generated according to the three-parameter logistic model using the DATGEN program of Hambleton and Rovinelli. Data were generated to simulate various testing situations by varying the test length, the number of examinees, and the ability distribution of the examinees. In the Urry estimation procedure, the relationships that exist for item discrimination and item difficulty between the latent trait theory parameters and the classical item parameters are exploited. These relationships are derived under the assumption that ability is normally distributed and that the item characteristic curve is the normal ogive. To study how departures from the assumption of normally distributed abilities affect the Urry procedure, three ability distributions were considered: normal, uniform, and negatively skewed.