
Publication


Featured research published by Prathiba Natesan.


Applied Psychological Measurement | 2012

Recovery of Graded Response Model Parameters: A Comparison of Marginal Maximum Likelihood and Markov Chain Monte Carlo Estimation

Vincent Kieftenbeld; Prathiba Natesan

Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and sample sizes. Sample size and test length explained the largest amount of variance in item and person parameter estimates, respectively. There was little difference in item parameter recovery between MML and MCMC in samples with 300 or more respondents. MCMC recovered some item threshold parameters better in samples with 75 or 150 respondents. Bias in threshold parameter estimates depended on the generating value and the type of threshold. Person parameters were comparable between MCMC and MML/expected a posteriori for all test lengths.
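
For reference, the model whose parameters are being recovered is Samejima's graded response model. In its common two-parameter form, the probability that respondent $i$ reaches at least category $k$ of item $j$, and the resulting category probability, are (standard notation, not taken from the article itself):

$$P(X_{ij} \ge k \mid \theta_i) = \frac{1}{1 + \exp[-a_j(\theta_i - b_{jk})]}, \qquad P(X_{ij} = k \mid \theta_i) = P(X_{ij} \ge k \mid \theta_i) - P(X_{ij} \ge k + 1 \mid \theta_i),$$

where $a_j$ is the discrimination of item $j$ and $b_{jk}$ is the threshold for category $k$; these thresholds are the parameters whose recovery differed between MML and MCMC at the smaller sample sizes.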


Educational and Psychological Measurement | 2007

Extending Improvement-Over-Chance I-Index Effect Size Simulation Studies to Cover Some Small-Sample Cases

Prathiba Natesan; Bruce Thompson

All effect sizes are sensitive to design flaws and the failure to meet analytic assumptions. But some effect sizes appear to be more robust to assumption violations (e.g., homogeneity of variance). The present study extended prior Monte Carlo research by exploring the robustness of group overlap I indices at the relatively small sample sizes used in some research. I effects are statistically appealing because these indices can be applied across (a) both univariate and multivariate analyses and (b) conditions of either variance homogeneity or variance heterogeneity.
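
The I index referenced here is, in its common formulation (Huberty's improvement-over-chance index), the proportional improvement of a classification rule's hit rate over chance:

$$I = \frac{H_o - H_e}{1 - H_e},$$

where $H_o$ is the observed proportion of correctly classified cases and $H_e$ is the proportion expected under chance assignment; $I = 0$ indicates no improvement over chance and $I = 1$ indicates perfect classification. (This is the standard definition; the article's exact notation may differ.)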


Journal of Psychoeducational Assessment | 2011

An Item Response Theory Analysis of the Mathematics Teaching Efficacy Beliefs Instrument

Vincent Kieftenbeld; Prathiba Natesan; Colleen M. Eddy

The mathematics teaching efficacy beliefs of preservice elementary teachers have been the subject of several studies. A widely used measure in these studies is the Mathematics Teaching Efficacy Beliefs Instrument (MTEBI). The present study provides a detailed analysis of the psychometric properties of the MTEBI using Bayesian item response theory. We discuss local dependence between item pairs, psychometric quality of the items, validity of the scoring procedure, and measurement accuracy for teachers with different efficacy levels. Our findings suggest that in its present form, the test reliability of the MTEBI may not be as high as assumed to date. The scale, wording, and placement of the items need revision. Moreover, additional items need to be constructed to measure below average levels of efficacy more accurately. Ordering the items according to difficulty, we describe the structure of mathematics teaching efficacy beliefs and draw some implications for mathematics teacher educators.


Educational and Psychological Measurement | 2010

Bayesian Estimation of Graded Response Multilevel Models Using Gibbs Sampling: Formulation and Illustration

Prathiba Natesan; Christine A. Limbers; James W. Varni

The present study presents the formulation of graded response models in the multilevel framework (as nonlinear mixed models) and demonstrates their use in estimating item parameters and investigating group-level effects of specific covariates using Bayesian estimation. The graded response multilevel model (GRMM) combines Tuerlinckx and Wang's formulation of graded response models with the discrimination parameter fixed at one for all items and Rijmen and Briggs's formulation of two-parameter models, yielding graded response models with item-specific discrimination parameters. Beyond formulating GRMMs, the contributions of the present study include providing a meeting point between psychometrics and statistics, overcoming the Neyman-Scott problem by using Bayesian estimation, estimating the abilities of persons with extreme scores, and demonstrating general-purpose software for estimating item response theory parameters. Data from the emotional functioning scale of the PedsQL 4.0 Generic Core Scales database, covering 11,158 healthy and chronically ill children and adolescents, were used to illustrate the model. Item parameter estimates from WinBUGS using Bayesian priors and from Multilog were compared for the GRMM and the ordinary graded response model, respectively.
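
A minimal sketch of the two-level structure described above, in my own notation rather than the article's: for person $p$ in group $j$ responding to item $i$ in category $k$,

$$P(Y_{pji} \ge k \mid \theta_{pj}) = \frac{1}{1 + \exp[-a_i(\theta_{pj} - b_{ik})]}, \qquad \theta_{pj} = \boldsymbol{\gamma}^{\top}\mathbf{x}_j + u_j + \varepsilon_{pj}, \quad u_j \sim N(0, \tau^2),$$

so the item-specific discriminations $a_i$ sit inside a nonlinear mixed model whose latent trait carries group-level covariates $\mathbf{x}_j$ and a group random effect $u_j$.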


Frontiers in Psychology | 2016

Bayesian Prior Choice in IRT Estimation Using MCMC and Variational Bayes

Prathiba Natesan; Ratna Nandakumar; Thomas P. Minka; Jonathan D. Rubright

This study investigated the impact of three prior distributions (matched, standard vague, and hierarchical) on parameter recovery in Bayesian estimation of one- and two-parameter IRT models. Two Bayesian estimation methods were used: Markov chain Monte Carlo (MCMC) and the relatively new variational Bayes (VB). Conditional maximum likelihood (CML) and marginal maximum likelihood (MML) estimates served as baselines for comparison. Vague priors produced large errors or convergence issues and are not recommended. For both MCMC and VB, the hierarchical and matched priors showed the lowest root mean squared errors (RMSEs) for ability estimates; RMSEs of difficulty estimates were similar across estimation methods. For standard errors (SEs), MCMC with hierarchical priors displayed the largest values across most conditions, while SEs from VB estimation were among the lowest in all but one case. Overall, VB-hierarchical, VB-matched, and MCMC-matched performed best. VB with hierarchical priors is recommended for its accuracy and its cost and, consequently, time effectiveness.
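
To make the role of the prior concrete, the following is a minimal NumPy sketch, not the study's code: random-walk Metropolis estimation of a single Rasch ability under a matched N(0, 1) prior versus a vague N(0, 100) prior. The item difficulties, true ability, and tuning constants are all illustrative values.

```python
import numpy as np

rng = np.random.default_rng(7)

def rasch_loglik(theta, responses, b):
    """Log-likelihood of dichotomous responses under a Rasch model."""
    p = 1.0 / (1.0 + np.exp(-(theta - b)))
    return np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

def metropolis_theta(responses, b, prior_sd, n_iter=5000, step=0.5):
    """Random-walk Metropolis sampler for one ability parameter
    with a N(0, prior_sd^2) prior on theta."""
    theta = 0.0
    draws = np.empty(n_iter)
    for t in range(n_iter):
        prop = theta + step * rng.standard_normal()
        # Log posterior ratio: likelihood term plus Gaussian prior term.
        log_ratio = (rasch_loglik(prop, responses, b) - 0.5 * (prop / prior_sd) ** 2) \
                  - (rasch_loglik(theta, responses, b) - 0.5 * (theta / prior_sd) ** 2)
        if np.log(rng.random()) < log_ratio:
            theta = prop
        draws[t] = theta
    return draws

# Simulated 10-item test with true theta = 1.0 (illustrative values only).
b = np.linspace(-2, 2, 10)
true_theta = 1.0
responses = (rng.random(10) < 1.0 / (1.0 + np.exp(-(true_theta - b)))).astype(float)

for prior_sd in (1.0, 10.0):  # matched N(0, 1) vs. vague N(0, 100) prior
    draws = metropolis_theta(responses, b, prior_sd)[1000:]  # drop burn-in
    print(f"prior sd={prior_sd}: posterior mean={draws.mean():.2f}, sd={draws.std():.2f}")
```

With only ten items, the likelihood is weak and the vague prior yields a visibly more diffuse posterior, which is the mechanism behind the large errors the study reports for vague priors.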


Frontiers in Psychology | 2015

Comparing interval estimates for small sample ordinal CFA models

Prathiba Natesan

Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased; this can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small-sample data. Six sample sizes, three factor correlations, and two factor score distributions (multivariate normal and multivariate mildly skewed) were studied, along with two Bayesian prior specifications, informative and relatively less informative. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the statistical uncertainty that comes with the data (e.g., a small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing the coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research.
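
As an illustration of the kind of coverage-and-bias audit the study argues for, the sketch below simulates small-sample bivariate normal data and tallies how often a nominal 95% interval captures the true correlation, and on which side the misses fall. The interval used here is a plain Fisher-z interval for a Pearson correlation, an assumption of this sketch, not one of the RML, AGLS, or Bayesian estimators actually compared in the study.

```python
import numpy as np

rng = np.random.default_rng(11)

def fisher_ci(r, n, z=1.96):
    """Nominal 95% CI for a correlation via the Fisher z transformation."""
    zr = np.arctanh(r)
    se = 1.0 / np.sqrt(n - 3)
    return np.tanh(zr - z * se), np.tanh(zr + z * se)

def coverage(rho=0.5, n=30, reps=2000):
    """Fraction of intervals containing the true correlation, plus the
    fraction of intervals lying entirely above it (directional bias)."""
    hits = misses_above = 0
    cov = np.array([[1.0, rho], [rho, 1.0]])
    for _ in range(reps):
        x = rng.multivariate_normal([0.0, 0.0], cov, size=n)
        r = np.corrcoef(x[:, 0], x[:, 1])[0, 1]
        lo, hi = fisher_ci(r, n)
        if lo <= rho <= hi:
            hits += 1
        elif lo > rho:
            misses_above += 1
    return hits / reps, misses_above / reps

cov_rate, above = coverage()
print(f"coverage={cov_rate:.3f}, misses entirely above true value={above:.3f}")
```

Comparing the empirical coverage to the nominal 95% level, and the split of misses above versus below the true value, is exactly the diagnostic the abstract says a standard error alone cannot provide.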


International Journal of Multiple Research Approaches | 2011

Validity of the cultural awareness and beliefs inventory of urban teachers: A parallel mixed methods study

Prathiba Natesan; Gwendolyn C Webb-Hasan; Norvella Carter; Patricia Walter

As the United States strives to meet the challenges of improving the academic achievement of African American students in large urban school districts, researchers are beginning to examine cultural awareness and beliefs of urban teachers. The present study used a parallel mixed methods design to examine the score-validity and score-reliability of a cultural awareness and beliefs inventory (CABI). This 46-item inventory measured the perceptions of 1,253 urban teachers. Specifically, the CABI measured urban teachers’ cultural awareness and beliefs about their African American students. Construct validity was addressed by establishing internal consistency and content-related, structural, and substantive validities derived from analyses of two data strands. Implications of the study for policy makers, administrators, and educators, and directions for future research are provided.


Journal of Psychoeducational Assessment | 2013

Measuring Urban Teachers’ Beliefs About African American Students: A Psychometric Analysis

Prathiba Natesan; Vincent Kieftenbeld

Understanding urban teachers’ beliefs about African American students has become important because (a) many teachers are reluctant to teach students from other cultures, and (b) most teachers are European American. To construct a psychometrically sound measure of teacher beliefs, the authors investigate the measurement properties of a teacher beliefs factor, selected from an inventory of items that purported to measure urban teachers’ cultural awareness and beliefs. Measurement invariance of the teacher beliefs factor across European American, African American, and Hispanic American teachers addressed its construct validity. The authors examine the psychometric properties of these items using graded response multilevel analysis. The final five-item factor showed the highest level of invariance for African American and European American teachers but did not fit Hispanic American teachers well. All five items had good psychometric properties. Analyses of latent means showed that African American teachers held more positive beliefs about African American students than European American teachers did. However, the latent scores were bimodally distributed for African American teachers, showing that one subgroup held beliefs similar to those of European American teachers while another subgroup held more positive beliefs.


Journal of Moral Education | 2014

Moral rationality and intuition: An exploration of relationships between the Defining Issues Test and the Moral Foundations Questionnaire

Rebecca J. Glover; Prathiba Natesan; Jie Wang; Danielle Rohr; Lauri McAfee-Etheridge; Dana D. Booker; James Bishop; David Lee; Cory Kildare; Minwei Wu

Explorations of relationships between Haidt’s Moral Foundations Questionnaire (MFQ) and indices of moral decision-making assessed by the Defining Issues Test have been limited to correlational analyses. This study used Harm, Fairness, Ingroup, Authority and Purity to predict overall moral judgment and individual Defining Issues Test-2 (DIT-2) schema scores using responses from 222 undergraduates. Relationships were not confirmed between the separate foundations and the DIT-2 indices. Using the MFQ moral judgment items only, confirmatory factor analyses confirmed higher order constructs called Individualizing and Binding foundations. Structural models using these higher order factors fitted the data well, and findings indicated that the Binding foundations significantly positively predicted Maintaining Norms and negatively predicted both overall moral judgment (N2) and the Postconventional Schema. Neither Individualizing nor Binding foundations significantly predicted Personal Interest. While moral judgments assessed by DIT-2 may not be evoking the MFQ foundations, findings here suggest the MFQ may not be a suitable measure for capturing more advanced moral functioning.


Journal of Educational and Behavioral Statistics | 2011

A Review of Bayesian Item Response Modeling: Theory and Applications

Prathiba Natesan

The primary reason Bayesian methods have become increasingly sought after in educational statistics is their flexibility in evaluating complex models. Bayesian estimation of item response models has been argued to be more advantageous than marginalized maximum likelihood or maximum likelihood because of its ability to estimate parameters for complex data structures, such as hierarchical data or data that violate the basic assumptions of item response theory (IRT), its success with smaller samples, the absence of parameter drift, and parameter estimation in extreme response patterns (Albert, 1992; Fox, 2010; Lord, 1986; Patz & Junker, 1999; Swaminathan & Gifford, 1986), as well as the availability of open-access software programs such as R and BUGS that facilitate MCMC estimation. As a result, there has been prolific growth in the number of published articles that use Bayesian estimation. This in turn has created a real need for a comprehensive source of information on Bayesian approaches to IRT.

Some other, relatively isolated, works on the topic exist. The classic book, Item response theory: Parameter estimation techniques by Baker and Kim (2004), addressed Bayesian estimation in only one chapter. The recently published Explanatory item response models: A generalized linear and nonlinear approach, edited by De Boeck and Wilson (2004), provided some examples of Bayesian estimation of item response models. However, that text emphasized the formulation of IRT models as generalized linear and nonlinear models, not Bayesian estimation procedures. A psychometrician or an advanced student of psychometrics did not have a comprehensive guide that focused on the Bayesian estimation of all aspects of IRT in detail along with some practical applications.

Collaboration


Dive into Prathiba Natesan's collaborations.

Top Co-Authors

Vincent Kieftenbeld, Southern Illinois University Edwardsville
Andrew A. Allen, Florida International University
Baaska Anderson, University of North Texas
Bruce Thompson, Baylor College of Medicine
Chetan Tiwari, University of North Texas