Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Nambury S. Raju is active.

Publication


Featured research published by Nambury S. Raju.


Journal of Applied Psychology | 2002

Measurement equivalence: A comparison of methods based on confirmatory factor analysis and item response theory

Nambury S. Raju; Larry J. Laffitte; Barbara M. Byrne

Current interest in the assessment of measurement equivalence emphasizes 2 major methods of analysis. The authors offer a comparison of a linear method (confirmatory factor analysis) and a nonlinear method (differential item and test functioning using item response theory) with an emphasis on their methodological similarities and differences. The 2 approaches test for the equality of true scores (or expected raw scores) across 2 populations when the latent (or factor) score is held constant. Both approaches can provide information about when measurement nonequivalence exists and the extent to which it is a problem. An empirical example is used to illustrate the 2 approaches.
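
The core comparison is testable with nothing more than item parameters: hold the latent score θ fixed and compare the expected raw scores implied by each group's calibration. A minimal sketch in Python, using hypothetical 2PL item parameters (the values, the item count, and the single shifted difficulty are illustrative assumptions, not data from the paper):

```python
import numpy as np

def p_2pl(theta, a, b):
    """2PL item response function: P(correct | theta)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def true_score(theta, a, b):
    """Expected raw score at theta: sum of item probabilities."""
    return p_2pl(theta[:, None], a, b).sum(axis=1)

# Hypothetical discrimination (a) and difficulty (b) estimates for the
# same 4-item scale calibrated separately in two groups.
a_ref = np.array([1.2, 0.8, 1.5, 1.0]); b_ref = np.array([-0.5, 0.0, 0.5, 1.0])
a_foc = np.array([1.2, 0.8, 1.5, 1.0]); b_foc = np.array([-0.5, 0.3, 0.5, 1.0])

theta = np.linspace(-3, 3, 7)
gap = true_score(theta, a_foc, b_foc) - true_score(theta, a_ref, b_ref)
for t, g in zip(theta, gap):
    print(f"theta={t:+.1f}  expected-score difference={g:+.3f}")
# A nonzero difference at fixed theta (here driven by item 2's shifted b)
# signals measurement nonequivalence at the test-score level.
```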


Psychometrika | 1988

The area between two item characteristic curves

Nambury S. Raju

Formulas for computing the exact signed and unsigned areas between two item characteristic curves (ICCs) are presented. It is further shown that when the c parameters are unequal, the area between two ICCs is infinite. The significance of the exact area measures for item bias research is discussed.
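
For the case of a common lower asymptote c and unequal discriminations, the closed-form unsigned area can be checked against brute-force integration. A sketch, assuming the usual D = 1.7 scaling and illustrative item parameters:

```python
import numpy as np

D = 1.7  # logistic scaling constant

def icc(theta, a, b, c=0.0):
    """3PL item characteristic curve."""
    return c + (1 - c) / (1 + np.exp(-D * a * (theta - b)))

def unsigned_area_exact(a1, b1, a2, b2, c=0.0):
    """Closed-form unsigned area between two ICCs with equal lower
    asymptotes c (requires a1 != a2), per Raju (1988)."""
    k = D * a1 * a2 * (b2 - b1) / (a2 - a1)
    return (1 - c) * abs(2 * (a2 - a1) / (D * a1 * a2) * np.log1p(np.exp(k))
                         - (b2 - b1))

# Check the closed form against numerical integration on a fine grid.
a1, b1, a2, b2 = 1.0, 0.0, 1.4, 0.6
theta = np.linspace(-12, 12, 200001)
dtheta = theta[1] - theta[0]
numeric = np.abs(icc(theta, a1, b1) - icc(theta, a2, b2)).sum() * dtheta
print(unsigned_area_exact(a1, b1, a2, b2), numeric)  # the two values agree

# The signed area is simply (1 - c) * (b2 - b1), independent of a1 and a2.
```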


Applied Psychological Measurement | 1995

IRT-based internal measures of differential functioning of items and tests

Nambury S. Raju; Wim J. van der Linden; Paul F. Fleer

Internal measures of differential functioning of items and tests (DFIT) based on item response theory (IRT) are proposed. Within the DFIT context, the new differential test functioning (DTF) index leads to two new measures of differential item functioning (DIF) with the following properties: (1) the compensatory DIF (CDIF) indexes for all items in a test sum to the DTF index for that test and, unlike current DIF procedures, the CDIF index for an item does not assume that the other items in the test are unbiased; (2) the noncompensatory DIF (NCDIF) index, which assumes that the other items in the test are unbiased, is comparable to some of the IRT-based DIF indexes; and (3) CDIF and NCDIF, as well as DTF, are equally valid for polytomous and multidimensional IRT models. Monte Carlo study results, comparing these indexes with Lord's χ² test, the signed area measure, and the unsigned area measure, demonstrate that the DFIT framework is accurate in assessing DTF, CDIF, and NCDIF.
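
The additivity in property (1) is easy to see numerically: with d_i(θ) = P_iF(θ) − P_iR(θ) and D(θ) = Σ d_i(θ), the indexes can be written as DTF = E[D²], CDIF_i = E[d_i·D], and NCDIF_i = E[d_i²], with expectations over the focal-group θ distribution, so Σ CDIF_i = DTF by construction. A sketch with made-up 2PL parameters (the test length, parameter values, and the planted DIF item are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
D_CONST = 1.7

def p2pl(theta, a, b):
    return 1 / (1 + np.exp(-D_CONST * a * (theta - b)))

# Hypothetical 2PL parameters for a 5-item test in reference (R) and
# focal (F) calibrations; item 3 is given DIF via a shifted difficulty.
a_R = np.array([1.0, 1.2, 0.8, 1.5, 1.1]); b_R = np.array([-1.0, -0.3, 0.2, 0.6, 1.2])
a_F = a_R.copy();                           b_F = b_R.copy(); b_F[2] += 0.5

theta = rng.normal(size=100_000)            # focal-group ability sample
d = p2pl(theta[:, None], a_F, b_F) - p2pl(theta[:, None], a_R, b_R)
D = d.sum(axis=1)                           # test-level difference per person

DTF   = np.mean(D ** 2)                     # differential test functioning
CDIF  = np.mean(d * D[:, None], axis=0)     # compensatory DIF, E[d_i * D]
NCDIF = np.mean(d ** 2, axis=0)             # noncompensatory DIF, E[d_i^2]

print("DTF       :", DTF)
print("sum(CDIF) :", CDIF.sum())            # equals DTF: the additivity property
print("NCDIF     :", NCDIF.round(5))        # largest for the DIF item (index 2)
```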


Applied Psychological Measurement | 1990

Determining the Significance of Estimated Signed and Unsigned Areas Between Two Item Response Functions

Nambury S. Raju

Asymptotic sampling distributions (means and variances) of estimated signed and unsigned areas between two item response functions (IRFs) are presented for the Rasch model, the two-parameter model, and the three-parameter model with fixed lower asymptotes. In item bias or differential item functioning research, it may be of interest to determine whether the estimated signed and unsigned areas between IRFs calibrated with two different groups are significantly different from 0. The usefulness of these sampling distributions in this context is discussed and illustrated. More empirical research with the proposed significance tests is necessary.
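
In the special case of equal discriminations and no guessing, the signed area reduces to b2 − b1; if the two calibrations are independent, its variance is the sum of the two difficulty-estimate variances, giving a simple z test. The sketch below covers only this special case (the paper's general results require the delta method with the full parameter covariance matrices), and all numeric values are hypothetical:

```python
import numpy as np
from scipy.stats import norm

def signed_area_z(b1_hat, var_b1, b2_hat, var_b2):
    """z statistic for H0: signed area = 0, for items with equal
    discriminations, where signed area = b2 - b1 and the two group
    calibrations are independent."""
    area = b2_hat - b1_hat
    se = np.sqrt(var_b1 + var_b2)
    return area / se

# Hypothetical difficulty estimates and their asymptotic variances
# (e.g., from the inverse information matrix of each calibration).
z = signed_area_z(b1_hat=0.10, var_b1=0.004, b2_hat=0.32, var_b2=0.005)
p = 2 * norm.sf(abs(z))
print(f"z = {z:.2f}, two-sided p = {p:.4f}")  # flag DIF if p < alpha
```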


Academy of Management Annual Meeting | 1998

Peer and subordinate performance appraisal measurement equivalence

Todd J. Maurer; Nambury S. Raju; William C. Collins

Confirmatory factor analysis (CFA) and item response theory (IRT) were applied to determine the extent to which peer and subordinate ratings of managers on a team-building skill dimension are directly comparable. Simultaneous CFA in the 2 groups of raters suggested that the 2 sets of ratings are calibrated equivalently, and polytomous IRT methods led to similar conclusions. The results were replicated in independent samples of raters. These are encouraging results for practitioners or researchers who compare ratings from these 2 groups. In addition to presenting the empirical findings from the study and illustrating how CFA and IRT methods of testing measurement equivalence compare, the article shows the unique types of information about performance appraisals that IRT and CFA can provide to researchers and practitioners, with implications for future research.


Applied Psychological Measurement | 1999

A Description and Demonstration of the Polytomous-DFIT Framework

Claudia Flowers; T. C. Oshima; Nambury S. Raju

Raju, van der Linden, & Fleer (1995) proposed an item response theory (IRT)-based parametric differential item functioning (DIF) and differential test functioning (DTF) procedure known as differential functioning of items and tests (DFIT). According to Raju et al., the DFIT framework can be used with unidimensional and multidimensional data that are scored dichotomously and/or polytomously. This study examined the polytomous-DFIT framework. Factors manipulated in the simulation were: (1) length of test (20 and 40 items), (2) focal-group distribution, (3) number of DIF items, (4) direction of DIF, and (5) type of DIF. The findings provided promising results and indicated directions for future research. The polytomous-DFIT framework was effective in identifying DTF and DIF for the simulated conditions. The DTF index did not perform as consistently as the DIF index. The findings are similar to those of unidimensional and multidimensional DFIT studies.


Applied Psychological Measurement | 1995

Analysis of Differential Item Functioning in Translated Assessment Instruments

Glen R. Budgell; Nambury S. Raju; Douglas A. Quartetti

The usefulness of three IRT-based methods and the Mantel-Haenszel technique in evaluating the measurement equivalence of translated assessment instruments was investigated. A 15-item numerical test and an 18-item reasoning test that were originally developed in English and then translated to French were used. The analyses were based on four groups, each containing 1,000 examinees. Two groups of English-speaking examinees were administered the English version of the tests; the other two were French-speaking examinees who were administered the French version of the tests. The percentage of items identified with significant differential item functioning (DIF) in this study was similar to findings in previous large-sample studies. The four DIF methods showed substantial consistency in identifying items with significant DIF when replicated. Suggestions for future research are provided.
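
Of the four methods, the Mantel-Haenszel technique is the simplest to sketch: test takers are matched on total score, and the item is screened via a common odds ratio across score strata. A minimal illustration with invented counts (the strata, counts, and three-level matching are assumptions, not the study's data):

```python
import numpy as np

def mantel_haenszel_or(tables):
    """Mantel-Haenszel common odds ratio for DIF screening.
    `tables` is an iterable of 2x2 counts per matched score level:
    ((ref_correct, ref_wrong), (focal_correct, focal_wrong)).
    A value near 1 suggests no uniform DIF on the studied item."""
    num = sum(a * d / (a + b + c + d) for (a, b), (c, d) in tables)
    den = sum(b * c / (a + b + c + d) for (a, b), (c, d) in tables)
    return num / den

# Hypothetical counts at three total-score strata for one item,
# comparing English-version (reference) and French-version (focal) takers.
tables = [((40, 60), (35, 65)),
          ((70, 30), (55, 45)),
          ((90, 10), (80, 20))]
alpha = mantel_haenszel_or(tables)
print(f"alpha_MH = {alpha:.2f}")  # values far from 1 flag the item for review
# Often reported on the ETS delta scale: MH D-DIF = -2.35 * ln(alpha_MH)
print(f"MH D-DIF = {-2.35 * np.log(alpha):.2f}")
```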


Applied Psychological Measurement | 2003

Determining the significance of correlations corrected for unreliability and range restriction

Nambury S. Raju; Paul A. Brand

A new asymptotic formula for estimating the sampling variance of a correlation coefficient corrected for unreliability and range restriction is proposed. A Monte Carlo assessment of the new sampling variance formula resulted in the following conclusions. First, the formula-based (analytical) sampling variances were very close to the empirically derived sampling variances based on 5,000 replications. Second, significance tests based on the sampling variance formula were quite robust with respect to Type I error rates. Third, statistical power was low to moderate in distinguishing between two unattenuated and unrestricted population correlations. Fourth, the new formula produced smaller sampling variances, alpha levels closer to nominal, and greater power as sample size increased, as the population correlation coefficient increased, as range restriction became less severe, and as both the criterion and predictor reliabilities increased.
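
For context, the point estimate whose sampling variance is being derived is the familiar double correction: undo direct range restriction on the predictor (Thorndike Case II), then disattenuate for unreliability. A sketch with hypothetical inputs (the correction order and the source of the reliability estimates are simplifying assumptions here; the paper's contribution is the variance of such an estimate, which this sketch does not compute):

```python
import math

def corrected_r(r, u, rxx, ryy):
    """Correlation corrected for direct range restriction on the
    predictor (Thorndike Case II) and then for unreliability in
    both variables. u = unrestricted SD / restricted SD of x."""
    r_rr = (u * r) / math.sqrt(1 + r**2 * (u**2 - 1))   # undo range restriction
    return r_rr / math.sqrt(rxx * ryy)                  # undo attenuation

# Hypothetical values: observed validity .30 in a selected sample,
# SD ratio 1.5, predictor reliability .85, criterion reliability .70.
rc = corrected_r(r=0.30, u=1.5, rxx=0.85, ryy=0.70)
print(f"corrected r = {rc:.3f}")
```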


Journal of Applied Psychology | 1990

A new approach for utility analysis

Nambury S. Raju; Michael J. Burke; Jacques Normand

A new utility analysis approach is presented. It is demonstrated that the new approach does not require the direct estimation of the most problematic component of current utility analysis equations, the standard deviation of Y. The parsimony of the new approach provides the potential for more directly linking decision-theoretic utility analysis with economic and accounting concepts. The development of the new approach highlights the many necessary and untested assumptions of current utility models. It also points to a need for reassessing the psychometric validity of correcting for criterion unreliability in utility analysis. Furthermore, the CREPID procedure and the 40% and 70% rules for estimating the standard deviation of Y are shown to be special cases of the new approach. Research on the efficacy of the assumptions and applicability of the new approach is advocated.
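
The "most problematic component" is SDy, the dollar-valued standard deviation of job performance in the Brogden-Cronbach-Gleser equation. Sketched below for context with hypothetical numbers (this is the traditional model the paper critiques, not the new approach itself):

```python
def bcg_utility(n_hired, tenure_years, validity, sd_y,
                mean_z, cost_per_applicant, n_applicants):
    """Brogden-Cronbach-Gleser utility: the 'current' model whose SDy term
    the new approach avoids having to estimate directly."""
    return (n_hired * tenure_years * validity * sd_y * mean_z
            - cost_per_applicant * n_applicants)

# Hypothetical selection scenario.
gain = bcg_utility(n_hired=50, tenure_years=3, validity=0.40,
                   sd_y=12_000, mean_z=1.1, cost_per_applicant=300,
                   n_applicants=400)
print(f"estimated utility gain: ${gain:,.0f}")
```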


Applied Psychological Measurement | 1997

Methodology Review: Estimation of Population Validity and Cross-Validity, and the Use of Equal Weights in Prediction

Nambury S. Raju; Reyhan Bilgic; Jack E. Edwards; Paul F. Fleer

In multiple regression, optimal linear weights are obtained using an ordinary least squares (OLS) procedure. However, these linear weighted combinations of predictors may not optimally predict the same criterion in the population from which the sample was drawn (population validity) or other samples drawn from the same population (population cross-validity). To achieve more accurate estimates of population validity and population cross-validity, some researchers and practitioners use formulas or traditional empirical methods to obtain the estimates. Others have suggested using the equal weights procedure as an alternative to the formula-based and empirical procedures. This review found that formula-based procedures can be used in place of empirical validation for estimating population validity or in place of empirical cross-validation for estimating population cross-validity. The equal weights procedure is a viable alternative when the observed multiple correlation is low to moderate and the variability among predictor-criterion correlations is low. Despite these findings, it is difficult to recommend one formula-based estimate over another because no single study has included all of the currently available formulas. Suggestions are offered for future research and application of these techniques.
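
Two of the best-known formula-based procedures are easy to state: Wherry's estimate of squared population validity and Browne's estimate of squared population cross-validity. A sketch with hypothetical sample values (these two formulas are common choices for illustration; the review compares many such variants):

```python
def wherry_rho2(r2, n, k):
    """Wherry estimate of the squared population validity
    from sample R^2, sample size n, and k predictors."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

def browne_rho_c2(rho2, n, k):
    """Browne estimate of the squared population cross-validity,
    given an estimate rho2 of the squared population validity."""
    return ((n - k - 3) * rho2**2 + rho2) / ((n - 2*k - 2) * rho2 + k)

r2, n, k = 0.36, 120, 5            # hypothetical sample R^2, N, predictors
rho2 = wherry_rho2(r2, n, k)
print(f"population validity^2  ~ {rho2:.3f}")
print(f"population cross-val^2 ~ {browne_rho_c2(rho2, n, k):.3f}")
# As expected: cross-validity^2 < validity^2 < sample R^2.
```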

Collaboration


Dive into Nambury S. Raju's collaborations.

Top Co-Authors

T. C. Oshima (Georgia State University)
Jacques Normand (United States Postal Service)
Claudia Flowers (University of North Carolina at Charlotte)
Paul F. Fleer (Illinois Institute of Technology)
Adrian Thomas (Illinois Institute of Technology)
Daniel V. Lezotte (Illinois Institute of Technology)
Jack E. Edwards (Government Accountability Office)
Frank L. Schmidt (College of Business Administration)
John E. Hunter (Michigan State University)