Walter L. Leite
University of Florida
Publications
Featured research published by Walter L. Leite.
Educational and Psychological Measurement | 2010
Walter L. Leite; Marilla Svinicki; Yuying Shi
The authors examined the dimensionality of the VARK learning styles inventory. The VARK measures four perceptual preferences: visual (V), aural (A), read/write (R), and kinesthetic (K). VARK questions can be viewed as testlets because respondents can select multiple items within a question. The correlations between items within testlets are a type of method effect. Four multitrait-multimethod confirmatory factor analysis models were compared to evaluate the dimensionality of the VARK. The correlated trait-correlated method model had the best fit to the VARK scores. The estimated reliability coefficients were adequate. The study found preliminary support for the validity of the VARK scores. Potential problems related to item wording and the scale's scoring algorithm were identified, and cautions about using the VARK in research were raised.
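To make the testlet structure concrete, here is a minimal sketch of VARK-style scoring, where each question allows multiple selections and each selected option maps to one of the four modalities. The option-to-modality key and responses below are invented for illustration; they are not the actual VARK scoring key.

```python
# Hypothetical sketch of multi-select (testlet) scoring in the VARK style.
from collections import Counter

def score_vark(responses, answer_key):
    """responses: one set of selected option labels per question.
    answer_key: one dict per question mapping option label -> modality."""
    totals = Counter({m: 0 for m in "VARK"})
    for selected, key in zip(responses, answer_key):
        for option in selected:          # multiple options may be selected
            totals[key[option]] += 1
    return dict(totals)

answer_key = [
    {"a": "V", "b": "A", "c": "R", "d": "K"},
    {"a": "K", "b": "V", "c": "A", "d": "R"},
]
responses = [{"a", "c"}, {"b"}]          # two options chosen on question 1
print(score_vark(responses, answer_key))  # {'V': 2, 'A': 0, 'R': 1, 'K': 0}
```

Because respondents can endorse several options within one question, items sharing a question are correlated by design, which is exactly the method effect the abstract describes.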
Educational and Psychological Measurement | 2002
S. Natasha Beretvas; Jason L. Meyers; Walter L. Leite
A reliability generalization (RG) study was conducted for the Marlowe-Crowne Social Desirability Scale (MCSDS). The MCSDS is the most commonly used tool designed to assess social desirability bias (SDB). Several short forms, consisting of items from the original 33-item version, are in use by researchers investigating the potential for SDB in responses to other scales. These forms have been used to measure a wide array of populations. Using a mixed-effects model analysis, the predicted score reliability for male adolescents was .53, and the reliability for men's responses was lower than that for women's. Suggestions are made concerning the necessity for further psychometric evaluations of the MCSDS.
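The reliabilities pooled in an RG study are typically coefficient alpha values reported by individual studies. A minimal sketch of that computation, on simulated parallel items (all data and parameters here are made up, not from the MCSDS study):

```python
# Coefficient (Cronbach's) alpha from an items matrix, as pooled in RG studies.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total score
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
true_score = rng.normal(size=(500, 1))
items = true_score + 0.5 * rng.normal(size=(500, 8))  # 8 roughly parallel items
print(round(cronbach_alpha(items), 2))                 # high, near .97
```

An RG study then models how such alphas vary across samples (e.g., male adolescents vs. adult women) with a mixed-effects model.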
Educational and Psychological Measurement | 2005
Walter L. Leite; S. Natasha Beretvas
The Marlowe-Crowne Social Desirability Scale (MCSDS), the most commonly used social desirability bias (SDB) assessment, conceptualizes SDB as an individual's need for approval. The Balanced Inventory of Desirable Responding (BIDR) measures SDB as two separate constructs: impression management and self-deception. Scores on SDB scales are commonly used to validate other measures although insufficiently validated themselves. This study used college students' responses to the MCSDS and the BIDR to investigate their factorial validity. Using confirmatory factor analysis, neither a one- nor a two-factor model was strongly supported. It is recommended that researchers be cautious when using scores on these SDB scales until their dimensionality is better understood.
Health Services Research | 2007
I-Chan Huang; Constantine Frangakis; Mark J. Atkinson; Richard J. Willke; Walter L. Leite; W. Bruce Vogel; Albert W. Wu
OBJECTIVES To compare different approaches to address ceiling effects when predicting EQ-5D index scores from the 10 subscales of the MOS-HIV Health Survey. STUDY DESIGN Data were collected from an HIV treatment trial. Statistical methods included ordinary least squares (OLS) regression, the censored least absolute deviations (CLAD) approach, a standard two-part model (TPM), a TPM with a log-transformed EQ-5D index, and a latent class model (LCM). Predictive accuracy was evaluated using the proportions of absolute error (R1) and squared error (R2) predicted by each method. FINDINGS A TPM with a log-transformed EQ-5D index performed best on R1; the LCM performed best on R2. In contrast, the CLAD approach performed worst. Performance of the OLS and the standard TPM was intermediate. Values for R1 ranged from 0.33 (CLAD) to 0.42 (TPM-L); R2 ranged from 0.37 (CLAD) to 0.53 (LCM). CONCLUSIONS The LCM and TPM with a log-transformed dependent variable are superior to other approaches in handling data with ceiling effects.
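The two-part idea is simple to sketch: one part models the probability of scoring at the ceiling (an EQ-5D index of 1.0), the other models the expected score among those below it. The toy version below uses synthetic data, a single predictor, and a linear probability fit for part 1; the actual study used richer regression models for both parts.

```python
# Minimal two-part model sketch for a ceiling-effected outcome.
import numpy as np

rng = np.random.default_rng(42)
n = 2000
health = rng.uniform(0, 1, n)                       # single hypothetical predictor
latent = 0.4 + 0.8 * health + rng.normal(0, 0.15, n)
y = np.minimum(latent, 1.0)                         # utilities capped at 1.0

at_ceiling = y >= 1.0
X = np.column_stack([np.ones(n), health])

# Part 1: P(ceiling | health), here via a crude linear probability fit.
p_coef, *_ = np.linalg.lstsq(X, at_ceiling.astype(float), rcond=None)

# Part 2: E[y | y < 1, health] via OLS on the non-ceiling subset.
m_coef, *_ = np.linalg.lstsq(X[~at_ceiling], y[~at_ceiling], rcond=None)

def predict(h):
    x = np.array([1.0, h])
    p = np.clip(x @ p_coef, 0, 1)                   # probability of hitting 1.0
    return p * 1.0 + (1 - p) * (x @ m_coef)         # mix the two parts
```

Combining the two parts yields predictions that rise smoothly toward, but respect, the ceiling, which is what OLS alone cannot do.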
Educational and Psychological Measurement | 2010
Daniel E. Tuccitto; Peter R. Giacobbi; Walter L. Leite
This study tested five confirmatory factor analytic (CFA) models of the Positive Affect Negative Affect Schedule (PANAS) to provide validity evidence based on its internal structure. A sample of 223 club sport athletes indicated their emotions during the past week. Results revealed that an orthogonal two-factor CFA model, specifying error correlations according to Zevon and Tellegen’s mood content categories, provided the best fit to our data. In addition, parameter estimates for this model suggest that PANAS scores are reliable and explain large proportions of item variance. Taken together with previous research, the findings further suggest that the PANAS may be a higher-order measure of affect and includes several consistently problematic items. The authors recommend that affect researchers attempt to improve the PANAS by (a) revising consistently problematic items, (b) adding new items to better capture mood content categories, and (c) providing additional internal structure validity evidence through a diagonally weighted least squares estimation of a second-order PANAS CFA model.
Value in Health | 2008
I-Chan Huang; Chyng-Chuang Hwang; Ming-Yen Wu; Wender Lin; Walter L. Leite; Albert W. Wu
OBJECTIVE There is a debate regarding the use of disease-specific versus generic instruments for health-related quality of life (HRQOL) measures. We tested the psychometric properties of HRQOL measures using the Diabetes-39 (D-39) and the Medical Outcomes Study 36-Item Short-Form Health Survey (SF-36). METHODS This was a cross-sectional study collecting data from 280 patients in Taiwan. Exploratory factor analysis was conducted to evaluate construct validity of the two instruments. Known-groups validity was examined using laboratory indicators (fasting, 2-hour postprandial plasma glucose, and hemoglobin A1c), presence of diabetic complications (retinopathy, nephropathy, neuropathy, diabetic foot disorder, cardiovascular and cerebrovascular disorders), and psychosocial variables (sense of well-being and self-reported diabetes severity). Overall discriminative power of the two instruments was evaluated using the C-statistic. RESULTS Three distinct factors were extracted through factor analysis. These factors tapped all subscales of the D-39, four physical subscales of the SF-36, and four mental subscales of the SF-36, respectively. Compared with the SF-36, the D-39 demonstrated superior known-groups validity for 2-hour postprandial plasma glucose groups but was inferior for complication groups. Compared with the SF-36, the D-39 discriminated better between self-reported severity known groups, but was inferior between well-being groups. In overall discriminative power, the D-39 discriminated better between laboratory known groups. The SF-36, however, was superior in discriminating between complication known groups. CONCLUSIONS For psychometric properties, the D-39 and the SF-36 were superior to each other in different regards. The combined use of a disease-specific instrument and a generic instrument may be a useful strategy for diabetes HRQOL assessment.
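The C-statistic used here to compare discriminative power is equivalent to the area under the ROC curve, and can be computed directly as a rank comparison (the Mann-Whitney formulation). A small sketch on made-up HRQOL scores:

```python
# C-statistic (AUC) via pairwise rank comparison; data are hypothetical.
import numpy as np

def c_statistic(scores_group1, scores_group0):
    """Probability that a randomly drawn member of group 1 scores higher
    than a randomly drawn member of group 0, counting ties as 0.5."""
    s1 = np.asarray(scores_group1, dtype=float)
    s0 = np.asarray(scores_group0, dtype=float)
    greater = (s1[:, None] > s0[None, :]).sum()
    ties = (s1[:, None] == s0[None, :]).sum()
    return (greater + 0.5 * ties) / (s1.size * s0.size)

healthy = [72, 80, 65, 90, 77]      # invented HRQOL scores, no complications
complicated = [55, 60, 70, 58]      # invented scores, with complications
print(c_statistic(healthy, complicated))  # 0.95
```

A value of 0.5 means the instrument cannot separate the known groups at all; values near 1.0 indicate strong discrimination, which is how the D-39 and SF-36 were compared.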
Structural Equation Modeling | 2007
Walter L. Leite
Univariate latent growth modeling (LGM) of composites of multiple items (e.g., item means or sums) has been frequently used to analyze the growth of latent constructs. This study evaluated whether LGM of composites yields unbiased parameter estimates, standard errors, chi-square statistics, and adequate fit indexes. Furthermore, LGM was compared with 2 alternatives: LGM with fixed error variances and the curve-of-factors model (McArdle, 1988), which is a multivariate latent growth model. It was found that the 2 univariate models only yield adequate results when the items are essentially tau-equivalent and there is strict factorial invariance. The curve-of-factors model was found to produce adequate results in all conditions, but it usually requires large sample sizes.
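The data-level intuition behind univariate LGM of composites can be illustrated by simulating straight-line growth in item-mean composites and recovering the mean intercept and slope. This is only a per-person OLS sketch with invented parameters; a real LGM (or the curve-of-factors model) estimates the growth factors as latent variables in SEM software.

```python
# Simulated linear growth on composite scores across four waves.
import numpy as np

rng = np.random.default_rng(1)
n, waves = 300, 4
t = np.arange(waves)                               # measurement occasions
intercepts = rng.normal(50, 5, n)                  # person-specific starting points
slopes = rng.normal(2, 0.5, n)                     # person-specific growth rates
composites = (intercepts[:, None] + slopes[:, None] * t
              + rng.normal(0, 2, (n, waves)))      # composite score + noise

# Per-person OLS of composite on time recovers the growth-factor means.
T = np.column_stack([np.ones(waves), t])
coefs = np.linalg.lstsq(T, composites.T, rcond=None)[0]   # shape (2, n)
print(coefs.mean(axis=1).round(1))                 # near [50., 2.]
```

The article's point is that when items are not essentially tau-equivalent or invariance fails, growth estimated from composites like these is biased, whereas the curve-of-factors model, which keeps the items separate, is not.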
Criminal Justice and Behavior | 2010
Jeffrey T. Ward; Chris L. Gibson; John H. Boman; Walter L. Leite
Although there have been nearly 20 years of research on self-control theory, the measurement problems of the theory’s core construct linger and call into question the efficacy of self-control as a predictor of crime and delinquency. This study assessed the validity of a recently introduced behavioral measure of self-control, the Retrospective Behavioral Self-Control (RBS) measure, which is argued to remedy the conceptual and empirical problems afflicting prior self-control measures. Using a sample of students at a large southern university, this study finds that although a unidimensional and content-valid 18-item RBS measure is not as strong a predictor of crime and delinquency as the original RBS, it has substantially more predictive power than the most commonly used attitudinal measure of self-control, the Grasmick et al. scale. The implications of these findings for empirical tests of self-control theory as well as future directions for the measurement of self-control are discussed.
Multivariate Behavioral Research | 2008
Walter L. Leite; I-Chan Huang; George A. Marcoulides
This article presents the use of an ant colony optimization (ACO) algorithm for the development of short forms of scales. An example 22-item short form is developed for the Diabetes-39 scale, a quality-of-life scale for diabetes patients, using a sample of 265 diabetes patients. A simulation study comparing the performance of the ACO algorithm and traditionally used methods of item selection is also presented. It is shown that the ACO algorithm outperforms the largest factor loadings and maximum test information item selection methods. The results demonstrate the capabilities of using ACO for creating short-form scales.
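A toy version of the ACO item-selection loop is easy to sketch: pheromone weights set each item's sampling probability, "ants" propose candidate subsets, and subsets whose composite tracks the full-scale score best deposit more pheromone. The data, fitness function (correlation with the full-scale score), and parameters below are invented for illustration and are much simpler than the model-fit-based selection in the article.

```python
# Toy ant colony optimization for short-form item selection.
import numpy as np

rng = np.random.default_rng(7)
n, k, short_len = 400, 12, 4
factor = rng.normal(size=(n, 1))
loadings = rng.uniform(0.3, 0.9, k)
items = factor * loadings + rng.normal(0, 0.6, (n, k))   # simulated scale
full_score = items.sum(axis=1)

pheromone = np.ones(k)
best_subset, best_fit = None, -np.inf
for _ in range(50):                                  # iterations
    for _ in range(10):                              # ants per iteration
        probs = pheromone / pheromone.sum()
        subset = rng.choice(k, size=short_len, replace=False, p=probs)
        fit = np.corrcoef(items[:, subset].sum(axis=1), full_score)[0, 1]
        if fit > best_fit:
            best_subset, best_fit = subset, fit
        pheromone[subset] += fit                     # reward selected items
    pheromone *= 0.9                                 # pheromone evaporation
print(sorted(best_subset.tolist()), round(best_fit, 2))
```

Unlike picking the items with the largest factor loadings, this search evaluates subsets jointly, which is the property the simulation study credits for ACO's better performance.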
British Journal of Mathematical and Statistical Psychology | 2010
Yuying Shi; Walter L. Leite; James Algina
Cross-classified random effects modelling (CCREM) is a special case of multi-level modelling where the units of one level are nested within two cross-classified factors. Typically, CCREM analyses omit the random interaction effect of the cross-classified factors. We investigate the impact of the omission of the interaction effect on parameter estimates and standard errors. Results from a Monte Carlo simulation study indicate that, for fixed effects, both coefficient estimates and their accompanying standard error estimates are unbiased. For random effects, results are affected at level 2, but not at level 1, by the presence of an interaction variance and/or a correlation between the residuals of the level-2 factors. Results from the analysis of the Early Childhood Longitudinal Study and the National Educational Longitudinal Study agree with those obtained from simulated data. We recommend that researchers attempt to include interaction effects of cross-classified factors in their models.
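The cross-classified setup can be sketched by generating data for level-1 units nested in the crossing of two factors (say, neighborhoods and schools), with the random interaction term that CCREM analyses typically omit. All variance components below are invented; the sketch only shows what the omitted term is, not the full CCREM estimation.

```python
# Simulated cross-classified data with a random interaction effect.
import numpy as np

rng = np.random.default_rng(3)
J, K, n_cell = 20, 20, 5
a = rng.normal(0, 1.0, J)            # factor-1 (e.g., neighborhood) effects
b = rng.normal(0, 1.0, K)            # factor-2 (e.g., school) effects
ab = rng.normal(0, 0.7, (J, K))      # random interaction, often omitted

rows = []
for j in range(J):
    for k_ in range(K):
        e = rng.normal(0, 1.0, n_cell)               # level-1 residuals
        rows.append(10 + a[j] + b[k_] + ab[j, k_] + e)
y = np.concatenate(rows)

# Cell-mean variance absorbs var(a) + var(b) + var(ab) + var(e)/n_cell,
# so a model without the ab term misattributes the interaction variance.
cell_means = y.reshape(J * K, n_cell).mean(axis=1)
print(round(cell_means.var(ddof=1), 2))              # near 1 + 1 + 0.49 + 0.2
```

When the `ab` term is dropped from the fitted model, its variance has to be absorbed by the level-2 components, which matches the article's finding that level-2 (but not level-1) results are affected.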