
Publication


Featured research published by Yi-Hsin Chen.


Journal of Experimental Education | 2011

Relations of Student Perceptions of Teacher Oral Feedback With Teacher Expectancies and Student Self-Concept

Yi-Hsin Chen; Marilyn S. Thompson; Jeffrey D. Kromrey; George H. Chang

In this article, the authors investigated the relations of students’ perceptions of teachers’ oral feedback with teacher expectancies and student self-concept. A sample of 1,598 Taiwanese children in Grades 3 to 6 completed measures of student perceptions of teacher oral feedback and school self-concept. Homeroom teachers identified students for whom they had high or low expectancies. Discriminant analysis indicated that student perceptions of positive and negative academic oral feedback were more important than nonacademic feedback in predicting teacher expectancies. A two-way multivariate analysis of variance showed that boys perceived more negative oral feedback than did girls, and fifth-grade students perceived more negative oral feedback in both academic and nonacademic domains than did third- and fourth-grade students. Furthermore, structural equation modeling results indicated a particularly strong relation between positive academic oral feedback and academic self-concept.


International Journal of Testing | 2008

Cross-Cultural Validity of the TIMSS-1999 Mathematics Test: Verification of a Cognitive Model

Yi-Hsin Chen; Joanna S. Gorin; Marilyn S. Thompson; Kikumi Tatsuoka

As with any test administered across linguistically and culturally diverse groups, evidence suggesting the equivalence of score meaning across countries is needed for valid comparisons. The current study examines the cross-cultural equivalence of score interpretations from the Trends in International Mathematics and Science Study (TIMSS)-1999 from a cognitive-psychometric perspective. A cognitive model describing the knowledge, strategies, and processing skills measured by the TIMSS-R mathematics test was previously validated in several countries. In order to establish the cross-cultural equivalence of TIMSS scores for the Taiwanese student population, the fit of the cognitive model to the Taiwanese item responses was examined. High student-mastery classification rates and good prediction of scores based on attribute mastery probabilities supported the fit of the cognitive model in the current study. Further, we suggest that cognitive-psychometric modeling approaches like those applied in the current study could be useful for examining more substantive issues of score validity and equivalence in test translations and adaptations.


Educational and Psychological Measurement | 2017

Comparing the Performance of Approaches for Testing the Homogeneity of Variance Assumption in One-Factor ANOVA Models

Yan Wang; Patricia Rodríguez de Gil; Yi-Hsin Chen; Jeffrey D. Kromrey; Eun Sook Kim; Thanh Pham; Diep Nguyen; Jeanine L. Romano

Various tests to check the homogeneity of variance assumption have been proposed in the literature, yet there is no consensus as to their robustness when the assumption of normality does not hold. This simulation study evaluated the performance of 14 tests for the homogeneity of variance assumption in one-way ANOVA models in terms of Type I error control and statistical power. Seven factors were manipulated: number of groups, average number of observations per group, pattern of sample sizes in groups, pattern of population variances, maximum variance ratio, population distribution shape, and nominal alpha level for the test of variances. Overall, the Ramsey conditional, O’Brien, Brown–Forsythe, Bootstrap Brown–Forsythe, and Levene with squared deviations tests maintained adequate Type I error control, performing better than the others across all the conditions. The power for each of these five tests was acceptable and the power differences were subtle. Guidelines for selecting a valid test for assessing the tenability of this critical assumption are provided based on average cell size.
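As a hedged illustration of two of the procedures named above, the sketch below runs the classic Levene and Brown–Forsythe tests on hypothetical data with scipy; it is not the study's own simulation code, and scipy's implementation uses absolute deviations, so the squared-deviation Levene variant evaluated in the paper is not reproduced.

```python
# Illustrative only: two of the variance-homogeneity tests discussed above,
# run on hypothetical one-factor data with scipy. scipy's levene() uses
# absolute deviations from the chosen center, so the squared-deviation
# Levene variant from the paper is not shown here.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical design: three groups with unequal spreads.
g1 = rng.normal(loc=50, scale=8, size=30)
g2 = rng.normal(loc=52, scale=10, size=25)
g3 = rng.normal(loc=48, scale=15, size=35)

# Classic Levene test: deviations taken from the group means.
stat_lev, p_lev = stats.levene(g1, g2, g3, center='mean')

# Brown-Forsythe variant: deviations taken from the group medians.
stat_bf, p_bf = stats.levene(g1, g2, g3, center='median')

print(f"Levene (center=mean):           W = {stat_lev:.3f}, p = {p_lev:.4f}")
print(f"Brown-Forsythe (center=median): W = {stat_bf:.3f}, p = {p_bf:.4f}")
```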


Assessment in Education: Principles, Policy & Practice | 2012

Cognitive Diagnosis of Mathematics Performance between Rural and Urban Students in Taiwan.

Yi-Hsin Chen

This study was an empirically substantive examination of mathematics achievement differences between urban and rural Taiwanese students at a cognitive attribute level. Participants were eighth-grade students who participated in the Trends in International Mathematics and Science Study of 1999. The rule-space method was applied to produce a diagnostic description of urban and rural students’ cognitive knowledge, abilities, and skills related to TIMSS mathematics items. The results indicated that students in urban schools performed better than those in rural schools on high-level mathematics content areas (Algebra and Geometry) and abstract thinking skills (Proportional reasoning, Logical reasoning, Solution search, and Open-ended items). Furthermore, greater proportions of urban students were classified into knowledge states with more mastered attributes, and greater proportions of rural students occupied knowledge states with fewer mastered attributes. Detailed discussion and suggestions are provided in the paper.


International Journal of Testing | 2016

Evaluation of Model Fit in Cognitive Diagnosis Models.

Jinxiang Hu; M. David Miller; Anne Corinne Huggins-Manley; Yi-Hsin Chen

Cognitive diagnosis models (CDMs) estimate student ability profiles using latent attributes. Model fit to the data needs to be ascertained in order to determine whether inferences from CDMs are valid. This study investigated the usefulness of several popular model fit statistics for detecting CDM misfit, including relative fit indices (AIC, BIC, and CAIC) and absolute fit indices (RMSEA2, abs(fcor), and max(χ²_jj′)). These fit indices were assessed under different CDM settings with respect to Q-matrix misspecification and CDM misspecification. Results showed that the relative fit indices selected the correct DINA model most of the time and selected the correct G-DINA model well across most conditions. Absolute fit indices rejected the true DINA model if the Q-matrix was misspecified in any way, and rejected the true G-DINA model whenever the Q-matrix was under-specified. RMSEA2 could be artificially low when the Q-matrix was over-specified.
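For orientation only (not taken from the paper), the relative fit indices compared above are simple functions of a fitted model's log-likelihood, number of free parameters, and sample size; the sketch below computes them for hypothetical DINA and G-DINA calibrations.

```python
# Illustrative sketch: the relative fit indices compared in the study are
# functions of a fitted model's log-likelihood (LL), number of free
# parameters (k), and sample size (n). The values below are made up.
import math

def relative_fit_indices(log_likelihood: float, n_params: int, n_obs: int) -> dict:
    """Return AIC, BIC, and CAIC for a fitted model (e.g., DINA or G-DINA)."""
    deviance = -2.0 * log_likelihood
    return {
        "AIC":  deviance + 2 * n_params,
        "BIC":  deviance + n_params * math.log(n_obs),
        "CAIC": deviance + n_params * (math.log(n_obs) + 1),
    }

# Hypothetical comparison of a DINA and a G-DINA calibration of the same data:
# the model with the smaller index value is preferred.
print(relative_fit_indices(log_likelihood=-10520.3, n_params=62,  n_obs=1000))  # DINA
print(relative_fit_indices(log_likelihood=-10410.8, n_params=118, n_obs=1000))  # G-DINA
```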


Educational Research and Evaluation | 2010

Group Comparisons of Mathematics Performance from a Cognitive Diagnostic Perspective.

Yi-Hsin Chen; John M. Ferron; Marilyn S. Thompson; Joanna S. Gorin; Kikumi Tatsuoka

Traditional comparisons of test score means identify group differences in broad academic areas, but fail to provide substantive description of how the groups differ on the specific cognitive attributes required for success in the academic area. The rule space method (RSM) allows for group comparisons at the cognitive attribute level, which consists of the cognitive knowledge, skills, and abilities an individual can employ to solve a problem. In the current study, we extend RSM group comparison methods to include comparisons of the attribute characteristic curves (ACCs) and provide a method for estimating and plotting the ACCs using SAS. We further investigated Taiwanese mathematics performance on TIMSS-1999 by comparing cognitive attributes between students of different achievement levels as well as between male and female students. The results indicated the highest and lowest achieving students differed most on mastery probabilities for Algebra (C3), Open-ended items (S10), and Rule application in algebra (P4). Gender differences in mathematical skills were quite minimal for Taiwanese students. Detailed discussion of these findings is provided in the paper.
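The paper estimates and plots the ACCs in SAS; as a hedged Python analogue, the sketch below bins examinees by total score and plots the mean attribute mastery probability per bin. The mastery probabilities here are simulated, not the TIMSS-1999 rule-space estimates, and the attribute labels are borrowed from the abstract for illustration.

```python
# Hedged Python analogue of the SAS-based ACC plotting described above.
# Proficiency is proxied by total score; mastery probabilities are simulated.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n_examinees = 2000
total_score = rng.integers(0, 41, size=n_examinees)            # hypothetical 40-item test

# Simulated mastery probabilities that rise with total score (one column per attribute).
logits = (total_score[:, None] - np.array([18, 24])) / np.array([4.0, 5.0])
mastery_prob = 1.0 / (1.0 + np.exp(-logits))                    # shape (n_examinees, 2)

# Average mastery probability within total-score bins -> one ACC per attribute.
bins = np.arange(0, 46, 5)
centers = (bins[:-1] + bins[1:]) / 2
for a, label in enumerate(["C3: Algebra", "S10: Open-ended items"]):
    means = [mastery_prob[(total_score >= lo) & (total_score < hi), a].mean()
             for lo, hi in zip(bins[:-1], bins[1:])]
    plt.plot(centers, means, marker="o", label=label)

plt.xlabel("Total score")
plt.ylabel("Mean attribute mastery probability")
plt.title("Attribute characteristic curves (simulated data)")
plt.legend()
plt.show()
```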


Aids and Behavior | 2017

Psychometric Evaluation of the HIV Disclosure Belief Scale: A Rasch Model Approach

Jinxiang Hu; Julianne M. Serovich; Yi-Hsin Chen; Monique J. Brown; Judy A. Kimberly

This study provides a psychometric assessment of an HIV disclosure belief scale (DBS) among men who have sex with men (MSM). The study used baseline data from a clinical trial evaluating the effectiveness of an HIV serostatus disclosure intervention among 338 HIV-positive MSM. The Rasch model was used after the unidimensionality and local independence assumptions were tested for application of the model. Results suggest that only one item did not fit the model well. After removing that item, the DBS showed good model-data fit and high item and person reliabilities. The instrument showed measurement invariance across two age groups, but some items showed differential item functioning between Caucasian and other minority groups. The findings suggest that the DBS is suitable for measuring HIV disclosure beliefs, but caution should be exercised when the DBS is used to compare disclosure beliefs across racial/ethnic groups.
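As a minimal, hedged sketch of the model family used above, the dichotomous Rasch model expresses the probability of endorsing an item as a logistic function of person ability minus item difficulty; the actual DBS calibration, item parameters, and estimation software are not reproduced here.

```python
# Minimal sketch of the dichotomous Rasch model underlying the DBS analysis
# above (illustrative only; parameter values are arbitrary, and the DBS items
# themselves may be polytomous).
import math

def rasch_probability(theta: float, difficulty: float) -> float:
    """P(endorse item) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

# A person with a higher disclosure-belief level (theta) is more likely to
# endorse a given item; harder-to-endorse items have a larger difficulty b.
for theta in (-1.0, 0.0, 1.0):
    print(theta, round(rasch_probability(theta, difficulty=0.5), 3))
```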


Educational and Psychological Measurement | 2018

Exploring the Test of Covariate Moderation Effects in Multilevel MIMIC Models

Chunhua Cao; Eun Sook Kim; Yi-Hsin Chen; John M. Ferron; Stephen Stark

In multilevel multiple-indicator multiple-cause (MIMIC) models, covariates can interact at the within level, at the between level, or across levels. This study examines the performance of multilevel MIMIC models in estimating and detecting the interaction effect of two covariates through a simulation and provides an empirical demonstration of modeling the interaction in multilevel MIMIC models. The design factors include the location of the interaction effect (i.e., between, within, or across levels), cluster number, cluster size, intraclass correlation (ICC) level, magnitude of the interaction effect, and cross-level measurement invariance status. Type I error, power, relative bias, and root mean square error of the interaction effects are examined. The results showed that multilevel MIMIC models performed well in detecting the interaction effect at the within or across levels. However, when the interaction effect was at the between level, the performance of multilevel MIMIC models depended on the magnitude of the interaction effect, ICC, and sample size, especially cluster number. Overall, cross-level measurement noninvariance did not make a notable impact on the estimation of interaction in the structural part of multilevel MIMIC models when factor loadings were allowed to be different across levels.
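As a hedged illustration of the kind of design studied above (not the authors' simulation code), the sketch below generates two-level data in which two within-level covariates interact in predicting a latent factor measured by four indicators; all parameter values are arbitrary, and the resulting data set would then be passed to an SEM program for multilevel MIMIC estimation.

```python
# Hedged illustration: simulate two-level data with a within-level covariate
# interaction (x1 * x2) on a latent factor eta measured by four indicators.
# All parameter values are arbitrary.
import numpy as np

rng = np.random.default_rng(7)
n_clusters, cluster_size = 100, 20

rows = []
for j in range(n_clusters):
    u_j = rng.normal(0, 0.5)                       # between-level residual (ICC component)
    x1 = rng.normal(0, 1, cluster_size)            # within-level covariates
    x2 = rng.normal(0, 1, cluster_size)
    # Within-level interaction effect (0.3) on the latent factor eta.
    eta = 0.4 * x1 + 0.4 * x2 + 0.3 * x1 * x2 + u_j + rng.normal(0, 1, cluster_size)
    loadings = np.array([1.0, 0.8, 0.9, 0.7])
    y = eta[:, None] * loadings + rng.normal(0, 0.5, (cluster_size, 4))  # indicators
    for i in range(cluster_size):
        rows.append((j, x1[i], x2[i], *y[i]))

data = np.array(rows)   # columns: cluster id, x1, x2, y1..y4
print(data.shape)       # (2000, 7) -> input for an SEM program (e.g., Mplus)
```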


Journal of Negro Education | 2006

Understanding differences in postsecondary educational attainment: A comparison of predictive measures for Black and White students

Marilyn S. Thompson; Joanna S. Gorin; Khawla Obeidat; Yi-Hsin Chen


Archive | 2003

Relations among Teacher Expectancies, Student Perceptions of Teacher Oral Feedback, and Student Self-Concept: An Empirical Study in Taiwanese Elementary Schools.

Yi-Hsin Chen; Marilyn S. Thompson

Collaboration


Dive into Yi-Hsin Chen's collaboration.

Top Co-Authors

Jeffrey D. Kromrey, University of South Florida

Diep Nguyen, University of South Florida

Eun Sook Kim, University of South Florida

Yan Wang, University of South Florida

Chunhua Cao, University of South Florida

Thanh Pham, University of South Florida