Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Yue Yin is active.

Publication


Featured research published by Yue Yin.


Educational Assessment | 2002

Reasoning Dimensions Underlying Science Achievement: The Case of Performance Assessment.

Carlos C. Ayala; Richard J. Shavelson; Yue Yin; Susan Schultz

Snow argued for multidimensional science achievement in the National Education Longitudinal Study of 1988 (NELS:88) along dimensions of basic knowledge and reasoning, spatial-mechanical reasoning, and quantitative science. We examined the generality of these reasoning dimensions in other multiple-choice tests and in performance assessments. Confirmatory factor analyses retrieved the 3 dimensions both for a test composed of NELS:88, Third International Mathematics and Science Study (TIMSS), and National Assessment of Educational Progress (NAEP) multiple-choice items and for the NELS:88 items alone. We used the latter because its factor correlations were lower. We administered 3 reasoning-dimension-linked performance assessments to a subsample of 35 students from the main study. Performance assessments correlated moderately with each other and with the NELS:88 reasoning scores; the 2 methods partially converged on the dimensions. Performance scores scattered widely across multiple-choice scores because of the broad spectrum of reasoning and knowledge tapped. Findings are tentative; larger samples and cognitive studies of reasoning and knowledge might shed light on convergence.
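
A confirmatory factor analysis of this kind can be sketched in a few lines. The following is a minimal sketch, assuming the Python semopy package; the factor names follow the abstract, but the item names (item1 through item9) and the item_scores.csv file are hypothetical stand-ins, not the authors' actual model specification.

```python
# Minimal three-factor CFA sketch (hypothetical items, semopy assumed).
import pandas as pd
from semopy import Model

# lavaan-style measurement model: three reasoning dimensions,
# each loading on three hypothetical multiple-choice items.
spec = """
basic_knowledge_reasoning =~ item1 + item2 + item3
spatial_mechanical        =~ item4 + item5 + item6
quantitative_science      =~ item7 + item8 + item9
"""

data = pd.read_csv("item_scores.csv")  # hypothetical file of item scores
model = Model(spec)
model.fit(data)
print(model.inspect())  # loadings and factor correlations
```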


International Journal of Science Education | 2014

Using Formal Embedded Formative Assessments Aligned with a Short-Term Learning Progression to Promote Conceptual Change and Achievement in Science

Yue Yin; Miki K. Tomita; Richard J. Shavelson

This study examined the effect of learning progression-aligned formal embedded formative assessment on conceptual change and achievement in middle-school science. Fifty-two sixth graders were randomly assigned to either an experimental group or a control group. Both groups were taught about sinking and floating by the same teacher with identical curriculum materials and activities. In addition, the experimental group received during instruction three sets of formal embedded formative assessments, aligned with an expected learning progression, with qualitative feedback on how to improve their understanding. The control group spent the corresponding time between new curriculum activities conducting curriculum-specific extension activities. Overall, the experimental group experienced on average greater conceptual change than the control group. The experimental group also scored higher on average than the control group on general achievement tests, especially the performance assessment. This study thus supported the contention that embedding formal formative assessments within a curricular sequence built around an expected learning progression is a useful way to promote conceptual change along that learning progression in science classrooms.
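
As a rough illustration of the two-group comparison reported above, the sketch below runs an independent-samples t-test on invented post-test scores; the actual instruments and analyses in the study were more elaborate, and all numbers here are made up.

```python
# Hedged sketch of an experimental-vs-control comparison on invented data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical post-test conceptual-change scores, 26 students per group.
experimental = rng.normal(loc=72, scale=10, size=26)
control = rng.normal(loc=65, scale=10, size=26)

t, p = stats.ttest_ind(experimental, control)
print(f"mean difference = {experimental.mean() - control.mean():.1f}")
print(f"t = {t:.2f}, p = {p:.4f}")
```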


Applied Measurement in Education | 2008

Lessons Learned from the Process of Curriculum Developers' and Assessment Developers' Collaboration on the Development of Embedded Formative Assessments

Paul R. Brandon; Donald B. Young; Richard J. Shavelson; Rachael Jones; Carlos C. Ayala; Maria Araceli Ruiz-Primo; Yue Yin; Miki K. Tomita; Erin Marie Furtak

Our project to embed formative student assessments in the Foundational Approaches in Science Teaching curriculum required a close collaboration between curriculum developers at the Curriculum Research & Development Group (CRDG) and assessment developers at the Stanford Educational Assessment Laboratory (SEAL). This was a new endeavor for each organization, and throughout the project, many lessons were learned about embedding assessments and about the collaboration process. In this article, we discuss what we learned about the strengths and weaknesses of the collaboration up to the beginning of the randomized experiment. What we found comported with the literature on research collaborations. For example, past collaborations between CRDG and SEAL facilitated moving the project forward and sustained the collaboration. That said, the physical distance between the groups gave rise to some misunderstandings and led to a commitment to meet face-to-face on a regular basis; we found that conferencing software did not suffice. Moreover, in our zeal to implement formative assessments, the voices of teachers and teacher trainers got muffled until a pilot study confirmed their advice.


Journal of research on technology in education | 2015

Comparing Two Versions of Professional Development for Teachers Using Formative Assessment in Networked Mathematics Classrooms

Yue Yin; Judith Olson; Melfried Olson; Hannah Solvin; Paul R. Brandon

This study compared two versions of professional development (PD) designed for teachers using formative assessment (FA) in mathematics classrooms that were networked with Texas Instruments Navigator (NAV) technology. Thirty-two middle school mathematics teachers were randomly assigned to one of two groups: an FA-then-NAV group and an FA-and-NAV group. The FA-then-NAV group received PD in formative student assessment in the first year and PD in using networked classroom technology for formative assessment in the second year. The FA-and-NAV group received PD in using networked technology to implement formative assessment in two consecutive years. We examined changes in teachers' self-reported knowledge of formative assessment, self-efficacy in formative assessment, and attitudes toward the use of technology, as well as their evaluations of the two PD versions, by surveying the teachers at pretest, after Year 1 training, after Year 2 training, and after Year 3 (without training). We found significant growth in knowledge about general assessment, knowledge about formative assessment, self-efficacy in formative assessment, value of technology, and confidence in classroom technology for each version. Although no significant differences were found between the two versions on the measured constructs at the end, different growth trajectories were observed in each group over the three years. The majority of the teachers reported that they preferred the FA-and-NAV PD version.
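
A growth-trajectory comparison of this shape can be sketched with a group-by-wave table of means. The sketch below uses invented teacher ratings at the four survey waves; the group labels match the abstract, but the scale values, group sizes, and growth rates are hypothetical.

```python
# Hedged sketch: per-group mean scores across four survey waves.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
waves = ["pretest", "year1", "year2", "year3"]
rows = []
# Hypothetical: 16 teachers per group, different yearly gains per group.
for group, start, gain in [("FA-then-NAV", 3.0, 0.30), ("FA-and-NAV", 3.0, 0.45)]:
    for teacher in range(16):
        for w, wave in enumerate(waves):
            score = start + gain * w + rng.normal(0, 0.2)
            rows.append({"group": group, "teacher": teacher,
                         "wave": wave, "score": score})

df = pd.DataFrame(rows)
# Group-by-wave means make the different growth trajectories visible.
print(df.pivot_table(index="wave", columns="group", values="score"))
```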


Applied Measurement in Education | 2017

Using Bayes Factors to Evaluate Person Fit in Item Response Theory

Tianshu Pan; Yue Yin

In this article, we propose using Bayes factors (BF) to evaluate person fit in item response theory models within the framework of Bayesian evaluation of an informative diagnostic hypothesis. We first discuss the theoretical foundation for this application and how to analyze person fit using BF. To demonstrate the feasibility of this approach, we then use it to evaluate person fit in simulated and empirical data and compare the results with those of the HT statistic and the infit and outfit statistics. We found that, overall, BF performed as well as the HT statistic and better than the infit and outfit statistics when detecting aberrant responses. Given the flexibility of BF in handling data sets with small numbers of examinees, we suggest that BF can be used as a person-fit statistic, especially in computerized adaptive tests.
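
To make the idea concrete, here is a simplified person-fit Bayes factor: the marginal likelihood of a response pattern under a 2PL model with a standard normal prior on ability, divided by its likelihood under random responding. This is an illustration only, not the informative-hypothesis framework the article develops; all item parameters and responses are invented.

```python
# Simplified person-fit Bayes factor sketch (not the article's exact method).
import numpy as np

a = np.array([1.2, 0.8, 1.5, 1.0, 1.3])    # discriminations (hypothetical)
b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])  # difficulties (hypothetical)
responses = np.array([1, 1, 0, 1, 0])      # one examinee's 0/1 responses

def likelihood_2pl(theta):
    """Probability of the response pattern at a given ability theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return np.prod(np.where(responses == 1, p, 1.0 - p))

# Marginal likelihood under the 2PL: integrate over theta ~ N(0, 1)
# with simple quadrature on a grid.
thetas = np.linspace(-4.0, 4.0, 161)
prior = np.exp(-thetas**2 / 2) / np.sqrt(2 * np.pi)
like = np.array([likelihood_2pl(t) for t in thetas])
m_2pl = np.trapz(like * prior, thetas)

# Marginal likelihood under random responding: p = 0.5 on every item.
m_guess = 0.5 ** len(responses)

bf = m_2pl / m_guess  # BF >> 1 suggests the pattern fits the 2PL model
print(f"BF (2PL vs. random responding) = {bf:.3f}")
```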


Psychological Methods | 2012

The Relationship between Mean Square Differences and Standard Error of Measurement: Comment on Barchard (2012).

Tianshu Pan; Yue Yin

In the discussion of mean square difference (MSD) and standard error of measurement (SEM), Barchard (2012) concluded that the MSD between 2 sets of test scores is greater than 2(SEM)² and that SEM underestimates the score difference between 2 tests when the 2 tests are not parallel. This conclusion has limitations for 2 reasons. First, strictly speaking, MSD should not be compared to SEM because they measure different things, rest on different assumptions, and capture different sources of error. Second, the related proof and conclusions in Barchard hold only under the assumptions of equal reliabilities, homogeneous variances, and independent measurement errors. To address these limitations, we propose that MSD instead be compared to the standard error of measurement of difference scores (SEM_(x−y)), so that the comparison extends to conditions in which the 2 tests have unequal reliabilities and score variances.
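
The quantities in this comment are easy to check numerically. The sketch below simulates two parallel tests and verifies that MSD is approximately 2(SEM)², and that SEM_(x−y) = sqrt(SEM_x² + SEM_y²) under independent errors; all numbers are invented for illustration.

```python
# Numeric sketch: MSD vs. SEM for simulated parallel tests.
import numpy as np

rng = np.random.default_rng(2)
n, sem = 100_000, 3.0
true = rng.normal(50, 10, n)          # true scores
x = true + rng.normal(0, sem, n)      # test X = true score + error
y = true + rng.normal(0, sem, n)      # parallel test Y = true score + error

msd = np.mean((x - y) ** 2)
print(f"MSD        = {msd:.2f}")      # close to 2 * SEM^2 = 18
print(f"2 * SEM^2  = {2 * sem**2:.2f}")
# SEM of difference scores under independent errors:
print(f"SEM_(x-y)  = {np.sqrt(sem**2 + sem**2):.2f}")  # about 4.24
```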


Journal of Research in Science Teaching | 2005

Comparison of two concept-mapping techniques: Implications for scoring, interpretation, and use

Yue Yin; Jim Vanides; Maria Araceli Ruiz-Primo; Carlos C. Ayala; Richard J. Shavelson


Applied Measurement in Education | 2008

On the Impact of Curriculum-Embedded Formative Assessment on Learning: A Collaboration between Curriculum and Assessment Developers

Richard J. Shavelson; Donald B. Young; Carlos C. Ayala; Paul R. Brandon; Erin Marie Furtak; Maria Araceli Ruiz-Primo; Miki K. Tomita; Yue Yin


Applied Measurement in Education | 2008

On the Impact of Formative Assessment on Student Motivation, Achievement, and Conceptual Change

Yue Yin; Richard J. Shavelson; Carlos C. Ayala; Maria Araceli Ruiz-Primo; Paul R. Brandon; Erin Marie Furtak; Miki K. Tomita; Donald B. Young


Applied Measurement in Education | 2008

On the Fidelity of Implementing Embedded Formative Assessments and Its Relation to Student Learning

Erin Marie Furtak; Maria Araceli Ruiz-Primo; Jonathan T. Shemwell; Carlos C. Ayala; Paul R. Brandon; Richard J. Shavelson; Yue Yin

Collaboration


Dive into Yue Yin's collaborations.

Top Co-Authors


Paul R. Brandon

University of Hawaii at Manoa

Erin Marie Furtak

University of Colorado Boulder
