Publication


Featured research published by Christina van Barneveld.


SAGE Open | 2011

Modeling Student Motivation and Students’ Ability Estimates From a Large-Scale Assessment of Mathematics

Carlos Zerpa; Krystal Hachey; Christina van Barneveld; Marielle Simon

When large-scale assessments (LSA) do not hold personal stakes for students, students may not put forth their best effort. Low-effort examinee behaviors (e.g., guessing, omitting items) result in an underestimate of examinee abilities, which is a concern when using results of LSA to inform educational policy and planning. The purpose of this study was to explore the relationship between examinee motivation as defined by expectancy-value theory, student effort, and examinee mathematics abilities. A principal components analysis was used to examine the data from Grade 9 students (n = 43,562) who responded to a self-report questionnaire on their attitudes and practices related to mathematics. The results suggested a two-component model in which the components were interpreted as task-values in mathematics and student effort. Next, a hierarchical linear model was implemented to examine the relationship between examinee component scores and their estimated ability on an LSA. The results of this study provide evidence that motivation, as defined by expectancy-value theory, and student effort partially explain student ability estimates, and may have implications for the information that is transferred to testing organizations, school boards, and teachers when assessing students' Grade 9 mathematics learning.
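The abstract names the models but not their specification. As an illustrative sketch only, a minimal two-level hierarchical linear model consistent with the description (the two component scores predicting the estimated ability of student i in group j, e.g., students nested in schools) could be written as

\[
\text{Level 1:}\quad \hat{\theta}_{ij} = \beta_{0j} + \beta_{1j}\,\text{TaskValue}_{ij} + \beta_{2j}\,\text{Effort}_{ij} + r_{ij}
\]
\[
\text{Level 2:}\quad \beta_{0j} = \gamma_{00} + u_{0j}, \qquad \beta_{1j} = \gamma_{10}, \qquad \beta_{2j} = \gamma_{20}
\]

The predictor names, the nesting structure, and the choice of a random intercept with fixed slopes are assumptions for illustration; the published model may differ.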


Applied Psychological Measurement | 2007

The Effect of Examinee Motivation on Test Construction Within an IRT Framework

Christina van Barneveld

The purpose of this study is to examine the effects of a false assumption regarding the motivation of examinees on test construction. Simulated data were generated using two models of item responses (the three-parameter logistic item response model alone and in combination with Wise’s examinee persistence model) and were calibrated using a Bayesian method. For the conditions studied, biased item parameter estimates resulted from responses from poorly motivated examinees. Bias in item parameter estimates resulted in bias in item information estimates and test information estimates for an optimally constructed test. The direction and magnitude of the bias depended on the conditions studied. The implications of the results for test development companies, examinees, and users of test results are discussed.
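For reference, the three-parameter logistic (3PL) model named in the abstract gives the probability that an examinee with ability \(\theta\) answers item i correctly as

\[
P_i(\theta) = c_i + (1 - c_i)\,\frac{1}{1 + e^{-a_i(\theta - b_i)}},
\]

where \(a_i\) is the item discrimination, \(b_i\) the item difficulty, and \(c_i\) the pseudo-guessing lower asymptote. Wise’s examinee persistence model, which the study combines with the 3PL to represent low-motivation responding, is not reproduced here.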


Canadian Journal of Education | 2006

School Board Improvement Plans in Relation to the AIP Model of Educational Accountability: A Content Analysis

Christina van Barneveld; Wendy Stienstra; Sandra Stewart

For this study we analyzed the content of school board improvement plans in relation to the Achievement-Indicators-Policy (AIP) model of educational accountability (Nagy, Demeris, & van Barneveld, 2000). We identified areas of congruence and incongruence between the plans and the model. Results suggested that the content of the improvement plans, which placed a heavy emphasis on large-scale, standardized assessments as a measure of student achievement, was incongruent with some key elements of the educational accountability model, especially as it related to the use of indicator data or policy information. Key words: educational planning, student achievement, large-scale testing, student assessment


International Journal of Testing | 2007

Does It Matter if You “Kill” the Patient or Order Too Many Tests? Scoring Alternatives for a Test of Clinical Reasoning Skill

Ruth A. Childs; Jennifer L. Dunn; Christina van Barneveld; Andrew P. Jaciw

This study compares five scoring approaches for a test of clinical reasoning skills. All of the approaches incorporate information about the correct item responses selected and the errors, such as selecting too many responses or selecting a response that is inappropriate and/or harmful to the patient. The approaches are combinations of theoretical framework (classical test theory or item response theory) and number of scores (single or multiple). The implications of these alternatives for score reliability and validity are discussed and the impact on pass-fail decisions is examined. The results support the use of a single score.


IEJME-Mathematics Education | 2009

Factors that Impact Preservice Teachers’ Growth in Conceptual Mathematical Knowledge During a Mathematics Methods Course

Carlos Zerpa; Ann Kajander; Christina van Barneveld


International Journal of Education | 2012

Teaching Practices and Student Motivation that Influence Student Achievement on Large-Scale Assessments

Gul Shahzad Sarwar; Carlos Zerpa; Krystal Hachey; Marielle Simon; Christina van Barneveld


American Journal of Health Education | 2009

An Evaluation of a Classroom Science Intervention Designed to Extend the Bicycle Helmet Safety Message

Moira McPherson; Pamela K. Marsh; William J. Montelpare; Christina van Barneveld; Carlos Zerpa


Canadian Journal of Education | 2017

The Rights and Responsibility of Test Takers When Large-Scale Testing Is Used for Classroom Assessment

Christina van Barneveld; Karieann Brinson


International Journal of Applied Psychology | 2015

The Effect of Removing Examinees with Low Motivation on Item Response Data Calibration

Carlos Zerpa; Christina van Barneveld


Proceedings of the Canadian Engineering Education Association | 2013

An Exploratory Case Study of the Use of Video Digitizing Technology to Detect Answer-Copying on a Paper-and-Pencil Multiple-Choice Test

Carlos Zerpa; Christina van Barneveld
