Publication


Featured research published by Theodorus Johannes Hendrikus Maria Eggen.


Computers in Education | 2012

Effects of feedback in a computer-based assessment for learning

Fabienne van der Kleij; Theodorus Johannes Hendrikus Maria Eggen; Caroline F. Timmers; Bernard P. Veldkamp

The effects of written feedback in a computer-based assessment for learning on students' learning outcomes were investigated in an experiment at a higher education institute in the Netherlands. Students were randomly assigned to three groups and were subjected to an assessment for learning with different kinds of feedback: immediate knowledge of correct response (KCR) + elaborated feedback (EF), delayed KCR + EF, and delayed knowledge of results (KR). A summative assessment was used as a post-test. No significant effect of the feedback condition on student achievement on the post-test was found. Results suggest that students paid more attention to immediate than to delayed feedback. Furthermore, the time spent reading feedback was positively influenced by students' attitude and motivation. Students perceived immediate KCR + EF feedback to be more useful for learning than KR. Students also had a more positive attitude towards feedback in a CBA when they received KCR + EF rather than KR only.


Applied Psychological Measurement | 1999

Item Selection in Adaptive Testing with the Sequential Probability Ratio Test

Theodorus Johannes Hendrikus Maria Eggen

Wald’s (1947) sequential probability ratio test can be implemented as an adaptive test for classifying examinees into categories. However, current implementations use an item selection method that is either random or based on Fisher information (FI), a criterion related to optimized examinee trait estimates. In this study, a method based on Kullback-Leibler information (KLI) was evaluated. Simulation studies were conducted for two- and three-category classifications in which item selection methods based on FI and KLI were compared. Results showed that testing algorithms using KLI-based item selection performed better than or as well as those using FI-based item selection.
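
The two selection criteria compared in this study can be sketched for the two-parameter logistic (2PL) model. The item pool, cutoff, and indifference half-width below are illustrative, not taken from the study:

```python
import math

def p_correct(theta, a, b):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    """Fisher information of a 2PL item at theta."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def kl_info(theta0, theta1, a, b):
    """Kullback-Leibler information between the item's response
    distributions at two hypothesised ability values."""
    p0, p1 = p_correct(theta0, a, b), p_correct(theta1, a, b)
    return (p0 * math.log(p0 / p1)
            + (1.0 - p0) * math.log((1.0 - p0) / (1.0 - p1)))

# Illustrative item pool of (a, b) pairs.
pool = [(1.2, -1.0), (0.8, 0.1), (1.5, 0.2), (1.0, 1.3)]
cutoff, delta = 0.0, 0.5  # classification cutoff; indifference half-width

# FI-based selection: most informative item at the cutoff itself.
best_fi = max(range(len(pool)), key=lambda i: fisher_info(cutoff, *pool[i]))
# KLI-based selection: best separation of the two SPRT hypotheses.
best_kl = max(range(len(pool)),
              key=lambda i: kl_info(cutoff - delta, cutoff + delta, *pool[i]))
```

In this toy pool both criteria pick the high-discrimination item with difficulty near the cutoff; with less regular pools the two rankings can diverge, which is what the simulation studies examine.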


Review of Educational Research | 2015

Effects of Feedback in a Computer-Based Learning Environment on Students’ Learning Outcomes: A Meta-Analysis

Fabienne van der Kleij; Remco C.W. Feskens; Theodorus Johannes Hendrikus Maria Eggen

In this meta-analysis, we investigated the effects of methods for providing item-based feedback in a computer-based environment on students’ learning outcomes. From 40 studies, 70 effect sizes were computed, which ranged from −0.78 to 2.29. A mixed model was used for the data analysis. The results show that elaborated feedback (EF; e.g., providing an explanation) produced larger effect sizes (0.49) than feedback regarding the correctness of the answer (KR; 0.05) or providing the correct answer (KCR; 0.32). EF was particularly more effective than KR and KCR for higher order learning outcomes. Effect sizes were positively affected by EF, and larger effect sizes were found for mathematics compared with social sciences, science, and languages. Effect sizes were negatively affected by delayed feedback timing and by primary and high school educational levels. Although the results suggested that immediate feedback was more effective for lower order learning than delayed feedback and vice versa, no significant interaction was found.


Educational and Psychological Measurement | 2000

Computerized Adaptive Testing for Classifying Examinees Into Three Categories

Theodorus Johannes Hendrikus Maria Eggen; G.J.J.M. Straetmans

The objective of this study was to explore the possibilities for using computerized adaptive testing in situations in which examinees are to be classified into one of three categories. Testing algorithms with two different statistical computation procedures are described and evaluated. The first computation procedure is based on statistical testing and the other on statistical estimation. Item selection methods based on maximum information (MI) considering content and exposure control are considered. The measurement quality of the proposed testing algorithms is reported. The results of the study are that a reduction of at least 22% in the mean number of items can be expected in a computerized adaptive test (CAT) compared to an existing paper-and-pencil placement test. Furthermore, statistical testing is a promising alternative to statistical estimation. Finally, it is concluded that imposing constraints on the MI selection strategy does not negatively affect the quality of the testing algorithms.
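
The estimation-based branch of such an algorithm needs an ability estimate after every response. A minimal sketch of an expected a posteriori (EAP) estimate under the 2PL model with a standard normal prior, evaluated on a simple quadrature grid; the item parameters are illustrative:

```python
import math

def p_correct(theta, a, b):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def eap_estimate(responses, items, n_quad=61):
    """EAP ability estimate with a standard normal prior on theta.
    responses: list of 0/1 scores; items: list of (a, b) 2PL parameters."""
    qs = [-4.0 + 8.0 * i / (n_quad - 1) for i in range(n_quad)]
    num = den = 0.0
    for q in qs:
        w = math.exp(-0.5 * q * q)  # unnormalised normal prior weight
        like = 1.0
        for r, (a, b) in zip(responses, items):
            p = p_correct(q, a, b)
            like *= p if r else (1.0 - p)
        num += q * w * like
        den += w * like
    return num / den
```

Unlike a maximum-likelihood estimate, the EAP estimate stays finite for all-correct or all-incorrect response patterns, which is one reason it is common in CAT; with no responses it simply returns the prior mean.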


Applied Psychological Measurement | 2002

Evaluation of selection procedures for computerized adaptive testing with polytomous items

P.W. van Rijn; Theodorus Johannes Hendrikus Maria Eggen; B.T. Hemker; P.F. Sanders

In the present study, a procedure that has been used to select dichotomous items in computerized adaptive testing was applied to polytomous items. This procedure was designed to select the item with maximum weighted information. In a simulation study, the item information function was integrated over a fixed interval of ability values and the item with the maximum area was selected. This maximum interval information item selection procedure was compared to a maximum point information item selection procedure. Substantial differences between the two item selection procedures were not found when computerized adaptive tests were evaluated on bias and the root mean square of the ability estimate.
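
The two criteria can be sketched for a polytomous item. The generalized partial credit model is used here as a common choice; the model, the trapezoidal integration, and the parameter values are illustrative, not taken from the study:

```python
import math

def gpcm_probs(theta, a, bs):
    """Category probabilities under the generalized partial credit model.
    bs are the step difficulties; categories run 0..len(bs)."""
    logits = [0.0]
    for b in bs:
        logits.append(logits[-1] + a * (theta - b))
    m = max(logits)                     # guard against overflow
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def item_info(theta, a, bs):
    """GPCM item information: a^2 times the score variance at theta."""
    ps = gpcm_probs(theta, a, bs)
    mean = sum(k * p for k, p in enumerate(ps))
    return a * a * sum((k - mean) ** 2 * p for k, p in enumerate(ps))

def interval_info(a, bs, lo, hi, n=50):
    """Trapezoidal integral of item information over [lo, hi]."""
    h = (hi - lo) / n
    total = 0.5 * (item_info(lo, a, bs) + item_info(hi, a, bs))
    total += sum(item_info(lo + i * h, a, bs) for i in range(1, n))
    return total * h

item = (1.0, [-0.5, 0.5])   # (a, step difficulties): a three-category item
theta_hat = 0.0
point = item_info(theta_hat, *item)                        # point criterion
area = interval_info(*item, theta_hat - 1.0, theta_hat + 1.0)  # interval criterion
```

Point selection maximises `item_info` at the current ability estimate; interval selection maximises the integrated area, so items informative over a neighbourhood of the estimate can win even if they are not optimal at the point itself.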


Assessment in Education: Principles, Policy & Practice | 2012

High-stakes testing–value, fairness and consequences

Gordon Stobart; Theodorus Johannes Hendrikus Maria Eggen

High-stakes testing has been with us for over two thousand years and is steadily increasing both in scale and range. This special issue considers some of the main uses of these tests (a term used l...


Cadmo | 2011

The effectiveness of methods for providing written feedback through a computer-based assessment for learning: a systematic review

Fabienne van der Kleij; Caroline F. Timmers; Theodorus Johannes Hendrikus Maria Eggen

This study reviews literature regarding the effectiveness of different methods for providing written feedback through a computer-based assessment for learning. In analysing the results, a distinction is made between lower-order and higher-order learning. What little high-quality research is available suggests that students could benefit from knowledge of correct response (KCR) to obtain lower-order learning outcomes. In addition, elaborated feedback (EF) seems beneficial for gaining both lower-order and higher-order learning outcomes. Furthermore, this study shows that a number of variables should be taken into account when investigating the effects of feedback on learning outcomes. Implications for future research are discussed.


Assessment in Education: Principles, Policy & Practice | 2015

Integrating data-based decision making, Assessment for Learning and diagnostic testing in formative assessment

Fabienne van der Kleij; Jorine Vermeulen; Kim Schildkamp; Theodorus Johannes Hendrikus Maria Eggen

Recent research has highlighted the lack of a uniform definition of formative assessment, although its effectiveness is widely acknowledged. This paper addresses the theoretical differences and similarities amongst three approaches to formative assessment that are currently most frequently discussed in educational research literature: data-based decision making (DBDM), Assessment for Learning (AfL) and diagnostic testing (DT). Furthermore, the differences and similarities in the implementation of each approach were explored. This analysis shows that although differences exist amongst the theoretical underpinnings of DBDM, AfL and DT, the combination of these approaches can create more informed learning environments. The thoughtful integration of the three assessment approaches should lead to more valid formative decisions, if a range of evidence about student learning is used to continuously optimise student learning.


Computers in Education | 2015

Psychometric analysis of the performance data of simulation-based assessment

Sebastiaan de Klerk; Bernard P. Veldkamp; Theodorus Johannes Hendrikus Maria Eggen

Researchers have shown in multiple studies that simulations and games can be effective and powerful tools for learning and instruction (cf. Mitchell & Savill-Smith, 2004; Kirriemuir & McFarlane, 2004). Most of these studies deploy a traditional pretest-posttest design in which students usually do a paper-based test (pretest), then play the simulation or game, and subsequently do a second paper-based test (posttest). Pretest-posttest designs treat the game as a black box in which something occurs that influences subsequent performance on the posttest (Buckley, Gobert, Horwitz, & O'Dwyer, 2010). Less research has been done in which game-play product or process data themselves are used as indicators of student proficiency in some area. Over the last decade, however, researchers have increasingly focused on what is happening inside the black box, and the literature on the topic is growing. To our knowledge, no systematic reviews have been published that investigate the psychometric analysis of performance data of simulation-based assessment (SBA) and game-based assessment (GBA). Therefore, in Part I of this article, a systematic review of the psychometric analysis of the performance data of SBA is presented. The main question addressed in this review is: what psychometric strategies or models for treating and analyzing performance data from simulations and games are documented in the scientific literature? Then, in Part II of this article, the findings of our review are further illustrated by presenting an empirical example of the psychometric model that, according to our review, is most often applied to the performance data of SBA: the Bayesian network. Both the results from Part I and Part II assist future research into the use of simulations and games as assessment instruments.
Highlights: We present a review of the psychometric analysis of simulation-based assessment. We performed the review from the Evidence-Centered Design framework perspective. The Bayes net is the most used psychometric model for simulation-based assessment. We present an example of a Bayes net of a real simulation-based assessment. We make recommendations regarding the development of a simulation-based assessment.
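
At its core, a Bayes net for assessment links a latent proficiency node to observable task outcomes through a conditional probability table, and applies Bayes' rule as evidence arrives. A minimal, hypothetical two-node sketch; the states, prior, and conditional probabilities are invented for illustration:

```python
def update(prior, cpt, outcome):
    """Posterior over latent proficiency states after observing one
    task outcome, by Bayes' rule.
    prior: {state: P(state)}
    cpt:   {state: {outcome: P(outcome | state)}}"""
    unnorm = {s: prior[s] * cpt[s][outcome] for s in prior}
    z = sum(unnorm.values())
    return {s: v / z for s, v in unnorm.items()}

# Hypothetical numbers: a master succeeds at the task 80% of the time,
# a non-master 30% of the time; flat prior before any evidence.
prior = {"master": 0.5, "non-master": 0.5}
cpt = {"master": {"success": 0.8, "failure": 0.2},
       "non-master": {"success": 0.3, "failure": 0.7}}

post = update(prior, cpt, "success")    # one observed success
post2 = update(post, cpt, "success")    # evidence accumulates across tasks
```

Real SBA networks add more latent and observable nodes, but the mechanics are the same: each logged performance indicator shifts the posterior over proficiency.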


Applied Psychological Measurement | 1986

An empirical Bayesian approach to item banking

Willem J. van der Linden; Theodorus Johannes Hendrikus Maria Eggen

A procedure for the sequential optimization of the calibration of an item bank is given. The procedure is based on an empirical Bayesian approach to a reformulation of the Rasch model as a model for paired comparisons between the difficulties of test items in which ties are allowed to occur. First, it is shown how a paired-comparisons design deals with the usual incompleteness of calibration data and how the item parameters can be estimated using this design. Next, the procedure for a sequential optimization of the item parameter estimators is given, both for individuals responding to pairs of items and for item and examinee groups of any size. The paper concludes with a discussion of the choice of the first priors in the procedure and the problems involved in its generalization to other item response models.
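
The paper's sequential empirical Bayes procedure is more involved, but the underlying paired-comparisons view of item difficulty can be illustrated with a plain Bradley-Terry fit via the standard MM algorithm. This is a stand-in for, not a reproduction of, the paper's method, and the win counts are invented:

```python
def bradley_terry(wins, n_iter=200):
    """Fit Bradley-Terry 'easiness' parameters from paired-comparison
    counts: wins[i][j] = times item i was answered correctly while
    item j was answered incorrectly (ties discarded).
    Standard MM update, not the paper's sequential Bayes scheme."""
    n = len(wins)
    p = [1.0] * n
    for _ in range(n_iter):
        new = []
        for i in range(n):
            w_i = sum(wins[i])
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(n) if j != i)
            new.append(w_i / denom if denom > 0 else p[i])
        s = sum(new)
        p = [x * n / s for x in new]   # fix the scale (identifiability)
    return p

# Invented counts for three items; item 0 is answered correctly most often,
# so it should come out as the easiest.
wins = [[0, 9, 12],
        [3, 0, 7],
        [2, 4, 0]]
easiness = bradley_terry(wins)
```

Higher `easiness` corresponds to lower Rasch difficulty; the paper's contribution is choosing which pair to observe next and updating these estimates sequentially with empirical Bayes priors.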

Collaboration


Dive into Theodorus Johannes Hendrikus Maria Eggen's collaborations.

Top Co-Authors

Caroline F. Timmers

Saxion University of Applied Sciences
