Frans J. Prins
Utrecht University
Publications
Featured research published by Frans J. Prins.
Assessment & Evaluation in Higher Education | 2005
Frans J. Prins; Dominique Sluijsmans; Paul A. Kirschner; Jan-Willem Strijbos
In this case study, our aim was to gain more insight into the possibilities of qualitative formative peer assessment in a computer-supported collaborative learning (CSCL) environment. An approach was chosen in which peer assessment was operationalized in assessment assignments and assessment tools embedded in the course material. The course was a higher education case‐based virtual seminar in which students were asked to conduct research and write a report in small multidisciplinary teams. The assessment assignments comprised discussing assessment criteria, assessing the report of a fellow group, and writing an assessment report. A list of feedback rules was one of the assessment tools. A qualitatively oriented study was conducted, focusing on students’ attitudes towards peer assessment and their practical use of the peer assessment assignments and tools. Results showed that students’ attitude towards peer assessment was positive and that the assessment assignments had added value. However, not all students completed all assessment assignments. Recommendations for implementing peer assessment in CSCL environments, as well as suggestions for future research, are discussed.
Computers in Human Behavior | 2010
Chris Phielix; Frans J. Prins; Paul A. Kirschner
This study investigated the effects of a peer feedback tool and a reflection tool on social and cognitive performance during computer supported collaborative learning (CSCL). A CSCL-environment was augmented with a peer feedback tool (Radar) and a reflection tool (Reflector) in order to make group members aware of both their individual and their group behavior. Radar visualizes how group members perceive their own social and cognitive performance and that of their peers during collaboration along five dimensions. Reflector stimulates group members to reflect upon their own performance and the performance of the group. A 2x2 factorial between-subjects design was used to examine whether Radar and Reflector would lead to better team development, more group satisfaction, lower levels of group conflict, more positive attitudes toward problem-based collaboration, and a better group product. Results show that groups with Radar perceived their team as being better developed, experienced lower conflict levels, and had a more positive attitude towards collaborative problem solving than groups without Radar. The quality of group products, however, did not differ. The results demonstrate that peer feedback on the social performance of individual group members can enhance the performance and attitudes of a CSCL-group.
Computers in Human Behavior | 2011
Chris Phielix; Frans J. Prins; Paul A. Kirschner; Gijsbert Erkens; Jos Jaspers
A peer feedback tool (Radar) and a reflection tool (Reflector) were used to enhance group performance in a computer-supported collaborative learning environment. Radar allows group members to assess themselves and their fellow group members on six traits related to social and cognitive behavior. Reflector stimulates group members to reflect on their past, present and future group functioning, stimulating them to set goals and formulate plans to improve their social and cognitive performance. The underlying assumption was that group performance would be positively influenced by making group members aware of how they, their peers and the whole group perceive their social and cognitive behavior in the group. Participants were 108 fourth-year high school students working in dyads, triads and groups of four on a collaborative writing task, with or without the tools. Results demonstrate that awareness stimulated by the peer feedback and reflection tools enhances group-process satisfaction and social performance of CSCL-groups.
Computers in Human Behavior | 2002
Marcel V. J. Veenman; Frans J. Prins; Jan J. Elshout
The aim of this study was to examine the role of metacognitive skillfulness and intellectual ability during initial inductive learning with a complex computer simulation. It was hypothesized that adequate learning behavior and performance are initiated by high-quality metacognitive skillfulness. Theories proposed by Elshout [Elshout, J. J. (1987). Problem solving and education. In E. de Corte, H. Lodewijks, R. Parmetier, & P. Span (Eds.), Learning and instruction (pp. 259–271). Oxford: Pergamon Books Ltd. Leuven: University Press] and Raaheim [Raaheim, K. (1988). Intelligence and task novelty. In R. J. Sternberg (Ed.), Advances in the psychology of human intelligence (Vol. 4; pp. 73–97). Hillsdale, NJ: Erlbaum] predict that the impact of intellectual ability on learning performance is at most moderate during initial inductive learning. Students with low or high intellectual ability were asked to induce rules of optics by conducting experiments with lights and lenses in a computerized Optics Lab. Learning performance was assessed using both qualitative and quantitative measures. Results showed that metacognitive skillfulness was positively related to learning behavior and to scores on the qualitative tests. As predicted by Elshout (1987) and Raaheim [1988; Raaheim, K. (1991). Is the high IQ person really in trouble? Why? In H. A. H. Rowe (Ed.), Intelligence: reconceptualization and measurement (pp. 35–46). Hillsdale, NJ: Erlbaum], the impact of intellectual ability on learning performance was moderate during initial inductive learning. Metacognitive skillfulness and intellectual ability appeared to be unrelated. This study shows that during initial inductive learning with a complex computer simulation, learners draw heavily on their metacognitive skillfulness, which results mainly in qualitative knowledge. Consequently, complex computer-simulated learning environments are only appropriate for novice learners with high metacognitive skillfulness.
European Journal of Cognitive Psychology | 2008
Fleurie Nievelstein; Tamara van Gog; Henny P. A. Boshuizen; Frans J. Prins
Little research has been conducted on expertise-related differences in conceptual and ontological knowledge in law, even though this type of knowledge is a prerequisite for correctly interpreting and reasoning about legal cases, and differences in conceptual and ontological knowledge structures between students, and between students and teachers, might lead to miscommunication. This study investigated the extent and organisation of the conceptual and ontological knowledge of novices, advanced students, and experts in law, using a card-sorting task and a concept-elaboration task. The results showed that novices used more everyday examples and were less accurate in their elaborations of concepts than advanced students and experts; moreover, the organisation of their knowledge did not overlap within their group (i.e., no “shared” ontology). Experts gave more judicial examples based on the lawbook and were more accurate in their elaborations than advanced students, and their knowledge overlapped strongly within their group (i.e., a strong ontology). Incorrect conceptual knowledge seems to impede the correct understanding of cases and the correct application of precise and formal rules in law.
Evaluation and Program Planning | 2011
Liesbeth Baartman; Frans J. Prins; Paul A. Kirschner; Cees van der Vleuten
The goal of this article is to contribute to the validation of a self-evaluation method that schools can use to evaluate the quality of their Competence Assessment Program (CAP). The outcomes of the self-evaluations of two schools are systematically compared: a novice school with little experience in competence-based education and assessment, and an innovative school with extensive experience. The self-evaluation was based on 12 quality criteria for CAPs, including validity and reliability as well as criteria stressing the importance of the formative function of assessment, such as meaningfulness and educational consequences. In each school, teachers, management, and the examination board participated. Results show that the two schools use different approaches to assure assessment quality. The innovative school seems to be more aware of its own strengths and weaknesses, to have a more positive attitude towards teachers, students, and educational innovations, and to explicitly involve stakeholders (i.e., teachers, students, and the work field) in its assessments. This school also had a more explicit vision of the goal of competence-based education and could design its assessments in accordance with these goals.
Assessment & Evaluation in Higher Education | 2012
Marieke van der Schaaf; Liesbeth Baartman; Frans J. Prins
Student portfolios are increasingly used for assessing student competences in higher education, but results about the construct validity of portfolio assessment are mixed. A prerequisite for construct validity is that the portfolio assessment is based on relevant portfolio content. Assessment criteria are often used to enhance this condition. This study aims to identify whether assessment criteria can improve content, argumentation and communication during teacher moderation while judging student portfolios. Six teachers scored 32 student portfolios in dyads, with and without assessment criteria. Their judgement processes were qualitatively analysed. Results indicated that the quality of their judgement processes was low, since teachers based their judgements mainly on their own personal opinions and less on evidence found in the portfolio. Teachers barely paid attention to quality checks and easily agreed with each other. When teachers used assessment criteria, the quality of their judgements slightly improved: they based their judgements more on relevant evidence, relied less on personal experiences, and more often checked the quality of their judgement processes. It is concluded that the quality of teacher portfolio judgement is low, and that the use of assessment criteria can enhance it.
Educational Psychology | 2016
Joost Jansen in de Wal; Lisette Hornstra; Frans J. Prins; Thea Peetsma; Ineke van der Veen
This study’s aim was to examine the prevalence, development and domain specificity of fifth- and sixth-grade elementary school students’ achievement goal profiles. Achievement goals were measured for language and mathematics among 722 pupils at three points in time. These data were analysed through latent profile analysis and latent transition analysis. Results indicated that three similar goal profiles could be discerned at all measurement waves for both language and mathematics. Profiles were labelled ‘multiple goals’, ‘approach oriented’ and ‘moderate/indifferent’. In both mathematics and language, around 80% of the participants remained stable in their goal profiles across measurements. Students who transitioned between goal profiles mostly moved from less to more favourable profiles. Profile membership and transitions between profiles were found to be relatively domain general with 60% overlap between domains. The high level of stability over time and across domains suggests that students’ goal profiles represent relatively stable personal dispositions.
Irish Educational Studies | 2012
H. De Bruin; M.F. van der Schaaf; Anne E. Oosterbaan; Frans J. Prins
Several studies have concluded that deep reflection is infrequently reached in student portfolios. An explanation for these disappointing findings might be that motivation for portfolio reflection determines the quality of reflection. This study aimed to examine the relationship between motivation for using digital portfolios and reflection. Participants were 156 eleventh-grade students in secondary education, whose motivation for composing a digital portfolio was measured by the motivation part of the Motivated Strategies for Learning Questionnaire. The portfolios of 37 of the 156 students were examined in terms of the amount and nature of reflection by means of a coding scheme based on Mezirow’s model of transformative learning. On average, one-fifth (19.5%) of the paragraphs in a portfolio contained reflection, and paragraphs with deep reflection were rarely found (0.8%). It was concluded that motivation for composing a portfolio was fair, but not related to the amount and nature of reflection. This exploratory study gives rise to further research into factors that might influence the quality of portfolio reflection.
Assessment & Evaluation in Higher Education | 2017
Frans J. Prins; Renske de Kleijn; Jan van Tartwijk
A rubric for research theses was developed, based on the manual of the American Psychological Association, to be used as an assessment tool for teachers and students. The aim was to make students aware of what is expected, to familiarize them with the criteria, and to help them interpret teacher and peer feedback. Two studies examined whether students use and value these functions. In the first study, the rubric was provided to 105 Educational Sciences students working on their bachelor’s thesis. Questionnaire data indicated that students did value the rubric for the intended functions, although rubric use was not related to ability. In a panel interview, teachers stated that the number of proficiency levels should be increased to distinguish adequately between good and excellent students, and that a criterion concerning the student’s role during supervision should be added. Therefore, in the second study, 11 teachers were interviewed about their motives for giving high grades and about the supervision process. This led to an extra criterion concerning the student’s role during supervision and an additional proficiency level for assessing excellent performance. It is argued that an adequate course organisation is a precondition for the rubric’s effectiveness.