Philipp Sonnleitner
University of Luxembourg
Publications
Featured research published by Philipp Sonnleitner.
Journal of Intelligence | 2017
Samuel Greiff; Matthias Stadler; Philipp Sonnleitner; Christian C. P. Wolff; Romain Martin
In this rejoinder, we respond to two commentaries on the study by Greiff, S.; Stadler, M.; Sonnleitner, P.; Wolff, C.; Martin, R. Sometimes less is more: Comparing the validity of complex problem solving measures. Intelligence 2015, 50, 100–113. The study was the first to address the important comparison between a classical measure of complex problem solving (CPS) and the more recent multiple complex systems (MCS) approach regarding their validity. In it, we investigated the relations between one classical microworld, representing the originally developed method (here, the Tailorshop), and three more recently developed MCS tests (here, MicroDYN, the Genetics Lab, and MicroFIN). We found that the MCS tests showed higher convergent validity with each other than with the Tailorshop even after reasoning was controlled for, thus empirically distinguishing between the two approaches. The commentaries by Kretzschmar and by Funke, Fischer, and Holt expressed several concerns about how our study was conducted, how our data were analyzed, and how our results were interpreted. While we acknowledge and agree with some of the more general points made in these commentaries, we respectfully disagree with others or consider them at least partially at odds with the existing literature and the currently available empirical evidence.
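The central analytic step, comparing test intercorrelations after reasoning is controlled for, can be illustrated with a partial correlation. The sketch below is a simplified stand-in using synthetic data and hypothetical score names (micro_dyn, genetics_lab); it is not the study's actual analysis.

```python
# Simplified illustration of "controlling for reasoning": the partial
# correlation of two test scores after the covariate's linear effect
# is regressed out. Data are synthetic; this is not the study's analysis.
import numpy as np

def partial_corr(x: np.ndarray, y: np.ndarray, z: np.ndarray) -> float:
    """Correlation of x and y after removing the linear effect of z."""
    def residuals(v: np.ndarray) -> np.ndarray:
        Z = np.column_stack([np.ones_like(z), z])      # intercept + covariate
        beta, *_ = np.linalg.lstsq(Z, v, rcond=None)   # least-squares fit
        return v - Z @ beta                            # what z cannot explain
    return float(np.corrcoef(residuals(x), residuals(y))[0, 1])

rng = np.random.default_rng(0)
reasoning = rng.normal(size=200)                       # covariate
micro_dyn = 0.6 * reasoning + rng.normal(size=200)     # hypothetical MCS score
genetics_lab = 0.6 * reasoning + rng.normal(size=200)  # hypothetical MCS score
print(partial_corr(micro_dyn, genetics_lab, reasoning))
```

A substantial residual correlation between two MCS scores, paired with a lower one between each MCS score and the Tailorshop, is the pattern the abstract above describes.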
International Conference on Informatics Engineering and Information Science | 2011
Cyril Hazotte; Hélène Mayer; Younes Djaghloul; Thibaud Latour; Philipp Sonnleitner; Martin Brunner; Ulrich Keller; Eric François; Romain Martin
The purpose of this paper is to present the key characteristics of the intelligence measurement tool called "The Genetics Lab". This web-based tool assesses general intelligence by means of complex simulations and was developed and evaluated to measure the cognitive skills of students in Luxembourg. Beyond the tool itself, we propose a generic, clear architecture that can serve as groundwork for other assessment solutions. Previous tools have suffered from various technical weaknesses; the Genetics Lab makes a major contribution by providing a clear architecture and an efficient implementation that address these common issues. To this end, we explore in depth the main concerns of e-assessment, such as behavioral traces, instructions, scoring, and localized multilingual content. The proposed tool is reusable and highly adaptive. It also makes collecting the data from a user's test session a seamless, one-step process.
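As a rough sketch of the trace logging and one-step data collection described above, consider the following minimal example. All class, field, and identifier names are hypothetical and do not reflect the actual Genetics Lab implementation.

```python
# Illustrative sketch only: names are hypothetical and do not reflect
# the actual Genetics Lab codebase.
import json
import time
from dataclasses import dataclass, field, asdict

@dataclass
class TraceEvent:
    """One logged interaction, e.g. a click or a slider change."""
    item_id: str        # which simulation item the examinee worked on
    action: str         # e.g. "set_variable", "run_simulation"
    payload: dict       # action-specific details (variable name, value, ...)
    timestamp: float = field(default_factory=time.time)

class TraceRecorder:
    """Collects events during a session and exports them in one step."""
    def __init__(self, session_id: str, locale: str):
        self.session_id = session_id
        self.locale = locale          # supports localized/multilingual content
        self.events: list[TraceEvent] = []

    def log(self, item_id: str, action: str, payload: dict) -> None:
        self.events.append(TraceEvent(item_id, action, payload))

    def export(self) -> str:
        """One-step data collection: serialize the full session to JSON."""
        return json.dumps({
            "session": self.session_id,
            "locale": self.locale,
            "events": [asdict(e) for e in self.events],
        })

recorder = TraceRecorder("S001", "de-LU")
recorder.log("item_3", "set_variable", {"gene": "A", "value": 2})
print(recorder.export())
```

Keeping the locale alongside the trace stream is one simple way to reconcile multilingual content with a single export format.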
Environmental Education Research | 2017
Philipp Sonnleitner; Ariane König; Tea Sikharulidze
This paper gives an example of how computer-based problem-solving scenarios can be embedded in a course on sustainability, in order to illustrate the highly versatile way in which such scenarios can be used to structure and evaluate learning about complexity at the individual level, as well as learning in diverse groups. After defining the criteria a computer-based problem-solving scenario must meet in order to be useful for training competencies associated with confronting complexity, the application of one specific scenario, the Genetics Lab, is empirically evaluated on the basis of three student cohorts. Results suggest that existing approaches to sustainability education can be substantially complemented by computer-based problem-solving scenarios, which offer genuine learning opportunities and deepen and personalize the comprehension of known phenomena in complex problem solving. The paper closes with lessons learned from the presented approach and offers advice and an outlook on future applications of such scenarios in sustainability education.
Educational Research and Evaluation | 2011
Klaus D. Kubinger; Christine Hohensinn; Sandra Hofer; Lale Khorramdel; Martina Frebort; Stefana Holocher-Ertl; Manuel Reif; Philipp Sonnleitner
In large-scale assessments, it is usually not the case that every item of the applicable item pool is administered to every examinee. Within item response theory (IRT), in particular the Rasch model (Rasch, 1960), this is not really a problem because item calibration works nevertheless; the different test booklets only need to be designed according to a connected incomplete block design. Yet, connectedness of such a design is best fulfilled severalfold, since the deletion of some items in the course of the item pool's IRT calibration may become necessary. The real challenge, however, is to meet constraints determined by numerous moderator variables, such as different response formats and several content topics. This is all the more challenging if several ability dimensions are under consideration, the testing duration is severely limited, or individual scoring and feedback are an issue. In this article, we report how we dealt with the resulting problems. Our experience is based on the governmental project on the Austrian Educational Standards (Kubinger et al., 2007).
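The notion of a connected incomplete block design can be made concrete: a design is linked when every pair of items is joined by a chain of booklets that share items. The sketch below checks this with a union-find over items; the booklet layout is invented for illustration and is not the design used in the Austrian project.

```python
# Hedged sketch: checks whether a booklet design is "connected" in the
# linking sense (every pair of items joined by a chain of shared booklets).
# The example booklets are made up, not the Austrian Educational Standards.
from itertools import combinations

def is_connected(booklets: dict[str, set[str]]) -> bool:
    """Union-find over items; items in the same booklet get merged."""
    items = set().union(*booklets.values())
    parent = {i: i for i in items}

    def find(x: str) -> str:
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for block in booklets.values():
        for a, b in combinations(block, 2):
            parent[find(a)] = find(b)      # merge items sharing a booklet
    return len({find(i) for i in items}) == 1

design = {
    "booklet1": {"i1", "i2", "i3"},
    "booklet2": {"i3", "i4", "i5"},   # overlaps booklet1 via i3
    "booklet3": {"i5", "i6", "i1"},   # closes the chain back to booklet1
}
print(is_connected(design))  # True: all items form one linked cluster
```

Fulfilling connectedness "severalfold" can then be verified by deleting each item that might be dropped during calibration and confirming that is_connected still returns True for the reduced design.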
Intelligence | 2013
Philipp Sonnleitner; Ulrich Keller; Romain Martin; Martin Brunner
Intelligence | 2013
Samuel Greiff; Andreas Fischer; Sascha Wüstenberg; Philipp Sonnleitner; Martin Brunner; Romain Martin
Psychological Test and Assessment Modeling | 2012
Philipp Sonnleitner; Martin Brunner; Samuel Greiff; Joachim Funke; Ulrich Keller; Romain Martin; Cyril Hazotte; Hélène Mayer; Thibaud Latour
Intelligence | 2015
Samuel Greiff; Matthias Stadler; Philipp Sonnleitner; Christian C. P. Wolff; Romain Martin
Psychology Science Quarterly | 2008
Philipp Sonnleitner
Intelligence | 2013
Martin Brunner; Katarzyna Gogol; Philipp Sonnleitner; Ulrich Keller; Stefan Krauss; Franzis Preckel