Yanjin Long
Carnegie Mellon University
Publications
Featured research published by Yanjin Long.
Artificial Intelligence in Education | 2013
Yanjin Long; Vincent Aleven
Self-assessment and study choice are two important metacognitive processes involved in Self-Regulated Learning. Yet little empirical work has been conducted in ITSs to investigate how best to support these two processes and improve students’ learning outcomes. The present work redesigned an Open Learner Model (OLM) with three features aimed at supporting self-assessment: self-assessment prompts, delayed updating of the skill bars, and progress information at the problem-type level. We also added a problem selection feature. A 2x2 experiment with 62 7th graders using variations of an ITS for linear equation solving found that students who had access to the OLM performed significantly better on the post-test. To the best of our knowledge, this is the first experimental study to show that an OLM enhances students’ learning outcomes with an ITS. It also helps establish that self-assessment has a key influence on student learning of problem-solving tasks.
Intelligent Tutoring Systems | 2014
Yanjin Long; Vincent Aleven
Integrating gamification features into ITSs has become a popular theme in ITS research. This work focuses on gamification of shared student/system control over problem selection in a linear equation tutor, where the system adaptively selects the problem type while the students select the individual problems. In a 2x2+1+1 classroom experiment with 267 middle school students, we studied the effect, on learning and enjoyment, of two ways of gamifying shared problem selection: performance-based rewards and the possibility of redoing completed problems, both common design patterns in games. We also included two ecological control conditions: a standard ITS and a popular algebra game, DragonBox 12+. A novel finding was that, among the students who had the freedom to re-practice problems, those who were not given rewards performed significantly better on the post-tests than their counterparts who received rewards. We also found that the students who used the tutors learned significantly more than students who used DragonBox 12+; in fact, the latter students did not improve significantly from pre- to post-test on solving linear equations. Thus, in this study the ITS was more effective than a commercial educational game, even one with great popular acclaim. The results suggest that encouraging re-practice of previously solved problems through rewards is detrimental to student learning, compared to solving new problems. The work also yields design recommendations for incorporating gamification features into ITSs.
Artificial Intelligence in Education | 2013
Yanjin Long; Vincent Aleven
According to Self-Regulated Learning theories, self-assessment by students can facilitate in-depth reflection and help direct effective self-regulated learning. Yet not much work has investigated the relation between students’ self-assessment and learning outcomes in Intelligent Tutoring Systems (ITSs). This paper investigates this relation in classrooms using the Geometry Cognitive Tutor. We designed a paper-based skill diary that helps students take advantage of the tutor’s Open Learner Model to self-assess their problem-solving skills periodically, and investigated whether it can support students’ self-assessment and learning. In an experiment with 122 high school students, students in the experimental group were prompted periodically to fill out the skill diaries, whereas the control group answered general questions that did not involve active self-assessment. The experimental group performed better on the post-test, and the skill diaries helped lower-performing students significantly improve their learning outcomes and self-assessment accuracy. This work is among the first empirical studies to establish the beneficial role of self-assessment in students’ learning of problem-solving tasks in ITSs.
Artificial Intelligence in Education | 2015
Yanjin Long; Zachary Aman; Vincent Aleven
Making effective problem selection decisions is an important yet challenging self-regulated learning (SRL) skill. Although efforts have been made to scaffold students’ problem selection in intelligent tutoring systems (ITSs), little work has tried to support students’ learning of a transferable problem selection skill that can be applied when the scaffolding is not in effect. The current work uses a user-centered design approach to extend an ITS for equation solving, Lynnette, so that the new designs may motivate and help students learn to apply a general, transferable rule for effective problem selection, namely, to select problem types that are not fully mastered (the “Mastery Rule”). We conducted user research through classroom experimentation, interviews, and storyboards. We found that the presence of an Open Learner Model significantly improves students’ problem selection decisions, which had not been empirically established by prior work; we also found that lack of motivation, especially lack of a mastery-approach orientation, may make the Mastery Rule difficult to apply. Based on our user research, we designed prototypes of tutor features that aim to foster a mastery-approach orientation as well as transfer of the learned Mastery Rule when the scaffolding is faded. The work contributes to research on supporting SRL in ITSs from a motivational design perspective, and lays the foundation for future controlled experiments evaluating transfer of the problem selection skill to new tutor units where there is no scaffolding.
ACM Transactions on Computer-Human Interaction | 2017
Yanjin Long; Vincent Aleven
Educational games and intelligent tutoring systems (ITSs) both support learning by doing, although often in different ways. The current classroom experiment compared a popular commercial game for equation solving, DragonBox, and a research-based ITS, Lynnette, with respect to desirable educational outcomes. The 190 participating 7th and 8th grade students were randomly assigned to work with either system for 5 class periods. We measured out-of-system transfer of learning with a paper-and-pencil pre- and post-test of students’ equation-solving skill. We measured enjoyment and accuracy of self-assessment with a questionnaire. The students who used DragonBox solved many more problems and enjoyed the experience more, but the students who used Lynnette performed significantly better on the post-test. Our analysis of the design features of both systems suggests possible explanations and spurs ideas for how the strengths of the two systems might be combined. The study shows that intuitions about what works, educationally, can be fallible. Therefore, there is no substitute for rigorous empirical evaluation of educational technologies.
Artificial Intelligence in Education | 2011
Yanjin Long; Vincent Aleven
European Conference on Technology Enhanced Learning | 2013
Yanjin Long; Vincent Aleven
Artificial Intelligence in Education | 2011
Eliane Stampfer; Yanjin Long; Vincent Aleven; Kenneth R. Koedinger
Learning Analytics and Knowledge | 2018
Yanjin Long; Kenneth Holstein; Vincent Aleven