Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Neil T. Heffernan is active.

Publication


Featured research published by Neil T. Heffernan.


intelligent tutoring systems | 2006

Detection and analysis of off-task gaming behavior in intelligent tutoring systems

Jason A. Walonoski; Neil T. Heffernan

A major issue in Intelligent Tutoring Systems is off-task student behavior, especially performance-based gaming, where students systematically exploit tutor behavior in order to advance through a curriculum quickly and easily, with as little active thought directed at the educational content as possible. The goal of this research was to explore the phenomenon of off-task gaming behavior within the Assistments system. Machine-learned gaming-detection models were developed to investigate underlying factors related to gaming, and an analysis of gaming within the Assistments system was conducted to compare against the findings of prior studies.
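The detection idea might be sketched as a simple rule-based flagger. The paper's models were machine-learned; the event format, thresholds, and function below are illustrative assumptions, not the published detector:

```python
def is_gaming(events, fast_ms=2000, run_length=3):
    """Flag a student's event sequence as likely gaming if it contains a run
    of rapid hint requests or answer attempts (too fast for real thought).
    `events` is a list of (action, latency_ms) pairs; thresholds are
    hypothetical, not the paper's learned parameters."""
    quick = 0
    for action, latency in events:
        if action in ("hint", "attempt") and latency < fast_ms:
            quick += 1
            if quick >= run_length:
                return True
        else:
            quick = 0  # a slow or other action breaks the run
    return False
```

A machine-learned detector would replace the fixed thresholds with parameters fit to labeled observations of gaming behavior.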


User Modeling and User-adapted Interaction | 2009

Addressing the assessment challenge with an online system that tutors as it assesses

Mingyu Feng; Neil T. Heffernan; Kenneth R. Koedinger

Secondary teachers across the United States are being asked to use formative assessment data (Black and Wiliam 1998a,b; Roediger and Karpicke 2006) to inform their classroom instruction. At the same time, critics of the US government's No Child Left Behind legislation are calling the bill "No Child Left Untested". Among other things, critics point out that every hour spent assessing students is an hour lost from instruction. But does it have to be? What if we better integrated assessment into classroom instruction and allowed students to learn during the test? We developed an approach that provides immediate tutoring on practice assessment items that students cannot solve on their own. Our hypothesis is that we can achieve more accurate assessment by not only using data on whether students get test items right or wrong, but also by using data on the effort required for students to solve a test item with instructional assistance. We have integrated assistance and assessment in the ASSISTment system. The system helps teachers make better use of their time by offering instruction to students while providing a more detailed evaluation of student abilities to the teachers, which is impossible under current approaches. Our approach for assessing student math proficiency is to use data that our system collects through its interactions with students to estimate their performance on an end-of-year high-stakes state test. Our results show that we can do a reliably better job of predicting student end-of-year exam scores by leveraging the interaction data, and the model based only on the interaction information makes better predictions than the traditional assessment model that uses only information about correctness on the test items.
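The core idea, predicting an end-of-year score from assistance metrics rather than correctness alone, might be sketched as a linear model. The feature names, weights, and function here are illustrative assumptions, not the paper's fitted model:

```python
def predict_exam_score(pct_correct, avg_hints, avg_attempts,
                       weights=(60.0, -8.0, -5.0), bias=40.0):
    """Hypothetical linear predictor of an end-of-year exam score.
    Needing more hints or attempts signals lower proficiency even when
    the student eventually answers correctly, so those features carry
    negative weights. All coefficients are made up for illustration."""
    w_correct, w_hints, w_attempts = weights
    return (bias
            + w_correct * pct_correct
            + w_hints * avg_hints
            + w_attempts * avg_attempts)
```

In practice such a model would be fit by regression on logged tutor interactions paired with actual state-test scores.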


international conference on user modeling adaptation and personalization | 2011

KT-IDEM: introducing item difficulty to the knowledge tracing model

Zachary A. Pardos; Neil T. Heffernan

Many models in computer education and assessment take difficulty into account. However, despite the positive results of models that take difficulty into account, knowledge tracing is still used in its basic form due to its skill-level diagnostic abilities that are very useful to teachers. This leads to the research question we address in this work: Can KT be effectively extended to capture item difficulty and improve prediction accuracy? There have been a variety of extensions to KT in recent years. One such extension was Baker's contextual guess and slip model. While this model has shown positive gains over KT in internal validation testing, it has not performed well relative to KT on unseen in-tutor data or post-test data; however, it has proven a valuable model to use alongside other models. The contextual guess and slip model increases the complexity of KT by adding regression steps and feature generation. The added complexity of feature generation across datasets may have hindered the performance of this model. Therefore, one of the aims of our work here is to make the most minimal of modifications to the KT model in order to add item difficulty, keeping the modification limited to changing the topology of the model. We analyze datasets from two intelligent tutoring systems with KT and a model we have called KT-IDEM (Item Difficulty Effect Model) and show that substantial performance gains can be achieved with this minor modification that incorporates item difficulty.
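The distinction between standard KT and an item-difficulty variant can be sketched with the usual Bayesian Knowledge Tracing update. Standard KT uses one guess/slip pair per skill; a KT-IDEM-style variant looks them up per item. The parameter values and the `item_params` table below are illustrative assumptions, and the real KT-IDEM change is made in the Bayesian network topology rather than in procedural code:

```python
def kt_update(p_know, correct, guess, slip, learn):
    """One Bayesian Knowledge Tracing step: condition P(known) on the
    observed response, then apply the learning transition."""
    if correct:
        evidence = p_know * (1 - slip) + (1 - p_know) * guess
        posterior = p_know * (1 - slip) / evidence
    else:
        evidence = p_know * slip + (1 - p_know) * (1 - guess)
        posterior = p_know * slip / evidence
    return posterior + (1 - posterior) * learn

# Hypothetical per-item guess/slip parameters: (guess, slip).
item_params = {"item_easy": (0.35, 0.05), "item_hard": (0.10, 0.20)}

def kt_idem_update(p_know, correct, item, learn=0.1):
    """KT-IDEM-style step: same update, but guess/slip depend on the item."""
    guess, slip = item_params[item]
    return kt_update(p_know, correct, guess, slip, learn)
```

A correct answer on an easy item (high guess probability) moves the knowledge estimate less than the same answer on a hard item, which is exactly the per-item effect the model captures.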


international conference on user modeling adaptation and personalization | 2010

Modeling individualization in a bayesian networks implementation of knowledge tracing

Zachary A. Pardos; Neil T. Heffernan

The field of intelligent tutoring systems has been using the well-known knowledge tracing model, popularized by Corbett and Anderson (1995), to track student knowledge for over a decade. Surprisingly, models currently in use allow for neither individual learning rates nor individualized estimates of student initial knowledge. Corbett and Anderson, in their original articles, were interested in trying to add individualization to their model, which they accomplished but with mixed results. Since their original work, the field has not made significant progress towards individualization of knowledge tracing models in fitting data. In this work, we introduce an elegant way of formulating the individualization problem entirely within a Bayesian networks framework that fits individualized as well as skill-specific parameters simultaneously, in a single step. With this new individualization technique we are able to show a reliable improvement in prediction of real-world data by individualizing the initial knowledge parameter. We explore three different strategies for setting the initial individualized knowledge parameters and report that the best strategy is one in which information from multiple skills is used to inform each student's prior. Using this strategy we achieved lower prediction error in 33 of the 42 problem sets evaluated. The implication of this work is the ability to enhance existing intelligent tutoring systems to more accurately estimate when a student has reached mastery of a skill. Adaptation of instruction based on individualized knowledge and learning speed is discussed, as well as open research questions facing those who wish to exploit student and skill information in their user models.
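The best-performing strategy reported, using information from multiple skills to inform each student's prior, might look like the sketch below. The data layout and helper are hypothetical; the paper fits these priors jointly inside a Bayesian network rather than computing them separately:

```python
from statistics import mean

# Hypothetical per-student first-response correctness (1/0) by skill.
student_history = {
    "s1": {"fractions": [1, 1, 0], "decimals": [1, 1, 1]},
    "s2": {"fractions": [0, 0, 1], "decimals": [0, 1, 0]},
}

def individualized_prior(student, skill, default=0.5):
    """Set a student's initial-knowledge prior for `skill` from their
    observed performance on *other* skills, falling back to a generic
    default when no such evidence exists."""
    other = [r for s, rs in student_history[student].items()
             if s != skill for r in rs]
    return mean(other) if other else default
```

A student who performs well across other skills starts each new skill with a higher initial-knowledge estimate, instead of the one-prior-fits-all assumption of standard knowledge tracing.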


Journal of research on technology in education | 2009

A Comparison of Traditional Homework to Computer-Supported Homework

Michael Mendicino; Leena M. Razzaq; Neil T. Heffernan

This study compared learning for fifth grade students in two math homework conditions. The paper-and-pencil condition represented traditional homework, with review of problems in class the following day. The Web-based homework condition provided immediate feedback in the form of hints on demand and step-by-step scaffolding. We analyzed the results for students who completed both the paper-and-pencil and the Web-based conditions. In this group of 28 students, students learned significantly more when given computer feedback than when doing traditional paper-and-pencil homework, with an effect size of .61. The implications of this study are that, given the large effect size, it may be worth the cost and effort to give Web-based homework when students have access to the needed equipment, such as in schools that have implemented one-to-one computing programs.


Sigkdd Explorations | 2012

The sum is greater than the parts: ensembling models of student knowledge in educational software

Zachary A. Pardos; Sujith M. Gowda; Ryan S. Baker; Neil T. Heffernan

Many competing models have been proposed in the past decade for predicting student knowledge within educational software. Recent research attempted to combine these models in an effort to improve performance but has yielded inconsistent results. While work on the 2010 KDD Cup data set showed the benefits of ensemble methods, work in the Genetics Tutor failed to show similar benefits. We hypothesize that the key factor has been data set size. We explore the potential for improving student performance prediction with ensemble methods in a data set drawn from a different tutoring system, the ASSISTments Platform, which contains 15 times the number of responses of the Genetics Tutor data set. We evaluated the predictive performance of eight student models and eight methods of ensembling predictions. Within this data set, ensemble approaches were more effective than any single method, with the best ensemble approach producing predictions of student performance 10% better than the best individual student knowledge model.
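The simplest form of ensembling, uniformly averaging the per-response probability predictions of several student models, can be sketched as below. This is an illustrative baseline, not necessarily the best-performing of the eight methods the paper evaluated:

```python
def ensemble_mean(model_predictions):
    """Average the predicted P(correct) of several models for each response.
    `model_predictions` is a list of equal-length lists, one per model."""
    n_models = len(model_predictions)
    n_responses = len(model_predictions[0])
    return [sum(model[i] for model in model_predictions) / n_models
            for i in range(n_responses)]
```

More sophisticated ensembles (e.g. weighted or stacked combinations fit on held-out data) follow the same shape: combine the individual models' predictions into one, typically evaluated by prediction error against actual student responses.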


intelligent tutoring systems | 2006

Prevention of off-task gaming behavior in intelligent tutoring systems

Jason A. Walonoski; Neil T. Heffernan

A major issue in Intelligent Tutoring Systems is off-task student behavior, especially performance-based gaming, where students systematically exploit tutor behavior in order to advance through a curriculum quickly and easily, with as little active thought directed at the educational content as possible. This research developed both active interventions to combat gaming and passive interventions to prevent gaming. Our passive graphical intervention has been well received by teachers, and our experimental results suggest that using a combination of intervention types is effective at reducing off-task gaming behavior.


human factors in computing systems | 1998

Intelligent tutoring systems have forgotten the tutor: adding a cognitive model of human tutors

Neil T. Heffernan

I propose that a more effective intelligent tutoring system (ITS) for the domain of algebra symbolization can be made by building a cognitive model of human tutors and incorporating that model into an ITS. Specifically, I will collect protocols of humans engaged in tutoring and use these to build a model of Socratic dialogue for this domain. I will then test whether the ITS is more effective with such dialogue capabilities.


artificial intelligence in education | 2013

Extending Knowledge Tracing to allow Partial Credit: Using Continuous versus Binary Nodes

Yutao Wang; Neil T. Heffernan

Both Knowledge Tracing and Performance Factors Analysis are examples of student modeling frameworks commonly used in AIED systems (i.e., Intelligent Tutoring Systems). Both of them use student correctness as a binary input, but student performance on a question might better be represented with a continuous value representing a type of partial credit. Intuitively, a student who has to make more attempts, or has to ask for more hints, deserves a score closer to zero, while a student who asks for no hints and needs only a second attempt on a question should get a score close to one. In this work, we present a simple change to the Knowledge Tracing model and a simple (non-optimized) method for assigning partial credit. We report results from a real-data experiment in which we compared the original Knowledge Tracing (OKT) model with this new Knowledge Tracing model that uses partial credit as input (KTPC). The new model outperforms the traditional model reliably. The practical implication of this work is that this new technique can be widely and easily used, as it is a small change from the traditional way of fitting KT models.
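A simple (non-optimized) partial-credit assignment in the spirit of the abstract might deduct from full credit for each extra attempt and each hint. The penalty values and function below are assumptions for illustration, not the paper's scheme:

```python
def partial_credit(attempts, hints, attempt_penalty=0.1, hint_penalty=0.2):
    """Map a question outcome to a continuous score in [0, 1].
    First attempt with no hints earns full credit; each extra attempt and
    each hint deducts a fixed (hypothetical) penalty, clamped at zero."""
    score = 1.0 - attempt_penalty * max(attempts - 1, 0) - hint_penalty * hints
    return max(0.0, min(1.0, score))
```

This continuous score then replaces the binary correct/incorrect observation when fitting the KT model, which is the small topological change the KTPC model makes.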


intelligent tutoring systems | 2004

Applying Machine Learning Techniques to Rule Generation in Intelligent Tutoring Systems

Matthew P. Jarvis; Goss Nuzzo-Jones; Neil T. Heffernan

The purpose of this research was to apply machine learning techniques to automate rule generation in the construction of Intelligent Tutoring Systems. By using a pair of somewhat intelligent iterative-deepening, depth-first searches, we were able to generate production rules from a set of marked examples and domain background knowledge. Such production rules required independent searches for both the “if” and “then” portion of the rule. This automated rule generation allows generalized rules with a small number of sub-operations to be generated in a reasonable amount of time, and provides non-programmer domain experts with a tool for developing Intelligent Tutoring Systems.

Collaboration


Dive into Neil T. Heffernan's collaborations.

Top Co-Authors

Ryan S. Baker

University of Pennsylvania


Mingyu Feng

Worcester Polytechnic Institute


Cristina Heffernan

Worcester Polytechnic Institute


Joseph E. Beck

Worcester Polytechnic Institute


Leena M. Razzaq

Worcester Polytechnic Institute


Korinn Ostrow

Worcester Polytechnic Institute


Yutao Wang

Worcester Polytechnic Institute


Seth Adjei

Worcester Polytechnic Institute
