
Publications


Featured research published by Cristina Heffernan.


International Conference on User Modeling, Adaptation, and Personalization | 2007

The Effect of Model Granularity on Student Performance Prediction Using Bayesian Networks

Zachary A. Pardos; Neil T. Heffernan; Brigham Anderson; Cristina Heffernan

A standing question in the field of Intelligent Tutoring Systems, and in User Modeling in general, is what the appropriate level of model granularity is (how many skills to model) and how that granularity is derived. In this paper we explore models with varying levels of skill generality (1, 5, 39, and 106 skill models) and measure the accuracy of these models by predicting student performance within our tutoring system, called ASSISTment, as well as on a state standardized test. We employ Bayes nets to model user knowledge and to predict student responses. Our results show that the finer the granularity of the skill model, the better we can predict student performance for our online data. However, for the standardized test data we received, the 39-skill model performed best. We view this as support for fine-grained skill models, despite the finest-grained model not predicting the state test scores best.
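The Bayes-net prediction described above can be illustrated with a minimal single-skill sketch. This is not the paper's 1/5/39/106-skill models; the prior and the guess/slip parameters below are hypothetical, chosen only to show how a latent mastery estimate yields a predicted response probability and is updated by evidence.

```python
# Minimal sketch of predicting a student response from skill mastery,
# in the style of a one-node Bayesian skill model. The parameters are
# illustrative, not those of the ASSISTment models in the paper.

def p_correct(p_mastery, p_guess=0.2, p_slip=0.1):
    """P(correct answer), marginalized over the latent skill state."""
    return p_mastery * (1 - p_slip) + (1 - p_mastery) * p_guess

def p_mastery_given_correct(p_mastery, p_guess=0.2, p_slip=0.1):
    """Bayesian update of mastery after observing a correct answer."""
    numerator = p_mastery * (1 - p_slip)
    return numerator / p_correct(p_mastery, p_guess, p_slip)

prior = 0.5
print(round(p_correct(prior), 3))                # predicted chance of a correct response
print(round(p_mastery_given_correct(prior), 3))  # updated mastery after a correct answer
```

A full skill model repeats this inference across many skill nodes and uses each student's per-skill estimates to predict test items.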


Artificial Intelligence in Education | 2013

Estimating the Effect of Web-Based Homework

Kim M. Kelly; Neil T. Heffernan; Cristina Heffernan; Susan R. Goldman; James W. Pellegrino; Deena Soffer Goldstein

Traditional studies of intelligent tutoring systems have focused on their use in the classroom. Few have explored the advantage of using an ITS as a web-based homework (WBH) system that provides correctness-only feedback to students. A second underappreciated aspect of WBH is that teachers can use the data to review homework more efficiently. Universities across the world employ these WBH systems, but there are no known comparisons of this in K-12. In this work we randomly assigned 63 thirteen- and fourteen-year-olds to either a traditional homework condition (TH) involving practice without feedback or a WBH condition that added correctness feedback at the end of a problem and the ability to try again. All students used ASSISTments, an ITS, to do their homework, but we ablated all of the intelligent tutoring aspects of hints, feedback messages, and mastery learning as appropriate to the two practice conditions. We found that students learned reliably more in the web-based homework condition, with an effect size of 0.56. Additionally, teacher use of the homework data led to a more robust and systematic review of the homework. Future work will examine modifications to WBH to further improve learning from homework, as well as the role of WBH in formative assessment.
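The effect size reported above (0.56) is a standardized mean difference. A minimal sketch of computing Cohen's d with a pooled standard deviation follows; the gain scores are made up for illustration, not the study's data.

```python
# Sketch of the pooled-standard-deviation effect size (Cohen's d)
# commonly reported in studies like this one. Data are hypothetical.
import statistics

def cohens_d(treatment, control):
    """Standardized mean difference with pooled sample standard deviation."""
    n1, n2 = len(treatment), len(control)
    m1, m2 = statistics.mean(treatment), statistics.mean(control)
    v1, v2 = statistics.variance(treatment), statistics.variance(control)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

wbh = [0.70, 0.85, 0.78, 0.90, 0.66]  # hypothetical gain scores, WBH condition
th = [0.55, 0.60, 0.72, 0.58, 0.65]   # hypothetical gain scores, TH condition
print(round(cohens_d(wbh, th), 2))
```

By a common rule of thumb, d around 0.2 is small, 0.5 medium, and 0.8 large, so the reported 0.56 is a medium-sized effect.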


IEEE Transactions on Learning Technologies | 2009

Using Mixed-Effects Modeling to Analyze Different Grain-Sized Skill Models in an Intelligent Tutoring System

Mingyu Feng; Neil T. Heffernan; Cristina Heffernan; Murali Mani

Student modeling and cognitive diagnostic assessment are important issues that need to be addressed for the development and successful application of intelligent tutoring systems (ITSs). ITSs require the construction of complex models to represent the skills students are using and their knowledge states, and practitioners want cognitively diagnostic information at a finer-grained level. Traditionally, most assessments treat all questions on the test as sampling a single underlying knowledge component. Can we have our cake and eat it, too? That is, can we have a good overall prediction of a high-stakes test while at the same time being able to tell teachers meaningful information about fine-grained knowledge components? In this paper, we introduce an online intelligent tutoring system that has been widely used. We then present some encouraging results about a fine-grained skill model within the system that is able to predict state test scores. This model allows the system to track about 106 knowledge components for eighth-grade math. In total, 921 eighth-grade students were involved in the study. We show that our fine-grained model improved prediction compared to other, coarser-grained models and an IRT-based model. We conclude that this intelligent tutoring system can be a good predictor of performance.
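The granularity question above ("how many skills to model") can be illustrated with a toy sketch using made-up practice data: a coarse model pools all responses into one mastery estimate, while a fine-grained model keeps one estimate per skill. The paper's actual analysis used mixed-effects regression over far more data; simple averages are used here only to make the contrast concrete.

```python
# Coarse- vs. fine-grained skill models, illustrative data only.
responses = [  # (skill, correct) pairs from hypothetical practice
    ("fractions", 1), ("fractions", 1), ("fractions", 0),
    ("geometry", 0), ("geometry", 0), ("geometry", 1),
]

def coarse_mastery(responses):
    """One knowledge component: overall fraction correct."""
    return sum(c for _, c in responses) / len(responses)

def fine_mastery(responses):
    """One knowledge component per skill: fraction correct within each."""
    per_skill = {}
    for skill, c in responses:
        per_skill.setdefault(skill, []).append(c)
    return {s: sum(cs) / len(cs) for s, cs in per_skill.items()}

print(coarse_mastery(responses))  # 0.5
print(fine_mastery(responses))
```

The coarse model reports one number (0.5) and hides that this hypothetical student is stronger in fractions than geometry; the fine-grained model preserves that diagnostic distinction, at the cost of estimating each component from fewer observations.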


Artificial Intelligence in Education | 2011

Feedback during web-based homework: the role of hints

Ravi Singh; Muhammad Saleem; Prabodha R. Pradhan; Cristina Heffernan; Neil T. Heffernan; Leena M. Razzaq; Matthew D. Dailey; Cristine O'Connor; Courtney Mulcahy

Prior work has shown that computer-supported homework can lead to better results than traditional paper-and-pencil homework. This study of learning from homework compared immediate feedback with tutoring against a control condition in which students received feedback the next day in math class. Eighth-grade students who participated in both conditions gained significantly more (effect size 0.40) with computer-supported homework. This result has practical significance, as it suggests an effective improvement over the widely used paper-and-pencil homework. The main result is followed by a second set of studies to better understand it: is it due to the timeliness of the feedback or to quality tutoring?


Learning Analytics and Knowledge | 2015

Towards better affect detectors: effect of missing skills, class features and common wrong answers

Yutao Wang; Neil T. Heffernan; Cristina Heffernan

The well-studied Baker et al. affect detectors for boredom, frustration, confusion, and engaged concentration, built on the ASSISTments dataset, have been used to predict state test scores, college enrollment, and even whether a student majored in a STEM field. In this paper, we present three attempts to improve upon current affect detectors. The first analyzed the effect of missing skill tags in the dataset on the accuracy of the affect detectors; the results show a small improvement after correctly tagging the missing skill values. The second added four features related to student classes for feature selection. The third added two features describing students' common wrong answers for feature selection. Results showed that two of the four detectors were improved by adding the new features.


Artificial Intelligence in Education | 2015

Blocking Vs. Interleaving: Examining Single-Session Effects Within Middle School Math Homework

Korinn Ostrow; Neil T. Heffernan; Cristina Heffernan; Zoe Peterson

The benefit of interleaving cognitive content has gained attention in recent years, specifically in mathematics education. The present study serves as a conceptual replication of previous work, documenting the interleaving effect within a middle school sample through brief homework assignments completed within ASSISTments, an adaptive tutoring platform. The results of a randomized controlled trial are presented, examining a practice session featuring interleaved or blocked content spanning three skills: Complementary and Supplementary Angles, Surface Area of a Pyramid, and Compound Probability without Replacement. A second homework session served as a delayed posttest. Tutor log files are analyzed to track student performance and to establish a metric of global mathematics skill for each student. Findings suggest that interleaving is beneficial in the context of adaptive tutoring systems when considering learning gains and average hint usage at posttest. These observations were especially relevant for low-skill students.
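The blocked vs. interleaved orderings compared above can be sketched as two simple schedulers. Only the three skill names come from the study; the problem IDs and assignment lengths below are hypothetical.

```python
# Illustrative sketch of blocked vs. interleaved problem orderings for
# the three skills named in the study. Problem IDs are hypothetical.
from itertools import chain, zip_longest

skills = {
    "Complementary and Supplementary Angles": ["A1", "A2", "A3"],
    "Surface Area of a Pyramid": ["P1", "P2", "P3"],
    "Compound Probability without Replacement": ["C1", "C2", "C3"],
}

def blocked(skills):
    """All problems of one skill before moving to the next (AAABBBCCC)."""
    return list(chain.from_iterable(skills.values()))

def interleaved(skills):
    """Alternate between skills on consecutive problems (ABCABC...)."""
    rounds = zip_longest(*skills.values())
    return [p for rnd in rounds for p in rnd if p is not None]

print(blocked(skills))      # ['A1', 'A2', 'A3', 'P1', 'P2', 'P3', 'C1', 'C2', 'C3']
print(interleaved(skills))  # ['A1', 'P1', 'C1', 'A2', 'P2', 'C2', 'A3', 'P3', 'C3']
```

Both conditions present the same nine problems; only the ordering differs, which is what lets a randomized trial isolate the sequencing effect.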


International Journal of STEM Education | 2018

ElectronixTutor: An Intelligent Tutoring System with Multiple Learning Resources for Electronics.

Arthur C. Graesser; Xiangen Hu; Benjamin D. Nye; Kurt VanLehn; Rohit Kumar; Cristina Heffernan; Neil T. Heffernan; Beverly Park Woolf; Andrew Olney; Vasile Rus; Frank Andrasik; Philip I. Pavlik; Zhiqiang Cai; Jon Wetzel; Brent Morgan; Andrew J. Hampton; Anne Lippert; Lijia Wang; Qinyu Cheng; Joseph E. Vinson; Craig Kelly; Cadarrius McGlown; Charvi A. Majmudar; Bashir I. Morshed; Whitney O. Baer

Background: The Office of Naval Research (ONR) organized a STEM Challenge initiative to explore how intelligent tutoring systems (ITSs) can be developed in a reasonable amount of time to help students learn STEM topics. This competitive initiative sponsored four teams that separately developed systems covering topics in mathematics, electronics, and dynamical systems. After the teams shared their progress at the conclusion of an 18-month period, the ONR decided to fund a joint applied project in the Navy that integrated those systems on the subject matter of electronic circuits. The University of Memphis took the lead in integrating these systems into an intelligent tutoring system called ElectronixTutor. This article describes the architecture of ElectronixTutor, the learning resources that feed into it, and the empirical findings that support the effectiveness of its constituent ITS learning resources.

Results: A fully integrated ElectronixTutor was developed that included several intelligent learning resources (AutoTutor, Dragoon, LearnForm, ASSISTments, BEETLE-II) as well as texts and videos. The architecture includes a student model that has (a) a common set of knowledge components on electronic circuits to which individual learning resources contribute and (b) a record of student performance on the knowledge components, as well as a set of cognitive and non-cognitive attributes. A recommender system uses the student model to guide the student on a small set of sensible next steps in their training. The individual components of ElectronixTutor have shown learning gains in previous decades of research.

Conclusions: The ElectronixTutor system successfully combines multiple empirically based components into one system to teach a STEM topic (electronics) to students. A prototype of this intelligent tutoring system has been developed and is currently being tested. ElectronixTutor is unique in assembling a group of well-tested intelligent tutoring systems into a single integrated learning environment.
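The recommender described above chooses next steps from a student model shared across learning resources. The following is a hedged sketch of that idea, not the actual ElectronixTutor code; the knowledge-component names and mastery scores are hypothetical.

```python
# Sketch of a recommender over a shared student model: each knowledge
# component records performance, and the system suggests the weakest
# components as sensible next steps. Names and numbers are made up.

student_model = {
    "ohms_law": {"mastery": 0.9, "attempts": 12},
    "series_circuits": {"mastery": 0.4, "attempts": 5},
    "rc_filters": {"mastery": 0.2, "attempts": 2},
}

def recommend(student_model, k=2):
    """Suggest the k knowledge components with the lowest mastery."""
    ranked = sorted(student_model, key=lambda c: student_model[c]["mastery"])
    return ranked[:k]

print(recommend(student_model))  # ['rc_filters', 'series_circuits']
```

In the integrated system, each resource (AutoTutor, Dragoon, ASSISTments, etc.) would both read from and write back to this shared record, which is what makes a common set of knowledge components the architectural linchpin.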


European Conference on Pattern Languages of Programs | 2017

Feedback Design Patterns for Math Online Learning Systems

Paul Salvador Inventado; Peter Scupelli; Cristina Heffernan; Neil T. Heffernan

Increasingly, computer-based learning systems are used by educators to facilitate learning. Evaluations of several math learning systems show that they result in significant student learning improvements. Feedback provision is one of the key features of math learning systems that contribute to their success. We have recently been uncovering feedback design patterns as part of a larger pattern language for math problems and learning support in online learning systems. In this paper, we present three feedback design patterns developed by applying a data-driven design pattern methodology to a large educational dataset collected from actual student use of a math online learning system. These design patterns can help teachers, learning designers, and other stakeholders construct effective feedback for interactive learning activities that facilitate student learning.


Artificial Intelligence in Education | 2014

The ASSISTments Ecosystem: Building a Platform that Brings Scientists and Teachers Together for Minimally Invasive Research on Human Learning and Teaching

Neil T. Heffernan; Cristina Heffernan


Archive | 2006

Using Fine-Grained Skill Models to Fit Student Performance with Bayesian Networks

Zachary A. Pardos; Neil T. Heffernan; Brigham Anderson; Cristina Heffernan

Collaboration


Dive into Cristina Heffernan's collaborations.

Top Co-Authors

Neil T. Heffernan, Worcester Polytechnic Institute
Ryan S. Baker, University of Pennsylvania
Brigham Anderson, Worcester Polytechnic Institute
Leena M. Razzaq, Worcester Polytechnic Institute
Matthew D. Dailey, Worcester Polytechnic Institute
Mingyu Feng, Worcester Polytechnic Institute
Muhammad Saleem, Worcester Polytechnic Institute
Prabodha R. Pradhan, Worcester Polytechnic Institute
Ravi Singh, Worcester Polytechnic Institute