Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Zachary A. Pardos is active.

Publication


Featured research published by Zachary A. Pardos.


learning analytics and knowledge | 2013

Affective states and state tests: investigating how affect throughout the school year predicts end of year learning outcomes

Zachary A. Pardos; Ryan S. Baker; Maria Ofelia Clarissa Z. San Pedro; Sujith M. Gowda; Supreeth M. Gowda

In this paper, we investigate the correspondence between student affect in a web-based tutoring platform throughout the school year and learning outcomes at the end of the year, on a high-stakes mathematics exam. The relationships between affect and learning outcomes have been previously studied, but not in a manner that is both longitudinal and finer-grained. Affect detectors are used to estimate student affective states based on post-hoc analysis of tutor log data. For every student action in the tutor, the detectors give us an estimated probability that the student is in a state of boredom, engaged concentration, confusion, or frustration, and estimates of the probability that they are exhibiting off-task or gaming behaviors. We ran the detectors on two years of log data from 8th grade students' use of the ASSISTments math tutoring system and collected corresponding end-of-year, high-stakes state math test scores for the 1,393 students in our cohort. By correlating these data sources, we find that boredom during problem solving is negatively correlated with performance, as expected; however, boredom is positively correlated with performance when exhibited during scaffolded tutoring. A similar pattern is unexpectedly seen for confusion. Engaged concentration and frustration are both associated with positive learning outcomes, which is surprising in the case of frustration.
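A minimal sketch of the kind of correlation analysis described above: the per-action detector probabilities are averaged per student and correlated with end-of-year test scores. The file names and column names are assumptions for illustration, not the actual ASSISTments log schema.

import pandas as pd
from scipy.stats import pearsonr

log = pd.read_csv("detector_estimates.csv")      # hypothetical: one row per student action
scores = pd.read_csv("state_test_scores.csv")    # hypothetical: one row per student

# Average each affect probability over all of a student's actions.
per_student = log.groupby("student_id")[
    ["boredom", "engaged_concentration", "confusion", "frustration"]
].mean().reset_index()

merged = per_student.merge(scores, on="student_id")

# Correlate each average affect estimate with the state test score.
for affect in ["boredom", "engaged_concentration", "confusion", "frustration"]:
    r, p = pearsonr(merged[affect], merged["state_test_score"])
    print(f"{affect}: r = {r:.3f}, p = {p:.3f}")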


international conference on user modeling adaptation and personalization | 2011

KT-IDEM: introducing item difficulty to the knowledge tracing model

Zachary A. Pardos; Neil T. Heffernan

Many models in computer education and assessment take difficulty into account. However, despite the positive results of models that take difficulty into account, knowledge tracing is still used in its basic form due to its skill-level diagnostic abilities that are very useful to teachers. This leads to the research question we address in this work: can KT be effectively extended to capture item difficulty and improve prediction accuracy? There have been a variety of extensions to KT in recent years. One such extension was Baker's contextual guess and slip model. While this model has shown positive gains over KT in internal validation testing, it has not performed well relative to KT on unseen in-tutor data or post-test data; however, it has proven a valuable model to use alongside other models. The contextual guess and slip model increases the complexity of KT by adding regression steps and feature generation. The added complexity of feature generation across datasets may have hindered the performance of this model. Therefore, one of the aims of our work here is to make the most minimal of modifications to the KT model in order to add item difficulty, keeping the modification limited to changing the topology of the model. We analyze datasets from two intelligent tutoring systems with KT and a model we have called KT-IDEM (Item Difficulty Effect Model) and show that substantial performance gains can be achieved with this minor modification that incorporates item difficulty.
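As a rough illustration of the topology change KT-IDEM makes, the sketch below runs standard knowledge tracing updates but indexes the guess and slip parameters by item rather than by skill. Parameter values and item identifiers are invented for the example, not fitted values from the paper.

def kt_idem_predict(responses, p_init, p_learn, guess, slip):
    """responses: list of (item_id, correct) pairs for one student on one skill.
    guess/slip: dicts mapping item_id -> probability (the KT-IDEM idea:
    one guess/slip pair per item instead of one per skill)."""
    p_know = p_init
    predictions = []
    for item, correct in responses:
        g, s = guess[item], slip[item]
        # Predicted probability of a correct response on this item.
        p_correct = p_know * (1 - s) + (1 - p_know) * g
        predictions.append(p_correct)
        # Bayesian update of the knowledge estimate given the observed response.
        if correct:
            p_know = p_know * (1 - s) / p_correct
        else:
            p_know = p_know * s / (1 - p_correct)
        # Transition: chance of learning after this practice opportunity.
        p_know = p_know + (1 - p_know) * p_learn
    return predictions

# Toy usage with two items of different difficulty (higher guess = easier item).
preds = kt_idem_predict(
    [("q1", 1), ("q2", 0), ("q1", 1)],
    p_init=0.4, p_learn=0.1,
    guess={"q1": 0.25, "q2": 0.10},
    slip={"q1": 0.05, "q2": 0.20},
)
print(preds)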


international conference on user modeling adaptation and personalization | 2010

Modeling individualization in a Bayesian networks implementation of knowledge tracing

Zachary A. Pardos; Neil T. Heffernan

The field of intelligent tutoring systems has been using the well-known knowledge tracing model, popularized by Corbett and Anderson (1995), to track student knowledge for over a decade. Surprisingly, models currently in use do not allow for individual learning rates nor individualized estimates of student initial knowledge. Corbett and Anderson, in their original articles, were interested in trying to add individualization to their model, which they accomplished but with mixed results. Since their original work, the field has not made significant progress towards individualization of knowledge tracing models in fitting data. In this work, we introduce an elegant way of formulating the individualization problem entirely within a Bayesian networks framework that fits individualized as well as skill-specific parameters simultaneously, in a single step. With this new individualization technique we are able to show a reliable improvement in prediction of real-world data by individualizing the initial knowledge parameter. We explore three different strategies for setting the initial individualized knowledge parameters and report that the best strategy is one in which information from multiple skills is used to inform each student's prior. Using this strategy we achieved lower prediction error in 33 of the 42 problem sets evaluated. The implication of this work is the ability to enhance existing intelligent tutoring systems to more accurately estimate when a student has reached mastery of a skill. Adaptation of instruction based on individualized knowledge and learning speed is discussed, as well as open research questions facing those that wish to exploit student and skill information in their user models.
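The individualized-prior idea can be illustrated with a small sketch: each student starts knowledge tracing from their own initial P(known), here set heuristically from performance on other skills, rather than from a single skill-wide prior. The heuristic and parameter values are assumptions for illustration, not the multi-skill Bayesian procedure fitted in the paper.

def individualized_prior(percent_correct_elsewhere, floor=0.05, ceiling=0.95):
    """Map a student's performance on other skills to an initial P(known)."""
    return min(max(percent_correct_elsewhere, floor), ceiling)

def kt_update(p_know, correct, p_learn=0.1, guess=0.2, slip=0.1):
    """One standard knowledge tracing update given an observed response."""
    p_correct = p_know * (1 - slip) + (1 - p_know) * guess
    if correct:
        p_know = p_know * (1 - slip) / p_correct
    else:
        p_know = p_know * slip / (1 - p_correct)
    return p_know + (1 - p_know) * p_learn

# Two students with different histories get different starting points,
# then the same sequence of updates.
for student, history_pct, responses in [("A", 0.85, [1, 1, 0]),
                                        ("B", 0.30, [0, 1, 1])]:
    p = individualized_prior(history_pct)
    for r in responses:
        p = kt_update(p, r)
    print(student, round(p, 3))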


international conference on user modeling, adaptation, and personalization | 2007

The Effect of Model Granularity on Student Performance Prediction Using Bayesian Networks

Zachary A. Pardos; Neil T. Heffernan; Brigham Anderson; Cristina Heffernan

A standing question in the field of intelligent tutoring systems, and user modeling in general, is what the appropriate level of model granularity is (how many skills to model) and how that granularity should be derived. In this paper we explore models with varying levels of skill generality (1, 5, 39, and 106 skill models) and measure the accuracy of these models by predicting student performance within our tutoring system, called ASSISTment, as well as performance on a state standardized test. We employ Bayes nets to model user knowledge and to predict student responses. Our results show that the finer the granularity of the skill model, the better we can predict student performance for our online data. However, for the standardized test data we received, it was the 39-skill model that performed best. We view this as support for fine-grained skill models, despite the finest-grained model not predicting the state test scores best.
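The sketch below is a toy stand-in for the granularity comparison: it predicts held-out responses from a student's percent correct within whatever skill an item maps to, under a coarse and a finer item-to-skill mapping, and compares mean absolute error. It is a deliberate simplification of the paper's Bayes-net models, with invented items and mappings.

import pandas as pd

train = pd.DataFrame({
    "student_id": [1, 1, 1, 2, 2, 2],
    "item_id":    ["a", "b", "c", "a", "b", "c"],
    "correct":    [1, 0, 1, 0, 0, 1],
})
test = pd.DataFrame({"student_id": [1, 2], "item_id": ["d", "d"], "correct": [1, 0]})

# Two hypothetical item-to-skill mappings: one coarse, one finer grained.
coarse = {"a": "math", "b": "math", "c": "math", "d": "math"}
fine = {"a": "fractions", "b": "geometry", "c": "fractions", "d": "fractions"}

def mae(mapping):
    tr = train.assign(skill=train["item_id"].map(mapping))
    te = test.assign(skill=test["item_id"].map(mapping))
    # Per-student, per-skill percent correct from the training responses.
    rates = tr.groupby(["student_id", "skill"])["correct"].mean().to_dict()
    fallback = tr["correct"].mean()
    preds = [rates.get((s, k), fallback)
             for s, k in zip(te["student_id"], te["skill"])]
    return (pd.Series(preds) - te["correct"]).abs().mean()

print("coarse-model MAE:", mae(coarse), "finer-model MAE:", mae(fine))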


international conference on user modeling adaptation and personalization | 2011

Ensembling predictions of student knowledge within intelligent tutoring systems

Ryan S. Baker; Zachary A. Pardos; Sujith M. Gowda; Bahador B. Nooraei; Neil T. Heffernan

Over the last decades, there has been a rich variety of approaches towards modeling student knowledge and skill within interactive learning environments. There have recently been several empirical comparisons as to which types of student models are better at predicting future performance, both within and outside of the interactive learning environment. However, these comparisons have produced contradictory results. Within this paper, we examine whether ensemble methods, which integrate multiple models, can produce prediction results comparable to or better than the best of nine student modeling frameworks taken individually. We ensemble model predictions within a Cognitive Tutor for Genetics, at the level of predicting knowledge action-by-action within the tutor. We evaluate the predictions in terms of future performance within the tutor and on a paper post-test. Within this data set, we do not find evidence that ensembles of models are significantly better. Ensembles of models perform comparably to or slightly better than the best individual models at predicting future performance within the tutor software. However, the ensembles of models perform marginally significantly worse than the best individual models at predicting post-test performance.
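One simple ensembling scheme consistent with the description above is to blend the action-by-action predictions of several student models, either by uniform averaging or with a regression trained on held-out data. The sketch below shows both; the numbers are invented and each prediction column stands in for one of the individual student models.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Each column is one student model's predicted P(correct) for the same actions.
train_preds = np.array([[0.7, 0.6, 0.8],
                        [0.2, 0.3, 0.4],
                        [0.9, 0.8, 0.7],
                        [0.4, 0.5, 0.3]])
train_actual = np.array([1, 0, 1, 0])   # observed correctness on those actions

# A regression blend learns how much to trust each model.
blender = LogisticRegression().fit(train_preds, train_actual)

test_preds = np.array([[0.6, 0.5, 0.7]])
print("uniform average:", test_preds.mean(axis=1))
print("regression blend:", blender.predict_proba(test_preds)[:, 1])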


artificial intelligence in education | 2011

Clustering students to generate an ensemble to improve standard test score predictions

Shubhendu Trivedi; Zachary A. Pardos; Neil T. Heffernan

In a typical assessment, students are not given feedback, as it is harder to predict student knowledge if it is changing during testing. Intelligent tutoring systems, which offer assistance while the student is working, provide a clear benefit of assisting students, but how well can they assess students? What is the trade-off in terms of assessment accuracy if we allow students to be assisted on an exam? In a prior study, we showed assessment quality with assistance to be equal to that without. In this work, we introduce a more sophisticated method by which we can ensemble together multiple models based upon clustering students. We show that, in fact, the assessment quality as determined by the assistance data is a better estimator of student knowledge. The implications of this study suggest that by using computer tutors for assessment, we can save much instructional time that is currently used just for assessment.
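A minimal sketch of the clustering-into-an-ensemble idea: group students by simple performance features, fit one predictor per cluster, and score a new student with their cluster's model. The features, the regressor, and the synthetic data are assumptions for illustration, not the paper's actual feature set or models.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
features = rng.random((60, 3))          # e.g. percent correct, hints used, time
test_scores = features @ np.array([0.6, -0.2, 0.1]) + rng.normal(0, 0.05, 60)

# Cluster students, then fit one regression model per cluster.
k = 3
clusters = KMeans(n_clusters=k, n_init=10, random_state=0).fit(features)
models = {}
for c in range(k):
    mask = clusters.labels_ == c
    models[c] = LinearRegression().fit(features[mask], test_scores[mask])

# Predict a new student's score with the model of their assigned cluster.
new_student = rng.random((1, 3))
c = clusters.predict(new_student)[0]
print("predicted score:", models[c].predict(new_student)[0])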


Wiley Interdisciplinary Reviews: Cognitive Science | 2015

Data mining and education.

Kenneth R. Koedinger; Sidney K. D'Mello; Elizabeth A. McLaughlin; Zachary A. Pardos; Carolyn Penstein Rosé

An emerging field of educational data mining (EDM) is building on and contributing to a wide variety of disciplines through analysis of data coming from various educational technologies. EDM researchers are addressing questions of cognition, metacognition, motivation, affect, language, social discourse, etc. using data from intelligent tutoring systems, massive open online courses, educational games and simulations, and discussion forums. The data include detailed action and timing logs of student interactions in user interfaces such as graded responses to questions or essays, steps in rich problem solving environments, games or simulations, discussion forum posts, or chat dialogs. They might also include external sensors such as eye tracking, facial expression, body movement, etc. We review how EDM has addressed the research questions that surround the psychology of learning with an emphasis on assessment, transfer of learning and model discovery, the role of affect, motivation and metacognition on learning, and analysis of language data and collaborative learning. For example, we discuss (1) how different statistical assessment methods were used in a data mining competition to improve prediction of student responses to intelligent tutor tasks, (2) how better cognitive models can be discovered from data and used to improve instruction, (3) how data-driven models of student affect can be used to focus discussion in a dialog-based tutoring system, and (4) how machine learning techniques applied to discussion data can be used to produce automated agents that support student learning as they collaborate in a chat room or a discussion board.


intelligent tutoring systems | 2012

Clustered knowledge tracing

Zachary A. Pardos; Shubhendu Trivedi; Neil T. Heffernan; Gábor N. Sárközy

By learning a more distributed representation of the input space, clustering can be a powerful source of information for boosting the performance of predictive models. While such semi-supervised methods based on clustering have been applied to increase the accuracy of predictions of external tests, they have not yet been applied to improve within-tutor prediction of student responses. We use a widely adopted model for student prediction called knowledge tracing as our predictor and demonstrate how clustering students can improve model accuracy. The intuition behind this application of clustering is that different groups of students can be better fit with separate models. High performing students, for example, might be better modeled with a higher knowledge tracing learning rate parameter than lower performing students. We use a bagging method that exploits clusterings at different values for K in order to capture a variety of different categorizations of students. The method then combines the predictions of each cluster in order to produce a more accurate result than without clustering.
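The bagging-over-K idea can be sketched as follows: generate one prediction per clustering granularity (with K = 1 corresponding to a single undivided model) and combine them by averaging. Here the per-cluster predictor is just the cluster's mean correctness, standing in for a per-cluster knowledge tracing model; the data are synthetic.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
student_features = rng.random((80, 4))
student_correct_rate = student_features.mean(axis=1)   # toy prediction target

def cluster_prediction(features, target, new_point, k):
    """Predict via the mean target of the new student's cluster at granularity k."""
    if k == 1:
        return target.mean()
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(features)
    label = km.predict(new_point.reshape(1, -1))[0]
    return target[km.labels_ == label].mean()

# Bag the predictions made at several clustering granularities.
new_student = rng.random(4)
per_k = [cluster_prediction(student_features, student_correct_rate,
                            new_student, k) for k in (1, 2, 4, 8)]
print("per-K predictions:", per_k, "ensemble:", np.mean(per_k))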


artificial intelligence in education | 2011

Learning what works in ITS from non-traditional randomized controlled trial data

Zachary A. Pardos; Matthew D. Dailey; Neil T. Heffernan

The well established, gold standard approach to finding out what works in education research is to run a randomized controlled trial (RCT) using a standard pre-test and post-test design. RCTs have been used in the intelligent tutoring community for decades to determine which questions and tutorial feedback work best. Practically speaking, however, ITS creators need to make decisions on what content to deploy without the luxury of running an RCT. Additionally, most log data produced by an ITS is not in a form that can be evaluated for learning effectiveness with traditional methods. As a result, there is much data produced by tutoring systems that we as education researchers would like to be learning from but are not. In prior work we introduced one approach to this problem: a Bayesian knowledge tracing derived method that could analyze the log data of a tutoring system to determine which items were most effective for learning among a set of items of the same skill. The method was validated by way of simulations. In the current work we further evaluate this method and introduce a second, learning gain, analysis method for comparison. These methods were applied to 11 experiment datasets that investigated the effectiveness of various forms of tutorial help in a web-based math tutoring system. We found that the tutorial help chosen by the Bayesian method as having the highest rate of learning agreed with the learning gain analysis in 10 out of 11 of the experiments. An additional simulation study is presented comparing the statistical power of each method given different sample sizes. The practical impact of this work is an abundance of knowledge about what works that can now be learned from the thousands of experimental designs intrinsic in datasets of tutoring systems that assign items or feedback conditions in an individually-randomized order.
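The simpler of the two analyses, the learning-gain style comparison, can be sketched as below: with item order randomized, compare performance on the following item for students who just received one tutorial condition versus the other. This is a rough illustration only; the file and column names are hypothetical, and the paper's Bayesian knowledge tracing method is more involved.

import pandas as pd
from scipy.stats import ttest_ind

# Hypothetical log: student_id, position, condition (A or B), correct (0/1).
log = pd.read_csv("randomized_item_log.csv")
log = log.sort_values(["student_id", "position"])

# Performance on the *next* item, grouped by which condition preceded it.
log["next_correct"] = log.groupby("student_id")["correct"].shift(-1)
after_a = log.loc[log["condition"] == "A", "next_correct"].dropna()
after_b = log.loc[log["condition"] == "B", "next_correct"].dropna()

t, p = ttest_ind(after_a, after_b)
print(f"after A: {after_a.mean():.3f}, after B: {after_b.mean():.3f}, p = {p:.3f}")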


artificial intelligence in education | 2014

How Should Intelligent Tutoring Systems Sequence Multiple Graphical Representations of Fractions? A Multi-Methods Study

Martina A. Rau; Vincent Aleven; Nikol Rummel; Zachary A. Pardos

Providing learners with multiple representations of learning content has been shown to enhance learning outcomes. When multiple representations are presented across consecutive problems, we have to decide in what sequence to present them. Prior research has demonstrated that interleaving task types (as opposed to blocking them) can foster learning. Do the same advantages apply to interleaving representations? We addressed this question using a variety of research methods. First, we conducted a classroom experiment with an intelligent tutoring system for fractions. We compared four practice schedules of multiple graphical representations: blocked, fully interleaved, moderately interleaved, and increasingly interleaved. Based on data from 230 4th and 5th-grade students, we found that interleaved practice leads to better learning outcomes than blocked practice on a number of measures. Second, we conducted a think-aloud study to gain insights into the learning mechanisms underlying the advantage of interleaved practice. Results show that students make connections between representations only when explicitly prompted to do so (and not spontaneously). This finding suggests that reactivation, rather than abstraction, is the main mechanism to account for the advantage of interleaved practice. Third, we used methods derived from Bayesian knowledge tracing to analyze tutor log data from the classroom experiment. Modeling latent measures of students' learning rates, we find higher learning rates for interleaved practice than for blocked practice. This finding extends prior research on practice schedules, which shows that interleaved practice (compared to blocked practice) impairs students' problem-solving performance during the practice phase when using raw performance measures such as error rates. Our findings have implications for the design of multi-representational learning materials and for research on adaptive practice schedules in intelligent tutoring systems.
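As a stand-in for the knowledge-tracing analysis of learning rates, the sketch below fits a simple log-linear learning curve (error rate versus log practice opportunity) to each practice schedule and compares slopes. The error rates are invented toy values, not data from the study.

import numpy as np

def learning_rate(error_by_opportunity):
    """Slope of error rate vs. log(opportunity); more negative = faster learning."""
    opportunities = np.arange(1, len(error_by_opportunity) + 1)
    slope, _ = np.polyfit(np.log(opportunities), error_by_opportunity, 1)
    return slope

# Toy per-opportunity error rates for two practice schedules.
blocked = [0.55, 0.50, 0.47, 0.45, 0.44, 0.43]
interleaved = [0.58, 0.49, 0.42, 0.37, 0.33, 0.30]
print("blocked slope:", round(learning_rate(blocked), 3))
print("interleaved slope:", round(learning_rate(interleaved), 3))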

Collaboration


Dive into Zachary A. Pardos's collaboration.

Top Co-Authors

Neil T. Heffernan
Worcester Polytechnic Institute

Steven Tang
University of California

Shubhendu Trivedi
Worcester Polytechnic Institute

Martina A. Rau
University of Wisconsin-Madison

Ryan S. Baker
University of Pennsylvania

Cristina Heffernan
Worcester Polytechnic Institute

Sujith M. Gowda
Worcester Polytechnic Institute

Brigham Anderson
Worcester Polytechnic Institute

Gábor N. Sárközy
Worcester Polytechnic Institute